Application Development Efficiency

McKinsey published an article that discusses increasing application development efficiency. They point out that many of the measurements used on development projects are input measures rather than output measures, and suggest that Use Cases, and by extension Use Case Points, form a solid output measure for determining the progress and effectiveness of a development effort.

Article: Enhancing the efficiency and effectiveness of application development

While I feel the idea is sound at its core, it appears they are drawing an incorrect causal relationship between the tool, the measure and the outcome. The article contains a number of subtle references to the underlying behaviors that are enabling the tool to be deemed successful.

I believe that: 

  • Use Cases, and by extension Use Case Points, suffer from granularity and assumption issues as much as any other technique, e.g. interaction 2 on Use Case 111 is deemed to be complexity ratio 2, until work is started and it is discovered that it is actually 10. 
  • Knowing what is important is more important than knowing when we’ll be done, as the first influences the second.
  • Use Case Points as a comparative value across teams suffer the same issues as all other measures. Inconsistencies in scale, skill and methods influence the usefulness of the data.  

Drawing out what I believe the underlying behaviors are that are creating success:

  • Focus on bringing the business closer through an appropriate shared communication tool e.g. examples, use cases, stories
  • Teach critical decision making as a skill in staff in project teams as a means to achieving assumption identification and testing
  • Ensure the measure you use identifies the appropriate analysis method and its influencing properties to provide a useful understanding of performance across teams and projects

None of these are particularly bound to any tool, but together they lay a foundation on which a number of tools, such as use case points, story points, or story counts, can be successful.

 

Behavioural Debt

Working with teams to change the way they deliver is nearly always a challenge. Years of working in an industry have shaped the way we behave, to the point where it's almost second nature to continue practices we ourselves view as unproductive and risky. We know these things aren't helping us succeed, but we still do them. It's almost as if we're controlled by a power greater than our will, which in a way is actually true.

People are well proven to be creatures of habit. As children our parents strive to instill good habits and avoid bad ones. These habits live with us throughout our adulthood, defining a lot of what we do both at home and in the workplace. A great example is the frequency and timing of basic hygiene activities like showering or brushing your teeth. No matter how much my dentist tells me I need to brush at night, nearly 30 years of morning routine continues to this day.

Once we reach our teens we start to take jobs. These slowly consume more and more of our lives until we eventually spend more of our waking hours in the office than we do out of it. This makes the organisation we work within a virtual habit factory. The people around us shape the practices that we are indoctrinated into as part of entry to the company. We accept and grow into these practices, which with time become habits.

It is this collection of practices turned habits that I have started referring to as ‘behavioral debt’. It’s a metaphor that I use when describing how comfortable a team is in their existing practices, and how hard it will be to start the stone of change moving downhill toward a new way of working.

Quick Tips: TFS Demand Management via Pivot Table

Context

One of the meetings I commonly lead is a demand management meeting. In this meeting we look at the work currently in the pipeline for delivery and make decisions about what needs to happen with it. When working with TFS I run this meeting via an Excel worksheet into which I load a backlog query.

The default view for this backlog is a table, which is great for the basics of view/edit but suffers the problem of not easily being able to see the forest for the trees. What I really want is a summary view that enables me to roll up the work dynamically to answer questions raised in the meeting.

How To

To achieve this simply: 

  • Select a cell somewhere within your backlog query
  • Select the design tab of your ‘Table Tools’ ribbon category
  • Click ‘Summarize with PivotTable’

The options should already have your work item table selected. Choose whether to have the pivot table dropped into a new sheet or your current one, and click OK.

Once the table is up I generally roll up Size as a sum and Work Item Id as a count, grouped by a field such as Area, to get the overview I need. Allowing filtering by blocked items is also handy.
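If you find yourself rebuilding the same pivot before every meeting, the setup can also be scripted through Excel's COM object model from PowerShell. The sketch below is a rough starting point only; the workbook path and the column names (ID, Size, Area Path) are assumptions that will need to match your own backlog query.

# Rough sketch: build a pivot summary over a backlog query already loaded as an Excel table.
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $true

$workbook  = $excel.Workbooks.Open("C:\Backlogs\DemandManagement.xlsx")
$backlogWs = $workbook.Worksheets.Item(1)
$pivotWs   = $workbook.Worksheets.Add()

# Pivot cache over the backlog table, pivot table on the new sheet
$cache = $workbook.PivotCaches().Create(1, $backlogWs.ListObjects.Item(1).Range)   # 1 = xlDatabase
$pivot = $cache.CreatePivotTable($pivotWs.Range("A3"), "BacklogSummary")

# Roll up by Area, counting work items and summing size
$pivot.PivotFields("Area Path").Orientation = 1                                    # 1 = xlRowField
$pivot.AddDataField($pivot.PivotFields("ID"),   "Work Items", -4112) | Out-Null    # -4112 = xlCount
$pivot.AddDataField($pivot.PivotFields("Size"), "Total Size", -4157) | Out-Null    # -4157 = xlSum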

Next Steps

If you need to drill down to the work items, you can use the PivotTable Tools ‘Expand Field’ function to have the individual work item id or title values listed as a sub-field.

 

 

SQL71567: Filegroup specification issue with SQL projects in VS 2012

If you’re seeing the above error, chances are you’ve got a filegroup specification on both your table and a clustered or non-clustered inline index. This creates a compile-time error as of the September 2012 SSDT update, and left me scratching my head until I saw this forum thread.

To fix the issue, I’ve created a PowerShell script that parses each of the .table.sql files in your database project directory (and subdirectories) to look for a filegroup specification. It then parses the related index files to check for a duplicate specification and, as per the recommendation in the forum thread above, removes the specification from the table script.
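If you just want the shape of the approach, the outline below is a simplified sketch rather than the published script; the *.table.sql / *.index.sql naming and the regular expressions are assumptions that may need adjusting for your project layout.

# Simplified sketch of the approach, not the published gist.
param([string]$ProjectRoot = ".")

Get-ChildItem -Path $ProjectRoot -Recurse -Filter *.table.sql | ForEach-Object {
    $tableFile = $_
    $tableSql  = [IO.File]::ReadAllText($tableFile.FullName)

    # Does the table definition carry an ON [FileGroup] clause?
    if ($tableSql -match '(?i)\)\s*ON\s*\[(?<fg>[^\]]+)\]') {
        $fileGroup = $Matches['fg']

        # Do any sibling index scripts specify the same filegroup?
        $duplicated = Get-ChildItem -Path $tableFile.DirectoryName -Filter *.index.sql |
            Where-Object { [IO.File]::ReadAllText($_.FullName) -match "(?i)ON\s*\[$fileGroup\]" }

        if ($duplicated) {
            # Remove the filegroup specification from the table script only
            $cleaned = $tableSql -replace "(?i)\)\s*ON\s*\[$fileGroup\]", ')'
            [IO.File]::WriteAllText($tableFile.FullName, $cleaned)
            Write-Host "Removed filegroup [$fileGroup] from $($tableFile.Name)"
        }
    }
}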

The script is available as a gist.

Feedback and forks welcome.

Can we deliver? Projecting probable delivery dates through data.

Kicking off a Project

Looking at the infamous project management triangle we identify three variables. When starting a project, these trigger three questions:

  • What would we like to deliver?
  • When would we like to deliver it?
  • Who will deliver it?

When asked to come up with values for these items I see the following activities occur:

  • Scope is decided
  • A team is decided
  • A set of estimates are formulated
  • A plan is built based on the estimates, dependencies and a buffer of some sort

At this point, the plan is put forth and a fourth question is asked: how confident are you that this is right? The standard response is somewhere between 70 and 90 percent. Anything less and we’d be openly admitting that we’re willing to take a pretty big risk on whether we can actually deliver, right? So where does this number actually come from?

There are a few ways I’ve seen the confidence value formulated. Most commonly I see a ‘feel’ provided by the project manager, based on information from a few subject matter experts. Some might take a look at a risk register as well, and I’ve even seen an entirely separate set of estimates generated for the same work by a different team. In the end, a lot of the time the primary contributor to our level of comfort is the amount of ‘fat’ or contingency we’ve built in.

I’d like to ask: is this the best we can do?

What are we really looking at?

To answer this question we need to understand what we are trying to derive, and what from. When setting out to build any kind of product we’re looking to exploit an opportunity or reduce a cost. These activities in most cases will also be time sensitive, leading us to an ideal delivery date. This is our first piece of data.

Our second piece of data is our estimate. This provides us with a picture of what we think it will take to get the scope we’ve defined done. These should be relative estimates, and I usually encourage a planning poker approach for a small number of items. Anything more than around 30 and I’d suggest looking at affinity estimating. It’s worked for me and is a great way to get through a large number of items. I’ve used it to estimate ~400 items at a time in around 2 hours – though we padded that out with a couple of breaks to keep us sane.

The third and final required piece of data is also the most problematic to obtain: a rate of delivery over time. Put differently: historically, how fast do we progress through our work? I don’t see this value stored in a lot of organisations, and it’s one of the first things we look to build, as there’s a lot of value that can be derived from it in both current and predictive terms. You can track it in a few ways – either as a set of units completed in a period, e.g. points delivered per iteration, or as an average elapsed time between two points, e.g. time from ready for development to production. Either is fine; just be consistent and ensure you record it. If you don’t have the data then an estimated value will do. Just be sure to record your delivery rate once you start, and adjust your view based on this new data.

Confidence Measure

Now we have identified our data, how do we understand how likely our ideal outcome actually is? With a historical view of how quickly we deliver work we can calculate a mean rate of delivery.

Note: I’ll be using Excel-style formulas from here on in. The function calls should be valid if you replace the names with cell references or numbers.

MEAN RATE = SUM OF DELIVERY RATES / COUNT OF MEASUREMENTS

This gives us an average rate per measurement period. I am currently using a daily rate of points, as it eases the extrapolation to dates in later calculations. The next calculation is to work out the standard deviation of the delivery rate.

Note: We are making an assumption that the rate of delivery is normally distributed. This may not be the case, but we have the historical data to better understand how this might be incorrect.

STD. DEV = STDEV.P(LIST OF DELIVERY RATES PER MEASUREMENT PERIOD)

At this point we can actually start to calculate what our chances of delivering at different rates are. For example, a mean rate of 8 with a standard deviation of 1.5 gives us a roughly 0% chance of hitting a 15 unit rate of delivery; if we were to say 11 (two standard deviations above the mean), we’d be looking at around a 2.28% chance.

To make this useful we next need to know what our required rate of delivery to achieve our target date would be. To calculate this we use the following:

RATE REQUIRED = SUM OF SCOPE / PERIODS UNTIL DATE

We can then understand the probability of reaching that rate by using the data we’ve now got from our calculations as follows:

PROBABILITY = 1 – NORM.DIST(DAILY RATE REQUIRED,MEAN DELIVERY RATE,DELIVERY RATE STD DEVIATION,TRUE)
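As a worked example (the numbers are purely illustrative): say the remaining scope is 380 points, there are 40 working days until the target date, and our history gives a mean daily rate of 8 with a standard deviation of 1.5. Then:

RATE REQUIRED = 380 / 40 = 9.5

PROBABILITY = 1 – NORM.DIST(9.5, 8, 1.5, TRUE) ≈ 16%

In other words, roughly a one in six chance of hitting the date without changing scope, date or rate. If you’d rather script the calculation than keep it in a spreadsheet, a rough PowerShell sketch follows; the sample rates, scope and days remaining are again just assumptions, and the normal CDF uses a standard polynomial approximation in place of Excel’s NORM.DIST.

# Probability of hitting a target date from historical daily delivery rates.
# The sample rates, scope and days-remaining figures are illustrative only.
$rates        = 6, 10, 7, 9, 8, 8    # historical points per day
$scope        = 380                  # remaining scope in points
$daysToTarget = 40

$mean     = ($rates | Measure-Object -Average).Average
$variance = ($rates | ForEach-Object { [Math]::Pow($_ - $mean, 2) } | Measure-Object -Sum).Sum / $rates.Count
$stdDev   = [Math]::Sqrt($variance)  # population standard deviation, as per STDEV.P

function Get-NormalCdf([double]$x, [double]$mean, [double]$stdDev) {
    # Abramowitz & Stegun 26.2.17 approximation of the normal CDF
    $z = ($x - $mean) / $stdDev
    $t = 1 / (1 + 0.2316419 * [Math]::Abs($z))
    $poly = $t * (0.319381530 + $t * (-0.356563782 + $t * (1.781477937 + $t * (-1.821255978 + $t * 1.330274429))))
    $cdf = 1 - [Math]::Exp(-0.5 * $z * $z) / [Math]::Sqrt(2 * [Math]::PI) * $poly
    if ($z -lt 0) { 1 - $cdf } else { $cdf }
}

$rateRequired = $scope / $daysToTarget
$probability  = 1 - (Get-NormalCdf $rateRequired $mean $stdDev)
"Required rate: {0:N2} points per day, probability of making the date: {1:P1}" -f $rateRequired, $probability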

Mapping this curve we get something like the following:

Delivery Probability

What doesn’t this graph tell me?

This graph is a simple probability graph based on historical delivery rate and estimated scope. Both of these have their own flaws: estimates are often out by a significant factor, and scope changes.

What it does do is give us a much better idea of whether our dates are even viable. For example, you can see from the graph that for the figures used, the chance of reaching a target date in early January would be pretty low, especially once we start considering other factors such as reduced availability over the Christmas period.

What benefit does this provide?

Using data to discover our probability of delivering on a specific date allows us to make a more educated decision on whether the risk we are taking to make that date is worth the value. This value calculation is influenced by our driver for the work in the first place and its attributes, such as cost of delay. Armed with this data we can start deciding on influence strategies, and even weigh up other opportunities more effectively, rather than relying on intuition to drive our delivery decisions.

Issues when importing a TFS Project Collection

Recently I moved a project collection between two servers that varied significantly in their configuration. The source server was configured with all the trimmings except for project server integration. The target server was configured as a basic instance with the addition of SQL Reporting and Analysis Services. During the import I encountered 2 issues that I needed to resolve.

Issues 

[TF261007] 

On importing the collection a failure is seen due to lab management configuration being present.

[System.MissingFieldException ‘ProjectServerRegistration’] 

On importing the collection a failure is seen due to the server being unable to locate a specific field required to support Project Server synchronisation.

Reason

As part of Team Foundation Server Service Pack 1 the above issues are patched, and therefore should already be resolved. However, because the patching process does not restart the TFS Job Agent, the older assemblies may still be loaded, which causes the issues to surface anyway.

Resolution

There are 2 possible resolutions to these problems. The first is the recommended solution.

1. Restart the server.

2. Quiesce then un-quiesce the application tier using TFSServiceControl.

Both will effectively restart the TFS job service and allow the new assemblies to be loaded.
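If you take the quiesce route, the commands look something like the following, run from an elevated prompt on the application tier (the install path shown is the TFS 2010 default and may differ in your environment):

cd "C:\Program Files\Microsoft Team Foundation Server 2010\Tools"
.\TFSServiceControl.exe quiesce      # stop the application tier services and job agent
.\TFSServiceControl.exe unquiesce    # start them again, loading the patched assemblies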

Retrospectives are more than a ceremony

As a member and scrum master of scrum teams, the retrospective is one of the parts of the process that I value the most. It’s the core of the inspect and adapt cycle, and what helps teams improve and mature. I’ve noticed a pattern during retrospectives recently where all the ceremony is followed, but when the actions are discussed later it’s discovered that evidence of improvement is lacking.

My suggestion is for teams to look for 3 things out of their retrospectives.

1. Root Cause – perform a root cause analysis on the identified issues. Something similar to the 5 whys exercise works well, so you end up with an actionable root cause.

2. Planned Action – for each root cause decide on a planned action to address the root cause. This is aimed at preventing the issue before it becomes one.

3. Measurement – a goal measurement that each action should achieve. This is to allow for an objective assessment of the action to determine if it has addressed the cause. Consider looking for measurements that are targets, rather than volume type measurements.
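As a concrete (and purely illustrative) example: if the issue raised is ‘stories keep bouncing back from test’, the root cause might turn out to be ‘acceptance criteria are agreed after development has started’, the planned action could be ‘agree acceptance criteria with the tester and product owner before a story enters the sprint’, and the measurement ‘no more than one story returned from test in the next sprint’.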

These 3 outcomes for each issue raised during the retrospective allow us to understand the impact each adjustment has, and ensure we’re focusing on the intent of the retrospective – improvement.

Continuous Delivery Lyrics – DDD Melbourne

At the close of DDD Melbourne last week I fulfilled a bet that I lost by kickin’ a rhyme to wrap up the conference. I’ve had a few requests for the lyrics since – so here they are! If you’d like to hear my live rendition click here, or if you’re really keen to see me make a fool of myself, check YouTube.

Warning: I am not a vocalist in any way, shape or form. Subject your ears to it at your own risk!

————————————–

continuous delivery is what I spit
new words on old beats for the instant hit
got j5 here rockin on my microphone
so tweet me up real quick if you like the tone

coming to ya straight outta the S Y D
is the Readify consultant with the funky beat
left right combination fast bringin the heat
deadly like Jackie Chan and Chuck Norris feet
lyrical roundhouse put ya back into ya seat
now check the flavour from my DJ while I chill on this beat

If you only knew, the trials and tribulations we been through
then you’d know, CI’s for everyone and that means you
if you only knew, broken checkin’s they make me spew
then you’d know, the chicken dance is what you gotta do

got delivery pipelines flowing out hot rhymes
yall be actin like you ain’t got no time
for this charismatic character who is number one
rockin hard live for fun, in the burn city sun
now you think i’m done, but I’ve only just begun
so get ya toes wet and come get some

if you only knew, the trials an tribulations we been through
then you’d know, DI’s for everyone and that means you
if you only knew, resolution’s a hard thing to do
then you’d know, IOC’s what you gotta do

building on the back of hard work
cheeky smile, smart smirk
black hoodie, dark shirt
stay sharp or get burned
when I drop this verse like it hurt
then come back around and spin it all in reverse

— INTERLUDE —

if you only knew, the trials an tribulations we been through
then you’d know, simple is for everyone and that means you
if you only knew, complexity can get you screwed
then you’d know, DDD’s what you gotta do

before I give it up, give it up for your hosts
who spent the whole damn Sat’day keepin yall engrossed
to the dudes with most, I’d like to raise a toast
I wanna make it clear,these fellas right here

better get a few free beers, in the bars tonight
from the crowd, or myself just to set it right
for the conference of the year, run so damn tight
with a setup and speakers that were outta sight
winding up with a hot track and beats that bite
to get us goin for a party way into the night

if you only knew, the trials an tribulations we been through
then you’d know, we’re real people homie just like you
if you only knew, a conference ain’t easy to crew
then you’d know, buyin beer is what you gotta do

Publishing samples with Team Build and TFS 2010

It’s common when writing an API to publish a set of samples. This is a great way to give users an idea of the intended usages of your API. Now samples that are intended for publishing are essentially another production application you need to develop, and therefore should go through the same quality checks and processes that the rest of your code does, including version control.

This raises a couple of challenges once you start to look at how you publish these samples. The issue is that a server-based system needs some way to track client-side changes, and with most popular version control systems this means extra files scattered around the sample directories. As a consumer of the samples, the presence of these files in the published artefacts is far from ideal: I really don’t want the samples trying to connect to your remote server, or even to a local instance of a version control provider I might not have installed.

So how do we break the dependency on the source control provider during the packaging of these samples for publishing? We could manually go through and delete all of the binding mechanisms, but that would expose us to the following types of waste:

  • Defects – manual processes are more prone to defects
  • Waiting – builds need to wait on the samples to be prepared for packaging
  • Extra Processing – extra steps to make our packaging pick up our manually cleansed samples

We can avoid most of this waste by automating the process. So if it can be automated, how do we do it?

Prerequisites

Download the following script: https://gist.github.com/967976. The script was originally written by Damian Maclennan, and essentially removes the TFS source binding files and the solution and project elements that reference the TFS instance. My fork adds the ability to decide whether you want the files backed up before the version control sections are removed. I usually choose not to; I don’t want the backup files in the output, and I have the actual versions in version control if I need them. In addition you may want to do the following:

  • Create a batch file wrapper for the script using the approach defined by Jason Stangroome at http://bit.ly/kmyfY2
  • Commit the script to a path in your repository that is part of your build workspace with the same name as the PowerShell script

First Step: Create a custom build template

  • Set up a build to create your packages, and when you get to choosing the process template, clone the default template into a new one
  • Check the new template into source control

Second Step: Set up a folder to publish to

  • Open your cloned template and navigate to the point at which MSBuild is invoked with the ‘Build’ target
  • Drop in a new sequence activity and name it ‘Publish Samples’

  • Create an argument named ‘SampleDropFolder’.
    • This will be the configurable name for the folder placed in the output directory with our samples inside.
    • We’ll talk about how to surface this on the build configuration dialogue later.

  •  Create a variable to hold the full path to the output folder.
    • I’ve named mine ‘SamplesDropFolder’ but that may be a bit close to the argument name for you.
    • I also default this to a combination of the outputDirectory variable specified in the scope of the Compile and Test activity and the SampleDropFolder argument we specified previously (see the expression after this list).

  • Open up your ‘Publish Samples’ sequence activity and drop in a ‘Create Directory’ activity.
    • Configure it with the ‘SamplesDropFolder’ variable we set up in the last step.
    • This will set up a root directory we can copy all our samples to, and makes it easy to run our binding removal script later on.
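For reference, the default value for the ‘SamplesDropFolder’ variable ends up being a simple VB expression along the lines of the following (the names match the argument and variable above; adjust if yours differ):

System.IO.Path.Combine(outputDirectory, SampleDropFolder)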

Third Step: Copy the samples

Now we’ve got our directory, we need to work out what we want to put in it. In most cases, we’ll have more than a single set of folders to move, so we need to put some smarts around how we identify our targets.

  • First create an Argument called ‘Samples’ and configure it to be of the type String[].

  • Drop a ForEach activity into the ‘Publish Samples’ sequence and configure it to iterate over our Samples argument we just created.
  • Add a variable to contain the local path of the sample directory we’re currently working with as a string with the name ‘sampleLocalPath’
  • Inside the ForEach activity drop a ‘ConvertWorkspaceItem’ activity. This will take our server paths and work out the local paths of the directories for us. You’ll need to configure it to convert the current server path into its local equivalent (see the outline after this list).

  • Drop in a ‘CopyDirectory’ activity to copy our sample directory from the source to the output directory.
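As a guide, the two activities inside the ForEach boil down to a handful of properties. The values below reference the argument and variables created above; the destination naming is an assumption rather than a prescription.

ConvertWorkspaceItem: Input = the current item of the Samples argument (the server path), Result = sampleLocalPath, Direction = ServerToLocal, Workspace = Workspace

CopyDirectory: Source = sampleLocalPath, Destination = a sub-folder of SamplesDropFolder named after the sample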

Fourth Step: Remove the source control bindings

Now we’ve got our samples into the target directory, we need to strip out the source control bindings so our customers don’t try to connect to our server when they open the solutions.

  • Add an Argument to specify the server path to the batch file we checked in back at the start of the process.

  • Create a variable to house the local path of our binding removal script. I’ve named mine ‘SourceControlRemovalScriptLocalPath’
  • Drop in another ‘ConvertWorkspaceItem’ activity after your ForEach activity. This will be used to convert the server path argument we just created for our source binding stripping script into its local path. It should be configured along these lines:
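A minimal configuration looks something like the following (the argument name here is illustrative, as I haven’t prescribed one above):

Input = SourceControlRemovalScriptPath (the argument holding the server path), Result = SourceControlRemovalScriptLocalPath, Direction = ServerToLocal, Workspace = Workspace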

Note: While the variables look like they are the same item, they aren't I promise!

  • Drop in an ‘InvokeProcess’ activity after the ‘ConvertWorkspaceItem’ you just added.  A little care is required when configuring this activity to ensure we get a reliable execution, so I’ll list out how I’ve configured each property.

Arguments: Microsoft.VisualBasic.Chr(34) + SamplesDropPath + Microsoft.VisualBasic.Chr(34)

Display Name: Strip Source Control Bindings

File Name: Microsoft.VisualBasic.Chr(34) + SourceControlRemovalScriptLocalPath + Microsoft.VisualBasic.Chr(34)

Working Directory: BinariesDirectory

Any other properties remain unaltered from their default state

Your publish samples sequence activity should now look a little like this

Fifth Step: Surface the arguments

That’s nearly everything we need to do to support the publishing of samples into our output directory. However we’ve set up a few arguments in here to ensure our template is re-usable. We now need to surface them to users via the build configuration dialog. To do this:

  • On the ‘Metadata’ Argument for the build workflow click the ellipsis. You should get a pop up dialog
  • Configure the three arguments we’ve added as follows
    • The samples list
    • The samples drop folder name
    • The source control removal script path
    • These should be configured in the editor as follows:
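As an illustration, the editor entries might end up looking like this (the display names are only suggestions, and the script path parameter name is whatever you chose for that argument earlier):

Parameter Name = Samples, Display Name = Sample Server Paths

Parameter Name = SampleDropFolder, Display Name = Samples Drop Folder Name

Parameter Name = (your script path argument), Display Name = Source Control Removal Script Path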

The only things that change across the parameters are the Display Name (the name we surface to the editor) and the Parameter Name (the name we gave the argument that corresponds to this parameter).

  • Check in the template

Final Steps: Configure the build!

Now we’ve got our template done, let’s go configure a build! The only real points of interest here are the custom parameters we set up on our way through, so we’ll focus on them – this is a long enough read already!

The points of interest are all on the process tab, so let’s skip there. If you expand your custom section you should see the three parameters we surfaced earlier.

All you need to do is fill in the values: the list of sample server paths, the drop folder name, and the server path to the binding removal script.

Once that’s done, kick off a build and you should be able to locate your samples, without the binding configuration, in the drop directory of your build output!

Conclusion

In this article I’ve shown you how to create a reusable template for including useful samples in your build output. I’ve used this particular approach with a few customers, and what I particularly like about it is that we aren’t moving too far from the out-of-the-box activity set that comes with Team Build. This saves us overhead, and allows the template to be put together pretty quickly.

Auckland .NET Users Group

Are you an NZ native? From Auckland? Are you interested in .Net development? Then you should really check out the Auckland .Net Users Group! It’s kicking off on the 24th of this month with a great session about Behaviour Driven Development (or BDD for short) and Team Foundation Server. This is a great way to get your toes wet with a set of .Net BDD frameworks and understand how you can integrate them into your development workflow. It’s also a good chance to get along and heckle (I mean hear) Rob Maher, who I can highly recommend as a speaker.

If you’re interested in the details or to RSVP head over to http://aucklandnetusergroup.groups.live.com/

The short:

Who: Auckland .Net User Group

What: BDD Overview

When: 1730, Tuesday 24th May

Where: IAG – 1 Fanshawe St, Auckland