Application Development Efficiency

McKinsey published an article discussing how to increase application development efficiency. They point out that many of the measurements used on development projects are input measures rather than output measures, and suggest that Use Cases, and by extension Use Case Points, form a solid output measure for determining the progress and effectiveness of a development effort.

Article: Enhancing the efficiency and effectiveness of application development

While I feel the idea is sound at its core, it appears they are drawing an incorrect causal relationship between the tool, the measure and the outcome. The article contains a number of subtle references to the underlying behaviors that are actually enabling the tool to be deemed successful.

I believe that: 

  • Use Cases, and by extension Use Case Points, suffer from granularity and assumption issues as much as any other technique, e.g. interaction 2 on Use Case 111 is deemed to be complexity ratio 2, until work starts and it is discovered to actually be a 10. 
  • Knowing what is important is more important than knowing when we’ll be done, as the first influences the second.
  • Use Case Points as a comparative value across teams suffer the same issues as all other measures. Inconsistencies in scale, skill and methods influence the usefulness of the data.  

Drawing out what I believe are the underlying behaviors creating this success:

  • Focus on bringing the business closer through an appropriate shared communication tool e.g. examples, use cases, stories
  • Teach critical decision making as a skill to staff in project teams, as a means of identifying and testing assumptions
  • Ensure the measure you use is paired with an appropriate analysis method, and an understanding of its influencing properties, so it provides a useful view of performance across teams and projects

None of these are particularly bound to any tool, but together they lay a foundation on which a number of tools, such as use case points, story points, or story counts, can be successful.

 

Behavioural Debt

Working with teams to change the way they deliver is nearly always a challenge. Years of working in an industry have shaped the way we behave, to the point where it is almost second nature to continue trends we view as unproductive and risky. We know these things aren’t helping us succeed, but we still do them. It’s almost as if we’re controlled by a power greater than our will, which is in a way actually true.

People are well proven to be creatures of habit. As children our parents strive to instill good habits and avoid bad ones. These habits live with us throughout our adulthood, defining a lot of what we do, both at home and in the workplace. A great example is the frequency and timing of basic hygiene activities like showering or brushing your teeth. No matter how much my dentist tells me I need to brush at night, nearly 30 years of morning routine continues to this day.

Once we reach our teens we start to take jobs. These slowly consume more and more of our lives until we eventually spend more of our waking hours in the office than out of it. This makes the organisation we work within a virtual habit factory. The people around us shape the practices that we are indoctrinated into as part of joining the company. We accept, and grow into, these practices, which with time become habits.

It is this collection of practices turned habits that I have started referring to as ‘behavioral debt’. It’s a metaphor that I use when describing how comfortable a team is in their existing practices, and how hard it will be to start the stone of change moving downhill toward a new way of working.

Quick Tips: TFS Demand Management via Pivot Table

Context

One of the meetings I commonly lead is a demand management meeting. In this meeting we look at the work currently in the pipeline for delivery and make decisions about what needs to happen with it. When working with TFS I run this meeting via an Excel worksheet into which I load a backlog query.

The default view for this backlog is a table, which is great for basic viewing and editing but suffers from the problem of not easily letting you see the forest for the trees. What I really want is a summary view that enables me to roll up the work dynamically to answer questions raised in the meeting.

How To

To achieve this simply: 

  • Select a cell somewhere within your backlog query
  • Select the Design tab of the ‘Table Tools’ ribbon category
  • Click ‘Summarize with PivotTable’

The options should already have your work item table selected. Choose whether to have the PivotTable dropped into a new sheet or your current one, and click OK.

Once the PivotTable is up I generally roll up Size as a sum, and Work Item Id as a count, by a field such as Area to get the overview I need. Allowing filtering by blocked items is also handy.
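
If you’d rather script the same roll-up outside of Excel, a minimal sketch using pandas on an exported backlog could look like the following. The column names (Area, Size, Work Item Id, Blocked) and the CSV export are assumptions based on a typical backlog query, so adjust them to match yours.

import pandas as pd

# Backlog exported from the TFS query, e.g. saved from Excel as CSV (hypothetical file name).
backlog = pd.read_csv("backlog.csv")

# Mirror the PivotTable filter by excluding blocked items (assumes a Yes/blank Blocked field).
unblocked = backlog[backlog["Blocked"] != "Yes"]

# Roll up Size as a sum and Work Item Id as a count, grouped by Area.
summary = unblocked.pivot_table(
    index="Area",
    values=["Size", "Work Item Id"],
    aggfunc={"Size": "sum", "Work Item Id": "count"},
)
print(summary)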

Next Steps

If you need to drill down to the work items, you can simply use the PivotTable Tools ‘Expand Field’ function to have the individual work item id or title values listed as a sub-field.

 

 

SQL71567: Filegroup specification issue with SQL projects in VS 2012

If you’re seeing the above error, chances are you’ve got a filegroup specification on both your table and a clustered or non-clustered inline index. This creates a compile-time error as of the SSDT update of September 2012, and it left me scratching my head until I saw this forum thread.

To fix the issue, I’ve created a PowerShell script that parses each of the .table.sql files in your database project directory (and subdirectories) looking for a filegroup specification. It then parses the related index files to check for a duplicate specification and, as per the recommendation in the forum thread, removes the specification from the table script.

The script is available as a gist.
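
As a rough illustration of the detection half of that approach, here is a simplified sketch in Python rather than PowerShell. It is not the gist itself; the filegroup regex and the assumption that index scripts sit alongside the table script and share its name prefix may need adjusting for your project layout.

import re
from pathlib import Path

# Matches an "ON [FileGroupName]" filegroup specification in a CREATE statement.
FILEGROUP = re.compile(r"\bON\s+\[(?P<name>\w+)\]", re.IGNORECASE)

def filegroup_of(script):
    """Return the filegroup named in a SQL script, or None if there isn't one."""
    match = FILEGROUP.search(script.read_text())
    return match.group("name") if match else None

project_dir = Path("MyDatabaseProject")  # hypothetical project root

for table_script in project_dir.rglob("*.table.sql"):
    table_fg = filegroup_of(table_script)
    if not table_fg:
        continue
    prefix = table_script.name.replace(".table.sql", "")
    for index_script in table_script.parent.glob(prefix + "*.index.sql"):
        if filegroup_of(index_script):
            # Both scripts specify a filegroup; per the forum recommendation the
            # table-side specification is the one to remove (the full script rewrites the file).
            print("SQL71567 candidate:", table_script.name, "and", index_script.name)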

Feedback and forks welcome.

Can we deliver? Projecting probable delivery dates through data.

Kicking off a Project

Looking at the infamous project management triangle we identify three variables. When starting a project, these trigger three questions:

  • What would we like to deliver?
  • When would we like to deliver it?
  • Who will deliver it?

When asked to come up with values for these items I see the following activities occur:

  • Scope is decided
  • A team is decided
  • A set of estimates are formulated
  • A plan is built based on the estimates, dependencies and a buffer of some sort

At this point the plan is put forth and a fourth question is asked: how confident are you that this is right? The standard response is somewhere between 70 and 90 percent. Anything less and we’d be openly admitting that we’re willing to take a pretty big risk on whether we can actually deliver, right? So where does this number actually come from?

There are a few ways I’ve seen the confidence value formulated. Most commonly I see a ‘feel’ provided by the project manager based on information from a few subject matter experts. Some might take a look at a risk register as well, and I’ve even seen an entirely separate set of estimates generated for the same work by a different team. In the end, a lot of the time the primary contributor to our level of comfort is the amount of ‘fat’, or contingency, we’ve built in.

I’d like to ask: is this the best we can do?

What are we really looking at?

To answer this question we need to understand what we are trying to derive, and what from. When setting out to build any kind of product we’re looking to exploit an opportunity or reduce a cost. These activities in most cases will also be time sensitive, leading us to an ideal delivery date. This is our first piece of data.

Our second piece of data is our estimate. This provides us with a picture of what we think it will take to get the scope we’ve defined done. These should be relative estimates, and I usually encourage a planning poker approach for a small number of items. Anything more than around 30 and I’d suggest looking at affinity estimating. It’s worked for me and is a great way to get through a large number of items. I’ve used it to estimate ~400 items at a time in around 2 hours – though we padded that out with a couple of breaks to keep us sane.

The third and final required piece of data is also the most problematic to obtain. It’s a rate of delivery over time; put differently, historically how fast do we progress through our work? I don’t see this value stored in a lot of organisations, and it’s one of the first things we look to build, as there’s a lot of value that can be derived from it in both current and predictive terms. You can track it in a few ways: either as a set of units completed in a period, e.g. points delivered per iteration, or as an average elapsed time between two points, e.g. time from ready for development to production. Either is fine, just be consistent and ensure you record it. If you don’t have the data then an estimated value will do. Just be sure to record your delivery rate once you start and adjust your view based on this new data.
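
If you’re starting from raw work item data, a minimal sketch of both tracking options could look like this; the iteration lengths, points and dates below are made-up, illustrative values only.

from datetime import date

# Option 1: units completed per period, e.g. points delivered per iteration.
points_per_iteration = [21, 18, 25, 19, 23]          # illustrative history
working_days_per_iteration = 10
daily_rates = [p / working_days_per_iteration for p in points_per_iteration]

# Option 2: average elapsed time between two points in the workflow,
# e.g. ready for development through to production, per work item.
ready_to_done = [
    (date(2012, 10, 1), date(2012, 10, 9)),
    (date(2012, 10, 2), date(2012, 10, 15)),
    (date(2012, 10, 8), date(2012, 10, 18)),
]
cycle_times = [(done - ready).days for ready, done in ready_to_done]
average_cycle_time = sum(cycle_times) / len(cycle_times)

print(daily_rates, average_cycle_time)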

Confidence Measure

Now we have identified our data, how do we understand how likely our ideal outcome actually is? With a historical view of how quickly we deliver work we can calculate a mean rate of delivery.

Note: I’ll be using excel style formulas from here on in. The function calls should be valid if you replace the names with cell references or numbers.

MEAN RATE = SUM OF DELIVERY RATES / COUNT OF MEASUREMENTS

This gives us an average rate per measurement period. I am currently using a daily rate of points, as it eases the extrapolation to dates in later calculations. The next calculation is to work out the standard deviation of the delivery rate.

Note: We are making an assumption that the rate of delivery is normally distributed. This may not be the case, but we have the historical data to better understand how this might be incorrect.

STD. DEV = STDEV.P(LIST OF DELIVERY RATES PER MEASUREMENT PERIOD)

At this point we can actually start to calculate what our chances of delivering at different rates are. For example, a mean rate of 8 with a standard deviation of 1.5 gives us a roughly 0% chance of hitting a 15 unit rate of delivery; if we were to say 11, then we’d be looking at around a 2.28% chance.
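
Cross-checking those example figures outside the spreadsheet, using Python’s standard library in place of the Excel functions:

from statistics import NormalDist

delivery_rate = NormalDist(mu=8, sigma=1.5)

# Chance of sustaining at least a given rate = 1 - CDF(rate).
print(1 - delivery_rate.cdf(15))  # about 1.5e-06, effectively 0%
print(1 - delivery_rate.cdf(11))  # about 0.0228, i.e. roughly 2.28%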

To make this useful we next need to know what our required rate of delivery would be to achieve our target date. To calculate this we use the following:

RATE REQUIRED = SUM OF SCOPE / PERIODS UNTIL DATE

We can then understand the probability of reaching that rate by using the data we’ve now got from our calculations as follows:

PROBABILITY = 1 – NORM.DIST(DAILY RATE REQUIRED,MEAN DELIVERY RATE,DELIVERY RATE STD DEVIATION,TRUE)
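
Putting the pieces together, here is a minimal end-to-end sketch of the same calculation in Python; the historical rates, remaining scope and periods until the date are illustrative values, and statistics.NormalDist stands in for NORM.DIST.

from statistics import NormalDist, mean, pstdev

# Historical delivery rates per measurement period (illustrative daily point rates).
delivery_rates = [7.2, 8.5, 6.9, 8.1, 9.3, 8.0]

mean_rate = mean(delivery_rates)   # MEAN RATE = SUM OF DELIVERY RATES / COUNT OF MEASUREMENTS
std_dev = pstdev(delivery_rates)   # STD. DEV = STDEV.P(LIST OF DELIVERY RATES PER MEASUREMENT PERIOD)

remaining_scope = 400              # points still to deliver
periods_until_date = 45            # working days until the target date
rate_required = remaining_scope / periods_until_date  # RATE REQUIRED = SUM OF SCOPE / PERIODS UNTIL DATE

# PROBABILITY = 1 - NORM.DIST(rate required, mean rate, std dev, TRUE)
probability = 1 - NormalDist(mu=mean_rate, sigma=std_dev).cdf(rate_required)

print(f"Required rate {rate_required:.2f} points/day, probability of achieving it {probability:.1%}")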

Mapping this curve we get something like the following:

[Figure: Delivery Probability, the probability of delivering plotted against candidate target dates]
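
The curve is simply that probability evaluated for a range of candidate dates. A sketch of how the plotted data can be generated, again with illustrative inputs:

from datetime import date, timedelta
from statistics import NormalDist

mean_rate, std_dev = 8.0, 1.5      # from the historical data, illustrative
remaining_scope = 400              # points still to deliver
today = date(2012, 11, 1)          # hypothetical 'today'

delivery_rate = NormalDist(mu=mean_rate, sigma=std_dev)

for periods_out in range(30, 91, 10):   # candidate target dates, 30 to 90 periods out
    rate_required = remaining_scope / periods_out
    probability = 1 - delivery_rate.cdf(rate_required)
    print(today + timedelta(days=periods_out), f"{probability:.1%}")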

What doesn’t this graph tell me?

This graph is a simple probability graph based on historical delivery rate and estimated scope. These in themselves have flaws, as estimates are normally out by some factor and scope changes over time.

What it does do is give us a much better idea of whether our dates are even viable. For example, you can see from the graph that, for the figures used, the chance of reaching a target date in early January would be pretty low, especially once we start considering other factors such as reduced availability over the Christmas period.

What benefit does this provide?

The use of data to discover our probability of delivery on a specific date allows us to make a more educated decision on whether the risk we are taking to make that date is worth the value. This value calculation is influenced by our driver for the work in the first place and its attributes, such as cost of delay. Armed with this data we can start deciding on influence strategies and even weighing up other opportunities more effectively, rather than relying on our intuition to drive our delivery decisions.

Issues when importing a TFS Project Collection

Recently I moved a project collection between two servers that varied significantly in their configuration. The source server was configured with all the trimmings except for Project Server integration. The target server was configured as a basic instance with the addition of SQL Reporting and Analysis Services. During the import I encountered two issues that I needed to resolve.

Issues 

[TF261007] 

On importing the collection a failure is seen due to lab management configuration being present.

[System.MissingFieldException ‘ProjectServerRegistration’] 

On importing the collection a failure is seen due to the server being unable to locate a specific field required to support Project Server synchronisation.

Reason

The above issues are patched as part of Team Foundation Server Service Pack 1, and therefore should be resolved. However, because the patching process does not restart the TFS Job Agent, the older assemblies may still be loaded, which causes the issues to still surface.

Resolution

There are 2 possible resolutions to these problems. The first is the recommended solution.

1. Restart the server.

2. Quiesce then un-quiesce the application tier using TFSServiceControl (TfsServiceControl.exe quiesce, followed by TfsServiceControl.exe unquiesce).

Both will effectively restart the TFS job service and allow the new assemblies to be loaded.

Retrospectives are more than a ceremony

As a member and scrum master of scrum teams, I find the retrospective is one of the parts of the process that I value the most. It’s the core of the inspect and adapt cycle, and what helps teams improve and mature. I’ve noticed a behavior during retrospectives recently where all the ceremonies are followed, but when the actions are discussed later it’s discovered that evidence of improvement is lacking.

My suggestion is for teams to look for 3 things out of their retrospectives.

1. Root Cause – perform a root cause analysis on the identified issues, something similar to the 5 whys exercise, so you have an actionable root cause.

2. Planned Action – for each root cause decide on a planned action to address the root cause. This is aimed at preventing the issue before it becomes one.

3. Measurement – a goal measurement that each action should achieve. This is to allow for an objective assessment of the action to determine if it has addressed the cause. Consider looking for measurements that are targets, rather than volume type measurements.

These 3 outcomes for each issue raised during the retrospective allow us to understand the impact each adjustment has, and ensure we’re focusing on the intent of the retrospective – improvement.