Category Archives: Development

Can we deliver? Projecting probable delivery dates through data.

Kicking off a Project

Looking at the infamous project management triangle, we identify three variables. When starting a project, these trigger three questions:

  • What would we like to deliver?
  • When would we like to deliver it?
  • Who will deliver it?

When asked to come up with values for these items I see the following activities occur:

  • Scope is decided
  • A team is decided
  • A set of estimates are formulated
  • A plan is built based on the estimates, dependencies and a buffer of some sort

At this point, the plan is put forth and a fourth question is asked: how confident are you that this is right? The standard response is somewhere between 70 and 90 percent. Anything less and we’d be openly admitting that we’re willing to take a pretty big risk on whether we can actually deliver, right? So where does this number actually come from?

There are a few ways I’ve seen the confidence value formulated. Most commonly I see a ‘feel’ provided by the project manager, based on information from a few subject matter experts. Some might take a look at a risk register as well, and I’ve even seen an entirely separate set of estimates generated for the same work by a different team. In the end, a lot of the time the primary contributor to our level of comfort is the amount of ‘fat’ or contingency we’ve built in.

I’d like to ask: is this the best we can do?

What are we really looking at?

To answer this question we need to understand what we are trying to derive, and what from. When setting out to build any kind of product we’re looking to exploit an opportunity or reduce a cost. These activities in most cases will also be time sensitive, leading us to an ideal delivery date. This is our first piece of data.

Our second piece of data is our estimate. This gives us a picture of what we think it will take to deliver the scope we’ve defined. These should be relative estimates, and I usually encourage a planning poker approach for a small number of items. Anything more than around 30 and I’d suggest looking at affinity estimating. It’s worked for me and is a great way to get through a large number of items. I’ve used it to estimate ~400 items at a time in around 2 hours – though we padded that out with a couple of breaks to keep us sane.

The third and final required piece of data is also the most problematic to obtain: a rate of delivery over time. Put differently: historically, how fast do we progress through our work? I don’t see this value stored in a lot of organisations, and it’s one of the first things we look to build, as there’s a lot of value that can be derived from it in both current and predictive terms. You can track it in a few ways – either as a set of units completed in a period, e.g. points delivered per iteration, or as an average elapsed time between two points, e.g. time from ready for development to production. Either is fine; just be consistent and ensure you record it. If you don’t have the data then an estimated value will do. Just be sure to record your delivery rate once you start, and adjust your view based on this new data.

Confidence Measure

Now that we have identified our data, how do we understand how likely our ideal outcome actually is? With a historical view of how quickly we deliver work we can calculate a mean rate of delivery.

Note: I’ll be using Excel-style formulas from here on in. The function calls should be valid if you replace the names with cell references or numbers.

MEAN RATE = SUM OF DELIVERY RATES / COUNT OF MEASUREMENTS

This gives us an average rate per measurement period. I am currently using a daily rate of points, as it eases the extrapolation to dates in later calculations. The next calculation is to work out the standard deviation of the delivery rate.
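
As a quick worked example (with some made-up figures), six daily measurements of 8, 7, 9, 8, 10 and 6 points give us:

MEAN RATE = (8 + 7 + 9 + 8 + 10 + 6) / 6 = 8 points per day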

Note: We are making an assumption that the rate of delivery is normally distributed. This may not be the case, but we have the historical data to better understand how this might be incorrect.

STD. DEV = STDEV.P(LIST OF DELIVERY RATES PER MEASUREMENT PERIOD)

At this point we can actually start to calculate what our chances of delivering at different rates are. For example, a mean rate of 8 with a standard deviation of 1.5 gives us a roughly 0% chance of hitting a 15 unit rate of delivery; if we were to say 11, then we’d be looking at around a 2.28% chance.
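
As a sanity check on that last number: 11 sits (11 - 8) / 1.5 = 2 standard deviations above the mean, and the area under a normal curve beyond two standard deviations is roughly 2.28%. In Excel terms:

1 - NORM.DIST(11, 8, 1.5, TRUE) ≈ 0.0228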

To make this useful we next need to know what rate of delivery would be required to achieve our target date. To calculate this we use the following:

RATE REQUIRED = SUM OF SCOPE / PERIODS UNTIL DATE
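
For example, with some more made-up numbers, 300 points of scope and 50 working days until the target date gives us:

RATE REQUIRED = 300 / 50 = 6 points per day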

We can then understand the probability of reaching that rate by using the data we’ve now got from our calculations as follows:

PROBABILITY = 1 - NORM.DIST(DAILY RATE REQUIRED, MEAN DELIVERY RATE, DELIVERY RATE STD DEVIATION, TRUE)
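
If you’d rather drive this from code than a spreadsheet, here’s a minimal C# sketch of the same chain of calculations, using the made-up figures from the examples above. The NormalCdf helper stands in for NORM.DIST(…, TRUE) using the Abramowitz and Stegun approximation of the error function, so treat it as illustrative rather than production-ready.

using System;
using System.Linq;

class DeliveryProbability
{
    static void Main()
    {
        // Hypothetical historical daily delivery rates (points per day)
        double[] dailyRates = { 8, 7, 9, 8, 10, 6 };
        double scope = 300;              // hypothetical remaining scope in points
        double periodsUntilDate = 50;    // hypothetical working days until the target date

        // MEAN RATE = SUM OF DELIVERY RATES / COUNT OF MEASUREMENTS
        double mean = dailyRates.Average();

        // Population standard deviation - the same calculation as Excel's STDEV.P
        double stdDev = Math.Sqrt(dailyRates.Sum(r => (r - mean) * (r - mean)) / dailyRates.Length);

        // RATE REQUIRED = SUM OF SCOPE / PERIODS UNTIL DATE
        double rateRequired = scope / periodsUntilDate;

        // PROBABILITY = 1 - NORM.DIST(RATE REQUIRED, MEAN, STD DEV, TRUE)
        double probability = 1 - NormalCdf(rateRequired, mean, stdDev);

        Console.WriteLine("Rate required: {0:F2}, probability of achieving it: {1:P1}",
            rateRequired, probability);
    }

    // Cumulative normal distribution, standing in for NORM.DIST(x, mean, stdDev, TRUE)
    static double NormalCdf(double x, double mean, double stdDev)
    {
        return 0.5 * (1 + Erf((x - mean) / (stdDev * Math.Sqrt(2))));
    }

    // Abramowitz & Stegun approximation of the error function (accurate to ~1.5e-7)
    static double Erf(double x)
    {
        double sign = Math.Sign(x);
        x = Math.Abs(x);
        double t = 1 / (1 + 0.3275911 * x);
        double y = 1 - ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
            - 0.284496736) * t + 0.254829592) * t * Math.Exp(-x * x);
        return sign * y;
    }
}

Evaluate that for a range of candidate target dates (varying the periods until the date) and you get the curve below.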

Mapping this curve we get something like the following:

Delivery Probability

What doesn’t this graph tell me?

This graph is a simple probability graph based on historical delivery rate and estimated scope. These in themselves have flaws, as estimates are normally out by some factor and scope changes over time.

What it does do is give us a much better idea of whether our dates are even viable. For example, you can see from the graph that, for the figures used, the chance of reaching a target date in early January would be pretty low – especially once we start considering other factors such as reduced availability over the Christmas period.

What benefit does this provide?

The use of data to discover our probability of delivering on a specific date allows us to make a more educated decision on whether the risk we are taking to make that date is worth it. This value calculation is influenced by our driver for the work in the first place and its attributes, such as cost of delay. Armed with this data we can start deciding on influence strategies and even weighing up other opportunities more effectively, rather than relying on our intuition to drive our delivery decisions.

Quick Tip: Display unit test result on double click

In VS 2010 the default behavior for double-clicking on a unit test result has changed. Previously, when you double-clicked on a failing test, you’d be shown the test result window with the full result details for the failing test. Now when you perform this action you’re taken to the line of code that has failed in the test.

I find I prefer the previous behavior, so I went and found the following setting under the Tools -> Options menu item to revert it…

Building VS Database Projects on x64 – Cannot find sqlceme35.dll

Recently I’ve been helping a fellow Readifarian get started on a brand new project using TFS and VS 2010. As always, we’ve seen a few teething issues along the way. Yesterday he was attempting to get his database project to build on his shiny new Windows 2008 R2 x64 build server and was having some issues with the SqlBuildTask failing. The stack trace was pointing to an unresolvable reference to the SQL CE assemblies.

Database Project Build Error

Error Stack Trace

After doing some digging around the MSDN forums and the broader web I found some older posts on x64 issues with SQL CE. While this wasn’t my issue, it led me down the right path. What’s happening here is that there is a minor issue locating the x64 version of the mentioned assembly, caused by a bug in the setup of Team Build. The workaround here is to set the MSBuild framework to x86, which in turn makes the database project locate and use the x86 version of the SQL CE assembly.
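
For anyone hunting for where to make that change: as far as I can tell it’s the MSBuild Platform setting in the build definition’s process parameters (under the Advanced group in the 2010 designer), which wants to be X86 rather than Auto. The naming may differ in the Beta 2 builds, so treat this as a pointer rather than gospel.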

Important Note: This workaround will only be needed in the Beta 2 timeframe for 2010, as the underlying cause of this issue has been fixed for RTM.

Exploring the TFS API – Change Sets, Work Items and a Null Reference Exception

When using TFS it is good practice, when checking in, to associate the check-in with a work item that describes the cause for change. This creates a relationship between the change set created during the check-in and the work item. It’s also exposed in the TFS API as a property on the change set known as WorkItems. The great thing about this is that it lets you traverse the link in a natural manner, without having to set up the link queries yourself. This is not without its gotchas however, as I discovered…

To get your hands on the change set, you’ll initially need to set up a version control server object. Something to watch out for here: when scoping the lifetime of the server object you’re going to use to get the change set, you’ll need to take into account when the work item query is actually going to happen. You see, the property is a bit clever about how and when it loads the work items that populate the list. When you obtain your change set, the version control service that you’re using is stored in a property of the change set.

Enter the dangers of not being scope aware!

The issue I am not so subtly alluding to is one where you de-scope the original team foundation server instance, leaving the change set without an internal (open) server reference. This normally wouldn’t be an issue, as we’ve already got the change set – right? Some digging in Reflector shows that our WorkItems collection is actually a lazy load collection, populated on first call and cached thereafter. The process is roughly as follows:

When you run your first get on the list, an instance of the WorkItemStore service is created and a query executed to get the referencing work items. The list of referencing work item URIs is then used to fetch the work items and populate an internal list (effectively caching the list for future reference). A clone of the internal list is then returned as the result.

What does this mean to you?

Well, if you scope your server instance with a using statement that is closed before you make use of the WorkItems property, your server object (and the internal one used by the WorkItems property) will have already been closed and you’ll receive a null reference exception. There are two ways around this, and which you choose depends on your scenario. First, include the code that will query the work item list in the appropriate scope. This will extend the lifetime of your server object by the execution time of the extra code at least. The second option is to make an immediate call to the WorkItems property to populate and return the list, and then allow the server object to be disposed as normal. This means the server object will be closed earlier, but you’ll be carrying around the work item list in memory for longer.
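
To make that concrete, here’s a rough sketch of the second option, assuming the TFS 2010 client API and a known change set id (adjust the types to suit your setup). The important part is that the WorkItems property is read while the project collection is still open, so the lazy load runs before anything is disposed.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ChangesetWorkItems
{
    static WorkItem[] GetAssociatedWorkItems(Uri collectionUri, int changesetId)
    {
        using (var collection = new TfsTeamProjectCollection(collectionUri))
        {
            var versionControl = collection.GetService<VersionControlServer>();
            Changeset changeset = versionControl.GetChangeset(changesetId);

            // Touch the lazy loaded WorkItems property here, while the server
            // reference held inside the change set is still open. Reading it
            // after the using block has closed is what earns you the null
            // reference exception.
            return changeset.WorkItems;
        }
    }
}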

Silverlight in the Capital – with Scott Barnes

Well – July looks like being a massive month for Silverlight here in Canberra! Not only do we have Jordan Knight in town to present the fantastic Silverlight in a Day training session, but Scott Barnes – Microsoft Product Manager for Rich Client Platforms – will be in town too! I’ve been talking with Adam Cogan, who runs the Canberra .Net User Group, and it looks like there will be a special out-of-band user group meeting to give Scott the opportunity to talk to us about all things Silverlight. It’s not too often we have somebody directly involved in the product team in town, so bring your questions and suggestions!

Mark these days in your calendar now – and if you haven’t registered for the Silverlight in a Day session, be quick, we’re almost out of seats!

——————-

When: 18th July (0900 – 1700)

What: Silverlight in a Day

Where: Cliftons Canberra

Cost: Free!

——————-

When: 28th July (Lunch: 1230-1330, Evening: 1630-1830)

What: Scott Barnes on Silverlight

Where: Microsoft Canberra

Cost: Free!

——————-

Make sure you don’t miss out on either of these events and RSVP now by emailing me, or for the .Net UG head over to the SSW site to register!

2010 Build Basics – The Build Report and Build Configuration

While I’m working through the upgrade of dependency replicator to 2010 (beta 1) I have a CI build set up to give me feedback on how my check-ins are going. This is maybe a bit of overkill given that it’s a one-man show, but one of the things I was really keen to see was the new build report. I’d seen it in videos, and a little in the CTP, but a bug that existed made it pretty hard to get at (I got white screens pretty consistently).

So I finally cracked it open tonight, and one of the first things I noticed about my running build was the “Show Property Values” option on the log. This is a really great way to either get a general view of the tasks run during your build or, if the build is failing (as mine was – but that’s for later), to see the actual values being passed around during the build steps.

Build Log Without Parameters

Build Log With Parameters

How is this helpful? Well, as I mentioned – my CI build was “failing”. It wasn’t failing per se; rather, it would report that the configuration I was trying to build wasn’t valid. When checking the build definition it looked OK as “Release | x86”, but there was a key gotcha in the build configuration string. The split on the pipe value meant that the space on either side of the pipe got included in the configuration parameter, and therefore it didn’t match the “Release” and “x86” configuration names I’d specified. I spotted this while checking the build log through the new interface. As you can see, there’s a space either side of the comma – which made me wonder.

Broken Build Configuration
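
To illustrate the gotcha, here’s a small sketch of what appears to be going on (my reading of the behaviour, not the actual Team Build code):

// "Release | x86" as entered in the build definition dialog
string configurationToBuild = "Release | x86";
string[] parts = configurationToBuild.Split('|');

// parts[0] is "Release " and parts[1] is " x86" - the stray spaces mean
// neither value matches the "Release" and "x86" names defined in the solution.
// A simple Trim() would make the comparison behave as you'd expect:
string configuration = parts[0].Trim();   // "Release"
string platform = parts[1].Trim();        // "x86"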

It turns out the configuration to build is actually a little fragile, so if you are experiencing issues with your configuration being unable to be found, the configuration to build setting is a good place to start. I’ve raised a suggestion on Connect regarding improving this part of the dialog, to potentially load the available configurations from the solution to build (if it’s available at that point) or simply to apply a couple of small reliability fixes such as whitespace trimming. Vote for it here

Code Camp – VS2010 Q & A

There were a couple of questions I didn’t answer on the spot during my presentation so I thought I’d throw the answers up here for those who are interested. I’m sure there were more questions than just these, but thanks to it being early on a Monday and not having had my daily caffeine ration – they have slipped my mind for now.

If you did ask a question, and I haven’t covered it please send it through (stephen.godbold @ readify.net) and I’ll update this post with the answer.

Q: Does the manual test runner work for WinForms apps as well as web?
A: Yes. Amit Chatterjee has indicated there is support for Web, WinForms and WPF.

Q: Can you build a fully automated test from a test automation strip recorded by the manual test runner? E.g. something to use as a basis for load tests.
A: Yes. Test automation strips can be turned into Coded UI tests which can be run in a fully automated fashion. These can also be associated with a data source, to provide coverage of a set of scenarios. See here for how to create a coded UI test (from scratch) and here for how to set them up with a data context.

Q: Will the automation work with JavaScript actions? Currently Team System 2008 does not support this.
A: Yes. The recording will faithfully reproduce JavaScript-initiated actions of both a synchronous and asynchronous nature. Thanks go to Mathew Aniyan for his prompt reply on this one!