
Application Development Efficiency

McKinsey published an article that discusses increasing application development efficiency. They point out that a number of measurements for development projects are input measurements rather than output measurements, and suggest that Use Cases, and by extension Use Case Points, form a solid output measure for determining the progress and effectiveness of a development effort.

Article: Enhancing the efficiency and effectiveness of application development

While I feel the idea is sound at its core, the article appears to draw an incorrect causal relationship between the tool, the measure and the outcome. It also contains a number of subtle references to the underlying behaviors that are actually enabling the tool to be deemed successful.

I believe that: 

  • Use Cases, and by extension Use Case Points, suffer from granularity and assumption issues as much as any other technique, e.g. interaction 2 on Use Case 111 is deemed to be complexity ratio 2, until work is started and it is discovered that it is actually 10. 
  • Knowing what is important is more important than knowing when we’ll be done, as the first influences the second.
  • Use Case Points as a comparative value across teams suffer the same issues as all other measures. Inconsistencies in scale, skill and methods influence the usefulness of the data.  

Drawing out what I believe are the underlying behaviors creating that success:

  • Focus on bringing the business closer through an appropriate shared communication tool, e.g. examples, use cases, stories
  • Teach critical decision making as a skill to staff on project teams, as a means of identifying and testing assumptions
  • Ensure the measure you use is paired with an appropriate analysis method, and account for the properties that influence it, so it provides a useful understanding of performance across teams and projects

None of these are particularly bound to any tool, but together they lay a foundation on which a number of tools, such as use case points, story points, or story counts, can be successful.

 


Quick Tip: Display unit test result on double click

In VS 2010 the default behavior for double-clicking on a unit test result has changed. Previously, when you double-clicked on a failing test you’d be shown the test results window with the full result details for the failing test. Now the same action takes you to the line of code that failed in the test.

I find I prefer the previous behavior, so I went digging and found the following setting under the Tools -> Options menu item to revert it…

Exploring the TFS API – Change Sets, Work Items and a Null Reference Exception

When using TFS it is good practice, when checking in, to associate the check-in with a work item that describes the cause for the change. This creates a relationship between the change set created during the check-in and the work item. It’s also exposed through the TFS API as a property on the change set known as WorkItems. The great thing about this is that it lets you traverse the link in a natural manner, without having to set up the link queries yourself. This is not without its gotchas, however, as I discovered…

To get your hands on the change set, you’ll initially need to set up a version control server object. Something to watch out for here: when scoping the lifetime of the server object you’re going to use to get the change set, you’ll need to take into account when the work item query is going to happen. You see, the property is actually a bit clever about how and when it loads the work items that populate the list. When you obtain your change set, the version control service you’re using is stored in a property of the change set.
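For reference, getting that far looks something like the following sketch. The collection URL and changeset number are placeholders, and this assumes the VS 2010 era client API:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

using (var tfs = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs/DefaultCollection")))
{
    var versionControl = tfs.GetService<VersionControlServer>();

    // The changeset captures a reference to the version control server;
    // the WorkItems property uses that reference later to run its query.
    Changeset changeset = versionControl.GetChangeset(12345);

    foreach (var workItem in changeset.WorkItems)
    {
        Console.WriteLine("{0}: {1}", workItem.Id, workItem.Title);
    }
}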

Enter the dangers of not being scope aware!

The issue I am not so subtly alluding to is one where you de-scope the original team foundation server instance, leaving the change set without an internal (open) server reference. This normally wouldn’t be an issue, as we’ve already got the change set – right? Some digging in Reflector shows that our WorkItems collection is actually a lazy load collection, populated on first call and cached thereafter. The process is roughly as follows:

When you run your first get on the list, an instance of the WorkItemStore service is created and a query is executed to get the referencing work items. The list of referencing work item URIs is then used to fetch the work items and populate an internal list (effectively caching it for future reference). A clone of the internal list is then returned as the result.
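Paraphrasing the Reflector output, the property behaves roughly like this sketch (all the internal member names here are invented for illustration):

// Roughly what Changeset.WorkItems does internally, per Reflector.
// Member names are invented; this is not the actual TFS source.
public WorkItem[] WorkItems
{
    get
    {
        if (this.cachedWorkItems == null)
        {
            // Needs the (still open) server captured when the changeset was
            // fetched; disposing that server first is what triggers the
            // NullReferenceException described below.
            WorkItemStore store = GetWorkItemStoreFromCapturedServer();

            // Query for the URIs of work items referencing this changeset,
            // fetch those work items, and cache them.
            this.cachedWorkItems = FetchReferencingWorkItems(store, this.ChangesetId);
        }

        // Callers get a clone of the cached list, not the cache itself.
        return (WorkItem[])this.cachedWorkItems.Clone();
    }
}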

What does this mean to you?

Well, if you scope your server instance with a using statement that is closed before you make use of the WorkItems property, your server object (and the internal one used by the WorkItems property) will have already been closed and you’ll receive a null reference exception. There are two ways around this, and which you choose depends on your scenario. First, include the code that queries the work item list within the appropriate scope. This extends the lifetime of your server object by at least the execution time of the extra code. The second option is to make an immediate call to the WorkItems property to populate and return the list, and then allow the server object to be disposed as normal. This means the server object is closed earlier, but you’ll be carrying the work item list around in memory for longer.
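To make this concrete, here’s a sketch of the failing pattern and both work-arounds. The collection URL and changeset number are placeholders, and ProcessWorkItems is a hypothetical stand-in for whatever you do with the list:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ChangesetScopingExamples
{
    static readonly Uri CollectionUri =
        new Uri("http://tfsserver:8080/tfs/DefaultCollection"); // placeholder

    // Hypothetical helper, standing in for your real work item handling.
    static void ProcessWorkItems(WorkItem[] items) { }

    static void Broken()
    {
        Changeset changeset;
        using (var tfs = new TfsTeamProjectCollection(CollectionUri))
        {
            changeset = tfs.GetService<VersionControlServer>().GetChangeset(12345);
        } // server disposed here, before the lazy load has run...

        ProcessWorkItems(changeset.WorkItems); // ...so this throws a null reference exception
    }

    // Option 1: widen the scope so the first read happens while the server is open.
    static void FixByWideningScope()
    {
        using (var tfs = new TfsTeamProjectCollection(CollectionUri))
        {
            var changeset = tfs.GetService<VersionControlServer>().GetChangeset(12345);
            ProcessWorkItems(changeset.WorkItems); // query runs inside the using block
        }
    }

    // Option 2: force the lazy load immediately, then let the server dispose;
    // you carry the materialised list in memory instead of the changeset.
    static void FixByMaterialisingEarly()
    {
        WorkItem[] workItems;
        using (var tfs = new TfsTeamProjectCollection(CollectionUri))
        {
            workItems = tfs.GetService<VersionControlServer>().GetChangeset(12345).WorkItems;
        }
        ProcessWorkItems(workItems);
    }
}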

Workflow If-Else Activity Key Not Found Exception

The Situation

I’ve been doing a bit of integration between a group of sequential workflows and a WCF engine service recently. In this particular solution, we’ve used a state machine linked to a number of sequential workflows to get our work done and to ease some of the versioning concerns with the current version of WF. We’ve now got a second state machine that manages a similar, but separate, life cycle and contains some similar sequential workflows (which are named the same, but exist in separate namespaces).

This should be fine, as the fully qualified name for each workflow will be different. Unfortunately this is not quite true: as soon as I add an If-Else activity, I start getting Key Not Found exceptions!

Unfortunately, even with some pretty explicit custom logging tools, this exception is a hard one to pull apart. In the output from the workflow runtime, all that can really be seen is a key not found exception in the dictionary on the If-Else activity. Digging deeper (with the help of an SOS master – thanks Bud!) we found that the keys loaded into the dictionary were not the keys that should have been there. They were the keys for an alternate sequential workflow with the same name in a separate namespace…

So what’s going on?

Well, when you add an If-Else activity, a rules file is generated to contain the conditions you place on the activity branches. When these are compiled they are embedded into the assembly, and at run time the rules are located in the assembly and associated with their activity not by a fully qualified path, but by the activity name alone. Obviously this causes some contention if you have two rules files that belong to activities with the same name in different namespaces!
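To illustrate the clash (the class and namespace names here are invented, not from the real solution):

using System.Workflow.Activities;

namespace Ordering.Workflows
{
    // Adding an If-Else activity here generates ProcessRequest.rules
    // to hold the branch conditions.
    public partial class ProcessRequest : SequentialWorkflowActivity { }
}

namespace Invoicing.Workflows
{
    // This one also generates ProcessRequest.rules. At run time the rules
    // are matched to their activity by name alone, so the two sets collide.
    public partial class ProcessRequest : SequentialWorkflowActivity { }
}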

Fixing the issue

The fix is reasonably simple: change the name of one of the activities, right? Well, yes and no. Due to the limited refactoring support in WF, you’ll need to not just change the file name but also manually update the activity name in the properties of the activity. This will break things like send or receive activities, and any other activities that map their values using a path that contains the activity name, so you’ll have to go back and re-bind those bits. Once you’re done, though, and everything is building again, you should be right to rock!
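For reference, the manual part of the rename ends up touching the designer-generated code, roughly like this (a sketch only, with invented names):

// In the workflow’s designer-generated InitializeComponent.
private void InitializeComponent()
{
    this.CanModifyActivities = true;
    // ...
    // The activity name must be updated to match the renamed class;
    // renaming the file alone leaves the old name (and the rules clash) behind.
    this.Name = "InvoiceProcessRequest"; // was "ProcessRequest"
    this.CanModifyActivities = false;
}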