McKinsey published an article discussing how to increase application development efficiency. They point out that many measurements used on development projects are input measures rather than output measures, and suggest that Use Cases, and by extension Use Case Points, form a solid output measure for determining the progress and effectiveness of a development effort.
While I feel the idea is sound at its core, the article appears to draw an incorrect causal relationship between the tool, the measure, and the outcome. It contains a number of subtle references to the underlying behaviors that are actually enabling the tool to be deemed successful.
I believe that:
- Use Cases, and by extension Use Case Points, suffer from granularity and assumption issues as much as any other technique, e.g. interaction 2 on Use Case 111 is deemed to be complexity ratio 2, until work is started and it is discovered that it is actually 10.
- Knowing what is important is more important than knowing when we’ll be done, as the first influences the second.
- Use Case Points as a comparative value across teams suffer from the same issues as all other measures: inconsistencies in scale, skill, and method influence the usefulness of the data.
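To make the granularity point concrete, here is a minimal sketch of how a single misclassified use case shifts an Unadjusted Use Case Point total. The weights below are the commonly cited ones from Karner's method (simple = 5, average = 10, complex = 15); the specific use cases and classifications are hypothetical, and the "complexity ratio" figures in the text above do not map directly onto these weights.

```python
# Commonly cited Karner weights for unadjusted use case weight (UUCW).
# These are assumptions for illustration, not figures from the article.
UCW = {"simple": 5, "average": 10, "complex": 15}

def uucw(classifications):
    """Sum the unadjusted use-case weight for a list of complexity labels."""
    return sum(UCW[c] for c in classifications)

# Planned: five hypothetical use cases as classified up front.
planned = ["simple", "simple", "average", "average", "complex"]
print(uucw(planned))  # 45

# Once work starts, one "average" use case turns out to be "complex".
actual = ["simple", "simple", "average", "complex", "complex"]
print(uucw(actual))  # 50
```

The point is not the arithmetic but that the classification is an assumption made before the work begins; any point-based measure inherits that assumption.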
Drawing out what I believe are the underlying behaviors creating the success:
- Focus on bringing the business closer through an appropriate shared communication tool e.g. examples, use cases, stories
- Teach critical decision-making as a skill to staff on project teams, as a means of identifying and testing assumptions
- Ensure the measure you use comes with an appropriate analysis method, and account for the properties that influence it, so that it provides a useful understanding of performance across teams and projects
None of these are particularly bound to any tool, but together they lay a foundation on which a number of tools, such as use case points, story points, or story counts, can be successful.