What is Software Productivity?

Essentially it’s simple: productivity is output over time.

Productivity = Outputs/Time
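
As a worked illustration, here is a minimal sketch in Python of that calculation. The numbers are purely illustrative, and "output units" stands in for whatever output measure you choose:

```python
# Minimal sketch of the basic productivity formula.
# The figures below are purely illustrative.

def productivity(output_units: float, staff_months: float) -> float:
    """Productivity = Outputs / Time (here, output units per staff-month)."""
    return output_units / staff_months

# e.g. a team delivering 450 units of output over 30 staff-months:
print(productivity(450, 30))  # 15.0 units per staff-month
```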

However, the tricky part is finding a suitable measure for outputs. Outputs in the context of software are hard to standardise. In most cases we can think of the output of a software endeavour as business value and/or user-recognisable capabilities delivered.

Why Many Companies Want to Measure Software Productivity

Managers have a duty to seek quantification and some certainty about cost, schedule and scope for a particular software undertaking. They will also be looking for means and measures to:

  • Benchmark against other organisations
  • Track progress over time
  • Assess team and individual productivity
  • Use productivity to inform offshoring and prioritisation decisions

General Productivity Measures

Mostly we focus on developer productivity, as developers tend to do the most and cost the most on a software project. But it is also useful to understand the productivity of the others involved, including testers, analysts, project managers, designers and architects.

Criteria for a Good Software Productivity Metric

There is no perfect metric for “output” on software work, so we have to find alternative, proxy indicators that can help us get close to quantifying the “output”.

Ideally, for software, we want an “output metric” that achieves the following:

  • strongly correlates to business value
  • is ungameable, consistent, objective, repeatable and independently verifiable
  • is independent of programming language
  • is valid for comparing one project to another
  • can account for the more complex work being assigned to more capable team members
  • can be collected cheaply and easily

Note that the ideal metrics for individual productivity vary slightly from those for determining team productivity.

Ranking of Software Metrics

The following table ranking candidate software productivity metrics was produced by Steve McConnell and is discussed in a presentation of his. COSMIC function points were less well known, and automated sizing was unavailable, when Steve first created this table, so we have added the column for COSMIC function points. That extra column reflects our own subjective assessment and is not part of Steve’s original work.

(In the table, SM = staff-month, so LOC/SM is lines of code delivered per staff-month. Higher scores are better.)

| Criterion | LOC/SM | FP/SM | CFP/SM | Story Points/SM | 360 Peer Reviews | Manager Evaluation | Task completion predictability | Test Cases Passed | Defect Rates |
|---|---|---|---|---|---|---|---|---|---|
| Measurement truly reflects productivity | 3 | 4 | 4 | 4 | 3 | 2 | 1 | 4 | 3 |
| Directly or indirectly accounts for all or most work output | 4 | 4 | 4 | 5 | 5 | 5 | 2 | 4 | 3 |
| Useful for measuring work of non-programmers | 1 | 4 | 4 | 4 | 5 | 5 | 5 | 4 | 4 |
| Resists “gaming” by individual contributors | 2 | 5 | 5 | 4 | 3 | 3 | 4 | 4 | 5 |
| Strongly correlated with business value | 2 | 3 | 4 | 4 | 3 | 4 | 2 | 4 | 3 |
| Objective, independently verifiable | 4 | 4 | 5 | 3 | 2 | 2 | 4 | 4 | 4 |
| Measures output the same regardless of programming language used | 1 | 5 | 5 | 3 | 5 | 5 | 5 | 5 | 5 |
| Supports person-to-person comparisons within a team | 3 | 4 | 4 | 4 | 5 | 4 | 4 | 4 | 4 |
| Accounts for best people getting the most difficult assignments | 2 | 4 | 5 | 4 | 5 | 5 | 1 | 5 | 2 |
| Data can be collected easily and cheaply | 3 | 1 | 5 | 3 | 3 | 4 | 3 | 3 | 4 |
| Total | 25 | 38 | 45 | 38 | 39 | 39 | 31 | 41 | 37 |
| Rank | 9 | 5 | 1 | 5 | 3 | 3 | 8 | 2 | 7 |
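
The totals and ranks in the table are easy to verify. The following sketch recomputes them, assuming the simple unweighted sums the table uses and shared ranks for tied totals:

```python
# Recompute the column totals and ranks from the table above.
scores = {
    "LOC/SM":                         [3, 4, 1, 2, 2, 4, 1, 3, 2, 3],
    "FP/SM":                          [4, 4, 4, 5, 3, 4, 5, 4, 4, 1],
    "CFP/SM":                         [4, 4, 4, 5, 4, 5, 5, 4, 5, 5],
    "Story Points/SM":                [4, 5, 4, 4, 4, 3, 3, 4, 4, 3],
    "360 Peer Reviews":               [3, 5, 5, 3, 3, 2, 5, 5, 5, 3],
    "Manager Evaluation":             [2, 5, 5, 3, 4, 2, 5, 4, 5, 4],
    "Task completion predictability": [1, 2, 5, 4, 2, 4, 5, 4, 1, 3],
    "Test Cases Passed":              [4, 4, 4, 4, 4, 4, 5, 4, 5, 3],
    "Defect Rates":                   [3, 3, 4, 5, 3, 4, 5, 4, 2, 4],
}

totals = {metric: sum(s) for metric, s in scores.items()}

# Rank 1 = highest total; equal totals share a rank, as in the table.
for metric, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    rank = 1 + sum(1 for t in totals.values() if t > total)
    print(f"{rank:>2}  {metric:<32} {total}")
```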

In Steve’s original assessment the conclusion was a close run between several approaches, with “Test Cases Passed” scoring highest. Now, with the advent of a) a simpler approach to functional sizing (COSMIC function points vs IFPUG function points) and b) automated sizing, there appears to be a clear leading metric for assessing productivity – COSMIC Function Points.

Misleading Productivity Measures

There is no single perfect metric for software work, but there are some that are useful and valid. There are also some that are likely to mislead, be abused or instigate unhelpful behaviour.

Lines of Code

The productivity measure here is the number of lines of code (LOC) written over time. Given that different developers can vary by 10x in the number of lines of code they write for a given piece of functionality, LOC delivered over time can, at best, only be an approximation. It is easy to measure but is not necessarily a good measure. The number of lines of code required also varies significantly from one programming language to another. Using lines of code can also encourage developers to write superfluous lines just to be classified as “productive”. This has the opposite effect of that intended, as it generates additional maintenance costs.
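
To illustrate how crude the measure is, here is a naive LOC counter sketched in Python. It assumes we simply skip blank lines and “#”-style comments; real counting tools disagree widely on such rules, which is part of the problem:

```python
def count_loc(source: str) -> int:
    """Naively count 'lines of code': non-blank lines that are not comments.

    Tools disagree on what counts (comments, braces, generated code),
    which is one reason LOC comparisons are so unreliable.
    """
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

snippet = """
# add two numbers
def add(a, b):
    return a + b
"""
print(count_loc(snippet))  # 2
```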

Story Points

Story Points are a subjective, non-standard proxy for effort estimation. Some will claim that they “measure” complexity; at best, they may give an indication of the difficulty of a particular task. In essence they are an indication of “ideal staff days”. They are unsuited to contract work and cannot be used to compare one team with another, nor one project with another. There is no “learning” or “continuous improvement” achievable by adopting story points.

Story Counts

User story size can vary significantly, whether you estimate size in COSMIC function points or ideal staff days. One story may be a 10-minute technical task, whereas another may require a whole month of a team’s effort to deliver. In our experience stories vary in size from 0 CFP to 120 CFP.

Other Team Productivity Measures

Scorecard-based productivity assessment can be useful in some circumstances, especially if the card values are both normalised across teams and chosen to reflect all the criteria that the organisation wants to use.

It is important to choose criteria that focus on the organisation’s values (for quality and output). It is also important to get teams aligned with the true objectives of the business.

Recommendation

COSMIC function points

Our recommended output measure is COSMIC function points (CFP). Our reasons for this recommendation are:

  1. CFP is rooted in fundamental principles of software behaviour (inputs, outputs, reads and writes); a simplified counting sketch follows this list.
  2. CFP is user-focused – it aligns to what is needed to deliver value to the user.
  3. CFP is a reasonable proxy for business value.
  4. CFP aligns very well to actual effort.
  5. CFP has a definition for an individual unit of size.
  6. CFP provides a measurement of size, which is the most significant factor in determining the cost and duration of a project.
  7. CFP allows managers to manage non-development activities using CFP as the base metric too.
  8. CFP is open, free and vendor-independent.
  9. CFP is suited to incomplete requirements (i.e. Agile).
  10. CFP is suited to almost any type of software.
  11. CFP early estimation is mostly automated (by ScopeMaster®).
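
To illustrate reason 1, here is a much-simplified sketch of the COSMIC counting idea: one CFP per data movement (Entry, Exit, Read or Write) within a functional process. The story and its movements below are hypothetical, and real measurement follows the full COSMIC (ISO/IEC 19761) rules rather than this bare count:

```python
# Much-simplified illustration of COSMIC sizing: 1 CFP per data movement
# (Entry, Exit, Read, Write) in a functional process. The story and its
# movements are hypothetical examples, not a full COSMIC measurement.

MOVEMENT_TYPES = {"Entry", "Exit", "Read", "Write"}

def size_in_cfp(movements: list[str]) -> int:
    assert all(m in MOVEMENT_TYPES for m in movements)
    return len(movements)

# "As a customer, I can place an order" (hypothetical analysis):
order_story = [
    "Entry",  # order details come in from the user
    "Read",   # look up the product catalogue
    "Write",  # persist the new order
    "Exit",   # confirm the order back to the user
]
print(size_in_cfp(order_story))  # 4 CFP
```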

Software development productivity measurement is the activity of recording the metrics and attributes of a software endeavour for comparative purposes.

CFP is a reliable way of assessing software productivity.

How productive are our Software Developers?

Would you like to know if your team is more productive than another or than industry norms?

You can compare the activities of one team against another within the same organisation; this is internal benchmarking. Comparing against another, similar organisation is referred to as external benchmarking.

Although industry benchmarks may be interesting, local/internal benchmarks are the most reliable.

Software development productivity benchmarking – assess your team

If you have a great team, why not publish your productivity index? It may help you win development work.

All you need is a set of user stories from a previous project, about an hour, and access to ScopeMaster.

  1. Take a set of requirements from a recent project and upload them into ScopeMaster. Let the analyser work out the size.
  2. Add to the productivity calculator the total effort it took (in person-days), the duration (in months), and the defect removal that you achieved.
  3. It will then determine the productivity index of your team, as sketched below.

And that’s all there is to it.
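
As a rough illustration of what such a calculation involves, here is a sketch that assumes a simple index of CFP delivered per person-day. This is an illustrative assumption, not ScopeMaster’s actual formula, which also takes the duration and defect removal you enter into account:

```python
# Hypothetical productivity-index sketch: CFP delivered per person-day.
# An illustrative assumption only; ScopeMaster's own calculation also
# uses duration and defect removal, and may differ from this.

def productivity_index(size_cfp: float, effort_person_days: float) -> float:
    return size_cfp / effort_person_days

# e.g. 600 CFP delivered with 100 person-days of effort:
print(round(productivity_index(600, 100), 1))  # 6.0
```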

If you are above average, say 6+ for a web team, then you are on track to greatness. If you like, we will publish your team’s productivity index on our league table.

Beware of misleading metrics

Story points, lines of code, T-shirt sizes and man-days are all gameable or easy to manipulate. These techniques are usually used for estimating effort and time, and they are invariably manipulated for self-serving purposes. Only ISO-standard functional sizing methods are hard to game.
