The news of TSB showing incorrect transactions in people’s accounts is evidence of a software project gone wrong through poor-quality work. Why did it happen, and how could it have been avoided? The underlying root cause could come from a range of possibilities; in this article I explore some of the most likely, and the lessons we can learn from them.
Legacy Systems under Pressure
At the core of most banks and large financial institutions is software that was originally written long ago and has proven to be very reliable. We call these legacy systems. As we demand new ways of accessing our banking data, the underlying code of these legacy systems finds itself working in ways that the original developers never imagined or intended, and this can lead to infrequent bugs.
Maintaining Legacy Technologies
Most of these old systems are written in old technologies such as a language called COBOL. Most COBOL developers have now retired, and few young developers are enthusiastic about learning such dead languages. Consequently, there is a shortage of highly skilled developers for these old systems.
Risk Leads to Abstraction
Migrating legacy systems to newer technologies is hard and risky. It can take years to plan and implement. Replacing a core system can be so potentially disruptive that management avoids doing so altogether. One tactic that allows banks to keep offering new functions to users is to wrap these old systems using a technique called abstraction. The legacy system is treated as a “black box”: we don’t need to worry about how it works, we just need to be confident of its inputs and outputs. This technique postpones the eventual need to replace these systems.
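The wrapping tactic can be sketched in a few lines of Python. Everything here is hypothetical (the `LegacyCore` class and its fixed-width message format are stand-ins for a real mainframe interface): the point is that all knowledge of the legacy format lives inside one adapter, and the rest of the bank’s software only sees a clean, modern interface.

```python
class LegacyCore:
    """Stand-in for the legacy 'black box': we trust only its inputs and outputs."""

    def execute(self, message: str) -> str:
        # Hypothetical fixed-width protocol, e.g. "BAL0000123456" -> "OK0000050000"
        # (a balance request answered with the balance in pence).
        if message.startswith("BAL") and len(message) == 13:
            return "OK0000050000"
        return "ER"


class AccountService:
    """Modern abstraction layer; the legacy message format is confined here."""

    def __init__(self, core: LegacyCore):
        self._core = core

    def balance(self, account_id: int) -> int:
        reply = self._core.execute(f"BAL{account_id:010d}")
        if not reply.startswith("OK"):
            raise RuntimeError("legacy core rejected request")
        return int(reply[2:])  # pence


service = AccountService(LegacyCore())
print(service.balance(123456))  # prints 50000
```

New banking products call `AccountService`, never `LegacyCore` directly, so the core can eventually be replaced behind the adapter without touching its callers.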
Architecture and Complexity
As new banking products are created, more and more dependent systems “hang off” the legacy core, creating an ever more complex picture of how data moves between them. Over-complex systems can be the cause of many bugs. A good IT architecture will help to combat, or at least contain, this complexity and its associated risk.
Systems that have been around a long time have often been tinkered with by many programmers over the years. Sometimes there is little knowledge handover from one developer to the next. The new developer doesn’t understand what the last one did and is reluctant to change the existing code in case it breaks. So instead, new code is written to sit alongside the old. Both sets of code probably do the same thing, but when the next developer comes along, they can’t be sure which code to work on. This is one of the ways that complexity builds up.
Testing alone is not enough to ensure quality
Studies of thousands of IT projects provide us with evidence that testing alone will not ensure that all bugs have been found. In fact, at best, testing will rarely find more than 85% of bugs. Testing needs to be supplemented by other forms of quality improvement techniques that can identify potential bugs even before you start testing. Some of the most effective are static analysis and formal reviews covering requirements, architecture, design and code.
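Static analysis means inspecting the code itself, before it ever runs, for patterns known to cause trouble. As a toy illustration (a minimal sketch, not a real tool), the few lines below use Python’s standard `ast` module to flag bare `except:` clauses, a pattern that can silently swallow errors; commercial analysers apply hundreds of such checks.

```python
import ast


def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare `except:` handlers in the source."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]


sample = """\
try:
    post_transaction()
except:
    pass
"""
print(find_bare_excepts(sample))  # prints [3]
```

Because checks like this run without executing the program, they can be applied to an entire codebase long before a single test case exists.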
Test that it doesn’t do what it shouldn’t
Suppose you are making a transaction on your banking app, the connection is lost, and only half the instruction reaches the bank. How does the bank’s system handle half an instruction? If that system is connected to a legacy system, how does the legacy system handle an incomplete instruction? Whenever we make a system change we always test that the new system does what it is supposed to do. We must also ensure that it doesn’t do what it shouldn’t; this kind of testing tends to get overlooked.
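The idea can be made concrete with a small sketch. The `parse_instruction` function and its comma-separated message format are hypothetical; what matters is the second half of the example, the “negative” tests that assert a truncated instruction is rejected outright rather than partially applied.

```python
def parse_instruction(raw: str) -> dict:
    """Accept only a complete 'PAY,<from>,<to>,<pence>' instruction."""
    parts = raw.split(",")
    if len(parts) != 4 or parts[0] != "PAY":
        raise ValueError("incomplete or malformed instruction")
    if not (parts[1] and parts[2] and parts[3].isdigit()):
        raise ValueError("incomplete or malformed instruction")
    return {"from": parts[1], "to": parts[2], "pence": int(parts[3])}


# Positive test: the system does what it should.
assert parse_instruction("PAY,12345678,87654321,2500")["pence"] == 2500

# Negative tests: it must NOT act on half a message.
for truncated in ("PAY,12345678", "PAY,12345678,87654321,", ""):
    try:
        parse_instruction(truncated)
        raise AssertionError("truncated instruction was accepted")
    except ValueError:
        pass  # correctly rejected
```

Writing the failure cases down as tests forces the team to decide, in advance, what the system should do with bad input, instead of discovering it in production.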
No doubt the work TSB carried out to migrate its systems will have included extensive testing. With all such projects, though, the team will have been under pressure to complete the work by a particular date. A compressed timescale is one of the most common causes of IT project failure.
Coding is creative work that takes ability and experience. There are many ways of coding the same outcome: one coder can achieve a piece of functionality in just one or two lines, while another may have to write fifty. On the whole, more compact, terse code tends to be of higher quality. Developer skills vary dramatically: compare a singer who is pitch perfect with another who can’t carry a tune, yet both call themselves “singers”. Developers of low competence can introduce more bugs than they do fixes or functionality.
Business pressure for new requirements
Management are always under pressure to grow their business, and in some cases can lose sight of the importance of stable, accurate systems when under pressure to deliver new features and capabilities.
Bugs not Predicted so the Business Risk was not Understood
Contrary to many people’s beliefs, bugs in software can be predicted and measured. Using standard software metrics it is possible to estimate how many bugs are left in a system before going live. If managers were told how many outstanding defects are yet to be found, they might not approve the decision to go live. It is disappointing that so few people use these metrics. They are (like COBOL) unfashionable, but they work. They bring considerable certainty to an industry that has fallen in love with “fail fast” and “rapid deployment” over proven metrics.
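One long-established way to estimate latent defects is capture-recapture, borrowed from wildlife population studies and applied to software inspections: two independent reviews examine the same system, and the overlap between their findings indicates how many defects remain unfound (the Lincoln-Petersen estimate). The sketch below illustrates the arithmetic; the defect labels are invented for the example.

```python
def estimate_total_defects(found_a: set, found_b: set) -> int:
    """Lincoln-Petersen estimate: N = |A| * |B| / |A intersect B|."""
    overlap = len(found_a & found_b)
    if overlap == 0:
        raise ValueError("no overlap between reviews: estimate is unbounded")
    return round(len(found_a) * len(found_b) / overlap)


review_a = {"D1", "D2", "D3", "D4", "D5", "D6"}  # reviewer A found 6 defects
review_b = {"D4", "D5", "D6", "D7", "D8"}        # reviewer B found 5, 3 in common

total = estimate_total_defects(review_a, review_b)   # 6 * 5 / 3 = 10
remaining = total - len(review_a | review_b)         # 10 estimated - 8 found = 2
print(total, remaining)  # prints 10 2
```

A small overlap between two thorough, independent reviews is a warning sign: it suggests the total defect population is far larger than what either review found.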
There are many possible reasons for the recent TSB problems, and I have suggested some of them here. The truth is that replacing legacy systems is more than just an IT responsibility; in some cases the survival of the entire business depends on it. Banking customers may be tolerant if their banking app is unavailable for a few hours, but they will not accept incorrect balances or severely delayed transactions. These undermine trust, and without trust, banking customers will go elsewhere.

Colin Hammond is an IT project assurance consultant and author of ScopeMaster, a tool for bringing certainty to IT projects.