Most IT failures go unnoticed by the public – backend systems break, projects go over-budget, or an IT group misses an internal deadline. Such was not the case with this week’s IT failure at the UK-based Royal Bank of Scotland (RBS), which, incidentally, has a balance sheet the size of the UK economy.
According to Reuters, a bad software upgrade made RBS’ systems unable to process payments for both individual and business customers. The upgrade failure also hit systems from British bank NatWest and Ireland’s Ulster Bank. The BBC describes what happened and explains the ripple effect that occurred when the RBS systems went down:
The bank says there was a failure in the computer system that carries out the overnight transfer of money between accounts.
So, even though payments had been made – such as a business paying wages to staff – this did not show up on their account balances. In turn this meant many customers could not make payments themselves, such as paying rent to their landlord.
The failure effectively caused a traffic jam in the system. It created a huge backlog in updating account balances, which the bank has been trying to clear for some days.
Customers of other banks have also been affected because some payments from RBS, NatWest and Ulster Bank customers have not come through
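The BBC’s “traffic jam” description can be sketched in code. The following is a purely hypothetical illustration, not RBS’s actual system: model the overnight settlement run as a batch job that normally clears all pending payments each night. If the job fails for a few consecutive nights (as the Tuesday upgrade apparently caused), unsettled payments compound night after night until a healthy run finally clears the backlog.

```python
from collections import deque

def run_nightly_batches(nightly_payments, failed_nights):
    """Hypothetical sketch of an overnight settlement batch.

    nightly_payments: payments arriving each night (list of counts).
    failed_nights: set of night indices on which the batch job fails.
    Returns the backlog size remaining after each night.
    """
    queue = deque()
    backlog_history = []
    for night, arrivals in enumerate(nightly_payments):
        queue.extend(range(arrivals))  # new payments await settlement
        if night not in failed_nights:
            queue.clear()  # a healthy run settles everything overnight
        backlog_history.append(len(queue))
    return backlog_history

# A mid-week failure leaves payments piling up until the job is fixed,
# and even then downstream balances lag while the backlog is worked off.
print(run_nightly_batches([100, 100, 100, 100], failed_nights={1, 2}))
# -> [0, 100, 200, 0]
```

The point of the sketch is that the backlog grows with every missed run, which is why a fault “fixed by Friday” can still leave customers with wrong balances for days afterwards.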
In what must be the IT-failure understatement of the year, RBS chief executive Stephen Hester commented:
It shouldn’t have happened and we are very sorry
The BBC’s Business Editor speculated that outsourcing arrangements interfered with the bank’s ability to isolate and solve the problem:
As I understand it, one reason why RBS has not given much detailed information about why its services have been so badly disrupted is that so much of the operational responsibility for IT is outsourced – so there is a sensitive issue of where to attribute blame.
In my conversations with RBS bankers, there is an implication that outsourcing contributed to the problems – though they won’t say whether this is an issue of basic competence or of the complexities of coordinating a rescue when a variety of parties are involved.
Adding to the confusion, Information Age reports that the problems were entirely self-made within RBS:
An RBS spokesperson said this morning that the technical fault, to her knowledge, had taken place on RBS’ own systems, and not those of a supplier or outsourcer.
Although the problem upgrade occurred on Tuesday night and was fixed by Friday, downstream effects will continue through this week. This IT failure continues to have a dramatic negative impact on bank customers, many of whom do not have access to their own money.
ZDNet UK: Natwest, RBS customers hit by balance glitch
BusinessWeek: RBS Systems Failure Unlikely to Be Resolved Until Monday
London Evening Standard: NatWest chaos heads into weekend
Irish Times: Angry customers seek answers
BBC: Ulster Bank ‘needs week to clear IT failure backlog’
The Telegraph: NatWest computer glitch ‘fixed but backlog remains’
NatWest: Helpful Banking
A comment thread on the bank’s online customer service forum, containing thousands of messages, describes travelers stranded abroad, home purchase closings that did not go through, and similar stories of hardship caused by customers not having access to funds. Ireland’s Minister for Social Protection said that up to 30,000 people did not receive social welfare payments, even though funds had been withdrawn from government accounts. To assist customers, RBS opened 1,200 bank branches on Sunday.
A SHAMEFUL PERFORMANCE
Although IT failures happen, the impact of this one is huge by any standard, and several important questions remain unanswered:
- What actually happened? Was this an outsourcing problem, an in-house problem, or is the RBS technology organization so complex that it’s impossible to isolate the exact cause?
- How can one of the world’s largest banks roll out an upgrade without sufficient testing? No matter how you look at it, the upgrade created unexpected issues upon deployment. That’s why we test, test, and test again; it appears RBS did not test enough.
- Why the long and difficult recovery? Presumably, a bank this size has worked out its business continuity and recovery plans with a level of efficiency only possible for a huge organization. Or maybe not.
- How will the government hold banks accountable for such apparently poor IT processes? IT failures do not just happen randomly or in a vacuum. I suspect process and technology issues will emerge as warning signs that management and technical workers ignored.
THE BOTTOM LINE
This IT failure is sad for RBS and far worse for its customers. I urge the UK government to conduct a detailed review of RBS management, business, and technical processes with respect to all aspects of IT. It appears that RBS did not follow important standard practices in areas such as testing and business continuity planning.
Government regulators should hold RBS to account with stiff fines and other punishments as appropriate. Regulators should treat this IT failure with the same severity and determination they would bring to operational failures in any other part of a major bank.