Yesterday, I reprinted an excellent and thought-provoking blog post by Tim Bray, one of the inventors of XML.
Tim’s post represents a reality check for traditional enterprise software, suggesting we could avoid massive IT failures by following a more web-oriented approach to IT and software development.
Tim summarizes his point this way:
The community of developers whose work you see on the Web, who probably don’t know what ADO or UML or JPA even stand for, deploy better systems at less cost in less time at lower risk than we see in the Enterprise.
He goes on to compare the culture of web development (“starting small, gathering user experience before it seems reasonable”) to that of traditional enterprise IT, where he says failure rates “remain distressing.”
Tim then argues against building large-scale custom software unless absolutely necessary (a premise I wholeheartedly support) and finally concludes that “we need to do it better.” In this case, improvement means philosophically adopting an end user, consumer-oriented approach to software creation.
In Tim’s words:
Suppose you asked one of the blue-suit solution providers to quote you on building Ravelry or Twitter or Basecamp. What would the costs be like? And how much confidence would you have in a good result?…
The point is that that kind of thing simply cannot be built if you start with large formal specifications and fixed-price contracts and change-control procedures and so on. So if your enterprise wants the sort of outcomes we’re seeing on the Web (and a lot more should), you’re going to have to adopt some of the cultures and technologies that got them built.
THE PROJECT FAILURES ANALYSIS
Tim correctly characterizes traditional IT projects as big, expensive, cumbersome, and failure-prone. However, he almost (but perhaps not quite) strays into the realm of utopianism by suggesting that organizations can successfully build large systems as they might construct a relatively simple consumer service such as Twitter.
A year or two ago someone “discovered” that a very common, important record in one of our internal systems had 44 (or something like that) required fields, and decided that this was ridiculous. A team was formed to streamline the processes associated with this record by reducing the number of fields. A detailed process audit was conducted. It turned out that every single one of them played a critical role in a downstream process. All the fields remained, and some old timers smiled knowingly.
We should also remember that teams upgrading old systems or building replacement applications are saddled with costs related to old, messy infrastructure. The folks building Twitter, to use an example, didn’t face that hurdle.
Part of the problem is the legacy burden. Not only are [projects] burdened by previous failure, but software for ongoing concerns is also burdened by previous success. That’s why an organization that starts over from scratch can take advantage of a steep learning curve — the “innovator’s dilemma.” They don’t have that installed base and previous success holding them back. That’s why so much of IT budgets are locked up in “fixed costs.”
This all leads to a clear conclusion: simply cutting features from software is not the solution to IT failure because organizations often retain the need for that functionality.
In theory, perhaps we could solve the problem by a wholesale redesigning of IT into federations of small systems that somehow all work together like magic. But that just ain’t gonna happen anytime soon.
Well-known enterprise architect (and fellow Enterprise Irregular), Dion Hinchcliffe, suggests that service-oriented architecture (SOA) offers a step in the right direction:
Allowing interoperability and data sharing between IT systems ultimately lets them function seamlessly as a unified whole. It’s not a panacea, but it can reduce the need for endless, large-scale integration projects that always seem to go over budget.
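To make the SOA idea concrete, here is a minimal sketch of what “interoperability through contracts rather than shared databases” can look like. Everything here is illustrative and assumed, not from Dion’s post: the service names (`InventoryService`, `OrderService`) and the JSON payload are hypothetical, and real SOA would put a network and formal service definitions between the two.

```python
import json

# Hypothetical sketch: two "services" cooperate through a small JSON
# contract instead of reaching into each other's databases. The names
# and payload shape are invented for illustration only.

class InventoryService:
    """Owns inventory data; exposes it only through a contract."""

    def __init__(self):
        self._stock = {"widget": 5}  # private state; no other system reads this directly

    def get_stock(self, sku: str) -> str:
        # The JSON payload *is* the contract: any consumer that speaks
        # it can interoperate without knowing this service's internals.
        return json.dumps({"sku": sku, "available": self._stock.get(sku, 0)})


class OrderService:
    """Consumes the inventory contract; no shared database needed."""

    def __init__(self, inventory: InventoryService):
        self._inventory = inventory

    def can_fulfill(self, sku: str, qty: int) -> bool:
        payload = json.loads(self._inventory.get_stock(sku))
        return payload["available"] >= qty


orders = OrderService(InventoryService())
print(orders.can_fulfill("widget", 3))   # enough stock
print(orders.can_fulfill("widget", 10))  # not enough stock
```

The point of the sketch is the seam, not the code: because `OrderService` depends only on the payload, either side can be replaced or rewritten without a large-scale integration project, which is the cost reduction Dion is pointing at.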
However, for most organizations, the barrier to SOA adoption lies in the human realm; it’s not about technology.
Technology is not the problem. These issues are not technical matters; they arise from preconceived notions, deeply embedded work processes, and heavily invested expectation mismatches between IT and the lines of business it serves.
Most of what’s being said doesn’t require Agile, Scrum, Object Oriented Programming, Interpreted Languages, or any of the rest of the buzzwords. Most of it can be understood by anyone who has read The Mythical Man-Month. Big IT starts everything with the Second System Effect running rampant. It’s a BAD idea, but it is institutionalized.
So, we combine a hopelessly flawed boil-the-ocean strategy with talent that isn’t really up to a proper methodology and a compensation plan guaranteed to lead us further astray, and we should not be surprised that disasters await.
Individual organizations can make strides to improve their own situation by investing the time and resources needed to improve the culture and processes around IT. As an industry, however, full-scale change is many years away.
My take. Without using the term, Tim presents an Enterprise 2.0 perspective that links IT success to simplicity and improved collaboration. In doing so, he raises seemingly intractable problems that run deep into an organization’s technology, culture, and economics.
At heart, Tim asks a fundamental Enterprise 2.0 question: how does one create small systems that address the same requirements as large ones? It’s a basic matter that henceforth I will call “Tim Bray’s Enterprise 2.0 conundrum.”