SAP has spent the last few years discussing, theorizing, and otherwise showing off numerous versions of the same idea: using in-memory databases to replace the “not getting any younger” relational database model. Last week at a conference hosted by SAP to showcase its research work with the academic community, supervisory board chairman and permanent SAP visionary Hasso Plattner took the concept to its ultimate conclusion: an open attack on the sacred relational database that powers three of SAP’s major rivals/partners and forms the technology core of every SAP installation on the planet.
Commercialization of the in-memory concept has already begun in the analytics/BI space with SAP’s Explorer, but the real holy grail is moving the in-memory concept into the day-to-day transactional, ERP world of SAP’s Business Suite. That concept moved a step closer to reality at the SAP Academic Conference, where Plattner discussed research efforts aimed at showing that in-memory databases, running on low-cost, multi-core systems, can work as well in a transactional environment as they do in an analytics environment, or better.
The results are exactly what one would expect as in-memory, column-based databases have moved into the limelight in recent years. In a nutshell, research at Plattner’s own academic research center in Germany showed that the remarkable advances in throughput – and an equally remarkable reduction in TCO – that have been obtained in the analytics space with in-memory databases can be replicated when the technology is used for transactional systems.
And once this concept – in-memory, column-based databases on low-cost hardware – is applied to the enterprise, all sorts of things change. For starters, much of what we think of in terms of the typical data center design – massive arrays of disk storage backing up huge databases running on top-of-the-line hardware – can largely go away. It turns out much of the traditional data center was created to force-fit the relational model, formalized by Codd and championed by Date, onto an architecture that was ill-suited for optimizing a relational system. Because processors and storage were relatively slow – acutely so 20+ years ago, much less so today – relational implementations came to depend heavily, even excessively, on table structures and indexing to compensate for slow hardware and storage systems.
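To make the layout difference concrete, here is a minimal sketch in Python with hypothetical data (the table, field names, and values are illustrative, not any vendor’s actual format): a row store keeps each record contiguous, so scanning a single attribute drags every other field along with it, while a column store keeps each attribute in its own dense array.

```python
# Sketch: row-oriented vs. column-oriented layout for the same tiny table.
# Hypothetical three-column "orders" table.

# Row store: one tuple per record, attributes interleaved.
rows = [
    (1, "EMEA", 250.0),
    (2, "APAC", 120.0),
    (3, "EMEA", 340.0),
]

# Column store: one contiguous array per attribute.
columns = {
    "order_id": [1, 2, 3],
    "region":   ["EMEA", "APAC", "EMEA"],
    "amount":   [250.0, 120.0, 340.0],
}

# Summing one attribute in the row store walks every field of every row...
total_row_store = sum(r[2] for r in rows)

# ...while the column store scans a single dense array in memory -- the
# access pattern that makes analytic scans fast without extra indexes.
total_col_store = sum(columns["amount"])

assert total_row_store == total_col_store == 710.0
```

The toy numbers are the same either way, of course; the point is the access pattern, which is where the in-memory, column-based approach claims its throughput advantage.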
But in an era when RAM can be measured in tens of gigabytes and arrays of multi-core processors can provide levels of throughput that were inconceivable at the birth of the commercial relational database back in the 1980s, the force-fitting of the relational model no longer makes sense. This doesn’t mean that thinking about – and accessing – corporate data in a relational format is obsolete: SAP’s T-Rex in-memory database, as well as most other column-based systems, can still use good old SQL. But all that overcompensation in the form of huge storage systems, armies of DBAs, and mind-numbing overhead costs can largely go away.
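The “still speaks SQL” point is worth spelling out: the relational query interface is independent of how the engine lays out data underneath. As a hedged sketch (hypothetical table and query, hand-evaluated in Python rather than by any real engine), a GROUP BY aggregation over a columnar layout only ever touches the two columns it needs:

```python
from collections import defaultdict

# Column-store layout (hypothetical "sales" data): one array per attribute.
columns = {
    "region": ["EMEA", "APAC", "EMEA", "APAC"],
    "amount": [250.0, 120.0, 340.0, 80.0],
}

# Evaluating the SQL query
#   SELECT region, SUM(amount) FROM sales GROUP BY region;
# against the columnar layout: scan the two needed columns in lockstep,
# never reading any other attribute of the table.
totals = defaultdict(float)
for region, amount in zip(columns["region"], columns["amount"]):
    totals[region] += amount

print(dict(totals))  # {'EMEA': 590.0, 'APAC': 200.0}
```

The SQL text is the same one a row-based engine would accept; only the physical evaluation strategy differs, which is why a move to column-based storage need not disturb the applications sitting on top.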
It’s rather ironic that the conference where Plattner talked about blending transactional and analytical systems in the same in-memory database took place at the Computer History Museum in Mountain View, CA. If SAP has its way, that data stalwart, the relational database, will eventually be relegated to the status of an historical monument, one that in retrospect will be missed as much as many of the other relics, like the punch card and the CRT, whose time has come and gone.
All that remains is to see what Oracle, Microsoft, and IBM will do when the hegemony of their respective relational database offerings is challenged by an in-memory, column-based alternative. My guess: kick and scream, and then eventually get on the bandwagon. This kind of technical advance will be too hard to fight against.