Dr. Hasso Plattner has written a long blog post about Business Suite on HANA, laying out 12 reasons for customers to make the move. His passion for the product screams through the post.
I wish he would make it a Baker’s dozen and add a 13th point: that SAP plans to change its business model around HANA.
I am writing a book on the SAP economy, so dollars and euros are top of mind. In the last couple of years, as the Big Data phenomenon has taken off, I have been impressed with all the fresh economic models I have seen.
So, when GE announced its Industrial Data Lakes recently, I spoke to Bill Ruh there. GE is looking at exabytes of data from its aircraft engines, turbines, locomotives, medical scanners and other industrial equipment. GE’s big driver is: “How do we reduce by orders of magnitude the cost of managing a terabyte of data?” That led to some radical rethinking of data structure, ingestion and management, which I catalog in the book.
In the last year I have talked to entrepreneurs like Nader Mikhail at Elementum, Omar Tawakol at Bluekai and Adeyemi Ajao at Identified. They have no interest in selling software or hardware. They want to sell you supply chain, marketing and HR data, or, even better, insight, as a service.
I have seen communities of data scientists emerge at Kaggle and at Topcoder (part of Appirio). It is a new talent sourcing model.
In contrast, SAP still wants people to buy software, then buy hardware and services from its partners. Beyond 4TB of memory, commodity components don’t scale, so that is not cheap. As someone pointed out, “With data volumes exploding, will we need to look at IBM’s custom-made 128TB Blue Gene/Q supercomputer? And have to use IBM’s traditional talent model?”
With the HANA cloud, you could argue SAP is moving to a new business model. Not quite. Sit down with two displays and open up SAP’s and Google’s data center web pages. SAP emphasizes privacy, security and data protection (all important, even more so to German and other EU customers). Google focuses on data center innovations – efficient servers, container design and carbon neutrality, among other initiatives. SAP is just beginning an efficiency journey that Google has been on for over a decade.
The other challenge with the HANA cloud is customer connectivity. Amazon Web Services with its Direct Connect and Microsoft Azure with AT&T’s NetBond have tried to combat the MPLS and WAN costs carriers charge individual customers. SAP customers do not have such an option.
HANA as an OLTP platform – here SAP is actually regressing its business model. To give customers database portability, R/3 kept away from stored procedures and other database-specific features. To boost HANA’s transaction-processing performance, SAP will have to move increasing amounts of code out of the application layer and into the database.
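To make that trade-off concrete, here is a minimal sketch; the order_items table, the ORDER_TOTAL procedure and the sample data are hypothetical illustrations, not SAP code. The first function keeps the logic portable by running plain SQL and doing the arithmetic in the application layer, the way R/3 did; the second pushes the same calculation down into a database-side stored procedure, whose CALL syntax and body are specific to the database it runs on.

```python
# Illustrative sketch only -- hypothetical schema and procedure names, not SAP code.
import sqlite3


def order_total_app_layer(conn, customer_id):
    # Portable, R/3-style approach: fetch rows with plain SQL that any
    # database understands, then do the arithmetic in the application layer.
    rows = conn.execute(
        "SELECT quantity, unit_price FROM order_items WHERE customer_id = ?",
        (customer_id,),
    ).fetchall()
    return sum(qty * price for qty, price in rows)


def order_total_pushed_down(conn, customer_id):
    # Push-down approach: delegate the whole calculation to a stored
    # procedure so it runs next to the data. Both the CALL syntax and the
    # procedure body are database-specific -- the portability trade-off
    # described above. (This will not run on SQLite; it only shows the pattern.)
    return conn.execute("CALL ORDER_TOTAL(?)", (customer_id,)).fetchone()[0]


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE order_items (customer_id INT, quantity INT, unit_price REAL)"
    )
    conn.executemany(
        "INSERT INTO order_items VALUES (?, ?, ?)",
        [(1, 2, 10.0), (1, 1, 5.5), (2, 4, 3.0)],
    )
    print(order_total_app_layer(conn, 1))  # 25.5
```

The more of the application’s logic that moves into procedures like the hypothetical ORDER_TOTAL, the faster transactions can get, and the harder it becomes to swap out the database underneath.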
Over the course of my interviews I have heard many customers say HANA economics are not compelling – even early adopters with good use cases. The recent ASUG survey emphasized the point in spades.
Dr. Plattner, don’t ignore all that feedback. Add a thirteenth point to your value proposition: a compelling business model that covers your software and your partners’ hardware and services costs.
(Cross-posted @ DealArchitect)
Dr. Plattner responded to me on his blog. His comments:
“hi vinnie,
new business models – please look at sap’s business network.
the business suite as on demand service is available in the hana enterprise cloud. some customers prefer to buy the software. once you know you will use it for 4+ years it could be financially more attractive.
the data center comments are interesting and i will let helen arnold , sap’s cio, talk to you.
that r/3 didn’t use stored procedures is true. the sERP version of the suite on hana not only dropped the transactionally maintained aggregates and all redundant materialized views, but heavily uses stored procedures and other libraries of the hana platform. the application code is being simplified dramatically. the transactional performance increases accordingly.
the hardware issue has several aspects. the certification process of hana made people think that hana requires a special hw. that is not true. sap only wanted to make sure that the configuration recommendations were followed. i believe, vendors can now self-test their configurations. one reason was: hana needs dram. but how much? let’s take sap’s erp system: the hot data will require less than 500 gigabytes (i predict less than 300), the cold data will then be around 500-700 gigabytes. the data requirements are in fact really small. for the hot data i recommend that all data is always in memory. for ha a hot standby, which will be used for read only applications, i recommend a hana maintained identical replica. any of the two terabyte single image systems is good enough. for the cold data you can use similar hw or a cheaper scale out approach with smaller blades. not all cold data will stay loaded in ram. a purging algorithm will remove data without access requests. sap is in the top 5% (a guess) of sap users. the largest smp system available for hana has currently 32 cpus with 480 cores and 24 terabytes. i don’t see any hw capacity problem. the pricing varies from vendor to vendor, but the fact that there are several vendors will take care of it. so where is the problem? the hot/cold split hasn’t yet shipped and the purge algorithm is only in the later releases of hana.
it will be soon available like the hana managed replication. i urge every suite on hana customer on premise to contact sap for sizing and configuration assistance. the sERP simplification will finish this year, sFIN is already shipping and doing great. this is what i meant with trusting sap with regards to keeping delivery promises.
the data explosion is taking place with mostly read only data (text, video, sensor output, etc) which can easily be organized in a scale out fashion on cheap hw. actually, hana is happy to calculate the indexing and keep data only as indices in ram for processing.
thanks for the feedback. as you know i take blogging seriously. everybody around sap tries to scale up and make the value proposition attractive, we are moving much faster than in the r/3 days – in a much larger market.”
I responded to Dr. Plattner’s comment above on his blog:
Dr. Plattner, I can debate many of your points, but ideally that is done in person.
I do want to pick on one of your points. You say “the (hardware) pricing varies from vendor to vendor, but the fact that there are several vendors will take care of it. so where is the problem?”
Like you, I believe in competitive marketplaces and price equilibrium. But in the SAP economy, as my book research shows over and over, even with plenty of choice in R/3/ECC systems integration, hosting, offshore application management, MPLS circuits and other elements, prices have stayed shockingly high, and project failure rates and ticket volumes unacceptably so.
As part of my book research, an SAP customer sent me this:
“HANA is the new “UNIX”. Big iron, expensive and niche. H/W vendors have to make their margin somewhere. They certainly can’t run a business selling 2 socket servers. Some time ago SAP did have suggested hardware pricing on their website, but it caused a revolt with their “partners” and they took it down. Attached is a copy of the cached version.”
My suggestion: whether because of SAP’s lack of proper controls or your customers’ inability to manage your partners, SAP should not continue to expect the free market to have the impact here that it does in other sectors. With HANA, SAP should manage its ecosystem more aggressively.