A seriously content-rich debate has ensued over the past few days on the validity of big data and the algorithms that govern it. The central hypothesis under debate is MIT Professor Andrew McAfee’s assertion that algorithms are smarter than human judgment.
Three sources you want to peruse if you want to play:
- Professor McAfee’s original Harvard Business Review post and, perhaps more importantly, the comments
- A vigorous debate on Facebook by some super smart folks
- An incredibly well-researched post by Diginomica’s Dennis Howlett
Apologies for not quoting snippets, but I really do believe that excerpts won’t do these posts justice. So if this topic interests you, I highly recommend a full read.
I’ll use this forum to call out why this topic appealed to me:
We as humans might not be qualified to objectively judge whether big data needs interpreters. If you read the comments on HBR, you can sense an extremely visceral reaction to even the slightest suggestion that algorithms might be more reliable than us humans, who have always played middleman. We’re behaving as if we’re under attack – and the threat might be real this time.
I look at some of the greatest breakthroughs we’ve had recently (relatively speaking) with data and network-driven platforms, and it’s clear that had we not stepped back and let algorithms do their thing, many of these might never have seen the light of day.
Take Uber, for example. Had it been brought about by those vested in traditional transportation, the focus would have been on how data and its manipulation elevate the role of humans in analyzing and interpreting it. But Uber is predicated on us humans stepping out of the way. The algorithm connects suppliers with buyers directly. It scales this connectivity by removing the switchboard operator from the equation and exponentially increasing the metadata around the transaction – about the location, the customer, the vehicle, and the driver. It removes the latency that has always existed, because there is no longer a centralized dispatch – the algorithm finds the best supplier, who is most often 3-6 minutes away from me. And it presents a rich dataset about the driver, enabling the consumer to decide directly whether or not to transact.
I’m willing to bet that had the availability of data and today’s sophistication in algorithmic interpretation been put into the hands of those who crunch and route this data, its use would have been limited to making us number crunchers more effective.
Uber is a great example of how an entire industry was re-imagined by objectively weighing the reliability of the data and the sophistication of the algorithm – and bypassing the interpreter.
The problem is that most of the conversation about big data today is largely technical. I wrote a while back that “Big Data needs its beef burrito,” suggesting that it’s about the insights that humanize the raison d’être of big data. But what Professor McAfee’s post put into sharp focus for me is that the problem might be even more elementary than what I suggested – it’s human objectivity that’s the undercurrent getting in the way.
Another incredible example of where data and the algorithm are taking the driver’s seat is telemarketing. Read this post by Alexis Madrigal of The Atlantic. Again, take an objective lens and give data a chance. Or sample some of the incredible examples of data-driven invention Om Malik cites here.
Look, the easy, cop-out answer is “it’s both – people and technology.” Well, gosh darn it, of course it is. We know that. But we as humans have a huge conflict of interest here. This is about having the discipline to step back, be ruthlessly honest about that balance of people and data, and, if warranted, let the data and insight lead this time. Big data is most certainly all that. But do consider the source when sizing up its role and promise.
(Cross-posted @ Pretzel Logic)