It gives some measure of how seriously we take AI today that I went to two conferences last week and sat through two panel sessions on the subject. At CRM Evolution, I was part of the discussion in a breakfast session Paul Greenberg organizes each year; then I flew to Las Vegas for the Oracle CX show. There, executives involved in the adaptive intelligent applications product line presented a session for analysts and press members that tried to define the basics.
I have to say that neither session was especially illuminating, which is not to cast aspersions on any of the participants but more to provide a gauge of how early we still are in the market cycle. If it seems hard to define AI today, it's equally difficult to wrap our heads around its potential.
In Washington, at Evolution, people talked about the trust factor and how easy or difficult it will be to accept that an algorithm might know more about a situation than the user. For instance, a GPS system will “know” about road conditions that humans can’t see.
In Las Vegas, the discussion started with the now-typical dystopian fear that algorithms or bots might be about to steal our jobs. For some reason this seems to engender visceral fear in the population in a way that packing up factories and shipping them to low-wage countries might not.
It struck me, after an accumulation of research, that while we might talk about lost jobs or trust issues, the reasons for unease about AI—or whatever we decide to call it—might be more fundamental. It might be that AI signals the significant diminution of a style of thinking that is uniquely human, something that has evolved with us, to be replaced by a style of thought that has been with us only since the Renaissance and the development of the Scientific Method.
First, let's agree on terms. The broadly knowledgeable, silicon-and-metal-based intelligent life form that has lurked in science fiction for the better part of a century is still fiction and will be for some time. Those who are concerned about such an entity replacing us will have to wait many more years before something like HAL is available, and then, like the first steam engines, we'll discover it's too big to move around, so it will be limited.
The AI that we increasingly see in CRM and other business apps is rather one-dimensional, able to tell you the traffic but nothing else. It's analogous to the robots on car assembly lines, each programmed to make a weld or grind a surface, but that's it. Making an assembly line is a matter of setting up many robots in a row, each doing something different, not empowering some super machine to do it all.
So what's everyone so concerned about? Simply put, it's the difference between deductive and inductive reasoning, and now we enter the weeds, just a little. Deductive reasoning is something we humans do well, and it involves beginning with a premise and deriving conclusions. Surprisingly, math consists of a lot of deductive reasoning. Certain assumptions or postulates start off our reasoning, from which we make deductions. More generally, you can deduce from basic ideas too, like the famous syllogism:
All men are mortal
Socrates is a man
Therefore Socrates is mortal
Note, however, that getting a true and useful conclusion requires a true and useful assumption, postulate, or statement. If we'd started with "All men have feathers," we would have gotten nowhere fast even though our logic would have been impeccable. Politics is like that today, and without trying to hurt anyone's feelings, there are a lot of examples of situations where we move backward from conclusions to discover the premises it would take to get there. But that's not the purpose of this piece.
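The syllogism above is mechanical enough to sketch in code. Here is a minimal, hypothetical illustration of deduction: a general rule ("all men are mortal") applied to a particular fact ("Socrates is a man") to derive a conclusion. The representation and names are mine, purely for illustration.

```python
# A toy deductive reasoner: facts are (predicate, subject) pairs,
# rules say "anything with this predicate also has that one."
facts = {("man", "Socrates")}
rules = {("man", "mortal")}  # premise: all men are mortal

def deduce(facts, rules):
    """Repeatedly apply rules to facts until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(deduce(facts, rules))
# Includes ("mortal", "Socrates") alongside the original fact.
```

Note that the machinery is only as good as the premise: swap in the rule ("man", "feathered") and the code will happily conclude Socrates has feathers, with impeccable logic.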
On the other hand inductive reasoning is the logic of science and the kind of thinking we all do sometimes, especially when there’s time and probably paper and pencil. Inductive reasoning involves gathering data and applying statistics to discern patterns. It’s the heart of the scientific method and the reason we live in the world we do instead of one where we’re all subsistence hunters and farmers.
Inductive reasoning involves the language of hypothesis, proof, and theory, but not belief. We believe what the data tell us, not what we assume, and when the data reveal something wrong about our beliefs, we change those beliefs. We don't work backward to discover our premises. Inductive reasoning is what drives AI, and I think it is the heart of our heartburn.
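To make the contrast concrete, here is a minimal sketch of induction: rather than starting from a premise, we start from observations and let statistics discern the pattern. The data and the least-squares fit below are invented for illustration only.

```python
# Hypothetical observations: no premise tells us the relationship;
# we infer it from the data themselves.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# Ordinary least-squares fit of y = slope * x + intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # slope ≈ 1.99, intercept ≈ 0.05
```

The conclusion ("y roughly doubles with x") is held only as strongly as the data support it; new observations that break the pattern would change the fit, not be argued away.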
In both sessions I attended last week, someone in the audience inevitably brought up the trust issue, as in, "I can't see how I can trust an algorithm, and I feel I simply must have the option to override it with my gut instinct." If I unpack this, I get the notion that we're comfortable with our deductions and the premises they spring from, and it's rather frightening to have to rely on not much more than statistics. Yet the times in human history when we've made progress are precisely those times when we pushed back the boundaries of premise and belief and substituted cold, hard facts derived from data.
What's different today is that we don't have a single man like Galileo proposing that the earth revolves around the sun because that's what his data told him. We have millions of them, and their proposals are both profound and banal. In the process, we are rapidly pushing deduction back to a smaller footprint than has ever been the case for humanity, and that can feel frightening.
(Cross-posted @ Beagle Research Group)