This is a series of interviews with notable luminaries about the state of AI, the impact AI will have on businesses, and the ethical implications of intelligent computers. Discussing ethics is always a tough subject. The IEEE has formed a major initiative to examine the ethical dimensions of AI.
The Global Initiative for Ethical Considerations in the Design of Autonomous Systems is an Industry Connections Program of the IEEE Standards Association. It was formed with two primary deliverables: create a document featuring key issues in AI/AS grounded in ethically-aligned design; and make Standards Project recommendations to the IEEE-SA based on these issues.
Dr. David Bray, Harvard Executive-in-Residence and Eisenhower Fellow, and Michael Krigsman, industry analyst, author, and host of CXOTalk, are co-chairs of the EPIC/AI committee, one of seventeen committees in The Initiative, which together comprise over one hundred global experts.
The EPIC/AI committee includes representatives from government, independent research organizations, academia, and the commercial sector, and we are examining the public policy aspects of AI and autonomous systems to guide The Initiative’s work. It is essential that policymakers consider all these perspectives.
Ultimately, the document our committee and The Initiative are creating will be released under a Creative Commons license for any organization to adopt, to help prioritize ethically-aligned design in AI/AS. It will also be submitted directly to the IEEE for formal consideration on policy decisions through its official channels handling these issues.
JT: What is broken in the business process that sets the ground for AI?
MK: The internet has, paradoxically, flooded us with so much information that current processes can no longer move with sufficient agility and resiliency to keep up with this “digital tsunami,” as Dr. David Bray likes to call our era. AI can help organizations make better decisions more quickly. The impact is profound and will affect many industries and processes, which is why this topic is so important.
JT: What are the biggest ethical concerns for AI?
MK and DB: The ethical aspects of AI center on development, use, and application. AI offers its maker advanced capabilities that can be applied to fields as diverse as robotics, medicine, autonomous vehicles, weapons, and much more.
As with any technology, the developer’s goals and objectives dictate how AI technology is used and in what fields it is applied. Given the power of AI to mimic human decisions and intelligence, the question of application is crucial to consider.
For example, imagine AI technologies in the hands of a government planning to identify and target specific populations or groups for attack or discrimination. Most people would say this is an unethical use of AI.
What about companies using AI to target consumers with levels of personalization unattainable today? At what point do we cross the line between appropriate and inappropriate use?
The answers are often not easy or clear and depend on personal values, organizational policies, and a host of other factors on which reasonable people can disagree.
JT: Can AI be perceived as a threat to human dignity? If so, what is the logical fallacy that would mitigate this concern?
MK: AI is a set of technologies that we can use for either good or evil (although even stating the issue in this manner creates an over-simplified caricature). We cannot today anticipate the full scope of how AI will affect our world in the future; this quality of unknown implications is similar to technologies such as nuclear fusion in its earliest days or even the Internet when it was still just a set of protocols.
DB: Humans may choose to use nuclear technologies to create safe nuclear power plants = ethical. They may choose to make nuclear warheads as a deterrent = questionably ethical depending on your philosophy. Nuclear warheads as a way to kill their neighbors to seize more territory = unethical.
MK: Our ethical challenge is developing policies that enable AI and autonomous systems while ensuring they serve the benefit of humankind rather than accelerating that which is “evil.” The problem is significant because definitions of “good” and “evil” are relative and depend on one’s perspective.
DB: At the same time, building airplanes to save lives in crisis situations = ethical. Airplanes for travel = neutral, or borderline ethical if your philosophy supports growth or freedom of action. Airplanes flown into a nuclear power plant as domestic terrorism = unethical.
DB: It’s important to separate specific tools from the broad technology in this discussion. Also, getting people to articulate whether a war is ever ethical is a really hard thing to do (it is doable, just hard). I personally think there can be just wars — they’re just rare and still fought with a heavy, heavy heart and mind.
DB: You’re absolutely right, though. If an AI decides that the best way to save lives is at the expense of your own life (the utilitarian/Mill lens), then where is any human agency in that choice?
Or that the means matter more than the ends (the deontological/Kant lens)?
I am a Rawlsian personally, specifically his theory of the Veil of Ignorance and the recognition that from the moment we are born our perceptions of what is just are colored by who we are: our gender, race, age, health, social status, etc. all bias what we think “others should do unto others” and what the “maximum benefit to society” is. That’s what makes it so hard to have a philosophical discussion with people from different backgrounds: in their own minds, they’re each right.
If we could somehow put on a “Veil of Ignorance” and not know who we would be — prior to birth — and not know if we would be healthy or unhealthy, smart or not smart, a certain gender, certain race, certain social demographic, live somewhere, etc. — what then would we want to agree to as social norms?
This gets to your “unknowable part.”
Perhaps a benevolent machine learning could illuminate it. (Rawls argues we would pick norms under which whatever group is worst off — and there will always be a worst-off group — is the best off of all possible scenarios, because none of us knows if we might be born as them. I am more interested in the Veil of Ignorance as a philosophy than in his conclusion.)
Of course, if we did somehow construct a benevolent machine that could illuminate how we all could live better — jump back to Plato’s The Republic, and he would claim (through the voice of Socrates) that fairly quickly we humans would kill any benevolent, uncompromising, single ruler, because we want compromises and we don’t want the truth that perhaps we have too much, or that we need to contribute a larger share. Such is human nature: politics is the art of compromise because humans want compromises rather than the truth.
So would we try to kill the machine? Cling to the last vestiges (however frayed) of political caucuses?
Fast-forward to the philosopher Rousseau who, when asked by Poland how best to organize its country, recommended it “cut your country in half,” and again, and again, arguing that all politics are local, and that efforts to conduct them broadly always dilute what individuals really need in ever more complex searches for compromise.
Maybe our future — much like what has happened with the ability to personalize what we each see and receive digitally via the Internet through data analytics — can only be solved when we engage each other as countries of one. We would have AIs that would share our preferences for what we “permit to be done unto us,” and the people we meet would do the same. If someone tried to do something to us we did not permit, non-lethal protection and/or removal of them or us from the situation would occur automatically via the AI. The same would happen if we tried to do something to someone that they did not permit. Expressing everything we permit and don’t permit explicitly would be impossible, so it would require a machine that learns our preferences, one that understands context and changing sentiments.
Does having every human being super-empowered as a “nation of one” via AI sound scary or anarchic? It’s worth recognizing that the true state of international affairs right now (to wit, the North Korea missile tests, or the Russian “war game exercises” in Crimea) is anarchy. The only difference is that almost none of us has much sway over what nations do on our behalf, nor do we possess the means to defend ourselves should national actors attempt something we do not permit.
Call these choice architectures: individuals use AIs to express, in real time, what they permit to be done to them both digitally and physically. In return there are trade-offs — if you want to fly on this plane, participate in this process, etc., then you have to make choices.
AI can help us address the ever-increasing connectedness and complexity of our world (in the truest sense of the word “complex”: systems that possess multiple feedback loops).
And in some respects this returns us to the Golden Rule, adapted: “do unto others as they would permit you to do unto them.”
(Cross-posted @ Medium | John Taschek)