Notes from Reality is a series of posts on AI and its impact on humans, what can be done today, and what may happen tomorrow. No one has all the answers, but we are trying to arrive at the right questions. In this post, I interviewed Dr. David Bray, Harvard Executive In-Residence and Eisenhower Fellow. This effort began with the post “Let the New Machine Age Begin.” https://www.enterpriseirregulars.com/109916/let-new-machine-age-begin/. Writing that post led to another interview with Dr. Bray and Michael Krigsman, a noted analyst, which is posted here. There will be more posts, so please check back for more information from a variety of enterprise technology influencers. — JT
1. David, you say that technology is amoral. What do you mean by this?
DB: Imagine what the next 5 years will bring: The term “mobile computing” will eventually become a dated term, replaced by “ubiquitous computing” as the internet will be everywhere. These changes include the transportation we take on land, in the air, and at sea; the clothes and devices we wear; and sensors at work, at home, in our environment, and (if we choose) in us for medical purposes as well.
DB: Also, right behind and coupled with the Internet of Everything: 3D mass fabricators enabling individuals to affordably “print” and modify at the molecular level tangible substances based on digital designs. Maker Faires around the world already showcase the early stages of what 3D fabricators can do in the hands of artists, engineers, and hobbyists.
DB: Machine learning and greater use of artificial intelligence at work, at school, and at home will also intersect with all of these trends. As Co-Chair of the IEEE Committee focused on Artificial Intelligence and Innovative Policies, I firmly believe eras of exponential change like the one we’re in offer great opportunities for society — as well as great challenges.
DB: Technology itself is amoral; how we humans choose to employ technology determines outcomes. Nowadays individuals are gaining capabilities that previously required significant resources available only to large corporations or nation-states. What does that mean for our global future? This super-empowerment is both exciting and concerning, since human nature has not fundamentally changed over the last 5,000+ years.
2. Has technology always been amoral, and why does precedent dictate the future?
DB: Yes, since the start of human tool use, it has always been how we choose to use tools that determines outcomes — not the tools themselves. Humans are tool makers and as a species use tools to extend our abilities, both physically and mentally. Fire can cook meals and keep people warm at night. It can also burn, hurt, and kill. How fire is used determines the outcome. Similarly, wheels can move goods across great distances and foster commerce. Wheels can also move machines of destruction, destroy existing homes, and force mass migrations. Books can extend our creative thought and collective experiences for the better. Books can also be used to spread misinformation, fear, and hatred.
DB: When considering the future, all too often we focus on what the technology *can do* and miss chances to think now about what future changes *we can choose* in how we organize, how we work together, and how we live together as a society, changes that may also need to occur to best ensure benefits to the greatest number of people. Social policies usually try to play catch-up with technologies, yet in a period of exponential change — is “catching up” sufficient, or do we need a larger transformation to “leapfrog” ahead?
DB: All of these questions that determine whether technology leads to a good vs. bad outcome for individuals and society at-large are human questions about how we use the technologies, not the technologies themselves.
DB: No one person has all the answers; however, collectively we can think, reflect, and help shape the future before it impacts us all. It’s been shown that a diverse group of people and industries leads to better decision outcomes. I think we need a similar diversity in the current and future choices ahead for us as societies. This is why for the IEEE Committee focused on Artificial Intelligence and Innovative Policies we’re encouraging a diversity of perspectives from the public sector, private sector, non-profits, academia, governments, and other practitioners with different lenses on the issues.
3. There are concerns that AI is a competitor to the human race. How do you think about it from an ethics standpoint?
DB: In my professional experience, we’re not there yet with AI where it’s a competitor in terms of what you might see in a science fiction movie like The Matrix. Human use of AI might displace jobs, and the jury is still out on whether more or fewer jobs will be created as a result, so in that sense AI might be competition. Job displacement raises questions of ethics in terms of what organizations and societies owe individuals whose jobs are displaced because AI can do them more quickly or more cheaply: should we help them re-train for another job, and if so, what’s the best way?
DB: Ethics can offer different lenses on this issue. Kant would ask: do we have a universal duty to individuals whose jobs are displaced because of AI? Mill would ask: what is the best utility benefit for society from using AI and from possibly assisting those individuals whose jobs are displaced because of AI? Rawls would ask: if we didn’t know in advance whether or not our job would be displaced by AI, what would we collectively say is just and fair? Rawls probably would also ask how we can minimize the negative consequences to those whose jobs are displaced.
DB: That said, AI might create more jobs or allow humans to focus on more creative work, either as a vocation or avocation, in the future. We might eventually face a future where collectively we all have to work less as a result of automation, raising questions of what happens then. Societies have made such shifts before, when we moved from the agrarian era to the industrial era. Historically, maintaining a farm meant work essentially around the clock, every day of the week. With the industrial era, the work was not around the clock; however, it was repetitive in nature. Later, after World War II, we began to shift to more “knowledge work” and less manufacturing work. People began to have more free time. The impact of AI on how we work might be a continuation of that trend.
DB: Some people have raised questions about whether an AI might feel “indentured” to a human and whether that’s ethical. Right now machine learning and AI don’t operate in the sense that the machine is either self-aware or has feelings. Some of the more modern philosophers and biologists argue even human self-awareness is an illusion. Either way, for now AI is still a tool that extends our cognitive abilities in ways previously not possible.
DB: That said, there are still plenty of ethical questions about how we humans choose to use AI in our societies. Will it be more ethical to have an AI make decisions about criminal cases, because it will evaluate solely the information and will not be biased regarding gender, race, age, etc.? Will it be more ethical to have an AI make decisions about bank loans or hiring for the same reason that the decisions will not be biased regarding the physical appearance of a person?
4. It seems to me that technology, particularly artificial intelligence, will have an interesting issue when it comes to applied ethics. Do you foresee similar issues?
DB: After more than 3,000 years of philosophical thought, human philosophers still have not been able to give a consistent universal philosophy of right vs. wrong. They’ve given us lenses and ways of thinking about what would be the right action for a situation.
DB: Should AI be programmed and/or taught via data what is “right conduct”? Yes. However, who’s doing the programming or teaching? For some logic-based systems, it is us humans, so the choices of what constitutes right vs. wrong are explicit in the code written by humans. We codify our moral philosophies into the machine.
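To make that point concrete, here is a minimal sketch (my own invented illustration, not from the interview) of how a purely logic-based system embeds a human ethical choice directly in its rules. The drone scenario, function name, and noise threshold are all hypothetical.

```python
# In a logic-based system, the human's ethical choice is written directly
# into the code. A toy example for a hypothetical delivery drone:

MAX_NOISE_DB = 60  # a limit a human decided on for residential overflight


def may_fly_over(zone_type, noise_db):
    # The moral judgment "don't disturb residential areas" lives in this
    # rule, chosen by the programmer, not discovered by the machine.
    if zone_type == "residential" and noise_db > MAX_NOISE_DB:
        return False
    return True


print(may_fly_over("residential", 72))  # False
print(may_fly_over("commercial", 72))   # True
```

The machine never weighs the ethics; it only executes the judgment a person already made.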
DB: Yet most machine learning nowadays isn’t a lengthy series of explicit logic expressed as code; rather, most AI initiatives employ machine learning where large amounts of data are presented to a series of algorithms combined with a goal (such as get safely between point A and point B). Over time the data teaches the machine to reach the goal in an optimal fashion and to recognize emergent situations based on historical experience. The ultimate goal and any constraints are all set by humans, so again our moral philosophies are codified into the machine.
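As a loose illustration of that pattern, here is a minimal sketch (again invented, and far simpler than any real system) in which humans supply both the goal, minimizing prediction error, and the data, while the machine only adjusts a parameter toward that goal.

```python
# Minimal sketch: humans set the goal (minimize squared error) and supply
# the data; "learning" is just parameter adjustment toward that goal.

data = [(x, 2.0 * x) for x in range(1, 6)]  # human-supplied examples of y = 2x

w = 0.0    # the machine's single learned parameter
lr = 0.01  # learning rate, chosen by a human

for _ in range(1000):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step toward the human-defined goal

print(round(w, 3))  # converges near 2.0, the pattern hidden in the data
```

The machine ends up “knowing” that y is twice x, but it never chose that goal; a human framed both the objective and the examples.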
DB: We humans also provide the data informing a machine, often employing data based on how we are living our lives, so our data becomes a set of practical ethics informing the machine as well. In some respects this is how we teach a young child to do things. We encourage them to pursue goals, observe us doing them, and ideally set boundaries (including “time out chairs”) to discourage certain solutions to achieving a goal. It doesn’t mean that the child will always make an ethical decision; however, we hope as parents that we can help correct their actions at an early age so that when they grow up they can make ethical decisions on their own using the moral philosophies they internalized as a child and teenager. Machine learning is similar.
DB: In addition, if you were to ask the machine why the ultimate goal was important — it wouldn’t be able to tell you why. That goal was set by a human programmer. The machine might be able to express subgoals toward the goal, or a mathematical factor indicating how an event moved it closer to or further from the goal; however, it wouldn’t be an answer based on a moral philosophy per se.
5. How can technology be ethical if it doesn’t have a philosophy of what is right or wrong, and isn’t the absence of this philosophy inherently immoral?
DB: We are not at a point where a machine can tell you fully “why” it made a decision. Often the answer is simply that this is what the data it had received and previously learned indicated was the best path toward a goal. This raises interesting questions when we rely on AIs to augment human decisions. I would posit that for AI technologies to be used more fully in societies, there needs to be a way to ask the technologies “why” a decision was made and receive an understandable answer. There also needs to be the ability to provide additional details to teach the machine if the decision it made was incorrect or deserved a second evaluation.
DB: There’s already something analogous to this with credit scores. In cases where the data provided to calculate your credit scores is wrong, there are mechanisms to correct inaccuracies. I personally am concerned because I wouldn’t say the credit score system is great at fixing inaccuracies without individual review and action. The system does provide some general guidelines for why your score is high vs. low.
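That credit-score style of “why” can be sketched in code: with a simple linear scoring model, each feature’s contribution to the score can be reported back to the person affected. This is my own toy illustration; the feature names and weights are invented, and real scoring systems are far more complex.

```python
# Sketch of a credit-score-style "why": in a linear model, each feature's
# contribution to the score can be surfaced as a reason.
# Feature names and weights here are invented for illustration only.

weights = {"on_time_payments": 4.0, "utilization": -2.5, "account_age_years": 1.5}


def score_with_reasons(applicant):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    total = sum(contributions.values())
    # Sort so the biggest drivers of the score come first
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, reasons


total, reasons = score_with_reasons(
    {"on_time_payments": 30, "utilization": 0.8, "account_age_years": 6}
)
print(total)    # 127.0
print(reasons)  # on_time_payments dominates the explanation
```

This only works because the model is linear and transparent; the deeper challenge Dr. Bray describes is getting a comparable, understandable “why” out of models that are not.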
DB: Ultimately, just because a machine can speak with a human-sounding voice doesn’t mean it is thinking in ways similar to a human. While we humans still don’t fully appreciate how we make decisions, I’d submit that we need to be prepared for an intelligence that thinks very differently from how we do.
DB: Does all of this make AI technologies immoral because they don’t think like we do? I personally don’t think so; we don’t hold other animal lifeforms to our human moral philosophies.
DB: Are AI technologies immoral because they lack a philosophy of right vs. wrong? I’d posit they do have goals set, in this case via the ethical choices of humans, and their philosophy is set by either the logic or the data provided to the machines, also by humans. So I’d suggest AI technologies right now are an extension of ourselves who may one day grow up to be teenagers and eventually independent of us. The question ultimately is: who set our goals and frameworks as humans? If it was our parents or adoptive parents, who set theirs, and so on? Was it the social institutions and the cultural environments we grew up in, and if so, how were these historically shaped?
DB: I’d suggest we humans obtained our moral philosophies by living and learning. If there’s any universality to them, it’s because we share mostly the same genetic makeup, even if we have different individual life experiences. AI technologies don’t share the same genetic makeup as humans. Presently we set the goals for AI technologies. Perhaps one day they’ll set their own.
DB: Either way, as we embark on greater use of AI technologies globally, perhaps we humans should devote some time to answering for ourselves the questions of why we are here, what our ultimate goal for living is, and why. Choices matter, and the choice architectures we design for our societies will be just as important — if not more important — than the choice architectures we help teach AI technologies going forward.
(Cross-posted @ Medium | John Taschek)