Artificial Intelligence: What Everyone Needs to Know

Dec 5, 2016

We're asking the wrong questions about artificial intelligence, says AI expert Jerry Kaplan. Machines are not going to take over the world. They don't have emotions or creativity. They are just able to process large amounts of data and draw logical conclusions. These new technologies will bring tremendous advances, along with new ethical and practical issues.

Introduction

JOANNE MYERS: Good evening, everyone. I'm Joanne Myers, director of Public Affairs programs, and I would like to thank you all for coming out on this very rainy evening.

Our guest today is Jerry Kaplan, and he is here to help us demystify artificial intelligence (AI) and the role that AI is playing in our lives and in the workplace. His book is Artificial Intelligence: What Everyone Needs to Know, and it addresses all of the questions you might be wondering about.

Since you all should have received a copy of Jerry's bio, let me just briefly note that our guest is a successful Silicon Valley entrepreneur who has played a major role in the future he describes. Currently he is a fellow at the Stanford Center for Legal Informatics and teaches about the social and economic impact of artificial intelligence in the Computer Science Department there.

Many of you may be asking: Why this discussion? The answer is a simple one: While once artificial intelligence was only the preserve of science fiction movies, now it is one of the hottest areas of scientific research. Scarcely a day goes by without some startling news emerging about the latest advances in smart machines, robotic surgeons, or self-driving cars. As these new systems emerge, ones capable of independent reasoning and action, serious questions about whose interests they serve and what limits our society should place on their creation and use will continue to shape our lives. Robots are coming, and it is for us to decide, now, what the future may hold.

For the next 30 minutes or so, Jerry and I will have a conversation, introducing you to some of the basics. During this time, we will look at some of the complex social, legal, and ethical issues raised by artificial intelligence. Then we will open the floor so that you can ask any questions that you feel were not addressed during our conversation.

Welcome back.

JERRY KAPLAN: Thank you, Joanne.

Discussion

JOANNE MYERS: Assuming that some of us know very little about artificial intelligence while others may know a bit more, perhaps you could just spend a few minutes leveling the playing field by telling us what artificial intelligence is, where the term came from, and what is the philosophy behind it.

JERRY KAPLAN: It's a simple question that interestingly enough does not have a good answer, and the reason is that it is not a very well-defined field. Artificial intelligence is not an objective science in the sense of physics or chemistry, which you could define precisely; instead it's a kind of grab-bag collection of techniques which are generally aimed at a certain aspirational goal.

The usual definition that's given is something like: systems and computer programs that simulate or engage in intelligent behavior similar to what we expect from people. But that's not very meaningful, and it's not even a good description of the field because many of the systems that we develop far exceed human capabilities, and it is not at all clear what human intelligence is or what that means—much less that the kind of techniques that we're using to program these machines have anything to do with the way that people go about doing intelligent things.

JOANNE MYERS: Where did the term originate from?

JERRY KAPLAN: That's an interesting story, too. It was a conference proposal in 1956 by a guy that I knew when I was younger, John McCarthy. He and a few friends got together and held a conference at Dartmouth College in the summer of 1956, and he called it the Dartmouth Summer Conference on Artificial Intelligence, where he basically laid out the direction that I talked about: We're going to try to work on these problems. As he put it in the conference proposal, a bit optimistically: "We believe that significant progress can be made on several of these problems if we get together and work on it for a summer." That overoptimism, of course, is one of the odd characteristics of the field.

But here's something very few people know: Professor McCarthy actually named the field "artificial intelligence" as a reaction against Norbert Wiener, who worked on cybernetics, his term for the study of control systems that engaged in intelligent behavior. McCarthy was a junior professor at Dartmouth—a very young guy—and Wiener was a world-famous Massachusetts Institute of Technology (MIT) professor, and so he was trying to carve out his own space and not just be thought of as, "Well, he's doing something like what this guy at MIT is doing," so he came up with this term.

As I said, I knew John McCarthy a bit. He was—I'm going to put this politely—not very well socialized, and the idea that he would come up with a term like this is really surprising to me because it's one of the great marketing successes in history. If he had not called it artificial intelligence, we wouldn't be here today worrying about it, which I hope we'll get into in some of these questions; if he had called it what he had in mind, which was "logical programming" or "symbolic systems" as it came to be known somewhat later, nobody would be worried about it. But the term itself is so spooky and weird, and it got picked up by science fiction and turned into this big deal, and so that's part of why we're here today; that's where the term came from.

JOANNE MYERS: Do you think because he named it "artificial intelligence" that that's why there's so much controversy surrounding it?

JERRY KAPLAN: I think that is the root of what we're seeing because it created a kind of a false anthropomorphism for what is basically an engineering discipline, to solve certain classes of problems that today require human intelligence and human intention. But if you think more broadly about the history of technology, that has always been the case—we've automated tasks that require human intelligence and human intention; that's what technology is all about.

You can go right back to the Jacquard loom—any of you familiar with it? The Jacquard loom was programmable with cards. It's a beautiful invention from the early 1800s. But before that, you had to be trained for years to be an expert weaver. Weavers had to be trained, and they were experts and all of this, and all of a sudden you had machines that could weave perfect cloth in any pattern. And the people there must have thought, "This is it. Pretty soon these machines are going to be able to do anything." But, of course, they couldn't. Nonetheless, it was a major advance.

That's an example of something that required a great deal of human expertise and training and intention just as, say, radiology does today—which soon will be obsolete; maybe we can get into that. Nonetheless, we somehow have this anthropomorphic reaction to the term, which I don't think is appropriate.

JOANNE MYERS: Is it different than machine learning, then?

JERRY KAPLAN: No. Machine learning is a set of particular technologies that's a part of artificial intelligence. I can explain it.

JOANNE MYERS: Yes. How is it different?

JERRY KAPLAN: Well, let me start with John McCarthy and explain what he thought the basis of human intelligence was. His hypothesis was that it was your ability to reason, which he thought was a matter of logic. Mathematical logic was his area, and he was terrific at it. By the way, he invented a bunch of things that we really use today, so I'm not trying to be critical of him—he invented timesharing; he invented a computer language, Lisp, which was very influential; he did a lot of great stuff.

But his idea, I think because he didn't really have a deep social dimension to his behavior, was that the basis of intelligence is your ability to reason. So he worked on mathematical logic and on putting it into computer programs in practical ways. He thought that would lead to artificial intelligence. That's what it was; that was his framing for the entire field.

At the same time, there was another guy at Cornell. His name was Frank Rosenblatt and nobody's heard of him. But Frank Rosenblatt had a different idea, and he thought—there was some earlier work by some psychologists at the University of Chicago studying how brain cells, neurons, seemed to interact. He thought, "Well, if we can simulate that, that would be another route to solving the same class of problems." For some very interesting reasons, his work was—I would say—lost in antiquity, although there were people who quietly worked on that all the way from 1956 to today. But the dominant paradigm in the field was this logical programming.

About 20 years ago, for two fundamental reasons, the area that Frank Rosenblatt called "neural networks" became the dominant paradigm in artificial intelligence, and it's generally called "machine learning"; neural networks is just one approach to it. The reason that happened is not because of great scientific advances in that field; it's because of two fundamental things: First, the computers that we have today are way more powerful—literally 1 million times more powerful—than the machines that were available back then, and it turns out you need a lot of computing power to do machine learning. The second is: Back then, there wasn't any data to work on in electronic form. Today, of course, we're swimming in it. So this has become a very big field, and several significant advances have been made over the past few years.

I didn't say the important thing: Machine learning is very different than the logical programming that John McCarthy had in mind. What it's really about is finding patterns in extremely large collections of data. So it is used for things like what we think of in the human sense as sensory perception, like being able to pick your face out of a picture on your phone where you see that it puts little boxes around people's faces; that's sensory perception as it's implemented through machine learning. So now, everybody's going around trying to figure out where's the data, and how can we apply machine learning, and what's it going to mean?
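A minimal sketch of the kind of face detection he is describing, assuming the open-source OpenCV library and one of its bundled pre-trained detectors (the image filename is a placeholder); this only illustrates machine-learned pattern matching, not how any particular phone implements it.

```python
# A minimal sketch of "little boxes around people's faces." The detector below
# was trained offline on many labeled face images; at run time it just scans a
# new picture for the patterns it extracted from that data.
# Assumes the opencv-python package is installed; "photo.jpg" is a placeholder.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")                 # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# Each detection is an (x, y, width, height) rectangle matching the learned pattern.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_boxes.jpg", image)
print(f"Found {len(faces)} face(s)")
```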

JOANNE MYERS: Can artificial intelligence or a robot be programmed to be creative or imaginative in that way that you're speaking about?

JERRY KAPLAN: My personal opinion is words like "creative" and "imaginative" are human characteristics. We can decide to apply that to machines. I think it's a mistake. They're certainly not creative in the human sense.

JOANNE MYERS: When you talk about sensory perception, doesn't that involve—?

JERRY KAPLAN: That's recognizing pictures.

JOANNE MYERS: But that's imagination, no?

JERRY KAPLAN: There may be some relationship—a correlation. But that's like saying airplanes fly and birds fly. They're inspired by birds; aren't airplanes just really advanced birds? It's not the same thing, particularly the techniques that are used that are mostly statistics; it's just extracting patterns out of large collections of data.

Is that creative? Today we might think so because only people can do it, can recognize the style of a great Renaissance artist. But it turns out, if you have enough paintings by Renaissance artists, you can apply machine-learning techniques—which is a purely mechanical process—and you can classify new examples and say that looks like it or that doesn't look like it. And then, if you run the thing in reverse, you can pop out things that look like great Renaissance paintings.

Now we look at that and go, "Oh my god, isn't that creative? Look, it can paint like Rembrandt." But I'm telling you, it's just a mathematical technique, and our view of what that means will shift over time to issues of style and whatnot.
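The "classify new examples" step he describes is, mechanically, just fitting a statistical model to labeled examples. Here is a minimal sketch with made-up placeholder data; a real system would use features extracted from the paintings themselves rather than random numbers, and "running it in reverse" would involve a separate generative model.

```python
# A toy sketch of classifying paintings by style. The feature vectors and
# labels below are random placeholders standing in for real image features;
# the point is that "does this look like it?" reduces to a mechanical fit.
# (On random data the accuracy hovers around chance; real features do the work.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))    # placeholder: 500 paintings, 64 features each
y = rng.integers(0, 2, size=500)  # placeholder: 1 = "in the target style"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Classify new examples and say that looks like it or that doesn't look like it."
print("held-out accuracy:", model.score(X_test, y_test))
```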

JOANNE MYERS: Then you also would probably agree that you can't program them to feel or think.

JERRY KAPLAN: Each of those is different. If by "think," you mean perform logical inference and do logical induction, that's what John McCarthy had in mind, and I think today it's a reasonable thing. I think we're pretty comfortable when you sit in front of your computer and you get the little spinning wheel of death or whatever it is and say, "Oh, the computer's thinking." That's just an expansion of the use of the term "thinking" to a new class of device.

But feeling—we can build machines that pretend they feel; we can build machines that can recognize feelings in human beings. This is an area called affective computing. It's really interesting.

Let me just state the obvious: Machines are not people, and they didn't come through the biological germ line that we have, and all other animals have. Whether you want to say that just because they can go, "I love you, I love you" means that they're having feelings, that's up to you; but my advice to you is: don't be fooled. Nobody's going to want to go to a robotic undertaker who's going to say, "I am so sorry for your loss." There's no sense of empathy; you can't fake it. It's like when you call those customer service lines and they say, "Your call is really important to us." It's like, "It is? Why don't you pick up the damn phone?" Is the machine expressing its empathy or its feelings? Of course not; it's just meant to keep you on the line.

That's one of the problems with this. We can make machines that look like they're feeling. I imagine many of you are waiting for the exciting conclusion of Westworld. It's the same theme that occurs all throughout the whole science fiction genre of artificial intelligence. It's really a fundamental question, isn't it? Are these just machines, or are the feelings real? That's what they're exploring.

JOANNE MYERS: Are there any benefits in making computers and robots that act like people?

JERRY KAPLAN: Oh, yes.

JOANNE MYERS: You've talked about the risks, so let's talk a little bit about the benefits and risks.

JERRY KAPLAN: Sure. Well, there's pros and cons. It's unfortunately a very complex and subtle area. The advantage of making machines that can act like people is that they're going to be easier to interact with.

Let me give an example. If I had gone back a few decades with my iPhone and demonstrated Siri to people in, let's say, 1980, they would have been absolutely blown away and said, "My god, that's AI. I'm talking to a machine, and the machine understands me, and look, it can even be clever, and it can do all of these various things."

Now today, because many of us have interacted with those systems, we know they're handy if you keep your conversation to certain things they have been programmed to understand. For example, I can use it to say, "Wake me up at 7:00 AM." That is a tremendous improvement in user interfaces over having to poke around on my phone to set the alarm. And it's good at that; it interprets it correctly and all of that. But I'm not fooled, because when you say just about anything else, just something that's a little bit more abstract and requires some thinking or whatever, today it can't do that. Maybe you've tried it. Most of the time it fails. The problem is that we don't have a good sense of where its limits are.
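A toy illustration of why the alarm request works and an open-ended one does not: the assistant matches your words against a small set of hand-built patterns, and anything outside them simply falls through. This is only a sketch of the general idea, not how Siri is actually implemented.

```python
# A toy command handler: one hand-built pattern for setting an alarm.
# Requests that fit the pattern succeed; everything else fails gracefully,
# which is the "narrow limits" behavior described above.
import re

ALARM_PATTERN = re.compile(
    r"wake me up at (\d{1,2})(?::(\d{2}))?\s*(am|pm)", re.IGNORECASE
)

def handle(utterance: str) -> str:
    match = ALARM_PATTERN.search(utterance)
    if match:
        hour = match.group(1)
        minute = match.group(2) or "00"
        period = match.group(3).upper()
        return f"Alarm set for {hour}:{minute} {period}"
    return "Sorry, I don't understand that."  # anything outside the pattern

print(handle("Wake me up at 7:00 AM"))            # -> Alarm set for 7:00 AM
print(handle("Should I skip my meeting today?"))  # -> Sorry, I don't understand that.
```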

It's very handy when we build computers that interact with people in ways that people can easily understand. That means it needs less training, and it's easier to use those computers. So that's the positive. But if we're using it to fool people or to get them to do things that are against their interests, that's a problem, and that's ethically I think a bad thing.

I was thinking about this. You go to Times Square and there are these performers and jugglers. I was thinking, you know, I could build a really cute robot—a real robot; not somebody who looks like a robot—that would just sit there and have a little cup like right out of Les Miserables and say, "Please, sir, please give me a dollar," and that thing will collect money all day long because people have an emotional reaction to it. It's a vending machine.

That's the danger. When you've got a robot saying, "Oh, come on, please buy this beautiful watch because they're going to cut back my power allotment next week if I don't make my sales goal," you may have a reaction. What's it doing? It's fooling you into taking an action that's not really in your interest. And that is going to be a very big problem, and it was a problem in the most recent election because a lot of the nonsense that was being spread on the Internet was done by fake bots that would spread misinformation.

JOANNE MYERS: You brought up ethics. You've written a lot about it. What are the areas of greatest concern in terms of ethical implications for these robots?

JERRY KAPLAN: If you talk to the technology philosophers, many of whom I know—

JOANNE MYERS: Stanford has a committee now dealing with ethics going forward and the use of ethics and computers—what we should do to program them to have moral systems.

JERRY KAPLAN: Yes. Let me say several things about this. This is an area which I and a lot of other people are calling "computational ethics" because we're going to be building machines that are engaging in behaviors that normally we would think require some kind of ethical judgment.

The interesting thing is the machines don't have to be ethical; they just have to behave ethically in ways that we find acceptable in society. When you talk to philosophers, their concern is with these deep philosophical questions: is it deontological ethics or is it utilitarianism? You remember back to your college ethics course. Now those are interesting theories, but the problems are really much broader and much more practical.

To me, the greatest challenge for the field of artificial intelligence over the next 10 to 20 years is to ensure that the systems that we build adhere to normal social conventions that we find acceptable, and this is going to come up very quickly with the self-driving cars that will be running around Manhattan here, and many other kinds of robots. We're going to have to rethink or think through a lot of issues about when is it okay for a robot to do something like stand in line for you—is that okay or is that not okay? If you're looking for a parking spot and a self-driving car is looking for a parking spot, is it okay for that car to just zip in ahead of you and take that parking spot? You might not like that. Can it cut in line?

We're experiencing some of this today, but you don't see it. If you've ever tried to buy tickets to a popular concert of some kind, what you find is often for the popular ones it's like you hit the button as fast as you can go right at 10:00 AM when Ticketmaster releases it, and you're up in the rafters or the whole thing's sold out. Isn't that annoying? It's not people that are causing that problem. The scalpers have built robots that are buying up all those tickets instantly using some very sophisticated technology, and you can't get your ticket. If you could see the fact that robots are zooming into line ahead of you and buying up all the tickets, you'd be mad, and we'd be passing laws to avoid that kind of thing.

So there are a number of issues. I can take it down a level, but to what extent can we buy a piece of technology that represents our interests in different circumstances where there's competition, if you will, for some scarce resource?

Then there are things like we're going to have robots delivering Chinese food alongside people walking down the sidewalk. Are the robots going to be pushing people off into the street? Are they going to be taking up all the space?

We went through, by the way, exactly this same problem when cars were first introduced in New York. We don't have time to go through that, but it's an incredibly interesting history about the social backlash against these "horseless carriages" and how it changed the social patterns, and we're going to face that again in the next 10 to 20 years.

JOANNE MYERS: Do you think we should be programming artificial-intelligence robots to act like humans? Perhaps it's unethical to do that because that presents all the problems in the first place.

JERRY KAPLAN: The answer, of course, is: It depends. To the extent that it's used for a purpose that we think is good, like helping us to accomplish something—you walk into Penn Station: How do I get to the such-and-such train? What's the best way to get there? That's easier than going up and trying to figure it out. I can never figure those big screens out. So that's good. But to the extent that it's taking your time or attention away from your family or fooling you into thinking it's a sentient being, I think that's a negative. Like any other technology—like a knife—it can be used to cook and cut; it's an incredibly valuable tool, but it can be used to kill people, as artificial intelligence can.

We live with the risks. We mitigate the risks, and we try to get the benefits, and we'll do the same thing with this.

JOANNE MYERS: For a long time people have talked about the threat of artificial intelligence, but lately people like Elon Musk worry that it's potentially more dangerous than nukes. Stephen Hawking and Bill Gates say AI is dangerous to humanity's survival. How do you respond to these statements?

JERRY KAPLAN: I need to be a bit more moderate than I would otherwise be because apparently this is going on television, but these three guys are always trotted out as the example of why we need to worry about this. They're smart guys; there's no question about it.

But let's just say that this is not necessarily their forte. Stephen Hawking—remarkable physicist, but he's not a computer scientist. He's repeating things that other people have told him, and there are people in the field who are promoting, pumping up this idea that we're building ever more intelligent machines and that they represent a danger to the world.

I'll get back to the danger side, but there are two different threads here: One is the so-called "superintelligence" argument—what are we going to do when they get so intelligent that they can do anything? That's not really a problem, and that's not going to happen in the form or the way that that's being described.

That's not to say that the technology isn't dangerous. For those of you who are in my generation, you may remember the old expression, "To err is human, but to really mess things up takes a computer." We're building very powerful computers, and they can be very dangerous. Part of the answer isn't that they're going to go off on their own and do this—that's a design mistake; that's an engineering problem. We need to be careful about what kinds of things we build and how we deploy them.

If we deploy self-driving cars that run people over all the time, you can look at that through the framework of, "Oh my god, the robots are coming, and they're killing people," which is silly, or you can say, "That's a bad design. We don't want that. So what are we going to do about it?"

By the way, just to be clear, I think it is terrific technology, and even though it's going to kill some people, it's going to kill a lot fewer than human drivers do. I'm a big proponent; in fact, I would argue there's an ethical obligation to get that technology out there as quickly as possible.

JOANNE MYERS: You've been affiliated with the Stanford Artificial Intelligence Lab for some years. What are some of the developments there that have struck you as the most important and indicate that we are turning more of our lives over to artificial intelligence, perhaps in ways we don't even think about?

JERRY KAPLAN: I don't think we're turning our lives over to artificial intelligence. We're doing it in the context of trying to use these very valuable tools that the technology industry has provided for communication and for getting the news, and all kinds of advances that have occurred, and there are some downsides to that. If any of you have watched the TV series Black Mirror, I'm not recommending it for dramatic purposes, but it does explore a lot of these issues in ways that are pretty accurate and are concerning. I think we just need to be careful about this.

In the AI lab at Stanford, they are mostly working on certain engineering problems. There's a huge vision group, and they're doing better and better at being able to recognize common everyday objects in pictures. The latest work focuses on trying to describe videos: "That's a boy kicking a ball." That's a very hard problem. It's expanding this machine-learning issue from just looking at a static picture in the context of millions of static pictures to a sequence of moves in a video; that's a real technical challenge to go make that happen.

I'm not seeing some kind of magic that's being conjured in the smoke of some crazy bowl that we need to worry about. A lot of these techniques are now being applied broadly across the industry, and we're going to be able to make a lot of use of them, for both positive and negative ends. It is possible today for us to—you just walk down the street, it's 1984. We've hit that point. In principle, a central authority can find you anywhere—in a crowd, walking down a street—because there are so many cameras, and they're so inexpensive, and it's so cheap to stream that data.

The stuff that's going on in China would make your hair stand on end because they don't have the kind of constraints that we do. It's just a different culture; they don't mind that. They know where you are; they know who you're talking to; they know what you're doing. Personally, here I think we would be aghast to implement those kinds of systems.

On the other hand, if you're trying to find a lost child—same technology. I think it works perfectly well for that; or, as it has actually been used, out of the thousands of people who streamed through the area around the Boston Marathon finish line, they were able to help identify some people and say, "That's the same guy who was over here. There were two bombs that went off. He was at both places. Let's go find him," and that is part of how they found him.

So these are valuable technologies. They have both positive uses and negative uses.

JOANNE MYERS: Before we open it up to questions from the audience, is there one thing you would like us to know?

JERRY KAPLAN: Yes. We have got to get the magic out of this discussion because it's sucking all the oxygen out of the real issues that we need to deal with. The real issue we need to deal with is: To what extent do we want to constrain the technology that we deploy to ensure that it meets our ethical and social conventions? That's an important issue. Instead of that, we're talking about whether robots are going to rise up and take over, which is stupid. That's just right out of science fiction.

If there's one message you could take home tonight that I think is more than just an interesting intellectual point—it actually has practical effects over the next five, 10, 15 years—it's that AI is not magic, and we're not building ever more general technologies that are going to rival humans and take away their jobs and everything like that. What's really going on is a continuation of a long historical pattern of improvements in automation. Artificial intelligence is about automation, and the ways in which it will be deployed are going to follow the same historical patterns as previous waves of automation.

JOANNE MYERS: Thank you very much. I know there are so many more issues we could address—drones; lethal weapons; driverless cars—and I will just leave that to you, the audience, to raise those questions.

Questions

QUESTION: Don Simmons. Thank you for those interesting remarks.

I want to ask for an update on two tests that were proposed many decades ago for identifying what we all would agree is artificial intelligence: One is the machine's ability to engage in a question-and-answer dialogue such that the questioner cannot tell whether he's talking to a human or machine.

The second is the machine's ability to do something beyond our control, something possibly harmful à la 2001: A Space Odyssey. My impressions are that the first test has been foreclosed but that the second is still open.

JERRY KAPLAN: Let me talk about each of those quickly because there are a lot of questions. The first test, of course, is known as the Turing test, and it's based upon a paper written by Alan Turing in 1950. It's been broadly misunderstood. This is a really good paper, and you can look it up. Look at Alan Turing's Turing test. You can get it right online. It's not a technical thing; this was him just throwing out a bunch of ideas.

If you read the paper, it has been grossly misunderstood. He did not propose a test for when machines would be intelligent. What he did—in fact, his answer to the question "Can machines think?" was—this is almost an exact quote: "I regard the question of whether machines can think to be too meaningless to deserve serious discussion." You're hearing the same thing from me today.

He said, "But I speculate that within 50 years' time people will be able to use the word 'thinking' as it applies to machines without," as he put it, "fear of contradiction," which is a point I actually made earlier in what I said. He's just talking about the expansion of the use of language because the behaviors are such that we're willing to say the machine is thinking. Back then, that would've been considered crazy.

Of course, there are tests done all the time. The test is not up to date in any sense; it's supposed to be done with teletypes and people in other rooms. It's a fascinating thing. I could give another 10 minutes on it, but there are other questions.

Your second one is equally interesting. It was specifically about the HAL 9000?

QUESTIONER [Don Simmons]: Just as an illustration.

JERRY KAPLAN: As an illustration, okay.

QUESTIONER [Don Simmons]: The ability to do something that we can't control.

JERRY KAPLAN: One thing I've discovered is when I talk to audiences, people under the age of about 35, I'd say, "As you saw in 2001," and they all go, "I never saw that. I don't know what that is. What's that? What's that?" How many people have seen 2001: A Space Odyssey? [Show of hands] Good enough. That is a really great film, but in terms of AI it's really ridiculous.

Let me refresh you on a bit of the plot. These guys are on their way to Jupiter, and the machine onboard, the HAL 9000, the advanced computer which can talk like Siri—the machine predicts that a particular communications module is going to fail within the next 56 hours, or something—and it doesn't fail. They go berserk, the people, because they say, as he puts it, "The HAL 9000 has never made an error." They thought that was enough reason to turn the damn thing off.

Now you deal with computers as sophisticated as HAL 9000 all the time. They make mistakes all the time. We're perfectly accustomed to it. But remember what I said about the original idea of artificial intelligence, which was logical? Logic doesn't make mistakes. It's just weird in that way. It was actually Marvin Minsky—who was a colleague and one of the people at that first conference—who was the technical advisor on that project.

They thought it was such a horrible thing that the machine might make a mistake and make a prediction that's wrong—we just saw a couple of weeks ago that machines can make predictions that are wrong—that they decided they should turn it off. Then the machine decided that its mission was so important it should kill all the people on the ship.

How hard is it, if you were the engineer building that system, to build in something that says, "Killing people is not a good idea" and "Even if your goal is to complete the mission, you need to do it in the context of normal human social conventions, which is 'don't hurt people'"? That's literally what we're looking at today: how do we build that into machines in a programmatic and sensible way? So the whole premise behind the movie is laughable today, as wonderful as the movie is.
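As a purely illustrative sketch of what "build in something that says killing people is not a good idea" can look like, candidate plans can be filtered by a hard constraint before anything else is even compared. The class names and numbers below are invented for the example, not drawn from any real system.

```python
# A toy planner with a hard safety constraint: any plan that harms people is
# discarded before mission progress is considered. Purely illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    name: str
    mission_progress: float  # how much the plan advances the mission (0 to 1)
    harms_humans: bool       # does the plan injure or endanger people?

HARD_CONSTRAINTS = [lambda plan: not plan.harms_humans]  # inviolable rules

def choose(plans: list[Plan]) -> Optional[Plan]:
    admissible = [p for p in plans if all(rule(p) for rule in HARD_CONSTRAINTS)]
    if not admissible:
        return None  # better to do nothing than to violate a hard constraint
    return max(admissible, key=lambda p: p.mission_progress)

plans = [
    Plan("complete the mission, disregard the crew", 1.0, harms_humans=True),
    Plan("complete the mission, keep the crew safe", 0.8, harms_humans=False),
]
print(choose(plans).name)  # -> complete the mission, keep the crew safe
```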

QUESTION: Thank you, Jerry. James Starkman.

What, if any, are the applications to the fastest-growing problem that the world faces, which is cybersecurity?

JERRY KAPLAN: There are many applications, and this is a very serious issue that a lot of people at Stanford and other places are very concerned about. It's an arms race, is the way I would look at it.

Here's a way to think about cybersecurity that might be helpful for the audience: When the Pilgrims first came to America, the way they wiped out the native population was not by military force; it was because they brought diseases. That's a pretty well-established and understood thing. The term for that is a "disease vector." This happened every time there was some change in transportation: rats came off the ships, and they spread the plague.

What's happened here is the cost of communication, which deals with information and ideas—which is itself ethereal but very valuable and very important—has gone down so low that what we've got is a new vector for people to steal stuff from us and to mess with us. Like any communications medium in the past—and I can give you a couple of examples—when it first comes out, it creates problems, and we're going to need to find ways both technologically and policy-wise to address those kinds of issues.

I'm getting involved in a major project at Stanford to really look into this from the perspective that I just talked about: What controls do we really need to put on this without stifling speech?

Cybersecurity is largely stopping people from stealing, and we just need better ways to do it. It's a hard problem. AI is being used to do the stealing. We can deploy it to try to prevent the stealing. A lot of the systems that are being developed or are in use today really are of that nature.

For example, I said machine learning is the extraction of patterns out of very large collections of data. One of the important approaches to cybersecurity is to have systems monitor all the traffic—not at a detailed level, but how much is coming from where and what patterns it sees—and then, using machine learning, you can find patterns that represent anomalies or changes that you think represent attacks and flag them for action.
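A minimal sketch of that idea, assuming scikit-learn and synthetic traffic counts: learn what "normal" volume from a source looks like, then flag intervals that do not fit the learned pattern. Real systems use far richer features than a single volume number.

```python
# A toy anomaly detector for network traffic volume. The model learns the
# range of "normal" requests-per-minute from historical data, then flags
# observations that fall outside that learned pattern. Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical requests-per-minute from one source (placeholder data).
normal_traffic = rng.normal(loc=200, scale=20, size=(1000, 1))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# New observations: mostly ordinary, plus one burst that could be an attack.
new_traffic = np.array([[195.0], [210.0], [2050.0], [188.0]])
flags = model.predict(new_traffic)  # -1 = anomaly, +1 = looks normal

for volume, flag in zip(new_traffic.ravel(), flags):
    label = "ANOMALY - review" if flag == -1 else "normal"
    print(f"{volume:7.0f} requests/min -> {label}")
```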

QUESTION: Thanks. Michael Kaufman. I found this very interesting.

I know it's been a limited amount of time you've had to be up there, but when you dismissed the risk of superintelligence you basically just made the point that Stephen Hawking and Elon Musk aren't experts in the field; you didn't actually address why superintelligence and the risk of it over the next hundred years is not a threat. I was just wondering if you'd give us maybe the three or four reasons why you think so.

JERRY KAPLAN: There are things we should be worrying about and there are things that aren't worth worrying about. If there was one of my counterparts arguing against me today, they would say, "Yeah, well, you can't prove that we can't do this," but the mistake that's being made mostly is this: If you look at the public announcement and the showy, splashy things that you see and hear about successes in artificial intelligence—

Let me list a few that you might know: In 1997, Garry Kasparov was beaten by a computer at chess, and that was held up as almost like a Turing test. It was like, "Well, if it can play chess, that's it. They're really smart, and they'll be able to do anything." People talked about that as though this apocalypse had come. It's 20 years—I mean, what happened?

The truth is it's a program that's designed using the techniques that John McCarthy actually believed in—search and logical inference applied to the problem. They were able to beat the best human at chess using those techniques, but it did not represent anything much more general than being able to solve a certain class of problems.
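The "search" he mentions is worth seeing in miniature. Here is a sketch of game-tree search on a toy game (players alternately take 1 to 3 sticks; whoever takes the last stick wins). A chess program adds a position-evaluation function, pruning, and enormous engineering, but the core look-ahead idea is this recursion; this is an illustration, not the actual Deep Blue code.

```python
# Minimax on a toy take-away game: each move removes 1-3 sticks, and the
# player who takes the last stick wins. The program looks ahead through every
# line of play, assumes the opponent replies as well as possible, and picks
# the move with the best guaranteed outcome.
from functools import lru_cache

@lru_cache(maxsize=None)
def value(sticks: int, maximizer_to_move: bool) -> int:
    # Terminal position: the player who just took the last stick has won.
    if sticks == 0:
        return -1 if maximizer_to_move else +1
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if maximizer_to_move:
        return max(value(sticks - m, False) for m in moves)
    return min(value(sticks - m, True) for m in moves)

def best_move(sticks: int) -> int:
    return max((m for m in (1, 2, 3) if m <= sticks),
               key=lambda m: value(sticks - m, False))

print(best_move(21))  # with 21 sticks the winning move is to take 1, leaving 20
```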

Then, let's see, some others: How many people have seen the Jeopardy! win by Watson? It was really impressive. And you might think, "Wow, look at that machine." Of course, they made it look kind of human—it had a head. Was it pleased with itself? It sure sounded like it, but that's just a voice. It was silly. There were a couple of tricks to making it work, but most of these things involve a little bit of Las Vegas-style magic to make them work.

Watson's terrific technology is being applied to a lot of different things, but, my god, they used to hold that up and say, "Sure, a computer can play chess, but it will never be able to win at Jeopardy! because that requires too much general knowledge." Well, it turns out that it's not that big a deal.

Self-driving cars; people said, "Well, okay, they won at Jeopardy! but they'll never be able to drive a car. I mean, that requires a lot of human judgment and split-second"—of course they can drive a car; that's silly. They can drive a car better than a human can. No question about it.

Then this past year, a computer beat one of the leading Go champions in Korea. It was a replay for the people who were too young to be there in 1997; it was the same story. People in Korea and China are going crazy over this. I just got back from both of those places. It's like, "What are we gonna do?" They're starting Manhattan Projects to try to deal with this. It's, you know, "These crazy Westerners are going to beat us at this technology." It's really an interesting reaction.

My point is this: You might think, "These computers are getting more and more intelligent and they're solving all of these, more and more problems," but those were all solved in very different ways. It's like you got a tool bag, you're pulling out different tools, and you're putting them together to solve these problems.

There's a concept called AGI, artificial general intelligence. It's a little bit like—to me—in the Middle Ages people tried to turn lead into gold. I'm sure a lot of good chemistry got done trying to turn lead into gold, but it was kind of silly. The truth is that these do not represent some kind of progress toward a generally intelligent machine.

Just as in the kitchen you've got all these different gadgets, but if you can't see the kitchen, you might think, "My god, there's an incredible electronic, mechanical chef in that kitchen." The same thing is true with all of these different examples of advances in artificial intelligence; they're not entirely narrow, but they're fairly narrow, and they cover certain classes of problems.

So if you look at it thinking that this is some kind of linear scale and then you project it up and you hand-wave about exponential curves and all of this stuff, you can come up with this idea that we're going to have this superintelligent machine. There is not a shred of evidence that we're on that path. There is no objective evidence. It's nonsense. I'm an engineer. I've been trained as a scientist, but I'm an engineer, and you're talking about people who are scientists or academics who are worrying about this.

There is more evidence that we should be worried about what we're going to do when the aliens land than there is we should be worrying about superintelligence in machines because there's a pretty good argument that aliens could land at any minute. That's at least a sensible thing. But worrying about whether we're going to have Westworld is like worrying about whether The Walking Dead is going to happen. First, you got to get zombies, and there aren't any zombies. That's crazy.

QUESTIONER [Michael Kaufman]: I don't think anyone's arguing it's on the near-term horizon. I think people are looking and sort of saying—I mean, look at what has been accomplished. Computers have become a million times more powerful in the last two decades, so let's just flash-forward 80 more years or 100 more years. What is achievable?

JERRY KAPLAN: That's the problem. There is this mistaken myth that human intelligence is some kind of linear scale—we don't really have time to go into this, but there's no evidence for that at all—and that somehow these things are going to leapfrog us, and then what are we going to do? They're going to be smarter than we are. Just like the weavers might have said, "This machine, look how brilliant it is. It can make these incredibly intricate patterns without making any errors."

I can't really say what's going to happen 100 or 200 years from now. This is not something we should be worrying about. It may or may not—maybe we will come up with something, but I'm just telling you it's going to require something very different than what we're doing today.

I point back to the history of AI, which I've covered very briefly. There were two different approaches. It wasn't like physics, where we had one approach and sort of made steady progress; these approaches compete, and they're good for different classes of problems.

It's like climbing a tree and claiming progress toward making it to the Moon, putting a man on the Moon. This is just a judgment based on my experience. The problem is just framed wrong. It's not something we should be worrying about. I can't tell you what's going to happen in 200 years. I have no idea. I have enough trouble knowing what's going to happen tomorrow. We can't predict the weather.

QUESTION: Good evening. George Holevas.

Thank you for your time and knowledge. I have two questions—one you can maybe address afterwards if it's too long of an answer. The first is: We're talking about thinking, but I think maybe a better word would be sapience, and then my question would be: Would learned machines be capable of sapience and sentience? Is that something that they could eventually get to?

JERRY KAPLAN: Here's my point of view on this. That is fundamentally a religious question because when we worry about it in all of these shows—Humans, Westworld, and the whole science fiction genre about AI—it is always about the same thing: Is that a machine or has it come alive and become conscious? Really what the question is, is: Do we owe it the courtesy of our empathy? It's a machine that behaves in a certain way, but whether you think that means we have an obligation to it in the same way that we might have an obligation to higher animals is really what these things are about. For those of you who are watching Westworld, that's the issue that they're dealing with—is it stupid to be worrying about these machines, or is it a moral obligation that we have?

Here's the thing: We have no idea what that means. We have no idea what human sentience means. We have no idea what human consciousness means. This has been debated for centuries, and we haven't made any serious progress on those issues. It may turn out that we're just machines made of meat, and then we have a decision to make: Now we're going to build something new, and it behaves . . . but I'm not really worried about robots coming in and buying up all the prime real estate in Manhattan and drinking all the fine wine, and all of that kind of thing.

To me it's a religious issue more than anything else. Do we think there is such a thing as a human soul, and is there any notion of that being part of a machine? There are two different schools of thought here in the United States. In Japan, interestingly enough, many people believe that all physical objects have a spirit, and so they have a very different view on this issue, and they respect robots and machines in ways that we don't here.

I don't know the answer because I don't know what human sentience means, and it's really the operational question—should we be worrying about a machine's feelings? Does a machine have a right to do anything? That's really the fundamental issue.

QUESTIONER [George Holevas]: Okay. Thank you.

The second one would be: At what point does a learned machine or a robot become a legal entity like you see with companies? Companies became a legal entity, and at that point they spoke for themselves in a way. At what point does a learned machine do that, and at what point are they responsible for what they do?

JERRY KAPLAN: That's a really interesting area. Let me just ask the question in a slightly different form for the audience, and I'm not actually going to answer it. I'm sorry to plug my book, but it's covered in detail in the book. It's a big thing.

The real question is: What kind of legal framework should we have for dealing with machines that can perform actions that weren't necessarily predictable by the people who owned them? They're a little bit more like—there's legal precedent in the way we deal with our dogs, for example. There's a whole bunch of legal theory around dogs that I read about, and it was really interesting. If your dog goes and kills somebody, how responsible are you? The answer is: under certain circumstances you are, and under others you're not.

We'll go through that with these machines because if you send your robot down to buy something at Starbucks and it pushes somebody off the sidewalk and they're killed, you don't want to be charged with murder. It doesn't feel right. Now, do you have some responsibility? Of course.

Now corporations are—I have the whole history of this laid out in my book; the history of corporations is very interesting. There's a legal shorthand for rights and responsibilities that go hand in hand, and it's the term "personhood." It doesn't quite mean the commonsense personhood, but corporations are legal persons—they have certain rights and certain responsibilities—and that's the way that works.

I think we will probably apply the same principles to certain kinds of robots. Your robot has to pass the bar, and then it can engage in certain kinds of legal activities, and those things go together. If it does something bad, it can get disbarred. We need those kinds of systems.

I think we will probably wind up ultimately as a matter of expediency granting some notion of personhood to robots, but don't take that the wrong way. That does not mean they're going to have the right to marry your daughter.

QUESTION: Sondra Stein.

As an ethical issue, as artificial intelligence does more and more things people can do, will we really be able to create enough jobs for all the people in the world?

JERRY KAPLAN: Great question.

QUESTIONER [Sondra Stein]: I was just reading about Switzerland, where they manage large amounts of money, and they were saying, "Well, the robots will do a lot better job, or can, in terms of artificial intelligence, than the bankers and research people."

JERRY KAPLAN: This is also an area where I think people have a broad misconception. Automation has always eliminated jobs, and the labor force—the labor markets are very resilient, and they evolve surprisingly quickly. To the extent that automation causes certain current professions to become extinct or to be reduced in the number of people we need, that's going to affect employment.

There are papers you can read, and it's always quoted in articles the same way: It's like, "Experts say that in 50 years 50 percent of the work that humans do will be obsolete, and what are we gonna do?" The answer is: If you go back 50 years, 50 percent of the work that people did then no longer exists; 200 years ago 90 percent of the U.S. population worked in agriculture. That's what work meant—you got out there and you tilled the field and picked the crops and planted. That's what work was.

If we suddenly took the technology that we have on farms today and magically transported it back 200 years, it's true—instantly 90 percent of the population would be out of work, and people would say, "We don't need to work anymore. The machines are going to do all the work." Only 2 percent of our population today works in agriculture, and even that is under threat—it's going to go down way below that due to artificial intelligence in the next couple of years.

But if it happens over a period of time, the labor markets adapt because our expectations about our standard of living go up. The effect of automation is to make us wealthier, and we don't want to live today in a shack with an outhouse. The point is, our expectations rise, and that creates new employment and different kinds of jobs. So we are going to be way wealthier—I can even be specific: The historical pattern has been very clear—this is really weird. Average U.S. household income doubles every 40 years, and it has for the last 200 years. The average household income in 1800 in the United States was $1,000, and that's inflation-adjusted; that's in today's dollars. Everybody was dirt poor compared to today. Today it's about $50,000.
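As a rough check of the arithmetic behind those figures (the years and dollar amounts are the approximate ones quoted above):

```python
# Back out the growth rate implied by roughly $1,000 in 1800 and roughly
# $50,000 today (both in today's dollars), and the doubling time it implies.
import math

start_value, end_value = 1_000, 50_000
years = 2016 - 1800

annual_growth = (end_value / start_value) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_growth)

print(f"implied growth rate: {annual_growth:.2%} per year")
print(f"implied doubling time: about {doubling_time:.0f} years")
```

That works out to roughly 1.8 percent a year, or a doubling time of about 38 years, broadly consistent with the "doubles every 40 years" pattern he describes.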

We have a problem with how that's distributed—different issue and very important issue. But the truth is as we get wealthier there is more demand for services, and there are new kinds of jobs and services that are created as a result of the new technology. I can go on about that for a long time, but let me just give you that answer.

QUESTION: Richard Horowitz.

Does artificial intelligence equate with the advancement of computer sophistication? In other words, we hear about robotics and all sorts of technologies—is the artificial intelligence aspect of these technologies computer-based, or is there artificial intelligence that's outside of computer advancements?

JERRY KAPLAN: This is coming back to the original question: What is artificial intelligence? When people think about that, they think about robots. They don't really mean every robot because robots in factories have been there for a long time, and there's all kinds of automation and machines that stamp and cut and do all this stuff, but they don't look like people. When the things start to have arms and start to have legs, people go, "Well, now that's a robot," and it can do various things.

There have been advances in mechanical engineering. If AI had been called something plain like "symbolic logic," which is really what its equivalent would be, we wouldn't be here worrying about it. The truth is that when you combine the advances in mechanical engineering—what kinds of controls and how you work them and motors and arms and all of that—with the advances in software development (when you say "computers," that's what I think you're talking about), you can do some very powerful things.

Recently these—you know what drones are, right? You can buy them at the toy store now. They're really cool. I don't know if you guys have played with them, but they're very stable, and you can just make them go up, go down—they're amazingly stable. They can work in groups using software to coordinate them, and when you see it—if you look at some of the videos—it will completely freak you out. I have one I'm going to show tomorrow night in another talk where these two drones, hovering in the air like this; somebody tosses a ball, and they start batting it back and forth to each other. The precision with which these things can operate is absolutely astonishing. They're like Cirque du Soleil performers. That's going to open up all kinds of opportunities to do things.

To answer your question, whether you want to call the mechanical engineering part of this AI or not—certainly at Stanford and everywhere else there are big robotics groups, but mostly what they're doing is not figuring out a better way to stamp out coins or something; they're really concerned with how to apply the advances in artificial intelligence to the current state of the art in mechanical engineering, to build devices that engage in more flexible kinds of behaviors than the current types of machines that we have.

JOANNE MYERS: I want to thank you once again for a fascinating discussion. We obviously did not touch on all of the issues that artificial intelligence raises, but I did want to share with the audience one thing that I found fascinating, which is where the word "robot" came from.

JERRY KAPLAN: Oh, yes.

JOANNE MYERS: Right. It's a Czech word, and it means forced labor.

JERRY KAPLAN: Robota.

JOANNE MYERS: Robota, right.

Thank you.

JERRY KAPLAN: Thank you.

