In this episode of the Artificial Intelligence & Equality podcast, Senior Fellow Anja Kaspersen speaks with Professor Joanna Bryson of the Hertie School about the intersection of computational, cognitive, and behavioral sciences with artificial intelligence. The conversation delves into the complex ways these fields converge to form intelligent systems, as well as the ethical dimensions of this emerging technology. Drawing on her academic background and practical experience, Bryson offers valuable insights into the cognitive aspects of AI development and their societal implications.
This podcast was recorded on September 26, 2023.
ANJA KASPERSEN: Rethinking ethics and empowering it for the information age has become a central priority for Carnegie Council for Ethics in International Affairs. At the heart of this exploration is a deeper understanding of intelligence in its many forms, particularly given the profound shifts in how we approach understanding intelligence brought about by advances in the computational sciences. To explore this topic in depth, we seek insights from people with deep expertise in understanding the impact of technology and artificial intelligence (AI) on humanity, society, and the environment.
Today's distinguished guest, Joanna Bryson, will be a familiar name to most listeners engaged in discussions about AI. She holds degrees in psychology and computational science, and since 2020 she has served as Professor of Ethics and Technology at the Hertie School's Centre for Digital Governance in Berlin.
Joanna, I am thrilled to be speaking with you today.
JOANNA BRYSON: Thank you for having me. I am delighted to be here.
ANJA KASPERSEN: Before we dive into your work and insights, Joanna, to help our listeners better understand your unique perspective on AI and related fields, could you share a bit about your background and what sparked your interest in this area, particularly at the intersection of human cognition and machine capabilities?
JOANNA BRYSON: I think my preparation was unique, but I hope my perspective is not that unique. It is a bit strange. There is actually a local politician here in Berlin who said, "Oh, I am going to do AI now. I have read all of your papers, and everything seems obvious."
I was actually very interested in animal behavior. I wanted to understand why different animals—I was interested in intelligence, but I realized that people get very defensive about it, so I started studying other animals. I was a huge fan of Jane Goodall. So I did a degree in behavioral science at the University of Chicago. During that time I realized that a lot of human intelligence is actually formed automatically.
Also, I was a science fiction fan, so I decided I would like to get involved in artificial intelligence. Also, it turned out I was a surprisingly good programmer, so I thought that would be an advantage and I could get into a better Ph.D. program if I did all that.
However, I was not totally sure what to do with my Ph.D., so I wound up programming professionally for five years, which turned out to be really useful. It was everything from fixing printers to designing systems as well as writing a lot of software. That was in the 1980s.
In 1991 I did my Master’s degree in artificial intelligence from Edinburgh. The great thing about that was that Edinburgh actually had artificial intelligence before computer science. They took artificial intelligence to be a mix of philosophy, psychology, neuroscience, music, linguistics, and of course computer science as well, but it was not just some subsidiary bit of engineering. It was a proper discipline in itself. That was fantastic.
Then I got into the Massachusetts Institute of Technology, which was great, except that I didn’t have a computer science degree, so that was a lot of work, but eventually I escaped. Somewhere in the middle I actually got another philosophy degree just because I wanted to work in a lab, and that was the easiest way to get funded to do that for a couple of years.
So I worked in what was called a primate cognitive neuroscience lab, but there was no neuroscience; it was actually all behavioral. The monkeys actually had better touchscreens than I did; my computer was a 286 and theirs were 386s. Anyway, this was a long time ago too.
I did another psychology post-doc at Harvard, also in a primate lab that was also called cognitive neuroscience, so it also didn't do anything invasive. It was really weird. But I was with Marc Hauser, which was educational in other ways, because he was actually working on—and I think I am credited in—his book Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong, which came out right when it was found out that he was committing fraud, so that was interesting. But he was a great mentor, I don't mean to run him down, and I really liked working in that group, but I hated being a post-doc.
Also, even Harvard undergraduates could not figure out how to use my software, so I figured out that I needed to spend some more time in a computer science department, preferably along with good human-computer interaction, so that I could make my AI systems more usable.
That is when I went to the University of Bath. It was also at that time very hard to get positions in artificial intelligence in the United States unless you were either willing to work on weapons systems or pharmaceuticals, and I was still trying to do science. Everybody in Britain welcomed me back with open arms. They still thought of AI as a science and not as engineering destruction or whatever.
I have basically been in Europe professionally ever since, although I did spend a few years living in New Jersey when my husband was poached by Princeton, and I did a good sabbatical there with the Center for Information Technology Policy, which also kept me around as an affiliate for a few years after that.
ANJA KASPERSEN: It is very interesting that you said there was such a difference between the American climate at that point and the European climate. Did you see in America the focus was very much like you said, more on the defense side—and what was the other thing you said?
JOANNA BRYSON: Pharmaceuticals.
ANJA KASPERSEN: Exactly, and it was more treated as a science in Europe. What do you attribute that to, given the timing of it as well?
JOANNA BRYSON: I don't entirely know, but I have to say that there are way more differences between America and Europe than Europeans tend to realize. America was founded by fundamentalists, and there are a lot more black-and-white attitudes there, and of course people are starting to realize that it is a little more libertarian and less well regulated. I think partly just the domination of the military-industrial complex and the pharmaceutical-industrial complex in the academic complex was why those were the things that were sucking up brains then.
What was interesting was that Europe still thought it was worth funding, at that time at least. Now in Britain this is harder, but they still thought it was worth funding philosophy and “blue sky” science. This was before 2008.
I remember after 2008 a couple of times people talking about how important it was for academic research to really serve a purpose. I was able to get up and say: “Look, I would not have been ready for studying what AI is doing in human society if I had not spent 10 or 15 years of my life looking at theoretical biology, totally blue sky. We have to support the blue-sky sciences.”
I don’t mean to say there is no place in America that does that—maybe right now there are more places than in Europe—but at the time especially Britain was very tolerant of eccentricity. As long as you did a couple of papers on mainstream computer science so that they could show their funding agencies that they had good researchers there, then they let you spend the rest of your time as you saw fit.
I didn’t realize how lucky I was to be in a computer science department. Other disciplines that did not have as obvious a payoff to the economy were under a lot more pressure to perform and to keep coming up with mainstream articles.
ANJA KASPERSEN: I have so many questions for you, Joanna. But before we get to my next questions, you said something in your introduction: "It turned out that I was a really good programmer." Listening to you speak about your history—because you obviously have an incredible multidisciplinary approach to understanding AI, intelligence, and all of these things that you are working on now—I think there is a recognition that whatever technical field you are in, if we are going to get it right—be that the governance side or making sure that the technology does not fail us or that we fail the intentions we set out with in building these technologies—we have to move past just thinking about these as technical capacities.
Do you think your different approach and different background was what made you a good programmer?
JOANNA BRYSON: I don’t know. That is a really interesting question. I honestly think it is just a part of intelligence somewhere. It is mostly like being mechanically inclined. It is like, can you see how the pieces work? I would take things apart and put them back together, ever since I was a kid—ever since I was a little girl, I should say, just to support feminism. Being mechanically inclined and seeing how to decompose things was one component.
But I think you can look at all kinds of other aspects of my upbringing. My family is religious, so we always did think about ethics and moral things. And my undergraduate degree, I said, was in behavioral sciences, but it was at the University of Chicago, so it was liberal arts, so I was required to take a lot of social sciences and humanities and things like that. That helped me be better prepared for a lot of different kinds of inquiry.
I used to feel guilty honestly at the University of Bath because I was encouraging people to come take degrees there, which are amazing degrees—it was a great, great school for computer science—but you basically wound up with only your major, so a student in British computer science winds up where an American student would be halfway through their master’s degree in their discipline, but they get no breadth. I shouldn’t say “British;” this is English. The Scottish system is more like the American system, there is at least a hope for minors and taking some other elective courses.
I got so much out of a liberal arts degree that benefited me, but you had to go and do something else with it. Even to this day there are not a lot of disciplines I could just write a paper by myself in, I really have to work with specialists as well. So, I am kind of also a specialist, I am a specialist in bridging a few disciplines—not all disciplines of course but the ones that I have taken the time to at least understand well enough to be able to see where the hinges might come together.
ANJA KASPERSEN: Joanna, anyone who has prepared for a podcast interview knows the extensive preparation involved. However, as I was crafting questions for our chat, I noticed that you have already done much of the legwork on your website where you have listed several questions that you find pivotal to our current state and future direction.
You posited that the use of information technologies like artificial intelligence poses an ethical challenge: these technologies can be used, even weaponized, to exploit human and societal vulnerabilities and, one could argue, even carry the potential to reshape our concept of humanity and what it means to be human. What is your take on this, Joanna?
JOANNA BRYSON: That’s a great question. I don’t think it is exactly what I say on my website, but that’s okay. If I do, I have forgotten.
You are actually asking two questions. One of them I am spending a lot of time on these days because Europe is worrying about it a great deal, which is we do not just talk about the vulnerability of society, we mostly are talking about the vulnerability of democracy. But I honestly think that is a mistake because I think it is government in general that is challenged. China and Russia are also very worried that people are creating and giving information to their people, to their subjects, that they do not want them to have. I think this transmission of information has impacts.
I think whenever you have a technology that reduces the cost of distance you necessarily also then have to change the way you govern. You have new problems because the nature of power just changes. It changes the landscape of what it is to govern.
On the one hand, there are all these problems that we are working a lot on—and I can go into that if you want to, like the European regulation.
ANJA KASPERSEN: We will come to that later.
JOANNA BRYSON: I really do think the only way that we can get through this is with more information in hand, with people understanding things better, and that may include being able to imagine what it would be like to live at a time when you would be better off doing some kind of web search to figure out what your next move is rather than making it up yourself.
I think that we really have a huge project for the humanities after we get over the challenges I see right now—the challenges of digital governance, the challenges of climate change, of course, which is the biggest one of our generation—but then the next thing over that hurdle is I think in solving those we will have come up to a situation where we have so much information about ourselves, so then we are going to be having a real renaissance of the humanities. That would be my guess.
ANJA KASPERSEN: You alluded to the impact on our society, the impact on our democracy, and the impact of how we think about democracy.
JOANNA BRYSON: One of the most interesting things I heard was an Israeli politician who said: “You know, I don’t care whether Donald Trump really did win or whatever; if Russia helped him or not, that’s America’s business. But what really bothers me is that Donald Trump’s voters don’t care whether Russia is to blame for their opinion. They say, ‘Well, this is my opinion now, so if Russia gave it to me then Russia is not as bad as I thought.’” This is all in her words. Then she said: “I thought that social media was a way that I could find out what my constituents wanted, but now I am finding out that other people can change what they want.”
In a way I found this whole thing revealing, but it is weird too. What do you think leadership is? Essentially people are getting led, but in a weird, sneaky way, and I don't think we should be so surprised that people's desires are not fixed.
The extent to which social media is to blame for polarization seems to be exaggerated. It seems that actually our big problem is that we have not been governing very well. Wealth inequality has grown to the point where some people have enormous amounts of influence whereas other people's wealth is declining and they are not able to meet obligations, or they are afraid of not being able to pay their mortgage or of going bankrupt. This seems to be more correlated with polarization than anything else.
On the other hand, going back to Strategic Communication Laboratories and Cambridge Analytica, what you can do really, really well with social media is surveil. If you have all this information about people, you can pick out which people are susceptible to which kinds of messages, and you may even be able to find and target individuals and convince them to take specific actions. So you can use AI to find someone and then have a human go out and persuade them to do things, but more generally you can have this targeted advertising.
That is what the law that is already in place, the European Union's Digital Services Act, is all about. We want to know: Are people being targeted? How are they being profiled? What determines what gets recommended to them? The three things it looks at are the profiling of European users, the recommender algorithms that determine what they see, and the targeted advertising that also in some sense determines what they see; it asks how these things are being used and whether the companies doing it are obeying the older law, the General Data Protection Regulation. Basically, none of this should be targeted on an individual's protected characteristics. We don't want to see Europeans exploited, basically. We don't want to see them coerced this way.
It is interesting. I have had some conversations with people in social media. Some of them really, really, really believe in recommender systems. They think if you ask a person what they want to do you are losing.
I am a psychologist. Absolutely it is important to understand that there are implicit things and there are explicit things, that you do not always know what you are going to respond to best or what you enjoy best, and sometimes these things can be read from your behavior.
On the other hand, coming back to ethics, the whole point of being an adult member of a society is that you are responsible for your behavior, and if you make a conscious choice—“I want to do this, I am trying to inform myself on this topic”—and then social media keeps trying to distract you with something else, to me that is the wrong social media. In my mind you don’t want to be on social media with recommender systems. You want to be somewhere where you can curate your experience.
I get that once in a while you want to just sit in a movie theater or watch TikTok, but when you are choosing your news sources, when you are being an informed citizen, that is a point where you ought to have more control. So, I think there is a real problem that some organizations are trying very hard to reduce the amount of control people have.
I encourage some of those organizations who famously have business models based on that to rather be transparent, like, “Go ahead, show people a good time with your AI toys, but show them how it is made. Don’t let it be mystified. Help schools understand. Go ahead and expose tools to people so that they can see how this stuff was built.”
Again, people still enjoy television, movies, novels, any kind of fiction. They still fall in love with characters even though they know it is completely false, and I think the same would be true of AI. There is no reason we can’t enjoy AI, but we need to know that it is actually an extension of corporate intelligence, it is not itself an entity we could be friends with.
ANJA KASPERSEN: How do people protect themselves against bad ideas, misinformation, disinformation, etc.? You alluded to it already. I think kids are in a special category, but as adults with legal agency the responsibility is on us to educate ourselves, to understand, to in some ways inoculate ourselves.
What is your sense of that? Is it possible to inoculate humans against the impacts we see from these technologies and the way that they are being embedded into society?
JOANNA BRYSON: Again, that is a really interesting question. Remember we are talking about really diverse technologies too and a diverse set of problems.
Most Internet of Things devices I would not have in my house; I just think they expose a cybersecurity and fraud landscape that we don't need. On the other hand, I know a lot of people use them.
For example, my father at this point cannot type anymore, so he has to have voice assistants to help him interact with this technology. One kind of inoculation is saying, "Okay, I don't want to do this; I can read a book and not be surveilled while I do that," but at the same time there are unfortunately other things, like what people are exposed to when they have to do content moderation.
There are images that are life changing. It is one thing to kind of know something exists, it is another thing to see it. There is no way to inoculate yourself against that. We don’t expect to run straight into atrocities. Almost everyone has had some kind of life-changing event, I suppose, and there is nothing you can do to inoculate yourself against that.
I think between these two extremes is the interesting case. Even with this thing about not legal personality but moral agency, there is a continuum, I think. If you have undergraduates whom you are teaching, you see this. Some of them come in and they already have a very strong idea of what they want to do. Some will carry through consistently with that idea; they are already running their own businesses or whatever, and they are just getting this degree on the side because it will help them in some way, but they are basically already on their path. Other people are basically still children. They are just wandering around and going to school and trying to figure out who they want to be.
As you inform yourself, I think one of the main things we are going to have to do is get better at recognizing sources than we have been. It is kind of a shame because the world was more pluralistic when anybody could provide content.
But I still think—it is like Wikipedia, where anyone can edit, although sometimes it gets harder to edit these days because there are so many people, and even providing good content is more work than it used to be, but it should be. I think we will learn ways. We are getting better on sources and quality.
But that is one of the reasons you go do a degree—I don’t mean to be an academic bigot or whatever—but just connecting yourself into some kind of verification process, whether that is a university, your job, or just getting to know other people who can vouch for each other, I think we will see people having to put time into that especially for the decisions that matter. You keep wondering how they are not already doing that.
Again, I think this comes back to polarization. The highly affectively polarized, those who do not trust anyone, no longer trust the authorities because basically quite often the authorities have let them down and they are in a poor economic situation that they don’t deserve. Then they only trust others who choose the same identity as them and so say the same insane things they say.
I think even there we all have shared a lot of different exposure to a lot of different ideas, and sometimes enough evidence comes through on the other side that they think, Okay, maybe that was a bad idea.
ANJA KASPERSEN: I am going to follow up on something you mentioned, Joanna. It follows in the same vein as my point about cognitive immunology and the point that you made about the need to be transparent about what is being built, how it is being built, and what the capabilities of these systems are.
You have been a vocal critic of language used in AI discussions, cautioning against the confusion arising from anthropomorphizing AI, giving it human attributes, be that intentionally or unintentionally. I think many of us agree with you and are quite worried about it because it is the ultimate deflection strategy in some ways away from what needs to be discussed. Both stances, whether it is intentional or unintentional, essentially hinder governance.
You addressed this—and I thought it was so interesting when I was looking into your huge body of work—way, way back in an article which you called “Just Another Artifact,” but you also looked at it more recently together with some collaborators where you even questioned the term “collaboration” when you talk about human-machine interfaces and interactions.
Could you elaborate on this point, and on why you think anthropomorphizing is both so prevalent and so dangerous for where we need to go with these technologies?
JOANNA BRYSON: That is a great series of questions, and it is really fun to have you tie together my very first AI ethics paper with my most recent one.
First of all, let's say a little bit more about terminology. A word means how it is used. That literally is true. That is why large language models (LLMs) work, that is why web search works. It used to be bizarre, esoteric philosophy—I think Wittgenstein or Quine or somebody came up with this—but now it has been proved by the fact that you can search for stuff.
On the other hand, when you are trying to communicate, when you are trying to achieve a particular job, then you need to take one of these definitions. That's why I learned that if you are trying to help people think about how intelligence is altering their lives, it does not make any sense to use the definition of "intelligence" that means conscious or has a soul or is a moral agent. That is why I learned those words. It is normal in science. It is a standard part. Physics had to do this too. You learn how you can decompose things.
I still use the intelligent part. The most essential part for intelligence for me is just that you can do anything, that you are responding to your context. I did choose it somewhat arbitrarily out of the available definitions, but it is not totally arbitrary. It was actually more than a century ago that that definition was brought up, and it was brought up in the context of trying to understand animal intelligence.
I was taught it both as an undergraduate in psychology and in my first degree in AI. Again, we are always standing on the shoulders of giants. I am just lucky that I had the education that I to some extent selected, but you can never really know what you are getting into with a degree.
If you think intelligent means human-like, then of course artificial intelligence is human-like machines. Also, if you think language is human-like, you know what? You have evolved to think that.
We respond strongly to language even as neonates, and even before we are born: fetuses are apparently absorbing their mother's dialect, and as soon as they are born they attend more to people who speak like she does. It is a very, very big part of being human, and it is very hard to ignore language.
There was a study recently showing that Danish has changed to be more English-like in the five years since we have had voice assistants.
Some people think there is no moral way to have spoken language produced by AI—I hope that is not true because I don’t think we are going to get away with it—but they feel that, especially when it is spoken, it is going to have such a huge impact on your household, you just do migrate toward the language of those you are speaking to.
So now we are handing corporate control of that critically important aspect of our culture and of our intelligence—our words are the levers that we use to get anything done when we are thinking—so we are handing that control over to corporations, which are smoothing it across society—and not intentionally.
You could say radio did the same thing; a lot of accents fell out of Scottish and things like that because people were listening all the time to the BBC or whatever. So this is a problem we have had with mass media for a while. It is not only a problem, it is also a blessing. It is much easier to understand people as you go across Britain now than it used to be, which is a shame in some ways but is honestly quite useful if you are stuck somewhere.
Regarding anthropomorphism, people are right to think that language is something that has historically indicated there is a person. But the fact is we have both these things. The hard thing is trying to communicate that even now a number of people who are working on this, experts in LLMs, still expect that somehow it is going to magically know more stuff than we already knew, so know more than there is on the internet. I don't know if they have some weird faith that somebody has imbued all of humanity with all the knowledge there is, so if you could just combine it you would get everything.
It is just not like that. We as a culture keep discovering new things and we get new capacities, so you are not going to discover that many new things by mining existing language. In fact, it looks like you get performance at about the 80th percentile, which is amazing, so you can level up a lot of people by giving them these generative AI tools, but you don't achieve even the peak of normal human production with generative AI because it is an averaging of all of these people.
A surprising number of people do not even get things like that; they expect they are actually going to find a whole human. We have things in our brains that are doing very similar things when we are learning language, but we also have all these other parts of our brains and all these other parts of our bodies—we have metabolic systems, we have motivational systems.
For a human it is dysphoric, something you strongly want to avoid, to be alone for great periods. Obviously, there is an optimal amount that you want to be alone, but to be totally socially isolated is a recognized form of torture now. Solitary confinement, even if you do get to see someone for an hour a day or something, is still considered torture.
This is something you are not going to build into a robot. You can mine it, you can get a robot saying, "I don't like to be alone," but that aversive stimulus is an unavoidable part of who you are, which is part of what makes jail something you don't want to be in, which is one of the parts of our justice system. Also, just social shunning is something you want to avoid. You are not going to build that in in the same way.
Whereas, if you look at a guppy, a dog, or a sheep, they are the same. They are highly social animals. They have that same thing. They will do anything to try to keep in with the flock.
That is something a lot of people are still struggling with, I guess, because they only have computer science degrees.
ANJA KASPERSEN: Can I ask you a provocative question on that? Who benefits? Because these narratives are so present in every discussion about AI, in your view and being as immersed as you are in these communities, who benefits from these narratives of giving it human attributes?
JOANNA BRYSON: You have to ask yourself a lot of questions about the different kinds of benefits.
First of all, I think some people sincerely believe in these things. I have heard from people who work with him that Elon Musk really is afraid of AI. I don’t know why.
I think a lot of people—it is again fundamentally human—want a lot of power or they want to have immortal, perfect children, or whatever, and they want to live forever. They feel like they are totally entitled to that if they can find a way to do it. So, some people want very badly for it to be true, and they want it to be true so badly that it is hard for us to tell and it may be hard for them to tell whether they believe it or if they are just trying to make it true.
You might recognize some of this argument out of Daniel Dennett in Breaking the Spell: Religion as a Natural Phenomenon. We have obviously been this way about other beliefs for a very long time.
I do think it is related to a kind of religiosity, this great concern about “What am I about?” and “I don’t want to be about just my body,” especially if you are in a situation where you find something about your body bad. A lot of people really, really want to believe that the essential part of them is just their mind and there is some way to extract—again, this goes back thousands of years, it is called Gnosticism—that people have really wanted to think that the body is not that important.
Certainly, technology in general does not tend to last as long as bodies do. It is so weird. If you look at the duration of a particular file format or whatever so that you can meet it again, it does not tend to be eighty years. Some people have told me that Google is working very hard on this problem too and they have some processes that have lasted a very long time and whatever.
But anyway, I do think it is just like a standard pursuit. Why did very powerful people in the past believe they were divine? Again, there was this whole idea that royalty was somehow divine that is pervasive. So, I think it is not surprising that some people really want that.
On the other hand, I do think there are people who are or were just trying to distract regulators. I feel like how generative AI was released had to do with the timing of the European Union trying to finish up their Artificial Intelligence Act. Maybe I am being overly paranoid to think that.
Again, some of those people may actually believe that generative AI might magically turn into a person, but more of them might think, This is going to confuse and distract everybody, which it did.
Not everybody. I was one of the people going, "What are you talking about? Nobody thought about this? What? Don't you watch Star Trek?" Of course we are all thinking about that.
I guess there were still some people who thought AI was this little problem that was like word processing or something.
ANJA KASPERSEN: You have been talking as well about what we can learn from the nuclear governance regime, with its mixture of autonomy, legitimacy, and political and technical capabilities. Not least, as a former director of disarmament, I would note that what is unique about the nuclear security and safety entities is of course that they regularly, on a daily basis, employ scientists and engineers in their organizations.
You have written about what we can learn from the nuclear regime, particularly from the International Atomic Energy Agency, and what about their activities serves as a valuable comparison for AI governance and also in your view what does not serve as a valuable comparison? Would you mind giving us your thoughts on this?
JOANNA BRYSON: What happened was a few people—including people who I don’t normally side with, like Sam Altman—were saying, “Oh, we should do something like the nuclear regulation,” and then a bunch of people came out and said, “Look at all the ways that AI is different from nuclear regulation.” Yes, of course, that is easy, but it is a metaphor. It is not that you are saying the two things are the same.
The most important things that are the same actually stand in contrast to other possible models. Unfortunately, some people who were again trying to avoid regulation wanted to make it like the Intergovernmental Panel on Climate Change. They were like: "Oh, there is this new scientific phenomenon we have discovered and we need a lot of scientists to write a lot of reports, and then we as politicians will have no obligation to do anything for decades." The climate can no longer take that either.
But AI is not a science. It is a discipline under computer science. There are things about what is feasible to compute, which is sort of a science, but more there is this engineering perspective. I know this is ironic from where I started, about why I left America, I did not want to stay there, but the products are engineered. That makes it much more like nuclear power. Nuclear weapons and nuclear power would not be there except for human actions, and the same is true of AI.
The climate is out there. The ecosystem is out there. The difference between the impact on governance and the extent to which policy can be extractable from governance is based on the fact that this is an engineered artifact. That is one way that it is the same.
The other way that it is the same is that America took an early lead. At the time nuclear power started, one-half of the world's Gross Domestic Product was being produced by America, and now it is more like 24 percent. But one area where America still accounts for about one-half of at least the market capitalization is the AI and digital area. So they, for very strong economic and security reasons, want to keep having control and not give it up.
Both with climate and with atomic energy, America originally dragged its feet but then realized that the best thing it could do was lead through engagement. This is diplomacy. You give up a little bit of control in order to have a better cooperative outcome.
ANJA KASPERSEN: And through engagement and standards.
JOANNA BRYSON: Yes. The current thing they are saying they want to lead, the Global Partnership on Artificial Intelligence (GPAI), was sort of a spinout of the G7, but it includes India and is much larger, with 30-odd countries or whatever.
The Trump administration, which was in office at the time when the Global Partnership on AI was created, insisted that the United States would not engage with it unless it had no normative capacity. The European Commission, for example, has normative capacity, meaning that if it puts together some rulings, then other countries have an obligation to make laws that follow that kind of template. That is what they did not want the Global Partnership on AI to have.
Unfortunately, the steering committee took that as an excuse to just squash every effort to get engaged with governance at all. We were just trying to do things like make sure that those who are trying to write new laws know who in other countries that are also GPAI members are writing similar laws and trying to set up some kind of way to make sure those guys can talk to each other. Everything that sounded like it was remotely a form of governance was getting squashed.
I fought this for about three years and then I stopped. I am not saying that I totally walked away from it—the people who got me involved in the first place, some in the German government, would like for me to go back, and maybe I will—but I took a year off.
Anyway, a bit of this also comes out in the article that you saw. I guess I do blame people for it. "Bitterness" isn't quite the right word—I don't want to personalize it, but this stuff is important and we do need to find ways to collaborate.
It was great doing that article with somebody who was a nuclear historian, David Backovsky, to see how in the past quite often winning solutions have been built out of the embers of the losing solutions.
You are increasing competence and capacity and people are getting to know each other every time you try to put together some such organization, and then when there is really an opportunity, when there is a crisis or something, you at least have some scaffolding to build from.
ANJA KASPERSEN: Joanna, building on what you just said about governance and the importance of regulation, you have also been speaking a lot about the importance of scientific and engineering integrity.
There is a lot of talk about “responsible” AI and “trustworthy” AI, both terms that I personally have big issues with because I think that is another level of anthropomorphizing and also deflecting the focus away from the people and the organizations behind them and making this about introducing fallible, immature technologies into the market when the market is not ready.
You hear a lot about major tech companies, especially lately, announcing that they have invested revenues into responsible AI, which makes you think about the revenues that are being invested in “irresponsible” AI.
What is your take on this? What are we seeing from tech companies right now? Where do you see the trends moving in the AI research field and what we can expect from companies?
JOANNA BRYSON: First of all, you cannot hold AI responsible, because the way that we hold other moral agents responsible is through penalties. As I mentioned before, we humans and other animals do not want to be isolated, to lose face, or to lose money, and AI is not that kind of thing—so you cannot hold AI responsible; it makes no sense to talk about AI itself being responsible.
We can talk about the responsible use of AI and we can talk about whether it is responsibly crafted. So I am hoping we are seeing a migration. One of the things I am trying to migrate right now is the alignment problem. Dutch philosophers have this idea: “We will just capture what we think is ethical and then we will build it into the AI system, and then the AI system will be some kind of straitjacket that will keep us from behaving immorally.”
No. First of all, that is not how ethics works at all. We are constantly trying to improve and we have to adjust to our context. That sounds negative, but it is just true. We keep trying to figure out new ways to be fair and it is partly afforded by what we are able to do.
I think the alignment problem is making sure that the systems do some combination of what their developers and their users want. We have to realize that technology is not only an expression of the users’ intentions but also what the corporation allows the users to do. That is becoming more and more true, and this is one of the reasons governance is changing.
In terms of trends, I was just at a Chatham House Rules meeting. It is fantastic. Most of the corporations are responding to obligations.
TikTok has now finally set up an office in Europe. It has to or else it can’t keep running here. So we have someone to call if there is a problem, which we did not used to have.
Facebook and various other organizations are creating modular components in their organizations so that they can comply, so that they can answer the questions they are being asked. That’s great.
Unfortunately, I guess we all know that Elon Musk bought a company and then stripped its capacity to comply, but I think we will see it also comply.
In general, I think we are seeing a lot of organizations that do want to do the right thing and now the right thing has been spelled out and they are working with us.
On the other hand, I do think that there may be something—I was reasoning about it, and I did not see any other way to do it, and I have heard this mentioned—we may have to do something like the Chinese did and block services that are not compliant with the law. So, we may get to the point where services are actually excluded from the European Union if they are not respecting, for example, citizens’ privacy adequately.
ANJA KASPERSEN: Do you worry about the power concentration embedded in what you just said, with the companies?
JOANNA BRYSON: Within the European Union?
ANJA KASPERSEN: No. The power concentration that these technologies bestow, who owns them vis-à-vis those who use them.
JOANNA BRYSON: I think it is important, and we are still working on getting that right. One of the great things Jack Dorsey said after he sold Twitter was: “We should not have built so many tools to control our users’ experience. We should have built more tools to help the users control their own experience.”
I think people are getting that right so there isn’t too much power in one place, but also getting governance right, so where there is power there is adequate regulation. There is this conflict, that if you do need something to stay that big and to not have market forces so people cannot operate as individuals choosing what they want, then you need to have your government more involved in the governance of that company too, like setting prices and making sure they are complying—so that you get this reward of being this giant, important company and the penalty of having to work more with a legitimate government representing the interests of the people who are your customers.
I think it is complicated when you go into countries where there is not a legitimate government or the government does not care about its legitimacy and does not care how the citizens feel toward it—if such countries really exist, but some of them definitely do not care about legitimacy with respect to some parts of the population—and that is a huge problem. Then you have to say, “Then who regulates the technologies?”
But, as long as we have strong liberal democracies—and it does not even have to be a liberal democracy, just people who care about the wellbeing of humans, of minorities, of others, of the planet, and there are enough of them with enough wealth that they attract the attention of the corporations—then I think we have a hope of harnessing this whole thing.
We have obviously already made mistakes and we have suffered losses—warfare, ecological devastation—but it isn’t too late. It is too late for some people who have died, and that is terrible, but also this is the way it has always been. We do keep trying to do a better job, and I think we are on a good path for it right now.
ANJA KASPERSEN: What a great upbeat note to end this conversation.
Thank you, Joanna, again. This has been a very enlightening conversation that has spanned many different fields.
To our listeners, thank you for joining us, and a special shout-out to the dedicated team at the Carnegie Council for making this podcast possible.
For more on ethics and international affairs connect with us on social media @carnegiecouncil.
I am Anja Kaspersen, and I genuinely hope this discussion has been worth your time. Thank you.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.