Freedom of Thought, with Susie Alegre

March 21, 2023 - 34 min listen

In this first episode, host Hilary Sutcliffe explores the freedom to think from another angle. We may feel that what goes on inside our heads is still our own, but international human rights lawyer Susie Alegre explores the startling ways our inner thoughts can be exposed and manipulated through the deployment of artificial intelligence (AI). She explains how freedom of thought, often considered the most fundamental of human rights, is being eroded; what this means in practice; and what we can do to protect our minds.

Alegre is the author of the award-winning book Freedom to Think: The Long Struggle to Liberate Our Minds. You can read her article "Freedom of Thought Is a Human Right" or browse the many broadcasts and articles on this topic on her website.

Listen to From Another Angle on Spotify | From Another Angle on Apple Podcasts

HILARY SUTCLIFFE: Hello and welcome to From Another Angle, a Carnegie Council podcast. I am Hilary Sutcliffe, and I am on the Board of Carnegie Council's Artificial Intelligence & Equality Initiative. In this series I get to talk to some of today's most innovative thinkers, who take familiar concepts like democracy, human nature, regulation, or even the way we think about ourselves, and show them to us from a quite different angle. What really excites me about these conversations is the way they challenge our fundamental assumptions. Their fresh thinking makes me—and I hope you too—see the world in a new way and opens up a whole raft of possibilities and ways of looking at the future.

Today I am delighted to welcome Susie Alegre, an international human rights lawyer whose focus is digital and neuro rights. She is a senior fellow at the Centre for International Governance Innovation, and we are going to talk about our freedom of thought from another angle.

Many of us are concerned about the influence of technology on our lives. We might worry about threats to our freedom of expression or our privacy being eroded, and what we say might be up for grabs to the highest bidder online, but at least our thoughts are our own. Whatever our thoughts, however different they are from what we would actually say out loud, at least we know they are ours and that no one can get at them or penalize us for them. What goes on in my head stays in my head, or so I thought.

Susie has written a powerful new book called Freedom to Think: Protecting a Fundamental Human Right in the Digital Age, a 2022 book of the year in the Financial Times and The Daily Telegraph. It shows that not only is our freedom of thought under threat but that we have pretty much given it away for free and lost it without even knowing we had it, but perhaps it is not too late.

Susie, welcome to the podcast.

SUSIE ALEGRE: Thanks so much for having me, Hilary. It is a pleasure to be here.

HILARY SUTCLIFFE: Before reading your book, Susie, I had not given too much thought to threats to what you lawyers call the forum internum, or what goes on in my head, but it seems that I should have. Could you start us from the beginning and help us understand: What is this human "right" of freedom of thought, and what does it practically mean for us?

SUSIE ALEGRE: Of course. The right to freedom of thought as a legal human right stems from the Universal Declaration of Human Rights, which is in fact 75 years old this year, so we have had a technical international legal right to freedom of thought for 75 years now. Before that, philosophers, ethicists, and others had talked about freedom of thought as a "natural" right, this idea that, Well, of course we have freedom of thought; no one can get inside our heads, and therefore it is a natural right. It is not something that can be taken away, so we do not need to worry too much about protecting it.

When the Universal Declaration of Human Rights was drafted in the aftermath of the Second World War, the drafters recognized that actually, yes, we do have this right, and we need to explain a little bit more about what it is and why it is important as one of the fundamental rights that make us human and that should be enforced. Even though it was written down in law, it is not surprising that you have not thought much about it because frankly not many people have. There is very little written about it, and until very recently very little jurisprudence or consideration was given to the right to freedom of thought as opposed to the right to freedom of religion and belief or the right to freedom of expression.

What is written about it and what is clear in the academic analysis and in the limited case law is that the right to freedom of thought, along with the rights to freedom of belief and freedom of opinion, has two aspects, if you like. One is the right to say what you think, which we see in freedom of expression, or the right to manifest our religious beliefs, which we see in going to church, praying, and performing religious ceremonies.

There is also an inner aspect to freedom of thought, religion, belief, and opinion, and it is that inner aspect, as you have mentioned, that is described as the forum internum. Academics have looked at that and asked, "Well, what exactly is freedom of thought?" As I say, when you take all these different bits of what goes on inside us, it is the whole idea of our inner lives, what we do on the inside—how we feel, how we think.

The external aspects of the right can be limited in certain circumstances. You have the right to freedom of expression until you use it to destroy other people's rights, and then it can be limited, according to the law. Similarly with religion, you have a right to freedom of religion, but you do not have the right to destroy other people's rights in the name of manifesting your religion or belief.

But what goes on inside our heads, what goes on inside us, is protected absolutely in international human rights law. Very few rights are protected absolutely. The other two most common absolute rights that give an idea of how it works are things like the prohibition on torture, inhuman and degrading treatment, and the prohibition on slavery. When you are dealing with an absolute right what that means is that there can never be a justification for interfering with the right, so you cannot torture someone in order to get a confession because that will help national security. You can never ever use torture. It is absolutely prohibited. The same goes for interfering with our inner lives.

Obviously there is a real question there: What does it mean? What does an "interference with our inner lives" look like? There are three key points to consider in that, although where the lines are remains to be seen. We will see it as case law develops, and we see it very much in case studies.

Those three key elements are: first, the right to keep our thoughts private, so nobody can force us to reveal what we are thinking; second, the right not to have our inner thoughts manipulated—clearly we always influence each other in our day-to-day interactions. It is my job as an advocate to persuade people that my perspective is the right one, but I hope never to actually fall on the wrong side of manipulation.

The third aspect is the right not to be penalized for our thoughts alone. Again, you might be penalized for expressing thoughts in a way that destroys the rights of others or that incites violence, for example, but you can never be penalized simply for either what you think or for what someone else thinks you think. That was one of the things that the drafters recognized as important in protecting our right to freedom of thought, that inferences about what we are thinking can often be as dangerous and may equally be an interference with our right to freedom of thought, whether or not they are correct.

One of the things that I looked at in the book was the history of witch hunts, which I think gives an example of why inferences can be as dangerous whether or not they are right. If the Witchfinder General turns up and decides that you are a witch because of your black cat and the wart on your nose, an inference that is clearly damning, it does not matter whether or not you have had a witchy thought in your life; you are still very likely to be burned at the stake as a witch. That is one of the reasons why I explored these historic examples: I think they give a very good view of what it means not to have freedom of thought and where it leads if we lose it.

HILARY SUTCLIFFE: Bringing us bang up to date with this, from witches to now, I was quite struck by the way in the book you picked apart the Facebook and Cambridge Analytica case study with those three elements in mind. Can you walk us through that? It had not struck me before how influential that was in ways that I did not actually read about in the newspapers.

SUSIE ALEGRE: For me Cambridge Analytica was the real light-bulb moment. Before that I had not thought about freedom of thought at all either. I remember very clearly reading an article about Cambridge Analytica before it became front-page news in early 2017, when it was connected to both Brexit and the Trump election. As I was sitting there thinking, Well, how could these things have happened, as many European human rights lawyers were, reading this article chilled me to the bone. While people were talking about data protection and privacy, both crucial things that I had worked on for years but had never felt so viscerally, I thought: This goes beyond data protection and privacy. Data protection and privacy are gateway rights, if you like, but this is about our freedom of thought and potentially my freedom of thought.

What struck me was this idea that it can be very subtle and that perhaps my Facebook feed had lulled me into a false sense of security because all my friends agreed that the referendum was pointless and that we would never leave the European Union, and therefore I did not see it as a major threat at the time. I did not go out and campaign. I just assumed that life would go on as usual and did not see this seismic change coming.

The idea that that might have been because of manipulation—whether or not it was, I am a Facebook user, it could have been, may not have been—the idea that that could have happened and could be happening to all of us or any of us at any time really, really disturbed me. That was the first moment. I thought: This is about my freedom of thought. This is about all our freedom of thought and how that then potentially impacts on our democracy.

When I started to look into it, as I mentioned earlier, there was very little written on the right to freedom of thought, so I assumed I must be barking up the wrong tree and that maybe I had missed something. But when I looked at it and saw, okay, this is an absolute right, that meant it was something significant that cannot be thrown into the balancing act we often see with other human rights, that question of weighing one person's right against another's. This is something where there can never be equivocation about it. That to me seemed powerful.

When I found these three elements and thought about them in terms of Cambridge Analytica, I realized that the service Cambridge Analytica was selling through Facebook was effectively, first, an insight into our inner thoughts, feelings, and characters to understand our psychological vulnerabilities, as individuals and en masse, and then the use of those insights and inferences to go back in and press our psychological buttons in a way that would manipulate us and ultimately manipulate our voting behavior—not changing someone from a Trump voter to a Democrat or a "Brexiteer" to a "remainer" or vice versa, but potentially changing whether or not you could be bothered to get off the sofa, depending on which way you were likely to vote and what kind of a person you were. When you are looking at very small margins in elections, that has the power to completely change all of our futures, including, as we have seen, whether or not we have human rights protections down the line in the future.

These are very important elections. Any election is crucial to our democracy. The idea that the results of elections could be manipulated through effectively artificial intelligence (AI) and data being used to infer things about us and use that to manipulate us was something that I found utterly disturbing.

As I say, it hits on those three elements. It is about getting inside our heads, understanding how they work, and using that to manipulate us in ways that may well be to our detriment or may well penalize us down the line. If you think about it as well, there is potential that this kind of inference can be used to literally stop people going into the voting booth in different ways.

We have seen developments on from Cambridge Analytica. Some researchers claim that just from a still photograph of you they can make inferences about your political opinions. If you think about that kind of technology being used just to scan your face going in to vote, you can see how dangerous that might be in terms of the problems that might suddenly be found in relation to you as a legitimate voter.

HILARY SUTCLIFFE: Yes, terrifying. It is interesting as well that from your book I learned that Obama used this sort of microtargeting first.

SUSIE ALEGRE: Absolutely.

HILARY SUTCLIFFE: So this is not some Trumpian conspiracy here.

SUSIE ALEGRE: No. That is one thing that is important. It does not matter who is using it. If it is an interference with our right to freedom of thought, it has to be banned. It can never be legitimate, and it does not matter if it is your guy who is using it or the other guy who is using it. It is not okay.

HILARY SUTCLIFFE: I think similarly there are some real shockers in your book on criminal justice, where very dubious technologies are used to infer things about lying, about guilt, and about all sorts of things. Tell us some more about those.

SUSIE ALEGRE: One of the cases that I looked at in the book was an Indian case where effectively neuroimaging of a woman on trial for murder was used to demonstrate that she was lying. I think it was the first conviction worldwide based entirely on neuroimaging as an assessment of the veracity of what she was saying in court. That conviction was later overturned, but it highlights how technology being used to read our minds in that kind of way—by looking at the colors as your brain whirs into action in response to a highly stressful situation—is potentially an interference with our right to freedom of thought, although it has not been dealt with legally as such yet.

That is one of the areas that I think is ripe for discussion and legal development, to see where the limits actually are to neuroscience looking at what is going on inside our heads, whether in a criminal justice system, an educational system, or an employment situation. Lie detectors are used very often in employment and security screening. I think there is a very big question not just about whether the tech works but about whether it is a legitimate technology or technique to be used at all.

HILARY SUTCLIFFE: Absolutely. Going into the workplace: I do quite a lot of work on trust, and during COVID-19 it was quite clear that employers did not trust their employees, because employees were surveilled and micromanaged down to every keystroke. Can you tell us more about what the future of the workplace might be, the implications for freedom of thought, and the use of this sort of technology in the workplace?

SUSIE ALEGRE: What we are seeing increasingly, before you even get into the workplace, is AI being used in recruitment to parse curricula vitae and decide whether or not somebody is the right kind of person. Rather predictably, this has flagged bias issues: suddenly, if you are not the right kind of person to be a CEO, maybe if you are a woman, for example, then you do not have the right profile when the AI looks through your CV and compares it to successful CEOs of the past.

What we are seeing increasingly is the use of AI video interviews, AI being run over a computer interview with someone to decide whether or not they are the right kind of person, reading between the lines. So it is not about what you say; it is effectively about your body language, the tone of your voice, and the kinds of words you use.

You do not need to be a genius to work out that that way lies discrimination, bias, and prejudice. Effectively the inferences being made about people based on their accent, the way they look at the camera, the way they hold themselves, all of which are often very much dictated by background, class, or whatever it might be, are the kinds of things the AI is reading to decide whether or not you are the right sort of person for the job.

Then, when you get into the job, we are seeing increasingly employers using AI both to monitor, as you have said, but also to push and manipulate people. What we can see in AI management techniques are "nudges," if you like, that keep people on the job all the time, that keep people nervous, and that keep people engaged, so that you are never quite sure whether you are going to have a job tomorrow, and that keeps you on the job. Things like technology that tells you to take specific routes, for example, may undermine your autonomy and your feeling of having control of yourself and your own life.

The monitoring as well. One of the examples that I put in the book was a company that introduced technology so that you would only be allowed into a meeting room if you were smiling, so you had to make sure you had the proper positive attitude before you could join a meeting. That is so Orwellian. Having reread 1984, that was exactly what George Orwell was talking about back in the day: when faced with a ubiquitous telescreen, everyone has to keep a completely fixed, expressionless face to make sure that no inferences can be made about any subversive ideas or any questioning that might be going on behind your eyes. This is what we are effectively living with now.

We are seeing it in schools too, with concentration monitoring of children, including potentially via a head-worn monitor so that lights can flash up to tell the teacher that you are not concentrating properly. When you think about what those things mean for us as human beings it really is chilling and horrifying, and it has a chilling effect not only on what you say and how you behave but ultimately on what you let yourself think.

HILARY SUTCLIFFE: Absolutely. Some of those are truly horrifying. I cannot even imagine—a lot of this for me is: Who is buying this, who is selling it to them, and how is this even happening?

I like as well what you were saying about the ability to keep your thoughts private, because this sort of intrusion shapes your thoughts. It shapes the way you actually approach the world. It changes everything. That this happens simply so that somebody can make some money is truly frightening.

SUSIE ALEGRE: It absolutely is. I think it goes back to that question of regulation and what should be allowed to be developed. There need to be limits on what can be done with technology.

I think as well it is important to bear in mind that for technologists themselves and entrepreneurs this kind of environment does not lend itself to innovation and creativity. Tech innovation that goes down this route will put itself on a leash.

HILARY SUTCLIFFE: Yes, absolutely, which brings us back to the law, your area of expertise. I do a lot of work on responsible innovation and trying to have organizations think in an ethical way, which has not been working wildly well, to be honest. In recent years I have actually come to see that the law is where we have to start.

I love your quote about human rights law as opposed to ethics. You call human rights law "ethics with teeth," and I like that. Explain to us about how ethics and human rights work together.

SUSIE ALEGRE: Ultimately human rights is the legal embodiment of ethics. It comes from ethics. If you study law, you have to study ethics as well.

Human rights law is a way of saying, "These are the very clear boundaries." That said, the way the law works is iterative, and where those boundaries are develops over time, both through looking at particular cases and saying, "This falls within the limits and this does not," but also as our societies develop, human rights law evolves with us. It is not set in stone.

I think that is something that is very different from the way, for example, the U.S. Constitution is viewed. When we look at international human rights instruments, particularly from a European perspective, they are absolutely viewed as living instruments. So it is not what the drafters meant; it is what those concepts mean today that matters, and that makes human rights law a very progressive way of dealing with ethics.

One example of that is the Isle of Man, where I come from, which was slightly unfortunately the home of a seminal judgment of the European Court of Human Rights. In the 1970s the Isle of Man was still using corporal punishment on young men convicted, generally, of violent offenses. They would be given the birch by police as a legal corporal punishment.

Corporal punishment was common across the Council of Europe when the European Convention on Human Rights was developed 20 or 30 years before, so the big question by the 1970s was: Is corporal punishment inhuman and degrading treatment or punishment? The Court decided that, yes, by the 1970s it was, because pretty much all Council of Europe countries had abolished corporal punishment in that time. While for the drafters it would have been a pretty much straightforward, normal part of life, by the 1970s it was not.

That is how human rights law has remained current and how we see its development being applied to technology in practice. It is one of the things we have seen around the right to private life, how that has been adapted and evolved, particularly through the use of the right to protection of our personal data, to make it relevant to the 21st century. We are only now starting to talk about the right to freedom of thought in that context, but human rights law has the power and the ability to adapt to new situations and create new legal frameworks that we can all use.

HILARY SUTCLIFFE: I am quite suspicious myself sometimes of this proliferation of ethics frameworks, which are sensible in many ways, but could they sometimes be an evasion of responsibility or a distraction from looking at these things in a legal way?

Also, it is interesting to look at how concepts like "fairness," which are central to the ethical debate, have been debated for quite some time in the law and in human rights law, so we do not have to reinvent the wheel. We do not have to have new special rights.

SUSIE ALEGRE: Completely. One of the things that I am a bit cynical about, like you, is the development of ethics frameworks, particularly in relation to AI and technology. Are they being used as a way of saying, "There is nothing to see here; we do not need to worry about the law," and to say effectively: "There is no law. We can just ignore that and make up a new thing"?

While I recognize that there can be problems with the law and that what you do not want is a massive proliferation of expensive litigation about everything, what you do want is a legal framework that makes clear, and helps you think about, what should never be allowed: Where are the lines you should never cross?

In my view international human rights law gives you that basis. It does not throw up a huge amount of potential for high-value litigation, but what it does do is create a legal framework, particularly to push governments toward regulation, because one of the important things about human rights law is that it places a positive obligation on the state as well as a negative obligation. The state cannot trample all over our human rights and cannot violate our human rights by the actions it takes, for example police using AI to predict criminality and then convicting people on that basis.

It also creates this positive obligation on the state to prevent us from harming each other, including preventing corporations from harming us, and to protect us from dangerous environments. One of the examples I used in the book was the European Court of Human Rights finding that Romania had violated a woman's right to physical integrity because it had failed to deal with the wild dog problem in Bucharest, and as a result she had been attacked by a pack of wild dogs and seriously injured.

Clearly the Romanian government did not set the dogs on her, but they knew about the problem and failed to act. I think that is a very useful perspective when thinking about future technology and when thinking about developments in AI. When we are aware of something dangerous coming down the track, it is on governments to make sure that they regulate to protect us from it.

HILARY SUTCLIFFE: I think now they cannot say that they were not told because you and a number of others have made clear what is going on.

Give us some hope. I am pleased to see that at the end of your book you point out that there is a lot of work going on, not just on privacy but also in this new area of freedom of thought, and that individuals and our governments, particularly in Europe, are now beyond that stage of, "Oh, my goodness, what are we going to do," and are doing some concrete things. Do you have any hope for us, from the governance point of view, the law point of view, and also from our own point of view of what we can do personally?

SUSIE ALEGRE: I do have hope actually. At the end of last year I wrote an article in Wired's "World in 2023" issue predicting that 2023 would be the year we started to see the end of what Shoshana Zuboff has called the "surveillance capitalism" model, particularly around the targeted advertising on which Cambridge Analytica and other similar things were built. What we saw in the first week of 2023 was a European Data Protection Board ruling saying that the targeted-advertising business model of Facebook and Instagram is actually unlawful under European law.

That is huge. That immediately shakes the foundations of the driver behind tech that surveils us and manipulates us. It is taking away the money potentially, which I think might change the direction of developments in technology. So we are seeing shifts. I could not believe that it happened so fast and that my predictions were coming about so quickly.

Also, as you mentioned, there are so many people doing incredible work on digital rights. It is a space that is developing very fast, and we are seeing people taking cases to court all over the world to challenge these issues but also raising awareness. For me, the reason for writing this book was to get the message out to the general public and to people who do not know about human rights, do not know why they matter to them, and do not really understand how the technology that we are all using every day might be impacting on their human rights now and in the future.

One of the triggers for me was actually looking at it from a personal perspective, using tools to find out how Facebook or Twitter sees me. There is a website called applymagicsauce.com where you can enter your social media data and get a reading of the inferences it makes about you.

I was quite thrilled, looking at Twitter, to find out that I was a 30-something man, which probably indicates that I use Twitter very much for professional reasons and therefore am not giving away too much about myself there. But even though it was completely off-beam, it makes you realize what assumptions are being made about you and how those might be used against you, and it gives you an idea of the kind of data trails you are leaving.

Another thing for me was to look at my own phone use. I have written an entire book about this, and I still struggle with my own phone and have to put limitations on it. I use an app actually called Freedom to stop myself when I need to focus: I switch it on so that effectively I cannot look at any of the apps, I cannot check my emails, and I cannot have a quick gander on Twitter. It is important to identify tools that you can use to reduce your dependency on tech and to understand how you engage with tech, but it is also important to find out about the organizations that are doing the work and support them, because there is incredible work going on out there. Honestly, the first five years of work I did on freedom of thought were completely pro bono. Organizations and activists need support to do the work to change our futures and to make our governments sit up, take notice, and make the changes we need.

HILARY SUTCLIFFE: On that fantastic note, thank you very much, Susie Alegre. Good luck with your incredibly important and interesting work. We hope things progress as you predicted, or even faster. Thank you very much indeed.

SUSIE ALEGRE: Thank you.


Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
