"I think it's a real tragedy that we are stumbling into a world where we are essentially resigned to the power, invisible or acknowledged, of a few major players that have emerged only in the last 15 years. It's a further tragedy if they will be defining what the future of this technology looks like....Instead, we need to think of a whole range of alternatives that define something between these very privatized interests and more public ones."

JULIA POWLES: Thank you very much. It's great to be here with the "Three Musketeers," as Joel put it [referring to the Uehiro Foundation, Oxford Uehiro Centre, and Carnegie Council], and great to have the different perspectives that we've had over the course of the day.

I thought I'd take the chance at day's end to take us out of the rivulets that we're so familiar with and that we so naturally tend towards in our conversations around the future of technology, and AI (artificial intelligence) in particular. I don't quite know why we end up in these streams. I think it is partly through familiarity, it's through expertise and intrigue—wrestling with the challenging questions of what the world's going to look like; but I think it's also got a lot to do with power.

In particular, I'm very struck that we have gone through a whole day of talking about AI and not mentioned any of the major players that are at the forefront of developing AI technologies—what their motivations are, how they influence each of us and the institutions we work for, the ideas that they represent. So I would like to spend some time on what's at stake and what is the state of AI and power today, how it impacts the questions we ask, and how we might think about it in relation to the questions that we should be asking.

I think the notion of "AI" has been inflated in current conversations to cover a whole range of technologies. It has certainly moved from some of the ideas at the origin of the discipline of artificial intelligence to really cover a suite of technologies that involve massive data sets and optimization protocols.

I stir up my colleagues about this. I'm cross-appointed between Cornell Tech, which is a tech faculty with a few rare specimens who come from other disciplines, and the law faculty at NYU School of Law. When I'm with computer science colleagues, I challenge them on many of the domains where they're talking about data-driven technologies and machine learning—I ask, "Isn't this just sexed-up statistics with a slant? Is this really some sort of mystical, magical thing?" I think there's a lot that happens in the way that we abstract from real technology to thinking about AI at the broad level.

So, if we look at the current prevalent applications of AI, they are basically in recommendation and matching algorithms, as well as in rather crude sorting and prioritization systems where you're always going to imperfectly fit the population that you're applying the solution to. Crucially, they are driven by the past. In many ways, in fact, they're running a loop on the past and calling it the future. They're just incredibly refined tools for pattern recognition.
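To make that concrete, here is a minimal sketch—the data, item names, and `recommend` function are all hypothetical, not any company's actual system—of the kind of co-occurrence logic that sits behind many recommendation and matching tools: it can only rank and replay what past behavior already contains.

```python
from collections import Counter

# Hypothetical reading histories: each inner list is one past user's items.
past_baskets = [
    ["news_a", "news_b"],
    ["news_a", "news_b", "news_c"],
    ["news_b", "news_c"],
    ["news_a", "news_b"],
]

def recommend(seed_item, baskets, k=2):
    """Return the k items that most often co-occurred with seed_item in the
    historical baskets. Nothing here models the future; it simply ranks
    patterns the past already contains."""
    co_counts = Counter()
    for basket in baskets:
        if seed_item in basket:
            co_counts.update(item for item in basket if item != seed_item)
    return [item for item, _ in co_counts.most_common(k)]

print(recommend("news_a", past_baskets))  # ['news_b', 'news_c']
```

However much more sophisticated the real systems are, the basic move is the same: they optimize over recorded behavior, which is why they fit any new population imperfectly and why they carry the past forward.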

There are some really amazing applications of these sorts of tools, of course. I think we all are impressed by tools around language translation and so on. In the health domain, which is an area where I have particular expertise and interest, there are image recognition tools that allow you to detect tiny lesions in images at a level that a human wouldn't be able to process (certainly you'd be exhausted before you identified them all), so you can mark up an image with 4,000 points where the human eye can only do four. So there are these sorts of things that blow our minds.

There are advances in game play, which is what captures the media attention. I also came to this as a journalist, and certainly in that domain the fact that a machine has beaten a human, I think, persuades many.

But I get alarmed when some of the leading lawyers and judges say to me, "Look, the profession is doomed; the robots will be taking over." I ask them, "What has so persuaded you that we're all toast?" They say, "Well, you know, AlphaGo [Google DeepMind's Go-playing software] beat the world's leading Go player . . ."

It's an extraordinary jump, I think, from the fact that these milestones—which have been set for some time in machine learning research around bounded domains and particular problems—have been reached five years before we thought they would be. Reaching them is still, I think, a long way from disrupting whole professions.

I think we should be conscious of the stories that we tell about the technologies that are here. I'm particularly conscious—and I'm guilty of this myself—that when trying to interest an editor or writing a piece, I sort of fall into this trap of saying, "We're already in this world where machines and algorithms are doing all of these things, they're deciding who gets jobs, they're entangled in our lives in all sorts of ways"—and I think in that process we romanticize these rather crude statistical techniques and tools. Even worse, we normalize them and we help make them inevitable.

By recognizing what technologies and companies we actually see out there, what we might hope for, and the powers that are pushing a certain view of the future, I think we can start to move away from some of the diversionary rivulets we end up in. We can go back to some of the questions that we don't actually ask—maybe because they're just really hard to answer, maybe because they're uncomfortable and we don't have the tools to ask them, or maybe—and I've seen this at the coalface in academia—because we realize it's just hard to get funded on certain projects where you're asking the questions that I think we should really take it upon ourselves to ask.

So I wanted to level fire at a few seductive diversions—mercifully, not all of them have been here today—that I think we get lured into and that are very tempting when talking about AI and the future. I'll talk about five of these diversions in particular: philosophical abstractions, rhetorical abstractions, the existential, the procedural, and the inversionary.

On the philosophical, we haven't had many discussions of the trolley problem, thankfully, but we also haven't had any discussions at all of "Les GAFA," as they get called—Google, Apple, Facebook, and Amazon. Then there is the search for the one problem—you know, "What's the one thing that we should think of when we design AI systems?" and so on. These are the sorts of abstract, parlor-game problems that feel very tempting, but that I think are diversionary philosophical abstractions.

I would bracket the papers that are asking philosophical precursor questions and which then lead to certain results. I think Carissa [Véliz, addressing what it would take for an AI to be a moral agent] gave an example in her talk about why it matters to ask these questions that are precursors to certain consequences about how we might want to make rules. Those sorts of questions I'm all for.

But I think that some of the academic and philosophical approaches that are more about talking about a toy world—because the power that's present in our real world is too hard—are a distraction, just like some of the trolley problem questions.

The rhetorical diversion that concerns me came up here in New York City, around an attempt to bring some accountability to automated systems. The city announced yesterday a task force that is going to be charged with thinking through options for bringing accountability to algorithms.

All of the experts I've talked to in academia and civil society talk about how we really need to help bring into reality a positive AI future. They say that inquiries like that of the NYC Task Force are going to be great. "People are going to understand about the systems that affect them, and we need to get people to understand about algorithms," they say.

Then when I ask, "Well, what's an algorithm?" I get from them this completely abstract, inane, and meaningless definition: "Oh, it's a process"; "it's sort of like a recipe"; "it enables certain results." I have to say, that doesn't really help us. I don't want to educate people about what that is.

I don't think people need any education about getting shafted, or disadvantaged, by systems. That's how we talk amongst ourselves about what these systems are doing. But in polite discourse this gets abstracted to "algorithms" and "automated decision-making systems" and all of these other complicated notions that I think divorce the conversation from the people who are affected, and from the systemic access and classification problems that we have had for a long time and that in many ways we entrench and reinforce in automated systems.

The existential diversion is to superintelligence questions. I think there's a very interesting play that happens there, which is, "Well, it's great to spend time and resources on that, because you don't have to be right in the course of our own lifetimes, and you can be on one side of the fence or the other about whether or not the robots will ultimately take over."

In this context, I find it very frustrating that the existential conversations don't recognize the companies that are part of the conversation and the individuals who would wish to live forever. I think the Zuckerberg dynasty [after the Facebook founder] will live forever, for example. Thinking, again, about the entities and the people who are involved in those conversations, I think there's already more than enough conversation from people who have the luxury to be concerned about such matters.

The procedural diversions are very interesting. So we've got all of these different entities that have emerged in the AI landscape. Wendell [Wallach, discussing moral machines] talked earlier about how he thinks we need multi-stakeholder solutions for AI, and there's this [corporate-led, self-regulating] Partnership on AI, and there are a lot of different efforts. When it comes to academics, I don't think I know any computer science groups in AI research that don't have people who have a halftime gig at one of the tech companies, which to me is super-problematic. If it were health research, by contrast, I think we would find that very difficult [i.e., for academics to simultaneously hold a position at a pharmaceutical, food, or tobacco company]—or if it were any other domain of science, actually.

But there is this idea that this technology is so sophisticated, I guess, that we need to have the companies always at the table. Having tracked the realpolitik of Internet governance over the last 15 years, I think there's a lot that you can learn about power in multi-stakeholder settings. The big one is that it's used as a way of disempowering state institutions because, the idea goes, states don't build technology. But my response is that states also don't build cars or drugs or food or all sorts of things, and we still think it's okay to have states regulate those things.

But the devastating trick of multi-stakeholderism is that it offers this promise that we will bring civil society representatives to the table—not necessarily with any representational legitimacy—and it comes with the catch that we will also bring the companies to the table.

The paradox of it all is that the primary reason we might not want states to regulate is because they get lobbied by companies. And yet the solution of multi-stakeholderism brings those lobbyists directly to the negotiating table! The result is that we don't really have any sort of international consensus on lots of questions, and there has been a disempowering of institutions that might have had some role—for example, in restraining state surveillance or in regulating privacy—because we have said these are solutions for states and multilateral institutions. I can say more on that in discussion.

I worked at one point at the World Intellectual Property Organization, an organization whose member states have great enthusiasm for regulating the Internet for the cause of protecting intellectual property rights; yet those very same diplomats cross the street to the International Telecommunication Union (ITU) and, as soon as regulation is mentioned, say, "Russia and China are going to take over the Internet."

If you want to talk about regulation, there are, I think, multiple modes of geopolitical discourse that happen, and we rapidly devolve into these procedural, substance-less discussions. I anticipate that if we put all of the power in the Partnership on AI and other multi-stakeholder arrangements, we'll get exactly the same result as we've had from 10 years of Internet governance, which is that everyone is so concerned with the process of being participatory that nothing substantive ever gets done. The egos get stroked along the way and you kind of keep everybody in the same room, but it ends up reflecting merely the self-canceling noise of participation.

The fifth diversion I want to speak to is this inversionary issue. I do lots of research with computer science researchers on some of these seemingly "progressive" tech topics—explainable AI, AI bias, and AI discrimination—and they are becoming increasingly wary of this whole domain of data-driven tools, because what happens, essentially, is that we define the world according to a particular model.

There are stories that come up all the time about AI bias—for example, a facial recognition system that doesn't accurately capture a certain discriminated-against group. There was a study from MIT in the last six months showing that a series of commercial facial recognition algorithms were not doing a very good job of capturing African American women. But the solution is rather perverse: we improve those systems so that they can better surveil the very underrepresented groups that we know will, in many cases, be discriminated against when we do!

I think the challenge with questions about AI bias and discrimination is that they define this narrow computational idea of "fairness" and "bias" and so on, and what they take us away from is a sort of bigger space where we can say: "Is the right solution to this social problem to build a computational solution? Do we preserve space for resisting and refusing these systems?"

I think we always need to have some caution about conceiving of a problem concerning part of a system as the whole problem (and this also connects to the explainability discourse: because you cannot explain some algorithms, does that mean no algorithms are explainable?). I think the bias problem, principally, is that we are learning from a historically biased and discriminatory world, and so it is inherent in algorithmic systems that learn from the world as it is that the algorithms, too, will be biased. But then you also get emergent bias; you get compounding bias problems.
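As a toy illustration of that inheritance—everything here is hypothetical, with made-up approval rates and a deliberately naive "model," not anyone's real system—consider a decision system that simply learns per-group approval frequencies from a biased historical record and is then retrained on its own decisions: the historical disparity is reproduced and locked in.

```python
import random

random.seed(0)

# Hypothetical history: group "A" was approved ~70% of the time,
# group "B" only ~30%, for otherwise comparable applicants.
history = [("A", random.random() < 0.7) for _ in range(1000)] + \
          [("B", random.random() < 0.3) for _ in range(1000)]

def learned_rates(records):
    """'Train' by memorizing each group's historical approval frequency."""
    return {
        group: sum(approved for g, approved in records if g == group)
        / sum(1 for g, _ in records if g == group)
        for group in ("A", "B")
    }

rates = learned_rates(history)
print("learned from history:", rates)  # roughly {'A': 0.7, 'B': 0.3}

# Feedback loop: new applicants are decided at the learned rates, and those
# decisions are fed back in as training data for the next round.
for _ in range(5):
    history += [(g, random.random() < rates[g])
                for g in ("A", "B") for _ in range(500)]
    rates = learned_rates(history)

print("after retraining on its own decisions:", rates)  # the gap never closes
```

This only shows the inherited part of the problem; the emergent and compounding effects mentioned above arise when the system's decisions also change who appears in the data at all (for example, rejected applicants generate no outcome data), which is harder to show in a few lines.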

Perhaps it would be useful to separate off the problems that have anything to do with AI at all, and then retain a space—and maybe even new institutions and gatekeepers—for being able to say whether or not we choose to apply these systems to those problems.

Another inversion that is very interesting, particularly coming from Europe to the United States, is an international one. At some point in any conversation about AI regulation in the West somebody inevitably throws up their arms and says: "But what about China? We're losing out! China is innovating ahead of us," and so on. I think that's just a total way of putting your head in the sand about what you can actually touch, by focusing on what you cannot.

It's the same in Internet regulation—you know, "Oh well, but China does this and that." Certainly the world is not reduced to just the United States and China, but this rhetoric also takes away from the fact that we have institutions and we have ways that we want to organize our lives; instead of talking about that, there's a lot of "Well, but what about them?" as soon as the spotlight comes onto you.

In sum, those are a few areas that I think we get diverted into. So now I wanted to just spend some time on five areas that I think we should be talking about a lot more.

The first is digital dependency. It was interesting that Kush [Varshney of IBM Research] was talking about AI safety earlier. There is a way in which we have to ask all those sorts of questions because we have already moved to the point where we are completely digitally dependent. We are part of an inevitable trajectory for technology because we are busy wiring ourselves and our cities to global communication networks.

As a result, a lot of the space for thinking about how you might act ethically is reduced, because we're just worried about staying alive when we've got systems inside our bodies, or carrying our bodies, or powering our cities, that are vulnerable. I think there's a sort of triaging of issues—"Who cares about privacy, for example, when we can't even keep a city running?"

So there is a digital dependency issue at large, and I don't think we are addressing in any way what sort of other problems it creates—not least the direction of travel of AI and the parties that will have the most influence over it.

The second issue we need to tackle head on is monopolies. I mentioned Google and Facebook. I find it extraordinary that we can talk about AI without recognizing that part of its popular appeal and the sense of inevitability is because, for better or worse, we are dependent on, and subject to, the immense power of just a handful of companies. Most Internet users outside China use these companies daily, and they have extraordinary information on us, dating back over a good portion of our lives.

What are the implications of those monopolies? I think we really need a radical reappraisal of what sort of data is held and what it could be used for. The "Data for Good" and "AI for Good" initiatives, I think, are for the most part merely a play to legitimize these data hoardings. They legitimize a certain presence of these companies in a way that assumes they will stay for all time, on the same terms; and then we hope that they might throw out some of the scraps of their data hoardings for good.

I think one of the ways that we might reappraise the state of digital monopolies is to question the state of data. I think the jury is out on the legal basis of those hoardings. Certainly, in any other domain of tangible or intangible assets, we would at some level say that you cannot retain for all time these incredibly valuable assets—assets you acquired and that don't have a legal status—without also saying that there are corresponding levies on their maintenance or use, or some sort of limited exclusivity or monopoly period, or other real restrictions on use over time.

I'm not sure that we actually have the tools for that yet. The status of corporate data hoards is somewhere between unjust enrichment, competition law, data protection, and intellectual property. But perhaps we need new rules, and we need enforcement of existing rules, to be able to think about monopolies and their role in defining our future.

That connects to my third point, which is about the balance between corporate and regulatory power. I am writing a piece at the moment about the General Data Protection Regulation (GDPR), the new European data protection law, which is going to come into force next Friday (May 25, 2018). There's a lot of hope about what it will do and a lot of fear.

The hope is that, after basically 20 years of a practice of ignoring the rules around data protection, the threat of large sanctions will mean companies now actually comply with what those laws say—which includes limitations on the use of personal data, people actually knowing how their data is used, and so on.

The fear, I think, runs on two sides. On the corporate side, it is that strong sanctions may actually be levied. On the side of privacy advocates, it is that upping the stakes with these greater sanctions will cut in two directions: it will create a sort of checkbox compliance culture—basically, it has just generated an industry for data protection lawyers—and it will make regulators much more cautious in enforcing the law, because correspondingly greater penalties and fines will attract more resistance and appeals.

At the moment, data protection is an operational cost of doing business with personal data, and companies are readily able to pay out hundreds of thousands in fines on the rare occasions they get penalized. But once you can be fined up to 4 percent of global revenue, as under the GDPR, I think there will be much more resistance.

It feels pretty frail in this situation to just hope—as I discovered in talking to Viviane Reding and to Jan Philipp Albrecht, some of the European architects of this law—that Facebook is just going to flame out somehow, or just going to change its business model in the face of the arrival of the GDPR.

I think that the reason we have to rely on hope is partly because collectively—and also perhaps individually—we don't have mechanisms for really caring about this and how we might organize and how we might think of alternatives.

The space for thinking about alternatives is surprisingly limited. I was pleased with the study that Yoshi [Hiroi of Kokoro Research Center] presented in a Japanese consumer context of actually saying, "Look, we have options for how the future looks and the companies that might exist and the ways that we might organize information given its great power."

But it tends to be the case, in academia and in the media, that most of the conversations about this have a "chapter 10 problem": people are very good at spending nine chapters telling you all the problems, and then they have this pretty pathetic chapter 10 where they spitball a few solutions.

There is very little effort to go further. At some level, everybody's solution is that maybe we need something like a Food & Drug Administration—a regulator that steps in before you deploy, at scale, systems that will have a dramatic effect on people's life chances. If 95 percent of people writing in this area think that to be the case, maybe we should put some resources towards actually mapping what it would look like. But there is very little effort at that sort of coordination of responses.

There are two final things to say, in addition to my points so far—that, in a state of digital dependency, we should be thinking about monopolies, about regulatory power, and about how institutions, individuals, and communities could be empowered.

I think one major issue that I find striking, and that we always ignore, is the energy cost of automation and computation. In the projected driverless future, I never hear anyone address what sort of energy cost autonomous cars involve, or what sort of alternative we might want to a world where our urban transit is defined simply by cars. What is the difference in environmental cost between a physical driver driving across the United States and a driverless, autonomous truck that has to be connected in real time at all times and has to have certain latency and redundancy built in? When 3 percent of global energy use goes on cloud computing, what does a future look like that is defined by artificial intelligence doing all sorts of things—in, it must be said, a crude and computational way, a statistical way, which shortchanges many people—as opposed to having humans do them?

The final thing, which is where I have put a lot of attention and where I would like to place more, is around just thinking about this technology from a public interest perspective with respect to public data and public institutions.

I actually believe there has never been a technology more realizable in the public interest than AI. You need longitudinal data sets, which are often held in public domains, and you need massive computing resources. These are the sorts of resources that could easily be invested in by public institutions, if only there were the will to do so.

I think it's a real tragedy that we are stumbling into a world where we are essentially resigned to the power, invisible or acknowledged, of a few major players that have emerged only in the last 15 years. It's a further tragedy if they will be defining what the future of this technology looks like. We often fall into this trap of going, "Well, but isn't it better that we have Google, Facebook, and so on, rather than nothing at all?" I think our comparison point shouldn't be nothing at all. Instead, we need to think of a whole range of alternatives that define something between these very privatized interests and more public ones.

Thank you.
