Image via www.vpnsrus.com

Making AI Work Ethically and Responsibly, with Heather M. Roff

October 7, 2019

Heather M. Roff, senior research analyst at the Johns Hopkins University Applied Physics Laboratory, argues that some researchers are having the wrong conversation about AI. Rather than agonizing over whether AI will ever become a moral agent, we should focus on how to program the technology so that it is "morally safe, right, correct, justifiable." What are AI's practical uses today, and how can it be used responsibly in the military?

ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.

This week I'm speaking with Heather M. Roff. Heather is a senior research analyst at the Johns Hopkins Applied Physics Lab. She is also a fellow in foreign policy at Brookings and an associate fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

Heather and I spoke about her article "Artificial Intelligence: Power to the People." This appeared as part of a roundtable in the Summer 2019 issue of our quarterly journal Ethics & International Affairs. We talked about some misunderstandings that she sees in conversations about AI, how the technology should be used in the military, and how to program AI to act in an ethical manner.

A couple months ago, I spoke to Mathias Risse, who also wrote for the journal roundtable. As you’ll hear, Mathias and Heather differ on some key points when it comes to AI, namely whether AI will ever be a truly moral agent. I strongly encourage you to listen to the podcast with Mathias before or after listening to this one.

And for more from Heather and Mathias and any other journal-related content, you can go to ethicsandinternationalaffairs.org. And you can go to carnegiecouncil.org for my podcast with Mathias. We also have a lot more on AI, including a podcast from last winter with bioethicist Wendell Wallach on the governance and ethics of AI.

Special thanks to Adam Read-Brown, editor of Ethics & International Affairs, for his help in setting this talk up and formulating some of the questions.

For now, calling in from Boulder, Colorado, here’s my talk with Heather Roff.

ALEX WOODSON: Thank you so much for speaking with us today. I'm looking forward to this talk.

Just to get everyone on the same page, we're going to be talking about your article, "Artificial Intelligence: Power to the People," for the Ethics & International Affairs journal for their Summer 2019 issue.

To get going, I was hoping you could give us your definitions of automation, autonomy, and artificial intelligence (AI), because I know you talk a lot about that in the article and how some different people view them. What are your definitions of those terms?

HEATHER ROFF: Sure. To start, the distinction between automation and autonomy really depends on how you view autonomy. Automation is typically some repetitive task, some sort of mechanization of labor that is done within a specific scope, and it can't really go outside of that. Any sort of mechanization of labor, from the steam engine to rail cars, doesn't require any computational architecture behind it to be automated in that way. It wasn't really until the 1950s, when we were seeing the rise of the automotive industry, that we even got the word "automation," but that's not to say that we hadn't had mechanization of labor before that in a way the automotive industry capitalized on.

I think for "autonomy" it really depends on how you want to think about autonomy. I come at this from working for years in the space of autonomous weapons, and so there's a wide variety of definitions around autonomous weapons and what kinds of faculties they need and what they're supposed to do and who, what, where, and how. So, if you think of autonomy as just doing something without human intervention, much like the U.S. Department of Defense's (DoD) definition in its directive and in its policy, then the distinction between automation and autonomy starts to break down.

But if you take a more robust approach to autonomy, something like the Defense Science Board's approach really says that it's a cognitive capacity; it involves all kinds of reasoning, perception, and planning. It really says that autonomous action requires artificial intelligence—and to some extent almost human-level artificial intelligence—to be truly autonomous. That's a much more robust definition.

For me, autonomy is about the ability to undertake some sort of activity or task by one's self, even if that's only part of a larger task. You can think about nested skills in a larger task, but you have to be able to do something on your own, and you do have to have some sort of freedom of action to do that, even if that freedom of action is quite bounded and narrow, if that makes sense. That's how I view autonomy.

As to artificial intelligence, for me "artificial intelligence" is a suite of computational and information-processing techniques that is goal-oriented. It's looking to pursue a goal or an end, so it's goal-driven in nature, and the system has to possess the requisite means. Maybe it's a physical task and it needs robotic manipulation, or maybe it's a cognitive task and it needs planning capabilities or classification capabilities. Whatever the means it needs to pursue that task, it also has. So the intelligence is a matter of how well it appropriately functions and undertakes its goal-directed task. If it doesn't function so well, it's not that intelligent.

ALEX WOODSON: Your article differs from some other views that are out there in terms of AI, and we'll get to some of that specifically. But just for you to state it, what do you think we're getting wrong in the discussion about AI, we as in the general public, we as in the academic community? What do you think we're getting wrong when we talk about AI?

HEATHER ROFF: I think one of the biggest problems I see happening is that people overestimate the capabilities of an artificial intelligence agent or system right now. They either think that this is magic or that the system itself is much more intelligent and capable than it really is.

Because of that they quickly spiral into attributing agency to the system, into having discussions about artificial general intelligence and superintelligence and whether or not they need to be accorded rights and duties and status, and I just think that is not only putting the cart before the horse, it is putting the road before the cart before we even have a horse.

I think there are so many things we have to work out just from the technological side before we even get there. We can ponder what the meaning of life is and what constitutes the mind and what constitutes agency, and moral agency in particular, but that has nothing to do with artificial intelligence. We've done that for millennia. What is philosophy of mind? Moral philosophy? These are the same questions that we've been pursuing for millennia, and it doesn't mean that now that "AI is here" this is suddenly a pressing issue. We can think about it in terms of some hypothetical AI, but it's not going to change the way in which we do philosophy, and the technology is nowhere near ripe enough to even start having those discussions.

That's one side, I think: really attributing capacities and capabilities to these systems that they don't have.

The other side is the belief that AI can fix anything—sprinkle a little AI on the problem, and the problem will be solved. I think that is equally short-sighted, and in some instances it's also dangerous. The fact is that AI is suited to specific kinds of problems in specific domains, and not all problems are well-suited for AI. We should have a very good understanding of what sorts of problems and what sorts of tasks, as well as what sorts of environments, we want to put particular agents in, whether those are machine-learning agents, deep-learning agents, or multi-agent systems.

I think there has been an over-ascription of capabilities, and there's a misunderstanding of what you can and cannot do with these systems.

ALEX WOODSON: You said this misunderstanding can be dangerous. What specifically would be dangerous about this misunderstanding of AI? What are the specific issues that you see right now based on this misunderstanding of AI?

HEATHER ROFF: I think you can think of it this way: there are societal concerns about using AI-enabled tools to allocate certain benefits. There was recently a case in Kentucky where they used what they called an "automated tool"—and I'm not certain whether the court documents will reveal whether or not it was an AI system—to dole out disability benefits, and the court found that the people who were denied benefits were denied unjustifiably, based on whatever categories this system was using, and that the system was quite opaque. Think also of using AI systems in places where you're talking about the allocation of benefits or the distribution of punishments, such as the ProPublica piece on recidivism scores and parole. On the societal level, over-relying on those types of AI systems to make decisions can be quite dangerous for the fabric of society.

I also think that over-relying on AI systems as decision aids to tell you the best course of action can be dangerous, depending on what that tool is telling you to do and the context of its use. If the context is very murky and very broad and we don't really have a lot of data, you can think of something like armed conflict: "I want an AI system to tell me what is the best course of action"—or COA in military-speak—"for this particular mission?"

I have to have a really, really robust data set. I have to have a pretty good understanding of what an adversary will do, so I have to have data on what adversary behaviors will be, and then I have to have a general context and situational awareness that military planners would use to say, "This is the course of action that we need to pursue," and there are all sorts of higher-level objectives and all these types of [inaudible] and nuanced types of information.

I'm a little bit hesitant right now to hand over those types of decisions to machines because, as we know, armed conflict is very fluid, very politically driven, and it changes very rapidly; in some instances it's a rare event, so we may not have the kinds of data necessary for the situation at hand. You might have a lot of data about other situations, but not about this particular situation, so you'd be getting a bad recommendation.

Those types of systems I think are potentially dangerous. There's a wide breadth here. You've got to know what you're looking at and what you're talking about, and be able to measure it, in order to tell the system.

ALEX WOODSON: I know lethal autonomous weapons and the use of AI in war are things you've researched. Is there a good use of AI when it comes to defense, when it comes to military purposes, that you can see right now?

HEATHER ROFF: Sure. Again, I think of AI as a very broad suite of information-processing techniques and capabilities. The Defense Department is going to use AI across the Department, which, when you think about how big the Department is—3 million employees, the largest employer in the United States—means they're going to use AI in a wide variety of applications. It's not just war fighting or weapons.

Using AI in back-office applications; using AI to get through the backlog of security clearances, which now take, I think, on the order of 16 months from submission to clearance; using AI to do planning and logistics for various types of operations; using AI for preventive maintenance—do I have a lot of data on when a particular platform or system will fail, so that I can maintain that system before it fails and reduce costs and potentially lethal consequences for war fighters? I think there are a lot of really great applications the DoD can use AI for.

I also think there are some circumscribed areas for the use of autonomy. If it's lethal autonomy, that's a different discussion, but if we're talking about autonomy that is AI-enabled—we can't have systems that are autonomous that don't have AI on them, number one—and if you do have an autonomous system with AI on it, you can have a nonlethal system such as the Sea Hunter, and then you can have systems that do have lethal capacities and lethal effects.

I think that is fine as long as those systems are directed by a human commander; the commander has made the proportionality calculations and ensured precautions in attack; the weapon system has been sufficiently reviewed by competent authorities—it has undergone weapons reviews—and it's a very narrow kind of tasking: it attacks these things over here, and the target area has been demarcated by a commander who says, "From this parallel to that one, from this longitude to that latitude, in this box I have good intelligence that there are tanks, and we will send an autonomous weapon system to attack the tanks in that area." I think those types of systems can be useful, and given the right type of testing and experimentation and verification and validation, they could potentially reduce civilian casualties.

As for machine-learning systems that continually need to learn online and be updated, right now, technologically, I would not feel comfortable fielding those types of systems. We just don't have the kinds of testing and evaluation, as well as the verification and validation, to do online machine-learning systems. Anything that has a lethal effector attached to a machine-learning platform, I would start to feel a little bit worried about using in any conflict, because it may learn outside of its parameters, and it may be too late before you know what it did or why it did it. I also would definitely not put artificial intelligence anywhere near the decision to launch nuclear weapons.

ALEX WOODSON: Based on your article, you argue that moral machines will never exist, that AI will never have ethics. So I would assume you think that lethal autonomous weapons should never really be developed or employed on a battlefield.

HEATHER ROFF: A weapon is a tool. My argument was that you can never give these systems moral agency, that artificial intelligence agents are not moral agents.

That's not to say that they can't be employed maliciously or ethically; sure they can. The person, the moral agent who decides to build, field, develop, and employ a system, is the responsible party here. Humans are the responsible parties. The system itself is not morally responsible for anything because it's a tool. It would be like saying I need to put my Roomba on trial for murder because it sucked up my cat or something. It flies in the face of common sense. The article is arguing: do not ascribe moral agency to these systems. They are tools. They are computational tools.

In that vein, you can create such systems—we already have autonomous weapons systems, to be frank. Israel has the Harpy and the Harop. We have various types of systems that I think the DoD would consider autonomous under its definition, but within its policy it has circumscribed them and said the policy does not apply to these types of systems, though they would be definitionally or ontologically considered autonomous.

We already have autonomous systems in that way, and so far, at least within Geneva and the discussions around prohibition around lethal autonomous weapons, those discussions have somewhat been muddled because the Campaign to Stop Killer Robots has said that they are not concerned with existing weapons systems, they're concerned with future weapons systems.

But when you start to crack open what that means technologically, all the existing systems now that we have that are autonomous, those are off the chopping block, but now we're talking about new systems. What would be on the new system that may not be on the existing system that you would regulate? Would it be machine learning? Would it be computer vision? Well, some of our cruise missiles have computer vision on them. They do terrain mapping.

There's just a lot of confusion about what types of systems are permissible or prohibited and which ones give us pause morally. My personal feeling is that the systems that would give me significant pause morally are online learning systems deployed in the field that cannot be adequately tested, verified, and validated, and that may also engage with other learning systems and learn these kind of odd behaviors that no one understands or can even observe until it's too late, not now.

If you can fix that problem, and they're also not online learning systems—you say, "Look, I'm going to create a system that's no longer learning. I'm going to teach it, it's going to learn, and then I'm going to freeze it, and it's no longer going to learn when it's deployed"—that's one solution to that problem. Then again, having anything to do with AI around decisions to launch nuclear weapons I think is just a bad idea.

ALEX WOODSON: I think I'd agree with that.

Moving beyond the military realm, based on your definition of AI and how you think about AI, what are some of the best uses of AI right now that you're seeing?

HEATHER ROFF: I think some of the research going on that's really hopeful—and this is not to say that anything is going to completely overturn the applecart—is around using AI for planning. You could do a lot of really interesting work with logistics and planning for something like humanitarian operations. I think that has a lot of utility.

I also think AI for medical applications, using image classification for diagnoses and things like that, has a lot of interesting and potentially beneficial applications. However, I would caution that what we're seeing in that realm right now is great, but it hasn't been robustly tested in any sort of clinical trial. So, while we see the potential for good, we need to do more clinical evaluation on that side.

I also see some really great uses of AI for tailored education, thinking about how you teach kids, what you teach them, and tailoring the kinds of content to the user. That could be really great for kids who have learning disabilities or attentional issues, or just for personalized education. That said, it also requires really good user-interface design. It's not just the AI picking up on your patterns of behavior and what you're missing or not.

Thinking about beneficial AI applications coming down the pipe, those are all going to be married to other stuff. It's going to require user interface design, it's going to require hardware, it's going to require computational power. There are all sorts of things that come with AI. You can't just say AI.

I think those kinds of applications would be great, and I'm hopeful that in the future someone will figure out self-driving cars and that my current seven-year-old daughter will never, ever have to drive a car when she turns 16.

ALEX WOODSON: I think that's a good plan.

You write in the conclusion of your paper that if we want AI systems to carry out their tasks ethically, then we need to train them to do so. How do you train AI to act ethically? Is this about algorithms, or is this about something that I'm not thinking of? I'm not too technically trained on these things. How would you train AI to act ethically?

HEATHER ROFF: When I talk about training, that's usually referring to machine learning. Expert systems are rule-based, so the programmer will have figured out how to hand-code what we would call "knowledge representations" from the get-go. Whatever the ethical dimension or behavior, we would say, "We want the behavior to be X," and then the system would work like a decision tree: if you get into Situation A, do X.
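
To make that contrast concrete, here is a minimal sketch in Python of a hand-coded, rule-based approach of the kind she describes; the situation and action names are hypothetical placeholders, not anything from a real expert system.

    # Minimal sketch of an expert/rule-based system: the designer hand-codes the
    # "knowledge representation" up front as explicit situation -> action rules.
    # The situation and action names below are hypothetical placeholders.
    RULES = {
        "situation_a": "do_x",          # "if you get into Situation A, do X"
        "situation_b": "do_y",
    }
    FALLBACK = "defer_to_human"         # what to do when no rule matches

    def decide(situation: str) -> str:
        """Look up the hand-coded rule; nothing here is learned from data."""
        return RULES.get(situation, FALLBACK)

    if __name__ == "__main__":
        for s in ["situation_a", "situation_b", "situation_c"]:
            print(s, "->", decide(s))

The design choice is the whole point: every behavior the system can produce was written down by a human ahead of time, which is what distinguishes this from the machine-learning case discussed next.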

For machine-learning systems you've got to train them to do their actions and to learn their behaviors. When I say "train them and train them ethically," it means what kinds of data are you feeding that system: Have you done a sufficiently good job figuring out the provenance of your data, figuring out the representativeness of your data?

You might say, "Sure." If this is a representative sampling of the data, you might go, "Great." But that representative sample might be of people's behaviors toward minorities, and at least in the United States a representative sample could be terrible from a moral standpoint.

So you might say, "Well actually, I don't want a representative sample of the training data for that system. What I want is to bias the data in a different direction." We hear a lot about biased training data that results in biased outcomes, and there's algorithmic bias and algorithmic fairness, but in some instances you do have to bias systems in a particular way if you want them to behave normatively and not the way people actually behave.

So I think it comes down to understanding what kind of data you need for the task and what right looks like from the get-go, and showing that system what right looks like from the get-go. But you—the designer, the human—have determined what right looks like.

The system doesn't even understand the concept. Just because a system might be able to correctly classify—you can train it to identify images and classify cat versus dog—it has no comprehension of what a cat or a dog is. It just sees relations of pixels: when they look like this, I'm supposed to classify them as dog or as cat.
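
As a rough illustration of that point, the toy Python sketch below (my own example, not something from the interview) builds a "cat versus dog" classifier out of nothing but synthetic pixel arrays; everything it does is a statement about pixel statistics, with no concept of either animal anywhere in the system.

    import numpy as np

    # Synthetic "images": noise with slightly different mean pixel values per label.
    rng = np.random.default_rng(0)
    cats = rng.normal(loc=0.4, scale=0.1, size=(100, 32 * 32))  # fake "cat" pixels
    dogs = rng.normal(loc=0.6, scale=0.1, size=(100, 32 * 32))  # fake "dog" pixels

    # "Training" = storing the mean pixel pattern for each label (nearest centroid).
    centroids = {"cat": cats.mean(axis=0), "dog": dogs.mean(axis=0)}

    def classify(pixels):
        # Pick whichever stored pixel pattern is closest in Euclidean distance.
        return min(centroids, key=lambda label: np.linalg.norm(pixels - centroids[label]))

    test_image = rng.normal(loc=0.6, scale=0.1, size=32 * 32)
    print(classify(test_image))  # likely "dog"; purely a relation among pixel values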

These systems are really dumb. They don't understand concepts. They don't have any sort of computational ability to do the kinds of identification and cognitive thinking that humans do, and not just humans but other natural entities—higher-order mammals, even insects, can do all sorts of these types of things. Until we have systems that at least have episodic memory in the way that humans do, that can understand object permanence, basic levels of physics—there are all sorts of things that need to be in place to even begin to talk about them identifying what right looks like. We have to tell them what right looks like.

So, you train them to act ethically depending upon the task. But that's really, really hard because ethics and action-guiding principles are very nuanced. If you say, "I want a system to act justly" or "I want a system to act virtuously," or any of these types of terms that we use, they're not capable of doing that right now. What you have to say is, "I want the system to identify cats or dogs." That's pretty much what we're looking at.

If we go back to the case of armed conflict and you say, "I want a system to identify combatants," well, it really can't identify combatants in the sense of combatancy. What it can do is identify particular individuals who might be carrying certain types of weapons, identify particular vehicles that people are inside, or triangulate the location of a shot based on acoustic signatures. But it doesn't understand combatancy. I think in that respect we would say, "What does right look like?" Right looks like a positive identification of an M1 Abrams tank, and we know, within a certain confidence interval, that this system identifies the tank correctly 95 percent of the time.

The tank is a military object. The system can do that, and that's what right looks like; you say, "Right looks like taking out tanks." As long as we train those systems to do those types of behaviors within the bounds that we intend, that's what I mean by training a system to act ethically.
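
A hedged sketch of what that kind of narrowly bounded, human-set tasking might look like in code; the 95 percent threshold, the coordinates, and the function names are illustrative assumptions on my part, not a description of any real targeting system.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g., "tank"
        confidence: float   # classifier confidence in [0, 1]
        lat: float
        lon: float

    # Commander-demarcated target box (hypothetical coordinates) and confidence bar.
    TARGET_BOX = {"lat_min": 34.0, "lat_max": 34.5, "lon_min": 43.0, "lon_max": 43.5}
    CONFIDENCE_THRESHOLD = 0.95

    def inside_box(d: Detection) -> bool:
        return (TARGET_BOX["lat_min"] <= d.lat <= TARGET_BOX["lat_max"]
                and TARGET_BOX["lon_min"] <= d.lon <= TARGET_BOX["lon_max"])

    def engagement_permitted(d: Detection) -> bool:
        """True only if the detection is the authorized object class, meets the
        human-set confidence bar, and falls inside the commander-defined area."""
        return (d.label == "tank"
                and d.confidence >= CONFIDENCE_THRESHOLD
                and inside_box(d))

    print(engagement_permitted(Detection("tank", 0.97, 34.2, 43.1)))   # True
    print(engagement_permitted(Detection("tank", 0.80, 34.2, 43.1)))   # False: low confidence
    print(engagement_permitted(Detection("truck", 0.99, 34.2, 43.1)))  # False: wrong class

The point of the sketch is only that the bounds (object class, confidence level, and target area) are all set by humans; the system merely checks them.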

ALEX WOODSON: One of our senior fellows, Wendell Wallach, is interested in AI in governance. I believe he's organizing a conference related to that. I did a podcast with him back in February about that and some other issues. What are your thoughts on how we should proceed when it comes to AI and governance worldwide?

HEATHER ROFF: So, global governance or international governance, not national governance. It's a tricky question, and it's a sticky-wicket answer.

The first thing is that our international system is built on the assumption that states are the entities that make the rules, so states to some extent have to agree upon what the rules are. This is complicated in the area around AI because multinational corporations are very powerful players, and they're not states.

So, depending upon the types of governance structures we want, be those in economic fora or in military fora, we have to figure out who the relevant actors are, what they have to say, and what roles they're going to play. In the economic sphere I can see this being a much more permissive club for all parties; in the military sphere I do not. States will very much clamp down on that territory and not really care what a Google or an Amazon has to say about governance structures.

But in terms of how we go forward in forming them, I think we need to do more work identifying areas where the tech is mature and the applications are beneficial, and identifying where the technology is not mature and the applications have very high risks associated with them.

Then there needs to be a common understanding about those things, states need to start behaving in accordance with those understandings, and corporations need to press for corporate social responsibility. That's how we do this bottom-up governance approach. There might be top-down approaches to governance through international law, but I don't see that happening anytime soon; right now what we do see happening is a lot of bottom-up agreements around principles or codes of conduct. The Institute of Electrical and Electronics Engineers (IEEE), with its Ethically Aligned Design document, is attempting to stake out what the IEEE sees as guiding principles going forward and socially and morally acceptable standards. So maybe there's going to be a push for standards, which can then be adopted internationally.

I think it's that bottom-up approach that really has to happen, not only because of the structure of the international system but also because the technology itself is so diffuse, and it's so rapidly changing. What I say is AI, you may not think is AI. You might think, Oh, that's just plain old math. There's no magic behind that. That's not really intelligent. If you can't even start to agree on the object in front of you, governance becomes very difficult.

I see this happening all the time in DC. I talk to military leaders or I talk to congressional leaders. Half the time they think AI is something that it's not, and there needs to be I think a little bit more public education on what the technology is and what its capabilities are, and then we can talk about when it's right to use it and when it's not right to use it, and those governance structures and those partnerships and like-minded states and like-minded companies can engage in soft governance because right now there's no way the hard governance can happen.

ALEX WOODSON: My last question speaks to this a little bit. It's a question that Adam Read-Brown, the editor of the EIA journal, wants me to ask. It's a little more philosophical than I would normally get. I also asked Mathias Risse the same question. He has a very different view than you on AI; he used a term called "carbon chauvinism." I don't want to restate his views; the podcast is on our website, and I'll link to it. Nevertheless, he has a very different view from yours, yours being that moral machines will never exist.

The way I phrased the question to him is similar to how I'll phrase it to you: Is this a difference in philosophy as compared to your view? Is this a different assessment of the technology? How would you really define your differences, which are great, between yourself and someone like Mathias Risse?

HEATHER ROFF: I think it's both—our view of the technology and our philosophy. I don't think there's any sort of "carbon chauvinism" going on from my point of view. It's not that I'm claiming that humans have some magic quality. In fact, I think there are a lot of judgments we would make around moral agency, or at least around according a status of rights to various species and things like that, that have nothing to do with being human.

I think the differences between us come down to the fact that these are really dumb computational systems, and we have ascribed to them capacities that we don't even fundamentally understand from a philosophical standpoint. Until there is, I think, some better agreement on even what the philosophy of mind and consciousness tells us, I'd be hard-pressed to say, "Oh, that system that I don't understand, that I built and still don't understand, has that quality that I don't even understand about myself."

To me, this discussion goes: "One day there will be moral machines, one day they will have moral agency, one day they will be conscious."

"No."

"Oh, well, they will have to be embodied or they have to have a sense of pain, and we have to have a theory of mind and all of these things."

The theory of mind we have about ourselves is riddled with difficulties and paradoxes. I can assume that you are like me because I have some sort of access to my own mind. But if you go into cognitive science and neurology and neuroscience, that may not be true either.

So I feel like the more we know about ourselves, the better, and we can maybe apply that to ourselves. But once we start to ascribe it to something that doesn't think like us, doesn't reason like us, doesn't sense like us—and by "us," meaning humans—it's a different story. We could even say that bees and ants and other distributed intelligences exhibit complex behaviors like human behavior, but they also don't think or reason like us.

Again, I just find it a very odd conversation to have: "Well, this is going to be a moral agent," or "when it is a moral agent." How do you identify that? Is it going to tell you? You have no access to its theory of mind, and its mind is so fundamentally different.

In fact, right now the Defense Advanced Research Projects Agency (DARPA) is working on a project on a theory of mind: Can machines create a theory of mind of us? Not the other way around. Can machines, can artificial intelligences create a theory of mind of humans so that it can more effectively interact with humans in human-machine interaction and human teaming?

But that's not bidirectional. They're not working on whether or not humans can have a theory of mind of the machine that they're working with, which in fact is probably what's going to be required if we're going to have a real deep trust in these systems, and I'm very skeptical that that will take place, especially when you start to look at the technological failings of these systems.

Adversarial images and adversarial manipulation—it's very easy to fool or spoof or fake a system because it doesn't reason like us. With 99 percent probability it will say, "That stop sign right there is a speed limit sign. It says 'go fast,'" where the human would look at that and go, "That's a stop sign," no matter what pixel manipulation you have on it or stickers you put on it.
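
To show how little manipulation that can take, here is a self-contained toy version of an adversarial attack: a fast-gradient-sign perturbation against a simple logistic-regression "stop sign versus speed-limit sign" model on synthetic data. The setup is entirely my own illustration, not anything from the interview.

    import numpy as np

    rng = np.random.default_rng(1)
    d = 64                                    # pretend this is an 8x8 "image"
    w = rng.normal(size=d)                    # assume these are the model's learned weights
    X = rng.normal(size=(400, d))
    y = (X @ w > 0).astype(float)             # 1 = "stop sign", 0 = "speed-limit sign"

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    x = X[y == 1][0]                          # an image the model calls "stop sign"
    p_clean = sigmoid(x @ w)
    print("P(stop sign), clean input:    ", round(float(p_clean), 4))

    # FGSM-style attack: nudge every pixel a small, equal amount in the direction
    # that increases the loss with respect to the true label ("stop sign").
    grad_x = (p_clean - 1.0) * w              # d(logistic loss)/d(x) when the true label is 1
    eps = 1.5 * abs(x @ w) / np.abs(w).sum()  # per-pixel budget just large enough to flip the margin
    x_adv = x + eps * np.sign(grad_x)

    print("P(stop sign), perturbed input:", round(float(sigmoid(x_adv @ w)), 4))
    print("per-pixel change (epsilon):   ", round(float(eps), 4))

The only point is that a small, structured nudge applied to every pixel is enough to flip the model's answer, which is the brittleness she is describing.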

I just think that, again, ascribing moral agency based on our conception and our history of what we think moral agency looks like to these systems that are so brittle, so narrow, and so un-humanlike—that's not to say that my humanness makes me special; it's just that what I think about moral agency as ascribed to humans doesn't even begin to exist in these systems, and if it ever did, it would probably look wildly different, and there would be no way for me to even crack into that system to understand and observe it.

But I just think it's a completely misplaced and frankly waste-of-time discussion. There are so many more pressing issues around ethics and artificial intelligence—what kinds of applications are morally safe, right, correct, justifiable—than whether or not I'll have to make room in my moral and legal system for some hypothetical system 10, 15, 20, 25, or 50 years down the line. We have much bigger and brighter minds that can turn their attention to socially and morally relevant problems today.

ALEX WOODSON: Just a quick follow-up to this. Has this always been your view or was there something that convinced you of this?

HEATHER ROFF: I'd say that this has pretty much always been my view. There's always fun—as an academic and an ethicist, as a political theorist and a political scientist—in playing with ideas to the reductio ad absurdum level. But when you're faced with, "Okay, now I really need to understand what the system in front of me is doing to make real applied-ethics judgments or predictions or estimations," I think it would behoove almost every political theorist, moral philosopher, political scientist, or anybody else who listens to this podcast or reads the journal to actually go and try to understand what these systems are.

There are so many times I go into conferences with other academics in these disciplines who have no understanding of the system. They black-box what AI is, and then they make arguments or judgments about what can or cannot be done without any firm grounding in the technology itself.

I think any good applied ethics needs to have a very robust understanding of the applied side. That might mean talking to other disciplines and doing much more robust interdisciplinary research, which I am a huge fan of, or it might just be, "Hey, I'm going to go to the library and pick up a few books on reinforcement learning, transfer learning, expected utility theory, statistics," whatever it might be because AI is nothing but math. So, if you can start to understand the math, you can start to understand what the system is doing without even needing to understand how to build a system.

You don't need to build a computer, you don't need to have a host of servers, but you can, if you're really interested in it, also go online and buy some cloud computational power and tinker around. You can download code and data sets off of GitHub. You can create your own AI systems and see how bad they are, frankly. That might give people a better understanding of the limitations of these systems and also how hard it is to get them to do what you want and then to be able to build exquisite systems at scale. That's really hard.
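
For anyone who wants to try that kind of tinkering, here is a minimal sketch using scikit-learn's bundled handwritten-digits dataset, so nothing needs to be downloaded from GitHub or run in the cloud; the model choice is just an assumption for illustration.

    # Minimal tinkering sketch: train and score a simple classifier on a small,
    # clean benchmark dataset that ships with scikit-learn.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()                      # 8x8 grayscale images of digits 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=5000)   # plain logistic regression: "nothing but math"
    model.fit(X_train, y_train)

    # High accuracy on a tiny, clean benchmark says nothing about building
    # "exquisite systems at scale," which is exactly the limitation at issue.
    print("test accuracy:", round(model.score(X_test, y_test), 3))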

That's why I think making these estimations about morality and moral agency is just—it's almost like I've found myself in Aristophanes' Cloud Cuckoo Land. It's a completely silly discussion to have at this point.

ALEX WOODSON: Heather, this has been fascinating. Thank you so much for your time.

HEATHER ROFF: Thank you.
