Stories from the Hype: A Discussion with AI Journalist Will Knight

October 26, 2023 - 43 min listen

In this discussion with Senior Fellow Arthur Holland Michel, WIRED senior writer Will Knight reflects on a busy decade of reporting on artificial intelligence. Taking a step back (and a deep breath) from the hype, Knight and Holland Michel discuss whether a true AI revolution is really upon us, consider the ways in which the technology is and is not controllable, and talk about coming face to face with military robots.

For more from Knight, check out his WIRED archive.

来自 "炒作节拍 "的故事 Spotify 播客链接 米歇尔-苹果播客链接

ARTHUR HOLLAND MICHEL: Hi. My name is Arthur Holland Michel, and I am a senior fellow at Carnegie Council for Ethics in International Affairs. This Carnegie Council podcast is brought to you in partnership with the Peace Research Institute Oslo as part of its RegulAIR project. RegulAIR is a multiyear research initiative about the integration of drones and other emerging technologies into everyday life.

I could not be more thrilled to be joined today by Will Knight. Will is a senior writer at WIRED covering artificial intelligence (AI) in all its many forms. Will's reporting on AI is nuanced, ahead of the curve, and, perhaps most importantly in this day and age, level-headed. He joins us today from Cambridge, Massachusetts.

Will, welcome to the show.

WILL KNIGHT: Thank you for having me, Arthur. It is a pleasure to be here.

ARTHUR HOLLAND MICHEL: Why don't you tell us a bit more about what you actually do?

WILL KNIGHT: That is a good question. It gets at the question of what AI even is. My beat is AI. I have actually been writing about AI for more than a decade, and I was fascinated by it long before it was as interesting as it is now, as a scientific endeavor, this idea of advancing machine intelligence, which is bound up with the history of computing. Since then AI has obviously become an enormous phenomenon in the tech industry. I think that, rather like software itself, it is transforming the entire tech industry and any industry that touches technology.

You asked about my beat. I try as much as possible to focus on the most important questions in AI, the most significant impacts of the technology, and how it touches the most important things I can think of, which I think often comes down to the power of particular companies, international relations, and individual rights. It feels now as though the technology is bound up with all of those things. It is a pervasive and important technology, but I think it is tied to these questions of power, influence, and rights, and that matters.

There are a lot of people doing great work covering those things. I suppose I split my time between following the fundamental advances and trying to strategically understand the important impacts, because I do think, especially now, that trying to understand the technology itself and how it works is very important. That is one of the most challenging things about 2023, because we are seeing some wild things happen.

ARTHUR HOLLAND MICHEL: When you say you have been writing about AI for ten years, that makes you old hat in this space. It makes you a veteran in the truest sense of the word. I was wondering if you could share what the landscape of AI was like when you first started writing about it.

WILL KNIGHT: I first started writing about it when I came out of college and joined a magazine called New Scientist in the United Kingdom, which, as you would imagine, is very focused on developments in science. I was part of the technology team there and keen to write about AI, partly because it was something that had always fascinated me, and it was kind of in the doldrums in those days. It was one of these “AI winters.” I remember buying a textbook on AI that people would have in grad school or undergrad, and neural networks were a small chapter that was passed over very quickly.

At the same time, even going back then, there were starting to be interesting things happening because of advances in computing and the Internet, so you were starting to see some early machine-learning stuff like Bayesian machine learning, transforming things like spam filtering, which was an amazing phenomenon. We take it for granted now, the idea of how machines actually try to quite capably understand what is going on in an email and filter out the ones that are bad or spam. That was something that people would try to do by hand, and then they started to use machine learning. It was a bit of a backwater, although there was this big Internet and technology company boom happening.

ARTHUR HOLLAND MICHEL: Would you say there are any continuities between then and now? What has stayed the same about the technology, the way people talk about it, or the way it is presented to the world?

WILL KNIGHT: That is a great question. The truth is that AI is tied up in advances in computer science, which are often indistinguishable from what people might call AI, and you definitely had moments—I remember Deep Blue happening. This is the chess computer that beat Kasparov. I remember talking to the people who built it and I interviewed Kasparov, which was great fun. They had this custom silicon to try to do this very old-fashioned way of looking ahead as much as possible but doing some more clever heuristics around it.

I think one of the things you will see if you go back then, and even way before, is that the understanding of AI is often a misunderstanding. It is often the case that when people talk about AI they talk about it as if it is something that is becoming more generally like a person and more generally capable. In reality it has always been the case that these systems carve out small, specific capabilities. Same thing with IBM Watson taking on Jeopardy! or AlphaGo and AlphaZero.

It can do these specific things, but when people see that, we are very hardwired as a species to see intelligence in other things, so we naturally say, “Oh.” Look at the reporting around Deep Blue and go back to many previous generations before I was working on the beat, and you will see the way people talk about AI as if it is some giant brain that is going to take over everything.

It is the same right now, even with this generative AI stuff like ChatGPT. People are extrapolating. It is understandable at each instance, perhaps particularly understandable with ChatGPT and so on, but it is often taking something that you see and then extrapolating in your mind what it is actually capable of and missing what the limitations and the problems of the technology are, which are often manifold.

ARTHUR HOLLAND MICHEL: The reason I ask is because I have been covering similar things—drones, AI, and other emerging technologies—for about the same amount of time, and I have noted something very recursive about the way these technologies are discussed. An example of that is that as long as you or I have been working in this space people have been talking about how the technology is moving at unprecedented speeds or that we are in a moment of unprecedented technological transformation, that things are accelerating in an unprecedented way, as though there is no precedent, but we have been talking about these unprecedented happenings for what feels like an unusually long time.

I wonder if, given what you have just pointed out, you feel like there is or is not anything specifically different about these past 12 or so months. Has something changed?

WILL KNIGHT: That is a wonderful question, and I think that is at the heart of what so many people are trying to figure out. To some degree I do not know because I think that is the reality. People do not know, and that is what is unsettling a lot of people.

Probably because, like you, I have been around writing about it for a long time, often from an outsider perspective, I do have a hunch that there is a lot more that has to be achieved and that full—whatever you want to call it—“human-level intelligence” is not just going to suddenly fall into everybody’s laps. When you look carefully at the technology, at what this is, where it is predicting the next word, it is important to go back to the original idea of artificial intelligence as a discipline, which was creating human-like intelligence.

If you look at human intelligence (and, whatever people who are slavish about their computer science approach might say, it is the only model we have) and at the evidence from cognitive science, linguistics, neuroscience, and all these different fields, there is so much we do not know and so much these models cannot and do not do. There is a ton that is missing, a ton that is problematic, and more and more coming out. As I do this reporting and chip away at these models, you see that they are weird and quirky in some very fascinating and problematic ways if you are trying to build something that is meant to be so generally useful.

At the same time, I think it is fair to say that what happened in the last year blew everybody’s socks off because there were these things that we thought for a long time, We don’t know how to do that, and this technology, specifically this machine-learning, deep-learning approach, does not lead to that, so it was very surprising to a lot of people and unsettling to say, “Oh, you just increase the volume of data and the amount of computing and, lo and behold, some of these things happen.”

I think, again going back to the extrapolations, people even within the field are extrapolating massively—and this is the phenomenon—they will point to this and say, “Well, there has been this progress so it is going to continue and reach human and superhuman levels.”

I do not know if that really follows. Even OpenAI has been saying that there is maybe not much more performance to be gotten out of just scaling up, so we have to look at other things, which would suggest that it is not going to continue like that. Also, there are things missing where it does not seem that just doing more and more will shake out and suddenly make the technology able to deal with this stuff, though I would not bet on that either. It is fundamentally appealing to believe that we are on the cusp of this once-in-forever moment when we are building something that is going to become superhuman or human-level. I do suspect there may be a lot more twists in that story and that it is not quite as straightforward as we are being led to believe, or as people in some cases fear.

ARTHUR HOLLAND MICHEL: It is undeniable that something that has changed in the last 12 or so months is just the number of people who are directly interacting with AI. Would you say that is fair? If so, is that a significant change in the history of the technology?

WILL KNIGHT: It is an interesting question. There are a lot of people interacting with these language models, and that is a different modality, a different way of interacting, that is incredibly powerful and affecting. Using language is fundamental to how we communicate, so the idea of communicating with machines in that more advanced way is a big deal. That is kind of new.

People have been using AI and machine learning going back many years. It has been increasingly creeping into products and services and so on. I think what is different is this idea of having something that can actually, seemingly, converse in language, and everybody being aware of it; everyone is talking about ChatGPT.

I do not want to downplay the importance, because that is extraordinary. We did not think that it was possible. Literally, some of the winters of AI came from realizing that language was too difficult, so being able to do this much is pretty incredible. But I do also think it is interesting, if you look at the perception of AI and the way language works, that language relies on us, you and me, having this idea of an intelligence behind the screen, behind the other person’s eyes. We do not have proof of that, but you have this interaction, and language works because we have this mental model of another intelligence, and that feeds into the feeling that one is there.

The other thing is that, compared to, say, Deep Blue or AlphaZero or something, ChatGPT is much more affecting. In terms of that cycle I think it feeds that idea that there is something very alive here or something seemingly intelligent, even more than in previous instances. It is a challenging time to try to make sense of it because there are big developments. It is undeniable. At the same time, it is tricky when those developments are being portrayed as the brink of artificial general intelligence (AGI).

ARTHUR HOLLAND MICHEL: The artificial general intelligence or the artificial superintelligence (ASI) discourse that you are referring to has been, as many people have pointed out, a major distraction.

Part of the reason it is so fascinating to hear you talk about the affecting nature of this new generation of AI tools and the new scale at which people are being affected by their direct interactions with these tools is that that would suggest that there could be a major impact in the way AI ethics is framed, discussed, and popularized as an integral piece of AI more broadly. With that in mind, I was wondering if you have noticed any change or evolution in the way AI ethics or regulations are being discussed, the way people talk about it, if the vocabulary has changed or the mindset of AI ethics is different compared to, say, a few years ago.

WILL KNIGHT: Oh, yes. I think it has been completely flipped upside down in the last year, because all of a sudden you have a lot of people—there were always some people talking about superintelligence and existential risk, but now you have a lot of people talking about that and talking to governments about it.

I feel you have this almost split in the AI ethics field, where you have people who have been worried about bias and the way these tools can be used for influence campaigns or misinformation suddenly being pushed aside by people who are talking about long-term risks, which are predicated on this idea of AGI and ASI. I feel that has been quite a disruptive thing, and I think we are in the process of figuring out how that works and where that goes.

The British government is doing this big international summit, and much of the focus is going to be on the existential long-term risks, and I think a lot of people are worrying that the short-term issues and maybe some of the things you could actually hold companies to account on are being pushed to one side. I think that is a real problem potentially.

Talking about how common it is for people to interact with this AI in the form of things like ChatGPT, there are emerging issues which may be related to this new wave of technology that are not long-term existential ones but that we may be missing. There is the very fact that interacting with a language model can influence the way people think: when you talk to another person, your views, if you test someone, can be slightly shifted by that conversation, and if people are holding very similar conversations with machines, it is possible to sway them. I think that is an important thing we are not seeing discussed very much.

You have these models out there conversing with people, and it is all a bit of a Wild West right now, but you could see how companies might have an interest in using those to put forward a particular position or to subtly adjust people’s views. Governments, of course, are performing very subtle misinformation campaigns that do not even feel like that. Alexa tells you about this product, but it is really convincing because it has been programmed to know how to be very convincing. I think that will be a big thing. I do think some of those short-term risks are not very clear because you have this shouting about long-term existential dangers, which obviously people are going to focus on most because you would.

ARTHUR HOLLAND MICHEL: I have been thinking, for example, about these new AI celebrity influencer avatars that the company Meta has developed. Now you have someone who looks and speaks like the model Kendall Jenner doling out dating advice and potentially advice on other aspects of the closest, most intimate, and most human parts of our lives. In that sense there are some pretty immediate concerns that do not have anything to do with whether that chatbot will some day, I don’t know, get access to the nuclear codes.

WILL KNIGHT: Exactly. One of the things that has made ChatGPT so popular is that they did this reinforcement learning with human feedback, this process of having people use it and then say, “Well, that seemed like a good answer,” or, “That seemed like a convincing answer.” There is no reason why you could not, through the same process, train models to be convincing about all sorts of things if you wanted to present a particular position or sell a certain product.

We are holding a conversation here because this is so fundamental to how humans communicate, interact, and think through our use of language and expression through language. If you have machines that start to do that in an engaging way, it can definitely mess with people a lot. I think it is interesting seeing ChatGPT have this audio voice and vision capability. There was some blow-up on Twitter because people were saying, “It is just like talking to a therapist”—I think it was someone within OpenAI—but therapists are trained for a particular reason. They are not just language models trained on god knows what on Reddit. We are just playing with stuff that could be quite powerful, as you say in ways that do not have anything to do with existential dangers like getting the nuclear codes. Nobody can disprove that that is going to happen. That is a worry for sure, I think.

ARTHUR HOLLAND MICHEL: Something else that has been very repetitive about the AI space for all of these years we have been in it—and personally it feels like a lot more than a decade; maybe you feel that way too—is this notion of needing regulations and being on the cusp of having regulations or this urgent push for rules and guardrails. That has always been a couple of years on the horizon. I wonder if you feel like we are actually likely to see rules with teeth anytime soon, and if not, why? What are these obstacles that keep AI regulations on this infinite horizon?

WILL KNIGHT: That is interesting. I do not feel like we are going to see hugely meaningful regulations. The European Union is proposing slightly more stringent ones. So much of what we are seeing around regulations to me just feels like theater; it is like people want to say, “I’m doing something about it because it is so important.”

You cannot forget that the U.S. government will see this technology and see potentially something that could transform the economy and provide an enormous advantage to the economy, to their different industries, and to their military. They are not going to be very keen to regulate it. They are reacting to the public reaction in saying, “We are going to get people in and have them agree to these voluntary rules” and whatnot, but I don’t think they have very much interest in regulating it at all. It is the opposite. They want this to take off.

You can see similar things happening with autonomous driving. They have been very, very reluctant to regulate that much in the United States because they want to see that industry take off. It is understandable from the perspective of government policy and capitalist objectives: from their point of view it does not make sense to regulate it heavily. So I am not super-optimistic that we are going to see very meaningful regulations, and I think that is probably the reason.

ARTHUR HOLLAND MICHEL: In that sense would you say that AI is different from other spaces and industries that have been regulated like aviation or the motor industry? On the tail end of that, often what we hear is that it is incredibly complicated to govern AI, that AI is just far too complex. I wonder if there is actually an evidentiary basis for that given that we have in the past succeeded in regulating some fairly sophisticated, complex, and multifaceted technologies.

WILL KNIGHT: That is true to some degree. It is probably closer to those industries, and you could certainly come up with much more stringent and much more meaningful regulation.

I am not an expert on regulating different industries. I do not doubt that there are challenges to doing it with AI that are unique, but, as you say, there are pretty significant challenges in doing it for biotechnology and other complex, fast-moving industries too. But you do have a moment where governments are being told by a ton of experts that this technology is a generational shift and is going to change everything. The last thing they want to do is pump the brakes and put too many controls on it, especially in the United States, so I think that affects it.

ARTHUR HOLLAND MICHEL: There is something to be asked there about whether that need to balance strategic interests against the safety of one’s citizens is unique to AI, or whether there may be other forces driving this notion that regulating AI will preclude benefiting from its possibilities.

WILL KNIGHT: That is the narrative, isn’t it, that it is somehow unique. I do not know that it is not unique, but I also do not see that it necessarily is either. There are certainly huge numbers of lobbying forces at play here. Very, very rich and powerful companies are trying to preempt regulation.

Touching on your own work on drones and military technology, it also does feel like there is this unusual climate of feeling that this technology has enormous strategic potential, whether for intelligence or military use. Even if it is not out there in public discussions, I think that weighs a lot on the way the government is thinking about it.

ARTHUR HOLLAND MICHEL: I am glad you bring that up, because the military space is perhaps where this ethical tension is the most fraught. You published a phenomenal feature a couple of months ago on autonomous military technologies. I would recommend everyone go out and read that story. Just for our purposes today, can you tell us a little bit about that story and what your main conclusions were? Part of what I want to ask is also, what was it like to actually come face to face with these technologies?

WILL KNIGHT: Thank you for saying that about the story. I became interested in this topic a couple of years ago because I felt that a lot of the reporting around it, mostly about Maven, this Google project, was very knee-jerk, and I thought the technology is not that black and white when it comes to its use in defense or the military, so I wanted to learn more about that.

I spent a lot of time trying to build connections and learn more and more. I became very interested in this Navy application of the technology because it had not had much attention. It was also moving quite quickly because there is now an idea of using cheap autonomous systems to increase the visibility and responsiveness of forces, this idea of “maritime awareness,” and it was actually being tested in the Gulf of Oman by the U.S. Fifth Fleet because they had been given license to test some of these technologies.

Diving into it was fascinating because there are a lot of different forces at play, often within military or defense-related circles. There are a lot of different views on what technology is going to be important and what is not, and there are plenty of people who think AI is not that important, alongside a lot of people who believe it is going to be transformational.

If you look at the history of military conflict and technology related to that, there are these enormously important moments. Technology is so fundamental to military capabilities and power and success and has been over history, so there is a strong incentive to try to be on top of the latest technologies that are going to be meaningfully important. It does not mean that the most exciting thing out of the tech industry is going to be important, but there is also this moment where technologies that have been private sector commercial technologies are suddenly becoming more applicable to the military sphere.

You can see this in Ukraine. From your writings on drones you know this very well. We have seen the cost of drones drop enormously over the last several years, and it is changing the nature of a lot of conflicts. It is massively important. That is not AI specifically, but it is related to AI.

There are a lot of forces at play, and there is this idea gaining currency, I think, that is quite appealing to people: that autonomy and AI are going to be a way to gain a military edge.

There is also a strong incentive in some parts of the Pentagon to put this idea out there that they are racing to adopt AI, especially in a maritime situation, so that you create some doubts in the minds of America’s near-peer adversaries. The obvious one is China. Everyone is kind of obsessed with that and with the idea of some potential conflict. It is alarming to me how hawkish I feel Washington has become about China.

Reading military history you see how the race to adopt technologies can sometimes become a self-fulfilling prophecy. I am no expert. I am trying to learn—a lot of people are far greater experts on military history than I am, but I am trying to understand it, and it does strike you that when you race to adopt this stuff and deploy it, the race almost feels like an end unto itself.

There is a lot more complexity to the question of AI and its use in military domains. You have written excellent stuff on this. It is not a slam dunk that it works or is useful at all. Often it just is not.

At the same time, there is this big momentum shift in that direction which I think is going to meaningfully—along with things like much cheaper autonomous systems—transform the makeup of different militaries. You see a lot of the investments that different countries are making, and it is a response to what happened in Ukraine, when you saw these cheap systems changing the nature of how one might expect that conflict to go, when you had those cheap drones at the beginning. That changed after a while, but it is interesting. I still feel like I am learning a lot about that. I feel like it is still an open question how useful AI really is and how quickly it will be used.

One of the key questions, I think, is this: if we look at AI being deployed anywhere there are autonomous systems or these chatbots, when you do it in critical situations, what is the engineering around that? You cannot just put these machine-learning systems out there and see what happens, because they do not always behave predictably. That is just fundamentally their nature, so you have to engineer around it.

I think it should be a real worry that that is an issue because it is an emerging form of engineering. It is not well-known. I would actually expect that the United States might be very, very good at developing a way to do that relatively reliably, but I would worry about a lot of other countries that maybe are not as well-resourced and are racing to try to find parity. That picture is quite concerning.

ARTHUR HOLLAND MICHEL: Something that I always appreciated about your reporting is that you have picked up on questions and perspectives that maybe are not picked up by the, shall we say, the discursive mean of the AI space. For the purposes of our listeners, could you share what you think are some of the areas or questions in this space that you feel like we should be paying more attention to, and in particular if you see any ethical questions here that perhaps have not yet found a satisfying answer?

WILL KNIGHT: I am glad you asked that. One thing that has come up for me and that I find interesting—and I do not know the answer to this—is that I have heard people who I very much respect in the AI space who are surprised by the capabilities of some of these models but point out limitations: things like they do not mimic a sense of self, they do not have a consistent one, and they do not have any goals. They do not set their own goals or make their own objectives.

I have heard people ask the question: well, do we maybe want to do that? If you think about what we are doing, we are not trying to build something that is purely an abstract intellect; the intelligence we are trying to recreate, especially by learning from human behavior, is human-like intelligence. I think we have a lot of problems with very intelligent humans.

Look at the world now. It is really dismaying. When there is all this discussion about existential risk I wonder about the mechanics of how we are building this stuff. Is that the smartest way of doing it? I guess we do not know another way to try to make things very smart, and as I said before we do not have a model that is not human intelligence, but it does not feel like we are asking much about the basics of that when it comes to developing systems that might behave in ways that we don’t like or that are problematic. That is not an existential risk; that is just asking, “How do you avoid mimicking some of these things that maybe we don’t want to have in systems?”

It is a funny question because I do not want to sound like a Luddite who is saying, “Maybe we don’t do this or that with the technology.” A lot of the discussion around this has become a bit performative, with people talking about existential risks. I want to get more into what that means. There are some papers where they look at models that try to deceive people. Is that interesting? Is that actually really a problem? Is that something you can easily fix? Some of the detail of the misbehavior of these systems I think is going to be very interesting and should be focused on a lot more.

Those are some of the main things. I do think about the way these models can influence people subtly. Developing systems that are not intelligent, as I would describe intelligence, but that can mimic conversation relentlessly and effectively toward a particular end feels like it could be pretty unsettling, especially if we build more and more of them. I think that needs a bit more exploration and discussion.

One other thing: when you have companies going around saying, “I have got potentially the most powerful technology in history,” but they will not make it possible for people to examine it and look at it, and there is not much transparency, that feels very weird to me. That does not feel like science; that feels problematic.

I think there are a lot of brilliant people who could try to understand some of the behaviors here that people are worrying about. I feel that is a real mistake. When you have companies driving the discussion around regulation and saying, “Well, we think there should be a national register and that you have to have permission to do it, but it is only going to include us and then we will not reveal how this stuff works,” that feels wrong. I think as a society, especially if it is so powerful and fundamental a technology, we should be trying to understand it more. It feels like people are just throwing things out there for the profit motive.

ARTHUR HOLLAND MICHEL: I would imagine that many of our listeners might find this barrage of questions, considerations, and uncertainties pretty daunting. I know that from my perspective I find it pretty daunting as well, as I am sure you do too, at both the best and worst of times.

I wanted to ask finally, what is it that gets you out of bed and through the day? What is it that motivates you? What are you optimistic about? What gives you a joyful sense of the future, if anything?

WILL KNIGHT: It is a bit overwhelming at the moment because there are so many questions that it feels sort of unsettling.

I think it is important to remember, going back to the first question, covering this beat and looking at it early on, that this is an incredible moment honestly. Having a technology that can do something as general-seeming as these models can do is something to be celebrated. I feel lucky to be witnessing that quite close up. I do feel that is amazing.

One thing that is interesting is talking with people who are embedded in the field, they often say, “It is amazing to think my kids are going to grow up where there are these sorts of tools,” whether you think of them as intelligent and whether they might in time be indistinguishable from some kind of intelligence. It is crazy to think about that being something they will grow up with, and that is interesting, to have something that can converse with people so convincingly and often usefully. It is a fundamentally new technology. It is pretty striking to be at that moment in history.

I try to stay optimistic when it comes to everything. I think that is probably the thing that keeps me most optimistic, just thinking that we are at this pretty incredible moment. The hope is that there are a lot of positive outcomes that can be wrung from it, even if it is going to be a slightly unnerving and unsettling period for a while.

ARTHUR HOLLAND MICHEL: I might add that one of the things that keeps me optimistic is that there are folks like yourself who are on top of this, taking those who create and use these technologies to task on these sorts of questions, and who will continue to do so while hopefully practicing an enormous amount of self-care.

I would say that is a great note to end on, so I will just finish by saying that, Will, this has been a phenomenally fascinating conversation for me. I am very grateful for your time today.

WILL KNIGHT: You are very welcome. Thank you for having me. It has been fun.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
