New War Technologies and International Law: The Legal Limits to Weaponising Nanomaterials, with Kobi Leins

May 25, 2022 - 62 min listen

In a riveting Artificial Intelligence & Equality podcast, Senior Fellow Anja Kaspersen speaks with Kobi Leins about her new book, New War Technologies and International Law: The Legal Limits to Weaponising Nanomaterials. How can scientists and policymakers work together to make responsible choices about the use of materials at the "nanoscale"? What are the implications of this emerging technology for the environment, international security, and existing arms control regimes?

ANJA KASPERSEN: Kobi, welcome. Congratulations on your book, New War Technologies and International Law: The Legal Limits to Weaponising Nanomaterials. It is an absolutely fascinating and sobering read.

I am always intrigued by the opening lines of books, which in my experience often speak to the core motivation behind writing it. You write in your opening line: "The desire for humanity and the desire for security have co-existed as long as humans have been alive. . . . As science has become increasingly sophisticated, so too have the methods of self-defence" by states. If I understand the core premise of the book correctly, it intends to explain those methods as well as why it is important to connect the different legal regimes and cross-fertilize ideas to ensure that the protection needed is adequately provided by international law.

To get us started, tell us more about what motivated you to write this book beyond the quote that I just provided, why now, and perhaps also who is it for?

KOBI LEINS: That is a lot of questions to unpack. Thanks, Anja, for having me here.

The purpose of the book came out of attending a series of conferences where I saw presentations on various types of technologies, and at the end there would always be a last small session on quantum, cyber, and nano, and they would be scooted over, they wouldn't be talked about deeply.

I became really curious because I was reading articles about brain-human interaction and the research that was being done, but I couldn't get any information on what laws would apply to some of those technologies.

I decided, a little bit crazily, to do a deep dive into those technologies and to provide a case study, if you like, in the same way that you need to do a weapons review. For those who don't know, if you acquire or modify a weapon you need to review it. Basically, these are three examples of weapons where, if you were to use them, this is what you would need to consider—not glossing over it or going through it quickly, but actually really contemplating which laws would apply. I kept hearing, "There's no law, there's no law, there's no law," but there is law, and I wanted to frame it in a way that was accessible.

The motivation from a personal level is deep, in that I grew up with a parent who went through a war and I am really conscious of the effects of war, and disarmament is something that is very, very close to my heart, so I also wanted this to be something that was a powerful and useful tool, not just an academic tome.

ANJA KASPERSEN: Technology-neutral policy or politically neutral technology, depending on how you view it, are both perhaps useful and tempting concepts, but treacherous ones nevertheless, and they have led many politicians, policymakers, and technology leaders into making poor decisions about important technology issues over the past decade. What are your views on this?

KOBI LEINS: I think the communication between the scientists and the policymakers has not been strong enough. There needs to be better communication and a better understanding of the technologies, but this goes to a broader issue which I touch on many times in the book, which is that we have siloed all of these different areas of knowledge. Knowledge was not always this way, but over time we picked different subjects, we separated them, and we decided that they are not connected, but we actually need to be connecting those dots.

Just to give a specific example, having attended a defense conference where someone was actually researching how to inhibit enzyme functions in the human body, I spoke to the scientist on the side and asked, "Have you thought about how this would be used?" He replied, "Well, it's really interesting research."

The connection goes both ways. The leaders are not necessarily thinking about how the science will be used. The scientists are not necessarily asking: What are the risks in this development—not just direct risks, but how could this potentially be misused, how could there be dual use, where this could be used for another purpose that is not intended?

I think communication, and finding a joint language so that both sorts of information can flow better, is really, really important.

ANJA KASPERSEN: You just alluded to "dual use," which is sometimes a bit of a misnomer. Others have been using "alternate use" or even "dual capability" technologies. Before we dive into the nano aspects of your book, can you explain to our listeners what is meant by that term and why it is also problematic from an international legal point of view?

KOBI LEINS: Absolutely. Beyond that first sentence, that humanity and security have always co-existed, I wanted to tell the story of one of the oldest skulls that has been found. The skull has no teeth, or has very worn-down teeth, which is an indication that this person was elderly, but there were also weapons nearby. There is, again, historically this link: the same science that was potentially used to provide for someone who was vulnerable—which in this case was a rock or stone—was also used to protect the people.

Again, if we develop any kind of tool, it always has dual use. Dual use just means that it can be used in many different ways. Any kind of substance in a particular quantity is enough to cause harm. Any kind of technology used in a certain way, even the most harmless thing, can be harmful. So it is about looking around and seeing the world through the lens of: "How could this potentially be misused? What would misuse look like?"

We have had instances of that. We are in New York. We had moments when no one thought planes could be weaponized. Any kind of technology can be weaponized.

Thinking about what those risks are in that technology inherently—even if those risks are small, the impacts can be huge if they are misused—some are quite obviously able to be misused. And now with artificial intelligence (AI) we can also expedite some of those misuses, but that is perhaps beyond the scope of this discussion.

ANJA KASPERSEN: No, I think we should make it into the scope. Can you give us some examples just so people can relate to these concepts that are often thrown around but maybe not understood properly?

KOBI LEINS: Oh, I can talk about risks all day. It is so much fun.

On dual use relating to biological developments, for example, there has been a lot of research just recently. A lot of drugs and interactions with the human body are being tested through AI systems that look at what has worked in the past and what could potentially work in the future. Using machine learning that can make those kinds of predictions, the same algorithms could be used to predict substances that might be toxic—which leads into nano a little bit, and I am going to jump into that if I might.

Substances at the nanoscale behave differently, so the properties of a particular substance at the regular scale can change when it is reduced to the nanoscale. For example, silver can be really toxic, and a lot of metal nanoparticles, we are now learning, can be incredibly toxic if they cross the blood-brain barrier, and then they just sit there. They are undetectable. We can't remove them.

This poses new risks. This is a way of thinking about the developments we have made using these tools. Sure, we can also potentially cure cancer, we can deal with a lot of illnesses, we can manage health much better, but the flip side is that we can also do a lot of other things that are not as savory—whether it is using AI to develop toxic substances in new ways that might be outside the scope of other regulatory regimes, like the Biological Weapons Convention or the Chemical Weapons Convention, although I would argue they still fall within them. There is scope for that conversation.

I think it is really important that we have these conversations and that we start having them now—answering your "why now?" question—because this is happening now. The science, even in the ten years since I started writing the book, has advanced so tremendously, and particularly the ability to alter the human brain is huge.

ANJA KASPERSEN: So, nano. You mentioned nanoscale. Can you just explain briefly what is nanoscale?

KOBI LEINS: I normally have this wonderful video that takes you down a rabbit hole. Nano is basically the smallest size of molecular structure that we can imagine. Nano is derived from the Greek word nános meaning "dwarf." Nano is a tiny, tiny size that we can't see. It's not visible to the human eye. For those who think visually, a human hair is approximately 100,000 nanometers wide.

It is really looking at things on the tiniest scale of what we are all made of and what everything is made of, but which we didn't know how to manipulate or use. Nanomaterials have always existed—they have always been in sea spray and even in milk droplets. Asbestos can be in nano form, which is also why it is particularly harmful, because it lodges in the lungs. Many substances have existed in nanomaterial form.

The question for me is always: What is new? The new thing is that we can now manipulate these substances in ways that we couldn't before. We can mix these substances, we can maneuver them, and we can control them in ways that we couldn't before. The possibilities of that, coupled with the toxicity I was just talking about, really change what can be done with nanomaterials. We are still learning. There is still a lot of research going on in this area, and there are others who are far more expert in the nanoscale side of things, the nanomaterial side of things, than I am.

ANJA KASPERSEN: Why do you think that nanoscale materials will be an important part of how countries define self-defense?

KOBI LEINS: To the point that every state is always trying to balance security and humanity, I think we are in a moment right now of heightened desire among states to defend themselves, and that is clearly illustrated by the defense budgets of states around the world. Every state wants the edge; they want the thing that someone else doesn't have, and if they don't have that thing, they want the thing that is going to make the other states fearful that they have it. So there is that from a security aspect.

From a humanity aspect there is also the desire to create new medicines, and a lot of this research is actually coming out of a desire for medical interventions. It is a combination, I think, of some of the serendipitous finds in the medical field that are then potentially misappropriated by or used by defense, and this is where the dual-use aspect comes in.

The thing that is really important is that those uses are compliant with existing law, and that is what this book really focuses on because we have laws that govern the use of these materials.

ANJA KASPERSEN: Are the harms different from those of conventional weapons?

KOBI LEINS: Absolutely, and the time scales are different. This is one of the challenges. The traditional laws of war were from a time when a bullet went into a gun and there was a measurement. Even now there is a very good understanding of, and a specific approach to, how weapons are evaluated.

How do you evaluate something you can't see? How do you evaluate a tool that resides permanently in the human body? How do you evaluate a tool that is reversible? These are all concepts and challenges that I had to deal with in the book.

Now that we are almost at the point where this can happen—where you can either insert memories or alter memories—what does that even mean? What are the responsibilities that come with that?

The laws of war were not shaped at a time when this was foreseen. The book was a difficult challenge in the sense that it was taking a lot of things from the past that had potential use and saying: "Here are all these new technologies that are coming at us quickly. How can we apply these laws to them?"

For that reason, I also go through the history of these negotiations, because the intentions of the drafters, which are relevant under international law, are really important to understand. Even though these laws might not have been specifically written for these technologies and don't seem to have relevance, most of them do apply. Even though no one could have imagined a Blade Runner-type world, here we are. We have these laws that still can apply, and they can be discussed and strengthened to cover these new technologies. That is the other aspect.

ANJA KASPERSEN: We will come back to the promise and peril. I just want to take one question from one of our attendees.

QUESTION: I wonder if there is enough political will to actually regulate nanotechnology. The reason I ask is because I was at a conference a couple of years ago where a Food and Drug Administration (FDA) scientist talked about the harmful nanoparticles put into sunscreen that go right into the bloodstream and then travel around. I use a lot of sunscreen, so I was surprised about that. It struck me that she is with the FDA, and yet they were having trouble regulating this. Is there really political will? If not, do you see an ability to catch up with developments?

KOBI LEINS: That is a fantastic question. Firstly, I commend you on wearing sunscreen. As an Australian, I really value that. It is a really important thing in my life.

The links between civilian regulation and hostile-use regulation are important. To mention another book, I co-authored a chapter on the nanomaterial side with Professor Diana Bowman looking at exactly that and asking: "What's happening in the civilian sphere? How can we learn from what's happening there?" It's slow, but to answer your question, in the civilian sphere there will be litigation, and companies that face risks will move very quickly; that is already happening in certain cases.

But to Anja's question about what is different: with nanomaterials it depends on the quantity that you are using, but basically the residual aspect and the blood-brain barrier transfer aspect are what is important. Once they are in your body, they are in your body and you can't remove them—at least at this stage we don't know how to—so that long-term effect is what is interesting. If you think of some of the litigation around other chemicals that we have seen, it is a little bit like that: you are looking at that longer-term impact.

I think, though, that there needs to be much better communication, and this is why I contributed to this collected edition that is edited by Bill Boothby, because often the movement is much faster in the civilian sphere.

People say: "Well, you know, there are bombs dropping. Who cares about this stuff?" Well, I would argue that when you have residual materials that go into water tables and into food chains that we then ingest and are in us long term, we need to think about risks and harms of war differently. I don't think we are doing that enough. Particularly as food shortages become an issue, if that food is then toxic longer term, we need to think more collaboratively and more collectively about what those long-term impacts are.

I am not sure if that answers your question.

QUESTIONER: Political will worries me.

KOBI LEINS: The political will I think is in the civilian sphere. If the company is going to be sued, they are going to have political will to change. Unfortunately, it is often litigation that brings about the change. Whether it's Roundup or Teflon or any number of products that we have seen where it has come to light that they are really toxic to humans, the switch can be incredibly quick.

I think the challenge will be translating that into an understanding in the policy community in wartime, because they are different people and those communications are not strong enough. An argument I also make is that there needs to be a strengthening of those communications.

Where science is moving rapidly, policymakers in the international sphere need to get across what those risks are, and there need to be shared understandings. If the FDA regulates and says this is a harm, or it is known that there is a harm—it shouldn't even take the FDA regulating—then policymakers need to ask: "Is it okay to leave a battlefield toxic for the next 100 or 200 years, or permanently?" We don't know how long these things remain. I think that has to happen very urgently, and that is part of the call in the book.

Thanks for asking that question.

ANJA KASPERSEN: We already see that microplastics at the nanoscale are seeping into our systems and creating damage, and we really can't tell exactly how that will manifest itself in the long term.

KOBI LEINS: Exactly. Also, small quantities accumulate. The food chain and microplastics are probably easier for people to visualize, but things accumulate, so it is not just a single exposure—to your point about sunscreen—it is multiple exposures.

So again, even if you don't have nanomaterials in weapons systems as such, if you have nanomaterials in waste that is produced or left on a battlefield and that gets into the water tables, longer term it is often the women and children—the ones who are in the dirt, who collect the water—who are the most affected, and they will then over time have exposure that is potentially toxic. Again, these are things that we need to be talking about and thinking about very differently.

ANJA KASPERSEN: And we will come back to this issue around modifying the environment and the long-term consequences, but first, one more question.

TOM PHILBECK: I have a question about how you differentiate or make some distinction between dual use as a notion dealing with technology specifically and the instrumentalist argument that simply says, "Technology is just a tool. Clearly you don't necessarily have to have responsibility for development because anything you can make—and it doesn't matter what it is or how you develop it—can be used in multiple ways." It is really just the "guns don't kill people, people kill people" argument.

But if we look at technologies, you can conflate those two if you don't make a distinction between design—thinking about ethics in the development of technologies and how they create political facets that have impacts on society—and just thinking of them as instruments. Is there any kind of distinction that you make, either in the book or in your personal philosophy, that can keep people from conflating those two?

KOBI LEINS: I might tell a personal story. I was here in New York with my son, and he was asking about all the guns. We have gun laws in Australia. We have far fewer people who are killed by guns because we have fewer guns and tighter regulations around those guns.

For starters, yes, people are at the core, but philosophically I think there are two separate points that you are making. One is around the tools and how they are regulated. I strongly and firmly believe that these tools need to be regulated, and that is the reason for the book: we have all of this international law, so how does it apply to these new technologies?

But fundamentally and philosophically every tool comes out of a social context. The way that we are building these tools and the way that we are thinking about these tools always have values embedded in them, to your point. I don't think there is a neutral tool. We build tools for a purpose, so there is already an agenda, there is a political background, there is a context, and that has a lot of influence in terms of how we use the tools.

In the case of nanomaterials, not all states have access. This was another impetus for the book. I was attending a conference where for three days we sat around with numerous states discussing what potential uses of technology could be, and again nano was tacked on at the end. At the end we did an around-the-table take-home: Who had learned what from the session? And one of the states said, "Well, I have learned that we are never going to have most of these technologies, so we are just going to go back to basic weapons."

Again, there is that unintended consequence of having more sophisticated, extremely highly developed weapons that create fear. As we have seen through the use of chemical weapons over the years—the response to weaponry is not just about who is going to die; it is also about fear—they may actually have unintended consequences, where others then respond in kind and do different things.

I suppose what I am trying to say is that even though we think that these tools are neutral, just the way we are developing them, thinking about them, and using them will have societal impacts. So I don't think that they are neutral from a philosophical perspective.

When we are talking about other technologies—again, this is one of the other dangers, and I really struggle with this compartmentalization of weapons—they are going to be used together. Particularly when you think about nanomaterials where you can manipulate bodily function, we have seen some advances and attempts to advance the uses of machine learning and artificial intelligence. So they will be coupled, all of these technologies will be linked, and I think another danger is that we often overlook the use of the technologies together. I doubt that nanomaterials will be used individually, and one of the technologies I looked at is manipulation of the human brain, which requires other interactions that will be covered by other technologies.

ANJA KASPERSEN: So the "horse and tank" moment is not one technology per se but the combinatorial capabilities when we put these technologies together. That's really what's going to define the "horse and tank" moment.

KOBI LEINS: Absolutely. I don't think we are being creative or broad-minded enough. Again, these communities sit in silos; in the international community you have biological experts, you have chemical experts, different people who work on different treaties traditionally.

We saw this just recently with the use of a drone in the attack on the Russian ship. People were not really thinking about drones as destructive in themselves; they were thinking about them as carrying weapons. We need to be thinking about these technologies in more creative ways because they will be used in ways that we are not thinking about yet, and together, not just in isolation. That is one of the biggest risks, I think, in how they are being thought about at the moment, at least from a policy perspective.

ANJA KASPERSEN: We are going to take another question.

CORDEL GREEN: Hi, Kobi. Great talk.

If we could take this to the level of the individual, we now know we are hearing about implanted chips that are capable of reading brain waves. Do you believe that the international human rights framework that exists now is capable of giving adequate protection at the level of the human, or do we need to be thinking about either extending existing rights or creating newer rights?

KOBI LEINS: Again, that is a question I grappled with, particularly where those effects are reversible. The laws of war are really clear where you have impacts that are permanent.

To your point, you can actually read brain waves already. I went to a conference on emerging technologies in Arizona a few years ago. I wish Wendell were here because he had a posting. They showed there, by using a brain implant, what someone could see. That was one of the moments where I just thought, "I thought this was science fiction." The technology is actually further advanced than people realize.

Before we get caught up in the amazing technology, I think the reversibility aspect is covered by human rights. Again by way of analogy, there is a prohibition on torture, and I would argue—and I do argue in the book—that if you are having these effects on the treatment of people, then that does violate human rights.

Should we strengthen the existing law or should we have new law? I don't know that we are really in a time and place where we are going to have a whole lot of new law, but what we do need to do is strengthen the law that we have and say: "See this law that we have got? This applies to these new technologies, and this is why."

I think the scientific communities really need to step up and be involved in talking to the policymakers to explain (a) what the capabilities are and (b) what the risks are. Then I think policymakers in their regular meetings, which happen for most of the treaties, need to say: "We include these technologies. It is really important that we think about these technologies."

I am not sure if that answers your question.

I actually wouldn't like to have new law. In one of the versions of this talk I ask: Does size really matter? You talk about regulating nanomaterials, but it's really talking about regulating little things. Do we have a treaty on regulating big things? No. So why are we talking about that for nanomaterials? What we need to do is look at the materials like every other substance and ask: What are the risks and what are the uses?

As the book demonstrates, the uses are so varied. If you are genetically modifying someone, that is very, very different from enhancing, which is also possible; that's very different from changing brain function; and it is very different from some of the hardware that is in the barracks, an example that I also use in my book.

Different laws apply to different applications—it really depends on how the nanomaterials are used—but in each instance of those very disparate examples there is clearly law. It is there already. It can be applied. Lawyers apply law all the time to new technologies. We are not in a new situation; we have always had new technologies. I would like to see more creative lawyers.

ANJA KASPERSEN: Which prompts a question. We are seeing flagrant violations of international law across all domains. Certainly when it comes to governing new technologies or the weaponization of technologies there is still a lot of terrain that hasn't been explored. Some would argue that we need new treaties and new law because we simply haven't thought about these applications and this combinatorial effect that you were talking about, and others say: "No, we actually have the laws. We just need to amend them and maybe find ways of catering to those new applications, as opposed to redefining the law."

I think the bigger issue we are seeing is a disrespect or an unwillingness to apply what is already there. Are we fooling ourselves to think that new treaty frameworks would even help us to get any further?

What is your take on that? One of the core arguments in your book is that we need to dig in deeper, we need to activate, revive, and make sure that the laws out there are actually being applied even to those borderline cases where we might be seeing new applications of known or established technologies.

KOBI LEINS: Absolutely. It feels a little bit facetious to be talking about this to some people. In some of the conversations we have been having the position is, "Well, the law is being violated all the time, so who cares about the law?"

My very legal response—and I am very conscious of that—is that law gets violated all the time in the domestic context, all the time, and yet we still have domestic courts, and that's really important. To anyone who calls for new law, I say, "Have you been to a treaty negotiation?"—and most of those people calling for new law have not.

It takes years. It takes buy-in from parties. It takes an understanding. These are complex issues. We already have these mechanisms where the experts—to give an example, under the Chemical Weapons Convention—meet regularly, and they already have a list of regulated substances.

ANJA KASPERSEN: Which took almost a decade.

KOBI LEINS: Which took almost a decade to negotiate with the buy-in of industry and with the buy-in of a number of states already pushing for—it was a different time. It was a different era. We are in a different time now.

But what we could do is really strengthen—and some of those conversations are starting to happen—and say: "In those meetings we are going to clearly state that any material, any chemical, even at the nanoscale, is included."

What it might also include—back to the sunscreen question, because it is all connected—is relying on the research in the civilian sector to say, "Okay, we are not prohibiting silver." Silver is a really good example. Silver is not prohibited under the Chemical Weapons Convention, but on the nanoscale we know it is toxic at X.

The devil is going to be in how to prohibit the transportation and the use. With chemicals you can say, "Okay, X amount," and certain chemicals are allowed in certain amounts for certain reasons, and governments have very strict regulations around this in compliance with the Convention.

How do you do that for substances you don't see? There is going to need to be better collaboration between the civilian sector—exactly to your point—and the defense users to say, "This is what we will tolerate."

I think we might also need some creative ways to govern that are not necessarily just treaty-based but also culture-based. You need the web of prevention—the old ICRC idea of scientists being more aware of what the risks of their work are—and also whistleblower protection, so that someone can say, "This is happening and I don't think this is a good idea." Again, it is not just one blunt instrument of law. People love to call for more law, which just makes me want to cry, because it takes forever and it is really painful to get.

We have a whole suite of tools available to us. We need to be more creative in thinking about those tools for preventing harm, which is ultimately what I would like to see and I think most people concerned about the laws of war and prohibiting harm would like to do.

ANJA KASPERSEN: So scientific collaboration, not just in advancing the science itself but also to protect against the harms.

KOBI LEINS: A hundred percent, to really own those risks and to be more engaged with the policymakers—much like Pugwash, which grew out of the Russell-Einstein Manifesto, one of Einstein's last acts, and which brings together every year scientists and policymakers, not in their roles as state representatives but as scientists, because there was such deep concern about the use of nuclear weapons.

ANJA KASPERSEN: Other questions?

QUESTION: Yes. Relative to what you were just saying, I do AI and ethics. Yoshua Bengio at the University of Montreal recently initiated what is called the Montreal Declaration for the Responsible Development of AI. Anybody can sign it. It's on the Web. We could all sign it ten minutes from now. Ideally, scientists will read it, sign it, and they will have a unified set of ethics for developing AI.

I'm wondering if you think that would be effective with nanotechnology as well, or if you think anything like this is even effective. I'm not so sure. That is supposed to be the scientists taking ownership of the risk.

KOBI LEINS: Again, none of these solutions is sufficient on its own. To use a computer science term, they are necessary but not sufficient. So each of these approaches—whether we are talking about supporting international law, improving communication between policymakers and scientists, or having scientists have more awareness—is an "and," not an "or." One alone is not going to be satisfactory or sufficient to make sure that humans are safe.

What I do think is missing is the ability for scientists to reflect. The community of scientists working on nanomaterials is not enormous, and they are largely very specialized and highly qualified, so I don't think it is impossible to build greater awareness.

This has happened with the biologists. Biology is also quite disparate—and I know the research on nanomaterials is also quite disparate—but it is already happening with the biological scientists, who were brought together to have these conversations. It was not as successful as it could have been, but it is better than it would have been had there not been those sorts of conversations, where scientists at the water cooler will say: "What are you working on? Have you thought about the implications of that?"

Where I think it gets really complicated—and this is again going into your field and talking about AI—is when those technologies are combined. When you are using fast-moving technologies to move science along, where are those conversations happening? Is it in the scientific community for nanomaterials? Is it the people who are building algorithms, because they don't necessarily understand the implications? Again, there is siloing, and I think there needs to be much better communication about the risks.

The point that I make in the book is that not only do we need to review weapons at the point of purchase or modification, which are very specific circumstances, but there also needs to be, under Article 82 of Additional Protocol I to the Geneva Conventions, the ability to provide legal advice on the battlefield. How do you do that? This is traditionally done by lawyers, and lawyers don't understand how these things work.

That makes it really clear that there is a responsibility to tell commanders, "This is the risk of this." Again, with a gun and a bullet it's quite simple, "This is going to cause this harm." It is really complicated now, and we are not really looking forward enough to how we minimize those risks, not just before use but also during use.

We don't have the relevant expertise. We don't have people who can translate across those areas. And this goes for all technology, so it also ties a little bit to the autonomous weapons debate. Again, there is law, but as for how we apply it, the devil is in the detail.

ANJA KASPERSEN: And the verification part of it. You do speak a lot about implementation, enforcement, and verification in your book. That sometimes is difficult. It is not that we lack the frameworks, but that we lack—to the point that was raised earlier—the political will, and sometimes also the skill sets and even the technologies that could actually help us do the verification we need, especially when you are talking about weaponization of technologies at the nanoscale, or technologies that essentially require you to have very deep insight into software, algorithmic processes, etc.

You and I wrote an article not so long ago, "7 Myths of Using the Term 'Human on the Loop,'" about how the illusion that putting a human in the loop ensures oversight needs to be looked at, addressed, and fully understood.

Can you speak more about the enforcement, compliance, and oversight angle to all of this?

KOBI LEINS: I think there are a lot of things. People are not talking about Pierre Bourdieu's gaps—the social silences—but they are also, again, not looking at the complexity.

In one of my roles I looked at governments' compliance with the Chemical Weapons Convention and the Biological Weapons Convention. To comply with the Chemical Weapons Convention requires numerous pieces of legislation—it is import/export controls, it's policing, it's scientific development, there are all of these different angles—and this is before we get to nano, so I'm just starting with basic weapons control.

Once you have those regulations, how do you actually enforce them? You can have all of those regulations, but if you don't ensure compliance and review the import and export of these materials, it is not really going to help you.

Another bit that I've more recently become aware of is also who finances this. It's not necessarily states anymore; it's who is moving what around. You can track through other methods what is being moved, how it is being tracked, and how compliance is being followed up. This is a huge question, and a lot of smaller states that have signed on to these various treaties do not necessarily have the compliance mechanisms that they should have, the political will—to your point—or the capacity. For small states it's an enormous undertaking.

Now add to that the next layer, which is that you cannot see this stuff. We are now talking about materials that you can't access or have oversight over, so there is that added layer to the already complex issue. Even if the regulations are there, the culture of compliance, which plays a huge role in all of this, is not necessarily there, and again that is something that needs to be thought about.

ANJA KASPERSEN: How would you do that in practice, in your view?

KOBI LEINS: I cannot fix it all. (laughs)

I think almost every answer comes down to funding—there needs to be more money to actually ensure compliance—but there also needs to be training of the scientists. I would like to see more awareness of that culture among the scientists, that web of prevention—the toolbox, if you like, of things that we could use—and we really need to connect and enable people to have conversations across disciplines. All of that requires money and will.

ANJA KASPERSEN: Do you think the fast-growing defense industrial complex where—

KOBI LEINS: Is the place for that to happen?

ANJA KASPERSEN: No, is challenging some of these means of providing the right level of oversight. I mean, a complex that is growing, not just to include the classical defense companies, but where what is regarded as part of the defense ecosystem is widening by the day given these developments in technology.

KOBI LEINS: There is a center here in the United States that develops nanomaterials for defense, and you can't actually access it unless you have security clearance. I didn't do that, and I don't know what is happening or has happened in review, or whether there is any kind of oversight.

Even outside of those particular centers, universities in engineering, computer science—any of the sciences—are now largely funded by or have enormous input from defense. This is another point that I raise: Again, you have a gun and you have a bullet, you put the bullet in the gun, there is a point in time where you are definitely going to use a weapon.

But it's much more subtle when it comes to scientific research. If you are developing material that can stop enzymes in the body from functioning, which can result in death, at what point is that scientist engaging in creating a weapon? At what point do you ask, "Does there need to be a review of this substance?" These are all questions that we are also not answering.

In addition, just because something happens in a university does not mean that it is not defense-related. To your point, a lot of this work is defense-related. It appears innocuous, it appears to be pure scientific research, but by the time it is developed it is almost too late because so much has been invested in its use. Particularly when we are talking about complex systems again—about intervention or the use of different types of technologies together, which is being explored in some fairly sinister ways—I think we need to be having those conversations earlier and really thinking more about ethics. I don't think ethics boards in universities have a clue how some of these things are being developed and how they will be used, and I am not sure that they would approve them if they knew. I don't know.

ANJA KASPERSEN: We will come back to the ethics issue in a bit.

I want to shift your focus to a topic that I know is very near and dear to your heart, which is that of weaponizing the environment. You have been doing a lot of work on looking at the Environmental Modification Convention (ENMOD Treaty) that was adopted back in the 1970s to avoid the weaponization of the environment. This goes to the short- and long-term impact comment that you made earlier.

Can you explain to our listeners the importance of thinking about the environmental angle? Of course the environment is front and center of everything we talk about these days—it is a real concern—but it is an even bigger concern when we are thinking about how it can be maliciously or intentionally used to change the "strategic map," if you will, and also why now is a very important time to revive ENMOD.

KOBI LEINS: The Environmental Modification Treaty, for those who don't know, came about after the Vietnam War, after the widespread uses of Agent Orange.

ANJA KASPERSEN: Which was a weaponized herbicide.

KOBI LEINS: Yes. There are a number of aspects to your question that I want to tap into, but let me just give you a bit of context.

At the time, the concept behind it—and I go into this a little bit in the book—the framing of why and how this Treaty came about, is really relevant. It was looking at, to your point, the idea that if intentional harm was caused to the environment, there should be some kind of culpability.

I raise this, and I go further to say that I don't think it should any longer be about intent, and we have to be really careful here. Intent is incredibly hard to prove, firstly. Particularly when you are talking about new sciences and new materials whose risks are not fully understood, it would be very easy for a state or a monitor to say, "I just didn't know" or "I wasn't aware," which may be the case. So I think the intent part of it is problematic.

I think that Treaty contains the bones of the ability to open up the conversation again. It is a Treaty that exists already. It is a Treaty that I was quite surprised to find has lain dormant. I don't think states have looked at it for one or two decades.

ANJA KASPERSEN: Since the mid-1990s.

KOBI LEINS: That's a long time ago. We really need to have these conversations.

The context as well is that when I first raised this ten years ago, I was told, "Do you really think the environment is that big of a deal?" I know for some of us the environment is front and center, but for some people it is still not.

Then, over time—as I was working on the book we obviously had the bushfires in Australia, and we have now had the floods more recently—I think those natural events have crystallized for some people that there are, at least where I live, real risks. Whatever you intend and whatever the scale—and this is the issue—the current framing is that there need to be "widespread, long-lasting, or severe" impacts, and ones that develop in quite a short timeframe.

We have a different world now. We have science that can cause long-term harm cumulatively. Should we only be looking at those short-term risks? I would argue very strongly no. Again, I think states need to consider that if you leave a country after a conflict in a state where no food can be grown safely and no water can be drunk safely, over the longer term that is a risk that should also be captured by the laws of war.

ANJA KASPERSEN: But even the immediate effects, that if you are manipulating access to water, the environment—

KOBI LEINS: A hundred percent, the risks in the immediate term also need to be reinforced, but I think the longer term needs to be explored as a greater risk, which is not being done. Again, it is an "and," not an "or."

ANJA KASPERSEN: Another question?

WENDELL WALLACH: Kobi, over the years as I have listened to these conversations and participated in them, there is a point where somebody comes in and says: "What you have just described is overwhelming. The complexity is daunting. We are not only talking about having to look at specific nanotechnologies but also we have the problems with bio and we have the synergies between the technologies. There is not the will to work on the treaties, to restrict the deployment of these technologies. There is not necessarily even the will or the funding to put in place the complex infrastructure or to even put in appropriate oversight."

And of course now we have this context where we are seeing rash violations of international laws with great impunity. One day you pass a no-cluster-bomb treaty and the next day cluster bombs are being deployed.

I hear over and over again: "Why should I be anything other than pessimistic? Are you doing anything more here than waving a cautionary flag, if not a flashing red Doppler signal or something like that?" What is it that gives you hope or what actual first steps do you think would lay foundations that people can be hopeful or trust that they are actually putting in place so we achieve these kinds of goals?

KOBI LEINS: I think if I didn't have hope I wouldn't have written the book and I would have gotten under the duvet.

I think throughout history there have been times when the appetite for disarmament and weapons control, or for finding solutions to any complex problem, has waxed and waned, and right now we are in a waning moment. I think generations forget what has happened before, and we are currently in a moment where it is very easy to forget. While the horrors of some of the earlier wars were still fresh, there was a greater understanding of the value of some of these laws.

I don't have all the answers. What gives me hope, though, is that there are enough people in these communities who do care and who are informed enough to actually kickstart this conversation.

I think the bigger challenge for me is: How do we keep these new technologies, which are complex and overwhelming, relevant—and I think this might be going to the heart of your question—in the face of violations of core principles with basic weapons? Who cares about the new stuff when we have all these old weapons? If you drop a conventional weapon on a hospital or school, why are we talking about nano?

Again, I don't think it's an either/or, and this is part of the reason I really wanted to put something in writing that sets out what we should be complying with under international law.

I think this is a moment right now—the flagrant violations we are seeing now are horrendous—but we can't lose sight of the other issues that are still on the horizon, because they will become more and more prevalent. These new, emergent, and now current technologies that have not been used on the battlefield yet will be seen on the battlefield, and when that happens I want something in writing that says they are covered by international law. I don't want people to be saying, "We need new law," because we don't.

That is the hope, I suppose, that there have been treaties negotiated over decades in the past that carry a lot of weight. Treaties are not some kind of dead instrument. As those who have worked in negotiations know, they are live things that involve relationships, communities, and people who care.

I think probably the biggest answer is that I see people who care, and as long as there are people who care there is hope. What is the alternative? For me I can't not try.

But these projects are also hard. Getting across the science was hard. It would have been much easier to just say, "Uh, nanomaterials," which is sort of what had happened up until this project, and the same to a certain extent with quantum. I think there are still other gaps that need to be filled, where people need to come in and ask: "Where is the technology? Where is it going to be? What are the real risks? How does this really change something?" We do need to have more forward-thinking lines in the sand to say: "Look, when you get here and this is developed—no, not with this tool. You don't. We have the law."

QUESTION: What is your feeling and what is your experience insofar as the bringing together of the scientific community and their recommendations and their attempt to explain the potential risks—long-term, short-term, complex—to the diplomatic community, to people with very different sets of backgrounds, educational trajectories, all those types of things? You have very different groups of people with different pedigrees, different interests, and different obligations professionally coming together when we are talking about something that is scientific?

You mentioned previously, "There is a gun, there is a bullet, you put it in," and you can imagine that anybody with a basic rational mind can understand the outcomes. But when you start to talk about percentages, statistical significance, nanomaterials that are a particular type of molecular structure, etc., you can quickly lose people who are just one field over but also scientists.

What was your experience in this in terms of when these two groups of people meet how does it go? How does the information transfer go? Is there space for hope there, or is that something that needs a tremendous amount of work, and what type of opportunities are there?

KOBI LEINS: I love that question. I have spent nearly all of my career as either the only or one of two women in the room or the only person who is not of a major group, so I think I have specialized in being an outsider.

As a legally trained person who wrote a paper on dictatorships and humor in Germany and the fall of the Wall, I then ended up working with scientists on the Biological and Chemical Weapons Convention compliance side. I have always had this desire, and I think there are other generalists like me who need to be given greater voice and greater roles to translate.

I translate all the time. I sit in rooms where I use words that are not the complex academic words, although I can use those, but what is important to me is that translational work between cultures. The differences are huge in some aspects between whether you have a science degree or a law degree. We are all shaped by our original form of study, those of us who had the privilege of studying, which not everyone does. A lot of the work also needs to be done with people who don't have degrees. Again, it is making this information accessible. But I think that work can be done, it needs to be done, and it needs to be prioritized.

More recently I was again the only lawyer in two computer science and engineering schools saying, "Do you know there are risks to what you are building?" Having those conversations takes time, building culture takes time, but it doesn't mean we can't do it.

I think education has to change and the way we engage has to change. Policymakers need to listen, but scientists also need to be more accessible and need to be able to translate their work.

I had this argument very directly in the academic circles I was in where I was told that I needed to use more "academic" words. I was told this to my face. I did a mapping project for cyber in Australia and I was told, "You need to use more academic words."

I was also told that mapping was not an academic project. My book is a map. My book is a map of what exists to find where the gaps are. Every time I see a problem—maybe it's just the way my brain works—I want to know what exists already before I go in and go, "We need new law, we need new law." What exists now, what is missing, what is new? This is a process, an approach.

We need to be less snobby. We need to let more people in the room. I think—to the point of Pugwash, which I raised earlier—we need more scientists who are engaged in policymaking, who understand, and that comes back to issues of conflict of interest and issues of funding. We need to have those resources to ensure—

ANJA KASPERSEN: Issues of scientific methods.

KOBI LEINS: Issues of scientific methods even within computer science let alone between the different disciplines, so when you are adding AI to nanomaterials or AI to chemicals or any of these sorts of connections, we need to be doing much, much more of the translation work and be a little bit more humble about the language we use and let more people in.

ANJA KASPERSEN: This is a great segue. You have a whole career other than writing about the weaponization of new technologies.

As you alluded to, you work in engineering schools, you work with computer scientists, you are regarded as a global expert on the issues of how do we manage data responsibly, how do we develop science responsibly, how do we develop AI responsibly, and the ethical considerations that often get forgotten.

What have you seen in this line of work? You have been really pushing some new boundaries and barriers on rethinking how we think about computer science per se, the power that is held by the engineers and the computer scientists, but also how we are embedding these technologies.

We started off talking about whether technology can ever be neutral. You have been a very strong advocate for saying that the minute it is embedded it is not. You may think that it's neutral at the development stage, but even there you have to question it from the very conception, because you are embedding values, historical patterns, economic considerations, and, like you said, some of the other issues in terms of funds and where the grant comes from.

What are you thinking about? What are the big issues and perils we are facing right now? Do we have the right type of scientific discourse and open enough scientific discourse to be addressing that?

It's a big question, I know.

KOBI LEINS: It's a huge question.

ANJA KASPERSEN: You can go in any one direction. That's fine.

KOBI LEINS: I really don't have all the answers is what I am going to start by saying. I just see problems and I keep poking at them. I see things that are problems and I write about and talk about them.

For those who still think technology is neutral, Langdon Winner is one of my favorite authors, and he writes about physical structures. The overpasses here in New York were built really low to keep buses out, because certain people took buses and they didn't want them coming into New York. We have certain energy structures and certain physical structures that shape the world that we have.

What I think about a lot is not critiquing or saving the world we have. I think what we have lost the ability to do is what Sheila Jasanoff calls "sociotechnical imaginaries." We are not imagining the world enough. We build these tools, we make these tools, we decide how these tools are used, we set limits on these tools—we as humans. To the point about whether it is guns that kill people or people who kill people: it's people who do the research and it's people who write the policy.

I for one would really like to see a world where we are having more direct and thoughtful conversations about what we actually want to be achieving and for whom. Who are these developments benefiting?

Not in a naïve kind of "I'd like to sit down and have a little think" way. As I said in a recent job interview, "I really don't want to do fluffy bunny ethics." I thought that probably shouldn't have been said in a job interview, but they hired me anyway.

It has to be practical. It has to be granular. There has to be decision-making and ownership over these tools by whoever makes and builds any of them, and that includes the technologies we are seeing that are shaping our society in ways far beyond what we even understand yet. We need to be holding people to account and also calling for change.

ANJA KASPERSEN: Do you feel that this is well understood by computer scientists and people working on the development of new technologies, be that on nanoscale, in bio, in chem, or in other fields?

KOBI LEINS: Absolutely not—that was the most regular response I had working with computer scientists. Again, "computer scientist" is an overreach; computer scientists fall into multiple different communities. Those communities again have different cultures, and there is a piece of research that I worked on looking at the hypothesis that the more diversity you have in a group, the more interesting thought you have and the more responsibility.

Again, coming back to AI, a lot of the people calling for change and limitations are sort of "outsiders," not people on the committees, and there are not a lot of women. The more diversity you have, the more risk management you have. It is all tied in again. The more diversity you have, the more you let other people into the group, the more you are going to have people raising these issues.

Ethics is often touched upon lightly in computer scientists' training, but it's not how they think. I always joke and say, "Ethics is not binary, computer programming is," and so there is this desire to sort of code in ethics. There are things that are innately and uniquely human, and I do not think all things can be automated.

ANJA KASPERSEN: You can't code all human facets.

KOBI LEINS: No.

ANJA KASPERSEN: You spoke about the computer scientists. Going to the policy side, which you have also been part of for a long time, do we have the right level of understanding on the policy side of the intricacies that some of these technologies, especially the combinatorial effect, have?

KOBI LEINS: I wasn't thinking I was going to quote Ted Lasso in this talk, but I have to. (laughter)

There are different cultures and different groups, but on the policy side—and I say this as a legally trained person—you can't be arrogant and curious at the same time. You have to be able to sit in a place of "not knowing." Every day that I go to work I don't know things; I ask dumb questions. I think when you have people who have come through a particular line of training—policymakers are often in very prestigious roles—they don't want to ask questions that make them look stupid.

We all have to ask questions that make us look stupid, because nobody understands what's going on right now. Nobody understands enough of it, let alone all of it, and no question is stupid. We just need to be more humble. Again, it comes back to a way of engaging with the world by asking more questions and being more curious.

It's hard. Policymakers need to have a certain interactional way of engaging, but again I think it comes back to that culture. You need to build up a culture where there are relationships between the scientists and the policymakers and people feel safe enough to ask questions, and I think all of that takes investment and time.

ANJA KASPERSEN: Earlier you mentioned Pierre Bourdieu, the gaps, the doxa. You and I have spoken about this before. It was then taken up in some of Gillian Tett's work on "social silence," which builds on Bourdieu's work: issues that are purposely kept out of the public discourse, out of the public domain, but that drive some of the more political movements we see in society.

My last question to you: What are the social silences that we should be mindful of?

KOBI LEINS: In relation to nano or more broadly?

ANJA KASPERSEN: In relation to emerging technologies more broadly, but I think maybe with a specific focus on nano, the environment, and AI. Three different domains, but I think some of the social silences bizarrely and maybe naturally coincide.

KOBI LEINS: I think there is one word that we don't talk about enough, and that word is power. All of those areas of research are being funded by certain groups with certain interests. When I say "power," what I mean is that we started out talking about—to use AI again as an example—the ethics of AI. I think that conversation is shifting and ethicists are relevant, but what we are ultimately talking about is power: Who is funding what, and to what purpose?

Not all groups have equal access. I don't think you can have any kind of equitable world without more diversity in the room, having a conversation saying, "This is how it affects me." A lot of these tools are being developed and used by particular groups for particular purposes. Again, none of this is new. This is not some 2022 thing.

But what is different is that the amount of money that has crystallized is so enormous, and those people have very specific views. One of them is that having the rich survive through climate change and propagate into the future is better than saving the whole planet now. Well, I fundamentally disagree. We actually have an obligation to look after this planet and these people, and I don't think just saving the rich should be the goal of the work that we do.

But again, a lot of this comes down to how do you assess those values, how do you build those communities, how do you have these safeguards? It requires funding, so there is this inherent tension in there. That is a conversation that I think we need to be having.

ANJA KASPERSEN: Tackling those inherent tensions is also the core of the work of the Carnegie Council in trying to re-envision ethics for the Information Age.

Thank you so much, Kobi, for this great talk.

KOBI LEINS: Thank you so much for having me.

