Technology Run Amok: Crisis Management in the Digital Age, with Ian Mitroff

January 8, 2019

Gold-leaf tattoos will become the screens of our devices, and chips will be implanted in our brains, says crisis management expert Ian Mitroff. These are worrying technologies now in development, pursued without regard for the consequences to our minds and bodies. He blames the "technological mindset," the belief that technology can solve every problem. We need technologists, but we also need ethicists, and we need crisis plans in place.

JOANNE MYERS: I'm Joanne Myers, director of Public Affairs programs here at the Council. In a few minutes, I'll be speaking with Ian Mitroff, author of a very timely new book entitled Technology Run Amok: Crisis Management in the Digital Age.

Ian is one of the founders of the crisis management discipline. Currently he is a senior affiliate at the Center for Catastrophic Risk Management at the University of California-Berkeley and professor emeritus of the Annenberg School for Communication and the Marshall School of Business at the University of Southern California (USC). At USC he founded and directed the USC Center for Crisis Management.

Thank you for joining us today.

IAN MITROFF: My pleasure. Thank you for having me.

JOANNE MYERS: There is no question but that technology has had an unparalleled effect on our lives. Many proclaim its positive impact. Others point to its negative effects: emotional, physical, and social.

Your book Technology Run Amok seems to be a call to arms about this latest revolution. So let me ask you: What makes this current revolution different from others?

IAN MITROFF: It's the sheer pervasiveness and invasiveness of technology, and let me give you a few examples and make it concrete.

Recently, Massachusetts Institute of Technology (MIT) engineers developed gold-leaf tattoos that can be placed directly on our skin so that we can communicate seamlessly with all of our wonderful devices. In other words, our skin will be the window, the screen, into our devices. The problem is that young kids already sleep with their cellphones under their pillows at night lest they miss an important text. Now their skin is going to be buzzing all night long? The point is that we continue to dump all this stuff out on society and only later, if ever, think about the negative unintended consequences.

I'll give you an even more ominous one. Artificial intelligence (AI) enthusiasts have proposed putting chips in our brains so that AI will not only be in us but will literally be of us. No thought is given to the fact that the brain is a very complicated mechanism, to put it mildly. What if somebody were to hack into the chips? What kind of damage could they do to our brains and our bodies?

We can talk about robots taking over, or driverless cars, which, yes, would be safer but would displace millions of people. So the point is that technology is more pervasive and invasive than ever before in our history.

Yes, of course, there's the positive side. I'm not anti-technology. I have a Ph.D. in engineering science, but that's not the point. The point is that we haven't done a good job of really managing technology. The crisis management perspective I bring is that all technologies are abused and misused in ways not envisioned by their designers, and we could talk about Facebook here galore. Thinking that through has to be an integral part of what we do before we unleash any technology on the world.

I want to know: What are the plans? What crisis plans are in place, and what thought has been given to how damage can result from the best of intentions? One of the things I do in my book, after reading scores of books on technology, pro and con, is explicate something I call the "technological mindset," a set of underlying beliefs that drives technologists. It's not all bad, but one of its central beliefs is that technology is the solution to all our problems.

JOANNE MYERS: Let me just stop you one moment. Could you define what you mean by the "technological mindset"?

IAN MITROFF: When I read this slew of books, I realized that there was an underlying set of beliefs, a belief system, not a bunch of independent attitudes floating around in space but something coordinated. And when you look at it, it is a mindset, one that impresses on people the importance, if not the supremacy, of technology. It's almost "technology über alles."

Let me just give you a few of the beliefs, otherwise it gets too long. One, the number-one belief, is that technology is the solution to all our problems, including those caused by technology. Two, technology is the single most important factor responsible for progress. Three, technologists should concern themselves only with the positive aspects of technology and leave the negative to somebody else. Four, the sooner we're taken over by robots, the better. Five, the sooner we give all our data to Google, the sooner they will make better decisions for us. When you look at this as a whole, you begin to see the belief system that underlies our obsession with technology.

Not all technologists have this, thank god, but unfortunately, in my experience interacting with technologists over a lifetime, many do in one way or another. Probably the most prevalent element of the tech mindset is the radical separation between technology and its social implications for the world: "That's not my job. I'm not concerned about it. I don't need to think about that. I focus my efforts solely on technology." We can't afford to do that anymore. We need technologists who are informed by the humanities, and vice versa.

Let me give you a concrete example. My wife has a Ph.D. in education and child development. I don't know the name of the company, but my wife told me about one of the people she has mentored: the person's company had developed technology that was intended solely for adults. Unfortunately, kids began to play around with it. A light turned on in their brains: "Oh, my god. If kids are playing around with this stuff, then we had better get somebody in here who knows something about kids." So they hired an expert in child development.

That to me is one of the key elements of a socially responsible company, a tech company, that says: "Wait a second. We need to have a different set of people in here than the usual suspects. Yes, we need technologists, but we also need ethicists."

JOANNE MYERS: Let me just stop you again and ask: When you opened, you talked about tattooing our skin. Doesn't that need Food and Drug Administration (FDA) approval, or somebody brought in to say it is okay to put tattoos that will affect the mind on our skin?

IAN MITROFF: Well, ideally, theoretically, yes, you're right. That's what we should do, but I don't know whether that's the process that they will go through.

But actually, that's what I'm talking about. If anything, I think a new federal agency needs to be created to subject tech to regulation, because it has been largely unregulated. And by the way, that's another key element of the technological mindset: "We don't need to be regulated. We'll regulate ourselves, because regulation would impede innovation, which is the most important thing." That's almost lawlessness. We can't afford to operate that way. So if you just take the tattoos, you're right. But now think about all the other things, like putting chips in our brains.

JOANNE MYERS: That's another thing. When you put something in your body it seems to require some type of oversight.

IAN MITROFF: That's exactly what I'm talking about. What will be the oversight agency?

One of the things I've been looking at is that for years Congress had a technology assessment agency, the Office of Technology Assessment, that would inform members of Congress on scientific and technical matters. It was disbanded. We need to bring something like that back, gangbusters, because here's another point: it's not just these things individually, important as each is, but what happens when they all interact with one another.

So we put tattoos on the outside of our skin and we put chips inside. What kind of interactions are we setting up? The mind/body system is so complex.

And now, in addition, with Facebook we're operating on the body politic of society. We're interfering. So at every level, going back to the question you asked earlier about what's different: never before, to my knowledge, in the history of the world and the history of science, have we been able to intervene in so many levels and aspects of society, our minds, and our bodies as we are able to do now. And who is in charge of it? Is there a central element, mind, or body that is taking a look at this and saying: "Hey, wait a second. Suppose this thing here interacts with that. What are the effects, good and bad?" That's what I'm talking about.

So it's not that I'm talking against technology. We're a heavily technological society and world, and nothing is going to stop that. But boy, we need to take a harder look at what the unintended consequences are and how things will be abused and misused.

Before we began, you and I were talking about this: The New York Times, as we know, ran a series of devastating articles, an exposé, on Facebook.

JOANNE MYERS: Let me just stop you again. For our listeners out there, I think we're now transitioning to talk about the role that many of the tech companies play in addressing crises. They seem to be more reactive than proactive. As you know, and as you're starting to say about Facebook, it was reported in The New York Times that Facebook shared vast swaths of user data with companies such as Spotify, Netflix, Apple, and even a Russian search giant, Yandex.

In essence, what this means is that Facebook breaches our privacy and sells our information to outside businesses without telling us. So, as a crisis management expert, what would you advise them to do now, and what should they have done before?

IAN MITROFF: Let me backtrack. What's even worse is one of the things I came across in doing my research: Facebook employed facial recognition technology with a lot of companies, focused in on the most vulnerable kids, and then sold that information to companies that would prey on those kids, selling them hyped-up stuff they didn't need.

What would I do? Here's the point. Unless you build crisis management into an organization from the very beginning, it's very, very difficult to build it in at the end.

The point about The New York Times exposé is that time and again Zuckerberg and Sheryl Sandberg were warned of problems that were more than just brewing, that were more than highly likely, that in fact had already occurred, like the misuse of their platform. They not only ignored but blocked the early warning signals of impending problems. In fact, they set out on a campaign to discredit the people who brought forth those signals, particularly external critics. One of the worst things of all from The New York Times reporting is that they labeled critics who brought forth important and reasonable information as anti-Semitic. So they did everything possible to discredit the critics until the problems became overwhelming, burst into the public consciousness, and could no longer be ignored.

You asked me, what would I do? In many cases, not all, the only way to regain some sanity and some ethical direction is to really fire the top management.

Let me put it another way, in very stark terms. There is no question obviously that Zuckerberg was smart enough to invent Facebook. He had the technical knowledge. He doesn't have the social maturity and breadth of social knowledge—

JOANNE MYERS: And lack of moral responsibility, I might add.

IAN MITROFF: —and the moral responsibility to manage a complex organization.

I have in front of me a cartoon from the Chicago Tribune, and it shows Zuckerberg. Here's what it says: "For all of the scandals—data mining, fake content—we are sincerely sorry that we got caught."

If you get stuff like that from the public media, that's almost the kiss of death for an organization. Every day bad news piles up. They can't seem to get out of it.

Let me put it another way. You asked from a crisis management standpoint what ideally would have been done at the first signs of cyberbullying, and in fact even before. I feel almost 100 percent sure that if teams of parents, kids, teachers, and psychologists had been assembled from the beginning, they would have picked up on the possibility of Facebook being used as a platform for cyberbullying.

The point is, one of the hardest things to do once you have created your baby is to ask, "What are the ways it will be misused?" because you're thinking about all of its positive attributes instead of how they'll be turned back on themselves.

But let me give you another case. This happened years ago when I was at USC and ran the USC Center for Crisis Management. We had corporate sponsors who not only gave us money to do our research but gave us entrée into their organizations so we could study firsthand what they were doing.

I was out on the road one time and went to a major pharmaceutical company. I asked the guy who had agreed to talk with me, "What are you guys doing about product tampering?" That's one of the major things that worries drug companies, particularly after the Tylenol poisonings.

Without missing a beat, this guy turns to me, and he said, "We have an internal assessment team."

I said, "You have what?"

He said: "Yeah. One day we realized we know more about our product than anybody else, so we held up one of our bottles of pills, painkillers, and we looked at the cap as the front door of a house, and the walls are the walls of a house. And we asked ourselves: 'How could the burglar get in, do the most damage, and remain undetected for as long as possible?'" I won't go through everything.

They realized there is no such thing as a tamper-proof seal, but they came up with tamper-evident seals. If somebody came in and fooled around with a seal, it would become evident, and the client, the patient, would not take the product.

The point is, think about what they did. Would that every organization did it: ask, "How could somebody do the most damage to our pet product, our favorite product?" I realize how difficult that is, because you have to take something you love, something you brought into being, your baby, and ask how it could be attacked. But unfortunately, as we know, that's the kind of world in which we live. It requires strength of character and strength of mind to say, "We believe in our product, and we're going to do everything we can to protect it."

If anything, I would make that mandatory. In fact, first of all, I'd like to see these companies branded as media companies. They would have to apply for a license to operate, and as part of that license they would have to have a crisis plan that is reviewed periodically, a couple of times a year.

JOANNE MYERS: Do you see this media regulation movement going forward?

IAN MITROFF: I think it will happen. It certainly is happening in the realm of privacy.

One of the things we know is that the United States has some of the weakest privacy laws in the world, while the European Union has some of the strongest. In the European Union, when you sign off and check the boxes about what you agree to and how your data will be used, the terms have to be written in the simplest, plainest language possible, devoid of legalese, or at least translated and understandable. And if your data is going to be sold or used by others, you have to be informed on a very timely basis. So the point is, a model already exists in the Western industrialized world.

The thing I'm talking about now has to broaden way beyond privacy. Privacy is the thing that sets us off: our data, our most personal data, being sold. But now, as we were saying earlier, it goes even further, with chips put inside of us and tattoos on our skin, which is where the Food and Drug Administration comes in.

So the point is, it has broadened. There is not a single aspect of our bodies or minds that is not subject to technology.

JOANNE MYERS: Right. But is there any legislation pending that would change the trajectory of it?

IAN MITROFF: Not that I know of. The only thing I do know of, from The New York Times articles, is that the very senators to whose campaigns Facebook has contributed a lot of money and lobbying got really mad when the exposé came out, and they're reversing their hands-off policy toward Facebook and other social media companies.

What that will ultimately result in, I don't know, but I think regulation will come. Does it need to? Absolutely. Do they need to be regulated? Absolutely.

What form it will take, I don't know, because we've been living under weak or no regulation; they've been let off because they've lobbied their way there. It doesn't work. Self-regulation doesn't work.

I don't know of any other kind of industry that affects our health in such prodigious ways yet is not regulated. And why is that? Because that's part of the attitude, part of the mindset, that we've all been subject to.

JOANNE MYERS: Well, on that somewhat optimistic note, hoping that we can go forward and institute legislation that will protect us from Facebook and the obvious misuse of its technology, I'd like to thank you very much for spending the morning with us. We'll continue to watch what happens next.

IAN MITROFF: Thank you very much for having me. I appreciate it.
