Six Questions to Ask About New Surveillance Technologies, with Arthur Holland Michel

October 18, 2022 - 7 minute watch

For Global Ethics Day, Senior Fellow Arthur Holland Michel explores questions related to emerging technologies and privacy.

How will these systems be used? How can we ensure they are used transparently? Who is responsible when something goes wrong? "The best way to address these questions is to have an honest, inclusive discourse where everyone has a voice," says Michel.

Michel is also a member of the advisory board of the AI & Equality Initiative.

For more information on Global Ethics Day, click here.

These days, we are witnessing the emergence of a wide variety of new surveillance technologies. Things like facial recognition, drones, location databases, and data fusion. It’s all happening so fast that it can be hard to know where to start when thinking about and discussing each of these technologies’ ethical implications.

However, though these machines come in many different forms and do many different things, they all raise some of the same ethical concerns. So for Global Ethics Day 2022, I wanted to highlight six key ethical questions that are common to all emerging surveillance technologies.

First, does the technology actually work?
It's easy to think that these technologies are all super powerful. But sometimes—indeed, oftentimes—new surveillance technologies don't prove to be as effective or reliable as one imagines they'll be. A poorly performing surveillance system might be more likely to cause harm (say, by misidentifying a suspect in a crime), and its limited or inconsistent benefits won't outweigh its costs to privacy and freedom. Therefore, understanding a technology's real-world effectiveness is important for preventing unintended harm, as well as for deciding whether or not the technology should even be used in the first place.

Second, is it fair?
There's ample evidence to show that new surveillance technologies are disproportionately used against—and cause disproportionate harm to—non-white populations, as well as socially and economically marginalized groups. This has been a consistent pattern in the past, and it is likely to continue in the future. Therefore, new surveillance technologies require a robust assessment to determine their impact across different segments of society and, as needed, rules to forestall anticipated inequities.

Third, how will it be used?
Even when a new surveillance technology is at first only used for a seemingly noble purpose (say, for example, finding people who have been kidnapped), that doesn't mean it will only ever be used that way. New surveillance technologies often end up being used in ways that go far beyond their original purpose, for tasks that raise serious ethical concerns—for example, identifying protesters who are exercising freedom of speech. Therefore, when a new surveillance technology emerges, it is helpful to consider not just the ethical balance of its stated purpose but also the ethical implications of all the other ways it might hypothetically be used.

Another question is what happens to the data?
New surveillance technologies tend to generate large amounts of detailed digital data. In the absence of clear standards for how—and for how long—surveillance data are stored and secured, as well as rules for how the data can and cannot be used, the data may be exploited for privacy intrusions that have nothing to do with the original reason that these data were collected.

Next, it’s important to ask, Will it be accountable?
Even the most reliable, equitable surveillance technologies can fail, and there is always the chance that they will be intentionally used in ways that overstep ethical bounds. When this happens, a clear, transparent, standardized process of accountability is important for ensuring that those who were affected have recourse to justice and that those who are responsible for the harm face appropriate consequences. This will also help to dissuade authorities from using surveillance technologies in unethical ways, or in ways that go beyond their original stated purpose.

And finally, is it being rolled out and used transparently?
It is impossible to address any of these other questions if we don't know about the surveillance technologies used in our communities. And yet, unfortunately, new surveillance technologies are often deployed in the dark. This makes it impossible to have any kind of public scrutiny or discourse that addresses any of the other questions I've just mentioned. Transparency is also, in itself, an ethical principle. Privacy scholars and existing case law tend to agree that you have the right to know about the new surveillance technologies that are used either directly against you or your community as part of an investigation, or that may collect data about you incidentally in the course of their use. Oh, and transparency is also, in itself, a good way of keeping our overwatchers accountable.

There are, of course, other questions. But this is a good place to start. The best way to address these questions is to have an honest, inclusive discourse where everyone has a voice. So I invite you all to share your own questions, concerns, and personal experiences using the hashtag #GlobalEthicsDay. Thanks for watching!
