Six Questions About New Surveillance Technologies, with Arthur Holland Michel

October 18, 2022 - 7 min watch

For Global Ethics Day, Senior Fellow Arthur Holland Michel explores questions related to emerging technologies and privacy.

"How will these systems be used? How do we ensure they are used transparently? Who is accountable when something goes wrong?" says Michel. "The best way to address these questions is to have an honest, inclusive discourse where everyone has a voice."

Michel is also a member of the advisory board of the Artificial Intelligence & Equality Initiative.

For more information on Global Ethics Day, click here.

These days, we are witnessing the emergence of a wide variety of new surveillance technologies. Things like facial recognition, drones, location databases, and data fusion. It’s all happening so fast that it can be hard to know where to start when thinking about and discussing each of these technologies’ ethical implications.

However, though these machines come in many different forms and do many different things, they all raise some of the same ethical concerns. So for Global Ethics Day 2022, I wanted to highlight six key ethical questions that are common to all emerging surveillance technologies.

First, does the technology actually work?
It's easy to think that these technologies are all super powerful. But sometimes—indeed, oftentimes—new surveillance technologies don't prove to be as effective or reliable as one imagines they'll be. A poorly performing surveillance system might be more likely to cause harm (say, by misidentifying a suspect in a crime), and its limited or inconsistent benefits won't outweigh its costs to privacy and freedom. Therefore, understanding a technology's real-world effectiveness is important for preventing unintended harm, as well as for deciding whether or not the technology should even be used in the first place.

Second, is it fair?
There's ample evidence to show that new surveillance technologies are disproportionately used against—and cause disproportionate harm to—non-white populations, as well as socially and economically marginalized groups. This has been a consistent pattern in the past, and it is likely to continue in the future. Therefore, new surveillance technologies require a robust assessment to determine their impact across different segments of society and, as needed, rules to forestall anticipated inequities.

Third, how will it be used?
Even when a new surveillance technology is at first only used for a seemingly noble purpose (say, for example, finding people who have been kidnapped), that doesn't mean it will only ever be used that way. New surveillance technologies often end up being used in ways that go far beyond their original purpose, for tasks that raise serious ethical concerns, such as identifying protesters who are exercising their right to free speech. Therefore, when a new surveillance technology emerges, it is helpful to consider not just the ethical balance of its stated purpose, but also the ethical implications of all the other ways it might hypothetically be used.

Another question is: what happens to the data?
New surveillance technologies tend to generate large amounts of detailed digital data. In the absence of clear standards for how—and for how long—surveillance data are stored and secured, as well as rules for how the data can and cannot be used, the data may be exploited for privacy intrusions that have nothing to do with the original reason that these data were collected.

Next, it’s important to ask: will it be accountable?
Even the most reliable, equitable surveillance technologies can fail, and there is always the chance that they will be intentionally used in ways that overstep ethical bounds. When this happens, a clear, transparent, standardized process of accountability is important for ensuring that those who were affected have recourse to justice and that those who are responsible for the harm face appropriate consequences. This will also help to dissuade authorities from using surveillance technologies in unethical ways, or in ways that go beyond their original stated purpose.

And finally, is it being rolled out and used transparently?
It is impossible to address any of these other questions if we don’t know about the surveillance technologies used in our communities. And yet unfortunately, new surveillance technologies are often deployed in the dark. This makes it impossible to have any kind of public scrutiny or discourse that addresses any of the other questions I’ve just mentioned. Transparency is also, in itself, an ethical principle. Privacy scholars and existing case law tend to agree that you have the right to know about the new surveillance technologies that are used either directly against you or your community as part of an investigation, or that may collect data about you incidentally in the course of their use. Oh, and transparency is also a good way of keeping our overwatchers accountable.

There are, of course, other questions. But this is a good place to start. The best way to address these questions is to have an honest, inclusive discourse where everyone has a voice. So I invite you all to share your own questions, concerns, and personal experiences using the hashtag #GlobalEthicsDay. Thanks for watching!
