ICGAI Catalyzing Cooperation: Working Together Across AI Governance Initiatives

2021 年 3 月 29 日

This was the launch of the International Congress for the Governance of Artificial Intelligence (ICGAI) online speaker series. The theme of this first event was "Catalyzing Cooperation: Working Together Across AI Governance Initiatives." Topics included insights from high-level experts and policymakers on comprehensive and trustworthy governance, as well as an overview of the proposal for a Global Governance Network for AI (GGN-AI).

This event was streamed live on YouTube. To follow along with the presentations, please watch the full video.

MICHAEL MØLLER: Hello, everyone. A very warm welcome to you. My name is Michael Møller. I am from Denmark, and I am a former under-secretary-general of the United Nations and currently the chairman of the Diplomacy Forum in the Geneva Science and Diplomacy Anticipator.

I am very pleased to be co-chairing this first session of the International Congress for the Governance of Artificial Intelligence (ICGAI) with my colleague Nanjira Sambuli, a Kenyan researcher, writer, policy analyst, and advocacy strategist, who works to understand the intersection of information and communications technology adoption with governance, media, entrepreneurship, and culture.

Today and on April 13 we will explore where we are in the development of the much-needed governance tools for the ethical and legal oversight and governance of powerful AI systems and what challenges in the governance realm are not yet being addressed adequately.

We all know that AI and emerging technologies more broadly hold great promise for humanity's well-being and yet, if unregulated, can pose serious risks and undesired societal consequences. The profound impacts of new technological applications expose fundamental inadequacies in existing global mechanisms for international collaboration. They highlight our inability to develop creative new approaches that deliver tangible, implementable practices and impactful results, which we urgently need in order to address the many existential challenges we face, including the deep and growing inequalities in our societies.

AI and other cutting-edge technologies challenge existing structures and mindsets and require us all to work towards new and much more effective models for international cooperation. Hence, the title of our session today, "Catalyzing Cooperation: Working Together Across AI Governance Initiatives." It is very much our hope and ambition that the presentations in today's session and the discussions in our second session on April 13 will help us move further towards that goal.

After opening remarks by the interim United Nations tech envoy, Maria-Francesca Spatolisano, there will be a few short welcoming greetings by the organizers of the ICGAI, including by the lead organizer of the Congress, Wendell Wallach, who will delve a little deeper into the background and intentions of the ICGAI.

We have a number of great, high-level, diverse, and very knowledgeable speakers today, and I want to thank them all very much for having agreed to participate in our Congress.

It is now my great pleasure to hand the floor over to my co-chair, Nanjira Sambuli. Thank you.

NANJIRA SAMBULI: Thank you, Michael. It is a pleasure to be co-chairing this important gathering.

In addition to the international and global governance mechanisms we need for artificial intelligence, as Michael has pointed out, we must pay special attention to whose experiences and worldviews get to shape them. The world is highly unequal, as we know, and power and resources are unevenly distributed. This makes it challenging to ensure that AI is conceptualized, developed, and deployed in a manner that maximizes benefits and opportunities while minimizing harms and risks.

To that point, inclusion, or inclusivity, is a fundamental principle for shaping artificial intelligence itself as well as its consequent governance mechanisms. Inclusions or exclusions span geography, sector, gender, and age, just to name a few dimensions, and all of these must be given careful consideration throughout our deliberations.

The multi-stakeholder governance approach that is often talked about does hold promise for making AI governance more inclusive as we will see and hear from diverse perspectives throughout the Congress, in today's session and the session on April 13. We absolutely need inclusive and equitable deliberation tables, and I hope that the discussions during this Congress contribute positively to that.

I am looking forward very much to today's discussions. Thank you, Michael. Over to you.

MICHAEL MØLLER: Thank you very much.

It is my pleasure to kick us off with a prerecorded message from United Nations Assistant Secretary-General Maria-Francesca Spatolisano, who is the officer in charge of the Office of the Secretary-General's Envoy on Technology. She is a former European Union ambassador to the Organisation for Economic Cooperation and Development (OECD) and to the United Nations Educational, Scientific, and Cultural Organization (UNESCO).

MARIA-FRANCESCA SPATOLISANO: Good morning, afternoon, and evening wherever you are, distinguished participants.

I would like to begin by thanking the Congress organizers for inviting me to speak to you today. I am especially grateful to speak after my distinguished former UN colleague and Congress co-chair Michael Møller, who has been a pillar of the United Nations system and continues to contribute his time and effort to our work.

I speak to you today in my capacity as officer in charge of the Office of the Secretary-General's Envoy on Technology at the United Nations. This office was established by the secretary-general to support the implementation of his Roadmap for Digital Cooperation and to bring together all stakeholders to better harness the potential of technology while addressing its challenges.

The ways in which artificial intelligence, or AI, changes our lives are limitless. AI technologies are being used in everything from commercial services to public services in areas as diverse as education, health care, infrastructure, and much more. AI's transformative character can enable rapid economic and social progress, helping us to achieve the Sustainable Development Goals.

However, like many other technologies, AI is not neutral. For example, AI can amplify existing biases. It often discriminates based on its limited training and may help spread misinformation, hate speech, and violent content.

These are not challenges of AI technology we can afford to ignore, but they are also not faults we must live with. This is why there are many ongoing governance initiatives like the OECD AI Policy Observatory, the Global Partnership on Artificial Intelligence (GPAI), and this one, the International Congress for the Governance of AI.

Naturally much work still needs to be done to create inclusive, responsive, and effective global cooperation structures that can meet the challenges posed by AI. In particular we need to ensure greater representation from the Global South and developing countries as most of the voices in global AI discussions are from the Global North. This is where the United Nations can play an important role in bringing all concerned—governments, the private sector, civil society, academia, and the technology community—to the same table to work together.

As the secretary-general of the United Nations has frequently emphasized, how we address the challenges of the digital world is one of the key issues of our time. He has thus launched a Roadmap for Digital Cooperation, which lays out a vision on key digital issues such as universal connectivity, digital human rights, and digital inclusion. The Roadmap was developed after extensive multi-stakeholder and cross-regional consultations. It contains a series of action-oriented and concrete steps to take forward the recommendations of the secretary-general's own Multi-stakeholder High-Level Body (MHLB) on digital cooperation, which the Office of the Secretary-General's Envoy on Technology is now leading through partnerships with key UN entities and stakeholders.

In his Roadmap the secretary-general specifically stresses artificial intelligence as an area that needs greater global steerage. The secretary-general thus proposes to establish in the United Nations, the truly global and representative international body in this area, a multi-stakeholder advisory body on global AI cooperation to address issues around inclusion, coordination, and capacity building. The AI body will help share and promote best practices as well as exchange views on artificial intelligence, standardization, and compliance efforts. It will also provide guidance on artificial intelligence that is trustworthy, human rights-based, safe, sustainable, and that promotes peace.

These are the key questions that go to the heart of the challenges we face with AI: ensuring that this technology is used in ways that are positive, ethical, and environmentally sound and that respect and protect human rights. For example, how can we ensure that the use of AI technology does not perpetuate existing biases and disproportionately impact different groups? What safeguards do we need to put in place to ensure that AI is used in positive ways and does not, say, contribute to misinformation, deepen divisions, and stir up conflicts? I hope the discussions at this Congress can shed some light on these key issues, and I am pleased to note, for instance, that the Congress has a session on Trustworthy AI.

The secretary-general's proposal for a multi-stakeholder advisory body is part of the important work being done by the broader UN family on the issue of AI. For instance, UNESCO is working on global AI ethics standardization, the International Telecommunication Union (ITU) on building capacity for AI for Good, and the United Nations Children's Fund (UNICEF) on AI for children. The secretary-general has also called for a ban on the use of lethal autonomous weapons, as machines with the power to kill on their own, without human judgment and accountability, would bring us into unacceptable moral and political territory.

The Office of the Technology Envoy works closely with all UN agencies on these AI issues as well as on the broader implementation of the secretary-general's Roadmap. We will also continue consultations on developing the secretary-general's AI multi-stakeholder advisory body to ensure that the body is impactful and relevant in the already crowded and active AI space. Given the UN's unique convening role and universal legitimacy I remain convinced that this is where many of these important issues must be discussed.

Of all the emerging technologies, artificial intelligence stands out as the one with the greatest potential to empower but also to disrupt. This is why the stakes for international cooperation are greatest here. The fact that AI moves faster than normative and regulatory frameworks only further underscores the urgency. AI will be an essential tool in our journey towards a prosperous future.

I would like to conclude by inviting you and the broader AI community to share your knowledge and expertise in the UN work streams that we have established to implement the secretary-general's Roadmap, in particular in the areas where we address the impact of artificial intelligence. I know that some of you are already engaging with us in this important work. My office is your entry point to contribute to our collective efforts to build a more inclusive and fairer digital world together.

I thank you.

NANJIRA SAMBULI: And now it is my pleasure to hand over the floor to Jim Clark, who is one of the organizers of this Congress and founder of the World Technology Network.

Jim, the floor is yours.

JIM CLARK: Thank you, Nanjira.

Two decades ago I founded the World Technology Network to identify, highlight, and create a global community around the most innovative and impactful work in science and technology and related fields. Soon after we launched the annual World Technology Awards, including notably the World Technology Award for Ethics. In 2014 I presented Wendell Wallach with that award, and so began a wonderful friendship.

In 2018 Wendell and I began our collaboration to create the International Congress for the Governance of AI, building on his longstanding ideas about what was needed. We held very well attended and high-level planning meetings during the UN General Assembly in 2018 and 2019 as well as elsewhere.

To consider and develop proposals, ICGAI also held a series of high-level expert working group workshops, in particular in partnership with the Observer Research Foundation in Delhi, India; the Assuring the Safety of Autonomous Systems program at York University in London; and the Global Digital Policy Incubator at Stanford University. From the last of these workshops emerged the most comprehensive proposal for a global governance network for AI, which Wendell will discuss at the close of today's event.

As almost all of you know, the goal all along was to hold the ICGAI as a physical event this past April in Prague, thanks to the foresight and generosity of the city of Prague and its mayor, from whom you will hear in a moment. I would be remiss if I did not also mention the other cities that also offered to host: The Hague, Hong Kong, and Singapore. We thank them all.

We could not have predicted the global pandemic, which forced us to reschedule only weeks before the in-person event. We believe we have captured the main ideas and spirit of that event as we commence today with our ongoing ICGAI virtual event series.

We have been supported in this virtually all-volunteer collective global effort by many people and organizational partners, a wonderful list you can explore further on the ICGAI.org website, including our almost one hundred expert advisors and the original lineup of speakers for the three-day Prague event, as well as by the ongoing logistical support of Volha Litvinets. Thank you.

Finally, and certainly not least, we have been supported far beyond the call of duty by our partner, the Carnegie Council for Ethics in International Affairs, under the wise leadership of Joel Rosenthal, from whom we will also hear in a moment.

On behalf of the ICGAI we thank all of you who have brought us to this point and will take the possibilities inherent in ICGAI even further.

Now it is my great pleasure to welcome for brief remarks His Honor, the mayor of the magical city of Prague.

ZDENĚK HŘIB: Ladies and gentlemen, you are genuine supporters of artificial intelligence. Let me welcome you to the first event from the ICGAI series.

As you know, we were very much looking forward to receiving you all in Prague last year. Sadly the pandemic had different plans, and like many others eventually we decided to go online.

This opening event starts an important discussion on the comprehensive governance of AI. The city of Prague, a founding member of the Prague AI Initiative, is committed to employing AI-based solutions and technologies for the benefit of the citizens. At the same time, it is our duty to put in place mechanisms to prevent negative impacts such as violations of human rights. In other words, we have to make sure that the AI we and others use is an ethical one.

I would like to extend my sincere thanks to the ICGAI team, the Carnegie Council for Ethics in International Affairs, the World Technology Network, and all of our distinguished advisors and speakers for all their hard work and commitment.

Ladies and gentlemen, I wish you a very productive event today, and please remember, when it comes to AI ethics, Prague is here to help.

JIM CLARK: Thank you, mayor.

Finally I would like to welcome a man without whose organization, the Carnegie Council, and personal support and patience we would not be meeting here today, Joel Rosenthal.

JOEL ROSENTHAL: Thanks, Jim.

I am only sorry that we are not meeting in Prague, but it is delightful to see all of you here online. I hope that we can get to Prague sometime soon.

I just want to add my warm welcome to all of you from the Carnegie Council in New York City. We are very pleased to be co-conveners of this international Congress. This Congress is suitably ambitious for our Council, and it is in keeping with our historic legacy and also our ambitions for the future.

Let me just say a word about our Council. We seek to generate ethical solutions to global-scale challenges. For over a hundred years we have acted as a nonpartisan and independent organization trusted to set the ethical agenda. Our mission is educational and action-oriented. We identify ethical issues, we convene leading experts, like all of you, we communicate to a global audience, and we connect communities and build networks around shared interests and common values.

Throughout our history we have convened diverse voices and leaders, asked hard questions, and explored the shared interests needed to develop a set of principles for the common good. We have moved conversations out of the echo chambers and into the global public square, where all voices are welcome and respected.

Finally we have aspired to engage audiences, build constituencies, and empower ethical action around the world. This is exactly what we are doing today, and we are grateful to all of you for joining the effort. We are also grateful to the leadership of this project, including Jim Clark, who did so much of the organizing, and our Carnegie Council Senior Fellow Wendell Wallach.

Most of you know Wendell for his path-breaking book Moral Machines: Teaching Robots Right from Wrong and his work on the governance of emerging technologies. We at the Carnegie Council are delighted to have him as our Senior Fellow for this project and for our new project, the Artificial Intelligence & Equality Initiative (AIEI). Wendell has been the architect and engine for so much of this work and so much of the discussion we are having today. With that, I am going to turn it over to him.

Wendell, the floor is yours.

WENDELL WALLACH: Thank you, Joel.

Planning for an International Congress for the Governance of Artificial Intelligence actually dates back five years now. In calling this a "Congress" there was no illusion of legislative authority. The initiative was motivated by concerns voiced by scholars, leaders of civil society, and a few policy planners about the mismatch between the speed at which innovative technologies are deployed and governments' ability to put effective ethical and legal oversight in place. Technologies would become entrenched before governments addressed their risks and undesired societal consequences.

When this initiative started there were, of course, international meetings around regulating fintech and managing cyber insecurity, and at the United Nations in Geneva there were proposals to enact a treaty restricting the deployment of lethal autonomous weapons, but very little else. A lot has changed over five years. By some counts there are more than 150 lists of principles for guiding the responsible development and deployment of AI. The conversation has moved from principles to practices and policies for their realization. A plethora of international governance initiatives has emerged, and the pandemic has sped up and underscored existing trends.

On the one hand, life on the screen has made the pandemic bearable and kept economies from collapsing, and biotech and AI have sped up the development of vaccines. The digital economy has expanded at an exponential rate, and those of us able to invest in tech companies have prospered, while hundreds of millions of lives have been devastated.

More troubling are the ways that AI and the digital economy continue to exacerbate structural inequalities and give rise to new inequities. Unfortunately, talk of AI for good and AI ethics can obscure this fact, facilitate ethics washing, and distract attention from a very troubling trend.

As Joel mentioned, this concern led Anja Kaspersen and me to initiate the AI & Equality Initiative at the Carnegie Council for Ethics in International Affairs.

The AI revolution must be nudged onto a more positive trajectory to ensure its benefits far outweigh the risks and undesired consequences and that we bequeath a world worth living in to our children.

Changes have forced us to be agile about what functions the International Congress for the Governance of AI might yet serve. The Planning Committee decided upon an online event addressing issues and governance concerns that are not yet being given adequate attention. These have already been touched upon by Michael Møller and Nanjira Sambuli, but let me repeat the key ones:

First, we lack an effective instrument to facilitate cooperation between governance initiatives. AI touches all facets of life, so distributed governance is natural. But how do we ensure that distributed governance does not deteriorate into "politics as usual" and a cacophony of competing voices, particularly on those international issues where agreement is essential?

Second, there is much talk of a need for multi-stakeholder input. Yet today one often witnesses patronizing inclusivity where those in power decide who speaks for the rest of humanity. How can we create meaningful inclusivity and bottom-up engagement in the governance of the bio-digital revolution?

Third, multilateralism is faltering, and the public is losing confidence and trust that national governments and international institutions can solve their problems. Can we reimagine and reinvent international governance for the 21st century?

A few of our speakers today are identified with specific international governance initiatives. However, we also gave particular attention to key leaders who span initiatives and think comprehensively about ways to address the international governance of artificial intelligence.

Most of you are undoubtedly suffering from "Zoom fatigue," but we trust that you will find this to be a particularly valuable event. Thank you for your attention, and over to you, Michael.

MICHAEL MØLLER: Thank you, Wendell. Thank you very much for setting us on the right course here.

It is now my pleasure to introduce Doris Leuthard, a former president of the Swiss Confederation, an office she held in 2010 and 2017. She is a member of the Club of Madrid. She was a member of the UN Secretary-General's High-Level Panel on Digital Cooperation, and she is today the president of the Swiss Digital Initiative. She also happens to serve with me on the Kofi Annan Foundation, so I am particularly happy to have a colleague with us today. She will deliver a prerecorded message.

DORIS LEUTHARD: Dear ladies and gentlemen, it is a pleasure to join you for this very important conference on the governance of AI. We all know digital technology is rapidly transforming society, simultaneously allowing for unprecedented advances in the human condition and giving rise to profound new challenges. The great opportunities created by the application of digital technologies are paralleled by stark abuses and unintended consequences. Digital dividends coexist with digital divides. Algorithms can do good and bad. And as technological change has accelerated, including during the pandemic, the mechanisms for cooperation and governance of this landscape have failed to keep pace.

Divergent approaches and ad hoc responses threaten to fragment the interconnectedness that defines the digital age, leading to competing standards and approaches, lessening trust, and discouraging cooperation. We have to craft a path that will lead us to a more equal, secure, and sustainable system of global cooperation and renewed multilateral institutions, fit for the purpose of the 21st century and resilient against future disasters and challenges.

Sensing the urgency of the moment, in July 2018 the UN secretary-general appointed a high-level panel to consider questions of digital cooperation and the ways we work together in order to maximize the benefits of digital technologies and minimize their risks. I had the honor to be a member of this panel.

We all know it is not easy to give an answer, and there are different approaches today. We see people in Silicon Valley saying: "Technology will solve the problem. We should trust the technology." We see the Chinese government taking more and more control over the data of its citizens, and others intervening in access to the Internet.

What is clear is that we need improved digital cooperation and that we are living in the age of digital interdependence. Effective digital cooperation requires that multilateralism be strengthened. Despite some strains there is no other way. No one country, company, or organization alone can fix the issue.

Multilateral cooperation is not optional; it is essential for the future. It is essential to the preservation of our social values and the resolution of transnational challenges like digital transformation as a whole.

It also requires that multilateralism be complemented by multi-stakeholderism: cooperation that involves not only governments and tech experts but the whole spectrum of other stakeholders, including technologists, civil society, academia, and the private sector. We need to bring far more voices to the table to get the best results. This will not be easy, because most governments are not used to it; they lose a little bit of power through this cooperation.

In Switzerland we have a long tradition of such cooperative and participatory processes, and we have had very good results with them, because citizens who can raise their voices on every law are better informed. In the end the government, and this means society at the end of the day, gets a better result because people are involved and can voice their opinions, which leads to a better solution. This change will not be easy, but we have to go in this direction.

The same goes for multilateral institutions and international cooperation. They require increased participation, inclusion, and capacity for all, a claim that was recently also made by the Club of Madrid, the group of former presidents and prime ministers. We need a forward-looking, viable, and people-centered strategy of pragmatic dialogue, solidarity, and trust.

Trust will be crucial. The trust of consumers and users is essential. If we look back in history, it was regulation and standards that created this level of trust. Think of the food industry, pharmaceutical products, and cars. For all of these industries, going global and reaching a global market was only possible because of a level of trust created by common standards and international cooperation.

To support consumers and users, we in Switzerland, through the Swiss Digital Initiative, have an instrument that could also lead to more governance and more transparency: the Digital Trust Label. We think it must be very simple for users to know what the label of trust is and what indicators they can count on. Our label can give orientation and offers users a fast way to make a distinction without having to read manuals or hundreds of articles from the company. We do not need to read about all these different technological approaches, so a label can be an element of governance and could also influence the design of future products.

It helps that universal human rights apply equally online and offline, but there is a need to examine how time-honored human rights frameworks and conventions should guide digital cooperation and digital technologies. In AI we know how difficult this can be. We need society-wide conversations about boundaries, norms, and shared aspirations for the use of AI, including complicated issues like privacy, human agency, security, and liability. This is a very different and difficult task.

The Club of Madrid recently called for a globally agreed set of norms and measures to enable improved global connectivity and data flows, inclusive digital platforms, and better Internet management. We need a Bretton Woods for digitalization. This would mean a totally new architecture.

AI influences a lot of developments, and we know there are technology tensions between states today, which then create tensions at the political level. A global order would help accompany technology developments: setting standards; benchmarking practices; establishing protocols for interoperability and validation, measures for data security and protection, and guarantees for individual rights; and defining public goods, so that researchers and companies operate under equal conditions. A globally agreed set of norms is possible and should lead to a UN charter in which all these principles, norms, and standards are globally agreed. This would be my dream.

We can draw on many thoughtful reflections and reports in this respect, and on values that are generally accepted in today's world, like inclusiveness, respect, human-centeredness, transparency, accessibility, sustainability, and collaboration. All of these values can also apply in cyberspace.

Ladies and gentlemen, it is now up to us to move toward this new architecture. The panel I mentioned elaborated three possible models, three possible future architectures. We did not propose a new organization, because that normally takes a lot of time and a lot of resources. We can build on existing platforms and organizations: our first model, for example, builds on the Internet Governance Forum (IGF) in Geneva, which can be optimized to the needs of today. Or we can go to other solutions, like a co-governance architecture or, as a third way, a "digital commons" architecture. I will not go into the details of these three models here, but the secretary-general of the United Nations has published a Roadmap, and we can go in this direction only step by step.

I invite you to discuss which model of governance architecture could be possible, whether for AI alone or, more generally, for all questions of the Internet and our data society.

Time will be crucial. If we continue only to talk and produce a lot of reports, we lose power, and others will reap the profit. I think we should begin, because we know every structure can be optimized, and we know we may fail with some elements. But let's begin a new architecture for the 21st century that can accompany AI and also give legal certainty and orientation: what we like, what we don't like, what is legal, and what could be problematic.

Thank you so much. Enjoy the Congress.

MICHAEL MØLLER: Thank you to Doris Leuthard for an important statement that sets us even more firmly on our path.

It is now my pleasure to introduce Lord Clement-Jones, also in a video presentation. He is the former chair of the House of Lords Select Committee on AI. He is the co-chair of the All-Party Parliamentary Group on AI, a founding member of the OECD Parliamentary Group on AI, and a member of the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI).

LORD TIM CLEMENT-JONES: Hello. It is great to be with you.

Today I am going to try to answer questions such as: What kind of international AI governance is needed? Can we build on existing mechanisms? Or does some new body need to be created?

As our House of Lords follow-up report, "AI in the UK: No Room for Complacency," strongly emphasized last December, it has never been clearer, particularly after this year of COVID-19 and our ever-greater reliance on digital technology, that we need to retain public trust in the adoption of AI, particularly in its more intrusive forms, and that this is a shared issue internationally. To do that we need, whilst realizing the opportunities, to mitigate the risks involved in the application of AI, and this brings with it the need for clear standards of accountability.

The year 2019 saw the formulation of high-level ethical principles in the field of AI by the OECD, the European Union, and the G20. These are very comprehensive and provide the basis for a common set of international standards. For instance, they all include the need for explainability of decisions and an ability to challenge them, a process made more complex when decisions are made in the so-called "black box" of neural networks.

But it has become clear that voluntary ethical guidelines, however widely they are shared, are not enough to guarantee ethical AI, and there comes a point where the risks attendant on noncompliance with ethical principles are so high that policymakers need to accept that certain forms of AI development and adoption require enhanced governance and/or regulation.

The key factor in 2020 has been the work done at international level in the Council of Europe, OECD, and the European Union towards putting these principles into practice in an approach to regulation which differentiates between different levels of risk and takes this into account when regulatory measures are formulated.

Last spring the European Commission published its white paper on the proposed regulation of AI through a principle-based legal framework targeting high-risk AI systems. As the white paper says, a risk-based approach is important to help ensure that regulatory intervention is proportionate. However, it requires clear criteria to differentiate between AI applications, in particular on the question of whether or not they are high-risk. The determination of what constitutes a high-risk AI application should be clear, easily understandable, and applicable for all parties concerned.

In the autumn the European Parliament adopted its framework for ethical AI, to be applicable to AI, robotics, and related technologies developed, deployed, and/or used within the European Union. Like the Commission's white paper, this proposal also targets high-risk AI. Notable in this proposed ethical framework, alongside its social and environmental aspects, is the emphasis on the human oversight required to achieve certification.

Looking through the lens of human rights, including democracy and the rule of law, the CAHAI last December drew up a feasibility study for regulation of AI, which likewise advocates a risk-based approach to regulation. It considers the feasibility of a legal framework for AI and how that might best be achieved. As the study says, these risks, however, depend on the application, context, technology, and stakeholders involved. To counter any stifling of socially beneficial AI innovation and to ensure that the benefits of this technology can be reaped fully while adequately tackling its risks, the CAHAI recommends that a future Council of Europe legal framework on AI should pursue a risk-based approach targeting the specific application context, and work is now ongoing to draft binding and non-binding instruments to take the study forward.

If, however, we aspire to a risk-based regulatory and governance approach, we need to be able to calibrate the risks, which will determine what level of governance we need. But, as has been well illustrated during the COVID-19 pandemic, the language of risk is fraught with misunderstanding. When it comes to AI technologies we need to assess the risks by reference to the nature of AI applications and the context of their use: the potential impact and probability of harm, the importance and sensitivity of the data used, the application within a particular sector, the affected stakeholders, the risks of non-compliance, and whether a human in the loop mitigates risk to any degree.

In this respect, the detailed and authoritative classification work on AI systems carried out by another international initiative, the OECD Network of Experts on AI working group, so-called "ONE AI," comes at a crucial and timely point. It gives policymakers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (i.e., sector, stakeholder, purpose, etc.); data and input; AI model (i.e., neural or linear, supervised or unsupervised); and tasks and output (i.e., what does the AI do?). It ties in well with the Council of Europe feasibility work.
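[Editor's note: To make the four-dimension lens concrete, here is a minimal illustrative sketch in Python of how such a classification record might be encoded. The class name, field names, and the credit-scoring example are hypothetical illustrations, not part of the OECD framework itself.]

    from dataclasses import dataclass

    # Hypothetical record of an AI system along the four ONE AI dimensions
    # described above; field names and values are illustrative only.
    @dataclass
    class AISystemClassification:
        context: str           # sector, stakeholders, and purpose of deployment
        data_and_input: str    # kind and sensitivity of data the system consumes
        ai_model: str          # e.g., neural or linear, supervised or unsupervised
        tasks_and_output: str  # what the AI actually does

    # Example: a hypothetical credit-scoring system viewed through the lens.
    credit_scoring = AISystemClassification(
        context="financial services; affects loan applicants",
        data_and_input="personal financial history (sensitive)",
        ai_model="supervised neural network",
        tasks_and_output="recommends approval or denial of credit",
    )
    print(credit_scoring)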

When it comes to AI technologies we need to assess the risks by reference to the nature of the AI applications and their use, and with this kind of calibration a clear governance hierarchy can be followed depending on the level of risk assessed. Where the risk is relatively low, a flexible approach, such as a voluntary ethical code without a hard compliance mechanism, can be envisaged, along the lines of the international ethical codes mentioned earlier.

Where the risk is a step higher, enhanced corporate governance, using business guidelines and standards with clear disclosure and compliance mechanisms, needs to be instituted. At international level we already have guidelines on government best practice, such as the AI procurement guidelines developed by the World Economic Forum, which have been adopted by the UK government. Finally, we may need to introduce comprehensive regulation enforceable by law, such as that being adopted for autonomous vehicles.
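[Editor's note: As a rough sketch of the calibration described above, the following Python fragment maps an assessed risk level to the corresponding rung of the governance hierarchy. The three-tier scale and its labels are assumptions distilled from this talk, not taken from any formal instrument.]

    from enum import Enum

    # Illustrative three-tier risk scale distilled from the talk; real
    # frameworks use finer-grained, context-dependent assessments.
    class RiskLevel(Enum):
        LOW = 1       # flexible approach suffices
        ELEVATED = 2  # enhanced corporate governance needed
        HIGH = 3      # comprehensive regulation needed

    def governance_response(risk: RiskLevel) -> str:
        """Return the governance rung matching an assessed risk level."""
        responses = {
            RiskLevel.LOW: "voluntary ethical code, no hard compliance mechanism",
            RiskLevel.ELEVATED: "business guidelines and standards with "
                                "disclosure and compliance mechanisms",
            RiskLevel.HIGH: "comprehensive regulation enforceable by law "
                            "(as with autonomous vehicles)",
        }
        return responses[risk]

    # Example: a live facial recognition deployment judged high-risk.
    print(governance_response(RiskLevel.HIGH))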

Given the way the work of all of these organizations is converging, the key question of course is whether, on the basis of this kind of commonly held ethical evaluation and risk classification and assessment, there are early candidates for regulation, and to what extent this can or should be internationally driven. Concern about the use of live facial recognition technologies is becoming widespread, with many U.S. cities banning their use and proposals for their regulation under discussion in the European Union and the United Kingdom.

Of concern too are technologies involving deepfakes and algorithmic decision making in sensitive areas such as criminal justice and financial services. The debate over hard and soft law in this area is by no means concluded, but there is no doubt that pooling expertise at international level could bear fruit. A common international framework could be created, informed by the work so far of the High-Level Panel on Digital Cooperation, the UN Human Rights Council, and the AI for Good platform, and brokered by UNESCO, where an expert group has been working on a recommendation on the ethics of artificial intelligence, by the ITU, or by the United Nations itself, which in 2019 established a Centre for Artificial Intelligence and Robotics in the Netherlands. This could build public trust by establishing that adopters are accountable for high-risk AI applications and at the same time allay concerns that AI and other digital technologies are being over-regulated.

Given that our aim internationally on AI governance must be to ensure the cardinal principle that AI is our servant and not our master, there is cause for optimism: experts, policymakers, and regulators now recognize that they have a duty to ensure that whatever solution they adopt recognizes ascending degrees of AI risk and that policies and solutions are classified and calibrated accordingly.

Regulators themselves are now coming into greater focus. Our House of Lords report recommended regulator training in AI ethics and risk assessment, and I believe this will become the norm. But even if at this juncture we cannot yet identify a single body to take the work forward, there is clearly a growing common international AI agenda, and, especially, I hope, with the Biden administration coming much more into the action, we can all expect further progress in 2021.

Thank you.

MICHAEL MØLLER: Thank you very much. That was also a very important statement, and it is very interesting to see the many different activities happening all over the place that we need to connect to. We need to connect the dots.

It is now my pleasure to introduce Danit Gal, associate fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and visiting research fellow at the S. Rajaratnam School of International Studies at Nanyang Technological University. She previously led the AI portfolio in the implementation of the UN Secretary-General's Roadmap for Digital Cooperation as a technology advisor.

Danit, the floor is yours.

DANIT GAL: Thank you, Michael, and thank you so much for the invitation to speak with you today.

Building on the excellent talks from previous distinguished speakers, we can all agree that the AI governance initiatives mentioned today are truly commendable in their efforts to advance discourse on the regulatory methods and instruments needed to govern AI. But we still have a very long way to go, and many experts, including myself, are increasingly concerned that we are not necessarily going in the best possible direction.

Echoing Doris, calls for multi-stakeholder-based international cooperation are essential, but we must put them into action to include those who depend on access to these initiatives the most. This brief talk builds on the promising work presented by the previous speakers and sheds further light on the most pressing challenge these international AI governance initiatives face to date: being inclusive.

In order to effectively govern we must do so inclusively. While governments for the most part truly govern in the full sense of the word, many other stakeholders are essential for successful and effective AI governance. It is also important that we remember that AI cannot be contained by geopolitical borders, making it effectively everyone's problem.

This poses a considerable challenge to national, regional, and international-level governance efforts. While domestic regulations must respond to local values and needs, regional and international-level regulations need to be flexible enough to encompass a wide variety of values and needs without being too vague. For this reason people tend to assume that the more stakeholders they have around the table, the more complicated finding consensus on AI governance becomes, and thus the less effective their efforts become. Accordingly, many existing AI governance initiatives start out by identifying those with shared values and needs, so they can more easily reach initial consensus, and then add other countries that agree to align with that existing consensus.

These misconceptions, among others, limit robust global participation in AI governance initiatives and erect barriers, often false barriers, to meaningful participation further down the line. As a result, inclusive AI governance often appears to be an afterthought for existing initiatives and has yet to truly materialize in most of them.

To be sure, there are many other barriers to inclusive and representative participation in AI governance initiatives that need our urgent attention and care. This has been made abundantly clear in almost every conversation about international cooperation on AI governance, and most recently highlighted by the AI & Equality Initiative established by the Carnegie Council for Ethics in International Affairs to address exactly this pressing challenge.

But even if we are able to overcome these barriers in time, the prospects for meaningful, inclusive AI governance initiatives are diminishing by the minute. If the current trajectory of intentional and unintentional exclusion persists, these initiatives run the risk of transforming into walled regulatory gardens by the time outstanding global actors, namely from the Global South, come knocking on their doors. If there is a lesson that the developed countries leading current AI governance initiatives would do well to learn from the painful experience of Internet access in the developing world, it is that walled gardens do more to stifle progress, innovation, and cooperation than open access does.

On a brighter note, however, earlier today I moderated a panel consisting of experts and representatives from the aforementioned initiatives, and I am greatly encouraged by the many steps initiatives like GPAI, the Council of Europe, and the OECD are taking. Inclusive engagements are also at the heart of established expert participation platforms that are powered by UN agencies like ITU, UNESCO, UNICEF, and the United Nations Interregional Crime and Justice Research Institute (UNICRI), as well as other entities building on the work of the high-level panel for digital cooperation and realizing the resulting Secretary-General's Roadmap for Digital Cooperation, like the Office of the Technology Envoy. All of the above are making room for underrepresented experts at their tables, and it is essential that we support them in doing so.

But there is so much more that needs to be done. The main takeaway this talk offers is that genuine global cooperation on AI governance can and should assume a decentralized form, where many initiatives coexist and cooperate to shed light on different perspectives and approaches while fostering equal consideration and discussion. A reality where Western-based norms and principles are treated as international guidelines and standards is not just unrealistic; it is unreasonable and risky, particularly for those in the Global South.

In order to fully govern AI across borders and political interests we must do everything in our power to support as many channels of engagement as possible and to bring together a diversity of stakeholders from around the world. We must involve different societies, cultures, beliefs, and values to ensure that AI is not only used for the benefit of the whole of humanity but is also governed to do so.

Thank you.

MICHAEL MØLLER: Thank you very much, Danit. An important statement from you, and I of course agree 100 percent. No, I agree 1,000 percent on the imperative for inclusion and on not continuing these divisive practices that we have had for so many years.

It is now my pleasure to introduce and give the floor to Raja Chatila, professor emeritus at Pierre and Marie Curie University and former director of the Institute of Intelligent Systems and Robotics and of the Laboratory of Excellence on Human-Machine Interaction. He is the chair of the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems and the co-chair of the Global Partnership on AI (GPAI) Working Group on Responsible AI.

Raja, you have the floor.

RAJA CHATILA: Thank you very much.

I will add another facet to the multi-stakeholder initiatives that are going on. I will speak about the GPAI.

First, you have heard about several initiatives and groups that come up with recommendations: the Council of Europe; the OECD, continuing its work in the framework of ONE AI; and the World Economic Forum, which provides a lot of documentation and very interesting material for multi-stakeholder reflection, and specifically for companies. There is also something that is really important because it stems from the community itself, the community that is building AI systems and building the technology: the engineers.

The Institute of Electrical and Electronics Engineers, an international organization, started the Global Initiative on Ethics of Autonomous and Intelligent Systems in 2016 and produced a document, Ethically Aligned Design, because what we want to do is empower all the stakeholders, including the people who are building those systems, to be able to prioritize ethical values when developing them.

Practically speaking, and I think this is very important, as we mentioned, we need tools to ensure that those systems are actually aligned with our values, and practical tools include, for example, standards. Fifteen standards stemming from this initiative, embedding ethics in design, are under development in the IEEE Standards Association.

Of course, when you speak about standards you need to certify that systems actually comply with those standards, so there is also the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) to ensure that compliance. You also need to make sure that those who are building the systems, and more generally all the stakeholders, are educated, so there is an education program to enable this.

But today I would like to focus on another global initiative, another global framework: the GPAI, the Global Partnership on AI. The GPAI, as you probably know, started as an initiative between France and Canada inside the G7 and was then launched in June 2020 with 15 countries. By the end of 2020 four other countries had joined. As you can see, it is an international community. Also important is that this initiative is very much connected to the OECD. The mission of GPAI is to support and guide the responsible introduction of AI that is grounded in human rights, inclusion, diversity, innovation, economic growth, and societal benefit, while seeking to address the UN Sustainable Development Goals.

The global overview that you see here is that we bring together people from the scientific community and experts from industry, civil society, governments, and international organizations, and organize them into working groups so that they feed the conversation with their expertise and can start initiatives for cross-sectoral, multi-stakeholder collaboration, and most importantly, and this is really I would say the central issue, bridge the gap between theory and practice. Many principles have been formulated, and many of them converge. This is of course good news, but we need to put them into practice. Therefore we need actual, grounded projects to do that.

Of course, the mission is tied to the Sustainable Development Goals (SDGs), which means that we have to take into account the situations of emerging and developing countries and build on existing work. This is the multilateralism that I believe Doris Leuthard and Ms. Spatolisano have spoken about.

As I said, it is very much connected to the OECD, and actually the basis for being part of the GPAI is to agree on the OECD principles for responsible stewardship of trustworthy AI. So there is a close connection. Of course, the OECD has its own process with the ONE AI groups that Lord Clement-Jones mentioned, but there are these shared principles and cooperation.

The organization of the GPAI: There is a high-level council with of course member state involvement, then a steering committee, which includes elected people from the multi-stakeholder expert group, and also government representatives. The secretariat is hosted at the OECD.

The multi-stakeholder experts are divided into five working groups. There is the working group on Responsible AI, with a specific subgroup on AI and Pandemic Response; of course this started during the COVID-19 crisis, so it was very important to address this problem as well. There is another working group on Data Governance; a working group on the Future of Work, which is of great importance because, as you know, the impact of AI on society involves the transformation of work; and a working group on Innovation and Commercialization. There are two expertise centers, one in Montreal and one in Paris, to support the work of these working groups.

The working groups are co-chaired by two persons each. I have the honor to co-chair the Responsible AI working group with Yoshua Bengio from Canada. Data Governance is co-chaired by Maja Bogataj Jančič from Slovenia and Jeni Tennison from the United Kingdom; Future of Work by Wilhelm Bauer from Germany and Yuko Harayama from Japan; Innovation and Commercialization by Françoise Soulié-Fogelman from France and François Gagné from Canada; and AI and Pandemic Response by Alice Hae Yun Oh from Korea and Paul Suetens from Belgium.

Just to give you an idea of the concrete work, the focus of the Responsible AI working group is to contribute to the responsible development, governance, and use of AI systems in a human-centric way and in congruence with the UN Sustainable Development Goals. Since we launched in June 2020 we have reviewed national and international initiatives and identified areas for action. These are the five areas for action within this working group:

  • Facilitated Intra-governmental Cooperation on AI Governance: What can we do to facilitate intra-governmental cooperation on AI governance? This is a really hard question.
  • Governance of Social Media: Given the importance that social media plays in our daily lives, as well as its impact, for example, on democracy, we need to imagine what governance means so that social media complies with our values and protects our democracy.

The next three target the Sustainable Development Goals:

  • AI and Climate Change and Biodiversity
  • AI for Public-Domain Drug Discovery
  • AI for Education and Literacy

So the working groups are not going to develop specific AI projects themselves; rather, they are going to develop the roadmap: What should be done? How could it be done? This will be leveraged afterwards in concrete projects through international collaboration.

Thank you for your attention.

MICHAEL MØLLER: Thank you very much, professor.

It is now my pleasure to introduce another video participant, Xue Lan, who is director of the Institute for AI International Governance and professor and dean of Schwarzman College at Tsinghua University.

XUE LAN: Thank you very much for inviting me to this very important conference. Today I am going to talk about the Chinese view on global governance for AI.

First of all: What, from our view, are the critical elements of AI governance?

The first, of course, is values: What are the key values to pursue in the governance of AI? Through what systems and regimes do we govern AI? And who are the actors, what is the object, and what is the outcome of AI governance?

The second element we focus on is: Who are the key players and what are the key tools in governing AI? Of course we recognize that the key players are in the private sector, particularly the companies that develop and apply AI. They use various tools: ethical principles, internal committees, and also technical tools.

I think the public sector plays a very important role in providing the overall principles and in developing various kinds of legislation and government policies.

The social sector also plays an important role here in voicing its views and engaging in various communications.

Finally, of course, international organizations play a critical role in coordinating different countries and in setting up international platforms for dialogue and communication.

I think one thing that we need to recognize in AI governance at the moment is the regime complex phenomenon. That is, we have many players in this AI governance system, but there is no clear dominance of one organization over the others; everyone has some stake in it and some overlap, but there is no clear structure or hierarchy at all. I think we need to work hard to sort this out to make sure that AI governance at the global level is efficient and effective.

Also in terms of AI governance there are two major concerns. One is the technical dimension: How do we actually make AI more transparent, explainable, and ethical? The other is the social dimension of AI development: making sure that we indeed have AI for good, AI generating positive benefits for society, while at the same time minimizing the potential risks and problems.

One key dilemma we always face in AI governance is that the regulatory system is always behind the technological change in AI development, the so-called "pacing problem" that is prevalent in probably all of the countries actively engaged in AI development. That is something we have to recognize while trying our best to catch up.

Let me move on to talk about AI development and AI governance in China. China's AI research and development has been moving very rapidly in recent years. On the left you see that China's peer-reviewed publications have surpassed those of the United States and Europe, and that private investment in China is also quite active, second only to the United States.

In terms of AI governance in China, China issued a policy statement on AI development in 2017. The same year, China established a Strategic Advisory Committee. Two years later it established an expert committee for AI governance, which I chair. In the same year we developed a set of AI governance principles, published in June of that year, focused on responsible AI: harmony and human-friendliness, fairness and justice, inclusion and sharing, and so on. These are the major principles we follow.

One thing that we did mention is so-called agile governance, recognizing the pacing problem. We want to make sure that our governance principles will also change as AI development changes.

In practice we are now moving forward with further industrial guidelines and standards: China's International Trade Centre Trade Standards Committee has recently issued a guideline for companies to follow. In different application areas, various guidelines and regulations are also being developed in different locations; with autonomous driving, for example, different regions have already set various policies. The general public has also engaged with broader issues through legal cases on facial recognition and so on, and the court ruled based on the principles that we proposed.

At the company level things are even more active. Many companies have set up various mechanisms for AI governance, such as internal ethical review committees, and there are guidelines for their internal practices.

Let me move on to international AI governance. I think this is very critical for this community. China is a very active supporter and innovator of international AI governance.

First of all, AI can be a very powerful tool to promote human progress, but it requires coordination; it is really a common good that benefits all of us.

Also, I think the second point is cooperation for development. We recognize that AI development involves many cross-cutting issues such as open-source algorithms, data flow, cybersecurity, scientific collaboration, trade, and so on. So there is a need for global coordination and addressing common issues.

The third is that while we have all these positive aspects, AI can have a devastating impact on human societies, for example through lethal autonomous weapons systems. These risks are unlikely to be addressed by a single country, and in many cases the actors may not even be sovereign countries: various kinds of groups can bring harm to global society. So we really want to prevent a race-to-the-bottom phenomenon in this area that leads to a new arms race and other bad outcomes.

Also, we recognize that different countries may have different views, different values, and so on. So the critical issue is: How, in building better international AI governance, can we reconcile our differences? Here again global cooperation is very, very important; without deliberate effort it will be very hard to reach consensus at the global level. Those, I think, are the broad reasons we need to promote international AI governance, and we should do so quickly.

In terms of specific Chinese positions on international AI governance, China has always wanted to work with other countries and with the international community to push on this front: to promote AI for good, to develop AI applications in various sectors, and at the same time to work with our international partners to study the potential risks and social impacts and see how we can minimize the negative side.

Also, China wants to maintain open development, promote international collaboration, and in particular oppose decoupling. We do see some countries or groups of countries trying to set up small circles of AI governance at the international level, excluding many of the critical players in AI development, and that could lead to unwelcome consequences. So we want to support inclusive rule-making and UN-based discussion and debate on governance principles, and to make sure that all views on critical issues are expressed fairly and that latecomers and the views of different countries are heard.

Also, we want to respect the laws and regulations of different countries, really making sure that we leave space for AI's rapid development.

In international collaboration on AI governance, we also want to make sure that the private sector's voices are heard, particularly those of the many multinationals. They have very high stakes and can play important roles, so we want to make sure they are included in this process.

Finally, what are the steps for international AI governance? First, establishing inclusive global platforms to coordinate AI governance issues is very critical, and I think this Congress is doing a wonderful job of that.

Second, I think there are a lot of things we can learn from other global governance issues. For example, Internet governance, the international system on nuclear arms control, space law, and climate change all provide good lessons for AI international governance.

The third is that we need to strengthen scientific collaboration in AI research, including research on issues related to governance and social impact. There is already a lot of international collaboration in technical AI research, including between researchers in China and the United States. We can learn from their efforts and make sure that people who study governance issues engage in that kind of cooperation as well.

The fourth is that we need to seek common values while respecting differences. Different countries have different values and different governance systems, but as parts of one human society we do share some common values, and those are the foundation for international AI governance. We want to identify the common values that can serve as its basis while, of course, recognizing the social, economic, and political differences among countries.

Finally, based on those efforts we can develop common principles and norms to guide the healthy development and deployment of AI, to make sure that AI's development benefits human society and furthers the United Nations 2030 Sustainable Development Goals. That is what we hope to see as the next step in this area.

Thank you very much.

MICHAEL MØLLER: Thank you, professor. Thank you very much. I very much agree with you on the gap between the speed at which technology is moving and the speed of policy, but I think we also need to look at the gap of adaptation: the gap between that speed and the ability of humans, organizations, and states to adapt quickly enough to these new technologies.

Thank you very much, professor, again.

It is now my pleasure to introduce Konstantinos Karachalios, managing director of the IEEE Standards Association and member of the IEEE Management Council.

Konstantinos, you have the floor.

KONSTANTINOS KARACHALIOS: Thank you. I have sent the presentation. Perhaps it can be uploaded.

It is my pleasure to be here and to address this audience.

I think it is important to address the question of governance of these emerging technologies for the reasons I am going to try to explain.

What are the key elements of such governance? Of course, we need better governance in cyberspace. I cannot explain all of the aspects now, but you probably understand what I mean. AI is becoming an important part of the intelligence of cyberspace.

Secondly, there are many actors. We have to work together, yet these actors tend not to cooperate with one another. We have to break this pattern. This is one of the reasons IEEE has engaged not only in technical standards but also in a very large-scale effort to build bridges with policymakers. I was just on a panel with the European Parliament, and I realized that all the executives there were working intensively with them.

This is part of our role and how we see ourselves. We want to add a layer of self-reflection within the techno-scientific communities that will help us do a better job from the beginning. This means assuming all sorts of responsibilities while engaging with the other actors, the business world and the policymakers, the ones who understand what we are talking about here, so we can work together. We don't try to convert enemies; we try to build alliances with the ones who understand.

Here is a very short presentation, because we are not just a standards organization. We are part of something much bigger, the IEEE, a global community with local chapters everywhere. I was in Singapore talking with Xue Lan because of this; that is how we know each other. It is not because we are making standards but because we are part of what I would call a global "democracy" of technical experts. This is something very interesting.

The way we address these big issues and challenges is to create communities. We started years ago by creating a global community, a global initiative, to address the questions we are talking about today. Practically, we brought these things into the heart of the technical communities. Wendell and others were reading and writing about this, but they could not reach our hearts. This is what we are doing now.

This is a change, and a positive one, I think. The global communities we are establishing create things like Ethically Aligned Design, which many of you probably know and which is, so to speak, our bible. Very interestingly, IEEE, which never takes a position on the things that happen through IEEE, took a position here and endorsed it, the first time in the history of IEEE. This is another indication that things are changing.

Of course, it is not enough to be self-aware. We have to find ways to translate this self-awareness into practical tools, into systems design. This is what we do with our standards.

This is a very powerful form of governance. To give you an example, many of the standards in this area, on federated learning, on how to respect privacy while training AI systems, and so on, come from people who live and work in China. By doing this we are creating, as Professor Xue Lan was explaining, a global consensus from the bottom up.

In my opinion, although it looks technical, it is deeply political but at a different level and a very substantial one. It is not geopolitics. It is about caring about our future as a human species on this planet.

Other things we do: we create certification schemes that allow companies, and also governments and administrations such as municipalities, to assess the quality of their AI systems with respect to transparency, accountability, reduction of bias, privacy, and so on. We have done this already and are now working on use cases, so anyone who is interested, please contact me. I cannot go into details here.

But it is not only this. It is about a paradigm change within the techno-scientific communities, because if we do not change ourselves, the others really have no chance, given the discrepancy of pace that Xue Lan has shown: the curves drift apart. We really have to build alliances, and IEEE has taken a protagonist's role in creating international fora where we can all work together to address this, such as the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS), an open forum of standardization organizations and industry that now understand the need to address in standards what I call "contextual" issues, not only technical ones. This is really also a change.

Also, we create other fora, like the Council on Extended Intelligence. I cannot go into details here. Please contact me if you want to find out more about these things.

For instance, we have created a team that thinks and works on questions of the concentration of power and technology. These are all open groups; anyone who is interested can join. Extremely interesting people are participating, and what we do is very influential. I think it is not a coincidence that you see a proliferation of AI principles that all look very similar.

Of course, one of our main priorities is to educate ourselves, not to educate others. If we lose this battle within our own communities, then I don't think we are helping anybody. So we have to educate young engineers already in the curricula, and also young professionals. But it is not only about abstract education; it is not about motherhood and apple pie. It is about creating the tools that enable these people to do a better job from the beginning. It is about design, about guidance for design, and we have standards that help people do this.

Here you see just a sample of our technical standards around algorithmic systems, and these will be made available to all of you.

Then there are the impact standards, which are not just about technology but technology plus something else. I would like to name just the first one, P7000, a model process for addressing ethical concerns during system design. I think the title says it all. This is our flagship standard; it is now close to maturity, and I hope it will be released this year.

We also have standards that come very close to the title of your webinar today: a recommended practice for the organizational governance of artificial intelligence. Many of the people leading this work are participating in your seminar today. It is chaired by Gary Marchant, whom I suppose many of you know, and there is a very strong group of interested people. I hope we will conclude this early next year.

I want to convey this message again: Governance cannot be just political. It is a very complex phenomenon, and the main actors need to come together and work together. The techno-scientific communities, who are actors here assuming all sorts of responsibilities, are ready to engage with others because we cannot do it alone. We have to understand the others' points of view and work together to achieve better governance of these technological systems, which are critical because they are pervasive and invasive in our everyday lives. It is about democracy. It is about the mental health of our children. It is about our future.

MICHAEL MØLLER: Thank you very much, Konstantinos. I could not agree with you more. It was really quite important to stress again the need to work together, and if we don't, clearly we are not going to make it.

I will hand over to Nanjira, who will take us through the rest of the meeting. Thank you.

NANJIRA SAMBULI: Thank you, Michael.

Konstantinos' presentation is a really good segue into the three remaining presentations, which start to show us how different actors are approaching this question of governance, and the technical and social need to create cohesive governance.

It is my pleasure to introduce Anand Rao, who is the global artificial intelligence lead at PricewaterhouseCoopers (PwC) and partner and innovation lead in PwC's Advisory Practice, who will give us some insights on corporate self-governance in a prerecorded presentation.

ANAND RAO: Good morning, good afternoon, and good evening. Welcome to this session on AI corporate self-governance.

As a founding member of the International Congress for the Governance of AI, I want to commend Jim and his team for bringing this group together to discuss these issues. What we want to do today is go over the why, what, who, and how of AI corporate self-governance: why we want to govern AI, what it really means when we say AI governance, who should be involved, and how exactly to do it, all very much from a corporate perspective, I should add.

Let's start with the question of why. I am sure every one of you has seen articles and press releases in the recent past highlighting the risks of AI. We have taken a very methodical approach to these risks, and you will see six categories here. The three on the left are very application-specific risks, and the three on the right are more business-level and societal risks.

Among the application-level risks, everyone has heard about bias, various types of errors, over-fitting or under-fitting of models, and opaqueness of models. That is one class of risks, and AI and machine learning models fall into this trap of bias.

The second set of risks is much more around security: a combination of cyber and AI, the stealing of information from models, and a number of other security concerns.

The third bucket is more around control: How does the transfer of control between humans and machines work? When does the human or the machine take control of certain key decisions, and how does that interaction really happen? All of those are areas of risk, and these risks are materializing today, so we want to make sure we are mitigating them.

On the right-hand side you see a much broader set of risks: ethical risks and the alignment of AI with our values; economic risks in terms of job displacement and how AI is augmenting or displacing jobs; and finally risks we have seen in terms of deepfakes and misinformation, and more societal risks around autonomous weapons and other areas.

All of these risks need to be taken into account and mitigated while we are also building up and exploiting the benefits of AI. That is the reason why we want to govern the data and the AI models that use them.

You might ask, "What does it really mean to be governing AI?" What exactly are we talking about here?

Here is a very quick framework for what we call "Responsible AI," which encompasses AI governance. On the left-hand side you see the societal and ethical viewpoint, starting from the top, where researchers, professional institutions, policymakers, and national bodies have come up with various principles, principles for data, principles for ethics, and so on, some of which are being translated into regulations as we speak. So there is one body of work that starts at ethics and the broader societal level.

Then we get into a number of more technical areas, where there are specific things you can do from a data science perspective to check the models. As models are built, there are ways of making sure they mitigate some of the risks we just looked at: bias and fairness, explainability and interpretability, safety and security, robustness, and privacy. All of those are very technical areas.

Finally, the use of AI is very much a socio-technical system, with humans and machines collaborating to solve problems, so we need AI governance in which people look at the processes from an end-to-end perspective and from a top-down perspective within an organization. All of this together is Responsible AI; AI governance is one aspect of it, trying to mitigate the risks while staying true to the overall ethics the organization has adopted.

Moving on, we look at: Who should be involved in this AI governance? Is it just the corporations? Is it the government? Is it a combination? Who exactly should be involved, and what should each of them do?

Here again we look at three classes of institutions. The first comes from the top down, what we call a "top-down" approach, very much policymaker- and regulation-driven. It is not just national governments; we see professional bodies like IEEE and multinational bodies like the World Economic Forum, the United Nations, and the OECD all talking about AI and how to operationalize it. That comes either from a hard-law perspective, where there will actually be regulations in some of these areas, or from a guideline, soft-law perspective: What should various types of companies or technologies be doing? That is very much top-down.

We also have a bottom-up approach. A number of technology companies, as well as companies that use these technologies, are coming up with various types of self-regulation, whether testing of models, bias and fairness checks, certain methodological processes, and so on.

Both of these should meet, and we should tie the two together. That is the middle-out approach: How do we do the framing from a legal perspective while allowing the frameworks and tools of different vendors to coexist, provided they satisfy the key requirements we just looked at? AI for People, another body, looks at this as no-regret actions coming from the top, much more engagement coming from the corporations, and the need to build these things using various types of coordination mechanisms. At the end of the day we need all of these forces to come together to define what we want in terms of governance.

There are also a number of national strategies, and they cover more than just regulation. They are much more about protecting consumer data, protecting businesses, and looking at the safety and security of people, but also reskilling: if jobs get displaced or changed, how do we get more people skilled, and how do we use AI for the benefit of all?

The policy areas typically split into six areas: academic partnerships, national preeminence, and basic AI R&D, in addition to some very specific areas in national security, healthcare, and subspecialized AI. These are the things national policy documents look at, and that is how they come into the notion of AI governance.

Moving back to the corporate world, how do you actually perform AI governance? What we have done, as have a number of other companies, is translate the broad set of principles into something really actionable. We have defined 10 principles that encompass the Responsible AI framework we looked at earlier. The first and foremost is aligning the principles and practices to your organization. There are a number of principle documents out there, but what is really relevant is dictated by your strategy, your customers, what you are delivering, and the gravity of the decisions the AI is making.

Then you go into the governance aspect, the top-down, end-to-end governance, and the technical areas we looked at in terms of robustness, control, respect, transparency, security, and so on. At the end of the day all of these need to be accountable, and that is where governance comes in. We also want to foster social and environmental well-being, looking at much broader concepts than just the accuracy of your model.

These are the principles of Responsible AI, and what we need to do is embed them into what we call "end-to-end" governance: from the time someone decides an AI solution is needed, through what data we have, and so on, there is a nine-step process all the way to building the models, deploying them in real time, and continuously monitoring them. The key questions that need to be addressed by the business and by data scientists are highlighted there, so all of these principles should align with the six stage gates and should be targeted at mitigating the risks we talked about. This is the framework we use for governing AI, for the people building the models, the people inspecting them, and the people using them.

Obviously there are a number of details here. From this nine-step process, you typically go into each stage gate and see who is responsible for what: business and data understanding, solution design, what document is produced, who approves it, and what the escalation process is. Governance is all about what decisions are being made, who is making them, what information they use to make them, and how escalation happens. That is what we are trying to set up, from the data science perspective but also from the business C-level and board-level perspective: going end to end, going from the bottom, the data science, all the way to the top, and outside as well, to the consumer and the regulators. You really need to think about governance in this 360-degree view. That is what we mean by AI governance.

Very quickly, to summarize: AI governance is required to mitigate risks, and there are a huge number of them, such as bias, fairness, security, and safety. A combination of top-down and bottom-up approaches is required, and it requires all the different bodies to come together, public, private, not-for-profit, professional bodies, and global institutions like the United Nations and ICGAI, to provide guidance, regulation, frameworks, and tools.

Operationalizing AI ethics from principle to practice also requires a number of methodologies, tools, and frameworks that are emerging from the academic community as well as from the vendor and technology communities. All these things need to come together, and obviously we need some standardization. That is what we are here for, to bring these things together.

With that, I would like to complete my talk here. Thank you very much.

NANJIRA SAMBULI: Thank you, Anand, for taking us into a very interesting deep dive on how corporate sectors may be coming at this question.

Next we will hear from Hilary Sutcliffe, who is director of SocietyInside and TIGTech and a former co-chair of the World Economic Forum's Global Future Council on Technology, Values, and Policy. Hilary's presentation is also prerecorded.

HILARY SUTCLIFFE: Thank you. It is an honor to be part of this really important conference.

I would like to start with the roots of distrust. Our research has shown that the biggest driver of distrust is when companies, politicians, and regulators in particular appear to prioritize making money over people and planet. The second is the way institutions and legal approaches are not really equipped for values judgments or ethical and moral issues: a governance regime predicated on health and safety, "Jump through this hoop and all will be well" and "Stop doing that or we will fine you," does not set itself up well for the critical values judgments we find with artificial intelligence, facial recognition, and ethical and moral issues. Third, governance institutions, and that includes companies but also regulators and government, are aloof, secretive, and opaque; they do not really show their priorities or how they go about their decision making. Those are the three roots of distrust in governance.

I would also like to share with you three findings of our research that are important to trust in governance. When we started this project and Conrad von Kameke came to me with this idea, I thought: Yes, trust in governance, that's really interesting. But it wasn't until I really did the work that I saw how important people's trust in governance and regulation is; it is pivotal to their trust in the technology itself.

A perfect example is trust in the approvals process and decision making around COVID-19 vaccines, which has been just as important as trust in the vaccines themselves, because citizens trust governance when they see that it is working: upholding the public interest, punishing those at fault, and rewarding those who are doing good things.

At the moment with AI, it looks as if under-regulated companies are failing that number one distrust test and the technology is moving forward without considering the values and ethics of society. It is important to the system as a whole that citizens can and do trust governance.

What does that look like? We had a great quote in some of our consultation work. Someone said to me: "I don't know why you're doing this project. If something bad happens, I'm going to hear about it on the news." That is what trust in governance is about: the system works; I don't have to worry about it. With AI at the moment, we see that the system isn't working and that people are worried about it.

So where to start? Say you have bought the idea that trust and governance are important, whether you are a company or a regulator. I would like to draw on the great and simple quote of Baroness Onora O'Neill, a philosopher of trust. She says: "How to be trustworthy? How to be trusted? First, be trustworthy, and second, provide good evidence that you are trustworthy." That is really simple, and it is so good.

With that in mind, we started to look at the drivers of trust, the signals of trustworthiness, and what matters to trust. Quite surprisingly, we found an unusual academic consensus, not about trust itself, on which opinions are all over the place: evolutionary psychologists, psychologists, sociologists, political scientists, and behavioral scientists all have hugely different ideas about what trust is. But there is consensus on these signals of trustworthiness, and I would like to show you the seven signals. They are:

  • Intent. You have a good intent. From the governance point of view, you have an intent that is in the public interest, and that is upheld through purpose, through process, and through delivery.
  • Competence. The OECD has a great quote: "It's all very well having good intent, but if you're incompetent, really that doesn't count either. You will not be trusted." Among the qualities of that competence are reliability, effectiveness, and responsiveness, the last of which is particularly important.

There are five values components to these signals of trustworthiness:

  • Respect. You could almost scrap everything else and put respect first and center, because seeing others as equals, listening to and taking seriously their concerns, views, and rights, and considering the potential impact of your work, your governance, and your innovation on others is a pivotal driver of trust.
  • Openness. We hear a lot about transparency and openness, but it is really critical, particularly now. A real, radical openness is quite important and is effective in driving trust.
  • Integrity, which we look at alongside accountability: operating honestly and being independent of vested interests.
  • Fairness is enshrined in justice and equality in governance processes of all sorts, from international instruments to the way we deal with each other. "It's not fair!" is a visceral feeling in most of our minds, and fairness is an important part of what we see in AI. Is it fair? Is it fair to these different constituencies? How do we make it more fair? I see that a lot in the ethical frameworks.
  • Very interestingly, inclusion. This idea of working collaboratively to design governance and to shape the trajectory of your technology, involving others, is a pivotal part of trustworthiness.

Finally, we can think of trust and trustworthiness a bit like an iceberg. Trust is made up of hundreds of accumulated acts of trustworthiness based on those signals, and that sits underneath the surface; it goes on behind the scenes. It is about how you run things, about your decision making, about the way your organization works. Above the surface, the visible iceberg is the evidence of trustworthiness.

A lot of talk about trust is about communications, public relations (PR), and messaging, but we would say that trust is very much about the evidence of your trustworthiness, not just good PR. And as you will see here, there is the weather, the things that shape the iceberg, that shape the technology and then the governance: culture, media, changing norms, context. All of these things make this a very complex, difficult, and challenging way to earn trust. It is not just an equation of "You do this, you do that, and trust comes out of the sausage machine." Earning the trust of society is both a science and an art.

Those are the key findings of our trust-in-governance project. If you would like to know more, this slide shows you where to find our dedicated website, and please give me a call at any time. I am very happy to talk further about it.

NANJIRA SAMBULI: Wonderful. Thank you so much for that contribution, Hilary. I am sure many of us will be referring to the website to think about this a lot more as we really conceptualize what trustworthy AI looks like.

Last but not least, we have the pleasure of hearing live from Merve Hickok, who is a senior researcher at the Center for AI and Digital Policy (CAIDP) and founder at AIEthicist.org.

Merve, over to you.

MERVE HICKOK: Thank you so much. I am really honored to be here today, and I see a lot of familiar faces on the panel as well as the participants.

Today I would like to share some of the work that we are doing at the Center for AI and Digital Policy. Listening to the sessions ahead of me, I think I will be able to tie to some of their work as well.

A couple of months ago our team at the Center for AI and Digital Policy, working under the auspices of the Michael Dukakis Institute in collaboration with the AI World Society and the Boston Global Forum, published the report "AI and Democratic Values: AI Social Contract Index 2020." It is the first comparative report on national AI policies and practices; we looked at 30 countries for their endorsement of the OECD AI principles, human rights conventions, and the "Social Contract for the AI Age."

We compared those endorsements against actual implementations in each country, mainly public sector implementations. We noted that a number of national AI strategies make reference to human rights and the OECD AI principles; however, there is still a significant gap between the written commitments and actual implementation.

A striking number of countries are committing themselves to experimenting with AI, test-bedding, or data collection and sharing practices between public and private entities, which makes this question of governance even more crucial.

As I said, we reviewed 30 countries against 12 metrics and ranked them in five tiers. Of the 30 countries, 25 were selected by gross domestic product, and we included five impact countries, such as Singapore and Rwanda. In each country section we provide the details of the implementations we reviewed and what the country could do to move further toward a more democratic implementation of AI.

At the Tier One end, you see robust safeguards, AI leadership, and public participation. At the other end of the spectrum are countries that may need to develop or enhance their policies and governance structures against these metrics.

Some of the findings we had were that the OECD and G20 AI guidelines were influential in shaping these national AI strategies. We identified a number of red lines for AI, such as facial recognition for mass surveillance, criminal sentencing, or the scoring of citizens. In a number of countries we found that non-governmental organizations had an impact in shaping the national AI strategies and some of the public implementations of AI technologies.

One of our final findings was that AI policy and regulation are in their early days, but the pace is certainly accelerating. Looking at both ends of the AI debate, ethics as well as policy, the policy side definitely needs to catch up with some of the conversations we are having.

As a team we also had a concern while doing this report and in some of our other engagements. Although comparing 30 countries was an incredible feat, we acknowledge that the representation of Global South countries was not sufficient, and we committed to expanding our coverage in the 2021 report as well as to monitoring AI policy developments.

To that end we have introduced a number of new initiatives that you can find on our website. One is public policy: in this segment we promote public participation in AI policy. We monitor different countries, see which policy statements or legislation they open for public comment, and provide that curated list to those interested. You can also find the Center for AI and Digital Policy's own statements on these policy initiatives on the website.

The updates section provides significant AI policy developments globally. Again, this is to support regions that are not usually heard, so they can share significant policy updates in a central space and we have a more global approach.

Finally, AI policy events. I tend to see a lot of AI ethics conversations and events; in fact, this is my second event with Lord Tim today. We discuss a lot on the AI side, but AI policy events themselves tend to be much rarer. This, again, is part of our website and our initiative, where you can find and follow policy events.

Some of the policy items we are covering closely in 2021 were already mentioned by different panelists in different sessions. If you are contributing to any of these and would like to discuss them or provide further insight, please connect with us at the Center for AI and Digital Policy.

Finally, I would like to mention another initiative, this one spearheaded by the Boston Global Forum: How can we create an international accord on artificial intelligence? To do that and build an international partnership and alliance, we are looking to build consensus around a framework for an international accord on AI standards, and then to establish a democratic alliance around it and its companion document, the "Social Contract for the AI Age."

"Social Contract for the AI Age" is grounded in the idea that AI should protect fundamental rights, that systems should be implemented in a transparent and accountable way, and that they are considered from a multi-stakeholder perspective to achieve a fair and equitable global community. It was one of our metrics in our index report, so this is like I said another initiative by Boston Global Forum to move this conversation forward and actually have a global alliance on the social side.

The final piece is: How can we create a monitoring system to observe the uses as well as the abuses of AI by governments and businesses? How can we track and share this for better governance and hold governments as well as businesses accountable for their stances as well as their practices?

I will stop there. If there are any questions, please feel free to contact me.

NANJIRA SAMBULI: Merve, thank you very much.

With that, we come to the end of today's presentations, which have been very rich, very diverse, with lots to ruminate on.

I will hand it over to Wendell to give us a sense of what is to come and what he makes of all of this.

WENDELL WALLACH: Thank you ever so much, Nanjira.

This has truly been a remarkable series of talks, and I trust that all of you will be leaving with a very rich sense of the breadth of things going on in the international governance of AI but also a rich sense of what has not been dealt with yet and requires some attention. I won't repeat some of the most obvious themes, but I do want to mention at least one of them, that this patchwork quilt of principles and policies is inadequate, and we need a 21st-century mechanism to ensure effective and trustworthy AI governance. We need to ensure that we put that in place, and it is a truly inclusive mechanism. Yes, we do have a few initiatives that portend to move in that direction, but I think we also know that each of them has strengths and weaknesses and that there is much more to be done.

Something that gets lost in these discussions around AI governance is the cross-fertilization of ideas and proposals and the manner in which they have been evolving over time through collective engagement and participatory intelligence. This is so often lost as individuals and institutions clamor for primacy.

Our project over these five years was already building on so much work by scholars, by reformers in international governance, and by critics of existing multilateralism. Many proposals were already being put forward, and they fed into the creation of some of the initiatives you have been hearing about. They fed into work at the World Economic Forum. They fed into the work of the secretary-general's High-level Panel on Digital Cooperation, one output of which was the recommendation that we create some kind of multi-stakeholder advisory body.

As we were formulating policy recommendations for the Congress that was to have convened in Prague, and it is so sad that we are limited to making presentations rather than sitting together, engaging over these issues, and struggling with how we can collectively move forward, we had three expert working groups. One of them was graciously organized and hosted for us by Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University's Cyber Policy Center.

At that meeting we had 20 to 25 experts from around the world, representing different governments, representing Microsoft and Google, and academics who had put forward new proposals. We came up with one proposal, which has already been echoed over and over again: a proposal for a global governance network.

Again, it was recognized from the get-go that the governance of AI globally will be distributed, but there did not yet seem to be a mechanism in place to facilitate adequate cooperation among all the stakeholder groups, not just governments and the tech oligopoly but also those whose lives will be affected by AI implementations yet who have had very little input on what systems would and would not be put in place.

The ideas that led to the global governance network also fed into many of these other initiatives, as I have been saying. One output was the secretary-general's High-level Panel recommendation for a multi-stakeholder advisory body, which has been carried forward as we have moved from recommendations to the secretary-general's Roadmap for Digital Cooperation.

So it looks very similar to what we were putting forward. The question still outstanding is: Will that body be put in place, and will it be effective? Certainly all of us recognize that the United Nations would be the correct place for such full engagement of the world community, and yet we are also very mindful of the way in which some of the multilateral structures at the United Nations have hamstrung effective governance in so many different areas.

The question I will leave you with is whether we can make the UN secretary-general's initiative robust, whether it will need to be complemented by activity in other initiatives, or whether we may even need to create a new international body, one that not only complements what is going on at the United Nations but perhaps facilitates activities that the United Nations or other global initiatives such as GPAI cannot take up alone, and speaks for the world community as a whole.

The second part of all this is: How do you ensure that such a body is meaningfully inclusive in the governance of the AI revolution? There have been many recommendations to create international governance networks that, from the bottom up, elect their own representatives to international bodies: people whom they trust collectively, whom they believe can speak for their stakeholder groups, and who will be trusted by the others who sit in this global governance network, whether that is within the United Nations, another body, or something totally new.

That is what we are going to focus on a little more at our next meeting on April 13. We are going to come together to talk about meaningful inclusivity and the governance of the AI revolution, with talks and panels from a variety of stakeholder groups, and at the end we hope to have a conversation about whether the secretary-general's initiative can be made robust, whether one of the other initiatives can truly represent the world community in an even-handed way, or whether we might need to create some new complementary bodies.

I invite you all to join us on April 13, and with that let me return the floor to Nanjira.

NANJIRA SAMBULI: Thank you, Wendell. Thank you for encapsulating some really interesting ways for us to think and ruminate over what we have heard today.

My role now is really just to offer a very hearty appreciation of all our presenters and speakers, many of whom are still on the screen right now. Thank you so much. This event is itself a very interesting experiment in the kind of inclusive governance we have been talking about, bringing different actors to the table, so we very much appreciate your input.

Again, I will re-emphasize that we look forward to seeing you all back on April 13.

Michael, I don't know if you have a final word as well for our great audience here.

MICHAEL MØLLER: Thank you, Nanjira. Yes, I do. First of all, what a terrific few hours. This has been great.

I just want to say a few words about the fact that what we are trying to do, and the convergence we have seen in all of today's talks, sits within a much broader transition in governance that we have been living through for a while, one vastly accelerated by the pandemic and particularly by the leapfrogging of technology into our lives.

What I have heard today really gives me hope. While we were focusing on the governance of AI, we were in effect also talking about governance in general: a new model of governance that is much more integrated, much more collaborative, de-siloed, and, more than anything else, networked. It is only by creating these new structures, by reinventing multilateralism in that sense, that we are going to make it and deal with the existential problems facing us, including how we govern and bring order to the way technology is deployed, how it affects people's lives, and how we use it, in terms of ethics, human rights, and applying the Sustainable Development Goals in an equitable way, one that leaves no one behind and ensures that everybody benefits.

I want to thank and congratulate you all for this. I think there is much hope in the way this is moving forward, and I look forward to seeing you all on April 13 together with Nanjira, and beyond, because, as somebody said earlier, it is no longer the time for just talk. We really need to get our act together collectively and start putting in place some of these great ideas, ideas that will be applicable not just to the governance of AI but to governance in general.

Thank you so much and see you soon.

NANJIRA SAMBULI: Thank you. Goodbye, everyone. Thank you very much.
