In this episode of the AI for Information Accessibility podcast, host Ayushi Khemka speaks with Emad Mousavi and Paolo Verdini, Ph.D. students at the University of Alberta, about the ethics and philosophy behind artificial intelligence. They discuss the Ethics Bot, a project they run together, and questions of accountability and fairness through and within AI.
The AI4IA podcast series is connected to the Artificial Intelligence for Information Accessibility 2022 Conference, held on September 28 to mark the International Day for Universal Access to Information. The AI4IA 2022 Conference and the podcast series are presented in collaboration with AI4Society and the Kule Institute for Advanced Studies, both at the University of Alberta, the Centre for New Economic Diplomacy at the Observer Research Foundation in India, and the Broadcasting Commission of Jamaica.
To access the conference presentations, please use this link.
CORDEL GREEN: Hello and welcome. My name is Cordel Green, chairman of the UNESCO Information for All Programme Working Group on Information Accessibility. Welcome to the AI for Information Accessibility podcast, organized by Carnegie Council for Ethics in International Affairs. Your host is Ayushi Khemka, a Ph.D. student at the University of Alberta.
AYUSHI KHEMKA: Hello and welcome back to the AI4IA podcast. Today we have with us Emad Mousavi and Paolo Verdini to talk about AI and ethics. Both Emad and Paolo are Ph.D. students at the Department of Philosophy at the University of Alberta. They are working on a project together called the Ethics Bot, but more on that later.
Why don't you both first tell me more about what brought you into this exciting field of all things AI and specifically AI and ethics? Emad, why don't we start with you, and, Paolo, we can follow with you?
EMAD MOUSAVI: Absolutely. My background is in engineering. I got both my Bachelor's and Master's in geomatics engineering, so I already had an exposure to the world of big data. That was the big buzzword at the time when I was doing my Master's degree. After I finished my Master's degree I then started working as an engineer. I actually still work as a geomatics engineer, so I had my fair share of exposure to the world of information technology, big data, and artificial intelligence.
I always have had this affinity toward the world of moral psychology/moral philosophy. That was basically my way into this field, that I had the technical background of engineering, artificial intelligence a bit, big data analysis and analytics in general, and then I have always had this huge, huge interest in the world of philosophy. I found that AI and ethics are a very happy medium of both worlds.
I started getting more interested in the field, reading more and listening to more podcasts in the same area, and little by little I got more interested and thought that I could thrive in this environment and contribute to the whole discussion in the field. That is why I thought more seriously about taking it up as a degree, and that is why I am doing what I am doing currently, which is working in the field of ethics and AI. That is the short answer. I am more than happy to expand if you have any questions on the details, but this is the big picture.
PAOLO VERDINI: I would say that I wish I had more lofty plans attached to my interest or how it started at least. It all got started by chance basically. After my BA and Master's in philosophy I had a spare year before anything was supposed to happen as a Ph.D.—and hopefully I would be a Ph.D. student by then—and then I applied and enrolled in a computer science program as a Master's degree in big data analytics and social mining back in Italy.
My interest started to bloom after that point. I came from a bit of a math and logic background, so it seemed to me like a very natural continuation of what I was doing, and more of an application—what I had been doing was very abstract and theoretical. That got me into the topic of AI by putting my hands on things and having to work on projects. Then I combined the interest in AI that was building up with my philosophy background and with what I was about to do from then on.
One of the things that makes philosophy relevant is being very aware of contemporary issues and times, and it is undeniable how ubiquitous AI is in our everyday life, so it seemed very relevant, not just interesting, to start working on what exactly the interrelations are between AI systems—and, let's say, the digital humanities in general—and our daily lives. So little by little I started getting into more and more projects and getting more involved in all of that.
AYUSHI KHEMKA: It is so interesting to know that both of your academic journeys have been very intersectional and interdisciplinary and very similar to each other if not the same. My next question to you is: I know that both of you are working on the Ethics Bot project. Could you tell us a bit more about that and what it is all about?
PAOLO VERDINI: Since we are talking about the field of AI, what we are talking about specifically when it comes to the Ethics Bot project is a subfield of AI, which is natural language processing. We are working on a natural language generator. We wanted to devise one of these models so that it could respond to salient ethical questions from the perspective, so to speak, of a machine, in the form of a language model.
Natural language processing is a field that has been flourishing in the last few years. Among all the subfields of AI it has probably received more funding than any other and has developed incredibly fast, to the point that we had to change our model and our approach halfway through the project: we started working with one model, OpenAI's Generative Pre-trained Transformer 2 (GPT-2), and decided before finalizing the project to switch to GPT-3, the latest version of the same model. The idea was to train this model on textbooks about the ethics of AI and other documentation and have it answer specific questions that you would ask an AI expert or somebody with knowledge of the field, and see exactly how the AI responds to those questions. That is the gist of it.
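For readers curious what this looks like in practice, here is a minimal sketch of posing an ethics question to a GPT-3-style model, using the OpenAI Python client as it existed around the time of this conversation. The model name, prompt framing, and question are illustrative assumptions, not the project's actual code.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Hypothetical ethics question framed for a model trained/primed on AI-ethics texts.
prompt = (
    "The following is an answer, written by a language model familiar with "
    "texts on the ethics of AI, to an ethics question.\n\n"
    "Question: Who should be held accountable when an AI system causes harm?\n"
    "Answer:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3 engine; the project may use another
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```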
EMAD MOUSAVI: There are a few points I need to clarify about this project. I am not going to repeat what Paolo just perfectly laid out about how the project started and what we are trying to do, but I want to put an emphasis on what we are not trying to do on this project. There are many natural language generators out there on the Internet, and what they are trying to do is mimic human responses as best as possible. In other words, what they are trying to do is see if they can fool their audience into not knowing whether there is a person behind the computer or if there is a bot, software, or an AI behind the computer generating the sentences.
We are very upfront that we are not doing that. We are not interested in concealing that the responses being generated come from an AI. We want our audience to know that these answers are generated by an AI because the point we are trying to determine is whether answers provided by an AI can provoke reflection and thought—ethical reflection and ethical thought, for that matter. This is the goal of the Ethics Bot in short.
What we are trying to do, as Paolo was describing, is use the latest, greatest technology and model—that is what Paolo is doing mostly in the project—and the Ethics Bot generates answers for us based on the queries we provide to it. We then give the responses to those queries to a panel of experts, including ourselves—Paolo, myself, Dr. Geoffrey Rockwell, who is our supervisor, and a number of other philosophers and instructors, mainly at the University of Alberta—and ask them to evaluate the responses the Ethics Bot provides as if they were coming from, let's say, an undergraduate student in philosophy or psychology. The goal is to see if the Ethics Bot can provide good enough answers to provoke ethical reflection and ethical thought. That is the end goal we have defined for ourselves.
Paolo is also engaged in the initial stages of another project to take the Ethics Bot and insert it into a classroom situation, which he can talk more on if he wants, but that is also a very exciting application of the Ethics Bot that I think is very interesting.
PAOLO VERDINI: The project Emad was just mentioning is in its initial form, so we need to work out the details, but that is exactly what we are aiming to do: introduce some of the generations from the language model into the classroom environment, have a form of experimental, augmented philosophy class, and see how that works. Of course it is very important that we understand how to deliver it properly and not just completely haphazardly, because otherwise it would defeat the purpose, and we don't want to do that.
The important thing is that once the Ethics Bot is refined and reaches a state that we are pleased with and functions properly, the possible ramifications are many. It could be turned into a chatbot, meaning that it could impersonate different philosophers, so it could be interactive. It could be fun. It could be informative to the point that it can summarize and replicate very well the textbooks that the Ethics Bot reads. It can provoke critical thought and spark interesting discussion.
Of course the focus could be on itself, meaning, how does the Ethics Bot work and what is our relationship to it? How do we interface with an AI producing humanlike thoughts, even if, again, this is not a Turing test? We are not interested in that, as Emad has just said. The applications are many, and, yes, we are very much looking forward to applying it to different settings.
AYUSHI KHEMKA: I want to extend this conversation on ethics. A lot of the time AI- or machine-learning-related projects treat ethics as an afterthought. When everything that could have gone wrong has gone wrong—be it along lines of race, gender, sexuality, or other categories—then ethics comes into the picture, because people say, "Oh, we just forgot to talk about that." How do you both think of ethics in AI as a larger field? Emad, why don't we start with you?
EMAD MOUSAVI: The answer to this question could be very, very long. I could go on and on for a long time answering it, but I will try my best to answer as briefly as possible. Paolo, feel free to jump in wherever you want and correct me or add anything.
Of course studying in this field, being a student of philosophy and ethics and working on ethics and AI, I think that it is imperative to think about ethics before we start any projects, let alone a project that can have as many consequences as, for example, AI can have with the potential that it has. The short answer is it should not be an afterthought. I agree that it is an afterthought right now that we start a project and when we see that things are going wrong then we pause and say: "Oh, things are going wrong. Why don't we start talking about the ethics of it or the dos and don'ts of it?" I agree with the premise of your question, that it is an afterthought and it should not be an afterthought.
To expand a bit more—this is my personal view of it—AI is a loaded word. AI can mean many things to different people. I bet if you ask me and Paolo here to define AI, we will give you different answers. They may be only slightly different answers, but they are going to be different. That is just the nature of AI, plain and simple. If you bring in two AI professors and ask them—I have done this—what they think AI is, they are going to give you slightly different answers. This trickles down to everybody else who works in the industry and has anywhere from a little exposure to a lot of exposure to the world of AI.
Having said that, when I look at ethics and AI there are three different levels of it to me. There is a level which your listeners may be familiar with. For example, Nick Bostrom is a philosopher behind one level of it, which is the level of the existential threat of AI, superintelligent machines, a future which is filled with AI robots that are basically our overlords, controlling everything. That is one view of it.
The second view of it is: When are we going to get to a general artificial intelligence, where we are going to have AI who are as intelligent as human beings? I want to be clear that we are not in that realm whatsoever. At least in my opinion we are not even close to that level. That is the second tier of looking at AI.
The third level, which we are currently at, is that we have a lot of intelligent machines doing intelligent work, and they definitely need to be regulated in the sense that the developers who design them, and the people in charge of putting in place and understanding the mechanisms by which these systems are developed, should have a good understanding of what the software, systems, bots, or code they are developing actually does, what its potential is, and what it is intended for.
Those are the three main levels, and those are the questions that I think people in charge in the field should ask themselves and each other. That is how I look at the whole thing.
PAOLO VERDINI: That covers a lot of ground, both your question and Emad's answer, so I will try not to overlap, but at the same time the risk is me going into a complete rant and digressing completely. I will try to keep myself steady and on point as much as I can.
One thing I would like to say is that, yes, as much as I agree with the premise of your question and with how Emad answered it, to me the problem is the status of technique in our contemporary society, with AI being one aspect—probably the most relevant—of the complete modernization of a certain type of rational thought. The afterthought seems to be built into what we are doing, into the understanding and the conceptual construction behind our delegations to machines, for example.
Am I allowed to quote Kant in this context? Kant, when speaking about the moral imperative—it's a very strange quote that people get a little bit mixed up about, because it says, "If you must do something, then you can," and not the other way around. Well, let me get to that.
That actually allows for, or explains, the moral imperative. There is a very sharp difference between what you morally must do and what you can practically do. I believe that technical work—and science is understood mostly as technique these days—works exactly the opposite way: If you can, then you must.
Whatever path we are undertaking in research and development, we follow it—and by "we" I mean a research group of people or modernized society as a whole—just because the possibility to do so exists, because we want to extend, if you will, the combinatory possibilities of whatever can be researched. That always comes before the questions "Is it possible, is it ethical, what are the conditions we should agree on before we undertake this project?"—any project for that matter, be it AI research, be it genetic manipulation, and so forth. After we realize we have gone too far we start trying to contain the problem, but the necessity of the afterthought is built into the way we approach problems nowadays.
Philosophically I am very much a pessimist, but from an AI perspective I would say that I am still a little bit neutral if not skeptical in general, so I am not saying that there is going to be a Skynet, as in The Terminator universe, that is going to pull the strings for us and all that. I believe in a version of it, but it is much more humanized, not the way it is portrayed in the movies.
At the same time, yes, I do think there is space and need for intervention, first of all in addressing some of the problems that are there. I am just going to name one, though it is not the only one or probably even the most important, and that is transparency—transparency meaning how AI models are built. Of course most of them are proprietary models, so it is not clear exactly what data has been gathered to build a given model, so we use these things literally as black boxes. Neural networks, and multilayered neural networks especially, are literally black boxes for whoever has not designed them, so you can only evaluate what they do, their performance, and their critical status after they have been put to use. So there is an aspect of that embedded in how we are dealing with these technical items.
At the same time regulations should become a mainstream concept and a mainstream discussion, and I would like to extend this not only to the limited field of AI but also to other fields that are technical and production-related, especially when we are funding so much and when they have such a large impact on everybody's lives.
AYUSHI KHEMKA: On this regulation bit, I am wondering, because there has been a lot of conversation around how AI can replace human intelligence or be as intelligent as a human. We were talking initially about how there are people creating AI that tries to fool the audience, testing whether they will be able to figure out if it is an AI or a human talking to them. Do you think AI can achieve that sort of baseline, and is that a world we need to be prepared for? Is that an important intervention that humanity needs to make right now?
EMAD MOUSAVI: There is a textbook or should I say a handbook that is handed out to philosophy students that requires us to answer a question by a question. They teach this at university. I'm joking. I am saying this to you to answer the question with another question. It is very important how we define "intelligence." I think the key to answering your question is, what do we mean by intelligence?
Let me expand on this a little bit. Paolo quoted Kant; let me quote Chomsky. At a similar event—I am not quoting him verbatim—he was asked, "Can machines think?," basically the same question that was posed to Turing and has been asked for the 70 years since Turing first wrote his paper [asking] "can machines think?"
He answered with something along these lines: "It depends on what you mean by 'thinking.'" For example, if I'm not mistaken, in Hebrew birds and airplanes don't fly; they glide, if you want to translate that. So depending on what you mean by thinking, a machine can think or not think. A bird can fly or glide, depending on what you mean by the question.
I made that example to point out that it is important for us to know what we mean by intelligence. When we ask, "Can machines actually replace or get to the level of human intelligence," what do we mean by "intelligence?" If we mean, for example, processing information in general, that is a baseline of our intelligence. We as human beings have some sensors in our body—our eyes, for example—our brain gets the information, we hear information, we process it in our brains, and then we act, react, or make decisions. If that is a definition of intelligence, sure, machines can do that.
For example, if by intelligence we include something like contemplating or imagining, can machines ever do that? I don't know. I would say with the current state of AI the answer is no, they won't be able to do that, just because of the nature of imagining. It is a subjective endeavor. No one can design an experiment to empirically and objectively measure whether I am imagining something or not; I have to report whether I am imagining something or not. Because of the definition of it, I would argue that we can never know if a machine is imagining something. In that sense, if that is included in our definition of intelligence, then my answer would be no, AI would never be able to gain the same level of intelligence as human beings.
The point I am trying to make here is that depending on what we mean by intelligence the answer may vary, but what we know for a fact, what we know without a doubt, is that AI has a lot of potential to do very interesting and high-level processing tasks that were unimaginable for human beings even 20 years ago. What it lacks currently—and obviously I am not breaking any news or ground here; this is a known issue of AI—is the ability to cross from one domain to another. Until and unless we as AI researchers and developers can increase the capability and functionality of AI to do that, we still have a lot of work to do before we can even get to the question of whether AI can reach the general intelligence that we humans have at the moment.
If I want to quickly clarify what I mean by the cross-field comment I made: for example, if I am not mistaken, the current AIs that we have can beat any human chess master. The champion of chess right now is an AI. The champion of the game Go is an AI, but can you have a conversation with the AI behind the Go program? Can you have a meaningful conversation—can you ask, for example, the AI responsible for the game of Go to set an appointment for you with your doctor, and at the same time process natural language, and do another task in another domain of intelligence that human beings can easily navigate? We still don't have that capability in the AI world.
To summarize my answer, in a narrow sense, if you narrow intelligence down to subfields, absolutely, AI is already there and beyond, but in a general sense we are still not there, and we have a long way to go.
AYUSHI KHEMKA: Paolo, what are your thoughts on that?
PAOLO VERDINI: My thoughts on that are that it is a very passionate and enticing type of question, because it has drawn many people from different fields to chime in and try to find a proper answer to it, or just pour out their thoughts. I find it interesting how widely people have searched for "the thing"—the entity, the element that would differentiate humans from machines. In a more popular or, let's say, non-informed way, people say, "Well, machines can't feel"—a lot of pop culture has presented that as the main difference: the feeling, the passion of the machine, whether it can have feelings. That is one thing.
Another thing that has appeared in philosophically grounded discourse is, "Well, machines lack intentionality," or that they work on the manipulation of syntax while lacking the semantic aspect, the semantic "atmosphere," as Wittgenstein would say, around it. It is interesting how, again, over and over, people have tried to point the finger at something specific that differentiates machines. There is an ongoing conversation, especially because machines acquire power not only in terms of computational power but power as a presence in our society, so it is a very important question.
There are certainly two sides to the question you asked. Is AI going to replace humans? Work-wise there are definitely some jobs that are going to be done by machines instead of humans. It is already happening, but in the future it will be even more so, because of efficiency, because of how easy it is to automate some of our job processes. Machines can be more efficient and even more precise and can be trained to do exactly what we usually do in a more specific and less time-consuming way, and so forth. Of course some jobs are going to be more in jeopardy than others, and that is part of the question. That is the state of things right now.
In terms of the more general question, "Are machines going to replace humans in their role in history and development?," I completely agree with what Emad has said so far. I would add, for example, that the danger here is that we are making ourselves more similar to AI and not vice versa. This is what is happening. We reason nowadays very much along the lines of efficiency, of output. Everything has an input/output metaphor attached to it, or in general advantage or growth. Somebody would say—I won't mention the name because it is a little bit of a dark horse, but I will whisper it—Heidegger warned us, even though his analysis of technique was not so much a warning as an observation or description of things.
So we need to be very careful not to let our own product become the thing that judges us rather than the other way around. I think this is something we have to keep in mind, and it is also something that differentiates our way of interacting with each other and our place in the world from AI's.
AI systems by themselves and by design do not have a project. They are not within a community of development as we humans are. We are taking AI into our life project to make it better, easier, more efficient—"efficient" is not a bad word, so it can actually be there—and cleaner. It can help us solve problems. That is the main thing to keep in mind. We have to resist the risk of relating to what we have created by becoming more AI-like ourselves, and not vice versa. I think this is very important.
Reading Aristotle, for example, in the Nicomachean Ethics, as far as I know the first theoretical distinction is between knowledge and wisdom. Knowledge is something that is object-like and necessary. Leaving aside the theoretical or contemplative side that Emad was talking about, as far as deductive knowledge goes—we can go as far as that—sure, machines are there, and it is perfectly fine.
Wisdom is the ability to use rationality to further the human good, and that is something that is inherently human because it sits within our project and is part of the history of our development. We are welcoming AI into our general global system, and that is something we should always be very cautious about: we should not abandon that flagship role and try to reverse the roles, because that is a very slippery slope. We are our own worst enemies in this sense. We should keep our heads on our shoulders and treat AI for what it is. Of course there are problems, as we discussed before, and the discussion is open to further dialogue, but this is what I wanted to say about this point.
AYUSHI KHEMKA: Now that we have quoted Kant, Chomsky, Heidegger, and whatnot, my final question is about grad school stuff. Do you think, as grad students working on AI and related fields, there are ways in which universities specifically can help push forward the discourse on an ethical and equitable AI?
Related to that or maybe prior to that would be another question: Is an equitable and accountable AI even possible? Does that exist? Can that exist?
EMAD MOUSAVI: To answer the last question first, I would say that it absolutely is possible. I understand that there is a certain school of thought out there that thinks it is not possible, but I would submit that that is because of a lack of imagination. This ties to the first part of your question of what can schools and universities do when promoting equitable AI.
AI is here. Artificial intelligence is here whether we like it or not, and as Paolo was mentioning just now, being efficient is not bad. It is a good thing, and this is one thing that AI is designed for, to be more efficient, to do tasks, to do jobs more efficiently. This brings with it a lot of baggage in the sense that, "Okay, it's going to replace a lot of jobs." That's where we should be more I would say creative with our solutions to the problem.
For example—I am just going to put it out there—something like universal basic income. I understand that there are politicians in the United States who are advocating this. I am not talking policy or politics here, but this is a real thing. AI is going to replace some jobs, and for AI to be equitable we need to come up with ways in which AI can benefit everyone, can benefit society, because the reality of the present—I am not even going to say the future—is that AI is already here and is already taking up a lot of jobs and tasks that used to be done by ordinary people. We can absolutely design a better AI system. We can think about the ethics of AI—in response to your two questions before this one, that usually ethics is an afterthought—and this is one of the domains that we as researchers in this field should be more aware of and more sensitive about.
Paolo and I are not specifically working on this subfield that I am talking about, but there are people in the field of AI and ethics who are working in this subfield and should be more cognizant of the ramifications, the effects of an AI system on the society as a whole, and should come up with creative ways and solutions to make AI more equitable for everyone. I think that it is possible.
There are good proposals out there, but we definitely need to discuss more the ways in which we can improve on those and the ways we can expand on some of the ideas, and what universities can do is wake up to the reality, to the "present"—again I am very deliberate in choosing this word, that it is not a future problem, it's here, and we have to start dealing with it right now—wake up to the fact that AI is here, we are currently doing a lot of tasks using this technology, and there is no stopping it. There is no turn-off switch that you can say: "I don't like this technology. Let's not do this anymore." You can do it in a very local sense. A university can decide not to work on it, but that does not mean that the other universities, the other cities, and the other countries are going to do that.
AI is here to stay, and it is just going to get bigger and hopefully better. We—as researchers, as administrators in a university, as people who are in charge of making decisions—need to start thinking about this more seriously and more deliberately. That is my view on the question.
PAOLO VERDINI: On my part I think quite a lot has been done recently. We have to understand that the pace of development in AI probably outruns the capacity of university programs—or, if not the programs themselves, the structures within the university—to keep up, meaning that some adjustment time has to be taken into consideration here.
I think there is a good deal of discussion going on nowadays in universities and a good deal of attention to representation and problems of representation and, as I said before, data collection, transparency, and policies that discuss privacy regulations and surveillance.
Often the first picture we get of AI is the robot-like person who simulates the appearance and demeanor of a human, but in terms of what Emad was saying, if intelligence is understood as the capacity for calculation, let's not forget that rationality comes from ratio, which in Latin means "calculation." A simple operator or a tool that just computes 2+2=4 is almost an AI system in itself—an undeveloped AI system, but AI nonetheless.
We need to intervene into the discussion, but that is already happening. We need to allow some time for universities and people to know exactly how to tackle the questions, which I think has been done, at least in certain universities, quite well. The discussion is very much open. This is the first part of the question that you asked.
For the second part of the question I will again go a little bit on a rant, and it goes like this: Is an equitable and accountable AI possible? I would like to do a very philosophical thing and divert the question to something I can answer, and say that one of the many things we will see in AI is that the curator or policymaker will more and more be the huge corporations, because let's not forget that is where most of this is headed.
They are doing a good job of addressing problems of representation and minorities and how exactly AI systems are working, but if you set that aside for a second, let's not forget that the ultimate goal of the automatic approach to problem solving is to free us from the toil and trouble of work and the workload. It should make our lives easier. If I hint at Marcuse, for example, reducing the stress of our work should help us flourish as a species, in the sense that we have more time for ourselves, more leisure time—not leisure as just sitting on the couch but to do meaningful things for each other. That should be the ultimate goal of an AI, to free us from the toil and trouble of heavy, heavy work.
That brings up another question. Even if some AI researchers or AI policy researchers are doing a good job of checking that representation is in order and that there is no discrimination, at the same time one big question is how the distribution of wealth is affected by the installation of ever more powerful AI systems.
We are not heading toward a fairer society, not at all. I think this is a main problem that is even more important—I don't know if problems should have a scale—because whoever deploys and employs AI systems is very likely to have the power to centralize huge quantities of wealth. How is this wealth going to be redistributed? How is this going to positively affect society?
Let us not forget something that is very, very understated, which is the environmental impact of AI. Running models like the one we use in the Ethics Bot project, like GPT-3, is extremely taxing from an environmental point of view—not what Emad and I specifically are doing, but you can see how the CO2 production for training a model like that is off the charts. It is something like 300,000 times larger than a single individual's CO2 production in a year. This has been shown by a study. There is a little piece of code available on the web that can be attached to your own code to see, while you are training your model, exactly how impactful what you are doing is.
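Paolo does not name a specific tool, but the kind of snippet he describes can be sketched as an emissions tracker wrapped around a training run. The example below assumes the open-source codecarbon package and a stand-in train_model function; it is an illustration, not the Ethics Bot's actual training code.

```python
from codecarbon import EmissionsTracker

def train_model():
    # Hypothetical stand-in for a fine-tuning loop; replace with real training code.
    total = 0
    for i in range(10_000_000):
        total += i
    return total

tracker = EmissionsTracker(project_name="ethics-bot-training")  # illustrative project name
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent emitted
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```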
So there are a lot of problems with AI. AI is not just about efficiency. It is not just about making things easier. In itself it has a negative impact on our environment. So sustainability is a problem.
Representation is another problem, and how AI is designed is another problem in itself. If I can touch for a second on the problem of representation, I think we have a theoretical problem there, because most of the ways AI systems work—and I am talking more about prediction systems, statistical systems, or even natural language processing, which concerns us more closely—are based on data gathering, so these systems rely on the past in order to predict the future. They learn from what has been in order to predict what will be.
Of course we have seen that using this kind of system might very well lead to discrimination, so how do we change that? Either we put in a human filter that intervenes afterward—and we are back to the afterthought—or we need to change the way AI works right now, which is, again, gathering large amounts of data and learning by itself.
This is very much a problem. The theoretical problem is that we want a bias-free AI in order to be more equitable, but at the same time we want it to maintain a core, human level of bias. Bias is something that is inherently human, not because we have prejudgments about groups of people or even about ourselves, but because we do not obliterate our background and our history. We come from a place and we are going forward to another place, and we make decisions based on that. To have an AI that completely cancels that and works in a different way is something that I am not sure we are prepared to design or even think about—something that is so alien to the way we understand problems.
AYUSHI KHEMKA: I think that gives great input and questions for our audience to think about and for us also.
That brings us to the end of our episode. Thank you so much, Emad, thank you so much, Paolo, for joining us. It was wonderful talking to both of you, and I hope you had a good time.
PAOLO VERDINI: Yes.
EMAD MOUSAVI: Yes. Thank you, Ayushi. We appreciate it.
PAOLO VERDINI: Thank you so much for having us.
CORDEL GREEN: The AI4IA Conference and the podcast series are being hosted in collaboration with AI4Society and the Kule Institute for Advanced Studies, both at the University of Alberta, the Centre for New Economic Diplomacy at the Observer Research Foundation in India, and the Broadcasting Commission of Jamaica.