Artificial Intelligence Governance and Regulation with Catherine Régis
Democracies around the world are grappling with the best uses of artificial intelligence (AI), the ethical and legal challenges it can pose, as well as the benefits it can bring to citizens. In this episode, Catherine Régis, an expert in AI governance and regulation, talks about the importance of involving experts and citizens to address AI's ethical challenges, the need for international collaboration and initiatives to address AI-related issues, and the potential of AI to accelerate progress on the United Nations Sustainable Development Goals.
Background reading
Subscribe
Subscribe to Breaking Boundaries wherever you listen to podcasts so you never miss an episode.
Read the transcript of this show
[00:00:00] Annelise Riles: Welcome to the Breaking Boundaries podcast. I'm Annelise Riles, Executive Director of Northwestern University's Roberta Buffett Institute for Global Affairs. The Northwestern Buffett Institute is dedicated to breaking through traditional silos of expertise, geography, culture, and language to surface novel solutions to pressing global challenges. This season, we're focusing on the relationship between technology and global affairs. And today, the technology we're discussing is artificial intelligence. A new report from the United States Department of State suggests that artificial intelligence has the potential to accelerate progress on the United Nations Sustainable Development Goals. But the report also acknowledges that there's much work to be done when it comes to governing and regulating AI so that it respects human rights and is equitable and inclusive. Today's guest is here with insight on how AI can be used to tackle challenges like the United Nations Sustainable Development Goals in human centered ways. Catherine Regis is an expert in AI governance and regulation. She's a professor at the University of Montréal, the Canada Research Chair in Collaborative Culture, the Canadian CIFAR Chair in AI, and a researcher at the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technology, as well as at MILA, the Quebec Institute on AI. Among her many roles, Catherine also leads the Digital Innovation and AI Lab for the U7+ Alliance of World Universities, an international alliance that includes Northwestern University and more than 50 other higher education institutions around the world. Welcome, Catherine. Bienvenue.
[00:01:54] Catherine Regis: Hi, Annelise. Thank you for having me. I'm so excited to be here.
[00:01:58] Annelise Riles: So, Catherine, can you tell our listeners a little bit about you and how you got interested in AI?
[00:02:05] Catherine Regis: That started a couple of years ago. My first expertise is in health law and policy. The vice president of my university organized this very interesting workshop with a lot of experts from medicine and the social sciences, and people who are working in health more broadly. They discussed how AI could be useful in their work and what they were already doing in that field. So I was invited as a legal scholar to participate and hear what the progress was. And I was so captivated by what it could be. Then I decided to go deeper into that field and really see all the challenges that it could trigger, because from a legal perspective, for sure, there are a lot of things at stake there. And I would say the other key moment was after that meeting, when we started to discuss the pertinence of working on a Montreal declaration of ethical guidelines that could really help the movement to have better guidelines and ideas about how we should frame this AI movement with ethical principles. So what we decided to do at the University of Montreal was to gather the experts and ask: what do you think about this? What should be the key principles? And then we decided to also ask the opinions of citizens. We organized workshops and other activities where the two met. At the end of the day, the combination of those two voices created the Montreal Declaration on Responsible AI, where we have 10 principles and sub-principles. And really, that was another key moment, I would say, in my professional life that made me interested in this field.
[00:03:43] Annelise Riles: So first of all, that was way back in 2016. I realize it seems like yesterday, but in AI time, that was a long time ago. What were some of the main concerns circa 2016?
[00:03:54] Catherine Regis: At the beginning, I think the regulation wasn't that clear. It was more about: how can we have these ethical principles that are soft and can rapidly evolve, because the AI field is rapidly evolving itself? So we needed to have some guidelines that could follow that fast pace. And the challenges are many from an ethical and legal perspective, but I would say nondiscrimination was already a challenge right at the beginning, knowing that these AI systems are built on data that are not always representative of the different communities on which they will be used. That fact sometimes led to discriminatory results from the AI systems for those populations. But there were also broader ethical and societal concerns that we had at that time. Who's going to benefit from these technologies? What is going to be the impact on the environment? Because, you know, they require a lot of energy and material as well. What kind of impact will it have on jobs? And how is it going to transform our key institutions? Justice, health systems, and all the institutions that are going to have to work with the machine to a certain extent. And what does it mean for professionals and our interactions in that space, and for patients, for instance, and other key actors?
[00:05:11] Annelise Riles: And one of the things that sounds so interesting about this project was that you decided to engage ordinary citizens in this conversation. So can you talk a little bit about why you did that and how you did it?
[00:05:22] Catherine Regis: So we couldn't just ask a lot of people, you know, what do you think about AI? First, because AI is a very complex technology, we needed to bring people on board and have some educational pieces that come with it. So we organized a lot of different workshops where the experts were meeting with the citizens. They were explaining the technology: what we know, what we don't know, and then the citizens were interacting with these experts and expressing their views. So that was one way. But another thing that is really original is that we decided to work with prospective scenarios. We were presenting scenarios about the future of AI in different fields: education, justice, culture, health, and all these key areas. For instance, we had a scenario in AI about digital twins, right? It's kind of a version of yourself in a digital space, so we could test things on you without having to test them on your body, for instance. It would have a lot of information about you, kind of an avatar, I would say, that could be very useful in some ways. But it also triggers a lot of questions about what you do with that digital twin, and there are a lot of issues. So knowing these things could happen, what would you want to make sure, in the present, that we take into account and develop as a key principle for this development? We did that in many areas, and that helped to trigger a concrete understanding of where this could go and what we need to do now to prevent it. I think the key illustration is the privacy principle. There are a lot of issues in AI about how it could really impact privacy, because you have a lot of data about people. The more personal the data in an AI system, often the more efficient or effective it can be for you. So it's kind of a trade-off that you have to make. Sometimes you have to renounce a certain effectiveness of the AI system.
So we don't want to be constantly in a space where we are surveilled by AI systems, where we are constantly in this digital world and cannot disconnect from it, right? So we added the privacy principle to the Montreal declaration. The goal was really to add some democratic legitimacy to the process. And, I would say, to engage in this educational perspective and journey, not only for the citizens about what AI means, but also about what the considerations are for the citizens, from an expert perspective. So both of the groups, I would say, really learned from this process. We had a really recent survey in Canada organized by CIFAR. They analyzed millions and millions of tweets and also Google searches. And what they realized is that Canadians actually have a very positive view of AI, but they see it as a shiny object, and they don't really have that critical perspective about what it could mean and how it could transform their lives on a day-to-day basis and more broadly in society. But in that survey, we realized that Quebec had more discussion about this issue of AI. I think the fact that we had the Montreal Declaration and all these conversations going on during the process and after kind of helped to have more discussion about this topic in Quebec.
[00:08:42] Annelise Riles: That's so interesting, Catherine, because here at the Buffett Institute, we're finding that whatever problem we're dealing with right now, whether it's energy policy or peace and security or global financial markets, we keep coming back to the same thing, which is that we need to have a conversation between experts and citizens. And it's critical, yet it's really hard to do, right? And I love your idea of using scenarios. I think that's a really neat idea. We may start to do that from now on. I want to ask you about an open letter that you signed recently, along with others like Elon Musk, entitled "Pause Giant AI Experiments." Can you tell us a little bit about what this letter said and why you signed on?
[00:09:25] Catherine Regis: This letter was a response to the realization that AI was developing incredibly rapidly, and with capacities that we didn't anticipate, at least in such a short term, that could be extremely impactful in our societies. Even the people who were developing AI technologies were surprised by this pace. If you remember, this letter was sent after the release of ChatGPT-4. And after that period, AI became embedded in the very fabric of our lives and societies to an extent that we didn't have before. It could influence how we think, write, interact, and create as individuals, professionals, communities, and so on. So the letter asked that we pause the development of AI systems even more powerful than ChatGPT-4 for a few months, in order to take the time to define and implement appropriate governance frameworks for this technology, considering that such frameworks take time to develop. We need to discuss and agree on the rules with which we want to regulate this technology, not only at a local level, but also at the international level. And we need more than ethical guidelines; we need binding regulations to follow this technology. So it is this desire to reduce the clash between, on one hand, a rapidly evolving technology with powerful capabilities, and on the other hand, the need to develop sound and appropriate governance strategies, which take time, that is at the heart of the letter. Not surprisingly, I would say, the pause requested by the letter didn't happen. But it certainly triggered a lot of interest, debates, and initiatives to do better in that space. And progress is being made. I have been quite surprised by the reaction from governments around the world and from international organizations like the United Nations and other key institutions, which have shown interest in doing more to contribute to the governance effort. I think it's a good sign, and the letter achieved part of its objectives in that sense.
[00:11:22] Annelise Riles: What are a couple of examples of the possibilities of AI, the positives that you see for humanity, and then what are a couple examples of the biggest dangers that you're most concerned about and that you think we need to address with regulation or new ethical principles?
[00:11:37] Catherine Regis: Okay. So let's start with the positive. I mentioned the healthcare space. That's something that I really work on; it's very important to me. I've been doing this for many years now. And this is one of the key areas where AI can really lead to some important progress and really improve the life and efficiency of our healthcare system. To give concrete examples, we already know that it can improve diagnoses and their accuracy and efficiency. It can also help, in an on-site and super rapid way during surgeries, by providing valuable information on a constant basis to help the doctors make good choices during operations. It can also really speed up the process of drug discovery. We all know by now, after that pandemic, that accelerating the process of discovering vaccines and other drugs can be super important and really have an impact on lives. And also, just about efficiencies: there's a lot of inefficiency in our healthcare system, and there's a lot of bureaucratic work that we could improve with AI. For instance, by being able to more rapidly take information from people's files and coordinate this with, you know, a plan, we can really accelerate things too, knowing that between 15 and 30 percent of dollars are spent on things like that, which don't add any value on a human level. If we can really use AI to help us there, and take that time and that money to do something else in the healthcare system that would really improve healthcare services, I think that would be absolutely amazing. It's already starting. So healthcare is a big one. But let's talk about the environment. AI can also really help us use energy in a more efficient way: better planning, you know, how to manage temperature in big buildings, and also helping farmers to better anticipate the temperature and plan accordingly, and use resources, water and so on, in a way more efficient way.
So AI and the environment is a good example of where it could be useful, but it is also in tension with potential negative impacts. AI also requires a lot of energy, and it requires materials that can pollute and consume a lot of the precious resources of this planet. So it's kind of in tension, but that's another example of what it could do. And I'm not going to go into all the scary things that it can do, but something that is really important to me and to a lot of others is how AI can impact democracies. Just to give you the full picture of how acute and important it is: 2024 is considered to be the year of elections. Key elections will happen in places with significant power and population: in the US, in the EU, in India, in Mexico. Actually, this year's elections will cover more than 2 billion people across the planet and more than 40 trillion dollars in GDP. So this year of elections will really influence the global interactions we will have for many years to come. But we know that AI can also really have an important impact on democratic institutions and processes. One example, and it's not the only one, is that it can really increase the speed of disinformation and influence. It can produce deepfakes that are really convincing, in a super fast way, that can confuse people about what's true and what's not true. And that ultimately will really influence the debates we will have and what people vote on. What before took a lot of people many weeks to produce as disinformation will take seconds for AI to do. So you can just imagine the scale and speed of it. That's one example that I think we need to be really worried about, and there are no easy solutions. And we need to help all the other countries that are facing this challenge. We need to be in this together. But first of all, we need to make sure that when we're interacting with an AI system, we know we are in front of an AI system and not an actual person. So it needs to be labeled as such.
[00:15:51] Annelise Riles: You lead the Human-Centered AI for and by Colleges and Universities, or HAICU Lab of the U7+ Alliance of Global Universities. Can you talk a little bit more about this project and in particular, why global cooperation is so important in this area and how colleges and universities around the world can play a role in achieving better outcomes?
[00:16:14] Catherine Regis: So HAICU was based on the idea that universities have to really work together, because they are facing the same challenges in AI, and they have this unique capacity to bring evidence-based information into the debate and hopefully influence it in a positive way. And we all know by now that AI is not an issue for one single country or one single person. It's something that goes beyond the borders of every country. So we really have to work together to resolve these issues and pool our expertise. And one thing that I realized, and that I wanted to really address with HAICU, is that the debate, and the expertise, is often concentrated in certain places on the planet: in the global North, in certain countries. There are a lot of people who are left out of this debate. And I really wanted to create a space where we could connect different regions, different perspectives, social science, people from the tech side, and all these different perspectives, but also from a global perspective. What do the Japanese think about AI, and how do they view this? How do the Indians? We really need to come together to figure it out, actually. So I was really challenged by the different views in HAICU, and I think it really improved my knowledge, and I'm more open than I was before about how we could face this together, and about where we don't agree as well. We work on different projects to make sure that we activate this view. And I will say that HAICU is more focused on higher education. That's another field I didn't mention before, but the university as an institution is itself impacted by AI in so many ways. Not only in how we do research, but in the fact that we become more and more dependent, because AI needs a lot of data, and sometimes private companies have these data. So we have more dependence on these companies to do some research work. And now we know, with ChatGPT and all these generative AI tools, that it impacts the way we teach.
The way we interact with our students. And that's another example of where universities can come together and not reinvent the wheel each time, but share: how do you do this? How have you managed the professor and student relationship using these systems? How is your university going to use AI, or not, in its administrative processes? Are you aware of the discrimination it can create? If we have that place where we can really share and go faster, because we don't reinvent the wheel each time, and where we have more critical thinking because we are exposed to other views, I think it's just fabulous.
[00:18:54] Annelise Riles: You mentioned that there were different points of view among scholars from different regions. Can you say a little bit more about this? I mean, what were some differences that stood out to you between, say, the Japanese perspective, the North American perspective, and the European perspective?
[00:19:09] Catherine Regis: So there are a couple of examples. An interesting one, I think, is how the Japanese might view the role of robots and eventually AI, compared to Canadians, for instance. We realized that people in Japan may have a different interaction with robots and can view their roles as more embedded and more natural in their personal and social lives. So, for instance, if I'm a patient and I need to have some information about healthcare services, or really have someone to talk to, I would usually think that it would be better to do so with an actual person. But in Japan, they may be more open or more used to interacting with robots, and therefore it would not necessarily be a bad thing, on the contrary, in some circumstances.
[00:19:49] Annelise Riles: That's fascinating. Finally, I want to ask you a question that I ask all my guests, which is, as you think about five years, 10 years, or 15 years ahead in the world of AI, what are you most worried about and what are you most hopeful for?
[00:20:04] Catherine Regis: I have to go back to the democracy piece. I think it's perhaps more fragile than we thought, and AI is another piece that plays into that space, where we need to stand up and fight for our democracies more than ever. So I'm certainly worried about this, not only now, but in five years and in ten years: how are we going to protect our judicial institutions, and all these institutions, knowing that AI will really transform them, but also that it can have a really strong impact on how we work with these institutions, how we vote for governments, and all of that? So I'm super worried right now, but I don't think this issue will go away in one year. It will be there in five years and ten years. The other one is that, as some say, this technology is rapidly evolving. There's a lot that we know now, and there's a lot that we don't know now. And now that we are well aware that we need to work on this at a global level, collectively, we need to be able to develop frameworks and governance structures that are capable of not only addressing the risks that we have right now with AI, but also the things that we don't know, and of being able to anticipate and adapt to this new reality. That's something that requires agility, and it's tough to do, honestly. And there are things that are pretty scary, perhaps, in the future that we need to work on right now. On the positive side, I would say we all use these tools on different levels, and we see how they can improve our lives in so many ways. Google Maps is just one example; it really facilitates our lives. Healthcare, I mentioned: it can save lives. It can really improve the lives of a lot of people, and allow us to better use our limited resources, which are so precious on this planet. For so many things that we need to use, it can improve efficiency as well. Let's think about government. There's a lot of distrust globally about governments.
There's a global trust deficit disorder right now. And if we can improve the efficiency of government by using these AI systems to really help citizens access services, to better understand what things mean, to help us complete some forms and have some translation, and to better connect with the government, improving efficiency in so many ways, I think that's already a really important thing. So I'm looking forward to what's next, but I'm also really keeping my eyes on the issues and working hard to make sure that we have norms at the local but also the international level that really help us to follow this wave.
[00:22:42] Annelise Riles: And maybe AI can even help us to improve the conversations between citizens around the world and experts on the way to producing better norms to regulate AI. Catherine, it's really an honor to work with you. You're an inspiration and I think the world is lucky that we have a brilliant and agile leader like yourself thinking about these really challenging questions. Thank you so much for all you do. For more information on this episode and on the Northwestern Buffett Institute for Global Affairs, visit us at buffett.northwestern.edu.