
Northwestern Buffett Institute for Global Affairs

The Future of Deepfakes with V.S. Subrahmanian, PhD

Many deepfakes are designed to spread disinformation or cause confusion and mistrust, and are therefore a threat to UN Sustainable Development Goal 16: Peace, justice and strong institutions. How do we combat deepfakes, and can technology help us address this global challenge? Annelise Riles discusses these questions with V.S. Subrahmanian, one of the world's leading experts on the role of AI in national and global security. Subrahmanian is the Walter P. Murphy Professor of Computer Science at the Northwestern McCormick School of Engineering and a Faculty Fellow at the Northwestern Roberta Buffett Institute for Global Affairs.


“A deepfake could be a piece of audio, a video, an image or even a piece of text: something which has been synthetically generated, which may be true, which may not be true, which may be intended to mislead or which may not be intended to mislead. So deepfake technology is applied to a number of settings. We hear a lot about the negative uses of deepfakes, but there are also some positive uses.”

— V.S. Subrahmanian, PhD

Background reading

  • Read The Brookings Institution report Subrahmanian and his co-authors published called “Deepfakes and International Conflict.”
  • Connect with Subrahmanian on Twitter
  • Explore the Northwestern Security and AI Lab, supported by the Northwestern Buffett Institute

Subscribe

Subscribe to Breaking Boundaries wherever you listen to podcasts so you never miss an episode: Spotify, Google Podcasts, Apple Podcasts, Amazon Music and Stitcher.

Read the transcript of this show

[00:00:00] Annelise Riles: Welcome to the Breaking Boundaries podcast. I'm Annelise Riles, Executive Director of Northwestern University's Roberta Buffett Institute for Global Affairs. The Northwestern Buffett Institute is dedicated to breaking through traditional silos of expertise, geography, culture and language to surface novel solutions to pressing global challenges.

We've all heard a lot lately about deepfakes, video, audio and images that look and sound real, but are actually the product of fabrication by someone or something often using artificial intelligence and deep machine learning. Many deepfakes are designed to spread disinformation or cause confusion and mistrust, and therefore are a threat to peace, justice and strong institutions around the world — and listeners to this podcast know that peace, justice and strong institutions are, of course, UN sustainable development goal number 16.

So how do we combat deepfakes and can technology help us in any way to address this global challenge? To talk about this, I'm delighted to introduce today's guest. V.S. Subrahmanian is the Walter P. Murphy Professor of Computer Science at Northwestern's McCormick School of Engineering, and a faculty fellow right here at the Northwestern Buffett Institute for Global Affairs. V.S. is one of the world's leading experts on the role of AI in national and global security. He's using his knowledge at the cutting edge of computer science to address everything from combating terrorism to stopping rhino poachers in Africa. V.S., thank you so much for being here.

[00:01:45] V.S. Subrahmanian: I'm excited to be here, Annelise. Looking forward to this conversation.

[00:01:49] Annelise Riles: I'd love for our listeners, first of all, to get to know you a little bit, because you are such a boundary breaker. You work across national boundaries and across disciplines on problems of all kinds, so you have such an interesting career. Tell us a little bit about how you got involved in artificial intelligence and national security, and what was your path to having such a broad and multifaceted career?

[00:02:12] V.S. Subrahmanian: So Annelise, I've been an AI guy since the time I was doing my PhD, but my interest in national security issues really took a steep upward turn after 9/11. At that time, I lived in the Washington D.C. area. We all know about the horrible events of that day and this spurred me to start thinking about applying the kinds of advanced AI techniques we were developing to national security issues, and in particular the war on terrorism.

At the time, in the early part of the operation, around 2002 and 2003, the U.S. was in Afghanistan, and we were suffering many, many setbacks. We did not understand the tribes in Afghanistan or what their behaviors were. And so we started looking at trying to learn models of the behaviors of the tribes in Afghanistan.

It was clear this was a multidisciplinary effort and so we reached out to sociologists, area experts from the region, political scientists, and others. And it became clear that we were not able to do a good job at learning models of these tribes because there was so little data about them that was available.

In fact, guys from the 10th Mountain Division of the U.S. Army came to me looking for the data I had on these groups, and it turned out, at least in some cases, they liked our data more than what they were getting before deploying. But we moved from there to terrorism simply because there was more data about terrorist groups. And so we've been studying AI and national security issues for over 20 years now, almost 25.

[00:03:44] Annelise Riles: That's absolutely fascinating. We're gonna talk about deepfakes, but I think there's so much talk about the negative consequences of these technologies, I wanna give you a chance to talk about one of your efforts to create a positive use for one of them. So right here at Buffett, you are leading a global working group focusing on the way AI can support global social movements. Could you tell us a little bit about what you're doing there?

[00:04:07] V.S. Subrahmanian: So this particular project brings together political scientists, communications experts, people with expertise in the business sector, as well as folks in law. All of us came together and said, you know, how can we bring the power of AI to help support social movements that are trying to achieve goals that are supportive of the SDGs?

So what we are doing is building out advanced AI techniques that can take data from online social media and from news sources and say: which of these articles, which of these posts, is about a social movement that is supportive of an SDG?

Can we say, this bunch of data that we've gathered over the last week is about SDG 1, this bunch of data is about SDG 16, and so forth? And can we then summarize this kind of data in some appropriate form, so that personal information is stripped out but the gist of what's going on is available to policymakers and to journalists, so they can see what's happening in the area around them, talk about it and hopefully accelerate the success of these movements? Again, we're looking at movements supportive of the SDGs. We'd like them to grow and be successful.
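To make the shape of that pipeline concrete, here is a minimal, purely illustrative sketch in Python. It is not the Northwestern Security and AI Lab's actual system: the SDG keyword lists, the regex-based scrubbing and every name in it are hypothetical stand-ins for the far more sophisticated models Subrahmanian describes.

```python
# Purely illustrative sketch, not NSAIL's actual pipeline.
# It tags posts with the SDG they appear to discuss (hypothetical, heavily
# simplified keyword lists), strips obvious personal identifiers, and
# produces a weekly summary suitable for sharing with policymakers.
import re
from collections import Counter

SDG_KEYWORDS = {  # hypothetical keyword lists
    "SDG 1": ["poverty", "basic income", "food insecurity"],
    "SDG 16": ["rule of law", "corruption", "press freedom", "justice"],
}

EMAIL = re.compile(r"\S+@\S+")
HANDLE = re.compile(r"@\w+")

def scrub_pii(text):
    """Remove obvious personal identifiers before the data is shared."""
    return HANDLE.sub("[user]", EMAIL.sub("[email]", text))

def classify(post):
    """Return the first SDG whose keywords appear in the post, or None."""
    lowered = post.lower()
    for sdg, words in SDG_KEYWORDS.items():
        if any(w in lowered for w in words):
            return sdg
    return None

def weekly_summary(posts):
    """Aggregate a week of posts into counts per SDG plus scrubbed samples."""
    tagged = [(classify(p), scrub_pii(p)) for p in posts]
    counts = Counter(sdg for sdg, _ in tagged if sdg)
    samples = [text for sdg, text in tagged if sdg][:5]
    return {"counts_by_sdg": dict(counts), "sample_posts": samples}

if __name__ == "__main__":
    demo = [
        "March for press freedom today, contact me at org@example.com",
        "Rally against rural poverty organized by @local_organizer",
    ]
    print(weekly_summary(demo))
```

In practice the classification step would use trained language models rather than keyword matching, but the flow (classify by SDG, strip personal information, summarize for policymakers and journalists) follows what Subrahmanian outlines above.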

[00:05:17] Annelise Riles: Let's talk now about deepfakes. First of all, what is a deepfake? How do you define it? 

[00:05:22] V.S. Subrahmanian: So a deepfake could be a piece of audio, could be a video, could be an image, or could even be a piece of text. It's something which has been synthetically generated, which may be true, which may not be true, which may be intended to mislead, which may not be intended to mislead. So deepfake technology is applied to a number of settings. We hear a lot about the negative uses of deepfakes, but there are also some positive uses.

[00:05:45] Annelise Riles: Could you give us some examples of deepfakes that maybe our listeners might be familiar with or have seen?

[00:05:51] V.S. Subrahmanian: I can certainly give you a number of deepfakes that people may have heard about, at least. As an example, here in the Chicago area, there was some guy who decided to create deepfakes of the Pope wearing a puffy coat and very snazzy sneakers. You know, he looked like he could be a model. I don't think the person had any malicious intent. It was clear, I think, to most people that these were deepfakes and, you know, it was just fun to look at. I suspect the Pope himself took this with a grain of salt and perhaps chuckled a little bit. I certainly did. But on the other hand, there are a number of negative uses of deepfakes.

So, for example, we have the deepfake, presumably produced by the Russians, of President Zelensky of Ukraine telling his soldiers to lay down their arms. That's a problem. There are other instances of deepfakes like this, where specific people have been portrayed as saying things that they never said or never would've said. In our own area in Chicago, we had a mayoral election recently, and there was a deepfake audio this time, of one of the mayoral candidates, Paul Vallas, allegedly saying things that showed great insensitivity to the actions of police in the killings of innocent civilians. We are all aware that police shootings are happening, but he was represented, via a deepfake audio, as saying something that we don't think he ever said.

So those are the kinds of negative impacts that we will see, that we have seen already, and that we expect to see in coming years.

[00:07:16] Annelise Riles: If I as a citizen am looking at something, how do I know if it's fake or not? Presumably, if I see the Pope in sneakers and a puffy jacket, I have a pretty good clue that that's probably not accurate. But take the Paul Vallas audio: how would I have known? What would've been the clues that I should pay attention to?

[00:07:32] V.S. Subrahmanian: The first thing is, in the case of an audio, you're looking at the way the person is speaking. And in this case it was very strange that we had an audio and an image, but no video. Alright, so that suggested that whoever created the deepfake was able to produce a good deepfake audio, but was not able to create a good deepfake video.

And the first question that went off in my head is, why on earth is it a picture and an audio and captions of what he's saying? If he had really said it, I would've thought they would've had a video of it, especially if they have all this other material. So the first is to use common sense and think about the credibility of what you're hearing.

Is this something you would expect this person to say? And is it being presented and portrayed in a way that seems credible, not just what was allegedly said? The second thing is, are you noticing discontinuities in his speech that you would not ordinarily expect? You know, there may be good reasons why that's happening, even if the video is real. But these are all things that raise questions. 

And in the case of deepfake videos, you're looking and saying, is the person's mouth moving in a way that's consistent with the person's speech? There was a deepfake image of Putin being arrested that I saw, again, a few weeks back. And it was interesting that in this image you could see his wrist, but you could only see part of a handcuff. Given the place the photograph was taken from, you'd have expected to see the whole handcuff, not half a handcuff. It clearly was not a credible image.

So you look for signs of credibility in the image as a human being. And anytime you see something that appears false, be suspicious. We're entering the age when more and more deepfakes of different kinds are gonna come together and we need to be careful in what we believe is credible and what we think is not true.

[00:09:16] Annelise Riles: So in your Northwestern Security and AI Lab at Northwestern Buffett, you're using AI to better understand deepfakes, and I understand that one of your projects is actually to produce your own deepfakes. So why are you doing that? Can you tell us a little bit what that's about?

[00:09:33] V.S. Subrahmanian: Yeah, you know, the first thing is, it's really hard to combat a technology if you cannot understand how that technology works. And this goes all the way back to the Chinese military strategist Sun Tzu, who said, "the man who understands himself and who understands his adversary will win a thousand battles."

But if you only understand yourself, or only the adversary, you're gonna win a lot less. In today's world, that means we need to understand the technology that the adversary might use as well. So we used deepfake technology and developed a system called TREAD, which stands for Terrorism Reduction using AI Deepfakes.

And the idea in TREAD is quite simple. We know that certain terrorist groups, such as Lashkar-e-Taiba, which carried out the Mumbai attacks of 2008, don't carry out any attacks when they have internal dissension within the group. So it suggests a counterterrorism strategy of sowing internal dissension within the group.

You know, these are not good, upright citizens whose opinions I care a whole lot about being misrepresented. So if the goal of a U.S. organization, such as, you know, one of our intelligence agencies, is to sow dissension in Lashkar-e-Taiba, then one way to achieve that is by generating deepfakes of, perhaps, one leader of the group criticizing another leader of the group, or of one leader of the group portrayed speaking to somebody he might never have been expected to speak to, which automatically makes the others think he's in, say, the pockets of the CIA.

So can we generate deepfakes that lessen the trust that those terrorists have in each other, that lessen the respect they have for each other, and that increase the amount of dissension within the group? Generally speaking, we're not out to increase dissension between groups of people, but when these people are terrorists who are carrying out attacks, and dissension causes them to reduce their number of attacks, you know, it is a legitimate question to ask: is sowing such dissension okay?

We know this was done many years ago in the case of the infamous terrorist Abu Nidal, who killed many of his own lieutenants because he thought they were working as spies and spying on him. So the strategy does work, and has worked for many years. This is sort of the 21st-century technology version of it.

[00:11:44] Annelise Riles: That actually gets us to the ethics of deepfakes, and deepfakes in national security strategy. And I know you've been working with the Brookings Institution and others to think about that and what some good guidelines for this might be. So how do we think about how far you can or should go in using these technologies?

I think a lot of American citizens might be surprised to learn that their government does this kind of thing. So how should we think about that? What are your thoughts on it?

[00:12:12] V.S. Subrahmanian: Let me start by saying I'm not telling the US government or any government what to do or not do. But I do have suggestions on what a responsible government somewhere should or should not do. And the first is that use of deepfakes is intricately linked to the respect that the statements made by the government in question command. So if a government makes statements that are false, then they should know that that impacts the credibility of every other statement that they will make in the future. And that means that there is an accompanying loss of credibility. That credibility is of great value to the United States, but perhaps of less value to some other state actors.

And so different countries are going to make this decision based on their assessment of the trade-off between their own long-term credibility on the one hand and the short-term benefits obtained by using a deepfake on the other. This is why different countries perhaps treat deepfakes differently: they view the positive value of the deepfake to their mission in one way, while another country may view it in another way. And the long-term consequences for a government that does not have a lot of credibility to start with may be small. So if you look at a country like Russia or Iran, where we don't believe what their leaders say a whole lot, the use of deepfakes is very attractive, because they're not gonna lose that much credibility, comparatively speaking.

In contrast, we, in the West and the United States, our word is our bond. So we want our words to count for something. When the president of the United States, or the Prime Minister of the UK says something, they expect people to believe them, not question their motives or the veracity of what they're saying. That is a huge asset that we should not give up lightly.

[00:13:53] Annelise Riles: On that point, you recently co-authored a report with a Brookings Institution research team on deepfakes and international conflict. Could you tell us a little bit about the findings of that study?

[00:14:06] V.S. Subrahmanian: So part of the report was imagining the ways in which deepfakes would be used in military and political contexts. I've already talked about Paul Vallas being misrepresented via deepfake, and deepfakes can be used in many military settings where they might be considered fair game.

So, as an example, for thousands of years militaries have been leaving fake maps and fake plans lying around for their adversaries to find and be led down the wrong path, one that might get them into trouble. That has been considered a legitimate military strategy for millennia, and deepfakes can only accelerate it.

Imagine leaving deepfakes of your own battle plans or operational plans in your own network. When an adversary hacks your network and steals this data, he's taking away fake plans, which might cause him to do something that he doesn't wanna do, which might lead him into a trap.

You can imagine deepfake images of battle damage that are created primarily to show an adversary that his actions have caused either less or more battle damage than in reality. Either could work in our favor. If he thinks he's damaged a target a lot, even when it was damaged much less, then he may focus his attention somewhere else, where we might prefer his focus to be.

So there are a number of operational settings in the military where I can see the use of deepfake imagery and the use of deepfake videos. You know, militarily, today we have in Ukraine apps where Ukrainian citizens can download an app onto their phone; if they see a UAV, they can point their phone at the UAV and push a button, and it's gonna send a report about that drone back to the Ministry of Defense.

I'm not sure exactly what they do with it, but I can imagine a lot of possible uses for it, including knowing that this drone is here right now, it was here a few minutes back, and it was somewhere else a few minutes before that, so it's going on this particular trajectory. It's probably heading towards this target, and we can intercept it and take it out.
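As a purely hypothetical illustration of the reasoning Subrahmanian sketches here (two timestamped sightings imply a heading and a likely near-term position), the short Python example below shows one way such reports could be combined. The coordinates, field names and flat-earth approximation are assumptions made for illustration, not a description of the Ukrainian Ministry of Defense's actual processing.

```python
# Hypothetical illustration only: turning two timestamped citizen sighting
# reports into a rough heading and a short-term predicted position.
import math
from dataclasses import dataclass

@dataclass
class Sighting:
    t: float    # seconds since an arbitrary reference time
    lat: float  # latitude in degrees
    lon: float  # longitude in degrees

def heading_deg(a, b):
    """Approximate compass heading from sighting a to sighting b
    (0 = north, 90 = east), using a flat-earth approximation."""
    dy = b.lat - a.lat
    dx = (b.lon - a.lon) * math.cos(math.radians(a.lat))
    return math.degrees(math.atan2(dx, dy)) % 360

def predict(a, b, horizon_s):
    """Linearly extrapolate the position horizon_s seconds after sighting b."""
    dt = b.t - a.t
    if dt <= 0:
        raise ValueError("sightings must be in time order")
    lat_rate = (b.lat - a.lat) / dt
    lon_rate = (b.lon - a.lon) / dt
    return b.lat + lat_rate * horizon_s, b.lon + lon_rate * horizon_s

if __name__ == "__main__":
    first = Sighting(t=0, lat=50.450, lon=30.520)      # made-up coordinates
    second = Sighting(t=120, lat=50.455, lon=30.540)
    print("heading (degrees):", round(heading_deg(first, second)))
    print("predicted position in 5 minutes:", predict(first, second, 300))
```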

Civilians are getting more and more involved in Ukraine. Again, the citizenry of Ukraine, as well as many of those sympathetic to Ukraine, have done a fantastic job in uncovering a swath of Russian-generated deepfakes which are war-related.

So these are obviously ideas that the Russian military thought were useful in their pursuit of their goals in Ukraine. And what we are finding is that, people can come together nicely to detect these deepfakes and help reduce their harmful impacts.

[00:16:29] Annelise Riles: What findings does the report have about what policymakers specifically should do in this space?

[00:16:35] V.S. Subrahmanian: The first thing is, when we think of governments using deepfakes, we feel that there should be some guardrails established. The generation of a deepfake and its use may be very, very appealing to a specific guy in the military who has a specific mission; his goal is to achieve that mission, and let's say the use of a particular deepfake is very attractive for achieving that mission. But this is a very, very narrow way of thinking, and this kind of deepfake technology, especially because it can severely dent U.S. credibility in other domains, must be used with extreme caution.

So we suggest the creation of a process, which we call the deepfake equities process. The idea is that there should be a multi-organizational, multi-institutional entity that looks at potential uses of deepfakes, proposed uses of deepfakes, and assesses whether the value of using that deepfake in the narrow operational context in which it is proposed outweighs the harm caused to the credibility of the United States.

[00:17:37] Annelise Riles: That's really interesting, because I suppose any one person in any one part of government doesn't have a field of vision into the full costs and benefits of the operation, so a committee of that kind could see the broader picture. What about the role of other sectors of society? I'm thinking about corporations, who are producing a lot of this stuff, or, again, us universities, who have an opportunity to train students in the kinds of critical thinking you just described. What do you think about these parties' roles?

[00:18:03] V.S. Subrahmanian: I want to move to the government of Finland for a minute. As far back as seven or eight years ago, they saw the Russian disinformation campaigns in Ukraine after the invasion of Crimea, and they knew that, as a close neighbor of the Russians, this might be coming at them down the pike. So they started educating kids right from the ground up, several years back. And these kids are taught to question what they see, to think critically about what they're seeing and say, "Hey, is this credible? Is this not?"

So I think one of the things that we at universities can do is to help create educational programming around many of the questions you've just asked me, Annelise. How do I recognize a deepfake video? Can we produce programming which says, you know, here's how you recognize a deepfake video? Can we partner with local schools, not just high schools but middle and primary schools, and get those kids attuned to the idea that they're gonna see more and more fake information and content out there? They should know about it from the time they're five years old and think through it. Or maybe six. I'm not sure what the right age is, but it should be early on. So that's one.

The second is, it really takes a village to detect these deepfakes. It is not something that the government is capable of doing alone. Let's be blunt: no government by itself invented deepfake technology. They may have funded researchers to invent it, but the governments themselves did not. So the technology itself is being developed in academia and in the private sector, and you need stakeholders who understand not just what the technology is today, but where it's gonna be six months down the road, 12 months down the road. We need to get ahead of that curve.

If we understand today what kind of deepfake threats are coming at us in 2024, we will do a much better job at combating those rather than be caught napping and have to play catch up next year or later. So I think we have a huge role to play in one, the educational side, educating not just kids, but also adults. 

We know a lot of adults are not very good at figuring out what's real and what's fake. We need to get them educated on this topic. So we need to put out programming on places like YouTube and TikTok and whatever venues people are watching, showing educational content that is real versus deepfake, and showing how they can think about it and distinguish between the two.

We need to set up the research institutions needed to think about what the deepfakes are gonna look like 1, 2, 3 years down the road. We need people in medicine to think about how deepfakes will be used in medicine. We need people in engineering to understand how deepfakes will be used to affect what's happening on their factory floors. We need people in transportation thinking about how deepfakes will be used to mess with the transportation system. 

So this is not something that computer scientists can do alone. It's not something that government people can do alone. It really requires people in different industry sectors, certainly the tech sector, but many other sectors. It requires people from the government and it requires people from industry as well.

[00:20:50] Annelise Riles: I wanna finish by asking you the question I ask all my guests, which is, as you think about the future right now, what are you most worried about? What keeps you up at night and also what are you most hopeful for?

[00:21:01] V.S. Subrahmanian: I think there are a number of regulations that are being put in place to regulate deepfakes, and I think they're all fine, in principle. They will do a great job at regulating what I do and what you do. But are they gonna do a good job at regulating what foreign nation states, especially rogue states are going to do?

I don't think so. The governments of Iran and North Korea haven't spent a lot of time thinking about the consequences of violating U.S. or international law, and they're gonna continue violating them. So what keeps me up at night is the fact that these guys now have technology that they understand, that they know how to use, and they're gonna use it to carry out new kinds of phishing attacks that we've never seen before.

They're gonna simulate orders. I mean, a good example is the Bangladesh Bank heist by the North Korean government, in which they siphoned off about 80 million dollars. That's what worries me, because systems are gonna get hacked. Humans are gonna get hacked, in the sense that they're gonna fall for phishing lures which are far more sophisticated than anything we see today.

I worry about the use of deepfake technology to automatically generate new forms of malware that nobody's seen before, effectively automating the development of a cyber weapon. So those are the kinds of things that worry me. On the positive side, though, deepfakes can have tremendous applications. Imagine a deepfake of a patient's heart presenting symptoms that are not seen very often, and imagine that you're training to be a cardiac surgeon. You can now have the equivalent of a flight simulator that is credibly simulating, via a deepfake, a heart that has a disease that you've never seen in a patient, but that you might see in your career. So as a training tool, it's incredible. It can prepare people for situations they've never seen, and it can save lives. So that's the kind of deepfake use that I would love to see more of.

[00:22:46] Annelise Riles: This has been so interesting. I've learned so much, V.S.

[00:22:50] V.S. Subrahmanian: Wonderful talking to you, and I'm having the time of my life being part of Buffett.

[00:22:54] Annelise Riles: For more information on this episode and on the Northwestern Buffett Institute for Global Affairs, visit us at Buffett.northwestern.edu.