

Harnessing Artificial Intelligence for Global Security: Inside the Northwestern Security & AI Lab


V.S. Subrahmanian, Buffett Faculty Fellow and head of the Northwestern Security & AI Lab, gave a keynote at the workshop on “The Irresponsibility of a Narrow Debate on Military AI” at the first global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM 2023), organized by the Government of the Netherlands. Credit: REAIM 2023

Artificial intelligence (AI) has generated new threats to national and global security, but a lab jointly housed in the Buffett Institute and McCormick School of Engineering is unleashing AI’s potential to combat them, advance justice-driven social movements and foster greater global peace.

As artificial intelligence (AI) rapidly transforms the global security landscape, Northwestern University’s Buffett Institute for Global Affairs and McCormick School of Engineering are supporting a new lab dedicated to harnessing AI to foster greater peace and security. The Northwestern Security and Artificial Intelligence Lab (NSAIL) has quickly emerged as a leader in the development of AI-based systems to address pressing national and global security challenges, ranging from predicting potential terrorist activity to protecting against intellectual property theft.

NSAIL is led by V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science at the McCormick School and Faculty Fellow at the Buffett Institute, who has worked at the intersection of artificial intelligence and national security for more than 30 years. As a fellow, he has advanced these projects on a global scale while examining how AI techniques can be used for good and to mitigate harm before it occurs.

“AI is going to dramatically reshape national security. We know that our adversaries are going to move ahead, using AI and advanced weapons systems and information operations,” said Subrahmanian. “We need laboratories that test these tools out before our adversaries ever think about them and prevent bad outcomes from happening.”

Since its launch in October 2022, NSAIL has conducted fundamental research on AI technology relevant to cybersecurity and international security. NSAIL is currently working on over 20 distinct research projects related to AI, examining issues ranging from how to protect cities from drone attacks by terrorist organizations to detecting deception in videos and interrogating the implications of deepfakes for international conflicts.


NSAIL researchers have been developing a simulation platform called the Drone Urban Cyber-defense Testbed (DUCK) to study the problem of drone attacks on urban areas and understand how best to combat them. The DUCK platform was developed jointly by researchers at NSAIL and Dartmouth College with several international partners.

At NSAIL's annual Conference on AI & National Security on October 17, the lab will release the Northwestern Terror Early Warning System (NTEWS), a machine-learning platform that models terrorist behavior and generates forecasts about future attacks. By analyzing the activity of specific terrorist groups, researchers hope to predict attacks before they are carried out, allowing officials to anticipate when attacks are likely to happen and mitigate their impact.

“It's important to study your opponent. You cannot mount a good defense against any kind of attack—regardless of whether it's a terrorist attack, war or cyber-attack—unless you understand your adversary,” he said.

The lab doesn’t work with classified data; instead, it uses open-source data out of a commitment to sharing knowledge with the world. By building a dataset on Boko Haram’s history, with monthly information drawn from news outlets and other online sources about the group’s attacks and the circumstances surrounding them over several years, researchers now at NSAIL developed the first-ever predictive model of Boko Haram, a project that laid the foundation for NTEWS. NSAIL researchers have since made several important policy recommendations regarding actions that can be taken to substantially reduce the number of Boko Haram attacks.
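The article does not describe the internals of the Boko Haram model or NTEWS. Purely as an illustration of the general approach it sketches, a month-by-month event dataset feeding a standard classifier might look like the toy example below; every feature name, number and model choice here is hypothetical, not NSAIL's actual data or method.

    # Illustrative only: a toy monthly event dataset and classifier, loosely in the
    # spirit of the open-source forecasting work described above.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # One row per month: hypothetical counts/indicators gathered from open sources.
    data = pd.DataFrame({
        "armed_clashes":        [3, 5, 1, 0, 7, 2, 4, 6, 0, 1, 8, 3],
        "government_offensive": [1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0],
        "public_statements":    [2, 4, 1, 0, 5, 1, 3, 4, 0, 1, 6, 2],
        # Target: was an attack recorded in the *following* month?
        "attack_next_month":    [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1],
    })

    X = data.drop(columns="attack_next_month")
    y = data["attack_next_month"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y
    )

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("Estimated probability of an attack next month:", model.predict_proba(X_test)[:, 1])

The point of the sketch is the shape of the data, one row per month combining event counts with contextual indicators, rather than any particular learning algorithm.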

NSAIL researchers have also been at the leading edge of developing systems to generate realistic deepfake videos that can potentially cause dissent and distrust within terror groups. Deepfakes, or media content generated by machine learning algorithms, have become rapidly more sophisticated in recent years, fueling disinformation campaigns and digital impersonation.


NSAIL researchers discovered that, when there was internal dissension within the terror group Lashkar-e-Taiba (LeT), the group carried out almost no terror attacks. To support counterterrorism strategies informed by this finding, NSAIL developed the Terrorism Reduction with Artificial Intelligence Deepfakes (TREAD) system to generate deepfake videos of LeT leaders that could sow dissension.


One of their newest projects, the Global Online Deepfake Detection System (GODDS), allows users to upload images, audio or videos they believe may be deepfakes or otherwise algorithmically generated.

“The goal is to allow people like journalists and election officials to upload a file that they are skeptical about to understand if it’s a deepfake,” he said. “What GODDS lets us do is run it through a number of algorithms to try and figure out whether it’s real or fake using a combination of human ingenuity and technical tools.” Since its release in summer 2024, GODDS has helped journalists at major outlets like The New York Times and The Washington Post determine the validity of media they sought to investigate.
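GODDS is described here only at a high level: multiple algorithms plus human judgment. As a purely illustrative sketch of that idea (not NSAIL's actual pipeline), the snippet below combines scores from several hypothetical detectors into a single verdict and flags borderline cases for an analyst; all detector names, weights and thresholds are invented for the example.

    # Illustrative only: combining several detectors' scores into one verdict
    # that still leaves room for human review.
    from statistics import mean

    def combine_detector_scores(scores: dict, threshold: float = 0.5) -> dict:
        """Each score is a detector's estimated probability that the file is fake."""
        overall = mean(scores.values())
        return {
            "scores": scores,
            "overall_fake_probability": round(overall, 3),
            "verdict": "likely fake" if overall >= threshold else "likely real",
            # Borderline cases go to a human analyst rather than being auto-decided.
            "needs_human_review": abs(overall - threshold) < 0.15,
        }

    print(combine_detector_scores({
        "face_artifact_detector": 0.82,
        "audio_sync_detector": 0.64,
        "frequency_analysis": 0.41,
    }))

The design point the sketch captures is the one in the quote: automated scores narrow the question, and people make the final call.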

With Georgetown University’s Dan Byman and the Brookings Institution’s Chris Meserole, Subrahmanian also outlined policy recommendations for liberal democracies considering using deepfakes against their adversaries. Their report argues that the United States and its allies must develop a code of conduct for deepfake use called a Deepfakes Equities Process.

"Any government use of deepfakes must be done with extreme caution," he said. "We're not saying they should never be used. We're saying that they should be done with extreme caution and a rigorous process should be followed. We've tried to articulate what part of that process should be.”



Subrahmanian also co-authored a 2024 report published by the Center for Strategic and International Studies (CSIS) in Washington, D.C., outlining hypothetical scenarios in which democratic governments like the U.S. might be tempted to use deepfakes. With co-authors Daniel W. Linna, Jr., Senior Lecturer and Director of Law and Technology Initiatives at Northwestern University’s Pritzker School of Law, and Daniel Byman, Professor at Georgetown University and Senior Fellow at CSIS, Subrahmanian laid out the questions the U.S. government needs to consider when deciding whether to use deepfakes in an overseas context. “This is an important question because the use of deepfakes has the potential to significantly degrade the credibility of the U.S.,” Subrahmanian said.

If AI and deepfakes make every piece of media content suspect, Western democracies face the erosion of the trust needed for democracy to function. By developing processes that weigh the benefits and risks of deepfake technology, however, governments can deliberate on when, if ever, to deploy it and incorporate the viewpoints of stakeholders across government agencies.

Subrahmanian also co-leads the AI and Social Movements Global Working Group at the Buffett Institute, which has brought together experts from engineering, legal and social science backgrounds to understand how to ethically leverage AI. The group is piloting a Social Movement Analysis & Reasoning Tool (SMART) that journalists, activists and policymakers can use to track social movements across the world.

“Within movements for gender equality, there may be specific themes through which activists are tackling the issue, such as equal opportunity from a work perspective, sexual harassment or female infanticide. So, SMART helps answer, who's talking about them? In which countries are people talking about them? What are these movements trying to achieve?” he said.
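The article does not say how SMART represents this information. As a hypothetical sketch of the kind of tally it might surface, one could tag posts or articles with a theme and a country and count who is talking about what, where; the themes, countries and data below are invented for illustration.

    # Illustrative only: counting hypothetical movement themes by country.
    from collections import Counter, defaultdict

    posts = [
        {"country": "India",  "themes": ["equal_pay", "workplace_harassment"]},
        {"country": "Brazil", "themes": ["equal_pay"]},
        {"country": "India",  "themes": ["female_infanticide"]},
        {"country": "Kenya",  "themes": ["workplace_harassment", "equal_pay"]},
    ]

    theme_by_country = defaultdict(Counter)
    for post in posts:
        theme_by_country[post["country"]].update(post["themes"])

    for country, counts in theme_by_country.items():
        print(country, dict(counts))

Even this toy version answers the questions in the quote above: which themes appear, in which countries, and how often.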

As AI and global security concerns grow increasingly intertwined, NSAIL will continue exploring emerging challenges while developing the tools to use AI responsibly. “We’re in a field where it’s very hard to figure out what countries are doing—covert use of deepfakes, for example, is not something people want to talk about. What we’re hoping is that we are educating different countries on the right questions to ask,” he said.



Learn more about NSAIL's annual Conference on AI & National Security happening on Thursday, October 17, 2024.