In a simulation from Georgia Tech and GTRI, participants navigated escalating global crises — including AI-enabled biothreats and cyberattacks — to assess how different actors might respond to emerging AI risks.
Last Friday, researchers from Georgia Tech and the Georgia Tech Research Institute (GTRI) piloted an in-depth crisis simulation exploring the national security implications of advanced artificial intelligence. Designed by the AI Safety Initiative in collaboration with Model UN at Georgia Tech, the immersive half-day workshop challenged faculty and student participants to respond to a series of escalating threats — including a potential bioattack, cyberattacks, and rising global tensions.
Participants represented major governments, corporations, and organizations — including OpenAI and Google DeepMind — and were inundated with simulated press releases and intelligence reports describing the rapid evolution of AI technologies. Their task: to debate and coordinate policy responses in real time.
In one scenario, a preliminary WHO report revealed that AI-enabled pathogens were spreading across Central Asia. The player representing China quickly moved to close its borders and reimpose pandemic-era lockdowns, a move that caused global confusion and economic instability.
“There’s just no way I could have predicted that response,” said Parv Mahajan, the director of the simulation. “But that kind of extreme response tells us so much about how unprepared countries might react.”
Divjot Kaur, who constructed the simulated documents participants received throughout the workshop, agreed. “This valuable information can shed light on the research and work we must put in,” Kaur said.
Some players took advantage of the chaos. As tensions with Taiwan escalated, the representative from OpenAI pushed hard for lucrative military contracts, coining the memorable line, “A free Taiwan is a fee Taiwan.” The simulation concluded with a discussion about how profit motives might distort information access and accelerate a potential AI arms race.
What stood out most to participants was the range of ideas that emerged during the crisis. “It was great to see the perspectives diverse disciplines had on the future of AI,” said Amaar Alidina, an undergraduate researcher who participated in the simulation. “Debate provided meaningful insight on topics we wouldn’t even have thought of,” added Kaur.
Looking ahead, the AI Safety Initiative hopes to expand the simulation through collaborations with labs and departments across campus.
“The future of our work will depend, in some way or another, on AI,” said Mahajan. “And the best way to understand the future is to try and experience it.”