AI Scientists Gather to Plot Doomsday Scenarios (and Solutions)

Researchers, cyber-security experts and policy wonks ask themselves: What could possibly go wrong?

Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved by an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it.

Their workshop took place last weekend at Arizona State University, with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday war game that sorted some 40 scientists, cyber-security experts and policy wonks into attackers -- the red team -- and defenders -- the blue team -- who played out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.