Destabilizing Artificial Intelligence
“Artificial Intelligence (AI) technologies promise to be…a source of enormous power for countries that harness them.” Yet as powerful as modern machine learning is, the technology remains profoundly fragile—even the most advanced AI systems can fail unpredictably. The risk of system failures causing significant harm grows as machine learning becomes more widely used, especially in domains where safety and security are critical.
To reap the benefits of AI and mitigate risks of catastrophic failure, the nation needs to invest in research aimed at ensuring the safety and reliability of AI and its resilience against attacks from malicious actors. In addition, as Stephen Hawking put it, “Whereas [AI’s] short-term impact…depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Further Reading
“AI Accidents: An Emerging Threat” (2021), Zachary Arnold and Helen Toner (Center for Security and Emerging Technology)
“Click Here to Kill Everyone” (2017), Bruce Schneier
“Regulating for ‘Normal AI Accidents’” (2018), Matthijs M. Maas
The Alignment Problem (2020), Brian Christian