AI doom, AI boom and the possible destruction of humanity
venturebeat.com
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” This statement, released this week by the Center for AI Safety (CAIS), reflects an overarching — and some might say overreaching — worry about doomsday scenarios due to a runaway superintelligence.

Top Highlights

  • “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
  • This statement, released this week by the Center for AI Safety (CAIS), reflects an overarching — and some might say overreaching — worry about doomsday scenarios due to a runaway superintelligence.
  • Existential threats may manifest over the next decade or two unless AI technology is strictly regulated on a global scale.
  • The statement has been signed by a who’s who of academic experts and technology luminaries ranging from Geoffrey Hinton (formerly at Google and the long-time proponent of deep learning) to Stuart Russell (a professor of computer science at Berkeley) and Lex Fridman (a research scientist and podcast host from MIT).
  • In addition to extinction, the Center for AI Safety warns of other significant concerns ranging from enfeeblement of human thinking to threats from AI-generated misinformation undermining societal decision-making.
