Preventing an AI-related catastrophe - Problem profile
80000hours.org

Top Highlights

  • AI will have a variety of impacts and has the potential to do a huge amount of good. But we’re particularly concerned about the possibility of extremely bad outcomes, especially an existential catastrophe. Some experts on AI risk think that the odds of this are as low as 0.5%, some think that it’s higher than 50%. We’re open to either being right —...
  • Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement ...
  • Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity
  • As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.
  • Why is it that humans, and not chimpanzees, control the fate of the world?
