Maybe We Will Finally Learn More About How A.I. Works
www.nytimes.com

Top Highlights

  • Transparency is particularly important now, as models grow more powerful and millions of people incorporate A.I. tools into their daily lives. Knowing more about how these systems work would give regulators, researchers and users a better understanding of what they’re dealing with, and allow them to ask better questions of the companies behind the models.
  • These firms generally don’t release information about what data was used to train their models, or what hardware they use to run them.
  • There are no user manuals for A.I. systems, no lists of everything these systems are capable of doing, and no details about what kinds of testing have gone into them.
  • And while some A.I. models have been made open-source — meaning their code is given away for free — the public still doesn’t know much about the process of creating them, or what happens after they’re released.
  • I generally hear one of three common responses from A.I. executives when I ask them why they don’t share more information about their models publicly.
