Thomas Higginbotham: Trusting and Distrusting Algorithms | Summary and Q&A

TL;DR
Trusting machine learning algorithms blindly can lead to unintended consequences and biases.
Key Insights
- 🔇 Statistics can be manipulated to make any argument, as demonstrated by the speaker's childhood experience with their Penny Algorithm.
- 🙈 Machine learning algorithms can perpetuate bias, as seen in the case of Amazon's hiring algorithm and facial recognition software.
- 🎁 Trusting algorithms blindly can have unintended consequences, and it's crucial to understand the assumptions and potential biases present.
- 🧑 Each person must take responsibility for decisions made with algorithms, considering the three questions in Penny's Trustworthy Framework.
- ❓ Empathy and considering individual circumstances may challenge the recommendations of algorithms.
- 🧑💼 The transformative potential of machine learning technology comes with risks and trade-offs.
- 🥇 We should place less blind trust in algorithms; healthy skepticism leads to better decisions.
Transcript
[MUSIC] When I was nine,
[LAUGH] >> My favorite pastime wasn't recess or video games or even sports. It was statistics. >> [LAUGH] I taught myself, alone in my bedroom, sorting basketball cards from best to worst, using the numbers on the back of the cards. It was back then that I learned a very important lesson of Stats 101: that I coul...
Questions & Answers
Q: How did the speaker prove that their favorite basketball player was the best in the world using statistics?
The speaker used multiple metrics like rebounds per game and assists per game to narrow down the pool of players. They also considered factors like games played and eliminated anyone with fewer than a certain number of games. Their model, called The Penny Algorithm, ultimately concluded that their favorite player was the best.
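The filter-then-rank procedure described above can be sketched in a few lines. This is a hypothetical reconstruction: the player stats, the 50-game threshold, and the metric weighting are invented for illustration, not taken from the talk.

```python
# A minimal sketch of the kind of model the speaker describes
# ("The Penny Algorithm"): eliminate players below a games-played
# threshold, then rank the rest by hand-picked metrics.
# All data and thresholds here are hypothetical.

def penny_algorithm(players, min_games=50):
    """Keep players with enough games, then rank by a chosen metric mix."""
    eligible = [p for p in players if p["games"] >= min_games]
    # The score favors whichever metrics the modeler cares about --
    # which is exactly how a model can be tuned toward a desired answer.
    return sorted(
        eligible,
        key=lambda p: p["rebounds_pg"] + p["assists_pg"],
        reverse=True,
    )

players = [
    {"name": "Player A", "games": 80, "rebounds_pg": 5.0, "assists_pg": 9.0},
    {"name": "Player B", "games": 82, "rebounds_pg": 11.0, "assists_pg": 2.0},
    {"name": "Player C", "games": 30, "rebounds_pg": 12.0, "assists_pg": 8.0},
]
ranked = penny_algorithm(players)
print(ranked[0]["name"])  # Player A wins; Player C never even qualifies
```

Note how the choice of `min_games` and the metric mix silently decides the winner before any "objective" ranking happens.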
Q: Why can machine learning algorithms be vulnerable to human bias?
Machine learning algorithms learn from existing data, and if that data contains biases, the algorithms might perpetuate those biases. For example, Amazon's hiring algorithm was biased against female applicants because it learned from a dataset where the majority of successful employees were male.
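The mechanism in that answer can be shown with a toy model. The numbers below are invented for illustration; the point is only that a learner trained on a skewed history reproduces the skew, not merit.

```python
# Hypothetical toy: a frequency-based "model" trained on past hiring
# records in which most successful hires were men. The counts are
# invented for illustration.
from collections import Counter

history = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 20 +
    [("female", "hired")] * 10 + [("female", "rejected")] * 40
)

# "Training" here is just counting outcomes per group.
counts = Counter(history)

def p_hired(gender):
    """Estimated P(hired | gender) from the historical records."""
    hired = counts[(gender, "hired")]
    total = hired + counts[(gender, "rejected")]
    return hired / total

# The model faithfully reproduces the historical imbalance.
print(round(p_hired("male"), 2))    # 0.8
print(round(p_hired("female"), 2))  # 0.2
```

Nothing in the code is malicious; the bias enters entirely through the training data, which is the same failure mode reported for Amazon's system.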
Q: How did Joy Buolamwini find bias in facial recognition software?
As a Black computer scientist, Joy noticed that facial recognition software didn't work well on her darker skin or features. She discovered that the algorithm was trained on a dataset lacking diversity, leading to lower accuracy for faces like her own.
Q: What are the three questions included in Penny's Trustworthy Framework?
The three questions are: what is the model true of, who or what is missing, and what are the consequences? These questions help evaluate the assumptions, biases, and potential impacts of a given algorithm.
Summary & Key Takeaways
- The speaker reflects on their childhood experience of creating a statistical model to prove that their favorite basketball player was the best, highlighting how statistics can be manipulated.
- The speaker discusses examples of algorithms with biases, such as Amazon's hiring algorithm favoring male applicants and facial recognition technology failing on people with darker skin tones.
- The speaker emphasizes the need to have a healthy distrust for algorithms, asking three important questions: what is the model true of, who or what is missing, and what are the consequences.
- The speaker shares a personal experience of deciding not to trust a predictive model for a struggling student in a flight training program, demonstrating the importance of empathy and individual decision-making.
Explore More Summaries from Stanford Graduate School of Business 📚