Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61 | Summary and Q&A

87.7K views
December 28, 2019
by Lex Fridman Podcast

TL;DR

Melanie Mitchell discusses the limitations and challenges of artificial intelligence and machine learning, highlighting the importance of understanding human cognition and the role of concepts and analogy making in intelligence.


Key Insights

  • 🧠 AI Terminology: The term "artificial intelligence" has different meanings to different people, and the definition of "intelligence" itself is not clearly defined.
  • 😕 Misunderstood Term: Many experts in the field are not keen on the term "artificial intelligence" and see it as vague and problematic. Other terms like "cognitive systems" or "intelligent systems" are also being considered or used.
  • 💡 Strong vs. Weak AI: The distinction between strong AI (machines thinking) and weak AI (machines simulating thought) has been a topic of debate throughout the history of AI. The line between the two is still a subject of ongoing exploration.
  • 🌐 General Intelligence: The quest for artificial general intelligence, which is human-level or beyond, continues to be a topic of interest and research within the AI community. The goal is to strive for higher levels of intelligence in machines.
  • 🤖 Limits of Current Approaches: While current AI methods, such as deep learning, have shown impressive capabilities in narrow tasks, there are concerns about their limitations in truly understanding and reasoning about the world. The lack of innate knowledge and the inability to create mental models hinder their potential.
  • 🔎 Importance of Analogy Making: Analogy making is considered a fundamental aspect of human thinking and cognition. It plays a key role in concept formation, perception, and generalization. Understanding and developing the ability to create and use analogies is seen as a critical open problem in AI.
  • 💡 Future AI Approaches: Cognitive architectures and other approaches that focus on more active and generative models, as well as the integration of symbolic and connectionist methods, hold promise for advancing our understanding of intelligence and perception.
  • 💭 Challenges in Autonomous Driving: Autonomous driving is a complex task due to the open-ended nature of the environment and the need to handle various edge cases and unexpected situations. Perception systems, obstacle detection, and decision-making policies are among the challenging areas in autonomous vehicles.

Transcript

The following is a conversation with Melanie Mitchell. She's a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architect...

Questions & Answers

Q: Why does Melanie Mitchell believe that the term "artificial intelligence" is problematic?

Mitchell believes that the term "artificial intelligence" is problematic because it can mean different things to different people and because the term "intelligence" itself is not clearly defined.

Q: What are the limitations of current machine learning approaches, according to Melanie Mitchell?

Mitchell believes that current machine learning approaches, such as deep learning, have limitations in terms of their ability to understand and interpret the world like humans do. She argues that these approaches lack the ability to form and fluidly use concepts and make analogies.

Q: What does Melanie Mitchell believe is the most important open problem in AI?

According to Mitchell, the most important open problem in AI is understanding how to form and fluidly use concepts. She believes that the ability to form and apply concepts is fundamental to human thinking and intelligence.

Q: What is the role of analogy making in intelligence, according to Melanie Mitchell?

Mitchell believes that analogy making is a core aspect of human intelligence. Analogies help us recognize similarities between different situations and apply our understanding from one situation to another. Analogical reasoning and concept formation are closely intertwined in human cognition.

Q: How does Melanie Mitchell propose expanding the capabilities of AI systems?

Mitchell suggests incorporating cognitive models and dynamic perception in AI systems to enhance their ability to understand and interpret the world. She believes that deeper understanding of human cognition and the inclusion of concepts and analogy making are key to advancing AI capabilities.

Q: What is Melanie Mitchell's stance on the future potential of deep learning in AI?

Mitchell is skeptical about the potential of deep learning alone to achieve human-level intelligence. She believes that while deep learning has made significant progress, it falls short in terms of understanding and generalization beyond the training data. She advocates for a hybrid approach that combines deep learning with cognitive models and dynamic perception.

Q: What are the challenges of autonomous driving, according to Melanie Mitchell?

Mitchell highlights the difficulties in autonomous driving, particularly in perception and action selection. She mentions that current systems struggle with correctly identifying and interpreting obstacles, resulting in overly cautious or excessive braking. The open-ended nature of real-world driving scenarios and the need to handle edge cases further compound the challenges.

Summary

In this podcast episode, Lex Fridman interviews Melanie Mitchell, a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. They discuss the term "artificial intelligence," the concept of intelligence, the line between weak AI and strong AI, the future of AI, and the importance of understanding our own minds in creating intelligence. They also delve into the nature of concepts and the role of analogy-making in cognition.

Questions & Answers

Q: What is Melanie Mitchell's view on the term "artificial intelligence"?

Melanie Mitchell is not crazy about the term "artificial intelligence" because it has multiple meanings and lacks a clear definition of intelligence itself. However, she was attracted to the field due to her interest in phenomena of intelligence.

Q: What alternative term has been proposed for artificial intelligence?

Some people have suggested using the term "cognitive systems" instead of artificial intelligence to capture the higher-level aspects of intelligence. However, Melanie doesn't believe that cognition and perception should be separated as they are intimately connected.

Q: Is there a distinction between weak AI and strong AI in the field?

John Searle proposed a distinction between weak AI and strong AI, with the latter being the view that a machine is actually thinking and not just simulating thinking. Over time, as machines have achieved specific tasks previously thought to require human-level intelligence, the understanding of intelligence has been revised.

Q: Are we closer to having a better understanding of the line between weak AI and strong AI?

Yes, we are gradually gaining a better idea of what that line is. As machines have demonstrated capabilities once believed to require general human-level intelligence, the understanding of intelligence itself has evolved. However, there is still ongoing debate on the boundaries of intelligence.

Q: Will we eventually reach a point where we create something that is considered intelligent?

Melanie believes that, in principle, we could create machines that are considered intelligent. While it may be challenging to know for sure, our understanding of intelligence may refine over time until we can definitively determine what it means. She also suggests that the machines we create may be different from the ones we have now, leading to a deeper understanding of our own machine-like qualities.

Q: Can we create intelligence without fully understanding our own minds?

Melanie believes that, at some significant level, we need to understand our own minds in order to create intelligence. However, brute force approaches based on big data and large networks have yielded surprising progress, even without a complete understanding of our own intelligence.

Q: Are humans okay with something that is more intelligent than us?

Melanie points out that it is difficult to define intelligence as "smarter than us" because smarter is relative and task-specific. For tasks where computers outperform humans, such as multiplication or route planning, we are mostly happy with the machines' superior abilities. The fear arises when machines can perform tasks that humans consider highly human, such as creating beautiful music or art.

Q: Throughout history, why have humans dreamed of creating artificial life and artificial intelligence?

Melanie believes our drive to create artificial life and intelligence stems from a desire to understand ourselves better and to have machines help us with various tasks. This fascination with creation and intelligence seems to be deeply embedded in human culture and mythology, distinguishing us from other species.

Q: What drives Melanie's interest in artificial intelligence?

Melanie is motivated by her curiosity about her own thought processes and a desire to understand human intelligence. She also finds the broader concept of intelligence across different systems, such as biological and societal processes, to be fascinating. Exploring these questions through computer simulations allows her to approach the concept of intelligence from a unique perspective.

Q: What is the Copycat program?

Copycat is a program developed over 30 years ago by Melanie and her colleagues, inspired by the ideas of Douglas Hofstadter. It makes analogies in an idealized domain of letter strings, aiming to simulate the dynamic process of perception and the role of analogy-making in human cognition.
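To give a feel for the letter-string domain described here, the sketch below solves a classic Copycat-style puzzle: "if abc changes to abd, what does ijk change to?" This is not Copycat itself (the real program uses a stochastic architecture of codelets and a network of concepts, not a fixed rule); it is a deliberately naive, hypothetical illustration of the kind of problem posed in that domain.

```python
# Toy illustration of the letter-string analogy domain (NOT the Copycat
# algorithm): infer a simple "which letter changed, and by how much"
# rule from a source pair, then apply it to a target string.

def infer_rule(source, transformed):
    """Return (position-from-end, alphabet shift) for the single changed letter."""
    for i, (a, b) in enumerate(zip(source, transformed)):
        if a != b:
            return i - len(source), ord(b) - ord(a)
    return None  # no change found

def apply_rule(target, rule):
    """Shift the letter at the rule's position by the rule's amount."""
    pos, shift = rule
    chars = list(target)
    chars[pos] = chr(ord(chars[pos]) + shift)
    return "".join(chars)

rule = infer_rule("abc", "abd")   # last letter incremented by one
print(apply_rule("ijk", rule))    # prints "ijl"
```

The interesting cases, and the reason Copycat is non-trivial, are the ones this naive rule gets wrong: asked what "xyz" becomes, a human might answer "xya" or "wyz", judgments that require fluid concepts rather than a fixed positional rule.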

Q: What is the role of concepts and analogies in human cognition?

Concepts are fundamental units of thought and underlie our ability to generalize and make sense of the world. Analogies, on the other hand, allow us to recognize similarities between different situations or concepts, even when they are not identical. The act of making analogies is deeply connected to our understanding of concepts and is pervasive in our thinking, perception, and language.

Q: How can we engineer a process like analogy-making in artificial systems?

Engineering a process like analogy-making in artificial systems requires a better understanding of how humans do it. Melanie believes that internal models, or mental models, play a crucial role in analogy-making. These models allow us to mentally simulate situations and make predictions, facilitating analogical reasoning.

Q: Is it possible to convert vast amounts of common-sense knowledge, like that found in Wikipedia, into a format that can be used in analogy-making?

While attempts have been made to convert common-sense knowledge into logical representations or knowledge bases, Melanie argues that these approaches are limited. Much of our common-sense knowledge, especially in fields like intuitive physics or psychology, is not explicitly represented and is invisible to us. The challenge lies in finding the right representation and capturing the vast amount of implicit knowledge that underlies common sense.

Q: What breakthroughs are needed in AI to achieve a better understanding of concepts and analogy-making?

Melanie believes that a breakthrough is needed in both hardware and software. While Turing computation may be sufficient, there is a need for faster and more powerful hardware. Additionally, the right algorithms and architectures are necessary to capture the dynamic and interconnected nature of mental models. The field is still in an early stage, and much work needs to be done to unravel the mysteries of concept formation and fluid analogy usage.

Takeaways

The term "artificial intelligence" has its limitations and lacks a clear definition of intelligence. The line between weak AI and strong AI has evolved as machines have demonstrated capabilities once believed to require human-level intelligence. Understanding our own minds and the process of analogy-making is crucial in creating intelligence. Concepts and analogies play a fundamental role in cognition, allowing us to generalize and make sense of the world. Engineering artificial systems to mimic analogy-making requires a better understanding of how humans do it and the development of internal models. The challenge lies in capturing the implicit, common-sense knowledge that is invisible to us. Breakthroughs are needed in both hardware and software to achieve a deeper understanding of concepts and better analogy-making abilities.

Summary & Key Takeaways

  • Melanie Mitchell highlights the problems with the term "artificial intelligence" and the lack of a clear definition for intelligence itself.

  • She emphasizes the need to understand human cognition and the role of concepts and analogy making in intelligence.

  • Mitchell discusses the limitations of current machine learning approaches, such as deep learning, and the importance of incorporating cognitive models and dynamic perception in AI systems.
