Leslie Kaelbling: Reinforcement Learning, Planning, and Robotics | Lex Fridman Podcast #15 | Summary and Q&A

38.2K views
March 12, 2019
by Lex Fridman Podcast

TL;DR

Leslie Kaelbling discusses her journey into AI, the importance of philosophy in computer science, the challenges of perception and planning, and the future of AI research.


Key Insights

  • 🤖 Leslie Kaelbling is a renowned roboticist and professor at MIT, specializing in reinforcement learning and planning for robot navigation.
  • 🚀 She won the IJCAI Computers and Thought Award for her work in AI and was the editor-in-chief of the prestigious Journal of Machine Learning Research.
  • 🎓 Leslie's background in philosophy and computer science has shaped her unique approach to AI research, integrating logical reasoning with robotic systems.
  • 🔬 She believes that AI researchers should be part-time philosophers, as philosophy can provide valuable insights into the ethical and moral implications of AI.
  • 🧩 Abstraction and decomposition are key to solving complex AI problems, allowing us to reason about high-level goals and reduce the size of the state space.
  • ⚙️ Leslie discusses the challenges of planning under uncertainty and the importance of controlling beliefs rather than just the physical actions of a robot.
  • 🤔 She emphasizes the need for both built-in structure and learning algorithms in AI systems and believes that there is no one-size-fits-all approach.
  • 💡 Leslie sees the potential for AI researchers to make significant contributions in various fields by combining different disciplines and addressing specific problem areas.

Transcript

the following is a conversation with Leslie Kaelbling she's a roboticist and professor at MIT she's recognized for her work in reinforcement learning planning robot navigation and several other topics in AI she won the IJCAI Computers and Thought Award and was the editor-in-chief of the prestigious Journal of Machine Learning Research this conver...

Questions & Answers

Q: What inspired Leslie Kaelbling to study AI and work with robots?

Leslie was inspired to study AI after reading Gödel, Escher, Bach in high school, which introduced her to the ideas of AI and what it takes to create intelligent behavior.

Q: How does Leslie Kaelbling view the relationship between philosophy and AI?

Leslie believes that philosophy has an important role in AI, especially in areas like belief and knowledge, which are closely related to AI concepts like representation and reasoning.

Q: What are the challenges Leslie Kaelbling sees in perception and planning for robots?

Leslie believes that the challenge in perception lies in the representation of the world and understanding what perception should deliver. In planning, she emphasizes the importance of abstraction and hierarchical reasoning to handle complex tasks.

Q: What are Leslie Kaelbling's thoughts on the future of AI research?

Leslie believes that the future lies in finding a balance between built-in structures and learning algorithms. She sees the need for new ideas and innovative approaches to tackle the challenges in building intelligent robots.

Q: Does Leslie Kaelbling have any concerns about the impact of AI on society, such as job displacement?

Leslie acknowledges that there are concerns about job displacement due to AI advancements, but she feels that she lacks the expertise to fully address the sociological and economic aspects of the issue.

Q: What does Leslie Kaelbling consider the most exciting area of research in the short term?

For Leslie, the most exciting area of research is finding the optimal combination of learning and not learning in order to engineer intelligent robots that can effectively navigate and operate in the real world.

Q: As a roboticist, does Leslie Kaelbling have a favorite robot from science fiction?

Leslie states that she values the process of engineering robots more than the end product, and she does not have a particular favorite robot from science fiction.

Summary

In this conversation, Lex Fridman interviews Leslie Kaelbling, a roboticist and professor at MIT, about her work in artificial intelligence, robotics, reinforcement learning, and planning. They discuss her background in philosophy and computer science, the importance of abstraction and belief space in AI, the challenges of perception and planning, and the potential for building robots with human-level intelligence.

Questions & Answers

Q: What got Leslie Kaelbling excited about AI?

Leslie mentions that reading Gödel, Escher, Bach in high school exposed her to the interplay of primitives and their combination, and to the ideas of AI and generating intelligent behavior.

Q: What attracted her to robotics?

Leslie explains that her first job at Stanford's AI lab involved working on a robot, which sparked her interest in robotics. She also mentions her philosophy background and how it relates to her work in computer science and AI.

Q: What were the options for majors related to artificial intelligence at the time?

Leslie explains that there were not many options for majors related to AI at the time, and that philosophy was a common choice among those interested in the field.

Q: Should AI researchers also be philosophers?

Leslie believes that there are important philosophical questions related to AI, particularly in areas like belief and knowledge. While she thinks it's important for AI researchers to consider these questions, she also emphasizes the importance of materialist views and the focus on solving technical problems.

Q: Do you think we can create a robot that is behaviorally indistinguishable from a human?

Leslie believes that it is possible to create a robot that is behaviorally indistinguishable from a human. However, she is less concerned with the philosophical question of whether it is internally indistinguishable or a "zombie" and more focused on the practical challenges of perception, planning, and operating successfully in the world.

Q: What roadblocks were faced in the 80s and 90s in AI and expert systems?

Leslie explains that one of the main roadblocks in the development of AI and expert systems was the challenge of articulating human knowledge effectively into logical statements. She argues that humans have difficulty explaining or defining the rules and processes behind their decision-making, which hindered the success of expert systems.

Q: How do belief states differ from state spaces in decision-making?

Leslie explains that a belief state is a probability distribution over possible states of the world, whereas a plain state space assumes the world's state is fully and deterministically known. Belief states are used in decision-making when information is incomplete or uncertain, allowing the agent to reason about how its actions will change its understanding of the world.
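The distinction can be made concrete with a minimal sketch of a discrete Bayes-filter belief update (an illustrative example, not code from the episode): instead of tracking one true state, the agent tracks a probability over states and revises it after each action and observation. All names here are hypothetical.

```python
def update_belief(belief, action, observation, transition, sensor):
    """belief: dict state -> probability.
    transition[(s, a)]: dict next_state -> probability.
    sensor[(s, o)]: probability of observing o in state s."""
    # Prediction step: push the belief through the action model.
    predicted = {s2: 0.0 for s2 in belief}
    for s, p in belief.items():
        for s2, pt in transition[(s, action)].items():
            predicted[s2] += p * pt
    # Correction step: reweight each state by how well it explains
    # the observation, then renormalize to a proper distribution.
    unnorm = {s: predicted[s] * sensor[(s, observation)] for s in predicted}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}
```

For example, a robot unsure whether a door is open can start at 50/50, receive a noisy "looks open" reading, and end with a belief skewed toward "open" in proportion to how much more likely that reading is when the door really is open.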

Q: What are the challenges in planning under uncertainty?

Leslie explains that planning under uncertainty can be computationally complex and sometimes undecidable. It requires making approximations and using various solution concepts, depending on the problem. She also emphasizes the importance of bounded optimality and the need for more formal solution concepts in order to make meaningful predictions and plans.
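One common family of approximations alluded to here is to avoid solving the full problem exactly and instead pick the action with the best expected outcome under the current belief. A minimal greedy sketch (illustrative only; the function and reward table are hypothetical, not Kaelbling's method):

```python
def greedy_action(belief, actions, expected_reward):
    """belief: dict state -> probability.
    expected_reward[(s, a)]: reward for action a if the true state is s.
    Returns the action maximizing expected reward under the belief."""
    def value(a):
        # Average the reward of action a over the belief distribution.
        return sum(p * expected_reward[(s, a)] for s, p in belief.items())
    return max(actions, key=value)
```

This one-step lookahead is cheap but myopic; it ignores how an action might improve the belief itself, which is exactly the information-gathering behavior that full planning in belief space can capture.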

Q: How does hierarchical planning work and why is it important?

Leslie explains that hierarchical planning involves dividing a long execution or task into segments or abstract levels. By reasoning at a higher level of abstraction, one can plan and reason about dependencies and constraints among these actions without considering every possible detail. It allows for more efficient planning and decision-making, particularly in complex tasks with long horizons.
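The idea can be sketched in a toy refinement scheme (an illustrative simplification, not Kaelbling's actual planning system): plan first over a handful of abstract steps, then expand each step into primitives independently, so no single search spans the full horizon. The plan and refinement table below are invented for the example.

```python
# Hypothetical abstract plan and refinement table for a fetch task.
ABSTRACT_PLAN = ["go_to_kitchen", "pick_up_cup", "return"]

REFINEMENTS = {
    "go_to_kitchen": ["leave_room", "walk_hall", "enter_kitchen"],
    "pick_up_cup": ["locate_cup", "grasp_cup"],
    "return": ["leave_kitchen", "walk_hall", "enter_room"],
}

def refine(abstract_plan, refinements):
    """Expand each abstract step into its primitive actions, in order."""
    primitive_plan = []
    for step in abstract_plan:
        primitive_plan.extend(refinements[step])
    return primitive_plan
```

The payoff is in search cost: committing to three abstract steps and refining each one separately explores far fewer combinations than searching directly over all eight primitive actions at once.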

Q: How does perception compare to planning in terms of difficulty?

Leslie believes that perception is a more challenging problem than planning because of the question of representation. While perception has made significant progress in recent years, understanding what perception should deliver and how it should connect to other aspects of AI is still an ongoing challenge. The question of representation and finding structures or biases that help with perception are key areas of exploration.

Q: What does it take to build a robot with human-level intelligence?

Leslie admits that she doesn't know the answer to this question, as it is a complex and ongoing problem. She doesn't believe that self-awareness or consciousness are necessary for a robot to have human-level intelligence, but rather that observation of system parts and their performance is critical for self-awareness in robots.

Takeaways

Leslie Kaelbling's interview provides insights into her work in AI, robotics, and planning. She emphasizes the importance of abstraction, belief space, and hierarchical planning in AI and highlights the challenges of perception and planning. She also discusses the need for more formal solution concepts and the exploration of different representations and biases in AI. While the creation of a robot with human-level intelligence remains a complex problem, Leslie highlights the importance of observation in building self-awareness in robots.

Summary & Key Takeaways

  • Leslie Kaelbling fell in love with AI after reading Gödel, Escher, Bach in high school and started working with robots in her first job.

  • She studied philosophy at Stanford, which provided a strong foundation for AI and computer science.

  • Leslie emphasizes the importance of abstraction and hierarchical reasoning in robot planning and believes there is still much to learn about perception and representation.
