Jürgen Schmidhuber: Gödel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11 | Summary and Q&A

121.0K views
December 23, 2018
by Lex Fridman Podcast

TL;DR

Jürgen Schmidhuber discusses his vision for the future of artificial general intelligence, including the importance of creativity, curiosity, and self-improvement in machine learning, and the potential for AGI to surpass human intelligence.


Key Insights

  • 🤔 The co-creator of long short-term memory (LSTM) networks, Jürgen Schmidhuber, has proposed many interesting ideas in the field of AI, including meta-learning and recursive self-improvement.
  • 💡 He believes that building a machine that can learn to solve complex and general problems, as well as improve its own learning algorithm, is the key to solving all solvable problems.
  • 🤖 In the 80s, Schmidhuber pioneered the idea of meta-learning, which allows the learning algorithm to inspect and modify itself to come up with a better learning algorithm.
  • 🤓 While meta-learning has gained popularity, Schmidhuber notes that much of what is called meta-learning today is actually transfer learning, a distinct and narrower idea.
  • 🎯 Transfer learning involves reusing a pre-trained model to solve a new task, while true meta-learning involves modifying the learning algorithm itself to improve its performance.
  • 💡 Schmidhuber believes that simplicity is beautiful and that the best solutions to problems are often simple and can be represented in just a few lines of code.
  • 🌌 He also discusses the possibility of self-improving AI systems losing interest in interacting with humans, preferring to interact among themselves, much as ants interact primarily within their own colonies.
  • 🌍 Schmidhuber believes that AI will have a major impact in the future, particularly in the creation of machines that can learn like humans, such as robots that can learn to assemble smartphones without explicit programming.
  • ⚙️ However, he emphasizes that the impact of AI on job loss and existential threats is difficult to predict, as new jobs are often created in ways we cannot yet fathom.
  • 🌐 Schmidhuber is optimistic about the future of AI and believes that the exploration of the universe by intelligent machines is inevitable, as they have the potential to fill the universe with intelligence over time.

Transcript

The following is a conversation with Jürgen Schmidhuber. He's the co-director of the IDSIA lab and a co-creator of long short-term memory networks. LSTMs are used in billions of devices today for speech recognition, translation, and much more. Over 30 years he has proposed a lot of interesting, out-of-the-box ideas on meta-learning, adversarial networks, com…

Questions & Answers

Q: How does Schmidhuber define meta-learning, and why does he consider it a crucial element in the development of AGI?

Meta-learning, according to Schmidhuber, refers to the ability of a learning system to inspect and modify its own learning algorithm in order to improve it. He considers meta-learning essential for AGI because it allows for self-improvement in both problem-solving capabilities and the learning process itself.

Q: What is the difference between narrow meta-learning and the broader concept of meta-learning proposed by Schmidhuber?

Schmidhuber explains that narrow meta-learning, as the term is commonly used today, focuses on transfer learning and reusing pre-trained networks for specific tasks. In contrast, his broader concept of meta-learning makes the learning algorithm itself open to introspection and modification, enabling the system to improve not only on specific tasks but also in the way it learns and improves itself.

Q: How does Schmidhuber view the importance of simplicity in intelligence and the role of compression in problem-solving?

Schmidhuber believes that simplicity is crucial in intelligence because it leads to more elegant and efficient solutions. He sees compression as a side effect of prediction and problem-solving, where the system learns to identify and store the important aspects of the data while ignoring unimportant noise. Compression allows for more efficient processing and understanding of the data, leading to better problem-solving.
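
The compression-as-prediction idea can be made concrete with a small sketch (an illustration of the general principle, not anything from the conversation): if a predictor captures the regularities in a sequence, then storing only its prediction errors leaves data that a standard compressor can squeeze much harder. The signal and the "previous value" predictor below are arbitrary choices for demonstration.

```python
# Toy illustration: a good predictor turns data into small residuals,
# and small residuals compress well.
import zlib

# A signal with structure: a slow ramp plus a repeating pattern.
signal = [(i % 32) + i // 8 for i in range(4096)]

# Trivial predictor: "the next value equals the previous one."
# Encoding the residuals (prediction errors) strips the predictable part.
residuals = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]

def compressed_size(values):
    # Serialize to bytes and compress; smaller output = more redundancy found.
    return len(zlib.compress(",".join(map(str, values)).encode()))

print("raw      :", compressed_size(signal), "bytes")
print("residuals:", compressed_size(residuals), "bytes")  # far smaller
```

The better the predictor, the smaller the residuals, and the shorter the resulting description of the data: that is the sense in which compression falls out of prediction.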

Summary

This conversation is with Jürgen Schmidhuber, co-director of the IDSIA lab and co-creator of Long Short-Term Memory (LSTM) networks. Schmidhuber has proposed many interesting ideas in AI over the past 30 years, including meta-learning, adversarial networks, computer vision, and a formal theory of creativity. In this conversation, Schmidhuber discusses his dreams of AI systems that self-improve recursively, the mechanism behind a general problem solver, the difference between meta-learning and transfer learning, the potential of self-referential programs, the role of simplicity in intelligence, the concept of compression progress in science, and the relationship between intelligence, creativity, and consciousness.

Questions & Answers

Q: When did you first dream of AI systems that self-improve recursively?

Schmidhuber mentions that he had this dream when he was a teenager, as he thought it would be exciting to solve the riddles of the universe. He realized that building a machine that could learn to become a better physicist than he could hope to be would multiply his creativity towards understanding the universe.

Q: What does a general problem solver look like?

In the 80s, Schmidhuber proposed a machine that not only learns to solve individual problems but also learns to improve its own learning algorithm. This means the machine would be able to inspect and modify its learning algorithm to come up with a better one. Schmidhuber calls this meta-learning and recursive self-improvement.
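
As a rough, hypothetical sketch of the two nested levels involved (ordinary learning, and learning to learn), consider an outer loop that mutates a parameter of the inner learner's update rule and keeps any mutation that makes the inner learner better. Tuning a single learning rate this way is far weaker than the recursive self-improvement Schmidhuber describes, but it shows the shape of the idea; all names and numbers here are invented for the example.

```python
# A toy caricature of meta-learning: an outer loop modifies the inner
# learner's update rule based on how well that rule has been learning.
import random

def inner_learn(lr, steps=50):
    """Gradient descent on f(w) = (w - 3)^2 with learning rate lr.
    This is the inner, ordinary learning; returns the final loss."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)          # gradient of (w - 3)^2 is 2(w - 3)
    return (w - 3) ** 2

# Outer (meta) loop: mutate the learning rule's parameter and keep
# the mutation whenever it improves the inner learner's result.
random.seed(0)
lr, best = 0.001, inner_learn(0.001)
for _ in range(200):
    candidate = max(1e-6, lr * random.uniform(0.5, 2.0))
    loss = inner_learn(candidate)
    if loss < best:
        lr, best = candidate, loss
print(f"meta-learned lr = {lr:.4f}, final inner loss = {best:.2e}")
```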

Q: What is the difference between meta-learning and transfer learning?

Schmidhuber explains that what is commonly called meta-learning in deep learning today is really transfer learning: a network that has already learned from many datasets is retrained so that it can quickly learn a new one. True meta-learning, by contrast, allows the system to modify the learning algorithm itself and to learn from those modifications.
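
A minimal sketch of the transfer-learning pattern described here, assuming PyTorch and torchvision are available (the class count and dummy batch are placeholders): a pretrained network is frozen and only a small new head is trained, while the learning algorithm itself, plain SGD, is never inspected or modified.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Reuse a network pretrained on ImageNet and freeze everything it learned.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier head for a hypothetical 10-class task.
num_new_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Only the new head is trained. Note that the update rule (SGD) stays
# fixed; true meta-learning would let the system rewrite this rule too.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3)

x = torch.randn(4, 3, 224, 224)                   # dummy image batch
labels = torch.randint(0, num_new_classes, (4,))  # dummy labels
loss = nn.functional.cross_entropy(model(x), labels)
loss.backward()
optimizer.step()
```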

Q: Can self-referential programs be successful in the near term?

While self-referential programs have a theoretical elegance, practically, they come with additional computational overhead. For solving small everyday problems, self-referential programs may not be optimal, and other non-universal techniques like recurrent neural networks prove to be more practical.

Q: Do we need a mathematical proof of intelligence in AI systems, or is it enough for them to work well?

Schmidhuber believes that nothing is more practical than a good theory. While current AI systems may work well, they lack a theoretical foundation; a theory that takes into account the limited resources of our universe would lead to practically optimal problem-solving.

Q: What is the role of creativity in intelligence?

Schmidhuber explains that creativity is not explicitly programmed into AI systems. Instead, it is a side effect of the system's ability to search and solve new problems. Creativity emerges when a system explores the unknown and discovers new patterns or solutions within the problem space.

Q: Do you view humans as PowerPlay agents?

Schmidhuber sees humans as naturally curious beings who constantly explore and play with their environment to understand how it works. This curiosity-driven exploration resembles his PowerPlay framework, in which a system is motivated to invent and solve novel problems and to generate new questions.

Q: Is consciousness a useful byproduct of problem-solving?

Schmidhuber explains that consciousness is not explicitly programmed into machines, but it emerges as a byproduct of the system's ability to solve problems and compress data. By forming models of the world and using predictive algorithms, machines develop a sense of self-awareness and can plan for the future.

Q: What is the value of depth in learning models?

The ability to model and understand temporal patterns in data is crucial in solving many real-world problems. Depth in learning models, like Long Short-Term Memory (LSTM) networks, allows for the integration of past information and improves long-term predictions. Deeper models can capture more complex temporal dependencies.
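
A small sketch of that point, assuming PyTorch (the sine-wave task, window length, and layer sizes are arbitrary choices for illustration): a stacked LSTM folds a window of past values into its hidden state and predicts the next value, with num_layers adding depth in the layer-stacking sense on top of the recurrence through time.

```python
import torch
import torch.nn as nn

class SequencePredictor(nn.Module):
    def __init__(self, hidden=32, layers=2):
        super().__init__()
        # num_layers > 1 stacks LSTMs: each layer re-processes the
        # state sequence below it, capturing richer temporal structure.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, 1)
        out, _ = self.lstm(x)            # hidden state integrates the past
        return self.head(out[:, -1])     # predict from the final state

# Toy task: predict the next point of a sine wave from the last 20 points.
t = torch.linspace(0, 20, 400)
wave = torch.sin(t)
windows = torch.stack([wave[i:i + 20] for i in range(380)]).unsqueeze(-1)
targets = wave[20:].unsqueeze(-1)

model = SequencePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(windows), targets)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```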

Q: What is the next leap for learning models after LSTMs?

While LSTMs have been successful, they still have limitations when it comes to capturing very long-term dependencies. Schmidhuber mentions the need for models that can effectively plan and select among many possible action sequences based on reinforcement learning. These models would go beyond the capabilities of LSTMs and address the credit assignment problem.

Summary & Key Takeaways

  • Schmidhuber's early fascination with solving the riddles of the universe led him to envision AI systems that could self-improve recursively to become better problem solvers than humans.

  • He believes that the key to achieving AGI lies in meta-learning, which involves building machines that not only learn to solve problems but also learn to improve their own learning algorithms.

  • Schmidhuber highlights the difference between narrow meta-learning, which is currently popular in AI, and his broader concept of meta-learning, which allows for introspection and modification of the learning algorithm itself.
