François Chollet: Keras, Deep Learning, and the Progress of AI | Lex Fridman Podcast #38 | Summary and Q&A

September 14, 2019
Lex Fridman Podcast


François Chollet discusses the future of artificial intelligence and the role of deep learning libraries like Keras within the TensorFlow ecosystem, exploring controversial ideas around intelligence explosion and the limitations of deep learning.


Questions & Answers

Q: Can deep learning libraries like Keras and TensorFlow pave the way for the development of superintelligent AI systems?

Deep learning libraries like Keras and TensorFlow are valuable tools for building AI systems, but achieving superintelligence requires more than just deep learning. It involves combining deep learning with symbolic AI and program synthesis to create hybrid systems that have the ability to reason and generalize across tasks and domains.

Q: What are the limitations of deep learning and the current state of AI research?

Deep learning is highly effective for tasks that involve pattern recognition and large-scale data analysis but falls short in areas such as reasoning and abstract problem-solving. The current state of AI research is focused on combining deep learning with other AI approaches, such as reinforcement learning and unsupervised learning, to address these limitations and build more intelligent systems.

Q: How does program synthesis play a role in the future of AI?

Program synthesis involves automatically generating programs from input-output examples or specifications. It has the potential to revolutionize AI research by enabling the creation of rule-based models that can reason and generalize across different tasks and datasets. While program synthesis is still in its early stages, it holds promise for future advancements in AI.
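To make the idea concrete, here is a toy illustration (not code from the conversation) of the simplest form of program synthesis: enumerative search. It generates small arithmetic expressions over one input `x` and returns the first one consistent with every input-output example. The grammar and all names here are illustrative assumptions.

```python
from itertools import product

def enumerate_programs(max_depth):
    """Yield (expression_string, function) pairs up to a given depth."""
    if max_depth == 0:
        yield "x", lambda x: x
        for c in range(4):  # small constants 0..3
            yield str(c), (lambda c: lambda x: c)(c)
        return
    subs = list(enumerate_programs(max_depth - 1))
    yield from subs  # shallower programs are also candidates
    for (ld, lf), (rd, rf) in product(subs, repeat=2):
        yield f"({ld} + {rd})", (lambda f, g: lambda x: f(x) + g(x))(lf, rf)
        yield f"({ld} * {rd})", (lambda f, g: lambda x: f(x) * g(x))(lf, rf)

def synthesize(examples, max_depth=2):
    """Return the first expression consistent with every (input, output) pair."""
    for desc, fn in enumerate_programs(max_depth):
        if all(fn(xi) == yi for xi, yi in examples):
            return desc
    return None

# Recover a rule for f(x) = 2*x + 1 from examples alone:
print(synthesize([(1, 3), (2, 5), (3, 7)]))  # -> (x + (x + 1))
```

Real program-synthesis systems replace this brute-force enumeration with learned guidance or constraint solving, but the core loop — propose candidate programs, check them against the specification — is the same.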

Q: Can deep learning libraries improve data efficiency and reduce the need for human annotation?

Deep learning libraries like Keras and TensorFlow offer techniques for unsupervised learning and reinforcement learning, which can reduce the need for extensive human annotation. By leveraging large amounts of data and finding patterns and structures within the data, these techniques can improve data efficiency and enhance the capabilities of AI systems.


This conversation is with Francois Chollet, the creator of Keras, an open-source deep learning library. He discusses the idea of intelligence explosion and his doubts about its feasibility. He also talks about the history of Keras and its integration with TensorFlow. He explains the design decisions involved in creating a deep learning framework and the challenges of satisfying the diverse needs of TensorFlow users.

Questions & Answers

Q: What controversial idea have you expressed online and received pushback for?

Francois questions the idea of intelligence explosion, which suggests that if an AI system is able to improve itself, it could reach a level of intelligence beyond human capabilities. He argues that this idea doesn't align with how intelligence actually works, as intelligence emerges from the interaction between a brain, body, and environment. Tweaking the brain alone wouldn't lead to exponential growth in intelligence. He also mentions that the notion of intelligence explosion comes from mythology, where the world is often headed towards a final event of destruction and transformation.

Q: Is it possible for intelligence to exponentially increase in certain tasks?

Francois believes that in specific tasks, there may be a possibility of exponential growth in problem-solving ability. However, he points out that even systems with recursive self-improvement do not experience an explosion of capabilities due to the interdependencies among different components. He uses the example of science as a recursively self-improving problem-solving system, which consumes exponentially increasing resources but has a linear output in terms of scientific progress. He argues that when one part of a system is improved, another part becomes a bottleneck, preventing exponential growth.

Q: Can a superhuman AI system exhibit recursive self-improvement like science does?

Francois considers science as the closest thing we have to a recursively self-improving superhuman AI system. He observes that science feeds into technology, which can further improve scientific progress. However, he notes that even with recursive self-improvement, science does not experience exponential growth. He believes that the resource consumption of science adjusts dynamically to maintain linear progress. He also mentions that the significance of scientific discoveries, when measured using expert ratings, does not show exponential growth.

Q: How did the integration of Keras into TensorFlow happen?

Francois explains that he initially developed Keras as a high-level interface for deep learning, with a focus on LSTM networks. At the time, the most popular deep learning library was Caffe, which Francois found less flexible. He wanted to combine LSTM and convolutional networks, and thus created Keras. After joining Google, Francois was exposed to the early internal version of TensorFlow and saw it as an improved version of Theano. He then refactored Keras to run on TensorFlow and made it compatible with multiple backends. Over time, TensorFlow overtook Theano in popularity.
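The combination that motivated Keras — convolutional feature extraction feeding an LSTM — is a few lines in modern Keras. This is a minimal sketch with illustrative shapes and layer sizes, not a model from the conversation:

```python
import tensorflow as tf
from tensorflow import keras

# Sequence classifier: Conv1D extracts local features from the
# sequence, an LSTM summarizes them into a single vector.
model = keras.Sequential([
    keras.Input(shape=(100, 16)),  # 100 timesteps, 16 features each
    keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

Mixing layer families freely like this was exactly the flexibility that frameworks such as Caffe, built around fixed layer configurations, made difficult at the time.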

Q: What are you excited about in TensorFlow 2.0?

Francois expresses excitement about TensorFlow 2.0, specifically about the usability and flexibility it offers. He mentions that TensorFlow now provides a spectrum of workflows with varying levels of usability and flexibility. From high-level features suitable for data scientists, to low-level flexibility for researchers, TensorFlow caters to diverse user needs. He also mentions the seamless integration with different tooling and deployment options like mobile and the cloud.
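The two ends of that spectrum can be sketched with the same toy regression task (illustrative data and model, not code from the conversation): the high-level `fit` workflow hides the training loop, while `tf.GradientTape` exposes every step for researchers who need full control.

```python
import tensorflow as tf
from tensorflow import keras

x = tf.random.normal((64, 8))   # toy inputs
y = tf.random.normal((64, 1))   # toy targets
model = keras.Sequential([keras.layers.Dense(1)])

# High level: compile/fit handles the loop, loss, and updates.
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=1, verbose=0)

# Low level: write the training step yourself.
optimizer = keras.optimizers.SGD()
with tf.GradientTape() as tape:
    loss = tf.reduce_mean((model(x) - y) ** 2)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```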

Q: How are design decisions made when integrating Keras into TensorFlow?

Francois describes the process of making design decisions as highly collaborative, involving the discussion and review of design documents. He emphasizes the need to satisfy constraints while keeping the design as simple as possible for maintainability. The API design is focused on reflecting the mental models of domain experts, minimizing cognitive load and reducing the time required to understand and use the API. The goal is to have an API that is modular, hierarchical, and easy to map to the user's mental model.

Q: What does the future of Keras and TensorFlow look like?

Francois says it is difficult for him to predict. He is no longer the one making the decisions, and he emphasizes that the future is uncertain and will be shaped by the needs and demands of users.

Summary & Key Takeaways

  • François Chollet is the creator of Keras, an open-source deep learning library that integrates with TensorFlow, and discusses its role within TensorFlow and the future of AI.

  • He questions the idea of an intelligence explosion, arguing that intelligence is not just a property of the brain but also includes the interaction between the brain, body, and environment.

  • Chollet believes in the potential of hybrid systems that combine symbolic AI with deep learning and highlights the importance of program synthesis in future AI research.
