The Power of General Purpose Methods: Lessons Learned from AI Research
Hatched by Alessio Frateily
Jan 30, 2024
4 min read
Introduction
In the field of artificial intelligence (AI), researchers have made significant progress over the past 70 years. However, one of the most important lessons that can be derived from this extensive research is the power of general purpose methods that leverage computation. These methods have proven to be the most effective, surpassing approaches that rely heavily on human knowledge. This article explores the concept of embedding and the bitter lesson learned from AI research, emphasizing the need for scalable methods such as search and learning.
The Cheshire Cat: Embedding Semantic Information
One of the key techniques used in AI is embedding: condensing the semantic information of an input text into a vector representation. This embedding vector resides in a Euclidean space, so geometric operations can be applied to it, most notably measuring the distance between two embeddings. The smaller that distance, the more similar in meaning the two sentences are, a signal that provides valuable insights for a wide range of AI applications.
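To make this concrete, here is a minimal sketch using NumPy and made-up, low-dimensional vectors; real embeddings come from an embedding model and typically have hundreds or thousands of dimensions. It compares two embeddings with Euclidean distance and cosine similarity, the two measures most commonly used in practice.

```python
import numpy as np

# Hypothetical embedding vectors for two sentences. In practice these would be
# produced by an embedding model; the values here are invented for illustration.
sentence_a = np.array([0.12, -0.48, 0.33, 0.90])
sentence_b = np.array([0.10, -0.45, 0.35, 0.85])

# Euclidean distance: the smaller the distance, the more similar the meaning.
euclidean_distance = np.linalg.norm(sentence_a - sentence_b)

# Cosine similarity: the closer to 1.0, the more similar the meaning.
cosine_similarity = sentence_a @ sentence_b / (
    np.linalg.norm(sentence_a) * np.linalg.norm(sentence_b)
)

print(f"Euclidean distance: {euclidean_distance:.3f}")
print(f"Cosine similarity:  {cosine_similarity:.3f}")
```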
The Bitter Lesson: General Methods and Computation
The bitter lesson, drawn from 70 years of AI research and articulated by Rich Sutton, is that general methods that leverage computation are ultimately the most effective. The exponential growth in computational power described by Moore's Law has played a significant role in this realization. Initially, AI researchers tried to build human knowledge into their agents, assuming it would enhance performance. In practice, however, this approach often complicated systems and hindered their ability to take advantage of general methods that scale with computation.
In speech recognition, for example, early attempts involved utilizing human knowledge of words, phonemes, and the human vocal tract. However, newer statistical methods based on hidden Markov models emerged and outperformed human-knowledge-based approaches. The recent rise of deep learning in speech recognition further emphasizes the success of methods that rely less on human knowledge and more on computation and learning from vast training sets.
A similar pattern can be observed in computer vision. Early methods focused on searching for edges or generalized cylinders, while modern deep-learning networks rely only on the notions of convolution and certain kinds of invariance. These deep learning methods have demonstrated superior performance compared to previous approaches. The bitter lesson here is that attempting to build AI systems that work the way researchers believe their own minds work, by incorporating human knowledge, ultimately hinders progress. Breakthrough progress in AI comes from the opposite approach: scaling computation through search and learning.
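As a rough illustration of the convolution operation these networks build on, the sketch below applies a hand-written edge-detecting kernel to a tiny image using NumPy. The point of contrast with deep learning is that a network does not use a hand-designed kernel like this one; it learns the kernel values from data.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution (strictly, cross-correlation, as in deep learning libraries)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Slide the kernel over the image and sum the elementwise products.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny 5x5 "image" with a vertical edge between dark (0) and bright (1) regions.
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# Hand-crafted vertical-edge kernel; in a CNN these weights would be learned.
edge_kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

print(convolve2d(image, edge_kernel))  # strong responses where the vertical edge is
```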
The Power of General Purpose Methods
The bitter lesson teaches us the great power of general purpose methods that continue to scale with increased computation. Search and learning, in particular, have been identified as two methods that can scale arbitrarily in this way. As computational resources become more abundant, these methods prove to be highly effective.
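As a toy illustration of a general method that simply gets better when given more computation, the hypothetical sketch below uses plain random search to maximize an objective it knows nothing about. Increasing the evaluation budget improves the best value found, with no domain knowledge added.

```python
import random

def black_box(x: float) -> float:
    # A stand-in objective the searcher knows nothing about.
    return -(x - 3.7) ** 2 + 10.0

def random_search(budget: int, seed: int = 0) -> float:
    """Return the best objective value found within `budget` evaluations."""
    rng = random.Random(seed)
    best = float("-inf")
    for _ in range(budget):
        x = rng.uniform(-10.0, 10.0)
        best = max(best, black_box(x))
    return best

# More computation, better answer: the only thing that changes is the budget.
for budget in (10, 100, 1_000, 10_000):
    print(f"budget={budget:>6}: best value {random_search(budget):.4f}")
```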
The bitter lesson also highlights the complexity of the actual contents of minds. Simple ways of thinking about space, objects, multiple agents, or symmetries inevitably fall short, because these contents belong to the arbitrary, intrinsically complex outside world. It is therefore crucial to build in only the meta-methods that can find and capture this complexity, rather than our specific discoveries. AI agents should possess the ability to discover as humans do, rather than merely contain what we have already discovered.
Actionable Advice
1. Embrace General Purpose Methods: Instead of relying solely on domain-specific knowledge, explore the power of general purpose methods such as search and learning. These methods have shown tremendous scalability and effectiveness.
2. Leverage Computation: Take advantage of the exponential growth in computational power. Moore's Law has paved the way for breakthrough progress in AI, and harnessing increased computation can lead to significant advancements.
3. Emphasize Discoverability: Shift the focus from incorporating specific knowledge into AI agents to enabling them to discover like humans do. By building meta-methods that capture the complexity of the outside world, AI agents can achieve greater flexibility and adaptability.
Conclusion
The bitter lesson learned from AI research emphasizes the power of general purpose methods and the need to leverage computation. Embedding, as a technique, provides a condensed representation of semantic information and enables similarity comparisons between sentences. By embracing general methods and scalable computation, breakthrough progress can be achieved. It is crucial to recognize the complexity of the actual contents of minds and to build AI agents that can discover for themselves rather than merely contain what we have already discovered.