Deep Neural Networks Help to Explain Living Brains: A Look into the Connection Between Artificial Intelligence and Neuroscience

Hatched by Glasp

Sep 30, 2023

In the field of artificial intelligence, deep neural networks have been used to develop systems that can recognize objects in pictures, much like the human brain does. These networks, inspired by the neurological wiring of living brains, have provided insights into the workings of our own cognitive processes. Researchers have not only wondered why different parts of the brain perform different functions but also why these differences can be so specific. For example, why does the brain have an area dedicated to recognizing faces?

Computational neuroscientist Daniel Yamins, now at Stanford University, showed that a neural network that processes the features of a scene hierarchically, much as the brain does, can match human performance at recognizing objects. This suggests that the deep networks used in artificial intelligence emulate the brain's hierarchical processing of visual information, in which low-level features are extracted in the earlier stages and more complex representations, such as whole objects and faces, emerge later.
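
To make the idea of hierarchical processing concrete, here is a minimal PyTorch sketch of a deep convolutional network whose early layers capture low-level features and whose later layers build object-level representations. The architecture and layer sizes are illustrative assumptions, not the model used in Yamins' study.

```python
# Minimal sketch of a hierarchical ("deep") convolutional network, in the
# spirit of the models described above -- not the specific architecture
# from the study.
import torch
import torch.nn as nn

class HierarchicalNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Early layers: small receptive fields, low-level features (edges, textures)
        self.early = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Middle layers: combinations of low-level features (contours, parts)
        self.middle = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Late layers: large effective receptive fields, object-level representations
        self.late = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.early(x)    # analogous to early visual areas
        x = self.middle(x)   # intermediate stages
        x = self.late(x)     # analogous to later, object-selective stages
        return self.classifier(x.flatten(1))

# Example: classify a batch of two 224x224 RGB images
logits = HierarchicalNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```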

Yamins and his team also explored the use of deep neural networks in classifying speech and music. They designed deep nets that process audio by first modeling the cochlea, the organ responsible for sound transduction in the inner ear. The team compared different architectures, ranging from one in which the speech and music tasks shared only the input layer to one in which they shared the entire network and split only at the output stage.
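
The sketch below illustrates one point on that spectrum: a network in which the two tasks share the early stages of processing and branch into separate speech and music heads later on. The input format, layer sizes, and task heads are illustrative assumptions, not the architecture from the study.

```python
# Sketch of a shared-then-split architecture: speech and music recognition
# share early processing and branch apart in later stages.
import torch
import torch.nn as nn

class SharedThenSplit(nn.Module):
    def __init__(self, n_words=100, n_genres=10):
        super().__init__()
        # Shared trunk: early processing applied to a cochlea-like
        # time-frequency representation of the sound.
        self.shared = nn.Sequential(
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Task-specific branches: word recognition vs. music classification.
        self.speech_branch = nn.Sequential(
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, n_words),
        )
        self.music_branch = nn.Sequential(
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, n_genres),
        )

    def forward(self, cochleagram):
        h = self.shared(cochleagram)                         # shared early stages
        return self.speech_branch(h), self.music_branch(h)   # split later

# Example: a batch of two "cochleagrams" with 64 frequency channels, 200 time steps
speech_out, music_out = SharedThenSplit()(torch.randn(2, 64, 200))
```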

Interestingly, a deep net trained only to recognize faces performed poorly at recognizing objects, and vice versa, suggesting that these networks represent faces and objects differently, just as the human brain does. And when a single network handled both kinds of stimuli, it organized itself internally so that the processing of faces and objects segregated in the later stages of the network, much as functional specialization emerges in the human brain.
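
One way to see such segregation is to probe how selectively units in a late layer respond to faces versus other objects. The selectivity index below is a simple illustrative choice, not the analysis used in the research described above.

```python
# Rough sketch of probing face/object segregation in a trained network:
# compare how strongly each late-layer unit responds to faces vs. objects.
import torch

def selectivity_index(acts_faces, acts_objects):
    """Per-unit contrast in [-1, 1]; positive values mean a unit prefers faces."""
    mean_faces = acts_faces.mean(dim=0)      # average response to face images
    mean_objects = acts_objects.mean(dim=0)  # average response to object images
    return (mean_faces - mean_objects) / (mean_faces + mean_objects + 1e-8)

# Stand-in activations (images x units); in practice these would be recorded
# from a late layer of the trained network.
acts_faces = torch.rand(100, 512)
acts_objects = torch.rand(100, 512)
sel = selectivity_index(acts_faces, acts_objects)
print((sel.abs() > 0.33).float().mean())  # fraction of strongly selective units
```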

Similar evidence has emerged from research on the perception of smells. In the fruit fly, the initial layer of odor processing involves olfactory sensory neurons, each expressing only one of about 50 types of odor receptor. When researchers trained a deep net to classify simulated odors, they found that the network converged on a connectivity pattern similar to the one seen in the fruit fly brain. This similarity suggests that both evolution and deep nets have converged on an optimal solution.
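
Here is a rough sketch of what training such a network on simulated odors might look like, with a receptor-sized input layer followed by compression and expansion stages loosely modeled on the fly's circuit. The layer sizes, the synthetic data, and the training loop are all illustrative assumptions, not the setup of the original study.

```python
# Illustrative sketch: train a small network to classify synthetic "odors",
# then compare its learned connectivity with the fly's olfactory circuit.
import torch
import torch.nn as nn

n_receptors, n_odor_classes = 50, 20
net = nn.Sequential(
    nn.Linear(n_receptors, 50), nn.ReLU(),    # compression (glomerulus-like)
    nn.Linear(50, 2000), nn.ReLU(),           # expansion (Kenyon-cell-like)
    nn.Linear(2000, n_odor_classes),          # readout
)

# Synthetic odors: random receptor activation patterns with random class labels
x = torch.rand(256, n_receptors)
y = torch.randint(0, n_odor_classes, (256,))

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    optimizer.step()

# After training, one can inspect the learned weight matrices and ask whether
# their connectivity statistics resemble those measured in the fly.
```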

However, critics argue that deep-net models of the brain may simply replace one black box with another. These models often require large amounts of labeled data for training, whereas our brains can learn from a single example. Additionally, deep nets learn using an algorithm called backpropagation, which many neuroscientists believe cannot work in real neural tissue because the connections it requires do not appear to exist.
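
To make that objection concrete, here is a hand-written backpropagation step for a tiny one-hidden-layer network. The flagged line is the part critics point to: the error signal travels backward through the transpose of the same weight matrix used in the forward pass, a kind of symmetric feedback pathway that has not been found in real neural tissue. The network and data are toy assumptions for illustration.

```python
# Minimal hand-written backpropagation for a one-hidden-layer network.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))          # batch of 4 inputs
y = rng.normal(size=(4, 2))           # regression targets
W1 = rng.normal(size=(10, 16)) * 0.1  # input -> hidden weights
W2 = rng.normal(size=(16, 2)) * 0.1   # hidden -> output weights

# Forward pass
h = np.maximum(0, x @ W1)             # hidden layer (ReLU)
y_hat = h @ W2                        # output layer
loss = ((y_hat - y) ** 2).mean()

# Backward pass (backpropagation)
d_yhat = 2 * (y_hat - y) / y.size
dW2 = h.T @ d_yhat
d_h = d_yhat @ W2.T                   # <-- error flows back through W2.T:
                                      #     the step said to lack a clear
                                      #     biological counterpart
dW1 = x.T @ (d_h * (h > 0))           # ReLU derivative gates the error

# Gradient step
W1 -= 0.1 * dW1
W2 -= 0.1 * dW2
```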

Cognitive neuroscientist Josh Tenenbaum at MIT acknowledges that these deep-net models represent progress, but he points out that they focus mainly on classification and categorization tasks. Our brains do much more than categorize what's in our surroundings: they also infer the causal structure of a scene, allowing us to understand the world around us in an instant.

In the business world, the concept of the ERRC Grid (Eliminate, Raise, Reduce, Create) has gained traction as a way to identify and capitalize on blue ocean markets. Blue Ocean refers to an uncontested market space or innovation that provides unique value to customers while maintaining profitability. The ERRC Grid suggests that eliminating conventional features that have no impact on customers, raising aspects with a significant impact, reducing aspects with minimal impact, and creating new value-added features can lead to success.

Interestingly, the relationship between non-conformity and company performance follows an inverted U-shaped curve: moderate non-conformity has the strongest positive effect on performance, suggesting that mild divergence from convention can significantly improve a company's results.

In conclusion, the connection between deep neural networks and living brains has provided insights into how our cognitive processes work. These networks, inspired by the brain's hierarchical processing, have shown promise in recognizing objects, faces, speech, music, and even smells. Challenges remain, however, such as the need for large amounts of labeled data and the biological implausibility of backpropagation. Nonetheless, these advances in artificial intelligence have shed light on the complexities of our own brains and offer potential applications in many fields.

Actionable Advice:

  • 1. Embrace the hierarchical processing of information: Just as deep neural networks process information hierarchically, consider organizing your own thought processes in a hierarchical manner. Break down complex problems into smaller, more manageable parts to enhance understanding and problem-solving abilities.
  • 2. Foster functional specialization: Deep neural networks have shown that functional specialization can lead to better performance in recognizing different types of stimuli. Apply this concept to your own work by identifying areas where specialization can lead to improved efficiency and expertise.
  • 3. Embrace mild divergence: Research on non-conformity suggests that mild divergence from conventional practices can have a significant positive impact on performance. Don't be afraid to challenge the status quo and explore new ideas or approaches in your work or business.

Hatch New Ideas with Glasp AI 🐣

Glasp AI allows you to hatch new ideas based on your curated content. Let's curate and create with Glasp AI :)