Steve Kaplan
@69gb9ppqkh26lr65
Joined May 9, 2023
0 Following · 0 Followers
Learns #Product Management #Productivity #Growth #Marketing #Design
www.aomni.com/research/cbcf116d-36bc-459b-a7d1-2adf68131db7 · Jul 13, 2023 · 1 highlight
www.linkedin.com/pulse/ppc-news-week-27-yoann-ferrand/ · Jul 11, 2023 · 1 highlight
meetglimpse.com/extension/ · Jun 18, 2023 · 1 highlight
www.eventcube.io/blog/promote-your-event-free-30-places · Jun 13, 2023 · 1 highlight
skillshop.exceedlms.com/student/path/645553/activity/1317109 · May 27, 2023 · 6 highlights
www.octaneai.com/pricing · May 26, 2023 · 1 highlight
aiagent.app/agent/7a5728d5-4b0e-4dd5-926b-ce293fc902b7 · May 25, 2023 · 3 highlights
www.quantamagazine.org/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413?ref=refind · May 23, 2023 · 9 highlights
longform.asmartbear.com/problem?ref=refind · May 19, 2023 · 1 highlight
kagi.com/search?q=are+merchant+cash+advances+banned+on+google · May 19, 2023 · 1 highlight
www.irs.gov/faqs/small-business-self-employed-other-business/form-1099-nec-independent-contractors · May 17, 2023 · 1 highlight
aiagent.app/agent/45cfc360-03e1-4263-9b75-84d40dec9b64 · May 17, 2023 · 2 highlights & 1 note
www.northbeam.io/features/platform-overview · May 15, 2023 · 1 highlight
www.northbeam.io/compare/northbeam-vs-triple-whale · May 13, 2023 · 1 highlight
kratomspot.com/whats-legal-in-kratom-advertising · May 12, 2023 · 1 highlight
medium.com/glasp/tutorial-how-to-export-web-articles-sentences-into-notion-907571bd5050 · May 9, 2023 · 1 highlight & 1 note
Despite the wild success of ChatGPT and other large language models, the artificial neural networks (ANNs) that underpin these systems might be on the wrong track.
Such systems are so complicated that no one truly understands what they’re doing, or why they work so well. This, in turn, makes it almost impossible to get them to reason by analogy, which is what humans do — using symbols for objects, ideas and the relationships between them.
“You have to propose that, well, you have a neuron for all combinations,” said Bruno Olshausen, a neuroscientist at the University of California, Berkeley. “So, you’d have in your brain, [say,] a purple Volkswagen detector.”
Instead, Olshausen and others argue that information in the brain is represented by the activity of numerous neurons. So the perception of a purple Volkswagen is not encoded as a single neuron’s actions, but as those of thousands of neurons. The same set of neurons, firing differently, could represent an entirely different concept (a pink Cadillac, perhaps).
This is the starting point for a radically different approach to computation known as hyperdimensional computing. The key is that each piece of information, such as the notion of a car, or its make, model or color, or all of it together, is represented as a single entity: a hyperdimensional vector.
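To make that concrete, here is a minimal sketch in Python with NumPy (an illustration, not code from the article). It follows one common convention from the vector-symbolic-architecture literature: each atomic concept gets a random bipolar (+1/-1) vector of dimension 10,000, an attribute is bound to its value by elementwise multiplication, and bound pairs are bundled by addition, so the whole "purple Volkswagen" ends up as one hypervector. All names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # a typical hypervector dimensionality

def random_hv():
    """Random bipolar (+1/-1) hypervector for an atomic concept."""
    return rng.choice([-1, 1], size=D)

# Roles and fillers each get their own random hypervector.
COLOR, MAKE = random_hv(), random_hv()
purple, volkswagen = random_hv(), random_hv()

# Bind each role to its filler (elementwise multiply), then bundle
# (add) the pairs: "purple Volkswagen" is now a single hypervector.
purple_vw = COLOR * purple + MAKE * volkswagen

# Unbinding with COLOR recovers a noisy copy of `purple`
# (COLOR * COLOR is all ones, so the color term drops out cleanly).
recovered = COLOR * purple_vw
cosine = recovered @ purple / (np.linalg.norm(recovered) * np.linalg.norm(purple))
print(f"similarity of recovered color to 'purple': {cosine:.2f}")  # about 0.7
```

Any unrelated random hypervector would score near zero against `purple`, which is why the noisy recovered copy is still easy to identify.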
“The ease of making nearly orthogonal vectors is a major reason for using hyperdimensional representation,” wrote Pentti Kanerva, a researcher at the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley, in an influential 2009 paper.
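Kanerva's observation is easy to verify numerically. In the same hedged setup as above, the typical cosine similarity between two independent random bipolar vectors shrinks roughly as one over the square root of the dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(d, trials=100):
    """Average |cosine| between independent random bipolar vector pairs."""
    a = rng.choice([-1, 1], size=(trials, d))
    b = rng.choice([-1, 1], size=(trials, d))
    cosines = (a * b).sum(axis=1) / d  # each norm is sqrt(d), so dot/d is the cosine
    return np.abs(cosines).mean()

for d in (10, 1_000, 10_000, 100_000):
    print(f"d = {d:>7}: mean |cosine| = {mean_abs_cosine(d):.4f}")
# Similarity concentrates near zero at a 1/sqrt(d) rate; at d = 10,000
# two random hypervectors are almost always nearly orthogonal.
```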
Hyperdimensional computing is well suited for a new generation of extremely sturdy, low-power hardware. It’s also compatible with “in-memory computing systems,” which perform the computing on the same hardware that stores data (unlike existing von Neumann computers that inefficiently shuttle data between memory and the central processing unit). Some of these new devices can be analog, operating at very low voltages, making them energy-efficient but also prone to random noise. For von Neumann computing, this randomness is “the wall that you can’t go beyond,” Olshausen said. But with hyperdimensional computing, “you can just punch through it.”
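That robustness can be illustrated in the same sketch framework: corrupt a stored hypervector heavily, and a plain nearest-neighbor lookup still retrieves it, because the noise is nearly orthogonal to every stored item. The memory size (1,000 items) and the 30% flip rate below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

# A hypothetical item memory: 1,000 stored hypervectors.
memory = rng.choice([-1, 1], size=(1_000, D))
target_index = 123
target = memory[target_index]

# Simulate heavy analog noise: flip 30% of the components at random.
flips = rng.random(D) < 0.30
noisy = np.where(flips, -target, target)

# Dot-product lookup (all norms are equal, so this ranks by cosine).
best = int(np.argmax(memory @ noisy))
print(best == target_index)  # True: the signal punches through the noise
```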
Despite such advantages, hyperdimensional computing is still in its infancy. “There’s real potential here,” Fermüller said. But she points out that it still needs to be tested against real-world problems and at bigger scales, closer to the size of modern neural networks.
“For problems at scale, this needs very efficient hardware,” Rahimi said. “For example, how [do you] efficiently search over 1 billion items?”
All of this should come with time, Kanerva said. “There are other secrets [that] high-dimensional spaces hold,” he said. “I see this as the very beginning of time for computing with vectors.”