Gary Marcus


49 Quotes

"Noam Chomsky, who led off the night, was worried about whether the current approach to artificial intelligence would ever tell us anything about the thing that he cares about most: what makes the human mind what it is?"
Gary Marcus
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
"I, Gary Marcus, worried about whether contemporary approaches to AI would ever give solutions to four key aspects of thought that we ought to expect from any intelligent machine: reasoning, abstraction, compositionality, and factuality."
Gary Marcus
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
"Dileep George, DeepMind researcher and cofounder of two AI startups, worried that scaling alone would not be enough to bring us to general intelligence, raising an analogy with dirigibles like the Hindenburg that at one point seemed to be outpacing airplane development."
Gary Marcus
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
"Yejin Choi, the MacArthur-winning UW/Allen AI professor named above, worried about whether we were making enough progress toward understanding what she called the “dark matter of AI”, commonsense reasoning, and raised important further questions in a second talk about value pluralism and ethical reasoning in AI."
Gary Marcus
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
"he nonetheless counseled for an increased focus on metalearning, the (ideally automated) combination of multiple learning mechanisms with different aptitudes, across different tasks."
Gary Marcus
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
"Francesca Rossi, IBM Fellow and President of the leading AI society, AAAI, worried about whether current approaches to AI could bring us to AI systems that could behave sufficiently ethically."
Gary Marcus
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
"She also argued that we must bring humans in the loop, given the realities of current technology."
Gary Marcus
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
"“In the 2010s, symbol manipulation was a dirty word among deep learning proponents; in the 2020s, understanding where it comes from should be our top priority.”"
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"But by the end — in a departure from what LeCun has said on the subject in the past — they seem to acknowledge in so many words that hybrid systems exist, that they are important, that they are a possible way forward and that we knew this all along."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"For nearly 70 years, perhaps the most fundamental debate in artificial intelligence has been whether AI systems should be built on symbol manipulation — a set of processes common in logic, mathematics and computer science that treat thinking as if it were a kind of algebra — or on allegedly more brain-like systems called “neural networks.”"
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"A third possibility, which I personally have spent much of my career arguing for, aims for middle ground: “hybrid models” that would try to combine the best of both worlds, by integrating the data-driven learning of neural networks with the powerful abstraction capacities of symbol manipulation."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"I would argue that either symbol manipulation itself is directly innate, or something else — something we haven’t discovered yet — is innate, and that something else indirectly enables the acquisition of symbol manipulation."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"All of our efforts should be focused on discovering that possibly indirect basis. The sooner we can figure out what basis allows a system to get to the point where it can learn symbolic abstractions, the sooner we can build systems that properly leverage all the world’s knowledge, hence the closer we might get to AI that is safe, trustworthy and interpretable."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"Early AI pioneers like Marvin Minsky and John McCarthy assumed that symbol manipulation was the only reasonable way forward, while neural network pioneer Frank Rosenblatt argued that AI might instead be better built on a structure in which neuron-like “nodes” add up and process numeric inputs, such that statistics could do the heavy lifting."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"It’s been known pretty much since the beginning that these two possibilities aren’t mutually exclusive."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"The article ended with an attack on symbols, arguing that “new paradigms [were] needed to replace rule-based manipulation of symbolic expressions by operations on large vectors.”"
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"“The sooner we can figure out what basis allows a system to get to the point where it can learn symbolic abstractions, the closer we might get to AI that is safe, trustworthy and interpretable.”"
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"Rather, as we all realize, the whole game is to discover the right way of building hybrids."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"People have considered many different ways of combining symbols and neural networks, focusing on techniques such as extracting symbolic rules from neural networks, translating symbolic rules directly into neural networks, constructing intermediate systems that might allow for the transfer of information between neural networks and symbolic systems, and restructuring neural networks themselves."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"Finally, we come to the key question: could symbol manipulation be learned rather than built in from the start?"
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"The straightforward answer: of course it could. To my knowledge, nobody has ever denied that symbol manipulation might be learnable."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"The first was a “learnability” argument: throughout the book, I showed that certain kinds of systems — basically 3-layer forerunners to today’s more deeply layered systems — failed to acquire various aspects of symbol manipulation, and therefore there was no guarantee that any system regardless of its constitution would ever be able to learn symbol manipulation."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"People should be skeptical that DL is at the limit; given the constant, incremental improvement on tasks seen just recently in DALL-E 2, Gato, and PaLM, it seems wise not to mistake hurdles for walls. The inevitable failure of DL has been predicted before, but it didn’t pay to bet against it."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"Optimism has its place, but the trouble with this style of argument is twofold. First, inductive arguments on past history are notoriously weak."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"Second, there is also a strong specific reason to think that deep learning in principle faces certain specific challenges, primarily around compositionality, systematicity and language understanding."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"All revolve around generalization and “distribution shift” (as systems transfer from training to novel situations) and everyone in the field now recognizes that distribution shift is the Achilles’ heel of current neural networks."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"on natural language, compositionality and reasoning, which differ from the kinds of pattern recognition on which deep learning excels, these systems remain massively unreliable, exactly as you would expect from systems that rely on statistical correlations, rather than an algebra of abstraction."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"Current systems, 20 years after “The Algebraic Mind,” still fail to reliably extract symbolic operations (e.g. multiplication), even in the face of immense data sets and training."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"The example of human infants and toddlers suggests the ability to generalize complex aspects of natural language and reasoning (putatively symbolic) prior to formal education."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"A little built-in symbolism can go a long way toward making learning more efficient;"
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"“We can finally focus on the real issues at hand: how can you get data-driven learning and abstract, symbolic representations to work together in harmony in a single, more powerful intelligence?”"
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"In the 2010s, symbol manipulation was a dirty word among deep learning proponents; in the 2020s, understanding where it comes from should be our top priority."
Gary Marcus
Deep Learning Alone Isn’t Getting Us To Human-Like AI | NOEMA
"Some of what it writes is so good that some people are using it to pick up dates on Tinder (“Do you mind if I take a seat? Because watching you do those hip thrusts is making my legs feel a little weak.”)"
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"Still others are using it to try to reinvent search engines"
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"As I told NYT columnist Farhad Manjoo, ChatGPT, like earlier, related systems, is “still not reliable, still doesn’t understand the physical world, still doesn’t understand the psychological world and still hallucinates.”"
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"In technical terms, GPT-4 will have more parameters inside of it, requiring more processors and memory to be tied together, and be trained on more data."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"GPT-1 was trained on 4.6 gigabytes of data, GPT-2 was trained on 46 gigabytes, GPT-3 was trained on 750 gigabytes."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
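The training-data figures in the quote above imply a rough order-of-magnitude jump per generation. A small sketch of that arithmetic (the GPT-3 figure is read as gigabytes, matching the parallel structure of the quote; all numbers are the approximate public estimates Marcus cites, not official specifications):

```python
# Training-data sizes quoted above, in gigabytes (approximate public estimates).
data_gb = {"GPT-1": 4.6, "GPT-2": 46, "GPT-3": 750}

# How many times larger each generation's training set was than its predecessor's.
sizes = list(data_gb.values())
ratios = [round(b / a, 1) for a, b in zip(sizes, sizes[1:])]
print(ratios)  # GPT-1 -> GPT-2, then GPT-2 -> GPT-3
```

Each step is a 10x-or-greater increase, which is the scaling trajectory Marcus expects GPT-4 to continue: more parameters, more data, more compute, with the architecture largely unchanged.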
"Although GPT-4 will definitely seem smarter than its predecessors, its internal architecture remains problematic. I suspect that what we will see is a familiar pattern: immense initial buzz, followed by a more careful scientific inspection, followed by a recognition that many problems remain."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"As far as I can tell from rumors, GPT-4 is architecturally essentially the same as GPT-3."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"If so, we can expect that approach to still be marred by something fundamental: an inability to construct internal models of how the world works, and in consequence we should anticipate an inability to understand things at an abstract level."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"GPT-4 may be better at faking term papers, but if it follows the same playbook as its predecessors, it still won’t really understand the world, and seams will eventually show."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"GPT-4 will solve many of the individual specific items used in prior benchmarks, but still get tripped up, particularly in longer and more complex scenarios."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"It will not be trustworthy and complete enough to give reliable medical advice, despite devouring a large fraction of the Internet."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"Fluent hallucinations will still be common, and easily induced, continuing—and in fact escalating—the risk of large language models being used as a tool for creating plausible-sounding yet false misinformation."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"Its natural language output still won’t be something that one can reliably hook up to downstream programs"
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"GPT-4 will not have reliable models of the things that it talks about that are accessible to external programmers in a way that reliably feeds downstream processes."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"“Alignment” between what humans want and what machines do will continue to be a critical, unsolved problem."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"When AGI (artificial general intelligence) comes, large language models like GPT-4 may be seen in hindsight as part of the eventual solution, but only as part of the solution."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
"“Scaling” alone—building bigger and bigger models until they absorb the entire internet—will prove useful, but only to a point."
Gary Marcus
What to Expect When You’re Expecting … GPT-4
