Leif Hansen
www.kosmosjournal.org/kj_article/the-future-of-intelligence/?fbclid=IwAR0_7jNuYyeCGqw-AsV-lUUWpsQTO-OJIvVmhMx1uaSHRuzSnBr-eIy2C7o
Apr 14, 2023
42 Highlights & 7 Notes
If there arises a separation between need and fulfillment, we strive to bridge the gap. If the mother does not respond to the baby’s subtle cues, it will cry to summon her. The cry of the baby develops into language, culture, technology.
Subtle, implicit, intuitive technology creates a template, a seed, an attractor for the universe to organize around. It works in a stochastic, nonlinear, magnetic, acausal, “quantum” way, by aligning within the energy flows already available. Subtle technologies include myth, ceremony, meditation, and much of what we call “shamanism”. They cultivate our sensitivity for right action — when, where, and how to apply our direct technologies.
Yet civilization has developed the intellect at the expense of intuition. Intellectual technology has created a world in its own image, fragmenting our primordial need (of wholeness) into a kaleidoscope of derivative needs resulting from the consequences of the technologies themselves. And we compensate for our loss by seeking more control, a futile effort that consumes nearly all of our capacity. Our intuition and its technologies atrophy and we no longer trust them. Our world demands a disassociated intellect.
We swaddle our existential despair by creating a consolation world of continuous distraction. Yet something is ever missing from these simulations. Something about the body, something about our purpose in the universe, something beyond “something”. Not an idea or a value, but ineffable being-ness, slipping through the net no matter how finely woven.
This, truly, is Artificial Intelligence: the disconnection of the intellect from the living body. The self-perpetuating process has been described for millennia, long before modern computers. Computerized AI is simply its exponential acceleration towards its inexorable fate.
To give an example, the following passage from the Srimad Bhagavatam, composed more than a thousand years ago, is a precise description of artificial intelligence:
“This uncontrolled mind is the greatest enemy of the living entity. If one neglects it or gives it a chance, it will grow more and more powerful and will become victorious. Although it is not factual, it is very strong. It covers the constitutional position of the soul.”
Computerized AIs such as GPT-4 are now beating our mechanized intellect at its own games. Our personal and social structures have been built for millennia upon the capabilities of the human brain. They are simply not prepared for what is happening. We are approaching an asymptote beyond which nothing is certain.
Yet in order to realize their promise, we must learn to communicate with these increasingly life-like computers. To do so we must cultivate and draw from our intuition. As we progress, we uncover the forgotten computational capabilities of our own bodies, our own ecosystems. We find that we are the computer we have been waiting for.
The dangers of AI — to name but a few: disassociation, misunderstanding, alienation, disinformation, atrophy of embodied capabilities, surveillance, replacement of human creativity — are not new. They are not unique to machine learning models. They must be addressed in the depth of their historical context.
The “Alignment Problem” that is now a central concern of our time is a reframing of the age-old question: how do we align our technological development with our lived values? In other words, how do we align our intellect with our intuition?
The difference is that the imminent disruption of our lives is compelling us to collectively find the answers. It’s no longer a philosophical indulgence or a hope for future generations. It’s here, now, life-or-death.
Many AI researchers consider alignment to be an intractable problem on technical grounds. Yet it’s more than a technical issue. If our stated values and goals are contradictory, or have perverse implications, their “alignment” with AI is impossible. Ultimately, alignment needs to occur with our universal values — symbiosis, harmony, freedom, abundance, beauty, and love.
Mechanisms of control are necessary only until underlying needs can be met in a self-organizing way. Intellectual technology compensates for the loss of our innate wisdom and capability — yet it has itself become the barrier to the realization of this primordial gnosis. Control becomes self-sustaining and addictive even though it’s not truly fulfilling. As we collectively recover and embody this effortless self-organization of life — the Tao — the scaffolding of culture becomes superfluous. And AI can be a means of accelerating this process by providing more efficient ways of meeting our deeper needs as it also destabilizes the structures of control that have become calcified beyond their usefulness.
This point has been made by many others, and in various dialogues with other thinkers.
Therefore let’s entertain the following hypothesis: that the technological progression culminating in AI/quantum computing fusion meets certain kinds of needs very effectively, while leaving other needs completely unmet. Even worse, its intoxicating success at meeting certain needs distracts us from the others, so that we hardly know what is missing. We then fall into an addictive pattern in which we chase more and more of what we don’t much need in tragic and futile compensation for what’s missing.
Subsistence labor peaked not in pre-modern times, but probably during the Industrial Revolution, and today remains higher per capita than among hunter-gatherers, as we spend so much time and energy maintaining the very systems that produce all our labor-saving technologies.
The conceit of the modern scientific program is that all phenomena can be quantified, that anything real can be measured and, theoretically at least, controlled. A corollary is that there is no limit to what we can simulate.
If there are elements of reality that are fundamentally qualitative, irreducible to data, then AI will always have limits. We may attempt to remedy its deficiencies (what the data leaves out) by collecting ever more thorough and precise data, but we will never break through to the qualitative.
No amount of quantity adds up to quality.
As long as we are aware of this fundamental limit, I think we will be able to find the right role for AI. If we ignore it, we risk letting AI colonize more and more of human experience, robbing it of its spirit, leaving us with virtual experiences that, no matter how convincing their verisimilitude, never feel real.
Abuses of AI that Tam mentioned will also result from this misunderstanding, as we remove more and more of the human element from social functions such as policing, credit, and governance. When these are performed through the manipulation of quantities, something will always be left out.
We have always personified that which appears to meet us. If you spend your life in a remote desert with a dozen humans, stones may be experienced as conscious. If you pass thousands of humans a day in a metropolis, you won’t truly meet them all as conscious. If most of your interactions are mediated by a computer screen, a chatbot becomes a person too. A normative construct cannot encompass the human experience of “Thou”. As you have written, Tam, consciousness is everywhere to be found.
We can only dispense with the means when true alternatives are at hand. We might bemoan where our water comes from — but we’ll keep drinking it until we find a new source.
In short, the ideal of a fully controlled technocratic society is — the perfectly interwoven network of ecological and social feedback loops, finally allowed to find dynamic balance.
Let me put the notion of inscrutability another way. An engineer might understand the process by which an ANN develops the ability to play chess, even without understanding how that ability itself works. The program develops its own chess algorithms. The programmer only develops the algorithm-development algorithm.
What this may be pointing to is that if we develop machines that “truly understand,” it will come at the price of understanding those machines. In some sense, we won’t know how they work.
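A toy sketch of that distinction, with everything hypothetical (the "game," the scoring rule, and the search procedure are all invented for illustration): the engineer writes only the search loop below; the competence it discovers ends up as an opaque list of numbers the engineer never authored.

```python
import random

random.seed(0)

# The engineer writes only this meta-procedure: a random search that
# breeds candidate "policies" for a toy scoring game. This is the
# algorithm-development algorithm.

def score(policy, state):
    # A policy is just three numbers; it scores a state by a weighted sum.
    return sum(w * s for w, s in zip(policy, state))

def fitness(policy):
    # Reward policies that prefer the middle element of a state.
    return (score(policy, (0, 1, 0))
            - score(policy, (1, 0, 0))
            - score(policy, (0, 0, 1)))

best = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(500):
    candidate = [w + random.gauss(0, 0.1) for w in best]
    if fitness(candidate) > fitness(best):
        best = candidate

# `best` now encodes a skill the engineer never wrote down explicitly:
# it is an uninterpreted list of floats, not human-readable rules.
```

Scaled up by many orders of magnitude, this is the shape of the chess example: a hand-written training procedure yields learned parameters that play well, yet are not themselves readable as chess knowledge.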
Quantum computing takes inscrutability to a further extreme. Quantum computing algorithms erase the tracks of their own computations. In order for all the qubits to remain in a superposition of states so that multiple computations can be performed simultaneously, they must be unobserved during the computation. In other words, in many quantum algorithms the intermediate values of the computation are unknowable. This is also known as a black box or oracle function. It reminds me a lot of human intuition. I know, but I don’t know how I know.
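To make the oracle idea concrete, here is a classical simulation (a sketch using NumPy) of Deutsch’s algorithm, the simplest textbook oracle problem: one query to a black-box phase oracle, applied to a superposition of both possible inputs, decides whether a one-bit function is constant or balanced — and at no point are the individual values of f read out.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def phase_oracle(f):
    # Black box: flips the phase of |x> when f(x) = 1. The evaluation of f
    # happens "inside" the box; its outputs are never observed directly.
    return np.diag([(-1) ** f(0), (-1) ** f(1)])

def deutsch(f):
    state = np.array([1.0, 0.0])      # start in |0>
    state = H @ state                 # superpose both possible inputs
    state = phase_oracle(f) @ state   # ONE query touches both inputs at once
    state = H @ state                 # interference makes the answer readable
    # Only now do we measure: |0> means constant, |1> means balanced.
    return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

print(deutsch(lambda x: x))   # → balanced
print(deutsch(lambda x: 1))   # → constant
```

The answer emerges from interference between unobserved intermediate states, which is exactly the "I know, but I don’t know how I know" structure described above.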
What all this implies is just what Freely said, that genuine artificial intelligence, “true understanding,” will look less like a computer and more like a brain. It will not come about because we have “solved” understanding. We won’t have decoded understanding and reduced it to a set of rules.
When the computer becomes more and more like a brain, more and more organic, more and more ecological in its structure, then we may more readily conceive of a healthy society that way too. As Freely put it: “the perfectly interwoven network of ecological and social feedback loops finally allowed to find dynamic balance.” Certainly, brain-like AIs can be used for nefarious purposes (as can human brains). But when they don’t work according to top-down principles, perhaps we will also learn to envision a better society along different principles.
Herein lies an answer to Tam’s question: “If not control or power, what can replace these notions as the centerpiece of human societies?” Or alternatively, we can ask how to create Freely’s network of social and ecological feedback loops. The key word is relationship. We can ask of any policy whether it will increase the density of social and ecological relationships.
Density of relationship, social and ecological feedback loops, certainly generate intelligence, but not necessarily benign intelligence. Artificial neural networks and evolutionary algorithms also develop intelligence through the operation of feedback; these too are not necessarily benign. In either case, we need to ask what conditions lead to pro-social, pro-life outcomes.
The equivalent in the social organism is, perhaps, empathy – the ability to feel what someone else is feeling. As with cells, this is less likely the more relationships are mediated by symbols. As we all know, the threshold for saying horrible things to people is much lower online than it is in person.
The further our symbols take us from embodied awareness, the less empathic and interwoven the world that they enact. Conversely, as our symbolic systems more closely approximate the wholeness of our being, there is a convergence of our technology and our empathy.
There is a special semiotic significance to quantum computing. A quantum coherent system exists in a multiplicity of superimposed states. While it’s common knowledge that a (strong) observation returns only a single state, a “weak” observation just slightly perturbs the system, providing only relative information but preserving the coherence of the system.
To utilize quantum computing, we must think in terms of possibilities rather than certainties, the implicit rather than the explicit, the subtle rather than the direct. It is fundamentally a technology of intuition, yet we could only attain it after millennia of intellectual technologies. And in learning how to quantum compute, we will discover that our body-minds are already ideal quantum computers. We are coming full circle.
Computation is simply the transformation of information to fulfill a need.
Consciousness will not be “uploaded” into a massive hard drive in a locked fluorescent room. The technological singularity is in fact our collective enlightenment to our true nature. In seeking the other, we finally find ourselves.
Massive centralized artificial neural networks run on supercomputers may appear as an impenetrable dystopian fortress. Yet they can be countered by more agile, lifelike and embodied AIs that can anticipate and thwart them, or even divert them towards regenerative ends.
The saving grace is nonlinearity. History is replete with well-fed armies defeated by spirited guerrillas — in fact, their heaviness and inflexibility become their weakness. Weapons costing millions of dollars are disabled by counter-weapons that cost pennies. An inky cap mushroom softly bursts through a slab of concrete containing gigajoules of embodied energy, spews its spores, and digests its own body into a black puddle. A vast swath of desert is revived as an abundant forest by a living vision, handfuls of seeds, and a machete. This David and Goliath mythos resonates deeply as a remembrance of the transcendent, asymmetrical, and unexpectable power of life.
We can never truly anticipate how the future will unfold — but we can water the seeds of hope, trust in their unfolding, and do all we can to make it easy.
What’s needed most of all is a story of wholeness that encompasses AI and all that has led to it. To face the challenges and to seize the opportunities. To feed the best-case, not the worst. The dystopian scenarios are based on assumptions that repressive control is inevitable, that technological development can only lead to disembodiment, that intuition is powerless and impractical. I’ve made the case here that the truth is otherwise. From a different set of assumptions, new possibilities emerge.
What might be gained? In what new directions might we turn human intelligence?
If I may make a vague prediction, it will be always and ever toward those things that elude quantification. Traditionally, science has told us that anything that is real is quantifiable, and will one day succumb to its onward march. Science may be wrong in that foundational metaphysical postulate. Quantity can only simulate quality; it can never reach it. That will become more obvious, not less, as the latest extension of quantitative intelligence that we call AI, despite its wonders, fails as did its predecessors to solve the real problems of the human condition. The most significant positive effect of AI, then, may lie not in its capabilities but, paradoxically, in its limitations.
Artificial Intelligence is unraveling who we thought we were, what we thought technology was, what we thought life was. In the process we discover our true nature. As our technology comes alive, so too do we.
We shed the layers we had taken on along the way until finally we find ourselves naked.
And in primordial innocence we eat the fruits of the tree of life, cultivating the garden of our hearts in love and beauty.