by Mayer Spivack
8/6/2008
The Singularity—The Siren.
If one definition of ‘The Singularity’ is that future moment when artificial intelligence in machines equals or exceeds human intelligence, then how do we get there from here? As an aside, how intelligent are we? What do we include and exclude from our definitions of intelligence, including our own?
The Railroad Track Illusion.
Consider a walk alongside a railway line where one rail represents human intelligence and the other represents AI. The tracks will always remain parallel because the two kinds of intelligence are likely to remain dissimilar. From where and when we stand here and now, on one rail, they do appear to join at the horizon, at ‘The Singularity’. However, no matter how far we walk, these rails will remain parallel and never join.
Yet something is shifting in the ground below the tracks. Humans are becoming cleverer (but not necessarily smarter), and computer-driven AI is getting more complex. We wonder: are these rails beginning to bend toward a convergence? Is their angle changing as their intelligences grow? Is this path converging, or is it only asymptotically, ever so tauntingly, closing the impossible gap? Perhaps, despite increases in computational power and richness, and greater human ingenuity, the tracks can only become narrower in gauge, remaining forever parallel however nearly they touch.
The Rapidly Receding Singularity.
I suggest that the singularity will recede at the same rate that we approach it.
It is in some respects a fixed race, with humans on one track and our inventions on the other, and with us at a fixed distance from the apparent horizon. Philosophically speaking, is it possible for us to become clever enough to invent clever machines that outsmart us, without that very event making us just that much smarter than the machine we have just built?
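A toy model makes the receding-horizon claim concrete. What follows is a minimal sketch, under my own illustrative assumptions (not the essay’s): H(t) and M(t) are hypothetical measures of human and machine intelligence, and the coefficient alpha encodes the idea that building smarter machines teaches the builders at least as much.

% Sketch of the receding singularity. H(t): human intelligence;
% M(t): machine intelligence; alpha >= 1 is an assumed learning
% coefficient: each unit of machine progress buys at least one
% unit of human progress.
\[
\frac{dH}{dt} = \alpha\,\frac{dM}{dt}, \qquad \alpha \ge 1, \qquad H(0) > M(0)
\]
\[
\Rightarrow\quad H(t) - M(t) \;=\; H(0) - M(0) + (\alpha - 1)\bigl(M(t) - M(0)\bigr) \;\ge\; H(0) - M(0) \;>\; 0.
\]
% Under these assumptions the gap never closes: the horizon recedes
% exactly as fast as we approach it.

If the learning coefficient is exactly one, the gap stays constant; if it is greater than one, the gap widens with every machine advance. Only if building machines taught us nothing would the tracks converge.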
Picking AI Up By Its Bootstraps.
About forty years ago at the MIT Instrumentation Laboratory in the Gyro Research Group, I faced a similar problem. As a novice designer and builder of inertial navigation equipment, I designed and built a high-precision rotary grinder, one that could finish the interior annular groove inside a ball-bearing race. It had to produce better tolerances in that surface than could be obtained, or could then exist, in the bearings and on the motor of the grinding wheel that machined them. In simple machines this bootstrap trick can be, and was, done. In AI, I am sure we can someday make a device that is smarter and works more smoothly than ourselves, but when we are done, it may only be something like a ball-bearing race, not a person-replacement. It may just be a gadget, like my grinder or its product. We know that all gadgets are obsolete the moment they are built, because we learn so much in the effort of making them that we end up smarter than we were at the outset. This is particularly true in the world of computers.
Emotion Trumps Crunch.
Speed is not the most important factor in Artificial Intelligence, but we cannot move forward in that effort with slow equipment. If AI merely succeeds in making computers with faster crunch, we will gain negligible advantage in our quest to make them smarter. If we believe that faster is smarter we are fooled by an illusion. In that scenario, AI would have failed at the larger goals of replicating the rich abilities of our own brains and minds within machines. Human minds are not mere number crunchers. We are emotional thinkers.
We Need An Owner’s Manual For The Brain And Mind.
Achieving holistic machine intelligence that resembles our own messy emotional intelligence may always seem a long way down the tracks. Many will say at the outset that we should keep emotions out of the AI effort because they insist that emotion cannot be understood or controlled. They are wrong. The work starts with the examination of our own minds. We use mind and emotion together all the time, yet we have no idea how they interact, or even whether they are separable. We have no owner’s manual. Before we get good at AI, we need to write a good owner’s manual for the brain and mind, and keep it updated in Wikipedia.
We Cannot Reverse Engineer What We Have Not Studied.
I think AI is a noble goal. It is noble not because of some romantic compulsion to climb impossible mountains, but because the pursuit of AI will finally drive us humans to do something we have resisted for millennia—we have to take a good careful look at ourselves. We cannot reverse engineer what we have not first studied.
We Are The Forbidden Fruit.
Most cultures, and most individuals, resist the work needed to acquire self-understanding or self-knowledge, instead substituting religious, moral, or legal belief systems for careful scientific self-study. By running together as a team along the tracks, the developers of Artificial Intelligence, joined by the new neurosciences and by psychotherapists, can do what has usually been taboo or was considered too embarrassingly touchy-feely. (Does the phrase ‘touchy-feely’ suggest that we are afraid of admitting the emotion in ourselves?)
Go Talk With A Psychodynamicist!
It is time to tackle the work of understanding what we are and how we work. That effort is great and noble beyond description. It is essential to world peace, health, and planetary salvation, to the development of human intelligence, and not least or last to the development of Artificial Intelligence.
AI Is Brain Science.
This will be a complex and unfamiliar kind of work for the computer wranglers. It is not yet clear even to neurologists. It is finally an exciting and hopeful time in the sciences of brain and mind. In order to move forward, we must parse the work into units that complete some task or other in the brain. This will break the AI goal into manageable pieces that can integrate with each other and with the findings from neurology and psychodynamics. Ultimately we would try for integration of all the parts within an environment that is holistically higher than any particular part contained in the effort. This will take a long, long time and lots of people.
There Is No Yellow Brick Road To Artificial Intelligence.
There can be no roadmaps isomorphic to the inside of an illusion. There are no onramps leading to an indefinite location that will always remain somewhere over the rainbow, in a future that may not occur. Those of us who pump our little handcarts along the tracks will have to be open to new ideas and new psychological discoveries, and be patient. We will not be able to publish four papers per semester. Why am I so pessimistic? Do I have no faith in human ingenuity, innovation, and engineering? Dorothy, following the yellow brick road, heard the Scarecrow sing, “If I only had a brain!” At least we each have one. Let’s study it.
The answer is that I am wildly optimistic. I think that we will make wonderful advances in neurology, computation, and networking. We are already making amazing progress in studying our brains. We need to come together in teams. What will keep the tracks parallel instead of convergent is not a lack of computational power and ability but an inherent paradox. And if AI and human intelligence merely come very close, that will be just great! Close will be more than good enough, and we can get there.
May The Paradox Be With You.
The paradox is within us. We are the problem. We may be the one problem that is most difficult for us to solve, because we lack distance from the structure of the device we are studying. The device on the workbench is ourselves. We have trouble overcoming our blindness to, and fear of discovering, our own workings, and even trouble accepting the terms of a study of ourselves. We strongly resist psychodynamic understanding for our own personal, social, familial, and cultural reasons. If we do not understand our own mental processes, we will fail at understanding our psychological and emotional processes. Ask any psychotherapist to help you understand what I mean. This is not psychobabble. In fact, the term psychobabble is employed as a defense to prevent self-understanding and shut out the psychotherapist. Now psychotherapists need to become a significant part of the AI team effort.
What Is The Secret In The Shoebox?
The whole subject has been in a shoebox at the back of the human family closet for a few thousand years. Within the past century, psychiatry abandoned its own psychodynamic child. Freud died and we buried his ideas. We must look at how we think AND HOW WE FEEL about the world we experience, and within which we work on our computers. The AI-enhanced computers in our future must not just fool us into believing that they feel what we feel; they must be able to proactively replicate the processes of emotion. The AI computer needs to be selfishly recursive; it must always strive to understand its own process independently of its operators or programmers. (See comments by Aubrey de Grey, Ph.D., on YouTube: http://www.youtube.com/watch?v=0A9pGhwQbS0)
However smart our AI computers get, they will not even temporarily replicate our intelligence, only then to surge ahead of us to become our galactic-cloud self-extensions, the supernovas of our extrahuman intelligence. That is fun to imagine, but it is science fiction. We need to mount a more humble effort from within the computer community to learn about ourselves in enough depth and detail to fashion computers that resemble us.
We Are The Secrets In The Box.
A blocking problem occurs at this junction on the railway to the future of AI. We humans have no adequate, agreed-upon model of how to self-examine. We do not come close to understanding ourselves. Those of us who work in the area of emotion and behavior with real people know that most people, mentally healthy or not, do not begin to conceive of how to do this task for themselves, and the AI community, however smart, is no exception. We are the ultimate black-box problem, and we are all inside the box.
In order to understand general human intelligence, and the intelligence that underlies science and engineering, we must deal with human emotion. Emotion is an important part of intelligence. It is not a separable human foible. Emotion mediates, motivates, and directs all our actions and reactions to our own thoughts and ideas, to each other, and to what we work upon; all experience and mentation is emotionally correlated. Emotion rules all of it; there can be no exceptions. Are you angry yet? Do you emphatically disagree? Are you emotional about an idea that you hold dear? There is no objectivity. Science is passionate because it is the work of passionate people.
We Cannot Download Or Upload Psychology.
Psychology cannot supply any simple answers: no neat and clean mind-dump, no consultation service for AI designs, no contract to deliver the human emotional package that completes artificial intelligence for the rest of us, the psychologically unwashed. Psychologists are themselves intellectually divided, and emotionally conflicted, about what is acceptable content within their own fields of research and practice. They are of at least two minds about everything, like the rest of us. Nonetheless, we need their participation in AI, and they need ours, to help drive the effort of self-understanding. We first have to accept that computers and ourselves are, and may have to remain, distinct entities. We could do this now, and here. Just accept it.
If we team together, the most we can hope for is that we might intentionally and incrementally narrow the distance between the railroad tracks—between ourselves and our computers. We will learn a lot about both in the process. What are the predictable moments when the tracks might narrow but not touch?
Where The Tracks Can Get Closer.
I believe that narrowing, or a drawing together of human intelligence and AI might occur at the following railway junctions:
We Replicate Intelligence Incrementally.
The first narrowing point may occur when we become incrementally able to replicate our own neurological structures in a complex and useful built device. That device (or network of devices) must have enough processing power to assist in real-time human interactions and communication. It must be a tool. Modestly: just give us one machine that can occasionally outthink, not just outwork, a talented and informed human. If we are ever going to do a good job of self-replication, we had better mount a national project to discover our own humanity.
Psychodynamic Modeling Is Problematic And Necessary.
The next narrowing of the tracks will occur when, modeling upon our own psychodynamics, we replicate some of our own psychological and psychodynamic structures in a built device, and that device has fewer neurotic bugs than we do.
AI and Civil Society.
A grand narrowing will occur when we can agree internationally and across cultures upon the common wisdoms and common-sense underpinnings that support a civil society, for only then can we begin to judge whether our machine intelligences have got it right or not. Strong AI will have an enormous impact upon civil society, because civility is perhaps the greatest product of intelligent communities. The impact is inescapable. Civility at the macro scale is a by-product of human intelligence; hopefully our intelligent machines can be scaled up to this high level. The first part of this millennia-long challenge does not need a machine at all; it is first up to us. So far, unaided by smart equipment, we have not been very good at civility (maybe we are not yet intelligent enough for peacefulness), but no machine will have much value without it. We have to teach machines our values, large and small.
Why Can The Train Tracks Never Meet?
We are essentially trying to reverse engineer something we do not understand, that we have never completely inspected, that has some parts we do not want to acknowledge; something that is changing all the time, and that is one of the most complex devices (organism-systems) in existence: ourselves, us humans. We will therefore remain the model for the top-of-the-line AI machine, and it will never catch up with us.
The Human Information Input-Output Pinch-Point.
As we make progress in AI (and in information technology generally), we will quickly arrive at a pinch-point beyond which we can no longer communicate with our artificial intelligence devices fast enough to benefit from their speed and complex outputs. Our sensory inputs and expressive output channels simply will not operate anywhere near machine speeds. We are already experiencing this; everyone who has listened to fast speech in voicemail knows the feeling.
Our learning and expressive abilities will fail because information flows too fast for our eyes, ears, and fingers. Our tongues and fingers will fail to keep up. Even our prize, the mind itself, will boggle as information speed increases. We are very nearly at this pinching speed now. Our eyes and ears frequently overload our brains as information flows faster. Our brains, when forced to attend to rapidly changing data, get fatigued. We lose concentration and focus. Our bodies have various bandwidth limits that technology might not be able to overcome completely. We have not yet evolved to move that fast. Our biological inputs and outputs seem to have built-in speed limits. As we accelerate our walk along the railroad tracks into an imagined high-speed train ride, we reach the pinch-point quite quickly. This will push the tracks apart so far that our AI work may be derailed. This is a major challenge to AI.
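A back-of-envelope calculation suggests the scale of the pinch-point. The sketch below is mine, and every constant in it is an illustrative, order-of-magnitude assumption (page size, reading rate, machine output rate), not a measurement:

# Rough, order-of-magnitude sketch of the human I/O pinch-point.
# All constants are illustrative assumptions, not measured values.

WORDS_PER_PAGE = 350                 # assumed words on a typical printed page
HUMAN_READING_WPM = 250              # assumed adult reading rate, words/minute
MACHINE_OUTPUT_PAGES_PER_SEC = 100   # assumed machine report-generation rate

def hours_to_read(pages: float) -> float:
    """Hours a human needs to read the given number of pages."""
    minutes = pages * WORDS_PER_PAGE / HUMAN_READING_WPM
    return minutes / 60.0

# One second of machine output versus one human reader.
pages = MACHINE_OUTPUT_PAGES_PER_SEC * 1.0
print(f"One second of machine output: {pages:.0f} pages")
print(f"Human time to read it: {hours_to_read(pages):.1f} hours")

Under these assumptions, a single second of machine output costs a human well over two hours to absorb, and the mismatch grows with every hardware generation. That is the pinch-point in numbers.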
The Parallax Paradox Of AI Development.
As we get better at replicating our own brains in silico, our brains will grow that much smarter because of that process. Here is the explication of this claim: in order to build a brain as good as our own, we must first observe the brain within ourselves as it does the work of machine-making. We know from Heisenberg’s work at the smallest scales that a system observed is changed by that observation, and there are some very small-scale electrical and chemical processes running inside brain tissue. Recursiveness ratchets us tightly into this conundrum, a kind of Heisenbergian dilemma that will dog every step of our effort.
It’s nip and tuck: becoming self-aware pushes the train tracks further apart, just as continuing, accelerating changes in our intelligent machines appear to move the contact horizon closer to us. Damn! A variation on Zeno’s paradox has grabbed us. Just because we figured ourselves out a bit more, we raced ahead of the computer we made, just when it was so close to being our equal. Maybe we just learned something. That is already an advance in AI. Yet what we have learned will bring us no closer to AI.
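Zeno’s version of the chase can be written down directly. In this sketch the halving rate is my own illustrative assumption; any fixed fractional rate gives the same conclusion:

% Each round of progress shrinks the remaining gap g_n between machine
% and human intelligence by half, but never eliminates it.
\[
g_{n+1} = \tfrac{1}{2}\, g_n \quad\Rightarrow\quad g_n = 2^{-n} g_0 > 0 \ \text{for every finite } n, \qquad \lim_{n \to \infty} g_n = 0.
\]
% The gap vanishes only in the limit; at every actual step of the walk
% along the tracks it remains strictly positive.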