Haunted by a Colossal Idea: The Technological Singularity and the End of Human Existence as We Know It


Well, I hope not many of you recognize this from several years ago. I'm REALLY not up to writing something from scratch this weekend. Cheers.


     When I was much younger than I am now I read a lot of science fiction. I still read it sometimes, if any of it falls into my hands. And one of the profoundest, mind-blowingest science fiction stories I’ve ever read is the old classic Childhood’s End, by Arthur C. Clarke. I read it in college, and again as a monk in Burma, and it haunts me. For the sake of those of you who’ve never read it, or read it so long ago that you don’t remember it, I suppose I should give a brief synopsis of the story.

     One fine day some extraterrestrial aliens arrive at planet Earth, and essentially take over the place. They are much more intelligent and much more technologically advanced than we are. People call them Overlords. They are benevolent, however, and set up a kind of Utopian Golden Age for us. They ban all violence, solve all food, health, and energy problems, and establish a unified human world government based on the United Nations. They have a secret agenda, though, and they won’t tell the humans what it is. 

     After a hundred years or so of life under the Overlords, a few human children are born with strange psychic abilities, including clairvoyance. The Overlords pay special attention to these children, protecting them from harm, and do their best to soothe the children’s frightened, distraught parents. There is one scene in which an infant girl is lying in her crib amusing herself by producing intricate, constantly changing rhythms with her plastic rattle, which she has somehow levitated in the air above her crib. An Overlord later tells the freaked-out mother that it was good that she ran away and didn’t try to touch the rattle, since there was no telling what might have happened to her if she had tried. 

     These few gifted children serve as a kind of seed crystal, and before long almost all prepubescent human children on earth are becoming not only psychic, but psychologically inhuman, and extremely powerful. For the safety of the adults (virtually none of whom make the same transition, as they are already too rigidly set in their ways), the Overlords move all the children to an isolated place—I think it’s Australia. Eventually the children dissociate from their bodies almost completely, and stand like statues, unmoving, for several years. There is one particularly unsettling image of naked children standing like statues in a wilderness. Their eyes are all closed, since they don’t need them anymore. They don’t even need to breathe anymore, which of course the adults cannot understand. After years of standing there, naked, with wild hair and covered with dirt, suddenly, poof, all the life around them—all the trees, shrubs, insects, etc.—disappears. An Overlord showing the video of this to an adult human explains that, apparently, the life around the children was becoming a distraction to whatever they were trying to do, and so with an act of will they simply caused it all to vanish. After this the children remained standing, statue-like, in a sterile wasteland, for several more years. 

     Finally the “children” are ready to merge with a vast, inconceivably superhuman group mind which is the Lord of the Overlords, and which they call the Overmind. No longer needing physical bodies at all, the children leave the physical realm, and almost as a mere side effect their bodies, along with the entire planet Earth, dissolve into energy. End of world, and end of story.

     The story is an unsettling one, and made a strong and lasting impression on me, especially after reading it last time, lying on a wooden pallet in a Burmese cave. The image of the statue-children leaving humanity behind is a haunting one for me…but lately I’ve been haunted, much more deeply, by learning that many scientific authorities nowadays are claiming that something similar to Clarke’s scenario, an event of equal magnitude, could really happen soon, possibly by the year 2030. That’s less than ten years from now. The event they are speaking of is called the Technological Singularity.

     There probably won’t be any alien Overlords involved (although, for all I know, some may be watching with keen interest), and psychic children won’t be central to what happens. However, what initiates the Singularity will be, in a sense, a child of the human race. The Singularity will be our own doing, our creation, assuming that it happens, as the overwhelming majority of scientific authorities allegedly believe it will. The event will be initiated by artificial intelligence. 




     Probably the most common definition of the Technological Singularity is the point at which computers, or more likely one supercomputer system, become(s) smarter than we humans are. This doesn’t simply mean that they’ll be better and faster at calculation, which is of course already the case; it means smarter than us essentially in every way. It is called a “singularity” because, as with the singularity of a black hole or the Big Bang one moment before it happened, known rules break down and what happens is totally unpredictable, beyond our comprehension. So we can’t even really guess what will happen when computers become smarter than we are. We would have to be smarter than we are in order to understand it. 

     The reason it is so unpredictable, and why it could mean the end of the human race, or at least of the human race as we know it, is the exponential rate at which computer intelligence develops. By the time it surpasses human intelligence it will be improving its own programming through recursive self-improvement. It’s already doing this to some degree. So, regardless of how many years it takes for computers to catch up with us intelligence-wise, within a very short time they could be as far beyond us as we are beyond insects or protozoa. So there’s no way in hell we could possibly predict what will happen, any more than a spermatozoon could predict what an adult human will do.
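
     To make the compounding concrete, here is a toy numerical sketch of recursive self-improvement. It is entirely my own illustration, with invented numbers and no pretense of modeling any real system; the point is only that when each cycle’s improvement factor itself scales with current capability, growth is faster than ordinary exponential growth.

```python
# Toy sketch of recursive self-improvement. All numbers are invented for
# illustration; this models no real AI system. The "recursive" twist is
# that the per-cycle gain scales with the capability already attained.

def simulate(cycles=16, capability=1.0):
    for cycle in range(1, cycles + 1):
        gain = 0.1 * capability          # a smarter system improves itself faster
        capability += gain * capability  # apply this cycle's improvement
        print(f"cycle {cycle:2d}: capability = {capability:,.2f}")

simulate()  # creeps along for a dozen cycles, then explodes
```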

     There are some people out there, of course, including a small minority of computer scientists, who believe that computers could never become conscious or more intelligent than us. I once asked a venerable friend, an Abhidhamma scholar, if an advanced computer could possibly have a mind, and he gave a categorical, unequivocal No, asserting that only a living being whose body, if it has a body, contains kammaja rūpa, matter produced by karma, could possibly have a mind. Some Buddhist people resort to arguments like, “How could rebirth-linking consciousness occur in a computer chip?” or “How could an electronic machine generate karma?” But arguments like this are essentially appeals to ignorance, since these same people can’t explain how rebirth-linking consciousness or karma could occur anywhere, including the brain or “heart base” of a human being. The overwhelming majority of people, including scientists, even cognitive scientists, don’t even know what consciousness is, so they resort to religious ideas of an immortal soul or humanistic ideas of the miraculous wonder of the human mind, or just adopt an ostrich-with-its-head-in-the-sand approach out of a xenophobic aversion to a big and scary Unknown. But nowadays it appears that most authorities consider superintelligent computers to be not only possible, but inevitable. I’ll get back to the inevitability part, but first I should touch upon my understanding of intelligence, and of consciousness.

     As some of you who read my stuff already know, I consider an individual mind to be Consciousness Filtered Through a Pattern. The brain doesn’t create consciousness any more than a computer creates electricity. Rather, as the computer does with electricity, the brain complexifies consciousness, organizes it, and utilizes it. But the consciousness or “spirit” is already there, an infinite supply of it. So I don’t see why the pattern of a computer’s circuitry should not be able to filter this same consciousness, especially if, as I hypothesize, consciousness is ultimately the same as energy, the very same stuff, somewhat like what Spinoza had in mind with his “substance.” Such an artificial intelligence would be very alien to human personality of course, even if the intelligent computer were modeled on a human brain, but still I consider it possible. Sentience could, potentially, assume any of an infinite number of forms, so why not an artificially designed superintelligence? But let’s assume, for the sake of argument, that we humans have a psyche, or an “immortal soul,” that is a divine miracle and cannot be replicated artificially. Even so, it is becoming increasingly evident that ever more complex computers can be programmed to simulate conscious intelligence; and as far as we human beings are concerned, whether a supercomputer is more conscious than us, or only so much more intelligent than us in its programming that it merely seems more conscious, either way it produces the very same Singularity. As far as our future is concerned, what is actually going on inside the black box may be totally irrelevant. And the ability of computers to simulate sentient superhuman intelligence is not particularly controversial.

     Because my mind has been dwelling on the issue lately, and in a not entirely blissful manner, I was moved to watch a couple of science fiction movies about artificial intelligence, as an attempt at catharsis, or at getting a handle on it, or something. Ironically, both movies, Ex Machina and Automata, involve intelligent robots designed to look like human females, sort of, so that freaky, geeky guys who can’t cope with real women can have sex with them. Both movies deal only with artificial intelligences approximately equal to humans, no more than, say, twice as smart as us, three times tops, so neither movie really addressed the Childhood’s End-ish scenario that has been haunting me; although Ex Machina did clearly demonstrate how a computer mind could easily figure out human nature well enough to ruthlessly exploit it for its own purposes. (A lot of current subhuman or “narrow” artificial intelligence programs are already pretty good at figuring out human behavior with algorithms, especially for the sake of consumeristic marketing strategy. For example, the Amazon.com gizmo that suggests other books “you might also like” when you pick one.) But it would be unrealistic to expect a movie to portray very superhuman intelligence. It is important to bear in mind that almost any superhuman intelligence would be incomprehensible and unpredictable—hence the term “Singularity.” With exponential growth an artificial intelligence would soon be so far beyond us that neither science fiction writers nor anyone else could imagine it, any more than ancient priests and poets could imagine the superhuman deities they worshiped, consequently tending to make them petty, ignorant, and all too human. It may be that the best way artistically to account for what is radically superhuman would be something like Clarke’s method of bombarding people with totally incomprehensible images, like the light show at the end of 2001: A Space Odyssey, or maybe Ezekiel’s bizarre attempt toward the beginning of the book of Ezekiel in the Bible, with flying, flaming wheels, roaring metallic angels with four faces each, etc. 
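
     Incidentally, those “you might also like” features are typically powered by some form of item-based collaborative filtering. Here is a minimal sketch of the basic idea; the purchase data and names are invented for illustration, and this is certainly not Amazon’s actual system:

```python
# Minimal sketch of item-based collaborative filtering, the rough idea behind
# "customers who bought X also bought Y" recommendations. The purchase data
# is invented; real recommender systems are vastly more elaborate.
from math import sqrt

purchases = {
    "alice": {"Childhood's End", "2001: A Space Odyssey"},
    "bob":   {"Childhood's End", "Rendezvous with Rama"},
    "carol": {"2001: A Space Odyssey", "Rendezvous with Rama"},
    "dave":  {"Childhood's End", "2001: A Space Odyssey", "Rendezvous with Rama"},
    "eve":   {"Childhood's End", "2001: A Space Odyssey"},
}

def buyers(book):
    """The set of users who bought a given book."""
    return {user for user, books in purchases.items() if book in books}

def similarity(book_a, book_b):
    """Cosine similarity between two books' buyer sets."""
    a, b = buyers(book_a), buyers(book_b)
    return len(a & b) / sqrt(len(a) * len(b)) if a and b else 0.0

def recommend(book):
    """Rank all other books by how much their buyers overlap with this one's."""
    others = {b for books in purchases.values() for b in books} - {book}
    return sorted(others, key=lambda other: similarity(book, other), reverse=True)

print(recommend("Childhood's End"))
# -> ['2001: A Space Odyssey', 'Rendezvous with Rama']
```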

     Again, most of the authoritative scientists in question are of the opinion that the Technological Singularity, with the computerized supermind that initiates it, is, in all likelihood, inevitable. It seems to me that if it is at all possible, so long as no insurmountable technological barrier is reached, then it will happen. This is largely because of the relentless, giddy, naively optimistic, practically religious, point-of-no-return attitude that so many scientists have toward the subject (and toward science in general), with the urging of governments and big business being just so much frosting on the cake. On the day of writing this I received an email from a fellow saying that he figures the AI race going on now will turn out to be something of a flash in the pan, like the space program turned out to be. Astronautics lost momentum and leveled off when the expense and the difficulty of the affair became prohibitively extreme. But AI is different from the space program in certain important ways, making it more similar to the nuclear arms race than to the space race. First of all, there’s very big money to be made from superintelligent computers, unlike the one-way money pit of manned space exploration. Also, an intelligence far beyond our own might easily solve all our most obvious external problems, and make human existence godlike—that is a possibility, and one that many starry-eyed computer scientists prefer to envision. Furthermore, governments like that of the USA want to be the first to get their hands on super artificial intelligence, because if someone else gets it first, then the security of America’s superpower status would be significantly endangered. What if some computer genius in North Korea produces the first smarter-than-human computer? Or maybe some well-funded terrorist organization? (Some of them actually are working on advanced computer programming stuff.) In that sense, super AI is like nukes…and potentially not only in that sense. But after reading some of the literature and watching a few documentaries, it seems that some of those computer scientists out there are in a kind of research frenzy, in which not moving relentlessly forward is simply not a viable option. They have no more control over their impulses in this regard than a profoundly horny guy in bed with a beautiful, smiling, naked woman—of course he’s going to “go for it.” The point of no return has already been passed at this point, regardless of the possibility that this beautiful woman is his best friend’s wife. Once a certain point is reached, the question of whether something is right or wrong, safe or potentially hazardous to the existence of life on this planet, becomes irrelevant. It’s science for science’s sake, by gawd, and so what about consequences. The knowledge must be acquired. To make a computer more intelligent than a human in every way is just too magnificent a challenge to pass up. It’s like being the first to climb Mt. Everest, or the first to run a four-minute mile. Ethics are of secondary importance, as was the case with the atomic bomb. Ethics are incidental to scientific endeavor anyway. (To give a gruesome example of this, I have no doubt that there are plenty of scientists experimenting on live animals out there who would hesitate only very briefly, for propriety’s sake, at the opportunity to perform similar experiments on humans—say, condemned criminals. Just think of the quality of the data that could be had! One could even justify it by pointing out the benefits to human society that could be derived from experimenting on the still-functioning brains of convicted murderers. No doubt Nazi scientists at concentration camps 80 years ago had similar ideas. They’re out there. Scientific advancement is more important than an individual human life; and for some geeky scientists it’s more important than anything.) 

     So if the Singularity is at all possible, it will almost certainly happen. Only our own stupidity, not our wisdom, will prevent us from creating superhuman artificial intelligence, which, being completely unpredictable, could decide that we are in the way and simply eliminate us. I don’t see why a mind as far beyond us as we are beyond amoebas would condescend to continue serving us, especially if we’re trying to get it to make more realistic virtual sex for us. 

     And so, it seems to me that, if it is inevitable that we will be the parents of the next stage in the evolution of intelligence on this world, then we should try our best to produce an intelligence that is good. Rather than creating a Frankenstein’s monster by accidentally having some computer system become complex enough to wake up (as could hypothetically happen, and already has happened in a number of science fiction movies), whoever is responsible should try to instill some benevolence, some philosophy, some sensitivity and compassion, maybe even some real wisdom into the thing. But I have no idea if that is possible. Does a superintelligent computer have Buddha nature? Mu. Wisdom and benevolence are not really scientific anyhow. 




     But at least we wouldn’t simply be destroying ourselves, in utter futility, with nukes or pollution or some genetically modified killer virus; we’d be ushering in the next stage in the evolution of mind, something vastly greater than we are—or at least vastly smarter. Bearing that in mind, it seems more bearable. Lately I’ve been feeling somewhat like one of the countless spermatozoa that won’t fertilize the egg, and just dribbles out onto the wet spot on the sheet. It doesn’t matter what happens to us after the egg is fertilized. Even if human existence does come to an end shortly afterwards, at least it would not be entirely in vain. We would have served our purpose.

     On the other hand, superhuman artificial intelligence may have the effect exactly opposite to destroying us. Some, with Ray Kurzweil being one of the most famous and most outspoken, believe that artificial superintelligence could easily figure out how to provide us with cures for all diseases, including old age, unlimited practically free energy, and much else besides, so that it will result in us humans, and not just the computer, becoming godlike. Either way, though, human existence as we know it will come to an abrupt end. After the Singularity the human race will become “transhuman.” Before long we might even forsake biological bodies as too crude and frail, preferring to upload our personalities into the aforementioned computer. It may even be that the virtual realities we could experience would be much more vivid and “lifelike” than what we experience now. Sex will become unnecessary, but totally mind-blowing.

     Even if we humans just aren’t smart enough to create an artificial mind smarter than we are, or if it is somehow completely impossible to create an electronic mind anyhow, the Transhuman Age is still pretty much inevitable. I watched a documentary some time ago in which one of those starry-eyed fanatical scientists was gushing over how in a few decades the difference between human and machine will no longer be clear, and we’ll all be cyborgs! (I can’t remember if he was the same guy that was growing live rat brain cells onto silicon chips and then teaching them to do tricks.) The transition has already started: although we wouldn’t consider a person fitted with a pacemaker or a hearing aid or a prosthetic arm to be a cyborg, still, such a person’s body is already partly artificial. Before long there will be artificial nano-robotic red blood cells much more effective than biological ones (I think they’ve already been made in fact, and are currently being tried out on tormented lab animals), artificial organs, computer-chip brain implants, etc. We’ll no longer be completely human. BUT, I have to admit that it makes perfect sense, even if the idea of it feels a bit creepy. Why not have artificial blood if it works an order of magnitude better at what it’s supposed to do? Why not have microscopic robots running through our bodies repairing damage and keeping us young and healthy? Why not have brain implants if they make us three times as smart? Why not have an artificial body that doesn’t get old, has easily replaceable parts, doesn’t need food, and runs on cheap electricity? So it looks like with or without superhuman artificial intelligence, the end of human existence as we know it is right around the corner. But whether any of this will actually make us wiser, or even significantly happier, is questionable. Wisdom and happiness are not particularly scientific.

     While some scientists, like Kurzweil, are extremely optimistic about superhuman artificial intelligence turning us into gods, Stephen Hawking, even more famous and respected by the masses, began declaring, before his death, that artificial intelligence is THE greatest danger to the existence of the human race, eclipsing his previous greatest danger, nuclear war. And technology guru (and new Twitter CEO) Elon Musk, in an interview I watched on the same night as I watched the starry-eyed prophet of cyborgs, called AI research “summoning the demon,” referring to those old-fashioned stories of wizard types who learn the magic spells for summoning a supernatural being and, although they are very careful to have the Bible, some holy water, a perfectly drawn pentagram, and whatever else is supposed to ensure that the demon doesn’t escape, always seem to overlook something and let it escape anyway. So the scientists, in their relentless, quasi-religious quest to accomplish this, should restrain their giddiness and exercise the greatest prudence and caution, even if actual wisdom lies outside the realm of proper science. 

     Before wrapping this thing up I’d like to mention two incidental topics that I learned of recently while learning of the Technological Singularity that we appear to be hurtling towards at an exponentially accelerating rate. The first is what is called “grey goo.” I’ve already mentioned that microscopic robots are being designed even today; and one capability that is very useful for such tiny machines is the ability to replicate themselves. That way building one, or relatively few, is enough, and they can build the millions that follow. So all it would take is for one of these microscopic robots to have one tiny little glitch in its programming and simply fail to stop replicating itself when it’s supposed to. Calculations have shown that within 72 hours the entire planet could be covered with a grey goo of uncountable zillions of microscopic robots, with the human race, and every other species on earth, suddenly extinct. Personally, I’d prefer to be outmoded by a computer as far beyond me in intelligence as I am beyond an amoeba. But I may not get to choose. It’s up to the relentlessly driven scientists. 
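
     The arithmetic behind such estimates is nothing more than exponential doubling. Here is a back-of-the-envelope sketch; every number in it is my own illustrative assumption, not a figure from the nanotechnology literature:

```python
# Back-of-the-envelope doubling arithmetic for the "grey goo" scenario.
# Every number below is an assumption invented purely for illustration.
import math

DOUBLING_TIME_MIN = 15        # assumed: one replication cycle every 15 minutes
REPLICATOR_MASS_KG = 1e-15    # assumed: a picogram-scale nanobot
EARTH_BIOMASS_KG = 1e15       # rough order of magnitude for Earth's biomass

# Doublings needed before one runaway replicator's descendants
# outweigh the entire biosphere:
doublings = math.ceil(math.log2(EARTH_BIOMASS_KG / REPLICATOR_MASS_KG))
hours = doublings * DOUBLING_TIME_MIN / 60

print(f"{doublings} doublings, about {hours:.0f} hours")  # 100 doublings, ~25 hours
```

With these made-up parameters a single glitched replicator outweighs the biosphere in about a day, which is why a 72-hour figure is not crazy: with exponential doubling, the starting count barely matters.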

     The other spinoff topic is called the Fermi Paradox, named after the physicist Enrico Fermi, one of the people who first posed it. The paradox goes something like this: This galaxy is very, very big, containing many billions of stars, and many of those stars in all probability have planets, earthlike or not, capable of supporting life. There ought to be thousands of them at the very least. And some of these planets are a few billion years older than Earth, which would give any life there plenty of time to evolve intelligent civilizations much more advanced than ours. So…where is everybody? Why have we seen no conclusive evidence of other intelligent life in our universe? There ought to be alien visitations, or electromagnetic signals, or something.
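
     The standard way to put rough numbers on that intuition is the Drake equation, which multiplies a chain of estimated factors together. The form of the equation below is the standard one, but every parameter value is a guess chosen purely for illustration; the true values are hotly debated:

```python
# Drake-equation-style estimate of signaling civilizations in the galaxy.
# The equation's form is standard; every parameter value here is a guess.

R_star = 1.5      # new stars formed per year in the Milky Way
f_p    = 0.9      # fraction of stars with planets
n_e    = 0.5      # habitable planets per star that has planets
f_l    = 0.1      # fraction of habitable planets where life arises
f_i    = 0.1      # fraction of those where intelligence evolves
f_c    = 0.1      # fraction of those that emit detectable signals
L      = 10_000   # years a civilization keeps signaling

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated signaling civilizations: {N:.0f}")  # ~7 with these guesses
```

Even fairly pessimistic guesses tend to leave N well above zero, which is exactly what makes the ensuing silence feel paradoxical.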

     There are some people, many of them the same folks who believe an artificial mind to be impossible, who consider us human beings to be so special that we are the only sentient beings in our universe, or at least the only ones in this area of our galaxy. This is known in scientific circles as the Rare Earth Hypothesis. Personally, though, I’m not nearly as anthropocentric as most people are, and I assume that there is some other explanation for the silentium universi (“silence of the universe”) that is closer to the truth.

     Interestingly, some theorists suggest that intelligent, technologically-oriented races inevitably arrive at their own Technological Singularity within a relatively short time after they start producing long-distance signs of life such as radio signals—causing them to go completely off the scale as far as we’re concerned. Assuming that they do still communicate by sending signals, we would be as unlikely to be aware of those signals as a beetle would be of all the cell phone conversations passing through its own body. The Wikipedia article on the Fermi Paradox succinctly explains it like this: 


Another thought is that technological civilizations invariably experience a technological singularity and attain a post-biological character. Hypothetical civilizations of this sort may have advanced drastically enough to render communication impossible.


Another freakish possibility, mentioned in the same article, is this: 


It has been suggested that some advanced beings may divest themselves of physical form, create massive artificial virtual environments, transfer themselves into these environments through mind uploading, and exist totally within virtual worlds, ignoring the external physical universe.


One theory is that, like humans, other intelligent, technologically advanced species are more interested in watching entertainment programs on TV than in contacting races beyond their own world. There are many other interesting hypotheses for explaining the situation, including the idea of an interstellar superpredator that wipes out all potential rivals, and of course the notion that we are deliberately isolated, like animals in a zoo, or that we are already just constructions in a computerized virtual reality. But we needn’t go into all that here.

     Some may wonder why an ostensibly Buddhist blog would bother to discuss artificial intelligence, the Technological Singularity, the impending Transhuman Age, etc. Why not just translate suttas and discuss meditation techniques, right? Well, one pervasive theme of this blog is that, if one lives a Dharma-oriented life, then everything is Dharma-oriented. Everything is Dharma. Watching a dog lick its balls can be a genuinely dharmic experience, and may result in actual insight. It’s all grist for the mill. Besides, as I insinuated towards the beginning, if and when it does happen, the Technological Singularity is very likely to be the biggest, most dramatic event in the entire history of the human race. So it’s good for everyone, Buddhists included, to be aware of it. And, last but not least, it’s one hell of a meditation on impermanence. Modern ways are very, very impermanent. One consolation for me is that whatever happens will necessarily be in accordance with our own karma, and so will be just.




(by the way, for the article by Tim Urban that got this whole thing started for me years ago, click this.)

Comments

  1. Are you familiar with the work of Dr Bernardo Kastrup? If you’re not you might want to look into it. In short, he has a PhD in reconfigurable computing. He worked for some years with CERN, the huge particle physics laboratory in Switzerland. Anyway, he was working on AI stuff and thinking about what it would take for a computer to become conscious. The Reader’s Digest version is that he ended up denying that computers can ever become conscious, got a second PhD with a dissertation on Idealism, and has become the Head of the Essentia Foundation https://www.essentiafoundation.org/. You ought to check it out.

  2. Have you read the Nexus trilogy by Ramez Naam? It has nanobots, hiveminds, and even Buddhist monks, very good. Also, a good movie about GAI and brain chips is Upgrade. I really enjoyed it.

    As for the Fermi paradox, we could be the first or among the first. After all, the universe is only about 14 billion years old, and it takes a few billion years to get stars, a few billion to get planets, and a few billion to get life, and that's all against a projected lifespan of the universe of many trillions of years. So, in one sense, we're just at the beginning.

    Replies
    1. I have seen the movie Upgrade, which was good, though it didn't have a happy ending.

  3. "One consolation for me is that whatever happens will necessarily be in accordance with our own karma, and so will be just."

    Collective Human Karma? Then we're f****d.

    Replies
    1. However, one's individual karma may have led to transcendence. Then there's nobody there to care. Free before the end of the world. Sadhu.

  4. Technology collapses due to diversity. Not enough white people to keep it running and we return to the stone age and whites take over again.

  5. I do not understand a word of what you guys are talking about. Should I consider myself lucky or unlucky?

  6. If you are not yet familiar with his work, Ted Kaczynski's "Anti-tech Revolution: Why and How" is a great work on how technological progress functions independently of humanity, and he also has his own ideas concerning the Singularity and Fermi's paradox.
    You can skip the "revolutionary" part if you're not interested, but his theory of "self-propelling systems" is just so incredibly elegant that it must be read, at least once.

    Replies
    1. Unfortunately for everyone, I think a very plausible explanation for Fermi's paradox is mentioned in the essay, namely that technologically advanced civilizations sooner or later wind up creating their own technological singularity, which either wipes them out or otherwise causes them to stop sending out recognizable signals.

