Monday, June 27, 2011

Three arguments against the singularity

Here's a fascinating post by science fiction author Charles Stross* on why he thinks the singularity isn't going to happen:
I periodically get email from folks who, having read "Accelerando", assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it's time to set the record straight and say what I really think.

Short version: Santa Claus doesn't exist.

Before we get to that, perhaps I should define "singularity." I'm no expert on the subject, but here's a website that will give you a good idea:
John von Neumann was quoted as saying that "the ever accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." He defined the Singularity as the moment beyond which "technological progress will become incomprehensibly rapid and complicated."

Vernor Vinge introduced the term Technological Singularity in his science fiction novel Marooned in Realtime (1986) and later developed the concept in his essay The Coming Technological Singularity (1993). His definition of the Singularity is widely known as the event horizon thesis and in essence says that trans- or post-human minds will imply a weirder future than we can imagine:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. [...] I think it's fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown."

Charlie Stross includes some useful links in his post, too, if you're really interested in the argument. But I just wanted to pick out a few details, such as this argument about artificial intelligence:
First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own. ...

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don't want to be sued for maintenance by an abandoned software development project.

Eventually, we may reach the point where we could create conscious, self-aware artificial intelligences - computers who are people, for all intents and purposes. But will we? On the one hand, it's hard to imagine why. But on the other, it's hard to imagine that no one would do it, if it were actually possible.

You can compare the human brain to a computer, but there are profound differences. In particular, our thinking mind seems to be inextricably bound up with our emotions. There is no disembodied mind separate from our flesh-and-blood bodies, no "spirit" apart from our physical selves, no matter how it might sometimes seem.

In fact, we human beings are animals first, and thinking beings second. Computers aren't animals. And why would we want them to be? Computers are useful tools, but do we really want computers as peers? As masters? Even if that were possible, it's hard to imagine why we'd do something like that.

But then, here's another possibility:
Karl Schroeder suggested one interesting solution to the AI/consciousness ethical bind, which I used in my novel Rule 34. Consciousness seems to be a mechanism for recursively modeling internal states within a body. In most humans, it reflexively applies to the human being's own person: but some people who have suffered neurological damage (due to cancer or traumatic injury) project their sense of identity onto an external object. Or they are convinced that they are dead, even though they know their body is physically alive and moving around.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on its external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.

Hmm,... that sounds kind of creepy, doesn't it? Do we really want another consciousness with no internal sense of self, that exists only to serve a human being's needs? Can you imagine a worse slavery than this? I hope we never learn how to do this to people! And I'm even uncomfortable with the idea when it comes to computers.

Besides, what happens when the computer's perception of our needs differs from our own? My right hand will act on its own sometimes, jerking back from a hot surface before my brain can get the alarm. But it doesn't usually decide for itself what's best for me, let alone go against my wishes to accomplish something. If you've got two consciousnesses looking out for the same person, who decides?

There's a lot here that does sound intriguing to me. I'll get to that in a minute. But first, here's one more excerpt, this one about uploading our minds into software:
But even if mind uploading is possible and eventually happens, as Hans Moravec remarks, "Exploration and colonization of the universe awaits, but earth-adapted biological humans are ill-equipped to respond to the challenge. ... Imagine most of the inhabited universe has been converted to a computer network — a cyberspace — where such programs live, side by side with downloaded human minds and accompanying simulated human bodies. A human would likely fare poorly in such a cyberspace. Unlike the streamlined artificial intelligences that zip about, making discoveries and deals, reconfiguring themselves to efficiently handle the data that constitutes their interactions, a human mind would lumber about in a massively inappropriate body simulation, analogous to someone in a deep diving suit plodding along among a troupe of acrobatic dolphins. Every interaction with the data world would first have to be analogized as some recognizable quasi-physical entity ... Maintaining such fictions increases the cost of doing business, as does operating the mind machinery that reduces the physical simulations into mental abstractions in the downloaded human mind. Though a few humans may find a niche exploiting their baroque construction to produce human-flavored art, more may feel a great economic incentive to streamline their interface to the cyberspace." (Pigs in Cyberspace, 1993.)

Our form of conscious intelligence emerged from our evolutionary heritage, which in turn was shaped by our biological environment. We are not evolved for existence as disembodied intelligences, as "brains in a vat", and we ignore E. O. Wilson's Biophilia Hypothesis at our peril; I strongly suspect that the hardest part of mind uploading won't be the mind part, but the body and its interactions with its surroundings.

Now, what's my interest in all this? Well, I enjoy a good debate as much as anyone, and all this "singularity" stuff is certainly good, clean fun. Speculating on the future is always interesting. (But I'm a skeptic, too, so I naturally take every claim with a grain of salt. Maybe, maybe not. We'll see...)

But I'm also a game-player, and all this sounds quite intriguing when it comes to entertainment. OK, if mind uploading is possible, maybe we human beings won't be perfectly suited to cyberspace. Maybe our computer software will be more efficient in such a realm. I would certainly expect so. So let the computers do the work while we play!

I don't necessarily want to live there (unless, of course, the option is death). But what a great place to visit! Will we care that it's not efficient? Well, for some things, we will. But not necessarily for everything.

Likewise, I'm not sure I want another consciousness thinking that it's me, no matter how helpful it might be. But couldn't that same technology be used to put my conscious mind in a computer game? Not permanently, of course. But wouldn't you like to experience a book or a movie from the inside? (Basically, that's what a role-playing game is, although a gamer isn't a passive watcher but an active participant.)

I don't know whether "super-intelligent AI" is likely or not, but we've already got computers as tools, and they're only going to get better. We already play computer games, and they're only going to get more lifelike. Whatever the end result might be, we know we can use computers to enhance our own abilities and to help us enjoy life. We're nowhere near the limits of either.

It's fun enough to debate it, I suppose, but I see enough here to excite my imagination with or without the singularity. Besides, if robots take over from human beings, they're going to be our robots, right? In effect, they'd be our descendants. Our children take over from us - every generation does that - so is it that much different? Heh, heh. Well, maybe it is,... but I wonder why.

* PS. I'm sorry to say that I'm not familiar with any of Charles Stross' novels. I've read some of his short fiction - and enjoyed it - so I really need to try something longer. I've heard especially good things about The Atrocity Archives, but also praise for The Hidden Family and Singularity Sky (and its sequel, Iron Sunrise).

2 comments:

  1. "More intelligent" or "less intelligent" is impossibly vague. We have AI programs today that have "superhuman" intelligence with regard to some narrow problem set, and given enough computing power a computer can win a game by brute force, i.e. by simply trying all combinations; in chess, we already have computers powerful enough to beat any human master now. Our brains are made up of specialised sub-programs, and it's almost certain that our AI creations will work the same way.
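     (An aside, to make the "trying all combinations" idea concrete: below is a minimal Python sketch of exhaustive minimax search on a toy game. Tic-tac-toe stands in for chess here only because its complete game tree, under a million nodes, is small enough to enumerate; real chess programs have to prune heavily rather than literally try every combination.)

     # Exhaustive minimax: score a position by trying every possible
     # continuation of the game, all the way to the end.

     def winner(board):
         """Return 'X' or 'O' if someone has three in a row, else None."""
         lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                  (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                  (0, 4, 8), (2, 4, 6)]              # diagonals
         for a, b, c in lines:
             if board[a] is not None and board[a] == board[b] == board[c]:
                 return board[a]
         return None

     def minimax(board, player):
         """+1 if X can force a win, -1 if O can, 0 if best play draws."""
         w = winner(board)
         if w is not None:
             return 1 if w == 'X' else -1
         moves = [i for i, cell in enumerate(board) if cell is None]
         if not moves:
             return 0  # board full, no winner: a draw
         scores = []
         for m in moves:
             board[m] = player                       # make the move
             scores.append(minimax(board, 'O' if player == 'X' else 'X'))
             board[m] = None                         # undo it
         return max(scores) if player == 'X' else min(scores)

     # With perfect play from both sides, tic-tac-toe is a draw:
     print(minimax([None] * 9, 'X'))                 # prints 0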

    1. Yes, indeed, Anonymous. But those will be tools, just as computers are today. This post is about AI that's more than that.

      It's just speculation, which might be interesting, but... well, I'm not going to place much confidence in speculation about the future. Reality is too complicated for that. That's not to say it isn't worth thinking about. (And I do love science fiction.)

      Specialized sub-programs would make excellent tools, but they wouldn't make a person. Even a collection of them wouldn't necessarily add up to one. We will certainly have specialized sub-programs; the question is whether we'll ever see anything more than that.

      Thanks for the comment. I agree.
