Tuesday, February 22, 2011

Is the Singularity Near?

Time magazine has an article about the Singularity – a point in the relatively near future at which the rate of technological advancement will supposedly skyrocket, ushering in a new era of... something. It's not clear what, but Singularitarians advise humanity to brace itself. Many people see the movement as a kind of secular, science-based religion (it's been called "the rapture for nerds"), so I thought it would be appropriate to discuss it here.

The Singularity's most prominent advocate is futurist Ray Kurzweil, who claims that humanity's technological advances follow an exponential curve (a straight line on a logarithmic plot), culminating in a Singularity around 2045. PZ Myers of Pharyngula has made several posts criticizing Kurzweil's claims, and after reading them I must say I agree with him. For example, Kurzweil plots mammals, primates, and humans as "innovations" on his exponential growth curve. PZ says:
"If you're going to use basic biology as milestones in the countdown to singularity, we can find similar taxonomic divisions in the cow lineage, so they were tracking along with us primates all through the first few billion years of this chart. Were they on course to the Singularity? Are they still?"
Kurzweil also claims that we'll reverse-engineer the brain within 20 years, based on the relative simplicity of the genome. PZ says:
"Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome: what's in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction."
In short, Kurzweil is a smart guy who's competent in certain areas of computer science. But his ideas about exponential growth, while superficially interesting, should be taken with a grain of salt or three.
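
To see why these extrapolations are so shaky, here's a quick sketch of the method PZ is criticizing. The milestone dates below are entirely invented placeholders, not Kurzweil's actual data: the point is just that if you hand-pick accelerating "paradigm shifts" and fit a geometric decay to the gaps between them, you always get a finite "singularity date," and that date swings around depending on which events you choose to count.

```python
# Sketch of a Kurzweil-style "countdown to singularity" extrapolation.
# Milestone years are hypothetical placeholders, not real data.
import numpy as np

def singularity_date(event_years):
    """Fit a geometric decay to the gaps between milestones; if each
    gap is r times the previous one (r < 1), the remaining gaps sum
    to a finite horizon: next_gap / (1 - r)."""
    years = np.asarray(sorted(event_years), dtype=float)
    gaps = np.diff(years)                       # intervals between events
    idx = np.arange(len(gaps))
    slope, intercept = np.polyfit(idx, np.log(gaps), 1)
    r = np.exp(slope)                           # per-step shrink factor
    if r >= 1:
        return float("inf")                     # gaps aren't shrinking
    next_gap = np.exp(intercept + slope * len(gaps))
    return years[-1] + next_gap / (1 - r)       # geometric series limit

# Two equally arbitrary milestone lists, two different "singularities."
print(singularity_date([1450, 1770, 1900, 1950, 1980, 2000, 2010]))
print(singularity_date([1450, 1770, 1900, 1960, 2007]))
```

The arithmetic is fine; the problem is that the input is whatever milestones the forecaster happens to find significant, which is exactly PZ's cow-lineage objection.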

The idea of a Singularity isn't uniformly ridiculous, however, and there are other views of the phenomenon that differ from Kurzweil's. Ultimately, what it comes down to is this: can we create a being (be it an augmented human or an AI) that's smart enough to create a smarter being (or make itself smarter), which can then do the same thing itself, and so on? If so, we could plausibly get a runaway effect that results in unimaginably intelligent beings, which might be capable of efficiently developing new technology and solving the world's problems.
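
To make that "runaway effect" slightly more concrete, here's a toy model of my own (nothing like this appears in the Time article): assume an agent's rate of self-improvement scales as its current intelligence raised to some power k. The invented exponent k does all the work: below 1 you get slow polynomial growth, at exactly 1 ordinary exponential growth, and above 1 the process diverges in finite time, the mathematical "singularity."

```python
# Toy model of recursive self-improvement: dI/dt = I**k, from I = 1.
# The exponent k is an invented parameter, purely for illustration.

def simulate(k, steps=2000, dt=0.01):
    """Euler-integrate dI/dt = I**k and return the trajectory."""
    intelligence = 1.0
    history = [intelligence]
    for _ in range(steps):
        intelligence += dt * intelligence ** k
        if intelligence > 1e12:        # call it a runaway and stop
            break
        history.append(intelligence)
    return history

for k in (0.5, 1.0, 1.5):
    traj = simulate(k)
    print(f"k={k}: I = {traj[-1]:.3g} after {len(traj) - 1} steps")

# k = 0.5 grows polynomially, k = 1.0 exponentially, and k = 1.5
# blows up: the exact solution diverges at finite time t = 1/(k - 1).
```

The model predicts nothing, of course; it just shows that the whole runaway argument hinges on an exponent nobody has measured.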

Of course, there are several limitations to consider. Developing AI or intelligence augmentation at that level could turn out to be prohibitively difficult. It's true that AI has come a long way: for example, just a few days ago, the Watson computer program beat the two best Jeopardy! players in the show's history. But it also has a very long way to go: Watson occasionally gave answers that no thinking person would come up with, like "Toronto" in a Final Jeopardy round whose category was "U.S. Cities." War and economic downturns could also get in the way. Politicians could intervene and try to halt development of these technologies out of fear (which wouldn't necessarily be unjustified). And even if such an intelligence does arise, there are still practical limits to how quickly it could produce and implement new innovations.

I must admit that the idea of the Singularity holds a certain attraction for me. Even if a god does exist, it probably wouldn't be the sort that grants us an afterlife, so unless science can come up with some radical solutions, I'll almost certainly be faced with permanent death before this century is over. I try as much as possible to keep my desire for some sort of utopian technological revolution from influencing my analysis of the likelihood of that outcome, but it's tough. Ultimately, the best position to take is cautious, skeptical optimism. I'll hope for a future paradise, but we're nowhere near there yet, so I won't count on it.
