The term "singularity" as applied to the medium-term future of technology/humanity bundles together a variety of predictions for purposes that suit advocates of continued technological development, but don't necessarily have to occur together. In this post, I want to briefly point out the different pieces, and comment on the strength of the connections. For readers wanting some more background on the concept, the Wiki article is a good place to start.
A) Human level machine intelligence. This is the idea that continued progress in computer science, computer engineering, neuroscience, etc., will ultimately result in computer systems that are equivalent in intelligence to humans.
B) Intelligence Explosion. This is the idea that as organisms/machines get smarter, they are able to improve their own intelligence faster, so that intelligence is in some sense accelerating. Taken to its limit, the acceleration either produces infinitely smart machines (whatever that would mean) in a finite amount of history, or at least becomes so great that the consequences are completely unforeseeable ahead of time for beings of lesser intelligence (us).
C) Transcendental Singularity. Some people have argued that such a hypothetical super-intelligence (or collection of super-intelligences) would take over the entire universe and essentially become something equivalent to a god.
D) Human Augmentation. For lack of a better term, I'm using this to denote a cluster of ideas that humans and machines will merge to varying degrees - that people will have brain implants to improve their memory and cognition, and eventually be able to upload their personalities and experiences into a digital fabric of some kind, continuing life entirely free of the constraints of current biological bodies.
E) Economic Singularity. The idea that intelligent machines will increasingly take over from humans in the workforce, and that people will not need to, or be able to, work in the future.
F) Growth Singularity. The idea that economic growth will increase faster and faster in the future as a result of trends in machine intelligence.
The basic view I have come to is that from amongst these possibilities, A) and E) are serious issues worth grappling with, while the rest range from highly speculative to extremely unlikely. Here are my arguments, in brief.
A) Machine intelligence. Clearly, we are all painfully aware that computers are not intelligent today, but rather show their heritage as digital calculating machines all too clearly - the computer is good at doing repetitive calculations accurately, but needs to be programmed by a human to do anything new. Still, previously "human only" skills are being knocked off by computation one by one - chess, Jeopardy, understanding speech, generating speech, recognizing faces, driving (with the implied ability to perceive general roadway environments and make good decisions about what to do in a wide range of them). I'm well aware that my fellow computer scientists are furiously trying to improve each of these areas, as well as new ones, and it's not obvious to me that there's some fundamental barrier that will prevent the next 20-30 years from showing a similar level of progress to the last few decades.
Human abstract reasoning appears to me to consist of reuse, via the fundamental mechanism of metaphor/symbolism, of a large number of specialized circuits that were developed earlier in evolution for solving the problems of being an animal in an environment of plant food sources and other competing animals. Although I certainly wouldn't know how to build such a system in the next five years if you gave me a big team, given that we are essentially reproducing all the specialized circuits one by one in order to solve various problems in robotics, computer vision, etc., I don't see why we won't eventually be able to tie them all together in some equivalent way. Just arguing from evolutionary/genetic distance, it's clear that human intelligence must involve only relatively minor tweaks on mechanisms that had already evolved to manage being a dog or an ape, and it would seem very difficult to be confident that computer science will not be able to develop something equivalent to the control system for a dog or an ape in the coming decades.
Certainly, it appears that the amount of available computer power for running such systems will continue to increase rapidly for quite some time to come. So, while I cannot say it's a certainty, it appears to be more likely than not that machines will gradually approach human levels of intelligence in the present century.
B) Intelligence Explosion. The idea that intelligence will accelerate in future does not follow from the possibility of developing human-level machine intelligence. It assumes a completely unproven "law of intelligence" that the smarter you are, the easier it is to produce an intelligence even greater. Maybe it works the other way altogether - the smarter you are, the more complex and difficult it is to produce an even greater intelligence. Perhaps there's some fundamental limit to how intelligent it's possible for any agent to be. We have no clue. We haven't so far seen even a single generation of intelligences (us) producing a more intelligent entity, so the whole intelligence explosion idea consists of extrapolating from less than one data point. It's utter speculation.
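To make the extrapolation problem concrete, here is a toy model (a sketch of my own, with made-up numbers - the exponent and rate are assumptions, not measurements): let each generation add capability proportional to its current level raised to some exponent. An exponent above one encodes "smarter makes further gains easier" and produces an explosion; an exponent below one encodes "smarter makes further gains harder" and produces steady, decelerating progress.

def simulate(exponent, steps=50, level=1.0, rate=0.1):
    # Each step, intelligence grows by rate * level ** exponent.
    # exponent > 1: smarter makes further gains easier.
    # exponent < 1: smarter makes further gains harder.
    for step in range(1, steps + 1):
        level += rate * level ** exponent
        if level > 1e12:  # call this an "explosion"
            return step, level
    return steps, level

print(simulate(1.5))  # crosses the "explosion" threshold within a few dozen steps
print(simulate(0.5))  # still only ~12x after 50 steps - no explosion

The same machinery yields opposite conclusions depending on a single parameter we have no way to measure - which is exactly the problem with extrapolating from less than one data point.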
C) Transcendental singularity. A fortiori, this is even more speculative. We have no idea whether some future super-intelligence descended from our efforts will be in a position to spread throughout the universe via yet unknown physical mechanisms, or will be condemned to sit here on earth living on solar power and contemplating its ultimate demise when the sun blows up.
D) Human augmentation. At the moment, a major economic problem for the United States, and, to a slightly lesser degree, other developed countries, is that medicine is encountering diminishing returns. It's costing more and more to produce smaller and smaller gains in human health/longevity/wellbeing. While there are crude implants in use to treat things like Parkinson's disease, they involve brain surgery, which is incredibly expensive and complex, and carries serious risks that one wouldn't undertake except for the most compelling of reasons.
So the idea that brain/computer interfaces will - soon enough to matter - become as cheap and as rapidly evolving as computers themselves strikes me as implausible. It's less speculative, perhaps, than the idea of an intelligence explosion, but it certainly doesn't follow just from extrapolating existing trends.
Further, I think there are very strong psychological reasons why technology advocates are pushing this idea. If you take away this possibility, then the singularity basically just sucks from the perspective of a human being. It becomes about your kids having no jobs, and about creating a super-intelligence that we won't understand, therefore won't be able to control, and that will therefore be an existential threat to us. If you are, say, an artificial intelligence or robotics researcher, it's impossible to derive meaning from your work if you think this is what you are doing, so you have to come up with some psychological out that lets you continue the work you enjoy without feeling that you are being destructive to your own species. That, I think, is why this human augmentation idea gets so much play.
E) Economic Singularity. I've argued elsewhere that this is a serious concern. To the extent machine intelligence exists, or even very partial human skills can be replicated by computers, businesses are highly motivated to replace humans with it. Humans have rights, and machines don't. Humans insist on wasting a bunch of resources driving home and living in the biggest house they can afford, whereas computers will happily continue working all night on nothing more than a little electricity. Human workers can only reproduce very slowly and at great expense, whereas software can be replicated as often as necessary at almost no cost. This is exactly why you now talk to a computer when you call your phone company.
F) Growth singularity. At the moment, there's no evidence for this - global economic growth has been proceeding at a few percent a year, more-or-less, since the industrial revolution - and we haven't seen signs of a major acceleration yet. Future economic acceleration pretty much depends on assuming an intelligence explosion and is equally speculative in my view.
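To see what evidence for F) would even look like, here is a minimal sketch (with illustrative rates I've made up, not historical data): ordinary exponential growth keeps a constant rate, so a "growth singularity" requires the rate itself to climb over time.

def trajectory(initial_rate, rate_growth, years):
    # Compound output each year; optionally let the growth rate itself
    # compound (rate_growth = 0 gives plain exponential growth).
    output, rate = 1.0, initial_rate
    for _ in range(years):
        output *= 1 + rate
        rate *= 1 + rate_growth
    return output

print(f"{trajectory(0.03, 0.00, 100):.0f}x")   # ~19x: steady 3%/year, no acceleration
print(f"{trajectory(0.03, 0.05, 100):.2e}x")   # runaway once the rate itself compounds

Two centuries of roughly constant percentage growth fit the first pattern; nothing in the record so far requires the second.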
Hopefully, I'm wrong about all this, since these are seriously miserable conclusions. However, if so, my errors have not yet become clear to me.
The only defense to the economic singularity is a robust humanism, and we lost that when we decided we were reducible to machines - cogs in a giant economic engine. To that point, I highly recommend watching the Adam Curtis film "All Watched Over by Machines of Loving Grace." If we consider ourselves as machines, then the future is bleak. Compared to the true machines, in doing machine things, we suck.
I think your arguments largely hinge on the concept that an intelligence explosion (point B) is entirely speculative, which is to some extent true.
And while I agree that there has never been a single generation of intelligence producing, on its own, a more intelligent entity... there are definitely natural/universal processes, like evolution, at work that have steadily caused matter to be arranged into more and more complex (and intelligent) structures.
Perhaps the argument could hinge on whether there is evidence that natural/evolutionary processes are accelerating, or can reasonably be expected to accelerate once equivalent intelligences are freed of the biological 'limitations' of humans? Speculative once again, but perhaps more evidence-based predictions can be made here?
I do agree though, that there doesn't seem to necessarily be any good reason that humans have to remain a part of this process if it were to occur.
Yes, the E conclusion is hardly miserable, unless we let ideology keep the fruits of all this mechanization in the hands of the few rather than the many. It is a political question whether we let CEOs grab all the money, or not. We have plenty of ways to serve each other left, even after machines run everything mundane in our lives.
You risk being labeled a luddite if you think that mechanization per se is bad for workers and the culture generally. We have experienced the march of mechanization for well over 200 years, and "only" have ~15% unemployment. Something else must be going on.
Burk - there are plenty more indicators of societal well-being than U6. The prospect of garbage collectors being replaced by machines would have been wholly ridiculous 15 years ago, too. We need plenty of those grunt-work jobs to provide work for the less educated levels of society. Or perhaps the machines can just provide for them. Maybe we could automate frenzied cable news pundits while we're at it. ;)
I think E trumps all. Where does the funding for all these DARPA and Blue Brain programs come from when the tax base is gone and no one wants to buy a brand spanking new vid card because they need the money for now-tiny boxes of Mac and Cheese?
Presumably all that flush capital and attendant research will move to the BRICs, but then peak oil makes a hash of trying to spur rapid growth all over again, to say nothing of other resource-limit issues.
I recall reading that Kurzweil has a real horror of death; I forget what that stems from. Oh, if only I had an augmented brain... to paraphrase the Scarecrow in The Wizard of Oz. Anyway, that might be a factor in D's popularity.
I'm happy to see some concise interpretations of the singularity arguments, and rebuttals of them. But let me address specific points.
ReplyDeleteA) A "fundamental barrier" is not needed for the singularity argument. The advent of exponential growth in the capabilities of machines, as of yet unseen, can come about as a product of a slow technological grind. The ability to play Jeopardy is a step forward, but if you put enough small steps together you can get something revolutionary eventually.
B) There are some very simple arguments for such a "law of intelligence". The most compelling argument is the observation that
- Computers have HUGE fundamental limitations
- The human mind has HUGE fundamental limitations
- These limitations are very different
The argument that we can see an explosion of intelligence is merely putting these together. How long did humans struggle to get more digits of Pi? And how long have computers struggled to gain earthworm-like agility? We have both technologies at our disposal, whether or not we understand biological brains yet. There is, at minimum, an argument for a "step" increase in intelligence from the hybridization of computing/thinking technologies.
C) We are not a space-faring society yet and we don't use fusion yet. If you accomplish both of these, you become like bacteria in an infinite petri dish. This is not a difficult or speculative argument in any way.
E) To me, this seems like a shift of wealth from the labor markets to the capital markets and nothing more. Humans are not really "dealt out", but inequality is the issue at hand.
F) I think the internet itself could constitute the proto-stages of an intelligence revolution. Communication is the first form of "intelligence augmentation".
Well that was fun!
I agree with Burk Braun; it's political. We're wealthier (overall) than we ever were, we could fix many problems with a little tax and spend.
All the fantasies of those for whom technological singularity comprises a faith-based religion to cling to at the "end of days". The complexity bubble, even as it expands at the periphery, is already collapsing from the core like a burned-out star.
I think you've analyzed this pretty well.
Just came across this today:
"The Singularity is Far"
http://www.boingboing.net/2011/07/14/far.html
David Linden, "is a professor of neuroscience at The Johns Hopkins University School of Medicine and Chief Editor of the Journal of Neurophysiology."
"...
However, Kurzweil then argues that our understanding of biology—and of neurobiology in particular—is also on an exponential trajectory, driven by enabling technologies. The unstated but crucial foundation of Kurzweil's scenario requires that at some point in the 2020s, a miracle will occur: If we keep accumulating data about the brain at an exponential rate (its connection maps, its activity patterns, etc.), then the long-standing mysteries of development, consciousness, perception, decision, and action will necessarily be revealed. Our understanding of brain function and our ability to measure the relevant parameters of individual brains (aided by technologies like brain nanobots) will consequently increase in an exponential manner to allow for brain-uploading to computers in the year 2039.
That's where I get off the bus."
I.e., the software guy looks at the problem and says "No problem! The brain is easy!" The neuroscientists say, "You are vastly underestimating the problem." Given the past statements about artificial intelligence, I'm inclined to listen to the neuroscientists.
For Stuart or anyone else: Are there any good blogs out there on futurism or the singularity?
Against A, there is the idea that life is different and the concatenation of forces involved in living brain cells exposed to a dynamic environment is not something that can necessarily be duplicated artificially.
Singularity theory is, first and foremost, the most boring and vulgar version of the classic messiah myth.
(besides being totally ignorant of theoretical results in complexity and computability).
Basically a kind of puritan, ultra-utilitarian, self-hating version of the messiah myth - that is what "singularity theory" is.
Evidence for the growth singularity that is typically cited is "England doubled its economy in 150 years, the US in 100 years, Germany and Japan in 35 years, Korea in 18 years, and China did it in 8 years and is doing it again on the trot"
I don't completely agree with the hypothesis, as I see the latter countries doing mainly catch-up growth.
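A quick back-of-the-envelope check supports the catch-up reading (a rough sketch using the continuous-compounding relation r = ln(2)/T for a doubling time of T years, with the doubling times as quoted above):

import math

# Implied average annual growth rate for each cited doubling time,
# via the continuous-compounding relation r = ln(2) / T.
for country, years in [("England", 150), ("US", 100),
                       ("Germany/Japan", 35), ("Korea", 18), ("China", 8)]:
    print(f"{country}: doubled in {years} years -> ~{math.log(2) / years:.1%}/year")

That works out to roughly 0.5% for England, 0.7% for the US, 2% for Germany/Japan, 4% for Korea, and 9% for China - and ~9% is the signature of an economy catching up to the frontier, not of the frontier itself accelerating.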
However, I agree with the intelligence explosion hypothesis. Intelligence in the human brain occurs due to some repeatable algorithms. We don't know when the algorithms will be understood, but when they are, there will be a fairly quick ramp-up to the greatest capacity that our present physics can support. It may not be a god, but from the human perspective, there will be little that humans can do against it.
"Intelligence, in the human brain is occuring due to some repeatable algorithms."
This can be considered proven to be false, study a bit more, dude
"This can be considered proven to be false, study a bit more, dude
ReplyDelete"
Humans have repeatedly produced other humans, who have proved intelligent. There is an algorithm, and a repeatable one, for intelligence somewhere. It is unknown to us, right now. But there is no mystic essence to it. That is my only point. Human cells drill down to physics and chemistry at some level.
My reply:
http://sixtystoryrobot.blogspot.com/2011/07/we-already-have-good-broad-theories-of.html
"There is an algorithm, and a repeatable one, for intelligence somewhere."
Your saying this is purely based on faith. Mathematical results regarding computability theory point to the contrary - think Gödel, the Church-Rosser theorem, these kinds of things.
Besides, all mathematicians point to some kind of "illumination" when discovering something really new - no, there is most probably no algorithm there. And today we KNOW that science is infinite.
Again, singularity theory is the modern-day version of the classical puritan vulgarity; nothing new there... (Nietzsche already knew about it)
@James Andrix
ReplyDelete"We already have good broad theories of intelligence"
No, we don't, and no "broad" declaration about it, or willingness to convince yourself otherwise, changes much about that.
And what is maybe more interesting is the desire to have these theories... (which we will never have)
@yvesT
You are quick to contradict, slow to make actual arguments.
Your mathematical name dropping only leads me to believe you misunderstand the actual implications of your references.
Do you also think that there is no algorithm of vision? No possibility of a theory of visual processing? Could you persuade someone who didn't know how eyes worked?
Yves:
Your posts do seem to have a high ratio of absolute statements and put-downs of others relative to detailed explanations or links to support your position. Clearly you are talking to smart people who disagree with you, and convincing such people requires evidence, not peremptoriness.
I don't think so, Stuart. Obviously, in this artificial intelligence debate and the like, if someone believes thinking is purely algorithmic, so be it. Besides, if you take the matter route - machines and humans both being matter - you are obviously in the same kind of "proof of God or not" debate, and no way out of this discussion can be expected. Only time will tell. However, maybe we could expect some minimal honesty from AI "scientists" regarding their results, which is obviously not the case. And once again, what is maybe more interesting in all this is the "Frankenstein desire", which, it is true, I find truly boring.
Moreover, what is truly interesting regarding technology in general is not what a machine can or cannot do, but to consider the complete book of technology: all the machines and programs stopped, and to "view" the evolution of this stopped book through each new machine, new piece of software, or new version of these. This is something truly amazing.