questions like “what is the meaning of life?” will be practical engineering questions.
uh... I've heard that from another science fiction writer somewhere... now what happened with that?
I don't think we have been getting better at "outsourcing our cognition" (though it's an inspiring goal). I've tried to do this as much as possible in my own life, and I've found I'm much more effective when I don't do it at all (the most effective outsourcing is to try to articulate one's idea to someone else - but they don't need to actually do anything). This lack of gradual increase means there's no extrapolation to further increase, let alone an exponentially increasing acceleration into a singularity - not, at least, for me :-(.
Mechanizing conceptual understanding remains difficult. Although we human beings are good at modularizing knowledge (it's what we constantly do), and it looks like it should be straightforward to automate, we haven't managed it, at all. So many times in history, people have thought they are on the verge of it...
Did you know that Boole originally called his algebra the "Laws of Thought"? What he did was very cool, and you can see what he meant, but... it's such an awfully long way from "thought".
I think it's possible to create human level intelligence (and probably faster and broader - but maybe not any deeper); we just have no idea how. And machines that are 16,000 times faster (21 Moore years) aren't going to help with that, in themselves.
But it would be awesome.
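The "16,000 times faster (21 Moore years)" figure checks out: at the classic rule-of-thumb cadence of one doubling every 18 months, 21 years is 14 doublings. A quick sanity check (the 18-month doubling period is the usual assumption, not something stated in the comment):

```python
# Sanity check of "16,000 times faster (21 Moore years)", assuming
# the rule-of-thumb Moore's-law doubling period of 18 months.
DOUBLING_PERIOD_YEARS = 1.5

def speedup(years, doubling_period=DOUBLING_PERIOD_YEARS):
    """Raw hardware speedup after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(speedup(21))  # 2 ** 14 = 16384, i.e. the "roughly 16,000x" figure
```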
(My masters was on statistical inference of pattern, without predetermined patterns, and taking into account the complexity of the pattern sought. I chose this as the closest approach to automating thought. We are a long way away, IMHO).
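One standard way to make "taking into account the complexity of the pattern sought" precise is minimum description length: a pattern is only worth keeping if the bits it saves exceed the bits needed to state it. A toy sketch of that trade-off (my own illustration, not the actual method from the thesis mentioned above):

```python
import math

# Toy minimum-description-length (MDL) comparison: is a binary string
# better described by a biased-coin model (a pattern) or stored literally?
# This only illustrates "penalising the complexity of the pattern sought".

def literal_bits(bits):
    return len(bits)  # 1 bit per symbol, no model at all

def model_bits(bits):
    n, ones = len(bits), sum(bits)
    p = ones / n
    if p in (0.0, 1.0):
        entropy = 0.0
    else:
        entropy = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    # ~log2(n+1) bits to state the model parameter, then n*H(p) for the data
    return math.log2(n + 1) + n * entropy

biased = [1] * 90 + [0] * 10  # strong pattern: mostly ones
print(model_bits(biased) < literal_bits(biased))  # True: the pattern pays for itself
```

For a fair-coin sequence the model costs more than it saves, so MDL correctly refuses to see a pattern there.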
I wonder whether super human intelligence is a super set of human intelligence or something entirely different. I find it rather plausible that machine intelligence will very soon be capable of doing things we humans cannot do. In fact, that's already the case to some degree (and I don't mean just doing things faster).
However, I believe the day when machines will be able to do everything we can do, only better, is very far out. 2030 is in about 20 years. Looking at things like anaphora resolution and how far we have progressed in the last 20 years, I'm not optimistic we'll make the deadline.
And then of course, the singularity would not be a program that does anaphora resolution, but a program that writes programs to solve any new problem and determine worthy goals in the first place, because these are all things we humans can do.
"Looking at things like anaphora resolution and how far we have progressed in the last 20 years, I'm not optimistic we'll make the deadline."
It may just take a new perspective on the problem. Just look at the changes in the internet over the last decade. IMHO, there's been an obvious increase in the rate of dissemination and cross-pollination of ideas. Blogs and sites like this decrease the generation time even further.
It's the ever-increasing second derivative that makes the idea of the singularity plausible - with all due respect to the S-curve believers (I think they are right, but what's the point if the top of the S is so significantly higher than where we are now?).
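The disagreement with the S-curve believers really is a disagreement about the second derivative, which can be checked numerically on sample curves (the two curves are stand-ins for the competing views, not data):

```python
import math

# Finite-difference check of the "second derivative" point: exponential
# growth keeps accelerating everywhere, while a logistic S-curve
# decelerates once it passes its midpoint.

def second_diff(f, t, h=0.01):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

def exponential(t):
    return math.exp(t)

def logistic(t):
    return 1 / (1 + math.exp(-t))  # midpoint at t = 0

print(second_diff(exponential, 5.0) > 0)  # True: still accelerating
print(second_diff(logistic, 5.0) < 0)     # True: past the knee, decelerating
```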
Sure, breakthroughs are always possible, and more pieces of information are linked to each other and so on. I'm not saying it's not possible that someone has a good idea tomorrow that changes everything.
But no, ideas don't cross-pollinate. It's individuals who cross-pollinate ideas in their heads, and there's a limit to how much information a person can possibly juggle in her head.
It's that creative process, which we don't really understand yet, that is the bottleneck. No amount of linked information will be able to somehow supercharge our cognitive capacity. Sometimes just having time to think without interruption is more important than more information.
I think the likelihood of breakthrough ideas is largely a function of the number of people working on hard problems. I'm not sure how fast that number rises. Not very fast, I would think. And the second derivative may very well be zero.
"I find it rather plausible that machine intelligence will very soon be capable of doing things we humans cannot do. In fact, that's already the case to some degree (and I don't mean just doing things faster)."
What can today's computers do that a human couldn't do given infinite time, patience, pencil and paper?
Theoretically the answer is probably nothing. What I was thinking about is the kind of structure that is the internet for instance. Could humans collectively _be_ the internet? Could they simulate it, very slowly, without computer programs and hardware other than pencil and paper? A single person could not.
Another thing is parallel algorithms that include random factors. I don't think humans can think like that, consciously and rationally, even though the brain might work like that underneath.
Or could a human sift through massive amounts of data (even given an infinite amount of time) and do something similar to a data mining algorithm? I doubt it. He wouldn't be able to remember enough of the things he had seen and looking them up in his notes would take so long that he would have forgotten where he was when he started. The overall picture of a particular rule or pattern just might not form in his mind.
Essentially the internet is a much more accessible version of the postal system, which dates back centuries. I send you a piece of paper requesting such and such information; you, being the server, send me back a piece of paper with all the information I requested. Instead of taking several weeks to send a request from one side of the world to the other, it may only take a few seconds.
Incidentally the version of the internet protocol being tested for use in space behaves even more like a postal system as they want to be able to store information in relay stations. So rather ironically, in the far future the internet will likely operate more like the postal service than ever before.
The fact of the matter is that until computers begin creating things exclusively of their own, all the things computers run are made by the human mind and built of human systems and experience.
You refer to data miners, but have you ever seen the server farms they use for such tasks? There are thousands of computers in them, and thus tens or hundreds of thousands of individual processors.
I'd argue that when comparing a human to a computer, you should be comparing a human to a single core processor. Your data mining example is complete bunk when you consider the sheer amount of processors they use. Can 100,000 people do a data mining task? I'd guess yes.
Ed: If you doubt my claim that a large group of people can be the equivalent of a server farm for data mining, then you should read up on the history of how the Enigma machine was cracked. The British built the Bombe machines (and later Colossus, for the Lorenz cipher) specifically to speed up work already being done by humans at Bletchley Park.
Cryptanalysis is much more complicated than mere data mining, especially considering the Enigma machine was deliberately designed to be changed regularly to make it even more difficult to decipher. If people can do mass cryptanalysis, people can do data mining.
I don't think the number of CPUs matters. The interesting question is whether machine intelligence can do things we humans cannot do with pen and paper, either individually or as a group. You may be right about the internet and the postal service, but there's no doubt in my mind that some of the things algorithms do with data is impossible to do for humans due to cognitive limits.
Sure, you could try to split the work into many parts and have many people work on it, but given enough data and sufficiently complex patterns, coordinating all those people to find patterns that are not local to one such part would overwhelm the organisation.
I don't think it's important, but even a single machine can find patterns in large amounts of data that I cannot find with pen and paper no matter how much time I have. What machines cannot do today is make a judgement about the relevance of the patterns found.
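The claim that a machine can surface patterns a person with pen and paper could not can be made concrete with a toy experiment: plant a single correlated pair among dozens of noise columns and recover it by brute force. All the sizes, seeds, and column indices below are arbitrary choices for illustration:

```python
import random

# Plant one correlated pair among 40 columns of noise, then recover it
# by exhaustively checking Pearson correlations - tedious for a machine,
# practically impossible for a human working by hand.
random.seed(0)
n_cols, n_rows = 40, 200
data = [[random.gauss(0, 1) for _ in range(n_rows)] for _ in range(n_cols)]
data[7] = [x + random.gauss(0, 0.1) for x in data[3]]  # hidden pattern: col 7 ~ col 3

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

best = max(((i, j) for i in range(n_cols) for j in range(i + 1, n_cols)),
           key=lambda ij: abs(pearson(data[ij[0]], data[ij[1]])))
print(best)  # (3, 7): the planted pair stands out from ~780 candidates
```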
Kurzweil, for his part, puts the Singularity at 2045. My personal thoughts are that even if the AI doesn't emerge, we're already close enough to the Singularity that the exponential progress of the next 20 years will still be pretty damn mind-boggling. The internet alone is a Singularity of sorts.
"All exponential growth comes to an end eventually."
Perhaps. But it may be that this limit is very far in the future.
Technological progress relies on matter to store information and to build tools, and on energy to apply those tools to accumulate information. There is a lot of matter and energy in the Universe - so much that even if we were to continue at our current rate of progress, we wouldn't hit these known physical limits for a considerable span of time.
There may, of course, be more subtle limits we are currently unaware of, but I'm inclined to believe that technological progress will continue at an accelerating rate for a very long span of time.
Kurzweil suggested in The Singularity Is Near that the speed-of-light limit would act as a barrier, restricting the available resources to those in the solar system - thus capping the exponential growth of technology and leading to the flattening of the growth curve suggested in your article. He also gave a date of the middle of the next century for when this would occur.
I didn't like Kurzweil's expansionist idea so much. Instead I imagine that the Singularity will hole up and become uninterested in the outside universe. There might be lots of real estate - read: computing resources - down there under all those turtles.
A hockey-stick curve can be approximated to any desired degree of accuracy by a sum of many sigmoid curves. The question is, when, if ever, will we run out of new S-shaped curves?
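That approximation can be demonstrated numerically: stack logistic S-curves whose ceilings double (each new "technology" twice as big as the last), and the total doubles per step in the mid-range even though every individual component saturates. A sketch under exactly those assumptions:

```python
import math

# "A hockey-stick curve as a sum of sigmoids": stack logistic S-curves
# whose ceilings double. Mid-range, the sum itself doubles per unit time,
# tracking an exponential, even though each component flattens out.

def logistic(t):
    return 1 / (1 + math.exp(-t))

def stacked(t, n_curves=30):
    return sum(2 ** k * logistic(t - k) for k in range(n_curves))

ratio = stacked(11.0) / stacked(10.0)
print(ratio)  # close to 2: exponential doubling built from saturating parts
```

The open question in the comment is exactly the one the model leaves open: the exponential continues only as long as fresh S-curves keep arriving.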
I suspect that the Singularity will be like a speciation event, a recession, or the bottoming of the markets. We won't be able to say exactly when and where it happened until after the fact.
Instead of trying to predict exactly when, the smart money will execute a strategy that doesn't depend on the exact time, like Dollar Cost Averaging into the current depressed markets.
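The arithmetic behind dollar cost averaging is worth spelling out: a fixed dollar amount buys more shares when prices are low, so the average cost per share works out to the harmonic mean of the prices, which never exceeds their plain average. A sketch with invented prices:

```python
# Why dollar-cost averaging doesn't depend on timing the bottom: a fixed
# dollar amount buys more shares when prices are low, so the average cost
# per share (the harmonic mean of prices) never exceeds the average price.
# The monthly prices below are invented purely for illustration.
prices = [20.0, 10.0, 5.0, 10.0, 25.0]
monthly_budget = 100.0

shares = sum(monthly_budget / p for p in prices)
avg_cost = monthly_budget * len(prices) / shares
avg_price = sum(prices) / len(prices)

print(avg_cost < avg_price)  # True: DCA's cost sits below the average price
```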
Wouldn't all this require a massive investment in strong AI right now? Well, I can tell you: that isn't happening. Good luck trying to convince your dissertation advisor that you want to build the Singularity. :)
Who has a massive server farm?
Computes enough to keep countries warm?
Goo-gle, goo-gle!
Who has a massive data store?
of websites, books and maps and more?
Goo-gle, goo-gle!
Who hired Norvig as technical lead?
Has rooms of genius PhDs?
Whose statistical translation is leagues ahead?
Whose voice recognition will leave Dragon dead?
Who reserves permission to train on your mail?
Knows who said what in the worldwide news?
Keeps most of their work hidden under a veil?
Their every feature gets 'ahhs' and 'oohs'?
Goo-gle, goo-gle!
(Ahem - http://www.stlyrics.com/lyrics/thesimpsons/stonecuttersanthem.htm )
Well, I can tell you: that isn't happening
Sure, says the guy posting under the name of a sentient computer! :)
The assumptions are even stronger. To actually become singular, the development must accelerate super-exponentially, eventually becoming infinitely faster than exponential. Otherwise you will just get growth by a constant factor in a constant time, with no singularity ever.
The other way to get a singularity is to have a quantity you divide by go to zero - for example, when the price society has to pay for the next development step grows with the inverse of the remaining supply of a finite resource (like some rare earth element required for faster semiconductors or whatever). If we just continue as we did for the last few decades, this may well be the most probable type of singularity to expect by 2045.
But that is of course nothing to build a millenarian movement upon.
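The distinction between exponential and genuinely singular growth is crisp mathematically: dx/dt = x stays finite at every future time, while dx/dt = x**2 solves to x(t) = x0/(1 - x0*t), which diverges at the finite time t = 1/x0. A small check (the equations are illustrative stand-ins, not a model of technology):

```python
import math

# Exponential growth (dx/dt = x) is finite at every future time; the
# faster-than-exponential dx/dt = x**2 has the closed-form solution
# x(t) = x0 / (1 - x0*t), which blows up at the finite time t = 1/x0:
# a genuine mathematical singularity.

def exponential(t, x0=1.0):
    return x0 * math.exp(t)

def hyperbolic(t, x0=1.0):
    return x0 / (1 - x0 * t)  # only valid for t < 1/x0

t = 0.99  # just before the blow-up time t = 1/x0 = 1
print(hyperbolic(t))   # 100.0: already huge and diverging...
print(exponential(t))  # ~2.69: while the exponential has barely moved
```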
I rather believe in cybernetic immortality (http://pcp.vub.ac.be/CYBIMM.html), Vinge's singularity is too vague, some kind of pop science.
In every metasystem transition (which is what the singularity aspires to be), a new form of control arises over the existing reigning control mechanism.
To put it in those terms, the next metasystem transition will be the control of culture (humans are cultural beings).
And usually this is not about some higher intelligence, as Vinge states; it is about the emergence of new senses, which opens up a totally new universe / dimension.
I'm not sure if I voiced my particular distaste for the term "singularity" before, but it's something that really does get on my nerves. I'm reminded of Pasternak denouncing the phrase "We the people's" in Dr. Zhivago. It's a word that the community that uses it most often is utterly stripping bare and bastardising. As far as I'm concerned there are only two (with a possible third) definitive singularities - those of the "black holes and baby universes".

I really can't see the useful application of the metaphor here. It's not a singularity... a wave, certainly. That gives us the idea of the "seventh wave", which concentrates the potency of the waveform. Or how about a renaissance? It will get to the point where software development, especially in AI, reaches a functional plateau, branching out and diversifying into many stylistic forms. Or (my personal favourite) a "Babel Event", where we construct a universal medium, shattering the old converging paradigms and multiplying our complexities by an order of magnitude.
Surely these are far more "useful metaphors" than the idea of a singularity - the point at which the rules change, or the classical conventions are destroyed? I think this is insidiously vague (and might be why the "singularity meme" has spread so far/fast?)
I believe the media, or more accurately that subset of the media who are even aware of the idea, have labeled it the "Nerd Rapture." Which, whether you find it accurate or not, is definitely a catchier phrase than the "Singularity."
Yeah, I wasn't too sure of the origin, but it does seem to be an esoteric nomenclature, very specific to SF geeks. Now I'm not saying I'm not an SF geek, but first and foremost I'm an agent of the word, and I find the ambiguities of a concept like "The Singularity" alienating for both sides. One can't express it to the other because of its vastness, and the other can't grasp it because it's a poorly conceived metaphor.
Hell, for a mind-boggling position on where my doubts are loosely based, read the Indeterminacy of Translation thesis or the Quine-Duhem Thesis.
The first states that the search for "meaning" in language is a dead end, due to the problem of translation (an agonising and paradoxical conclusion to realise).
While the second states that no scientific theory (that of "the singularity", for instance) can ever be tested or even contemplated in isolation. All knowledge essentially "faces the tribunal of sense-experience, not alone, but as a cohesive unit". So my primary assumption that "singularity" refers to "black holes and baby universes" prevents the "singularity meme" from being accepted into my cohesive knowledge web... as ever - food for thought
I don't know if singularity is ever going to occur, but it's definitely not going to happen that soon. Our computer technology -- AI in particular -- is currently at the goldfish level.
Compare the Apple I to today's PCs. Compare dialup and mainframes to automatically scaling out thousands of servers in the cloud.
These aren't linear changes. If we're at goldfish level today, and we were at bacterium level yesterday, tomorrow could be a monkey, and the next day more intelligence than the whole human race combined.
I'm not saying it's going to happen, just that you can't apply linear growth analysis to areas growing exponentially.
Our rate of development is limited by how quickly we can make good decisions.
Any decision support or "cognitive outsourcing" that is invasive will have to go through the usual approval processes before becoming widespread. It may eventually transform those same processes, but that will lag far behind the initial adoption.
I can have a super-mondo-uber tool, but I still need to know what to build. If I need more than one person for my project, then I also need to be able to scale high-bandwidth human to human collaboration.
None of these things seem impossible, it's just that closing the circle and applying these tools to ourselves slows things down tremendously.
How convenient, since Mr. Vinge, at 85 years old, will hopefully still be alive!
BTW Vernor Vinge is a great author. I heard he's working on a third book in the series that includes A Fire Upon the Deep and A Deepness in the Sky. Looking forward to it!