
The Singularity Is Sci-Fi's Faith-Based Initiative

Posted by Soulskill
from the omnipotent-god-computers-will-not-run-your-life dept.
malachiorion writes: "Is machine sentience not only possible, but inevitable? Of course not. But don't tell that to devotees of the Singularity, a theory that sounds like science, but is really just science fiction repackaged as secular prophecy. I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory. But as part of my series of stories for Popular Science about the major myths of robotics, I try to point out the Singularity's inescapable sci-fi roots. It was popularized by an SF writer, in a paper that cites SF stories as examples of its potential impact, and, ultimately, it only makes sense when you apply copious amounts of SF handwavery. The article explains why SF has trained us to believe that artificial general intelligence (and everything that follows) is our destiny, but we shouldn't confuse an end-times fantasy with anything resembling science."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Wednesday May 28, 2014 @03:42PM (#47112097)

    We call them people.

    The idea that it might not be possible at any point to produce something we *know* to be producible (a human brain) seems ridiculous.
    The idea that, having accepted we can produce a human brain, we cannot produce even a slight improvement seems equally silly.

    Of course the scenarios of how it happens, what it's like, and what the consequences are, are fiction. I don't dare put a timetable on any of it, and I absolutely believe it will only occur through decades of diligent research and experimentation; but we are not discussing a fictional thing (like teleportation), but a real one (a brain). There's no barrier (like the energy required) that might stop us, as there would be for something like a human-built planet.

    No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.

  • by MozeeToby (1163751) on Wednesday May 28, 2014 @03:46PM (#47112173)

    The disparaging way that the summary and article talk about references to science fiction stories is practically an ad hominem attack. There is nothing inherently wrong with science fiction stories that makes them improper for thinking about the implications of changing technology. Much of the best sci-fi in existence is essentially a collection of thought experiments about how various kinds of advances might affect humanity on an individual and cultural level.

  • by Jeff Flanagan (2981883) on Wednesday May 28, 2014 @03:47PM (#47112189)
    >Is machine sentience not only possible, but inevitable? Of course not.

    The only thing that would stop it is the fall of civilization. There's no reason to believe that only machines made of meat can think. You didn't think your thoughts were based on fairy-dust, did you?
  • Ai is inevitable (Score:5, Insightful)

    by fyngyrz (762201) on Wednesday May 28, 2014 @03:47PM (#47112193) Homepage Journal

    Is machine sentience not only possible, but inevitable?

    Of course it is. Why? Physics. What do I mean by that? Everything -- bar none -- works according to the principles of physics. Nothing, so far, has *ever* been discovered that does not do so. While there is more to be determined about physics, there is no sign of irreproducible magic, which is what luddites must invoke to declare AI "impossible" or even "unlikely." When physics allows us to do something, and we understand what it is we want to do, we have an excellent history of going ahead and doing it if there is benefit to be had. And in this case, the benefit is almost incalculable -- almost certainly more than electronics has provided thus far. Socially, technically, productively. The brain is an organic machine, no more, no less. We know this because we have looked very hard at it and found absolutely no "secret sauce" in the form of anything inexplicable.

    AI is a tough problem, and no doubt it'll be tough to find the first solution to it; but we do have hints, as in, how other brains are constructed, and so we're not running completely blind here. Also, a lot of people are working on, and interested in, solutions.

    The claim that AI will never come is squarely in the class of "flying is impossible", "we'll never break the sound barrier", "there's no way we could have landed on the moon", "the genome is too complex to map", and "no one needs more than 640k." It's just shortsighted (and probably fearful) foolishness, born of superstition, conceit, and hubris.

    Just like all those things, those who actually understand science will calmly watch as progress puts this episode of "it's impossible!" to bed. It's a long running show, though, and I'm sure we'll continue to be roundly entertained by these naysayers.

  • by Anonymous Coward on Wednesday May 28, 2014 @03:51PM (#47112241)

    There's a big difference between "Hmm, what would happen if nuclear power cells existed and we could build a computer the size of a planet!?!" and "This is the specific scientific path that will lead us to that future."

    Literature of any form can enlighten, provoke, and illuminate. But confusing "What if?" with "This is the way it will happen!" prophecy is fucking stupid.

  • Stupid? (Score:2, Insightful)

    by pitchpipe (708843) on Wednesday May 28, 2014 @03:52PM (#47112263)

    I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory.

    "Stupid"? That's just fucking asinine. "The Singularity" has many incarnations, some of which are plausible and others downright unbelievable, but to say it is "stupid" makes you sound stupid. The various models of the singularity have been argued as both likely and impossible by equally intelligent people. I take offense at the word.

  • by Anonymous Coward on Wednesday May 28, 2014 @03:57PM (#47112323)

    Your point about the Singularity is totally right. The idea that robots or AI are a requirement tells me the original author has not read much Singularity SF.

    Your second point about society made me laugh. At one point I was working as the person who opens the kitchen in the morning at Arby's, and as I did this I noticed how easy it would be to replace 90% of my work with present-day robots. When I pointed this out to the other workers they laughed and said their jobs were safe for the rest of their lives.

    Funny, that is what I was told when I worked at GM on the truck line, and now those jobs are gone. Not to another country; the robots replaced the humans.

  • by Mordok-DestroyerOfWo (1000167) on Wednesday May 28, 2014 @04:02PM (#47112379)

    Hey! I want my transporters, warp drives, and a galaxy full of humans-with-extra-bumps-embodying-a-particular-stereotype, and I want these things NOW!

    I would trade all of that for one Orion slave girl.

  • by MozeeToby (1163751) on Wednesday May 28, 2014 @04:03PM (#47112391)

    You're begging an important question with your argument, let me quote from the article to illustrate it.

    If you asked someone, 50 years ago, what the first computer to beat a human at chess would look like, they would imagine a general AI. It would be a sentient AI that could also write poetry and have a conception of right and wrong. And it's not. It's nothing like that at all.

    If you asked someone today what the first computer capable of designing an improved version of itself would look like, they'd say it would be a true AI. This is not necessarily true. You are assuming that designing a new, more powerful computer requires true intelligence. Maybe in reality it'll be a few-million-node neural network optimized with a genetic algorithm, such that the only output is a new transistor design, or a new neural network layout, or a new brain-computer interface.
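To make the parent's point concrete, here is a toy sketch of that kind of narrow optimizer: a fixed-topology neural net whose weights are evolved by a genetic algorithm, with no "true intelligence" anywhere in the loop. Every name and parameter here is invented for illustration, not taken from any real system; the task (fitting XOR) stands in for "design something better".

```python
import math
import random

def forward(weights, x):
    # Tiny fixed-topology net: 2 inputs -> 2 hidden (tanh) -> 1 output.
    w = weights
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # Negative squared error over the XOR table; higher is better.
    return -sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=300, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 5]  # truncation selection; best are kept (elitism)
        pop = parents + [
            [w + rng.gauss(0, 0.3) for w in rng.choice(parents)]  # Gaussian mutation
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)
```

Nothing in the loop "understands" anything; it just mutates, scores, and keeps the winners, which is exactly the shape of optimizer the parent is describing.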

  • by Curtman (556920) * on Wednesday May 28, 2014 @04:04PM (#47112405)
    I'd agree with that, except for L. Ron Hubbard who showed us all that sci fi can be dangerous.
  • by Geoffrey.landis (926948) on Wednesday May 28, 2014 @04:15PM (#47112543) Homepage

    The singularity, of course, is defined as the point where the function and all its derivatives approach infinity. There is another way to think of a singularity. If you are extrapolating a function based on a power series around a point, you can only expand that power series as far as the closest singularity ("pole") in the complex plane (the "radius of convergence"). You can't extrapolate further than that with a simple power series, even if you aren't trying to solve for the function at the pole itself.

    So, thinking science fictionally, we can't extrapolate the future based on the present any further than the distance to the singularity, even if our actual future doesn't in fact pass through the singularity.

    So, don't think of the technological singularity as a time when life for humans ends, and robots/artificial intelligences/transcended humans take over. Think of it as time scale beyond which we can't extrapolate the future based on what we know now.
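The radius-of-convergence point can be made concrete with a standard textbook example (not from the comment itself):

```latex
f(x) = \frac{1}{1 + x^2}
     = \sum_{n=0}^{\infty} (-1)^n x^{2n}
     = 1 - x^2 + x^4 - \cdots, \qquad |x| < 1
```

Here f is smooth and finite for every real x, yet the series diverges for |x| >= 1: the poles at x = ±i in the complex plane cap the radius of convergence even though the real axis never touches them. That is precisely the sense in which we "can't extrapolate past" a singularity our actual path never hits.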

  • by MozeeToby (1163751) on Wednesday May 28, 2014 @04:25PM (#47112725)

    There are more than a few people like that here.

    But Vernor Vinge isn't one of them. In his original paper, he used them to illustrate how difficult-to-comprehend concepts might conceivably play out. For example, he mentions that a singularity may play out over the course of decades or over the course of hours. Imagining how such massive changes could occur on a global scale in just a few hours is difficult, so he points the reader to a book whose author has already put time and effort into imagining how such a thing could play out and what some of the implications might be. He is using the book precisely as a thought experiment to examine an especially extreme part of what he is describing.

  • by pr0fessor (1940368) on Wednesday May 28, 2014 @04:25PM (#47112731)

    I think we have already been transformed by technology at a rapid pace. When you look at everyday technology like communications, portable devices, and data storage, in some ways we have already surpassed the science fiction I enjoyed as a kid. Things like the cell phone, tablet, and the micro sd card only existed in science fiction when I was a kid.

    If you grew up in the 70s or earlier I'm sure you can come up with a big list of everyday items.

  • If slavery is the only way you can get women, maybe you should spend less time watching ST and more time working on your people skills?

  • Re:Stupid? (Score:2, Insightful)

    by geekoid (135745) <dadinportland AT yahoo DOT com> on Wednesday May 28, 2014 @05:10PM (#47113265) Homepage Journal

    Fine. How will it be powered? Ever-increasing speed requires ever-increasing power, and the need for power increases faster than our ability to supply it.

  • by Anonymous Coward on Wednesday May 28, 2014 @05:12PM (#47113287)

    The irony being that the "slave girls" were secretly in charge of their society the entire time.

  • by gweihir (88907) on Wednesday May 28, 2014 @05:46PM (#47113583)

    Current state of the art is that writing the "specs" is about as hard as, or harder than, writing the code. And that has been the state for the last 50 years. This is unlikely to change anytime soon and may never change.

  • by wierd_w (1375923) on Wednesday May 28, 2014 @08:06PM (#47115107)

    Strange, that isn't how I would envision it at all. I would envision it as an iterative evolutionary process simulator, with parallel virtual instance simulators all simulating minor variations of itself using (at first) a brute-force algorithm over a range of possibly tweakable values, correlating and testing "improvement candidates" against a set of fixed criteria, assembling lists of changes, and restarting the process over again.

    Such models have already created wholly computer generated robots that are surprisingly energy efficient, if bizarre to look at.

    As humans get better at structuring problems into concrete sets of discrete variables, the better such programs will be able to run without human intervention.

    These "AIs" would not, in any practical sense, even remotely resemble the intelligence that humans have. They would have much more in common with exponential functions over large numbers of discretized terms, converging on local maxima in their solution domains.
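That "local maxima" behavior shows up even in a one-dimensional sketch (the objective function and step sizes below are invented purely for illustration): a greedy optimizer started on the wrong slope converges to the inferior peak and stays there.

```python
import random

def f(x):
    # Multimodal objective: a local peak at x = 0 (value 0)
    # and a better, global peak at x = 4 (value 1).
    return -(x ** 2) if x < 2 else 1 - (x - 4) ** 2

def hill_climb(x, steps=200, step_size=0.1, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if f(candidate) > f(x):  # greedy: accept only strict improvements
            x = candidate
    return x
```

Started at x = 0.5, the climber settles near the local peak at 0 and never discovers the better peak at 4; started at x = 3.0, it finds the global one. Real evolutionary optimizers use populations and mutation to soften this, but the underlying tendency the parent describes is the same.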

  • by hey! (33014) on Wednesday May 28, 2014 @08:08PM (#47115123) Homepage Journal

    We've already bettered typical human cognition in various limited ways (rote computation, playing chess). So in a sense we are already living in the age of intelligent machines, except those machines are idiot savants. As software becomes more capable in new areas like pattern recognition, we're more apt to prefer reliable idiot savants over somewhat capable generalists.

    So the biggest practical impediment to creating something which is *generally* as capable as the human brain is opportunity costs. It'll always be more handy to produce a narrowly competent system than a broadly competent one.

    The other issue is that we as humans are the sum of our experiences, experiences that no machine will ever have unless it is designed to *act* human from infancy to adulthood, something that is bound to be expensive, complicated, and hard to get right. So even if we manage to create machine intelligence as *generally* competent as humans, chances are it won't think and feel the same way we do, even if we try to make that happen.

    But, yes, it's clearly *possible* for some future civilization to create a machine which is, in effect, equivalent to human intelligence. It's just not going to be done, if it is ever done, for practical reasons.
