
The Singularity Blinds Sci-Fi 603

foobsr writes "Popular Science has an article discussing the growing difficulty sci-fi writers encounter when it comes to extrapolating current trends. Doctorow and Stross, both former computer programmers, are held up as prototypes of a new breed of guides to a future that, thanks to Vinge's Singularity, might not include humanity once a proper super-intelligence - perhaps a Matrioshka Brain - has been created."
This discussion has been archived. No new comments can be posted.

The Singularity Blinds Sci-Fi

Comments Filter:
  • Okay (Score:4, Insightful)

    by lukewarmfusion ( 726141 ) on Sunday August 15, 2004 @02:36PM (#9975019) Homepage Journal
    I'm sorry if this is too off-topic, but that story summary made absolutely no sense to me. I'm not a scientist, but I've got a decent education in science. I'm also a fan of sci-fi books, short stories, television, and movies... what am I missing? Or, what should I be reading/watching so that this stuff isn't so far over my head?
    • Re:Okay (Score:2, Insightful)

      by Anonymous Coward
      no shit. it was "trying" to sound smart.

      it ended up sounding like total ass.

      some people work really hard at keeping up the "i'm a mental giant" facade.
    • Re:Okay (Score:5, Funny)

      by ashot ( 599110 ) <ashot@noSpAm.molsoft.com> on Sunday August 15, 2004 @02:45PM (#9975047) Homepage
      the article?
    • Re:Okay (Score:5, Insightful)

      by Txiasaeia ( 581598 ) on Sunday August 15, 2004 @02:45PM (#9975055)
      Problems in extrapolating recent trends - for example, Neuromancer by William Gibson (written in 1983/4) is supposed to be set sometime in 2020 (I think), but there are no cell phones, despite the fact that cell phones are ubiquitous devices and will certainly be around in the *real* 2020. He didn't see that one coming. This is the problem the article is talking about.
      • Re:Okay (Score:3, Insightful)

        by ashot ( 599110 )
        essentially if these people could extrapolate trends to the future accurately they would not be wasting their time writing sci-fi books.
        • Re:Okay (Score:5, Insightful)

          by tgibbs ( 83782 ) on Sunday August 15, 2004 @07:04PM (#9976441)
          essentially if these people could extrapolate trends to the future accurately they would not be wasting their time writing sci-fi books.

          Right! People who can really predict future trends--like the development of satellite-based communications, for example--wouldn't waste their time writing science fiction.

          Oh, wait....
          • Re:Okay (Score:5, Insightful)

            by Anonymous Coward on Sunday August 15, 2004 @11:25PM (#9977560)
            Sure, but at the same time Clarke predicted things like we'd be reaching for the far planets by now (2001, anyone?) and other things which have turned out wrong, and didn't predict the Internet, or the space shuttle disasters, etc.

            The guy's complaint isn't that sci-fi writers don't sometimes get it right (an infinite number of monkeys pounding on an infinite number of keyboards...), but that they can't be expected to be mystic seers, or else they'd be working for Wall Street. Complaining that Gibson didn't anticipate cell phones before 2020 is just lame, because (good) science fiction isn't really about the technology, but about man's and society's interaction with the technology and the future. In which case, it doesn't really matter what the technology is; it could be mysterious gadget X, as long as what gadget X does is well-defined.

            For example, in Asimov's robot stories, he defines a gadget X that follows the 3 laws of robotics. He never provides detailed technical drawings or any expectations that such robots will be created (certainly not in the near future), but the conceit nonetheless provides a rich basis for a large number of stories exploring the ramifications.

            The technology in science fiction is a means to an end, not the end itself. The technology serves the purpose of the plot, not the other way around. Thus its existence is dictated by the plot, and whether or not it is truly predictive of future trends is largely immaterial. Good science fiction generally only tackles a few disruptive ideas at a time, and the rest is just backfill to maintain a suitably futuristic atmosphere.

            Besides, in the long run, all technologies are transient. By 2100, we may not be using communication satellites anymore, rendered obsolete by technology Q, a high-capacity computer network of digital packet radios communicating using Q particles travelling faster than light (yes, I just made that up; don't hold your breath waiting for my prediction to come true). OMG, why didn't Arthur C. Clarke anticipate technology Q by the year 2100? He sucks! All his science fiction now sucks, too!
      • Re:Okay (Score:5, Insightful)

        by torpor ( 458 ) <ibisum AT gmail DOT com> on Sunday August 15, 2004 @03:06PM (#9975174) Homepage Journal
        duh ... you're the one who is lousy at extrapolating trends.

        point 1: it's not 2020 yet.

        point 2: cell phones are rapidly becoming computing devices. by 2020, they may well be the only computing device you need.

        i know i'm currently shopping for a new cell phone that can handle my e-mail needs ...
        • Re:Okay (Score:3, Informative)

          by Doppler00 ( 534739 )
          T-Mobile with a Nokia 3660. I can check my POP e-mail from Comcast. Cool stuff. Also have a wireless headset, so I think I'm pretty much adopting technology that will be commonplace by 2010. Can't imagine what 2020 will be like.
      • Re:Okay (Score:4, Insightful)

        by 0racle ( 667029 ) on Sunday August 15, 2004 @04:48PM (#9975746)
        Funny, that, since they're just writers. They're not scientists, just writers. Few things irritate me more than someone holding up a sci-fi writer as some sort of visionary; if they actually did get something right, it's because it was the obvious thing, or a fluke. They're not brilliant geniuses.
        • Re:Okay (Score:5, Interesting)

          by Squiffy ( 242681 ) on Sunday August 15, 2004 @05:22PM (#9975910) Homepage
          Actually, a good number of science fiction writers are scientists. Gregory Benford, David Brin, and Alastair Reynolds are all currently employed as scientists, for example. Isaac Asimov was a scientist as well.

          Furthermore, any novelist worth his/her salt does a lot of research to make sure they know what they're talking about. So when they get the future right, it's a well-informed guess, not so much a fluke.

          I'll agree that they aren't necessarily brilliant geniuses, though.
        • Re:Okay (Score:5, Insightful)

          by tgibbs ( 83782 ) on Sunday August 15, 2004 @07:17PM (#9976504)
          Funny, that, since they're just writers. They're not scientists, just writers. Few things irritate me more than someone holding up a sci-fi writer as some sort of visionary; if they actually did get something right, it's because it was the obvious thing, or a fluke. They're not brilliant geniuses.

          Actually, scientists are not really in the business of predicting the future. Scientists tend to have relatively short perspectives: "What can I do now to increase our understanding?" Most scientists are specialists, knowing a great deal about a narrow area of study. This is often what you need to make progress, but it doesn't necessarily help you see the shape of the future. A writer of hard science fiction has to be familiar with many areas of science to come up with novel ideas for stories. And while they may not be scientists themselves, what they write needs to be scientifically plausible, because a lot of their readers are, and don't hesitate to point out errors (like Niven's unstable Ringworld).

          And sometimes, I think, SF writers may even help to make the future. Scientists read science fiction, and may take an interest in pursuing some of the ideas they read about in more rigorous ways. I can't help wondering how many of the guys now working on quantum "teleportation" were influenced by Star Trek's transporter....
      • Re:Okay (Score:3, Insightful)

        by Finkbug ( 789750 )
        The greatest misunderstanding of SF, often even by those writing it, is that it is a predictive form. It's not. It describes (or at least should describe) situations, with the suppositions leading to scene setting & story. Ralph 124C 41+ predicted night baseball games amid its junkyard of failed futurism. Who cares? There can be a visceral thrill for both author and reader in grabbing the Soon Now by the throat and trying not to get bucked off (read Spinrad about Russian Spring, his near future novel predicting t
      • It's been a while since I read Neuromancer, but just because something is NOT mentioned, doesn't mean that it is not around.

        I can't remember any stories where the characters use the toilet, but I assume they still crap in the future.

        Maybe we can assume cell-phones are like crappers; everywhere and not worth mentioning.

      • Re:Okay (Score:4, Insightful)

        by RedWizzard ( 192002 ) on Sunday August 15, 2004 @09:39PM (#9977119)
        Problems in extrapolating recent trends - for example, Neuromancer by William Gibson (written in 1983/4) is supposed to be set sometime in 2020 (I think), but there are no cell phones, despite the fact that cell phones are ubiquitous devices and will certainly be around in the *real* 2020. He didn't see that one coming. This is the problem the article is talking about.
        It has never been the goal of science fiction to predict specific technological advances. SF is about exploring the consequences of advances, regardless of whether the advance is likely or even possible. Sometimes SF has predicted real advances, often because the fiction provided inspiration to the inventors, but those cases are more of a happy coincidence than any deliberate attempt to anticipate the future. A few authors have attempted to predict possible advances, Arthur C. Clarke being the obvious one, but when they do so it's usually in essay or editorial form rather than as a story.
    • Bingo (Score:5, Insightful)

      by Benwick ( 203287 ) on Sunday August 15, 2004 @02:46PM (#9975066) Journal
      I'm a writer and a programmer and I didn't understand the description either.

      One thing I can say, though, is that fiction doesn't have to be true. Hence the name! Basing what science fiction authors can or cannot do on what is likely to happen in the future is absurd. I know someone will say that truth is stranger than fiction, and that fiction must hew close to the truth. Anyone who actually takes that pap seriously should not be reading sci-fi (hard or otherwise) or any other form of fiction, for that matter, since it is speculative. (Blah, blah, blah, probability, spare me. Prove to me that Genghis Khan did not come from a distant galaxy.)

      The real assumption is that there is macro-truth (background, history, physics, etc.) and micro-truth (characters behaving, their interactions, etc). If the term fiction can apply, authors should be given the liberty to fake whatever they please. (And again, spare me any argument involving economics and who is going to read a book about talking toasters from the 35th century, etc..)
      • Re:Bingo (Score:5, Insightful)

        by johannesg ( 664142 ) on Sunday August 15, 2004 @03:21PM (#9975245)
        The idea here is that SF works by extrapolating from our current situation, not so much in terms of technology but rather our social situation (think about it: all the good SF books use SF as a vehicle to examine the human condition from a unique angle). The singularity, in this context, is an event that will change our society beyond recognition, and probably almost overnight. What that event could be, or even whether we will ever see it, is of course subject to speculation, but it is not outside the realm of the possible and it may even be close (i.e. somewhere in the 21st century). Now, the very nature of the singularity makes it impossible to predict what our society will look like afterwards. For this reason SF cannot continue to extrapolate from current society to build a believable future society - it is blinded.

        As for what the singularity could be, there are plenty of options. Development of a working nano assembler might do it (manufacturing capabilities would instantly become meaningless, since we would be able to produce enough of _everything_ for _everyone_. Don't tell me that won't change things...). Development of an AI would probably also do it, since it could itself develop better, faster versions - faster than we could ever hope to keep up with. Or there is contact with an alien race. Perhaps even something as mundane as the FTL drive or anti-gravity... Anyway, the singularity is rather fascinating, even though it is itself SF for now ;-)

        • Essentially, the expansion of the internet into almost every country and the continued growth of open-source software methods have created a sort of "mini-singularity".

          Through cooperation and collaboration on the internet, people have the ability to create and expand software much more rapidly than could have been conceived of... even as late as the 1990s.

          As internet service is expanded to more and more sections of the world, and as computer literacy keeps rising, expect this trend to develop exponential

          • >Who needs to manufacture a super-human machine intelligence, when you already have 6 billion Human beings that you can link into a cluster?

            Has anyone else noticed how much Google works like a human mind? It has associative retrieval and makes its "memories" more accessible the more they are used. And its knowledge base is a non-microscopic fraction of what humanity knows.

            >And aren't the rapid development of things like the wikipedia, GNU tools, the linux kernel, and so on, a result of this new clus
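            A toy sketch of that kind of "associative, use-strengthened" retrieval (a minimal Python illustration; the documents and the scoring formula are invented for the example, and this is emphatically not how Google actually works):

                import math
                from collections import Counter

                docs = {
                    "kernel-howto": "linux kernel build guide",
                    "wiki-singularity": "technological singularity overview",
                    "gnu-tools": "gnu toolchain reference",
                }
                uses = Counter()  # how often each "memory" has been recalled

                def search(query):
                    words = set(query.split())
                    scored = []
                    for doc_id, text in docs.items():
                        relevance = len(words & set(text.split()))
                        if relevance:
                            # frequently used memories get a growing boost
                            boost = 1 + math.log1p(uses[doc_id])
                            scored.append((relevance * boost, doc_id))
                    return [d for _, d in sorted(scored, reverse=True)]

                print(search("linux kernel singularity"))  # ['kernel-howto', 'wiki-singularity']
                uses["wiki-singularity"] += 5               # recall it a few times
                print(search("linux kernel singularity"))  # ['wiki-singularity', 'kernel-howto']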
        • Re:Bingo (Score:5, Insightful)

          by NoMoreNicksLeft ( 516230 ) <john.oyler@ c o m c a st.net> on Sunday August 15, 2004 @05:48PM (#9976040) Journal
          Development of a working nano assembler might do it (manufacturing capabilities would instantly become meaningless, since we would be able to produce enough of _everything_ for _everyone_. Don't tell me that won't change things...).

          Of course it will change everything. I expect half the world to starve in the months after that event. Current trends in intellectual property law point to that already.
      • Re:Bingo (Score:5, Interesting)

        by tgibbs ( 83782 ) on Sunday August 15, 2004 @03:42PM (#9975388)
        One thing I can say, though, is that fiction doesn't have to be true. Hence the name! Basing what science fiction authors can or cannot do on what is likely to happen in the future is absurd.

        However, the article is referring to a particular kind of science fiction (sometimes called "hard" SF) which is based upon realistically extrapolating current technology and trends into the future.



        The problem is that reasonable extrapolation along a number of pathways leads to a future that is so alien that it is difficult to imagine, and even more difficult to think of anything to write about that would be entertaining to modern readers. The deeper problem is that humanity as we know it may not exist for much longer.

        However, both Vinge and Stross have found literary ways around the singularity. Sort of the science fiction equivalent of "Left Behind." That is, even if the singularity occurs, it might not take everybody.

    • Re:Okay (Score:5, Informative)

      by Indras ( 515472 ) on Sunday August 15, 2004 @02:58PM (#9975125)
      I actually ran into all of the talk about the singularity by asking the question: What is the meaning of life? More specifically, I asked Jeeves [ask.com].

      The first result he comes up with (this one [ask.com]) is an FAQ on the meaning of life. Part of the question of the meaning of life is an eventual goal, something to reach towards. One of the options discussed is the Singularity.

      The best place for more info is the Singularity Institute [singinst.org]. Their definition of the Singularity is the technological creation of smarter-than-human intelligence. This could happen by any possible means: overclocking the human mind, creating artificial intelligence which is smarter than humans, or some combination thereof (such as uploading human minds to computers to run at a faster rate).

      Read the FAQ. It'll clear up your basic questions, and doubtless leave you with many more.
      • Re:Okay (Score:2, Interesting)

        by Eric604 ( 798298 )
        such as uploading human minds to computers to run at a faster rate

        Upload a mind to the computer, run it, pull the plug, and you just killed someone. Perhaps this kind of research should be disallowed; it's sort of murder...

        • Re:Okay (Score:5, Insightful)

          by Indras ( 515472 ) on Sunday August 15, 2004 @03:38PM (#9975364)
          I've actually thought about that a lot. I mean, seriously, if your mind is running in a computer program, then it must have a way to start up or shut down, which means it saves to a file rather than running in RAM continuously (except maybe MRAM, but it still must be able to "boot" the first time).

          Therefore, if you were chatting with a person in a computer and said something that ticked them off and they refused to talk to you anymore, simply shut it down, restore from backup, and restart. Murder? Not really, there's no death. I think it's worse.

          And think of the first person who has this procedure done. How many times will his/her processes have to be shut down and restarted, or how many simultaneous instances would be run?

          I wholeheartedly agree with you, this should be disallowed, but it's not murder.

          But then again, if a human intelligence, even a copied one, is too precious for us to research with, then who is to say a created (artificial) intelligence is any less precious?

          One or the other is going to happen eventually. We need to be prepared for that day. Much like the first cloned human.
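          As a toy illustration of the shutdown/restore scenario described above (a minimal Python sketch; the UploadedMind class and its behavior are hypothetical, since nobody can actually serialize a mind):

              import pickle

              class UploadedMind:
                  def __init__(self, name):
                      self.name = name
                      self.memories = []

                  def experience(self, event):
                      self.memories.append(event)

              mind = UploadedMind("alice")
              mind.experience("pleasant chat")
              backup = pickle.dumps(mind)   # checkpoint the running "mind" to bytes

              mind.experience("argument; refuses to talk to you")
              mind = pickle.loads(backup)   # shut it down, restore from backup, restart

              print(mind.memories)          # ['pleasant chat'] -- from its point of view,
                                            # the argument never happened

          The ethically queasy part is exactly what the sketch makes visible: the restored copy carries no trace of the discarded state.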
          • Re:Okay (Score:4, Informative)

            by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Sunday August 15, 2004 @05:34PM (#9975973) Homepage

            You ought to read "Permutation City" by Greg Egan. It's about things like this, and takes them to an extreme conclusion.

          • Re:Okay (Score:3, Interesting)

            by sql*kitten ( 1359 ) *
            Therefore, if you were chatting with a person in a computer and said something that ticked them off and they refused to talk to you anymore, simply shut it down, restore from backup, and restart. Murder? Not really, there's no death. I think it's worse.

            Yes, but would you have a real person running on the Linux box in your bedroom?

            If this ever happens on a large scale, the uploaded "people" will live in a secure datacentre, probably buried under a mountain or something, and they will do work (i.e. creating
    • The "singularity" is one of the favorite wet dreams of the "transhumanists", a group of spoiled adults who seemingly find it difficult to tell reality and science fiction apart. The "theory" is that human progress is going so "fast" (nevermind that progress is qualitative, and any supposed measurement is an arbitrary procedure), that before we know it we're going to reach the "singularity"-- the point where it accelerates beyond our capability to understand it. Typically thanks to our having built machine
      • Re:In a nutshell (Score:5, Informative)

        by tgibbs ( 83782 ) on Sunday August 15, 2004 @03:55PM (#9975458)
        The "singularity" is one of the favorite wet dreams of the "transhumanists", a group of spoiled adults who seemingly find it difficult to tell reality and science fiction apart.

        Indeed, this can be difficult even for scientists who read the physics literature. Much of what was regarded as science fiction in the 50's is fact today, including some things that were generally considered to be fantasy at one time, like beam weapons. Physicists are carrying out serious experiments on quantum teleportation, and methods of transmitting information (random information, but still information) faster than light.

        Now there are multiple lines of serious investigation, any one of which could lead to massive transformation not merely of human culture (such as happened so recently with the internet, and was predicted by hardly anybody), but also of humanity itself:

        -AI
        -Genetic modification of human beings
        -Direct man/machine interfaces
        -Nanotechnology

        Perhaps any one of these will not pan out. AI progress has moved fairly slowly of late. On the other hand, neurobiology has been booming along, and there seems little doubt that it will eventually be possible to simulate brain function. I can understand why writers are finding it difficult to extrapolate far into the future; it is simply hard to imagine that all of these will stall out.
    • Re:Okay (Score:5, Informative)

      by samantha ( 68231 ) * on Sunday August 15, 2004 @03:26PM (#9975284) Homepage
      A gentle but fairly thorough taste can be found in Kurzweil's "The Age of Spiritual Machines". Also check out http://www.kurzweilai.net/meme/frame.html?m=1 [kurzweilai.net]
      http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html [caltech.edu]
      http://www.aleph.se/Trans/Global/Singularity/ [aleph.se]

      I am sure interested entities can google more.
  • The Singularity will probably occur around the time scientists can pinpoint the human soul and consciousness. Much of the PopSci article involves creating an electronic copy of the human brain and possibly connecting a chip wired with one's mind to a human body minus the brain. One fictional space traveler mentioned leaves behind a copy of herself on Earth and uploads her brain into a small virtual spaceship. This leads to questions such as: will the traveler on the spaceship be conscious, or will it be a mechaniz
    • Actually, that sort of thing has been done before. Read Daniel Dennett's Where Am I?; it is a great and thought-provoking read.
  • by BubbaThePirate ( 805480 ) on Sunday August 15, 2004 @02:42PM (#9975040)
    "I Have No Karma and I Must Troll".

    Sheer terror I tell you!

  • by T-Kir ( 597145 ) on Sunday August 15, 2004 @02:44PM (#9975045) Homepage

    Another author for ya: Greg Egan. I never got to finish Quarantine, but good science fiction like his tries to make you think 'outside of the box' compared to your usual spaceship/futuristic fare.

    Mind, I don't read many books for fun... the last book I actually bought was The Butlerian Jihad; I got halfway through it before I realised the Dune Prelude series was a pile of steaming crap.

    Just my $0.02

    • by superdan2k ( 135614 ) on Sunday August 15, 2004 @02:50PM (#9975088) Homepage Journal
      Also worth investigating is John C. Wright's Golden Age trilogy... I bought the first book to read on my Vegas trip (honeymoon) next week and have already ripped through it. It's set 10,000 years in the future, where the Singularity, if it hasn't already happened, is damned close.

      The books (in order) are:

      * The Golden Age
      * The Phoenix Exultant
      * The Golden Transcendence

      That said, the first 50 pages of the first book are a little tough going, given that Wright is painting a really alien picture and forcing you to catch up with his terminology, but in the end it's worth it. Having just started the second book, I can tell you that one of the major themes is socialism vs. libertarianism and, as a subset of that, personal responsibility to society.
    • Interesting bit here:

      In the Chequers, Doctorow mentions the original title for one of the novels he's working on, a story about a spam filter that becomes artificially intelligent and tries to eat the universe. "I was thinking of calling it /usr/bin/god."

      "That's great!" Stross remarks.

      Well, great for those who know that "/usr/bin" is the repository for Unix programs and that "god" in this case would be the name of the program, but a tad abstract for the rest of us. This tendency can make for difficult

  • by Anonymous Coward on Sunday August 15, 2004 @02:44PM (#9975046)
    When reading through the article's talk of the Singularity ushering in a posthuman era of genetic modifications, human implants, and computer brains that exceed people's own abilities, I remembered a hugely popular story from 1999 that dealt with all of these issues and more. What book did the story appear in? It didn't appear in any book. Was it at the multiplex? No, you didn't watch it in theatres (neither live nor screened) or on television.

    You played it on your computer. That game was Deus Ex.

    I think the article was narrow-minded in that it was expecting modern science fiction to surface in the same medium as it had in its heyday. (Remember too that except in the U.S., most of the world had a serious paper shortage in the late 40s and early 50s following the war, so the print industry today isn't necessarily equipped to be the proper breeding ground.) But science fiction comes in the form of computer games (single player or MMORPG), little Flash animations, and the like. The "authors" of Deus Ex imagined a future world that had much of what the article was yearning for, and maybe the authors of the article just need to accept that storytelling can take differing forms.
  • Eh.... (Score:3, Insightful)

    by Fnkmaster ( 89084 ) on Sunday August 15, 2004 @02:47PM (#9975068)
    I'm not sure how impressed I am with that Vinge piece. In order for computers to start thinking like humans, we first have to be able to properly understand and model how humans think. The computers, no matter how massive the computational power available to them is, aren't going to spontaneously "wake up" (what the hell is he talking about there?) and develop consciousness - humans developed consciousness because brains evolved via very complex evolutionary mechanisms over millennia - mechanisms that computers don't exhibit or use.


    His assertion that this depends on the progress of computing hardware seems absurd to me. We already have as much computing hardware as we need, since computing hardware is all essentially capable of handling Turing-complete computation (in the lax sense of the phrase; obviously computational power and storage are finite, but not so limited that they're hampering our ability to simulate human intelligence).


    Then he makes the assumption that if we are able to create a human-level artificial intelligence (which is itself a somewhat ill-defined concept), it will be able to figure out how to improve itself to be substantially "better" than human intelligence. But do we really have any metric for what that even means? I mean, we still don't have a firm grasp on even measuring human intelligence very well.


    I am not saying his scenario is impossible or that it won't happen. Computers can already do certain tasks far better than humans, and that will continue to be the case. He seems to want a program capable of designing other programs. Is the first program Turing-test passing? Is it "smarter" than humans because it is better at recognizing patterns and reacting to them? Or smarter because it can generate and test hypotheses more rapidly? I feel very uncomfortable with drawing lots of conclusions about the future rate of progress of a topic that feels so ill defined to me.


    I agree that mastering consciousness and thought, and understanding the human brain will be one of the next great frontiers of science, and with that mastery ought to eventually come much better ability to simulate it in silico. But I'm not willing to speculate too much farther ahead than that.
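    For reference, "Turing-complete computation" in the strict sense just means hardware that can run something like the following (a minimal Turing machine simulator in Python; the rule-table encoding is an arbitrary illustrative choice):

        def run_tm(rules, tape, state="start", pos=0, max_steps=10_000):
            """Run a one-tape Turing machine; '_' is the blank symbol."""
            cells = dict(enumerate(tape))
            for _ in range(max_steps):
                if state == "halt":
                    break
                write, move, state = rules[(state, cells.get(pos, "_"))]
                cells[pos] = write
                pos += {"L": -1, "R": 1}[move]
            return "".join(cells[i] for i in sorted(cells)).strip("_")

        # A machine that increments a binary number: scan right, then carry left.
        rules = {
            ("start", "0"): ("0", "R", "start"),
            ("start", "1"): ("1", "R", "start"),
            ("start", "_"): ("_", "L", "carry"),
            ("carry", "1"): ("0", "L", "carry"),
            ("carry", "0"): ("1", "L", "halt"),
            ("carry", "_"): ("1", "L", "halt"),
        }
        print(run_tm(rules, "1011"))  # -> 1100, i.e. 11 + 1 = 12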

    • Re:Eh.... (Score:4, Insightful)

      by samantha ( 68231 ) * on Sunday August 15, 2004 @03:32PM (#9975325) Homepage
      "Human level" or greater does not mean the AI "thinks like humans". It means it has equal or greater modeling, problem-solving, creativity, and so on, plus self-awareness (self-modeling). If it also has the ability to self-improve/self-evolve, and if it can take advantage of the at least 10^6-times-faster switching of the hardware it is built upon, then it is not difficult at all to project that we won't be in Kansas anymore. Mere human-level AI would change the world drastically. Beyond that, the Singularity is almost inescapable.
      • Re:Eh.... (Score:4, Interesting)

        by Fnkmaster ( 89084 ) on Sunday August 15, 2004 @04:23PM (#9975614)
        I mentioned thinking like humans only because the Turing test is at least a quantifiable metric for what most people mean when they talk about AI. And with the kind of human-like assumptions embedded all over this work, I have to assume that any such super-human AI would, at a bare minimum, be able to pass a Turing test.


        In any case, regardless, I recognize the possibility of non-humanlike AI, but then we enter the realm of unquantifiable BS. How do we measure modelling, problem-solving, and creativity abilities (other than by something that ends up looking shockingly like a Turing test)? What do those words mean outside of the human context? As I pointed out in another post, outside of very limited, constrained problem domains, we don't have any idea how to wire something up that can do even sub-human "problem-solving" or "modelling". The field of AI has provided lots of great algorithms that turn out to do a decent job at near-human-quality work in very limited domains, or much-less-than-human-quality work in slightly less limited, but still very constrained, domains. The field of consciousness research, which aims to understand and, presumably, eventually model the human brain, is still nascent.


        I trust the instinct of Francis Crick, who spent the last years of his life working on this problem, that it will be a huge problem that dogs science for years to come. Just like how Einstein spent his last years looking for a TOE - guess what, here we are decades later, and we are _slightly_ closer, but basically up against a brick wall.


        I recognize the ability (in theory) to self-improve or evolve rapidly in software would make a "Singularity" type of scenario at least conceivable (assuming there are no other barriers to this sort of rapidly improving digital intelligence) if you can get past the humongous hurdles in getting there. I just don't think it's likely to happen in the next 10 or 20 or 30 years. And beyond that, I prefer not to speculate, or at least not to pretend that my speculations are much more than pure science fiction themselves.

    • Re:Eh.... (Score:4, Insightful)

      by EvilTwinSkippy ( 112490 ) <{yoda} {at} {etoyoc.com}> on Sunday August 15, 2004 @06:43PM (#9976325) Homepage Journal
      In order for computers to start thinking like humans, we first have to be able to properly understand and model how humans think.

      So I guess medieval "engineers" would have had to grasp the concepts of momentum and potential energy before the catapult was invented, and prehistoric man would have had to have grokked thermodynamics before fire was created.

      No, no, no, no, no. Intellectuals have the problem backwards. Historically, mankind goes out and does something, and only later do we understand HOW we did it. Look at the invention of the transistor. Alchemists predate chemistry by millennia. The profession of engineering is derived from their work on siege engines. (Shakespeare uses the term "engineer" in his plays a full century before modern physics was formulated by Newton.)

      Some team, or a lone crackpot, is going to develop a thinking machine as a side effect of some other project, and 30 years later science will formulate a theory about how it works.

  • As I've said elsewhere [geocities.com]:

    A vital side note: Heinz von Foerster had published a paper in 1960 on global population: von Foerster, H., Mora, P. M., and Amiot, L. W., "Doomsday: Friday, 13 November, A.D. 2026," Science 132, 1291-1295 (1960). In this paper, Heinz shows that the best formula describing population growth over known human history is one that predicts the population will go to infinity on Friday the 13th of November, 2026. As Roger Gregory [thing.de] likes to say, "That's just whacko!" The problem is, a
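    For concreteness, the commonly quoted fit from that paper is N(t) ≈ 1.79e11 / (2026.87 - t)^0.99, which diverges as t approaches late 2026. A quick check of the constants as usually cited (take them as reported, not re-derived here):

        def doomsday_population(year):
            # von Foerster et al. (1960) hyperbolic fit to world population
            return 1.79e11 / (2026.87 - year) ** 0.99

        for year in (1960, 2000, 2020):
            print(year, f"{doomsday_population(year):.2e}")
        # 1960 -> ~2.8e9, close to the real ~3.0e9; the curve then blows up
        # as the denominator goes to zero -- hence "Doomsday ... A.D. 2026".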

  • Doctorow was never a programmer. He is relatively uneducated: no college, and he went to an "alternative" high school (meaning: the school for uneducable problem children).

    Unfortunately, Cory is also unschooled in the classic sci-fi genres. If he knew more about his own field, he would know his own style becomes dated more quickly than any other style. Basing one's work on current perspectives of the future is the surest way to make your work obsolete before it's ever published. As Roddenberry said, "nothing becomes
    • The very point of these writers is that such a perspective is itself doomed. Roddenberry's quote will be, by their guess, even more true than it was before, because writing on perspectives of human nature will no longer have any application if human nature has revised itself dramatically. Given the choice between writing about something (classic SF) that is expected to become obsolete soon, or writing about the expected beginning of a new era, these writers are choosing the latter over the former.
    • Re:Correction (Score:4, Interesting)

      by r_benchley ( 658776 ) on Sunday August 15, 2004 @03:25PM (#9975272)
      Excellent points. The best science fiction writers (IMNSHO) are the ones that extrapolate the future based on human behavior and motivations, rather than where we think our technology will take us. Good science fiction is not about predicting technological advances. It should read like non-fiction that hasn't occurred yet. My four favorite science fiction writers are Dick, Gibson, Stephenson, and Bester. Their novels have aged well, and seem to portray a pretty accurate picture of humanity's future because they all realize one thing: people do not change. Technological advances and trends aside, we are not that different from people thousands of years ago. Books like Do Androids Dream of Electric Sheep?, Neuromancer, Snow Crash, or The Demolished Man seem more and more likely, because the technological advances theorized about are secondary. We identify with the characters in books like these. These books address religion, corporate greed, politics, race relations, the military, etc. They seem plausible because the characters in these books act like we would. A good science fiction writer needs to make a few good extrapolations on where technology might be in the coming decades (nanotechnology, cloning, genetic modifications, interplanetary travel, worldwide computer networks, whatever), but the real value is in addressing the human factor. A hundred (or a thousand) years from now, people will still be bitching about the government, religion, and corporations. We will still be greedy and giving, petty and generous, cruel and kind. Human beings do not change. When writing science fiction, it is important to retain that insight into human nature if you want to accurately forecast where we are going.
  • by Jonathan ( 5011 ) on Sunday August 15, 2004 @02:52PM (#9975099) Homepage
    The article claims that suddenly technology is too hard to predict. I just don't see how that's new. The article mentions Clarke's idea of geosynchronous satellites, but that has to be one of the few technologies actually predicted by SF. In general, SF is pretty laughable when it comes to prediction. 1950's SF regularly had FTL travel and intelligent robots -- but people used slide rules -- computer technology was completely ignored. Even visionary 1960's writers like John Brunner, who predicted a sort of Internet, assumed that computers would be centralized and what everyone would have would just be terminals.
    • Were it not for Microsoft, that may very well have been the case today. And in fact, it may still very well be the case in the future. Near future. I can tell you there are only 2 applications I need to be able to replace where I work before this can happen (and when I replace those applications, it WILL happen, guaranteed).
    • by solios ( 53048 ) on Sunday August 15, 2004 @03:04PM (#9975162) Homepage
      I mean, you use your terminal (aka "web browser") to connect to the master server that holds the content and responds to your queries (aka the "web site") all the time, don't you? None of that stuff is actually on your home machine, you're just accessing it remotely...
    • Even visionary 1960's writers like John Brunner, who predicted a sort of Internet, assumed that computers would be centralized and what everyone would have would just be terminals.

      Actually, the way most people use the net... that's pretty much what they do have. My internet access basically refuses to provide any support for anything but a web browser. If you can get to Google through Internet Explorer, they consider your connection to be up... even if their router is randomly dropping TCP on any port but
    • Obviously, you have never read Jules Verne [wikipedia.org] or you would have known better. He predicted so many things it really makes your head spin. For starters:

      "In 1863, he wrote a novel called Paris in the 20th Century about a young man who lives in a world of glass skyscrapers, high-speed trains, gas-powered automobiles, calculators, and a worldwide communications network, yet cannot find happiness, and comes to a tragic end"

      A hundred years before the fact, he wrote about ubiquitous electricity, about submarines,

  • load of rubbish (Score:5, Interesting)

    by GuyFawkes ( 729054 ) on Sunday August 15, 2004 @02:53PM (#9975101) Homepage Journal
    because the vast majority of science fiction has always been "let's take present-day conscious and subconscious fears and talk about them in metaphors set in a future, so that we can discuss them without censorship or fear"

    On the other hand, there is a minority of good, hard, scientific science fiction, like Larry Niven's.

    In the year 3004 (assuming humans still exist) the vast majority of the human race will still be assholes, and if their personalities are downloaded into sugar-cube-sized computers they will be assholes with even less grip on reality than today's breed of assholes.

    I think I am going to patent a method for inflicting virtual pain / beatings / torture / death on these future embedded personalities, because it will be the only way to keep the bastards in line.

    A. E. van Vogt wrote a great novel, The Anarchistic Colossus, which dealt with the issues of advancing technology vs. human minds extremely well. Thoroughly recommended: despite the fact that it is 20 or 30 years old, there are many things in there that today's Slashdot readers will recognise as current actual concerns.
    • by Anonymous Coward
      I think I am going to patent a method for inflicting virtual pain / beatings / torture / death on these future embedded personalities

      Perl 9.
    • Re:load of rubbish (Score:3, Informative)

      by bcrowell ( 177657 )
      I'm surprised nobody has mentioned John Wright's Golden Age series. The article mentions Stross. I'm in the middle of reading Stross's latest novella, in Asimov's, and as far as I can tell, it's meant to be an outrageous parody of Wright. It's actually pretty funny, and Stross also gets his science right (to the extent that he commits himself to anything very specific), whereas Wright appears to have learned all his science from Star Trek, and seems to take himself entirely too seriously.
  • by selectspec ( 74651 ) on Sunday August 15, 2004 @02:56PM (#9975117)
    Read "Masks of the Universe" (1985) by Edward Harrison:

    Harrison's thesis is that the universe is infinitely complex and that we are no more aware of the inner workings of the universe than the ancient Greeks were.
  • Yawn (Score:4, Interesting)

    by sql*kitten ( 1359 ) * on Sunday August 15, 2004 @02:58PM (#9975121)
    Since the 70s, scientists and sci-fi authors have been promising that a revolution, including real AI, is "just around the corner". But the elusive breakthroughs recede further into the distance the more progress is made.

    There are plenty of contemporary sci-fi authors working in the near future, the next few decades or centuries: Alastair Reynolds, Richard Morgan, and Neal Asher being among the most notable. Reynolds in particular is very good - his future humanity colonizes the stars using a mix of cryogenics and relativistic time, no warp drives here.

    Also, he mistakes the point of pedantry. No one is bothered about whether the science is possible (yet), but any author worth his salt knows that the fictional technology must be CONSISTENT. A device can't act one way in one story and a completely different way in another, because if that happens, it's not sci-fi anymore but pure fantasy (and not even good fantasy). Sheer laziness and lack of talent on the part of the author.
  • Singularity (Score:5, Insightful)

    by Wes Janson ( 606363 ) on Sunday August 15, 2004 @02:58PM (#9975127) Journal
    The basic point I suspect the article is trying to make is thus: the field of speculative science fiction is no longer what it once was. Look back at the middle of the century, and you'll see that the predictive writings of science fiction authors all contained major assumptions about the social and cultural settings of the future. Even the ones that realized that fact, and tried to compensate, still failed for a lack of ability to predict. Absolutely no one in 1950 had an inkling of what the computer would do to society in fifty years. Looking at the history of science fiction, you see that while on occasion a few skilled authors make an accurate prediction or two, the vast majority of speculative sci fi fails dramatically to come close to reality. In the last two or three decades, it is generally considered that this situation has been growing steadily worse. Cultural changes are effectively impossible to predict long-term, because of their very nature (many small meme introductions over a long period of time), but now it becomes increasingly difficult to predict scientific and social changes. If the WWW had such an incredible impact on global economy within a span of nine or ten years, how can anyone hope to guess what will happen in eighty or ninety years?
  • Other books (Score:3, Insightful)

    by tcdk ( 173945 ) on Sunday August 15, 2004 @02:59PM (#9975133) Homepage Journal
    Just want to recommend Ken MacLeod's Newton's Wake [sfbook.com] as a post-singularity SF book.

    Singularity Sky [amazon.com] by Charles Stross should also be good, but I haven't read that one yet.
  • I saw an article on this just after Hawking conceded his bet a few weeks ago:

    Hawking Loses Bet; Sci Fi Fans Take It Up The Wormhole [ridiculopathy.com]

    here's the lead paragraph

    DUBLIN, IRELAND - At an address to a scientific conference earlier this week, renowned physicist Stephen Hawking reversed his long-held position about the inescapable nature of black holes. In conceding his bet to American colleague John Preskill, he declared that it now appears that these singularities do emit "mangled matter" over time and

  • Matrioshka Pulsar (Score:2, Insightful)

    by KrackHouse ( 628313 )
    Hey, if these Brains are to surround stars for power, they'd have to start their construction on both sides of the star to keep the increasingly massive brain from pulling the sun off-center. If that were the case, the two halves of the brain would have to orbit around the sun as they grew, until they connected. So depending on the speed of their orbit, before the two halves connected you'd get something that looked like a pulsar from our perspective.
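    For scale: by Kepler's third law, anything orbiting a Sun-like star at 1 AU takes about a year per revolution, so the "blinking" would be on month-to-year timescales, nothing like the millisecond-to-second periods of real pulsars. A quick check (standard constants, circular orbit assumed):

        import math

        G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
        M_SUN = 1.989e30   # solar mass, kg
        AU = 1.496e11      # Earth-Sun distance, m

        def orbital_period(a_m, m_kg=M_SUN):
            """Kepler's third law for a circular orbit of radius a."""
            return 2 * math.pi * math.sqrt(a_m**3 / (G * m_kg))

        print(f"{orbital_period(AU) / 86400 / 365.25:.2f} years")  # ~1.00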
  • One definition of superhuman intelligence is almost upon us. This definition is that a single entity is capable of directing human beings as if the humans had no free will. This scenario can easily be envisioned as marketing which is capable of selling whatever the marketeer wishes to sell, given that the feedback systems within large retailers are already capable of predicting the response of individual consumers with a high degree of accuracy. There would not need to be much of a shift in the effectivene
  • Lol (Score:2, Funny)

    by Anonymous Coward
    From the Matrioshka Brain page:

    "In general however, we may assume that current trends in" ... " should provide approximate human-brain equivalent computational capacity in desktop machines sometime between 2005-2010."

    lol! That's funny. Or laughable, even.

    To be fair, he didn't say full AI, just "computational capacity". But then he doesn't define what he means by that, and makes a wide, worthless generalization.

    If the rest of the paper is like that, this is just a bad sci-fi author trying to make peopl
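    The claim is easy to put numbers on. Using Moravec-style estimates (rough, contested assumptions: ~1e16 ops/s for the brain, ~1e10 FLOPS for a 2005 desktop):

        import math

        brain_ops = 1e16           # assumed brain-equivalent ops/s (estimates span 1e13-1e17)
        desktop_flops_2005 = 1e10  # rough 2005 desktop
        doublings = math.log2(brain_ops / desktop_flops_2005)
        print(f"~{doublings:.0f} Moore doublings short")  # ~20, i.e. ~30 years at
                                                          # 18 months per doubling, not 1-5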
  • Since aeiveos.com seems to have burst into flames, here [66.102.11.104] is the cached page from Google.
  • by Thangodin ( 177516 ) <elentar AT sympatico DOT ca> on Sunday August 15, 2004 @03:45PM (#9975405) Homepage
    There is no reason to assume that bipedal intelligent life will be rare. Consider the evolutionary trail we followed. Four-legged creatures walk and run very well, but six-legged creatures are problematic--they tend to stumble and jerk a lot. Not a problem if you're a small, light animal like an ant, but military research into six-legged ATVs was aborted because of this problem. The bigger the creature, the more pronounced the problem.

    Intelligent, tool-using animals must readapt at least some of their limbs into prehensile appendages. Given that their predecessors will probably begin with four legs, you end up with a creature that walks upright, with two limbs for manipulation and sense organs located high up for good vantage, close to the brain for high-speed transmission of information. In other words, humanoid.

    It is possible to start with eight legs and end up with six, or six and end up with four on the floor, and high gravity species may well take this route. But there is still that problematic number six before or after, and there is also the problem of energy expenditure of moving all those extra limbs, especially in high gravity.

    The singularity is a possibility, but the increasing ignorance of science, not to mention growing political naivety, threatens it. It is hard to build a vast distributed intelligence when ignorance seems to be growing more common. The singularity also threatens more archaic world views, which will become more militant as this threat becomes apparent to them. The singularity would either eradicate religion entirely, or become the dominant religion itself. This is the real root of the conflicts in the Middle East--an attempt to preserve what is essentially a medieval world view against the assault of modernity itself. The singularity is also partially dependent on the availability of energy. If we can make fusion work as a safe, cheap energy supply, we're home free. Otherwise the singularity may recede even if the science and technology are available to make it possible.

    There is one last problem with any vision of the future: if the prophet can understand the messiah, then the prophet is the messiah. The messiah here is any radical, Copernican revolution which changes the entire world view. You could not predict the theory of general relativity unless you already had it, that is, unless you had already worked it out yourself. Nearly all hard science fiction works upon the technological consequences of existing science. Science fiction fills in the blanks for things we know we should be able to do but cannot do yet. That target moves with each advance in science.

    Finally, most works of science fiction work by extrapolating current social and political trends, which can change suddenly and without notice. Cold War science fiction often extrapolated the Cold war into the far future; William Gibson's Neuromancer, written at the height of Japan's rise as an economic dynamo, had Japanese culture permeating all things western. This aspect of it has become somewhat dated. I suspect that a lot of science fiction writers might be tempted to extrapolate the current religious tensions into the far future. But I suspect that a lot of Muslims may be getting tired of being medieval peasants and having their neighbourhoods blown up by fanatics and the armies sent to fight them. This too could change, and the change may be very swift when it comes.
  • by RichardtheSmith ( 157470 ) on Sunday August 15, 2004 @04:34PM (#9975675)
    To be honest, I really hate articles like this. I predict that the future will be pretty much like the present, only with more people and more problems.

    SF utopians please note:

    - With regards to the human brain, we are just barely getting started. We can't cure or even partially remedy any of the diseases related to brain/nerve damage (strokes, Alzheimer's, cord injuries). The idea that we will ever be able to create Matrix-style VR or "upload" people's minds is just wishful thinking at this point.

    - We haven't solved the strong AI problem (P=NP).

    - We haven't solved the problem of getting spaceships into orbit without using bulky multi-stage rockets and ungodly amounts of fuel. No one really knows how we will get to Mars, let alone past the Solar System.

    - We haven't solved the basic unification problem in physics (reconciling QM with GR so we can have some clue about the nature of gravity). Fifty years after Einstein's death we are still working on the same riddles he left behind.

    - We haven't solved the energy problem. Sustainable fusion keeps getting pushed further back each decade.

    - And, more fundamentally, we haven't solved the problem of our own natures. Every time we have a technological breakthrough, the first thing we worry about is someone using it to blow us all up. The "Star Trek" ideal that Earth will eventually be a unified planet is, well, just turn on the news, folks...

    Let's all try to work on that stuff before we start worrying about Vernor Vinge-style singularities. Okay thanks...

    • We haven't solved the strong AI problem (P=NP).


      This is a problem that may not need solving. Our brains are sentient and exist. Once sufficient computing power - be it classical, quantum, or other - exists, it is reasonable to assume that something comparable to our brain, except artificial, can be built. We even have a pretty good start on this one: the decoded genome. If you have enough computing power, you could just simulate the whole deal starting with DNA. Efficient, no, but effective. People are
  • by localman ( 111171 ) on Sunday August 15, 2004 @06:12PM (#9976179) Homepage
    Okay -- I'll go out on a limb and say there'll be no smarter-than-human intelligence in, say, the next 1000 years.

    Of course, a definition of intelligence would be helpful, and we don't have a very good one yet. The Turing test, which I like for recognizing intelligence, doesn't help much in determining how intelligent something is.

    I think we can all agree that number crunching isn't intelligence. I think of intelligence as the ability to find similarities between things that are different, and differences between things that are similar. Basically an ambiguity-processing engine. Needs to be terribly adaptable, too.

    Anyways, I think the human brain stopped developing a long time ago because it already contains all the processing power needed for such actions. In fact, it's overkill. The proof is that while our hardware is all very similar, our "intelligence" varies greatly. Our current limitations on intelligence are limitations on learning, not on processing. Even if we built a better brain, we wouldn't have any idea what to feed it. We don't even know how to feed ourselves. Most geniuses arise by chance.

    Also, I think we strive for the elimination of all ambiguity, and concoct ideas of super-intelligence, or God, to represent this ideal. But I also think that we're fooling ourselves if we think there is a "right" answer to every question. If we were really intelligent we might realize the limits on intelligence are inherent, not a lack of something.

    So I think people can be smarter than they are today, and that a super-brain could be built. But I think the technology would be in education and environment. And I think that it would still be confused most of the time, kind of like us.

    Cheers.
  • by Jugalator ( 259273 ) on Sunday August 15, 2004 @06:37PM (#9976296) Journal
    I did some browsing and found a Wikipedia article [wikipedia.org] that explains this particular "singularity" term.

    Also, here are some of Arthur C. Clarke's predictions:

    2002 Clean low-power fuel involving a new energy source, possibly based on cold fusion.
    2003 The automobile industry is given five years to replace fossil fuels.
    2004 First publicly admitted human clone.
    2006 Last coal mine closed.
    2009 A city in a third world country is devastated by an atomic bomb explosion.
    2009 All nuclear weapons are destroyed.
    2010 A new form of space-based energy is adopted.
    2010 Despite protests against "big brother," ubiquitous monitoring eliminates many forms of criminal activity.
    2011 Space flights become available for the public.
    2013 Prince Harry flies in space.
    2015 Complete control of matter at the atomic level is achieved.
    2016 All existing currencies are abolished. A universal currency is adopted based on the "megawatt hour."
    2017 Arthur C. Clarke, on his one hundredth birthday, is a guest on the space orbiter.
    2019 There is a meteorite impact on Earth.
    2020 Artificial Intelligence reaches human levels. There are now two intelligent species on Earth, one biological, and one nonbiological.
    2021 The first human landing on Mars is achieved. There is an unpleasant surprise.
    2023 Dinosaurs are cloned from fragments of DNA. A dinosaur zoo opens in Florida.
    2025 Brain research leads to an understanding of all human senses. Full immersion virtual reality becomes available. The user puts on a metal helmet and is then able to enter "new universes."
    2040 A universal replicator based on nanotechnology is now able to create any object from gourmet meals to diamonds. The only thing that has value is information.
    2040 The concept of human "work" is phased out.
    2061 Hunter gatherer societies are recreated.
    2061 Halley's comet returns and is visited by humans.
    2090 Large scale burning of fossil fuels is resumed to replace carbon dioxide.
    2095 A true "space drive" is developed. The first humans are sent out to nearby star systems already visited by robots.
    2100 History begins.
    • by Chuck1318 ( 795796 ) on Sunday August 15, 2004 @10:28PM (#9977346)
      2019 There is a meteorite impact on Earth.

      This silliness reveals the lack of understanding in a list like this. It needs to be remembered that these are works of fiction, and events in them are story elements, not predictions. Science fiction writers are not mediums peering into crystal balls. To the extent that science fiction can be judged on predictive ability, it is in the general shape of future technology and the effects it has on people's lives. Furthermore, elements of technology can be in a story not because the author believes them probable or even possible, but because they allow a certain kind of story to be told. For example, rapid and common interstellar travel is part of the background of many stories just because it is the only way to tell that sort of story. In particular, conflating elements from various stories into a timeline is only reasonable if the author has included them in a coherent "future history", which many stories are not.

    • by Qinopio ( 602437 ) on Monday August 16, 2004 @02:20AM (#9978137) Homepage
      2101 War was beginning.
  • by Philip Dorrell ( 804510 ) on Sunday August 15, 2004 @08:21PM (#9976761) Homepage Journal

    Today a spokesperson for the World Government announced a new scheme to slow down technological progress, to prevent the occurrence of the disastrous Technological Singularity.

    "With the introduction of the Internet, it becomes possible for a software implementation of a new idea to be uploaded, distributed, downloaded by anyone or everyone who might be interested in the idea, improved upon, and re-uploaded, all in a matter of hours. The consequences of this speed are downright scary."

    "To preserve a sense of balance, we have decided to award 'ownership' of an idea to the first person who thinks of it, and give that 'owner' the right to demand arbitrarily high financial compensation from any other person who seeks to implement improved versions of the owner's original idea. We plan to set the period of ownership to 20 years, which is tens of thousands times longer than an uncontrolled Internet-based development cycle."

    "At last we can all sleep soundly, knowing that the singularity will not happen in our lifetimes or even those of our children or grandchildren."

  • by FleaPlus ( 6935 ) on Sunday August 15, 2004 @08:50PM (#9976904) Journal
    AI researcher Eliezer Yudkowsky mentions [singularity.org] a number of different ways to reach a singularity:

    * Computer software endowed with heuristic algorithms
    * Artificial entities generated by evolution within computer systems
    * Integration of the human nervous system and computer hardware
    * Blending of humans and computers with user interfaces
    * Dynamically organizing computer networks


    Most of the comments so far have concerned the first method, which basically consists of programming a super-smart AI. However, I think that the third and fourth items listed, dealing with the way humans augment their information-processing capabilities, will have the biggest near-term results.
  • by Brandybuck ( 704397 ) on Sunday August 15, 2004 @09:30PM (#9977084) Homepage Journal
    The problem with sci-fi isn't the singularity. The problem is that the genre has undergone a huge paradigm shift. Take a look at the current sci-fi shelves and you'll find half of it is outright fantasy, another quarter is a rehash of the last two decades' themes, and the rest are "biting social commentaries" set in a space opera or cyberpunk milieu. Out of the hundreds of scifi novels published each year, you might find half a dozen that break out of the mold.

    What happened to popular music is happening to science fiction.

    We are in the bronze age of science fiction. The golden age was marked by an unabashed love of science and technology, with a dash of unadulterated libertarianism thrown in. Stories of this era showed that a free individual could solve any problem given enough gadgetry and smarts. Next came the silver age of scifi, when we started to invent alien societies and extrapolate cultures into the future. No longer were Mesklinites mere copies of human beings. The science took a back seat in the new wave authors' vehicles, but it was still there.

    Now we're in the bronze age, and frankly it's a fizzle. Most of it is fantasy with a thin veneer of techno-trappings. A significant amount of it is downright hostile to science and technology. All of the genre's rigor has evaporated. It isn't just books; it's movies and television too.

    The problem isn't the singularity, the problem is that science fiction has become popular.
    • I can't agree with you. SF has ALWAYS been about social commentary, fantasy, science, crime stories, space opera, rehashing old themes and more.

      Look at Asimov. Few of his books are about "unabashed love of science and technology". His robot stories cover classical literature subjects such as what it means to be human, crime stories, space opera, etc. Very few of them use science as anything but a prop.

      The entire Foundation series is for the most part one big epic space opera.

      Of old classics, the Time

  • by Animats ( 122034 ) on Sunday August 15, 2004 @11:23PM (#9977550) Homepage
    We have a big problem on the energy front. Much of SF assumes that a good new energy source will be developed. Many SF writers assumed one would have been developed by now. We're never going to do much in space on chemical fuels. And on Earth, whether we're running out of fossil fuels or not, demand is increasing faster than supply.

    Fifty years after atomic power, there has been very little progress. We can't make fusion work. Fission is too messy. And there's nothing else in the research pipeline.

    Don't think solar or wind will help. Here are the actual figures for California [ca.gov] for the last twenty years. Solar power hasn't increased over the last decade, and is stuck around 0.03% of consumption. Wind power is at 0.1% of consumption, and the good sites have already been developed.

  • by geekotourist ( 80163 ) on Monday August 16, 2004 @12:44AM (#9977797) Journal
    Science fiction isn't about predicting the future. Writers, fans, and analysts of the genre have rarely claimed it was. Instead, it's about:
    • Predicting how people will react to one or more significant changes to society, either in the future (most SF) or the past (the subgenre of Alternate History; start with these 1,600+ stories [uchronia.net]). The Handmaid's Tale wasn't predicting a fundie future for the US, but it did capture the feel of what happened in Afghanistan after the Taliban took over.
    • Predicting interesting uses for new technologies. Networks hadn't been around for long when Brunner, and even before that Brin (or Benford? one of the "killer B's"), wrote about the possibilities for worms and viruses in cyberspace.
    • Extrapolating / having fun with an exponential growth or decay of an important resource. What if our population booms or crashes? What if the planet freezes or goes greenhouse? What if a person or computer gets vastly more intelligent than before?
    • And the most important part of SF-- Sensawunda. The sense of wonder when you're pulled out of your own time and space and get to gaze (for the length of a book) through the eyes of other humans at a deep future, wide universe, and wide range of societies.
    • and as part of Sensawunda-- inspiring the future... all the scientists inspired by Heinlein or LeGuin or Gibson ("Neuromancer didn't predict the future. Neuromancer *created* the future. If you would understand the past twenty years' technological advance and retreat, this book is required reading..."- C. Doctorow. [boingboing.net]) to go into the sciences or computing...

    Enough has been written about The Singularity that any SF writer setting a story 50+ years in the future should at least explain why there isn't one in their universe. It doesn't have to be a long explanation: put it in and get on with the story. Good SF writing hasn't been stopped by actual advances in science. Discovering that Venus is 700 degrees, going to the moon, and the spread of PCs outdated some earlier SF stories' technology, but those events inspired many more new writers and new stories. The possibility of a singularity in a few decades should have less of an effect than those actual advances.

    And if a singularity does happen, there could be a second golden age of SF. You don't just write about universes, you create them [davidbrin.com]. Certainly Alternate History will be filled with that, like "what would happen if Reagan *lost* the 1980 election?" versions of Earth being run within the trillions of ongoing simulations (and no, the Matrix wasn't original; SF movies are usually far behind the SF literature).

    SF writers who are particularly good at sensawunda in a post-singularity universe (and/or humans dealing with beings larger than ourselves) include Greg Benford [authorcafe.com], the 'can make you empathize with loss in the life of regular deathless people' [netspace.net.au] Greg Egan [netspace.net.au], the 'pulls off multiple believable economic systems in one novel' Ken Macleod [blogspot.com], the recently reviewed [slashdot.org] Richard Morgan [infinityplus.co.uk], Iain Banks, and of course Cory Doctorow [craphound.com] and the early Slashdot adopter [slashdot.org] (and I worry that he's going to hit an Algernon moment soon; how can he keep writing so well?) Charlie Stross [antipope.org].

    Many are scientists, but you don't have to be a scientist to be a good SF writer. You do have t

    • I just can't agree with your list at the end. While I'm sure many good SF writers fit, many of the finest SF writers of our time completely fail to meet your criteria.

      Olaf Stapledon (Starmaker, Last and First Men) and Edwin A. Abbott (Flatland) didn't even really care about SF at all, or consider their work SF. William Gibson has long been successful because his knowledge of many of the subjects he wrote about was superficial, which caused him to stay clear of technical details; books like Neuromancer are

  • not really news (Score:4, Insightful)

    by maxpublic ( 450413 ) on Monday August 16, 2004 @03:37AM (#9978374) Homepage
    I don't see how this article could be considered anything other than a rehash of concerns that've been aired before, time and time again.

    SF writers have always been in the prediction bind. They do the best they can with what they have. The vast majority of the time they're completely, utterly wrong. This was true in the past, is true today, and will be true in the future.

    So what? Most stories aren't about technology anyway, but about people. This is true no matter what the genre. The idea that SF writers are having more difficulty predicting the future than they did in the past is just plain bullshit; for reference, pick damned near anything from the '30s to the '70s and see just how laughable most of those 'predictions' are today.

    Not that it matters. It's the story that counts, not the technology (or lack of it) that's described.

    Max
  • by nikster ( 462799 ) on Monday August 16, 2004 @04:20AM (#9978491) Homepage
    Reading the article on the singularity, I have one question: what is intelligence?

    This question needs to be answered before other questions can be answered, like:
    If entity A is intelligent, can entity A create or design an entity B that is at least as intelligent as entity A?
    So far, it seems like "No" is the answer. I call this the intelligence barrier.

    The border cases seem to support this: A being with intelligence zero cannot design another being of intelligence zero. And God can't create God.

    Even if humans could design robots just as intelligent as themselves, it doesn't mean they could design robots that are more intelligent. Which also means those robots couldn't design other robots that would be more intelligent.

    This is the basic fallacy in the singularity concept.

    P.S.: What's also missing here is a debate about enlightenment: to be enlightened means to truly understand oneself, and in that, to truly understand life. Yet most people are not enlightened. And how can you talk about understanding another intelligence if you can't even understand yourself?
    • This argument is pure bullshit. Evolution is the counterexample. Since intelligence can increase through natural selection, it follows that given an entity of a specific level of intelligence you can "design" a more intelligent entity by "simply" copying evolution: apply selection pressures so that the most intelligent are far more likely to breed.
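
      Here's a toy sketch of that loop in Python, just to make the point concrete. Everything in it is invented for illustration: "intelligence" is a bare number, and the population size, survivor fraction, and mutation rate are arbitrary.

        import random

        POP_SIZE = 100       # arbitrary illustration values
        GENERATIONS = 50
        MUTATION_SD = 5.0

        # Start with a population of mediocre "entities" whose
        # "intelligence" is just a number drawn around 100.
        population = [random.gauss(100.0, 10.0) for _ in range(POP_SIZE)]

        for _ in range(GENERATIONS):
            # Selection pressure: only the smartest half get to breed.
            survivors = sorted(population, reverse=True)[:POP_SIZE // 2]
            # Each survivor leaves two offspring, copied with random mutation.
            population = [parent + random.gauss(0.0, MUTATION_SD)
                          for parent in survivors
                          for _ in range(2)]

        print("smartest entity after selection:", max(population))

      The author of a loop like that never needs to be smarter than what falls out of it; variation plus selection does the climbing.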

      The same holds for robots. If we manage to engineer robots that are just as intelligent as us, all it takes to design robots that are MORE intelligent than

  • by 3seas ( 184403 ) on Monday August 16, 2004 @04:43AM (#9978559) Homepage Journal
    This is really not about extrapolating from where we are today to create science fiction, but rather about finding some inspiration...

    It's been said that the first sci-fi movie ever created, Fritz Lang's Metropolis, had all the plots and themes incorporated in it.

    There are new generations of humans, and just as other markets have realized, much can be recycled as far as ideas go, simply because it's "new" to the new generations.

    Oh no, I just inspired someone to write a science fiction story about a master race that lives much longer than us humans and is fully aware of this mental limitation of ours, which allows them to watch reruns of our antics...

  • by BigWhale ( 152820 ) on Monday August 16, 2004 @05:22AM (#9978651)
    "Sure, we can upload you and you can live in our perfect virtual world, of course. It's just that we'll have to reprogram you a little bit, you see, you don't measure up to our standards...." ;) That's how average transhumanist thinks...

    It's a slightly nicer way of saying... lobotomy... ;>
  • by KlomDark ( 6370 ) on Monday August 16, 2004 @10:32AM (#9980203) Homepage Journal
    "Stross, 39, a native of Yorkshire who lives in Edinburgh, looks like a cross between a Shaolin monk and a video-store clerk--bearded, head shaved except for a ponytail, and dressed in black, including a T-shirt printed with lines of green Matrix code."

    Uh, look at the picture, that's not Matrix code - that's Space Invaders. Author must be too young to identify it.

    That would be a weird combo of ideas for a game: Matrix code scrolling down the screen, and a blocky Space Invaders cannon that you have to shoot the falling code with. Somebody write it, then send me a copy. :)
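
    Actually, here's a rough Python sketch of the thing, since I asked (a curses toy; every name, key binding, and timing value is invented on the spot, there's no lose condition, and shots can slip past glyphs between frames -- good enough for a sketch, not a finished game):

      import curses
      import random

      def main(stdscr):
          curses.curs_set(0)      # hide the cursor
          stdscr.timeout(100)     # getch() waits ~100 ms: ~10 frames/sec
          height, width = stdscr.getmaxyx()
          cannon_x = width // 2   # the cannon lives on the bottom row
          glyphs = []             # falling "Matrix code": [y, x, char]
          shots = []              # rising cannon shots: [y, x]
          score = 0

          while True:
              # Spawn a new falling glyph now and then.
              if random.random() < 0.3:
                  glyphs.append([0, random.randrange(width - 1),
                                 random.choice("01")])

              # Advance glyphs down and shots up; drop what leaves the screen.
              for g in glyphs:
                  g[0] += 1
              glyphs = [g for g in glyphs if g[0] < height - 1]
              for s in shots:
                  s[0] -= 1
              shots = [s for s in shots if s[0] >= 0]

              # Collision: a shot in the same cell as a glyph removes both.
              survivors = []
              for g in glyphs:
                  if any(s[0] == g[0] and s[1] == g[1] for s in shots):
                      score += 1
                      shots = [s for s in shots
                               if not (s[0] == g[0] and s[1] == g[1])]
                  else:
                      survivors.append(g)
              glyphs = survivors

              # Draw the frame.
              stdscr.erase()
              for y, x, ch in glyphs:
                  stdscr.addstr(y, x, ch, curses.A_BOLD)
              for y, x in shots:
                  stdscr.addstr(y, x, "|")
              stdscr.addstr(height - 1, min(cannon_x, width - 2), "^")
              stdscr.addstr(0, 0, "score: %d" % score)

              # Input: arrow keys move the cannon, space fires, q quits.
              key = stdscr.getch()
              if key == curses.KEY_LEFT:
                  cannon_x = max(0, cannon_x - 1)
              elif key == curses.KEY_RIGHT:
                  cannon_x = min(width - 2, cannon_x + 1)
              elif key == ord(" "):
                  shots.append([height - 2, cannon_x])
              elif key == ord("q"):
                  break

      curses.wrapper(main)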
