The Singularity Blinds Sci-Fi 603
foobsr writes "Popular Science has an article discussing the growing difficulties that Sci-Fi writers encounter when it comes to extrapolating current trends. Doctorow and Stross, both former computer programmers, are held up as prototypes of a new breed of guides to a future which, due to Vinge's Singularity, might not happen for humanity once a proper super-intelligence - maybe as a Matrioshka Brain - has been created."
Okay (Score:4, Insightful)
Re:Okay (Score:2, Insightful)
it ended up sounding like total ass.
some people work really hard at keeping up the "i'm a mental giant" facade.
Re:Okay (Score:5, Funny)
Re:Okay (Score:3)
Re:Okay (Score:5, Insightful)
Re:Okay (Score:3, Insightful)
Re:Okay (Score:5, Insightful)
Right! People who can really predict future trends--like the development of satellite-based communications, for example--wouldn't waste their time writing science fiction.
Oh, wait....
Re:Okay (Score:5, Insightful)
The guy's complaint isn't that sci-fi writers don't sometimes get it right (an infinite number of monkeys pounding on an infinite number of keyboards...), but that they can't be expected to be mystic seers, or else they'd be working for Wall Street. Complaining that Gibson didn't anticipate cell phones before 2020 is just lame, because (good) science fiction isn't really about the technology, but about man's and society's interaction with the technology and the future. In which case, it doesn't really matter what the technology is; it could be mysterious gadget X, as long as what gadget X does is well-defined.
For example, in Asimov's robot stories, he defines a gadget X that follows the 3 laws of robotics. He never provides detailed technical drawings or any expectations that such robots will be created (certainly not in the near future), but the conceit nonetheless provides a rich basis for a large number of stories exploring the ramifications.
The technology in science fiction is a means to an end, not the end itself. The technology serves the purpose of the plot, not the other way around. Thus its existence is dictated by the plot, and whether or not it is truly predictive of future trends is largely immaterial. Good science fiction generally only tackles a few disruptive ideas at a time, and the rest of the backfill is just there to maintain a suitably futuristic atmosphere.
Besides, in the long run, all technologies are transient. By 2100, we may not be using communication satellites anymore, made obsolete by technology Q, a high-capacity computer network of digital packet radios communicating using Q particles travelling faster than light (yes, I just made that up, don't hold your breath waiting for my prediction to come true). OMG, why didn't Arthur C. Clarke anticipate technology Q by the year 2100? He sucks! All his science fiction now sucks, too!
Re:Okay (Score:5, Insightful)
point 1: it's not 2020 yet.
point 2: cell phones are rapidly becoming computing devices. by 2020, they may well be the only computing device you need.
i know i'm currently shopping for a new cell phone that can handle my e-mail needs
Re:Okay (Score:3, Informative)
Re:Okay (Score:4, Insightful)
Re:Okay (Score:5, Interesting)
Furthermore, any novelist worth his/her salt does a lot of research to make sure they know what they're talking about. So when they get the future right, it's a well-informed guess, not so much a fluke.
I'll agree that they aren't necessarily brilliant geniuses, though.
Re:Okay (Score:5, Insightful)
Actually, scientists are not really in the business of predicting the future. Scientists tend to have relatively short perspectives: "What can I do now to increase our understanding?" Most scientists are specialists, knowing a great deal about a narrow area of study. This is often what you need to make progress, but it doesn't necessarily help you see the shape of the future. A writer of hard science fiction has to be familiar with many areas of science to come up with novel ideas for stories. And while they may not be scientists themselves, what they write needs to be scientifically plausible, because a lot of their readers are, and don't hesitate to point out errors (like Niven's unstable Ringworld).
And sometimes, I think, SF writers may even help to make the future. Scientists read science fiction, and may take an interest in pursuing some of the ideas they read about in more rigorous ways. I can't help wondering how many of the guys now working on quantum "teleportation" were influenced by Star Trek's transporter....
Re:Okay (Score:3, Insightful)
Are you sure there are no cell phones? (Score:3, Insightful)
I can't remember any stories where the characters use the toilet, but I assume they still crap in the future.
Maybe we can assume cell-phones are like crappers; everywhere and not worth mentioning.
Re:Okay (Score:4, Insightful)
Bingo (Score:5, Insightful)
One thing I can say, though, is that fiction doesn't have to be true. Hence the name! Judging what science fiction authors can or cannot do in terms of what is likely to happen in the future is absurd. I know someone will say that truth is stranger than fiction, and that fiction must hew close to the truth. Anyone who actually takes that pap seriously should not be reading sci-fi (hard or otherwise) or any other form of fiction, for that matter, since it is speculative. (Blah, blah blah, probability, spare me. Prove to me that Genghis Khan did not come from a distant galaxy.)
The real assumption is that there is macro-truth (background, history, physics, etc.) and micro-truth (characters behaving, their interactions, etc). If the term fiction can apply, authors should be given the liberty to fake whatever they please. (And again, spare me any argument involving economics and who is going to read a book about talking toasters from the 35th century, etc..)
Re:Bingo (Score:5, Insightful)
As for what the singularity could be, there are plenty of options. Development of a working nano assembler might do it (manufacturing capabilities would instantly become meaningless, since we would be able to produce enough of _everything_ for _everyone_. Don't tell me that won't change things...). Development of an AI would probably also do it, since it could itself develop better, faster versions - faster than we could ever hope to keep up with. Or there is contact with an alien race. Perhaps even something as mundane as the FTL drive or anti-gravity... Anyway, the singularity is rather fascinating, even though it is itself SF for now ;-)
We've already reached the singularity (Score:3, Insightful)
Essentially, the expansion of the internet into almost every country, and the continued growth of open source software methods has created a sort of "mini-singularity".
Through cooperation and collaboration on the internet, people have the ability to create and expand software much more rapidly than could have been conceived of... even as late as the 1990s.
As internet service is expanded to more and more sections of the world, and as computer literacy keeps rising, expect this trend to develop exponentially.
Re:We've already reached the singularity (Score:3, Interesting)
Has anyone else noticed how much Google works like a human mind? It has associative retrieval and makes its "memories" more accessible the more they are used. And its knowledge base is a non-microscopic fraction of what humanity knows.
>And aren't the rapid development of things like the wikipedia, GNU tools, the linux kernel, and so on, a result of this new clus
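The Google-as-memory analogy above can be turned into toy code. Here is a minimal sketch of an associative store whose retrievals strengthen whatever they touch; the class, its names, and the scoring rule are all illustrative inventions, nothing like Google's actual ranking:

```python
# Toy associative store: each retrieval makes the retrieved "memory" more
# accessible next time, loosely mimicking use-based memory consolidation.
from collections import defaultdict

class AssociativeStore:
    def __init__(self):
        self.docs = {}                # id -> set of terms
        self.uses = defaultdict(int)  # id -> times retrieved so far

    def add(self, doc_id, text):
        self.docs[doc_id] = set(text.lower().split())

    def query(self, text):
        terms = set(text.lower().split())
        # Score by term overlap, breaking ties by past usage.
        scored = sorted(
            self.docs,
            key=lambda d: (len(self.docs[d] & terms), self.uses[d]),
            reverse=True,
        )
        best = scored[0]
        self.uses[best] += 1          # retrieval reinforces the memory
        return best

store = AssociativeStore()
store.add("a", "vinge singularity essay")
store.add("b", "recipe for soup")
print(store.query("singularity"))  # -> a
```

Real search engines are vastly more sophisticated, of course; the point is only that "associative retrieval plus use-based reinforcement" is a simple, mechanical idea.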
Re:Bingo (Score:5, Insightful)
Of course it will change everything. I expect half the world to starve in the months after that event. Current trends in intellectual property law point to that already.
Re:Bingo (Score:3, Interesting)
1. It will devastate the foundation for our current economic system.
This is because it will eliminate any job related to production, whether it is assembly in a factory, or food production (farmers, fishers), or production of raw materials (since the nano-factories would of course reuse our waste). That's a _lot_ of people suddenly without jobs.
Indeed, there would only be jobs left in services, design, and energy production. And design jobs would be constantly und
Re:Bingo (Score:5, Interesting)
However, the article is referring to a particular kind of science fiction (sometimes called "hard" SF) which is based upon realistically extrapolating current technology and trends into the future.
The problem is that reasonable extrapolation along a number of pathways leads to a future that is so alien that it is difficult to imagine, and even more difficult to think of anything to write about that would be entertaining to modern readers. The deeper problem is that humanity as we know it may not exist for much longer.
However, both Vinge and Stross have found literary ways around the singularity. Sort of the science fiction equivalent of "Left Behind." That is, even if the singularity occurs, it might not take everybody.
Re:Plateau. (Score:5, Insightful)
In many cases, physical limits intervene. Exponential increase in speed of travel does not imply that we'll find a way to break the light speed barrier (but we might). But the singularity being spoken of here is not a physical singularity, but a singularity of extrapolation--a kind of discontinuity or state transition beyond which simple extrapolation is impossible, because what lies on the other side is qualitatively different from what came before. And those are actually rather common in the real world.
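To put a hedged bit of math behind that distinction: exponential extrapolation never breaks down in finite time, but anything even slightly faster than exponential does. A sketch with toy growth laws (not a model of any real trend):

```latex
% Exponential growth stays finite for every t:
\frac{dN}{dt} = aN \;\Rightarrow\; N(t) = N_0 e^{at}.
% Super-exponential growth diverges at a finite time:
\frac{dN}{dt} = aN^2 \;\Rightarrow\; N(t) = \frac{N_0}{1 - aN_0 t},
% which blows up at t^* = 1/(aN_0); the extrapolation simply stops
% making sense past that point.
```

That finite-time blow-up is the "singularity of extrapolation" in miniature: the formula itself tells you where its own predictions end.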
Re:Okay (Score:5, Informative)
The first result he comes up with (this one [ask.com]) is an FAQ on the meaning of life. Part of the question of the meaning of life is an eventual goal, something to reach towards. One of the options discussed is the Singularity.
The best place for more info is the Singularity Institute [singinst.org]. Their definition of the Singularity is the technological creation of smarter-than-human intelligence. This is by any possible means, either overclocking the human mind, creating artificial intelligence which is smarter than humans, or some combination thereof (such as uploading human minds to computers to run at a faster rate).
Read the FAQ. It'll clear up your basic questions, and doubtless leave you with many more.
Re:Okay (Score:2, Interesting)
Upload a mind to the computer, run it, pull the plug, and you just killed someone. Perhaps this kind of research should be disallowed; it's a sort of murder.
Re:Okay (Score:5, Insightful)
Therefore, if you were chatting with a person in a computer and said something that ticked them off and they refused to talk to you anymore, simply shut it down, restore from backup, and restart. Murder? Not really, there's no death. I think it's worse.
And think of the first person who has this procedure done. How many times will his/her processes have to be shut down and restarted, or how many simultaneous instances would be run?
I wholeheartedly agree with you, this should be disallowed, but it's not murder.
But then again, if a human intelligence, even if copied, is too precious for us to research with, then who is to say a created (artificial) intelligence is any less precious?
One or the other is going to happen eventually. We need to be prepared for that day. Much like the first cloned human.
Re:Okay (Score:4, Informative)
You ought to read "Permutation City" by Greg Egan. It's about things like this, and takes them to an extreme conclusion.
Re:Okay (Score:3, Interesting)
Yes, but would you have a real person running on the Linux box in your bedroom?
If this ever happens on a large scale, the uploaded "people" will live in a secure datacentre, probably buried under a mountain or something, and they will do work (i.e. creating
Re:Okay (Score:3, Interesting)
Non-religious as I am, I tend to think there's more going on in the old thick skull. I hate to call it a soul, but c'mon guys... what if there is something quantum going on? If so, then ma
Re:Okay (Score:3, Insightful)
Any middle of the road approach would suffer the SimCity effect. No two layouts end up working out the same way unless you can ensure the exact same top
In a nutshell (Score:2, Interesting)
Re:In a nutshell (Score:5, Informative)
Indeed, this can be difficult even for scientists who read the physics literature. Much of what was regarded as science fiction in the 50's is fact today, including some things that were generally considered to be fantasy at one time, like beam weapons. Physicists are carrying out serious experiments on quantum teleportation, and methods of transmitting information (random information, but still information) faster than light.
Now there are multiple lines of serious investigation, any one of which could lead to massive transformation not merely of human culture (such as happened so recently with the internet, and was predicted by hardly anybody), but also of humanity itself:
-AI
-Genetic modification of human beings
-Direct man/machine interfaces
-Nanotechnology
Perhaps any one of these will not pan out. AI progress has moved fairly slowly of late. On the other hand, neurobiology has been booming along, and there seems little doubt that it will eventually be possible to simulate brain function. I can understand why writers are finding it difficult to extrapolate far into the future; it is simply hard to imagine that all of these will stall out.
Re:In a nutshell (Score:5, Insightful)
Some of the unexpected changes:
1) The free press no longer belongs to "the man who can afford one." Everybody has the equivalent of their own printing press. Individual bloggers, unaffiliated with major news organizations, are now a significant influence on political races.
2) Distribution of music has been transformed, and the control of traditional studios of media distribution is eroding.
3) Research capabilities that were previously available only to the wealthy and to those with access to a major library are now available to virtually everybody worldwide.
4) Government control of information distribution has become enormously more difficult. Interdiction of taboo political or cultural information (e.g. pornography) is much weaker.
5) There is now a market available to the average citizen worldwide in almost any product you can identify, new or used.
Even computers themselves have changed hardly at all in the last thirty years. The sort of software running on your desktop at a kernel level is not exactly revolutionary; it is just the same sort of thing as 30 years ago.
However, applications and interfaces have changed enormously. Almost everybody now has access to music, photo, typesetting, and video editing facilities that were available only to professionals 30 years ago.
It is sad that more revolutionary and transforming effects could be achieved with the simple technologies we have already developed, like anti-malarial drugs, vitamin supplements, fertilisers, and so on, than can possibly be achieved by any future development that is remotely likely.
Yes, if only we could make more rational, more humanistic use of the resources we already have, the world would be transformed. But practically speaking, this is as much fantasy as orcs and wizards. While the technology of doing more with computers is rapidly advancing, the "social technology" of achieving in practice the sort of "revolutionary and transforming effects" that you envision seems to have stalled decades ago.
Re:Typical. (Score:3, Insightful)
Will they? Oddly e
Re:Okay (Score:5, Informative)
http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-
http://www.aleph.se/Trans/Global/Singularity/ [aleph.se]
I am sure interested entities can google more.
What/where is the soul? (Score:2, Insightful)
Re:What/where is the soul? (Score:3, Informative)
Re:What/where is the soul? (Score:2)
A new, horrifying trend in Sci-fi... (Score:5, Funny)
Sheer terror I tell you!
Mind bending Science Fiction (Score:4, Insightful)
Another author for ya: Greg Egan. I never got to finish Quarantine, but good science fiction like his tries to make you think 'outside of the box' compared to your usual spaceship/futuristic fare.
Mind, I don't read many books for fun... the last book I actually bought was the Butlerian Jihad, got halfway through it before I realised the Dune Prelude series was a pile of steaming crap.
Just my $0.02
Re:Mind bending Science Fiction (Score:5, Interesting)
The books (in order) are:
* The Golden Age
* The Phoenix Exultant
* The Golden Transcendence
That said, the first 50 pages of the first book are a little tough-going, given that Wright is painting a really alien picture and forcing you to catch up with his terminology, but in the end, it's worth it. Having just started the second book, I can tell you that one of the major themes is socialism vs. libertarianism, and, as a subset of that, personal responsibility to society.
Books for Nerds (Score:2)
In the Chequers, Doctorow mentions the original title for one of the novels he's working on, a story about a spam filter that becomes artificially intelligent and tries to eat the universe. "I was thinking of calling it /usr/bin/god."
"That's great!" Stross remarks.
Well, great for those who know that "/usr/bin" is the repository for Unix programs and that "god" in this case would be the name of the program, but a tad abstract for the rest of us. This tendency can make for difficult
SciFi doesn't have to be in print or on TV (Score:5, Insightful)
You played it on your computer. That game was Deus Ex.
I think the article was narrow-minded in that it was expecting modern science fiction to surface in the same medium as it had in its heyday. (Remember too that except in the U.S., most of the world had a serious paper shortage in the late 40s and early 50s following the war, so the print industry today isn't necessarily equipped to be the proper breeding ground). But science fiction comes in the form of computer games (single-player or MMORPG), little Flash animations, and the like. The "authors" of Deus Ex imagined a future world that had much of what the article was yearning for, and maybe the authors of the article just need to accept that storytelling can take differing forms.
Eh.... (Score:3, Insightful)
His assertion that this depends on the progress of computing hardware seems absurd to me. We already have as much computing hardware as we need: essentially all of it is capable of Turing-complete computation (in the lax sense of the phrase; obviously computational power and storage are finite, but not so limited that it's hampering our ability to simulate human intelligence).
Then he makes the assumption that if we are able to create a human-level artificial intelligence (which is itself a somewhat ill-defined concept), it will be able to figure out how to improve itself to be substantially "better" than human intelligence. But do we really have any metric for what that even means? I mean, we still don't have a firm grasp on even measuring human intelligence very well.
I am not saying his scenario is impossible or that it won't happen. Computers can already do certain tasks far better than humans, and that will continue to be the case. He seems to want a program capable of designing other programs. Is the first program Turing-test passing? Is it "smarter" than humans because it is better at recognizing patterns and reacting to them? Or smarter because it can generate and test hypotheses more rapidly? I feel very uncomfortable with drawing lots of conclusions about the future rate of progress of a topic that feels so ill defined to me.
I agree that mastering consciousness and thought, and understanding the human brain will be one of the next great frontiers of science, and with that mastery ought to eventually come much better ability to simulate it in silico. But I'm not willing to speculate too much farther ahead than that.
Re:Eh.... (Score:4, Insightful)
Re:Eh.... (Score:4, Interesting)
In any case, regardless, I recognize the possibility of non-humanlike AI, but then we enter into the realm of unquantifiable BS. How do we measure modelling, problem-solving and creativity abilities (other than by something that ends up looking shockingly like a Turing test?). What do those words mean outside of the human context? As I pointed out in another post, outside of very limited, constrained problem domains, we don't have any idea how to wire something up that can do even sub-human "problem-solving" or "modelling". The field of AI has provided lots of great algorithms that turn out to do a decent job at doing near-human-quality work in very limited domains, or much-less-than-human-quality work in slightly less limited, but still very constrained domains. The field of consciousness research, which aims to understand and presumably, eventually, model the human brain is still nascent.
I trust the instinct of Francis Crick who spent the last years of his life working on this problem that it will be a huge problem that dogs science for years to come. Just like how Einstein spent his last years looking for a TOE - guess what, here we are decades later, and we are _slightly_ closer, but basically up against a brick wall.
I recognize the ability (in theory) to self-improve or evolve rapidly in software would make a "Singularity" type of scenario at least conceivable (assuming there are no other barriers to this sort of rapidly improving digital intelligence) if you can get past the humongous hurdles in getting there. I just don't think it's likely to happen in the next 10 or 20 or 30 years. And beyond that, I prefer not to speculate, or at least not to pretend that my speculations are much more than pure science fiction themselves.
Re:Eh.... (Score:4, Insightful)
So I guess medieval "engineers" would have to grasp the concepts of momentum and potential energy before the catapult was invented, and prehistoric man would have had to have grokked thermodynamics before fire was created.
No, no, no, no, no. Intellectuals have the problem backwards. Historically mankind goes out and does something, and only later do we understand HOW we did it. Look at the invention of the transistor. Alchemists predate chemistry by millennia. The profession of Engineering is derived from their work on siege engines. (Shakespeare uses the term "Engineer" in his plays a full century before modern physics was formulated by Newton.)
Some team, or a lone crackpot, is going to develop a thinking machine as a side effect of some other project, and 30 years later science will formulate a theory about how it works.
Re:Consciousness is just software. (Score:3, Informative)
How do you know? Before powered flight, how reasonable would the description of a 747 have sounded?
Well, 100 billion neurons or so. Given that these guys are building a system today which emulates 20 billion neurons (http://www.ad.com/), human-level consciousness might not be all that far away.
von Foerster's Singularity (Score:2)
A vital side note: Heinz von Foerster published a paper in 1960 on global population: von Foerster, H., Mora, P. M., and Amiot, L. W., "Doomsday: Friday, 13 November, A.D. 2026," Science 132, 1291-1295 (1960). In this paper, Heinz shows that the best formula describing population growth over known human history is one that predicts the population will go to infinity on Friday the 13th of November, 2026. As Roger Gregory [thing.de] likes to say, "That's just whacko!" The problem is, a
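For the curious, the fit in that paper is hyperbolic rather than exponential, which is why it diverges at a finite date. A quick sketch using the constants as published in the 1960 Science article (treat this as a historical curiosity, not a prediction):

```python
# Von Foerster's "doomsday" fit: world population N(t) = C / (t0 - t)^k,
# which goes to infinity as t approaches t0 = 2026.87 (mid-November 2026).
# Constants from von Foerster, Mora & Amiot, Science 132 (1960).
def von_foerster_population(year):
    C, t0, k = 1.79e11, 2026.87, 0.99
    return C / (t0 - year) ** k

for year in (1960, 2000, 2020, 2026):
    print(year, f"{von_foerster_population(year):.3g}")
# The 1960 value comes out near 2.8e9, close to the actual 1960 world
# population; by 2026 the formula has blown past 2e11.
```

The fit tracked real data remarkably well for its era, which is exactly why it makes such a good cautionary tale about naive extrapolation.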
Correction (Score:2)
Unfortunately, Cory is also unschooled in the classic scifi genres. If he knew more about his own field, he would know his own style becomes dated more quickly than any other style. Basing one's work on current perspectives of the future is the surest way to make your work obsolete before it's ever published. As Roddenberry said, "nothing becomes
Re:Correction (Score:2)
Re:Correction (Score:4, Interesting)
Since when has SF *ever* predicted technology? (Score:5, Insightful)
Re:Since when has SF *ever* predicted technology? (Score:3, Interesting)
Brunner was right, indirectly. (Score:5, Insightful)
Re:Since when has SF *ever* predicted technology? (Score:3, Insightful)
Actually, the way most people use the net... that's pretty much what they do have. My internet provider basically refuses to provide any support for anything but a web browser. If you can get to Google through Internet Explorer, they consider your connection to be up... even if their router is randomly dropping TCP on any port but
Re:Since when has SF *ever* predicted technology? (Score:3, Interesting)
Also, computers small enough for an individual to own must have existed. I get the impression from references in the book that there were legal restrictions on individuals owning computers
Re:Since when has SF *ever* predicted technology? (Score:3, Interesting)
A hundred years before the fact, he wrote about ubiquitous electricity, about submarines,
load of rubbish (Score:5, Interesting)
On the other hand there is a minority of good, hard, scientific science fiction like Larry Niven.
In the year 3004 (assuming humans still exist) the vast majority of the human race will still be assholes, and if their personalities are downloaded into sugar-cube-sized computers they will be assholes with even less grip on reality than today's breed of assholes.
I think I am going to patent a method for inflicting virtual pain / beatings / torture / death on these future embedded personalities, because it will be the only way to keep the bastards in line.
A E Van Vogt wrote a great novel, The Anarchistic Colossus, which dealt with the issues of advancing technology vs human minds extremely well. Thoroughly recommended; despite the fact that it is 20 or 30 years old, there are many things in there that today's slashdot readers will recognise as current actual concerns.
Re:load of rubbish (Score:3, Funny)
Perl 9.
Re:load of rubbish (Score:3, Informative)
Masks of the Universe (Score:3, Interesting)
Harrison's thesis is that the universe is infinitely complex and that we are no more aware of the inner workings of the universe than the ancient Greeks.
Yawn (Score:4, Interesting)
There are plenty of contemporary sci-fi authors working in the near-future, the next few decades or centuries, Alastair Reynolds, Richard Morgan and Neal Asher being among the most notable. Reynolds in particular is very good - his future humanity colonizes the stars using a mix of cryogenics and relativistic time, no warp drives here.
Also, he mistakes the point of pedantry. No-one is bothered about whether the science is possible (yet), but any author worth his salt knows that the fictional technology must be CONSISTENT. A device can't act one way in one story and a completely different way in another, because if that happens, it's not sci-fi anymore but pure fantasy (and not even good fantasy). Sheer laziness and lack of talent on the part of the author.
Re:Yawn (Score:2)
He being Doctorow, I mean, not Reynolds in this paragraph.
Singularity (Score:5, Insightful)
Other books (Score:3, Insightful)
Singularity Sky [amazon.com] by Charles Stross should also be good, but I haven't read that one yet.
Hawking changes mind, decades of sci fi negated... (Score:2)
Hawking Loses Bet; Sci Fi Fans Take It Up The Wormhole [ridiculopathy.com]
here's the lead paragraph
Matrioshka Pulsar (Score:2, Insightful)
What is super intelligence? (Score:2, Interesting)
Lol (Score:2, Funny)
"In general however, we may assume that current trends in"
lol! That's funny. Or laughable, even.
To be fair, he didn't say full AI, just "computational capacity". But then he doesn't define what he means by that, and makes a wide, worthless generalization.
If the rest of the paper is like that, this is just a bad sci-fi author trying to make peopl
Google cache (Score:2)
Why no humanoid aliens? (Score:5, Insightful)
Intelligent, tool-using animals must readapt at least some of their limbs into prehensile appendages. Given that their predecessors will probably begin with four legs, you end up with a creature that walks upright, with two limbs for manipulation, sense organs located high up for good vantage, close to the brain for high-speed transmission of information. In other words, humanoid.
It is possible to start with eight legs and end up with six, or six and end up with four on the floor, and high gravity species may well take this route. But there is still that problematic number six before or after, and there is also the problem of energy expenditure of moving all those extra limbs, especially in high gravity.
The singularity is a possibility, but the increasing ignorance of science, not to mention growing political naivety, threatens this. It is hard to build a vast distributed intelligence when ignorance seems to be growing more common. The singularity also threatens more archaic world views, which will become more militant as this threat becomes apparent to them. The singularity would either eradicate religion entirely, or become the dominant religion itself. This is the real root of the conflicts in the Middle East: an attempt to preserve what is essentially a medieval world view against the assault of modernity itself. The singularity is also partially dependent on the availability of energy. If we can make fusion work as a safe, cheap energy supply, we're home free. Otherwise the singularity may recede even if the science and technology is available to make it possible.
There is one last problem with any vision of the future: if the prophet can understand the messiah, then the prophet is the messiah. The messiah here is any radical, Copernican revolution which changes the entire world view. You could not predict the theory of general relativity unless you already had it, that is, unless you had already worked it out yourself. Nearly all hard science fiction works upon the technological consequences of existing science. Science fiction fills in the blanks for things we know we should be able to do but cannot do yet. That target moves with each advance in science.
Finally, most works of science fiction work by extrapolating current social and political trends, which can change suddenly and without notice. Cold War science fiction often extrapolated the Cold war into the far future; William Gibson's Neuromancer, written at the height of Japan's rise as an economic dynamo, had Japanese culture permeating all things western. This aspect of it has become somewhat dated. I suspect that a lot of science fiction writers might be tempted to extrapolate the current religious tensions into the far future. But I suspect that a lot of Muslims may be getting tired of being medieval peasants and having their neighbourhoods blown up by fanatics and the armies sent to fight them. This too could change, and the change may be very swift when it comes.
Re:Why no humanoid aliens? (Score:3, Interesting)
I think "thinking at an amazing speed" is actually a fairly important part of what the singularity is about -- it's about machines (whether AI or augmented humans) that come up with new ideas so rapidly that they completely change human culture. But you are right in one thing -- it isn't supposed to be difficult to achieve. In fact, if Vinge is right, it is almost unavoidable.
I think that the point, and the main que
I get tired of these articles... (Score:3, Insightful)
future will be pretty much like the present only with more people and
more problems.
SF utopians please note:
- With regards to the human brain, we are just barely getting started.
We can't cure or even partially remedy any of the diseases related to
brain/nerve damage (strokes, Alzheimer's, cord injuries). The idea
that we will ever be able to create Matrix-style VR or "upload"
people's minds is just wishful thinking at this point.
- We haven't solved the strong AI problem (P=NP).
- We haven't solved the problem of getting spaceships into orbit
without using bulky multi-stage rockets and ungodly amounts of fuel.
No one really knows how we will get to Mars let alone past the Solar
System.
- We haven't solved the basic unification problem in Physics
(reconciling QM with GR so we can have some clue about the nature of
gravity). Fifty years after Einstein's death we are still working on
the same riddles he left behind.
- We haven't solved the energy problem. Sustainable fusion keeps
getting pushed further back each decade.
- And, more fundamentally, we haven't solved the problem of our own
natures. Every time we have a technological breakthrough the first
thing we worry about is someone using it to blow us all up. The "Star
Trek" ideal that Earth will eventually be a unified planet is, well,
just turn on the news, folks...
Let's all try to work on that stuff before we start worrying about
Vernor Vinge-style singularities. Okay thanks...
Re:I get tired of these articles... (Score:3, Interesting)
This is a problem that may not need solving. Our brains are sentient and exist. Once sufficient computing power - be it classical, quantum, or other - exists, then it is reasonable to assume that something comparable to our brain, except artificial, can be built. We even have a pretty good start on this one: the decoded genome. If you have enough computing power, you could just simulate the whole deal starting with DNA. Efficient, no, but effective. People are
Re:I get tired of these articles... (Score:5, Insightful)
I appreciate what you're saying, but I can't get past the fact that we haven't had any real breakthroughs since the birth of the Atomic and Computer ages 40-50 years ago.
It is not a "fact", it is an illusion which you have due to, I guess, insufficient education and/or knowledge.
Re:I get tired of these articles... (Score:3, Insightful)
Why go so far to the past? Imagine telling someone 200 years ago that we carry objects with us which allow us to speak with each other around the world. Imagine telling them that we have boxes which not only send all sorts of images, sounds and texts around the world, but are even able to make (crude) translations of the
Smarter than Humans (Score:5, Insightful)
Of course, a definition of intelligence would be helpful, and we don't have a very good one yet. The Turing test, which I like for recognizing intelligence, doesn't help much in determining how intelligent something is.
I think we can all agree that number crunching isn't intelligence. I think of intelligence as the ability to find similarities between things that are different, and differences between things that are similar. Basically an ambiguity processing engine. Needs to be terribly adaptable, too.
Anyways, I think the human brain stopped developing a long time ago because it already contains all the processing power needed for such actions. In fact, it's overkill. The proof is that while our hardware is all very similar, our "intelligence" varies greatly. Our current limitations on intelligence are limitations on learning, not on processing. Even if we built a better brain, we wouldn't have any idea what to feed it. We don't have any idea how to feed ourselves. Most geniuses arise by chance.
Also, I think we strive for the elimination of all ambiguity, and concoct ideas of super-intelligence, or God, to represent this ideal. But I also think that we're fooling ourselves if we think there is a "right" answer to every question. If we were really intelligent we might realize the limits on intelligence are inherent, not merely a deficiency to be overcome.
So I think people can be smarter than they are today, and that a super-brain could be built. But i think the technology would be in education and environment. And I think that it would still be confused most of the time, kind of like us.
Cheers.
Wiki article about this, and Clarke's predictions (Score:5, Interesting)
Also, here's some of Arthur C Clarke's predictions:
2002 Clean low-power fuel involving a new energy source, possibly based on cold fusion.
2003 The automobile industry is given five years to replace fossil fuels.
2004 First publicly admitted human clone.
2006 Last coal mine closed.
2009 A city in a third world country is devastated by an atomic bomb explosion.
2009 All nuclear weapons are destroyed.
2010 A new form of space-based energy is adopted.
2010 Despite protests against "big brother," ubiquitous monitoring eliminates many forms of criminal activity.
2011 Space flights become available for the public.
2013 Prince Harry flies in space.
2015 Complete control of matter at the atomic level is achieved.
2016 All existing currencies are abolished. A universal currency is adopted based on the "megawatt hour."
2017 Arthur C. Clarke, on his one hundredth birthday, is a guest on the space orbiter.
2019 There is a meteorite impact on Earth.
2020 Artificial Intelligence reaches human levels. There are now two intelligent species on Earth, one biological, and one nonbiological.
2021 The first human landing on Mars is achieved. There is an unpleasant surprise.
2023 Dinosaurs are cloned from fragments of DNA. A dinosaur zoo opens in Florida.
2025 Brain research leads to an understanding of all human senses. Full immersion virtual reality becomes available. The user puts on a metal helmet and is then able to enter "new universes."
2040 A universal replicator based on nanotechnology is now able to create any object from gourmet meals to diamonds. The only thing that has value is information.
2040 The concept of human "work" is phased out.
2061 Hunter gatherer societies are recreated.
2061 The return of Halley's comet is visited by humans.
2090 Large scale burning of fossil fuels is resumed to replace carbon dioxide.
2095 A true "space drive" is developed. The first humans are sent out to nearby star systems already visited by robots.
2100 History begins.
Re:Wiki article about this, and Clarke's predictio (Score:4, Insightful)
This silliness reveals the lack of understanding in a list like this. It needs to be remembered that these are works of fiction, and events in them are story elements, not predictions. Science fiction writers are not mediums peering into crystal balls. To the extent that science fiction can be judged on predictive abilities, it is in the general shape of future technology, and the effects it has on people's lives. Furthermore, elements of technology can be in the story, not because the author believes them probable or even possible, but because it allows a certain kind of story to be told. For example, rapid and common interstellar travel is part of the background of many stories just because it is the only way to tell that sort of story. In particular, conflating elements from various stories into a timeline is only reasonable if the author has tied them into a coherent "future history", which many stories are not part of.
You forgot one! (Score:4, Funny)
How to slow things down ... (Score:3, Funny)
Today a spokesperson for the World Government announced a new scheme to slow down technological progress, to prevent the occurrence of the disastrous Technological Singularity.
"With the introduction of the Internet, it becomes possible for a software implementation of a new idea to be uploaded, distributed, downloaded by anyone or everyone who might be interested in the idea, improved upon, and re-uploaded, all in a matter of hours. The consequences of this speed are downright scary."
"To preserve a sense of balance, we have decided to award 'ownership' of an idea to the first person who thinks of it, and give that 'owner' the right to demand arbitrarily high financial compensation from any other person who seeks to implement improved versions of the owner's original idea. We plan to set the period of ownership to 20 years, which is tens of thousands of times longer than an uncontrolled Internet-based development cycle."
"At last we can all sleep soundly, knowing that the singularity will not happen in our lifetimes or even those of our children or grandchildren."
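The "tens of thousands of times" claim above is easy to check; a quick sketch, assuming a "matter of hours" development cycle of about six hours (the exact figure is my assumption, not the poster's):

```python
# Back-of-the-envelope check of the satire's arithmetic:
# a 20-year ownership period versus an hours-long dev cycle.
hours_per_year = 365 * 24            # 8,760 hours
ownership_hours = 20 * hours_per_year  # 175,200 hours
dev_cycle_hours = 6                  # assumed "matter of hours" cycle

ratio = ownership_hours / dev_cycle_hours
print(ratio)  # -> 29200.0, i.e. tens of thousands of times longer
```

So the joke's numbers hold up for any cycle between roughly two and seventeen hours.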
Many paths to a singularity (Score:3, Informative)
* Computer software endowed with heuristic algorithms
* Artificial entities generated by evolution within computer systems
* Integration of the human nervous system and computer hardware
* Blending of humans and computers with user interfaces
* Dynamically organizing computer networks
Most of the comments so far have concerned the first method, which basically consists of programming a super-smart AI. However, I think that the third and fourth items listed, dealing with the way humans augment their information-processing capabilities, will have the biggest near-term results.
The problems of sci-fi (Score:3, Insightful)
What happened to popular music is happening to science fiction.
We are in the bronze age of science fiction. The golden age was marked by an unabashed love of science and technology, with a dash of unadulterated libertarianism thrown in. Stories of this era showed that a free individual could solve any problem given enough gadgetry and smarts. Next was the silver age of scifi, when we started to invent alien societies and extrapolate cultures into the future. No longer were Mesklinites mere copies of human beings. The science took a back seat in the new wave authors' vehicles, but the science was still there.
Now we're in the bronze age, and frankly it's a fizzle. Most of it is fantasy with a thin veneer of techno-trappings. A significant amount of it is downright hostile to science and technology. All of the genre's rigor has evaporated. It isn't just books, it's movies and television too.
The problem isn't the singularity, the problem is that science fiction has become popular.
Re:The problems of sci-fi (Score:3, Insightful)
Look at Asimov. Few of his books are about "unabashed love of science and technology". His robot stories cover classical literature subjects such as what it means to be human, crime stories, space opera, etc. Very few of them use science as anything but a prop.
The entire Foundation series is for the most part one big epic space opera.
Of old classics, the Time
Energy will be a big problem (Score:3, Informative)
Fifty years after atomic power, there has been very little progress. We can't make fusion work. Fission is too messy. And there's nothing else in the research pipeline.
Don't think solar or wind will help. Here are the actual figures for California [ca.gov] for the last twenty years. Solar power hasn't increased over the last decade, and is stuck around 0.03% of consumption. Wind power is at 0.1% of consumption, and the good sites have already been developed.
SF is about reaction, not prediction (Score:4, Insightful)
Enough has been written about The Singularity that any SF writer writing about 50+ years into the future should at least explain why if one isn't in their universe. Doesn't have to be a long explanation: put it in and go on with the story. Good SF writing hasn't been stopped by actual advances in science. Discovering that Venus is 700 degrees, going to the moon, or widespread PCs outdated some earlier SF stories' technology. But those events inspired many more new writers and new stories. The possibility of a singularity in a few decades should have less of an effect than those actual advances.
And if a singularity does happen, there could be a second golden age of SF. You don't just write about universes, you create them [davidbrin.com]. Certainly Alternate History will be filled with that, like "what would happen if Reagan *lost* the 1980 election?" versions of earth being run within the trillions of ongoing simulations (and no, the Matrix wasn't original- SF movies are usually far behind the SF literature.)
SF writers who are particularly good at sensawunda in a post singularity (and/or humans dealing with beings larger than ourselves) universe include Greg Benford [authorcafe.com], the 'can make you empathize with loss in the life of regular deathless people' [netspace.net.au] Greg Egan [netspace.net.au], the 'pulls off multiple believable economic systems in one novel' Ken Macleod [blogspot.com], the recently reviewed [slashdot.org] Richard Morgan [infinityplus.co.uk], Iain Banks, and of course Cory Doctorow [craphound.com] and the early Slashdot adopter [slashdot.org] (and I worry that he's going to hit an Algernon moment soon- how can he keep writing so well?) Charlie Stross [antipope.org].
Many are scientists, but you don't have to be a scientist to be a good SF writer. You do have t
Re:SF is about reaction, not prediction (Score:3, Interesting)
Olaf Stapledon (Starmaker, Last and First Men) and Edwin A. Abbott (Flatland) didn't even really care about SF at all, or consider their work SF. William Gibson has long been successful because his knowledge of many of the subjects he wrote about was superficial and caused him to stay clear of technical details - books like Neuromancer are
not really news (Score:4, Insightful)
SF writers have always been in the prediction bind. They do the best they can with what they have. The vast majority of the time they're completely, utterly wrong. This was true in the past, is true today, and will be true in the future.
So what? Most stories aren't about technology anyway, but about people. This is true no matter what the genre. The idea that SF writers are having more difficulty predicting the future than they have in the past is just plain bullshit; for reference, pick damned near anything from the 30's to the 70's and see just how laughable most of those 'predictions' are today.
Not that it matters. It's the story that counts, not the technology (or lack of it) that's described.
Max
Intelligence Barrier. (Score:3, Interesting)
This question needs to be answered before other questions can be answered, like:
If entity A is intelligent, can entity A create or design an entity B that is at least as intelligent as entity A?
So far, it seems like "No" is the answer. I call this the intelligence barrier.
The border cases seem to support this: A being with intelligence zero cannot design another being of intelligence zero. And God can't create God.
Even if humans could design robots that are just as intelligent as them, it doesn't mean they could design robots that are more intelligent. Which also means these robots couldn't design other robots which would be more intelligent.
This is the basic fallacy in the singularity concept.
P.S.: I am also missing a debate about enlightenment: To be enlightened means to truly understand oneself, and in that, to truly understand life. Yet, most people are not enlightened. And how can you talk about understanding another intelligence if you can't even understand yourself?
Re:Intelligence Barrier. (Score:3, Insightful)
The same holds for robots. If we manage to engineer robots that are just as intelligent as us, all it takes to design robots that are MORE intelligent than
Sounds more like someone wants inspiration.. (Score:3, Interesting)
It's been said that the first sci-fi movie ever created had all the plots and themes incorporated in it - Metropolis by Fritz Lang.
There are new generations of humans, and just as other markets have realized, much can be recycled as far as ideas go, simply because it's "new" to the new generations.
Oh no, I just inspired someone to write a science fiction story about a master race that lives much longer than us humans and is fully aware of this mental limitation of ours, which allows them to watch reruns of our antics...
Singularity... ...schmingularity... (Score:3, Funny)
It's a little bit nicer way of saying... lobotomy...
Article author must be too young (Score:4, Funny)
Uh, look at the picture, that's not Matrix code - that's Space Invaders. Author must be too young to identify it.
That would be a weird combo of ideas for a game - have the Matrix code scrolling down the page, and then have the blocky Space Invaders cannon that you have to shoot the codes with. Somebody write it then send me a copy.
Re:the rapture? (Score:2)
The background to Greg Egan's "Permutation City" is of a world that has ru
Re:AI isn't going to happen - so why worry? (Score:3, Informative)
Turing showed that no such program exists that can solve the halting problem for all possible input programs.
However, it's a big stretch to go from that to debugging software. Even if you show that the halting problem is equivalent to debugging a program (assuming you can define that formally), you still can get around t
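Turing's diagonalization can be sketched in a few lines. Here `halts` stands in for the impossible decider, and `make_contrarian` is a name chosen for this sketch; the point is that any claimed decider can be turned into a program it must misjudge:

```python
def make_contrarian(halts):
    """Given a claimed halting-decider halts(prog, inp), build a program
    that does the opposite of whatever halts predicts about it."""
    def contrarian(prog):
        if halts(prog, prog):      # predicted to halt on itself?
            while True:            # ...then loop forever instead
                pass
        return "halted"            # predicted to loop? halt immediately
    return contrarian

# Whatever halts(contrarian, contrarian) answers, it is wrong.
# Demo with a "decider" that always answers "loops forever":
contrarian = make_contrarian(lambda prog, inp: False)
print(contrarian(contrarian))  # prints "halted" -- the decider was wrong
```

Run against an always-"halts" decider instead, the same program loops forever, so no fixed `halts` can be right about its own contrarian.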
Re:Apply Asimov's laws and s/robot/freaky AI (Score:3, Interesting)
Rather, what you want is for it to do a lot of what you want it to do, without tons of configuration and without needing expert advice to configure it, and do it neatly and efficiently...and when it finds something it doesn't know how to handle, it *knows* when to bother you,