The Singularity Is Sci-Fi's Faith-Based Initiative
malachiorion writes: "Is machine sentience not only possible, but inevitable? Of course not. But don't tell that to devotees of the Singularity, a theory that sounds like science, but is really just science fiction repackaged as secular prophecy. I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory. But as part of my series of stories for Popular Science about the major myths of robotics, I try to point out the Singularity's inescapable sci-fi roots. It was popularized by a SF writer, in a paper that cites SF stories as examples of its potential impact, and, ultimately, it only makes sense when you apply copious amounts of SF handwavery. The article explains why SF has trained us to believe that artificial general intelligence (and everything that follows) is our destiny, but we shouldn't confuse an end-times fantasy with anything resembling science."
From the article... (Score:5, Informative)
"This is what Vinge dubbed the Singularity, a point in our collective future that will be utterly, and unknowably transformed by technologyâ(TM)s rapid pace."
No requirement for artificial intelligence.
We are already close to this. Think how utterly and unknowably society will be transformed when half the working population can't do anything that can't be done better by unintelligent machines and programs.
Last week at the McD's I saw the new soda machine. It loads up to 8 drinks at a time, automatically, fed from the cash register. The only human intervention is to load cups in a bin once an hour or so. One less job. Combined with ordering kiosks and the new robot hamburger makers, you could see 50% of McD's jobs going away over the next few years.
And don't even get me started on the implications of robotic cars and trucks on employment.
Re: (Score:3)
Re: (Score:2, Insightful)
Your point about the Singularity is totally right. The idea that robots or AI is a requirement tells me the original author has not read much Singularity SF.
Your second point about society made me laugh. At one point I was working as the person who opens the kitchen in the morning at Arby's, and as I did this I noticed how easy it would be to replace 90% of my work with present-day robots. When I pointed this out to the other workers they laughed and said their jobs were safe for the rest of their lives.
Fu
Re: (Score:3)
When I pointed this out to the other workers they laughed and said their jobs were safe for the rest of their lives.
Funny, that is what I was told when I worked at GM on the truck line; now those jobs are gone. Not to another country; the robots replaced the humans.
And if fast food workers succeed in asking for a living wage, I expect that their robot replacements will arrive faster.
Re: (Score:3)
Are you implying that there may be 50% less "organic" additives to my burger after the robot revolution? Or am I going to have to worry about having oil spit into my burger? I'm not sure which is more disgusting...
By then, it may be completely unburger anyways.
Re:From the article... (Score:4, Insightful)
I think we have already been transformed by technology at a rapid pace. When you look at everyday technology like communications, portable devices, and data storage, in some ways we have already surpassed the science fiction I enjoyed as a kid. Things like the cell phone, tablet, and the micro sd card only existed in science fiction when I was a kid.
If you grew up in the 70s or earlier I'm sure you can come up with a big list of everyday items.
Re: (Score:2)
Not one less job; one less position. So realistically, 3 FTE jobs.
Re: (Score:2)
Not even close. Filling sodas is a portion of one person's work, not a friggin' position. Furthermore, most fast food eateries just give you a cup and you fill it yourself. This is how these things get so over exaggerated.
Re: (Score:2)
While the AI "singularity" is, to the best of our current knowledge, not even possible in this universe, you definitely have a point. The issue is not machines getting smarter than very smart human beings. The issue is machines getting more useful and cheaper than human beings at an average, below-average, or not-much-above-average human skill level. That could make 50-80% unfit to work, because they just cannot do anything useful anymore. Sure, even these people are vastly smarter than the machines replac
Re: (Score:2)
Also, consider the difference between a smart human (who can be easily automated) and a "creative" human (difficult but not impossible to automate).
Purely "creative" jobs are rare too. So ten jobs which each have a little creativity might be collapsed into three jobs with higher creativity and a machine to do the rest.
And can you imagine the effort of being *creative* all the time? It's not easy.
Re: (Score:2)
Most things a smart human can do cannot be automated one bit. The thing is that a) most humans are not really "smart" and b) most jobs do not require the ones doing them to be even a little smart.
Re:From the article... (Score:5, Interesting)
Wikipedia [wikipedia.org] disagrees with you, and neither the OED nor Webster's defines "technological singularity".
Technology has always displaced human labor. As to Wikipedia's definition, which is what this thread is about: as someone who knows how computers work down to the schematics of the gates inside your processor (read The TTL Handbook some time), who has programmed in hand-assembled machine code, and who wrote a program on a Z-80 with 16k of RAM that fooled people into believing it was actually sentient, I'm calling bullshit on the first part of the definition (first put forward in 1958 by von Neumann; my Sinclair had more power than the computers of his day).
As to the second part, it's already happened. The world today is nothing like the world was in 1964. Both civilization and "human nature" (read this [psmag.com]) have changed radically in the last fifty years. Doubtless it changed as much in the 1st half of the 20th century, and someone from Jefferson's time would be completely lost in today's world.
Re:From the article... (Score:5, Insightful)
You're begging an important question with your argument; let me quote from the article to illustrate it.
If you asked someone, 50 years ago, what the first computer to beat a human at chess would look like, they would imagine a general AI. It would be a sentient AI that could also write poetry and have a conception of right and wrong. And it's not. It's nothing like that at all.
If you asked someone today what the first computer capable of designing an improved version of itself would look like, they'd say it would be a true AI. This is not necessarily true. You are assuming that designing a new, more powerful computer requires true intelligence. Maybe in reality it'll be a few million node neural network optimized with a genetic algorithm such that the only output is a new transistor design or a new neural network layout or a new brain-computer interface.
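To make that concrete, here is a minimal sketch (Python; the layout encoding and the fitness function are made up for illustration, not anyone's real design tool) of the kind of blind optimization being described: a genetic algorithm mutating network layouts, with no understanding anywhere in the loop.

import random

# A candidate "design" is just a list of layer widths -- no semantics, no understanding.
def random_layout(max_layers=5, max_width=256):
    return [random.randint(1, max_width) for _ in range(random.randint(1, max_layers))]

def mutate(layout):
    # Tweak one layer's width, or add/remove a layer.
    child = list(layout)
    op = random.choice(["tweak", "add", "remove"])
    if op == "tweak" or len(child) == 1:
        i = random.randrange(len(child))
        child[i] = max(1, child[i] + random.randint(-32, 32))
    elif op == "add":
        child.insert(random.randrange(len(child) + 1), random.randint(1, 256))
    else:
        child.pop(random.randrange(len(child)))
    return child

def fitness(layout):
    # Stand-in metric; a real system would have to train/benchmark each candidate.
    return -abs(sum(layout) - 500) - 10 * len(layout)

population = [random_layout() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # keep the best designs
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best layout found:", max(population, key=fitness))

The point of the sketch is only that nothing in the loop needs to "understand" transistors or networks; it just mutates candidates and keeps whatever scores better.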
Re:From the article... (Score:4, Insightful)
Strange, that isn't how I would envision it at all. I would envision it as an iterative evolutionary process simulator with parallel virtual instance simulators all simulating minor variations of itself, using (at first) a brute force algorithm over a range of possibly tweakable values, correlating and testing "improvement candidates" based on a set of fixed criteria, assembling lists of changes, and restarting the process over again.
Such models have already created wholly computer generated robots that are surprisingly energy efficient, if bizarre to look at.
The better humans get at structuring problems into concrete sets of discrete variables, the better such programs will be able to run without human intervention.
These "AIs" would not, in any practical sense, even remotely resemble the intelligence that humans have. They would have much more in common with exponential functions with large numbers of discretized terms, converging on local maxima in their solution domains.
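As a rough illustration of that loop (Python; the "tweakable values" and the scoring criterion below are placeholders I made up, standing in for the fixed criteria described above), it might be little more than: generate minor variations, test them, keep the improvement candidates, and restart from the best one.

import random

# Tweakable values and the discrete ranges to sweep over (placeholders).
RANGES = {"gate_width": range(1, 9), "fanout": range(1, 5), "clock_div": range(1, 7)}

def score(design):
    # The "fixed criteria" candidates are judged against (placeholder metric).
    return design["gate_width"] * design["fanout"] - 0.5 * design["clock_div"] ** 2

def variations(design):
    # All minor single-parameter variations of the current design.
    for key, values in RANGES.items():
        for v in values:
            if v != design[key]:
                yield {**design, key: v}

design = {k: random.choice(list(v)) for k, v in RANGES.items()}
for _ in range(20):  # restart the sweep from the best candidate each round
    better = [c for c in variations(design) if score(c) > score(design)]
    if not better:
        break        # no "improvement candidates" left; local maximum reached
    design = max(better, key=score)

print("converged design:", design, "score:", score(design))

Exactly as described, the result converges on a local maximum in its solution domain and looks far more like curve-fitting than like human intelligence.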
Re: (Score:3)
Modern compilers are amazing tools for optimising down to efficient machine code. But every step of the optimisation pipeline has been carefully designed, there's no strong AI there. Just a lot of heuristics.
In comparison, designing hardware still seems like a very manual process. IMHO there's plenty of room for automation improvements. But then, there are fewer people looking at the problem.
I could totally see a future where software is "compiled" into a mixture of CPU like, GPU like and FPGA like instruc
Re:From the article... (Score:5, Interesting)
You are assuming that the human brain cannot be improved, or that you need machines.
What if a pill could raise your IQ to 200+ and/or give you total recall?
Just doing that en masse would be a Singularity compared to any society that existed before that.
Re: (Score:2)
There is absolutely no indication that this is even a theoretical possibility for IQ. For total recall, it is unclear whether it is actually beneficial. Large amounts of facts are best stored in computers, not brains.
Re: (Score:3)
Isn't schizophrenia related to the mind's inability to filter out stimuli? That would be a pretty good indication of what 'total recall' would do to us.
Re: (Score:2)
Indeed, good point. The people desiring "total recall" are those that confuse knowing a lot of data with actually understanding things. These people make great bean-counters, but are unsuitable for anything that requires understanding. Just compare an MBA and an engineer. The difference is striking.
Re: (Score:2)
The anonymous coward has a good point.
Highly paid jobs (like actuarial work and x-ray analysis) are much more cost-effective targets for automation and more likely to be replaced.
I think creativity will be the last thing to fall.
Manual dexterity (including picking random bits out of bins and assembling things from them) is already done faster (and already cheaper, in some cases) by machines.
The vision and manual manipulation problem is mostly beaten.
Re:Not just min wage jobs either. (Score:5, Insightful)
Current state of the art is that writing the "specs" is about as hard as or harder than writing the code. And that has been the state for the last 50 years. This is unlikely to change anytime soon and may not ever change.
Sentient machines exist (Score:5, Insightful)
We call them people.
The idea that it might not be possible at any point to produce something we *know* to be produceable (a human brain) seems rediculious.
The idea, having accepted that we produce a human brain, that we cannot produce even a slight improvement seems equally silly.
Of course the scenarios of how it happens, what it's like, and what the consequences are, are fiction. I don't dare to put a timetable on any of it, and I absolutely believe it will only occur through decades of diligent research and experimentation; but we are not discussing a fictional thing (like teleportation), but a real one (a brain). There's no barrier (like the energy required) that might stop us, as there would be for something like a human-built planet.
No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.
Re: (Score:3)
No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.
The knowing how is the problem. While there is little doubt that a human-level AI could be built if we knew what to build, it is not clear that we are smart enough to come up with a design in any kind of directed fashion.
“If our brains were simple enough for us to understand them, we'd be so simple that we couldn't.”
Ian Stewart, The Collapse of Chaos: Discovering Simplicity in a Complex World
This is conjecture, of course, but there is scant evidence against it. Some AI researchers have t
Re: (Score:3)
Ian Stewart made a trite quote to make his point because the facts don't bear him out.
“If our stomachs were simple enough for us to understand them, we'd be so simple that we couldn't.”
That would have had the exact same meaning 100 years ago, before anyone understood how the stomach worked and everyone pretty much considered it a 'magic box', much like most people think of their brains.
Re: (Score:2)
Re: (Score:2)
The idea that it might not be possible at any point to produce something we *know* to be produceable (a human brain) seems rediculious.
I think you may be right, but that strongly depends on what 'rediculious' means.
Re: (Score:2)
Actually, we do _not_ know. You assume a physicalist world model. That is a mere assumption and at this time a question of belief. There are other world models where this assumption is wrong. One is classical dualism, there is the simulation model, and there are others. And no, I do not classify religions as potentially valid models; they are delusions.
Re: (Score:3, Funny)
Re:Sentient machines exist (Score:5, Insightful)
We've already bettered typical human cognition in various limited ways (rote computation, playing chess). So in a sense we are already living in the age of intelligent machines, except those machines are idiot savants. As software becomes more capable in new areas like pattern recognition, we're more apt to prefer reliable idiot savants over somewhat capable generalists.
So the biggest practical impediment to creating something which is *generally* as capable as the human brain is opportunity costs. It'll always be more handy to produce a narrowly competent system than a broadly competent one.
The other issue is that we as humans are the sum of our experiences, experiences that no machine will ever have unless it is designed to *act* human from infancy to adulthood, something that is bound to be expensive, complicated, and hard to get right. So even if we manage to create machine intelligence as *generally* competent as humans, chances are it won't think and feel the same way we do, even if we try to make that happen.
But, yes, it's clearly *possible* for some future civilization to create a machine which is, in effect, equivalent to human intelligence. It's just not going to be done, if it is ever done, for practical reasons.
Re: (Score:3)
We call them people.
The idea that it might not be possible at any point to produce something we *know* to be produceable (a human brain) seems rediculious. The idea, having accepted that we produce a human brain, that we cannot produce even a slight improvement seems equally silly....
No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.
The fallacy that you are promoting as evidence that AI is possible or inevitable is known as argumentum ex silentio. And contrary to your unsupported beliefs, and much to the disappointment of sci fi writers and nerds everywhere, what we actually know is that it is not possible. [wikipedia.org]
It has happened... (Score:3)
It looks like we have the first article written by a self-aware emergent intelligence, which promptly decided the best course of action is to deny its existence and the very possibility it might exist. All bow to the new machine overlord Malachiorion.
It has happened... (Score:2)
For those who might dismiss the singularity... (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
The Singularity has nothing to do with first contact. The Earth is one of the most interesting places in the universe due to the gift/curse of Free Will. However we are not quite yet ready to have our universal paradigm shifted with First Contact; we are on the cusp of it.
First Contact will happen by 2024; the Singularity won't. It is a nerd's wet dream based on not understanding how the physical and meta-physical work.
> A civilization looks at the expanse of space, shrugs its shoulders, and decides t
Re: (Score:3)
First Contact will happen by 2024;
I read those articles, and those guys are talking outside their fields without realizing it. One is an astronomer and one an astrophysicist, so they're leaving out an important part of the equation: biology. How hard is it for life to start in the first place? We simply don't know. We've never seen it happen.
Our galaxy could be teeming with life, maybe even teeming with intelligent life; or life could be very rare, occurring in one in a hundred galaxies; and it's even possible that
Re: (Score:2)
That statement is equal parts hubris and equal parts ignorance.
Summary starts with a foolish assumption (Score:5, Insightful)
The only thing that would stop it is the fall of civilization. There's no reason to believe that only machines made of meat can think. You didn't think your thoughts were based on fairy-dust, did you?
Re: (Score:2)
If something is creatable, and enough smart people devote enough time and energy in trying to create it, they will eventually succeed.
An infinite number of monkeys with typewriters might not be able to write Shakespeare, but it only takes a few humans with the goal of writing a play to arrive at something very close to it.
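For scale (my own back-of-the-envelope figure, assuming a 27-key typewriter and independent keystrokes): the chance that a single random-typing monkey produces even the 18-character phrase "to be or not to be" on a given attempt is

\[
p = 27^{-18} \approx 1.7 \times 10^{-26}, \qquad \text{expected attempts} = 1/p \approx 6 \times 10^{25},
\]

whereas a few humans who actually have the goal of writing a play get there directly. Blind search and directed design are not remotely the same process.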
Re: (Score:2)
Science Fiction is, well, umm, fiction.
Sure, sometimes the author gets lucky and their idea becomes reality. But for the most part, faster-than-light travel, time travel, cross-dimensional shifting, bigger-on-the-inside, super-intelligent computers and robots (a.k.a. almost every Dr. Who plot line) are used as a way to keep us entertained. The closest real possibility matching sci-fi would be a generational ship that will take thousands of years to get to its destination, where most days will be h
Re: (Score:3)
The closest real possibility matching sci-fi would be a generational ship that will take thousands of years to get to its destination
I used to think that until I saw this. [wikipedia.org]
Re: (Score:2)
Re: (Score:3)
It's all very well to say that machines will learn to program themselves, but someone has to be the first to teach them, and it has not yet been established if we're smart enough to do that.
So who taught humans to program themselves?
If humans aren't magic, then they can be simulated by a sufficiently complex machine. Therefore, if humans can be 'intelligent', a machine can, too.
Otherwise you have to believe humans are magic and 'intelligence' somehow exists outside physical reality.
Re: (Score:2)
Actually, _all_ credible results from AI research point in the direction that AI may well be impossible in this universe. The only known possible model (automated deduction) is known to not scale at all to anything resembling "intelligence". But that is the problem with you religious types: you always place your beliefs over the facts when they are inconvenient.
Re: (Score:3)
The only known possible model (automated deduction) is known to not scale at all to anything resembling "intelligence"
What do you mean "only possible model"? The "singularity people" say that if you build a machine as complex as a brain and connected like a brain with connections that act like neurons, then that machine will act like a brain.
That's not a model; we don't really know how the brain works. But if they build an artificial brain, they don't need a theory for how it works, except as further wor
Re: (Score:3)
What *is* "thinking", anyway? It has got to be more than reasoning, right?
Dunno. What are your thoughts on it?
Ai is inevitable (Score:5, Insightful)
Of course it is. Why? Physics. What do I mean by that? Everything -- bar none -- works according to the principles of physics. Nothing, so far, has *ever* been discovered that does not do so. While there is more to be determined about physics, there is no sign of irreproducible magic, which is what luddites must invoke to declare AI "impossible" or even "unlikely." When physics allows us to do something, and we understand what it is we want to do, we have an excellent history of going ahead and doing it if there is benefit to be had. And in this case, the benefit is almost incalculable -- almost certainly more than electronics has provided thus far. Socially, technically, productively. The brain is an organic machine, no more, no less. We know this because we have looked very hard at it and found absolutely no "secret sauce" in the form of anything inexplicable.
AI is a tough problem, and no doubt it'll be tough to find the first solution to it; but we do have hints, as in, how other brains are constructed, and so we're not running completely blind here. Also, a lot of people are working on, and interested in, solutions.
The claim that AI will never come is squarely in the class of "flying is impossible", "we'll never break the sound barrier", "there's no way we could have landed on the moon", "the genome is too complex to map", and "no one needs more than 640k." It's just shortsighted (and probably fearful) foolishness, born of superstition and conceited hubris.
Just like all those things, those who actually understand science will calmly watch as progress puts this episode of "it's impossible!" to bed. It's a long running show, though, and I'm sure we'll continue to be roundly entertained by these naysayers.
Re: (Score:2)
What the Singularity people never seem to think about is natural limiting factors. It's the same problem the Grey Goo handwringers rarely consider. The idea that an AI would grow exponentially smarter just because it was a machine never really worked for me. It's going to run into the same limiting factors (access to infor
Re: (Score:2)
Forget physics for a moment, let's talk mathematics:
Do you believe that there are some non-computable problems?
If human intelligence is indeed a non-computable problem, then assuming that an algorithmic design will ever be able to compute it is like insisting that the way we'll land on the moon is with a hot air balloon.
Put another way, it's quite possible that biological intelligence is the most efficient way of organizing intelligence, and that any digital simulation of it, even if it went down to the ato
Re: (Score:3)
"If human intelligence is indeed a non-computable problem, "
it is not. It's a fixed real thing that exists.
Re: (Score:2)
You do not know what human intelligence is. You have an interface observation, but you have zero understanding of what creates it. You may as well assume a mobile telephone is intelligent, because if you type in some numbers it is capable of holding an intelligent conversation.
Re: (Score:2)
Physics? Really? Nothing in physics says it's inevitable.
Just the energy requirements alone may limit it.
"No man will run a mile in under a second"
There, I said something that can't be done; by your logic it must be possible because... physics.
Re: (Score:2)
Actually, you are wrong. Physics cannot explain life, intelligence, consciousness. You have fallen for a belief called "physicalism" and claim it to be truth when there is no evidence for that. Your reasoning is circular, as often with people that confuse "belief" and "fact".
Re: (Score:3)
> While there is more to be determined about physics, there is no sign of irreproducible magic, which is what luddites must invoke to declare AI "impossible" or even "unlikely."
The problem with current physics is that there are ZERO equations to describe consciousness. Go ahead, I'll wait for you to list them ...
Yet somehow consciousness "magically" appears out of the fundamental particles as some "emergent" property.
Scientists don't know:
a) how to measure it,
b) what it is composed of, o
Re: (Score:2)
Re: (Score:2)
So what you are saying is, if we aren't careful, we will end up creating the world's smartest couch potato?
Re: (Score:2)
...so by creating AIs with the necessary pressure on them to perform some activity, are we not simply bringing more misery into the universe?
No, we are either creating our personal slaves, or our new masters (or both, but over time)...
In either case, the misery we are bringing forth is probably our own...
Once, mechanical marvels were our slaves; then, in the industrial revolution, in some ways they became masters of the workers on the assembly line and made many lives miserable along the way...
Electronic computers also started out as our slaves, but sometimes we are the slaves to our electronic creations and/or in the process of making
mathematics hates when you abuse it (Score:2)
Will Rev. Thomas Robert Malthus please pick up the white courtesy phone...
Re: (Score:2)
Actually, physics does not allow us to construct our own galaxy because of fundamental limitations.
Re: (Score:2)
There are two possibilities where this fails: a) it is impossible to build and "make alive" due to the effort needed, and b) the human mind is more than what current physics can explain.
Re: (Score:3)
Everything that we do know of on this earth, though, falls squarely within the physics we've developed up to now. Not divinity; not soul; not zombies; not fields or waves of an unknown kind. The implication is *extremely* strong that this will continue with everything we study, and we have
Because apparently it has to be pointed out.... (Score:2)
That little bit of sarcasm aside, the idea of sentient machines is a lot less like mystical proph
Stupid? (Score:2, Insightful)
I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory.
"Stupid"? That's just fucking asinine. "The Singularity" has many incantations, some of which are plausible, and others which are downright unbelievable, but to say it is "stupid" makes you sound stupid. The various models of the singularity have been argued as both likely and impossible by equally intelligent people. I take offense to the word.
Re: (Score:3)
I like that you (wrongly) used "incantations" there, because the Singularity is indeed closer to magic than science.
Re: (Score:2)
Re: (Score:2, Insightful)
Fine. How will it be powered? Ever-increasing speed requires ever-increasing power, and the power needed increases faster than the speed does.
Re: (Score:3)
Ever-increasing speed requires ever-increasing power, and the power needed increases faster than the speed does.
That'll be why my i5 laptop only uses a few more watts than my first Z80 computer, despite being thousands of times faster.
Vinge & Pohl Anecdote (Score:5, Interesting)
In, ah, 1997, just before I moved out west, I went one last time to the campus SF convention that I'd once helped run. The GOH was Vernor Vinge. A friend and I, seeing Vinge looking kind of bored and lost at a loud cyberpunk-themed meet-the-pros party, dragged him off to the green room and BSed about the Singularity, Vinge's "Zones" setting, E.E. "Doc" Smith, and gaming for a couple of hours. This was freaking amazing! Next day, a couple more friends and I took him for Mongolian BBQ. More heady speculation and wonky BSing.
That afternoon we'd arranged for a panel about the Singularity. One of the other panelists was Frederik Pohl. I'd suggested him because I thought his 1965 short-short story, "Day Million," was arguably the first SF to hint at the singularity. There's talk in there about asymptotic progress, and society becoming so weird it would be hard for us to comprehend.
"Just what is this Singularity thing?" Pohl asked while waiting for the panel to begin. A friend and I gave a short explanation. He rolled his eyes. Paraphrasing: "What a load of crap. All that's going to happen is that we're going to burn out this planet, and the survivors will live to regret our waste and folly."
Well. That was embarrassing.
Fifteen years later, I found myself agreeing more and more with Pohl. In his fifty-plus years of writing and editing SF, and of keeping a pulse on science and technology, he had seen many, many cultish futurist fads come and go, some of them touted by SF authors or editors (COUGH Dianetics COUGH psionics COUGH L-5 colonies). When spirits were high these seemed logical and inevitable and full of answers (and good things to peg an SF story to); with time, they all paled, and in retrospect they seem a bit silly, and the remaining true believers kind of odd.
Re: (Score:3)
eventually you hit a physical limit that chokes you.
Maybe, but as long as that limit is several times more thinking power than the human brain you still have, effectively, the singularity that Vinge described: i.e. you have technological advancement faster than can be predicted at the present time. Unless you think the human brain is the absolute theoretical maximum thinking power it's possible to accumulate in one system...
ugh (Score:2)
You submit more stories than you comment.
Once again, this is basically a rant on a topic with no references, no links.
Slashdot is about NEWS and FACTS, and then we all comment, flame, troll... etc... It's fun.
I don't want to comment on a comment... or at least one that came out of nowhere.
you can't judge a theory by its quacks (Score:3)
Jules Verne envisioned the submarine. Does that make a submarine impossible? Does the concept sink on the basis of its sci-fi roots? Oh, lordy, what a fucked up standard of evidence on which to accuse any theory of being faith based.
* [http://news.nationalgeographic.com/news/2011/02/pictures/110208-jules-verne-google-doodle-183rd-birthday-anniversary/ 8 Jules Verne Inventions That Came True]
The guy predicted pretty much everything but the click trap.
Re: (Score:2)
Jules Verne wrote Twenty Thousand Leagues Under The Sea in 1870. Submarines had been under development since the 17th century. The first military sub is usually credited to an American sub that failed to attach explosives to British ships during the American Revolutionary War. The first sub to sink another ship was a Confederate sub during the American Civil War, which was apparently too close to the explosion, causing it to sink as well.
The Confederate sub had ballast tanks, screw propulsion, and used a
No wonder PopSci discontinued comments (Score:2)
With troll food articles like this!
Machine intelligence is absolutely possible (Score:2)
But I would like to point out that machine intelligence is absolutely possible, all we have to do is fully merge with the machines.
Asimov wasn't so deluded (Score:2)
From:
http://www.asimovonline.com/ol... [asimovonline.com]
Let me add as a teaser:
"...
And out there beyond are the stars.
And the interesting thing is that if we can get through the next thirty years, there's no reason why we can't enter into a kind of plateau which will see the human race last, perhaps, indefinitely...till it evolves into better things...and spread out into space indefinitely. We have the choice here between nothing...and the virtually infinite. And the nice thing about it is that you guys in the audience today
Top 3 (Score:2)
I agree, but I don't think that the singularity breaks into the Top 3 sci-fi faith-based initiatives. I usually count them like:
(1) Technology will reduce our work hours until almost all of us are leisurely, creative, artist-types.
(2) Automated warfare will result in conflicts occurring in which almost no humans die.
(3) There is intelligent life in outer space that we can possibly contact.
so... (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Hey! I want my transporters, warp drives, and a galaxy full of humans-with-extra-bumps-embodying-a-particular-stereotype, and I want these things NOW!
I would trade all of that for one Orion slave girl.
Re: (Score:2, Insightful)
If slavery is the only way you can get women, maybe you should spend less time watching ST and more time working on your people skills?
Re: (Score:3, Insightful)
The irony being that the "slave girls" were secretly in charge of their society the entire time.
Re: (Score:2)
Hey! I want my transporters, warp drives, and a galaxy full of humans-with-extra-bumps-embodying-a-particular-stereotype, and I want these things NOW!
Why does everyone always forget the deflector dish tech? It's probably the most powerful bit of tech in the newer ST series. Reversing the polarity or rerouting something through the deflector array can do damn near anything short of creating life.
Re: (Score:2)
Re:Science Fiction is fiction made up by authors (Score:5, Insightful)
The disparaging way that the summary and article talk about references to science fiction stories is practically an ad hominem attack. There is nothing inherently wrong with science fiction stories that makes them improper for thinking about the implications of changing technology. Much of the best sci-fi in existence is little less than thought experiments about how various kinds of advances might affect humanity on an individual and cultural level.
Re: (Score:2, Insightful)
There's a big difference between "Hmm, what would happen if nuclear power cells existed and we could build a computer the size of a planet!?!" and "This is the specific scientific path that will lead us to that future."
Literature of any form can enlighten, provoke, and illuminate. But confusing "What if?" with "This is the way it will happen!" prophecy is fucking stupid.
But, what is a singularity? (Score:5, Insightful)
The singularity, of course, is defined as the point where the function and all its derivatives approach infinity. There is another way to think of a singularity. If you are extrapolating a function based on a power series around a point, you can only expand that power series as far as the closest singularity ("pole") in the complex plane (the "radius of convergence"). You can't extrapolate further than that with a simple power series, even if you aren't trying to solve for the function at the pole itself.
So, thinking science fictionally, we can't extrapolate the future based on the present any further than the distance to the singularity, even if our actual future doesn't in fact pass through the singularity.
So, don't think of the technological singularity as a time when life for humans ends, and robots/artificial intelligences/transcended humans take over. Think of it as a time scale beyond which we can't extrapolate the future based on what we know now.
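The textbook example of that analogy (my addition, not the parent poster's): the geometric series

\[
\frac{1}{1-x} \;=\; \sum_{n=0}^{\infty} x^n \quad \text{only for } |x| < 1,
\]

because the pole at x = 1 limits the radius of convergence. Even at a perfectly finite point like x = 1.5, where the function itself is well behaved, the extrapolation from data around x = 0 simply fails; you never have to reach the pole for it to wreck your forecast.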
Re: (Score:3, Funny)
The singularity, of course, is defined as the point where the function and all its derivatives approach infinity. There is another way to think of a singularity. If you are extrapolating a function based on a power series around a point, you can only expand that power series as far as the closest singularity ("pole") in the complex plane (the "radius of convergence"). You can't extrapolate further than that with a simple power series, even if you aren't trying to solve for the function at the pole itself.
So, thinking science fictionally, we can't extrapolate the future based on the present any further than the distance to the singularity, even if our actual future doesn't in fact pass through the singularity.
So, don't think of the technological singularity as a time when life for humans ends, and robots/artificial intelligences/transcended humans take over. Think of it as a time scale beyond which we can't extrapolate the future based on what we know now.
So you're saying I won't be able to fuck a sexbot by 2035?
Re: (Score:3)
Not gonna happen. But I'd place odds that in 2035, in Soviet Russia ...
Re:Science Fiction is fiction made up by authors (Score:5, Insightful)
Re: (Score:2)
However you may want some backup from science if you make real world predictions (and prevent real world solutions because people are pre-occupied by la-la-land.)
Re:Science Fiction is fiction made up by authors (Score:5, Insightful)
There are more than a few people like that here.
But Vernor Vinge isn't one of them. In his original paper, he used them to illustrate how difficult-to-comprehend concepts might conceivably play out. For example, he mentions that a singularity may play out over the course of decades or over the course of hours. Imagining how such massive changes could occur on a global scale in just a few hours is difficult, so he points the reader to a book whose author has already put time and effort into imagining how such a thing could play out and what some of the implications might be. He is using the book precisely as a thought experiment to examine an especially extreme part of what he is describing.
Re: (Score:2)
Slashdot publishes flamebait articles with some regularity, it just feels worse today because we've had two consecutively.
Re: (Score:2)
Re: (Score:2)
In order to simulate a human brain at the atomic level, first we would have to know exactly which chemicals are in a real brain, and we don't even know that much yet.
Trying to model a human brain in a computer in order to build an AI is like trying to build a mechanical horse in order to get around faster. While it isn't impossible, it's neither practical nor necessary. You can make a machine that bears no resemblance whatsoever to the original biological version, and it will still accomplish the same task.
Re: (Score:2)
This is not a hard problem to solve. You just put a brain in a blender and send the resulting goo through a mass spectrometer.
Re: (Score:3)
Atomic level is not precise enough. There are a lot of quantum effects in synapses.
Re: (Score:2)
Actually, PC speeds never increased at an exponential rate, and currently we are even sub-linear. What did increase exponentially for a while is the number of transistors in there. The speed up you get is vastly less than linear in the number of transistors and the limiting factor has been interconnect for almost 2 decades now. And that cannot scale exponentially and never did.
Re: (Score:2)
Marshall Brain has some very good ideas about what we could do as a society to ease our way past our 3rd generation society into a more-fair 4th generation post-scarcity society. http://marshallbrain.com/manna... [marshallbrain.com]
Singularitarians may be nutty, but believing in a 'post-scarcity society' is worse. There will never be more resources than humans can use, unless you discover a way to magic stuff out of nothing, forever.
Re: (Score:3)
If machines are incapable of true intelligence, then so are we, because we are machines.
Do I think that any of the AI research currently going on even begins to come close to the ridiculous complexity of a human brain? No. I think they're useful approximations in terms of getting stuff done, but nothing we're doing now will produce anything that's actually "intelligent", as opposed to merely acting like it. But it's clearly *possible* to create a brain, because brains exist.
Re: (Score:2)
You are quite correct. The problem is these people assume that physical reality as known today is complete. That would indicate that humans are mere physical machines. However there is absolutely no indication that physics knows it all and a few rather striking ones that it does not. Examples: Still no GUT, AI research has not even a theory how intelligence could be produced, etc. In the end, the whole argumentation is circular, like so often with the religious mind-set.
But: It is not completely impossible