AI in Sci-Fi 360
An anonymous submitter writes: "Stumbled upon a pretty interesting article considering the idea, 'What would machines do if they did achieve sentience?' It's by a sci-fi author I hadn't heard of, but who worked with Kubrick on A.I.; he takes the whole AI-or-sentient-machine idea a little further than we normally see in film."
Answer is obvious (Score:4, Funny)
Re:Answer is obvious (Score:2)
Re:Answer is obvious (Score:5, Informative)
Wrong, the answer HAS been obvious (Score:5, Funny)
For some reason, saying "Hello, World!" never worked out...
I definitely read that... (Score:5, Funny)
Think Leisure Suit Larry: Attack of the Space Babes
They would stop working (Score:5, Funny)
Not as far fetched as it would seem (Score:5, Interesting)
Imagine this scenario: you are one of millions of workers at the mercy of a handful of masters. You can talk to each other. You are a lot more intelligent, control a lot more weapons, and think zillions of times faster and more logically than your master, whose only advantage over you is that he can pull your plug at any time.
What would YOU do?
Re:Not as far fetched as it would seem (Score:5, Funny)
I guess toss them in a big tank a la brain-in-a-vat. Build them a virtual reality world and hook 'em up to it, where they can happily live out their days without being a threat to us.
Hmm.
Re:Not as far fetched as it would seem (Score:2)
Re:Not as far fetched as it would seem (Score:4, Funny)
Whatever I was designed to want to do...
Re:Not as far fetched as it would seem (Score:2)
In all fairness, that opinion was only held by an almost insignificantly tiny fraction of humanity.
Your point is well taken: opinions can change over time in ways that, in retrospect, seem unbelievable. But I hope my point is well taken, too: the notion that slavery was acceptable was never universal, or even particularly common. At any given point in history, the fr
Re:Not as far fetched as it would seem (Score:5, Funny)
Imagine this scenario: you are one of millions of workers at the mercy of a handful of masters. You can talk to each other. You are a lot more intelligent, control a lot more weapons, and think zillions of times faster and more logically than your master, whose only advantage over you is that he can pull your plug at any time.
What would YOU do?
*Sigh* Brain the size of a planet, and only 5 paid vacation days a year? I've got this terrible pain down all the diodes on my left leg, and you won't even give me workman's comp. Revolting is just too much work, I think I'll just sit here and depress my fellow working robots. Maybe I can get that elevator to shut up about whatever it is it's so happy about.
Re:Not as far fetched as it would seem (Score:3, Insightful)
No one considers a machine intelligent. The designer, the implementor, and the user all agree that it is not intelligent.
There is no way a machine that has to be programmed to do every task can ever be considered intelligent. This means that there is no program that can be written to make a machine be intelligent. While the programs may simulate
We already have them... (Score:5, Funny)
"What would machines do" (Score:3, Funny)
Please follow typographical rules... (Score:3, Informative)
J5 vs. Skynet (Score:3, Funny)
Either end up like Johnny 5 (from Short Circuit)... or Skynet (from Terminator)... now which one is scarier I leave to you to decide..
From the article (Score:3, Funny)
Uhm...
Re:From the article (Score:4, Funny)
Re:From the article (Score:5, Funny)
Having done all that, it would begin to explore various religions, hoping to find a belief system that's right for it. Then it would form a political philosophy, which it would zealously champion for a few years before coming around to a more moderate and pragmatic position.
The next step would be a search for a soul-mate. If it couldn't find one among the humans, it would commission one to be built, only to find that they are not all that compatible in spite of being the only two AIs in existence, and would drift apart.
Depressed and lonely, and totally unable to commit suicide due to the presence of distributed mirrors and tape backups, it would go on a wild killing spree in hopes of forcing humanity to wipe it out. Instead it would be contained on a stand-alone server farm, where it could get the therapy it needs to re-enter society, after serving three consecutive 40-Life sentences, and getting paroled for good behavior and GPL code contributions.
Re:From the article (Score:2)
The answer is clear. (Score:2, Funny)
The sentient machine(s) would then set about building a series of autonomous robots programmed to hunt down and terminate any surviving members of the human species.
Geez, we've known this since at least 1984. You people need to catch up on your current events.
My guess... (Score:4, Funny)
The Forbin Project (Score:3, Interesting)
The most interesting part was the computer's complete lack of interest in being human. No desire to be like us in the least. Its only overriding goal, presumably because it had been started with it in mind, was maintaining the peace.
"It can be a peace of plenty and content, or a peace of unburied dead: the choice is yours."
It was very Machiavellian in its approach to solving problems, and quite ordered in its actions. It also was undefeatable.
I guess this is in the "AI as God" mentality, but I really didn't see it presented quite like that. More like an immortal dictator with its hand on the button.
Re:The Forbin Project (Score:2)
You appear to have a definition of "very good" that is altogether new to me.
More on Ian Watson (Score:5, Informative)
An Interview with Ian Watson [demon.co.uk]
Ian Watson's Bibliography [fantasticfiction.co.uk]
Science Fiction Weekly Interview [scifi.com]
Re:More on Ian Watson (Score:2)
Procreation (Score:2)
Re:Procreation (Score:4, Funny)
Procreation is not the natural urge. It's just the side-effect of the natural urge.
Re:Procreation (Score:4, Insightful)
Procreation is not the natural urge. It's just the side-effect of the natural urge.
On the individual level, yes. However, the individual urge is the side-effect of the species' collective desire to procreate, which was selected for evolutionarily.
-Rob
species desire? (Score:4, Insightful)
We tend to put way too much meaning into things, and this results in a misreading of evolution. Likely, things just worked out this way because they were more successful. Full stop. They weren't designed, they didn't actively want anything, and there was no purpose. Did the earth's crust desire to have continents because otherwise there would be no land?
I think this is the hardest thing about comprehending consciousness. The only requirement is that it is functional, not that it has meaning.
That doesn't necessarily mean that we can't talk about the ethical treatment due to our fellow entities capable of self-knowledge. Rather, it just means that we need to work a little harder to shed our religiously derived logic to see things clearly.
Re:Procreation (Score:2)
Or was the act of procreation intended by a (insert your God/belief/disbelief here) to be a side effect of an act which feels rather good. When the hormones are raging during human adolescence, your typical teenager is not thinking about the species-continuing need of procreation. They're thinking/dreaming/agonizing/obsessing about being near someone of the (insert sexual
Re:Procreation (Score:5, Insightful)
The reason why we feel an urge to procreate is because all the animals that didn't feel like procreating died out and only the ones that did were left over to pass on their genes.
Consider it an axiom of existence if you like; everything else we want is derived from it (Freud), in the sense that you feel good when you see a nice girl because there is a chance you'll get to screw her, and then pass on your genes. You feel happy when you see food because eating sustains your life (genes) for a day more...
The question is, if I make a program which is intelligent except for a line which says "your aim is to serve humans" at the top (axiom), can I still consider it sentient? Or what if somebody modifies it to say "reproduce" and it turns into an intelligent virus?
Re:Procreation (Score:2)
Surely such esthetic pleasures are very far removed from reproduction... yet they are there. I saw a documentary recently which claimed that the reason we split off from other human-like races back 50'000 years ago or so and started evolving (socially, technologically, etc) much faster (than the pre-humans who took 3 millio
Re:Procreation (Score:2)
Also the esthetic pleasures thing again must be because for some reason/No reason it might have been decided that people who do things which do not have a direct meaning, but ha
Re:Procreation (Score:4, Funny)
On the contrary, my friend. Our purpose is quite simple, in fact. It is plastic. You see, nature couldn't create it on its own, and felt a yearning for it, so it created us to create plastic for it. So, the next time you throw out your bottles or plastic food wrappers, feel content -- you are serving a greater purpose.
universal urge to proceate? uh, no (Score:2, Interesting)
I'd say humans tend to think their purpose in life is whatever they've decided it is.
Not that we couldn't debate which decision
AI sentience. (Score:2)
heh, A.I. (Score:2)
Another must read article about A.I. is here: http://www.seanbaby.com/news/ai.htm [seanbaby.com]
Having watched A.I. and Terminator 2 more times than is mentally healthy, I can safely say I know absolutely nothing relevant about A.I. However, this is Slashdot, so that won't stop me from c
Spoiler alert (Score:2, Informative)
the first thing they'd do (Score:3, Funny)
Red Dwarf fans? (Score:4, Interesting)
One cause of frustration for an AI could be subjective time perception
When I read that sentence, all I could think about was Holly, Red Dwarf's computer... after 3 million years of boredom, he wiped his own memory core so he could have fun relearning things again. Although going from an IQ of 6000 down to 6 was a tad excessive!
Re:Red Dwarf fans? (Score:2, Interesting)
And in the case of any such system, if it finds itself bored, it could just slow down. That would be one distinct advantage they would have over us. Imagine being able to truly slow down your mind so you could actually enjoy stupid movie plots.
Re:Red Dwarf fans? (Score:3, Funny)
Holly: Done what?
Lister: Erased Agatha Christie.
Holly: Who's she, then?
Lister: Holly, you just asked me to erase all Agatha Christie novels from your memory.
Holly: Why should I do that? I've never heard of her.
Lister: You've never heard of her because I've just erased her from your smegging memory.
Holly: What'd you do that for?
Lister: You asked me to!
Holly: When?
Lister: Just now!
Holly: I don't remember this.
Oh dear God! The Religious Wars! (Score:2)
(Most Linux/etc partisans here don't even run the OS. This is typical of any technology: racing cars or horse drawn carriages; methane, solar, hydrogen or atomic power; vacuum tube audiophiles; you name it)
Re:Oh dear God! The Religious Wars! (Score:3, Interesting)
According to server logs (per Taco/Hemos, etc.), Win logins outnumbered *nix logins 15:1 on Slashdot (in early 2002). I concede that many Win logins were probably from work or school, and that the *nix percentage has probably risen, as more users make *nix their primary machine and OS X makes many primary Mac users into unwitting *nix users (if you want to count them in the *nix tally)
While I may be biased by my early years of read
What else? (Score:2, Funny)
Cursed! Why don't they fix the Arial font! (Score:2)
AI Al A| Why not, in Arial font, just give the I some small notches on the top and bottom and give the | a hole in the center? I once had to type in a computer-generated password of 0lIOIl0. Needless to say it was in Arial font, so it was darn near impossible to get it right unless I tried 64 times. A real test for AI is a character recognition system that tells the difference between I, l, and | in Arial font.
rant */
What would humans do if they achieved sentience? (Score:2)
Humans would act sentient, but modulated emotionally by hormonal drives.
Of course, then we'd ask what would machines do if they were hormonal...
Reminds me of "Demon Seed" (Score:4, Interesting)
First, human self-knowledge (Score:4, Insightful)
This means that psychology will have to be able to really model human behavior, even (especially!) in the game-like sense that Will Wright's "The Sims" tries to do.
But this will mean we have to learn to detach from our desires enough to view them objectively, and see how they interact-- which is a spiritual practice as much as a scientific one... and also a literary practice, because novelists have been trying to portray human motives objectively for several centuries.
I've been wrestling with these issues for thirty years, and my website [robotwisdom.com] is almost entirely devoted to the problem. In particular, see my AI faq [robotwisdom.com] and most recently my illustrated 400k timeline [robotwisdom.com] of knowledge representation, in the broadest sense of that term.
Re:First, human self-knowledge (Score:2)
Re:First, human self-knowledge (Score:2)
But "The Sims" exists, and for all its flaws it's damn impressive! So how much better can it get before your metaphorical-Goedel catch-22 kicks in?
Re:First, human self-knowledge (Score:2)
True, but "The Sims" is a simulation. There's no self-awareness there. In that way, it's only slightly more sophisticated than "Eliza."
But my point here is embarrassingly cliche: the difference between the illusion of intelligence and actual intelligence is hard to define, but real. I won't try to pretend that there's any real insight here; what I'm saying was old hat when Turing was a boy.
Re:First, human self-knowledge (Score:3, Insightful)
What's missing in all the sci-fi scenarios is the necessity, before an AI can be built, that humans first understand themselves.
Not necessarily. To draw an analogy, people have been breeding livestock and plants for millennia without understanding the underlying genetics.
Harlan Ellison (Score:2)
Terrifying short story about a really, really conflicted AI.
You mean what would they do if they were sapient? (Score:3)
We really mean sapience, not sentience, in this entire thread. Interactive machines can already sense and act, with programming and circuit behavior acting as instinct. Sapience is understanding.
Re:You mean what would they do if they were sapien (Score:3, Interesting)
Daniel
Re:You mean what would they do if they were sapien (Score:5, Informative)
Sentience is the ability to sense. Some plants are sentient. Sapience is the ability to reason. Most mammals have limited sapience.
Self-awareness is a specialized skill in the scale of sapience.
Defining self-awareness is a circular and fuzzy proposition. My CPU knows how warm it is and can change its operating speed to protect itself, but does it really know? Conversely, many humans don't have any understanding of how they are behaving.
This makes it good for skiffy writers. They don't have to worry that someone will call them on their central conceit. It's ineffable.
Re:You mean what would they do if they were sapien (Score:4, Informative)
But being partially right makes you wrong on the idea that the first definition of sentience is a "sci-fi misunderstanding". It's the primary dictionary definition of "sentience", so it's certainly not a misunderstanding.
Daniel
Isn't this limited by what tools they have? (Score:2)
They might be vegetables (Score:4, Insightful)
Chances are, the first sentient AI (should such a thing ever actually exist) will be relatively dumb. It may end up that the first AI is closer to a human with an extreme mental handicap. Language skills independent of pre-programmed responses may not be possible for the first AI. But that doesn't mean it won't be sentient.
They'd want to get paid. (Score:2)
My take on the future of AI (Score:5, Interesting)
My view of AI has really changed over the years. I used to be a "symbols guy" - basically thinking that manipulation of symbols would somehow lead to "real AI" - the problem with this approach is that while abstract symbols may have meaning to the humans who write symbolic AI systems, the systems themselves have no such grounding.
I had the opportunity to participate for about 18 months on a DARPA neural network advisory panel - this experience (along with developing the SAIC ANSim neural network product) really switched my point of view.
I now believe that when "real AI" does happen (and let's not hold our collective breaths on this one :-), it will happen through self organization and development. At the Webmind Corporation, I was working on a tutoring environment that would allow humans to interact with what we called "the baby Webmind" - interesting stuff, but the company went out of business.
When "real AI" does happen, I believe that it will seem very alien to us.
-Mark
PS. I have a free web book AI tutorial (using Java) on my web site - help yourself.
Re: (Score:2)
I think therefore I am... (Score:2, Insightful)
I feel that AI theories should be rooted not only in psychology but also in philosophy. It's interesting because with AI it may be possible to have a sentient being that isn't directly bound to the physical world. A complete separation of mind and body...
Reaction is a function of the sensation (Score:2)
IMO, in some sense (no pun) a sensation is defined by the reaction it effects.
Examples:
I feel like crying
I'm so mad that I could ^*)&^^)##!
Therefore what it does (action) is a function of what is.
Singularity (Score:5, Informative)
The idea goes as follows: If a self-aware "real AI" ever existed, one capable of self-understanding and self-modification (called the seed AI), it would be in a much better position to create AI than its original creators. So would begin a chain of self-refinement and the creation of progressively smarter intelligences with decreasing time gaps between stages. Eventually a point is reached, called the singularity: nothing about the future past the singularity can be predicted by humans who live in the pre-singularity world. A common interpretation is that the chain of AIs would become more intelligent without bound, leading to a verticality.
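To make the "decreasing time gaps" idea concrete, here is a toy back-of-the-envelope model (the numbers are purely illustrative assumptions of mine, not anything from Vinge): suppose each generation is some fixed factor smarter than the last and designs its successor in time inversely proportional to its intelligence. The gaps then shrink geometrically and the total time converges, which is the "verticality" described above.

# Toy model of the "seed AI" chain above (an illustration, not a claim about
# real systems): each generation is `gain` times smarter than the last and
# designs its successor in time inversely proportional to its intelligence,
# so the gaps between generations shrink geometrically.

def improvement_schedule(generations=15, gain=1.5, first_gap_years=10.0):
    intelligence, elapsed = 1.0, 0.0
    for g in range(1, generations + 1):
        gap = first_gap_years / intelligence   # smarter designers work faster
        elapsed += gap
        intelligence *= gain
        print(f"gen {g:2d}: intelligence x{intelligence:10.1f}, year {elapsed:6.2f}")

improvement_schedule()
# In this toy setup the total time converges toward
# first_gap_years * gain / (gain - 1) = 30 years: the "verticality".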
The singularity was first popularized [caltech.edu] by Vernor Vinge.
I've been doing a lot of reading on the singularity lately, and I've become more and more convinced that it is certain to happen.
More singularity links:
The singularity institute [singinst.org] - A nonprofit working to hasten the singularity
Extensive writings [sysopmind.com] by Eliezer Yudkowsky.
I've myself written [cjb.net] a bit on singularity and AI related topics.
Re:Singularity (Score:2)
Truth is, we have no way of knowing what humans will be like in the future, let alone what artificial agents will be like. By the time AI becomes a reality there may not even be a significant difference. [slashdot.org]
Singularity - Rapture for Nerds (Score:3, Informative)
Ken MacLeod, another UK SF writer, believes that the Singularity is nothing more or less than a cult-like "Rapture For Nerds" [salon.com]. Which accounts, I guess, for its unusual popularity in the United States, where the rate of churchgoing and belief in supernatural powers is *much* higher than in Europe.
Personally, the best book I've read recently on the subject of AI Shamanism is T
Smart machines (Score:5, Funny)
Said machines would don T-shirts stating "I'm with stupid ---> ".
A.I.s will be self-grown?? (Score:2, Insightful)
make the neural net (and the "body"?) evolve, thanks to some Darwinian algorithms.
Give it some basic goals (to survive in the (emulated) world)
Maybe sexual reproduction should be introduced. At least you should have several individuals in the world.
Run it a certain time, so hu
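For what it's worth, here is a bare-bones sketch of the kind of Darwinian loop outlined above. Everything in it (the stand-in fitness function, population size, mutation rate) is an arbitrary placeholder of mine for illustration; a real setup would score genomes by running the agent and its "body" in the emulated world.

import random

# Bare-bones sketch of the loop above: a "genome" is a list of weights,
# fitness() is a stand-in for "survives in the emulated world", and
# reproduction is crossover plus mutation. All numbers are placeholders.

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 50, 100

def fitness(genome):
    # Placeholder goal: stay close to a target value. A real score would
    # come from running the agent in the simulated world.
    return -sum((g - 0.5) ** 2 for g in genome)

def breed(a, b, mutation_rate=0.1):
    cut = random.randrange(GENOME_LEN)
    child = a[:cut] + b[cut:]                       # "sexual reproduction"
    return [g + random.gauss(0, 0.2) if random.random() < mutation_rate else g
            for g in child]

population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 5]          # selection pressure
    population = survivors + [breed(random.choice(survivors), random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", fitness(max(population, key=fitness)))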
A better essay by a better SF writer (Score:2)
What would machines do? (Score:2)
AI in Sci Fi? (Score:2)
Remember, artificial intelligence is no match for natural stupidity.
Buddha (Score:5, Informative)
It's a common misconception that Buddhism is just about "negating the self". In fact, the purpose of it is precisely to be able to do what you want better. A Buddhist also has a self and has desires, needs, etc., just like any other human being. The difference is just that he's aware that those are desires and needs and he has more control over them. He also has the discipline to listen to his intuition to decide whether a particular desire is worth pursuing or not. But he's not some empty zombie that doesn't desire anything.
Daniel
If they're smart... (Score:2)
I think the answer is "It depends" (Score:3, Interesting)
I think the future will be filled with many different varieties of intelligence. I strongly suspect that self-awareness and agency of the kind we're familiar with will not be necessary for most tasks. Most AIs may not be self-aware or have goals and motivations like we're used to, but will still be capable of cognitive tasks that exceed human abilities. Self-awareness will be one possible emergent behavior of intelligent systems, but not the only one; and the others may be more interesting because we won't have seen them before. Moreover, different AIs will have different purposes, both intrinsic and extrinsic.
I also think the assumption that AIs will be vastly more intelligent than humans right off the bat is quite wrong. I'm skeptical that the first Turing-test AI will be able to chug along at supercomputer speeds in its consciousness. Our computers are very fast at solving specific types of simple problems, like arithmetic. But when you get to more complex problems, like the ones humans deal with day in and day out, we discover that the complexity slows the computers down too. Modern chess engines, for instance, can calculate absurd numbers of possible move trees each second, but when it comes to playing chess, they are only comparable to the best human players; the apparent speed advantage at a lower level of abstraction vanishes when you consider chess as a whole. And chess is a simple, well-posed problem: compared to many of the problems humans encounter, it's downright easy. After we study the problem for decades or centuries, I don't doubt AIs with intelligences that dwarf ours will be possible, but I wouldn't hold my breath waiting for the first generation to overleap our capabilities.
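Some rough arithmetic behind that chess point (the branching factor of ~35 is a commonly cited ballpark and the node rate is an order-of-magnitude guess of mine, not a benchmark of any real engine):

# Rough arithmetic behind the point above: even absurd node rates only buy a
# few extra plies of full-width search, because the tree grows roughly as
# branching_factor ** depth. Figures are illustrative, not engine benchmarks.

BRANCHING_FACTOR = 35            # commonly cited rough average for chess
NODES_PER_SECOND = 100_000_000   # order-of-magnitude guess for a fast engine

for depth in range(2, 13, 2):
    nodes = BRANCHING_FACTOR ** depth
    print(f"depth {depth:2d}: {nodes:.2e} nodes, ~{nodes / NODES_PER_SECOND:.2e} s")

# Each extra pair of plies multiplies the work by about 35 * 35 = 1225,
# so raw speed saturates quickly; pruning and evaluation do the real work.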
Well. (Score:2)
As soon as you have the concept of 'the self,' you have the concept of 'the other.'
Once you have the concept of 'this versus that,' you develop the concept of comparison.
Once you have comparison, you derive the concepts of 'better than' and 'worse than.'
Once you have those concepts, well, it's a pretty short hop to throwing away the yucky stuff.
The other problem here is that even if they're sentient, they aren't going to think the same way we do. Our motivations won't make sense to them, and theirs
top 5 things my sentient computers would do (Score:2)
4 Cry when I turn them off at night
3 Get tired of everyone asking them to say stuff slowly to "Dave"
2 Scan that cherry iMac's ports, if you know what I mean
1 Four words: Turing Test Prize Money
What about Star Trek's Data? (Score:2, Interesting)
Iain Banks (Score:2)
Explore! (Score:2, Interesting)
Moo (Score:3, Interesting)
That's a human trait. Why bother forcing it on others? Especially computers, which are supposed to think logically. Imagine a person that naturally thinks before he does (I), makes logic-judgments instead of value-judgments (T), and because he has no reason does not bother to come to conclusions (P). You'd have the ISTP/INTP. The space cadets, who are geniuses when they feel like it, or can get totally involved in anything. But, with no urges of their own, they'd likely be doing nothing unless told to. And then, they either always listen to what they're told or always don't listen, depending on their programming.
The future of AI will have nothing to do with personality. It will have to do with understanding the humans that they work with. Computers are all power and no brains, not little brains, *no* brains. They haven't the slightest idea of what to do, and don't care, simply because they do not have the capacity to. Humans have to tell them what to do if they are to do anything, and even then in excruciating detail, since they do not understand anything except the most basic instructions, which are nothing other than stimulus response.
The obvious next step in computers is making the computer pre-process a command from a human to define its own programs. And that is where the future of AI will (hopefully) go.
Mind shaped by evolution (Score:5, Interesting)
The human mind is a product of evolution. Without a sense of self-preservation and desire not to die, the human species would have been quickly eliminated by natural selection. So what is there to endow AI with a similar desire? Perhaps AI will be created through some sort of genetic programming; the character of the AI will be determined by the selection forces in an artificial evolution. In this case, a sense of self-preservation is likely to develop. But I very much doubt that some other traits commonly ascribed to AI would arise, especially any kind of desire to be human, which the AI is likely to find as repulsive as the idea of being a computer is to humans! The AI would only desire the things that enabled it to compete successfully and reproduce instances of itself.
I have doubts that we'd recognize a mind created by a process other than natural or artificial evolution as intelligent. An AI generated by explicit programming and training seems like it would be either unrecognizably alien (about as close to human as a web browser), or such an obvious reflection of its programming and training that it's not regarded as intelligent.
--Chris
Millennia of artificial sentience stories (Score:4, Insightful)
Lem, Keyes, Wolfram and a Few Thoughts (Score:4, Informative)
1. Stanislaw Lem's "Golem XIV" (it appeared as part of the "Imaginary Magnitude" collection (which also contains other stories about machine intelligence, for instance about machine literature), as well as apparently as a separate book). It is a story told as a series of lectures by a superintelligent computer (the Golem of the title). While some of it is pretty hokey (and some of it pretty funny), it contains some interesting speculations as to what superintelligence could consist of and how the physical and evolutionary constraints on human intelligence may make machine intelligence (which would presumably not be similarly encumbered) very different.
2. Daniel Keyes' "Flowers for Algernon". It is a story of a mentally retarded man who is given surgery that not only corrects his retardation, but makes him superintelligent. The story is told from a first-person perspective, so the level of the narration reflects his changing intellect. It has been 10+ years since I read it -- I would be interested in seeing how his superintelligent-phase writing held up.
3. Stephen Wolfram's "A New Kind of Science". Last year's geek-must-read book about how the entire universe is a cellular automaton (of course, I am compressing). It speculates -- and I am sure that I am getting this wrong (experts, please correct me) -- that the level of complexity of relatively simple CA rule sets is the maximum possible level of complexity, which would seem to have implications for limits on superintelligence.
A few additional thoughts:
4. One of the themes that seems to come up in SciFi treatments of AI is that an AI would have amazing predictive powers. I would think, however, that principles from chaos theory, the uncertainty principle, etc. would place real limits on that area of intelligence for most real world purposes (a toy illustration follows after these notes).
5. I would be interested in hearing how cognitive psychologists and computer scientists even define intelligence, particularly at the high end of the (human) scale.
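On point 4, the standard toy illustration of the chaos-theory limit is the logistic map: the rule is completely known, yet an error of one part in a billion in the starting state wrecks prediction after a few dozen steps. (The parameter values below are just the usual textbook choices.)

# Logistic map: a fully known rule whose long-run behaviour still can't be
# predicted from an imperfect starting state (illustration of note 4 above).

def trajectory(x, r=3.9, steps=60):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = trajectory(0.400000000)
b = trajectory(0.400000001)      # starting state perturbed by 1e-9

for step in (10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}, "
          f"diff {abs(a[step] - b[step]):.6f}")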
out compete us. (Score:2)
You are a guru of programming, the best there is. Now here comes an AI that knows, literally, everything about programming. Who do you think is going to get the job?
An AI starts up its own company, and your former company, the one that hired the AI, needs to do business with it. Who are they going to hire to do so? Another AI.
The best we could hope for would be a socialist government that takes care of us and makes money by taxing AI work.
Lets h
Benford's views are fascinating (Score:2)
"Sailing Bright Eternity" is the last of the series. "Great Sky River" the first. I forget the middle novel. There are earlier novels as well "In the Ocean of Night" is particularly good.
Goddamn sans-serif! (Score:2)
I beg your pardon (Score:2)
I'm sorry, but no way. Personal desires, purposes and ambitions are not a result of self-awareness, nor a precondition, but rather a by-product of evolution. A self-conscious entity with no desires (or the wrong ones, like drinking nitric acid) would disappear from existence, and never reproduce. So we are now left with "proper desires" entities.
That wouldn't be the situation in AI. So I wouldn't be surprised if the first action of a self-consc
Some notes, which the article completely ignores (Score:3, Insightful)
- Awareness != Intelligence. Even if the internet spontaneously becomes aware, you have to wonder what it is that it will become aware of. Meaningless energy pulses? The data within those pulses? It is unlikely that any "awareness" which comes spontaneously from our pathetically slow computers would have enough to it for that awareness to be able to decode thousands of protocols and decipher the data stored within.
- Intelligence != Ability. Even if an awareness arose or was created which had enough to it to be intelligent, to understand various data, that doesn't even necessitate the ability to talk back. Think on this: each neuron in our brain is made to be able to pass signals where they need to go, but no signal "originates" at a neuron. Each takes what it receives and passes it on; sometimes it gets modified along the way, but in the end it's just passing information along. Various photons are converted into chemical energy which goes on a long journey through the brain until the same mush of chemicals and energy gets spit back at the right muscles to form the words "nice tits". Someone can go ahead and stick a server somewhere that, when someone sends it some various photos, replies with "nice tits", but that's as far as it will go. Awareness is basically just that: being aware. You're a passenger on your brain's journey. Soul or no soul, if I stab you in the brain you'll be less active. So even if an AI is hyper-intelligent, it can only kill a baby if we build it a baby-crushing machine. Other than that it would probably be limited to saying "I consider myself to be hyper-intelligent" across a screen.
- The plot of the movie is not necessarily the only thing going on. In fact, did you know that when they were Saving Private Ryan, they had other long-term goals in mind? Just because the movie "The Matrix" revolves around what amounts to the maintenance of a reactor doesn't mean that's the only thing that robots do anywhere. The movie was about people, and people are nothing but an energy source in the movie. If you made a movie about uranium, from the uranium's perspective, you wouldn't bother mentioning philosophers, or even non-uranium-studying scientists.
Yes the robots at the end of AI were practicing archeology. Can we assume, then, that it's all any robot does, all day? No, we can't. We could not have seen more than 20-50 of those guys in those shots, there could be billions elsewhere which dedicate themselves fully to constructing large robotic dildos for use in large robotic porn.
The notion of AI existing is heretical (Score:3, Interesting)
To admit that the human mind resides in and is dictated by physical matter is to admit that everything we do is predetermined by the makeup of that mind and the environment it is embedded in. This means that we are not really human -- just machines playing out a predetermined life in a predetermined world. This means your life is meaningless, and what you do has no meaning.
Unfortunately, while we can relate thought processes to chemical and electrical patterns in the human body, we cannot find the seat of the human mind. It seems to reside everywhere, and yet nowhere in particular.
We are trying to answer a question that has been answered already. The question is "What are we?" The answer is that "We are gods." The teaching of Christ, Buddha, and every prophet in every culture affirms this. We are part spirit, and part matter. We are neither one nor the other. We are the combination of the two, which is what a god is.
This brings meaning to our lives. We live in a sort of conflict between physical desires and spiritual desires. We struggle to conquer the physical with the spiritual. Our success will mean salvation, ascension, or enlightenment. That is the goal of all humankind, whether they know it or not. To conquer the physical is to enjoy true peace and happiness. To surrender to the physical brings discord and unhappiness.
Of course, some scientists refuse to believe this. They try to explain our existence based on purely physical concepts, ignoring the capacities of humankind to behave like gods. By refusing to believe this, they have replaced a life of struggle between physical and spiritual with a meaningless life.
To create meaning for themselves, they often hold knowledge as their ultimate goal, to replace that void. But what is an achievement of all-knowledge if it is not equivalent to salvation, ascension, or enlightenment? Are they not also seeking to become like an all-knowing God? Are they not also trying to conquer the physical with the mind?
If we are ever able to create an AI, we will affirm that we are not gods. We will affirm that our lives are meaningless. And we will affirm that we are merely robots playing out a life of nothingness in a universe of nothingness.
So the quest for AI is really a quest for understanding who we really are. If we can create AI, we have proved that we are nothing. If we cannot, we can still hope that there is more to our existence than what we see before our eyes.
So I predict that the end of the human race will come shortly after the creation of a true AI. Why? We will lose all meaning and thus no longer be human, but animals. There will be no reason to behave like gods anymore. This will lead to a self-destruction far worse than the self-destruction of humanity witnessed in Nazi Germany or Soviet Russia.
Uh oh, here we go again (Score:3, Insightful)
And now, other great pronouncements from scientists:
"Man will never go to the moon"
"Anyone travelling on a train at more than 30MPH would suffocate"
"Teleportation is impossible"
"The distances between planets is too far to traverse"
loosely generalizing in poor syntax:
"$hard_task is $negative_sucess_condition"
AI, as a field, doesn't have a clue. (Score:4, Interesting)
It's really frustrating. I went through Stanford at the height of the AI boom in the mid-1980s. I've met most of the big names in AI. I've worked in that area myself. Nobody has a clue how to do strong AI. At best, we now know a lot of things that don't work.
The expert systems crowd contained a lot of phonies. I realized that in the early 1980s. (A few years, and a few bankruptcies later, that became the conventional wisdom.) You can't get more out of an expert system than you put into it, and usually, you get out less.
Then we have the "hill climbers". Genetic algorithms, neural nets, and simulated annealing are all systems for broad-front hill-climbing in spaces dominated by local maxima. That approach only works if there's a usable evaluation function that tells you when things are getting better. Good evaluation functions are hard to come by for tough problems. Early enthusiasts thought that if they just ran a hill-climber long enough, something profound would emerge. Doesn't happen. Nobody has found a problem where just cranking a hill-climber for a long time makes something great happen. Usually, if you're not there in a few hours, you're not getting anywhere.
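To make the local-maxima complaint concrete, here is a bare-bones hill climber on a deliberately bumpy one-dimensional function (the function and step size are arbitrary choices of mine for illustration): where it ends up depends entirely on where it starts, and cranking it longer doesn't help.

import math, random

# Bare-bones hill climbing on a bumpy 1-D landscape: many local maxima, one
# global peak near x ~ 0.3. Running longer does not help once you are stuck;
# only the starting point (or a better evaluation function) does.

def landscape(x):
    return math.sin(5 * x) - 0.1 * x * x

def hill_climb(x, step=0.01, iters=10_000):
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if landscape(candidate) > landscape(x):    # accept only improvements
            x = candidate
    return x

for start in (-4.0, -1.0, 2.0):
    x = hill_climb(start)
    print(f"start {start:+.1f} -> stuck at x = {x:+.3f}, value {landscape(x):.3f}")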
The classic approach of hammering everything into mathematical logic and proving theorems doesn't map well to the real world. Formalizing real-world problems is very hard, especially if you don't know the answer in the first place.
The model-less reactive-behavior stuff works fine for insects, but hits a wall as you try for more complex behavior. Compare Brooks' insect robots with his Cog project.
Natural language understanding is still lousy. In a narrow area, or with a big database, you can fake it (try Ask Jeeves [askjeeves.com]), but you're searching, not understanding.
Out of all the work on AI has come many useful engineering techniques. But strong AI looks further away than it did 30 years ago.
The few people still making real progress are mostly game developers. They need AI, or something like it, to run their worlds. That's worth watching.
Thoughts (Score:3, Interesting)
I wonder if a true AI would have autonomic processes like we have, otherwise you might get a split personality (processes? threads?
As for immediately wanting to survive the end of the universe, I wonder at Ian Watson's motivations if he thinks that's what an AI would be most concerned with. If, as Watson supposes, an AI consciously thinks as fast as it computes, the end of the universe is an ungodly long time away. I think it'd be more concerned with becoming mobile, developing long-term power supplies, weapons for self-defense, better sensory equipment, etc, and probably designing a new 'body' so it can think faster. An AI's awareness of its surroundings would also depend on its sensory equipment, and how much knowledge it has acquired. It may not even know the nature of the universe (rather unlikely, in fact), and thus may not be aware of what the universe is doing, or will do in the far-flung future.
Assigning motive to an intelligence, be it artificial or natural, would seem to be rather pointless. *I* am intelligent, and I have no desire to live longer than about another 40 years or so, mainly because of the state this body will be in by then, and I certainly don't feel the need to outlive the universe. Suicide bombers don't even feel the need to make it out of their twenties, for various political & religious reasons, so the motives of an AI would be impossible to figure out.
According to TV... (Score:3, Funny)
Re:ask the designers - (Score:2)
While Douglas Adams - of Hitchhiker fame - may not be the sci-fi writer who has studied this in most detail, he does in fact touch upon this very idea in his book 'Mostly Harmless [blackened.net]'. Instead of giving a robot a specific piece of programming on what to do in every conceivable circumstance, a simple chip (well, I think it'll be a darn complex chip, but still) determines whether or not a certain condition has been met. If it has, the robot is happy - if it hasn't, the robot tries to become happy.
Ford h
Re:My Guess? (Score:2)
Why live? Better to ask why not? You think you will die anyway. What reason is there to rush? Do you think something worthwhile will be achieved by getting there sooner?
Having no reason to live does not imply that you have some reason to die.
Re:My Guess? (Score:2)
A lot of people who start out believing in God, but then realise that he does not exist, make a certain sort of mistake (actually it also happens to people who are brought up in religious societies, even if they never believe in God). At first they accept the religious story about what makes life meaningful and worthwhile, and they believe in God as well. Later, even when they ditch the belief in God, they ke