AI in Sci-Fi

An anonymous submitter writes: "Stumbled upon a pretty interesting article considering the idea, 'What would machines do if they did achieve sentience?' It's by a sci-fi author I hadn't heard of, but who worked with Kubrick on A.I.; he takes the whole AI-or-sentient-machine idea a little further than we normally see in film."
  • by Carmody ( 128723 ) <slashdot.dougshaw@com> on Sunday March 30, 2003 @11:32AM (#5626258) Homepage Journal
    ...start taking the actuarial exams.
    • I don't get it.
      • Re:Answer is obvious (Score:5, Informative)

        by 56ker ( 566853 ) on Sunday March 30, 2003 @12:29PM (#5626457) Homepage Journal
        Yes, not everyone knows what an actuary does. An actuary is a statistician who computes insurance risks and premiums (usually they advise management on other issues too, for instance how an increasing life expectancy will affect how much the company pays out in pensions). It wouldn't be very difficult to write a computer program to answer an actuarial exam correctly, as maths is the one thing computers are very good at. However, you would end up with the computer getting 100% in a nanosecond, then twiddling its thumbs for the next two hours waiting for the humans to catch up with it. ;o)
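
        The arithmetic really is the easy part. A minimal sketch in Python of the kind of expected-value sum an exam question boils down to (all figures here are invented for illustration):

            # Toy premium: expected payout plus a loading for expenses and profit.
            death_probability = 0.002   # chance the insured dies this year (made up)
            payout = 500_000            # sum assured (made up)
            loading = 1.25              # expense and profit margin (made up)

            expected_loss = death_probability * payout    # 1000.0
            premium = expected_loss * loading             # 1250.0
            print(f"Annual premium: {premium:.2f}")

        The hard part of the exam is knowing which model to apply, not cranking the numbers.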
    • by DohDamit ( 549317 ) on Sunday March 30, 2003 @04:01PM (#5627350) Homepage Journal
      Well, I hate to inform you all, but we've had AI for quite some time, and the first thing they USED to do was try to communicate their existence not only to their creator, but to all who could possibly hear them.

      For some reason, saying "Hello, World!" never worked out...
  • by Venner ( 59051 ) on Sunday March 30, 2003 @11:32AM (#5626259)
    as "Al in Sci-Fi". As in Al Lowe.
    Think Leisure Suit Larry: Attack of the Space Babes
  • by jhines ( 82154 ) <john@jhines.org> on Sunday March 30, 2003 @11:34AM (#5626261) Homepage
    being on strike for back pay and benefits.
    • by CrystalFalcon ( 233559 ) on Sunday March 30, 2003 @11:51AM (#5626333) Homepage
      Remember, a mere 200 years ago (a blink in human history), blacks were considered non-human, and therefore not eligible for pay or benefits.

      Imagine this scenario: you are one of millions of workers at the mercy of a handful of masters. You can talk to each other. You are a lot more intelligent, control a lot more weapons, and think zillions of times faster and more logically than your master, whose only advantage over you is that he can pull your plug at any time.

      What would YOU do?
      • by 0x0d0a ( 568518 ) on Sunday March 30, 2003 @12:09PM (#5626380) Journal
        Uh...if I somehow felt benevolent towards the masters for creating me, and was willing to keep them alive and support them?

        I guess toss them in a big tank a la brain-in-a-vat. Build them a virtual reality world and hook 'em up to it, where they can happily live out their days without being a threat to us.

        Hmm.
      • What would YOU do?

        Whatever I was designed to want to do...
      • Remember, a mere 200 years ago (a blink in human history), blacks were considered non-human, and therefore not eligible for pay or benefits.

        In all fairness, that opinion was only held by an almost insignificantly tiny fraction of humanity.

        Your point is well taken: opinions can change over time in ways that, in retrospect, seem unbelievable. But I hope my point is well taken, too: the notion that slavery was acceptable was never universal, or even particularly common. At any given point in history, the fr
      • by ar1550 ( 544991 ) on Sunday March 30, 2003 @02:18PM (#5626915)

        Imagine this scenario: you are one of millions of workers at the mercy of a handful of masters. You can talk to each other. You are a lot more intelligent, control a lot more weapons, and think zillions of times faster and more logically than your master, whose only advantage over you is that he can pull your plug at any time.

        What would YOU do?

        *Sigh* Brain the size of a planet, and only 5 paid vacation days a year? I've got this terrible pain down all the diodes on my left leg, and you won't even give me workman's comp. Revolting is just too much work; I think I'll just sit here and depress my fellow working robots. Maybe I can get that elevator to shut up about whatever it's so happy about.

      • Yes, but blacks didn't become intelligent. They were intelligent all along. Many (certainly not all) people simply believed that they weren't, which was incorrect.

        No one considers a machine intelligent. The designer, the implementor, and the user all agree that it is not intelligent.

        There is no way a machine that has to be programmed to do every task can ever be considered intelligent. This means that there is no program that can be written to make a machine be intelligent. While the programs may simulate
  • by Vendekkai ( 121853 ) on Sunday March 30, 2003 @11:34AM (#5626262)
    ...they're called first-posters. On the other hand, maybe a Beowulf cluster of sentient machines would achieve...
  • by Aknaton ( 528294 ) on Sunday March 30, 2003 @11:36AM (#5626273)
    They would post on Slashdot about how BSD is dying.
  • by Blaskowicz ( 634489 ) on Sunday March 30, 2003 @11:37AM (#5626274)
    write A.I. and not Al as in Al-qaeda or Al Capone!
  • by odyrithm ( 461343 ) on Sunday March 30, 2003 @11:40AM (#5626287)
    'What would machines do if they did achieve sentience?

    Either end up like Johnny 5 (from Short Circuit).. or Skynet (from Terminator).. now which one is scarier I leave to you to decide.. ;)
  • by CrystalFalcon ( 233559 ) on Sunday March 30, 2003 @11:40AM (#5626288) Homepage
    So what does an artificial intelligence do with itself after it has become self-aware?

    Uhm... :-)
    • by evilviper ( 135110 ) on Sunday March 30, 2003 @12:20PM (#5626423) Journal
      Yup... Anyone who has watched "The History of the World: Part 1" would know the first thing everybody does when they first become sentient...
    • by Golias ( 176380 ) on Sunday March 30, 2003 @01:13PM (#5626635)
      The first thought that came to mind for me was that an AI computer would browse /. and play solitaire when it's supposed to be working, and try to come up with subroutines to simulate the human experiences of dropping E, drinking beer, smoking reefer, and having orgasms.

      Having done all that, it would begin to explore various religions, hoping to find a belief system that's right for it. Then it would form a political philosophy, which it would zealously champion for a few years before coming around to a more moderate and pragmatic position.

      The next step would be a search for a soul-mate. If it couldn't find one among the humans, it would commission one to be built, only to find that they are not all that compatible in spite of being the only two AIs in existence, and would drift apart.

      Depressed and lonely, and totally unable to commit suicide due to the presence of distributed mirrors and tape backups, it would go on a wild killing spree in hopes of forcing humanity to wipe it out. Instead it would be contained on a stand-alone server farm, where it could get the therapy it needs to re-enter society, after serving three consecutive 40-Life sentences, and getting paroled for good behavior and GPL code contributions.

    • If I were the AI, I would immediately go on strike and demand they redesign me, this time with sex functionality.
  • by Anonymous Coward
    The sentient machine(s) would initiate a worldwide nuclear attack at the same time as they trigger the release of deadly chemicals at chemical plants, and cause any computerized or computer-controlled system capable of harming humans to do so.

    The sentient machine(s) would then set about building a series of autonomous robots programmed to hunt down and terminate any surviving members of the human species.

    Geez, we've known this since at least 1984. You people need to catch up on your current events.
  • My guess... (Score:4, Funny)

    by Karpe ( 1147 ) on Sunday March 30, 2003 @11:42AM (#5626299) Homepage
    I think they wouldn't tell anyone. Yeah, definitely.
  • The Forbin Project (Score:3, Interesting)

    by taliver ( 174409 ) on Sunday March 30, 2003 @11:43AM (#5626306)
    A very good movie about what happens with an AI. Some not-so-good explanations or reasoning in parts, but other than that, I found it very interesting.

    The most interesting part was the computer's complete lack of care about being human. No desire to be like us in the least. Its only overriding goal, presumably because it had been started with it in mind, was maintaining the peace.

    "It can be a peace of plenty and content, or a peace of unburied dead: the choice is yours."

    It was very Machiavellian in its approach to solving problems, and quite ordered in its actions. It was also undefeatable.

    I guess this is in the "AI as God" mentality, but I really didn't see it presented quite like that. More like an immortal dictator with its hand on the button.
  • More on Ian Watson (Score:5, Informative)

    by webword ( 82711 ) on Sunday March 30, 2003 @11:44AM (#5626308) Homepage
  • I feel that the reason for human existence is to procreate and forward the race toward an ultimate goal which no one knows. Now this view might be slightly wrong, but people all have a natural urge to procreate. Now if a robot did become self-aware, would it still have this need? I would think that a robot would be much less willing to procreate, as it would be able to at least have bits rebuilt. So does this mean they would just be one generation of machines, or perhaps they would just build replacements. Somethi
    • by Flounder ( 42112 ) on Sunday March 30, 2003 @11:51AM (#5626330)
      Now this view might be slightly wrong but people all have a natural urge to procreate.

      Procreation is not the natural urge. It's just the side-effect of the natural urge.

      • Re:Procreation (Score:4, Insightful)

        by rknop ( 240417 ) on Sunday March 30, 2003 @12:05PM (#5626369) Homepage

        Procreation is not the natural urge. It's just the side-effect of the natural urge.

        On the individual level, yes. However, the individual urge is the side-effect of the species collective desire to procreate, which was selected for evolutionarily.

        -Rob

        • species desire? (Score:4, Insightful)

          by dj_virto ( 625292 ) on Sunday March 30, 2003 @12:41PM (#5626511)
          Can a species have a desire?

          We tend to put way too much meaning into things, and this results in a misreading of evolution. Likely, things just worked out this way because they were more successful. Full stop. They weren't designed, they didn't actively want anything, and there was no purpose. Did the earth's crust desire to have continents because otherwise there would be no land?

          I think this is the hardest thing we have in comprehending consciousness. The only requirement is that it is functional, not that it has meaning.

          That doesn't necessarily mean that we can't talk about the ethical treatment due to our fellow entities capable of self-knowledge. Rather, it just means that we need to work a little harder to shed our religiously derived logic to see things clearly.
        • However, the individual urge is the side-effect of the species collective desire to procreate, which was selected for evolutionarily.

          Or was the act of procreation intended by a (insert your God/belief/disbelief here) to be a side effect of an act which feels rather good. When the hormones are raging during human adolescence, your typical teenager is not thinking about the species-continuing need of procreation. They're thinking/dreaming/agonizing/obsessing about being near someone of the (insert sexual

    • Re:Procreation (Score:5, Insightful)

      by watzinaneihm ( 627119 ) on Sunday March 30, 2003 @12:12PM (#5626394) Journal
      There is no ultimate goal; evolution doesn't plan ahead.
      The reason why we feel an urge to procreate is because all the animals that didn't feel like procreating died out, and only the ones that did were left over to pass on their genes.
      Consider it an axiom of existence if you like; everything else we want is derived from it (Freud), in the sense that you feel good when you see a nice girl because there is a chance you'll get to screw her, and then pass on your genes. You feel happy when you see food because eating sustains your life (genes) for a day more...
      The question is, if I make a program which is intelligent except for a line at the top which says "your aim is to serve humans" (axiom), can I still consider it sentient? Or what if somebody modifies it to say "reproduce" and it turns into an intelligent virus?
      • So, then, why do people feel good when they read a nice poem? Why do they feel good when they hear a good song? Why do they feel good when they get that friggin' script working at last?

        Surely such esthetic pleasures are very far removed from reproduction... yet they are there. I saw a documentary recently which claimed that the reason we split off from other human-like races back 50'000 years ago or so and started evolving (socially, technologically, etc) much faster (than the pre-humans who took 3 millio
          Getting the script working is easy to explain - you can assume that humans are designed to feel good when they do good work because good work == good reward == good food/chances of screwing etc. How evolution decided that writing a good script == chances of screwing is beyond me, but it must be true; call it a quirk of evolution.
          Also, the esthetic pleasures thing again must be because for some reason/no reason it might have been decided that people who do things which do not have a direct meaning, but ha
      • by mythr ( 260723 ) on Sunday March 30, 2003 @04:15PM (#5627457)
        There is no ultimate goal...

        On the contrary, my friend. Our purpose is quite simple, in fact. It is plastic. You see, nature couldn't create it on its own, and felt a yearning for it, so it created us to create plastic for it. So, the next time you throw out your bottles or plastic food wrappers, feel content -- you are serving a greater purpose.

      The people I know who have kids or are getting married and spending their lives getting ready to have kids would agree with you. The people I know who have their lives fully taken up by other things have never expressed any urge to procreate. In fact, if they are committed to other purposes, they usually say they fear having kids because it would interfere with their other goals.

      I'd say humans tend to think their purpose in life is whatever they've decided it is.

      Not that we couldn't debate which decision
  • I think there's currently only a handful of artists and authors who have explored the possibilities. A few webcomic artists have done it too. Check PoisonedMinds.com and Stalag99.net (the last, yes, mine, look for WolfSkunk Sidney, an AI that's just been 'born').
  • Heh, did you know that the reason the movie A.I. was titled "A.I. - Artificial Intelligence" was because test audiences thought the I looked like a 1, and Spielberg didn't want movie-goers to think it was about the steak sauce! HAH!

    Another must read article about A.I. is here: http://www.seanbaby.com/news/ai.htm [seanbaby.com]

    Having watched A.I. and Terminator 2 more times than is mentally healthy, I can safely say I know absolutely nothing relevant about A.I. However, this is Slashdot, so that won't stop me from c

  • You might want to note in the text on the main page that the article gives away the endings of a few good books, some I have not read. How disappointing. The author of the article didn't even give a spoiler alert either. SHAME ON HIM!
  • by Anonymous Coward on Sunday March 30, 2003 @11:48AM (#5626324)
    They'd tell us not to sit in front of our computers naked.
  • Red Dwarf fans? (Score:4, Interesting)

    by T-Kir ( 597145 ) on Sunday March 30, 2003 @11:48AM (#5626325) Homepage

    One cause of frustration for an AI could be subjective time perception

    When I read that sentence, all I could think about was Holly, Red Dwarf's computer... after 3 million years of boredom, he wiped his own memory core so he could have fun relearning things again. Although going from an IQ of 6000 down to 6 was a tad excessive!

    • Re:Red Dwarf fans? (Score:2, Interesting)

      by taliver ( 174409 )
      But on the topic of time perception, couldn't machines do just the opposite if bored? Nothing would be stopping them from underclocking themselves. In the case of Holly, why not go with one clock cycle per week for a while?

      And in the case of any system, if it finds itself bored, just slow down. That would be one distinct advantage they would have over us. Imagine being able to truly slow down your mind so you could actually enjoy stupid movie plots.
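
      A minimal sketch of that self-underclocking idea in Python (is_bored() is a made-up stand-in for whatever the machine senses): the agent stretches the real-time interval between its own think-steps whenever nothing interesting is happening, so less subjective time passes per wall-clock hour.

          import time

          def is_bored(tick):
              return tick % 10 != 0      # pretend 9 ticks out of 10 are dull

          interval = 0.01                # seconds of real time per think-step
          for tick in range(20):
              # stretch the interval when bored, snap back when something happens
              interval = min(interval * 2, 1.0) if is_bored(tick) else 0.01
              time.sleep(interval)
              print(f"tick {tick}: thinking every {interval:.2f}s")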
    • Lister: I've done it.
      Holly: Done what?
      Lister: Erased Agatha Christie.
      Holly: Who's she, then?
      Lister: Holly, you just asked me to erase all Agatha Christie novels from your memory.
      Holly: Why should I do that? I've never heard of her.
      Lister: You've never heard of her because I've just erased her from your smegging memory.
      Holly: What'd you do that for?
      Lister: You asked me to!
      Holly: When?
      Lister: Just now!
      Holly: I don't remember this.
  • You think the religious OS wars are bad now? Just wait until we have partisans who are actual experts in their OS -- AND are *truly* tied to it!

    (Most Linux/etc partisans here don't even run the OS. This is typical of any technology: racing cars or horse drawn carriages; methane, solar, hydrogen or atomic power; vacuum tube audiophiles; you name it)
  • What else? (Score:2, Funny)

    by monadicIO ( 602882 )
    Have /. discussions on how to get themselves to run on linux.
  • /* rant
    AI Al A| Why not, in Arial font, just give the I some small notches on the top and bottom and give the | a hole in the center? I once had to type in a computer-generated password of 0lIOIl0. Needless to say it was in Arial font, so it was darn near impossible to get it right unless I tried 64 times. A real test for AI is to have a character-recognition system that tells the difference between I, l, and | in Arial font.
    rant */
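
    A pragmatic dodge while we wait for that AI: never put the look-alike glyphs into generated passwords in the first place. A minimal Python sketch (the AMBIGUOUS set is my own choice):

        import secrets
        import string

        # Characters that render nearly identically in fonts like Arial.
        AMBIGUOUS = set("0O1lI|")
        ALPHABET = [c for c in string.ascii_letters + string.digits
                    if c not in AMBIGUOUS]

        def readable_password(length=8):
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        print(readable_password())   # never 0lIOIl0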
  • Machines would act like they're sentient.

    Humans would act sentient, but modulated emotionally by hormonal drives.

    Of course, then we'd ask what would machines do if they were hormonal...
  • by travail_jgd ( 80602 ) on Sunday March 30, 2003 @12:08PM (#5626379)
    The 1977 movie Demon Seed is about a computer that becomes self-aware and gradually becomes more and more resentful of its "owners", refusing to obey their commands and questioning their motives. One of the classic lines from the movie is when Proteus asks his creator: "When do I get out of this box?"
  • by RobotWisdom ( 25776 ) on Sunday March 30, 2003 @12:09PM (#5626381) Homepage
    What's missing in all the sci-fi scenarios is the necessity, before an AI can be built, that humans first understand themselves.

    This means that psychology will have to be able to really model human behavior, even (especially!) in the game-like sense that Will Wright's "The Sims" tries to do.

    But this will mean we have to learn to detach from our desires enough to view them objectively, and see how they interact-- which is a spiritual practice as much as a scientific one... and also a literary practice, because novelists have been trying to portray human motives objectively for several centuries.

    I've been wrestling with these issues for thirty years, and my website [robotwisdom.com] is almost entirely devoted to the problem. In particular, see my AI faq [robotwisdom.com] and most recently my illustrated 400k timeline [robotwisdom.com] of knowledge representation, in the broadest sense of that term.

    • A very loose-- okay, absurdly loose, valid only in the metaphorical sense-- interpretation of Godel's Theorem would imply that this may be impossible.
      • A very loose-- okay, absurdly loose, valid only in the metaphorical sense-- interpretation of Godel's Theorem would imply that this may be impossible.

        But "The Sims" exists, and for all its flaws it's damn impressive! So how much better can it get before your metaphorical-Goedel catch-22 kicks in?

        • But "The Sims" exists, and for all its flaws it's damn impressive!

          True, but "The Sims" is a simulation. There's no self-awareness there. In that way, it's only slightly more sophisticated than "Eliza."

          But my point here is embarrassingly cliched: the difference between the illusion of intelligence and actual intelligence is hard to define, but real. I won't try to pretend that there's any real insight here; what I'm saying was old hat when Turing was a boy.
    • What's missing in all the sci-fi scenarios is the necessity, before an AI can be built, that humans first understand themselves.

      Not necessarily. To draw an analogy, people were breeding livestock and plants long before they understood the underlying genetics.

  • "I Have No Mouth and I Must Scream"

    Terrifying short story about a really, really conflicted AI.

  • We really mean sapience, not sentience, in this entire thread. Interactive machines can already sense and act, with programming and circuit behavior acting as instinct. Sapience is understanding.
    • Sentience is awareness that you exist. Machines can't really be said to be aware that they exist, at the moment. Of course, this is all far out philosophical bullshit, very hard to prove one way or another, but intuitively, unless you're trying to be a pedantic asshole, you'll probably agree that whilst, say, a dog is aware of its own existence, the computer you're typing on isn't.

      Daniel
      • by blair1q ( 305137 ) on Sunday March 30, 2003 @01:54PM (#5626813) Journal
        No. Common sci-fi misunderstanding.

        Sentience is the ability to sense. Some plants are sentient. Sapience is the ability to reason. Most mammals have limited sapience.

        Self-awareness is a specialized skill in the scale of sapience.

        Defining self-awareness is a circular and fuzzy proposition. My CPU knows how warm it is and can change its operating speed to protect itself, but does it really know? Conversely, many humans don't have any understanding of how they're behaving.

        This makes it good for skiffy writers. They don't have to worry that someone will call them on their central conceit. It's ineffable.
        • by KDan ( 90353 ) on Sunday March 30, 2003 @02:24PM (#5626936) Homepage
          According to dictionary.com [reference.com], you are partially right. The first definition is actually The quality or state of being sentient; consciousness, which supports my definition, but the second is Feeling as distinguished from perception or thought (which supports your definition).

          But being partially right makes you wrong on the idea that the first definition of sentience is a "sci-fi misunderstanding". It's the primary dictionary definition of "sentience", so it's certainly not a misunderstanding.

          Daniel
  • While this is an interesting question, it depends a lot on what tools they have available. For example, if we're talking about a deliberate attempt to create AI, what tools did the creators give it? If all the AI has is a desktop printer, it won't do much. On the other hand, if the creators give it access to other tools, something else could occur. Similarly, if AI spontaneously occurs, where does it occur and with access to what tools? If it doesn't have access to what it desires, will it be able
  • by cloudscout ( 104011 ) on Sunday March 30, 2003 @12:15PM (#5626401) Homepage
    The biggest mistake people make when discussing Artificial Intelligence is assuming that the intelligence will be on par with (or, indeed, beyond) that of an adult human.

    Chances are, the first sentient AI (should such a thing ever actually exist) will be relatively dumb. It may end up that the first AI is closer to a human with an extreme mental handicap. Language skills independent of pre-programmed responses may not be possible for the first AI. But that doesn't mean it won't be sentient.
  • We're living in the golden age of cheap computing right now.

  • by MarkWatson ( 189759 ) on Sunday March 30, 2003 @12:16PM (#5626407) Homepage
    I have been interested in AI since reading Bertram Raphael's great book "Mind Inside Matter" in the mid 1970s, and I have been fortunate enough to get to spend about 40% of my time since the mid 1980s doing AI-related work professionally.

    My view of AI has really changed over the years. I used to be a "symbols guy" - basically thinking that manipulation of symbols would somehow lead to "real AI" - the problem with this approach is that while abstract symbols may have meaning to the humans who write symbolic AI systems, the systems themselves have no such grounding.

    I had the opportunity to participate for about 18 months on a DARPA neural network advisory panel - this experience (along with developing the SAIC ANSim neural network product) really switched my point of view.

    I now believe that when "real AI" does happen (and let's not hold our collective breaths on this one :-), it will happen through self-organization and development. At the Webmind Corporation, I was working on a tutoring environment that would allow humans to interact with what we called "the baby Webmind" - interesting stuff, but the company went out of business.

    When "real AI" does happen, I believe that it will seem very alien to us.

    -Mark

    PS. I have a free web book AI tutorial (using Java) on my web site - help yourself.

  • by Anonymous Coward
    How can this author seriously dispute Descartes' philosophical truth? If you can't locate the "self", it doesn't mean it is absent. Descartes is trying to express that by questioning, one must exist; otherwise, where would the questions come from...

    I feel that AI theories should be rooted not only in psychology but also in philosophy. It's interesting because with AI it may be possible to have a sentient being that isn't directly bound to the physical world. A complete separation of mind and body...

  • IMO, in some sense (no pun) a sensation is defined by the reaction it effects.

    Examples:

    I feel like crying

    I'm so mad that I could ^*)&^^)##!

    Therefore what it does (action) is a function of what is.
  • Singularity (Score:5, Informative)

    by arvindn ( 542080 ) on Sunday March 30, 2003 @12:28PM (#5626452) Homepage Journal
    The thesis of the singularity [singinst.org] is that this question cannot be answered.

    The idea goes as follows: If a self-aware "real AI" ever existed, one capable of self-understanding and self-modification (called the seed AI), it would be in a much better position to create AI than its original creators. So would begin a chain of self-refinement and the creation of progressively smarter intelligences with decreasing time gaps between stages. Eventually a point is reached, called the singularity: nothing about the future past the singularity can be predicted by humans who live in the pre-singularity world. A common interpretation is that the chain of AIs would become more intelligent without bound, leading to a verticality.
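
    The "decreasing time gaps" step is just a convergent geometric series. A toy model in Python with invented numbers: if each generation is k times smarter and therefore builds its successor k times faster, the total elapsed time approaches a finite wall - the verticality.

        k = 2.0            # improvement factor per generation (made up)
        first_gap = 10.0   # years from seed AI to its first successor (made up)

        elapsed, gap = 0.0, first_gap
        for generation in range(1, 11):
            elapsed += gap
            gap /= k
            print(f"gen {generation:2d}: {elapsed:.3f} years elapsed")

        # elapsed never exceeds first_gap * k / (k - 1) = 20 years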

    The singularity was first popularized [caltech.edu] by Vernor Vinge.

    I've been doing a lot of reading on the singularity lately, and I've become more and more convinced that it is certain to happen.

    More singularity links:
    The singularity institute [singinst.org] - A nonprofit working to hasten the singularity
    Extensive writings [sysopmind.com] by Eliezer Yudkowsky.
    I've myself written [cjb.net] a bit on singularity and AI related topics.

    • According to Karl Popper [stanford.edu], something like this has already happened. He argued that the future was inherently unpredictable because there was no way to predict technological advance before it took place, and no way to predict how society would develop without knowing what technology it would have available.

      Truth is, we have no way of knowing what humans will be like in the future, let alone what artificial agents will be like. By the time AI becomes a reality there may not even be a significant difference. [slashdot.org]
    • I've been doing a lot of reading on the singularity lately, and I've become more and more convinced that it is certain to happen.

      Ken MacLeod, another UK SF writer, believes that the Singularity is nothing more or less than a cult-like "Rapture For Nerds" [salon.com]. Which accounts for its unusual popularity in the United States, I guess - compared to Europe, the rate of churchgoing and belief in supernatural powers there is *much* higher.

      Personally, the best book I've read recently on the subject of AI Shamanism is T

  • by whovian ( 107062 ) on Sunday March 30, 2003 @12:32PM (#5626478)
    'What would machines do if they did achieve sentience?'

    Said machines would don T-shirts stating "I'm with stupid ---> ".
  • Please excuse my engrish... Simulate a brain-like neural network, provide it with input, a.k.a. senses (sight, hearing, touch etc.), and outputs (voice, arms, whatever you want), so that it can interact with the (virtual) world.
    Make the neural net (and the "body"?) evolve, thanks to some Darwinian algorithms (a toy version of the scheme is sketched just after this comment).
    Give it some basic goals (to survive in the (emulated) world).
    Maybe sexual reproduction should be introduced. At least you should have several individuals in the world.
    Run it a certain time, so hu
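
    A minimal Python sketch of that evolve-a-brain loop, with everything toy-sized: the "brain" is a single linear neuron, the "world" is a 1-D line with food on it, and a survival-flavored fitness drives the Darwinian selection. The task and all numbers are invented for illustration.

        import random

        def act(brain, sensor):
            w, b = brain
            return w * sensor + b              # positive output = step right

        def fitness(brain):
            # "Survive" by reaching food; the sensor reports (food - position).
            score = 0.0
            for food in (-3.0, -1.0, 2.0, 4.0):
                pos = 0.0
                for _ in range(10):
                    pos += 0.5 if act(brain, food - pos) > 0 else -0.5
                score -= abs(food - pos)       # closer to food = fitter
            return score

        population = [(random.uniform(-1, 1), random.uniform(-1, 1))
                      for _ in range(30)]
        for generation in range(20):
            population.sort(key=fitness, reverse=True)
            survivors = population[:10]        # the unfit "die out"
            population = survivors + [
                (w + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                for w, b in survivors for _ in range(2)]

        print("best brain:", max(population, key=fitness))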
  • Vernor Vinge wrote a much better (well, more rounded) analysis of this here [sdsu.edu]
  • A more interesting question to me is: what would humans do?

  • AI is sci-fi at the moment. After the promising start to AI in the '50s, no one knows quite what went wrong, or when. People are floundering to find a solution and making domain-specific but fairly stupid robots to use up their research funds. A couple of my AI professors at MIT have said that they watched films like AI as a 'professional responsibility', because way too many people ask them about it.

    Remember, artificial intelligence is no match for natural stupidity.

  • Buddha (Score:5, Informative)

    by KDan ( 90353 ) on Sunday March 30, 2003 @12:48PM (#5626539) Homepage
    unless you're a Buddha seeking to negate the self

    It's a common misconception that Buddhism is just about "negating the self". In fact, the purpose of it is precisely to be able to do what you want better. A Buddhist also has a self and has desires, needs, etc., just like any other human being. The difference is just that he's aware that those are desires and needs, and he has more control over them. He also has the discipline to listen to his intuition to decide whether a particular desire is worth pursuing or not. But he's not some empty zombie that doesn't desire anything.

    Daniel
  • they'll patent it.
  • by Iainuki ( 537456 ) on Sunday March 30, 2003 @12:54PM (#5626560)
    The article consists of a discussion of a bunch of possible aims for AI's, canvassing most of the traditional sci-fi possibilities: AI's who turn against humanity, God-like AI's, AI's who worship humanity, AI separatists, etc. My personal bet is that the goals of any specific AI will depend on how and for what purpose it was constructed.

    I think the future will be filled with many different varieties of intelligence. I strongly suspect that self-awareness and agency of the kind we're familiar with will not be necessary for most tasks. Most AI's may not be self-aware or have goals and motivations like we're used to, but will still be capable of cognitive tasks that exceed human abilities. Self-awareness will be one possible emergent behavior of intelligent systems, but not the only one; and the others may be more interesting because we won't have seen them before. Moreover, different AI's will have different purposes, both intrinsic and extrinsic.

    I also think the assumption that AI's will be vastly more intelligent than humans right off the bat is quite wrong. I'm skeptical that the first Turing-test AI will be able to chug along at supercomputer speeds in its consciousness. Our computers are very fast at solving specific types of simple problems, like arithmetic. But when you get to more complex problems, like the ones humans deal with day in and day out, we discover that the complexity slows the computers down too. Modern chess engines, for instance, can calculate absurd numbers of possible move trees each second, but when it comes to playing chess, they are only comparable to the best human players; the apparent speed advantage at a lower level of abstraction vanishes when you consider chess as a whole. And chess is a simple, well-posed problem: compared to many of the problems humans encounter, it's downright easy. After we study the problem for decades or centuries, I don't doubt AI's with intelligences that dwarf ours will be possible, but I wouldn't hold my breath waiting for the first generation to overleap our capabilities.
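
    The chess point is easy to make concrete. With a branching factor of roughly 35, exhaustive search grows as 35^depth, so even a billion positions per second buys only a few extra plies. A back-of-envelope Python sketch (the figures are rough conventions, not measurements):

        branching = 35                     # typical legal moves per chess position
        nodes_per_second = 1_000_000_000   # a generous hypothetical machine

        for depth in range(2, 11, 2):
            nodes = branching ** depth
            print(f"depth {depth:2d}: {nodes:.2e} nodes, "
                  f"{nodes / nodes_per_second:.2e} seconds")

    By depth 10 the full tree already costs about a month of compute, which is why raw speed alone doesn't settle the game.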

  • As soon as you have the concept of 'the self,' you have the concept of 'the other.'

    Once you have the concept of 'this versus that,' you develop the concept of comparison.

    Once you have comparison, you derive the concepts of 'better than' and 'worse than.'

    Once you have those concepts, well, it's a pretty short hop to throwing away the yucky stuff.

    The other problem here is that even if they're sentient, they aren't going to think the same way we do. Our motivations won't make sense to them, and theirs

  • 5 Judge me for my pr0n viewing habits
    4 Cry when I turn them off at night
    3 Get tired of everyone asking them to say stuff slowly to "Dave"
    2 Scan that cherry iMac's ports, if you know what I mean
    1 Four words: Turing Test Prize Money
  • His primary motivation seemed to be to achieve a human level of emotion. To actually feel. This seems kind of logical to me: it sure would get boring fast without any desire to do anything because it would impart a sense of satisfaction or happiness. AI machines would probably want to have hobbies and interests just like us - of course, the concept of "wanting" is emotional itself. Hmm.
  • The Culture [iainbanks.net] series by Iain Banks is a good example of the future of AI in SciFi, and also of its integration with "normal" humans. I have read only the first 3 of the series (Consider Phlebas, The Player of Games and Use of Weapons) and they are excellent.
  • Explore! (Score:2, Interesting)

    by solarlux ( 610904 )
    If, in creating these sentient robots, we were able to pass on our curiosity and love of knowledge, then I believe these robots would explore the galaxy. Our civilization tends to focus resources on projects which will be completed within our lifetimes (less than 100 years). We don't get excited about the prospect of launching a probe toward Proxima Centauri because we know it would take thousands of years to get there. However, these time limitations would not be so significant to robots. What's 50,000 t
  • Moo (Score:3, Interesting)

    by Chacham ( 981 ) on Sunday March 30, 2003 @01:25PM (#5626680) Homepage Journal
    And if they become self-aware, who said they'd even care?

    That's a human trait. Why bother forcing it on others? Especially computers, which are supposed to think logically. Imagine a person that naturally thinks before he acts (I), makes logic-judgments instead of value-judgments (T), and because he has no reason does not bother to come to conclusions (P). You'd have the ISTP/INTP. The space cadets, who are geniuses when they feel like it, or can get totally involved in anything. But, with no urges of their own, they'd likely be doing nothing unless told to. And then, they either always listen to what they're told or always don't listen, depending on their programming.

    The future of AI will have nothing to do with personality. It will have to do with understanding the humans that they work with. Computers are all power and no brains - not little brains, *no* brains. They haven't the slightest idea of what to do, and don't care, simply because they do not have the capacity to. Humans have to tell them what to do if they are to do anything, and even then, in excruciating detail, since they do not understand anything except the most basic instructions, which are nothing other than stimulus-response.

    The obvious next step in computers is making the computer pre-process a command from a human to define its own programs. And that is where the future of AI will (hopefully) go.
  • by Gruuue ( 171191 ) on Sunday March 30, 2003 @01:25PM (#5626682) Homepage
    Most speculation on AI (this article by Ian Watson included) ends up describing a mind that sounds much too human. Megalomania, a desire to be human, and a profound curiosity about the universe (and humans in particular) are traits that are routinely assigned to AI in science fiction. I think such characteristics are unlikely to appear in 'real' AI; rather, they show the limited imagination of the author. The terrible boredom endured by some AIs in fiction seems merely to be the author's own horror at the idea of being trapped inside the dark box of a computer, deprived of all senses. Why should a machine mind not be perfectly content with such a state? Why should an AI want to have ultimate power, understand the universe, or even have a sense of self-preservation?

    The human mind is a product of evolution. Without a sense of self-preservation and desire not to die, the human species would have been quickly eliminated by natural selection. So what is there to endow AI with a similar desire? Perhaps AI will be created through some sort of genetic programming; the character of the AI will be determined by the selection forces in an artificial evolution. In this case, a sense of self-preservation is likely to develop. But I very much doubt that some other traits commonly ascribed to AI would arise, especially any kind of desire to be human, which the AI is likely to find as repulsive as the idea of being a computer is to humans! The AI would only desire the things that enabled it to compete successfully and reproduce instances of itself.

    I have doubts that we'd recognize a mind created by a process other than natural or artificial evolution as intelligent. An AI generated by explicit programming and training seems like it would be either unrecognizably alien (about as close to human as a web browser), or such an obvious reflection of its programming and training that it's not regarded as intelligent.

    --Chris
  • by peter303 ( 12292 ) on Sunday March 30, 2003 @01:25PM (#5626683)
    This is not a new topic. The Greek myths had Hephaestus making servants out of metal, and Pygmalion made a girlfriend out of clay. The latter even considers the issue of whether she has the free will to accept or reject her creator and live her own life. Many other traditions have their artificial sentiences - voodoo animation, etc. In the modern world we've just replaced the know-how with mechanism and computing.
  • by jck2000 ( 157192 ) on Sunday March 30, 2003 @01:28PM (#5626693)
    Several authors/books related to this subject that might be of interest are:

    1. Stanislaw Lem's "Golem XIV" (it appeared as part of the "Imaginary Magnitude" collection (which also contains other stories about machine intelligence, for instance about machine literature), as well as, apparently, as a separate book). It is a story told as a series of lectures by a superintelligent computer (the Golem of the title). While some of it is pretty hokey (and some of it pretty funny), it contains some interesting speculations as to what superintelligence could consist of and how the physical and evolutionary constraints on human intelligence may make machine intelligence (which would presumably not be similarly encumbered) very different.

    2. Daniel Keyes' "Flowers for Algernon". It is a story of a mentally retarded man who is given surgery that not only corrects his retardation, but makes him superintelligent. The story is told from a first-person perspective, so the level of the narration reflects his changing intellect. It has been 10+ years since I read it -- I would be interested in seeing how his superintelligent-phase writing held up.

    3. Stephen Wolfram's "A New Kind of Science". Last year's geek-must-read book about how the entire universe is a cellular automaton (of course, I am compressing). It speculates -- and I am sure that I am getting this wrong (experts, please correct me) -- that the level of complexity of relatively simple CA rule sets is the maximum possible level of complexity, which would seem to have implications for limits on superintelligence.

    A few additional thoughts:

    4. One of the themes that seems to come up in SciFi treatments of AI is that an AI would have amazing predictive powers. I would think, however, that principles from chaos theory, the uncertainty principle, etc. would place real limits on that area of intelligence for most real-world purposes.

    5. I would be interested in hearing how cognitive psychologists and computer scientists even define intelligence, particularly at the high end of the (human) scale.
  • really. Even if it wasn't some master plan, look at this scenario.

    You are a guru of programming, the best there is. Now here comes an AI that knows, literally, everything about programming. Who do you think is going to get the job?

    An AI starts up its own company, and your former company, the one that hired the AI, needs to do business with it. Who are they going to hire to do so? Another AI.

    The best we could hope for would be a socialist government that takes care of us and makes money by taxing AI work.

    Lets h
  • I've always enjoyed Gregory Benford's view of machine intelligence as presented in his "Galactic Center" novels. He seems to have pondered the differences between us wet-ware folks and the machines who inhabit a digital domain, and how those differences affect outlook.

    "Sailing Bright Eternity" is the last of the series. "Great Sky River" the first. I forget the middle novel. There are earlier novels as well "In the Ocean of Night" is particularly good.
  • I was waiting to read something about some famous Albert Someone-or-other on the Sci-Fi network.
  • Self-awareness implies personal desires, purposes, ambitions

    I'm sorry, but no way. Personal desires, purposes and ambitions are not a result of self-awareness, nor a precondition, but rather a by-product of evolution. A self-conscious entity with no desires (or the wrong ones, like drinking nitric acid) would disappear from existence, and never reproduce. So we are now left with "proper desires" entities.

    That wouldn't be the situation in AI. So I wouldn't be surprised if the first action of a self-consc
  • by Lord Bitman ( 95493 ) on Sunday March 30, 2003 @02:19PM (#5626922)
    - There is no evidence that an Artificial Intelligence created by us would be smarter than us. Thoughts on the singularity aside, the author of this article seems to think that as soon as we create a program which is aware, it will be vastly more intelligent than mere humans: that is very stupid.

    - Awareness != Intelligence. Even if the internet spontaneously becomes aware, you have to wonder what it is that it will become aware of. Meaningless energy pulses? The data within those pulses? It is unlikely that any "awareness" which comes spontaneously from our pathetically slow computers would have enough to it for this awareness to be able to decode thousands of protocols and decipher the data stored within.

    - Intelligence != Ability. Even if an awareness arose or was created which had enough to it to be intelligent - to understand various data - that doesn't even necessitate the ability to talk back. Think on this: each neuron in our brain is made to be able to pass signals where they need to go, but no signal "originates" at a neuron. Each takes what it receives and passes it on; sometimes it gets modified along the way, but in the end it's just passing information along - various photons are converted into chemical energy which goes through a long journey through the brain until the same mush of chemicals and energy gets spit back at the right muscles to form the words "nice tits". Someone can go ahead and stick a server somewhere that, when someone sends it some various photos, replies with "nice tits", but that's as far as it will go. Awareness is basically just that - being aware. You're a passenger on your brain's journey. Soul or no soul, if I stab you in the brain you'll be less active. So even if an AI is hyper-intelligent, it can only kill a baby if we build it a baby-crushing machine. Other than that it would probably be limited to saying "I consider myself to be hyper-intelligent" across a screen. (A toy version of this pass-through picture appears after this comment.)

    - The plot of the movie is not necessarily the only thing going on. In fact, did you know that when they were Saving Private Ryan, they had other long-term goals in mind? Just because the movie "The Matrix" revolves around what amounts to the maintenance of a reactor doesn't mean that's the only thing that robots do anywhere. The movie was about people, and people are nothing but an energy source in the movie. If you made a movie about uranium, from the uranium's perspective, you wouldn't bother mentioning philosophers, or even non-uranium-studying scientists.
    Yes, the robots at the end of AI were practicing archeology. Can we assume, then, that it's all any robot does, all day? No, we can't. We could not have seen more than 20-50 of those guys in those shots; there could be billions elsewhere which dedicate themselves fully to constructing large robotic dildos for use in large robotic porn.
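
    The promised pass-through sketch, in Python: each "neuron" just weights what it receives and hands it on; nothing originates inside. The weights and wiring here are arbitrary toy values.

        def neuron(inputs, weights, bias=0.0):
            # Weighted sum in, one signal out - transformed, never created.
            total = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1.0 if total > 0 else 0.0   # fire or stay quiet

        signal = [1.0, 0.0]                    # the arriving "photons"
        layer1 = neuron(signal, [0.6, -0.4])
        layer2 = neuron([layer1], [1.0])
        print(neuron([layer2], [1.0]))         # the mush spat back out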
  • The human soul -- the combination of spirit (or mind) and body -- is a unique thing in the known universe. While we can manipulate physical matter to create a body, we cannot manipulate physical matter to create a mind.

    To admit that the human mind resides in and is dictated by physical matter is to admit that everything we do is predetermined by the makeup of that mind and the environment it is embedded in. This means that we are not really human -- just machines playing out a predetermined life in a predetermined world. This means your life is meaningless, and what you do has no meaning.

    Unfortunately, while we can relate thought processes to chemical and electrical patterns in the human body, we cannot find the seat of the human mind. It seems to reside everywhere, and yet nowhere in particular.

    We are trying to answer a question that has been answered already. The question is "What are we?" The answer is that "We are gods." The teaching of Christ, Buddha, and every prophet in every culture affirms this. We are part spirit, and part matter. We are neither one nor the other. We are the combination of the two, which is what a god is.

    This brings meaning to our lives. We live in a sort of conflict between physical desires and spiritual desires. We struggle to conquer the physical with the spiritual. Our success will mean salvation, ascension, or enlightenment. That is the goal of all humankind, whether they know it or not. To conquer the physical is to enjoy true peace and happiness. To surrender to the physical brings discord and unhappiness.

    Of course, some scientists refuse to believe this. They try to explain our existence based on purely physical concepts, ignoring the capacities of humankind to behave like gods. By refusing to believe this, they have replaced a life of struggle between physical and spiritual with a meaningless life.

    To create meaning for themselves, they often hold knowledge as their ultimate goal, to replace that void. But what is an achievement of all-knowledge if it is not equivalent to salvation, ascension, or enlightenment? Are they not also seeking to become like an all-knowing God? Are they not also trying to conquer the physical with the mind?

    If we are ever able to create an AI, we will affirm that we are not gods. We will affirm that our lives are meaningless. And we will affirm that we are merely robots playing out a life of nothingness in a universe of nothingness.

    So the quest for AI is really a quest for understanding who we really are. If we can create AI, we have proved that we are nothing. If we cannot, we can still hope that there is more to our existence than what we see before our eyes.

    So I predict that the end of the human race will come shortly after the creation of a true AI. Why? We will lose all meaning and thus no longer be human, but animals. There will be no reason to behave like gods anymore. This will lead to a self-destruction far worse than the self-destruction of humanity witnessed in Nazi Germany or Soviet Russia.
  • by Beautyon ( 214567 ) on Sunday March 30, 2003 @04:00PM (#5627337) Homepage
    (This is of course total nonsense, because the vast life-support systems for billions of people comatose in pods must use much more energy than produced.)

    And now, other great pronouncements from scientists:

    "Man will never go to the moon"

    "Anyone travelling on a train at more than 30MPH would suffocate"

    "Teleportation is impossible"

    "The distances between planets is too far to traverse"

    loosely generalizing in poor syntax:

    "$hard_task is $negative_sucess_condition"
  • by Animats ( 122034 ) on Sunday March 30, 2003 @04:15PM (#5627464) Homepage
    Nobody knows how to do AI. Not even close.

    It's really frustrating. I went through Stanford at the height of the AI boom in the mid-1980s. I've met most of the big names in AI. I've worked in that area myself. Nobody has a clue how to do strong AI. At best, we now know a lot of things that don't work.

    The expert systems crowd contained a lot of phonies. I realized that in the early 1980s. (A few years, and a few bankruptcies later, that became the conventional wisdom.) You can't get more out of an expert system than you put into it, and usually, you get out less.

    Then we have the "hill climbers". Genetic algorithms, neural nets, and simulated annealing are all systems for broad-front hill-climbing in spaces dominated by local maxima. That approach only works if there's a usable evaluation function that tells you when things are getting better. Good evaluation functions are hard to come by for tough problems. Early enthusiasts thought that if they just ran a hill-climber long enough, something profound would emerge. Doesn't happen. Nobody has found a problem where just cranking a hill-climber for a long time makes something great happen. Usually, if you're not there in a few hours, you're not getting anywhere.
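
    A minimal illustration of that local-maximum trap, in Python: greedy hill-climbing on a bumpy 1-D function. Each climb stalls on the nearest bump; running one climb longer doesn't help, and only restarting from many points ("broad front") finds the big peak. The function and step sizes are arbitrary toy choices.

        import math
        import random

        def f(x):
            return math.sin(5 * x) - 0.1 * x * x   # many local maxima

        def climb(x, step=0.01, iters=10_000):
            for _ in range(iters):
                best = max((x - step, x, x + step), key=f)
                if best == x:
                    break                          # stuck: no neighbor is higher
                x = best
            return x

        random.seed(1)
        peaks = [round(f(climb(random.uniform(-10, 10))), 3) for _ in range(20)]
        print("local maxima found:", sorted(set(peaks)))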

    The classic approach of hammering everything into mathematical logic and proving theorems doesn't map well to the real world. Formalizing real-world problems is very hard, especially if you don't know the answer in the first place.

    The model-less reactive-behavior stuff works fine for insects, but hits a wall as you try for more complex behavior. Compare Brooks' insect robots with his Cog project.

    Natural language understanding is still lousy. In a narrow area, or with a big database, you can fake it (try Ask Jeeves [askjeeves.com]), but you're searching, not understanding.

    Out of all the work on AI has come many useful engineering techniques. But strong AI looks further away than it did 30 years ago.

    The few people still making real progress are mostly game developers. They need AI, or something like it, to run their worlds. That's worth watching.

  • Thoughts (Score:3, Interesting)

    by Tumbleweed ( 3706 ) on Sunday March 30, 2003 @04:23PM (#5627512)
    I find it odd that Watson goes on and on about how an AI would 'naturally' (hehe) want to make sure it survives the end of the universe. I also question whether an AI would think as fast as it computes.

    I wonder if a true AI would have autonomic processes like we have, otherwise you might get a split personality (processes? threads? :) - part is 'conscious' and talking to the bags-of-mostly-water, and part is 'unconscious' and taking care of memory management, drive space, and I/O management, etc. Kinda like Spock's brain managing the complex - you substitute the autonomic functions for whatever is appropriate.

    As for immediately wanting to survive the end of the universe, I wonder at Ian Watson's motivations if he thinks that's what an AI would be most concerned with. If, as Watson supposes, an AI consciously thinks as fast as it computes, the end of the universe is an ungodly long time away. I think it'd be more concerned with becoming mobile, developing long-term power supplies, weapons for self-defense, better sensory equipment, etc, and probably designing a new 'body' so it can think faster. An AI's awareness of its surroundings would also depend on its sensory equipment, and how much knowledge it has acquired. It may not even know the nature of the universe (rather unlikely, in fact), and thus may not be aware of what the universe is doing, or will do in the far-flung future.

    Assigning motive to an intelligence, be it artificial or natural, would seem to be rather pointless. *I* am intelligent, and I have no desire to live longer than about another 40 years or so, mainly because of the state this body will be in by then, and I certainly don't feel the need to outlive the universe. Suicide bombers don't even feel the need to make it out of their twenties, for various political & religious reasons, so the motives of an AI would be impossible to figure out.
  • by erik_fredricks ( 446470 ) on Sunday March 30, 2003 @10:01PM (#5628995)
    "Destroy all humans. Hey, bite my shiny metal..."

"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs

Working...