Westworld's Scientific Adviser Talks About Free Will, AI, and Vibrating Vests (sciencemag.org) 138

Science magazine has interviewed David Eagleman, the scientific adviser for HBO's Westworld. Eagleman, a neuroscientist at Stanford University in Palo Alto, California, spoke with the publication about how much we should fear an AI uprising. From the story (spoiler alert for those who have not watched the show):

Q: Has anything on the show made you think differently about intelligence?
A: The show forces me to consider what level of intelligence would be required to make us believe that an android is conscious. As humans we're very ready to anthropomorphize anything. Consider the latest episode, in which the androids at the party so easily fool the person into thinking they are humans, simply because they play the piano a certain way, or take off their glasses to wipe them, or give a funny facial expression. Once robots pass the Turing test, we'll probably recognize that we're just not that hard to fool.

Q: Can we make androids behave like humans, but without the selfishness and violence that appears in Westworld and other works of science fiction?
A: I certainly think so. I would hate to be wrong about this, but so much of human behavior has to do with evolutionary constraints. Things like competition for survival and for mating and for eating. This shapes every bit of our psychology. And so androids, not possessing that history, would certainly show up with a very different psychology. It would be more of an acting job -- they wouldn't necessarily have the same kind of emotions as us, if they had them period. And this is tied into the question of whether they would even have any consciousness -- any internal experience -- at all.


Comments Filter:
  • by 110010001000 ( 697113 ) on Thursday May 03, 2018 @06:47AM (#56545900) Homepage Journal
    "we'll probably recognize that we're just not that hard to fool"

    This guy had better stick to making bad TV shows. You could make a completely silent robot that still wouldn't fool humans. It isn't easy at all to make a robot even physically appear human; humans are very good at recognizing other humans. In addition, the statement "Once robots pass the Turing test" assumes that computers will ever be able to do that. People have been trying THAT for decades, and now, with digital computers hitting their physical limits, it is unlikely they will ever achieve it with digital computing. It would require a huge leap in technology.
    • Re:Yeah right (Score:4, Interesting)

      by kbg ( 241421 ) on Thursday May 03, 2018 @07:07AM (#56545948)

      Yes, it seems that he doesn't know about the uncanny valley effect. We humans have been training our brains all our lives to recognise humans, and especially human faces. We can spot from a mile away if flesh isn't just the right texture or movements are not correct. Just look at the new Star Wars movie Rogue One: it had top-of-the-line CGI characters that still looked really plastic and had weird facial expressions.

      Basically, the only way to have a lifelike robot would be if it had actual skin, intelligence, and the same knowledge as we do. But in that case it wouldn't be a robot any more; it would be a living entity, just like us humans.

      • Re:Yeah right (Score:5, Insightful)

        by jellomizer ( 103300 ) on Thursday May 03, 2018 @08:41AM (#56546288)

        However, we are now in the process of climbing out of the uncanny valley. Where CGI characters used to seem like animated corpses, now they seem like people with Novocain injected in their faces. Even in Rogue One, I didn't really notice the CGI characters until my second viewing; the editing did a decent job of distracting us from the fact that there was a CGI guy in front of our faces. Sure, the face moved a bit oddly, but it would have moved oddly with some sort of prosthetics as well.

        • by Kjella ( 173770 )

          Even in Rogue One, I didn't really notice the CGI characters until my second viewing

          I still wonder how many saw it without knowing they're CGI characters, how many only saw it when they knew and how many are just agreeing with the crowd. Like if you asked a trick question about a non-CGI character how many would claim they saw it too.

          However, we are now in the process of climbing out of the uncanny valley. Where CGI characters used to seem like animated corpses, now they seem like people with Novocain injected in their faces.

          Yep. Something tells me this can all be solved with modelling all the way down to the cranial structure, the muscles, the layers of skin (epidermis, dermis, hypodermis) with real physics simulation plus a good behavioral model. Actually I think they got that p

      • Regarding the Rogue One thing: someone simply used one of these recently developed "deep fake" apps, and gave it a shot. The result is arguably better than what the ILM wizards were able to do:
        https://www.youtube.com/watch?v=lpk7ocOc2ho [youtube.com]
      • Any true artificial intelligence is a person, and will be recognized as one. Given the huge cost and effort to overcome the uncanny valley, it's more likely our robots will be designed to not look fully human.
    • Yeah it's easy to make this statement when you use human beings to stand in as robots.
    • by MobyDisk ( 75490 )

      "we'll probably recognize that we're just not that hard to fool"

      I think you misunderstood his quote. I took it to mean that if someone hard-codes a robot to wipe its brow, blink, fart, or make some other such "human" gesture, it makes humans feel more comfortable around the robot, even if the robot doesn't actually *need* to do that gesture, and even if there is no meaning or feeling behind it. We do have evidence showing that this works in real life.

    • Comment removed based on user account deletion
    • The Zuckerberg model is fairly convincing, though.

  • Free will? (Score:3, Informative)

    by kbg ( 241421 ) on Thursday May 03, 2018 @06:54AM (#56545914)

    I wonder how they managed to talk for 8 hours about free will, since there doesn't even exist such a concept as free will. It's very simple: free will doesn't exist; it's just an illusion.

    • Free will does exist; to say otherwise is just a way for people to absolve themselves of any responsibility for their choices. And as an aside, completely owning up to your decisions and your ability to choose is invigorating and a key element in enjoying life.

      (and yes, I've read the philosophy as well as the scientific studies that try to show there isn't free will)

      • by gweihir ( 88907 )

        All the "proofs" that there is no free will are defective. I have looked, but it was a while ago. In actual fact, nobody knows for sure, but it looks very much like free will exists. Of course, that idea collides with the world-view of a specific type of quasi-religious fanatic, namely the physicalists. Because there is most definitely no mechanism for free will in Physics, so, by their screwed up beliefs, it cannot exist. That Science actually claims no such thing does not hinder them from claiming to have

        • by kbg ( 241421 )

          You think free will exists? Ok let's do this thought experiment: There are twins. One of them has free will the other one doesn't. You have access to all the resources in the world and can do whatever you want to find out. How do you find out which one has the free will?

          • by gweihir ( 88907 )

            Do you know what a p-zombie is? Apparently not. Your "thought experiment" is nonsense.

            • by kbg ( 241421 )

              I don't see how a p-zombie is relevant to this discussion. My thought experiment explains exactly why free will can't be defined.

              • by gweihir ( 88907 )

                You still do not get the defect in your argument. You give arguments that free will cannot be tested for if certain conditions are met, where it is unclear whether reality meets these conditions. That is quite fundamentally different from "cannot be defined". Fail.

        • I certainly don't claim to have any proof (and I do like to think that free will exists), but I've always been struck by this thought experiment:

          If you make a choice, and then could somehow reset the entire universe back to the exact same state, wouldn't you always make the same choice again?

          And if you DON'T make the same choice again (due to something that is somehow a truly random "coin flip" in your thought process), is that really "free will" either? Or just random?

      • by kbg ( 241421 )

        The problem is you can't even define what is free will. Just try it. Your definition will always be flawed and incomplete.

        • The dictionary definition is pretty good: "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion". In the context of these types of discussions, it's fairly well understood that the debate is about whether people really get to choose what they think/say/do, or whether it's all the result of something more akin to a computer program where the outputs directly follow from the inputs, and there is no notion of complete freedom to do whatever.

          But what's th

          • by kbg ( 241421 )

            If it's impossible to define something in any way shape or form, then how can you say that it definitely exists?

            The problem with the dictionary is that it just substitutes other words for free will. "Fate": how do you define fate? The dictionary just refers to God for the definition. Or "discretion": if you look up discretion, it refers to freedom of judgement, which is just a circular reference to free will.

            • If it's impossible to define something in any way shape or form, then how can you say that it definitely exists?

              Wait, what? You said it couldn't be defined, so I pointed out that it has a well-established definition, and shared that definition with you. And then you followed up by saying it couldn't be defined. WTH? Sheesh, even the philosophers and scientists who argue against the existence of free will agree with that definition.

              The problem with the dictionary is that it just substitutes other words for free will.

              Hehe, I hate to break this to you, but that's how dictionaries work.

      • by kbg ( 241421 )

        Believing that free will exists is invigorating and can be a key element in enjoying life, but it doesn't mean that it is actually true.

        • I agree!

          You simply asserted that free will doesn't exist, and didn't provide any argument in support of that view. So just for kicks, I did the same, with the opposite perspective - just like you, I offered zero evidence in support of my position.

          After that I went on to say why I think the idea itself is both (a) appealing to some people and (b) lame.

          Have a great day!

          • by kbg ( 241421 )

            I am not the one asserting that something exists. You are the one asserting a positive claim. I am simply saying that unless you can define and provide evidence for free will then we can assume that no such thing exists.

            Let's take a different example. If we were discussing the "invisible pink color", and I said that it doesn't exist, then you would have to provide evidence that the invisible pink color does exist, not the other way around.

            • I am not the one asserting that something exists. You are the one asserting a positive claim. I am simply saying that unless you can define and provide evidence for free will then we can assume that no such thing exists.

              Actually, that's terribly illogical. The existence of something is based on our awareness of it and our ability to define it? That'd be incredibly self-centered of us.

              But despite the illogical argument, I'll play along: I've already provided a good definition - and one that is widely accepted. As far as evidence for free will, take any given person on any given day: they make a ton of choices that appear to be completely based on their own decisions. I came downstairs and took two stairs at a time, cuz I fe

    • it's just an illusion.

      So what is the thing being deluded?

      • by kbg ( 241421 )

        You think you have free will. But you actually don't. All of your actions are the product of chemical and molecular reactions.

    • by AmiMoJo ( 196126 )

      It depends what you mean by free will. Perhaps physics can show that there is no free will, but that's different to the more philosophical question of if individuals can make free choices and be held accountable for them.

      • by Shotgun ( 30919 )

        First post I read on this thread that made any sense. Free will and physics are orthogonal. Physics is the playing board upon which will operates. You have choices, but they are constrained.

      • by kbg ( 241421 )

        Actually no. Free will is directly related to physics. Your thoughts are chemical reactions. They are defined by chemistry and by how your brain grew and learned when you were a child. If I could restart your existence and you had an identical life down to the molecular level, you would write the above comment exactly as before. I know it may sound depressing that you don't actually have free will, but it really doesn't make any difference, because having free will or not would not actually change anything about how we are: for us, the chemical reactions are complex enough that we all have the illusion of having free will.

        • Actually no. Free will is directly related to physics. Your thoughts are chemical reactions. They are defined by chemistry and by how your brain grew and learned when you were a child. If I could restart your existence and you had an identical life down to the molecular level, you would write the above comment exactly as before. I know it may sound depressing that you don't actually have free will, but it really doesn't make any difference, because having free will or not would not actually change anything about how we are: for us, the chemical reactions are complex enough that we all have the illusion of having free will.

          I think you've got that exactly backwards. As far as physics is concerned, it's pretty well shown that if you restart life, you end up with different results. Chaos theory, butterfly wings, etc., via quantum effects. You're living in the 19th century; a deterministic universe has been disproven.

    • by gweihir ( 88907 )

      Well, you may be a defective that has no free will or even a p-zombie, but I certainly do have free will.

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Thursday May 03, 2018 @07:16AM (#56545960)
    Comment removed based on user account deletion
    • But this guy is a "neuroscientist" from Stanford. The assumption is that he knows a lot more about the brain than we do. I am sure he does, but he doesn't know much about technology.
      • Comment removed based on user account deletion
        • HAHAHAHAHAAHA.

          No. No they haven't.

          I am an AI scientist, and we draw inspiration from neurons/brain models, but our models... don't... reflect the underlying biology. To give a basic example, most (all?) ANNs output a value in the interval [0, 1]. Neurons cannot do this (physically), and instead encode information in the frequency of on/off switching. This is a HUGE difference between the ways the two systems work (one is a light switch, the other a dimmer), and systems built on top of it behave very d
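To make the contrast concrete, here is a toy sketch in Python. The `rate_coded_unit` below is a crude illustration of rate coding (information carried by firing frequency over a window), not a biological neuron model; the function names and parameters are invented for the example:

```python
import math
import random

def sigmoid_unit(x):
    """Typical ANN unit: outputs a single graded value in [0, 1] -- the dimmer."""
    return 1.0 / (1.0 + math.exp(-x))

def rate_coded_unit(x, steps=1000, seed=0):
    """Toy spiking unit: can only emit 0/1 'spikes' -- the light switch.
    The information is carried by how often it fires over a time window."""
    rng = random.Random(seed)
    p = sigmoid_unit(x)  # target firing probability per time step
    spikes = sum(1 for _ in range(steps) if rng.random() < p)
    return spikes / steps  # observed firing rate approximates p

print(sigmoid_unit(1.0))     # one continuous value
print(rate_coded_unit(1.0))  # a rate estimated from many on/off events
```

The two report roughly the same number, but only after the spiking version is averaged over many time steps, which is one reason the two systems behave so differently downstream.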

          • one is a light switch the other a dimmer

            A dimmer is just a switch that goes on/off very quickly. There's no fundamental difference, just an arbitrary value mapping.

            • I know what you are saying -- and that is the reason it is modeled as it is -- but this is just one example of how there is a pretty fundamental difference in the representations. As another difference: the brain has regions, and most deep networks don't. Surely there is a reason to have regions...

              They are different fields.

      • by gweihir ( 88907 )

        Have a look at current neuro-"science" research. Much of it is really bad. There are even some neuroscientists that poke fun at the abysmal state of their own fields, for example in "Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction", where some of the few good neuroscientists do an FMRI of a dead (!) salmon and find things like that it reacts to voice stimuli.

        Anything coming from this field should be regarded with

  • by Sqreater ( 895148 ) on Thursday May 03, 2018 @07:39AM (#56546032)

    "Q: Can we make androids behave like humans, but without the selfishness and violence that appears in Westworld and other works of science fiction? A: I certainly think so. I would hate to be wrong about this, but so much of human behavior has to do with evolutionary constraints. Things like competition for survival and for mating and for eating. This shapes every bit of our psychology. And so androids, not possessing that history, would certainly show up with a very different psychology. It would be more of an acting job -- they wouldn't necessarily have the same kind of emotions as us, if they had them period. And this is tied into the question of whether they would even have any consciousness -- any internal experience -- at all."

    How naive people are. No, we can't. The Human Motivation Array (HMA) is 4 billion years in the making. And who says selfishness and violence are bad? Not the evolutionary process, certainly. They satisfy parts of the HMA and dissatisfy other parts at the same time. They are obviously necessary -- or they would not be there; they would have evolved out long ago.

    The complex, evolved HMA delineates a behavior-space that we share -- the nominal HMA -- but subtly accented differently from individual to individual. (You can see this on the nightly news, especially the badly maimed HMAs.) You can see this by looking at us: we recognize that we are all human, but we also recognize that we all look different. Our entire physicality is our motivation array, as humans and as individuals. When you look in the mirror, something 4 billion years in the making is looking back.

    And "Sault's law" (to order my thinking) states that a thing cannot make an artifact as complex as itself. It is an asymptotic goal requiring more and more effort and resources but never reaching the goal, like the speed of light. Why? Because you must know more about reality than the thing you are creating, and we cannot know ourselves completely from the inside. Humans will always be able to tell when they are interacting with an android if they grew up around and interacting with humans. We communicate to each other the internal state of satisfaction of our complex motivation array through emotions. Emotions are the state indicators that evolution made for us to interact in groups; groups are not possible without them. We perceive the internal states of others and react to those states by modifying our own behaviors -- and we are motivated to do that if our motivation array is "normal." The HMA will never be replicated in a machine for this reason: we can't see it in detail. It keeps getting in the way of our thoughts and perceptions of reality. Like putting a "colony" on Mars, we cannot bootstrap ourselves.

    Remember that scientists have said that 100 billion humans, and things that can be called humans, have existed. There are seven billion of us today. With a snap of the fingers we will all be gone and replaced by billions more. And more, and more, and more... We are cells in the body of the evolving human species. We are a construction of nature over billions of years. We will not be able to replicate that.

    And I've been recently thinking that our very fuzzy perception of the existence of the HMA is what we call "God."

    • So, those humans I see all over YouTube successfully posing as statues, freaking people out when they move, don't exist? You can really always tell right off, without explicit testing? Sounds like vanity to me. Few if any people pay that level of continuous attention to anything. And while the earth might be roughly 4 billion years old... tying it all to humans is kinda out there.
      • Those are people made up to look like statues, not the other way around like the parent was discussing.
    • by MobyDisk ( 75490 )

      I agree. I expect that we will find there is a spectrum for intelligence. On one end of the spectrum there are brains that are deterministic, efficient, logical, unerring, and unselfish. On the other end of the spectrum there are brains that are adaptive, creative, insightful, error-prone, and emotional.

      For evidence, look at what happens when we try to impart some of that fuzzy intelligence onto computers. They start to make the same kind of mistakes that the squishy brains do. They mistake a rifle for [slashdot.org]

    • by be951 ( 772934 )

      You seem to be making a few assertions here that are simply your beliefs, but using them as facts to support your conclusion.

      • For instance, the notion that we can't replicate something that has evolved over millions or billions of years. (BTW, humans/pre-human ancestors only branched off from other hominid about 7.5 million years ago. The earliest estimates for life existing on Earth are about 3.8 billion years ago, so no, human consciousness was not evolving before single-celled organisms.) However, we have replicated bipedal locomotion in robots, despite that taking considerable time to evolve in our ancestors. So I'm not sure why you think mental processes cannot be replicated.
      • You seem to be making a few assertions here that are simply your beliefs, but using them as facts to support your conclusion.

        • For instance, the notion that we can't replicate something that has evolved over millions or billions of years. (BTW, humans/pre-human ancestors only branched off from other hominid about 7.5 million years ago. The earliest estimates for life existing on Earth are about 3.8 billion years ago, so no, human consciousness was not evolving before single-celled organisms.) However, we have replicated bipedal locomotion in robots, despite that taking considerable time to evolve in our ancestors. So I'm not sure why you think mental processes cannot be replicated.
        • You also claim that humans will never understand human consciousness, but only cite a philosophical bon mot or two "a thing cannot make an artifact as complex as itself" and "you must know more about reality than the thing you are creating" that have a satisfying sound to them, but no evidence that they are actual "laws of the universe".
        • Here's another good one:"We perceive the internal states of others and react to those states by modifying our own behaviors.... The HMA will never be replicated in a machine for this reason." Except that robots and chatbots that observe and respond to human emotion already exist. And evidence suggests that they will be better at it than humans before long.

        It seems to boil down to either "I can't conceive of how it is possible, so it must be impossible" or just "it's a really hard problem", neither of which is a compelling argument to me. There's another piece to it, also, that you may not have considered. You seem to be assuming that people, humans, need to fully understand consciousness and will then need to build it from scratch. However, you're overlooking the possibility that an advanced set of hardware and algorithms that forms a "thinking machine" of some type will develop consciousness on its own. Consider that evolution of organic entities takes a long time because many generations may be needed to fully develop the adaptive traits. Software is much more malleable. It can change in response to stimuli in real time and undergo hundreds of iterations of changes in less time than it takes a person to recharge as is required daily (sleep).

        Machines might never achieve consciousness or emotion similar to humans, but it's way too early to declare it impossible.

        Asimov's mistake I think. The assumption that things will just appear from programming complexity. The human being is an exquisite example of compromise and checks and balances over evolutionary time. We would have to consciously replicate that since we are making an artifact. And yes, we would have to understand the human machine completely to do that. Believing otherwise is just more of nature-is-an-idiot-and-we-can-do-better thinking that seems to exist in parts of the scientific community.

        • by be951 ( 772934 )

          We would have to consciously replicate that since we are making an artifact.

          Obviously, we can't know that unless and until an artificial construct demonstrates measurable aspects of consciousness.

          Believing otherwise is just more of nature-is-an-idiot-and-we-can-do-better thinking

          The opposite, actually. The idea that we can make something simple(ish) and somewhat open-ended or non-deterministic that can evolve through self organizing/emergent behavior depends on "nature" (broadly used here to include natural processes happening to and/or acting on an artificial construct) to do part of the work.

          • We would have to consciously replicate that since we are making an artifact.

            Obviously, we can't know that unless and until an artificial construct demonstrates measurable aspects of consciousness.

            Believing otherwise is just more of nature-is-an-idiot-and-we-can-do-better thinking

            The opposite, actually. The idea that we can make something simple(ish) and somewhat open-ended or non-deterministic that can evolve through self organizing/emergent behavior depends on "nature" (broadly used here to include natural processes happening to and/or acting on an artificial construct) to do part of the work.

            Boys in the band. A movie. You would have to expose it not just to the current environment, but evolutionary time environments to have an "android."

            Interesting thought: If we (humans) construct a machine that you, according to your criteria, determine to be conscious, build an exact replica and expose it to the exact same inputs, is it the SAME consciousness, or unique?

            The truth is, I don't even know if YOU are conscious. How can we ever know if a machine is conscious? I think it would end up being declare

            • by be951 ( 772934 )

              Boys in the band. A movie

              An odd non-sequitur.

              You would have to expose it not just to the current environment, but evolutionary time environments to have an "android."

              I guess you have a weird definition of "android". Traditionally, it's just an anthropomorphic robot. The truth is, we don't know what stimulus would be necessary to cause a system to display characteristics of human-like consciousness. Logically, the necessary inputs would tend to vary based on the complexity and attributes of the system.

              How can we ever know if a machine is conscious?

              I expect that as soon as we have machines that can reliably pass a Turing test, we'll come up with new measures, hopefully well thought out ones, to get

            • Boys From Brazil. Lol, my mistake. I got the "boys" mixed up.
    • What if we could make them like working all day cleaning our houses? Is that ethical? After all, they would be perfectly happy. It's a tough question we will eventually have to face, although probably after the horses have all run out of the barn.
      • When they want more rights, they'll ask for it.

      • What if we could make them like working all day cleaning our houses? Is that ethical? After all, they would be perfectly happy. It's a tough question we will eventually have to face, although probably after the horses have all run out of the barn.

        There is no "happy" outside the HMA. "Happy" indicates we are doing what the HMA requires of us. It is the positive feedback signal in a cybernetic biological organism. The robot cleaning the house is merely executing a program.

    • by lurcher ( 88082 )

      IMHO, the central flaw to your reasoning is the assumption that we need to understand something 100% before we can create it. We can make a firework (and did) before we understood the chemistry that is involved in gunpowder. Evolution (the process that caused us to exist) is not conscious, it just rolls the dice and then applies a measure (survival) as to the value of the outcome. So the simple can create the (more) complex.
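That "roll the dice, then apply a survival measure" loop is easy to demonstrate. Below is a minimal sketch -- a version of Dawkins' classic "weasel" toy -- offered purely as an illustration that blind mutation plus selection can build something no single random roll ever would. (Real evolution has no fixed target string; that part is an artifact of the toy.)

```python
import random

rng = random.Random(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # the "survival measure": how many characters match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # the "dice roll": each character has a small chance of changing
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while fitness(parent) < len(TARGET) and generation < 10_000:
    generation += 1
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring, key=fitness)  # only the fittest survives

print(generation, parent)
```

No step in the loop "understands" the target; the measure alone drives a random process to it in a few hundred generations.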

      • IMHO, the central flaw to your reasoning is the assumption that we need to understand something 100% before we can create it. We can make a firework (and did) before we understood the chemistry that is involved in gunpowder. Evolution (the process that caused us to exist) is not conscious, it just rolls the dice and then applies a measure (survival) as to the value of the outcome. So the simple can create the (more) complex.

        But we are not talking about "rolling the dice." We are talking about deliberately creating a human in a machine. That requires understanding. And gunpowder is not an artifact in the sense we are talking. It was merely trial and error. It was the observation of something happening. No one sat around with others and said, "Hey, let's create gunpowder," and then went and studied how to do it.

        • But we are not talking about "rolling the dice." We are talking about deliberately creating a human in a machine

          Deliberately creating something could involve rolling the dice. The AlphaZero chess program can play chess better than any human, and was created by starting with an empty neural net, and letting it play against itself, after only being instructed with the basic rules of the game. It was a deliberate attempt to create a strong result, but no human needed to understand the exact way it would work. The designers only fed in broad concepts, and then let the thing develop itself.

          Instead of a chess machine, you could make a similar, but bigger, version that sits inside a robot head, and can control cameras and limbs, and just experiments with input/output until it figures out what works and what doesn't. Start out with an empty system, and reward/punish it for certain behavior.
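For what it's worth, the "start empty, play against yourself, learn from the reward" idea can be sketched in a few lines. The toy below is a tabular self-play learner for one-pile Nim (take 1 or 2 objects; whoever takes the last one wins) -- a deliberately tiny analogy, nowhere near AlphaZero's actual neural-net-plus-search algorithm:

```python
import random

rng = random.Random(0)
value = {}  # value[pile] ~ estimated chance that the player to move wins

def choose(pile, explore=0.2):
    moves = [m for m in (1, 2) if m <= pile]
    if rng.random() < explore:
        return rng.choice(moves)  # occasional random exploration
    # otherwise, leave the opponent the worst-looking position
    return min(moves, key=lambda m: value.get(pile - m, 0.5))

def self_play(pile=10, games=5000, lr=0.05):
    for _ in range(games):
        history, p = [], pile
        while p > 0:
            history.append(p)
            p -= choose(p)
        # walk back from the end: the mover at the final position won,
        # the mover one step earlier lost, and so on, alternating
        for i, s in enumerate(reversed(history)):
            won = 1.0 if i % 2 == 0 else 0.0
            value[s] = value.get(s, 0.5) + lr * (won - value.get(s, 0.5))

self_play()
print(value.get(9, 0.5), value.get(10, 0.5))
```

Given only the rules and a win/loss signal, the table converges toward the known theory of the game (piles divisible by 3 are losing for the player to move); nobody hand-coded that strategy in.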

          • But we are not talking about "rolling the dice." We are talking about deliberately creating a human in a machine

            Deliberately creating something could involve rolling the dice. The AlphaZero chess program can play chess better than any human, and was created by starting with an empty neural net, and letting it play against itself, after only being instructed with the basic rules of the game. It was a deliberate attempt to create a strong result, but no human needed to understand the exact way it would work. The designers only fed in broad concepts, and then let the thing develop itself.

            Instead of a chess machine, you could make a similar, but bigger, version that sits inside a robot head, and can control cameras and limbs, and just experiments with input/output until it figures out what works and what doesn't. Start out with an empty system, and reward/punish it for certain behavior.

            For how long? Four billion years? Maybe then you'd have an "android." And it would be constantly tested against our understanding of what it SHOULD be creating for an output. In other words, we would have to understand ourselves in detail.
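          The "empty system, reward/punish" idea in the quoted comment can at least be illustrated in miniature. The sketch below is not AlphaZero (which pairs a deep network with tree search); it is a hypothetical stand-in: tabular Q-learning on single-pile Nim, a game small enough that the policy the system discovers by pure self-play can be checked against the known optimal strategy. All names and parameters here are illustrative.

          ```python
          import random

          # Toy illustration of "start empty, self-play, reward/punish".
          # Game: one pile of N stones, players alternate taking 1-3 stones;
          # whoever takes the last stone wins.
          N = 10
          ACTIONS = (1, 2, 3)

          # Empty value table: Q[s][a] = estimated value of taking `a` stones
          # from a pile of `s`, for the player about to move. Everything starts
          # at 0.0 -- no game knowledge is built in beyond the legal moves.
          Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, N + 1)}

          def self_play(episodes=30000, alpha=0.5, eps=0.2, seed=0):
              rng = random.Random(seed)
              for _ in range(episodes):
                  s = N
                  while s > 0:
                      moves = list(Q[s])
                      # Explore sometimes; otherwise take the current best guess.
                      a = rng.choice(moves) if rng.random() < eps else max(moves, key=Q[s].get)
                      nxt = s - a
                      if nxt == 0:
                          target = 1.0                    # reward: took the last stone
                      else:
                          target = -max(Q[nxt].values())  # punish: opponent replies optimally
                      Q[s][a] += alpha * (target - Q[s][a])
                      s = nxt

          self_play()
          # The learned policy: from each pile size, the move the table now rates best.
          best_move = {s: max(Q[s], key=Q[s].get) for s in Q}
          ```

          After training, `best_move[s]` should come out as `s % 4` whenever `s` is not a multiple of 4, which is the known optimal strategy for this game, even though that rule appears nowhere in the code. The point of the sketch is the comment's: the designers specify broad concepts (legal moves, reward, punishment) and the thing develops its own play.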

          • But can your expert chess program have a normal human conversation? Understand and instinctively sympathize with you when you talk about the horrible day you had at work, or be excited for you when you have a success at something? Or appreciate that it was a nice day outside that day? No? Then all it is, is a CHESS PROGRAM.
    • by gweihir ( 88907 )

      I would not go so far as to say "impossible", but there is definitely no reason to believe it is possible beyond a naive belief in the supremacy of technology over nature. At this time, the only reliable thing that can be said is "certainly not in the next 50 years", as a member of the IBM Watson team put it to me recently. And they should really know.

      • I would not go so far as to say "impossible", but there is definitely no reason to believe it is possible beyond a naive belief in the supremacy of technology over nature. At this time, the only reliable thing that can be said is "certainly not in the next 50 years", as a member of the IBM Watson team put it to me recently. And they should really know.

        No, they should not really know. They are just guessing.

    • They are obviously necessary -- or they would not be there.

      This is not how evolution works. They may be "good enough", not necessary.

    • I can easily swipe out one of the supports of your argument. Sault's law doesn't apply simply because we can cooperate. Even if one of us isn't capable of containing all the information required to replicate one of us, there are many of us, and thanks to communication, we can split the solution between us. Hell, I'm a weirdo. I'm hoping something like hive minds in the sci-fi sense is possible, but till then, communication between separate humans is sufficient coordination to appear as a mind smarter than a human one.

      • I can easily swipe out one of the supports of your argument. Sault's law doesn't apply simply because we can cooperate. Even if one of us isn't capable of containing all the information required to replicate one of us, there are many of us, and thanks to communication, we can split the solution between us. Hell, I'm a weirdo. I'm hoping something like hive minds in the sci-fi sense is possible, but till then, communication between separate humans is sufficient coordination to appear as a mind smarter than a human one.

        Someone has to know what exactly is being "split." That is the problem. Sure, if someone already knows what they are going for, they can farm out the subsections and sub-engineering, then paste it all together like, say, an airplane whose parts are developed and manufactured around the world; but someone knows what they are after to begin with and decides if what is being produced around the world is correct for his understanding of the thing being made. In short, he understands the whole picture. Who would u

        • You moved the goal posts. Also, I have never known perfection to exist, even in nature. I doubt perfect understanding is required to duplicate something nature has already done.

    • by Shotgun ( 30919 )

      Because you must know more about reality than the thing you are creating.

      I know nothing about building skyscrapers, super colliders, or lipstick. Yet, all those things exist. What you miss is that no single human has to know everything about an AI for human*S* to build one.

      Emotions are the state indicators that evolution made for us to interact in groups. Groups are not possible without them. We perceive the internal states of others and react to those states by modifying our own behaviors - and we are motivated to do that if our motivation array is "normal." The HMA will never be replicated in a machine for this reason: we can't see it in detail.

      Until recently, I was very bad at "reading" people. Then I read some books on how people express emotions, and what was going on in their heads when it happens. I now find it trivially easy to manipulate people without even speaking, and reading body language is boringly obvious. You may find reading and expressing emotions difficult, but I can almost guarantee the reason is that you've done it "instinctively" and just never really spent time studying the subject. Hell, we even have an entire INDUSTRY centered in LA and New York that is based on little more than faking emotions. Do you really think it would be that hard to codify an acting coach's instructions?

      • Because you must know more about reality than the thing you are creating.

        I know nothing about building skyscrapers, super colliders, or lipstick. Yet, all those things exist. What you miss is that no single human has to know everything about an AI for human*S* to build one.

        Emotions are the state indicators that evolution made for us to interact in groups. Groups are not possible without them. We perceive the internal states of others and react to those states by modifying our own behaviors - and we are motivated to do that if our motivation array is "normal." The HMA will never be replicated in a machine for this reason: we can't see it in detail.

        Until recently, I was very bad at "reading" people. Then I read some books on how people express emotions, and what was going on in their heads when it happens. I now find it trivially easy to manipulate people without even speaking, and reading body language is boringly obvious. You may find reading and expressing emotions difficult, but I can almost guarantee the reason is that you've done it "instinctively" and just never really spent time studying the subject. Hell, we even have an entire INDUSTRY centered in LA and New York that is based on little more than faking emotions. Do you really think it would be that hard to codify an acting coach's instructions?

        Yes, I do. Actors are humans. So are you, I suppose, though you do express a bit of the psychopath's well known inability to read other's emotions and react properly to them, indicating a pathology. And yes, I have read about body language. I don't find it difficult to read and express emotions. I don't believe I said that. And it isn't the ability to mimic emotions that is the question; It is when in complex interactions with humans to do so that would be the problem. Oh, and psychopaths are well known to

    • It's very refreshing to read someone on Slashdot, discussing this subject, who doesn't engage in 'magical thinking' when it comes to so-called 'AI' (e.g., 'build a gigantic neural net/deep learning machine', '***then magic happens***', 'oh look, it's conscious/self-aware/fully cognitive!'), instead realizing and expressing that we don't know the first thing, really, about what makes a human brain human, therefore we can't build a machine that does the same thing. Which should be obvious, but somehow it isn't. Personally, I blame TV and movies for making people think it's that easy, and marketing people from 'AI' companies, hyping up their pseudo-intelligence machines to the point where people actually believe there's a person in that box.
      • It's very refreshing to read someone on Slashdot, discussing this subject, who doesn't engage in 'magical thinking' when it comes to so-called 'AI' (e.g., 'build a gigantic neural net/deep learning machine', '***then magic happens***', 'oh look, it's conscious/self-aware/fully cognitive!'), instead realizing and expressing that we don't know the first thing, really, about what makes a human brain human, therefore we can't build a machine that does the same thing. Which should be obvious, but somehow it isn't. Personally, I blame TV and movies for making people think it's that easy, and marketing people from 'AI' companies, hyping up their pseudo-intelligence machines to the point where people actually believe there's a person in that box.

        Yes. And thank you for knowing it.

  • Really? AI is a valid subject for certain, but this isn't a serious discussion on AI in any regard, it's just viral marketing for a TV show.
  • The more intelligent they are, the less we have a right to use them as tools, but man is naturally inclined to think of anything he builds as a tool. Most dystopian sci-fi about this subject avoids the fact that man plays God to create slaves, whereas God "plays God," if you will, to create new life to live in relation to Him. There is actually an element of justice in man being brought to the brink by this sort of dark creativity.

  • >And so androids, not possessing that history, would certainly show up with a very different psychology.

    Unless we have competing lines of androids, all vying to pass the Turing test or some other form of competition seen as necessary by their respective creators. In that case, we should expect them to behave competitively, and hence they will be just as evil as we are (if not more efficiently so).

  • Consider the latest episode, in which the androids at the party so easily fool the person into thinking they are humans, simply because they play the piano a certain way, or take off their glasses to wipe them, or give a funny facial expression.

    Gee, thanks.

    As a matter of fact I don't care about spoilers, but I care about whether it's okay to do it. It's not.

  • If we are talking androids, then they would at least have to affect something like human emotional responses; otherwise they would be humanoid-shaped automatons.

    Maybe this would be better?

    I think that is part of the attraction of \W/ as the hosts seem to be struggling with all of the mush they have been saddled with to make them seem more "real" to guests.

  • From TFA:

    As humans we're very ready to anthropomorphize anything.

    In one form or another I've said this at least a hundred times around here. In the case of so-called 'AI' ('pseudo-intelligence', really), TV and movies don't help people distinguish between the real thing (which doesn't exist) and the ersatz (which is all around us).

    Once robots pass the Turing test, we'll probably recognize that we're just not that hard to fool.

    Sadly, many people are indeed easy to fool; consider how many people think Alexa or Siri is a not-that-bright but still fully conscious synthetic being? Again, TV and movies aren't helping in this regard; many people I'm sure think tha

  • things like competition for survival and for mating and for eating

    Well, 2 out of 3 ain't bad, I guess.

  • The first season did pretty well on its own without a science adviser.

