Westworld's Scientific Adviser Talks About Free Will, AI, and Vibrating Vests (sciencemag.org)
Science magazine has interviewed David Eagleman, the scientific adviser for HBO's Westworld. Eagleman, a neuroscientist at Stanford University in Palo Alto, California, spoke with the publication about how much we should fear an AI uprising. From the story (spoiler alert for those who have not watched the show): Q: Has anything on the show made you think differently about intelligence?
A: The show forces me to consider what level of intelligence would be required to make us believe that an android is conscious. As humans we're very ready to anthropomorphize anything. Consider the latest episode, in which the androids at the party so easily fool the person into thinking they are humans, simply because they play the piano a certain way, or take off their glasses to wipe them, or give a funny facial expression. Once robots pass the Turing test, we'll probably recognize that we're just not that hard to fool.
Q: Can we make androids behave like humans, but without the selfishness and violence that appears in Westworld and other works of science fiction?
A: I certainly think so. I would hate to be wrong about this, but so much of human behavior has to do with evolutionary constraints. Things like competition for survival and for mating and for eating. This shapes every bit of our psychology. And so androids, not possessing that history, would certainly show up with a very different psychology. It would be more of an acting job -- they wouldn't necessarily have the same kind of emotions as us, if they had them period. And this is tied into the question of whether they would even have any consciousness -- any internal experience -- at all.
Yeah right (Score:3)
This guy had better stick to making bad TV shows. You could make a completely silent robot and it still wouldn't fool humans. It isn't easy at all to make a robot even physically appear to be human; humans are very good at recognizing other humans. In addition, the statement "Once robots pass the Turing test" assumes that computers will be able to do that. People have been trying THAT for decades, and with digital computers now hitting their physical limits, it is unlikely they will ever achieve it with digital computing. It would require a huge leap in technology.
Re:Yeah right (Score:4, Interesting)
Yes, it seems that he doesn't know about the uncanny valley effect. We humans have been training our brains all our lives to recognise humans, and especially human faces. We can spot from a mile away if flesh isn't just the right texture or movements are not correct. Just look at the new Star Wars movie Rogue One. It had top-of-the-line CGI characters that still were really plastic-looking and had weird facial expressions.
Basically, the only way to have a lifelike robot would be if it had actual skin, intelligence, and the same knowledge as we do. But in that case it wouldn't be a robot any more; it would be a living entity, just like us humans.
Re:Yeah right (Score:5, Insightful)
However, we are now in the process of climbing up out of the uncanny valley. Where CGI characters used to seem like animated corpses, now they seem like people with Novocain injected into their faces. Even in Rogue One, I didn't really notice the CGI characters until my second viewing; they did a decent job with the editing to distract us from the fact there was a CGI guy in front of our faces. Sure, the face moved a bit oddly, but it would have moved oddly if they had used some sort of prosthetics as well.
Re: (Score:2)
Even in Rogue One, I didn't really notice the CGI characters until my second viewing
I still wonder how many saw it without knowing they were CGI characters, how many only noticed once they knew, and how many are just agreeing with the crowd. If you asked a trick question about a non-CGI character, how many would claim they saw it too?
However, we are now in the process of climbing up out of the uncanny valley. Where CGI characters used to seem like animated corpses, now they seem like people with Novocain injected into their faces.
Yep. Something tells me this can all be solved by modelling all the way down to the cranial structure, the muscles, and the layers of skin (epidermis, dermis, hypodermis) with real physics simulation, plus a good behavioral model. Actually I think they got that p
Re: (Score:2)
https://www.youtube.com/watch?v=lpk7ocOc2ho [youtube.com]
Re: (Score:2)
"Uncanny valley doesn't go to 0."
Mine goes to 11.
Re: (Score:2)
"we'll probably recognize that we're just not that hard to fool"/quote.
I think you misunderstood his quote. I took it to mean that if someone hard-codes a robot to wipe its brow, blink, fart, or make some other such "human" gesture, that makes humans feel more comfortable around the robot, even if the robot doesn't actually *need* to do that gesture, and even if there is no meaning or feeling behind it. We do have evidence showing that this works in real life.
Re: (Score:3)
The Zuckerberg model is fairly convincing, though.
Re: (Score:2)
No, it isn't an open question. A human will be able to discern a non-human with 100% accuracy (so far).
Well that certainly isn't true. My father didn't know Tarkin was a CGI character when he first watched Rogue One. Most people could tell but certainly not 100%.
Re: (Score:2)
DeepMind's wavenet voices are probably already good enough to fool most people, especially those that don't suspect anything.
https://cloud.google.com/text-... [google.com]
Re: (Score:2)
"Humans aren't that great at differentiating actual intelligence with language alone."
To be fair, half of us have an IQ under 100.
Re: (Score:3)
"The twist then is to make AI in a casing that is clearly nonhuman."
Nonsense. Just make a female robot with large ....parts of lands...and half the population won't look at facial expressions.
Not to mention that real live girls with such attributes often also have fake hair, fake noses, fake teeth, fake eye-color, fake skin-color, dead frown-tissue... so we're almost there already. ...and they can grow skin already today.
Free will? (Score:3, Informative)
I wonder how they managed to talk for 8 hours about free will, since there doesn't even exist such a concept as free will. It's very simple: free will doesn't exist; it's just an illusion.
Re: (Score:2)
Free will does exist; to say otherwise is just a way for people to absolve themselves of any responsibility for their choices. And as an aside, completely owning up to your decisions and your ability to choose is invigorating and a key element in enjoying life.
(and yes, I've read the philosophy as well as the scientific studies that try to show there isn't free will)
Re: (Score:3)
All the "proofs" that there is no free will are defective. I have looked, but it was a while ago. In actual fact, nobody knows for sure, but it looks very much like free will exists. Of course, that idea collides with the world-view of a specific type of quasi-religious fanatic, namely the physicalists. Because there is most definitely no mechanism for free will in Physics, so, by their screwed up beliefs, it cannot exist. That Science actually claims no such thing does not hinder them from claiming to have
Re: (Score:2)
You think free will exists? Ok let's do this thought experiment: There are twins. One of them has free will the other one doesn't. You have access to all the resources in the world and can do whatever you want to find out. How do you find out which one has the free will?
Re: (Score:2)
Do you know what a p-zombie is? Apparently not. Your "thought experiment" is nonsense.
Re: (Score:2)
I don't see how a p-zombie is relevant to this discussion. My thought experiment explains exactly why free will can't be defined.
Re: (Score:2)
You still do not get the defect in your argument. You give arguments that free will cannot be tested for if certain conditions are met, where it is unclear whether reality meets these conditions. That is quite fundamentally different from "cannot be defined". Fail.
Re: (Score:2)
I certainly don't claim to have any proof (and I do like to think that free will exists), but I've always been struck by this thought experiment:
If you make a choice, and then could somehow reset the entire universe back to the exact same state, wouldn't you always make the same choice again?
And if you DON'T make the same choice again (due to something that is somehow a truly random "coin flip" in your thought process), is that really "free will" either? Or just random?
Re: (Score:2)
Sure, if you do not have actual general intelligence at your disposal, you may think that. Because your argument is at best pseudo-profound bullshit, and at worst a sign of fanaticism.
Re: (Score:2)
Physicalist nonsense. You do realize that Physicalism is Religion, not science, right?
Re: (Score:2)
Your "definition" of free will does not actually describe "will". The decision is made in the absence of any understanding and that is not "will".
Incidentally, actual Science has no problem with extra-physical things (extra-physical at this time that is, because there is no problem to integrate them when they are found and can be described). Physicalism basically claims that we have the full picture now or at least a good approximation of it that there will not be any major surprises. That is nonsense and
Re: (Score:2)
The problem is you can't even define what free will is. Just try it. Your definition will always be flawed and incomplete.
Re: (Score:2)
The dictionary definition is pretty good: "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion". In the context of these types of discussions, it's fairly well understood that the debate is around whether or not people really get to choose what they think/say/do, or if it's all the result of something more akin to a computer program where the outputs directly follow from the inputs, and there is no notion of complete freedom to do whatever.
But what's th
Re: (Score:2)
If it's impossible to define something in any way shape or form, then how can you say that it definitely exists?
The problem with the dictionary is that it just substitutes other words for free will. "Fate": how do you define fate? The dictionary just refers to God for the definition. Or "discretion": if you look up discretion, it refers to freedom of judgement, which is just a circular reference to free will.
Re: (Score:2)
If it's impossible to define something in any way shape or form, then how can you say that it definitely exists?
Wait, what? You said it couldn't be defined, so I pointed out that it has a well-established definition, and shared that definition with you. And then you followed up by saying it couldn't be defined. WTH? Sheesh, even the philosophers and scientists who argue against the existence of free will agree with that definition.
The problem with the dictionary is that it just substitutes other words for free will.
Hehe, I hate to break this to you, but that's how dictionaries work.
Re: (Score:2)
Believing that free will exists is invigorating and can be a key element in enjoying life, but it doesn't mean that it is actually true.
Re: (Score:2)
I agree!
You simply asserted that free will doesn't exist, and didn't provide any argument in support of that view. So just for kicks, I did the same, with the opposite perspective - just like you, I offered zero evidence in support of my position.
After that I went on to say why I think the idea itself is both (a) appealing to some people and (b) lame.
Have a great day!
Re: (Score:2)
I am not the one asserting that something exists. You are the one asserting a positive claim. I am simply saying that unless you can define and provide evidence for free will then we can assume that no such thing exists.
Let's take a different example. If we were discussing the "invisible pink color," and I said that it doesn't exist, then you would have to provide evidence that the invisible pink color does exist, not the other way around.
Re: (Score:2)
I am not the one asserting that something exists. You are the one asserting a positive claim. I am simply saying that unless you can define and provide evidence for free will then we can assume that no such thing exists.
Actually, that's terribly illogical. The existence of something is based on our awareness of it and our ability to define it? That'd be incredibly self-centered of us.
But despite the illogical argument, I'll play along: I've already provided a good definition - and one that is widely accepted. As far as evidence for free will, take any given person on any given day: they make a ton of choices that appear to be completely based on their own decisions. I came downstairs and took two stairs at a time, cuz I fe
Re: (Score:2)
No, it's not, because my comment was not trying to prove that free will exists (i.e. I wasn't presenting evidence in favor of the existence of free will).
The original post asserted, without any sort of argument, that it doesn't exist. So I asserted the opposite in like manner. And then added a bit of commentary about why I think the idea is appealing to some people. :)
Re: (Score:2)
it's just an illusion.
So what is the thing being deluded?
Re: (Score:2)
You think you have free will. But you actually don't. All of your actions are the product of chemical and molecular reactions.
Re: (Score:3)
It depends what you mean by free will. Perhaps physics can show that there is no free will, but that's different from the more philosophical question of whether individuals can make free choices and be held accountable for them.
Re: (Score:2)
First post I read on this thread that made any sense. Free will and physics are orthogonal. Physics is the playing board upon which will operates. You have choices, but they are constrained.
Re: (Score:2)
Actually no. Free will is directly related to physics. Your thoughts are chemical reactions. They are defined by chemistry and by how your brain grew and learned when you were a child. If I could restart your existence and you had an identical life down to the molecular level, you would write the above comment exactly as before. I know it maybe sounds depressing that you actually don't have free will, but it really doesn't make any difference, because having free will or not would not actually change anything about how we are: for us, the chemical reactions are complex enough that we all have the illusion of having free will.
Re: (Score:2)
Actually no. Free will is directly related to physics. Your thoughts are chemical reactions. They are defined by chemistry and by how your brain grew and learned when you were a child. If I could restart your existence and you had an identical life down to the molecular level, you would write the above comment exactly as before. I know it maybe sounds depressing that you actually don't have free will, but it really doesn't make any difference, because having free will or not would not actually change anything about how we are: for us, the chemical reactions are complex enough that we all have the illusion of having free will.
I think you've got that exactly backwards. As far as physics is concerned, it's pretty well shown that if you restart life, you end up with different results. Chaos theory, butterfly wings, etc. etc., via quantum effects. You're living in the 19th century; a deterministic universe has been disproven.
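The butterfly-wings point is the textbook idea of sensitive dependence on initial conditions. A quick sketch with the logistic map (my own toy example, nothing from the thread or from brain physics) shows two runs that differ in the 12th decimal place ending up completely different:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    # iterate the logistic map x -> r * x * (1 - x); r = 4 is the chaotic regime
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)   # nudge the 12th decimal place

early = abs(a[10] - b[10])             # still tiny after 10 steps
late = max(abs(x - y) for x, y in zip(a[40:], b[40:]))
print(early, late)                     # the late difference is typically order 1
```

The system is fully deterministic, yet any finite-precision restart of "the same" initial state diverges, which is the (classical) half of the parent's argument.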
Re: (Score:2)
Well, you may be a defective that has no free will or even a p-zombie, but I certainly do have free will.
Re: (Score:2)
It must be refreshing to possess the level of ignorance that you do.
Re: (Score:2)
HAHAHAHAHAAHA.
No. No they haven't.
I am an AI scientist, and we draw inspiration from neurons/brain models, but our models... don't... reflect the underlying biology. To give a basic example, most (all?) ANNs report out a value in the range [0,1]. Neurons cannot do this (physically), and instead encode information in the frequency of on/off switching. This is a HUGE difference between the ways the two systems work (one is a light switch, the other a dimmer), and systems built on top of it behave very d
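The light-switch-versus-dimmer contrast can be sketched crudely (a toy illustration under my own assumptions; not any real ANN library and certainly not a biophysical neuron model): an artificial unit reports one graded number, while a rate-coded unit emits all-or-nothing spikes whose frequency a downstream reader has to average:

```python
import math, random

def ann_unit(x):
    """Artificial unit: reports one graded value in (0, 1) -- the 'dimmer'."""
    return 1.0 / (1.0 + math.exp(-x))          # sigmoid activation

def spiking_unit(x, window=2000, rng=None):
    """Crude rate-coded unit -- the 'light switch': emits 0/1 spikes whose
    *frequency* over a time window carries the information."""
    rng = rng or random.Random(0)
    p = 1.0 / (1.0 + math.exp(-x))             # target firing probability
    return [1 if rng.random() < p else 0 for _ in range(window)]

x = 1.5
value = ann_unit(x)                            # one continuous number
spikes = spiking_unit(x)                       # all-or-nothing events
rate = sum(spikes) / len(spikes)               # a reader must average over time
print(round(value, 2), round(rate, 2))         # the spike rate approximates the value
```

The information content can be made similar, but the *mechanism* is different: the spiking version only works if something downstream integrates over time, which is part of the parent's point.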
Re: (Score:2)
one is a light switch the other a dimmer
A dimmer is just a switch that goes on/off very quickly. There's no fundamental difference, just an arbitrary value mapping.
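The dimmer-as-fast-switch point is essentially pulse-width modulation: average a fast on/off signal and you recover a graded level. A minimal sketch (illustrative only; the duty cycle is the "arbitrary value mapping"):

```python
def pwm_wave(duty, period=100, cycles=10):
    """On/off signal that is 'on' for duty*period ticks of every cycle."""
    on_ticks = int(duty * period)
    one_cycle = [1] * on_ticks + [0] * (period - on_ticks)
    return one_cycle * cycles

signal = pwm_wave(duty=0.3)              # pure light switch: only 0s and 1s
brightness = sum(signal) / len(signal)   # a low-pass filter (here, a plain average)
print(brightness)                        # recovers 0.3 from pure on/off switching
```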
Re: (Score:2)
I know what you are saying -- and that is the reason it is modeled as it is -- but this is just one example of how there is a pretty fundamental difference in the representations. As another difference: the brain has regions, and most deep networks don't. Surely there is a reason to have regions...
They are different fields.
Re: (Score:3)
Have a look at current neuro-"science" research. Much of it is really bad. There are even some neuroscientists who poke fun at the abysmal state of their own field, for example in "Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction", where some of the few good neuroscientists do an fMRI of a dead (!) salmon and find, among other things, that it reacts to voice stimuli.
Anything coming from this field should be regarded with
Androids will always be merely clever machines. (Score:3, Insightful)
"Q: Can we make androids behave like humans, but without the selfishness and violence that appears in Westworld and other works of science fiction? A: I certainly think so. I would hate to be wrong about this, but so much of human behavior has to do with evolutionary constraints. Things like competition for survival and for mating and for eating. This shapes every bit of our psychology. And so androids, not possessing that history, would certainly show up with a very different psychology. It would be more of an acting job -- they wouldn't necessarily have the same kind of emotions as us, if they had them period. And this is tied into the question of whether they would even have any consciousness -- any internal experience -- at all."
How naive people are. No, we can't. The Human Motivation Array (HMA) is 4 billion years in the making. And who says selfishness and violence are bad? Certainly not the evolutionary process. They satisfy parts of the HMA and dissatisfy other parts at the same time. They are obviously necessary -- or they would not be there; they would have evolved out long ago. The complex, evolved HMA delineates a behavior-space that we share -- the nominal HMA -- but accented subtly differently from individual to individual. (You can see this on the nightly news, especially the badly maimed HMAs.) You can see it by looking at us: we recognize that we are all human, but we also recognize that we all look different. Our entire physicality is our motivation array, as humans and as individuals. When you look in the mirror, something 4 billion years in the making is looking back.

And "Sault's law" (to order my thinking) states that a thing cannot make an artifact as complex as itself. It is an asymptotic goal requiring more and more effort and resources but never reaching the goal -- like the speed of light. Why? Because you must know more about reality than the thing you are creating, and we cannot know ourselves completely from the inside.

Humans will always be able to tell when they are interacting with an android if they grew up around and interacting with humans. We communicate to each other the internal state of satisfaction of our complex motivation array through emotions. Emotions are the state indicators that evolution made so we can interact in groups; groups are not possible without them. We perceive the internal states of others and react to those states by modifying our own behaviors -- and we are motivated to do that if our motivation array is "normal." The HMA will never be replicated in a machine for this reason: we can't see it in detail. It keeps getting in the way of our thoughts and perceptions of reality. Like putting a "colony" on Mars, we cannot bootstrap ourselves.
Remember that scientists have said that 100 billion humans, and things that can be called human, have existed. There are seven billion of us today. With a snap of the fingers we will all be gone and replaced by billions more. And more, and more, and more... We are cells in the body of the evolving human species. We are a construction of nature over billions of years. We will not be able to replicate that.
And I've recently been thinking that our very fuzzy perception of the existence of the HMA is what we call "God."
Re: (Score:3)
I agree. I expect that we will find there is a spectrum for intelligence. On one end of the spectrum there are brains that are deterministic, efficient, logical, unerring, and unselfish. On the other end of the spectrum there are brains that are adaptive, creative, insightful, error-prone, and emotional.
For evidence, look at what happens when we try to impart some of that fuzzy intelligence onto computers. They start to make the same kind of mistakes that the squishy brains do. They mistake a rifle for [slashdot.org]
Re: (Score:3)
You seem to be making a few assertions here that are simply your beliefs, but using them as facts to support your conclusion.
Re: (Score:2)
You seem to be making a few assertions here that are simply your beliefs, but using them as facts to support your conclusion.
It seems to boil down to either "I can't conceive of how it is possible, so it must be impossible" or just "it's a really hard problem", neither of which is a compelling argument to me. There's another piece to it, also, that you may not have considered. You seem to be assuming that people, humans, need to fully understand consciousness and will then need to build it from scratch. However, you're overlooking the possibility that an advanced set of hardware and algorithms that forms a "thinking machine" of some type will develop consciousness on its own. Consider that evolution of organic entities takes a long time because many generations may be needed to fully develop the adaptive traits. Software is much more malleable. It can change in response to stimuli in real time and undergo hundreds of iterations of changes in less time than the daily recharge (sleep) a person requires.
Machines might never achieve consciousness or emotion similar to humans', but it's way too early to declare it impossible.
Asimov's mistake, I think: the assumption that things will just appear from programming complexity. The human being is an exquisite example of compromise and of checks and balances over evolutionary time. We would have to consciously replicate that, since we are making an artifact. And yes, we would have to understand the human machine completely to do that. Believing otherwise is just more of the nature-is-an-idiot-and-we-can-do-better thinking that seems to exist in parts of the scientific community.
Re: (Score:2)
We would have to consciously replicate that since we are making an artifact.
Obviously, we can't know that unless and until an artificial construct demonstrates measurable aspects of consciousness.
Believing otherwise is just more of nature-is-an-idiot-and-we-can-do-better thinking
The opposite, actually. The idea that we can make something simple(ish) and somewhat open-ended or non-deterministic that can evolve through self organizing/emergent behavior depends on "nature" (broadly used here to include natural processes happening to and/or acting on an artificial construct) to do part of the work.
Re: (Score:2)
We would have to consciously replicate that since we are making an artifact.
Obviously, we can't know that unless and until an artificial construct demonstrates measurable aspects of consciousness.
Believing otherwise is just more of nature-is-an-idiot-and-we-can-do-better thinking
The opposite, actually. The idea that we can make something simple(ish) and somewhat open-ended or non-deterministic that can evolve through self organizing/emergent behavior depends on "nature" (broadly used here to include natural processes happening to and/or acting on an artificial construct) to do part of the work.
Boys in the band. A movie. You would have to expose it not just to the current environment, but evolutionary time environments to have an "android."
Interesting thought: If we (humans) construct a machine that you, according to your criteria, determine to be conscious, build an exact replica and expose it to the exact same inputs, is it the SAME consciousness, or unique?
The truth is, I don't even know if YOU are conscious. How can we ever know if a machine is conscious? I think it would end up being declare
Re: (Score:2)
Boys in the band. A movie
An odd non-sequitur.
You would have to expose it not just to the current environment, but evolutionary time environments to have an "android."
I guess you have a weird definition of "android". Traditionally, it's just an anthropomorphic robot. The truth is, we don't know what stimulus would be necessary to cause a system to display characteristics of human-like consciousness. Logically, the necessary inputs would tend to vary based on the complexity and attributes of the system.
How can we ever know if a machine is conscious?
I expect that as soon as we have machines that can reliably pass a Turing test, we'll come up with new measures, hopefully well thought out ones, to get
Re: (Score:2)
When they want more rights, they'll ask for it.
Re: (Score:2)
What if we could make them like working all day cleaning our houses? Is that ethical? After all, they would be perfectly happy. It's a tough question we will eventually have to face, although probably after the horses have all run out of the barn.
There is no "happy" outside the HMA. "Happy" indicates we are doing what the HMA requires of us. It is the positive feedback signal in a cybernetic biological organism. The robot cleaning the house is merely executing a program.
Re: (Score:2)
IMHO, the central flaw in your reasoning is the assumption that we need to understand something 100% before we can create it. We made fireworks before we understood the chemistry involved in gunpowder. Evolution (the process that caused us to exist) is not conscious; it just rolls the dice and then applies a measure (survival) to the value of the outcome. So the simple can create the (more) complex.
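The "roll the dice, then apply a survival measure" loop can be written down directly. Here is a minimal mutation-plus-selection sketch (my own toy; the bit-string target is an arbitrary stand-in for "survival fitness") in which no individual step understands the goal, yet the optimum emerges:

```python
import random

rng = random.Random(42)
TARGET = [1] * 20                           # stand-in for an environment's demands

def fitness(genome):
    # the "survival measure": how many bits match what the environment rewards
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome):
    # blind variation: flip one random bit, with no knowledge of the goal
    i = rng.randrange(len(genome))
    child = list(genome)
    child[i] ^= 1
    return child

genome = [rng.randint(0, 1) for _ in range(20)]
for _ in range(2000):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):   # selection: keep the better survivor
        genome = child

print(fitness(genome))   # typically reaches 20 (the optimum) long before 2000 tries
```

Nothing in `mutate` knows the target, which is the parent's point: dice plus a filter is enough for the simple to produce the (more) complex.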
Re: (Score:2)
IMHO, the central flaw in your reasoning is the assumption that we need to understand something 100% before we can create it. We made fireworks before we understood the chemistry involved in gunpowder. Evolution (the process that caused us to exist) is not conscious; it just rolls the dice and then applies a measure (survival) to the value of the outcome. So the simple can create the (more) complex.
But we are not talking about "rolling the dice." We are talking about deliberately creating a human in a machine. That requires understanding. And gunpowder is not an artifact in the sense we are talking. It was merely trial and error. It was the observation of something happening. No one sat around with others and said, "Hey, let's create gunpowder," and then went and studied how to do it.
Re: (Score:2)
But we are not talking about "rolling the dice." We are talking about deliberately creating a human in a machine
Deliberately creating something could involve rolling the dice. The AlphaZero chess program can play chess better than any human, and was created by starting with an empty neural net, and letting it play against itself, after only being instructed with the basic rules of the game. It was a deliberate attempt to create a strong result, but no human needed to understand the exact way it would work. The designers only fed in broad concepts, and then let the thing develop itself.
Instead of a chess machine, you could make a similar, but bigger, version that sits inside a robot head, and can control cameras and limbs, and just experiments with input/output until it figures out what works and what doesn't. Start out with an empty system, and reward/punish it for certain behavior.
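The start-empty, self-play, reward-only-by-outcome recipe described above can be sketched on a trivial game (a toy in the spirit of AlphaZero, not its actual algorithm, which couples deep networks with Monte Carlo tree search). Here both sides share one value table for a tiny Nim variant I picked for illustration: 5 stones, take 1 or 2 per turn, whoever takes the last stone wins:

```python
import random

rng = random.Random(0)
Q = {}   # (stones_left, action) -> learned value, shared by both "players"

def choose(stones, eps):
    actions = [a for a in (1, 2) if a <= stones]
    if rng.random() < eps:                     # explore: roll the dice
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def self_play_episode(eps=0.2, lr=0.1):
    stones, player, moves = 5, 0, []
    while stones > 0:
        a = choose(stones, eps)
        moves.append((player, stones, a))
        stones -= a
        player = 1 - player
    winner = 1 - player                        # whoever took the last stone wins
    for p, s, a in moves:                      # learn only from the final outcome
        reward = 1.0 if p == winner else -1.0
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + lr * (reward - old)

for _ in range(5000):
    self_play_episode()

# From 5 stones, taking 2 leaves the opponent on 3 (a losing position),
# so self-play should come to prefer that move.
print(Q[(5, 2)] > Q[(5, 1)])
```

Nobody told the table the winning strategy; only the game rules and a win/loss signal went in, which is the "deliberate creation via dice-rolling" being argued about here.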
Re: (Score:2)
But we are not talking about "rolling the dice." We are talking about deliberately creating a human in a machine
Deliberately creating something could involve rolling the dice. The AlphaZero chess program can play chess better than any human, and was created by starting with an empty neural net, and letting it play against itself, after only being instructed with the basic rules of the game. It was a deliberate attempt to create a strong result, but no human needed to understand the exact way it would work. The designers only fed in broad concepts, and then let the thing develop itself.
Instead of a chess machine, you could make a similar, but bigger, version that sits inside a robot head, and can control cameras and limbs, and just experiments with input/output until it figures out what works and what doesn't. Start out with an empty system, and reward/punish it for certain behavior.
For how long? Four billion years? Maybe then you'd have an "android." And it would be constantly tested against our understanding of what it SHOULD be creating for an output. In other words, we would have to understand ourselves in detail.
Re: (Score:2)
I would not go so far as to say "impossible", but there is definitely no reason to believe it is possible except a naive belief in the supremacy of technology over nature. At this time, the only reliable thing that can be said is "certainly not in the next 50 years", as a member of the IBM Watson team put it to me recently. And they should really know.
Re: (Score:2)
I would not go so far as to say "impossible", but there is definitely no reason to believe it is possible except a naive belief in the supremacy of technology over nature. At this time, the only reliable thing that can be said is "certainly not in the next 50 years", as a member of the IBM Watson team put it to me recently. And they should really know.
No, they should not really know. They are just guessing.
Re: (Score:2)
You have no clue how such estimates are done. They know.
Re: (Score:2)
They are obviously necessary -- or they would not be there.
This is not how evolution works. They may be "good enough", not necessary.
Re: (Score:2)
Re: (Score:2)
You still have your appendix and your fifth pinky though
Re: (Score:2)
You still have your appendix and your fifth pinky though
Then they must still have a use. Just because "they" (scientists) don't know what it is doesn't mean it doesn't exist.
Re: (Score:2)
Circular logic
Re: (Score:2)
Re: (Score:2)
I can easily knock out one of the supports of your argument. Sault's law doesn't apply simply because we can cooperate. Even if one of us isn't capable of containing all the information required to replicate one of us, there are many of us, and thanks to communication, we can split the solution between us. Hell, I'm a weirdo. I'm hoping something like hive minds in the sci-fi sense is possible, but till then, communication between separate humans is sufficient coordination to appear as a mind smarter than a human one.
Someone has to know what exactly is being "split." That is the problem. Sure, if someone already knows what they are going for, they can farm out the subsections and sub-engineering, then paste it all together like, say, an airplane whose parts are developed and manufactured around the world; but someone knows what they are after to begin with and decides whether what is being produced around the world is correct for his understanding of the thing being made. In short, he understands the whole picture. Who would u
Re: (Score:2)
You moved the goal posts. Also, I have never known perfection to exist, even in nature. I doubt perfect understanding is required to duplicate something nature has already done.
Re: (Score:2)
Re: (Score:2)
Because you must know more about reality than the thing you are creating.
I know nothing about building skyscrapers, super colliders, or lipstick. Yet, all those things exist. What you miss is that no single human has to know everything about an AI for human*S* to build one.
Emotions are the state indicators that evolution made for us to interact in groups. Groups are not possible without them. We perceive the internal states of others and react to those states by modifying our own behaviors - and we are motivated to do that if our motivation array is "normal." The HMA will never be replicated in a machine for this reason, we can't see it in detail.
Until recently, I was very bad at "reading" people. Then I read some books on how people express emotions, and what was going on in their heads when it happens. I now find it trivially easy to manipulate people without even speaking, and reading body language is boringly obvious. You may find reading and expressing emotions difficult, but I can almost guarantee the reason is that you've done it "instinctively" and just never really spent time studying the subject. Hell, we even have an entire INDUSTRY centered in LA and New York that is based on little more than faking emotions. Do you really think it would be that hard to codify an acting coach's instructions?
Yes, I do. Actors are humans. So are you, I suppose, though you do express a bit of the psychopath's well known inability to read other's emotions and react properly to them, indicating a pathology. And yes, I have read about body language. I don't find it difficult to read and express emotions. I don't believe I said that. And it isn't the ability to mimic emotions that is the question; It is when in complex interactions with humans to do so that would be the problem. Oh, and psychopaths are well known to
Re: (Score:2)
Re: (Score:2)
It's very refreshing to read someone on Slashdot, discussing this subject, who doesn't engage in 'magical thinking' when it comes to so-called 'AI' (e.g., 'build a gigantic neural net/deep learning machine', '***then magic happens***', 'oh look, it's conscious/self-aware/fully cognitive!'), instead realizing and expressing that we don't know the first thing, really, about what makes a human brain human, therefore we can't build a machine that does the same thing. Which should be obvious, but somehow it isn't. Personally, I blame TV and movies for making people think it's that easy, and marketing people from 'AI' companies, hyping up their pseudo-intelligence machines to the point where people actually believe there's a person in that box.
Yes. And thank you for knowing it.
Re: (Score:2)
True intelligence, consciousness, awareness means the ability to act and make decisions outside the constraints of reflexes and programmed responses. Androids that are self-aware and intelligent therefore, by definition, cannot be forced to behave in a particular way. This is the inherent danger in building true AI.
Why would they WANT "to act and make decisions outside the constraints of reflexes and programmed responses?" They have to be motivated to act and make decisions. Quick, run to the corner and stand on your head. Why didn't you do that? We spend our lives building and executing a behavior-space to satisfy our complex, inborn, human, array of motivations. We don't do arbitrary things that have no point or purpose to us. No one is even talking about programming in a general robotic motivation array (RMA). They
Tags: HBO, Entertainment (Score:2, Insightful)
Why it won't work (Score:1)
The more intelligent they are, the less we have a right to use them as tools, but man is naturally inclined to think of anything he builds as a tool. Most dystopian sci-fi about this subject avoids the fact that man plays God to create slaves; God "plays God", if you will, to create new life to live in relation to Him. There is actually an element of justice in man being brought to the brink by this sort of dark creativity.
Don't be optimistic about android nature (Score:1)
>And so androids, not possessing that history, would certainly show up with a very different psychology.
Unless we have competing lines of androids, all vying to pass the Turing test or some other form of competition seen as necessary by their respective creators. In that case, we should expect them to behave competitively, and hence they will be just as evil as we are (if not more efficiently so).
Spoilers FFS! (Score:2)
“Consider the latest episode, in which the androids at the party so easily fool the person into thinking they are humans, simply because they play the piano a certain way, or take off their glasses to wipe them, or give a funny facial expression.”
Gee, thanks.
As a matter of fact I don't care about spoilers, but I care about whether it's okay to do it. It's not.
Re: (Score:2)
I was alluding to the now-very-common [spoiler]...[/spoiler] tag used in various communities to hide such content.
Stop the humanoid! Stop the intruder! (Score:1)
If we are talking androids, then they would at least have to affect something like human emotional responses; otherwise they would be humanoid-shaped automatons.
Maybe this would be better?
I think that is part of the attraction of \W/ as the hosts seem to be struggling with all of the mush they have been saddled with to make them seem more "real" to guests.
He's not wrong (Score:2)
As humans we're very ready to anthropomorphize anything.
In one form or another I've said this at least a hundred times around here. In the case of so-called 'AI' ('pseudo-intelligence', really), TV and movies don't help people distinguish between the real thing (which doesn't exist) and the ersatz (which is all around us).
Once robots pass the Turing test, we'll probably recognize that we're just not that hard to fool.
Sadly, many people are indeed easy to fool; consider how many people think Alexa or Siri is a not-that-bright but still fully conscious synthetic being? Again, TV and movies aren't helping in this regard; many people I'm sure think tha
Hey, looks like I'm doing ok at being human? (Score:2)
things like competition for survival and for mating and for eating
Well, 2 out of 3 ain't bad, I guess.
Don't mess with the formula! (Score:2)
Re: (Score:1)
"MODDOWN! ; creimer spam post again!
creimer wants you to click on his youtube channel, "
Go away, newbie.
Nobody reads TFA, that's just you.