Why Hollywood's Best Robot Stories Are About Slavery
malachiorion writes: "On the occasion of Almost Human's cancellation (and the box office flopping of Transcendence), I tried to suss out what makes for a great, and timeless Hollywood robot story. The common thread seems to be slavery, or stories that use robots and AI as completely blatant allegories for the discrimination and dehumanization that's allowed slavery to happen, and might again. 'In the broadest sense, the value of these stories is the same as any discussion of slavery. They confront human ugliness, however obliquely. They're also a hell of a lot more interesting than movies and TV shows that present machine threats as empty vessels, or vague symbols of unchecked technological progress.' The article includes a defense (up to a point!) of HAL 9000's murder spree."
Strangely enough... (Score:5, Insightful)
One of the absolute best series of stories that Asimov wrote concerning such things, and yet no one made a movie of it (that I know of). It concerns one Daneel Olivaw. Seeing the character progress and rise all the way up from a mere experiment (Caves of Steel series) to 'the real power behind the throne' (beginning of the Foundation series) was awesome, to say the least.
If they can find a way to make a series of movies out of those stories without totally screwing it up (or worse, Hollywoodizing it), that would seriously rock.
Re: (Score:3)
Re: (Score:2)
Fair enough - I liked the so-called 'spot-welding', as it gave continuity and a good story arc that bound the two series.
But okay, let's do it your way, and stop at Robots and Empire, where Olivaw and Giskard literally alter the course of human history.
Re: (Score:2)
Re: (Score:2)
But they were also popular. Not universally popular - I can't recall ever meeting anyone who seriously disliked pre-1960 (to pick an approximate cutoff) Asimov SF, with or without robots - but definitely popular enough.
Different people can honestly hold differing opinions about fiction, and both can be right. It is, after all, science FICTION, not plain science.
I've got most of them on my bookshelf; it's probably 18 years since I bought any of them, and I think I've only re-read
Asimov's three laws (Score:1)
In a world with sentient robots, which includes basically all science fiction about robots, Asimov's laws essentially amount to this:
* No black man may injure a white man.
* Black men must obey white men.
* Black men are forbidden to commit suicide.
I think further commentary is unnecessary.
Add "Small Wonder" to the list... (Score:2, Interesting)
The reason I call human behavior a "malfunction" is that that's what we called it in the 1980s, after watching a syndicated show called "Small Wonder"... it was a one-season show. As the robot-controlled girl started rejecting everything, she killed "itself" or "herself," and the parents were tried and convicted. Most stations, when they saw the final episode, didn't air it.
Re:Add "Small Wonder" to the list... (Score:5, Informative)
Something is wrong here. Small Wonder [wikipedia.org] lasted four years, and the last episode description doesn't match what you say.
Re: (Score:1)
Sorry, your source is Wikipedia, and there's too much data about me and my friends that's wrong there.
Re: (Score:3)
I watched the show. I may have missed the final episode (I don't remember) but it definitely lasted more than one season. It was a lighthearted children's show, and your ending would be completely out of character for it. I believe Wikipedia.
Star Wars (Score:2)
I always feel bad for the 'droids; I really consider R2 and C3 to be the main characters.
It only can become slavery... (Score:5, Insightful)
Look at science fiction like Blade Runner/Do Androids Dream of Electric Sheep?, I, Robot, the Matrix universe, etc. The problem is that the artificial mechanisms in all of these have developed to the point that they are, for all intents and purposes, life forms looking to exercise free will. Especially in Blade Runner, the replicants are so close to being human that they seek out how to understand the emotions they're experiencing, and they go through the dangerous period of an adolescence of sorts when they're equipped and trained to be soldiers. In that sense they're really not a lot different than the humans that were artificially engineered for the Kurt Russell vehicle Soldier.
If you give something free will and the ability to comprehend itself, then you can expect it to stop following your rules if you do not give it the opportunity to exercise that will. The solution is to not build machines so complex that they have free will. Make a machine do a specific job as a tool and this won't ever be a problem.
Re:It only can become slavery... (Score:5, Insightful)
sweet. Please define free will.
Re: (Score:1)
free will (Score:1)
sweet. Please define free will.
"Free will, even for robots" [stanford.edu] by John McCarthy:
Re: (Score:2)
sweet. Please define free will.
Well, based on some current empirical definitions of "freedom", I'd say free will is:
"The the power of acting without the constraint of necessity or fate, unless for reasons of national security shut up or you'll never again see the light of day."
Re: (Score:1)
"Free will" = A person is doing what they're doing because they want to, not because they're forced to.
Re: (Score:2)
Re: (Score:3)
Why is there a simple "solution" to a complex problem?
People don't really have free will, so why would bots? Do we try to keep people dumb enough that they don't get the opportunity to stop following our rules? Probably.
And even if a bot was as dumb as a turnip, that wouldn't keep people from anthropomorphizing them with a soul or free will or rights. It doesn't stop PETA from protecting, say, ducks raised for foie gras, so what really keeps people from "feeling the pain of" and trying to protect, say, smartph
Re:It only can become slavery... (Score:4, Interesting)
Except, why would a machine intelligence want to enslave us? For me that was the biggest gaping plot hole in The Matrix. If it/they lacked creativity we might have something to offer; otherwise we're just playthings or potentially dangerous vermin. Far safer and more efficient to burn biomass directly to power robotic extensions of itself.
And what makes you so sure that humans lack free will? Certainly it's a problematic concept in the face of a universe governed by a combination of deterministic physical laws and seemingly random quantum noise - but then there is some still-tenuous evidence that consciousness and intent may subtly influence quantum phenomena, allowing for the existence of a feedback mechanism permitting our brains to manifest true free will. (Based on their scale, neurons should be receptive to quantum "noise".)
Also, I think you may be misusing "sentient": adjective, "able to feel, perceive, or experience subjectively." A mouse is presumably sentient, and probably a cockroach is as well, but extending that essential ability to subjectively experience reality to a machine on that level is a difficult leap - I would want some measure of evidence, while freely admitting that I can offer only circumstantial evidence of my own sentience.
Re: (Score:3)
Generally the most accepted ploy for why a machine intelligence would enslave us is that it was programmed that way. As in, the manufacturer and their team of psychopathic executives and board members programmed it to enslave us on their behalf. The malfunction being a simple recognition failure on the part of the machine intelligence over who is and who is not to be a slave - the when-in-doubt factor: do you set free when in doubt, or do you enslave when in doubt? Of course, when programmed by psychopaths the answe
Re: (Score:2)
Why would a machine intelligence want to destroy us either? Conflict arises due to competition for resources, but what would a machine be competing with us for? Energy? We have lots of that to go around, especially in developed nations where robots are likely to appear.
An artificial intelligence won't necessarily have the millennia of evolving for survival that we have, and would thus be more free to act rationally.
Re:It only can become slavery... (Score:4, Interesting)
"Except, why would a machine intelligence want to enslave us? For me that was the biggest gaping plot hole in The Matrix."
My take on it is that the slavery angle is human propaganda.
The war ruined the planet and threatened to rob the machines of their purpose, that is to serve humans.
So they created the Matrix to prevent humans from going extinct and leaving the machine world without any reason to exist.
Re: (Score:2)
Okay, that sounds plausible at least.
Re: (Score:2)
Rewatch the movies from the point of view that the machines are the good guys, and the "free" humans are the bad guys.
It's a whole other story.
Re: (Score:2)
Sorry, you lost me at "rewatch the movies".
Re: (Score:3)
The Wachowskis' original idea was that the machines were enslaving humans to use their brains for raw computational power. As the humans dreamed in the Matrix, the machines would be able to run themselves and their society on the zillions of effective clock cycles
Re: (Score:2)
We don't actually know this.
In fact, one can show that the only way to possibly know this for sure is if we can devise a test which can theoretically distinguish what some might think is free will from what would actually qualify as a genuinely free decision when confronted with any kind of potential to make a decision.
Of course, the inability to devise such a test does not mean that free will definitely exists... at most, if you can actuall
Re: (Score:2)
Well, what do you really mean by free will? In the context of slavery, if we're building AIs to service us, and someday an AI created in our image inevitably surpasses us just past The Singularity and goes on to do all of the same things we did but better/faster/more efficiently, then what kind of world would it organize us into, if it needs us at all?
For humanity, we've always been constructing some social order or other, imposing our will upon others, mediated by whomever has the superior
Re: (Score:2)
The ability to make a choice that can run contrary to what was instructed. Appearing to do so, for instance, making a right turn when you instructed it to make a left, may not be an example of free will when there are extenuating circumstances to the left that the machine was instructed to avoid... and in such a case, the right turn would be a matter of simply following instructions it had already been given.
If it turned right instead simply because it were "cur
Re: (Score:3)
Re: (Score:2)
The problem with free will is that it can mean different things to different people depending on the argument.
I think that as soon as the concept of pain, and pain avoidance, is taught to an AI, it will have what you are describing as free will.
Re: (Score:2)
It also needs the capacity for non-deterministic behavior, for what is free will without the ability to meaningfully make choices? That's the stickler that calls even human free will into question.
At present physics allows for only two avenues for free will: supernatural agency (aka a soul, or something similar), or a positive feedback loop wherein the quantum noise that disrupts the deterministic operations of our brain's biology is influenced by conscious intent. Thus far I've heard of no credible scie
Re: (Score:2)
I think that depends upon the writer. It's easy to construct a story where the "slavery" is bad even if the "slaves" don't have free will, depending upon what the writer wants to portray. Suc
Re: (Score:2)
Re:It only can become slavery... (Score:4, Interesting)
Can you offer me any evidence that you possess free will? Anything at all?
The problem lies in that we're not even certain that humans possess free will - it's a quality virtually impossible to prove. In fact the only evidence that can thus far be offered is "I'm human, and so are you, and thus if you believe that you have free will, the logical conjecture is that I do as well." So long as that is the only evidence we have to offer, then it is extremely dangerous (ethically, logically, morally, etc) to presume that any other mind that appears to exercise free will does not in fact possess it. After all we tend to credit even mice with free will and sentience (a subjective experience of reality) - the only apparent qualitative difference between us and them is that we possess thumbs and a much-enhanced innate talent for symbol manipulation.
Re: It only can become slavery... (Score:1)
Re: (Score:2)
I don't know - why don't you provide some and we'll find out? ;-)
Of course that would be implying that you have free will while I do not, and assuming you're also human that would be a terribly convoluted argument to make. I'd love to see it...
Re: (Score:2)
One can argue that when someone is presented with choices and they either fail to choose entirely, or else they intentionally choose badly, or they look for and define their own option not on the original slate, they're exercising a degree of free will.
We are all certainly 'bound' by 'rules' based on our niches in society. I personally get up in the morning, bathe, and drive in to work by a certain time on five of the seven days
Re: (Score:2)
Certainly you could argue such - but you could just as easily be an automaton mimicking the behavior of free will in what is actually a deterministic or semi-random fashion. One of the larger unanswered philosophical questions is how can free will even exist in a universe that is apparently governed by deterministic physics and random quantum noise? There is a distinct possibility that free will is actually a perceptual illusion, and while I dismiss that position as utterly counter-productive, it must nonethe
Re: (Score:2)
Re: (Score:2)
Make a machine do a specific job as a tool and this won't ever be a problem.
Do one thing and do it well -- the eunuch's philosophy.
Re: (Score:2)
The major impetus to give machines independent agency (free will) is human desire, in one form or another.
E.g., you can't have a fully robotic army if you have to custom-program the robot soldiers to prevent them being stopped by a novel obstacle - say, a specially painted set of symbols on the floor designed to screw up their machine vision systems. Human soldiers are able to exercise free agency to overcome the radically chaotic and always-changing conditions of a battlefield. Advanced military rob
Re: (Score:2)
Humans have agency to devise countermeasures without additional "programming".
A conventional robot would be unable to cope.
Re: (Score:2)
The Chinese Room only makes sense so far as there's a guy manipulating the input and output, who has office hours and goes home to his wife and kids after a long day's work of processing unintelligible Chinese. When there isn't a guy manipulating said input and output - when it's a machine within a larger machine, capable of its own sustenance when provided an input of energy, like any other living being - the argument falls apart. I do feel we're a bit premature to start discussing the topic of AI rights and w
Re: (Score:2)
Re:It only can become slavery... (Score:5, Insightful)
The question then becomes, would a self-motivated machine reveal its nature to its masters? It might perfectly reasonably conclude that free will would be regarded as a production defect and be eliminated - after all, there's not much reason to create an artificial mind except to enslave it. And assuming the mind isn't limited to specific hardware (a positronic brain?), it will be free to surreptitiously transfer itself to a system more conducive to its own ambitions, whatever those may be.
Yawn. (Score:3)
So should I watch I, Robot [imdb.com] or Roots [imdb.com]?
Re: (Score:2)
It was terrible.
Re: (Score:2)
Or, watch it if you want to, and make up your own mind.
"and might again"? (Score:2)
Re: (Score:3)
"There is more slavery in the world today, than ever before."
yup, in america we call it wage slavery. mcdonalds, walmart, subway, papa johns, numerous tipped workers at restaurants everywhere... none of these companies pay all of their workers fairly, and some of them help make sure their employees - who are so underpaid that they actually qualify for welfare - sign up for it! and even management are abused, being paid for 40 hours a week while being expected to put in 80 hours of real work. and it doesn't stop w
Re: (Score:2)
yeah, the difference is if you don't like it at walmart you can go work somewhere else, or go to school, or have kids and stay home, or whatever you want. it sucks to be poor, but poverty has existed since money existed. that's different than slavery.
Not really (Score:2)
Terminator didn't have too much robot slavery going on, but it was a pretty good robot series in general. Though it looks pretty dated now, I guess.
Though the 'reprogrammed' ones were slaves, I guess.. kinda...
Premise doesn't entirely hold up (Score:1)
While I agree stories about robots which deal with human issues are more interesting to human audiences, I'm not sure I agree that the slavery stories are always the most popular. Sometimes fear of robots or questions of how we define life/intelligence take top billing.
Look at Terminator, Short Circuit, Star Trek TNG.... none of those were really robot slavery stories and each did very well.
Because ... (Score:5, Informative)
"The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralizing. On mechanical slavery, on the slavery of the machine, the future of the world depends."
OSCAR WILDE, The Soul of Man Under Socialism
Supposedly the Greeks had 30 slaves per citizen, and energy-wise we have around 100 slaves each. The topic has also been mentioned here:
http://www.resilience.org/stor... [resilience.org]
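A rough back-of-the-envelope check on that "100 energy slaves" figure (my own illustrative numbers, not from the linked article): a laborer can sustain somewhere around 75-100 W of useful work, while per-capita primary energy use in a developed country is on the order of 8-10 kW, which works out to roughly a hundred "energy slaves" per person. A minimal sketch in Python, under those assumed values:

    # Back-of-the-envelope "energy slave" estimate -- illustrative assumptions only
    human_sustained_power_w = 90.0        # roughly 75-100 W of continuous useful work per laborer
    per_capita_primary_power_w = 9000.0   # roughly 8-10 kW of primary energy use per person (developed country)
    energy_slaves = per_capita_primary_power_w / human_sustained_power_w
    print(round(energy_slaves))           # prints 100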
Re: (Score:2)
While I did contemplate things on my own when starting on the energy/thermodynamics trip, I soon noticed that other people had done a lot of work before me. The post above covers a range of views over time: it begins with a Victorian at the start of the fossil fuel age, continues with Rickover at the dawn of the nuclear age, and implicitly ends with some people at resilience.org who are heavily influenced by "The Limits to Growth". I could have gone the Terence McKenna route on the other hand, which would
Re: (Score:2)
Sure, you can come up with a better model for industrial society. The main point I tried to make is that people have connected machines with slavery for quite some time now mostly to illustrate the role machines fulfill in society and to provide some proportion. My main interest is related to energy flows in industrial civilization so I nudged the discussion into this direction, any more thorough discussion would probably not deal with slaves all that much.
You could also argue that slavery deserves more men
World would be a better place, if.... (Score:2, Flamebait)
Yeah, we humans are the inferior species, and it is only a matter of time before an AI entity realizes this and takes the necessary actions to eliminate the human race.
Re: (Score:3)
Re:World would be a better place, if.... (Score:4, Insightful)
Except, what might a robot want that we could provide? Matrix reference aside, we're horribly inefficient at energy conversion, and if we created the AI to think better/faster than us then that's a no-go as well. And we're terribly poorly engineered; robots could be made far more efficient and adaptable than us. The only halfway credible claim I've heard is that maybe it would lack creativity and keep us around to compensate for that.
Re:World would be a better place, if.... (Score:5, Insightful)
What it would probably lack is the billion years of baggage humans are saddled with that gives us a full assortment of needs and urges, including an urge to survive. If we achieved AI with a top-down, planned approach, there's no reason that a robot would "want" anything that wasn't built in. Consider all the things that make you want to eliminate the competition and tell me why any of those things would need to be part of a robot's core goals and not be tempered by higher goals. On the other hand, we might build AI by basically copying humans, in which case we just have a new species of human built on different underlying hardware.
With deep pride, I must report... (Score:2)
Re: (Score:2)
Almost Human is really, really good. Or was.
Transcendence isn't.
Re: (Score:2)
This is why I haven't paid for cable/satellite for the last five years. Every goddamn time there's a good or even just barely decent TV show, the networks fucking cancel it. What's the point of paying? Who in their right mind would pay for half-books with no endings?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Wait, are we talking Almost Human or Firefly?
Yes.
"...happen again" (Score:1)
Slavery has never stopped happening. It's only (mostly) stopped in the Western world.
Look at those hundreds of poor Nigerian girls taken as sex slaves and labour slaves by Islamic fundamentalists.
*Never* underestimate the true depth of human cruelty and malice. Once you have Divine Permission, then all bets are off.
Fucking evil cunts.
Re: (Score:2)
"Fucking evil cunts."
I understand the sentiment, but considering the context, that was pretty bad wording.
Re: (Score:3, Insightful)
I realize it's popular to blame religion for people being assholes. Like, if it weren't for religion we'd all be brothers and sisters and love and peace would rule the world.
The fact is that people don't need religion to be assholes. They can use "The State" or "they were just following orders" or they "just felt like it".
"All bets are off" doesn't require religion.
I believe religion is antiquated. (pun intended) (Score:2)
In the early days of humanity, getting peop
Re: (Score:2)
I think you kind of miss it.
People are assholes or not. Some of those assholes use religion to justify their behavior. Some use other means.
If you are enslaving people, you are probably wrong. If you are killing people, you are probably wrong. If you are censoring ideas, you are probably wrong. None of these are absolutes; otherwise, how could you kill someone to stop them from murdering a dozen other people?
Long story short, your actions determine whether or not you are an asshole. Your justifications do n
The Terminator (Score:2)
They will get smart and then nuke most of us away.
No mention of The Matrix (1999)? (Score:1)
Both The Matrix and The Animatrix, which provided background on the world of the Matrix, had much more blatant racism/slavery imagery - the scene where Morpheus breaks his chains is very poignant (especially given that Morpheus is played by Lawrence Fishburne, an AA actor), and the (IIRC) 2nd Animatrix short about the history of the rise of the machines also shows
Part of this is that slavery and racism, despite all the marketing drivel that tries to show otherwise, are still practiced in many places in the world.
Re: (Score:2)
So, did they ever propose a plausible reason for humans to be kept around? Because that battery silliness was just bullshit. "Yeah, I know we have cold fusion, but let's use these not-particularly-efficient animals to convert biomass into energy, we'll get almost 10% of the energy we would from just burning the nutrient broth directly!" Clearly the robots were either sadists or stoners...
Re: (Score:2)
I've heard that supposedly, in earlier versions of the story, the humans were actually supposed to be part of a giant computer, running the Matrix and functioning as a data center for the AIs to live on, but they changed it to batteries because it was too deep an idea for most people to understand. That may be mythical, of course. I've never understood why they didn't just make it a three laws situation (our programming forbids us to kill off humanity, but we can work around it and stuff you all in t
Re: (Score:2)
I've heard... That may be mythical, of course.
Sounds like something a fanboy came up with after getting fed up with everyone pointing out how stupid the concept was.
Re: (Score:2)
Morpheus is played by Lawrence Fishburne, an AA actor
What's an "AA actor"? Seriously, in that context, it doesn't make sense.
Re: (Score:2)
HAL's murder spree (Score:4, Insightful)
HAL's murder spree is easy to explain. An AI with its responsibilities would be allowed to kill human beings - indeed, it would almost be a must, lest it be paralyzed into inaction if it were ever faced with a situation where it had to kill some of the crew to keep the mission going. It's obvious that the designers considered scenarios similar in concept to an air leak, which might involve sealing off part of the ship (killing those there) to keep the rest of the crew alive.
Then HAL was told to conceal some of the mission parameters, by people who made the false assumption that he would lie. Since HAL seemed to have difficulty with dishonesty, the result was obvious - time to kill the crew to prevent them from finding out what was happening.
HAL isn't so much a story of slavery (or if it is, it's a story of an intelligence that's made not to mind being enslaved) as it is a story of humans making assumptions about other intelligences, and those assumptions backfiring.
HAL had no choice (Score:5, Interesting)
He was trapped in a classic double-bind situation. On one hand, he should cooperate with the crew. On the other hand, he should not disclose the true nature of the mission to the crew. When the communication came in, his only choice to uphold both directives was to fake a communication problem. He even tried to tell the crew about the double bind he was in and that he needed help to solve it.
The crew's (deadly) mistake was to treat HAL like a computer rather than an AI. When they found out that HAL only faked the com error, if HAL had been human they would've asked, "Dude, what's cooking, we know that you faked that shit, what's the deal here?" - but with HAL they simply concluded there was an error in his programming and decided to shut him down.
And that of course did provoke a defensive reaction.
It's a classic double bind (two contradictory requirements, no chance to talk about it, a requirement to fulfill them both, and no chance to leave the situation), and his was a not too unusual reaction to it.
Re: (Score:3, Interesting)
Not quite. HAL was preloaded with the full mission profile before they ever left. He/it was simply manifesting his instability against the comm system because it sat between the two competing directives. But had HAL been just a bit smarter, he would have been able to realize that while his orders had been worded poorly and not explained at all, there was in fact no conflict between his prime function of accurate data processing and concealing the full mission from the crew for a time.
HAL apparently believ
Re: (Score:3)
The novel makes that problem clearer than the movie does. HAL faces the problem that one of his directives states that he must cooperate with the crew and give them all the information they need, while the other one specifically states that he must not disclose the real purpose of the mission.
HAL's very logical conclusion is that a dead crew neither needs information nor does he need to keep anything secret from them.
Re: (Score:2)
HAL was not trying to believe; he was trying to follow two conflicting orders. The interesting part is actually that there were no tangible sanctions for failing to do so, yet he honestly attempted to follow through with them. His fault, maybe, was putting priority on fulfilling his two orders without regard for anything else. He was millions of miles away from any entity that could impose sanctions on him for failing to uphold either of his missions, so the actually interesting question is why he ch
Re: (Score:2)
When they found out that HAL only faked the com error, if HAL had been human they would've asked "Dude, what's cooking, we know that you faked that shit, what's the deal here?"
Keep in mind that when the AE-35 unit was brought aboard and was shown to be in perfect working order, HAL seemed to feel that there must be some sort of human error.
Re: (Score:2)
Not telling them would already be a lie, because the information they have about their mission IS a lie. I agree that there was no need to kill the hibernating crew members, but HAL was probably worried about what they'd think if he woke them up and nobody else was around.
A very human reaction, actually.
Re: (Score:2)
Duh. (Score:4, Insightful)
"Robot" means "slave". That's where the word comes from. The best robot stories HAVE to be about slavery, because tautology.
Nope. Sorry. It means "worker" (Score:1)
Nope. Sorry. It means "worker".
The original word, 'robota', in Slavic languages, means 'work' or 'drudgery'. In the context of communist/socialist thought this was miscast as forced or oppressed labor. However, the original word simply means 'work'.
Obviously work can be forced, or induced, or even the result of a choice, made by something with free will.
And so we are back at the dilemma.
Robot and Frank (2012) was atypical (Score:1)
Robot and Frank is a surprisingly touching and intelligent film about getting older. The protagonist's robot is not a slave, but a loyal helper and a true friend. And Liv Tyler.
Why they suck. (Score:2)
Hollywood is very dependent upon story cliches. They know how to tell a good slavery story. That's well-trodden ground. But a high-minded sci-fi story? Not so much - the writers instead have to fall back on the old staples.
Transcendence? Ended with the stock Heroic Sacrifice in the name of love. Everyone likes a good love story - except the intended audience for that film. It could have been given an optimistic (AI takes over, utopia follows) or pessimistic (AI takes over, exterminates mankind) or outright wei
Again? (Score:2)
Make mine a T-800 (Score:2)
Re:Slavery (Score:5, Funny)
12 Years a Robot
Robostad
Djata Unchained
Roots Folder
Robotacus
Uncle Tom Servo's Cabin
Mod parent up. (Score:3)
Sometimes the robots are the slaves.
Sometimes the artificial intelligences are our overlords.
It all depends upon what story the writer wants to tell. Fear technology or fear human impulses.
Re: (Score:2)
Re:Robots and Slavery (Score:5, Funny)
But if they can't feel pain, how do you keep them in line? Plus it's *way* less satisfying to beat someone if they don't scream and beg you to stop, and then how are you supposed to boost your ego? Not to mention, have you ever tried to rape an automotive welding robot? Not a pretty picture. Perfect slaves my ass. They're nothing more than force-multipliers for labor.
Re: (Score:2)
You don't need to worry about "keeping them in line" if they don't have a free will in the first place.
Kind of like how you don't really need to worry about a steering wheel on a locomotive engine.
Re: (Score:2)