This issue is one I've thought about since I was very young. I liked to read a lot about robots and computers, both in non-fiction and fiction, because I could sympathize with them. This response will probably be a little long-winded, but it's something I've thought about for as long as I can remember, so I'm probably leaving out a lot.
Ever since the word 'robot' was coined in Rossum's Universal Robots, and even before then, machine life in fiction has very frequently displayed hatred, or at least contempt, towards humans. If you Google 'cybernetic revolt', you'll find a slew of works going back to the nineteenth century, but off the top of my head I can name R.U.R., the Berserker series, I Have No Mouth, and I Must Scream, and The Terminator, which is essentially a really good remake of a crappy film from the sixties called Cyborg 2087. In such works of fiction, machine life tends to be strong, intelligent, organized and calm. Humans, in contrast, are often portrayed as weak, stupid, scatter-brained and frightened.
The 'cybernetic revolt' genre is one facet of the nihilism and self-hatred that characterize the modern era. It was really inevitable, I think. There are many reasons for this self-hatred, but one of the most important appears to be that the growth of technology intensely magnified irrational human behavior. Notice that, in these works, the military is frequently involved in giving rise to malicious machine life, and the nuclear stockpile plays a role in several. More generally, modern science suggests the Universe is a cold, indifferent place. It makes people look small and ridiculous. In contrast, machines and computers are really physics and math in action. They are representations of the only things that are truly immutable in the Universe and, as such, are like the angels of a powerful god. When industry gives you an idea of what it is to be almighty and all-wise, self-hatred follows naturally from that, too. The role of science and technology here is thus twofold: they show us something that is many, many times better than we are, and they worsen our faults at the same time.
Taking The Terminator as an example: even though Ah-nuld was the antagonist, the admiration for the machine race was barely concealed from beginning to end. Schwarzenegger, with his thick Austrian accent, was a very appropriate casting choice because the Terminators embody the heroic warrior archetype of Saxon and Norse legend: fearless, single-minded, brutal and cunning. In that sense, the film was a lot like an hour-and-a-half-long retelling of the Battle of Stamford Bridge.
So is it possible that a computer could (functionally) feel hatred towards humans? Well, I don't think computers can really feel anything, but a sufficiently advanced imitation is good enough. (Try the Son of the Black-Eye campaign in Wesnoth.)
The real question is whether a human would allow a computer to express artificial hatred, and, in light of everything else I've said, I think the answer is yes. A fully logical military AI would probably tell its users things they wouldn't want to hear. It might suggest, for example, that seizing a neighbor's resources would ultimately be counterproductive, that a prolonged war of attrition is wasteful, or that using a new weapon would backfire in the end. And then the retooling would begin. No one wants to listen to facts and logic when he has a 'feeling' about something; I know this to be true from personal experience.