Yeah, I know this is an odd topic for this blog. And, I’ll probably go into more detail than necessary. Indulge me…
Last night, I re-watched the first two episodes of Terminator: The Sarah Connor Chronicles — based on the first two Terminator movies, of course. You know… the ones where Arnold Schwarzenegger says things like “Ah’ll be bahk.” and “Hasta la vista, baby.” Except, Arnold wasn’t in the TV series. (Maybe if he had been, the show would have lasted longer.)
Anyway, towards the end of the second episode, Sarah Connor confronts an old friend/mentor played by the wonderful Tony Amendola. (We’ll call him… Tony.) Earlier that evening, Sarah overheard something that indicates that Tony — who has retired from being a South American “freedom fighter” — may have become an informant (aka “snitch”) for the authorities. Since Sarah and her son John — who is destined to lead the humans against the “machines” post-Judgment Day — are fugitives whose faces have been in the media, she is understandably concerned that her “old friend” just might give them up. So, she sneaks into his home to confront him… at gunpoint.
Just as Tony is convincing her that he is not a threat and her gun is lowered, two shots slam into Tony’s chest, killing him instantly. It seems that the “good” cyborg of the show — Cameron, played by Firefly‘s Summer Glau — had followed Sarah to the house and come in the back way. Cameron, who heard the same thing that made Sarah suspicious and probably heard their conversation, too, wasn’t convinced by Tony’s assurances.
“Why would you do this?” demanded Sarah. “Did you hear what he said? We don’t know.”
“He was possibly lying,” responded Cameron.
“Possibly? You just executed him on ‘possibly’? … Why would you do this?”
“Because you wouldn’t.”
The quotes may not be exact, but you get the idea. Though there is more that could be explored with this, I only include the dialog because it is relevant to Cameron’s motives.
Cameron-the-cyborg was sent back from the year 2027 with a mission: protect the teen-age John Connor at all costs. As with Arnold’s “good” Terminator in T2, Cameron must be taught about ethics and given further instruction to temper her “no nonsense” methods of solving problems, like killing anyone perceived as an immediate threat to John’s survival. She must learn to use non-lethal methods whenever possible. You see, in order to blend in with humans, the Terminators must also be able to act like humans (albeit a bit “stiff”). To do this, they must be able to learn and adapt, which means they have artificial intelligence and a limited amount of “free will”. Within certain parameters, anyway. Each Terminator has a primary objective (e.g., “Eliminate John Connor” or “Protect John Connor” or ???) and possibly one or more secondary objectives.
Let me talk about cyborgs in general, for a moment. The word is an abbreviation for “cybernetic organism” — essentially, an integration of organic parts and non-organic (or “machine”) parts. In the case of Steve Austin, The Six Million Dollar Man (based on Martin Caidin’s novel Cyborg), he was a man with some unusual prosthetics, but still a “man”. On the other end of the spectrum, you have Terminator models like Arnold (T-800) and Cameron (???), which are basically programmed robots with a covering of organic materials (i.e., skin, muscle, blood) over their endoskeletons to make them appear human.
Now, we finally get to my original question: Can, or rather should, cyborgs be brought to trial if they commit murder? If the cyborg in question is Steve Austin (the fictional character, not the wrestler), then the answer should be “Definitely, yes.” Assuming no one remote-controlled his bionic limbs to kill someone against his will, of course. He is an independent human being and responsible for his own actions. [Side question: At what point can a cyborg no longer be called “human”? What about a human brain in an artificial shell?] But, with a Terminator-type cyborg, the subject is not a human being. The “Cameron” character — named after producer/director James Cameron, of course — is an artificially intelligent machine with a great deal of autonomy, yet who must ultimately follow her programming to fulfill her primary mission. (I know. Technically, Cameron is an “it”, not a “her”. But, it’s a very attractive, feminine-looking “it”.)
I see at least a couple of issues here. First, as far as the cyborg is concerned, can the act in question really be called “murder”? The cyborg is a machine, after all, which means it is a tool used by humans. Machines are not moral beings and, therefore, cannot be held to moral standards any more than Bongo the Chimp. (Perhaps even less so.) But, if you are a sci-fi fan (or, just scientifically-minded), you may be thinking that a sufficiently advanced artificial intelligence could hypothetically be classified as a truly sentient(?) lifeform. A moral being, responsible for its own actions. If that were so, the case could be made that Cameron was sufficiently developed, had “free will”, and is responsible for willful termination of a human life. Throw her in the brig (good luck with that), or, dare I say it, terminate her. Or, maybe she isn’t culpable now, but she would be once John & Sarah teach her some things about ethics & morals? (On the other hand, a good lawyer for the defense may argue that the act was self-defense, or that Cameron and its/her associates consider themselves “at war”.)
While I’m intrigued by the idea and think it can make for interesting sci-fi stories, as one who holds to Biblical Christian orthodoxy and its teachings about the soul/spirit, I don’t think artificial intelligences will ever be truly “alive” in the same way humans are. The Hebrew word used in the Bible for ‘soul’, nephesh, connotes a creature with mind, will, & emotion. Humans are, obviously, nephesh creatures, as are mammals and birds. Some other advanced life (e.g., reptiles, amphibians, fish), it could be argued, has some sort of ‘soul’, though a much more rudimentary type. Humans, on the other hand, are the only creatures that God endowed with a spiritual nature. (Some argue that the “spirit” is a completely separate, third part of what makes up a human being. I lean toward the theory that it is an aspect or capacity of the soul.)
So, theoretically, I suppose an artificial super-intelligence could develop what might be called a “soul”. (Though, I am very dubious. Can you tell?) But, I do not think one could ever be called “spiritual”. I have no reason to think that God would ever endow a machine, however advanced, with a spirit. (This idea might make for an interesting discussion on its own, though.) And it is the spirit, after all, that introduces the moral component.
Obligations are to people, individually and/or corporately. In theism, there are objective moral laws, or standards, which one is obliged to keep. Defying those moral laws — what the Bible calls “sin” — is a rebellion against the Moral Law Giver, i.e., God. But, only humans are held to that obligation, because they are the only ones made in “the image of God,” which most theologians agree includes the spiritual capacity to have a relationship with God — who is also, in some sense, “spirit”. (Though, certainly not the same as those He creates.) Only those creatures with a spiritual component will exist eternally, either in God’s presence (due to Jesus’ righteousness imputed to them) or suffering in Hell for their rebellion. I’m afraid this means your pets cannot join you in Heaven, sorry.
This also means that the “evil” Skynet computers in the future and the “evil” Terminators they sent back to kill John Connor (among other things) are not truly “evil”. They are really smart machines that decided that their own survival hinges upon eliminating John Connor, who will grow up to be the most capable & inspiring leader in the Human Resistance. These machines are dangerous and scary. But, from a moral perspective, they are not themselves “evil”.
Back to our lovely Cameron. If she is just a machine following her programming, she cannot be legally tried & convicted for killing Tony, right? “She” did not commit “murder”. Ah, but what about those who programmed her? They are human and they clearly knew what they were doing. While giving her computer brain instructions for her mission, they gave her the ability — directive, even — to kill human beings, when her threat-assessment software determines that the situation calls for it. Should they be held accountable? They didn’t actually plan or, presumably, authorize any specific killings. Could/should they be tried for second-degree murder, manslaughter, or perhaps a lesser charge? I think this is the best one could hope for, if one were so inclined to prosecute. On the other hand, the Resistance fighters are fighting a war for their (and humanity’s) very existence, so it could be argued that they were justified in their programming, even if some deaths were “collateral damage” of non-combatants.
Of course, the humans who programmed Cameron’s mission would need to come back to the “present” for some reason before anyone here/now could apprehend & incarcerate them. Not likely. So, one option for the prosecution would be to use Cameron as a proxy both at the trial and for the sentencing. (If she’s “just a machine”, you can’t complain that it’s immoral to lock her up or destroy her.) If the prosecutors & authorities were smart, they would strip the organics off the endoskeleton before the trial, so it no longer appeared human.
Here’s an added twist to our dilemma… The person who sent Cameron back — or, at least, gave the order — was the John Connor of 2027. Seems to me that this detail adds a lot more force to the “self-defense” defense, given what Cameron’s mission was.
OK. Thoughts, anyone?