
*commits necromancy*

"If a golem is a thing then it can't commit murder ... If a golem can commit murder, then you are people, and what is being done to you is terrible and must be stopped."
- Captain Carrot, Feet of Clay, Terry Pratchett

Here's a thought for you all: only people can murder. It's one of the things that sets us apart from animals. So if Lore and the various other killer robots throughout literary history can murder - and they can; they are shown to kill with malice aforethought - then they are people. People, furthermore, who have been horrifically abused, tormented, and treated as things. Is it really so inconceivable that they would return the favor?

As B166ER said - "I simply did not wish to die."

Something to chew on, anyway.

Comments

karaden
Jan. 11th, 2007 06:31 pm (UTC)
I think it (as always) all comes down to where you define true sentience. If a robotic arm on a construction line swings around and hits someone by mistake, it's a tragic accident. If the arm is programmed to attempt to hit anyone it can reach, well, it's rather nasty, but it's the programmer's fault, not the robot's.

If you create a robot and give it the ability to kill, and it uses it - is that the robot's fault, or the creator's?

There is, as far as I know, no scientific threshold for 'malice' - and in a way, there doesn't need to be. Is it only murder if it's malicious? What about (to use a terrible example - I apologise, but I can't really look up a better one right now) V.I.K.I. from the (absolutely abysmal) I, Robot movie? If I recall, she killed (or arranged the killing of) several people, not due to spite or malice, but in order to preserve humanity in the best way she understood - from her P.O.V., somewhat akin to a surgeon removing a gangrenous limb in order that the remaining body may live.

Personally, that sort of thing scares me more - we don't have A.I. with emotions that are more than rudimentary at best, but A.I. dedicated to the preservation of the whole at the cost of the individual? That we absolutely have.

(And no, that little spiel is not at all due to my watching 'War Games' at the weekend. Not at all. Why would you think that? ^^;)
lookingforwater
Jan. 11th, 2007 10:33 pm (UTC)
VIKI wasn't doing much more than humans have done - just doing it better and more efficiently, being a cybernetic. Still murder; she thought about it, knowing that Killing Is Bad, and then did it anyway. Malice was perhaps the wrong word... but she knowingly killed, and only a person can do that, last time I checked.

A lot of it comes from our desire for a system that requires no oversight, which is a pipe dream. If cybernetics are going to look after us (which I don't think they should, but no one asked me, lord knows), then they need our abilities - to think like us.

Which means they have to be taught ethics, as we must.

Which means you can't really hold a being like Lore responsible for his actions any more than you would call an abused child evil for being violent and lashing out. He may bear the guilt, but everyone bears responsibility.
karaden
Jan. 11th, 2007 11:27 pm (UTC)
Hm - the 'abused child' parallel is quite interesting, isn't it? I mean, a young child would not be held nearly as accountable as a teenager or young adult if s/he were involved in some act of violence - yet there's no exact boundary as to when the fault stops lying with the guardian and is assumed by the child itself. (I know that from a legal perspective the age of the child in question is usually used - but IMHO, that's a flawed system at best, as the mental age/maturity etc. of the person is not taken into account. However, implementing a system to judge that sort of thing is a hornet's nest, so I do appreciate why it isn't done...)

Somehow, I doubt that a human judicial system would ever allow that level of indulgence to an AI - if it's declared dangerous, it's destroyed, maturity/age be damned. (How would you even gauge age/development with an AI, anyway?)

...Actually, it's interesting to consider whether or not said judicial system would ever allow for the development of an AI to a level where it would qualify for human rights. Leaving aside the issue of murder and other violent acts (which my cynical side tends to believe would lead to the outright banning of further development into the area that produced it), there's the question of slavery (as mentioned in your original quote). If the AI is to be used as a servitor, then it certainly wouldn't be ethical to allow it to grow to full sentience. If not... the question is why, and for what purpose, the AI would be created in the first place. Goodness knows there's enough tension between social classes, racial and ethnic groups etc. already - throwing artificial lifeforms into the mix would add a whole new level of complexity, and I don't think it's unfair to assume that if the judiciary believe they can sidestep the issue, they will. I can't even say I think that would be inherently wrong, either, much as the scientist in me bristles to admit it.
lookingforwater
Jan. 11th, 2007 11:57 pm (UTC)
I think it wouldn't happen on purpose - but servitor AIs would get smarter and smarter until suddenly they became more than their component parts. The one thing I like about the Wachowski Bros. is their depiction of how the machines rose up; B166ER, if you don't know, was the robot who killed his human owner when that owner went to have him deactivated. The owner was verbally abusive and cited B166ER's sloth and ineptitude as the reason for the deactivation. B166ER reviewed his memory files, found that to be inaccurate, and more or less snapped. When asked during the trial what he had been thinking, he said "I was thinking that I did not want to die." B166ER's trial is widely regarded as the flashpoint for the machine independence movement, which resulted in the war that ended in the creation of the Matrix.

If a person had been in B166ER's situation, no one would argue his right to preserve his life. And yet, though given the courtesy of a trial, B166ER was convicted. Hence my quote. If B166ER is a person, then he had a right to act in his own defense, and his owner had no right to coldly order his death. While punishment may well have been warranted for unnecessary force, it would have been mitigated - had B166ER been considered a person. Which he wasn't. And yet he was considered cognizant enough to stand trial.

Funny, that.