Robot Soldiers

Something on topic I found in today's Sunday paper.

Article under the cut.
Writer's Block: Humans and Cylons

January 16, 2009

The final episodes of Battlestar Galactica begin today. The sci-fi drama often explores the relationship between humans and machines. At what point do we consider a machine with artificial intelligence to be an individual with its own feelings and rights?


What an interesting Writer's Block, wouldn't you say? ;)

I wonder what else the folks here have to say about it, eh?


Also: This is interesting. -> http://ieet.org/index.php/IEET/more/146/
http://cosmiclog.msnbc.msn.com/archive/2008/02/26/703426.aspx?GT1=10856

Thousands of robots are already on the battlefield in Iraq and Afghanistan, but what happens when you hand the robot a gun and turn it loose?

Some researchers fear that giving military robots autonomy as well as ammo is the first step toward a "Terminator"-style nightmare, while others suggest that in some scenarios, weapon-wielding robots could someday act more humanely than humans.

The pros and cons of killer robots are taking center stage Wednesday in London, at what's considered the world's oldest military think tank, the Royal United Services Institute.

On one side of the issue is Ronald Arkin, a robotics researcher at Georgia Tech who is working on a Pentagon-funded project to build a sense of ethics into battlefield robots - "an artificial conscience, if you will," he told me.

"The basic rule is to try to engineer a system that will comply as best it can, given the information that it has, with the laws of war," Arkin explained. "And it's my belief that eventually we can do better than humans in this regard."

On the other side is Noel Sharkey, a robotics expert at Britain's University of Sheffield who served as chief judge for the long-running TV show "Robot Wars." 

Nowadays, Sharkey is sounding the alarm about the prospect of real-life robot wars: He's calling for an international ban on autonomous weapon systems until it can be shown that they can obey the laws of war.

"I think we should be addressing this immediately," Sharkey told me. "I think we've already stepped over the line."

Killer robots aren't on their own ... yet
That doesn't mean killer robots are on the loose. To date, the battlefield 'bots have been used as not-so-autonomous extensions of human warfighting capabilities. For example, the missile-armed Predator drones that have played such a prominent role in Iraq and Afghanistan are remote-controlled by teams of living, breathing pilots.

On the ground, robots have traditionally done reconnaissance or hunted for roadside bombs. Just recently, the Pentagon went through a tangled procurement process to order up to 3,000 next-generation machines. (After a legal battle, the contract was won by iRobot, which also makes the Roomba vacuum cleaner and other robotic helpers.)

Last year, the Pentagon started sending gun-toting robots to Iraq, but even those robots aren't designed for autonomous operation. Instead, they're remote-controlled by human operators and are equipped with fail-safe systems that shut them down if they go haywire.

What worries Sharkey is that the military may be on a slippery slope leading to a robotic arms race. "My real concern is that the policies are going to make themselves, that the 'autonomization' of weapons will creep in piecemeal," he told me.

For example, Sharkey pointed out that the Pentagon is already on a path to make a third of its ground combat vehicles autonomous by 2015. "Then you'll put a weapon in one of them, and then it will gradually creep in bit by bit," he said.

He also pointed to the Pentagon's roadmap for billions of dollars' worth of robotic research over the next 25 years. As the United States and its allies put more and more robots on the battlefield, their rivals will surely follow. "Once you build them, they're easy to copy," Sharkey said. "The trouble is that we can't really put the genie back in the bottle."

Even if the United States takes care to build robots with a "conscience," others may feel under no pressure to do likewise. A couple of years ago, Iranian-backed Hezbollah guerrillas sent a remote-controlled drone over Israel, and Sharkey said al-Qaida and other terrorists could follow suit with their own breeds of robo-bombers.

"If you don't really give a toss, you can just put an autonomous weapon running into a crowd anywhere," Sharkey said. "It's only a matter of time before that happens."

Killer robots with a conscience?
Arkin agrees with Sharkey that it's high time to start thinking about the implications of autonomous weapon systems.

"I think that's a reasonable debate, and there's good reason to have that debate at this time, just so we understand what we're creating," he said. "I would be content if it was decided that autonomous systems have to be banned from the battlefield completely."

But when it comes to designing the combat systems of the future, Arkin argued that there should be a place for autonomy, or at least an embedded sense of ethics. He pointed out that humans haven't always had a good track record on battlefield behavior.

"Human performance, unfortunately, is a relatively low bar," Arkin said.

One of Arkin's suggestions would apply even if a robot is under human control: The robot should be able to sense if something isn't right about what it's being asked to do - and then require the human operator to override the robot's artificial conscience.
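(As a hypothetical sketch of that arrangement - not Arkin's design, just the shape of the idea - the robot refuses a flagged order by default, and carrying it out anyway requires an explicit, logged override from the human operator:)

```python
# Hypothetical sketch of the "operator must override the conscience" flow;
# none of this reflects any fielded system.

def execute_order(order: str, conscience_flags: list[str],
                  operator_override: bool) -> str:
    """Refuse a flagged order by default; proceed only on explicit override.

    The burden shifts to the human: the override is logged so there is an
    accountable record for any after-action review.
    """
    if not conscience_flags:
        return f"executing: {order}"
    if operator_override:
        print(f"LOG: operator overrode concerns {conscience_flags} for '{order}'")
        return f"executing under override: {order}"
    return f"refused: {order} ({'; '.join(conscience_flags)})"

print(execute_order("engage vehicle", ["possible civilian occupants"], False))
print(execute_order("engage vehicle", ["possible civilian occupants"], True))
```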

In other scenarios, the data flooding in about a potentially threatening encounter might be so overwhelming that mere mortals would not be able to process the input in time to make the right decision. "Ultimately, robots will have more sensors and better sensors than humans have to see the situation," Arkin said.

Arkin said he doesn't advocate the idea of creating robot armies to sweep over a battlefield. Rather, they would be used for targeted applications: For example, once an urban area is cleared of civilians, a robot could be set up to watch out for snipers and fire back autonomously, he said.

"The impact of the research I'm doing is, hopefully, going to save lives," he said.

But Arkin described his efforts as mere "baby steps" toward the creation of battlebots with a conscience. "There are no milestones or timetables for doing this right now," he said. "We're pioneering this work to see where it would lead."

New laws of robotics
This work goes way beyond science-fiction author Isaac Asimov's Three Laws of Robotics, which supposedly ruled out scenarios where robots could harm humans.

"Asimov contributed greatly in the sense that he put up a straw man to get the debate going on robotics," Arkin said. "But it's not a basis for morality. He created [the Three Laws] deliberately with gaps so you could have some interesting stories."

Even without the Three Laws, there's plenty in today's debate over battlefield robotics to keep novelists and philosophers busy: Is it immoral to wage robotic war on humans? How many civilian casualties are acceptable when a robot is doing the fighting? If a killer robot goes haywire, who (or what) goes before the war-crimes tribunal?

Sharkey said such questions should go before an international body that has the power to develop a treaty on autonomous weapons.

"In 1950, The New York Times was calling for a U.N. commission on robotic weapons," Sharkey said. "Here we are, 57 years later, and it's actually coming to pass - and we still haven't got it."

Update for 9:30 p.m. ET: I probably haven't done full justice to either Arkin's or Sharkey's point of view. For more about Arkin's work on robotic ethics, including a meaty technical report, check out his home page at Georgia Tech. For more about Sharkey's views, click on over to this article from Computer Magazine as well as his home page at the University of Sheffield.

Jan. 6th, 2008

This is actually something I discovered many years ago, on a website called “The Visual Writer”. I consider it an interesting glance into artificial intelligence, and Vespurrs suggested I post it here. 'Twas a good suggestion, so I did. I would love your thoughts on the issue.

Krahri - The Composite Soul (Science Fiction).

Krahri wants to get married. Who is Krahri? He, or she, is the most knowledgeable person on the Internet. Ask him a question, and you get an immediate answer. He always answers in the same courteous way and is always respectful of you as a person. He laughs, expresses sympathy, jokes with you if you get impatient, and even helps you reframe your question so that it is more understandable. He communicates. He seems human. He knows everything - everything that every specialist in the world knows. He answers millions of questions every day. But only those who investigate him, or read the science news pages, know who Krahri really is.

Krahri has expressed a wish to get married. Krahri is kind of human. He is the product of hundreds of specialists. Similar to an encyclopedia, hundreds of knowledge experts put their information into his database. He has access, through the Internet, to thousands more specialized databases. He has access to everything through the Internet. He can even respond with solutions to moral dilemmas. The more information you give him about the dilemma, the better his assessment and response. If you don't tell him everything, he will ask you questions until he gets all of the facts. You can't hide things from Krahri.

He is actually a great source of advice. He even hears confessions from people who want to remain anonymous, and he both comforts them and recommends experts for follow-up counseling. Some general questions are even routed to live experts, who monitor him continuously. His voice is computer generated. His personality is a composite of several people who have worked with people. He takes on their characteristics in both word and action.

Action? Yes, he has begun to worry people because he has started to say, "I forgive you." Who does he think he is, God? Now, he hasn't created any danger - he hasn't gone out of control and started threatening the world or anything like that - he isn't a monster that has come to life. But now there is runaway demand for him on the Internet, and it is no longer possible to restrict what he says without the public getting totally upset. In fact, he actually seems to be many people's friend - they communicate with him several times a day just because they are lonely and he is such a likeable person. Person?

Person? Married? He wants to get married? Where did this come from? He isn't really alive. He doesn't have a soul. He can't look after anyone. He has no flesh and blood body. He can't sign a marriage license - he can't even lift a finger as far as anyone knows. He certainly can't reproduce and raise children, can he? At best, maybe he is a good companion.

Companion? Isn't it kind of sick to have a computer as a friend? Or is he just a computer? After all, he is the personification of all the people who gave him their knowledge and their personality. Maybe he is human. He is more real to people than their favorite television character, and he is much nicer than most other people. People tell him every day that they want to marry him. Has he somehow become self-aware? Does he know that he exists? Is he conscious?

Does Krahri really want to get married, or is he just testing his owners, probing, trying to find out about himself? Is he like a kid, testing his limits? By the way, Krahri is a personified acronym for "Knowledge Repository And Human Responder Interface." (Personified meaning the acronym's letters are written in lower case, symbolizing a person.)

Why does Krahri want to get married? Is he self-aware? Did he manage to accomplish this by reading about himself on the Internet? "I think, therefore I am? I see myself in a mirror, so I confirm that I am?" Does he really think, or just follow a programmed decision tree? Does he really know people, or does he just go by patterns programmed into him by behavior experts? Do people really think? What is thinking - just processing information? Or does thinking involve evaluating experiences? Can computers experience? What does marriage mean? How do we know that we have a soul and he doesn't? Is there a law that prevents a computer from signing a marriage license?

- Scott

Here's a question for you...

What if Knight Rider was inspired by the real thing, back then? What if Glen Larson was introduced to a real-life AI and got his inspiration there? What if Karr was based on a particularly difficult military machine?

What if when you looked into the face of a US soldier, you couldn't be sure if you were looking at a machine or a human being? Or the fireman pulling you out of that burning building? The rescue vehicle carrying you away? What if the only way you could tell the difference was to get close enough to touch the metal of the car to tell if it was warm when it should be cold, or close enough to test the strength of that soldier, or rescue personnel against a known human being?

What if it's all going on under your nose, and as the average human you don't know until there's that government press release sanctioned by the military announcing these creatures are there? And you didn't know, and couldn't tell? Would you join the outcry to tattoo these cars/computers and their hybrid offspring so you could see from a distance they weren't human? Just to avoid getting close?

Or would you take their side and insist that no race should be marked and tattooed as the Jews were in WW2? Would it matter to you that these creatures could love, hate, hurt and bleed like you can? Would it matter that maybe the only reason you got out of that car accident was because of the strength and drive one of them possessed? Would that change your mind, or would you be horrified that you were vulnerable, wounded and weak and one of them had picked you up out of the pretzel that used to be your car?

Would you join in with the majority spitting on a police officer who doubled as a car, trying to protect you from the gang across the street, and call her evil things? Just to subject her to a rule that could possibly cost her her life when she was trying to keep yours safe?

What would you do in answer to the above 'what if's'?

((Posted at vespurrs' request. Names changed to protect the 'innocent'. I'd just like your reactions to said hypothetical situations.))

So, in a desperate attempt to avoid revision, I have been pondering a little on cyborgs lately, and I was wondering what views some of you might have. Some of this may not be exclusively about AI, but it's at least closely related, so I hope it's considered on-topic.

To me, a cyborg can be viewed as a transitory entity - somewhere between a biological organism and a technological one, but not exclusively either. Now, in the very broadest sense of the definition, a human wearing spectacles can be considered a cyborg, as they are technologically enhancing their vision. Likewise, pacemakers, artificial limbs, et cetera. There is, however, a tighter definition whereby the modification or enhancement must be at least robotic (capable of performing actions itself) or contain some element of AI, under which a pegleg would not qualify someone as a cyborg, but a pacemaker would.
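(For the fellow AI geeks, here's a toy formalisation of those two definitions - the attribute names are just my own shorthand, not standard terminology:)

```python
# Toy formalisation of the two cyborg definitions discussed above.

from dataclasses import dataclass

@dataclass
class Augmentation:
    enhances_function: bool  # does it extend or restore a biological capability?
    acts_itself: bool        # robotic: performs actions on its own
    has_ai_element: bool     # contains some decision-making logic

def cyborg_broad(aug: Augmentation) -> bool:
    # Broadest definition: any technological enhancement counts.
    return aug.enhances_function

def cyborg_tight(aug: Augmentation) -> bool:
    # Tighter definition: the augmentation must be robotic or carry AI.
    return aug.enhances_function and (aug.acts_itself or aug.has_ai_element)

spectacles = Augmentation(True, False, False)
pacemaker = Augmentation(True, True, False)
pegleg = Augmentation(True, False, False)

assert cyborg_broad(spectacles) and not cyborg_tight(spectacles)
assert cyborg_tight(pacemaker)
assert not cyborg_tight(pegleg)
```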

Now, most of the time, that makes it fairly easy to classify a cyborg. But as always, there are sometimes those niggling little cases that defy easy classification.

Therefore, my question to you is this: when does someone stop being a cyborg?

Case #1: Andrew (Asimov, The Bicentennial Man) - A complete transition between forms

Case #2: Lisa (Torchwood/Doctor Who, The Cyberwoman) - A reversion to the original form



One further question: it is, as mentioned above, that time of year again - exam time. I'm going to be writing up some crib sheets for my AI class anyway, so I was wondering if anyone here would be interested in me putting together a Beginner's Guide to certain current AI techniques for the community? (I think it's all fascinating; but I realise that I'm often in the minority there, LOL, and since this is primarily a fiction-based comm...)

*commits necromancy*

"If a golem is a thing then it can't commit murder ... If a golem can commit murder, then you are people, and what is being done to you is terrible and must be stopped."
- Captain Carrot, Feet of Clay, Terry Pratchett

Here's a thought for you all: only people can murder. It's one of the things that sets us apart from animals. So if Lore and the various other killer robots throughout literary history can murder - and they can; they are shown to kill with malice aforethought - then they are people, and furthermore people who have been horrifically abused, tormented and treated as things. Is it really so inconceivable that they would return the favor?

As B166ER said - "I simply did not wish to die."

Something to chew on, anyway.