A friend of Philosophy Matters recommended a recent article in the Chronicle of Higher Education that raises some interesting questions about robots. One of the moral decisions we as a society will have to make is whether we will allow robots to make moral decisions. The author insists that “Lethal autonomous systems are already inching their way into the battle space, and the time to discuss them is now.”
One interesting twist in the conversation is the meaning of “autonomous”:
“When you speak to a philosopher, autonomy deals with moral agency and the ability to assume responsibility for decisions,” he says. “Most roboticists have a much simpler definition in that context. In the case of lethal autonomy, it’s the ability to pick out a target and engage that target without additional human intervention.”
The question, then, is whether we want robots to be able to make decisions about killing without the aid of humans. Check out the article and let us know what you think.