Letters to the Editor for the October 8, 2012 weekly print issue: When we create artificial intelligence, will we create artificial 'ethicators,' too? The potential for 'cognitive decision-making skills' in computers is both challenging and exciting.
Regarding the Sept. 17 cover story, "Man & Machine," on the development of artificial intelligence (AI): I don't wish to be an alarmist, but I'm glad we're still far from inventing self-reasoning machines. Humankind has a history of creating new technologies simply because they're possible, and thinking about their impact only later. Ray Bradbury suggested that science fiction is the nursery of new possibilities for humanity. If so, it should also be considered a warning.
From Isaac Asimov's story collection "I, Robot" to HAL in Stanley Kubrick's film "2001: A Space Odyssey," thinkers have long been asking: How can we be sure an artificial intelligence will be good? A machine has no moral sense, no inner Jiminy Cricket to guide it. Will we create artificial "ethicators," too? If we can't even train dogs reliably, are we really capable of training machines with human-level reasoning?