by David D. Horowitz
About seven years ago I chatted with an attorney working on a case involving drone technology. He looked fatigued and admitted negotiations were proving difficult. I asked him why. Respecting client confidentiality, he offered nothing specific. He hinted, though, that laws governing drone use are often imprecise and difficult to enforce. I intuited from his remarks what several of the relevant issues might be: who owns airspace, and where should flying a drone be prohibited? Should there be a legal limit on drone size, speed, and flight altitude? How would police distinguish legal from illegal drone use? The possibilities for dispute and divergent interpretation seemed infinite. I was left with many questions about legal cases involving new technology.
Drone technology is currently less in the news than Artificial Intelligence. Indeed, mention of AI in the popular imagination evokes images of blocks-long unemployment lines and posters of autocrats overseeing armies of anonymous robots programmed to kill “enemies.” And to the degree robots could replicate human behaviors, who does not fear rebellious robots’ plotting to destroy—or replace—humanity?
Indeed, some employers likely would fire human employees if they thought replacement robots could save money, serve customers, and not complain about working conditions. Surely some autocratic leaders would create robot armies if they could. One such leader is all it takes to cause unimaginable damage and death.
To avoid such damage and death, a leader would need to cultivate an empathetic conscience continually, and would need to make sure humanoid robots did so, too. Would an autocrat care, though, whether or not a humanoid robot could empathize? What if a robot could only empathize with other robots? Could a robot be elected senator or president? Could a robot believe in God? Could a robot believe it is God?
Regarding AI, I am just one more worried and wonder-filled everyman. I can offer no research breakthroughs or subtle legal articulations. I can offer questions, though: could AI, whether or not in the form of a humanoid robot, initiate a war or negotiate a peace? Should “killing” a robot be considered murder? Should a robot that committed murder be forced to stand trial? Is AI a new life form? Can robots eat food? Do they prefer spinach to meat lasagna—or sugar donuts to crullers? Do their tastes vary? Do some watch Monday Night Football while others watch Star Trek reruns? Is current technology turning people more and more into robots? How should I keep my conscience vibrant? And how do I stay “human”? To start: continually cultivate my conscience, and be authentic, not artificial.