Humans have historically used race, religion, gender, and sexuality as justifications to deny others the right to vote, marry, own property, and live freely. While robots weren’t even a distant thought in the minds of our nation’s founders when they drafted the Declaration of Independence and Bill of Rights, ethicists, scientists, and legal experts now wrestle with the question of whether our mechanical counterparts deserve rights. Does an entity need to be human to be protected by law?
Robots are already smarter than we are at specific tasks, and stronger, and they will soon become more so. Evolution remains a constant force on humanity, but it is being outpaced by the exponential growth of technology.
But because robots aren't yet our equals, and especially because they aren't conscious, it's tricky to argue that they deserve rights. Some experts, such as computer science professor Joanna Bryson, argue that "robots should be slaves." She says that giving robots rights is dangerous because it puts humans and robots on equal footing, rather than maintaining that robots exist "to extend our own abilities and to address our own goals."
On the other end of the spectrum is MIT Media Lab researcher and robot ethics expert Kate Darling, who says in her paper, “Extending Legal Rights to Social Robots,” that “the protection of societal values” is one of the strongest arguments for robot rights. She uses the example of parents who tell their child not to kick a robotic pet—sure, they don’t want to shell out money for a new toy, but they also don’t want their kid picking up bad habits. A kid who kicks a robot dog might be more likely to kick a real dog or another kid. We generally don’t want to perpetuate destruction or violence, regardless of who—or what—is on the receiving end.
Remember hitchBOT, the Canadian robot that spent the summer of 2014 hitchhiking across Canada (and later traveled through Germany and the Netherlands)? When hitchBOT attempted a similar journey across America in 2015, it lasted 300 miles, the distance between Boston and Philadelphia. In the City of Brotherly Love, hitchBOT was beheaded.
The outpouring of grief for hitchBOT underscores how attached people can get to robots, even ones they've never met. Many reacted to hitchBOT's "death" with sadness and disillusionment. Its guestbook contains sweet notes, assurances that people are "not all like that," and anger. The incident also demonstrates a bigger point: a society whose members casually destroy robots has a problem worth examining.
Even if you aren't personally moved by hitchBOT's demise, you might still object to its destruction, or to the motivations of whoever carried it out. What good came from destroying hitchBOT? Indiscriminate violence isn't something most of us support.
To be sure, many of our civil rights, such as voting, owning property, or due process, are concepts that can't apply to robots unless and until they become sentient. But Darling suggests that robots should be afforded "second-order" rights, which aren't liberties but rather immunities or protections. The most helpful parallel here is to animals, which are legally protected from inhumane treatment (the Animal Welfare Act also sets guidelines for humane breeding, farming, slaughtering, research, and transport practices). My cat can't vote, check out a book from the library, or own her litterbox, but it would be illegal for me or anyone else to abuse or neglect her. Even though robots can't feel pain the way animals can, such protections make sense because they discourage mistreatment and get us thinking about our obligations to robots, which may prove crucial as they become more advanced.
Have you seen those videos of people smashing iPads? It’s costly and destructive, but if people want to destroy their own property, that’s their business. Of course, it’s illegal to destroy someone else’s iPad, just as it’s illegal to steal someone’s car or vandalize someone’s house. Those laws exist not because houses and cars can feel pain or have emotions, but because they’re ours. The laws don’t protect these objects—they protect us.
It makes sense to consider the future now. Robots may become conscious, at which point we’d have a lot of moral and legal adjustments to make, given that it’s arguably unethical to deny protections or rights to sentient, autonomous creatures.
Isaac Asimov explores this exact situation in "The Bicentennial Man." Andrew the robot becomes increasingly humanlike in appearance, thought, and feeling, and petitions the court for freedom, even though its owner argues that Andrew "doesn't know what freedom is" and will be worse off after attaining it. But Andrew's argument that "only someone who wishes for freedom can be freed" sways the judge, who rules that any being advanced enough to comprehend and desire freedom should have it. It's hard to argue with that logic.
Science fiction's thought experiments about sentient robots are instructive. Without wading into the debate about whether or not robots will go all Terminator on us, let's think about why robots in sci-fi rebel in the first place. From Karel Čapek's 1920 play R.U.R., the work that introduced the word robot, to more modern sci-fi such as Battlestar Galactica, robots rebel because they resent their enslavement, particularly when they believe they're equal or superior to humans. Sound familiar? Cast in another light, robot rebellions are revolutions: narratives of entities taking up arms against their oppressors, as humans have done throughout history. It may behoove us to think about protections or rights for robots sooner rather than later.
Some countries already are, largely because of the role robots play in their cultures. In Japan, where robots serve as caretakers for a massive elderly population, the prevalence of the Shinto belief that even inanimate objects can have souls makes robot rights seem natural. About a decade ago, South Korea set about creating a Robot Ethics Charter, which lays out guidelines for the creation of robots and defines what constitutes illegal use of them; the charter also addresses concerns about robots' treatment of humans. Given that the South Korean government wants a robot in every citizen's home by 2020, drafting such a charter seems both reasonable and necessary. And in the United States, Kate Darling has taught a robot ethics class at Harvard University, a sign that American legal scholars are thinking about these questions as well.
Defining our relationship to robots may be key to fully understanding robot rights. Some believe we own and control robots. Others suggest that we'll work with, socialize with, and even fall in love with robots. Some, such as philosophy professor Eric Schwitzgebel, argue that we may have a greater moral obligation to robots than to other humans, particularly if and when they become sentient, because we will have been their creators. Our relationships with robots are just as dynamic as our relationships with other humans: they shift as technology and society change. But the question of whether we are robots' creators or owners, their parents, or their peers may guide us toward deciding how to treat them, and to what extent we are morally and legally obligated to safeguard them.