Editor’s Note: Brian S. Hall is a writer for Dogster’s sister SAY Media site, ReadWrite.com. This article first ran on ReadWrite.com, but we’re rerunning it (with permission) so you readers can comment on it. Note that the opinions expressed below are the author’s and not necessarily Dogster’s.
Okay, I didn’t really abuse my robot dog. Although I might, if only to test cultural limits. Would you? It’s just a robot, after all. A gadget.
If you spotted animal cruelty, would you react any differently if you discovered it wasn’t really an animal but a robot? How about if it looked and behaved like a real dog — and even whimpered in pain? What if your daughter enjoyed pulling whiskers off the family cat — but it, too, was just a robot? Would that set off any alarms?
Kate Darling, a lawyer and Ph.D. candidate in the field of Intellectual Property and Law & Economics at the Swiss Federal Institute of Technology in Zurich, explores such questions in a series of experiments that examine how people interact with “social companion” robots:
At first glance, it seems hard to justify differentiating between the legal treatment of a social robot, such as a Pleo dinosaur toy, and a household appliance, such as a toaster. Both are man-made objects that can be purchased on Amazon and used as we please. Yet there is a difference in how we perceive these two artifacts. While toasters are designed to make toast, social robots are designed to act as our companions.
Robots are all around us. Not Blade Runner-like android robots, of course. Not yet. Today’s robots work in medicine, help build our cars, manufacture our smartphones and, in some cases, clean our floors.
Such robots are typically developed for a specific purpose. They look, unsurprisingly, like nothing more than functional machines. But not all of them do. Some robots look “alive,” like the popular Pleo. They are designed as companions. Expect them to get better, more lifelike, more responsive — more like actual companions, in other words.
Do these robots deserve legal protection similar to what we now provide pets or horses, for example? Your initial reaction might be, “Of course not.” But what if this social robot served as the equivalent of your family dog, and someone came along and stole it, abused it and “killed” it? Then that person posted a video on YouTube? (Go ahead, sound off in the comments.)
In her 2012 paper, “Extending Legal Rights to Social Robots,” Darling makes it clear why people often find it more troubling to witness or incite violent or abusive acts on a “social robot” as opposed to a more machine-like, functional robot:
Studies involving state-of-the-art technology already indicate that humans interact differently with social robots than they do with other objects.
Robotic toys, household robots, and personal-care robots that interact with us on a social level generate stronger psychological attachments than we experience with everyday objects. This difference in how we perceive social robots could have legal implications.
Rapid advances in robotics, haptic feedback, voice recognition, design, data processing and algorithms are making highly realistic robot “pets” a reality for many. Nonetheless, that adorable, forever-puppy robot that “bonds” with your children presently has no more legal rights than the power drill hanging on the wall of your garage.
I spoke with Darling about social robots, typical human responses to them, and potential legal issues we all might face down the road.
Brian S. Hall: Should we grant rights to social robots?
Kate Darling: That’s really up to society to decide. But there are two reasons it could make sense to give social robots some legal protection beyond the property right of the owner. The first is that if people feel strongly enough about it, for example the way that we feel about protecting certain animals from abuse, we might want the law to reflect that social preference.
The second is that we might want to deter types of behavior that could be harmful in other contexts. One theory behind animal rights looks at it not from the viewpoint of the animals’ inherent capacities, but rather from the viewpoint of what it says about ourselves if we’re willing to treat other creatures or things in a certain way.
BH: Are there examples of rights you would propose for social robots — robot pets?
KD: I would say that analogies to animal abuse laws are helpful — so not “the right to live,” but rather protection from being treated in a way that we associate with unnecessary cruelty.
BH: Should such rules be different based upon what the robot is? A robot dog, for example, versus a robot woman?
KD: I would rather distinguish between robots that are specifically designed to interact with us socially and be anthropomorphized, as opposed to the many other robots, such as factory robots, that are not meant to engage our emotions.
BH: Have any countries (or legal entities) extended legal rights to robots?
KD: Not that I know of.
BH: Have any countries explicitly restricted legal rights of/to robots?
KD: Not that I know of.
BH: What might prompt legal action?
KD: I think [YouTube videos of robot torture]. Even with existing technology and very few use cases, the YouTube comments on videos picturing “torture” of robotic toys and pets are strikingly polarized. A lot of people get upset, or at least feel very uncomfortable, watching something that they perceive as lifelike get abused, accusing the video makers of horrible cruelty. This reaction is likely to become more common and more extreme with the increasing development of robots that are specifically designed to interact with us socially in a cute and sympathetic way.
BH: Have any religious groups promoted or restricted social robot rights?
KD: None that I am aware of. I could imagine that some might be opposed, but that really depends on their respective beliefs.
BH: Do you expect some societies to act first or differently regarding social robots?
KD: Some societies (like Japan and South Korea) seem to accept interaction with robots more easily, which could incentivize a societal push sooner than in other cultures.
Clearly, the ethical and legal implications of robots virtually endowed with human qualities can quickly send many people down the rabbit hole. But society might be forced to grapple with the issue anyway. What if the robot looks not like the family dog, but like a human being? Is anyone harmed if your teenage son uses a fembot to practice sex with? (Or should that be “on”?)
Even when we can reliably predict aspects of the future, we often seem to miss the larger implications of what we create. What do you think?