Robots can pose - or can appear to pose - a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.
The default common-law legal rules governing each of these perceived threats differ somewhat, although reasonableness always plays an important role in defining legal rights and options. In certain cases - drone overflights, autonomous cars - national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying - perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.
We argue that the scope of permissible self-help in defending one's privacy should be quite broad. There is exigency, in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great - or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.
The paper concludes with a brief examination of what, if anything, our survey of a person's right to defend against robots might tell us about the current state of robot rights against people.

'Public Opinion and the Politics of the Killer Robots Debate' by Michael C. Horowitz comments
The possibility that today’s drones could become tomorrow’s killer robots has attracted the attention of people around the world. Scientists and business leaders from Stephen Hawking to Elon Musk recently signed a letter urging the world to ban autonomous weapons. Part of the argument against these systems is that they violate the public conscience provision of the Martens Clause due to public opposition, making them illegal under international law. What, however, does the US public think of these systems? Existing research suggests widespread US public opposition, but has only asked people about support for autonomous weapons in a vacuum. This paper uses two survey experiments to test the conditions in which public opposition rises and falls. The results demonstrate that public opposition to autonomous weapons is extremely contextual. Fear of other countries or non-state actors developing these weapons makes the public significantly more supportive of developing them. The public also becomes much more willing to actually use autonomous weapons when the alternative is sending in US troops. Beyond contributing to ongoing academic debates about casualty aversion, the microfoundations of foreign policy, and weapon systems, these results suggest the need for modesty when making claims about how the public views new, unknown, technologies such as autonomous weapons.

In New Zealand the Privacy Commissioner has released Case Note 267458 NZ PrivCmr 6 regarding a complaint about filming by a broadcaster's drone flying near the complainant's apartment.
The Commissioner states
We recently completed our first investigation into a complaint about a drone. This concerned Sky TV using a drone with a camera to film a cricket match. During the game the drone flew close (within 10 metres) to the complainant’s apartment which overlooked the cricket venue. The complainant was irritated by this and gave the drone “the fingers”.
The complainant complained to us that he thought the drone may have captured highly sensitive information in an unreasonably intrusive manner. He said he was unsure whether the drone had been filming or who may have seen the footage. He had not given consent.
This complaint raised issues under principles 1 - 4 of the Privacy Act 1993, which deal with the collection of personal information. These principles specify when personal information can be collected and for what purpose; what an individual should be told when their information is collected; and how information should be collected.
We contacted Sky TV about the complaint. Sky TV said that when their producer wanted to look at footage from the drone, he would radio the drone operator and inform him that he would begin recording the drone visuals from the air.
Sky TV said that despite how it appeared, the drone was not recording footage the entire time it was in the air. Sky TV accepted that its drone may have flown past the complainant’s property, but said the drone had not recorded or broadcast images of the complainant, or the inside of his property.
Sky TV also said the TV control room did not view any footage of the complainant or his property. Sky TV said it did record and broadcast coverage of two women who were on the balcony of an apartment. The Sky TV drone operator, who was standing on a tower, could, by line of sight, see the two women on their balcony. He indicated by hand gestures that he wanted to film them and by return hand gestures they indicated their consent to that recording. This was the only footage that was broadcast of identifiable individuals.

Consent by hand gestures and happy faces?
The Commissioner further states
For us to find a breach of principles 1 – 4 of the Act, personal information needs to be collected. There was no evidence in this instance that Sky TV had collected information about the complainant; therefore, in this case we found no breach of the Privacy Act.

The complaint was also investigated by the Broadcasting Standards Authority, the New Zealand counterpart of ACMA, which found no breach under the Broadcasting Act 1989 (NZ).