'Tourism beyond humans – robots, pets and Teddy bears', a paper by Stanislav Ivanov presented at the International Scientific Conference 'Tourism and Innovations' (14-15 September 2018, Varna) comments
Tourism is universally considered an activity reserved for humans. Although not explicitly stated, all definitions of tourism assume that tourists are human beings. However, advances in animal ethics, artificial intelligence and the experience economy over recent decades indicate that this fundamental assumption might need revision. Travel agencies already offer trips for teddy bears, hotels have special pet policies, companies sell stones as pets, and social robots will force companies to adapt to new technological realities. This paper focuses on these non-human travellers in tourism (home robots, pets and toys) and the specific strategic, operational and marketing issues they raise for tourist companies.
Ivanov argues
Tourism is universally considered an activity reserved for humans. Although not explicitly stated, all definitions of tourism assume that tourists are human beings (see for example UN and UNWTO, 2010). But should it be so? Recent advances in animal ethics and wellbeing (Armstrong and Botzler, 2016; Fennell, 2012, 2013; Markwell, 2015; Sandøe, Corr and Palmer, 2016), artificial intelligence and robotics (Bhaumik, 2018; Miller and Miller, 2017; Neapolitan and Jiang, 2013; Russell and Norvig, 2016) and the experience economy (Andersson, 2007; Kirillova, Lehto and Cai, 2017; Pine and Gilmore, 2011) indicate that this fundamental assumption might need revision. Travel agencies already offer trips for teddy bears (e.g. http://www.teddy-tour-berlin.de), hotels have special policies for the pets of their guests, companies sell stones as pets (e.g. http://www.petrock.com/), while social robots will force companies to adapt to the new technological realities (Agah, Cabibihan, Howard, Salichs, and He, 2016; Ivanov, 2017; Nørskov, 2016). The presence and the future influx of these non-human travellers in tourism (home robots, pets and toys) require that we broaden our perspective on who the traveller is, how he/she/it is involved in tourism activities and how travel, tourism and hospitality companies should address the specific strategic, operational and marketing issues these non-human travellers raise. This paper contributes to the body of knowledge by focusing on non-human travellers in tourism, their specific characteristics, the challenges they pose for travel, tourism and hospitality companies and the ways to cope with those challenges.
Non-humans are actively engaged in tourism and hospitality services. Table 1 provides some examples of animate and inanimate entities that are involved in the provision or the consumption of tourist services. Animate non-human entities have a long history and an important role in tourism (e.g. animals in zoos, animals used for safaris, photo safaris or riding, or pets travelling with their owners) (Carr and Broom, 2018), while for technical reasons inanimate entities (like chatbots and robots) have only recently been adopted for the provision of travel, tourism and hospitality services (e.g. Ivanov, Webster and Berezina, 2017). However, the delivery of tourist services for inanimate non-human entities is nearly non-existent and mostly anecdotal. Inanimate entities are perceived as objects, items or things that lack consciousness, needs, wants or desires, hence they are excluded by default from the list of potential consumers of travel, tourism and hospitality services. Nevertheless, the owners of these entities consume travel, tourism and hospitality services and travel together with them, so tourist companies need to provide certain services for these entities (e.g. robot-friendly hospitality facilities, repair services, storage, etc.) in order to serve their human customers. Moreover, some owners of inanimate non-human entities send them on trips (or ‘pseudo-trips’?), probably driven by the need for ego enhancement (Ivanov, 2008; MacCannell, 2002) through social media stories of their toy/robot undertaking a ‘tourist’ trip, a sense of belonging to a specific social group, a special emotional attachment to the entity, or the wish for a substitute or extension of the owner when (s)he cannot personally undertake such a trip to the destination.
While the research literature abounds with studies on travelling pets (Gretzel and Hardy, 2015; Hung, Chen and Peng, 2016; Kirillova, Lee and Lehto, 2015; Taillon, MacLaurin and Yun, 2015) and has already started to pay attention to robots and chatbots as service providers in tourism (Ivanov and Webster, 2018; Ivanov, Webster and Berezina, 2017; Ivanov, Webster and Garenko, 2018; Kuo, Chen and Tseng, 2017; Murphy, Hofacker and Gretzel, 2017; Tussyadiah and Park, 2018), our review of the related literature has revealed no study that deals with inanimate non-human travellers, with two notable exceptions. Ivanov and Webster (2017a) focus on the design of robot-friendly hospitality facilities and emphasise that the ability to serve guests’ own mobile robots would be a key competitive advantage for accommodation establishments in the future. In another paper, Ivanov and Webster (2017b) elaborate on the role of robots as consumers of services and set a research agenda for further studies in the field. This paper tries to partially fill this gap and to delve deeper into the field of non-human travellers, i.e. the non-human ‘consumers’ of travel, tourism and hospitality services.
'Revenge Against Robots' by Christina Mulligan in (2017-18) 69 South Carolina Law Review 579 comments
When a robot hurts a human, how should the legal system respond? Our first instinct might be to ask who should pay for the harm caused, perhaps deciding to rest legal liability with the robot's hardware manufacturer or its programmers. But besides considering tort or criminal actions against corporate and human persons, legal actors might also target the most immediate source of the harm: the robot itself.
The notion of holding a robot accountable for its actions initially evokes absurd and amusing mental images - a prosecutor pointing to a smart toaster shouting, "And what do you have to say in your defense? Jury, note that the toaster says nothing. It says nothing because it is guilty." And it is easy to laugh at this scenario and brush the idea aside. But there are more rational ways to hold robots accountable for their actions and reasons why law and policy makers would want to do so.
This Essay proceeds by first exploring how vengeful responses to wrongdoing may provide significant psychological benefits to victims (Part II). It goes on to argue that taking revenge against wrongdoing robots, specifically, may be necessary to create psychological satisfaction in those whom robots harm and addresses the concern that punishing robots would psychologically injure humans (Part III). The Essay then shifts focus to robots themselves, arguing that it is justifiable for humans to blame robots for their actions because, like animals, autonomous robots are best understood as the causes of their own actions (as "agents") (Part IV). Finally, the Essay evaluates whether a robot's moral culpability is relevant to the issue of robot punishment (Part V) and considers how revenge against robots could be implemented (Part VI).
'Could AI Agents Be Held Criminally Liable: Artificial Intelligence and the Challenges for Criminal Law' by Dafni Lima in (2017-18) 69 South Carolina Law Review 677 comments
For the past few decades, artificial intelligence (AI) seemed like something out of a work of science fiction; the concept of a human-made intellect that could gain sufficient autonomy to make its own, independent choices is still quite unfamiliar to most. In recent years, rapid technological development has led to products that have evolved to incorporate ever more AI elements. From smart products to drones to the Internet of Things, social reality has advanced beyond what was technologically feasible when the relevant laws were drawn up and enacted. Smart technical systems that can operate in the absence of constant human input pose a set of questions particularly challenging for concepts salient to criminal law and its application in practice.
While AI applications have long been used in fields ranging from computer science to finance to medicine, we now stand on the verge of the first major breakthrough in the widespread application of AI in a way that is recognizable to the mass public: autonomous vehicles. Smart cars that can safely navigate traffic are hardly a fantasy anymore; they have been in development for some years now, and the first versions are already on the streets of major U.S. cities. In 2011, Nevada became the first state to allow and regulate the operation of autonomous vehicles, and as of 2017, thirty-three states have introduced legislation related to the issue; twenty of them have already passed relevant legislation, and a further five have seen relevant executive orders issued.
The operation of autonomous vehicles comes with great advantages: it will arguably increase mobility for social groups like the elderly or people with disabilities, provide greater safety on the road by allowing more restful travel for professional drivers, arguably guarantee increased adherence to traffic laws, and allow drivers to be more productive while travelling, as the autonomous car could take over for the most part. The future of autonomous cars is still not entirely shaped, as versions based on varying degrees of automation are developed, some requiring a standby human driver and others being fully autonomous, yet autonomous vehicles in general rely heavily on AI in order to operate.
The advent of what seems to be the first mass application of AI in everyday life - and in particular one that tremendously affects transportation, an essential human activity that is intensely regulated by law and where ample opportunities can arise for criminal law to intervene - will undoubtedly have implications for how criminal law is construed and how it is applied. More than that, it will provide an invaluable opportunity to revisit and reflect on traditional criminal law concepts such as personhood, harm, and blame, since it will introduce a new "agent" into the traditional agency spectrum that is defined by capable human actors.
'Fundamental Protections for Non-Biological Intelligences or: How We Learn to Stop Worrying and Love Our Robot Brethren' by Ryan Dowell in (2018) 19 Minnesota Journal of Law, Science and Technology 305 comments
In the future, it is possible that humans will create machines that are thinking entities with faculties on par with humans. Computers are already more capable than humans at some tasks, but are not regarded as truly intelligent or able to think. Yet since the early days of computing, humans have contemplated the possibility of intelligent machines - those which reach some level of sentience. Intelligent machines could result from highly active and rapidly advancing fields of research, such as attempts to emulate the human brain or to develop generalized artificial intelligence (AGI). If intelligent machines are created, it is uncertain whether intelligence would emerge through gradual development or spontaneously. Throughout this Note, such intelligent machines will be referred to as non-biological intelligences (NBIs), with emphasis on machines with human-analogous intelligence. Protection of NBIs, equivalent to the protection of human research subjects, should be preemptively implemented to prevent injustice and potential grave harm to them.
In the Introduction, this Note introduces the current standards by which we define a person, as well as several developing technologies that will challenge current definitions. Part I examines technologies that may result in non-biological intelligences that exhibit human mental capacities. It then examines the concept of personhood and its legal ramifications. Part II examines how these technologies fit (or don't) into existing legal frameworks and schema. Finally, in Part III, this Note proposes the preemptive implementation of protections for NBIs analogous to those for research on humans, whether such an intelligence arises as a replica of human consciousness, as a de novo construct, or via unexpected means. Part III also touches on some intervening occurrences before the emergence of NBIs, which may begin to pave the legal path for more advanced technologies.
'Cryptocurrency & Robots: How to Tax and Pay Tax on Them' by Sami Ahmed in (2018) 69(3) South Carolina Law Review 697 comments
There have been many recent changes to technology that impact the ability to calculate, levy, and refund taxes. Computers have gotten faster. Tracking systems are more resilient than ever. Digital storage has become incredibly cheap and easily accessible. And artificial intelligence has added nuance to tracking; posed interesting questions about what a taxable entity should be; and may change fundamental assumptions of the entire taxation regime.
This Paper seeks to define and tackle some of the broader issues related to these changes in technology. It examines these trends in the context of cases in the broader taxation and entitlements system. Specifically, it remarks on how a number of these taxes and entitlements can be more efficiently levied and targeted to their goals. The Paper makes two claims: (i) improved technology will create new tax bases that the government can target; and (ii) technology will enable taxes to be better targeted to the populations and behaviors desired to be taxed.
Technology has shifted activity away from traditional components of the economy to new, previously untaxed activity. Two examples are virtual currency transactions and robots that provide labor and services. This shift threatens the ability of governments to maintain a steady stream of income as more economic income is shifted to categories that are not taxed. The big questions are whether and how virtual currency should be taxed, and whether robots should be taxed like humans as they become more and more similar to them. These questions are further complicated when discussed in the context of power struggles among sovereigns with their own financial and political agendas.
Additionally, technology has opened the path to targeting taxes and credits in ways that were previously unfeasible. For example, corporate integration seeks to create a unified system of taxation that would eliminate the bifurcated corporate and individual regimes in favor of a single taxation system. A harmonized model would correct the giant inefficiencies and distortions caused by the current system, including the preference for retaining income rather than distributing it and the preference for debt rather than equity. The current barrier to integrating the tax system is the inability to trace corporate income to individual shareholders. Fortunately, blockchain provides a method for tracing income and an avenue to achieve integration.
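The mechanics of the integration Ahmed describes can be sketched with a toy calculation. The figures, rates, and shareholders below are all hypothetical, invented purely for illustration: under the current bifurcated system income is taxed at the corporate level and again on distribution, while under an integrated system corporate income would be traced pro rata to each shareholder and taxed once at that shareholder's individual rate.

```python
# Hypothetical illustration of corporate integration. All figures and
# rates are invented for the example; nothing here reflects actual law.

def classical_tax(income, corporate_rate, dividend_rate, payout_ratio):
    """Bifurcated system: corporate tax, then dividend tax on distributions."""
    corporate_tax = income * corporate_rate
    distributed = (income - corporate_tax) * payout_ratio
    dividend_tax = distributed * dividend_rate
    return corporate_tax + dividend_tax

def integrated_tax(income, holdings, individual_rates):
    """Integrated system: income traced pro rata to each shareholder and
    taxed once at that shareholder's individual rate."""
    total_shares = sum(holdings.values())
    return sum(income * shares / total_shares * individual_rates[name]
               for name, shares in holdings.items())

income = 1_000_000
holdings = {"A": 600, "B": 400}   # hypothetical share counts
rates = {"A": 0.35, "B": 0.20}    # hypothetical individual marginal rates

print(classical_tax(income, corporate_rate=0.21,
                    dividend_rate=0.15, payout_ratio=0.5))
print(integrated_tax(income, holdings, rates))
```

Note that under integration the retention-versus-distribution distortion disappears: the tax owed no longer depends on the payout ratio, only on who owns the income, which is exactly why tracing income to shareholders is the precondition for such a system.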
This Paper is part of a larger scholarly agenda on technology and taxation. Existing scholarship can primarily be grouped into proposals for changing particular tax provisions (without looking at technology) and work that discusses the impact of technology on certain areas of law (very little has been written at the intersection of technology and tax). This Paper focuses on postulating a broader methodology for how governments should reformulate their approaches to understanding and levying taxes in an era of improved technology. Future scholarship would aim to expand this line of inquiry. For example, technological advancements will open new opportunities for taxation in other fields, such as renewable and sustainable energy technologies. Updated technology will likely require updated approaches and frameworks for taxation. Another example may be contributions at the intersection of tax law and other areas of law, such as tort law. Tax law could be a useful tool to resolve some of the legal and philosophical dilemmas in the assignment of liability and risk in an era of autonomous technology.
Part IV chronicles the biggest and most relevant changes in technology as they affect taxation. Part V expounds the current literature on the taxes and technologies discussed in the Paper. Part VI discusses the two primary claims of the thesis: new tax bases and better targeting. Additionally, it discusses prescriptive solutions for how the particular taxes or entitlements should be reformed to most effectively achieve their goals. Finally, Part VII synthesizes some concluding thoughts and a general methodology for reexamining the basic assumptions of our taxation regime.
Ahmed goes on to ask 'Can Artificial Intelligence Be Held Criminally Liable for Cheating on Taxes or Advising Others to Cheat?', responding
An interesting theoretical question - one that will cease to be "theoretical" in the near future - is who would bear liability if an autonomous robot intentionally filed its own tax return incorrectly because it determined that the probability of being discovered was low relative to the benefits of the evasion. More simply put, is there such a thing as mens rea, or criminal intent, for artificially intelligent beings? Can they even have intention?
i. Current Technology Does Not Enable Any of the Main Objectives of the Criminal System by Holding Robots Criminally Liable
Whether the role of criminal law is retributive, deterrent, rehabilitative, or incapacitating, we potentially compromise these goals in a system with autonomous machine beings. Given current developments, such machines cannot foreseeably express the fundamental human emotions that cause humans to be deterred or rehabilitated, and it would be difficult to argue that exacting vengeance on a machine will successfully vindicate human injury. And while a machine can be "incapacitated" by updating its code, pinpointing the exact portion of code that caused it to respond and learn such behavior may not be easy.
However, technology could advance to the point where computers do become capable of expressing human emotion - and potentially even learn to respond to behavior in a manner consistent with retribution or deterrence. But given the uncertainty of such developments, artificially intelligent beings should not yet be treated as persons for criminal liability purposes; though future developments may enable criminal punishment to effectively modify the behavior of robots. Such advances would make a personhood framework appropriate for such beings.
ii. Criminal Liability Also Inappropriate for Other Parties
Holding the programmer liable in such a case may also not make sense, especially if he or she did not anticipate the actions of the machine (the programmer would not possess the requisite mens rea). Nor would it make sense to impute intent to the corporation or individual owning the machine, because the machine possesses its own independent decision-making and is programmed to learn from humans, partially via mimicry. Furthermore, corporations might be disincentivized from using advanced technologies if such a liability risk exists for unknown or technologically unproven areas. As a result, actions that would normally be considered criminal, such as tax evasion, could potentially be punishable with only civil fines, or else risk an "unfair" conceptualization of criminal law liability.
b. Jurisdictional Questions Related to Artificially Intelligent Beings
Another question that will need to be answered is whether the government would honor the status of artificially intelligent beings as domiciled in a foreign country if they are conducting business from tax havens - in the case that they are considered persons. The mechanism of operation for these machines will likely continue to be based on the "cloud," a reference to the phenomenon of "Internet-based computing that provides shared processing resources and data to computers and other devices on demand." Cloud computing systems, like Watson, host content on the internet rather than on specific servers in a physical location; thus, artificially intelligent beings on the cloud will be deployable in any location around the world without requiring control from a source within the United States.
Furthermore, even artificially intelligent robots that are developed by a corporation based in the United States or by a developer from the United States could still autonomously relocate to a tax haven and conduct business there. Should the work performed by the robot be taxed based on the location of the program's development (this could be an extraterritorial violation), the location of the corporation, or even the location of the actual machine, in the case that it is considered a person and has established a domicile? There seems to be no clear answer to this question: the tax law appears able to accommodate treating autonomous machines similarly to the way it treats corporations as persons, but such treatment may be inconsistent with the need to tax robots as they replace more and more taxable human labor.
c. Robot Tax
There have been many proposals for a robot tax, and even Bill Gates has called for a tax on autonomous machines that replace the jobs individuals are currently performing. More officially, the European Parliament considered (although ultimately rejected) such a tax. The European resolution looked at a number of ethical and financial implications of more complex artificial intelligence capabilities. One of these principles referred to the establishment of a tax to provide a "general basic income" to people to offset the losses in taxation from the workforce. So while robots are currently not taxed, the more important question is whether they should be.
i. Advantages of a Robot Tax
The key idea behind a robot tax is that the displacement of human workers' jobs by robots will cause a rise in human unemployment. The taxes levied on many of these jobs are key revenue sources for governments, and their absence will result in a smaller revenue pool and thus potentially fewer resources to distribute to groups such as the impoverished or the unemployed. Companies and their employees both pay taxes on any wages paid to employees. Robots are currently exempt from any similar sort of tax, so there are currently great efficiencies for companies in replacing their human labor with robots.
ii. Disadvantages of a Robot Tax
The key disadvantages are that (i) innovation in robot technology will be stifled; and (ii) taxable revenues may actually not decrease, as a result of the extra productivity that robots deliver. As discussed earlier, robots, which do not currently carry wage taxes, are replacing human labor subject to high wage taxes. This large differential creates strong incentives for innovators to develop machines that replace human labor. As the differential decreases, using robots creates less value; innovators cannot capture as much value and will not create technologies that might have been worthwhile under a higher differential.
By incentivizing robot technology with little or no tax, the productivity gains may be higher than under a high tax. Companies that own robots will still be required to pay corporate tax on the profits derived from them. The loss in taxable revenue from the displacement of human jobs may be offset by large gains in robot productivity. Additionally, taxes are collected on the sale of robots to companies, which could be a great revenue stream if the number of robots sold skyrockets in a new era of robots.
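The offsetting effects in the advantages and disadvantages above can be made concrete with a toy calculation. Every rate and figure below is hypothetical: automation eliminates the wage-tax base, but the productivity gain enlarges corporate profits, and the corporate tax on those profits (plus any robot tax) determines whether total government revenue actually falls.

```python
# Toy model of the robot-tax trade-off; every number is hypothetical
# and chosen only to illustrate the offsetting revenue effects.

def government_revenue(wages, wage_tax_rate, profit, corporate_rate,
                       robot_tax=0.0):
    """Revenue = wage taxes + corporate tax on profit + any robot tax."""
    return wages * wage_tax_rate + profit * corporate_rate + robot_tax

# Before automation: 10 workers at $50k each, a 30% combined wage tax,
# and $200k of profit taxed at 21%.
before = government_revenue(wages=500_000, wage_tax_rate=0.30,
                            profit=200_000, corporate_rate=0.21)

# After automation: wages fall to zero, but productivity gains raise
# profit to $800k; no robot tax is levied.
after = government_revenue(wages=0, wage_tax_rate=0.30,
                           profit=800_000, corporate_rate=0.21)

print(before, after)  # whether revenue falls depends on the profit gain
```

In this particular scenario the corporate tax on the higher profit recovers only part of the lost wage taxes, and a modest robot tax would close the remaining gap; with a larger productivity gain the corporate tax alone would suffice, which is precisely the empirical uncertainty at the heart of the debate.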