Showing posts with label IoT. Show all posts

31 December 2024

Data Centres and Energy

The IEA Electricity 2024: Analysis and forecast to 2026 report states 

Global electricity demand from data centres could double towards 2026 

We estimate that data centres, cryptocurrencies, and artificial intelligence (AI) consumed about 460 TWh of electricity worldwide in 2022, almost 2% of total global electricity demand. Data centres are a critical part of the infrastructure that supports digitalisation, along with the electricity infrastructure that powers them. The ever-growing quantity of digital data requires an expansion and evolution of data centres to process and store it. Electricity demand in data centres comes mainly from two processes: computing accounts for about 40% of a data centre's electricity demand, and cooling requirements to achieve stable processing efficiency make up roughly another 40%. The remaining 20% comes from other associated IT equipment. 

Future trends in the data centre sector are complex to navigate, as technological advancements and digital services evolve rapidly. Depending on the pace of deployment, the range of efficiency improvements, and artificial intelligence and cryptocurrency trends, we expect global electricity consumption of data centres, cryptocurrencies and artificial intelligence to range between 620 TWh and 1 050 TWh in 2026, with our base case for demand at just over 800 TWh – up from 460 TWh in 2022. This corresponds to between an additional 160 TWh and 590 TWh of electricity demand in 2026 compared to 2022, roughly equivalent to adding at least one Sweden or at most one Germany. ... 

Data centres are significant drivers of electricity demand growth in many regions

There are currently more than 8 000 data centres globally, with about 33% of these located in the United States, 16% in Europe and close to 10% in China. US data centre electricity consumption is expected to grow at a rapid pace in the coming years, increasing from around 200 TWh in 2022 (~4% of US electricity demand), to almost 260 TWh in 2026 to account for 6% of total electricity demand. Growth will be driven by increased adoption of 5G networks and cloud-based services, as well as competitive state tax incentives. 

China's State Grid Energy Research Institute expects electricity demand in the country’s data centre sector to double to 400 TWh by 2030, compared to 2020. We forecast electricity consumption from data centres in China to reach around 300 TWh by 2026. Regulations are being updated to promote sustainable practices in current and future data centres to align them with decarbonisation strategies. A major source of data centre growth is expected to come from the rapid expansion of 5G networks and the Internet of Things (IoT). 

In the European Union, data centre electricity consumption is estimated at slightly below 100 TWh in 2022, almost 4% of total EU electricity demand. Around 1 240 data centres were operating within Europe in 2022, with the majority concentrated in the financial centres of Frankfurt, London, Amsterdam, Paris, and Dublin. With a significant number of additional data centres planned, as well as new deployments that can be expected to be realised over the coming years, we forecast that electricity consumption in the data centre sector in the European Union will reach almost 150 TWh by 2026. 

Almost one-third of electricity demand in Ireland could come from data centres by 2026 

In Europe, the data centre market in Ireland is developing rapidly, with electricity consumption growing alongside new policies and initiatives. Electricity demand from data centres in Ireland was 5.3 TWh in 2022, representing 17% of the country's total electricity consumption. That is equivalent to the amount of electricity consumed by urban residential buildings. At this pace, in a high case scenario, Ireland's data centres could double their electricity consumption by 2026, and with AI applications penetrating the market at a fast rate, the sector could reach 32% of the country's total electricity demand in 2026 if most of the approved projects can be connected to the system. This assumes that efficiency gains in other sectors continue at the same time. 

Ireland’s stock of data centres, currently at 82, is expected to grow by 65% in the coming years, with 14 data centres under construction and 40 approved to start the building phase. Ireland has one of the lowest corporate tax rates in the European Union (12.5%), which is an advantage for the sector’s expansion in the country. By contrast, European OECD countries’ average corporate tax rate is 21.5%. 

The rapid expansion of the data centre sector and the elevated electricity demand it brings can pose challenges for the electricity system. To safeguard the system's stability and reliability, Ireland's Commission for Regulation of Utilities published in late 2021 its decision on new requirements applicable to new and ongoing data centre grid connection applications, with three assessment criteria for determining whether a connection offer can be made. First, the location of the data centre, specifically whether it lies within a constrained region of the electricity system. Second, the ability of the data centre to bring onsite dispatchable generation and/or storage at least equivalent to its demand. Third, the ability of the data centre to provide flexibility in its demand by reducing it when requested by a system operator. For the third criterion, data centre operators that offer their servers for hire will have to update their contracts to reflect the new regulations. These requirements showcase the government's inclination to grant connections to operators that can make efficient use of the grid and incorporate renewable energy sources with a view to decarbonisation targets. ... 

Denmark currently hosts 34 data centres, half of them located in Copenhagen. As in Ireland, Denmark’s total electricity demand is forecast to grow mainly due to the data centre sector’s expansion, which is expected to consume 6 TWh by 2026, reaching just under 20% of the country’s electricity demand. Denmark is the hub for a new pan-European initiative, Net Zero Innovation Hub for Data Centers. The hub offers a space for collaboration between suppliers, operators and governments to enable progress towards the sector’s innovation and decarbonisation while meeting increasing regulatory demands. 

Data centres in Nordic countries – such as Sweden, Norway and Finland – benefit from lower electricity costs. This is attributed to lower cooling demand due to their colder climate, and to lower electricity prices in comparison with other major data centre hubs, such as Germany, France and the Netherlands. The largest actor among the Nordic countries is Sweden, with 60 data centres, half of them in Stockholm. In August 2023, plans were announced for a nuclear-powered data centre on the east coast of Sweden using small modular reactor (SMR) technology, with commissioning envisaged for 2030. Given decarbonisation targets, Sweden and Norway may further increase their participation in the data centre market, since almost all of their electricity is generated from low-carbon sources. 

In the United States, the largest data centre hubs are located in California, Texas and Virginia. Virginia's economic development in 2021 was dominated by data centre sector expansion, which attracted 62% of all the state's new investment and provided more than 5 000 new jobs. Northern Virginia is the largest data centre market in the country, collecting USD 1 billion in local tax revenues per year, with growth trending higher as companies continue to increase their investment in the state, such as Amazon's planned USD 35 billion expansion by 2040. New legislation is aimed at tightening regulations on data centre developments, including zoning rules, mandatory environmental and resource impact assessments, and guidelines on water usage. In US northeastern states, the regional transmission organisation PJM expects data centres to increasingly drive electricity demand, forecasting a rise in summer peak load from 151 GW in 2024 to 178 GW by 2034. 

Artificial intelligence and cryptocurrencies are additional sources of electricity demand growth 

Market trends, including the fast incorporation of AI into software programming across a variety of sectors, increase the overall electricity demand of data centres. Search tools like Google could see a tenfold increase in their electricity demand if AI were fully implemented in search. Comparing the average electricity demand of a typical Google search (0.3 Wh) with that of an OpenAI ChatGPT request (2.9 Wh), and considering 9 billion searches daily, this would require almost 10 TWh of additional electricity in a year. 
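Under the stated assumptions (0.3 Wh per conventional search, 2.9 Wh per AI-assisted request, 9 billion searches a day), the arithmetic behind the "almost 10 TWh" figure can be reproduced in a few lines:

```python
# Back-of-the-envelope check of the "almost 10 TWh" figure.
# All input values are taken from the text above.

SEARCH_WH = 0.3          # Wh per conventional Google search
AI_SEARCH_WH = 2.9       # Wh per ChatGPT-style request
SEARCHES_PER_DAY = 9e9   # daily search volume assumed in the text
DAYS = 365

def annual_twh(wh_per_query: float) -> float:
    """Annual electricity use in TWh for a given per-query demand."""
    return wh_per_query * SEARCHES_PER_DAY * DAYS / 1e12  # Wh -> TWh

baseline = annual_twh(SEARCH_WH)       # conventional search, ~1 TWh/year
ai_total = annual_twh(AI_SEARCH_WH)    # fully AI-assisted, ~9.5 TWh/year
additional = ai_total - baseline       # increment, ~8.5 TWh/year

print(f"baseline:   {baseline:.2f} TWh/year")
print(f"AI search:  {ai_total:.2f} TWh/year")
print(f"additional: {additional:.2f} TWh/year")
```

Whether the report's figure refers to the gross AI-search demand (about 9.5 TWh) or the increment over conventional search (about 8.5 TWh), both land close to 10 TWh per year.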

AI electricity demand can be forecast more comprehensively from the number of AI servers estimated to be sold in the future and their rated power. The AI server market is currently dominated by NVIDIA, with an estimated 95% market share. In 2023, NVIDIA shipped 100 000 units that together consume an average of 7.3 TWh of electricity annually. By 2026, the AI industry is expected to have grown exponentially, consuming at least ten times its 2023 demand. ... 
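The fleet-level figures in the paragraph imply some per-server numbers worth spelling out. This is a back-of-the-envelope sketch using only the values quoted above:

```python
# Implied per-server figures from the fleet-level numbers in the text:
# 100 000 AI servers shipped in 2023, consuming 7.3 TWh per year in total.

UNITS_2023 = 100_000
FLEET_TWH_2023 = 7.3
HOURS_PER_YEAR = 8760

per_server_mwh = FLEET_TWH_2023 * 1e6 / UNITS_2023      # TWh -> MWh per unit
avg_power_kw = per_server_mwh * 1000 / HOURS_PER_YEAR   # MWh/yr -> average kW

# "At least ten times its demand in 2023" by 2026 gives a floor of:
floor_2026_twh = 10 * FLEET_TWH_2023

print(f"per server: {per_server_mwh:.0f} MWh/year")
print(f"avg power:  {avg_power_kw:.1f} kW per server (continuous average)")
print(f"2026 floor: {floor_2026_twh:.0f} TWh")
```

That is roughly 73 MWh per server per year, or an average continuous draw on the order of 8 kW per server, consistent with a rack-scale AI system running near full utilisation.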

In 2022, cryptocurrencies consumed about 110 TWh of electricity, accounting for 0.4% of global annual electricity demand – as much as the Netherlands' total electricity consumption. In our base case, we anticipate that the electricity consumption of cryptocurrencies will increase by more than 40%, to around 160 TWh, by 2026. Nevertheless, uncertainties remain regarding the pace of cryptocurrency adoption and technology efficiency improvements. Ethereum, the second-largest cryptocurrency by market capitalisation, reduced its electricity demand by an estimated 99% in 2022 by moving from proof-of-work mining to a proof-of-stake consensus mechanism. By contrast, Bitcoin is estimated to have consumed 120 TWh in 2023, contributing to a total cryptocurrency electricity demand of 130 TWh. Challenges in reducing electricity consumption remain, as energy savings from some cryptocurrencies becoming more efficient can be offset by increased consumption elsewhere, such as by other cryptocurrencies. 

Efficiency improvements and regulations will be crucial in restraining data centre energy consumption 

The revised Energy Efficiency Directive from the European Commission includes regulations applicable to the European data centre sector, promoting greater transparency and accountability to enhance electricity demand management. Starting from 2024, operators have mandatory reporting obligations for the energy use and emissions of their data centres, and large-scale data centres are required to deploy waste heat recovery applications, where technically and economically feasible, while meeting climate neutrality by 2030. An earlier EU regulation, applicable since 2020, sets efficiency standards for data centres, enabling better control over their environmental impact. A self-regulatory European initiative created in 2021, the Climate Neutral Data Centre Pact, sets targets to achieve climate neutrality in the sector by 2030. More than 60 data centre operators have signed on to the pact, including large operators such as Equinix, Digital Realty and CyrusOne. 

In the United States, the Energy Act of 2020 requires the federal government to conduct studies on the energy and water use of data centres, to create applicable energy efficiency metrics and good practices that promote efficiency, along with public reporting of historical data centre energy and water usage. The Department of Energy (DOE) is supporting the local production of semiconductors and is funding the development of more efficient semiconductors over the next two decades. More efficient semiconductors reduce cooling requirements, thus supporting the decarbonisation of the sector. At a state level, regulators in Virginia and Oregon have already imposed requirements for better sustainability practices and carbon emissions reductions. 

Chinese regulators will require all data centres acquired by public organisations to improve their energy efficiency and be entirely powered by renewable energy by 2032, starting with a 5% share mandate for renewables in 2023. 

New fields of research can help increase efficiency and reduce energy consumption in data centres 

The primary drivers of data centre electricity demand are the cooling systems and the servers themselves, with each typically accounting for 40% of the total consumption. The remaining 20% is consumed by the power supply system, storage devices and communication equipment. The adoption of high-efficiency cooling systems has the potential to reduce electricity demand in data centres by 10%. Other cooling research shows that a 20% reduction in consumption can be achieved when operating with direct-to-chip water cooling and specific low viscous fluids to cool all other components. Machine learning can help reduce the electricity demand of servers by optimizing their adaptability to different operating scenarios. Google reported using its DeepMind AI to reduce the electricity demand of their data centre cooling systems by 40%. 
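The split described above can be made concrete with a small sketch. The 100 GWh/year facility size is purely illustrative; the point is that the quoted 10% facility-wide saving from high-efficiency cooling implies a 25% cut in the cooling load itself:

```python
# Sketch of the 40/40/20 demand split from the text, applied to a
# hypothetical facility consuming 100 GWh/year (size is illustrative only).

TOTAL_GWH = 100.0
SPLIT = {"servers": 0.40, "cooling": 0.40, "power/storage/comms": 0.20}

breakdown = {component: TOTAL_GWH * share for component, share in SPLIT.items()}

# A "10% reduction in [total] electricity demand" delivered entirely by
# better cooling means the cooling share itself must shrink by 10/40 = 25%.
total_saving_gwh = 0.10 * TOTAL_GWH
cooling_cut_fraction = total_saving_gwh / breakdown["cooling"]

print(breakdown)
print(f"10% facility saving = {cooling_cut_fraction:.0%} cut in cooling load")
```

The same logic applies to the DeepMind figure: a 40% reduction in cooling-system demand translates to roughly a 16% reduction at the facility level if cooling is 40% of the total.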

In the long term, replacing supercomputers with quantum computers could reduce electricity demand of the sector if the transition is supported by efficient cooling systems. Quantum computers deliver more and faster processing power than supercomputers while consuming less energy, but they need to be cooled to temperatures near absolute zero (-273°C) while supercomputers can operate at 21°C. 

Data centres are evolving towards more sustainable and efficient operations, including transitioning to Hyperscale Data Centres, which can run large-scale operations without a significant increase in electricity consumption. This transition is also financially attractive, with the global market for Hyperscale Data Centres projected to double in size by 2026 compared to 2023, reaching a value of USD 212 billion. 

Another promising field of research for decarbonising data centre operations involves time and location shifting of electricity demand. Software developments can allow operators to temporarily shift power loads with carbon-aware models that relocate data centre workloads to regions with lower carbon intensity at selected times. This methodology has also been shown to improve operational affordability, reducing the cost of consuming low-emissions energy around the clock by up to 34%. Combined with other energy efficiency measures and on-site low-emission energy production, this methodology has enabled data centres to achieve a 64% share of carbon-free energy in their total electricity consumption, according to Google's 2023 Environmental Report.
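The time- and location-shifting idea can be illustrated with a toy greedy scheduler that places each deferrable job in the region-hour slot with the lowest forecast carbon intensity. All region names, hours and intensity values below are invented for the example; a real system would use live grid-intensity forecasts:

```python
# Toy carbon-aware scheduler: assign each deferrable job to the
# region-hour slot with the lowest grid carbon intensity (gCO2/kWh).
# Regions, hours and intensity values are invented for illustration.

from typing import List, Tuple

# (region, hour) -> forecast carbon intensity in gCO2/kWh
forecast = {
    ("eu-north", 2): 45,  ("eu-north", 14): 120,
    ("us-east", 2): 380,  ("us-east", 14): 290,
    ("us-west", 2): 210,  ("us-west", 14): 95,
}

def schedule(jobs: List[str]) -> List[Tuple[str, str, int]]:
    """Greedily assign each job to the cleanest remaining slot."""
    slots = sorted(forecast, key=forecast.get)  # cleanest slot first
    return [(job, region, hour) for job, (region, hour) in zip(jobs, slots)]

plan = schedule(["nightly-etl", "model-retrain"])
print(plan)
```

A production scheduler would also weigh data-transfer costs, latency constraints and capacity limits per region; the greedy assignment here only captures the core carbon-aware placement idea.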

24 November 2024

AI, Trade and the WTO

The WTO 'Trading with intelligence: How AI shapes and is shaped by international trade' report comments

The widespread and transformative impact that artificial intelligence (AI) is currently having on society is being felt in all areas, from work, production and trade to health, arts and leisure activities. New applications of AI are expected to create unprecedented new economic and societal opportunities and benefits. However, significant ethical and societal risks are also associated with the development and application of AI. These risks have implications for all these areas too, including trade. AI is a global issue, and as governments increasingly move to regulate AI, global cooperation is more important than ever. 

Against this backdrop, the present report examines the intersection of AI and international trade. It begins with a discussion of why AI is a trade issue, before delving into the ways in which AI may shape the future of international trade. It discusses key trade-related policy considerations raised by this technology and provides an overview of government initiatives taken both to promote and to regulate AI. The report also highlights the looming risk of regulatory fragmentation and its impact, in particular on trade opportunities for micro, small and medium-sized businesses. Finally, the report discusses the critical role of the WTO in facilitating AI-related trade, ensuring trustworthy AI and addressing emerging trade tensions. 

Why is AI a trade issue? 

AI is distinct from other digital technologies in several key ways, and it has the potential to affect international trade significantly. It is a general-purpose technology, capable of adapting to a wide range of domains and tasks with unprecedented flexibility and efficiency. It relies on large datasets to learn and improve its performance and accuracy. AI's functions and efficiency can evolve rapidly, leading to dynamic shifts in its capabilities and autonomy. Finally, its inherent complexity and opacity, as well as its potential failures and biases, raise significant concerns related to matters such as how to understand the reasons for and basis of AI decisions and recommendations, or regarding ethics and broader societal implications. 

AI can be leveraged to overcome trade costs associated with trade logistics, supply chain management and regulatory compliance. By enhancing trade logistics, overcoming language barriers, and minimizing search and match costs, AI can make trade more efficient. It can help to automate and streamline customs clearance processes and border controls, navigate complex trade regulations and compliance requirements, and predict risks. AI-based tools can be used in trade finance, and can significantly enhance supply chain visibility by providing real-time data analytics, predictive insights and automated decision-making processes. All of this could lower trade costs and, as a result, level the playing field for developing economies and small businesses, helping them to overcome trade barriers, enter global markets and participate in international trade. 

AI can transform patterns of trade in services, particularly digitally delivered services. It can enhance productivity, especially in services sectors that rely on manual processes, by enabling low-skilled workers to leverage the best practices of higher-skilled workers more effectively. For example, generative AI can amplify the performance of business consultants by up to 40 per cent compared to those not using it, with greater productivity gains observed for lower-skilled workers (Dell'Acqua et al., 2023). Research also shows that access to generative AI increases the productivity of call centre workers by an average of 14 per cent, and by 34 per cent specifically for novice and low-skilled workers (Brynjolfsson et al., 2023). AI can also foster the development of innovative services and increase demand for them. However, while AI can enhance trade in digitally delivered services significantly, it has contributed to reducing the demand for certain traditional services. AI-enabled automation can also reduce the necessity to outsource certain services. 

AI can increase demand and trade in technology-related products. Because AI systems often rely on real-time data streams and seamless connectivity, the adoption of AI is spurring demand for complementary goods related to information and communications technology (ICT) infrastructure and information technology (IT) equipment. These include computer and telecommunications services, specialized development tools and software libraries. For example, the global market for AI chips was valued at US$ 61.5 billion in 2023 and it has been projected that it could reach US$ 621 billion by 2032 (S&S Insider, 2024). As many of these goods and services are often supplied by a small number of economies, international trade serves as a major channel to foster AI development worldwide. Further upstream in the value chain, trade in the extraction and processing of critical metals and minerals, as well as trade in energy, are also likely to gain in importance. In addition, AI has substantially heightened the demand for data, fundamentally reshaping the landscape of data usage and trade. 

By affecting productivity, and through shifts in production dynamics, AI may reshape economies' comparative advantages. AI is expected to enhance productivity across all economic sectors in both developed and developing economies, and to change the composition of inputs required for production, placing greater emphasis on capital investment, rather than on labour inputs. This shift in production dynamics could reshape trade patterns. Conversely, new sources of comparative advantage may emerge from factors like educated labour, digital connectivity and favourable regulations. Because AI is energy-intensive, economies with abundant renewable energy may also gain comparative advantages. However, although AI can potentially benefit all economies, the development and control of AI technology are likely to remain concentrated in large economies and companies with advanced AI capabilities, resulting in industrial concentration. 

The adoption of AI can drive productivity increases across various sectors and reduce trade costs, leading to global gains in trade and GDP. Simulations using the WTO global trade model show that, under an optimistic scenario of universal AI adoption and high productivity growth up until 2040, global real trade growth could increase by almost 14 percentage points. In contrast, a cautious scenario, with uneven AI adoption and low productivity growth, projects trade growth of just under 7 percentage points. The simulation further shows that, while high-income economies are expected to see the largest productivity gains, lower-income economies have better potential to reduce trade costs. 

The global trade and GDP impact of AI varies significantly across economies and sectors, depending on choices made concerning innovation and policies. While trade growth in high-income economies remains relatively stable across projected scenarios, low-income economies could experience much higher trade growth under the scenarios of universal AI adoption and high productivity growth (18.1 percentage points) compared to those of uneven AI adoption and low productivity growth (6.5 percentage points). The simulation results suggest that if developing economies improve their AI readiness by strengthening digital infrastructure, enhancing skills and boosting innovation and regulatory capacity, they will be in a better position to adopt AI effectively. 

These simulations show that digitally delivered services are expected to experience the highest trade growth. In an optimistic scenario of universal AI adoption, digitally delivered services are projected to see cumulative growth of nearly 18 percentage points relative to the baseline scenario, the largest increase across all sectors. The expected impact of AI on real trade growth also differs within sectors. Potentially digitally delivered services such as education, human healthcare, and recreational and financial services, as well as manufacturing sectors such as processed food, are projected to experience significant trade growth, largely driven by trade cost reductions. Meanwhile, sectors related to natural resource extraction and manufacturing sectors such as textiles are expected to see limited growth. 

The policies of AI and trade 

The discussion on how AI might reshape international trade raises important policy questions. The risk of a growing divide resulting from applications of AI is significant, as are data governance challenges and the need to ensure that AI is trustworthy and to clarify how it relates to intellectual property (IP) rights. The implementation of AI at the domestic, regional and international levels entails both benefits and risks, and a lack of coordination could cause increasing regulatory fragmentation with regard to AI. 

Addressing the risk of a growing AI divide is essential to leverage the opportunities offered by this technology. Currently, the capacity to develop AI technology is concentrated in a few large economies, and this is creating a significant divide between economies that are leading research and development (R&D) in AI – in particular China and the United States – and the rest of the world. This imbalance could be further exacerbated by the use of government subsidies to develop AI. The risk of industry concentration within a few large firms could also intensify the divide between firms. These features, combined with the opacity of AI algorithms and the possibility of tacit collusion among competitor firms to maintain higher prices, present challenges for competition authorities. 

The rise of AI is raising important data governance issues that will need to be addressed to prevent further digital trade barriers. Cross-border data flows are essential to AI, as vast amounts of data are needed to train AI models, as well as minimize possible biases. Thus, restrictions on data flows can slow AI innovation and development, increase costs for firms, and negatively impact trade in AI-enabled products. A recent study (OECD and WTO, 2024) found that if all economies fully restricted their data flows, this could result in a 5 per cent reduction in global GDP and a 10 per cent decrease in exports. However, the large datasets required by AI models raise significant privacy concerns. Therefore, a reasonable trade-off between accessing large amounts of data to train AI models and protecting individual privacy must be found. 

Ensuring that AI is trustworthy without hindering trade can be challenging. “AI trustworthiness” means that it meets expectations in terms of reliability, security, privacy, safety, accountability and quality in a verifiable way. However, given the behaviour and opaque nature of AI systems, as well as the potential dual-use of some AI products (i.e., for both civilian and military applications), striking a balance between ensuring that AI is trustworthy and enabling trade to flow as smoothly as possible may prove especially challenging. The evolutionary nature of AI makes regulation a perennial moving target. “Traditional" regulations and standards for goods, which normally focus on tangible, visible and static product requirements, may not be fully capable of addressing all of the different types of potential risks, including the ethical and societal questions that may result from the integration of AI into goods and services. Regulating to address questions of public morals, human dignity and other fundamental rights, such as discrimination or fairness, is not only challenging, but is also prone to causing regulatory fragmentation because the meaning and relative importance of such values may vary across societies. 

AI also poses new conceptual challenges for the traditional, “human-centric” approach to IP rights. Issues that deserve particular attention include the protection of AI algorithms and of copyrighted material for training AI, and the protection and ownership of AI generated outputs. These questions may call for a re-evaluation of existing IP legal frameworks. 

The immense potential of AI has prompted governments around the globe to take action to promote its development and use while mitigating its potential risks. At the domestic level, more and more jurisdictions are putting in place AI strategies and policies to enhance their AI capabilities. The number of economies having implemented AI strategies increased from three in 2017 to 75 in 2023. According to Stanford University's 2024 "AI Index", 25 AI-related regulatory measures were adopted in the United States in 2023, compared to just one in 2016, while the European Union has passed almost 130 AI-related regulatory measures since 2017. However, most domestic AI policy initiatives are being implemented by developed economies, which could further deepen the existing AI divide between developed and developing economies: while around 30 per cent of developing economies have put AI policy measures in place, only one least-developed country (LDC) – Uganda – has done so according to data from the Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory. Also high on governments’ policy agendas are domestic initiatives to promote access to data through open data and data-sharing initiatives, with a view to fostering domestic innovation and competition, protecting privacy and controlling the flow of data across borders. What is emerging is a landscape of fragmented measures and heterogeneous domestic initiatives, which may lead to regulatory fragmentation. 

This fragmentation extends beyond AI-specific regulations to include sector-specific legislation, such as IP and data regulations, which also impact AI. In addition, the design of some border measures imposed on the hardware components and raw materials crucial to AI systems can affect competitors in other economies, leading to trade-distorting effects and further exacerbating fragmentation. The economic costs of regulatory fragmentation, in particular for small businesses, highlight the importance of mitigating regulatory heterogeneity; according to OECD and WTO (2024), the economic costs of the fragmentation of data flow regimes along geo-economic blocks amount to a loss of more than 1 per cent of real GDP. The increasing number of bilateral and regional cooperation initiatives on AI governance, many focusing on different priorities, adds to the risk of creating a multitude of fragmented approaches. 

For example, while some bilateral cooperation initiatives focus primarily on aligning AI-related terminology and taxonomy, and on monitoring and measuring AI risks, others prioritize collaboration to promote alignment in general terms or focus primarily on AI safety and governance. Likewise, some regional initiatives prioritize human rights and ethics, while others focus on economic development and growth. 

Regional trade agreements (RTAs) and digital economy agreements are important vehicles to promote and regulate AI. AI-specific provisions have started to be incorporated into such agreements, but they mainly take the form of “soft” – i.e., non-binding – provisions focusing on the importance of collaboration to promote trusted, safe and responsible use of AI. Several AI-specific provisions explicitly refer to trade. Digital trade provisions included in RTAs, such as provisions on data flows, data localization, protection of personal information, access to government data, source code, competition in digital markets, and customs duties on electronic transmissions, are also important for AI development and use. The number of RTAs with digital trade provisions has been growing steadily since the early 2000s, and by the end of 2022, 116 RTAs – representing 33 per cent of all existing RTAs – had incorporated provisions related to digital trade (López-González et al., 2023). However, the depth of digital trade provisions included in RTAs varies significantly, reflecting diverging approaches. Few developing economies and LDCs have negotiated digital trade provisions. Disciplines on trade in services in RTAs are also an important channel through which governments' trade policies and trade obligations can affect the policy environment for AI, but the level of commitments undertaken differs significantly across economies. 

The last few years have witnessed a wave of international initiatives related to AI. While there are elements of complementarity among such initiatives and alignment on core principles, different initiatives prioritize different aspects of AI governance. A number of initiatives also contain various common elements that have important trade and WTO angles, such as the recognition of the role of regulations and standards, the need to avoid regulatory fragmentation, the importance of IP rights, the importance of privacy, personal data protection and data governance, and the importance of international cooperation, coordination and dialogue. Several of these initiatives also address the environmental impacts of AI. 

However, there is still no global alignment on AI terminology. Differing priorities, the overlap between initiatives, and lack of global agreement on key terminology could pose challenges at the implementation stage, limiting efforts to prevent fragmentation and to put in place a coherent global AI governance framework. Nevertheless, beyond initiatives to govern AI, an increasing number of international organizations, such as the International Telecommunication Union (ITU), the United Nations Educational, Scientific and Cultural Organization (UNESCO), the United Nations Industrial Development Organization (UNIDO) and the World Bank, are developing courses on AI and integrating AI in their technical assistance activities, some of which have a trade component. The WTO, as the only rules-based global body dealing with trade policy, can contribute to promoting the benefits of AI and limiting its potential risks. It can play an important role in limiting regulatory fragmentation, promoting the development of trustworthy AI and access to it, and facilitating trade in AI-related goods and services, thereby enabling the growth of AI and promoting innovation through IP. 

What role for the WTO? 

WTO rules and processes promote global convergence. The WTO is a forum that promotes transparency, non-discrimination, discussion, the exchange of good practices, regulatory harmonization, non-mandatory policy guidance, and global alignment through the negotiation of new binding trade rules. Transparency provisions included in WTO agreements allow WTO members, as well as economic operators and consumers, to be kept abreast of the latest regulatory developments. One example is the enhanced transparency provisions in the Technical Barriers to Trade (TBT) Agreement. By requiring early notification of regulatory measures and allowing opportunities to provide comments on these measures at a draft stage, the TBT Agreement can help to prevent obstacles to trade, as well as promote and accelerate global convergence. WTO members are increasingly notifying a wide range of regulations on digital technologies to the TBT Committee. For instance, more than 160 notifications have been made on regulations addressing cybersecurity and the Internet of Things (IoT)/robotics, both of which are relevant for AI. More recently, the TBT Committee has started receiving notifications of AI-specific regulations. Another example is the WTO Trade Policy Review Mechanism, which contributes to transparency in members’ trade policies. Finally, in terms of possible new substantive rules, various issues negotiated under the Joint Statement Initiative on E-commerce, which currently brings together 91 WTO members, may matter for AI. 

The WTO also provides a global forum for constructive dialogue, the exchange of good practices, and cooperation. This enables discussion among members of how best to design nuanced, flexible and adaptable regulatory solutions to address the goods, services and IP-related aspects of AI in a coordinated manner. In some areas, the WTO also promotes regulatory harmonization and coherence by encouraging the use of international standards, mutual recognition and equivalence, and through various "soft law" instruments, such as voluntary committee guidelines. 

The WTO is the cornerstone of global efforts to facilitate trade in services and goods that enable or are enabled by AI. Various aspects of the WTO rulebook can contribute to promoting the development of and access to AI. For example, the General Agreement on Trade in Services (GATS) plays an important role in shaping a policy environment that facilitates the development and uptake of AI. A majority of WTO members have made specific commitments on market access and national treatment related to ICT services, which play a fundamental role in enabling and promoting AI (84 of the 141 schedules of commitments, or 60 per cent, contain commitments on computer services). However, commitments in other sectors remain limited, and barriers to services trade remain high in overall terms. When it comes to goods, the Information Technology Agreement (ITA) aims to increase worldwide access to high-technology goods essential to AI by eliminating tariffs on the ICT products it covers. Meanwhile, the TBT Agreement can help to ensure that, when governments adopt AI standards and regulations, these are, to the extent possible, not trade-restrictive, and are optimal for attaining policy objectives. The Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement aims to foster a balanced IP system that incentivizes innovation through the enforcement and protection of IP rights, while promoting dissemination of and access to technology, to the mutual benefit of both producers and users of technological knowledge. Various WTO agreements also include provisions to promote the transfer of technology, and this can play an important role in the development of AI. Finally, the WTO Agreement on Government Procurement (GPA) 2012 promotes access to internationally available new AI technologies. Various principles, provisions and guidelines in the WTO rulebook can support trade in AI systems and AI-enabled products by minimizing international negative spillovers. 
Examples include the non-discrimination principle and the Agreement on Trade-Related Investment Measures (TRIMS), which recognizes that certain investment measures can restrict and distort trade and states that members may not apply investment measures that discriminate against foreign products or lead to quantitative restrictions. When it comes to technical regulations, standards and certification procedures, the TBT Agreement provides that regulatory intervention shall not be discriminatory nor any more trade-restrictive than necessary to achieve the intended policy objectives, and that it should, when justified, be subject to periodic reviews. And the Agreement on Subsidies and Countervailing Measures (SCM) can play a crucial role in navigating the dual aspects of AI development, by promoting technological innovation while preventing negative spillovers in international trade from government financial support. 

The WTO can help to prevent and settle trade tensions and frictions. The practice of raising "specific trade concerns" (STCs) allows WTO committees to serve as a venue for defusing potential trade tensions over regulatory measures in a cooperative, pragmatic and non-litigious way. In the TBT Committee, for instance, members have already been using this practice to discuss and address concerns with regulations involving a wide range of digital technologies and issues, including IoT, autonomous vehicles, 5G, robotics, industrial automation, cybersecurity and, more recently, AI. The WTO also serves as a global forum to settle trade-related disputes. While there has been no dispute on AI so far, the WTO Dispute Settlement System has dealt with resolving disputes related to various aspects of the digital economy. 

The WTO promotes inclusiveness through special and differential treatment and technical assistance for developing economies. WTO agreements recognize the constraints faced by developing economies and, for this reason, include various special and differential (S&D) treatment provisions to help them to implement WTO rules and participate more effectively in international trade. Technical assistance and capacity-building are key pillars of the WTO’s work and play a fundamental role in furthering understanding of the WTO rules and agreements, as well as of other topics relevant to trade. Multi-stakeholder programmes, such as Aid for Trade and the Enhanced Integrated Framework, could, however, be leveraged further to help developing economies seize the benefits of AI for trade. 

As a forum for negotiation, discussion and rule-making, the WTO provides a multilateral framework that can help address the trade-related aspects of AI governance. Nevertheless, AI may have implications for international trade rules. Although it is a new technology, AI is developing rapidly, and is certainly already advanced enough to be a subject of discussions at the WTO. Its cross-cutting nature requires a cross-cutting policymaking approach to promote policy coherence. 

While AI governance extends beyond trade, trade remains a crucial element within AI governance. The WTO can contribute significantly to developing a robust AI governance framework. This report is a first attempt to explore some key implications of AI for trade and trade rules. As AI continues to evolve, governments should continue to discuss the intersection of AI and trade and its possible implications for the WTO rulebook.

22 November 2023

Proxies

'Surveillance deputies: When ordinary people surveil for the state' by Sarah Brayne, Sarah Lageson and Karen Levy in (2023) 57(4) Law & Society Review 462-488 comments 

 The state has long relied on ordinary civilians to do surveillance work, but recent advances in networked technologies are expanding mechanisms for surveillance and social control. In this article, we analyze the phenomenon in which private individuals conduct surveillance on behalf of the state, often using private sector technologies to do so. We develop the concept of surveillance deputies to describe when ordinary people, rather than state actors, use their labor and economic resources to engage in such activity. Although surveillance deputies themselves are not new, their participation in everyday surveillance deputy work has rapidly increased under unique economic and technological conditions of our digital age. Drawing upon contemporary empirical examples, we hypothesize four conditions that contribute to surveillance deputization and strengthen its effects: (1) when interests between the state and civilians converge; (2) when law institutionalizes surveillance deputization or fails to clarify its boundaries; (3) when technological offerings expand personal surveillance capabilities; and (4) when unequal groups use surveillance to gain power or leverage resistance. In developing these hypotheses, we bridge research in law and society, sociology, surveillance studies, and science and technology studies and suggest avenues for future empirical investigation. 

In 2020, Amazon announced that over 10 million users had joined its “Neighbors” app (Huseman, 2021). The app is integrated into the company's home surveillance devices, including the popular “Ring” doorbell camera—a video-enabled device that enables users to view, speak with, and record their front door area as well as the people who visit it. When a person purchases and installs a Ring doorbell, they are automatically enrolled in the Neighbors app, which enables users to post videos of “suspicious” activities and crimes (including the theft of Amazon packages from their doorsteps; Molla, 2020) and to view similar content posted by other users within five miles of their location. Although these “surveillance as a service” devices are marketed to, purchased, and installed by civilians, the state regularly seeks access to their data (West, 2019). The content collected by Ring cameras is shared directly with more than 2000 police departments across the United States through a combination of subpoenas, warrants, court orders and memorandums of understanding between municipalities or homeowners' associations and local law enforcement agencies (Lyons, 2021). Most often, that content is shared with the state by users who volunteer it to police (Gilliard, 2021; Haskins, 2021). Ring and Neighbors thus represent a convergence of interests among consumers, the state, and one of the largest and most powerful technology companies. Homeowners can protect their property; police have access to previously difficult-to-reach surveillance content; Amazon profits. 

Ring exemplifies the phenomenon of what we term surveillance deputization: when ordinary people use their labor and economic resources to engage in surveillance activities on behalf of the state. Our analysis of the historical development and contemporary forms of surveillance deputization demonstrates that the phenomenon shows no signs of abating, as states continue to implore people to watch and report on one another. Despite its prevalence, sociolegal scholarship has rarely examined surveillance deputization as a coherent phenomenon, and it remains an underspecified mechanism of state power. The case of surveillance deputization illustrates broader forces at play, including neoliberal privatization of state functions, the cultivation of risk and fear, and the interplay between law, technology, and privacy. It also sheds new light on core themes and debates in law and society literature, including legal consciousness, legal mobilization, and legal ambiguity—concepts which consider how ordinary people make sense of ambiguous and rapidly changing legal and quasi-legal contexts. Therefore, we articulate a theoretical framework of surveillance deputization rooted in a law and society approach, describing how it functions, what motivates participation, its implications, and how it intersects with state and corporate interests. We offer four hypotheses about its dynamics and implications: (1) the interest convergence hypothesis; (2) the legal institutionalization hypothesis; (3) the technological mediation hypothesis; and (4) the social stratification hypothesis. 

Our hypotheses draw upon several key themes in the law and society literature. First, surveillance deputization represents a case in which ordinary people must contend with both an ambiguous legal environment and a new suite of technological capabilities. Future law and society scholarship might continue to examine this interplay between lay people's understanding of law and legal rights as they implement new tools that in turn support functions typically relegated to the state. Our hypotheses also invoke concepts of legal mobilization, when both private companies and private individuals actively leverage surveillance to obtain quasi-legal outcomes or aid in legal processes, exposing unclear meanings of the law in the digital, platformed age. Finally, our analysis directly engages law and society scholarship with studies of technology. As we show, the networked, data-intensive technologies that have become the infrastructure of everyday life—like smartphones, Internet of Things (IoT) sensors, software, and digital platforms—are both intensifying and transforming these practices (Ferguson, 2017; Murakami Wood & Monahan, 2019). Our analysis shows how these new devices and capabilities benefit the interests of both the user and the state; they allow more expansive and invasive surveillance capabilities as technology evolves; they allow governments to evade privacy-protective legal constraints; and, while they have the potential to further marginalize vulnerable groups, they can potentially be used to turn the lens back on the state itself. 

Although this article focuses on surveillance deputization, we hope the framework and empirical hypotheses detailed below spur sociolegal work on questions of how the law deals with technological change, how ordinary people make sense of and contribute to the workings of the legal system, and continuities and changes in the practice of policing and in legal institutions. We begin with a brief social history of surveillance deputization, then explain our analytic and theoretical approach, including the literatures we draw from and the empirical examples we provide. We then move to a discussion of our four hypotheses, laying the groundwork for testable propositions in future empirical work. We close by encouraging scholars to continue to examine whether and how the acceleration of surveillance deputization augments the scope of state surveillance, intensifies the effects of surveillance on marginalized populations, and opens opportunities for collective resistance.

22 February 2022

Australian IoT

'Consumer IoT and its under-regulation: Findings from an Australian study' by Diarmaid Harkin, Monique Mann and Ian Warren in (2022) Policy & Internet comments 

The expansive growth of consumer internet of things (IoT) has created a range of concerns around privacy, security, and their broader societal impacts. This article reports on findings from interviews with 32 key stakeholders from the fields of information security, policy and regulation, the IoT industry, consumer and privacy law, and academia in Australia. It details a broad variety of issues and concerns that go beyond the well-recognised issue of privacy or the technical standards of information security, to encompass a wider set of issues regarding the implications for vulnerable communities, the environment and industrial standards of IoT production. Most key stakeholders expressed the view that more robust regulation is required in Australia, but no clear regulatory priority or strategy was identified by our sample. The implications of these findings for further regulation of consumer IoT and future regulatory strategies are considered.

The authors argue 

The popularity of consumer internet of things (IoT) devices for home use continues to grow globally. Available estimates suggest that 854 million units of ‘smart home devices’ were shipped worldwide in 2020 (IDC cited in Business Wire, 2020), and most projections indicate further expansion in the foreseeable future (e.g., Data Bridge Market Research, 2020; Mordor Intelligence, 2020; Report Crux Market Research, 2020). Consumer IoT is on the rise, with more households having more devices, and a flourishing industry is placing internet-connected features into a wider variety of everyday domestic goods. 

Consumer IoT is becoming entrenched in the community along three dimensions. First, there is an increasing breadth of the user-base with more individuals and groups encountering or deploying consumer IoT devices (Rainie & Anderson, 2017). Second, the volume and types of data collected by consumer IoTs are expanding and creating deepening ‘data-troves’ on personal activity (Ranger, 2020). Third, IoTs are used in a diversifying set of circumstances for an increasing array of functions such as assistance with home health, particularly for people with disability or the aged (Domingo, 2012; Metcalf et al., 2016). This creates numerous and increasing possibilities for IoT to provide valuable services for those in need, but also creates a range of potentially damaging impacts, unexpected harms, and a variety of legal, ethical and regulatory dilemmas around the questions of security, privacy and consent. 

This paper explores some of the issues created by the rapid growth of consumer IoT and their perceived under-regulation in the Australian context. While there is commentary about the threats posed by the growth of IoT from scholars (e.g., Manwaring, 2017a), civil society groups (McSherry, 2015) and cyber-security professionals (see e.g., Herold, 2015), this paper adds to these debates by comprehensively mapping a broad range of perspectives about consumer IoT. By documenting empirical data from in-depth interviews with 32 key stakeholders and subject-matter experts in Australia, it will be shown that: (a) there are significant concerns about the growth and proliferation of consumer IoT; (b) these concerns focus on a multitude of issues that range from worries about the cyber-security of IoT devices to their impacts upon the environment; (c) many feel that consumer IoT is under-regulated in a way that is likely to lead to ongoing and foreseeable negative impacts; but (d) there is no clear consensus on how to regulate, what to regulate or the regulatory priorities. 

It will thus be shown that consumer IoT presents a unique confluence of issues that create a knot of regulatory problems. To demonstrate this argument, the paper will proceed over five parts. First, a brief overview of the current regulatory context for consumer IoT in Australia is provided, describing the environment that many experts regard as ‘under-regulated’. Second, the methods of this study are outlined describing how the interviews were conducted and with whom. Third, the key findings from the interviews are presented around the core concerns of the rapid growth in consumer IoT and the issues this presents for consumers, in addition to the broader impacts on areas such as the environment. This includes a discussion of how the majority of interview participants argued that consumer IoTs are, at present, under-regulated in Australia, but also expressed no clear unanimity of what the regulatory priorities should be. Finally, the implications of these findings for future regulatory efforts in Australia are unpacked.

20 July 2021

Memory

'Amazon Echo Dot or the Reverberating Secrets of IoT Devices' by Dennis Giese and Guevara Noubir in Proceedings of the Conference on Security and Privacy in Wireless and Mobile Networks, Abu Dhabi, United Arab Emirates, June 28–July 2, 2021 (WiSec ’21) comments 

Smart speakers, such as the Amazon Echo Dot, are very popular and routinely trusted with private and sensitive information. Yet, little is known about their security and potential attack vectors. We develop and synthesize a set of IoT forensics techniques, apply them to reverse engineer the hardware and software of the Amazon Echo Dot, and demonstrate its lacking protections of private user data. An adversary with physical access to such devices (e.g., purchasing a used one) can retrieve sensitive information such as Wi-Fi credentials, the physical location of (previous) owners, and cyber-physical devices (e.g., cameras, door locks). We show that such information, including all previous passwords and tokens, remains on the flash memory, even after a factory reset. This is due to the wear-leveling algorithms of the flash memory and lack of encryption. We identify and discuss the design flaws in the storage of sensitive information and the process of de-provisioning used devices. We demonstrate the practical feasibility of such attacks on 86 used devices purchased on eBay and flea markets. Finally, we propose secure design alternatives and mitigation techniques.

19 November 2020

Wearables

'Wearable Sensor Technology and Potential Uses Within Law Enforcement: Identifying High-Priority Needs to Improve Officer Safety, Health, and Wellness Using Wearable Sensor Technology' by Sean E. Goodison, Jeremy D. Barnum, Michael J. D. Vermeer, Dulani Woods, Siara I. Sitar, Shoshana R. Shelton and Brian A. Jackson (RAND, 2020) asks 

How do WSTs intersect with law enforcement interests, both for the individual officer and the agency? What are the specific challenges that WST presents for data privacy, ownership, and the public? What are the salient issues associated with WST, and what are specific ways to address them? Many wearable sensor technology (WST) devices on the market enable individuals and organizations to track and monitor personal health metrics in real time. These devices are worn by the user and contain sensors to capture various biomarkers. Although these technologies are not yet sufficiently developed for law enforcement purposes overall, WSTs continue to advance rapidly and offer the potential to equip law enforcement officers and agencies with data to improve officer safety, health, and wellness. 

The report reflects a workshop by RAND and the Police Executive Research Forum for the US National Institute of Justice on the current state of WST and how it might be applied by law enforcement organizations. Workshop participants discussed possible issues with acceptance of WST among members of law enforcement; new policies that will be necessary if and when WST is introduced in a law enforcement setting; and what data are gathered, how these data are collected, and how they are interpreted and used. 

 RAND's key findings are that 

  • Current WSTs are not sufficiently developed for law enforcement purposes overall. 
  • Commercial devices, although inexpensive and portable, lack the accuracy and precision needed to inform and support decisionmaking. 
  • WSTs used in medical settings, although capable of excellent accuracy and precision with high-quality data, are cost-prohibitive for wide distribution and are not portable. 
  • The short-term focus should be on preparing for a time when technology will be more applicable to law enforcement roles. The key is to obtain buy-in among law enforcement officers now — not for current technology, but for devices developed in the future and possible downstream effects on the field as WSTs are deployed to support officer safety and wellness, workforce retention, liability, and other issues. 
  • The intersection between WST and law enforcement is currently defined by uncertainty. The applicability of WST to law enforcement will be proportionate to how well the technology can reliably inform decisions about an officer's daily activities. 
  • Devices need to seamlessly integrate with the technology that law enforcement already carries, measures need to be valid and reliable, interpretation of the data needs to be clear, and policies need to be in place for managing and monitoring the data. 
  • Now is the time for law enforcement to participate in the process of developing WSTs. Law enforcement specifications for WSTs might not match the commercial industry standard, so law enforcement needs to talk to — and be heard by — WST manufacturers. 

The consequent recommendations are

  • Officers should be educated about the multiple uses and purposes of WST. 
  • Pilot testing should be conducted, and feedback should be collected on experiences. Outcome measures should be identified early in the process. 
  • Policies and processes for when and why data may be shared should be developed and implemented. 
  • A sequenced or phased approach should be developed for taking validated technology to the field for scaled evaluations. 
  • Individual baselines should be established to account for differences among individuals. 
  • The state of the research should be monitored, and law enforcement and public expectations should be managed. 
  • A set of best practices should be defined for consumer wearable devices. 
  • Data should be encrypted at each layer, and end-to-end encryption should be employed. 
  • Guidance and education about how to interpret data and metrics should be developed for WST users.

17 November 2020

Internet of Bodies

The Internet of Bodies: Opportunities, Risks, and Governance (RAND, 2020) by Mary Lee, Benjamin Boudreaux, Ritika Chaturvedi, Sasha Romanosky and Bryce Downing comments 

A wide variety of internet-connected “smart” devices now promise consumers and businesses improved performance, convenience, efficiency, and fun. Within this broader Internet of Things (IoT) lies a growing industry of devices that monitor the human body, collect health and other personal information, and transmit that data over the internet. We refer to these emerging technologies and the data they collect as the Internet of Bodies (IoB) (see, for example, Neal, 2014; Lee, 2018), a term first applied to law and policy in 2016 by law and engineering professor Andrea M. Matwyshyn (Atlantic Council, 2017; Matwyshyn, 2016; Matwyshyn, 2018; Matwyshyn, 2019).  

IoB devices come in many forms. Some are already in wide use, such as wristwatch fitness monitors or pacemakers that transmit data about a patient’s heart directly to a cardiologist. Other products that are under development or newly on the market may be less familiar, such as ingestible products that collect and send information on a person’s gut, microchip implants, brain stimulation devices, and internet-connected toilets. 

These devices have intimate access to the body and collect vast quantities of personal biometric data. IoB device makers promise to deliver substantial health and other benefits but also pose serious risks, including risks of hacking, privacy infringements, or malfunction. Some devices, such as a reliable artificial pancreas for diabetics, could revolutionize the treatment of disease, while others could merely inflate health-care costs with little positive effect on outcomes. Access to huge torrents of live-streaming biometric data might trigger breakthroughs in medical knowledge or behavioral understanding. It might increase health outcome disparities, where only people with financial means have access to any of these benefits. Or it might enable a surveillance state of unprecedented intrusion and consequence. There is no universally accepted definition of the IoB. For the purposes of this report, we refer to the IoB, or the IoB ecosystem, as IoB devices (defined next, with further explanation in the passages that follow) together with the software they contain and the data they collect. 

An IoB device is defined as a device that

• contains software or computing capabilities 

• can communicate with an internet-connected device or network 

and that satisfies one or both of the following: 

• collects person-generated health or biometric data 

• can alter the human body’s function. 

The software or computing capabilities in an IoB device may be as simple as a few lines of code used to configure a radio frequency identification (RFID) microchip implant, or as complex as a computer that processes artificial intelligence (AI) and machine learning algorithms. A connection to the internet through cellular or Wi-Fi networks is required but need not be a direct connection. For example, a device may be connected via Bluetooth to a smartphone or USB device that communicates with an internet-connected computer. Person-generated health data (PGHD) refers to health, clinical, or wellness data collected by technologies to be recorded or analyzed by the user or another person. Biometric or behavioral data refers to measurements of unique physical or behavioral properties about a person. Finally, an alteration to the body’s function refers to an augmentation or modification of how the user’s body performs, such as the cognitive enhancement and memory improvement provided by a brain-computer interface, or the ability to record whatever the user sees through an intraocular lens with a camera. 

IoB devices generally, but not always, require a physical connection to the body (e.g., they are worn, ingested, implanted, or otherwise attached to or embedded in the body, temporarily or permanently). Many IoB devices are medical devices regulated by the U.S. Food and Drug Administration (FDA). Figure 1 depicts examples of technologies in the IoB ecosystem that are either already available on the U.S. market or are under development. 

Devices that are not connected to the internet, such as ordinary heart monitors or medical ID bracelets, are not included in the definition of IoB. Nor are implanted magnets (a niche consumer product used by those in the so-called bodyhacker community, described in the next section) that are not connected to smartphone applications (apps), because although they change the body’s functionality by allowing the user to sense electromagnetic vibrations, the devices do not contain software. Trends in IoB technologies and additional examples are further discussed in the next section. 

Some IoB devices may fall in and out of our definition at different times. For example, a Wi-Fi-connected smartphone on its own would not be part of the IoB; however, once a health app is installed that requires connection to the body to track user information, such as heart rate or number of steps taken, the phone would be considered IoB. Our definition is meant to capture rapidly evolving technologies that have the potential to bring about the various risks and benefits that are discussed in this report. We focused on analyzing existing and emerging IoB technologies that appear to have the potential to improve health and medical outcomes, efficiency, and human function or performance, but that could also endanger users’ legal, ethical, and privacy rights or present personal or national security risks. 

For this research, we conducted an extensive literature review and interviewed security experts, technology developers, and IoB advocates to understand anticipated risks and benefits. We had valuable discussions with experts at BDYHAX 2019, an annual convention for bodyhackers, in February 2019, and DEFCON 27, one of the world’s largest hacker conferences, in August 2019. In this report, we discuss trends in the technology landscape and outline the benefits and risks to the user and other stakeholders. We present the current state of governance that applies to IoB devices and the data they collect and conclude by offering recommendations for improved regulation to best balance those risks and rewards.

30 July 2020

Government Access to Vehicle Data

The Australian National Transport Commission (NTC) discussion paper Government access to vehicle-generated data comments
 Vehicles are increasingly capturing a range of useful data about the road environment, the vehicle itself and the way it is used. The vehicle industry is also rapidly expanding the capability of vehicles to connect and share data. This could provide a new opportunity for governments to improve their transportation systems through access to and use of this new data. If government access is not considered in a nationally consistent way, governments risk creating a fragmented, overly burdensome or low-access data environment. 
Australia’s transport agencies have identified that this new data will be important for operating more dynamic and responsive transportation systems. This new vehicle-generated data has the potential to improve road safety, optimise the road network and better inform network planning. For this report we have defined vehicle-generated data as any data generated by a vehicle that produces information about the vehicle, the environment around the vehicle or the use of the vehicle. 
The purpose of this project is to develop policy options for government access to and use of vehicle-generated data for the purposes of road safety, network operations, investment, maintenance and planning. 
The purpose of this discussion paper is to:
  • discuss our understanding of key issues and challenges arising from government access to vehicle-generated data 
  • seek views on the opportunity statement and problem statements contained in this paper  
  • seek views on options that could address these problems. 
What are the opportunities and benefits? 
Improved road safety has been identified as a key need that could be addressed through greater access to vehicle-generated data. For example, sharing and reporting of traffic safety events could use vehicles to detect and warn occupants about dangerous road conditions, allowing transport agencies to respond more rapidly to incidents. It is also the area with the highest willingness among vehicle manufacturers to share data. Unlike other types of data such as vehicle movement data, this data is unique to the vehicle and cannot be as easily replicated from other data sources. 
To better understand the needs of transport agencies and industry, we hosted several co-design workshops. The workshops generated 23 use cases identifying different potential uses for vehicle-generated data. Transport agencies identified significant potential benefits for road safety, network planning and optimisation. Transport agencies saw that this data could better inform decision making and reduce road trauma. However, the detailed benefits and costs of these use cases are still unknown. We also found that further data requirement and business case development is needed on priority uses for vehicle-generated data. Further collaboration between industry and government to better understand the potential benefits and costs would be highly beneficial to achieve this and is strongly supported by stakeholders. 
Australia does not lag significantly behind international jurisdictions in government access to vehicle-generated data; however, there are early international collaboration efforts that Australia can learn from. Key among these is the European Union’s memorandum of understanding between government and industry on the exchange of vehicle-generated data to support eight safety-related use cases. 
What are the barriers and gaps? 
Vehicle-generated data can be costly to generate, carry, store and use, and can reveal sensitive information about users and businesses. Much of this data is not stored, broadcast or shared. There is currently a low market penetration of vehicles that can share this data. The key barriers to government access include:
  • There is no compelling reason or incentive for generators of vehicle-generated data to provide this information to transport agencies (with the exception of the road access, safety and productivity benefits provided to heavy vehicle operators through regulatory telematics). 
  • There are trust, cost and operational barriers to the exchange of vehicle-generated data and, outside of heavy vehicles, there is no data access framework to address these issues. 
  • In comparison with international markets, there are currently fewer vehicles capable of capturing and communicating vehicle-generated data on Australia’s roads, with only market-based mechanisms to encourage uptake. 
What are the opportunities and problems? 
We have identified one key opportunity for government to access vehicle-generated data:
1. There is an opportunity for stakeholder collaboration on exchange or sharing of vehicle-generated data for road safety purposes to understand:
  • what vehicle-generated data can be used to support road safety in Australia 
  • what an appropriate framework and forum might look like to support such an exchange. 
We have identified three problems that we will need to overcome to create wider government access:
1. Vehicle-generated data is currently not provided to transport agencies for purposes that may have publicly beneficial outcomes. This could be due to current vehicle capabilities and/or a lack of incentive or reason for industry and road users to provide the data (the exception to this being heavy vehicles enrolled in a current regulatory access or compliance scheme under the Heavy Vehicle National Law). 
2. There is a lack of a data access framework to provide the necessary trust, data exchange systems, data standards/definitions, understanding of data needs and governance to establish data access and use (the exception to this being heavy vehicles enrolled in a current regulatory access or compliance scheme). 
3. The level of uptake and penetration of connectivity across the Australian vehicle fleet may delay the benefits of vehicle-generated data, particularly related to safety-critical events.
What are we proposing to address the opportunity and problems? 
Recognising that vehicle-generated data is still at the nascent stage of development in Australia and that stakeholders remain unclear on priorities, there is an opportunity for governments to adopt a new policy approach. We propose that a new collaboration between industry and governments begin with a focus on road safety data as the priority and common mission. This approach is in line with the European Union’s approach and has early consensus from industry and government. We propose: For future development on government access to vehicle-generated data, road safety is the priority for exchanging vehicle-generated data between industry and government. Industry and government should collaborate on identifying opportunities for exchanging road safety data and adopt a principle of non-commercial sharing or exchange. 
We have identified three options to address problems 1 and 2, which are:
Option 1: Rely on existing arrangements between government and industry, with no changes to existing legislation or frameworks. 
Option 2: Establish a data exchange partnership between industry and government that will identify opportunities for exchanging vehicle-generated data as well as develop standards and consider proof of concept. 
Option 3: Introduce new legislation requiring industry to collect, store and retain vehicle-generated data while providing access to government.
The NTC’s preliminary preferred option is option 2. We believe this option can provide the best opportunity for government to better understand how to maximise the potential benefits and opportunities of vehicle-generated data while actively collaborating with industry. This option has received general early support from government and industry. 
To address problem 3 – a lack of stimulus to bring forward vehicle connectivity – we are proposing that the Commonwealth considers the costs, benefits and system requirements to require vehicles to send automated crash notification system messages and have these received and actioned. Europe has achieved this through introducing its eCall system. This would bind all vehicles to a capability to send data messages over private networks. This proposal could be enacted through the Commonwealth government considering adoption of international standards into an Australian Design Rule (with consequential amendments to the relevant state and territory in-service vehicle standards) and infrastructure and capability to receive and use emergency notification messages. This would result in a significant increase in the fleet penetration of connected vehicles in Australia. 
List of questions 
1 Do our problem and opportunity statements accurately define the key problems to be addressed, and do they capture the breadth of problems that would need to be addressed? 
2 In our table, have we accurately captured all the regulatory and legislative mechanisms government could currently use to access vehicle-generated data?
3 Are there other major local or international jurisdictional developments providing further access powers or arrangements for vehicle-generated data?
4 Do you agree with our assumptions on the currently low uptake and limited availability of technology that supports the generation of vehicle data and that there are few and limited current government access arrangements for vehicle-generated data?
5 What issues do you believe will be created if ExVe is adopted and that would need to be considered in Australia? 
6 Is there value in establishing a national data aggregator or trust broker? Could good data definitions, practices and cooperation between entities achieve the same outcome? 
7 Can you provide us with more information on either the costs or benefits for government access to vehicle-generated data for the use cases listed in Appendix B? 
8 Are there relevant international standards that should be adopted for vehicle-generated data? Are there any standards that could be locally developed? 
9 Have we accurately described the key barriers to accessing vehicle-generated data? Are there additional barriers? 
10 Do you agree that road safety data should be considered the priority purpose for which we seek to exchange data with industry? 
11 What are the key data needs of transport agencies beyond those already identified? 
12 What further benefits from vehicle-generated data should be considered? 
13 We contend that a prioritised starting point should be established from which data for other purposes can be further developed. Are there other approaches that could achieve this? 
14 Do you agree with the analysis presented in Table 7? What other opportunities are there for vehicle-generated data, and why? 
15 Have priorities changed for land transport policy and for data access from vehicles with the onset of COVID-19? 
16 Should road safety be adopted as the priority for developing use cases for government use of vehicle-generated data? If not, what other approach should Australia take? 
17 Can data other than for the purposes of road safety be exchanged on non-commercial terms? 
18 Does the NTC’s preferred approach (option 2) best address the problems we have identified? If not, what approach would better address these problems? 
19 Does the NTC’s proposed approach best address the problems we have identified? If not, what approach would better address these problems?

30 May 2020

IoT Chameleons

'De-camouflaging Chameleons: Requiring Transparency for Consumer Protection in the Internet of Things' by Rónán Kennedy in (2019) 10(1) European Journal of Law and Technology comments
Information and communications technology (ICT) and the development of the so-called 'Internet of Things' (IoT) provide new and valuable affordances to businesses and consumers. The use of sensors, software, and interconnectivity enable very useful adaptive capabilities. However, the rapid development of so-called 'smart devices' means that many everyday items, including software applications, are now impenetrable 'black boxes', and their behaviours are not fixed for all time. They are 'chameleon devices', which can be subverted for corporate deceit, surveillance, or computer crime. While aspects of the IoT and privacy have been discussed by other scholars, this paper contributes to the literature by bringing together examples of digital devices being surreptitiously diverted to purposes undesired by the consumer, reconceptualising these in the context of Foucauldian governmentality theory, and setting out a variety of proposals for law reform.
Kennedy argues
Information and communications technology (ICT) and the development of the so-called 'Internet of Things' (IoT) provide new and valuable affordances to businesses and consumers. The use of sensors, software, and interconnectivity (marketed as 'smartness') provide digital devices with very useful adaptive capabilities. The rapid development of so-called 'smart devices' means that many everyday items are now impenetrable 'black boxes'. However, unlike non-computerised devices, their behaviours are not fixed for all time, and they can be subverted for corporate deceit, surveillance, or computer crime. They become 'chameleon devices', hiding in plain sight. 
While aspects of the IoT and privacy have been discussed by other scholars, this paper contributes to the literature by highlighting the lack of consumer awareness of, and legal protection against, the unauthorised re-purposing of data by end-user devices. It presents examples of digital devices being surreptitiously diverted to purposes undesired by the consumer, placing these in the context of Foucauldian governmentality theory, and setting out a variety of proposals for European law reform, aiming at ensuring that Internet of Things devices operate in a moral, ethical, and legal fashion that is in keeping with public policy goals. Its key contribution is the notion of IoT devices as chameleons - capable of changing their behaviour and appearance to fit in with their surroundings but with an agency and agenda other than what they seem to be, whether that is at the behest of their manufacturer, law enforcement and security services, or criminals.
It explores two case studies which highlight different aspects of this developing phenomenon. First, the scandal surrounding Volkswagen's purported low-emissions diesel cars demonstrates the extent to which regulated entities can invade privacy by enrolling individuals in a massive corporate fraud. Second, the monitoring capacities of many Internet-connected devices provide new opportunities for surveillance. The weak security, lack of industry capacity, and widespread adoption of IoT devices mean that end-users are becoming particularly vulnerable to identity theft or to unwittingly providing infrastructure for criminality. This article places these troubling developments in the context of Foucauldian governmentality theory, demonstrating that each is an example of 'resistance' to the development of new means of power through ICT. It highlights how the capacity of ICT to bring together information across time and space also enables manufacturers, state actors, and criminals to act across these dimensions in ways that were hitherto impossible, maintaining or obtaining a degree of control over devices long after they are sold. It builds on existing literature on 'Foucault in Cyberspace', updating Boyle's critique of technological libertarianism for the Internet of Things and taking into account Cohen's proposals for the development of a new regulatory state. It connects this to the often under-appreciated issues that arise when regulation depends, to an ever-increasing degree, on technical standards and the expanding legal protections for trade secrets.
A new challenge posed by the IoT is how to respond to 'chameleon devices' which change their behaviour in response to external conditions. Existing literature has accepted the inevitability of IoT-related privacy breaches, been largely descriptive, or proposed only moderate reform that allows the market to continue to innovate. However, the article adopts Shaw's more radical critique of market-driven post-humanism as something which must be restrained, and builds on this to outline proposals for reform which would better protect the interests of consumers in an increasingly digitally-intermediated society. 
It therefore puts forward three possible responses: global labelling standards that clearly indicate transparency and privacy protections to consumers; mandatory open source in some instances or code escrow in others; and licensing requirements for software engineers. It explores in detail the extent to which certain provisions of the General Data Protection Regulation could assist with these proposals: the requirement in Articles 13 (2) (f), 14 (2) (g) and 15 (1) (h) that those subject to automated decision-making, including profiling, be provided with 'meaningful information about the logic involved'; the possibility under Article 12 (7) that this information 'be provided in combination with standardised icons in order to give in an easily visible, intelligible and clearly legible manner a meaningful overview of the intended processing'; and the support which Article 42 gives for the development of data protection seals and marks.
However, it highlights the limitations of these legislative provisions, particularly due to the recognition of the rights to trade secrets or intellectual property under recital 63. It therefore closes with recommendations for further reform of the law in this area that will assist in de-camouflaging the ever more present chameleon devices in our midst.

17 April 2020

Haptics and consent

'Teledildonics and rape by deception' by Robert Sparrow and Lauren Karas in (2020) Law, Innovation and Technology 1-20 comments
It is now possible to buy sex toys that connect to the user’s phone or computer via Bluetooth and can be controlled remotely. The use of such Internet-enabled haptic sex toys involves an ineliminable risk of being deceived about particular features of one’s sexual partner and/or about which person one was having ‘sex’ with. Where this occurs, it is possible that the user would become the victim of rape-by-deception. We argue that determining whether a person using an Internet-enabled haptic sex toy has been raped or not when they are involved in a sexual encounter with someone – or something – other than that they intended requires us to confront difficult questions about the definition and significance of sexual intercourse and about the nature and harm of rape. Our discussion of these topics suggests that the use of such devices is more ethically fraught than has been appreciated to date. 
 The authors argue
The search for new or improved sexual pleasures plays a significant – if often under acknowledged – role in driving the development of new technologies. It was perhaps inevitable, then, that progress in ‘haptics’ – the science and technology of the transmission of touch – would spark interest in the development of haptic sex toys.  Consequently, it is now possible to purchase a number of sex toys that transmit touch and physical sensation via the Internet.   
In this paper, we want to reflect on these technologies and some of the ethical and philosophical questions they raise for two reasons. First, given the popularity of ordinary sex toys, and the sexual opportunities and communities made possible by the Internet, it is reasonable to assume that large numbers of people will experiment with these new remote-controlled and interactive sex toys and that a significant number will use them regularly. Any ethical and/or philosophical issues they raise are thus of interest simply by  virtue of the number of people they might affect. Second, the use of such devices seems to involve a not-insignificant risk of users being deceived about the identity of the person with whom they are having ‘sex’. As we shall see below, it is possible that in such cases the user would become the victim of ‘rape by deception’. Until this issue can be resolved, the design and manufacture of teleoperated and remote-controlled sex toys involves profound moral hazards. 
While existing products fall short of allowing fully immersive ‘cybersex’, it seems likely that in the not-too-distant future devices that transmit a larger range of genital sensations, which we shall call Internet-enabled haptic sex toys (henceforth IEHSTs), will be developed. In order to bring the philosophical questions that interest us into stark relief and to reduce the risk of our discussion being rendered obsolete by technological progress, we shall for the most part discuss the issues raised by the use of such IEHSTs. We suggest that determining whether a person using an IEHST has been raped or not, when they are involved in a sexual encounter with someone – or something – other than that they intended, requires one to confront difficult questions about the definition and significance of sexual penetration, what counts as consent, and the nature and harm of rape. Our discussion of these topics will draw upon the academic literature on the philosophy of sex, philosophical and feminist discussions of the nature of rape, and – in particular – the literature on rape by deception. We argue that if one allows that IEHSTs enable sexual intercourse via the Internet then they will involve a significant risk of rape by deception. This risk implies that the use of such devices would be – and, we suggest, the use of existing devices is – more ethically fraught than has been appreciated to date. We also hope, throughout our discussion, to show how thinking about IEHSTs offers a valuable opportunity to gain new insights into some old questions in the philosophy of sex. 
An important limitation of our discussion is that we are only concerned with the ethical and philosophical issues that are raised by the risk of rape involved in the use of these devices. As will become abundantly clear in the discussion that follows, it seems likely that even if one concludes that these devices do not involve a risk of rape, they do involve a significant risk of sexual assault, which might itself be enough to raise ethical red flags about their design and use. However, because of the length and philosophical complexity of our investigation of the risk of rape involved, this further set of questions must remain a topic for future investigations. 
Relatedly, we have not tried to settle the question of how IEHSTs should be regulated here for a number of reasons. First, as we hope our discussion demonstrates, the ethical and conceptual questions arising from the possibility of deception involved in cybersex are complex and profound. It is, we shall argue, plausible to hold that the use of IEHSTs may expose users to a risk of rape and also make it easier for malicious actors to rape people. However, it also seems likely that these devices will be popular with a class of potential users who do not share the philosophical commitments that suggest that deception in the context of the use of IEHSTs can constitute rape. Before we can decide whether – or how – we should regulate IEHSTs, then, it is important to consider the philosophical questions we address below. Moreover, second, even if one wished to regulate to minimise the risks posed by these devices, the social acceptability, and thus the effectiveness, of such regulation is likely to depend in part on whether the regulations track people’s intuitions about the nature and significance of the wrong done by those who misuse IEHSTs in various ways. Again, then, at a bare minimum we need to know if – and when – the wrong might be rape. Third, developing good regulation in this area would require paying attention to various pragmatic and technical matters (How hard would it be to identify those who hacked into such devices? Would it be possible to ensure that minors could not access them? What means might be available to prosecute people misusing these devices across national borders?) that are beyond the bounds of our expertise. For these reasons, we have chosen to leave the question of appropriate regulation to future investigators, but hope that, by clarifying the underlying ethical and conceptual issues, our own work will make their task easier. 
In the first section of the paper we provide a brief account of the existing range of remote-controlled and interactive sex toys, as well as the likely future of this technology, and explain what we shall understand by IEHSTs for the purposes of the current investigation. The second section of the paper outlines the prima facie case for allowing that the use of IEHSTs would constitute sex rather than masturbation. In the third section, we introduce the idea of ‘rape by deception’ and discuss some of the ways it challenges our intuitions regarding rape in other contexts. The fourth section argues that the use of IEHSTs would involve an ineliminable risk of being deceived about particular features of one’s sexual partner and/or about the identity of the person with whom one was having sex. In the fifth section, we consider a number of hypothetical scenarios designed to draw out the implications of such deception for the use of IEHSTs on the assumption that they do enable individuals who are separated by distance to have sex. The sixth section discusses some possible objections to our treatment of the cases in the previous section. In the seventh section, we assume, for the sake of argument, that the use of IEHSTs only involves masturbation and discuss a number of further hypothetical scenarios intended to draw out the implications of deception on this account. The eighth section considers the question of the reasonableness of beliefs about consent in the context of cybersex involving IEHSTs. We conclude by considering the implications of our discussion for the ethics of the use and design of IEHSTs now and in the future.