03 December 2014

Participatory Sensing

'Participatory Sensing: Enabling interactive local governance through citizen engagement' [PDF] by Slaven Marusic, Jayavardhana Gubbi, Helen Sullivan, Yee Wei Law and M. Palaniswami of the Department of Electrical and Electronic Engineering at the University of Melbourne argues that
Local government (such as the City of Melbourne) is accountable and responsible for establishment, execution and oversight of strategic objectives and resource management in the metropolis. Faced with a rising population, Council has in place a number of strategic plans to ensure it is able to deliver services that maintain (and ideally improve) the quality of life for its citizens (including residents, workers and visitors). This work explores participatory sensing (PS) and issues associated with governance in the light of new information gathering capabilities that directly engage citizens in collecting data and providing contextual insight that has the potential to greatly enhance Council operations in managing these environments. 
Their paper examines:
  • Key hurdles affecting the viability and uptake of PS from different stakeholder perspectives 
  • The capacity of PS as a new and mutually beneficial communication link between citizens and government; the respective value propositions in participating, whilst simultaneously increasing engagement and enhancing City operations through co-production with citizens
  • Technological elements of PS and associated privacy impacts through the application lens of noise monitoring
  • Social impacts of emerging pervasive technologies, particularly the encroachment upon privacy, associated risks and implications, not only for the individual but also the impact in shared environments
  • Responsibilities and avenues for mitigation assigned to respective stakeholders; including user awareness factors, policy frameworks and design level strategies
  • The role of reputation and trust management between stakeholders in fostering productive links, along with the capacity for citizen empowerment
  • The balance of perceived competing objectives of privacy and transparency, ethical strategies to address this challenge
  • User perceptions of related issues taken from studies of internet and social media usage through computing and mobile platforms
  • A development platform to measure user awareness of privacy risks, behavioural responses to a spectrum of engagement options effectively returning to the user a level of control over their participation 
  • Essential requirements for responsible implementation of PS platforms, considering ethical issues, responsibilities, privacy, transparency and accessibility 
The key findings are -
Participatory sensing
• The active role of the user is critical for the success of PS, requiring effective engagement, but also mitigation of disincentives, such as privacy concerns. As privacy risks increase in the context of multiplied information sources, despite available privacy preservation strategies, user awareness and control remain key elements.
• Establishment and management of mutual trust is key to PS functioning as an effective medium for communication between stakeholders and for ensuring accountability.
• Citizen empowerment is only achieved with the provision of information to assist individual decision-making, as well as the opportunity for responsibility and control over level of participation.
• Incentivisation schemes need to recognise the value of the data/service being supplied by the user, the accessibility of the organiser-provided service and ongoing value propositions …
Privacy vs transparency
• Privacy is one’s control over access and flow of their information. Legal protections are limited to specific circumstances, so ethical means for supplementary privacy protection offer transparency with respect to data accuracy and embed privacy in the design. System transparency and verifiable privacy measures can build trust between stakeholders.
• Social implications warrant mechanisms for managing data history. Additionally, informed user consent needs to be the goal and supported by effective communication of risks. Accordingly, users will utilise various means for protecting privacy, according to their level of awareness and evaluation of risks.
Policy frameworks
• Systems for protection of personal information are essential for maintaining/building trust between organisations and citizens. This includes discovery of breaches and recourse for compensation. Existing policies cover data collection and handling; citizen engagement strategies; and feedback management. They reflect privacy concerns; principles of open and responsive government; and the value of citizen contributions to governance.
• Limitations of privacy legislation demand supplementary principles/guidelines for system implementation, including industry self-regulation, privacy by design and privacy impact assessment (PIA). Existing and supplementary measures thus need to be utilised and adapted for effective PS.
Pilot study
• The pilot study is based on a noise measurement app and central server for data aggregation and display. The app provides a spectrum of privacy level options, to be selected by the user, that reflect the type and amount of data to be collected/shared.
• The privacy level selection interface serves to inform the user of data handling (and implies associated risk), while a refined interface can more explicitly convey this. This capability demonstrates a means for increasing user control over their level of participation.
• A larger study can expand this capability; provide detailed assessment of public participation capacity; conduct a PIA; reach broader demographics; and further evaluate the issues raised throughout this paper.
Recommendations
• Prior to embarking on PS implementations, organisations thoroughly analyse all stakeholder concerns and ensure necessary steps are taken to address these, as outlined here
• The City of Melbourne look for opportunities to test participatory sensing as a means of addressing specific community concerns in relation to noise nuisance
• Policy makers review existing policy frameworks to ensure that they offer the appropriate combination of incentives and safeguards to facilitate greater citizen involvement in addressing issues of community concern (co-production)
• Policy makers review existing organisational structures and professional cultures to identify any additional barriers to effective citizen engagement
In discussing privacy, the authors comment:
In literature associated with PS, social media platforms and now IoT, privacy concerns are listed as a significant issue. However, insufficient attention is given to the nature of these concerns, such as: the implications and risk factors of inadequate privacy protection measures; the impact on technology utilisation of users’ actual understanding of existing risk factors and any safeguards that may be available. Solutions that are provided are often limited in scope. Indeed, there is a risk that designers will focus on development of new system capabilities, neglecting the necessary ethical dilemmas by associating these challenges with data utilisation and so a task for someone else (Shilton, Participatory Sensing: Building Empowering Surveillance 2010). A survey of PS applications and associated challenges is provided in (Christin, Reinhardt, et al. 2011). Importantly, they acknowledge the lack of flexibility needed to reflect diverse viewpoints and awareness of privacy risk, implications and available mitigation strategies.
The first privacy requirement is the provision of secure communication links between the user and the platform (hosted by the service provider). Conventional data encryption methods available on mobile computing platforms (such as Secure Sockets Layer (SSL)/Transport Layer Security (TLS)) are intended to ensure that only the intended receiver of the data transmissions is able to access the contents (De Cristofaro and Soriente 2013).
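To make this concrete, the following is a minimal sketch (not taken from the paper) of how a contributing app might refuse anything other than a verified TLS connection when uploading readings; the endpoint URL and JSON fields are illustrative placeholders.

```python
# Minimal sketch: uploading a sensor reading over a verified TLS connection.
# The endpoint URL and JSON fields are illustrative placeholders.
import json
import ssl
import urllib.request

def upload_reading(endpoint: str, reading: dict) -> int:
    """POST a JSON-encoded reading, refusing any non-TLS connection."""
    if not endpoint.startswith("https://"):
        raise ValueError("Readings must only be sent over HTTPS")

    # The default context verifies the server certificate and hostname,
    # so only the intended receiver can terminate the connection.
    context = ssl.create_default_context()

    request = urllib.request.Request(
        endpoint,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, context=context) as response:
        return response.status

# Example (hypothetical endpoint):
# upload_reading("https://ps-platform.example.org/api/readings",
#                {"noise_db": 62.4, "timestamp": "2014-12-03T10:15:00Z"})
```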
One of the obvious risks associated with PS applications is the same as that of any data sharing application: the information being shared may actually reveal more about the user than is intended, or agreed upon. It is now widely accepted that users of social media platforms need to take care in the way personal information is shared or publicly displayed, particularly when utilising multiple platforms. It has been demonstrated that seemingly innocuous postings can reveal information about location, behaviour, routine, social networks and identity. This may be described as information leakage, where data crosses over to another domain or platform to reveal either something more detailed or completely new when combined with other data. Each of the sensing modalities available on smartphones presents the risk of revealing private information: daily routine (based on time-stamped location); identity (from photos, gait analysis, voice recognition); or associations (from photos and voice recognition).
Importantly, the nature of participatory sensing has the capacity to reveal information, not only about the user, but also about others in their vicinity. Therefore, in addition to personal risk of exposure, awareness also needs to be established of the secondary exposure introduced into a given environment. Providing adequate safeguards is thus a multidimensional problem (Christin, Reinhardt, et al. 2011). When the associated services are primarily location focussed, such information has real-world implications. Location can be established, with varying degrees of accuracy, from GPS signals, cell tower locations, as well as from Wi-Fi and Bluetooth links to associated infrastructure. For example, some traffic authorities have deployed infrastructure to detect Bluetooth devices of vehicle users in order to map vehicle paths and travel times. Whilst in such instances, the information is being utilised to improve traffic flow and road infrastructure services, it demonstrates the vulnerability of individuals in allowing unsecured access to their communications and the secondary information that can be extracted from doing so. These concepts have also been explored in the context of Intelligent Transportation Systems that utilise various means of vehicle identification from license plate recognition, electronic tags for tolling systems or GPS devices. The roles of different interested stakeholders are noted along with the potential for establishing personally identifiable location information and how existing US privacy law impacts such operations (Garry, Douma and Simon 2012). Similar challenges have been faced by location based services on mobile phone or computing platforms for some time (Anderson, Henderson and Kotz 2007) (He, Wu and Khosla 2004) (Shahabi and Khoshgozaran 2007).
Privacy preservation
There exists a suite of proposed solutions for preserving privacy in participatory sensing whilst still permitting a desired level of engagement in the program, with some taken directly from networked computing strategies.
The first option is to provide some degree of anonymity for the user. It must then be determined from whom anonymity is required or, rather, for the meaningful operation of the PS platform, for whom identification is required/permitted. It is widely acknowledged that users demonstrate different degrees of willingness in sharing data, depending on the relationship with the other party (or parties) involved.
Degrees of identity revelation may be classed as:
• Completely anonymous
• System organisers/network operators
  o Requires secure end-to-end communications regardless of the number of hops or network types utilised in the transmissions (e.g. Tor).
• Selected peers/participants
  o A user/organiser-defined subgroup, established based on certain criteria, such as predetermined trust or an existing community group.
• Other participants on the system
  o Identifiable to other users with access rights to the system.
• Anyone able to eavesdrop on communications links somewhere along the network
  o Notionally hidden, but unsecured.
• Open
  o Unrestricted publishing of identity linked with data contributions.
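One way such a spectrum might be represented in an implementation is as an explicit policy attached to every contribution; the sketch below is illustrative only, and the level names and helper function are assumptions rather than anything specified in the paper.

```python
# Sketch: encoding degrees of identity revelation as explicit policy levels
# that a client attaches to each contribution. Names are illustrative.
from enum import Enum

class IdentityRevelation(Enum):
    ANONYMOUS = 0          # no identifier attached at all
    ORGANISER_ONLY = 1     # pseudonym resolvable only by the system organiser
    SELECTED_PEERS = 2     # visible to a user/organiser-defined subgroup
    ALL_PARTICIPANTS = 3   # visible to any authenticated user of the system
    OPEN = 4               # identity published alongside the data

def visible_identity(level: IdentityRevelation, user_id: str, pseudonym: str):
    """Return the identifier (if any) that should accompany a contribution."""
    if level is IdentityRevelation.ANONYMOUS:
        return None
    if level is IdentityRevelation.ORGANISER_ONLY:
        return pseudonym
    return user_id
```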
Providing anonymity is not without its own implications. If not carefully designed, a system permitting anonymous contributions can be easily compromised if there is no means of verifying the quality and validity of the data. In such cases, it may leave organisers with no recourse to identify or manage misbehaving or malicious users. In this respect, the same tools that protect the privacy of the innocent also hide the identity of the malicious or criminal.
The common argument that is often posed in shifting the balance of privacy towards transparency is that some privacy must be sacrificed in order to ensure security. If this is indeed true and unavoidable, then certain questions immediately follow: to what degree must privacy be sacrificed; to whom is privacy to be sacrificed; and what level of trust can be assigned to this authority? Indeed, is it even possible to apply a threshold in trying to answer these highly debatable questions? Consideration must be given to the risk and implications associated with any subsequent abuses of this trust. So whilst a goal may be set for balancing privacy and security, according to some established value system, the viability of such thresholds only exists in as much as aggrieved parties have recourse to compensation for any breaches. Where this is implausible to guarantee, a more cautious approach would be to first question whether the emerging capability is actually increasing the degree of vulnerability of users without adequate protection and for insufficient return/reward for their participation.
From a practical perspective, a number of measures can be implemented at the design level that provide users with varying levels of protection. Some of these include:
• Masking identity (utilising independent verification methods; allowing anonymous data contribution; simulating high participant density)
• Masking location (data perturbation or reduced granularity)
• Limited data release (sharing of aggregated or filtered rather than raw data)
Essentially, such methods rely on the existence or appearance of a large number of users within the system and within the specified location (k-anonymity), so increasing the difficulty with which a single user (or their data) may be identified. In this way, the system can compensate for malicious users (e.g. supplying misleading data); unreliable users (e.g. supplying incorrect/erroneous/low-quality data); collaborating users (e.g. to uncover identity or other restricted information about other participants; bias overall measurements; manipulate user reputations).
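As an illustration of the location-masking options listed above, the sketch below shows reduced granularity (snapping a fix to a coarse grid), simple perturbation, and a server-side k-anonymity check. The cell size, offset bound and value of k are illustrative parameters, not values from the paper.

```python
# Sketch: masking location by reduced granularity (spatial cloaking) or by
# bounded random perturbation, plus a simple k-anonymity check on the server.
import random

def cloak_location(lat: float, lon: float, cell_size_deg: float = 0.01):
    """Snap a GPS fix to the centre of a grid cell roughly 1 km across."""
    cloaked_lat = round(lat / cell_size_deg) * cell_size_deg
    cloaked_lon = round(lon / cell_size_deg) * cell_size_deg
    return cloaked_lat, cloaked_lon

def perturb_location(lat: float, lon: float, max_offset_deg: float = 0.005):
    """Alternative: add bounded random noise (data perturbation)."""
    return (lat + random.uniform(-max_offset_deg, max_offset_deg),
            lon + random.uniform(-max_offset_deg, max_offset_deg))

def is_k_anonymous(cloaked_cell, reports_by_cell: dict, k: int = 5) -> bool:
    """A report only achieves k-anonymity if at least k users share its cell."""
    return len(reports_by_cell.get(cloaked_cell, [])) >= k
```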
Self-surveillance describes the personal information that is captured, stored and potentially transmitted through the complicity of individuals. To combat the increased vulnerability to privacy breaches arising from self-surveillance, and indeed the expected decrease in privacy, the concepts of Personal Data Vaults (PDV) and Personal Data Guardians (PDG) have been proposed as a means of giving users greater control over how their data is shared, whilst still making use of cloud infrastructure (Kang, et al. 2012). Self-surveillance applications relate more to measurement of biological parameters, activity and mobility. As such, whilst obviously of interest to the individual, there is also substantial interest from third parties in being able to analyse and aggregate such data to deliver population-wide insights. There is a need then to balance the usefulness of such applications to the individual with the ability to share some aspects of this data (agreeable to the user) with outside entities. The PDG serves as a trusted entity, with whom the user enters into a legal, fiduciary and confidential relationship (in a similar fashion to that with lawyers or doctors). In proposing PDGs to act as trusted intermediaries, the flow of data is slowed, thereby preserving some degree of privacy.
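A rough sketch of the PDV idea, as it might be expressed in code, is given below: raw records stay with the user, and each recipient only receives the fields and transformations a user-set policy allows. The class and field names are hypothetical and simplify the PDV/PDG proposal considerably.

```python
# Sketch of a Personal Data Vault-style gatekeeper: raw records stay with the
# user; each outside party receives only what the user's policy permits.
# Class and field names are illustrative, not from the PDV/PDG proposal itself.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SharingPolicy:
    recipient: str
    allowed_fields: set
    transform: Callable[[dict], dict] = lambda record: record  # e.g. aggregation

@dataclass
class PersonalDataVault:
    records: list = field(default_factory=list)
    policies: dict = field(default_factory=dict)   # recipient -> SharingPolicy

    def store(self, record: dict) -> None:
        self.records.append(record)                # raw data never leaves here

    def release_to(self, recipient: str) -> list:
        policy = self.policies.get(recipient)
        if policy is None:
            return []                              # no consent, nothing shared
        filtered = [{k: v for k, v in r.items() if k in policy.allowed_fields}
                    for r in self.records]
        return [policy.transform(r) for r in filtered]
```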
Despite the advantages that PDVs seemingly provide, the privacy guarantee of a system that proposes secure storage in the cloud and transmission utilising many layers of communications infrastructure is difficult to ensure, particularly in light of many well-publicised hacking and spying episodes. Two issues remain unresolved: the first is public perception of the degree of security available, whilst the second concerns user behaviour in light of the available security and privacy options and concern for associated risks. A study of user perceptions of cloud computing security and data vulnerability (Ion, et al. 2011) observed that a large degree of scepticism still exists, affecting what users will store. There is also general acceptance that existing means for presenting Terms of Service (TOS) are largely ineffective, in that they are difficult to understand or generic and thus often ignored. As many as 51% of users don’t read online privacy policies, with most perceived as too long or complex, and of those who do read privacy policies, only 37% are able to gain the necessary information to decide whether or not to use the site (OAIC, Community attitudes to privacy survey 2013). This results in mismatched expectations of users as to their rights and the actual requirements of the service provider. Further complexity is introduced by changes across demographics and culture in how privacy risk is perceived and trust of organisations assigned (Bélanger and Crossler 2011). Analysis of TOS for cloud services has also revealed a tendency for bias towards more detail regarding user obligations rather than the provider’s, with the authors recommending a legal framework (albeit in the US context) offering the user greater control and portability in the management of their data (Kesan, Hayes and Bashir 2013). Ultimately, any service guarantee is only as effective as one’s ability to detect contravening behaviour and the scope for compensation.
It must be considered whether the majority of users interact with these services naively or well-informed. The significance of such factors is often merely assumed and only interpreted in terms of the effect on the total number of users, following which a range of privacy preservation measures are proposed in order to mitigate any detrimental impact on participation rates. In (De Cristofaro and Soriente 2013), the provision of adequate privacy protection is considered the single most important factor affecting the willingness of users to contribute data. Consequently, they propose a cryptographic system for Privacy-Enhanced Participatory Sensing Infrastructure (PEPSI), based on Identity Based Encryption and third-party generation of decryption keys (private key generator). They also acknowledge challenges in protecting query privacy from organisers; node privacy from network operators; and collusion attacks.
Reputation and trust
Trust management systems may be categorized as either rule-based or reputation-based (Yang, Zhang and Roe 2012). A rule-based system applies credential matching, based on credentials (certification), chain discovery (with importance placed on associated storage locations) and trust negotiation (protecting credentials, avoiding unnecessary exposure). Reputation-based trust management characterises user behaviour by collecting, distributing and aggregating assessments of user contributions in a way that may identify malicious behaviour. They describe the different stages of trust management as establishment of initial trust; observation of behaviour; and evolution of reputation and trust. Initialisation is always problematic, as there is little on which to base an applied trust level. Approaches have been proposed that draw on community ratings of new participants; however, this does not preclude other participants from providing biased or malicious reports.
The more conservative approach will allocate a low trust level to new users, with a period of time for this reputation to be improved. The alternative is to assume trustworthiness and then downgrade the assigned trust level if warranted. In observing behaviour, anomaly detection methods may be applied to automatically detect potential misuse, by first classifying normal behaviour in a way that enables rapid detection of anomalous behaviour.
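A very simple form of such anomaly detection is to compare each new contribution against recent readings from the same area; the sketch below uses a z-score test with an illustrative threshold, standing in for whatever classifier a real deployment would use.

```python
# Sketch: flagging anomalous contributions by comparing a new reading with
# recent readings from the same area. The 3-sigma threshold and minimum
# history size are illustrative choices, not values from the paper.
from statistics import mean, stdev

def is_anomalous(new_value: float, recent_values: list,
                 z_threshold: float = 3.0) -> bool:
    """Return True if the reading deviates strongly from its local context."""
    if len(recent_values) < 5:
        return False            # too little history to judge "normal" behaviour
    mu = mean(recent_values)
    sigma = stdev(recent_values)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold
```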
The impacts of these respective decisions need to be quantified and related to the number of users in the system in order to determine the effectiveness and suitability of the associated trust management framework. This is particularly important when considering the concept of sharing reputations across communities, reputations on which other communities can then initialise their own trust levels for new users. This reaches back into questions of privacy, with identification being required across different domains. It also draws into question the reliability of trust management frameworks in one domain and their impacts on other domains. This may point to the need for a central trusted authority. Yet a distributed approach, applied with certain safeguards and agreed criteria, provides some protection against attacks on a single central repository of reputations.
Existing reputation models include: summation and average (aggregation of ratings to produce a single reputation score); discrete trust models (assigning labels to actions for ease of interpretability); and Bayesian systems (applying positive or negative ratings along with a probability distribution to determine reputation) (Reddy, Estrin and Srivastava, Pervasive 2010). The first is susceptible to bias if the number of ratings is heavily skewed towards one side, while discrete models (non-mathematical) do not inherently support statistical inference of reputation confidence. Bayesian systems, however, offer the ability to establish a measure of confidence in the reputation score, by determining the probability that the expectation of a distribution (which determines the reputation) is within a certain error margin. They also allow for weighting of new and old measurements, effectively applying a forgetting factor, to more effectively update reputations giving preference to either more recent or historical behaviour. Another system evaluates a number of different attributes that are combined to determine reputation (Abdulmonem and Hunter 2010). These include: direct rating between members; inferred trustworthiness; rating of observations; contributor’s role/qualifications; quality of past contributed data; completed training programs; amount of contributed data; and frequency of contributions.
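The Bayesian systems described above are often realised with a Beta distribution; the sketch below shows one minimal version with a forgetting factor and a variance-based confidence measure. The prior and decay values are illustrative choices, not parameters from the cited work.

```python
# Sketch of a Bayesian (Beta) reputation score with a forgetting factor:
# positive/negative ratings update the alpha/beta counts, older evidence is
# decayed, and the Beta variance gives a rough confidence in the score.
class BetaReputation:
    def __init__(self, decay: float = 0.95):
        self.alpha = 1.0      # prior pseudo-count of positive evidence
        self.beta = 1.0       # prior pseudo-count of negative evidence
        self.decay = decay    # forgetting factor favouring recent behaviour

    def rate(self, positive: bool) -> None:
        # Decay old evidence before adding the new rating.
        self.alpha *= self.decay
        self.beta *= self.decay
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def score(self) -> float:
        """Expected probability that the next contribution is trustworthy."""
        return self.alpha / (self.alpha + self.beta)

    @property
    def variance(self) -> float:
        """Variance of the Beta distribution: lower means higher confidence."""
        a, b = self.alpha, self.beta
        return (a * b) / ((a + b) ** 2 * (a + b + 1))

# Usage: start a new user at a neutral prior, then update per contribution.
# rep = BetaReputation(); rep.rate(True); rep.rate(False); print(rep.score)
```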
In a similar fashion, (Yang, Zhang and Roe 2012) proposed a combination of direct information (objective evaluation based on observable parameters), personal information (which implies a degree of accountability if accurately provided) and indirect reputation (subjective measures such as community and organiser trust). As mentioned earlier, the last criterion is problematic and thus needs to be weighted accordingly, while personal information can similarly be weighted according to the level of detail supplied and its verifiability. In fact, this challenge was reflected in their experiments, with a small number of participants not willing to supply personal details.
In (Christin, Roßkopf, et al. 2012), the user reputation is cloaked utilising cryptographic primitives. Similarly based on k-anonymity, in (Huang and Kanhere 2012) a trusted reputation server is utilised in a manner that masks the reputation in transit so as to reduce prospects of linking user identity to reputation by external parties, whilst claiming greater flexibility in reputation assignment and accuracy. In (Wang, et al. 2013) anonymity and reputation challenges are balanced through the separation of the data reporting and reputation update processes. ...
Privacy vs Transparency
A commonly understood definition of privacy is the right to have one’s personal environment (and the information contained therein) protected from intrusion. In this way, it is also interpreted as the right to be left alone (Brandeis and Warren 1890). Indeed, a focus of this view by Brandeis and Warren in 1890 was the provision of compensation for suffering resulting from privacy invasion, which drew attention to dignitary harms (such as damage to reputation) (Kesan, Hayes and Bashir 2013). Alternatively, in the modern context, it can be defined as control over personal information (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012). It implicitly draws a contrast between what constitutes private space and public space, and the subsequent delineation of private information versus public information. It also draws into question the status of private activity that necessarily crosses over to public spaces. To appreciate the significance of privacy protection, consider the function of privacy in a societal context: it “protects situated practices of boundary management through which the capacity for self-determination develops,” while “conditions of diminished privacy also impair the capacity to innovate” (Cohen 2013).
With specific reference to participatory sensing, privacy has been interpreted as “the guarantee that participants maintain control over the release of their sensitive information”, including “protection of information that can be inferred from both the sensor readings themselves as well as from the interaction of the users with the participatory sensing system” (Christin, Reinhardt, et al. 2011). Formal definitions have been established with reference to what is protected by law. As such, it varies in differing degrees according to the jurisdiction in which the law is being applied. According to the Victorian Government Privacy Commissioner, the common element is the ability to keep “your own actions, conversations, information and movements free from public knowledge and attention.” This can be interpreted differently in the context of the home, workspace or other environments. Importantly, associated legislation only covers certain types of information and activities. It is important to recognise that protections tied explicitly to identity are inadequate, as enough consolidated information may be used to identify an individual’s activities, locations and relationships, leaving them vulnerable to exploitation without ever requiring the establishment of identity (Wright, et al. 2009). To appreciate the legal boundaries applied in privacy law, it is necessary to acknowledge that privacy is different from both confidentiality and secrecy. Confidentiality in legal terms relates to information given under the obligation that it not be shared further. Such information is not usually publicly available or easily accessed. Secrecy relates to the prevention of information becoming known, where such action may assist in privacy protection or in serving the public interest.
Limitations of existing US legal frameworks to effect genuine protections through combinations of privacy, data protection and communications laws, have been explored in (Bast and Brown 2013). In light of the inability to singularly protect privacy in that context, a combined approach is recommended that employs education, empowerment and enforcement (where available) (Thierer 2013).
Ethics in Design
Ideally, privacy safeguards embedded in the system design should give users the necessary control over the collection and release of their data. However, the personal nature of PS, as well as the proximity to other persons not actively participating in the PS program, raises certain complications. In this respect, it is important to consider, alongside privacy, the notions of consent, equity and social forgetting (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012).
Balancing the need for privacy and the proposed means for satisfying it raises further questions about the fundamental ethics being applied, and about the strength of those ethical principles when considered in the light of seemingly necessary compromise. When such principles are applied haphazardly at the design level, or not applied at all, the likely implications are increased vulnerability for the user, from having shared revealing data, and exposure for the organisers, from basing released information on inaccurate and possibly malicious data.
Design objectives can be easily placed in opposition as a means of more easily arriving at a certain outcome. For example, privacy versus accuracy has been proposed as such a compromise, where in order to preserve some degree of privacy, it has been suggested to perturb data supplied to the system in order to mask the real data that reveals perhaps the actual location or actual time at which samples were collected. Consideration must be given to how this is achieved. If the granularity of the data is merely reduced, it is more a reflection of the preparedness of the user to supply a certain level of detail. Therefore, it is not the accuracy of the data in question, but rather the quality. Transparency in the system ensures that the error margin or data granularity is known, such that subsequent operations on this data can factor in the associated data quality. This is different from intentionally supplying incorrect data, albeit with the same intention to mask activity or identity. In this instance, there is little recourse to separate legitimate users from malicious users, as both courses of action seek to bias the data pool.
Social forgetting is raised in (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012) as another element of the system necessary to reflect the broader principles at stake. PS presents the possibility for a historical archive of activity. The social media generation are slowly becoming aware of the longer-term impacts of seemingly frivolous sharing of photos or posts, as what may be considered private moments suitable for perhaps a circle of friends is not seen in the same light by current or potential employers. As many as 17% of Australians regret something they have posted on a social networking site (increasing to 33% for young people) (OAIC, Community attitudes to privacy survey 2013). It has been suggested that such long-term recording and retention reduces the capacity for a fresh start, or to be able to recover from one’s mistakes. In the US, this has been referred to as the “Eraser button” approach, whilst a similar EU version refers to a “Right to be forgotten” (Thierer 2013).
As raised in the analysis of reputation management, to what extent does recent activity predict future behaviour compared with more distant activity? In that context the challenge was, rather, how to establish reputation quickly in the absence of historical data. Also important is the duration of data and activity retention, and the associated weighting applied in determining other factors such as reputation. In this instance, it becomes a factor for the user in establishing trust that they (and their data) will be treated fairly and justly (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012). Mechanisms exist within law to address such issues, so measures must surely be applied to extend such a capacity to emerging technologies that in principle seek to reflect social systems.
System transparency also applies where user consent is concerned. It is essential that, where users are relied upon to actively participate in the sensing process, adequate consent be obtained for the utilisation of associated data contributions. This is challenging in an environment that by its nature encroaches upon people’s personal environments, and is further complicated by the degree of consent which may be obtained. Active and informed consent is essential if any sort of parity between users and organisers is to be established. As noted earlier in the challenge of overly complex Terms of Service agreements, it is difficult to infer the level of understanding reached by a user in order to obtain informed consent. However, the level of consent can be qualified, and ranges across:
• Passive consent – utilisation of the platform implies that consent is given, and so is used by organisers to extract content at will (so-called ‘soft surveillance’ (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012)).
• Active consent – requires some action by the user to acknowledge agreement (a standard TOS agreement), but does not necessarily indicate understanding; rather, it places responsibility upon the user.
• Qualified consent – contingent on circumstances specified by users or guarantees provided by organisers, with the system operation reflecting selected preferences.
• Informed consent – effective communication of usage conditions and implications, with consent elicited in a manner that reflects the level of understanding and agreement.
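In an implementation, these levels could be made machine-checkable so that data collection is gated on the consent actually obtained; the sketch below is an illustrative design choice rather than anything prescribed by the paper, and the sensitivity scale is an assumption.

```python
# Sketch: treating the consent level as an explicit, machine-checkable property
# of each participant rather than an implicit assumption. Enum names mirror the
# levels above; the gating rule and sensitivity scale are illustrative.
from enum import IntEnum

class ConsentLevel(IntEnum):
    PASSIVE = 1     # use of the platform taken as consent ('soft surveillance')
    ACTIVE = 2      # TOS accepted, understanding not verified
    QUALIFIED = 3   # consent contingent on user-specified conditions
    INFORMED = 4    # risks communicated and understanding confirmed

def may_collect(data_sensitivity: int, consent: ConsentLevel) -> bool:
    """Only collect data whose sensitivity is covered by the consent obtained.
    Sensitivity 1 = aggregated/anonymous ... 4 = identifying or location-linked."""
    return consent >= data_sensitivity
```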
The extent of the implications, and the subsequent agreement required, still needs to be explored in order to establish the level of risk that applies to each of the stakeholders. This can be extended to include a more thorough analysis of demographics within stakeholder groups, particularly where it concerns more vulnerable participants. Where the implications impact only the individual user, direct and active consent suffices. However, where sensing applications increasingly extend to environments shared with others (be they private or public, home or work), the same mechanism may no longer be adequate. The system may protect the privacy of the user, but what of bystanders, who may have no knowledge of sensing taking place, no knowledge of the implications, and potentially different views on the conditions for qualified consent? Is the user in a position to take responsibility for the privacy of people within the vicinity of the sensing system? The significance of these flow-on effects is dependent on the type of sensing taking place. In any case, due consideration must be given to these impacts in the design and in communicating any vulnerabilities to the user.
In order to establish trust, organisers need to be able to provide assurances of how the data is used, whilst system designers embed related functionality within the design. Where insufficient trust exists between users and organisers, users can still have the option of reducing the granularity of data shared or applying some other means of anonymisation (such that the supplied data may still contribute something meaningful).
In the context of citizen engagement and promoting interactive government, applying such procedures is a means for demonstrating transparency and for growing trust. The functionality of the PS platform then enables users to better understand the significance of risk factors as well as the measures applied by organisers (e.g. government) to mitigate such risks. It is ultimately the responsibility of the system operator to adequately inform users, particularly where risk mitigation is inadequate and vulnerabilities remain exposed.
In determining the viability of such deployments, it must be established whether the introduction of such technology platforms is potentially increasing the vulnerability of particular demographics, or whether through careful design and deployment it acts to reduce such vulnerabilities. Whilst as many as 60% of Australian youth acknowledge the privacy risk of personal information and online services (OAIC, Community attitudes to privacy survey 2013), this still leaves too large a number unaware of the implications associated with what they perceive as ordinary online activity. If it is not possible at the development stage to mitigate these risks, then it may still serve this purpose by providing effective and explicit information transfer relating to the data sharing risks.
User perceptions
The existing level of user knowledge of such issues, as well as the relationship between risk awareness and participation rates, requires further investigation.
Research by the Australian Communications and Media Authority (ACMA) has found increasing concern regarding security and privacy amongst mobile and online platform users. This is reflected in user online behaviour, where users commonly employ different digital identities (transactional, social and personal) as a proactive means for restricting access to personal information. In these different scenarios, users were more or less willing to contribute detailed identity information, responding to information demands by going elsewhere for the same service, or even providing misleading information (defensive inaccuracy). In this way, some users could be found exercising their own balance of data integrity and pseudonymity (ACMA, Digital footprints and identities: Community attitudinal research 2013). At the same time, whilst nearly 40% of respondents were confident they could effect their desired privacy level through available privacy setting options, another 40% were only hopeful that this was the case, with the remaining number holding a negative view.
More than two-thirds were concerned about the level of information shared when using location-based services. Other important findings included: increased usage does not translate to greater understanding; risks are poorly understood; knowledge of risks and available protections was poor; and users desired more information to assist them in protecting personal data (ACMA, Here, there and everywhere—Consumer behaviour and location services 2012).
There was found to be substantial trust in government and established banks for securing transactions and using them only for legitimate purposes. Significant trust and responsibility for managing and policing digital identity and breaches was still placed in government. The distinct roles of individual stakeholders have also been acknowledged, with: individuals having primary responsibility for protecting their personal information; service providers and industry operators responsible for enabling a secure environment; and government providing information and education services, raising awareness and enforcing safeguards (ACMA, Here, there and everywhere—Consumer behaviour and location services 2012). Indeed, high standards of transparency in data handling are also universally expected from all organisations, as well as demands for notification of handling breaches and protection and management practices (OAIC, Community attitudes to privacy survey 2013).
Similar studies of mobile user attitudes to privacy conducted by other agencies across Australia and around the world have also revealed concerns for users of mobile and online platforms. This includes apps that do not intrinsically incorporate data sharing.
GSMA studies of mobile user attitudes to privacy conducted across the UK, Spain and Singapore, with follow-up studies in Malaysia, Indonesia, Mexico, Colombia and Brazil, found that approximately half of the respondents expressed concern about sharing personal data whilst using mobile online services or apps, with over three-quarters subsequently very selective about whom they shared such information with. As many as 81% of people wanted their permission to be requested before location information from their mobile phones was shared. It was also noted that most users took fewer security/privacy precautions when using mobile devices compared with PC use. Interestingly, 47% of users would change their usage behaviour if apps were found to use their personal information without consent, whilst 41% would limit their use (FutureSight 2011).
The Office of the Australian Information Commissioner (OAIC) survey on community attitudes to privacy revealed some user awareness of privacy risks associated with online activity (OAIC, Community attitudes to privacy survey 2013). This naturally translates, to some degree, to mobile apps. Some of the key findings are noted here.
With respect to personal information, online services are viewed as the biggest privacy risks (including ID fraud and theft, followed by data security). Concerns about handling of personal information are evident in dissatisfaction with such data being sent offshore (with 90% expressing concern). This certainly raises questions about data ownership and the ability to guarantee associated protections according to how this data is handled (including communication and storage).
As many as 78% of Australians dislike having their activities monitored covertly on the internet. Some awareness of this activity exists, with around half of respondents of the view that most websites and smartphone apps collect user information. Of those aware of such risks, more are actively seeking to protect their information: 90% at times decline to provide personal information, 80% first check website security, 72% clear browser histories, 62% avoid smartphone apps because of related concerns, and around 30% provide false names or details.
There is seemingly a point at which user demands for privacy are relaxed or traded, with over a quarter of the population prepared to provide personal information in exchange for improved service or reduced prices.
How much, then, do breaches of trust by system operators or government affect user perceptions and ongoing behaviour? One third of those surveyed experienced problems with the treatment of their personal information. But while there is a better understanding of ombudsman schemes, many still aren’t aware of reporting or complaint procedures. More trust is placed in government organisations than private companies (with only health and financial institutions as exceptions). Due to concerns about the handling of personal information, 60% decided not to interact with a private company. No figures were reported regarding continued engagement with given organisations following a data breach.
Efforts to embed privacy with associated guarantees, means of retribution or compensation for data breaches, and adequate transparency of data usage all surely contribute to building or rebuilding trust between different stakeholders. In this way, robust technological and policy frameworks for emerging ICT are essential. Participatory sensing is one such capability that occupies a unique space and can mutually benefit users and organisers, if implemented accordingly. Implemented haphazardly, it simply reflects the existing flaws and lopsided exchange dynamics that often exist between users, commercial operators and governments. It goes one step further than straightforward app development principles. Likewise, it extends beyond activities of disengaged mass surveillance, principally because it actively engages the user to contribute data and collection resources. So, the respective parties enter into an agreement or contract for the exchange of information. The user is only able to provide informed consent if they are made adequately aware of the system’s operation and their associated role and rights in interacting with the system.
Key points
• Privacy is one’s control over access and flow of their information
• Legal protections are limited to specific circumstances
• Ethical means for privacy protection offer transparency with respect to data accuracy and embed privacy in the design
• System transparency and verifiable privacy measures can build trust
• Social implications warrant mechanisms for managing data history
• Informed user consent needs to be the goal and supported by effective communication of risks
• Users will utilise various means for protecting privacy, according to their level of awareness and evaluation of risks