07 March 2015

A Metadata ATM?

Telstra has announced that it will offer its subscribers - for a fee - access to metadata regarding their phone accounts. The announcement reverses Telstra's previous adamant refusal of access.

Those who are just a tad sceptical about Telstra's ongoing commitment to caring and sharing might wonder whether access will have the same role as ATMs for banks, ie a quiet little earner once the set-up costs have been recovered.

Telstra indicates that consumers will get access to data about who they've called, the location from which the call was made, the time and duration. It will include "the actual location of the cell tower an outgoing call was connected to when the call was made".

The metadata will not identify incoming calls.

The fee will depend on the age of the requested data, with Telstra indicating
Simple requests are expected to cost around $25, while detailed requests covering multiple services across several years will be charged at an hourly rate. This is the same practice of cost recovery that is applied to requests from law enforcement agencies.
This new approach is all about giving you a clearer picture of the data we provide in response to lawful requests today.
The announcement anticipates belated release by the Australian Privacy Commissioner of his response to a complaint regarding Telstra's refusal to provide a journalist with access to metadata regarding his calls.

Information and Adjudication

'Information and the Aim of Adjudication: Truth or Consequences?' by Louis Kaplow in (2015) Stanford Law Review (Forthcoming) comments 
Adjudication is fundamentally about information, usually concerning individuals’ previous or proposed behavior. Legal system design is challenging because information ordinarily is costly and imperfect. This Article analyzes a broad array of system features, asking throughout whether design should aim at the truth or at consequences, how these approaches may differ, and what general lessons may be drawn from the comparison. It will emerge that the differences in approach are often large and their character is sometimes counterintuitive. Accordingly, system engineers concerned with social welfare need to aim explicitly at consequences. This message is not one opposed to truth per se but rather a strong admonition: it is dangerous to be attached to the alluring view that adjudication is primarily about generating results most in accord with the truth of the matter at hand.
Kaplow argues that in asking whether system designers should aim at the truth or at consequences
truth is not taken [in this article] as an abstract concept or a normative principle. Nor is it taken to have a unitary meaning in different realms. Instead, the notion is to be understood as a proxy criterion (or even a metaphor) that seems appealing in particular domains and thereby is often taken as an appropriate target by policy analysts when they assess pertinent elements of the legal system. This Article will assume, until the final Part, that system engineers are in fact concerned entirely with consequences for social welfare. The question then is whether, in undertaking their work, it is a plausible strategy to aim at the truth with the expectation that this protocol will ordinarily lead to good consequences. Perhaps there are some moderate deviations, occasional exceptions, and limitations, but nevertheless the applicable notion of truth might typically provide a workable guide or at least a sensible starting point. A competing view is that system engineers need to focus quite explicitly on consequences themselves, largely setting truth to the side; indeed, perhaps they need to strive to ignore truth’s siren call.
The Article examines how these two methodologies differ and what general lessons may be drawn from the comparison. It will emerge that a large divergence often exists and that its character can be counterintuitive. Accordingly, system engineers really do need to concentrate on consequences. This message is not one that is opposed to truth per se but rather a strong admonition: it is dangerous to be attached to the alluring view that adjudication is primarily about generating results most in accord with the truth of the matter at hand.
To frame the inquiry, it is useful to consider the underlying reason that truth and consequences can produce such divergent prescriptions. Start in an idealized world in which it is possible to achieve perfect accuracy in adjudication—such that every outcome corresponds to the truth—at zero cost. In such a world, in most instances and with some further simplifications, social welfare would be maximized by going with the truth. Adjudication should aim at the truth, it would in fact hit the bull’s-eye every time, and the best consequences would be achieved. Harmful acts would always result in the appropriate sanction, benign acts would not be discouraged at all by the prospect of the mistaken imposition of sanctions, and system costs would be nonexistent. This is a first-best setting (or close to it), an analogue to a frictionless universe. Here, we do not see much reason for truth and consequences to conflict.
Our actual system design problem, of course, is in the world of the second best. Nontrivial system costs must usually be incurred to obtain even an approximation of the truth. Attempting to move closer is increasingly costly, and perfect truth is unobtainable. Should we nevertheless pretty much always aim at the truth?
The obvious answer is in the negative. The presence of costs alone tells us that we will need to make tradeoffs. Spending the entire GDP to get as close as possible to the truth in a single torts dispute would destroy society, not maximize social welfare. This simple point instructs us to avoid excess, but it fails to illuminate a course of analysis that can prescribe what moderation would look like and what it should depend on.
Moreover, once complete truth is off the table, we confront significant hurdles: the lack of an obvious metric for degrees of truth or of a way to place a value on truth units, whatever they may be. Inquiries are sometimes conducted as if there were some sort of Platonic truth measure, but little reflection is required to appreciate the need to dig deeper. More explicit treatments may invoke various criteria, such as the command to minimize the number of errors in adjudication or to aim at some ratio of true positives to false positives. Such guidelines, however, are ad hoc, and conflicting, and they can have absurd implications. For example, it is obvious that adjudicative errors are minimized by eliminating adjudication, and further analysis indicates that some proposed performance ratios are improved by raising the flow of innocent acts into the system (because such may well improve the system’s batting average). To foreshadow a bit, we can see from these examples that it is grossly insufficient to consider only what happens in adjudicated cases; underlying behavior and determinants of what enters the legal system will be central. The main purpose of the legal system is not for adjudication to look good according to some abstract standard but rather for its operation—including the anticipation thereof—to foster productive activity, restrain harmful conduct, and avoid undue expense. More fundamentally, such precepts, whether focused entirely on some notion of truth or on related considerations involving types of errors, are ungrounded. As already mentioned, the approach adopted in this Article is that legal system engineers should be guided by the maximization of social welfare. That is, adjudication should, in principle, aim at consequences. Whether, how, and to what extent truth is important will emerge in the course of the analysis. Any truth metric or valuation of truth will be a byproduct of the inquiry, not its driver. 
Aiming at the truth may sometimes be a good summary or proxy for part of what matters, but it is never the entire story (if for no other reason than cost) and is often a misleading guidepost. In complex systems, this sort of perspective is familiar. Indeed, even when the setting is simpler, one does not always aim directly as one would in an idealized world. A marksman might optimally aim high and to the left to account for distance and wind. But that involves just a modest refinement: the maxim that one should aim true is approximately correct. When building a road to the top of a mountain, however, aiming straight for the top—following the precept that the shortest distance between two points is a straight line—is a prescription for disaster. Switchbacks will be required, so that much of the time the road is actually going in the wrong direction by reference to the ultimate endpoint. Moreover, depending on the conditions, it might be best to go down, not up; around to the other side; and only then begin a zig-zagged ascent.
The foregoing should lead us to wonder whether adjudication design that aims primarily at the truth will perform poorly, but it does not tell us how worried we should be. That depends on whether this domain is more like a gradual incline with a few bumps or a treacherous mountainside with imposing obstacles. There are two general reasons to expect our challenge to be more like the latter.
The first is the presence of costs. Not only will we want to stop short of the top, but once we know this, it is impossible to determine how far to go without an explicit determination of the social value of moving closer. One can contemplate the meaning of truth until the end of days without illuminating that question. The value of truth in adjudication depends on its consequences, and valuing various outcomes is outside the realm of truth per se. As a comparison, how can we value an additional medical test of a stated precision without assessing the consequences of one or another course of treatment under different medical conditions? In this type of setting, truth is indeed something that matters, but it is only one element of a larger calculus. It is a start to recognize that tradeoffs must be made, but this recognition alone tells us little of their anatomy.
The second reason is that the design of adjudication in many settings influences behavior. We are centrally concerned about deterring harmful acts and avoiding the chilling of benign conduct. Such primary behavior, and also litigation itself, is endogenous; social welfare depends on the operation and feedbacks of the system as a whole. In such a complex and interactive environment, moving somewhat closer to truth in adjudication, by any simple metric, need not improve social welfare even without regard to costs. In addition, seemingly more expensive systems can be cheaper (for example, via deterrence, reducing the frequency of adjudication) and less expensive ones more costly. Due to these multiple and moving targets, the optimal design of adjudication may be more roundabout than building a road up a treacherous mountain: at least the mountain stands still.
This Article explores multiple dimensions of legal system design with a focus on information and, in particular, how information costs and limitations bear on the nexus between truth and consequences. Its scope is broad and, accordingly, the analysis is limited and often selective. In the process, however, we will see how key aspects of the interaction play out in many contexts and also identify some systematic similarities and differences across domains. As mentioned at the outset, one should keep in mind that the core message here is not anti-truth. Truth, after all, is usually the right guide in an idealized world, which suggests that sometimes we should expect it to point us in a good direction. As a motivation for analysis, at least, thinking about truth is useful. Moreover, truth may be consequential for social welfare for additional reasons. The claim throughout this Article is that careful analysis must aim at consequences, not at truth, because the dictates of truth are almost always seriously incomplete and often enough misleading that we must be careful not to have our imagination, investigation, and prescription distorted by an infatuation with truth in adjudication.
Part I begins by examining information and substantive legal commands because the purpose of adjudication is to effectuate substantive law. The analysis of information and the aim of adjudication are inevitably about the relationship between procedure and substance. Accordingly, it is important to start with central design features of substantive legal commands, specifically, those most directly implicating the core informational dimensions that will be emphasized later. Notably, primary behavior is influenced by the expected consequences of adjudication, which accordingly is the core information about the legal system, procedure and substance, that proves to be consequential. Of course, administrative costs matter as well. Specifically, Part I addresses the two dimensions of a taxonomy employed in some of the literature. One involves the distinction between rules and standards, wherein a rule for this purpose refers to specification of the content of a legal command ex ante, before parties engage in the primary behavior governed by the pertinent law, whereas under a standard the content is determined ex post, in adjudication, after parties have acted. The second dimension involves the precision—specificity or level of detail—with which the legal command is given content: that is, the extent to which it makes finer distinctions rather than placing conduct in broader categories. Both dimensions implicate information in several ways. They obviously govern the intensity of effort (and thus cost) of supplying legal content both ex ante and ex post. Moreover, they influence the law’s consequences for individuals’ behavior in the interim because such behavior depends critically on the extent to which individuals choose to become informed about the law before they act. 
One of the recurring themes of this Article emerges in both analyses: truth at the conclusion of adjudication—understood in this Part to refer to the alignment of outcomes with the substantive ideal—does not directly translate into ex ante behavior in conformance thereto. Indeed, the gulf can be wide: a regime closer to the truth in adjudication can result in individuals’ actions in the world being less in accord with it than under an optimal regime aimed at consequences.
Part II analyzes the treatment of errors in adjudication—conventionally understood as setting a burden of proof or other decision threshold—taking as given the quality of information, a matter deferred to Part III. Examined first is a simpler context, set to the side in much of this Article, in which adjudication concerns the regulation of proposed conduct: license applications, zoning variances, or the approval of mergers or new drugs. Here, the optimal evidence threshold involves standard cost-benefit analysis for decisions under uncertainty, just as in the medical testing illustration above. The truth of the matter—the likelihood that the applicant, say, proposes an activity of a harmful rather than of a benign type—is certainly relevant, but one must also consider the possible social gains and losses from the proposed activity. When harm is great and the benefit of the benign act is small, prohibition is optimal even when the likelihood that the proposed action is a harmful one is low, and conversely when harm is slight and benign activity is highly beneficial, even though the truth of the matter is that it is most probably harmful. Better to make many mistakes of little consequence than a few that are momentous. In this basic setting, truth is an element of consequences but it is far from the entire story. Next, the analysis returns to the setting examined in most of the Article, where individuals’ actions (torts, contract breaches, and so forth) precede adjudication, in which case their behavior is influenced by their anticipation of outcomes in adjudication. Here, truth—in the sense of the likelihood that the individual before the tribunal in fact committed a harmful act—is not even a component of the more elaborate calculus that determines what evidence threshold is optimal with regard to the consequences it engenders. 
The explanation is that the truth of the matter at hand concerns the static, descriptive question of how best to characterize the case before the tribunal, whereas the consequences of setting an evidence threshold somewhat higher or lower turn on the dynamic question of how such a modification would change individuals’ ex ante behavior. As suggested previously, the endogeneity of behavior can greatly obscure the relationship between truth and consequences. In general terms, this phenomenon carries over to the determination of optimal decision criteria at earlier stages of adjudication, including formal pretrial terminations and the informal conduct of investigations, such as by government agencies. Part II also examines a particular relationship between procedure and substance, namely, whether concerns for errors involving the mistaken imposition of sanctions on (or application of prohibitions to) benign conduct are best dealt with through restrictions on substantive legal commands or with more demanding decision criteria in adjudication. The latter tends to be favored on informational grounds because raising, say, the burden of proof tends to remove the weakest cases from the system. In this instance, welfare is best advanced by keeping substantive law aligned with truth, in the sense of conformity to the substantive ideal, and relegating any needed adjustments to the burden of proof, perhaps by moving it in a direction less aligned with truth in the sense of whether outcomes accord with the fact of the matter in the case under adjudication.
Part III shifts the focus to the accuracy of adjudication. Because greater accuracy—attempting to move closer to the truth—comes at a cost, it is necessary to place a value on accuracy, which can be done only by assessing its consequences. The value of accuracy varies greatly on a number of dimensions. Raising the degree of accuracy in the determination of liability improves the error tradeoff that was taken as given in Part II’s discussion of evidence thresholds and accordingly has a social value that reflects the corresponding consequences. Improving the accuracy with which damages are assessed has a relationship to ex ante behavior that is similar to that of making substantive legal commands more precise or refined. Specifically, the value of moving closer to the truth is not automatic but depends on the extent to which individuals will anticipate and thus react to the greater precision of adjudication. Interestingly, in some important settings, greater accuracy will be a pure waste of resources because the added ex post specification—concerning, for example, just how much a particular auto accident victim’s future earnings are diminished by an injury—cannot plausibly be predicted ex ante. To the extent that more accurate damage awards improve the precision of compensation for risk-averse victims, accuracy has social value, in this instance measured by a risk premium. In all, whether the truth has any consequences at all and the social value of the consequences it does have vary greatly by the issue, the context, and in many settings the information possessed by actors before adjudication occurs.
The relationship between truth and consequences becomes even more complex and in some instances further attenuated when the analysis is extended to take account of the endogeneity of behavior in adjudication: which cases are pursued and how much information parties choose to generate. Part IV examines some of the possibilities that may arise. For example, private litigation is initiated when plaintiffs anticipate an expected recovery in excess of their costs. Anything that influences the costs or outcomes of adjudication affects these decisions, which in turn have feedback effects on primary actors’ behavior that, as emphasized repeatedly, is predicated on their own expectations about adjudication, including how often it will occur. As one indication of the potential implications, it is explained that a reform that lowers the evidence threshold, so plaintiffs win more often, and simultaneously imposes a filing fee, to an extent that (altogether) keeps deterrence constant, will in some settings lower system costs and also reduce the extent to which benign conduct is chilled. That is, a concern for errors involving the mistaken imposition of liability may favor a reform that reduces, not raises, the evidence threshold. This situation arises when litigants possess information superior to the tribunal’s. Truth in the outcomes of adjudication can be less consequential than the knowledge that motivates self-interested plaintiffs’ filing decisions.
Part IV also considers parties’ incentives to generate information in adjudication. In certain key settings—notably, some of those explored in Part III—litigants’ incentives are excessive. Their private benefit from influencing the outcome favorably, even though truthfully, exceeds the social value, so the overall consequences for social welfare can be negative, precisely because too much truth is generated. Accordingly, one mechanism that might address the problem involves the tribunal being committed to ignore some truthful information that parties might present. In this instance, aiming at truth is directly in opposition to good consequences. The final two Parts of the Article step back from the system-design methodology employed thus far to consider a broader perspective on the legal system’s objectives. First, continuing to assume that the purpose of the legal system is to advance social welfare, Part V asks whether there are additional consequences of the degree to which adjudication generates the truth, specifically, consequences regarding perceived legitimacy, abuse of power and corruption, participation and other process values, and preferences for the truth per se. Then, Part VI briefly examines the pursuit of truth independent of its consequences for social welfare.
This Article does not attempt to be exhaustive either with regard to all the ways that truth and consequences may or may not diverge from each other, taking an optimal system design perspective, or all the reasons that aiming at the truth may be important after all. It seeks to present enough of the former to illustrate the range of possibilities and to demonstrate the significance of potential divergences, and to discuss enough of the latter to instigate further reflection. Information is indeed central to adjudication, and the wide variation and complexity of the subject indicate the need for explicit, ground-up analysis that clearly focuses on the system’s objectives rather than an approach that starts in the middle and employs an appealing but potentially misleading proxy criterion: aiming at truth.
In a footnote Kaplow adds: Brief remarks are in order regarding two familiar categories of truth/consequences divergence. First, there are isolated pockets of recognized exceptions. For example, the exclusionary rule and the requirement of proof beyond a reasonable doubt are usually seen as deviations from the norm that are justified by special considerations. Second, as already noted in this Introduction, the need for cost tradeoffs is widely acknowledged, as reflected in many features of system design, from judges regulating the length of trials to enforcers employing randomized rather than dragnet strategies (traffic control and tax audits). Cost tradeoffs motivate Part III’s analysis of how to place a value on truth, which requires a systematic tracing of consequences that, as mentioned, is not much illuminated either by invocations of truth or by concessions of the need for moderation in light of costs.

Employee Data

'The Acquisition and Dissemination of Employee Data' by Matthew Finkin in Studies in Labour Law and Social Policy presents
a schematic comparing the legal approach to employer acquisition and dissemination of applicant and employee information in the European Union and the United States. The schematic sets out seven analytical heads:
  • source of law; 
  • scope; 
  • form; 
  • means of effectuation; 
  • conceptual grounding; 
  • valence; and 
  • the relationship of privacy protection to the freedom of expression.
The essay then examines the meaning of these categories and explores the commonalities and differences between the E.U. and the U.S. under each of them. It concludes by taking up a common problem: employer access to and use of applicant and employee social media communications. That specific comparison, on a current and pressing issue, breathes life into the analytical differences and shows that, despite the differences, the actual result “on the ground,” so to speak, may not differ significantly; that the remedial situation in both systems may render the protection they afford illusory.

Geo-Immersive Surveillance and Gateways

Geo-Immersive Surveillance & Canadian Privacy Law, a 348-page Juridical Science dissertation by Stuart Andrew Hargreaves (Faculty of Law, University of Toronto) from 2013 comments
Geo-immersive technologies digitally map public space for the purposes of creating online maps that can be explored by anyone with an Internet connection. This thesis considers the implications of their growth and argues that if deployed on a wide enough scale they would pose a threat to the autonomy of Canadians. I therefore consider legal means of regulating their growth and operation, whilst still seeking to preserve them as an innovative tool. I first consider the possibility of bringing ‘invasion of privacy’ actions against geo-immersive providers, but my analysis suggests that the jurisprudence relies on a ‘reasonable expectation of privacy’ approach that makes it virtually impossible for claims to privacy ‘in public’ to succeed. I conclude that this can be traced to an underlying philosophy that ties privacy rights to an idea of autonomy based on shielding the individual from the collective. I argue instead considering autonomy as ‘relational’ can inform a dialectical approach to privacy that seeks to protect the ability of the individual to control their exposure in a way that can better account for privacy claims made in public. I suggest that while it is still challenging to craft a private law remedy based on such ideas, Canada’s data protection legislation may be a more suitable vehicle. I criticize the Canadian Privacy Commissioner’s current approach to geo-immersive technologies as inadequate, however, and instead propose an enhanced application of the substantive requirements under Schedule 1 of PIPEDA that is consistent with a relational approach to privacy. I suggest this would serve to adequately curtail the growth of geo-immersive technologies while preserving them as an innovative tool. 
I conclude that despite criticisms that ‘privacy’ is an inadequate remedy for the harms of surveillance, in certain commercial contexts the fair information principles can, if implemented robustly, serve to regulate the collection of personal information ‘at source’ in a fashion that greatly restricts the potential for those harms.
'Rights of Passage: On Doors, Technology, and the Fourth Amendment' by Irus Braverman in (2015) Law, Culture and the Humanities comments
The importance of the door for human civilization cannot be overstated. In various cultures, the door has been a central technology for negotiating the distinction between inside and outside, private and public, and profane and sacred. By tracing the material and symbolic significance of the door in American Fourth Amendment case law, this article illuminates the vitality of matter for law’s everyday practices. In particular, it highlights how various door configurations affect the level of constitutional protections granted to those situated on the inside of the door and the important role of vision for establishing legal expectations of privacy. Eventually, I suggest that we might be witnessing the twilight of the “physical door” era and the beginning of a “virtual door” era in Fourth Amendment jurisprudence. As recent physical and technological changes present increasingly sophisticated challenges to the distinctions between inside and outside, private and public, and prohibited and accepted visions, the Supreme Court will need to carefully articulate what is worth protecting on the other side of the door.

Juvenile Justice and DNA

'DNA for Delinquency: Compulsory DNA Collection and a Juvenile's Best Interest' by Kevin Lapp in (2014) 14 University of Maryland Law Journal of Race, Religion, Gender & Class 50 comments
Thirty states and the federal government compel DNA collection from juveniles based on a finding of juvenile delinquency. A main justification for doing so has been that it deters recidivism and promotes rehabilitation, furthering the goals of the juvenile court and consistent with the court’s role as a “protecting parent.” There is little empirical evidence, however, that compulsory DNA collection deters people from committing crimes or fosters their rehabilitation. Whatever specific deterrence DNA databasing may achieve is certainly diminished with respect to juveniles, who are less deterrable than adults. This undermines the best-interest rationale for collecting DNA from juveniles. Indeed, to the extent that criminal justice contact has a criminogenic effect on juveniles, DNA collection from juveniles could produce unintended, perverse consequences. 
Consistent with recent Supreme Court jurisprudence, this Article marshals scientific and psychosocial evidence regarding juveniles to outline a developmental critique of the best-interest justification for DNA collection from juveniles adjudicated delinquent. It also asserts that the basis for treating children differently from adults does not reside solely, or even predominantly, in science. The modern conception of childhood (as a separate, protected space for those whose development must be guarded and promoted) demands, even more powerfully, perhaps, than the findings of adolescent brain science, that we not subject juveniles to compulsory DNA collection for purposes of databasing. At the very least, the aggregate collection of genetic data from juveniles adjudicated delinquent cannot be justified as being in their best interests.

06 March 2015

Journalist Metadata and Media Freedoms

The Parliamentary Joint Committee on Intelligence & Security is to report on the question of "how to deal with the authorisation of a disclosure or use of telecommunications data for the purpose of determining the identity of a journalist's source".

The inquiry reflects recommendation 26 in the Committee's recent Advisory report on the Telecommunications (Interception and Access) Amendment (Data Retention) Bill 2014 (Cth):
 The Committee acknowledges the importance of recognising the principle of press freedom and the protection of journalists' sources. The Committee considers this matter requires further consideration before a final recommendation can be made.
The Committee therefore recommends that the question of how to deal with the authorisation of a disclosure or use of telecommunications data for the purpose of determining the identity of a journalist's source be the subject of a separate review by this Committee. 
The Committee is expected to report back to Parliament within three months.

The review will consider international best practice, including data retention regulation in the United Kingdom.

The Committee states
The Committee’s Chair, Mr Dan Tehan MP said “In its previous inquiry, the Committee acknowledged the importance of recognising the principle of press freedom and the protection of journalists’ sources.”
“Balancing this with the needs of law enforcement and security agencies to investigate serious offences, it was apparent that further consideration was needed on the question of how to deal with the authorisation of a disclosure or use of telecommunications data for the purpose of determining a journalist’s source”, he added.
“The Committee looks forward to engaging with stakeholders in a separate review on this matter.” In its previous inquiry on the Data Retention Bill, the Committee recommended a number of additional safeguards relating to agency access to telecommunications data. These included specific oversight by the Ombudsman or the Inspector-General of Intelligence and Security (as appropriate), and the Committee, of any instance where a journalist’s data is accessed by an agency for the purpose of determining a source. The Committee’s recommendations were supported by the Government.
The Data Retention Bill will require service providers to retain a standard set of telecommunications data for two years. The regime will commence six months after passage of the Bill, followed by an 18 month implementation phase.
The Bill includes measures to increase safeguards around how agencies access telecommunications data, including an enhanced oversight role for the Commonwealth Ombudsman and new restrictions on which agencies may access data.
In undertaking the new inquiry, the Committee intends to consult with media representatives, law enforcement and security agencies and the Independent National Security Legislation Monitor.

05 March 2015

Plagiarism and the legal profession

'Admission as a lawyer: the fearful spectre of academic misconduct' by Mark Thomas in (2013) 13(1) QUT Law Review 73-99 comments
Notwithstanding a cultural critique of the concepts that underpin the values of academic integrity, both the university, as a community of scholarship, and the legal profession, as a vocation self-defined by integrity, retain traditional values. Despite the lack of direct relevance of plagiarism to legal practice, courts now demonstrate little tolerance for applicants for admission against whom findings of academic misconduct have been made. Yet this lack of tolerance is neither fatal nor absolute, with the most egregious forms of academic misconduct, coupled with less than complete candour, resulting in no more than a deferral of an application for admission for six months. 
Where allegations are of a less serious nature, law schools deal with allegations in a less formal or punitive fashion, regarding it as an educative function of the university, assisting students to understand the cultural practices of scholarship. For law students seeking admission to practice, applicants are under an obligation of complete candour in disclosing any matters that bear on their suitability, including any finding of academic misconduct. 
Individual legal academics, naturally adhering to standards of academic integrity, often have only a general understanding of the admissions process. Applying appropriate standards of academic integrity, legal academics can create difficulties for students seeking admission by not recognising a pastoral obligation to ensure that students have a clear understanding of the impact adverse findings will have on admission. Failure to fulfil this obligation deprives students of the opportunity to take prompt remedial action as well as presenting practical problems for the practitioner who moves their admission. 
Thomas states -
For students anticipating admission as a lawyer, the implications of academic misconduct, though less spectacular than Burt’s, nonetheless represent a substantial threat to their ambitions, with the Queensland Court of Appeal having signalled, in 2004, its discomfort with admitting applicants to practice where adverse findings of academic misconduct were before the Court. In a broader academic context, Bowers had, as early as 1963, reported that three out of four university students surveyed had engaged in some form of ‘questionable’ activities, and Bowers and McCabe (in 1993) subsequently found that the proportion of students admitting to cheating was ‘remarkably constant.’ There was, however, a ‘dramatic increase in [impermissible] student collaboration’ where individual work was required, with the 11 percent figure in 1963 rising to 49 percent in 1993. 
Perceptions of academic misconduct in the modern university suggest that it remains ‘rife.’ Law schools are clearly not immune from this problem: Queensland’s Chief Justice observed in 2008 that he was ‘especially surprised’ by the frequency of academic misconduct disclosed in applications for admission. The current literature records that, since the turn of the century, academic misconduct is (again?) assuming ‘epidemic proportions.’  It is driven by the ‘swirling currents of [the] information revolution,’  the explosion in electronically available resources,  and the increasing commodification of education, with extrinsic factors (such as money and status) rather than intrinsic goals (community involvement, competence, affiliation and autonomy) motivating students, as well as the reimagining of the relationship between text, authors and audiences. These factors, it is claimed, have created an environment where the appropriation of others’ work ‘is deployed by students as a tactic to achieve educational success.’ The cost of degrees has created a climate where ‘[s]tudents are faced with many temptations to plagiarise,’ responding to the pressures of studying while working and an increasing pressure to ‘succeed’ and provide a return on investment. In the commercialised environment of modern university study, plagiarism and other forms of misconduct have been re-conceived as ‘consumptive practices,’ rather than failures of traditional scholarly culture. In a credentialist educational paradigm, academic misconduct becomes more easily rationalised. For the law student, the ramifications of a finding of academic misconduct are potentially more serious than in any other discipline. 
Confounding further the traditional values underpinning the institutional virtues of proper academic conduct are critical cultural attitudes which characterise ‘originality and individual authorship as mythologies.’ Postmodernist writing challenges at a fundamental level the concepts of original authorship of text, with concepts of attribution inevitably becoming equally contested. Barthes, for example, describes text not as ‘a line of words releasing a single “theological” meaning (the “message” of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash.’ Similarly, Bakhtin sees all language as infused with linguistic baggage: ‘... all our utterances ... [are] filled with others' words.’ Plagiarism is, in such a context, postmodern textual liberation, recognising the continuous intertextual interplay of ideas, and the words which concretise them, as against a contested personal authorship. 
The intersection of such critiques and the culture of the academy has thus seen an assault in some quarters on the implied political stance inherent in the concept of academic integrity. Plagiarism and other forms of academic misconduct are thus identified as ‘insurrectionary’ in university culture, with its ideological fascination with reason, autonomy, originality and objectivity. That culture is predicated on a ‘common ideological ground in the creative, original individual who, as an autonomous scholar, presents his/her work to the public in his/her own name.’ The author as ‘the manufacturer’ of texts is, it is argued, an artefact of the ‘economic/ideological system which arose in [Enlightenment] Europe.’ 
Such critique, combined with the general lack of referencing and validation practices which inform the internet publication of opinions, arguments and criticism, has created a sharp divide between the ‘public’ world of writing, and writing within the scholarly disciplines. Such a boundary, however, is not necessarily understood by modern students, whose primary mode of communication is technological, and whose primary connection to information is electronic.  Students thus often view copying from online sources as being ‘significantly less dishonest than similar offences using printed sources,’  since it comes from a platform where the interchange of ideas is not governed by principles of ownership, but by free interchange and recombinant or pastiche expression. 
Such critical analysis of language (and the underlying reference to the psycho-linguistic modes of generating language informing such approaches) presents a picture which is antithetical to the strict boundaries of authorship that underpin the paradigm of knowledge and scholarship on which proper academic conduct is predicated.  As products of the Enlightenment, the modern (and modernist) university retains, at the institutional level, conventional understandings of authorship, where words, ideas and arguments are discretely attributable to specific sources, requiring acknowledgement as an integral part of the value system underpinning scholarly culture. It is here, for the law student, that the spectre of academic misconduct crystallises. 
As the unit co-ordinator for Professional Responsibility at the Queensland University of Technology (QUT), the author teaches the subject covering the regulation and discipline of the legal profession, including a substantial component on the admission process. The author has therefore had occasion to deal with instances of academic misconduct – ironically, even in the unit which deals with professional ethics. As a barrister, the author is frequently consulted by students seeking admission who have suddenly realised that they have ‘suitability matters,’ including academic misconduct, which now assume fearful (and not infrequently tearful) proportions. The author has moved a substantial number of admissions before the Court of Appeal, including significant numbers of admissions where findings of academic misconduct have been disclosed. 
Where an application involves suitability matters, the tenor of the occasion shifts from the routine ceremonial to something of a prosecutorial/adversarial process, taking on at least metaphorical resonances with a plea in mitigation by defence counsel. Yet what might be thought of as conventional mitigating factors (such as stress, illness, workload etc) are clearly not available to limit the culpability of a student who has disclosed academic misconduct. Indeed, they are the antithesis of mitigation in the court's view, demonstrating a preparedness to act dishonestly in stressful situations to achieve specific ends. 
The Queensland Court of Appeal has clearly signalled that academic misconduct is a factor bearing on fitness for practice. It is not, however, automatically disentitling. Developing appropriate submissions is often hampered, though, by the way in which academics have framed the documentation of their findings – with unintended ramifications for the student’s admission. While adverse findings of egregious plagiarism attract the Court’s full attention, they will also have been made within a formal committee process, and be accompanied by detailed documentation. Conversely, for lesser transgressions, the very informality which university policies mandate, and the scholarly values which academics bring, quite properly, to managing minor misconduct can problematise the presentation of persuasive submissions. 
As Chanock observes, the experience of academic standards which students bring with them from secondary school is ‘startling’. Most are used to referencing practices substantially less rigorous than those which apply at university. Chanock’s research showed that a third of students surveyed had not been expected to attribute direct quotations, and two-thirds had not been expected to reference sources of the materials included in submitted work. Referencing by means of a bibliographical entry was reported as sufficient even for direct quotation by a quarter of those surveyed. 
Despite differing reports of the intended destinations of law students, it seems that somewhere around a half of undergraduate law students undertake their LLB with the intention of seeking admission as lawyers. Few enter law school already focussed on an academic career. Many arrive in tertiary education without a clear understanding of the unique view of academic propriety which is embedded deeply in university culture, and do not generally anticipate remaining within that culture once qualified. The intrinsic values of academic propriety are to their minds a relatively brief engagement with an alien world for largely pragmatic purposes. 
Moreover, law students frequently do not see the relevance of plagiarism to the practice of law, considering that neither the judicial system nor the practice of law seem to place similar strictures on the re-use of material originating from another author. Bast and Samuels cite no less an authority than Judge Richard Posner for the proposition that a judge is ‘not expected to produce original scholarship,’ and Le Clercq observes of legal writing that ‘it is no embarrassment to lean on another’s opinion: it is a requirement.’  Indeed, a Dworkinesque view of the judicial enterprise provides some support for the view that legal judgments are the product of combined minds over a significant period of time.  Lest this seem to suggest that intertextuality (in the postmodern sense) is, in fact, a characteristic of common law judgments, it should be remembered that the persuasiveness of a judgment lies in the ideas and views relied on being identified with great precision. The resignation of Federal Magistrate Rimmer in 2006 suggests that both Posner’s and Le Clercq’s views ought not to be read as excluding the concept of plagiarism from judicial practice. Indeed, it is from the very attribution of ideas, arguments and explicit text to identified authoritative sources that such ‘borrowed’ ideas gain their traction. By contrast, the imperatives of professional legal practice are driven not by scholarly values, but by the need for persuasiveness and efficiency, with many documents generated in practice being the product of multiple authors signed off by a supervising practitioner, or through the liberal utilisation of unacknowledged precedents prepared for previous legal transactions and/or litigation, not necessarily authored by the current user. Scholarly practice must, therefore, seem quite alien to students who are focused on a career in legal practice. 
Once at university, students are confronted by a bewildering array of terminology used to describe academic misconduct: cheating (in a ‘traditional’ sense involving exams, and as a generic term covering any form of gaining an unfair advantage in assessment); academic dishonesty; excessive collaboration; collusion; copying; plagiarism; inadequate citation; and non-attribution of sources. 
Yet despite all universities having policies on academic misconduct which set out definitions of academic misconduct inviting disciplinary action, it has been suggested there is ‘justifiable confusion’ as to the fundamental principles of intellectual honesty, and that students often do not understand the ‘full set of behaviours that constitute cheating.’ 
Traditional concepts of academic and scholarly standards emphasise that plagiarism (as one of the paradigm examples of academic misconduct) is ‘theft, an offence, with effective sanctions in [appropriate] socialising and disciplining domains.’ More than the mere breach of a rule, plagiarism is the disregard of the normative values of the university as an institution imbricated in a global community of scholarship, reflecting its dedication to authenticity and integrity in its learning, teaching, and research aspects. Not that universities are oblivious to the discourse surrounding authorship and originality. Indeed, it is within universities that the critical discourse is propagated, informed by a mix of intellectual, social, professional and moral and/or philosophical issues. As yet, however, the traditional values of scholarship are still primary discourses informing universities’ expectations of student conduct, and the very scholars who question plagiarism’s provenance (perhaps ironically) nonetheless comply with its dictates. 
University policies on academic misconduct generally define such misconduct as a breach of academic integrity. Failure to maintain academic integrity is subdivided into three forms: cheating in examinations; plagiarism; and other forms. Cheating in exams is defined as including ‘any action or attempted action on the part of a student which might gain that student an unfair advantage in the examination.’ Plagiarism is defined as ‘representing another person's ... ideas or work as one's own,’ with an inclusive list of five forms of plagiarism, of which three are referable to the study of law: direct copying, summarising, or paraphrasing another person's ... work without appropriate acknowledgement ... using or developing an idea or hypothesis from another person's ... work without appropriate acknowledgement, ... [and] representing the work of another person ... as the student's own work. The residual category, other forms, includes, as relevant: giving or providing for sale one's own work to another person, company or web-site etc for copying or use by another person, ... purchasing or otherwise obtaining assessment material through individuals, companies or web-based tools/services, ... [and] collusion or collaborating with others where not authorised in the assessment requirements. 
In most universities, academic misconduct is classified as major or minor, with disciplinary responsibility for dealing with minor misconduct generally vested in the unit co-ordinator. An example given of minor academic misconduct relevant to law is ‘incidental plagiarism (inadequate, incorrect or inconsistent citation and/or referencing of sources, paraphrasing too close to the original).’  This may include copying a few sentences, and includes inadvertent copying, such as where a student’s notes do not differentiate between a copied passage and the student’s own commentary. This was always possible in the non-technological era but the probability of inadvertent copying has risen considerably with the advent of the copy and paste function, on-screen windows showing documents under construction, and electronic sources from which text can be copied and pasted, or dragged, directly into another document. 
Minor plagiarism thus has a number of different faces. It may be minor by virtue of its extent (with a rough guide of ‘a few sentences’). It may be minor if it lacks intent. Or it may be essentially technical, taking the form of incorrect citation, without intent to pass the work off as one’s own. 
Often, penalties cannot be applied to minor academic misconduct, the policy response being conceived as educative, rather than punitive. Records must, however, be kept of the management of the incident.  The form of such records is not generally prescribed, and may range from detailed emails through to a simple notation in the university’s records management system. 
Major academic misconduct is generally dealt with by a formal investigation process at a committee level, involving procedures modelled on quasi-judicial proceedings required to afford natural justice to the student;  and an obligation to make a decision based on ‘logical, credible and relevant evidence.’  The discipline committee must routinely make available a report of its findings to the student. For law students seeking admission, however, the matter does not end here. 
It is quite rare for the details of academic misconduct to be placed on the public record as a result of an application for admission, although universities themselves keep confidential records when any adverse findings are made against a student. Details of academic misconduct disclosed in an application for admission are generally not readily accessible to the public. Such details appear in judicial decisions only in the limited number of cases where either the local professional body (in Queensland, the Legal Practitioners Admissions Board (LPAB))  has actively opposed admission or the Court has exercised its discretion to explore an applicant’s suitability by way of a full hearing or by remitting the matter to a Judge of the Supreme Court to make specific findings of fact. 
There is, therefore, a dearth of written judgements relating to academic misconduct as a suitability matter relevant to admission. Many of the textbooks on professional ethics deal with admissions in little detail, focusing on criminal convictions (primarily because these have been the subject of high profile cases such as Re B or Wentworth). Texts prior to 2004, indeed, generally make no mention of academic misconduct. The author has identified 2004 as the watershed on the basis that in the decision in Re AJG that year the Chief Justice of Queensland observed: Over the last couple of years, the Court has, in strong terms, emphasised the unacceptability of [academic misconduct] ... on the part of an applicant for admission to the legal profession. At the last Admissions Sitting, the Court indicated a strengthening of its response to situations like this on the basis adequate warning had been given. 
However, unlike ‘critical’ academics, the courts have not embraced, either in the admission process or in the general mode of legal analysis, a postmodern view of the nature of reality. The legal system is (and will presumably remain) steadfastly a creature of the Enlightenment, its analysis Cartesian in origin and its goal objectivity. Like the university qua institution, its values are unsurprisingly conventional. Prior to 2004, academic misconduct had been largely beneath the radar, and Re AJG became the seminal statement of principle which would be developed significantly in Queensland and in other jurisdictions.


'The Valuation of Unprotected Works: A Case Study of Public Domain Photographs on Wikipedia' by Paul J. Heald, Kris Erickson and Martin Kretschmer asks
 What is the value of works in the public domain? We study the biographical Wikipedia pages of a large data set of authors, composers, and lyricists to determine whether the public domain status of available images leads to a higher rate of inclusion of illustrated supplementary material and whether such inclusion increases visitorship to individual pages. We attempt to objectively place a value on the body of public domain photographs and illustrations which are used in this global resource. We find that the most historically remote subjects are more likely to have images on their web pages because their biographical life-spans pre-date the existence of in-copyright imagery. We find that the large majority of photos and illustrations used on subject pages were obtained from the public domain, and we estimate their value in terms of costs saved to Wikipedia page builders and in terms of increased traffic corresponding to the inclusion of an image. Then, extrapolating from the characteristics of a random sample of a further 300 Wikipedia pages, we estimate a total value of public domain photographs on Wikipedia of between $246 million and $270 million per year.


Two recent academic bullying disputes -
Christos v Curtin University of Technology [No 2] [2015] WASC 72
Chen v Monash University [2015] FCA 130

04 March 2015


'Law Enforcement Use of Drones & Privacy Rights in the United States' (University of Pittsburgh Legal Studies Research Paper No. 2014-34) by John Burkoff discusses
the constitutional, statutory, and regulatory law applicable to domestic law enforcement agents’ use of unmanned aerial vehicles (“drones”) in the United States. The use of drones by private citizens and law enforcement agencies has been increasing dramatically, raising the specter of unconstrained and unregulated invasions of individual privacy from the sky. The Federal Aviation Administration (FAA) regulates airspace in the U.S. and currently has adopted very strict constraints on the use of drones, except for their use by private hobbyists at low levels and away from heavily populated areas or airports. However, these FAA regulations are in the process of changing to be much more permissive and, in any event, they are ineffectively enforced. Some states, moreover, concerned about privacy issues, have recently enacted statutes restricting the use of drones by law enforcement agencies except in prescribed circumstances, e.g. after obtaining a warrant based upon probable cause. Many more states are considering the enactment of similar legislation. 
The important question remains whether or not the Fourth Amendment, the constitutional provision creating constraints on governmental searches and seizures, applies to law enforcement’s use of drones. If it does not, then – absent any other applicable state or FAA prohibition – police agencies would be permitted to use drones whenever they wanted and for whatever purpose they desired, without any restrictions at all on their use. Analyzing the current and changing state of Fourth Amendment doctrine, the Article concludes that the Fourth Amendment does apply to law enforcement usage of drones in many circumstances. The precise nature of that application remains to be determined, e.g. whether a warrant is required prior to such usage and whether the applicable antecedent justification required is probable cause or reasonable suspicion that the person or place to be observed is engaged in criminal activity. Law enforcement officers cannot, in any event, use drones – as a matter of federal constitutional law – when they: (1) are flown outside of navigable airspace; (2) create undue noise, wind, dust, or threat of injury; or (3) obtain any information in an effectively physically intrusive manner from a constitutionally protected area.

Smart Vehicles

Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, a report from US Senator Edward Markey, highlights concerns regarding 'smart cars'.

Markey continues to be one of the most thoughtful and forward-looking privacy legislators in the US.

The report [PDF] states
New technologies in cars have enabled valuable features that have the potential to improve driver safety and vehicle performance. Along with these benefits, vehicles are becoming more connected through electronic systems like navigation, infotainment, and safety monitoring tools. The proliferation of these technologies raises concerns about the ability of hackers to gain access and control to the essential functions and features of those cars and for others to utilize information on drivers’ habits for commercial purposes without the drivers’ knowledge or consent.
To ensure that these new technologies are not endangering or encroaching on the privacy of Americans on the road, Senator Edward J. Markey (D-Mass.) sent letters to the major automobile manufacturers to learn how prevalent these technologies are, what is being done to secure them against hacking attacks, and how personal driving information is managed.
This report discusses the responses to this letter from 16 major automobile manufacturers: BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo. Letters were also sent to Aston Martin, Lamborghini, and Tesla, but those manufacturers did not respond.
The responses reveal the security and privacy practices of these companies and discuss the wide range of technology integration in new vehicles, data collection and management practices, and security measures to protect against malicious use of these technologies and data. 
The key findings from these responses are: 
1. Nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions. 
2. Most automobile manufacturers were unaware of or unable to report on past hacking incidents. 
3. Security measures to prevent remote access to vehicle electronics are inconsistent and haphazard across all automobile manufacturers, and many manufacturers did not seem to understand the questions posed by Senator Markey. 
4. Only two automobile manufacturers were able to describe any capabilities to diagnose or meaningfully respond to an infiltration in real-time, and most say they rely on technologies that cannot be used for this purpose at all. 
5. Automobile manufacturers collect large amounts of data on driving history and vehicle performance. 
6. A majority of automakers offer technologies that collect and wirelessly transmit driving history data to data centers, including third-party data centers, and most do not describe effective means to secure the data. 
7. Manufacturers use personal vehicle data in various ways, often vaguely to “improve the customer experience” and usually involving third parties, and retention policies – how long they store information about drivers – vary considerably among manufacturers. 
8. Customers are often not explicitly made aware of data collection and, when they are, they often cannot opt out without disabling valuable features, such as navigation.
These findings reveal that there is a clear lack of appropriate security measures to protect drivers against hackers who may be able to take control of a vehicle or against those who may wish to collect and use personal driver information.
In response to the privacy concerns raised by Senator Markey and others, the two major coalitions of automobile manufacturers recently issued a voluntary set of privacy principles by which their members have agreed to abide. These principles send a meaningful message that automobile manufacturers are committed to protecting consumer privacy by ensuring transparency and choice, responsible use and security of data, and accountability. However, the impact of these principles depends in part on how the manufacturers interpret them, because
(1) the specific ways that transparency will be achieved are unclear and may not be noticed by the consumer, e.g., text in the user manual, 
(2) the provisions regarding choice for the consumer only address data sharing and do not refer to data collection in the first place, and 
(3) the guidelines for data use, security, and accountability largely leave these matters to the discretion of the manufacturers.
The alarmingly inconsistent and incomplete state of industry security and privacy practices, along with the voluntary principles put forward by industry, raises a need for the National Highway Traffic Safety Administration (NHTSA), in consultation with the Federal Trade Commission (FTC) on privacy issues, to promulgate new standards that will protect the data, security and privacy of drivers in the modern age of increasingly connected vehicles. 
Such standards should:
  • Ensure that vehicles with wireless access points and data-collecting features are protected against hacking events and security breaches; 
  • Validate security systems using penetration testing; 
  • Include measures to respond in real time to hacking events; 
  • Require that drivers are made explicitly aware of data collection, transmission, and use; 
  • Ensure that drivers are given the option to opt out of data collection and transfer of driver information to off-board storage; 
  • Require removal of personally identifiable information prior to transmission, when possible and upon consumer request.


In Australian Communications and Media Authority v Today FM (Sydney) Pty Ltd [2015] HCA 7 the High Court has considered ACMA's authority to determine that broadcaster Today FM disregarded Australian privacy law.

In December 2012 Today FM recorded a prank telephone call between presenters of one of its radio programs and staff of the King Edward VII Hospital in London regarding the Duchess of Cambridge. The broadcaster did not have the consent of either member of the hospital staff to the recording, which was broadcast some hours later and re-broadcast the following day.

ACMA, as the national broadcast sector regulator, undertook an investigation under the Broadcasting Services Act 1992 (Cth), with a preliminary finding that broadcast of the recording constituted an offence under the Surveillance Devices Act 2007 (NSW) (SDA). Section 11 of that Act prohibits communication of a private conversation obtained, without the consent of the principal parties to that conversation, through the use of a listening device.

ACMA accordingly considered that Today FM had breached the licence condition under cl 8(1)(g) of Schedule 2 to the Broadcasting Services Act 1992 (Cth). That condition requires that a licensee not use its broadcasting service in the commission of an offence against a Commonwealth enactment or a law of a State or Territory. In finalising its report ACMA determined that Today FM had breached the cl 8(1)(g) licence condition.

Today FM responded in June 2013 through proceedings in the Federal Court seeking declaratory and injunctive relief. The broadcaster contended in Today FM (Sydney) Pty Ltd v Australian Communications and Media Authority (2013) 218 FCR 447 that ACMA lacked the authority to find that Today FM had breached the cl 8(1)(g) licence condition unless and until a competent court adjudicated that it had committed the SDA offence. Today FM argued in the alternative that, if ACMA was authorised, the authorising legislative provisions are invalid as inconsistent with the separation of executive and judicial power under the Constitution.

The Federal Court at first instance rejected both of Today FM's arguments and dismissed the proceedings. The Full Court in Today FM (Sydney) Pty Ltd v Australian Communications and Media Authority (2014) 218 FCR 461 accepted Today FM's first argument and set aside ACMA's determination. By grant of special leave, ACMA appealed to the High Court.

The appeal was brought on three grounds -
  •  that the Full Court erred in construing cl 8(1)(g) as requiring, for the purposes of enforcement action under s 141 or s 143, that the Authority may only find that a relevant offence has been committed upon a conviction by a criminal court (or a finding by a criminal court that the offence is proved). 
  • that the Full Court erred in construing cl 8(1)(g) as requiring the Authority to defer administrative enforcement action until after (if at all) the conclusion of the criminal process and in holding the Authority bound by the outcome of that process. 
  • that the Full Court erred in construing the expression "commission of an offence" in cl 8(1)(g) as extending to the commission of offences by persons other than the commercial radio broadcasting licensee.

The HCA has now unanimously held that ACMA has power to make an administrative finding or express an opinion that a person has committed a criminal offence for the purpose of determining whether the holder of a commercial radio broadcasting licence has breached the licence condition.

The Court held that ACMA does have power to make an administrative determination that a licensee has committed a criminal offence as a preliminary to taking enforcement action under the Broadcasting Services Act, even though there has been no finding by a court exercising criminal jurisdiction that the offence has been proven. This is because, in making such a determination, ACMA is not adjudging and punishing criminal guilt.

The Court also held that, in making a determination, ACMA is not exercising judicial power.


''Are We There Yet?': Measuring Human Rights Sensibilities' by Simon Rice, Denise Meyerson and Kate Ogg in (2014) 20(1) The Australian Journal of Human Rights comments 
Evaluation of human rights laws needs to go beyond measurable activity and outputs, and should try to assess the existence and strength of an underlying human rights sensibility among those for whom human rights laws are an available tool. This article responds to Arthurs and Arnold’s (2005) critique of the Canadian Charter of Rights and Freedoms 1982, describing a pilot study that explores the feasibility of establishing indicators for knowledge and use of, and attitudes towards, human rights legislation. The study was conducted among legal and social service professionals in the Australian Capital Territory and Victoria, and demonstrates that it is possible to devise a simple and meaningful instrument for measuring human rights sensibilities and tracking changes to them over time. Such monitoring may assist in assessing the long-term success of human rights legislation in fostering the internalisation of human rights norms.

03 March 2015


In In the Matter of Proceeding No. 3159 of 1970 [2015] VSC 61 Forrest J has considered an application for leave to inspect a divorce file pursuant to Rule 28.05(2)(b) of the Supreme Court (General Civil Procedure) Rules 2005 (Vic).

The Court states -
2 The applicant is the eldest daughter of the parties to the divorce proceeding. The parties were married on 12 January 1948. A petition for dissolution of marriage was filed on 18 December 1970. On 13 October 1971, a decree nisi was granted. The decree nisi was made absolute on 2 December 1971.
3 The court documents of this proceeding remain upon the court file of the divorce proceeding (divorce file).
4 By a summons filed 18 December 2014, the applicant applies for leave to inspect the divorce file. The material relied upon in support is an affidavit sworn by the applicant on 12 December 2014.
5 In 2013, the applicant commenced proceedings claiming equitable damages from her mother, allegedly in consequence of her mother resiling from a representation made to her in 2004 that upon her mother’s death she would receive an equal one third of her mother’s assets (Western Australia proceeding). On the applicant’s account, almost $5 million has, to date, been disposed of by her mother by way of inter vivos gifts to her siblings or to a trust.
6 In her affidavit, the applicant says as follows:
The only rational basis that I can infer from my mother’s decision to exclude me from the distribution of her assets was that I was the unwilling witness to her adultery with the family doctor when I was child which ultimately led to my parents’ divorce.
7 On 14 January of this year, I ordered that the solicitors for the applicant’s mother be given notice of this application and have the opportunity to file any affidavit or submission in opposition. The Court has subsequently been advised that there is no opposition to access being granted to the file.
8 There have been a series of decisions of judges of this Court relating to the disclosure of material from old divorce files pursuant to r 28.05(2)(b). It is only necessary to refer to the most recent of those decisions in which Dixon J said:
In determining whether to grant the application, the court must consider: (i) whether the interest of the applicant in accessing the file and the purpose for which the applicant intends to use any information in the file is proper or appropriate; and (ii) the confidentiality of any information contained in the file and the effluxion of time as it relates to the consequence of any disclosure upon the privacy of parties and relevant non-parties, and the extent to which that privacy may be compromised. Overall, the court must consider the utility of granting the access sought in all the prevailing circumstances.
9 In essence, the applicant contends that there may be (in fact, she uses the words ‘should be’) material in the divorce file that will provide an explanation for her mother’s conduct in relation to the disposition of assets – inconsistent, on the applicant’s case, with the promise made to her by her mother in 2004. For the following reasons, I am persuaded that the applicant should have access to the file.
10 In the circumstances of this case – that is, to obtain evidence for use in a civil proceeding, I consider that it is appropriate to use the test applicable to access to documents the subject of a subpoena. The applicant must establish: (a) a legitimate forensic purpose for which access to the documents is sought; and (b) is it on the cards or a reasonable possibility that the documents sought under the subpoena will materially assist her case?
11 Having inspected the file and having noted the basis upon which production is sought, and allowing for doubt as to whether this line of inquiry will ultimately be productive, I am satisfied that the applicant has: (a) Identified a legitimate forensic purpose – namely adducing evidence of the motivation of her mother; and (b) It is on the cards that the contents of the divorce file will assist her case. That is not to say, in any way, that as a matter of fact the material will be used in the Western Australia proceeding.
12 Given that there is no opposition by the applicant’s mother and that the applicant’s father is deceased, I see no real privacy issues arising. Finally, I note that the applicant has, in effect said that she will use the information solely for the Western Australia proceeding. I will ask her solicitors to provide an undertaking to this effect.
The decisions at para 8 are In the Matter of an Application by Jill Bear [2009] VSC 122, In the Matter of Proceeding Number 1496 of 1956 [2010] VSC 192, Re Proceeding Number 1364 of 1964 [2010] VSC 494, In the Matter of Proceeding Number 870 of 1947 [2011] VSC 172, Re Proceeding Number 1451 of 1952 [2011] VSC 545. The Dixon judgment cited in that para is In the Matter of Proceeding No. 1496 of 1956 [2010] VSC 192.


'The Constitution of Identity: New Modalities of Nationality, Citizenship, Belonging and Being' by Eve Darian-Smith in Austin Sarat and Patty Ewick (eds) Wiley Handbook of Law and Society (Wiley, Forthcoming) comments
In recent decades there has emerged a large and diverse body of sociolegal literature engaging in identity politics, or what some theorists call the politics of difference (Taylor 1992:38; see also Young 1990; Apiah 2006). Drawing on the theories and insights of scholars working in cultural studies, feminist studies, sociology, anthropology, geography, political science, history and law, this literature grew out of the civil rights movements of the 1960s and 1970s and gained momentum through the rise of new social movements and debates over multiculturalism in the 1980s and 1990s (Calhoun 1994). More recently, sociolegal literature on the politics of identity has had to expand in scale and reach in seeking to analyze the complex relations between individuals and the nation-state in the context of globalization (Lacey 2004).
This expansion speaks to the ways people conceptualize their legal subjectivity and relations to others in emerging socio-political contexts that include the mobilization of global social movements, an expanding international human rights regime, and mass migrations of people that make some people “illegal” and “stateless” and include millions of refugees fleeing wars, poverty, and various natural and man-made disasters (Dauvergne 2008). This expansion in the sociolegal literature also reflects new socio-political contexts of a less obviously global nature present in subnational regions, global cities, borderlands, prisons, immigration offices, hospitals and tribal reservations (Perry and Maurer 2003). These trans-state and sub-state contexts suggest a diverse range of legal relations brought about by new labor markets, new industries and commodities, new forms of secular and religious violence, new cultural and sexual politics, new reproductive technologies, new materialist understandings of agency, and a rethinking of the autonomous subject/citizen with increasing attention being given to a blurring of conventional divides between the human and non-human.
In this essay I seek to highlight some of the sociolegal scholarship engaged in the constitution of legal identities within state and non-state contexts, and point to some of the emerging challenges and new directions scholarly conversations are moving. The essay is not meant to present an exhaustive summary of the literature but rather an outlining of the analytical approaches in which notions of identity vis-à-vis the nation-state have been thought about in the past, how and in what ways these approaches may be shifting in the present, and what we may as sociolegal scholars need to be thinking about as we confront the future. Whether we think of ourselves as living in a postnational moment or not, what is clear is that the idea of a person’s legal subjectivity and identity being constituted solely through the geo-political boundaries of the nation-state is no longer a given (Purvis and Hunt 1999). In other words, we can no longer pretend that the modernist concepts of “individual” and “state” are stable categories and share clearly demarcated relations that up until relatively recently have underscored the idea of state nationalism and a person’s sense of personal and collective belonging vis-à-vis a national polity. In short, how people conceptualize themselves is now widely acknowledged as not reducible to simplified and essentialized individual and group identities recognized in law through state policies and institutions. ...
In the discussion above, I outlined two sociolegal approaches to examining the constitution of legal identity – postcolonial approaches and democratic liberal approaches. These approaches are not mutually exclusive and share many overlapping concepts, theories and methods. Perhaps the greatest commonality is that both bodies of literature are deeply engaged with the concept of the nation-state and tend to ultimately affirm its normative and analytical centrality. In the first postcolonial approach, the nation-state is the primary actor on the international battlefield over which legal identity is fought between states and groups of peoples demanding the same legal status as states. In the second liberal approach, the nation-state is the geo-political container in which various peoples fight for self and collective recognition of their legal identities. As we move into the first half of this century, both of these approaches will undoubtedly remain critical discursive terrains of legal and cultural conflict, tension, and negotiation. However, both will also have to contend with new political pressures being brought to bear on the constitution of legal identity that are attracting attention in sociolegal scholarship and more general intellectual conversations.
Below I briefly point to three emerging lines of inquiry that are forcing some law and society scholars to reassess their thinking regarding the constitution of legal identity. These are (i) the concepts of postnational and denational citizenship and related issues of statelessness being experienced by millions of refugees many of whom cannot imagine, let alone claim, a national legal identity, (ii) the prominence of human rights discourse and the degree to which international legal institutions are impacting the constitution of legal identities, and (iii) emerging frontiers of technology and new materialist thinking which are forcing scholars to think differently about relations of sociability and the blurred divides between humans and non-humans and their respective relational legal identities.
'The Subjectification of the Citizen in European Public Law' by Marco Dani investigates
 the condition of the individual qua citizen as recognised and shaped by national constitutional democracies and supranational law, the legal and political orders constituting European public law. It firstly spells out the notion of ‘subjectification’ and its peculiar manifestation in the context of European public law. Then, it offers an excursus on the subjectification of the citizen by looking at its main constitutive dimensions: belonging, rights and participation. The excursus examines three distinct phases of the evolution of European integration. Firstly, it looks at the social state era and the affirmation of the constitutional subject, a type of citizen devised essentially within national constitutional democracies with supranational law offering just additional rights for the economically active. Secondly, it explores the transformation of the constitutional subject prompted by the expansion of supranational law and the emergence of the ‘advanced liberalism’ agenda. Finally, the paper evaluates the condition of the citizen during the financial crisis, a stage which probably witnesses the twilight of the constitutional subject as conceived of in the social state era. The upshot of this excursus contradicts more conventional accounts for subjectivity in the EU emphasising a civic turn in the understanding of the individual: if the relationships between individuals and the governmental projects constituting European public law are considered, the evolution of European integration is paralleled by an involution of citizenship. Or, at least, of the idea of citizenship imagined in national constitutional democracies in post-World War II.

Genomic Reidentification

'Redefining Genomic Privacy: Trust and Empowerment' by Yaniv Erlich, James B. Williams, David Glazer, Kenneth Yocum, Nita Farahany, Maynard Olson, Arvind Narayanan, Lincoln D. Stein, Jan A. Witkowski and Robert C. Kain in (2014) 12(11) PLoS Biology comments
Fulfilling the promise of the genetic revolution requires the analysis of large datasets containing information from thousands to millions of participants. However, sharing human genomic data requires protecting subjects from potential harm. Current models rely on de-identification techniques in which privacy versus data utility becomes a zero-sum game. Instead, we propose the use of trust-enabling techniques to create a solution in which researchers and participants both win. To do so we introduce three principles that facilitate trust in genetic research and outline one possible framework built upon those principles. Our hope is that such trust-centric frameworks provide a sustainable solution that reconciles genetic privacy with data sharing and facilitates genetic research.
The authors state -
Genomic research promises substantial societal benefits, including improving health care as well as our understanding of human biology, behavior, and history. To deliver on this promise, the research and medical communities require the active participation of a large number of human volunteers as well as the broad dissemination of genetic datasets. However, there are serious concerns about potential abuses of genomic information, such as racial discrimination and denial of services because of genetic predispositions, or the disclosure of intimate familial relationships such as nonpaternity events. Contemporary data-management discussions largely frame the value of data versus the risks to participants as a zero-sum game, in which one player's gain is another's loss. Instead, this manuscript proposes a trust-based framework that will allow both participants and researchers to benefit from data sharing.
Current models for protecting participant data in genetic studies focus on concealing the participants' identities. This focus is codified in the legal and ethical frameworks that govern research activities in most countries. Most data protection regimes were designed to allow the free flow of de-identified data while restricting the flow of personal information. For instance, both the Health Insurance Portability and Accountability Act (HIPAA) and the European Union privacy directive require either explicit subject consent or proof of minimized risk of re-identification before data dissemination. In Canada, the test for whether there is a risk of identification involves ascertaining whether there is a “serious possibility that an individual could be identified through the use of that information, alone or in combination with other available information”. To that end, the research community employs a fragmented system to enforce privacy that includes institutional review boards (IRBs), ad hoc data access committees (DACs), and a range of privacy and security practices such as the HIPAA Safe Harbor.
The current approach of concealing identities while relying on standard data security controls suffers from several critical shortcomings. First, standard data security controls are necessary but not sufficient for genetic data. For instance, access control and encryption can ensure the security of information at rest in the same fashion as for other sensitive (e.g., financial) information, protecting against outsiders or unauthorized users gaining access to data. However, there is also a need to prevent misuse of data by a “legitimate” data recipient. Second, recent advances in re-identification attacks, specifically against genetic information, reduce the utility of de-identification techniques. Third, de-identification does not provide individuals with control over data — a core element of information privacy.
With the growing limitations of de-identification, the current paradigm is not sustainable. At best, participants go through a lengthy, cumbersome, and poorly understood consent process that tries to predict worst-case future harm. At worst, they receive empty promises of anonymity. Data custodians must keep maneuvering between the opposite demands for data utility and privacy, relegating genetic datasets into silos with arbitrary access rules. Funding agencies waste resources funding studies whose datasets cannot be reused across and between large patient communities because of privacy concerns. Finally, well-intentioned researchers struggle to obtain genetic data from hard to access resources. These limitations impede serendipitous and innovative research and degrade a dataset's research value, with published results often overturned because of small sample sizes.
'Routes for breaching and protecting genetic privacy' by Erlich and Narayanan in (2014) 15(8) Nat Rev Genetics 570 states
We are entering an era of ubiquitous genetic information for research, clinical care and personal curiosity. Sharing these data sets is vital for progress in biomedical research. However, a growing concern is the ability to protect the genetic privacy of the data originators. Here, we present an overview of genetic privacy breaching strategies. We outline the principles of each technique, indicate the underlying assumptions, and assess their technological complexity and maturation. We then review potential mitigation methods for privacy-preserving dissemination of sensitive data and highlight different cases that are relevant to genetic applications.
They comment
Genetic datasets are typically published with additional metadata, such as basic demographic details, inclusion and exclusion criteria, pedigree structure, as well as health conditions that are critical to the study and for secondary analysis. These pieces of metadata can be exploited to trace the identity of the unknown genome.
Unrestricted demographic information conveys substantial power for identity tracing. It has been estimated that the combination of date of birth, sex, and 5-digit zip code uniquely identifies more than 60% of US individuals. In addition, there are extensive public resources with broad population coverage and search interfaces that link demographic quasi-identifiers to individuals, including voter registries, public record search engines (such as PeopleFinders.com) and social media. An initial study reported the successful tracing of the medical record of the Governor of Massachusetts using demographic identifiers in hospital discharge information. Another study reported the identification of 30% of Personal Genome Project (PGP) participants by demographic profiling that included zip code and exact birthdates found in PGP profiles.
Since the inception of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, dissemination of demographic identifiers has been the subject of tight regulation in the US health care system. The Safe Harbor provision requires that the maximal resolution of any date field, such as hospital admissions, will be in years. In addition, the maximal resolution of a geographical subdivision is the first three digits of a zip code (for zip codes of populations of greater than 20,000). Statistical analyses of the census data and empirical health records have found that the Safe Harbor provision provides reasonable immunity against identity tracing assuming that the adversary has access only to demographic identifiers. The combination of sex, age, ethnic group, and state is unique in less than 0.25% of the population of each of the states.
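The uniqueness figures discussed above can be illustrated with a short sketch. The records below are invented for illustration; the function simply counts how many rows are unique on a given quasi-identifier combination, first at full precision and then after Safe Harbor-style generalisation (year of birth only, first three zip digits).

```python
from collections import Counter

def fraction_unique(records, key):
    """Fraction of records whose quasi-identifier tuple occurs exactly once."""
    counts = Counter(key(r) for r in records)
    return sum(1 for r in records if counts[key(r)] == 1) / len(records)

# Invented toy records: (date of birth, sex, 5-digit zip).
records = [
    ("1970-01-01", "F", "02139"),
    ("1970-01-01", "F", "02139"),
    ("1985-06-15", "M", "90210"),
    ("1985-11-02", "M", "90213"),
    ("1992-03-09", "M", "10001"),
]

# Full-precision quasi-identifiers: 3 of the 5 records are unique.
exact = fraction_unique(records, key=lambda r: r)

# Safe Harbor-style generalisation: year of birth and first three zip digits,
# which merges the two 1985 records into one equivalence class.
coarse = fraction_unique(records, key=lambda r: (r[0][:4], r[1], r[2][:3]))

print(exact, coarse)  # 0.6 0.2
```

On real census-scale data the same computation underlies the 60%-unique estimate for (date of birth, sex, zip) and the sub-0.25% figure for the generalised Safe Harbor attributes.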
Pedigree structures are another piece of metadata that are included in many genetic studies. These structures contain rich information, especially when large kinships are available. A systematic study analysed the distribution of 2,500 two-generation family pedigrees that were sampled from obituaries from a US town of 60,000 individuals. Only the number (but not the order) of male and female individuals in each generation was available. Despite this limited information, about 30% of the pedigree structures were unique, demonstrating the large information content that can be obtained from such data.
Another vulnerability of pedigrees is combining demographic quasi-identifiers across records to boost identity tracing despite HIPAA protections. For example, consider a large pedigree that states the age and state of all participants. The age and state of each participant leaks very minimal information, but knowing the ages of all first and second-degree relatives of an individual dramatically reduces the search space. Moreover, once a single individual in a pedigree is identified, it is easy to link between the identities of other relatives and their genetic datasets. The main limitation of identity tracing using pedigree structures alone is their low searchability. Family trees of most individuals are not publicly available, and their analysis requires indexing a large spectrum of genealogical websites. One notable exception is Israel, where the entire population registry was leaked to the web in 2006, allowing the construction of multi-generation family trees of all Israeli citizens.
Identity tracing by genealogical triangulation
Genetic genealogy attracts millions of individuals interested in their ancestry or in discovering distant relatives. To that end, the community has developed impressive online platforms to search for genetic matches, which can be exploited by identity tracers. One potential route of identity tracing is surname inference from Y-chromosome data (Figure 2). In most societies, surnames are passed from father to son, creating a transient correlation with specific Y chromosome haplotypes. The adversary can take advantage of the Y chromosome–surname correlation and compare the Y haplotype of the unknown genome to haplotype records in recreational genetic genealogy databases. A close match with a relatively short time to the most common recent ancestor (MRCA) would signal that the unknown genome likely has the same surname as the record in the database.
The power of surname inference stems from exploiting information from distant patrilineal relatives of the unknown’s genome. Empirical analysis estimated that 10–14% of US white male individuals from the middle and upper classes are subject to surname inference based on scanning the two largest Y-chromosome genealogical websites with a built-in search engine. Individual surnames are relatively rare in the population, and in most cases a single surname is shared by less than 40,000 US male individuals, which is equivalent to 13 bits of information (Box 1). In terms of identification, successful surname recovery is nearly as powerful as finding one’s zip code. Another feature of surname inference is that surnames are highly searchable. From public record search engines to social networks, numerous online resources offer query interfaces that generate a list of individuals with a specific surname. Surname inference has been utilized to breach genetic privacy in the past. Several sperm donor conceived individuals and adoptees successfully used this technique on their own DNA to trace their biological families. In the context of research samples, a recent study reported five successful surname inferences from Illumina datasets of three large families that were part of the 1000 Genomes project, which eventually exposed the identity of nearly fifty research participants.
The main limitation of surname inference is that haplotype matching relies on comparing Y chromosome Short Tandem Repeats (Y-STRs). Currently, most sequencing studies do not routinely report these markers, and the adversary would have to process large-scale raw sequencing files with a specialized tool. Another complication is false identification of surnames and inference of surnames with spelling variants compared to the original surname. Eliminating incorrect surname hits necessitates access to additional quasi-identifiers such as pedigree structure and typically requires a few hours of manual work. Finally, in certain societies, a surname is not a strong identifier and its inference does not provide the same power for re-identification as in the USA. For example, 400 million people in China hold one of the ten common surnames, and the top hundred surnames cover almost 90% of the population, dramatically reducing the utility of surname inference for re-identification.
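The core of the surname-inference attack described above, matching a query Y-STR haplotype against a genealogy database, can be sketched in a few lines. The marker names are real Y-STR loci, but the repeat counts, surnames and matching thresholds below are invented for illustration; real inference uses dozens of markers, mutation-aware genetic distances and an estimate of time to the most recent common ancestor rather than a simple match fraction.

```python
def haplotype_similarity(query, record):
    """Fraction of shared Y-STR markers with identical repeat counts."""
    shared = query.keys() & record.keys()
    if not shared:
        return 0.0
    return sum(query[m] == record[m] for m in shared) / len(shared)

def infer_surname(query, database, min_shared=3, min_score=0.9):
    """Return the surname of the best-matching record, or None if no
    record shares enough markers and matches closely enough."""
    best_surname, best_score = None, 0.0
    for surname, haplotype in database:
        if len(query.keys() & haplotype.keys()) < min_shared:
            continue  # too few comparable markers to trust a match
        score = haplotype_similarity(query, haplotype)
        if score > best_score:
            best_surname, best_score = surname, score
    return best_surname if best_score >= min_score else None

# Invented database: (surname, {Y-STR marker: repeat count}).
database = [
    ("Hughes", {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS392": 13}),
    ("Nguyen", {"DYS19": 15, "DYS390": 21, "DYS391": 10, "DYS392": 14}),
]
query = {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS392": 13}
print(infer_surname(query, database))  # Hughes
```

The `min_shared` guard reflects the observation in the study above that confident inferences required a minimum number of comparable markers (at least 12 in the CEU analysis); with too few shared markers, a high match fraction is uninformative.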
An open research question is the utility of non Y chromosome markers for genealogical triangulation. Websites such as Mitosearch.org and GedMatch.com run open searchable databases for matching mitochondrial and autosomal genotypes, respectively. Our expectation is that mitochondrial data will not be very informative for tracing identities. The resolution of mitochondrial searches is low due to the small size of the mitochondrial genome, meaning that a large number of individuals share the same mitochondrial haplotypes. In addition, matrilineal identifiers such as surname or clan are relatively rare in most human societies, complicating the usage of mitochondria haplotype for identity tracing. Autosomal searches on the other hand can be quite powerful. Genetic genealogy companies have started to market services for dense genome-wide arrays that enable the identification of distant relatives (on the order of 3rd to 4th cousins) with fairly sufficient accuracy. These hits would reduce the search space to no more than a few thousand individuals. The main challenge of this approach would be to derive a list of potential people from a genealogical match. As we stated earlier, family trees of most individuals are not publicly available, making such searches a very demanding task that would require indexing a large spectrum of genealogical websites. With the growing interest in genealogy, this technique might be easier in the future and should be taken into consideration.
'Identifying personal genomes by surname inference' by Gymrek, McGuire, Golan, Halperin and Erlich in (2013) 339(6117) Science 321-4 comments
Sharing sequencing data sets without identifiers has become a common practice in genomics. Here, we report that surnames can be recovered from personal genomes by profiling short tandem repeats on the Y chromosome (Y-STRs) and querying recreational genetic genealogy databases. We show that a combination of a surname with other types of metadata, such as age and state, can be used to triangulate the identity of the target. A key feature of this technique is that it entirely relies on free, publicly accessible Internet resources. We quantitatively analyze the probability of identification for U.S. males. We further demonstrate the feasibility of this technique by tracing back with high probability the identities of multiple participants in public sequencing projects.
They go on to report
Surname inference from personal genomes puts the privacy of current de-identified public data sets at risk. We focused on the male genomes in the collection of Utah Residents with Northern and Western European Ancestry (CEU). The informed consent of these individuals did not definitively guarantee their privacy and stated that future techniques might be able to identify them. To test the ability to trace back the identities of these samples from personal genomes, we processed with lobSTR 32 Illumina genomes of CEU male founders that reside in public repositories of the 1000 Genomes Project and the European Nucleotide Archive that were sequenced with read lengths of at least 76 bp. Most of these genomes were sequenced to a shallow depth of less than 5× and produced sparse Y-STR haplotypes. We selected the 10 genomes that had the most complete Y-STR haplotypes with a range of 34 to 68 markers to attempt surname recovery. Searching the genetic genealogy databases returned top-matching records with Mormon ancestry in 8 of the 10 individuals for whom the top hit had at least 12 comparable markers. Moreover, for four individuals, the top match consisted of multiple records with the same surname, increasing the confidence that the correct surname was retrieved. This potentially high surname recovery rate stems from a combination of the deep interest in genetic genealogy among this population and the large family sizes, which exponentially increases the number of targeted individuals for every person who is tested.
In five surname recovery cases, we fully identified the CEU individuals and their entire families with very high probabilities (Table 1). These five cases belonged to three pedigrees, in two of which the surnames of both the paternal and maternal grandfathers were recovered. Our strategy for tracing back individuals relied on the recovered surnames as well as publicly available Internet resources such as record search engines, obituaries, and genealogical Web sites, and demographic metadata available in the Coriell Cell Repository Web site. The year of birth was inferred by subtracting the ages in Coriell from the year of collecting samples. Each complete pedigree re-identification took 3 to 7 hours by a single person. The identified families matched exactly to the corresponding pedigree descriptions in the Coriell database: The number of children, the birth order of daughters and sons, and the state of residence were identical. All grandparents were alive in 1984, the year that the CEU cell line collection was established. In the two cases of a dual surname recovery from both grandfathers, the surname of the father and the maiden name of the mother matched exactly to the grandfathers' surnames, substantially increasing the confidence of the recovery. Coriell also lists the ages during sample collection for these two pedigrees, which agreed with the age differences of all tested cases with the identified family members. Using genealogical Web sites, we traced the patrilineal lineage that connects each identified genome through the MRCA to the record originator in the genetic genealogy database (Fig. 3). This analysis revealed that two to seven meiosis events link the CEU genome to the record source. Finally, we calculated that the probability of finding random families in the Utah population with these exact demographic characteristics is less than 1 in 10⁵ to 5 × 10⁹.
In total, surname inference breached the privacy of nearly 50 individuals from these three pedigrees.
This study shows that data release, even of a few markers, from one person can spread through deep genealogical ties and lead to the identification of another person who might have no acquaintance with the person who released his genetic data. The propagation of information through shared male lines amplifies the range of identification, allowing ~135,000 records to potentially target several million U.S. males. Another feature of this identification technique is that it entirely relies on free, publicly available resources. It can be completed end-to-end with only computational tools and an Internet connection. The compatibility of our technique with public record search engines makes it much easier to continue identifying other data sets in the same pedigree, including female genomes, once one male target is identified. We envision that the risk of surname inference will grow in the future. Genetic genealogy enthusiasts add thousands of records to these databases every month. In addition, the advent of third-generation sequencing platforms with longer reads will enable even higher coverage of Y-STR markers, further strengthening the ability to link haplotypes and surnames.
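The triangulation arithmetic underlying the study's risk estimates can be sketched in a few lines: each piece of metadata (surname, inferred birth year, state of residence) multiplicatively shrinks the candidate pool. The figures below are illustrative assumptions for the sake of the sketch, not numbers taken from the paper:

```python
# Back-of-the-envelope sketch of demographic triangulation:
# how a recovered surname plus birth year plus state narrows
# a candidate pool. All figures are illustrative assumptions,
# not data from Gymrek et al.

US_MALES = 150_000_000        # assumed number of U.S. males
SURNAME_FREQ = 1 / 40_000     # assumed frequency of a moderately rare surname
BIRTH_YEAR_FRAC = 1 / 80      # one birth year out of ~80 plausible years
STATE_FRAC = 1 / 50           # uniform spread across 50 states (assumption)

pool = US_MALES * SURNAME_FREQ   # males sharing the surname
pool *= BIRTH_YEAR_FRAC          # ...and the inferred birth year
pool *= STATE_FRAC               # ...and the state of residence

print(f"Expected candidates: {pool:.3f}")
# prints: Expected candidates: 0.938
```

Under these assumed numbers the expected number of coincidental matches falls below one, which is the intuition behind the paper's conclusion that a full demographic match to a random Utah family is extremely improbable.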