07 September 2013

Secrets

Robert Carolina and Kenneth G. Paterson in 'Megamos Crypto, Responsible Disclosure, and the Chilling Effect of Volkswagen Aktiengesellschaft vs Garcia, et al' comment [PDF] that
The recent decision of the English High Court to censor the publication of an academic paper describing weaknesses in the Megamos Crypto automobile immobiliser system raises a number of concerns for members of the cryptographic academic community, legal practitioners, and commercial users of cryptographic products. In this paper we will provide a brief description of the technology at the heart of the dispute, the crypto research project, the court's decision, and then provide a critique of the decision and make observations about its potential impact. Our description and our observations are based on evidence as it was disclosed in the published decision of the court, Volkswagen Aktiengesellschaft vs Garcia, et al [2013] EWHC 1832 (Ch) (25 June 2013). This decision addressed a request for preliminary injunction pending a full trial on the merits. It remains possible that additional evidence introduced later, or existing evidence that has not been disclosed in the decision, could have a significant impact upon the observations and opinions presented here. We do not take any position, nor do we make any prediction, about the ultimate outcome of this case. 
The authors note that -
... three researchers decided to test the strength of the Megamos Crypto system. This type of activity – an unsolicited effort to identify weaknesses in commercial crypto devices – is common in the field of crypto research. This would not be the first paper published by academics highlighting weaknesses in RF-based automobile security devices. See, for example, Indesteege, Keller, Dunkelman, Biham, and Preneel, "How To Steal Cars – A Practical Attack on KeeLoq", EUROCRYPT 2008, LNCS 4965, pp. 1–18, 2008. (Cooperative effort by academic researchers resident in Belgium and Israel, supported by both public and private research grants, revealing deficiencies in KeeLoq – a widely installed remote key entry system. We note without further comment that it is common practice for academics in this field to give such papers rather provocative titles.) 
To conduct their analysis of Megamos Crypto the researchers needed to obtain details of the crypto algorithm used. The manufacturers of the immobiliser do not publish the algorithm. The algorithm is claimed as a trade secret. It is not clear from the decision whether the researchers considered paying a laboratory to reverse engineer the crypto chip itself. The court was advised that reverse engineering the chip would cost in the region of €50,000 – an outlay that might have seemed expensive in the context of an academic grant proposal. Instead, the researchers identified a third party hardware and software product called Tango Programmer. This product (sold for an initial payment of €1,000 per unit) can be used, among other things, to create keys for automobile immobilisers using Megamos Crypto and other immobiliser systems. The algorithm is incorporated within the Tango Programmer software, but not directly disclosed by the manufacturer. 
The researchers conducted a careful study of the Tango Programmer software. From this they were able to reverse engineer the functionality of the system and discover the details of the cryptographic algorithm. Having obtained the algorithm, they set about studying the system. The researchers eventually identified three weaknesses in the system. (Para.11.) Two of these (use of weak secret keys and poor key updating practices) were not an issue in the case. The court did not comment on this, but we note that weaknesses of this type recur with sad frequency in the operation of secure systems. 
The other weakness identified is much more serious. This is alleged to be a weakness in the design of the cryptographic algorithm itself. To explain the flaw that they had uncovered, the researchers planned to include a description of the algorithm in their published paper. It was this desire to publish the (allegedly secret) algorithm that created the dispute. 
In November 2012 the three authors approached EM (the crypto chip manufacturer) to explain the weaknesses they had uncovered. (Para.15.) It is not clear from the decision whether the researchers understood that EM was using the algorithm under license from Thales. The court's published decision, unsurprisingly, does not provide details of the exact nature of the weakness in the Megamos Crypto algorithm. 
The researchers planned to publish their paper in August 2013 in the proceedings of the annual USENIX Security conference. Volkswagen learned of the soon-to-be published academic paper on 23 May 2013, and brought a lawsuit in the High Court of England to prohibit disclosure of the algorithm. (Para.16.) The lawsuit names the three academic authors (two resident in the Netherlands and one resident in England), and the two universities that employ them (in England and the Netherlands).
The authors conclude -
The decision constructs a narrative about the academics that is very unflattering. Faced with a request to delay publication for just a little while longer, the researchers instead demanded the ability to publish immediately and thereby jeopardised the security of millions of cars. (We have already questioned whether this was such a serious risk.) While the court admits that the failure to make Volkswagen aware of the problem was not their fault, it chastises them anyway for failing to consent to any more delay: "A responsible approach would be to recognise the legitimacy of the interest in protecting the security of motor vehicles." (Para.41) The court delivers some of its harshest commentary in describing the responsible disclosure process. "I think the defendants' mantra of 'responsible disclosure' is no such thing. It is a self-justification by defendants for the conduct they have already decided to undertake and it is not the action of responsible academics." (Para.42) 
We suggest that a review of the evidence disclosed in the decision also supports a different narrative. This begins by considering the difficult work undertaken by the academics as part of their mission to support security research. The selection of Megamos Crypto as a potential research subject, the sourcing of Tango Programmer, the reverse engineering work needed to liberate the algorithm from the software, and then the core research work of examining the crypto algorithm for flaws. The decision does not state how long the researchers spent on this process, but we have little doubt that it was significant. Acting under ethical guidelines regularly applied by academics in this field, they approached the chip manufacturer EM with their findings in November 2012. They offered their assistance to develop work-arounds or replacement technology. They planned to publish their findings in August 2013, nine months after private disclosure. Having been open with EM, they heard very little in response. The researchers were then surprised when Volkswagen entered the picture in May 2013 – seven months after initial disclosure to EM – and sued them. Volkswagen requested and received an emergency temporary injunction with no notice to the academics. Given that the only meeting about the weakness in Megamos Crypto described by the court took place in June 2013 – a matter of weeks before scheduled publication – we are left to ponder how much emotion may have entered the situation at this stage. 
The difference in these two competing narratives demonstrates a significant disagreement about what constitutes "responsible disclosure". It appears that the court may not have fully appreciated how this phrase is used as a term of art in the context of security research. There are three main methods of public disclosure that are in common use in this admittedly abstruse field: (1) non-disclosure, (2) responsible disclosure and (3) full disclosure. In the first case, the researcher tells the affected party, and then says nothing more; in the third case, the researcher publishes without telling the affected party in advance and without any regard to their interests. The second way is a middle path between these extremes that is now very widely followed by academics and more generally security researchers. Typically, six weeks is set as the "time to disclosure" in the case of software flaws, and six months in the case of hardware flaws. However, in extreme cases, where no simple fix is available and the impact is very serious, researchers might feel compelled to wait longer than six months. These time scales (six weeks and six months) are not unique to these academic researchers. They are widely used baselines within the field of security research. We imagine that the researchers felt that they had already "gone the extra mile" by disclosing nine months in advance of publication, and might have felt rather abused when someone other than the product's manufacturer suddenly appeared and brought a lawsuit only two months before planned publication. 
It is crucial to understand that "responsible disclosure" is simply a phrase used by researchers to describe one approach to the public disclosure of security flaws, one that is certainly more responsible than full disclosure, and arguably even more responsible than non-disclosure, given that the latter approach does not create any incentive for the affected party to address any disclosed flaws in their products. The court did not appear to appreciate this distinction, given the way in which the decision criticizes the researchers. (Para.42.) 
Furthermore, and more importantly, it is apparent (from para.14) that the court's understanding of the term is incomplete: there, a definition of responsible disclosure is offered which entails "telling the manufacturer in advance" about the flaws, but which does not include the critical point that, in this mode of disclosure, a date is set up-front for when disclosure will take place, irrespective of the circumstances at the time when that date is reached. Establishing such a publication deadline when disclosing to the manufacturer is not simply the arbitrary or capricious act of a petulant researcher. This mechanism is used to prevent affected parties (who, as noted above, often form part of complex supply chains) from unnecessary dithering and to ensure they have an incentive to address the identified security flaws. It seems that this missing point concerning timing is what leads the court to heap opprobrium on the researchers in paras. 41 and 42, where it is opined that "it was not consistent with the idea of responsible disclosure for the defendants to simply say, 'We are going ahead anyway'." and "I think the defendants' mantra of 'responsible disclosure' is no such thing." There is a value judgment implied by the use of the word "mantra" here – this meaning a phrase repeated often and without significant thought. Our experience is that academic security researchers and industrial consumers of cryptography alike do understand the significance and methodology of responsible disclosure, and accept it as the preferred, if not universal, modus operandi for disclosing security vulnerabilities. This apparent breakdown in understanding seems to heavily colour the court's view of the academics' probity. 
We find the strong language used to describe the actions of the academics both puzzling and disappointing. First, it is clear that their approach to "responsible disclosure" was well within normal guidelines followed by security researchers for the benefit of the security industry (and society) as a whole. Even if it were not, the strength of the court's condemnation is surprising given the reality it had already acknowledged – reasonably accessible methods are available that would allow the academics or anyone else to publish the algorithm without the permission of the complaining parties.
The authors suggest that -
This ruling is likely to have a chilling effect on legitimate security research in the UK. While the circumstances of this case are rather specific, and the decision hangs on those specifics, the case creates a degree of uncertainty and confusion around what can, and cannot, be done by security researchers without running the risk of encountering legal obstacles. For academic researchers, "publish or perish" is no less pressing or relevant a motto for being hackneyed through overuse. And the investment in time and effort required to conduct the kind of research relevant to this case is significant, as are the risks that any given research avenue selected will turn out to be unfruitful. So the mere perception that legal barriers to publication might arise is likely to cause some researchers, particularly new entrants to the field, to think twice about starting at all. 
It is then especially ironic that, all the while, the UK government, through its funding agencies (such as EPSRC) and UK government departments (such as CESG/GCHQ and Business, Innovation and Skills, BIS), has been investing heavily in cyber security research, with a proportion of that funding being directed towards projects involving the development of techniques for the analysis, discovery, and eventual elimination, of weaknesses in security systems. 
We may also speculate that the ruling may have repercussions beyond the UK. Academic research in cryptography and security is a discipline now observed routinely around the world. Multi-country collaborations (like the collaboration that is the subject of this case) are commonplace. It is unclear whether the High Court of England would have been vested with jurisdiction of this case but for the fact that one of the authors and his employer are resident in the United Kingdom. The remaining two authors are normally resident in the Netherlands. The putative publisher is based in the United States. Certainly courts in the United States are highly suspicious of such prior restraint cases due to a combination of the guarantee of free speech (under the First Amendment of the US Constitution) and certain limitations in the US treatment of trade secrets. (See generally, Samuelson, "Principles For Resolving Conflicts Between Trade Secrets And The First Amendment", 58 Hastings L.J. 777 (March, 2007).) 
As a result of this decision, it seems plausible that researchers based outside the UK may be less enticed by the prospect of working with UK-based researchers given the possible injunction of their eventual joint research papers. The effect would be to isolate UK-based security researchers, at a time when national governments are strongly emphasising the need for cross-border efforts in cyber security research (see for example the UK Cyber Security Strategy at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/60961/uk-cyber-security-strategy-final.pdf). .... 
In granting a preliminary injunction that partially restrains publication of academic research into weaknesses in the Megamos Crypto system, the English High Court has taken a step that is – and should be – troubling to legitimate security researchers. In our opinion, the court's decision evinces a lack of understanding of the foundational principles of cryptography and secure system design that would have been necessary to conduct an appropriate enquiry into the risks of publication. The decision also appears to lack a clear understanding of the term of art "responsible disclosure", and the well-established role that this plays in security research. Although this is a preliminary decision, given the admitted infringement of free speech we find the application of law to the facts in this decision to be surprisingly brief and unhelpful. We are especially puzzled by the court's willingness to jump so quickly to the conclusion that the manufacturer of the Tango Programmer product engaged in misappropriation of a trade secret, and having reached that conclusion that the academics ought to have been aware of the misappropriation. If the court had better reasons to draw these inferences from the preliminary evidentiary record, it is unfortunate that the court did not describe this evidence in the published decision. We are also troubled at the chilling effect that this decision may have on legitimate security research in the UK. This decision, which we expect will be viewed as out of step with the prevailing trends of other countries regularly engaged in such research, could have the effect of isolating UK security research academics from their international colleagues – at precisely the time that the government in the UK is encouraging an increase in such research and in international cooperation. 
As a final comment, we have no doubt that the judge in this matter – who was required to hear this application and make this decision in a very compressed time frame – is an extremely able jurist. Judges, no matter how able, cannot be experts in all subjects. In English courts (and other common law courts around the world) it is the responsibility of others to explain to the court key elements of technology under review. Perhaps for no reason other than the compressed timetable leading up to the hearing and decision, it appears to us that this process of explaining complex technical facts and practices from an otherwise abstruse specialist field has somehow broken down.