Interview: Adrian Martin Speaks Out in Favor of Open Access

In the coming period, I will interview a number of researchers about their work and the extent to which open access plays a role in it. The debate around open access is often conducted at the policy level, between university boards, libraries, and publishers. But the voices of those who actually make use of research papers, books, and research data are often not heard. How does a researcher or practitioner see the open access movement enabling free online access to scholarly works? How does it affect their work? Which initiatives of interest are being developed in particular fields, and what are personal experiences with open access publishing? All kinds of questions that will hopefully lead to helpful answers for other researchers engaging with open access.

The first interview is with Adrian Martin. Adrian was born in 1959 in Australia. He has been a film and arts critic for more than 30 years and is currently affiliated with Monash University as an associate professor in Film Culture and Theory. His work has appeared in many journals and newspapers around the world and has been translated into over twenty languages.

The interview starts:

Jeroen: When did you first hear of open access as a new way of distributing research to a wider audience?

Adrian: To appreciate my particular viewpoint on open access issues, you probably need to know where I am ‘coming from’. I am not now, and have rarely been in my life so far, a salaried academic. I have spent most of my life as what I guess is called an ‘independent researcher’. I have sometimes called myself a ‘freelance intellectual’, but I guess the more prosaic description would simply be ‘freelance writer/speaker’. So, not a journalist in the strict sense (I have never worked full-time for any newspaper or magazine), and only sometimes an employed academic within the university system.

Latest issue of Senses of Cinema

Therefore, my entry into these issues is as someone who, at the end of the 1990s, began to get heavily involved in the publication of online magazines, whether as editor, writer, or translator. These were not commercial or industrial publications; they were ‘labour of love’ projects, kin to the world of ‘small print magazines’ in the Australian arts scene (which I had been a part of in the 1980s). No special subscription process was required; it was always, simply, a completely open and accessible website. My entrée to this new, global, online scene was through Bill Mousoulis, the founder of Senses of Cinema; later I was part of the editorial teams of Rouge and, currently, LOLA. And I have contributed to many Internet publications of this kind since the start of the 21st century. The latter two publications do not use academic ‘peer review’ (although everything is carefully checked and edited), and are run on an active ‘curation’ model (i.e., we approach specific people to ask for texts) rather than an ‘open submission’ model.

I say this in order to make clear that my attitude and approach do not come only, or even mainly, from an academic/scholarly perspective. For me, open access is not primarily or solely about making formerly ‘closed’ academic research available to all – although that is certainly one important part of the field. Open access is about – well, open access, in the strongly political sense of making people feel that they are not excluded from reading, seeing, learning or experiencing anything that exists in the world. Long before I encountered the inspiring works of Jacques Rancière, I believe I agreed deeply with his political philosophy: that what we have to fight, at every moment, is the unequal ‘distribution of the sensible’, which means the ways in which a culture tries to enforce what is ‘appropriate’ for the citizens in each sector of society. As a kid who grew up in a working-class suburb of Australia before drifting off on the lines-of-flight offered by cinephilia and cultural writing, I am all too painfully aware of the types of learning and cultural experience that so many people deny themselves, because they have already internalised the sad conviction that it is ‘not for them’, not consistent with their ‘place’ in the world. Smash all such places, I say!

This is why I am temperamentally opposed to any tendency to keep the discussion of open access restricted to a discussion of university scholarship – or, indeed, as sometimes happens, with the effect of strengthening the ‘professional’ borders around this scholarship, and thus shutting non-university people (such as I consider myself today) out of the game. Let me give you a controversial example. I use, and encourage the use of, Academia.edu. It is the only ‘repository of scholarly knowledge’ I know of that – despite its unwise name! – anyone can easily join and enjoy (once they are informed of it, and are encouraged to do so). Now, many people complain about the capitalistic nature of this site, and everything they say in this regard may be true. But when I ask them for an alternative that is as good and as extensive in its holdings, I am directed to ‘professional’ university repositories for texts – from which I am necessarily excluded from the outset, since I do not have a university job. This is bad! And reinforces all the worst tendencies in the field.

Likewise, I bristle at the suggestion (it occasionally comes up) that an online publication such as LOLA (among many other examples) is not really ‘scholarly’. Online magazines are regularly downgraded by being described as mere ‘blogs’ (when this is not so!), with no professional standards, and so on. But my drive is, above all, a democratic one. I work mainly outside the university setting because I want access to be truly open. And I want the work to be lively and unalienated. A tall order, but we must forever strive for it! So, in a nutshell, for me the term ‘open access’ simply means ‘material freely available to all online’ – but material that is well written, well prepared, well edited and well presented.

Jeroen: Did you ever publish one of your papers (or other scholarly material) in open access? 

Adrian: Well, according to my above context of criteria, yes: a great deal, literally hundreds of essays! I believe I have covered a wide range of venues, from what I am calling Internet magazines (such as Transit and Desistfilm), through to online-only peer-reviewed publications (such as Movie, Necsus and The Cine-Files), through to the ‘paywall’ academic journals (such as Screen, Studies in Documentary Film and Continuum) which seem to exist less and less as solid, physical entities that one could actually obtain and hold a copy of (try buying one if you’re not a library), and more and more as a bunch of detached, virtual items (each article its own little island) on a digital checkout page of a wealthy publishing house’s website! This last point also applies to the chapters I have written for various academic books.

When I taught at Monash (Australia) and Goethe (Germany) universities from 2007 to 2015, I decided to ‘take a detour’ into this world of academic writing – partly because the institution demands or requires it, for the sake of judging promotions and so forth. I do not regret the type of in-depth, historical work, on a range of subjects, that this opportunity allowed me to do. But I am more than happy to be back in the less constrained, less rule-bound world of freelance writing. The university, finally, is all about a far too severe, restricted and vicious ‘distribution of the sensible’ – it tends to perpetuate itself, and close its professional ranks, rather than truly open its borders to what is beyond itself.

One of my best and happiest experiences with open access has been with the small American publisher, punctum books. I did my little book Last Day Every Day with them, and it has had three editions in three different languages there. Their care and dedication to projects is outstanding. The politics of punctum as an enterprise are incredibly noble and radical: people can opt to pay something for their books, or download them for free if they wish. Likewise, authors can take any money that comes to them, or choose to plough it back into the company (that’s what I did, and probably most of their authors do). At the same time, certain professional/academic standards are upheld: punctum has an extraordinary board, manuscripts are sent out for reporting, and so forth. They both ‘play the game’ of academic publishing as far as they have to, and also challenge the system in a remarkable way. I am proud to be involved with them.

Jeroen: You are an Australian scholar, living in Spain, traveling for lectures and conferences, and studying and writing about a global topic such as film and media studies. How does free online scholarly content affect your daily work as a scholar?

Adrian: Well, I enjoy an extraordinary amount of access to the work of other critics and scholars, especially through Academia.edu, and through postings of links by individuals on social media. At the same time, the ‘paywalls’ shut me out, because the purchase rates are too high for me as an individual, and I have no university-sanctioned reading/downloading access. As a freelance writer, I have to go where the work is, and where the money (very modest!) is. So that itinerary necessarily cuts across ‘commercial’ and ‘academic’ lines, and also involves me with many brave projects that are largely non-academic, and commercial only on an artisanal scale: literary projects such as Australia’s Cordite, for example.

Jeroen: In your first answer, you already addressed the issues of Academia.edu (and I guess you can extend this to other commercial products with similar functionalities, like ResearchGate), but you also stress the need for a good place to share papers and research output. In the sciences, preprints and postprints are an accepted and efficient part of the scholarly communication process. Even publishers allow them. A number of repositories (e.g., arXiv and SSRN) emerged in the mid-1990s, and their use increases every year. In the humanities, there is no such culture. Do you think this could change in a time when sharing initial ideas is becoming easier? Or is the writing and publishing culture in the humanities intrinsically different from that in the sciences?

Adrian: You offer a very intriguing comparative perspective here, Jeroen. I have no experience of scholarship in the sciences, so what you say is surprising (and good!) news to me. Perhaps, in the humanities, there has been, for too long a time, a certain anxious aura built up around the individual ‘ownership’ of one’s ideas – and thereby most of us have gone along with this perceived need not to share our work so readily or easily in the preprint and postprint ways that you describe. But I do think this can change, and quite radically, if humanities people are encouraged to go in this direction. One can already see the signs of it, when scholars share their drafts of papers more readily (and widely) than before. I think it would be a very productive development.

Jeroen: One of the biggest hurdles to clear in the next 5 to 10 years regarding open access in the humanities is the cost of publishing. In the sciences, the dominant business model is based on APCs (Article Processing Charges). In the humanities this model is a problem. One reason is that research budgets in the humanities and social sciences are much lower. Another is that journal prices in the sciences are much higher, which created an urgency to move to an open access environment; subscription costs for humanities journals are much lower.

The majority of open access journals in the humanities and also in media studies have another business model and are often subsidized by institutions or foundations. But subsidies are often temporary. New initiatives like Open Library of Humanities and Knowledge Unlatched come up with different financial models, all aimed at unburdening individual authors, but all of these models still need to prove themselves. Nevertheless, things are changing. How do you see a sustainable open access publishing environment for the humanities, and more specifically film and media studies? 

Adrian: Issues of funding – and money, in general – are vexing indeed. Once again, let me make clear exactly where I’m ‘coming from’. With Rouge and LOLA magazines, we have never received, or even sought, any government funding or any kind of arts-industry subsidy; we have never sought or accepted any advertising revenue; and we have never benefitted from any university grants of any kind. We run these magazines on virtually no money (beyond basic operating costs) and of course, as a result, we are unable to pay any contributor (and we are always upfront about that). This is perhaps an extreme, but not uncommon, position. It was a decision that, in each case, we took. Why? Because we didn’t want the restrictions, and obligations, that come with the ‘public purse’ – or, indeed, with almost any source of ‘filthy lucre’! In Australia, for example, to accept government funding means you will have to meet a ‘quota’ of ‘local/national content’ – and if you don’t, you won’t get that subsidy again. Senses of Cinema has struggled with that poisoned chalice. With Rouge and LOLA, on the other hand, we enjoy the ‘stateless’ potentiality of online publishing – it is ‘of the world’ and belongs to the whole world (or at least, those in it who can read English!). Sometimes we engaged (perhaps at our initiative) in ‘co-production’ ventures, some of which panned out well (such as a book that Rouge made in collaboration with the Rotterdam Film Festival on Raúl Ruiz in 2004, or the publication last year in LOLA of certain chapters from a Japanese book tribute to Shigehiko Hasumi), and others which did not. But I and my colleagues stick to this generally penniless state of idealism!

I was naively shocked when I realised that academic publishers usually fund their open access projects through payments from writers! And that – as I discovered upon asking a few friends – some universities routinely subsidise these types of publications for their scholars. As a freelancer, once more, I am shut out from this particular system. Therefore, my next ‘academic’ book (Mysteries of Cinema for Amsterdam University Press) – ironically, largely comprised of my essays from non-academic print publications! – will not be Open Access, because I cannot personally afford that, and I have no ‘channel’ of institutional funding that I can access. Once again, that’s just the name of the game. I will be very happy when that book exists, but it will purely be a physical book for purchase only!

I have, therefore, no utopian visions for how to fund open access across the humanities board. Personally, I am currently looking into Patreon as a possible way to sustain arts/criticism-related website projects. It’s a democratic model: people pay to support your ongoing work, to give you time and space to creatively do it. It’s not like Kickstarter, which is geared to a single production, such as a feature film project. Patreon has proved a godsend for artists such as musicians. We shall see if it can also work in an open access publishing context.

Jeroen: You are one of the founding fathers and practitioners of the so-called audiovisual essay, a rising digital video format in academic publishing. Instead of writing a paper in words, a compilation of images offers a new textual structure. Another digital format is the enriched publication: articles or books with data included. One of the issues, besides arranging new forms of reviewing, is copyright and reuse. The audiovisual essay format obviously benefits from images with an open license, like the Creative Commons licenses, which make it possible to reuse and remix these images. Archives are being digitized rapidly, but only a small portion is currently available in the public domain. Scholars are often not allowed to make use of film quotes or stills in their works. How do you see the near future for using digitized media files for academic purposes in relation to copyright laws?

Adrian: We are in an extraordinarily ‘grey area’ here – appropriately, I suppose, since things like LOLA are (I’m told) classified as ‘grey Open Access’! And the legal situation for audiovisual works can vary greatly from nation to nation. We are in a historical moment when a lot of experimentation is going ‘under the radar’ of legal restriction, or (in the eyes of the big corporations) is considered simply too minor to be worth taking any action against. Bear in mind that most critical/scholarly work in audiovisual essays (of the kind that I do in collaboration with my partner, Cristina Álvarez López) is not about making large sums of money; it is still a marginal, ‘labour of love’ activity, just as small, cultural magazines were in the 1980s.

Still from the audiovisual essay ‘Thinking Machine 6: Pieces of Spaces’. © Cristina Álvarez López & Adrian Martin, March 2017

This general fuzziness of the present moment is all to the good, in my opinion; we can all enjoy a certain freedom within it (with, occasionally, a ‘bite’ from above on particular questions of copyright: music use, for instance). I speak of no specific works or practitioners here, but much work in the audiovisual essay field happens both inside and outside of Creative Commons licenses. I don’t think anyone should be restricted to using just that. The front on which we all have to battle is ‘fair use’ or ‘fair dealing’ (hence the disclaimer ‘for study purposes only’ that Cristina & I place at the end of all our videos): the right to quote (and hence manipulate) audiovisual quotations for scholarly and artistic purposes, ranging all the way from lecture demonstration and re-montage analysis to parody and creative détournement/appropriation. The fully scholarly publication [in]Transition to which I and many others have contributed – no one will ever call that a blog! – takes full advantage, via its publishing ‘home base’ of USA, of everything that the fair use provisions in that country can allow. And I think you can see, if you peruse that site, how far the possibilities can go.

I very much liked the recent essay by Noah Berlatsky, “Fair Use Too Often Goes Unused” in The Chronicle of Higher Education, which argued that we – meaning not only writers and artists, but perhaps even more significantly editors and publishers – need to be questioning and pushing at the limits of the definition, practice and enforcement of fair use regulations. Too often (and I have experienced this myself) editors and publishers assume, at the outset, that a great deal is simply impossible, unthinkable: even the use of screenshots from movies! There is so much unnecessary fear and trepidation over such matters. Sure, no one wants to take a stupid risk and be sued as a result. But, to cite Berlatsky’s conclusion:

“Books and journal articles about visual culture need to be able to engage with, analyse, and share visual culture. Fair use makes that possible — but only if authors and presses are willing to assert their rights. Presses may take on a small risk in asserting fair use. But in return they give readers an invaluable opportunity to see [and I would add: hear!] what scholars are talking about.”

Jeroen: I want to thank you for this interview.


© Adrian Martin, June 2017

*During the NECS 2017 conference in Paris, the session ‘The Changing Landscape of Open Access Publications in Film and Media Studies: Distributing Research and Exchanging Data’ will be held on Saturday, July 1st. Download the final program here.

** 15 June 2018: some minor updates in layout and added a few links to mentioned projects.

The spy who pwned me

U.S. intelligence officers discuss Chinese espionage in dramatically different terms than they use in talking about the Russian interference in the U.S. presidential election of 2016. Admiral Michael Rogers, head of NSA and U.S. Cyber Command, described the Russian efforts as “a conscious effort by a nation state to attempt to achieve a specific effect” (Boccagno 2016). The former director of NSA and subsequently CIA, General Michael Hayden, argued, in contrast, that the massive Chinese breach of records at the U.S. Office of Personnel Management was “honorable espionage work” on a “legitimate intelligence target” (American Interest 2015; Gilman et al. 2017). Characterizing the Chinese infiltration as illegal hacking or warfare would challenge the legitimacy of state-sanctioned hacking for acquiring information and would upset the norms permitting every state to hack relentlessly into one another’s information systems.

The hairsplitting around state-sanctioned hacking speaks to a divide between the doctrinal understanding of intelligence professionals and the intuitions of non-professionals. Within intelligence and defense circles of the United States and its close allies, peacetime hacking into computers with the primary purpose of stealing information is understood to be radically different from using hacked computers and the information from them to cause what are banally called “effects”—from breaking hard drives or centrifuges, to contaminating the news cycles of other states, to playing havoc with electric grids. One computer or a thousand, the size of a hack doesn’t matter: scale doesn’t transform espionage into warfare. Intent is key. The Chinese effort to steal information: good old espionage, updated for the information age. The Russian manipulation of the election: information or cyber warfare.

Discussing the OPM hack, Gen. Hayden candidly acknowledged,

If I as director of CIA or NSA would have had the opportunity to grab the equivalent [employee records] in the Chinese system, I would not have thought twice… I would not have asked permission. I would have launched the Starfleet, and we would have brought those suckers home at the speed of light.[1]

Under Hayden and his successors, NSA has certainly brought suckers home from computers worldwide. Honorable computer espionage has become multilateral, mundane, and pursued at vast scale.[2]

In February 1996 John Perry Barlow declared to the “Governments of the Industrial World” that they “have no sovereignty where we gather”—in cyberspace (Barlow 1996). Whatever their naivety in retrospect, such claims in the 1990s, from right and left, from civil libertarians as well as defense hawks, justified governments taking preemptive measures to maintain their sovereignty. Warranted or not, the fear that the Internet would weaken the state fueled its dramatic, mostly secret, expansion at the beginning of the current century. By understanding the ways state-sponsored hacking exploded from the late 1990s onward, we see more clearly the contingent interplay of legal authorities and technical capacities that created the enhanced powers of the nation-state.

How did we arrive at a mutual acceptance of state-sanctioned hacking? In a legal briefing for new staff, NSA tells a straightforward story of the march of technology. The movement from telephonic and other communication to the mass “exploitation” of computers was “a natural transition of the foreign collection mission of SIGINT” (signals intelligence); as communications moved from telex to computers and switches, NSA pursued those same communications (NSA OGC n.d.). Defenders of NSA and its partner agencies regularly make similar arguments: anyone unwilling to accept the necessity of government hacking for the purposes of foreign intelligence is seen as dangerously unaware of the threats nations face today. For many in the intelligence world today, hacking into computers and network infrastructures worldwide is, quite simply, an extension of the long-standing mission of “signals intelligence”—the collection and analysis of communications by someone other than the intended recipient.

Figure 1: “Authority to Conduct CNE” (NSA Office of General Counsel, n.d.: 8).

Contrary to the seductive simplicity of the NSA slide, little was natural about the legalities around computer hacking in the 1990s. The legitimization of mass hacking into computers to collect intelligence wasn’t technologically or doctrinally pre-given, and hacking into computers didn’t—and doesn’t—easily equate to earlier forms of espionage. In the late 1990s and 2000s, information warfare capacities were being developed, and authority distributed, before military doctrine or legal analysis could solidify.[3]  Glimpsed even through the fog of classification, documents from the U.S. Department of Defense and intelligence agencies teem with discomfort, indecision, and internecine battles that testify to the uncertainty within the military and intelligence communities about the legal, ethical, and doctrinal use of these tools. More “kinetic” elements of the armed services focused on information warfare within traditional conceptions of military activity: the destruction and manipulation of the enemy command and control systems in active battle. Self-appointed modernizers demanded a far more encompassing definition that suggested the distinctiveness of information warfare and, in many cases, the radical disruption of traditional kinetic warfare.

The first known official Department of Defense definition of “Information Warfare,” promulgated in an only recently declassified 1992 document, comprised:

The competition of opposing information systems to include the exploitation, corruption, or destruction of an adversary’s information system through such means as signals intelligence and command and control countermeasures while protecting the integrity of one’s own information systems from such attacks (DODD TS 3600.1 1992:1).

Under this account, warfare included “exploitation”: the acquiring of information from an adversary’s computers, whether practiced on or by the United States (ibid.:4).[4] A slightly later figure (Figure 2) illustrates this inclusion of espionage in information warfare.

Figure 2: “Information Warfare” (Fields and McCarthy 1994: 27)

According to an internal NSA magazine, information warfare was “one of the new buzzwords in the hallways” of the Agency by 1994 (Redacted 1994:3). Over the next decade, the military services competed with NSA and among themselves over the definition and partitioning of information warfare activities. One critic of letting NSA control information warfare worried about “the Intelligence fox being put in charge of the Information Warfare henhouse” (Rothrock 1997:225).

Information warfare techniques were too valuable to be used only in kinetic war, a point Soviet strategists had long made. By the mid-1990s, the U.S. Department of Defense had embraced a broader doctrinal category, “Information Operations” (DODD S-3600.1 1996). Such operations comprised many things, including “computer network attack” (CNA) and “computer network defense” (CND), as well as older chestnuts like “psychological operations.” Central to the rationale for the renaming was that information warfare-like activities did not belong solely within the purview of military agencies, and that they did not occur only during times of formal or even informal war. One influential strategist, Dan Kuehl, explained, “associating the word ‘war’ with the gathering and dissemination of information has been a stumbling block in gaining understanding and acceptance of the concepts surrounding information warfare” (Kuehl 1997). Information warfare had to encompass collection of intelligence, deception, and propaganda, as well as more warlike activities such as deletion of data or destruction of hardware. Exploitation had to become peaceful.

Around 1996, a new doctrinal category, “Computer Network Exploitation” (CNE), emerged within the military and intelligence communities to capture the hacking of computer systems to acquire information from them.[5] The definition went beyond the acquisition of information: “computer network exploitation” covered both collection and enabling for future use. The military and intelligence communities produced a series of tortured definitions. A 2001 draft document offered two versions, one succinct,

Intelligence collection and enabling operations to gather data from target or adversary automated information systems (AIS) or networks.

and the other clearer about this “enabling”:

Intelligence collection and enabling operations to gather data from target or adversary automated information systems or networks. CNE is composed of two types of activities: (1) enabling activities designed to obtain or facilitate access to the target computer system where the purpose includes foreign intelligence collection; and, (2) collection activities designed to acquire foreign intelligence information from the target computer system (Wolfowitz 2001:1-1).

Enabling operations were carefully made distinct from affecting a system, which takes on a war-like demeanor. Information operations involved “actions taken to affect adversary information and information systems, while defending one’s own information and information systems” (CJCSI 3210.01A 1998). CNE was related to, but was not in fact, an information “operation.” A crucial 1999 document from the CIA captured the careful, nearly casuistical, excision of CNE from Information Operations: “CNE is an intelligence collection activity and while not viewed as an integral pillar of DoD IO doctrine, it is recognized as an IO-related activity that requires deconfliction with IO” (DCID 7/3 2003: 3). With this new category, “enabling” was hived off from offensive warfare, to clarify that exploiting a machine—hacking in and stealing data—was not an attack. It was espionage, whose necessity and ubiquity everyone ought simply to accept.

The new category of CNE subdued the protean activity of hacking and put it into an older legal box—that of espionage. The process of hacking into computers for the purpose of taking information and enabling future activities during peacetime was thus grounded in pre-existing legal foundations for signals intelligence. In contrast to the flurry of new legal authorities that emerged around computer network attack, computer network exploitation was largely made to rest on the hoary authorities of older forms of signals intelligence.[6]

A preliminary DoD document captured this domestication of hacking in 1999:

The treatment of espionage under international law may help us make an educated guess as to how the international community will react to information operations activities. . . . international reaction is likely to depend on the practical consequences of the activity. If lives are lost and property is destroyed as a direct consequence, the activity may very well be treated as a use of force. If the activity results only in a breach of the perceived reliability of an information system, it seems unlikely that the world community will be much exercised. In short, information operations activities are likely to be regarded much as is espionage—not a major issue unless significant practical consequences can be demonstrated (Johnson 1999:40; emphasis added).

In justifying computer espionage, military and intelligence thinkers rested on a Westphalian order of ordinary state relations with long standing norms. At the very moment that the novelty of state-sanctioned hacking for information was denied, however, a range of strategists and legal thinkers expounded how the novelty of information warfare would necessitate a radical alteration of the global order.

Beyond Westphalia

Mirroring Internet visionaries of left and right alike, military and defense wonks in the 1990s detailed how the Net would undermine national sovereignty. An article in RAND’s journal in 1995 explained,

Information war has no front line. Potential battlefields are anywhere networked systems allow access–oil and gas pipelines, for example, electric power grids, telephone switching networks. In sum, the U.S. homeland may no longer provide a sanctuary from outside attack (Rand Research Review 1995; emphasis added.)

In this line of thinking, a wide array of forms of computer intrusion became intimately linked to other forms of asymmetric dangers to the homeland, such as biological and chemical warfare.

Figure 3. Information warfare is different (Andrews 1996:3-2).

The porousness of the state in the global information age accordingly demanded an expansion—a hypertrophy—of state capacities and legal authorities at home and abroad to compensate. The worldwide network of surveillance revealed in the Snowden documents is a key product of this hypertrophy. In the U.S. intelligence community, the challenges of new technologies demanded rethinking Fourth Amendment prohibitions against unreasonable search and seizure. In a document intended to gain the support of the incoming presidential administration, NSA explained in 2000,

Make no mistake, NSA can and will perform its missions consistent with the Fourth Amendment and all applicable laws. But senior leadership must understand that today’s and tomorrow’s mission will demand a powerful, permanent presence on a global telecommunications network that will host the ‘protected’ communications of Americans as well as the targeted communications of adversaries (NSA 2000:32).

The briefing for the future president and his advisors delivered the hard truths of the new millennium. In the mid- to late 1990s, technically minded circles in the Departments of Defense and Justice, in corners of the Intelligence Community, and in various scattered think tanks around Washington and Santa Monica began sounding the call for a novel form of homeland security, where military and law enforcement, the government and private industry, and domestic and foreign surveillance would necessarily mix in ways long seen as illicit if not illegal. Constitutional interpretation, jurisdictional divisions, and the organization of bureaucracies alike would need to undergo dramatic—and painful—change. In a remarkable draft “Road Map for National Security” from 2000, a centrist bipartisan group argued, “in the new era, sharp distinctions between ‘foreign’ and ‘domestic’ no longer apply. We do not equate national security with ‘defense’” (U.S. Commission on National Security 2001).  9/11 proved the catalyst, but not the cause, of the emergence of the homeland security state of the new millennium. The George W. Bush administration drew upon this dense congeries of ideas, plans, vocabulary, constitutional reflection, and an overlapping network of intellectuals, lawyers, ex-spies, and soldiers to develop the new homeland security state. This intellectual framework justified the dramatic leap in the foreign depth and domestic breadth of the acquisition, collection, and analysis of communications of NSA and its Five Eyes partners, including computer network exploitation.

NSA Ad. New York Times Oct. 13, 1985.

The Golden Age of SIGINT

In its 2000 prospectus for the incoming presidential administration, the NSA included an innocent-sounding clause: “in close collaboration with cryptologic and Intelligence Community partners, establish tailored access to specialized communications when needed” (National Security Agency 2000: 4). Tailored access meant government hacking—CNE. In the early 1990s, NSA seemed to many a Cold War relic, inadequate to the times, despite its pioneering role in computer security and penetration testing from the late 1960s onward. By the late 2010s, NSA was at the center of the “golden age of SIGINT,” focused ever more on computers, their contents, and the digital infrastructure (NSA 2012: 2).

From the mid 1990s, NSA and its allies gained extraordinary worldwide capacities, both in the “passive” collection of communications flowing through cables or the air and the “active” collection through hacking into information systems, whether it be the  president’s network, Greek telecom networks during the Athens Olympics, or in tactical situations throughout Iraq and Afghanistan (see Redacted-Texas TAO 2010; SID Today 2004).

Prioritizing offensive hacking over defense became very easy in this context. An anonymous NSA author explained the danger in 1997:

The characteristics that make cyber-based operations so appealing to us from an offensive perspective (i.e., low cost of entry, few tangible observables, a diverse and expanding target set, increasing amounts of ‘freely available’ information to support target development, and a flexible base of deployment where being ‘in range’ with large fixed field sites isn’t important) present a particularly difficult problem for the defense… before you get too excited about this ‘target-rich environment,’ remember, General Custer was in a target-rich environment too! (Redacted 1997: 9; emphasis added).

The Air Force and NSA pioneered computer security from the late 1960s: their experts warned that the wide adoption of information technology in the United States would make it the premier target-rich environment (Hunt 2012). NSA’s capacities developed as China, Russia, and other nations dramatically expanded their own computer espionage efforts (see Figure 4 for the case of China c. 2010).

Figure 4. NSA’s list of major Chinese CNE efforts, called “BYZANTINE HADES.” (Redacted-NTOC 2010).

By 2008, and probably much earlier, the Agency and its close allies probed computers worldwide, tracked their vulnerabilities, and engineered viruses and worms both profoundly sophisticated and highly targeted.  Or as a key NSA hacking division bluntly put it: “Your data is our data, your equipment is our equipment—anytime, anyplace, by any legal means” (SID Today 2006: 2).

Figure 5. Worldwide SIGINT/Defense Cryptologic Platform, n.d., https://archive.org/details/NSA-Defense-Cryptologic-Platform.

While the internal division for hacking was named “Tailored Access Operations,” its work quickly moved beyond the highly tailored—bespoke—hacking of a small number of high-priority systems. In 2004, the Agency built new facilities to enable it to expand from “an average of 100-150 active implants to simultaneously managing thousands of implanted targets” (SID Today 2004a:2). According to Matthew Aid, NSA had built tools (and adopted easily available open source tools) for scanning billions of digital devices for vulnerabilities; hundreds of operators were covertly “tapping into thousands of foreign computer systems” worldwide (Aid 2013). By 2008, the Agency’s distributed XKeyscore database and search system offered its analysts the option to “Show me all the exploitable machines in country X,” meaning that the U.S. government systematically evaluated all the available machines in some nations for potential exploitation and catalogued their vulnerabilities (National Security Agency 2008). Cataloging at scale is matched by exploiting machines at scale. One program, Turbine, sought to “allow the current implant network to scale to large size (millions of implants)” (Gallagher and Greenwald 2014). The British, Canadian, and Australian partner intelligence agencies play central roles in this globe-spanning work.

The disanalogy with espionage

The legal status of government hacking to exfiltrate information rests on an analogy with traditional espionage. Yet the scale and techniques of state hacking strain the analogy. Two lawyers associated with U.S. Cyber Command, Col. Gary Brown and Lt. Col. Andrew Metcalf, offer two examples: “First, espionage used to be a lot more difficult. Cold Warriors did not anticipate the wholesale plunder of our industrial secrets. Second, the techniques of cyber espionage and cyber attack are often identical, and cyber espionage is usually a necessary prerequisite for cyber attack” (Brown and Metcalf 1998:117).

The colonels are right: U.S. legal work on intelligence in the digital age has tended to deny that scale is legally significant. The international effort to exempt sundry forms of metadata such as calling records from legal protection stems from the intelligence value of studying metadata at scale. The collection of the metadata of one person, on this view, is not legally different from the collection of the metadata of many people, as the U.S. Foreign Intelligence Surveillance Court has explained:

[so] long as no individual has a reasonable expectation of privacy in meta data [sic], the large number of persons whose communications will be subjected to the . . . surveillance is irrelevant to the issue of whether a Fourth Amendment search or seizure will occur.[7]

Yet metadata is desired by intelligence agencies precisely because it is revealing at scale. Since their inception, NSA and its Commonwealth analogues have focused as much on working with vast databases of “metadata” as on breaking cyphered texts. NSA’s historians celebrate a cryptological revolution afforded through “traffic analysis” (Filby 1993). From reconstructing the Soviet “order of battle” in the Cold War to seeking potential terrorists now, the U.S. Government has long recognized the transformative power of machine analysis of large volumes of metadata while simultaneously denying the legal salience of that transformative power.
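A toy sketch shows why who-contacted-whom records become revealing once aggregated. The call records below are entirely invented, and the analysis is deliberately crude; it is only meant to illustrate the kind of inference traffic analysis makes possible without reading any content.

```python
# Toy traffic-analysis sketch: hypothetical metadata only (no message content),
# illustrating why who-contacted-whom records are revealing at scale.
from collections import Counter

call_records = [                      # (caller, callee) pairs -- invented data
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("carol", "erin"), ("frank", "carol"),
]

degree = Counter()                    # how many links touch each person
for caller, callee in call_records:
    degree[caller] += 1
    degree[callee] += 1

hub, links = degree.most_common(1)[0]
print(f"most connected node: {hub} ({links} links)")   # -> carol

# Everyone who talks to the hub: the outline of a cell, committee, or command chain.
neighbourhood = {a if b == hub else b for a, b in call_records if hub in (a, b)}
print("hub's contacts:", sorted(neighbourhood))
```

Multiply six records by a few billion, add timestamps and locations, and the same arithmetic yields orders of battle and social graphs; nothing in the computation requires reading a single message.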

As in the case of metadata, U.S. legal work on hacking into computers does not consider scale as legally significant. Espionage at scale used to be tough going: the very corporeality of sifting through physical mail or garbage, setting physical wiretaps, or planting devices to capture microwave transmissions meant that such collection scaled only with great expense, difficulty, and potential for discovery (Donovan 2017). Scale provided a salutary limitation on surveillance, domestic or foreign. As with satellite spying, computer network exploitation typically lacks this corporeality, barring cases of getting access to air-gapped computers, as with the Stuxnet virus. With the relative ease of hacking, the U.S. and its allies can know the exploitable machines in a country X, whether those machines belong to generals, presidents, teachers, professors, jihadis, or eight-year-olds.

Hacking into computers unquestionably alters them, so the analogy with physical espionage is imperfect at best. A highly redacted Defense Department “Information Operations Policy Roadmap” of 2003 underscores the ambiguity of “exploitation versus attack.” The document calls for clarity about the definition of an attack, both against the U.S. (slightly redacted) and by the U.S. (almost entirely redacted): “A legal review should determine what level of data or operating system manipulation constitutes an attack” (Department of Defense 2003:52). Nearly every definition—especially every classified definition—of computer network exploitation includes “enabling” as well as exploitation of computers. The military lawyers Brown and Metcalf argue, “Cyber espionage, far from being simply the copying of information from a system, ordinarily requires some form of cyber maneuvering that makes it possible to exfiltrate information. That maneuvering, or ‘enabling’ as it is sometimes called, requires the same techniques as an operation that is intended solely to disrupt” (Brown and Metcalf 1998:117). “Enabling” is the key moment where the analogy between traditional espionage and hacking into computers breaks down. The secret definition, as of a few years ago, explains that enabling activities are “designed to obtain or facilitate access to the target computer system for possible later” computer network attack. The enabling function of an implant placed on a computer, router, or printer is the preparation of the space of future battle: it’s as if every time a spy entered a locked room to plant a bug, that bug contained a nearly unlimited capacity to materialize a bomb or other device should distant masters so desire. An implant essentially grants a third party control over a general-purpose machine: it is not limited to the exfiltration of data. Installing an implant within a computer is like installing a cloaked 3-D printer into physical space that can produce a photocopier, a weapon, and a self-destructive device at the whim of its master. One NSA document put it clearly: “Computer network attack uses similar tools and techniques as computer network exploitation. If you can exploit it, you can attack it” (SID Today 2004b).

In a leaked 2012 Presidential Policy Directive, the Obama administration clarified the lines between espionage and information warfare explicitly to allow that espionage may produce results akin to an information attack. Amid a broad array of new euphemisms, CNE was transformed into “cyber collection,” which “includes those activities essential and inherent to enabling cyber collection, such as inhibiting detection or attribution, even if they create cyber effects” (Presidential Policy Directive (PPD)-20: 2-3). The bland term ‘cyber effects’ is defined as “the manipulation, disruption, denial, degradation, or destruction of computers, information or communications systems, networks, physical or virtual infrastructure controlled by computers or information systems, or information resident thereon.” Espionage, then, often will be attack in all but name. The creation of effects akin to attack need not require the international legal considerations of war, only the far weaker legal regime around espionage. With each clarification, the gap between actual government hacking for the purpose of obtaining information and traditional espionage widens; and the utility of espionage as a category for thinking through the tough policy and legal choices around hacking diminishes.

Surveilling Irony

By the end of the first decade of the 2000s, sardonic geek humor within NSA reveled in the ironic symbols of government overreach. A classified NSA presentation trolled civil libertarians: “Who knew that in 1984” an iPhone “would be big brother” and “the Zombies would be paying customers” (Spiegel Online 2013). Apple’s famous 1984 commercial dramatized how better technology would topple the corporatized social order, presaging a million dreams of the Internet disrupting wonted order. Far from undermining the ability of traditional states to know and act, the global network has created one of the greatest intensifications of the power of sovereign states since 1648. Whether espoused by cyber-libertarians or RAND strategists, the threat from the Net enabled new authorities and undermined civil liberties. The potential weakening of the state justified its hypertrophy. The centralization of online activity into a small number of dominant platforms—Weibo, Google, Facebook—with their billions of commercial transactions, has enabled a scope of surveillance unexpected by the most optimistic intelligence mavens in the 1990s. The humor is right on.

Signals intelligence is a hard habit to break—civil libertarian presidents like Jimmy Carter and Barack Obama quickly found themselves taken with being able to peek at the intimate communications of friends and foes alike, to know their negotiating positions in advance, to be three steps ahead in the game of 14-dimensional chess. State hacking at scale seems to violate the sovereignty of states at the same time as it serves as a potent form of sovereign activity today. Neither the Chinese hacking into OPM databases nor the alleged Russian intervention in the recent US and French elections accords well with many basic intuitions about licit activities among states. If it would be naïve to imagine the evanescence of state-sanctioned hacking, it is doctrinally and legally disingenuous to treat that hacking as entirely licit based on ever less applicable analogies to older forms of espionage.

As the theorists in the U.S. military and intelligence worlds in the 1990s called for new concepts and authorities appropriate to the information age, they nevertheless tamed hacking for information by treating it as continuous with traditional espionage. The near ubiquity of state-sanctioned hacking should not sanction an ill-fitting legal and doctrinal frame that ensures its monotonic increase. Based on an analogy to spying that ignores scale, “computer network exploitation” and its successor concepts preclude the rigorous analysis necessary for the hard choices national security professionals rightly insist we must collectively make. We need a ctrl+alt+del. Let’s hope the implant isn’t persistent.

Matthew L. Jones teaches history of science and technology at Columbia. He is the author, most recently, of Reckoning with Matter: Calculating Machines, Innovation, and Thinking about Thinking from Pascal to Babbage. (Chicago, 2016).

References

Aid, Matthew M. 2013. “Inside the NSA’s Ultra-Secret China Hacking Group,” Foreign Policy. June 10. Available at: link.

American Interest. 2015. “Former CIA Head: OPM Hack was ‘Honorable Espionage Work.’” The American Interest. June 16. Available at: link.

Andrews, Duane. 1996. “Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D),” December.

Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace.” Electronic Frontier Foundation, February 8. Available at: link.

Berkowitz, Bruce D. 2003. The New Face of War: How War Will Be Fought in the 21st Century. New York: Free Press

Boccagno, Julia. 2016. “NSA Chief speaks candidly of Russia and U.S. Election.” CBS News. November 17. Available at:  link.

Brown, Gary D. and Andrew O. Metcalf. 1998. “Easier Said Than Done: Legal Reviews of Cyber Weapons,” Journal of National Security Law and Policy 7.

CJCSI 3210.01A. 1998. “Joint Information Operations Policy,” Joint Chiefs, November 6. Available at: link.

DCID 7/3. 2003. “Information Operations and Intelligence Community Related Activities.” Central Intelligence Agency, June 5.  Available at: link.

Department of Defense. 2003. “Information Operations Roadmap,” October 30. Available at: link.

DODD TS 3600.1. 1992.  “Information Warfare (U),” December 21. Available at: link.

DODD S-3600.1, 1996. “Information Operations (IO) (U),” December 9. Available at: link.

Donovan, Joan. 2017. “Refuse and Resist!” Limn 8, February. Available at: link.

Falk, Richard A. 1962. “Space Espionage and World Order: A Consideration of the Samos-Midas Program,” in Essays on Espionage and International Law. Akron: Ohio State University Press.

Fields, Craig, and James McCarthy, eds. 1994. “Report of the Defense Science Board Summer Study Task Force on Information Architecture for the Battlefield,” October. Available at: link.

Filby, Vera R. 1993. United States Cryptologic History, Sources in Cryptologic History, Volume 4, A Collection of Writings on Traffic Analysis. Fort Meade, MD: NSA Center for Cryptological History.

Gallagher, Ryan and Glenn Greenwald. 2014. “How the NSA Plans to Infect ‘Millions’ of Computers with Malware,” The Intercept. March 12. Available at:  link.

Gilman, Nils, Jesse Goldhammer, and Steven Weber. 2017. “Can You Secure an Iron Cage?” Limn 8, February. Available at: link.

Hunt, Edward. 2012. “U.S. Government Computer Penetration Programs and the Implications for Cyberwar,” IEEE Annals of the History of Computing. 34(3):4–21.

Johnson, Philip A. 1999. “An Assessment of International Legal Issues in Information Operations,” 1999, 40.  Available at: link.

Kaplan, Fred M. 2016. Dark Territory: The Secret History of Cyber War. New York: Simon & Schuster.

Kuehl, Dan. 1997. “Defining Information Power,” Strategic Forum: Institute for National Strategic Studies, National Defense University, no. 115 (June). Available at: link.

Lin, Herbert S. 2010. “Offensive Cyber Operations and the Use of Force,” Journal of National Security Law & Policy, 4.

National Security Agency/Central Security Service. 2000. “Transition 2001” December. Available at: link.

National Security Agency. 2008. “XKEYSCORE.” February 25. Available at: link.

National Security Agency. 2012. “(U) SIGINT Strategy, 2012-2016,” February 23. Available at:  link.

NSA Office of General Counsel. n.d.  “(U/FOUO) CNO Legal Authorities,” slide 8, available at: link.

Owens, William, Kenneth W. Dam, and Herbert S. Lin 2009. Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities. Washington D.C.: National Academies Press.

Presidential Policy Directive (PPD)-20: “U.S. Cyber Operations Policy,” October 16, 2012. Available at:  link.

Rand Research Review. 1995. “Information Warfare: A Two-Edged Sword.” Rand Research Review: Information Warfare and Cyberspace Security. Ed. A. Schoben. Santa Monica: Rand. Available at: link.

Rattray, Gregory J. 2001. Strategic Warfare in Cyberspace. Cambridge, Mass: MIT Press.

Redacted. 1994. “Information Warfare: A New Business Line for NSA,” Cryptolog. July.

Redacted. 1997. “IO, IO, It’s Off to Work We Go . . . (U),” Cryptolog. Spring.

Redacted-NTOC, V225. 2010, “BYZANTINE HADES: An Evolution of Collection,” June. Slides available at: link.

Redacted-Texas TAO/FTS327. 2010. “Computer-Network Exploitation Successes South of the Border,” November 15. Available from link.

Rid, Thomas. 2016. Rise of the Machines: A Cybernetic History. New York: W. W. Norton & Company.

Rothrock, John. 1997. “Information Warfare: Time for Some Constructive Criticism,” in Athena’s Camp: Preparing for Conflict in the Information Age, ed. John Arquilla and David Ronfeldt.  Santa Monica: Rand.

Schmitt, Michael N., ed. 2017. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations: Prepared by the International Groups of Experts at the Invitation of the NATO Cooperative Cyber Defence Centre of Excellence, 2nd ed. Cambridge: Cambridge University Press. DOI:10.1017/9781316822524.

SID Today. 2004. “Another Successful Olympics Story,” October 6. Available at: link.

SID Today. 2004a. “Expanding Endpoint Operations.” September 17. Available at: link.

SID Today. 2004b. “New Staff Supports Network Attack.” October 21. Available at: link

SID Today. 2006. “The ROC: NSA’s Epicenter for Computer Network Operations,” September 6. Available at: link.

Spiegel Online. 2013. “Spying on Smartphones,” SPIEGEL ONLINE, September 9. Available at: link.

United States Commission on National Security/21st Century. 2001. Road Map for National Security: Imperative for Change. January 31. Final Draft. Available at: link

Wolfowitz, Paul. 2001. “Department of Defense Directive 3600.1 Draft,” October.


[1] In conversation with Gerard Baker, June 15, 2015.  Available at link.

[2] For the current state of international consensus on cyber espionage among international lawyers, see Schmitt 2017, rule 32.

[3] See Berkowitz 2003:59-65; Rattray 2001; Rid 2016:294-339; and Kaplan 2016.

[4] Drawn from the signals intelligence idiolect, “exploitation” means, roughly, making some qualities of a communication available for acquisition. With computers, this typically means discovering bugs in systems, or using pilfered credentials, and then building robust ways to gain control of the system or at least to exfiltrate information from it.

[5] Computer Network Exploitation (CNE) was developed alongside two new doctrinal categories emerging in 1996: the more aggressive “Computer Network Attack” (CNA), which uses that access to destroy information or systems, and “Computer Network Defense” (CND). For exploitation versus attack, see (Owens et al. 2009; Lin 2010:63).

[6] Especially NSCID-6 and Executive Order 12,333. The development of satellite reconnaissance had earlier challenged mid twentieth century conceptions of espionage. For a vivid sense of the difficulty of resolving these challenges, see (Falk 1962: 45-82).

[7] Quotation from secret decision with redacted name and date, p. 63, quoted in Amended Memorandum Opinion, No. BR 13-109 (Foreign Intelligence Surveillance Court August 29, 2013).

Who’s hacking whom?

Who is hacking whom? The case of Brian Farrell (a.k.a. “Doctor Clu”) raises a host of interesting questions about the nature of hacking, vulnerability disclosure, the law, and the status of security research. Doctor Clu was brought to trial by FBI agents who identified him by his Internet Protocol (IP) address. But Clu was using Tor (The Onion Router) to hide his identity, so the FBI had to find a way to “hack” the system to reveal his identity. They didn’t do this directly, though. Allegedly, they subpoenaed some information security researchers at Carnegie Mellon University’s Software Engineering Institute (SEI) for a list of IP addresses.  Why did SEI have the IP addresses? Ironically, these Department of Defense-funded researchers had bragged about a presentation they would give at the Black Hat security conference on de-anonymising Tor users “on a budget.” For whatever reason, they had Clu’s IP address as a result of their work, and the FBI managed to get it from them. Clu’s defense team tried to find out how exactly it was obtained and argued that this was a violation of the 4th amendment, but the judge refused: IP addresses are public, he said; even on Tor, where users have no ‘expectation of privacy.’

In this case, security researchers ‘hacked’ Tor in a technical sense; but the FBI also hacked the researchers in a legal sense – by subpoenaing the exploit and its results in order to bring Clu to trial. As in the recent WannaCry ransomware attack, or the Apple iPhone vs. FBI San Bernardino terrorism investigation of early 2016, this case reveals the entanglement of security research, the hoarding of exploits and vulnerabilities, the use of those tools by law enforcement and spy agencies, and ultimately citizens’ right to privacy online. The rest of this piece explores this entanglement, and asks: what are the politics of disclosing vulnerabilities? What new risks and changed expectations exist in a world where it is not clear who is hacking whom? What responsibilities do researchers have to protect their subjects, and what expectations do Tor users have to be protected from such research?

“Tor’s motivation for three hops is Anonymity”[1]

“Tor is a low-latency anonymity-preserving network that enables its users to protect their privacy online” and enables “anonymous communication” (AlSabah et al., 2012: 73). The Tor network is a mesh of volunteer-run proxy servers through which data is bounced via relays, or nodes. As of this writing, more than 7,000 relays enable the transfer of data, applying “onion routing” as a tactic for anonymity (Spitters et al., 2014).[2] Onion routing was first developed and designed by the US Naval Research Laboratory in order to secure online intelligence activities. Data sent through Tor travels over a circuit of three relays (entry, middle, exit): the client wraps the data in one layer of encryption per relay, and each relay peels off (“decrypts”) its own layer before forwarding the data to the next onion router. In this way, no single relay sees both the content of the traffic and its origin, which masks the IP address and identity of the user and provides anonymity. At the end of a browsing session the user history is deleted along with the HTTP cookies. Moreover, the greater the number of people using Tor, the higher the anonymity level for users who are connected to the network; volunteers around the world provide servers and enable the Tor traffic to flow.
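As a purely conceptual illustration of this layered encryption (and not Tor’s actual protocol, which uses its own circuit handshakes, fixed-size cells, and TLS links), the following minimal Python sketch wraps a message in one layer per relay and lets each relay peel off exactly one layer. The relay names and keys are invented, and the third-party cryptography package stands in for Tor’s real ciphers.

```python
# Conceptual onion-layering sketch; assumes `pip install cryptography`.
# This is NOT how Tor builds circuits; it only illustrates the idea that
# each relay can remove one layer and so never sees the whole picture.
from cryptography.fernet import Fernet

relays = ["entry", "middle", "exit"]                     # hypothetical circuit
keys = {name: Fernet.generate_key() for name in relays}  # one key per relay

def build_onion(payload: bytes) -> bytes:
    """Client side: wrap the payload once per relay, innermost layer for the exit."""
    onion = payload
    for name in reversed(relays):        # exit layer first, entry layer last
        onion = Fernet(keys[name]).encrypt(onion)
    return onion

def route(onion: bytes) -> bytes:
    """Relay side: each hop peels a single layer; only the exit sees the clear text."""
    for name in relays:
        onion = Fernet(keys[name]).decrypt(onion)
    return onion

packet = build_onion(b"GET / HTTP/1.1")
assert route(packet) == b"GET / HTTP/1.1"
```

In this sketch the entry relay sees where the packet came from but only ciphertext, while the exit sees the clear text but not its origin; that separation is exactly what the attacks discussed below try to defeat.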

There is also controversy surrounding the Tor network, connecting it to the so-called “Dark Net” and its “hidden services” that range from the selling of illegal drugs, weapons, and child pornography to sites of anarchism, hacktivism, and politics (Spitters et al., 2014: 1). All of this has increased the risks involved in using Tor. As shown in numerous studies (AlSabah et al., 2012; Spitters et al., 2014; Çalışkan et al., 2015; Winter et al., 2014; Biryukov et al., 2013), different actors have compromised the Tor network, cracking its anonymity. These actors potentially include the NSA, authoritarian governments worldwide, and multinational corporations: all organisations that would like to discover the identity of users and their personal information (see, for example, the case of Hacking Team).[3] In particular, it should not be discounted that Tor exit node operators, whoever they are, have access to the traffic passing through their exit nodes (Çalışkan et al., 2015: 29). Besides governmental actors, the security industry, and the activists, dissidents and whistle-blowers who use Tor, there are also academics who carry out research attempting to “hack” Tor.

The Researchers’ Ethical Dilemma

In January 2015, Brian Farrell, aka “Doctor Clu,” was arrested and charged with one count of conspiracy to distribute illegal “hard” drugs such as cocaine, methamphetamine and heroin at a “hidden service” marketplace (Silk Road 2.0) on the so-called “Dark Net” (Geuss 2015).[4] His IP address (along with those of other users) was purportedly captured in early 2014 by two researchers, Alexander Volynkin and Michael McCord, while they were carrying out an empirical study at SEI, a non-profit organisation at Carnegie Mellon University (CMU) in Pittsburgh, U.S.A. The SEI researchers were supposedly able to bypass Tor’s protections and, with their hack, obtain around 1,000 users’ IP addresses.

Since the beginning of 2014, an unnamed source had been giving authorities the IP addresses of those who accessed this specific part of the site (Vinton 2015).

The researchers from SEI at CMU were invited to present their methods and findings on how to “de-anonymize hundreds of thousands of Tor clients and thousands of hidden services” at the Black Hat security conference in July 2014, but they never showed up, and the reason for their cancellation is still posted on the website (Figure 1).

Figure 1: Black Hat 2014 website Schedule Update (link).

As a screenshot from the Internet Archive’s Wayback Machine shows (Figure 2), the researchers’ abstract captured their braggadocio about a low-budget exploit of Tor costing around $3,000, as well as a call-out to others to try it:

Looking for the IP address of a Tor user? Not a problem. Trying to uncover the location of a Hidden Service? Done. We know because we tested it, in the wild…. (Volynkin 2014).

Figure 2: Black Hat 2014 Briefings (link).

With regard to ethical research considerations, the researchers’ “anonymous subjects” did not realize or know they were participating in a study-cum-hack. Many in the security research community regard this as an infringement of ethical standards included in the IEEE Code of Ethics, which prohibits “injuring others, their property, reputation, or employment by false or malicious action” (IEEE n.d.: section 2.4.2). Even when following such an officially recognized IEEE ethical code, “failure, discovery, and unintended or collateral consequences of success” (Greenwald et al. 2008:78) could potentially harm the “objects of study” – in this case the visitors to Silk Road 2.0. The Dark Net is perhaps trickier than other fields, but there are also academics carrying out research there, contacting users, building their trust and protecting their sources.[5] Supposedly SEI began hosting some of Tor’s relays, but intentionally operated them as “malicious actors” so that the researchers could carry out their study. According to one anonymous source reported at Motherboard, SEI

had the ability to deanonymize a new Tor hidden service in less than two weeks. Existing hidden services required upwards of a month, maybe even two months. The trick is that you have to get your attacking Tor nodes into a privileged position in the Tor network, and this is easier for new hidden services than for existing hidden services (Cox 2015).

It is considered crucial that the Tor Project always be informed of an exploit before it is made public, so that potential flaws enabling deanonymization can be fixed. Over the past several years, researchers have continuously shared their data with the Tor Project and reported their findings, such as malicious attacks, or what is called “sniffing” – when information passing through an exit relay is compromised. Once a study is published, patches are developed and Tor improves as these breaches of security are uncovered. Unlike other empirical studies, the SEI researchers did not inform the Tor Project of their exploits. Instead Tor discovered the exploits and contacted the researchers, who declined to give details. Only after the abstract for Black Hat was published online (late June 2014) did the researchers “give the Tor Project a few hints about the attack but did not reveal details” (Felten 2014). The Tor Project ejected the attacking relays and worked on a fix throughout July 2014, releasing a software update at the end of the month, along with an explanation of the attack (Dingledine 2014). As this case shows, not only “malicious actors” but also certain researchers can collect data on Tor users. According to Tor Project director Roger Dingledine, the SEI researchers acted inappropriately:

Such action is a violation of our trust and basic guidelines for ethical research. We strongly support independent research on our software and network, but this attack crosses the crucial line between research and endangering innocent users (Dingledine 2014).

A Subpoena for Research

Richard Nixon’s 1973 Grand Jury subpoena.

In November 2015, the integrity of these two SEI researchers was again brought into question when the rumour circulated that they had been subpoenaed by the FBI to hand over the IP addresses they had collated. According to Nicolas Christin, a researcher at CMU, SEI is a non-profit and not an academic institution; its researchers are therefore not academics but are instead “focusing specifically on software-related security and engineering issues.” In 2015, SEI renewed a five-year government contract worth $1.73 billion (Lynch 2015). In an official media statement, CMU’s SEI responded by explaining that its mission encompassed searching for and identifying “vulnerabilities in software and computing networks so that they may be corrected” (CMU 2015). It is important to note that the US government (specifically the Departments of Defense and of Homeland Security) funds many of these research centers, such as CERT (Computer Emergency Response Team), a division of SEI which has existed ever since the Morris Worm first created a need for such an entity (Kelty 2011). To be precise, it is one of the Federally Funded Research and Development Centers (FFRDC), which are

unique non-profit entities sponsored and funded by the U.S. government that address long-term problems of considerable complexity, analyze technical questions with a high degree of objectivity, and provide creative and cost-effective solutions to government problems (Lynch 2015).

Legally, in the U.S., the FBI, the SEC and the DEA can all subpoena researchers to share their research. However, the obtained information was not for public consumption, but for an agency within the U.S. Department of Justice: the FBI. Matt Blaze, a computer scientist at the University of Pennsylvania, made the following statement about conducting research:

When you do experiments on a live network and keep the data, that data is a record that can be subpoenaed. As academics, we’re not used to thinking about that. But it can happen, and it did happen (Vitáris 2016).

Besides the ethical questions raised by researchers handing over their findings to the governments that have supported them (ostensibly with taxpayer money), the politics of security research and vulnerability disclosure continues to be heatedly debated within academia and among the general public. It seems that subpoenas might provide law enforcement with a means to gather data on citizens and to obtain knowledge from academic research – which then remains hidden from the public. Computer security defense lawyer Tor Ekeland gave this comment:

It seems like they’re trying to subpoena surveillance techniques. They’re trying to acquire intel[ligence] gathering methods under the pretext of an individual criminal investigation (Vitáris 2016).

It is not clear whether the FBI was using a subpoena to acquire exploits, or whether the CMU (SEI) researchers were originally hired by the FBI and only later disclosed what had happened, stating that they had been subpoenaed.[6] Either way, it raises the issue of whether the FBI required a search warrant in order to obtain the evidence – the IP addresses.

Internet Search and Seizure

In January 2016, Farrell’s defense filed a motion to compel discovery, in an attempt to understand exactly how the IP address was obtained, as well as the two-year history of working contracts between the FBI and SEI. In February 2016, the Farrell case came to court in Seattle, where it was finally confirmed to the public that the “university-based research institute” was SEI at CMU, subpoenaed by the FBI (Farivar 2016). The court denied the defense’s motion to compel discovery. This statement from the order (Section II, Analysis), written by US District Judge Richard A. Jones, answered the question of whether a search warrant was needed to obtain IP addresses:

SEI’s identification of the defendant’s IP address because of his use of the Tor network did not constitute a search subject to Fourth Amendment scrutiny (Cox 2016).[7]

In order to claim protection under the Fourth Amendment, there needs to be a demonstration of an “expectation of privacy” that is not merely subjective but is recognized as reasonable by other members of society. Furthermore, Judge Jones claimed that IP addresses, “even those of Tor users, are public, and that Tor users lack a reasonable expectation of privacy” (Cox 2016).

Again, according to the party’s submissions, such a submission is made despite the understanding communicated by the Tor Project that the Tor network has vulnerabilities and that users might not remain anonymous. Under these circumstances Tor users clearly lack a reasonable expectation of privacy in their IP addresses while using the Tor network. In other words, they take a significant gamble on any real expectation of privacy under these circumstances (Jones 2016:3).

Judge Jones reasoned that Farrell did not have a reasonable expectation of privacy because he used Tor; but he also stated that IP addresses are public because Farrell had willingly given his IP address to an Internet Service Provider (ISP) in order to have internet access. Moreover, the precedent that Judge Jones drew upon to uphold his order, United States v. Forrester, ruled that individuals have no reasonable “expectation of privacy” in IP addresses and email addresses:

The Court reaches this conclusion primarily upon reliance on United States v. Forrester, 512 F.2d 500 (9th Cir. 2007). In Forrester, the court clearly enunciated that: Internet users have no expectation of privacy in …the IP address of the websites they visit because they should know that this information is provided to and used by Internet service providers for the specific purpose of directing the routing of information (Jones 2016:2-3).

Trust

In March 2016, Farrell eventually pleaded guilty to one count of conspiracy regarding the distribution of heroin, cocaine and amphetamines in connection with the hidden marketplace Silk Road 2.0 and received an eight-year prison sentence. In this case, the protection of an anonymous IP address from governmental intrusion was thwarted in various ways (a hack, a subpoena, a ruling). Privacy technologists, such as Christopher Soghoian, have provided testimony in similar cases, explaining that the government claims that obtaining IP addresses “isn’t such a big deal,” yet cannot seem to explain how it could actually obtain them (Kopstein 2016).

“Campfire” XKCD 742

Whoever wanted to know the IP address would have to control many nodes in the Tor network around the world, intercept this traffic, and then correlate what is observed at the entry and exit nodes. Such correlation techniques are not only difficult; they also cost time and money. Yet these exploits, including the one developed by the SEI researchers, were possible in 2014. Even if IP addresses are considered public when using Tor, they are anonymous unless they are correlated with a specific individual’s device.[8] To correlate Farrell’s IP address, the FBI had to obtain the list of IP addresses from Farrell’s ISP, Comcast.
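The SEI exploit itself was never published, so the following is only a toy sketch (Python 3.10+ for statistics.correlation) of the general idea behind end-to-end correlation: an observer who sees traffic at both an entry and an exit position ranks candidate flows by how well their timing and volume patterns match. All flow names and byte counts are invented, and real confirmation attacks are statistically far more careful.

```python
# Toy end-to-end traffic-correlation sketch; illustrative only.
from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical per-second byte counts observed for one entry-side connection.
entry_flow = [120, 0, 430, 15, 0, 980, 20, 0, 310, 640]

# Hypothetical per-second byte counts for several exit-side flows.
exit_flows = {
    "exit-flow-A": [5, 200, 10, 75, 300, 0, 410, 90, 12, 33],
    "exit-flow-B": [118, 2, 425, 17, 1, 975, 22, 3, 305, 642],  # shaped like entry_flow
    "exit-flow-C": [60, 55, 70, 58, 65, 61, 72, 50, 59, 68],
}

# Rank exit-side flows by how strongly they correlate with the entry-side pattern;
# the top match is the observer's guess that both belong to the same circuit.
ranked = sorted(
    ((correlation(entry_flow, flow), name) for name, flow in exit_flows.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{name}: r = {score:+.3f}")
```

Even a sketch this crude shows why position matters more than computing power: the costly part is being able to observe enough of both ends of the network at once.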

The judge’s cited reason for denying the motion to compel discovery was that IP addresses are in and of themselves not private, as people willingly provide them to third parties. Nowadays people increasingly use the internet (and write emails) instead of the telephone, and in order to do so they must divulge their IP address to an ISP. When users are outside of the Tor anonymity network, their IP is exposed to an ISP. However, when inside the “closed field” of Tor, is there no expectation of privacy, along with the security of the content? And by extension, is there not an expectation of anonymity, along with the security of users’ identities?

Judge Jones also argued that Farrell didn’t have an expectation of privacy because he handed over his IP address to strangers running the Tor network.

[I]t is the Court’s understanding that in order for a prospective user to use the Tor network they must disclose information, including their IP addresses, to unknown individuals running Tor nodes, so that their communications can be directed towards their destinations. Under such a system, an individual would necessarily be disclosing his identifying information to complete strangers (Jones 2016:3).

Here the notion of trust surfaces and plays a salient role. When people share information with ethnographers, anthropologists, activists or journalists, it can take months, sometimes years, to gain their trust; and the anonymity of the source often needs to be maintained. These days, when people choose to use the Tor network, they trust a community that can see the IP address at certain points, and they trust that Tor exit node operators neither divulge the IP addresses they see nor make correlations. In an era of so-called Big Data, as more user data is collated (by companies, governments and researchers), correlation becomes easier and deanonymization occurs more frequently. In the Farrell case, researchers’ ethical dilemmas, the politics of vulnerability disclosure and law enforcement’s “hacking” of Tor all played a role in obtaining his IP address. Despite judicial rulings to the contrary, it can be argued that Tor users do have an expectation of privacy, even as the capture of the IP addresses of users seeking anonymity online has been expedited.

Renée Ridgway is presently a PhD candidate at Copenhagen Business School (MPP) and a research affiliate with the Digital Cultures Research Lab (DCRL), Leuphana University, Lüneburg. Her research investigates the conceptual as well as technological implications of using search, ranging from the personalisation of Google to anonymous browsing using Tor. Recent contributions to publications include Ephemera, SAGE Encyclopaedia of the Internet, Hacking Habitat, Money Labs (INC), OPEN!, APRJA and Disrupting Business.

References

AlSabah, Mashael; Bauer, Kevin and Goldberg, Ian. 2012. “Enhancing Tor’s Performance using Real-time Traffic Classification.” presented at CCS’12, Raleigh, North Carolina, USA. October 16–18.

Bartlett, Jamie. 2014. The Dark Net: Inside the Digital Underworld. Portsmouth: Heinemann.

Biryukov, A., Pustogarov, I. and Weinmann, R.P. 2013. “Trawling for tor hidden services: Detection, measurement, deanonymization,” in Security and Privacy (SP). 2013 IEEE Symposium on. IEEE, pp. 80–94.

Çalışkan, Emin; Minárik, Tomáš; and Osula, Anna-Maria. 2015. Technical and Legal Overview of the Tor Anonymity Network. Tallinn: CCDCOE, NATO Cooperative Cyber Defence Centre of Excellence.

Carnegie Mellon University (CMU). 2015. “Media Statement.” November 18th. Available at: link.

Cox, Joseph. 2015. “Tor Attack Could Unmask New Hidden Sites in Under Two Weeks.” November 13th. Available at: link.

—. 2016 “Confirmed: Carnegie Mellon University Attacked Tor, Was Subpoenaed By Feds.” February 24th. Available at: link

Dingledine, Roger  a.k.a. arma. 2014. “Tor security advisory: “relay early” traffic confirmation attack,” Tor Project Blog. July 30th.  Available at: link.

Dittrich et al. 2009. Towards Community Standards for Ethical Behavior in Computer Security Research. Stevens CS Technical Report 20091, April 20th. Available at: link.

Farivar, Cyrus. 2016. “Top Silk Road 2.0 admin “DoctorClu” pleads guilty, could face 8 years in prison.” Ars Technica, April 4th. Available at: link.

Felten, Ed. 2014. “Why were CERT researchers attacking Tor?” Freedom to Tinker Blog. July 31. Available at: link.

Fox-Brewster, Thomas. 2015. “$30,000 to $1 Million — Breaking Tor Can Bring In The Big Bucks.” Forbes Magazine. November 12th. Available at: link.

Geuss, Megan. 2015. “Alleged “right hand man” to Silk Road 2.0 leader arrested in Seattle.” Ars Technica. January 21st. Available at: link.

Greenwald, Stephen J., et al. 2008. “Towards an Ethical Code for Information Security?” NSPW’08, September 22–25. Available at: link.

IEEE. N.d. IEEE Code of Ethics. Available at: link.

Jones, Richard A. 2016. Order on Defendant’s Motion to Compel, United States v. Farrell, CR15-029RAJ. U.S. District Court, Western District of Washington, Filed 02/23/16. Available at: link.

Kelty, Christopher M. 2011. “The Morris Worm.” Limn. Issue Number One: Systemic Risk. Available at: link.

Kopstein, Joshua. 2016. “Confused Judge Says You Have No Expectation of Privacy When Using Tor.” Motherboard. Available at: link.

Lynch, Richard. 2015. “CMU’s Software Engineering Institute Contract Renewed by Department of Defense for $1.73 Billion.” Press Release, Carnegie Mellon University. July 28th. Available at: link.

Spitters, Martijn, Verbruggen, Stefan and van Staalduinen, Mark. 2014.  “Towards a Comprehensive Insight into the Thematic Organization of the Tor Hidden Services,” presented at 2014 IEEE Joint Intelligence and Security Informatics Conference, Los Angeles, CA, USA; 15 -17 Dec 2014

Vinton, Kate. 2015. “Alleged Silk Road 2.0 Operator’s Right-Hand Man Arrested On Drug Charges.” Forbes Magazine. January 21. Available at: link.

Vitáris, Benjamin. 2016. “FBI’s Attack On Tor Shows The Threat Of Subpoenas To Security Researchers.” Deep Dot Web Blog. March 8. Available at: link.

Volynkin, Alexander and McCord, Michael. 2014. “Deanonymizing users on a budget.” Black Hat 2014 Briefings. Available at: link.

Winter, Philipp; Köwer, Richard; et al. 2014. “Spoiled Onions: Exposing Malicious Tor Exit Relays.” In: Privacy Enhancing Technologies Symposium. Springer.


[1] (Winter et al., 2014: 6).

[2] https://torstatus.blutmagie.de/

[3] “The Italian organisation, which even its CEO called a “notorious” provider of government spyware, was looking to expand its line of products, Rabe said. That included targeting the anonymizing Tor network, where civil rights activists, researchers, pedophiles and drug dealers alike try to hide from the global surveillance complex” (Fox-Brewster 2015).

[4] (U.S. v. Farrell, U.S. District Court, W.D. Wash., No. 15-mj-00016) Complaint for Violation. Available at: link.

[5] I refer here specifically to Jamie Bartlett’s ‘The Dark Net’ research.

[6] February 24, 2016: “When asked how the FBI knew that a Department of Defence research project on Tor was underway, so that the agency could then subpoena for information, Jillian Stickels, a spokesperson for the FBI, told Motherboard in a phone call that ‘For that specific question, I would ask them [Carnegie Mellon University]. If that information will be released at all, it will probably be released from them.’” Available at: link

[7] The Fourth Amendment’s original text of 1789, later ratified as part of the Bill of Rights (the first ten amendments to the US Constitution), reads: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized. Available at: link.

[8] http://whatismyipaddress.com

Hacker Madness

Hackers induce hysteria. They are the unknown, the terrifying, the enigma. The enigma that can breach and leak the deepest secrets you’ve carelessly accreted over the years in varied fits of passion, desperation, boredom, horniness, obsession, and jubilation on your computers, phones and the internet. Maybe you’re the government, maybe you’re just some innocent schmuck—maybe you’re both. Maybe you don’t deserve to be exposed, maybe you do. The common fear is that you will never know who exposed you. Is it a he, a she, or an it? The FBI? The NSA?  You feel vulnerable and it feels as though what happened is black magic because you understand nothing about how it was done. Terrifying, fascinating, excruciating black magic, practiced by an enigma.

Or maybe you do know how the enigma did it, and you feel stupid:  because the enigma exposed your lazy information security—maybe because your password was just “1234”, or your birthday, or maybe you logged into a public Wi-Fi network without VPN, and maybe, just maybe, you used the same password for all your accounts.  You’re a moron for doing that, and you know it; but it never occurred to you that anyone would bother to hack you at Starbucks. You’re hysterical over an enigma that could be anywhere in the world; or perhaps your roommate, child, or lover in your own home.

I regularly observe this hysteria. I’m a defense lawyer who represents hackers in federal courts across the United States. I’m writing this in an airport in Kentucky after the sentencing of a client. He and his colleague hacked a cheap high school football fan website to protest the rape of a minor in Steubenville, Ohio by members of the high school football team. They posted a video of my client in a Guy Fawkes mask decrying the rape. They helped organize protests over the rape in the town. It attracted national media attention. It led to the federal government indicting my client for felony computer crime. The federal government never prosecuted anyone involved in the rape.

My client was part of a movement protesting what they viewed as the small town’s attempted cover-up of the extent of the rape. Much ire was directed at the local county prosecutor (not to be confused with the federal prosecutors in Kentucky who indicted my client) who initially handled the case. The perception was that she was intentionally limiting the scope of the prosecution because she was closely connected to the football team through her son. Social media postings of football team members seemed to implicate more than the two football players she initially went after. Eventually, she recused herself from the case. After this, the town’s school superintendent, the high school principal, the high school wrestling coach, and the high school football coach were indicted on various felony and misdemeanor charges including obstruction of justice and evidence tampering. It’s unlikely any of this would have happened without the attention my client, along with many others, helped bring to the case.

The local prosecutor wasn’t even the one who got hacked. That person, perhaps out of fear, stayed out of it. Yet this prosecutor, in a letter submitted to the court at my client’s sentencing, breathlessly condemned my client as a terrorist—yes, a terrorist—for bringing attention to the sordid details of the attempted cover-up of the extent of a 16-year-old girl’s rape. A rape that involved the girl incapacitated by alcohol being publicly and repeatedly penetrated and urinated on by members of the football team, their jocular enthusiasm captured in the photos they posted on social media. No one died, no one except the rape victim was physically hurt, yet my client was called a terrorist and thrown in jail because a $15 website with an easily guessed password got hacked.  All of this, because of the embarrassment, the shame, and the vulnerability—not that of the rape victim, but of a town whose dark secrets had been breached and leaked.

My client got two years – the two rapists got one and two years respectively. My client didn’t physically or financially harm anyone.  At best the damage was reputational, but that was self-inflicted by people in the town. My client didn’t rape a minor. Metaphorically, the town did, and in reality, members of its high school football team did. Nonetheless, in that case and most I deal with, the federal criminal “justice” system hysterically treats hackers on par with rapists and other violent felons.

Including the Steubenville rape case, I’ve now had two clients called “terrorists” in open court. In the second case, the former boss of a client of mine, in a moment that almost made me laugh out loud in court, called him a terrorist at his sentencing. I suspect the boss was a bit jealous of my client’s journalistic talent and was ruefully avenging his own feelings of inadequacy and loss of control. This particular client had quit his job in a pique after justifiably accusing his boss at the local TV station of engaging in crappy journalistic practices. After departing his job, he helped hack (allegedly) the LA Times website, owned by the same parent company and sharing the same content management system; a few words were changed in a story about tax cuts.

The edits (the government liked to refer to them as the “defacement”) were removed and the article restored to its original state within forty minutes. For this, the sentencing recommendation from pre-trial services was 7 ½ years, the government asked for 5, and the judge gave him 2. Again, no one was physically hurt, the financial loss claims were dubious, and the harm was reputational, at best. But my client was sentenced more harshly than if he’d violently, physically assaulted someone. In fact, he’d probably have faced less sentencing exposure if he’d beaten his boss with a baseball bat.

Unsurprisingly, his actions were portrayed as a threat to the freedom of the press. There was some pious testimony from an LA Times editor about the threat to a so-called great paper’s integrity. But when the cries of terrorism are stripped away, a more mundane explanation for all the sanctimony emerges: the “victim’s” information security sucked. They routinely failed to deactivate passwords and system access for ex-employees.  After the hack, they discovered scores of still active user accounts for ex-employees that took them months to sort through and clean up. They stuck my terrorist client with the bill for fixing their bad infosec, of course. All of this, because of the embarrassment, the shame, and the vulnerability–not of an employee, but that of a powerful organization.

Another one of my clients who lived in a corrupt Texas border town was targeted by a federal prosecutor. The talented young man had committed the egregious sin of running a routine port scan on the local county government’s website using standard commercially available software. Don’t know what a port scan is? Don’t worry, all you need to know is that it’s black magic. This client had also gotten into a tiff with a Facebook admin, exchanged some testy emails with the admin, but walked away from it while the admin continued to send him emails. A routine internet cat-fight of little import that wouldn’t raise eyebrows with anyone mildly experienced with the internet’s trash talking and petty squabbles.

But this client, like most of my clients, was purportedly affiliated with Anonymous. This led to an interesting state of affairs that demonstrates both the fear and the contempt the government has for enigmatic hackers. In essence, the FBI detained my client and threatened him with a felony hacking prosecution unless he agreed to hack the ruthlessly violent Mexican Zeta Cartel.  Fearing for his loved ones and himself, my client sensibly declined this death wish. But the FBI persisted. The FBI specifically wanted a document that purportedly listed all the U.S. government officials on the take from the Zetas. No one even knew if this document existed, but the FBI didn’t care much about that fact. After my client declined, he was charged with 26 felony counts of hacking and 18 felony counts of cyberstalking based on his interaction with the Facebook admin.

Naturally, this case was brought to my attention. After I examined the Indictment and engaged in a few interesting discussions with the federal prosecutor, my client pleaded guilty to a single misdemeanor count of hacking related to his port scanning of the local government website. Better to take a misdemeanor than run the risk of a federal criminal trial, where the conviction rate is north of 90%. But the fact that this hysterical prosecution was brought in the first place reflects poorly on the Department of Justice’s exercise of prosecutorial discretion about hacking. Again, no one was hurt, no one lost money, but my client was facing a maximum of 440 years in jail under the original Indictment.

My hands-down favorite example of hacker-induced hysteria was directed at me and my co-counsel in open court. I couldn’t hack my way out of a paper bag, but prosecutors love to tar me by association with my clients. In this instance, on the eve of trial, on a Friday in open federal court, the prosecutor, along with the FBI agent on the case, accused my co-counsel and me of hacking the FBI, downloading a top-secret document, removing the top-secret markings on it, and then producing it as evidence we wanted to use at trial. Co-counsel and I were completely baffled, exchanged glances, and then told the court we would give it an answer on Monday as to the document’s origins, and to this criminal, law-license-jeopardizing accusation.

It turns out we’d downloaded the document in question from the FBI’s public website. The FBI had posted the document because it was responsive to a Freedom of Information Act request. The FBI had removed the top-secret markings in so doing. Needless to say, we corrected the record on Monday.  Pro-tip for rookie litigators: If your adversary produces a document you have a serious question about, it’s best to confer with your adversary off the record about it before you cast accusations in open court that implicate them in felony hacking and Espionage Act violations. But, such is the hysteria that hacking induces that it spills over to the lawyers that defend them.  How many lawyers who defend murderers are accused of murder?

The feelings of vulnerability, fear of the unknown, and embarrassment that feed the hysterical reaction to hackers also lead to the fetishizing of hackers in popular culture. T.V. shows like Mr. Robot, House of Cards, and movies like Live Free or Die Hard, where the hackers are both villains and heroes, all exacerbate this fetish. And this makes life harder for me and my clients because we have to combat these stereotypes pre-trial, at trial, and during their incarceration should that come to be. Pre-trial, my clients are subjected to irrational, restrictive terms of release that rest on the assumption that mere use of a computer will lead to something nefarious. During trial, we have to combat the jury’s preconceptions of hackers. And if and when they’re put in jail, convicted hackers are often treated on par with the worst, most violent felons. Almost all of my incarcerated clients were thrown in solitary for irrational, hacker-induced hysteria reasons. But those are stories for another day.

The hysteria hackers induce is real, and it is dangerous. It leads to poorly conceived and drafted draconian laws like America’s Computer Fraud and Abuse Act. It distorts our criminal justice system by causing prosecutors and courts to punish mundane computer information security acts on par with rape and murder. Often, I receive phone calls from information security researchers, with fear in their voice, worried that some routine, normally accepted part of their profession is exposing them to felony liability. Usually I have to tell them that it probably is.

And the hysteria destroys the lives of our best computer talents, who should be cultivated and not thrown in jail for mundane activities or harmless pranks. All good computer minds I’ve met do both. Thus, not only is hacker-induced hysteria detrimental to our criminal justice system in that it distorts traditional notions of fairness, justice, and punishment based on irrational fears. It is fundamentally harmful to our national economy. And that should give even the most ardent defenders of the capitalistic order at the Department of Justice and the FBI pause, if not stop them dead in their tracks, before pursuing hysterical hacking prosecutions.

The best proof that this hysteria is unwarranted and unnecessary most of the time is the fate of persecuted hackers and hacktivists themselves. Most of those arrested for pranks, explorations, and even risky, hard-core acts of hacktivism aren’t a detriment to society; they’re beneficial to our society and economy. After their youthful learning romps, they’ve matured their technical skills—unlearnable in any other fashion—into laudable projects. Robert Morris was the author of the Morris Worm. He’s responsible for one of the earliest CFAA cases because his invention got out of his control and basically slowed down the internet, such as it was, in 1988. Now he’s a successful Silicon Valley entrepreneur and tenured professor at MIT who has made significant contributions to computer science. Kevin Poulsen is an acclaimed journalist; Mark Abene and Kevin Mitnick are successful security researchers. And those are just the old-school examples from the ancient—in computer time—1990s.

Younger hackers are doing the same. From the highly entertaining hacker collective LulzSec, Mustafa Al-Bassam is now completing a PhD in cryptography at University College London; Jake Davis is translating hacker lore, culture, and ethics for the public at large; Donncha O’Cearbhaill is employed at a human rights technology firm and is a contributor to the open source project Tor (no relation); Ryan Ackroyd and Darren Martyn are also successful security researchers. Sabu, the most famous member of LulzSec, of course, has enjoyed a successful career as a snitch, hacking foreign government websites on behalf of the FBI and generally basking in the fame and lack of prison time his sell-out engendered. And I’m not going to talk about the young, entertaining hackers who haven’t been caught yet. But the ones I care about, the ones I think are important, aren’t interested in making money off your bad infosec. They’re just obsessed by how the system works, and a big part of that is taking the system apart. Perhaps I share that with them as a federal criminal defense lawyer.

All these hackers exemplify the harms that hysteria can have: misdirecting the energy of exactly the people who can help test, secure and transform the world we occupy in the name of public values that we share: values our own government should be defending, instead of destroying.

“The Troll on Karl Johan Street” by Theodor Kittelsen, 1892.

Tor’s parents are from Norway, hence his name. Yes, it’s real. The only reason you think it should have an “H” in it is because you’ve watched that movie. Tor is way sexier than Chris Hemsworth. His name also precedes the invention of The Onion Router and his becoming a computer lawyer. Don’t know what The Onion Router is? That’s ok, just know it’s black magic. Tor didn’t know what it was until everyone started asking if Tor was his real name when he repped weev, one of the most famous internet trolls in the English language. They still talk, despite the fact that weev is basically a neo-Nazi and the Gestapo tortured Tor’s dad for four days and then threw him in a concentration camp. His dad taught him resistance techniques and the value of a sense of humor in the face of the moral smugness of the state. Since weev, Tor has also represented a bunch of hackers in federal courts across the United States, and is going to take the non-public part of that and his other off-the-record representations to his grave. At which point—the point of his death—perhaps there will be an information dump, just for the Lulz. Or his name isn’t Tor Ekeland.


Interview: Mustafa Al-Bassam

Gabriella Coleman: Based on what you’ve seen and reported, do you think we (not just lay people, but experts on the subject) are thinking clearly about vulnerability? Is our focus in the right place (e.g., threat awareness, technical fixes, bug bounties, vulnerability disclosure), or do you think people are missing something, or misinterpreting the problem?

Mustafa Al-Bassam: Based on the kind of vulnerabilities that we [LulzSec] were exploiting at Fortune 500 companies, I don’t think that there is a lack of technology or knowledge in place to stop vulnerabilities from being introduced, but the problem is that there is a lack of motivation to deploy such knowledge. We exploited extremely basic vulnerabilities such as SQL injection, in companies like Sony, that are quite easy to prevent.

I believe the key problem is that most companies (especially those that are not technology companies – like Sony) don’t have much of an incentive to invest money in making sure their systems are vulnerability-free, because security isn’t a key value proposition in their business model; it’s merely an overhead cost to be minimized. Sony fired their entire security team shortly before they got hacked over 30 times in 2011. For such companies, security only becomes a concern when it becomes a PR disaster. So that’s what LulzSec did: make security a PR disaster.
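For readers unfamiliar with the class of flaw being described, here is a minimal, hypothetical sketch, using Python’s built-in sqlite3 module, of an SQL injection and the standard parameterized-query fix. The table, data and payload are invented for illustration and are not drawn from any of the incidents discussed.

```python
# Minimal SQL injection demo against an in-memory database; illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input is pasted straight into the SQL string, so the
# injected OR clause makes the WHERE condition true for every row.
unsafe_query = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())       # dumps every row

# Fixed: a parameterized query treats the input strictly as data, never as SQL.
safe_query = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())  # returns nothing
```

The fix is a one-line change, which is the point being made here: the knowledge to prevent this class of bug has existed for years; what has often been missing is the incentive to apply it.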

We’ve seen this before: when Yahoo! was breached in 2014, the CEO made the decision not to inform customers of the breach, because it would have been a PR disaster that might have seen them lose customers to their competitors, costing them money.

That raises the question: how can we expect companies to do the right thing and inform customers of breaches, if doing the right thing will cause them to lose money? And so, why should companies bother to invest in keeping their systems free of vulnerabilities, if they can simply brush compromises under the carpet? After all, it is the customer who loses from having their information compromised, rather than the company, as long as the customer keeps paying.

So I think if we can incentivize companies to be more transparent about their security and breaches, customers can make better-informed decisions about which products and services to use, making it more likely for companies to invest in their security. One way this might happen in the future is through the rise of cybersecurity insurance; more and more companies are signing up to cybersecurity insurance. A standard cybersecurity insurance claim policy should require the company to disclose to its customers when a breach occurs. That way, it makes more economic sense for a company to disclose breaches and also invest in security to get lower insurance premiums or avoid PR disasters.

GC: I wanted to ask about the rise of cybersecurity insurance: whether major firms have all already purchased policies, what the policies currently look like, and whether they actually prevent good security, since the companies rely on insurance to recoup their losses.

Christopher Kelty: Yes, I don’t actually understand what cybersecurity insurance insures against. Does it insure brand equity? Does it insure against government fines? Lawsuits against a corporation for breach of duty? All of these things? Just curious.

GC: Exactly, I don’t think many of us have a sense of what this insurance looks like and if you can give us a picture, even a limited picture of what you know and how the insurance works, that would be a great addition to our issue.

MAB: The current cybersecurity insurance market premium is $2.5 billion, but it is still early days because insurance companies have very little data on breaches with which to calculate what premiums should be (Joint Risk Management Section 2017: 9). As a result, premiums are quite high and too expensive for small and medium-sized businesses, and this will continue to be the case until cybersecurity insurance companies get more data about breaches to properly calculate the risks.

Cybersecurity insurance has been used in several high-profile breaches, most notably at Sony Pictures, which received a $151 million insurance payout for its large internal network breach, alleged to have been carried out by North Korea (Joint Risk Management Section 2017: 4).

These policies cover a wide range of losses including costs for ransomware payments, forensic investigations, lost income, civil penalties, lost digital assets, reputational damage, theft of money and customer notification.

I think in the long-term it’s unlikely that companies will adopt a stance where they stop investing in security and just rely on the insurance to recoup losses, because insurance companies will have a concrete economic interest to make sure that payouts happen as rarely as possible, and that means raising the premiums of companies that constantly get breached until they can’t ignore their security problems. Historically, this economic interest is shifted to the customer because it’s usually the customer that loses when their data gets breached and the company doesn’t report it.

If anything, I believe that cybersecurity insurance will make companies more likely to do the right thing when they are breached and inform customers, because the costs of customer notification and reputational damage would be covered by the insurance. At the moment, if a company does the right thing and informs its customers of a breach, the company suffers reputational damage, so there is little incentive to do the right thing. This would help prevent incidents such as Yahoo!’s failure to disclose a data breach affecting 500 million customers for over two years (Williams-Alvarez 2017).

CK:  I wonder if there is more of a spectrum here— from bug bounties to vulnerabilities equities processes (VEP) to cybersecurity insurance— all of them being a way to formalize the knowledge of when and where vulnerabilities exist, or when they are exploited.   What are the pros and cons of these different approaches (I can imagine that a VEP is really overly bureaucratic and unenforceable, whereas insurance might produce its own incentives to exploit or over/under-report for financial gain).  Any thoughts on this?

MAB: Bug bounties and cybersecurity insurance policies are controlled purely by the market and are an objective way to measure the economic value or impact of vulnerabilities, whereas VEP is a more subjective process that is subject to political objectives.

In theory, VEP should be a safeguard to be used in situations where it is in the public interest to disclose vulnerabilities that might otherwise be more profitable to exploit, but this is not the case in practice. Take the recent WannaCry ransomware attack, for example, which used an exploit developed by the National Security Agency and affected hundreds of companies around the world as well as the UK’s National Health Service (NHS). You have to ask whether the economic and social impact of that exploit falling into the wrong hands was really worth all the intelligence activities that the NSA used it for. How many people died because the NHS couldn’t treat patients while their systems were offline?

GC: Do you have a sense of what the US government (and others around the world) are doing to attract top hacker talent, for good and bad reasons? Should governments be doing more? Should it be an issue that we (in the public) know more about?

MAB: In the UK, the intelligence services, like the Government Communications Headquarters (GCHQ), run aggressive recruitment campaigns to recruit technologists, even going so far as to graffiti ‘hipster’ adverts on the streets of a techy part of London (BBC NewsBeat 2015). They have to do this because they know that their pay is very low compared to London tech companies. In fact, Privacy International – a charity which fights GCHQ – will pay you more to campaign against GCHQ than GCHQ will pay you to work for them as a technologist.

So in order to recruit top tech talent, they have to try to lure people in with the promise that the work will be interesting and “patriotic”, rather than well paid. That is obviously becoming a harder sell, though, because the intelligence agencies are less popular with technologists in the UK than ever, given the government’s campaign against encryption. Their talent pool is extremely limited.

What I would actually like to see however, is key decision makers in government becoming more tech savvy themselves. Technology and politics are so intertwined these days that I think it’s reasonable that at least a few Members of Parliament should have coding skills. Perhaps someone should run a coding workshop or class for interested Members of Parliament?

CK: I have trouble understanding how improved technical knowledge of MPs would lead to better political decisions if (given your answer to the first question) all the incentives are messed up.  This is a very old problem of engineers vs. managers in any organization.   The engineers can see all the problems and want to fix them; the managers think the problems are different or unimportant.  Just to play devil’s advocate, is it possible that hackers, engineers, or infosec researchers also need a better understanding of how firms and governments work?  Is there a two-way street here?

MAB: I mean this in a more general sense: politicians make poor political decisions when they deal with technical information security problems they don’t understand, as with the recent encryption debate. In the UK, the Investigatory Powers Bill was recently passed, which allows the government to force communications platforms based in the UK to backdoor their products if they use end-to-end encryption. Luckily most of these platforms aren’t based in the UK, so it will have little impact. But it has a harmful effect on the UK technology sector, as no UK technology company can now guarantee that their customers’ communications are fully secure, which makes UK tech firms less competitive.

A classic example of poor political decision-making in dealing with such problems is the EU cookie law, which requires all websites to ask users before they place cookies on their computers (The Register 2017). In theory it sounds great, but in practice most users simply agree and click yes because the request dialogs are disruptive to their user experience. In any case, a saner way to implement such a policy would be to require the few mainstream browsers to set website cookies only after user approval, rather than asking millions of websites to change their code.

There are already plenty of hackers and engineers who are involved in politics, but there are very few politicians who are involved in technology. Even when engineers consult with the government on policies, their advice is often ignored, as we have seen with the Investigatory Powers Bill.

Mustafa Al-Bassam (“tflow”) is a doctoral researcher at the Department of Computer Science at University College London. He was also one of the six core members of the hacking collective LulzSec.

References

BBC NewsBeat. (2015). “Spy agency GCHQ facing fines for ‘hipster’ job adverts on London streets.” November 27th. Available at: link.

Joint Risk Management Section of the Society of Actuaries. (2017). “Cybersecurity: Impact on Insurance Business and Operations.” Report by Canadian Institute of Actuaries, Casualty Actuary Society, the Society of Actuaries. Available at: link.

The Register. (2017). “Planned ‘cookie law’ update will exacerbate problems of old law – expert.” March 1st. Available at: link.

Williams-Alvarez, Jennifer. (2017). “Yahoo general counsel resigns amid data breach controversy.” Legal Week, March 2nd. Available at: link.

What Is To Be Hacked?

At the beginning of 2017 information security researcher, Amnesty International technologist, and hacker Claudio (“nex”) Guarnieri launched “Security without Borders,” an organization devoted to helping civil society deal with technical details of information security: surveillance, malware, phishing attacks, etc. Journalists, activists, nongovernmental organizations (NGOs), and others are all at risk from the same security flaws and inadequacies that large corporations and states are, but few can afford to secure their systems without help. Here Guarnieri explains how we got to this stage and what we should be doing about it.

***

Computer systems were destined for a global cultural and economic revolution that the hacker community long anticipated. We saw the potential; we saw it coming. And while we enjoyed a brief period of reckless banditry, playing cowboys of the early interconnected age, we also soon realized that information technology would change everything, and that information security would be critical. The traditionally subversive and anti-authoritarian moral principles of hacker subculture increasingly have been diluted by vested interests. The traditional distrust of the state is only meaningfully visible in some corners of our community. For the most part—at least its most visible part—members of the security community/industry are enjoying six-figure salaries, luxurious suites in Las Vegas, business class traveling, and media attention.

The internet has morphed with us: once an unexplored space we wandered in solitude, it has become a marketplace for goods, the primary vehicle for communication, and the place to share cat pictures, memes, porn, music, and news as well as an unprecedented platform for intellectual liberation, organization, and mobilization. Pretty great, right? However, to quote Kevin Kelly:

There is no powerfully constructive technology that is not also powerfully destructive in another direction, just as there is no great idea that cannot be greatly perverted for great harm…. Indeed, an invention or idea is not really tremendous unless it can be tremendously abused. This should be the first law of technological expectation: the greater the promise of a new technology, the greater is the potential for harm as well (Kelly 2010:246).

Sure enough, we soon observed the same technology of liberation become a tool for repression. It was inevitable, really.

Now, however, there is an ever more significant technological imbalance between states and their citizens. As billions of dollars are poured into systems of passive and active surveillance—mind you, not just by the United States, but by every country wealthy enough to do so—credible defenses either lag, or remain inaccessible, generally only available to corporations with deep enough pockets. The few ambitious free software projects attempting to change things are faced with rather unsustainable funding models, which rarely last long enough to grow the projects to maturity.

Nation states are well aware of this imbalance and use it to their own advantage. We have learned through the years that technology is regularly used to curb dissent, censor information, and identify and monitor people, especially those engaged in political struggles. We have seen relentless attacks against journalists and activists in Ethiopia, the crushing of protest movements in Bahrain, the hounding of dissidents in Iran, and the tragedy that has befallen Syria, all complemented with electronic surveillance and censorship. It is no longer hyperbole to say that people are sometimes imprisoned for a tweet.

As a result, security can no longer be a privilege, or a commodity in the hands of those few who can afford it. Those who face imprisonment and violence in the pursuit of justice and democracy cannot succeed if they do not communicate securely, or if they cannot remain safe online. Security must become a fundamental right to be exercised and protected. It is the precondition for privacy, and a key enabler for any fundamental freedom of expression. While the security industry is becoming increasingly dependent—both financially and politically—on the national security and defense sector, there is a renewed need for a structured social and political engagement from the hacker community.

Some quarters of the hacker community have long been willing to channel their skills toward political causes, but the security community lags behind. Eventually some of us become mature enough to recognize the implications and social responsibilities we have as technologists. Some of us get there sooner, some later; some never will. Having a social consciousness can even be a source of ridicule among techies. You can experience exclusion when you become outspoken on matters that the larger security and hacking communities deem foreign to their competences. Don’t let that intimidate you.

As educated professionals and technicians, we need to recognize our privilege, such as our deep understanding of the many facets of technology, and realize that we cannot abdicate the responsibility of upholding human rights in a connected society while continuing to act as its gatekeepers. Whether creating or contributing to free software, helping someone in need, or pushing internet corporations to be more respectful of users’ privacy, dedicating your time and abilities to the benefit of society is a concrete political choice, and you should embrace it consciously and with pride.

***

Today we face unprecedented challenges, and so we need to rethink strategies and re-evaluate tactics.

In traditional activism, the concept of “bearing witness” is central. It is the practice of observing and documenting a wrongdoing without interfering, on the assumption that exposing it to the world and causing public outcry might be enough to prevent it in the future. It is a powerful tactic and, at times, the only available and meaningful one. This wasn’t always the case: in activist movements, such shifts of tactics generally come in reaction to the growth, legitimization, and structuring of the movements themselves as they conform to the norms of society and of acceptable behavior.

Similarly, as we conform too, we also “bear witness.” We observe, document, and report on the abuses of technology, which is a powerful play in the economic tension between offense and defense. Whether it is the interception of a journalist’s electronic communications, the compromise of a computer, the censorship of websites, or the blocking of messaging systems, exposing the technology that enables such repression increases the cost of its development and adoption. By bearing witness, we make it possible for such technologies to be defeated or circumvented, forcing them to be re-engineered. Exposure can effectively curb their indiscriminate adoption and, in effect, become an act of oversight. Sometimes we can enforce in practice what the law cannot in words.

The case of Hacking Team is a perfect example. The operations of a company that produced and sold spyware to governments around the world were more effectively scrutinized and understood as a result of the work of a handful of geeks tracking and repeatedly exposing to public view the abuses perpetrated through the use of that same spyware. Unfortunately, regulations and controls never achieved quite as much. At a key moment, an anonymous and politicized hacker best known by the moniker “Phineas Fisher” (Franceschi-Bicchierai 2016) arrived, hacked the company, leaked all its emails and documents onto the internet, and quite frankly outplayed us all. Phineas, whose identity remains unknown almost two years later, had previously also hacked Gamma Group, a British-German competitor of Hacking Team, and became a sort of mischievous hero in hacktivist circles for his or her brutal hacks and the total exposure of these companies’ deepest secrets. In a way, one could argue that Phineas attracted more public attention, and achieved better results, than anyone had previously, myself included. Sometimes an individual, using direct action techniques, can do more than a law, a company, or an organization can.

However, there is one fundamental flaw in the practice of bearing witness. It is a strategy that requires accountability to be effective; it requires naming and shaming. And in the digital world, where the villain is often not an identifiable company or individual, neither is available to us. The internet gives attackers plausible deniability and an escape from accountability, making it close to impossible to identify them, let alone name and shame them. And in a society bombarded with information and increasingly reminded by the media of the risks and breaches that happen almost daily, the few stories we do tell are becoming repetitive and boring. After all, next to the “majesty” of the Mirai DDoS attacks (Fox-Brewster 2016), the hundreds of millions of online accounts compromised every other week, or the massive spying infrastructure of the Five Eyes (Wikipedia 2017c), who in the public would care about an activist from the Middle East, unknown to most, compromised by a crappy trojan (Wikipedia 2017d) bought from some dodgy website for 25 bucks?

We need to stop, take a deep breath, and look at the world around us. Are we missing the big picture? First, hackers and the media alike need to stop thinking that the most interesting or flamboyant research is the most important. When the human rights abuses of Hacking Team or FinFisher are exposed, it makes for a hell of a media story. At times, some of the research I have coauthored has landed on the front pages of major newspapers. However, those cases are exceptions, and not particularly representative of how states actually use technology as a tool of repression. For every dissident targeted by sophisticated commercial spyware made by a European company, there are hundreds more infected with free-to-download or poorly written trojans that would make any security researcher yawn. Fighting the illegitimate hacking of journalists and dissidents is a never-ending cat-and-mouse game, and a rather technically boring one. However, once you get past the boredom of yet another DarkComet (Wikipedia 2017b) or Blackshades (Wikipedia 2017a) remote administration tool (RAT), or a four-year-old Microsoft Office exploit, you start to recognize the true value of this work: it is less technical and more human.

I have spent the last few years offering my expertise to the human rights community. And while it is deeply gratifying, it is also a mastodontic struggle. Securing global civil society is a road filled with obstacles and complications. And while it can provide unprecedented challenges to the problem-solving minds of hackers, it also comes with the toll of knowing that lives are at stake, not just some intellectual property, or some profits, or a couple of blinking boxes on a shelf.

How do you secure a distributed, dissimilar, and diverse network of people who face different risks and different adversaries, and who operate in different places with different technologies and different services? It’s a topological nightmare. We—the security community—secure corporations and organizations through appropriate threat modeling, by standardizing and hardening the technology they use, and by watching closely for anomalies in that model. But what we—the handful of technologists working in the human rights field—often do is merely “recommend” one stock piece of software or another and hope it is not going to fail the person we are “helping.”

For example, I recently traveled to a West African country to meet some local journalists and activists. From the perennial checklist of technological solutionism that I preach everywhere I go, I suggested to one of these activists that he encrypt his phone. Later that night, as we met for dinner, he waved his phone at me upon coming in. The display showed that his Android software had failed the encryption process and corrupted the data on his phone, despite his having followed all the appropriate steps. He looked at me and said: “I’m never going to encrypt anything ever again.” Sometimes the technology we advocate is inadequate. Sometimes it is inaccessible, or just too expensive. Sometimes it simply fails.

However, tools aside, civil society suffers from a fundamental lack of awareness and understanding of the threats it faces. The missing expertise, and the financial inability to access the technological solutions and services available to the corporate world, certainly aren’t making things any easier. We need to approach this problem differently and recognize that civil society isn’t going to secure itself.

To help, hackers and security professionals first need to become an integral part of the social struggles and movements that are very much needed in this world right now. Find a cause and help others: a local environmental organization campaigning against fracking, a citizen journalist group exposing corruption, or a global human rights organization fighting injustice. Security-minded hackers could make a significant impact there, first as conscious human beings and only second as techies, especially where our expertise is so sorely lacking.

And second, we need to band together. Security Without Borders is one effort to create a platform where like-minded people can aggregate. While it might yet fail in practice, it has so far succeeded in demonstrating that there are many hackers who do care. Whatever the model turns out to be, I firmly believe that through coordinated efforts of solidarity and volunteering we can make the changes in society that are very much needed, not for fame and fortune this time, but for that “greater good” that we all, deep down, aspire to.

Claudio Guarnieri, aka Nex, is a security researcher and human rights activist. He is a technologist at Amnesty International, a researcher with the Citizen Lab, and the co-founder of Security Without Borders.

References

Fox-Brewster, Thomas. 2016. “How Hacked Cameras are Helping Launch the Biggest Attacks the Internet Has Ever Seen.” Forbes, September 25. Available at link.

Franceschi-Bicchierai, Lorenzo. 2016. “Hacker ‘Phineas Fisher’ Speaks on Camera for the First Time—Through a Puppet.” Motherboard, July 20. Available at link.

Kelly, Kevin. 2010. What Technology Wants. New York: Viking Press.

Wikipedia. 2017a. “Blackshades.” Wikipedia, last updated March 23. Available at link.

———. 2017b. “DarkComet.” Wikipedia, last updated May 14. Available at link.

———. 2017c. “Five Eyes.” Wikipedia, last updated April 19. Available at link.

———. 2017d. “Trojan horse.” Wikipedia, last updated May 12. Available at link.

Image Credits: “The Good Samaritan, after Delacroix [After WannaCry]” by Vincent Van Gogh. 1890.