Things are Happening in the Humanities. But You Need to be Patient

A few weeks ago, Peter Suber, one of the leading figures of the open access movement, published a blog post on the website of The American Philosophical Association entitled ‘Why Open Access is Moving so Slow in the Humanities’. In it, he sums up nine reasons why this is the case; I will mention a few of them below:

‘Journal subscriptions are much higher in Science, Technology and Medicine (STM) than in the Humanities & Social Sciences (HSS). In the humanities, relatively affordable journal prices defuse the urgency of reducing prices or turning to open access as part of the solution.’

‘Much more STM research is funded than humanities research, so there is more money available for paying any open access charges.’

‘STM faculty typically need to publish journal articles to earn tenure, while humanities faculty need to publish books. But the logic of open access applies better to articles, which authors give away, than to books, which have the potential to earn royalties.’

The sad thing is that this post is a slightly revised version of the original from 2004: today we are still dealing with almost the same issues as 13 years ago. One of Suber’s conclusions is that “Open access isn’t undesirable or unattainable in the humanities. But it is less urgent and harder to subsidize than in the sciences.”[1]

I fully agree with this conclusion. But have we achieved nothing for the humanities, then? On the contrary: a lot has happened in the last 5 to 10 years to help the humanities make the transition to open access. But we are not there yet.

Open Access Journals

Globally, several humanities journals have flipped from toll access (TA) to open access, and several new (niche) open access journals have seen the light in the last couple of years. Currently, 9,426 open access journals are indexed by the DOAJ, a substantial share of which are in the humanities. We must not forget, however, that the majority of those journals don’t charge a dime to publish research in open access.[2] In many cases, and this is typical for the humanities, foundations, institutions, and societies pay for publishing the research.

The financial model for open access in the humanities is not an easy road. In my previous life as a publisher in the humanities, I developed a few gold open access journals, all financed with money from institutions or research grants. However, subsidizing a journal with contributions from different institutions is a fragile model. Some of the journals had the ambition to move towards an APC model; none have done so thus far.

The new kid on the block, and a very successful one, is the Open Library of Humanities, run by Martin Eve and Caroline Edwards. They proposed and implemented a library-funded model: with enough supporting libraries, they are able to publish humanities research without APCs. The main goal is to spare authors all kinds of financial hassle.

Institutional publishing

Another trend is the renewed rise of institutional (library) open access publishing. Some examples are Stockholm University Press, UCL Press and Meson Press. They distinguish themselves from the traditional university press in that they publish research exclusively in open access.

Online research tools

Other interesting developments are the experiments with redefining online publishing. I think it’s safe to say that many of these experiments happen in the field of media studies. Collaborative research, writing and publication platforms like MediaCommons and the recently launched Manifold are very exciting initiatives. They all experiment with new digital formats, writing and publishing tools, and data publications.

Open Access Books

Open access for the academic book has been on the agenda since 2008/2009, with the development of, amongst others, the OAPEN platform. And with indexes like the Directory of Open Access Books (DOAB), established in 2011, open access books have become visible and findable. Two weeks ago, a new milestone was reached, with 8,000+ open access books from 213 publishers indexed by DOAB.

However, open access for books is still underrated. There is a lack of aligned policies. Moreover, the lack of funding options makes it very difficult for (smaller) humanities publishers to come up with a sustainable model for open access books. The focus of open access funding still lies on article publishing in journals and the financial models that come along with it.

For this website, I keep track of funders (research councils and universities) that actively support open access book publishing in media studies. I have done this since 2015, but up to now the funding options can be counted on four hands at most. Even in the field of open access books, though, things are happening, with projects like Knowledge Unlatched. This project looks at funding coming directly from university libraries, supporting the ‘platform’ or book package rather than the individual publication.

So, the important question now is: what types of sustainable business models are appropriate for open access publishing in the humanities?

I think one important thing to keep in mind is that continually comparing STM with HSS will not get us very far. Another problem is that (open access) funding policies are still very much focused on the local or national level, or simply look only at APCs/BPCs. We need to work on better international alignment of open access policies (per discipline) with the different stakeholders (funders, libraries, publishers).

The Dutch Approach: Open Science

In February of this year, the National Plan Open Science[3] was launched in the Netherlands. Looking towards 2020, this roadmap concentrates on three key areas:

  1. Promoting open access to scientific publications (open access).
  2. Promoting optimal use and reuse of research data.
  3. Adapting evaluation and reward systems to bring them in line with the objectives of open science (reward systems).

One of the requirements is that by 2020 all researchers working at a Dutch research university must publish their work (journals and books!) in open access. This includes the HSS as well. To accomplish this, the plan aims to align all Dutch stakeholders behind these requirements.

During the launch, all the important academic stakeholders (research funders and associations) in the Netherlands explicitly committed themselves to this task. In Finland, similar things are happening.[4] And in other countries, discussions about open access and open science requirements and indicators have started as well. It is of great importance to connect these initiatives as much as possible.

Preprints… “what”?

One other thing that Suber mentions in his post, and that I’d like to bring into this discussion, is preprints. In the humanities, depositing preprints or postprints is not as common as it is in the sciences, for seemingly obvious reasons: fear of losing arguments and research outcomes, being scooped, and so on. But are all these reasons still valid?

As an academic community, it is important to share your research in order to improve science. Judging by the popularity (also among humanities scholars) of commercial social sharing platforms like Academia.edu and ResearchGate, the HSS is apparently in need of platforms that can quickly disseminate research. Note that I deliberately call them social sharing platforms, because that is what they are.

It is important to make clear to academics what the implications are of using platforms like Academia.edu and ResearchGate. Both are commercial enterprises, interested in gathering as much (personal) data as possible. The infrastructure serves a need, but it comes at a cost. We need to think of sustainable alternatives.

Preprint servers per discipline. Image credit: Bosman, J. & Kramer, B.

Back to the preprint discussion. In the humanities (and thus for media studies), it is unusual to share research before it is published in a journal or book. But if everyone is so eager to share their publications at different stages of their research, why is it still not common practice to share the work on a preprint server, comparable to arXiv or SSRN (before it became Elsevier property), or newer servers like LawArXiv, SocArXiv, PsyArXiv, etc.?

Will it ever become common practice in the humanities to share research at an earlier stage? Maybe this practice could help move the humanities along a bit quicker?

Who knows.

Notes

[1] https://blog.apaonline.org/2017/06/08/open-access-in-the-humanities-part-2/

[2] https://scholarlykitchen.sspnet.org/2015/08/26/do-most-oa-journals-not-charge-an-apc-sort-of-it-depends/

[3] https://www.openscience.nl/en

[4] http://openscience.fi/publisher_costs

Header image credit: Slughorn’s hourglass in Harry Potter and the Half-Blood Prince. © Warner Brothers

Netherlands Organisation for Scientific Research (NWO) Terminates Open Access Incentive Fund in January 2018 – Some Considerations

On Monday, June 26, the Netherlands Organisation for Scientific Research (NWO) announced that it will terminate the Incentive Fund Open Access on January 1, 2018.[1] NWO started this Incentive Fund in 2010 to finance open access publications and activities that highlight open access at scientific conferences.

The fund has been useful for advancing open access since it became available in 2010. The decision, however, comes soon after the launch of the National Plan Open Science (NPOS)[2], signed by NWO in early 2017. In this plan, institutions explicitly commit themselves to work on a healthy open access climate and to achieve 100% open access for researchers affiliated with Dutch research universities. It is obvious that this fund alone was never going to be the solution; still, terminating it is a remarkable step, especially now. There is still a lot to do.

The choice is unfortunate, all the more because NWO has been one of the first national research councils in Europe with an active open access policy and, moreover, a well-funded programme from which APCs (and BPCs) could be paid, provided that the research would be available immediately upon publication (the Gold route). At the national level, NWO and the Austrian Science Fund (FWF) were the first funding bodies to mandate open access for books and to allocate money for BPCs. This policy is therefore quite unique, and only in the last three years or so has anything similar been developed elsewhere.

The Incentive Fund was founded with the aim of stimulating Gold open access. NWO hoped that the fund would serve as a model for universities to take over, with individual institutions bearing the cost of open access from their own budgets. This has hardly come to fruition. Only the University of Amsterdam, Utrecht University, Delft University of Technology, and Wageningen University & Research have had such funds. At this very moment, only Utrecht still runs an open access fund.

It is absolutely fair to ask why NWO should keep on spending money if universities find this step so difficult to take. But now the boy scout has decided to throw in the towel. Understandable, but disappointing. There are enough pros (and yes, cons as well) to consider.

In this piece, I would like to offer some considerations on why it would not (or would) be wise to terminate this fund, taking the arguments that NWO puts forward[3]:

“NWO believes that the academic world is now sufficiently aware of open access publishing and its importance.”

I doubt this very much. The debate on open access has so far been conducted predominantly by policy makers, libraries, and publishers. Researchers often submit their articles to the established and renowned, usually high-impact, journals. This (imposed) culture does not necessarily lead to more articles in open access journals. And yes, there are many researchers who are aware of the benefits of open access and publish their work in open access, but is that awareness ‘sufficient’? The ‘academic world’ is in any case an international one.

“Currently there are many more opportunities for authors to make their publications available via open access channels without having to pay for publication costs. In part, this has been achieved through open access agreements between Dutch universities and publishers. In addition, there are a growing number of open access journals and platforms that do not charge publication costs.”

True, enormous steps have been taken over the past 20 years. Lots of journals have made the transition to open access. There are (commercial and non-profit) platforms for articles, preprints, postprints – you name it. But are these all free of charge?

NWO points to the current OA Big Deals in the Netherlands. However, these deals are mainly focused on hybrid journals. All Gold open access journals from, for example, Springer or Wiley fall outside the deals; for these journals, an APC is still required. At present, only the deal with Cambridge University Press includes 20 Gold open access journals.[4]

In addition, the OA Big Deals cover only a part of all Dutch open access publications in journals. As an academic community, we are currently trying to get more insight into this.

Not to mention the diversity of the deals. With Elsevier, it is possible to publish in 276 journals for ‘free’; all the others (1,800+) still have to be paid for. Is it not therefore nonsense to think that there are enough channels for researchers to publish their research in open access? I want to stress that I am not saying the APC model is the holy grail. Far from it. But it is the reality researchers are faced with.

“Finally, there is the green route, which authors can use to deposit their articles in a (university) repository at no cost.”

Yes, correct. And every university has had a repository for more than 10 years, with varying success. However, the government has been advocating open access through the Gold route (i.e. via journals) since 2013, stating above all that it is the most future-proof route – not least via the VSNU.[5] For NWO, the Gold route has always been the main goal. In addition, NWO demands immediate open access (without an embargo period), which is hardly possible with Green (self-archiving) open access – unless NWO wants to force researchers to publish the preprint without peer review? Apparently, they have revised their own terms and policy. That can happen, of course, but I find it strange to argue that a fund aimed at publishing in journals needs to be terminated when Gold is the standard.

You could also argue that this fund pushes more money into the (publishing) system. To that I would say: let’s do better with the national deals and not focus only on hybrid journals.

In addition, there is the already mentioned National Plan Open Science (NPOS). This plan focuses on three key areas: 1. promoting open access to scientific publications (open access); 2. promoting optimal use and reuse of research data; 3. adapting evaluation and reward systems to bring them into line with the objectives of open science (reward systems).

One of the ambitions is full open access to publications. As stated:

“The ambition of the Netherlands is to achieve full open access in 2020. The principle that publicly funded research results should also be publicly available at no extra cost is paramount. Until the ambition of full open access to publications in the Netherlands and beyond is achieved, access to scientific information will be limited for the majority of society.”[6]

We are in this transition phase. And with this NWO-supported ambition in mind, the termination of a transition fund (for that is how it should be seen) seems a bit premature to me. It should be said that the possibility remains to budget for open access publications in project funding at NWO. But it remains to be seen for how long, considering their own wording: ‘for the time being’.

This post was published in the Dutch online journal ScienceGuide on Friday, June 30.

Notes

[1] https://www.nwo.nl/en/news-and-events/news/2017/nwos-incentive-fund-for-open-access-to-end-on-1-january-2018.html

[2] https://www.openscience.nl/binaries/content/assets/subsites-evenementen/open-science/national_plan_open_science_the_netherlands_february_2017_en_.pdf

[3] https://www.nwo.nl/en/policies/open+science/open+access+publishing

[4] http://openaccess.nl/en/publisher/cambridge-university-press

[5] http://www.vsnu.nl/files/documenten/Domeinen/Onderzoek/Open%20access/13330%20U%20aan%20OCW%20-%20%20OpenAccess.pdf

[6] National Plan Open Science, p.21

OBP Nominated for Education Award

We are delighted to announce that we are 2017 WISE Awards Finalists! The World Innovation Summit for Education (WISE) rewards organisations for their innovative and impactful approaches to today’s most urgent education challenges, and we are thrilled to be recognised …

Interview: Adrian Martin Speaks Out in Favor of Open Access

In the coming period, I will interview a number of researchers about their work and the extent to which open access plays a role in it. The debate around open access is often conducted at the policy level, among university boards, libraries, and publishers. But the voices of those who actually make use of research papers, books and research data are often not heard. How does a researcher or practitioner see the open access movement enabling free online access to scholarly works? How does this affect their work? What initiatives of interest are being developed in particular fields, and what are their personal experiences with open access publishing? All kinds of questions that hopefully lead to helpful answers for other researchers engaging with open access.

The first interview is with Adrian Martin. Adrian was born in Australia in 1959. He has been a film and arts critic for more than 30 years and is currently affiliated with Monash University as an associate professor in Film Culture and Theory. His work has appeared in many journals and newspapers around the world, and has been translated into over twenty languages.

The interview starts:

Jeroen: When did you first hear of open access as a new way of distributing research to a wider audience?

Adrian: To appreciate my particular viewpoint on open access issues, you probably need to know where I am ‘coming from’. I am not now, and have rarely been in my life so far, a salaried academic. I have spent most of my life as what I guess is called an ‘independent researcher’. I have sometimes called myself a ‘freelance intellectual’, but I guess the more prosaic description would simply be ‘freelance writer/speaker’. So, not a journalist in the strict sense (I have never worked full-time for any newspaper or magazine), and only sometimes an employed academic within the university system.

Latest issue of Senses of Cinema

Therefore, my entry into these issues is as someone who, at the end of the 1990s, began to get heavily involved in the publication of online magazines, whether as editor, writer, or translator. These were not commercial or industrial publications; they were ‘labour of love’ projects, kin to the world of ‘small print magazines’ in the Australian arts scene (which I had been a part of in the 1980s). No special subscription process was required; it was always, simply, a completely open and accessible website. My entrée to this new, global, online scene was through Bill Mousoulis, the founder of Senses of Cinema; later I was part of the editorial teams of Rouge and, currently, LOLA. And I have contributed to many Internet publications of this kind since the start of the 21st century. The latter two publications do not use academic ‘peer review’ (although everything is carefully checked and edited), and are run on an active ‘curation’ model (i.e., we approach specific people to ask for texts) rather than an ‘open submission’ model.

I say this in order to make clear that my attitude and approach does not come from only, or even mainly, an academic/scholarly perspective. For me, open access is not primarily or solely about making formerly ‘closed’ academic research available to all – although that is certainly one important part of the field. Open access is about – well, open access, in the strongly political sense of making people feel that they are not excluded from reading, seeing, learning or experiencing anything that exists in the world. Long before I encountered the inspiring works of Jacques Rancière, I believe I agreed deeply with his political philosophy: that what we have to fight, at every moment, is the unequal ‘distribution of the sensible’, which means the ways in which a culture tries to enforce what is ‘appropriate’ for the citizens in each sector of society. As a kid who grew up in a working-class suburb of Australia before drifting off on the lines-of-flight offered by cinephilia and cultural writing, I am all too painfully aware of the types of learning and cultural experience that so many people deny themselves, because they have already internalised the sad conviction that it is ‘not for them’, not consistent with their ‘place’ in the world. Smash all such places, I say!

This is why I am temperamentally opposed to any tendency to keep the discussion of open access restricted to a discussion of university scholarship – or, indeed, as sometimes happens, with the effect of strengthening the ‘professional’ borders around this scholarship, and thus shutting non-university people (such as I consider myself today) out of the game. Let me give you a controversial example. I use, and encourage the use of, Academia.edu. It is the only ‘repository of scholarly knowledge’ I know of that – despite its unwise name! – anyone can easily join and enjoy (once they are informed of it, and are encouraged to do so). Now, many people complain about the capitalistic nature of this site, and everything they say in this regard may be true. But when I ask them for an alternative that is as good and as extensive in its holdings, I am directed to ‘professional’ university repositories for texts – from which I am necessarily excluded from the outset, since I do not have a university job. This is bad! And it reinforces all the worst tendencies in the field.

Likewise, I bristle at the suggestion (it occasionally comes up) that an online publication such as LOLA (among many other examples) is not really ‘scholarly’. Online magazines are regularly downgraded by being described as mere ‘blogs’ (when this is not so!), with no professional standards, etc., etc. But my drive is, above all, a democratic one. I work mainly outside the university setting because I want access to be truly open. And I want the work to be lively and unalienated. A tall order, but we must forever strive for it! So, in a nutshell, for me the term ‘open access’ simply means ‘material freely available to all online’ – but material that is well written, well prepared, well edited and well presented.

Jeroen: Have you ever published any of your papers (or other scholarly material) in open access?

Adrian: Well, according to my above context of criteria, yes: a great deal, literally hundreds of essays! I believe I have covered a wide range of venues, from what I am calling Internet magazines (such as Transit and Desistfilm), through to online-only peer-reviewed publications (such as Movie, Necsus and The Cine-Files), through to the ‘paywall’ academic journals (such as Screen, Studies in Documentary Film and Continuum) which seem to exist less and less as solid, physical entities that one could actually obtain and hold a copy of (try buying one if you’re not a library), and more and more as a bunch of detached, virtual items (each article its own little island) on a digital checkout page of a wealthy publishing house’s website! This last point also applies to the chapters I have written for various academic books.

When I taught at Monash (Australia) and Goethe (Germany) universities from 2007 to 2015, I decided to ‘take a detour’ into this world of academic writing – partly because the institution demands or requires it, for the sake of judging promotions and so forth. I do not regret the type of in-depth, historical work, on a range of subjects, that this opportunity allowed me to do. But I am more than happy to be back in the less constrained, less rule-bound world of freelance writing. The university, finally, is all about a far too severe, restricted and vicious ‘distribution of the sensible’ – it tends to perpetuate itself, and close its professional ranks, rather than truly open its borders to what is beyond itself.

One of my best and happiest experiences with open access has been with the small American publisher, punctum books. I did my little book Last Day Every Day with them, and it has had three editions in three different languages there. Their care and dedication to projects is outstanding. The politics of punctum as an enterprise are incredibly noble and radical: people can opt to pay something for their books, or download them for free if they wish. Likewise, authors can take any money that comes to them, or choose to plough it back into the company (that’s what I did, and probably most of their authors do). At the same time, certain professional/academic standards are upheld: punctum has an extraordinary board, manuscripts are sent out for reporting, and so forth. They both ‘play the game’ of academic publishing as far as they have to, and also challenge the system in a remarkable way. I am proud to be involved with them.

Jeroen: You are an Australian scholar, living in Spain, traveling for lectures and conferences, and studying and writing about a global topic, as film and media studies is. How does free online scholarly content affect your daily work as a scholar?

Adrian: Well, I enjoy an extraordinary amount of access to the work of other critics and scholars, especially through Academia.edu, and through postings of links by individuals on social media. At the same time, the ‘paywalls’ shut me out, because the purchase rates are too high for me as an individual, and I have no university-sanctioned reading/downloading access. As a freelance writer, I have to go where the work is, and where the money (very modest!) is. So that itinerary necessarily cuts across ‘commercial’ and ‘academic’ lines, and also involves me with many brave projects that are largely non-academic, and commercial only on an artisanal scale: literary projects such as Australia’s Cordite, for example.

Jeroen: In your first answer, you already addressed the issues with Academia.edu (and I guess you can extend this to other commercial products with similar functionalities, like ResearchGate), but you also stress the need for a good place to share papers and research output. In the sciences, the preprint and postprint are an accepted and efficient standard in the scholarly communication process. Even publishers allow it. Archives like arXiv and SSRN have seen the light since the mid-1990s, and the use of those repositories increases every year. In the humanities, there is no such culture. Do you think this could change in a time when sharing initial ideas is becoming easier? Or is the writing and publishing culture in the humanities intrinsically different from that in the sciences?

Adrian: You offer a very intriguing comparative perspective here, Jeroen. I have no experience of scholarship in the sciences, so what you say is surprising (and good!) news to me. Perhaps, in the humanities, there has been, for too long a time, a certain anxious aura built up around the individual ‘ownership’ of one’s ideas – and thereby most of us have gone along with this perceived need not to share our work so readily or easily in the preprint and postprint ways that you describe. But I do think this can change, and quite radically, if humanities people are encouraged to go in this direction. One can already see the signs of it, when scholars share their drafts of papers more readily (and widely) than before. I think it would be a very productive development.

Jeroen: One of the biggest hurdles to clear in the next 5 to 10 years regarding open access in the humanities is the cost of publishing. In the sciences, the dominant business model is based on APCs (Article Processing Charges). In the humanities, this model is a problem. One of the reasons is that research budgets in the humanities and social sciences are much lower. Another is that journal prices in the sciences are much higher, which created an urgency there to move to an open access environment; subscription costs for humanities journals are much lower.

The majority of open access journals in the humanities, and also in media studies, have other business models and are often subsidized by institutions or foundations. But subsidies are often temporary. New initiatives like the Open Library of Humanities and Knowledge Unlatched come up with different financial models, all aimed at unburdening individual authors, but these models still need to prove themselves. Nevertheless, things are changing. How do you see a sustainable open access publishing environment for the humanities, and more specifically for film and media studies?

Adrian: Issues of funding – and money, in general – are vexing indeed. Once again, let me make clear where I’m exactly ‘coming from’. With Rouge and LOLA magazines, we have never received, or even sought, any government funding or any kind of arts-industry subsidy; we have never sought or accepted any advertising revenue; and we have never benefitted from any university grants of any kind. We run these magazines on virtually no money (beyond basic operating costs) and of course, as a result, we are unable to pay any contributor (and we are always upfront about that). This is perhaps an extreme, but not uncommon position.  It was a decision that, in each case, we took. Why? Because we didn’t want the restrictions, and obligations, that come with the ‘public purse’ – or, indeed, with almost any source of ‘filthy lucre’! In Australia, for example, to accept government funding means you will have to meet a ‘quota’ of ‘local/national content’ – and if you don’t, you won’t get that subsidy again. Senses of Cinema has struggled with that poisoned chalice. With Rouge and LOLA, on the other hand, we enjoy the ‘stateless’ potentiality of online publishing – it is ‘of the world’ and belongs to the whole world (or at least, those in it who can read English!). Sometimes we engaged in (perhaps at our initiative) ‘co-production’ ventures, some of which panned out well (such as a book that Rouge made in collaboration with the Rotterdam Film Festival on Raúl Ruiz in 2004, or the publication last year in LOLA of certain chapters from a Japanese book tribute to Shigehiko Hasumi), and others which did not. But I and my colleagues stick to this generally penniless state of idealism!

I was naively shocked when I realised that academic publishers usually fund their open access projects through payments from writers! And that – as I discovered upon asking a few friends – some universities routinely subsidise these types of publications for their scholars. As a freelancer, once more, I am shut out from this particular system. Therefore, my next ‘academic’ book (Mysteries of Cinema for Amsterdam University Press) – ironically, largely comprised of my essays from non-academic print publications! – will not be Open Access, because I cannot personally afford that, and I have no ‘channel’ of institutional funding that I can access. Once again, that’s just the name of the game. I will be very happy when that book exists, but it will purely be a physical book for purchase only!

I have, therefore, no utopian visions for how to fund open access across the humanities board. Personally, I am currently looking into Patreon as a possible way to sustain arts/criticism-related website projects. It’s a democratic model: people pay to support your ongoing work, to give you time and space to creatively do it. It’s not like Kickstarter, which is geared to a single production, such as a feature film project. Patreon has proved a godsend for artists such as musicians. We shall see if it can also work in an open access publishing context.

Jeroen: You are one of the founding fathers and practitioners of the so-called audiovisual essay, a rising digital video format in academic publishing. Instead of a paper written in words, a compilation of images offers a new textual structure. Another digital format is the enriched publication: articles or books with data included. One of the issues, besides arranging new forms of reviewing, is copyright and reuse. The audiovisual essay format obviously benefits from images with an open license, like the Creative Commons licenses, which make it possible to reuse and remix these images. Archives are being digitized rapidly, but only a small portion is currently available in the public domain. Scholars are often not allowed to make use of film quotes or stills in their works. How do you see the near future for using digitized media files for academic purposes in relation to copyright law?

Adrian: We are in an extraordinarily ‘grey area’ here – appropriately, I suppose, since things like LOLA are (I’m told) classified as ‘grey Open Access’! And the legal situation for audiovisual works can vary greatly from nation to nation. We are in a historical moment when a lot of experimentation is going ‘under the radar’ of legal restriction, or (in the eyes of the big corporations) is considered simply too minor to consider taking any action against. Bear in mind that most critical/scholarly work in audiovisual essays (of the kind that I do in collaboration with my partner, Cristina Álvarez López) is not about making large sums of money; it is still a marginal, ‘labour of love’ activity, just as small, cultural magazines were in the 1980s.

Still from audiovisual essay ‘Thinking Machine 6: Pieces of Spaces’. © Cristina Álvarez López & Adrian Martin, March 2017

This general fuzziness of the present moment is all to the good, in my opinion; we can all enjoy a certain freedom within it (with, occasionally, a ‘bite’ from above on particular questions of copyright: music use, for instance). I speak of no specific works or practitioners here, but much work in the audiovisual essay field happens both inside and outside of Creative Commons licenses. I don’t think anyone should be restricted to using just that. The front on which we all have to battle is ‘fair use’ or ‘fair dealing’ (hence the disclaimer ‘for study purposes only’ that Cristina & I place at the end of all our videos): the right to quote (and hence manipulate) audiovisual quotations for scholarly and artistic purposes, ranging all the way from lecture demonstration and re-montage analysis to parody and creative détournement/appropriation. The fully scholarly publication [in]Transition to which I and many others have contributed – no one will ever call that a blog! – takes full advantage, via its publishing ‘home base’ in the USA, of everything that the fair use provisions in that country can allow. And I think you can see, if you peruse that site, how far the possibilities can go.

I very much liked the recent essay by Noah Berlatsky, “Fair Use Too Often Goes Unused” in The Chronicle of Higher Education, which argued that we – meaning not only writers and artists, but perhaps even more significantly editors and publishers – need to be questioning and pushing at the limits of the definition, practice and enforcement of fair use regulations. Too often (and I have experienced this myself) editors and publishers assume, at the outset, that a great deal is simply impossible, unthinkable: even the use of screenshots from movies! There is so much unnecessary fear and trepidation over such matters. Sure, no one wants to take a stupid risk and be sued as a result. But, to cite Berlatsky’s conclusion:

“Books and journal articles about visual culture need to be able to engage with, analyse, and share visual culture. Fair use makes that possible — but only if authors and presses are willing to assert their rights. Presses may take on a small risk in asserting fair use. But in return they give readers an invaluable opportunity to see [and I would add: hear!] what scholars are talking about.”

Jeroen: I want to thank you for this interview.


© Adrian Martin, June 2017

*During the NECS 2017 conference in Paris, the session ‘The Changing Landscape of Open Access Publications in Film and Media Studies: Distributing Research and Exchanging Data’ will be held on Saturday, July 1. Download the final program here.

** 15 June 2018: some minor updates in layout and a few links added to mentioned projects.

The spy who pwned me

U.S. intelligence officers discuss Chinese espionage in dramatically different terms than they use in talking about the Russian interference in the U.S. presidential election of 2016. Admiral Michael Rogers, head of NSA and U.S. Cyber Command, described the Russian efforts as “a conscious effort by a nation state to attempt to achieve a specific effect” (Boccagno 2016). The former director of NSA and subsequently CIA, General Michael Hayden, argued, in contrast, that the massive Chinese breach of records at the U.S. Office of Personnel Management was “honorable espionage work” against a “legitimate intelligence target” (American Interest 2016; Gilman et al. 2017). Characterizing the Chinese infiltration as illegal hacking or warfare would challenge the legitimacy of state-sanctioned hacking for acquiring information and would upset the norms permitting every state to hack relentlessly into each other’s information systems.

The hairsplitting around state-sanctioned hacking speaks to a divide between the doctrinal understanding of intelligence professionals and the intuitions of non-professionals. Within intelligence and defense circles of the United States and its close allies, peacetime hacking into computers with the primary purpose of stealing information is understood to be radically different than using hacked computers and the information from them to cause what are banally called “effects”—from breaking hard drives or centrifuges, to contaminating the news cycles of other states, to playing havoc with electric grids. One computer or a thousand, the size of a hack doesn’t matter: scale doesn’t transform espionage into warfare. Intent is key. The Chinese effort to steal information: good old espionage, updated for the information age. The Russian manipulation of the election: information or cyber warfare.

Discussing the OPM hack, Gen. Hayden candidly acknowledged,

If I as director of CIA or NSA would have had the opportunity to grab the equivalent [employee records] in the Chinese system, I would not have thought twice… I would not have asked permission. I would have launched the Starfleet, and we would have brought those suckers home at the speed of light.[1]

Under Hayden and his successors, NSA has certainly brought suckers home from computers worldwide. Honorable computer espionage has become multilateral, mundane, and pursued at vast scale.[2]

In February 1996 John Perry Barlow declared to the “Governments of the Industrial World” that they “have no sovereignty where we gather”—in cyberspace (Barlow 1996). Whatever their naivety in retrospect, such claims in the 1990s from right and left, from civil libertarians as well as defense hawks, justified governments taking preemptive measures to maintain their sovereignty. Warranted or not, the fear that the Internet would weaken the state fueled its dramatic, mostly secret, expansion at the beginning of the current century. By understanding the ways state-sponsored hacking exploded from the late 1990s onward, we see more clearly the contingent interplay of legal authorities and technical capacities that created the enhanced powers of the nation-state.

How did we get to a mutual acceptance of state-sanctioned hacking? In a legal briefing for new staff, NSA tells a straightforward story of the march of technology. The movement from telephonic and other communication to the mass “exploitation” of computers was “a natural transition of the foreign collection mission of SIGINT” (signals intelligence). “As communications moved from telex to computers and switches, NSA pursued those same communications” (NSA OGC n.d.). Defenders of NSA and its partner agencies regularly make similar arguments: anyone unwilling to accept the necessity of government hacking for the purposes of foreign intelligence is seen as having a dangerous and unrealistic unawareness of the threats nations face today. For many in the intelligence world today, hacking into computers and network infrastructures worldwide is, quite simply, an extension of the long-standing mission of “signals intelligence”—the collection and analysis of communications by someone other than the intended recipient.

Figure 1: “Authority to Conduct CNE” (NSA Office of General Counsel, n.d.:8)

Contrary to the seductive simplicity of the NSA slide, little was natural about the legalities around computer hacking in the 1990s. The legitimization of mass hacking into computers to collect intelligence wasn’t technologically or doctrinally pre-given, and hacking into computers didn’t—and doesn’t—easily equate to earlier forms of espionage. In the late 1990s and 2000s, information warfare capacities were being developed, and authority distributed, before military doctrine or legal analysis could solidify.[3]  Glimpsed even through the fog of classification, documents from the U.S. Department of Defense and intelligence agencies teem with discomfort, indecision, and internecine battles that testify to the uncertainty within the military and intelligence communities about the legal, ethical, and doctrinal use of these tools. More “kinetic” elements of the armed services focused on information warfare within traditional conceptions of military activity: the destruction and manipulation of the enemy command and control systems in active battle. Self-appointed modernizers demanded a far more encompassing definition that suggested the distinctiveness of information warfare and, in many cases, the radical disruption of traditional kinetic warfare.

The first known official Department of Defense definition of “Information Warfare,” promulgated in an only recently declassified 1992 document, comprised:

The competition of opposing information systems to include the exploitation, corruption, or destruction of an adversary’s information system through such means as signals intelligence and command and control countermeasures while protecting the integrity of one’s own information systems from such attacks (DODD TS 3600.1 1992:1).

Under this account, warfare included “exploitation”: the acquiring of information from an adversary’s computers, whether practiced on or by the United States (ibid.:4).[4] A slightly later figure (Figure 2) illustrates this inclusion of espionage in information warfare.

Figure 2: “Information Warfare” (Fields and McCarthy 1994: 27)

According to an internal NSA magazine, information warfare was “one of the new buzzwords in the hallways” of the Agency by 1994 (Redacted 1994:3). Over the next decade, the military services competed with NSA and among themselves over the definition and partitioning of information warfare activities. One critic of letting NSA control information warfare worried about “the Intelligence fox being put in charge of the Information Warfare henhouse” (Rothrock 1997:225).

Information warfare techniques were too valuable to be used only in kinetic war, a point Soviet strategists had long made. By the mid-1990s, the U.S. Department of Defense had embraced a broader doctrinal category, “Information Operations” (DODD S-3600 1996). Such operations comprised many things, including “computer network attack” (CNA) and “computer network defense” (CND) as well as older chestnuts like “psychological operations.” Central to the rationale for the renaming was that information warfare-like activities did not belong solely within the purview of military agencies, and they did not occur only during times of formal or even informal war. One influential strategist, Dan Kuehl, explained, “associating the word ‘war’ with the gathering and dissemination of information has been a stumbling block in gaining understanding and acceptance of the concepts surrounding information warfare” (Kuehl 1997). Information warfare had to encompass collection of intelligence, deception, and propaganda, as well as more warlike activities such as deletion of data or destruction of hardware. Exploitation had to become peaceful.

Around 1996, a new doctrinal category, “Computer Network Exploitation” (CNE), emerged within the military and intelligence communities to capture the hacking of computer systems to acquire information from them.[5] The definition covered the acquisition of information but went further: “computer network exploitation” also encompassed enabling for future use. The military and intelligence communities produced a series of tortured definitions. A 2001 draft document offered two versions, one succinct,

Intelligence collection and enabling operations to gather data from target or adversary automated information systems (AIS) or networks.

and the other clearer about this “enabling”:

Intelligence collection and enabling operations to gather data from target or adversary automated information systems or networks. CNE is composed of two types of activities: (1) enabling activities designed to obtain or facilitate access to the target computer system where the purpose includes foreign intelligence collection; and, (2) collection activities designed to acquire foreign intelligence information from the target computer system (Wolfowitz 2001:1-1).

Enabling operations were carefully made distinct from affecting a system, which takes on a war-like demeanor. Information operations involved “actions taken to affect adversary information and information systems, while defending one’s own information and information systems” (CJCSI 3210.1A 1998). CNE was related to but was not in fact an information “operation.” A crucial 1999 document from the CIA captured the careful, nearly casuistical, excision of CNE from Information Operations: “CNE is an intelligence collection activity and while not viewed as an integral pillar of DoD IO doctrine, it is recognized as an IO-related activity that requires deconfliction with IO” (DCID 7/3 2003: 3).  With this new category, “enabling” was hived off from offensive warfare, to clarify that exploiting a machine—hacking in and stealing data—was not an attack. It was espionage, whose necessity and ubiquity everyone ought simply to accept.

The new category of CNE subdued the protean activity of hacking and put it into an older legal box—that of espionage. The process of hacking into computers for the purpose of taking information and enabling future activities during peacetime was thus grounded in pre-existing legal foundations for signals intelligence. In contrast to the flurry of new legal authorities that emerged around computer network attack, computer network exploitation was largely made to rest on the hoary authorities of older forms of signals intelligence.[6]

A preliminary DoD document captured this domestication of hacking in 1999:

The treatment of espionage under international law may help us make an educated guess as to how the international community will react to information operations activities. . . . international reaction is likely to depend on the practical consequences of the activity. If lives are lost and property is destroyed as a direct consequence, the activity may very well be treated as a use of force. If the activity results only in a breach of the perceived reliability of an information system, it seems unlikely that the world community will be much exercised. In short, information operations activities are likely to be regarded much as is espionage—not a major issue unless significant practical consequences can be demonstrated (Johnson 1999:40; emphasis added).

In justifying computer espionage, military and intelligence thinkers rested on a Westphalian order of ordinary state relations with long-standing norms. At the very moment that the novelty of state-sanctioned hacking for information was denied, however, a range of strategists and legal thinkers expounded how the novelty of information warfare would necessitate a radical alteration of the global order.

Beyond Westphalia

Mirroring Internet visionaries of left and right alike, military and defense wonks in the 1990s detailed how the Net would undermine national sovereignty. An article in RAND’s journal in 1995 explained,

Information war has no front line. Potential battlefields are anywhere networked systems allow access—oil and gas pipelines, for example, electric power grids, telephone switching networks. In sum, the U.S. homeland may no longer provide a sanctuary from outside attack (RAND Research Review 1995; emphasis added).

In this line of thinking, a wide array of forms of computer intrusion became intimately linked to other forms of asymmetric dangers to the homeland, such as biological and chemical warfare.

Figure 3. Information warfare is different (Andrews 1996:3-2).

The porousness of the state in the global information age accordingly demanded an expansion—a hypertrophy—of state capacities and legal authorities at home and abroad to compensate. The worldwide network of surveillance revealed in the Snowden documents is a key product of this hypertrophy. In the U.S. intelligence community, the challenges of new technologies demanded rethinking Fourth Amendment prohibitions against unreasonable search and seizure. In a document intended to gain the support of the incoming presidential administration, NSA explained in 2000,

Make no mistake, NSA can and will perform its missions consistent with the Fourth Amendment and all applicable laws. But senior leadership must understand that today’s and tomorrow’s mission will demand a powerful, permanent presence on a global telecommunications network that will host the ‘protected’ communications of Americans as well as the targeted communications of adversaries (NSA 2000:32).

The briefing for the future president and his advisors delivered the hard truths of the new millennium. In the mid- to late 1990s, technically minded circles in the Departments of Defense and Justice, in corners of the Intelligence Community, and in various scattered think tanks around Washington and Santa Monica began sounding the call for a novel form of homeland security, where military and law enforcement, the government and private industry, and domestic and foreign surveillance would necessarily mix in ways long seen as illicit if not illegal. Constitutional interpretation, jurisdictional divisions, and the organization of bureaucracies alike would need to undergo dramatic—and painful—change. In a remarkable draft “Road Map for National Security” from 2000, a centrist bipartisan group argued, “in the new era, sharp distinctions between ‘foreign’ and ‘domestic’ no longer apply. We do not equate national security with ‘defense’” (U.S. Commission on National Security 2001).  9/11 proved the catalyst, but not the cause, of the emergence of the homeland security state of the new millennium. The George W. Bush administration drew upon this dense congeries of ideas, plans, vocabulary, constitutional reflection, and an overlapping network of intellectuals, lawyers, ex-spies, and soldiers to develop the new homeland security state. This intellectual framework justified the dramatic leap in the foreign depth and domestic breadth of the acquisition, collection, and analysis of communications of NSA and its Five Eyes partners, including computer network exploitation.

NSA Ad. New York Times Oct. 13, 1985.

The Golden Age of SIGINT

In its 2000 prospectus for the incoming presidential administration, the NSA included an innocent-sounding clause: “in close collaboration with cryptologic and Intelligence Community partners, establish tailored access to specialized communications when needed” (National Security Agency 2001: 4). Tailored access meant government hacking—CNE. In the early 1990s, NSA seemed to many a Cold War relic, inadequate to the times, despite its pioneering role in computer security and penetration testing from the late 1960s onward. By the late 2010s, NSA was at the center of the “golden age of SIGINT,” focused ever more on computers, their contents, and the digital infrastructure (NSA 2012: 2).

From the mid-1990s, NSA and its allies gained extraordinary worldwide capacities, both in the “passive” collection of communications flowing through cables or the air and in the “active” collection achieved by hacking into information systems, whether the president’s network, Greek telecom networks during the Athens Olympics, or tactical situations throughout Iraq and Afghanistan (see Redacted-Texas TAO 2010; SID Today 2004).

Prioritizing offensive hacking over defense became very easy in this context. An anonymous NSA author explained the danger in 1997:

The characteristics that make cyber-based operations so appealing to us from an offensive perspective (i.e., low cost of entry, few tangible observables, a diverse and expanding target set, increasing amounts of ‘freely available’ information to support target development, and a flexible base of deployment where being ‘in range’ with large fixed field sites isn’t important) present a particularly difficult problem for the defense… before you get too excited about this ‘target-rich environment,’ remember, General Custer was in a target-rich environment too! (Redacted 1997: 9; emphasis added).

The Air Force and NSA pioneered computer security from the late 1960s: their experts warned that the wide adoption of information technology in the United States would make it the premier target-rich environment (Hunt 2012). NSA’s capacities developed as China, Russia, and other nations dramatically expanded their own computer espionage efforts (see Figure 4 for the case of China c. 2010).

Figure 4. NSA’s list of major Chinese CNE efforts, called “BYZANTINE HADES.” (Redacted-NTOC 2010).

By 2008, and probably much earlier, the Agency and its close allies probed computers worldwide, tracked their vulnerabilities, and engineered viruses and worms both profoundly sophisticated and highly targeted. Or, as a key NSA hacking division bluntly put it: “Your data is our data, your equipment is our equipment—anytime, anyplace, by any legal means” (SID Today 2006: 2).

Figure 5. Worldwide SIGINT/Defense Cryptologic Platform, n.d., https://archive.org/details/NSA-Defense-Cryptologic-Platform.

While the internal division for hacking was named “Tailored Access Operations,” its work quickly moved beyond the highly tailored—bespoke—hacking of a small number of high-priority systems. In 2004, the Agency built new facilities to enable it to expand from “an average of 100-150 active implants to simultaneously managing thousands of implanted targets” (SID Today 2004a:2). According to Matthew Aid, NSA had built tools (and adopted easily available open source tools) for scanning billions of digital devices for vulnerabilities; hundreds of operators were covertly “tapping into thousands of foreign computer systems” worldwide (Aid 2013). By 2008, the Agency’s distributed XKeyscore database and search system offered its analysts the option to “Show me all the exploitable machines in country X,” meaning that the U.S. government systematically evaluated all the available machines in some nations for potential exploitation and catalogued their vulnerabilities. Cataloging at scale is matched by exploiting machines at scale (National Security Agency 2008). One program, Turbine, sought to “allow the current implant network to scale to large size (millions of implants)” (Gallagher and Greenwald 2014). The British, Canadian, and Australian partner intelligence agencies play central roles in this globe-spanning work.

The disanalogy with espionage

The legal status of government hacking to exfiltrate information rests on an analogy with traditional espionage. Yet the scale and techniques of state hacking strain the analogy. Two lawyers associated with U.S. Cyber Command, Col. Gary Brown and Lt. Col. Andrew Metcalf, offer two examples: “First, espionage used to be a lot more difficult. Cold Warriors did not anticipate the wholesale plunder of our industrial secrets. Second, the techniques of cyber espionage and cyber attack are often identical, and cyber espionage is usually a necessary prerequisite for cyber attack” (Brown and Metcalf 1998:117).

The colonels are right: U.S. legal work on intelligence in the digital age has tended to deny that scale is legally significant. The international effort to exempt sundry forms of metadata such as calling records from legal protection stems from the intelligence value of studying metadata at scale. The collection of the metadata of one person, on this view, is not legally different from the collection of the metadata of many people, as the U.S. Foreign Intelligence Surveillance Court has explained:

[so] long as no individual has a reasonable expectation of privacy in meta data [sic], the large number of persons whose communications will be subjected to the . . . surveillance is irrelevant to the issue of whether a Fourth Amendment search or seizure will occur.[7]

Yet metadata is desired by intelligence agencies precisely because it is revealing at scale. Since their inception, NSA and its Commonwealth analogues have focused as much on working with vast databases of “metadata” as on breaking enciphered texts. NSA’s historians celebrate a cryptological revolution afforded through “traffic analysis” (Filby 1993). From reconstructing the Soviet “order of battle” in the Cold War to seeking potential terrorists now, the U.S. Government has long recognized the transformative power of machine analysis of large volumes of metadata while simultaneously denying the legal salience of that transformative power.

As in the case of metadata, U.S. legal work on hacking into computers does not treat scale as legally significant. Espionage at scale used to be tough going: sifting through physical mail or garbage, setting physical wiretaps, or placing devices to capture microwave transmissions were corporeal activities that scaled only with great expense, difficulty, and potential for discovery (Donovan 2017). Scale provided a salutary limitation on surveillance, domestic or foreign. As with satellite spying, computer network exploitation typically lacks this corporeality, barring cases of gaining access to air-gapped computers, as with the Stuxnet virus. With the relative ease of hacking, the U.S. and its allies can know the exploitable machines in a country X, whether those machines belong to generals, presidents, teachers, professors, jihadis, or eight-year-olds.

Hacking into computers unquestionably alters them, so the analogy with physical espionage is imperfect at best. A highly redacted Defense Department “Information Operations Policy Roadmap” of 2003 underscores the ambiguity of “exploitation versus attack.” The document calls for clarity about the definition of an attack, both against the U.S. (slightly redacted) and by the U.S. (almost entirely redacted). “A legal review should determine what level of data or operating system manipulation constitutes an attack” (Department of Defense 2003:52). Nearly every definition—especially every classified definition—of computer network exploitation includes “enabling” as well as exploitation of computers. The military lawyers Brown and Metcalf argue, “Cyber espionage, far from being simply the copying of information from a system, ordinarily requires some form of cyber maneuvering that makes it possible to exfiltrate information. That maneuvering, or ‘enabling’ as it is sometimes called, requires the same techniques as an operation that is intended solely to disrupt” (Brown and Metcalf 1998:117). “Enabling” is the key moment where the analogy between traditional espionage and hacking into computers breaks down. The secret definition, as of a few years ago, explains that enabling activities are “designed to obtain or facilitate access to the target computer system for possible later” computer network attack. The enabling function of an implant placed on a computer, router, or printer is the preparation of the space of future battle: it is as if every time a spy entered a locked room to plant a bug, that bug contained a nearly unlimited capacity to materialize a bomb or other device should distant masters so desire. An implant essentially grants a third party control over a general-purpose machine: it is not limited to the exfiltration of data. Installing an implant within a computer is like installing a cloaked 3-D printer into physical space that can produce a photocopier, a weapon, or a self-destruct device at the whim of its master. One NSA document put it clearly: “Computer network attack uses similar tools and techniques as computer network exploitation. If you can exploit it, you can attack it” (SID Today 2004b).

In a leaked 2012 Presidential Policy Directive, the Obama administration clarified the lines between espionage and information warfare explicitly to allow that espionage may produce results akin to an information attack. Amid a broad array of new euphemisms, CNE was transformed into “cyber collection,” which “includes those activities essential and inherent to enabling cyber collection, such as inhibiting detection or attribution, even if they create cyber effects” (Presidential Policy Directive (PPD)-20: 2-3). The bland term ‘cyber effects’ is defined as “the manipulation, disruption, denial, degradation, or destruction of computers, information or communications systems, networks, physical or virtual infrastructure controlled by computers or information systems, or information resident thereon.” Espionage, then, often will be attack in all but name. The creation of effects akin to attack need not invoke the international legal considerations of war, only the far weaker legal regime around espionage. With each clarification, the gap between actual government hacking for the purpose of obtaining information and traditional espionage widens, and the utility of espionage as a category for thinking through the tough policy and legal choices around hacking diminishes.

Surveilling Irony

By the end of the first decade of the 2000s, sardonic geek humor within NSA reveled in the ironic symbols of government overreach. A classified NSA presentation trolled civil libertarians: “Who knew that in 1984” an iPhone “would be big brother” and “the Zombies would be paying customers” (Spiegel Online 2013). Apple’s famous 1984 commercial dramatized how better technology would topple the corporatized social order, presaging a million dreams of the Internet disrupting wonted order. Far from undermining the ability of traditional states to know and act, the global network has created one of the greatest intensifications of the power of sovereign states since 1648. Whether espoused by cyber-libertarians or RAND strategists, the threat from the Net enabled new authorities and undermined civil liberties. The potential weakening of the state justified its hypertrophy. The centralization of online activity into a small number of dominant platforms (Weibo, Google, Facebook), with their billions of commercial transactions, has enabled a scope of surveillance unexpected by the most optimistic intelligence mavens in the 1990s. The humor is right on.

Signals intelligence is a hard habit to break—civil libertarian presidents like Jimmy Carter and Barack Obama quickly found themselves taken with being able to peek at the intimate communications of friends and foes alike, to know their negotiating positions in advance, to be three steps ahead in the game of 14-dimensional chess. State hacking at scale seems to violate the sovereignty of states at the same time as it serves as a potent form of sovereign activity today. Neither the Chinese hacking into OPM databases nor the alleged Russian intervention in the recent US and French elections accords well with many basic intuitions about licit activities among states. If it would be naïve to imagine the evanescence of state-sanctioned hacking, it is doctrinally and legally disingenuous to treat that hacking as entirely licit based on ever less applicable analogies to older forms of espionage.

As the theorists in the U.S. military and intelligence worlds in the 1990s called for new concepts and authorities appropriate to the information age, they nevertheless tamed hacking for information by treating it as continuous with traditional espionage. The near ubiquity of state-sanctioned hacking should not sanction an ill-fitting legal and doctrinal frame that ensures its monotonic increase. Based on an analogy to spying that ignores scale, “computer network exploitation” and its successor concepts preclude the rigorous analysis necessary for the hard choices national security professionals rightly insist we must collectively make. We need a ctrl+alt+del. Let’s hope the implant isn’t persistent.

Matthew L. Jones teaches history of science and technology at Columbia. He is the author, most recently, of Reckoning with Matter: Calculating Machines, Innovation, and Thinking about Thinking from Pascal to Babbage (Chicago, 2016).

References

Aid, Matthew M. 2013. “Inside the NSA’s Ultra-Secret China Hacking Group,” Foreign Policy. June 10. Available at: link.

American Interest. 2015. “Former CIA Head: OPM Hack was ‘Honorable Espionage Work.’” The American Interest. June 16. Available at: link.

Andrews, Duane. 1996. “Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D),” December.

Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” Electronic Frontier Foundation, February 8, 1996. Available at: link.

Berkowitz, Bruce D. 2003. The New Face of War: How War Will Be Fought in the 21st Century. New York: Free Press.

Boccagno, Julia. 2016. “NSA Chief speaks candidly of Russia and U.S. Election.” CBS News. November 17. Available at:  link.

Brown, Gary D. and Andrew O. Metcalf. 1998. “Easier Said Than Done: Legal Reviews of Cyber Weapons,” Journal of National Security Law and Policy 7.

CJCSI 3210.01A. 1998. “Joint Information Operations Policy,” Joint Chiefs, November 6. Available at: link.

DCID 7/3. 2003. “Information Operations and Intelligence Community Related Activities.” Central Intelligence Agency, June 5.  Available at: link.

Department of Defense. 2003. “Information Operations Roadmap,” October 30. Available at: link.

DODD TS 3600.1. 1992.  “Information Warfare (U),” December 21. Available at: link.

DODD S-3600.1, 1996. “Information Operations (IO) (U),” December 9. Available at: link.

Donovan, Joan. 2017. “Refuse and Resist!” Limn 8, February. Available at: link.

Falk, Richard A. 1962. “Space Espionage and World Order: A Consideration of the Samos-Midas Program,” in Essays on Espionage and International Law. Akron: Ohio State University Press.

Fields, Craig, and James McCarthy, eds. 1994. “Report of the Defense Science Board Summer Study Task Force on Information Architecture for the Battlefield,” October. Available at: link.

Filby, Vera R. 1993. United States Cryptologic History, Sources in Cryptologic History, Volume 4, A Collection of Writings on Traffic Analysis. Fort Meade, MD: NSA Center for Cryptological History.

Gallagher, Ryan and Glenn Greenwald. 2014. “How the NSA Plans to Infect ‘Millions’ of Computers with Malware,” The Intercept. March 12. Available at:  link.

Gilman, Nils, Jesse Goldhammer, and Steven Weber. 2017. “Can You Secure an Iron Cage?” Limn 8, February. Available at: link.

Hunt, Edward. 2012. “U.S. Government Computer Penetration Programs and the Implications for Cyberwar,” IEEE Annals of the History of Computing. 34(3):4–21.

Johnson, Philip A. 1999. “An Assessment of International Legal Issues in Information Operations,” 1999, 40.  Available at: link.

Kaplan, Fred M. 2016. Dark Territory: The Secret History of Cyber War. New York: Simon & Schuster.

Kuehl, Dan. 1997. “Defining Information Power,” Strategic Forum: Institute for National Strategic Studies, National Defense University, no. 115 (June). Available at: link.

Lin, Herbert S. 2010. “Offensive Cyber Operations and the Use of Force,” Journal of National Security Law & Policy, 4.

National Security Agency/Central Security Service. 2000. “Transition 2001,” December. Available at: link.

National Security Agency. 2008. “XKEYSCORE.” February 25. Available at: link.

National Security Agency. 2012. “(U) SIGINT Strategy, 2012-2016,” February 23. Available at:  link.

NSA Office of General Counsel. n.d.  “(U/FOUO) CNO Legal Authorities,” slide 8, available at: link.

Owens, William, Kenneth W. Dam, and Herbert S. Lin. 2009. Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities. Washington, D.C.: National Academies Press.

Presidential Policy Directive (PPD)-20: “U.S. Cyber Operations Policy,” October 16, 2012. Available at:  link.

Rand Research Review. 1995. “Information Warfare: A Two-Edged Sword.” Rand Research Review: Information Warfare and Cyberspace Security. Ed. A. Schoben. Santa Monica: Rand. Available at: link.

Rattray, Gregory J. 2001. Strategic Warfare in Cyberspace. Cambridge, Mass: MIT Press.

Redacted. 1994. “Information Warfare: A New Business Line for NSA,” Cryptolog. July.

Redacted. 1997. “IO, IO, It’s Off to Work We Go . . . (U),” Cryptolog. Spring.

Redacted-NTOC, V225. 2010, “BYZANTINE HADES: An Evolution of Collection,” June. Slides available at: link.

Redacted-Texas TAO/FTS327. 2010. “Computer-Network Exploitation Successes South of the Border,” November 15. Available from link.

Rid, Thomas. 2016. Rise of the Machines: A Cybernetic History. New York: W. W. Norton & Company.

Rothrock, John. 1997. “Information Warfare: Time for Some Constructive Criticism,” in Athena’s Camp: Preparing for Conflict in the Information Age, ed. John Arquilla and David Ronfeldt.  Santa Monica: Rand.

Schmitt, Michael N., ed. 2017. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations: Prepared by the International Groups of Experts at the Invitation of the NATO Cooperative Cyber Defence Centre of Excellence, 2nd ed. Cambridge: Cambridge University Press. DOI:10.1017/9781316822524.

SID Today. 2004. “Another Successful Olympics Story,” October 6, 2004. Available at: link.

SID Today. 2004a. “Expanding Endpoint Operations.” September 17. Available at: link.

SID Today. 2004b. “New Staff Supports Network Attack.” October 21. Available at: link.

SID Today. 2006. “The ROC: NSA’s Epicenter for Computer Network Operations,” September 6. Available at: link.

Spiegel Online. 2013. “Spying on Smartphones,” SPIEGEL ONLINE, September 9. Available at: link.

United States Commission on National Security/21st Century. 2001. Road Map for National Security: Imperative for Change. January 31. Final Draft. Available at: link.

Wolfowitz, Paul. 2001. “Department of Defense Directive 3600.1 Draft,” October.


[1] In conversation with Gerard Baker, June 15, 2015.  Available at link.

[2] For the current state of international consensus on cyber espionage among international lawyers, see Schmitt 2017, rule 32.

[3] See Berkowitz 2003:59-65; Rattray 2001; Rid 2016:294-339; and Kaplan 2016.

[4] Drawn from the signals intelligence idiolect, “exploitation” means, roughly, making some qualities of a communication available for acquisition. With computers, this typically means discovering bugs in systems, or using pilfered credentials, and then building robust ways to gain control of the system or at least to exfiltrate information from it.

[5] Computer Network Exploitation (CNE) was developed alongside two new doctrinal categories emerging in 1996: more aggressive “Computer Network Attack” (CNA), which uses that access to destroy information or systems, and “Computer Network Defense” (CND). For exploitation versus attack, see (Owens et al. 2009; Lin 2010:63).

[6] Especially NSCID-6 and Executive Order 12,333. The development of satellite reconnaissance had earlier challenged mid-twentieth-century conceptions of espionage. For a vivid sense of the difficulty of resolving these challenges, see (Falk 1962: 45-82).

[7] Quotation from secret decision with redacted name and date, p. 63, quoted in Amended Memorandum Opinion, No. BR 13-109 (Foreign Intelligence Surveillance Court August 29, 2013).


Who’s hacking whom?

Who is hacking whom? The case of Brian Farrell (a.k.a. “Doctor Clu”) raises a host of interesting questions about the nature of hacking, vulnerability disclosure, the law, and the status of security research. Doctor Clu was brought to trial by FBI agents who identified him by his Internet Protocol (IP) address. But Clu was using Tor (The Onion Router) to hide his identity, so the FBI had to find a way to “hack” the system to reveal his identity. They didn’t do this directly, though. Allegedly, they subpoenaed some information security researchers at Carnegie Mellon University’s Software Engineering Institute (SEI) for a list of IP addresses. Why did SEI have the IP addresses? Ironically, these Department of Defense-funded researchers had bragged about a presentation they would give at the Black Hat security conference on de-anonymising Tor users “on a budget.” For whatever reason, they had Clu’s IP address as a result of their work, and the FBI managed to get it from them. Clu’s defense team tried to find out how exactly it was obtained and argued that this was a violation of the Fourth Amendment, but the judge refused: IP addresses are public, he said, even on Tor, where users have no ‘expectation of privacy.’

In this case, security researchers ‘hacked’ Tor in a technical sense; but the FBI also hacked the researchers in a legal sense – by subpoenaing the exploit and its results in order to bring Clu to trial. As in the recent WannaCry ransomware attack, or the Apple iPhone vs. FBI San Bernardino terrorism investigation of early 2016, this case reveals the entanglement of security research, the hoarding of exploits and vulnerabilities, the use of those tools by law enforcement and spy agencies, and ultimately citizens’ right to privacy online. The rest of this piece explores this entanglement, and asks: what are the politics of disclosing vulnerabilities? What new risks and changed expectations exist in a world where it is not clear who is hacking whom? What responsibilities do researchers have to protect their subjects, and what expectations do Tor users have to be protected from such research?

“Tor’s motivation for three hops is Anonymity”[1]

“Tor is a low-latency anonymity-preserving network that enables its users to protect their privacy online” and enables “anonymous communication” (AlSabah et al., 2012: 73). The Tor network is a mesh of volunteer-run proxy servers through which data is bounced via relays, or nodes. As of this writing, more than 7,000 relays enable the transferral of data, applying “onion routing” as a tactic for anonymity (Spitters et al., 2014).[2] Onion routing was first developed and designed by the US Naval Research Laboratory in order to secure online intelligence activities. Data sent using Tor passes through a circuit of three relays (entry, middle, exit): the client wraps the data in one layer of encryption per relay, and each relay decrypts exactly one layer at its “hop” before forwarding the data to the next onion router. In this way, no single relay sees both the “clear text” and the originating IP address, masking the identity of the user and providing anonymity. At the end of a browsing session the user history is deleted along with the HTTP cookie. Moreover, the greater the number of people using Tor, the higher the anonymity level for users connected to the network; volunteers around the world provide servers and enable the Tor traffic to flow.
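
To make the layering concrete, here is a minimal sketch of onion wrapping and unwrapping in Python, assuming the third-party cryptography library for the symmetric encryption. Real Tor negotiates per-hop keys with a telescoping handshake and uses its own fixed-size cell format; only the layering principle is modeled here.

# A minimal sketch of onion routing's layered encryption, using the
# "cryptography" library's Fernet for symmetric encryption. The relay
# names and key handling are illustrative only.
from cryptography.fernet import Fernet

# One symmetric key per relay in the circuit: entry, middle, exit.
circuit = ["entry", "middle", "exit"]
keys = {relay: Fernet(Fernet.generate_key()) for relay in circuit}

def wrap(message: bytes) -> bytes:
    """Client side: encrypt once per hop, innermost layer for the exit."""
    for relay in reversed(circuit):  # exit layer first, entry layer last
        message = keys[relay].encrypt(message)
    return message

def route(onion: bytes) -> bytes:
    """Network side: each relay peels exactly one layer and forwards."""
    for relay in circuit:  # entry, then middle, then exit
        onion = keys[relay].decrypt(onion)
        # The entry relay sees the sender's IP but only ciphertext;
        # the exit relay sees the clear text but not the sender.
    return onion

assert route(wrap(b"GET / HTTP/1.1")) == b"GET / HTTP/1.1"

The design consequence is the one the paragraph above describes: deanonymizing a user requires observing or controlling more than one position in the circuit, not breaking any single relay.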

There is also controversy surrounding the Tor network, connecting it to the so-called “Dark Net” and its “hidden services” that range from the selling of illegal drugs, weapons, and child pornography to sites of anarchism, hacktivism, and politics (Spitters et al., 2014: 1). All of this has increased the risks involved in using Tor. As shown in numerous studies (AlSabah et al., 2012; Spitters et al., 2014; Çalışkan et al., 2015; Winter et al., 2014; and Biryukov et al., 2013), different actors have compromised the Tor network, cracking its anonymity. These actors potentially include the NSA, authoritarian governments worldwide, and multinational corporations: all organisations that would like to discover the identity of users and their personal information (see, for example, the case of Hacking Team).[3] In particular, it should not be discounted that Tor exit node operators, whoever they are, have access to the traffic going through their exit nodes (Çalışkan et al., 2015: 29). Besides the governmental actors and security industries, and the activists, dissidents, and whistle-blowers using Tor, there are also academics who carry out research attempting to “hack” Tor.

The Researchers’ Ethical Dilemma

In January 2015, Brian Farrell, a.k.a. “Doctor Clu,” was arrested and charged with one count of conspiracy to distribute illegal “hard” drugs such as cocaine, methamphetamine, and heroin at a “hidden service” marketplace (Silk Road 2.0) on the so-called “Dark Net” (Geuss 2015).[4] His IP address (along with those of other users) was purportedly captured in early 2014 by two researchers, Alexander Volynkin and Michael McCord, while they were carrying out an empirical study at SEI, a non-profit organisation at Carnegie Mellon University (CMU) in Pittsburgh, U.S.A. The SEI researchers were supposedly able to bypass Tor’s security and, with their hack, obtain around 1,000 users’ IP addresses.

Since the beginning of 2014, an unnamed source had been giving authorities the IP addresses of those who accessed this specific part of the site (Vinton 2015).

The researchers from SEI at CMU were invited to present their methods and findings on how to “de-anonymize hundreds of thousands of Tor clients and thousands of hidden services” at the Black Hat security conference in July 2014, but they never showed up, and the reason for their cancellation is still posted on the website (Figure 1).

Figure 1: Black Hat 2014 website Schedule Update (link.)

As a screenshot from the Internet Archive’s Wayback Machine reflects (Figure 2), the researchers’ abstract bragged of a low-budget exploit of Tor costing around $3,000, and called on others to try it:

Looking for the IP address of a Tor user? Not a problem. Trying to uncover the location of a Hidden Service? Done. We know because we tested it, in the wild…. (Volynkin 2014).

Figure 2: Black Hat 2014 Briefings (link).

With regard to ethical research considerations, the researchers’ “anonymous subjects” did not know they were participating in a study-cum-hack. Many in the security research community regard this as an infringement of ethical standards included in the IEEE Code of Ethics, which prohibits “injuring others, their property, reputation, or employment by false or malicious action” (IEEE n.d.: section 2.4.2). Even when following such an officially recognized IEEE ethical code, “failure, discovery, and unintended or collateral consequences of success” (Greenwald et al. 2008:78) could potentially harm “objects of study” – in this case the visitors to Silk Road 2.0. The Dark Net is perhaps trickier than other fields, but there are also academics carrying out research there, contacting users, building their trust, and protecting their sources.[5] SEI supposedly started hosting part of Tor’s relays, but intentionally set them up as “malicious actors” so that the researchers could carry out their study. According to one anonymous source reported at Motherboard, SEI

had the ability to deanonymize a new Tor hidden service in less than two weeks. Existing hidden services required upwards of a month, maybe even two months. The trick is that you have to get your attacking Tor nodes into a privileged position in the Tor network, and this is easier for new hidden services than for existing hidden services (Cox 2015).
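
A back-of-the-envelope simulation suggests why this “privileged position” matters: if an adversary runs some fraction of relays, only circuits that happen to start and end on hostile nodes are exposed to end-to-end confirmation. The sketch below assumes uniform relay selection and invented counts; real Tor weights selection by bandwidth and pins long-lived entry guards, and the actual attack exploited a specific “relay early” protocol flaw (Dingledine 2014) rather than brute statistics.

# A back-of-the-envelope Monte Carlo, not a model of the actual SEI
# attack: if an adversary runs some fraction of relays, how often do
# they control both the entry and exit of a randomly built circuit,
# the position needed for end-to-end correlation? Uniform selection
# and the relay counts are simplifying assumptions.
import random

def compromise_rate(total_relays=7000, hostile=350, trials=100_000):
    relays = range(total_relays)
    hostile_set = set(range(hostile))  # first `hostile` ids are attacker-run
    hits = 0
    for _ in range(trials):
        entry, middle, exit_ = random.sample(relays, 3)
        if entry in hostile_set and exit_ in hostile_set:
            hits += 1
    return hits / trials

# With 5% of relays hostile, roughly 0.25% of circuits are fully
# exposed: small per circuit, but compounding over many circuits.
print(f"{compromise_rate():.4%}")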

It is crucial that the Tor Project be informed of an exploit even before it is released, so that it can fix the flaws that enable deanonymization. Over the past several years, researchers have continuously shared their data with the Tor Project and reported their findings, such as malicious attacks, or what is called “sniffing” – when exit relay information is compromised. Once a study is published, patches are developed and Tor improves upon itself as these breaches of security are uncovered. Unlike other empirical studies, the SEI researchers did not inform the Tor Project of their exploits. Instead, Tor discovered the exploits and contacted the researchers, who declined to give details. Only after the abstract for Black Hat was published online (late June 2014) did the researchers “give the Tor Project a few hints about the attack but did not reveal details” (Felten 2014). The Tor Project ejected the attacking relays and worked on a fix for all of July 2014, releasing a software update at the end of the month, along with an explanation of the attack (Dingledine 2014). As this case shows, not only “malicious actors” but also certain researchers can collect data on Tor users. According to Tor Project director Roger Dingledine, the SEI researchers acted inappropriately:

Such action is a violation of our trust and basic guidelines for ethical research. We strongly support independent research on our software and network, but this attack crosses the crucial line between research and endangering innocent users (Dingledine 2014).

A Subpoena for Research

Richard Nixon’s 1973 Grand Jury subpoena.

In November 2015, the integrity of these two SEI researchers was again brought into question when the rumour circulated that they had been subpoenaed by the FBI to hand over their collated IP addresses. According to CMU researcher Nicolas Christin, SEI is a non-profit rather than an academic institution; its researchers are not academics but are instead “focusing specifically on software-related security and engineering issues.” In 2015, SEI renewed a five-year government contract worth $1.73 billion (Lynch 2015). In an official media statement, CMU’s SEI responded by explaining that its mission encompassed searching for and identifying “vulnerabilities in software and computing networks so that they may be corrected” (CMU 2015). It is important to note that the US government (specifically the Departments of Defense and of Homeland Security) funds many of these research centers, such as CERT (Computer Emergency Response Team), a division of SEI that has existed ever since the Morris Worm first created a need for such an entity (Kelty 2011). To be precise, SEI is one of the Federally Funded Research and Development Centers (FFRDCs), which are

unique non-profit entities sponsored and funded by the U.S. government that address long-term problems of considerable complexity, analyze technical questions with a high degree of objectivity, and provide creative and cost-effective solutions to government problems (Lynch 2015).

Legally, in the U.S., the FBI, the SEC, and the DEA can all subpoena researchers to share their research. However, the obtained information was not for public consumption, but for an agency within the U.S. Department of Justice, the FBI. Matt Blaze, a computer scientist at the University of Pennsylvania, made the following statement about conducting research:

When you do experiments on a live network and keep the data, that data is a record that can be subpoenaed. As academics, we’re not used to thinking about that. But it can happen, and it did happen (Vitáris 2016).

Besides the ethical questions raised by researchers handing over their findings to the governments that have supported them (ostensibly with taxpayer money), the politics of security research and vulnerability disclosure continues to be a heated debate within academia and among the general public. It seems that subpoenas issued by law enforcement might provide a means to gather data on citizens and to obtain knowledge of academic research – which then remains hidden from the public. Computer security defense lawyer Tor Ekeland gave this comment:

It seems like they’re trying to subpoena surveillance techniques. They’re trying to acquire intel[ligence] gathering methods under the pretext of an individual criminal investigation (Vitáris 2016).

It is not clear whether the FBI was using a subpoena to acquire exploits, or whether the CMU (SEI) researchers were originally hired by the FBI and only later disclosed what happened, stating that they had been subpoenaed.[6] Either way, it raises the issue of whether the FBI required a search warrant in order to obtain the evidence – the IP addresses.

Internet Search and Seizure

In January 2016, Farrell’s defense filed a motion to compel discovery, in an attempt to understand exactly how the IP address was obtained, as well as the two-year history of working contracts between the FBI and SEI. In February 2016, the Farrell case came to court in Seattle, where it was finally revealed to the public that the “university-based research institute” was indeed SEI at CMU, subpoenaed by the FBI (Farivar 2016). The court denied the defense’s motion to compel discovery. This statement from the order—Section II, Analysis—written by US District Judge Richard A. Jones answered the question of whether a search warrant was needed to obtain IP addresses:

SEI’s identification of the defendant’s IP address because of his use of the Tor network did not constitute a search subject to Fourth Amendment scrutiny (Cox 2016).[7]

In order to claim protection under the Fourth Amendment, there needs to be a demonstration of an “expectation of privacy,” which is not merely subjective but recognized as reasonable by other members of society. Furthermore, Judge Jones claimed that IP addresses, “even those of Tor users, are public, and that Tor users lack a reasonable expectation of privacy” (Cox 2016).

Again, according to the party’s submissions, such a submission is made despite the understanding communicated by the Tor Project that the Tor network has vulnerabilities and that users might not remain anonymous. Under these circumstances Tor users clearly lack a reasonable expectation of privacy in their IP addresses while using the Tor network. In other words, they take a significant gamble on any real expectation of privacy under these circumstances (Jones 2016:3).

Judge Jones reasoned that Farrell didn’t have a reasonable expectation of privacy because he used Tor; but he also stated that IP addresses are public because Farrell willingly gave his IP address to an Internet Service Provider (ISP) in order to have internet access. Moreover, the precedent that Judge Jones drew upon to uphold his order, United States v. Forrester, ruled that individuals have no reasonable ‘expectation of privacy’ in internet IP addresses and email addresses:

The Court reaches this conclusion primarily upon reliance on United States v. Forrester, 512 F.2d 500 (9th Cir. 2007). In Forrester, the court clearly enunciated that: Internet users have no expectation of privacy in …the IP address of the websites they visit because they should know that this information is provided to and used by Internet service providers for the specific purpose of directing the routing of information (Jones 2016:2-3).

Trust

In March 2016, Farrell eventually pleaded guilty to one count of conspiracy regarding the distribution of heroin, cocaine, and amphetamines in connection with the hidden marketplace Silk Road 2.0, and received an eight-year prison sentence. In this case, the protection of an anonymous IP address was thwarted by governmental intrusion in various ways (a hack, a subpoena, a ruling). Privacy technologists, such as Christopher Soghoian, have provided testimony in similar cases, explaining that the government claims that obtaining IP addresses “isn’t such a big deal,” yet cannot seem to elucidate how it actually obtained them (Kopstein 2016).

“Campfire” XKCD 742

Whoever wanted to know the IP address would have to control many nodes in the Tor network, around the world; one would have to intercept this traffic and then correlate the entry and exit nodes. Besides the difficulty factor, these correlation techniques cost time and money; yet such exploits, including the one from the SEI researchers, were possible in 2014. Even if IP addresses are considered public when using Tor, they remain anonymous unless they are correlated with a specific individual’s device.[8] To correlate Farrell’s IP address, the FBI had to obtain a list of IP addresses from Farrell’s ISP, Comcast.
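
For intuition only, the following sketch shows the bare idea of such a correlation: an observer with timestamps from both an entry and an exit point can match flows by their inter-packet rhythm. The flows and the nearest-pattern score below are invented; real confirmation attacks use far more robust statistics over noisy, padded traffic.

# An illustrative sketch of end-to-end timing correlation: flows are
# matched by comparing inter-packet gaps with a naive nearest-pattern
# score. All numbers are made up for the example.
import statistics

def timing_signature(timestamps):
    """Inter-packet gaps form a rough fingerprint of a flow."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def match_score(entry_ts, exit_ts):
    """Lower is better: mean absolute difference of the gap patterns."""
    a, b = timing_signature(entry_ts), timing_signature(exit_ts)
    n = min(len(a), len(b))
    return statistics.mean(abs(x - y) for x, y in zip(a[:n], b[:n]))

# A flow seen entering the network, and two candidate exit flows.
entry_flow = [0.00, 0.12, 0.31, 0.55, 0.81]
exit_flows = {
    "user_A": [5.00, 5.13, 5.31, 5.56, 5.80],  # same rhythm, time-shifted
    "user_B": [5.00, 5.40, 5.45, 6.10, 6.20],  # different rhythm
}
best = min(exit_flows, key=lambda u: match_score(entry_flow, exit_flows[u]))
print(best)  # user_A: the entry flow correlates with this exit flow

Note that the encryption itself is never touched: correlation works purely on when packets move, which is why the expense lies in observing enough of the network at once.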

The judge’s cited reason for denying the motion to compel disclosure was that IP addresses are in and of themselves not private, as people willingly provide them to third parties. Nowadays people increasingly use the internet (and write emails) instead of the telephone, and to do so they must divulge their IP address to an ISP. When users are outside of the Tor anonymity network, their IP is exposed to an ISP. However, when inside the “closed field” of Tor, is there no expectation of privacy along with the security of the content? And, by extension, is there not an expectation of anonymity along with the security of users’ identity?

Judge Jones also argued that Farrell didn’t have an expectation of privacy because he handed over his IP address to strangers running the Tor network.

[I]t is the Court’s understanding that in order for a prospective user to use the Tor network they must disclose information, including their IP addresses, to unknown individuals running Tor nodes, so that their communications can be directed towards their destinations. Under such a system, an individual would necessarily be disclosing his identifying information to complete strangers (Jones 2016:3).

Herewith the notion of trust surfaces and plays a salient role. When people share information with ethnographers, anthropologists, activists, or journalists, it can take months, sometimes years, to gain their trust; and the anonymity of the source often needs to be maintained. These days, when people choose to use the Tor network, they trust a community that can see the IP address at certain points, and they trust that the Tor exit node operators neither divulge their collected IP addresses nor make correlations. In an era of so-called Big Data, as more user data is collated (by companies, governments, and researchers), correlation becomes easier and deanonymization occurs more frequently. With the Farrell case, researchers’ ethical dilemmas, the politics of vulnerability disclosure, and law enforcement’s “hacking” of Tor all played a role in obtaining his IP address. Despite judicial rulings to the contrary, it can be argued that Tor users do have an expectation of privacy, even as the capture of the IP addresses of users seeking anonymity online has been expedited.

Renée Ridgway is presently a PhD candidate at Copenhagen Business School (MPP) and a research affiliate with the Digital Cultures Research Lab (DCRL), Leuphana University, Lüneburg. Her research investigates the conceptual as well as technological implications of using search, ranging from the personalisation of Google to anonymous browsing using Tor. Recent contributions to publications include Ephemera, SAGE Encyclopaedia of the Internet, Hacking Habitat, Money Labs (INC), OPEN!, APRJA and Disrupting Business.

References

AlSabah, Mashael; Bauer, Kevin; and Goldberg, Ian. 2012. “Enhancing Tor’s Performance using Real-time Traffic Classification,” presented at CCS’12, Raleigh, North Carolina, USA, October 16–18.

Bartlett, Jamie. 2014. The Dark Net: Inside the Digital Underworld. London: William Heinemann.

Biryukov, A., Pustogarov, I., and Weinmann, R.P. 2013. “Trawling for Tor hidden services: Detection, measurement, deanonymization,” in Security and Privacy (SP), 2013 IEEE Symposium on. IEEE, pp. 80–94.

Çalışkan, Emin; Minárik, Tomáš; and Osula, Anna-Maria. 2015. Technical and Legal Overview of the Tor Anonymity Network. Tallinn: CCDCOE, NATO Cooperative Cyber Defence Centre of Excellence.

Carnegie Mellon University (CMU). 2015. “Media Statement.” November 18th. Available at: link.

Cox, Joseph. 2015. “Tor Attack Could Unmask New Hidden Sites in Under Two Weeks.” November 13th. Available at: link.

—. 2016. “Confirmed: Carnegie Mellon University Attacked Tor, Was Subpoenaed By Feds.” February 24th. Available at: link.

Dingledine, Roger, a.k.a. arma. 2014. “Tor security advisory: ‘relay early’ traffic confirmation attack,” Tor Project Blog. July 30th. Available at: link.

Dittrich et al. 2009. Towards Community Standards for Ethical Behavior in Computer Security Research. Stevens CS Technical Report 20091, April 20th. Available at: link.

Farivar, Cyrus. 2016. “Top Silk Road 2.0 admin “DoctorClu” pleads guilty, could face 8 years in prison.” Ars Technica, April 4th. Available at: link.

Felten, Ed. 2014. “Why were CERT researchers attacking Tor?” Freedom to Tinker Blog. July 31. Available at: link.

Fox-Brewster, Thomas. 2015. “$30,000 to $1 Million — Breaking Tor Can Bring In The Big Bucks.” Forbes Magazine. November 12th. Available at: link.

Geuss, Megan. 2015. “Alleged ‘right hand man’ to Silk Road 2.0 leader arrested in Seattle.” Ars Technica. January 21st. Available at: link.

Greenwald, Stephen J., et al. 2008. “Towards an Ethical Code for Information Security?” NSPW’08, September 22–25. Available at: link.

IEEE. n.d. IEEE Code of Ethics. Available at: link.

Jones, Richard A. 2016. Order on Defendant’s Motion to Compel, United States v. Farrell, CR15-029RAJ. U.S. District Court, Western District of Washington. Filed 02/23/16. Available at: link.

Kelty, Christopher M. 2011. “The Morris Worm.” Limn. Issue Number One: Systemic Risk. Available at: link.

Kopstein, Joshua. 2016. “Confused Judge Says You Have No Expectation of Privacy When Using Tor.” Motherboard. Available at: link.

Lynch, Richard. 2015. “CMU’s Software Engineering Institute Contract Renewed by Department of Defense for $1.73 Billion.” Press Release, Carnegie Mellon University. July 28th. Available at: link.

Spitters, Martijn; Verbruggen, Stefan; and van Staalduinen, Mark. 2014. “Towards a Comprehensive Insight into the Thematic Organization of the Tor Hidden Services,” presented at the 2014 IEEE Joint Intelligence and Security Informatics Conference, Los Angeles, CA, USA, December 15–17, 2014.

Vinton, Kate. 2015. “Alleged Silk Road 2.0 Operator’s Right-Hand Man Arrested On Drug Charges.” Forbes Magazine. January 21. Available at: link.

Vitáris, Benjamin. 2016. “FBI’s Attack On Tor Shows The Threat Of Subpoenas To Security Researchers.” Deep Dot Web Blog. March 8. Available at: link.

Volynkin, Alexander and McCord, Michael. 2014. “Deanonymizing users on a budget.” Black Hat 2014 Briefings. Available at: link.

Winter, Philipp; Köwer, Richard; et al. 2014. “Spoiled Onions: Exposing Malicious Tor Exit Relays.” In: Privacy Enhancing Technologies Symposium. Springer.


[1] (Winter et al., 2014: 6).

[2] https://torstatus.blutmagie.de/

[3] “The Italian organisation, which even its CEO called a “notorious” provider of government spyware, was looking to expand its line of products, Rabe said. That included targeting the anonymizing Tor network, where civil rights activists, researchers, pedophiles and drug dealers alike try to hide from the global surveillance complex” (Fox-Brewster 2015).

[4] (U.S. v. Farrell, U.S. District Court, W.D. Wash., No. 15-mj-00016) Complaint for Violation. Available at: link.

[5] I refer here specifically to Jamie Bartlett’s ‘The Dark Net’ research.

[6] February 24, 2016: “When asked how the FBI knew that a Department of Defence research project on Tor was underway, so that the agency could then subpoena for information, Jillian Stickels, a spokesperson for the FBI, told Motherboard in a phone call that ‘For that specific question, I would ask them [Carnegie Mellon University]. If that information will be released at all, it will probably be released from them.’” Available at: link

[7] The Fourth Amendment’s text, drafted in 1789 and later ratified as part of the Bill of Rights (the first 10 amendments to the US Constitution), reads: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized. Available at: link.

[8] http://whatismyipaddress.com