Interview: Kim Zetter
Christopher Kelty: So our first question is: What kind of technical or political thresholds have we crossed, and have you seen, in your time reporting on hacking and information security? Is Stuxnet [2010] a case of such a threshold, or the DNC [Democratic National Committee] hack? Since you’ve been doing this for a long time, maybe you have a particular sense of what’s changed, and when, over the last, say, decade or so?
Kim Zetter: I think we've crossed a number of thresholds in the last decade. And the DNC hack definitely is a threshold of a sort. But it's not an unexpected threshold. There's been a build-up to that kind of activity for a while. I think what's surprising about it is really how long it took for something like that to occur. Stuxnet is a different kind of threshold, obviously, in the military realm. It's a threshold not only in terms of having a proof-of-concept of code that can cause physical destruction—which is something we hadn't seen before—but also it marks a threshold in international relations, because it opens the way for other countries to view this as a viable option for responding to disputes instead of going the old routes: through the UN, or attacking the other nation, or sanctions or something like that. This is a very attractive alternative because it allows you to do something immediately and have an immediate effect, and also to do it with plausible deniability because of the anonymity and attribution issues.
CK: Why do you say this is long overdue?
KZ: With regard to the DNC hack, we've seen espionage before, and political espionage is not something new. The only thing that's new here is the leaking of the data that was stolen rather than, let's say, the covert usage of it. Obviously, the CIA has been involved in influencing elections for a long time, and other intelligence agencies have as well. But it's new to do it in this very public way, and through a hack, where it's almost transparent. You know, when the CIA is influencing an election, it's a covert operation—you don't see their hand behind it—or at least that's what a covert operation is supposed to be. You don't know who did it. And in this way, [the DNC hack] was just so bold.
But we've seen sort of a step-by-step progression of this in the hacking world. We saw when Anonymous hacked HBGary [2011] and leaked email spools there. We saw the Sony hack [2014] where they leaked email spools. And both of these put private businesses on notice that this was a new danger to executives. And then we saw the Panama Papers leak [2016], where it became a threat to wealthy individuals and governments trying to launder or hide money. And now that practice has moved into a different realm. So that's why I'm saying that this is long overdue in the political realm, and we're going to see a lot more of it now. And the DNC hack is a bit like Stuxnet in that it opens the floodgates—it puts a stamp of approval on this kind of activity for nations.
CK: This is at the heart of what I think a lot of people in the issue are trying to address. It seems that the nexus between hacking as a technical and practical activity and the political status of the leaks, the attacks, etc., is somehow inverting, so there's a really interesting moment where hacking moved from being something fun…[with] occasionally political consequences to something political…[with] fun as a side effect.
KZ: Right. I’ve been covering security and hacking since 1999. And we started off with the initial hacks; things like the “I Love You” virus, things…that were sort of experimental, that weren’t necessarily intentional in nature. People just…testing the boundaries of this realm: the cyber realm. And then e-commerce took off after 2000 and it became the interest of criminals because there was a monetary gain to it. And then we had the progression to state-sponsored espionage—instead of doing espionage in the old ways with a lot of resources, covert operatives, physical access, things like that. This opened a whole new realm; now we have remote destructive capabilities.
CK: So, let me ask a related question: in a case like the DNC hack, do we know that this wasn’t a case of someone who had hacked the emails and then found someone, found the right person to give them to, or who was contracted to do the hacking?
KZ: Yes. I think that’s a question that we may not get an answer to, but I think that…you’re referring to something that we call “hybrid attacks.” There are two scenarios here. One is that some opportunistic hacker is just trying to get into any random system, finds a system that’s valuable, and then decides to go find a buyer, someone who’s interested in [what was obtained]. And then the stuff gets leaked in that manner. If that were the case in DNC, though, there probably would have been some kind of exchange for money, because a hacker—a mercenary hacker like that—is not going to do that for free.
But then you have this other scenario, where you have what I’m referring to now as hybrid attacks. We saw something similar in the hack of the Ukraine power grid [2015–2016], where forensic investigators saw very distinct differences between the initial stages of the hack, and the later stages of the hack which were more sophisticated. The initial hack, which was done through a phishing attack in the same way [as the DNC was hacked], got them into a system and they did some reconnaissance and they discovered what they had. And then it looks like they handed the access off to more sophisticated actors who actually understood the industrial control systems that were controlling the electrical grid. And they created sophisticated code that was designed to overwrite the firmware on the grid and shut it off and prevent them from turning it back on.
So there is a hybrid organization where front groups are doing the initial legwork; they aren’t necessarily fully employed by a government or military, but are certainly rewarded for it when they get access to a good system. And then the big guys come in and take over.
When you look at the hack of the DNC and the literature around it—the reporting around it—they describe two different groups in that network. They describe an initial group that got in around late summer, early fall, around 2015. One group gets in and then the second group comes in around March 2016. And that’s the group that ultimately leaked the emails. It’s unclear if that was a cooperative relationship or completely separate. But I think we’re going to have this problem more and more, where you have either a hybrid of groups cooperating, or problems with multiple groups independently being in a system. And this is because there are only so many targets that are really high-value targets, who could be of interest to a lot of different kinds of groups.
CK: What I find interesting about hacking are some of the parallels to how we’ve dealt with preparedness over the last couple of decades, independent of the information security realm. You know, thinking about very unlikely events and needing to be prepared, whether that’s climate change–related weather events or emerging diseases. Some of the work that we’ve done in Limn prior to this has been focused on the way those very rare events have been restructuring our capacity to respond and prepare for things. Is there something similar happening now with hacking, and with events—basically starting with Stuxnet—where federal agencies but also law enforcement are reorienting around the rare events? Do you see that happening?
KZ: I suppose that’s what government is best at, right? Those big events that supposedly we can’t tackle ourselves. So I think it’s appropriate if the government focuses on the infrastructure issues. And I don’t mean just the critical infrastructure issues like the power grid and chemical plants, but the infrastructure issues around the internet. I don’t think that we should give it over entirely to them. But in some cases, they are the only ones that actually can have an influence. One example is the FDA [U.S. Food and Drug Administration], and its recent rules around securing medical devices for manufacturers and vendors who create medical devices. It’s so remarkable to think that there was never a security requirement for our medical devices, right? It’s only in the last year that they thought it appropriate to actually even look at security. But it shouldn’t be a surprise because we had the same thing with electronic voting machines.
CK: Yeah, it's a shock-and-laughter moment that seems to repeat itself. Switching gears a little bit: one of the questions we have for you has to do with your experience in journalism, doing this kind of work. Do you see interesting new challenges that are emerging, issues of finding sources, verifying claims, getting in touch with people? What are some of the major challenges you've encountered as a journalist trying to do this work over the last couple of decades?
KZ: I think that one of the problems that’s always existed [in] reporting [about] hackers is that unlike most other sources they’re oftentimes anonymous. And so you are left as a journalist to take the word of a hacker, what they say about themselves. You obviously put things in context in the story, and you say, “According to the hacker,” or “He is a 20-year-old student,” or “He’s based in Brazil.” There’s not a lot of ways you can verify who you’re talking to. And you also have the same kind of difficulties in verifying their information. Someone tells you they hacked a corporation and you ask, “Can you give me screenshots to show that you have access inside this network?” Well, they can doctor screenshots. What else can they give you to verify? Can they give you passwords that they used, can they tell you more about the network and how they got in? Can they give you a sample of the data that they stole? And then of course you have to go out and verify that. Well, the victim in many cases is often not going to verify that for you. They’re going to deny that they were hacked; they’re going to deny that they had security problems that allowed someone in. They may even deny that the data that came from them is their data. We saw that with parts of the DNC hack. And it was true that some of the data hadn’t come from them. It had come from someone else.
CK: Do you find that—do you think that—finding sources to tell you about this stuff is different for studying hacking than for other domains? Do you basically go back to the same sources over and over again once you develop a list of good people, or do you have to find new ones with every event?
KZ: In terms of getting comments from researchers, those are the kinds of sources I would go back to repeatedly. When you’re talking about a hacker, of course, you can only generally talk with them about the hacks that they claimed to have participated in. And then of course they can just disappear, like the Shadow Brokers. After that initial release and flurry of publicity, several journalists contacted the Shadow Brokers, got some interviews, and then the Shadow Brokers disappeared and stopped giving interviews. So that’s always the problem here. Your source can get arrested and disappear that way, or willfully disappear in other ways. You may only end up having part of the information that you need.
CK: We have a number of articles about the difficulty of interpreting hacks and leaks and the expectation that the content of the leaks will have an immediate and incontrovertible effect—Pentagon Papers-style, or even Snowden-style. A leak that will be channeled through the media and have an effect on the government. We seem to be seeing a change in that strategic use of leaks. Do you see that in your own experience here too? That the effectiveness of these leaks is changing now?
KZ: You know, I think we’re still working that out. We’re trying to figure out the most effective way of doing this. You have the WikiLeaks model that gets thousands of documents from Chelsea Manning, and then just dumps them online and is angry that no one is willing to sift through them to figure out the significance of them. And then you have the model, like the Snowden leak, where they were given in bulk to journalists, and then journalists sifted through them to try and find documents and create stories around them. But in that case, many of the documents were still published. Then we have the alternative, which is the Panama Papers, where the data is given to journalists, but the documents don’t get published. All we see are the stories around them. And so we’re left to determine from the journalists: Did they interpret them correctly? Do they really say what they think they say?
We saw that problem with the Snowden documents. In the initial story that the Washington Post published about the Prism program, they said that, based on their interpretation of the documents, the NSA [National Security Agency] had a direct pipeline into the servers of these companies. And they misinterpreted that. But because they made the documents available it was easy for the public to see it themselves and say, “I think you need to go back and re-look at this.” With the Panama Papers we don’t have that. So there are multiple models happening here, and it’s unclear which is the most effective. Also, with the DNC, we got a giant dump of emails, and everyone was sifting through them simultaneously. The same with the Ashley Madison emails: everyone was trying to find something significant. There is sort of the fatigue factor: if you do multiple stories in a week, or even two weeks, people stop reading them because it feels like another story exactly like the last one.
And that’s the problem with large leaks. On the one hand you expect that they’re going to have big impact; on the other hand, the reading public can only absorb or care about so many at a time, especially when so many other things are going on.
CK: The DNC hacks also seem to have a differential effect: there was the sort of Times and Post readers who may be fatigued hearing about it and who fell away quickly. But then there’s the conspiracy theory–Breitbart world of trying to make something out of the risotto recipes and spirit cooking. And it almost feels like the hack was not a hack of the DNC, but a hack of the media and journalism system in a way.
KZ: Yeah, it was definitely manipulation of the media, but only in the sense that they knew what media would be interested in, right? You’re not going to dump the risotto recipes on the media (although the media would probably start up with that just a bit, just for the humor of it). But they definitely know what journalists like and want. And I don’t think that journalists should apologize for being interested in publishing stories that could expose bad behavior on the part of politicians. That exists whether or not you have leaked emails. That’s what leaking is about. And especially in a campaign. There’s always manipulation of the media; government-authorized leaks are manipulation of the media as well.
CK: I think I like that connection, because what's so puzzling to me is that to call the DNC hacks "manipulating the presidential election" suggests that we haven't ever manipulated the presidential election through the media before, which would be absurd. [Laughter.] So there's a sort of irony to the fact that we now recognize it as something that involves statecraft in a different way.
KZ: And also that it was from an outsider: I mean, usually it’s the opposite party that’s manipulating the media to affect the outcome. I think they’re all insulted that an outside party was much more effective at it than any of them were. [Laughter.]
CK: Okay, one last question. What’s happening to hacker talent these days? Who’s being recruited? Do you have a sense in talking to people that the sort of professional landscape for hackers, information security professionals, etc., has been changing a lot? And if so, where are people going? And what are they becoming?
KZ: The U.S. government has been recruiting hackers from hacker conferences since hacker conferences began. From the very first DEFCON, undercover FBI and military were attending the conferences not only to learn what the hackers were learning about, but also to find talent. The problem of course is that as the cybersecurity industry grew, it became harder and harder for the government and the military to hold onto the talent that they had. And that’s not going to change. They’re not going to be able to pay the salaries that the private industry can pay. So what you see, of course, is the NSA contracting with private companies to provide the skills that they would have gotten if they could have hired those same people.
So what’s always going to be a problem is that the government is not always going to get the most talented [people]. They may get them for the first year, or couple of years. But beyond that, they’re always going to lose to the commercial industry. Was that your question? I’m not sure if I answered it.
CK: Well, it was, but I’m also interested in what kinds of international recruitment, what shake-up in the security agencies is happening around trying to find talent for this stuff? I know that the NSA going to DEFCON goes all the way back, but now even if you’re a hacker and you’re recruited by NSA, you may also be recruited by other either state agencies or private security firms who are engaged in something new.
KZ: Right. In the wake of the Snowden leaks, there may be people who would have been…willing to work for the government before who aren't willing to work there now. And certainly Trump is not going to help the government and military recruit talent in the way that past administrations could, by appealing to patriotism and, you know, national duty. I think that that's going to become much more difficult for the government under this administration.
Interview conducted February 2017.
Call for Contributions: The Effects of Climate Change on Life, Society, and the Environment in the Sahel
A collective book project provisionally coordinated by Florence Piron and Alain Olivier, of Université Laval, with an (open) scientific committee composed of Fatima Alher (OSM Niger), Sophie Brière (Québec), Gustave Gaye (Université de Maroua), Moussa Mbaye (Enda Tiers-monde, Senegal), Amadou Oumarou (Université Abdou Moumouni, Niger), and André Tindano (Université de Ouagadougou, Burkina Faso).
Objective
In a spirit of cognitive justice, this multidisciplinary, multilingual, evolving, open-access collective book will address the effects of climate change on life, society, and the environment in the Sahel, as seen, experienced, and analyzed by researchers, students, associations, and inhabitants of all the regions concerned, from Senegal to Eritrea.
Rationale
The circulation of scientific research results from one university to another in francophone Africa is still very laborious, all the more so with anglophone Africa. The survey conducted by the SOHA action-research project on the scientific resources of students in francophone Africa showed that master's theses and doctoral dissertations very often remain on departmental shelves and are not accessible from one university to another, even though their topics can be very similar. This situation hinders the development of local knowledge and lowers the quality of the science produced in these universities: it can be repetitive and less diverse or innovative than it would be if results circulated more widely.
This is the case for research on the effects of climate change in the Sahel. Over the course of the SOHA survey, we learned that the work of the Institut supérieur du Sahel at the Université de Maroua (northern Cameroon), which offers, among other things, a program in environmental sciences with a specialization in "desertification and natural resources" (http://uni-maroua.com/fr/ecole/institut-superieur-du-sahel), is little or not at all known at the Department of Geography of the UFR/SU at the Université de Ouagadougou 1 in Burkina Faso, and vice versa. Yet these units work on the same subject, which is of crucial importance for both countries. Indeed, numerous studies clearly show the real effects of climate change throughout the Sahel, notably the increased unpredictability of rainfall, which disrupts the agricultural cycle and leads to greater migration to the cities and many other environmental, social, and economic consequences.
How does knowledge about this issue circulate? Scientific articles are mostly published in journals of the global North, which are rarely open access and which, for structural reasons, publish very little work by researchers based in Sahelian universities, and even less by the students who have written theses or dissertations there. As for books on the subject, few publishing houses agree to make them open access. Our project therefore aims, first of all, to offer scientists and students of the Sahelian regions, across all disciplines, who work on the effects of climate change in their countries a new way to showcase and circulate the knowledge they produce: a collective open-access book, published under a Creative Commons license, printable on demand, in whole or by section.
We also want to include in this book the knowledge produced in farmers' and local organizations, as well as in NGOs: important empirical knowledge that tends to be dismissed by science, which sees in it only "grey literature" or knowledge of lesser quality. On the contrary, it seems important to us to revalue this knowledge from the perspective of circulating ideas and information.
Our conception of the effects of climate change is broad, so as not to leave out any discipline or theme addressed in the research produced by Sahelian universities or associations: effects on agriculture, on livestock farming, on biodiversity (threatened plant and animal species), on access to water, but also on families, on migration, on employment, and so on.
Originality of the project
- a collective open-access book made up of many chapters that can be regularly updated or supplemented with new chapters, open to comments on the web and published under a Creative Commons license (which allows free reuse)
- a book that can circulate as PDFs (the complete volume or individual sections) printed on demand in different countries
- a diverse set of authors: men and women, young people and elders, students, researchers, members of associations, groups, and collectives, and citizens. The only requirement: to be from the Sahel (or to collaborate very closely with people from the Sahel) and to have close ties with at least one Sahelian university
- a project that seeks contributions from all francophone countries with a Sahelian component (Senegal, Mauritania, Mali, Burkina Faso, Niger, Cameroon, Chad) through their universities, research centers, and associations; anglophone contributions from the Sahel (Nigeria, South Sudan, Eritrea) will also be welcome
- chapters in French, which may also be translated into other languages (African or European), either in full or as a long summary
- a project with a multidisciplinary and encyclopedic scope
- a diverse scientific committee and open, collaborative peer review aimed at the continuous improvement of the chapters.
The book creation process
This book project is open to everyone, in a spirit that rejects any notion of competition or exclusion. On the contrary, the book's aim of cognitive justice leads us to open it to all forms of knowledge and all epistemologies, insofar as they help us understand its subject. We will therefore work with all authors who want to take part in this adventure to improve their proposal or their text, so that this book becomes a valuable resource.
As for writing guidelines, it is entirely possible to include photos or other images. It is also possible to propose, as a chapter, the transcript of an interview or a testimony, or even a video for the online version, if this allows such knowledge to enter our book. However, to maximize the book's accessibility and usefulness, we ask that the use of specialized jargon be kept to a minimum.
Circulating this call throughout all Sahelian universities is crucial to respecting the goal of cognitive justice and the regional circulation of information. To this end, we appeal to everyone's goodwill, and we will compile an inventory of Sahelian research units working on climate change and of the associations interested in it, in order to recruit as many authors as possible.
Please note that the writing of these chapters is voluntary and will not be paid. The authors' reward will be to see their chapter circulate and be used in the service of the common good of Sahelian Africa.
Authors taking part in the book will be invited to exchange throughout the writing and editing process in a Facebook or WhatsApp group, in order to share ideas, references, and early drafts, in the spirit of mutual aid and collaboration promoted by cognitive justice.
Timeline
- April–August 2017: Inventory of research units and associations, and circulation of the call
- September 30: Deadline for sending a proposal (an abstract of a few sentences)
- September 2017 – January 2018: Receipt of chapters, editing, and progressive online publication (as soon as a chapter is ready, it goes online).
- April 2018: Publication of a complete version and printing of copies on demand.
How to participate
As soon as possible, send a message to propositions@editionscienceetbiencommun.org with a short biography (a few lines), the full contact details of your institution or association, and an abstract of the chapter (or chapters) you wish to propose. This abstract should present, in a few sentences, the content of the text you wish to propose, linking it, as far as possible, to a specific Sahelian context (region, city, village, research project, intervention, etc.).
The values and editorial project of Éditions science et bien commun
Please read them carefully on this page.
New Set of Books in Media and Communication Studies Unlatched
The project Knowledge Unlatched (KU) offers a library-sponsored model to ensure open access for monographs and edited collections in the arts & humanities and social sciences. Libraries can take part in Knowledge Unlatched by pledging for the offered title list. The KU project started in 2013 with a pilot of 28 books from 13 publishers, to create a platform where authors, publishers, libraries and readers could potentially all benefit from open access for books. Authors see their work disseminated on a maximized global scale, and in the KU model they are not charged BPCs (book processing charges). Freely accessible books have demonstrably been downloaded extensively, on top of the normal sales of the paper version. Citations do not necessarily increase, but they come faster.[1] Publishers can experiment with generating new revenue streams for open access books. Libraries pay (you could have a discussion on where the money should come from), but in return they support open access for books and deliver accessibility for their researchers (online, and with a more cheaply acquired paper version – see below). And readers can read and download the books for free.
KU is an example of a crowdfunded, or rather consortium-based, open access funding model. This model spreads costs and offers broad access to books. It is currently the most important platform, and most likely the biggest in terms of scale, offering a constant stream of open access books. But is this model working?
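The cost-spreading logic behind such a consortium model is simple: a fixed unlatching fee per title is divided among the pledging libraries, so each library's share shrinks as more of them join. Below is a minimal sketch of that mechanism; the fee, collection size, and pledge counts are purely hypothetical illustrations, not KU's actual figures.

```python
# Minimal sketch of consortium cost-spreading (hypothetical numbers only;
# these are not KU's actual fees or pledge counts).
UNLATCHING_FEE_PER_TITLE = 12_000   # assumed fee to "unlatch" one title
TITLES_IN_COLLECTION = 28           # a pilot-sized collection, for illustration

def cost_per_library(pledging_libraries: int) -> float:
    """Each pledging library pays an equal share of the total unlatching cost."""
    total_cost = UNLATCHING_FEE_PER_TITLE * TITLES_IN_COLLECTION
    return total_cost / pledging_libraries

# The per-library share decreases as more libraries pledge.
print(cost_per_library(200))  # 1680.0
print(cost_per_library(300))  # 1120.0
```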
I have mentioned before in a previous post that some libraries [2] and commentators [3] see that the model could be sensitive to double-dipping, and others have raised the issue of free-riding (non-paying members taking advantage of the open access books made available by paying members). KU is aware of these issues. As Frances Pinter, the founder of KU, points out in an interview: “in order to deal with the free rider issue, we’re giving the member libraries an additional discount. So, when they buy into the free and they buy the premium, the total will be less than any non-member would have to buy for a premium version.”[4] The collections offered are still fairly small compared to the global output, but we’re still in the early days of open access monograph publishing. If more publishers get involved and participate in growing the entire collection, more libraries could become interested as well.
The KU project started in 2013 with a pilot (Pilot 1: 2013-2014). The pilot consisted of a collection of 28 new books (front list) covering topics in the humanities and social sciences from 13 scholarly publishers, including the university presses of Amsterdam, Cambridge, Duke, Edinburgh, Liverpool, Manchester, Michigan, Purdue, Rutgers and Temple, plus the commercial presses Bloomsbury Academic, Brill and De Gruyter. The pilot was a success and all 28 titles were made available on the OAPEN repository. OAPEN is an online platform for peer-reviewed academic books in the humanities and social sciences. In collaboration with the Directory of Open Access Books index, it offers services for discovery, aggregation and preservation of open access books and related metadata. Just recently the library passed the milestone of 4 million downloads since it started reporting COUNTER-compliant usage statistics (September 2013).
We have reached a new milestone: over 4.2 million #openaccess #books downloaded. COUNTER compliant data by our partner @IRUSNEWS pic.twitter.com/ovPJGnK3ES
— OAPEN (@OAPENbooks) 29 March 2017
User statistics for the books unlatched by Knowledge Unlatched in the Pilot and Round 2 were published by KU in the fall of 2016. Just to give you an idea of the impact: the Round 2 collection contains 78 books, and these titles have reached just under 40,000 downloads. The average number of downloads per title (via OAPEN) is 503.[5]
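As a quick back-of-the-envelope check on those figures (the post reports only the 78-title count, the average of 503, and "just under 40,000" downloads, so the exact total below is implied rather than stated):

```python
# Back-of-the-envelope check of the reported KU Round 2 / OAPEN figures.
# The exact download total is not stated in the post; it is implied here.
titles = 78
avg_downloads_per_title = 503
implied_total_downloads = titles * avg_downloads_per_title
print(implied_total_downloads)  # 39234, i.e. "just under 40,000" downloads
```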
Back to the yearly rounds of open access books. The second round (Round 2: 2015-2016) was much larger and consisted of 78 new titles from 26 scholarly publishers. In this round, the collection was built around five main disciplines, namely anthropology, literature, history, politics, and media & communications. Of course, I'm really happy with the last one being among the main disciplines of the KU book lists. This round was a success too, and all 78 books have been unlatched. Ten of them deal with the subject of media and communication. This collection of 10 can be viewed and downloaded here.
The third round (2016-2017) includes 343 titles (147 frontlist and 196 backlist) from 54 publishers. It was recently announced that sufficient libraries have pledged for this round.[6] This means that in the next few months the entire list will become available for free downloading.
The good news is that of those 343 books, the media and communication studies list includes 9 brand-new titles (frontlist) and 13 backlist titles (not older than 2 years). I think it is a good move to add backlist titles as well, since we tend to focus only on the new and latest stuff. But as we all know, in the humanities and social sciences books have a long(er) life. Publishers of these 22 media and communication titles include, among others, Amsterdam University Press, Duke University Press, Intellect, transcript Verlag, UCL Press, University of Ottawa Press and University of Toronto Press. The books of this round will be made available on the OAPEN platform. Note that some of these publishers don't charge BPCs; they see the KU project as an addition to their business model and an option to publish books in open access. Others, like UCL Press and Amsterdam University Press, have a standard open access option for all their books and charge BPCs.[7]
Normally I don't post links to open access publications, since we have other spaces for this (Film Studies for Free and the recently launched OpenMediaScholar), but for the sake of completeness I'm adding the following list of books that have been or will be published in the OAPEN library from early to mid-2017.
Frontlist
- Cinema, Trance and Cybernetics, by Ute Holl, published by Amsterdam University Press.
- Dying in Full Detail: Mortality and Digital Documentary, by Jennifer Malkowski, published by Duke University Press.
- Unbecoming Cinema: Unsettling Encounters with Ethical Event Films, by David H., published by Intellect.
- Digital Environments: Ethnographic Perspectives Across Global Online and Offline Spaces, by Urte Undine Frömming, Steffen Köhn, Samantha Fox, Mike Terry (Eds.), published by transcript Verlag.
- Independent Theatre in Contemporary Europe: Structures – Aesthetics – Cultural Policy Theater, by Manfred Brauneck (Ed.), published by transcript Verlag.
- Performing the Digital: Performance Studies and Performances in Digital Cultures, by Timon Beyes, Martina Leeker, Imanuel Schipper (Eds.), published by transcript Verlag.
- The Web as History: Using Web Archives to Understand the Past and the Present, by Niels Brügger & Ralph Schroeder, published by UCL Press.
- eAccess to Justice Law, Technology and Media, by Jane Bailey, Valerie Steeves (Eds.), published by University of Ottawa Press.
- The Unmaking of Home in Contemporary Art Cultural Spaces, by Claudette Lauzon, published by University of Toronto Press.
Backlist
- Medium, Messenger, Transmission: An Approach to Media Philosophy, by Sybille Krämer, published by Amsterdam University Press.
- Cinema at the End of Empire: A Politics of Transition in Britain and India, by Priya Jaikumar, published by Duke University Press.
- Media, Erotics, and Transnational Asia, by Purnima Mankekar & Louisa Schein, published by Duke University Press.
- Crowd Scenes: Movies and Mass Politics, by Michael Tratner, published by Fordham University Press.
- The Digital Condition: Class and Culture in the Information Network, by Rob Wilkie, published by Fordham University Press.
- Music and Levels of Narration in Film, by Guido Heldt, published by Intellect.
- Undercover Reporting: The Truth About Deception, by Brooke Kroeger, published by Northwestern University Press.
- Hybridity, or the Cultural Logic of Globalization, by Marwan Kraidy, published by Temple University Press.
- Feminist Media: Participatory Spaces, Networks and Cultural Citizenship, by Elke Zobl, Ricarda Drüeke (Eds.), published by transcript Verlag.
- Gaze Regimes: Film and Feminisms in Africa, by Antje Schuhmann, Jyoti Mistry, Nobunye Levin, Dorothee Wenner, Christina von Braun, published by Wits University Press.
*Update (05-02-2017): Added more links to books available in the OAPEN library.
Notes
[1] Montgomery, L. (2015). Knowledge Unlatched: A Global Library Consortium Model for Funding Open Access Scholarly Books. p.8. http://www.knowledgeunlatched.org/wp-content/uploads/sites/3/2015/04/Montgomery-Culture-8-Chapter.pdf
[2] Blog by Martin Eve: On Open Access Books and “Double-Dipping”. January 31, 2015.
[3] Interview with Frances Pinter, Knowledge Unlatched, January 2013.
[4] Some literature on this topic: Ferwerda, E., Snijder, R., & Adema, J. ‘OAPEN-NL – A Project Exploring Open Access Monograph Publishing in the Netherlands: Final Report’, p. 4.
Snijder, R., (2013). A higher impact for open access monographs: disseminating through OAPEN and DOAB at AUP. Insights. 26(1), pp.55–59. DOI: http://doi.org/10.1629/2048-7754.26.1.55
Snijder, R. (2014). The Influence of Open Access on Monograph Sales: The experience at Amsterdam University Press. LOGOS 25/3, 2014, page 13‐23, DOI: http://doi.org/10.1163/1878‐4712‐11112047
[5] User Statistics for the KU Pilot Collection and Round 2
[6] http://www.knowledgeunlatched.org/2017/02/ku-unlatches/ (February 2017)
[7] For a list of publishers active in the field of media studies and their OA models, see the Resource page.
Image credit: Designed by Photoangel / Freepik
Journal Subscription and Open Access Expenditures: Opening the Vault
For years there was no overview of the total amount paid for journal subscriptions per institute or at the national level, due to restrictions in the contracts with publishers (the famous non-disclosure agreements). Information on universities' subscription expenditures has therefore been secret up to now.
With the transition towards open access and the related recent (re)negotiations with big publishers to include an open access publishing option in their journals, there is growing attention to institutional and national expenditures. We need insight into these costs for several reasons, not least to know what the cost-benefit would ideally be in a full shift to open access. But above all, it should be standard policy to know what is happening with tax money anyway.
In Finland, the Netherlands, the U.K., and at some institutions in Switzerland, these data have been made public because several Freedom of Information (FOI) and, in the Netherlands, Government Information Act (WOB) requests have been submitted and, above all, granted.
The following gives you a quick overview of the status and the available data:
Finland
In 2016, information on the journal subscription costs paid to individual publishers by Finnish research institutions was released by the Finnish Ministry of Education and Culture and its Open Science and Research Initiative, funded 2014–2017 (Academic Publishing Costs in Finland 2010–2015). Since these data span all expenditures, Finland is the first country to release this data for all of its institutions.
More information on the dataset can be found here and here.
The Netherlands
In 2016, two requests for information were submitted. The first request arrived on 28 April 2016 and asked for publication of the total amount each university had spent annually on subscriptions to academic journals and on the purchase of academic books over the past five years.
This request was granted in September 2016, and the subscription cost data has been released here.
In September 2016, all Dutch universities received a second request, relating to the open access license deals. Since 2015, negotiations have been under way with the big publishers about incorporating open access into the existing ‘big deals’. Currently the Netherlands is the only country where this is happening on such a united scale: all higher education institutes act as one party towards the publishers. Normally the details of those deals are covered by a non-disclosure agreement, but this second request asked for publication of those open access contracts. It was recently granted as well, and contract details for publishers such as Elsevier, Springer, Wiley, Taylor & Francis, ACS, Sage, Karger, Thieme, Walter de Gruyter, RSC and Emerald have now been made public. [1] A list of the publishers’ contracts can be found here.
U.K.
In the U.K., Stuart Lawson, a doctoral researcher at Birkbeck, University of London, has done some great work on gaining insight into the journal subscription expenditures of U.K. higher education institutions. Not all institutes are represented, but he managed to collect pricing data for 150 institutions with ten of the largest publishers for 2010–14. The raw data can be found here.
For the last three years (starting in 2014), for transparency reasons, he has also systematically collected the APC expenditure data of several research institutes. This data can be found here.
Switzerland
In 2015, also after an FOI request, ETH Zürich published an overview of the costs of journal subscriptions (2010–2014) with the three largest publishers: Elsevier, Springer and Wiley.
More data on the financial flows in Swiss academic publishing can be found in this report.
Image credit: Designed by Kjpargeter / Freepik
Call For Papers: Happiness
Emergent research into happiness is still largely situated in fields such as sociology, psychology, and neuroscience. Traditionally the uncontested domain of the Humanities, the question of “How should we live?” is too rarely approached in contemporary literary and cultural studies. Indeed, even in a thriving field such as affect studies, research still largely focuses on negative emotions, ugly feelings (Ngai), shame (Probyn), paranoia (Sedgwick), failure (Halberstam), and the cruelty of optimism (Berlant). But perhaps the critical tide is turning. Scholars are beginning to theorise the end of our well-rehearsed “hermeneutics of suspicion,” and conjecturing what comes after (Felski). They are mapping the potential path for a “eudaimonic criticism” (Pawelski & Moore) and an “ethics of hope” (Braidotti), looking towards a more positive future (Muñoz). Critical and historical studies on empathy (Meghan; Keen), joy (Potkay) and happiness itself (Ahmed) are also emerging.
Inspired by the growing body of scholarship on optimistic representations of gender, sexuality, and queerness, Writing from Below enters the fray with this invitation to explore and interrogate positive, successful, fulfilling, life-affirming expressions of gender and sexuality in contemporary or historical literature, culture, and society.
Papers could engage with (but are not limited to):
- Pleasure, joy, jouissance, delight, splendour, enchantment, empathy, and kindness
- Love, passion, and amour fou
- Middlebrow pleasure
- Living the queer life, and queer(ing) happiness
- Eudaimonia, mindfulness, and wellbeing
- Eudaimonic reading, and the eudaimonic turn in cultural and literary studies
- The hermeneutics of suspicion, paranoid and reparative reading, and their aftermath
- Ethical criticism, the ethics of hope, and hopelessness
- The body as site of happiness, joy, pleasure, etc.
- Affect, the theories and/or histories of positive emotions
- Celebration, and celebration as protest
- Burlesque, clowning, circus, carnivals, and the carnivalesque
- Kitsch, camp, and drag
- Sex and play, sex lives, fun
- Vitality, verve, vigour, and liveliness
- Biological life, bios, zoe, survival, sur-vivre [living-on], affirmation
- The utopian tendencies of gender studies and queer theory
- The (queer) future, queer futurity, and happy endings
Gender studies and queer theory are located across and between disciplines, and so we welcome submissions from across (and outside of, against and up against) the full cross-/inter/-trans-disciplinary spectrum, and from inside and outside of conventional academia.
Do not be limited. Be brave. Play with form, style, and genre. Invent, demolish, reimagine.
The deadline for submissions is 29 May 2017.
Written submissions, whether critical or creative, should be between 3,000 and 6,000 words in length, and should adhere strictly to the 16th edition of the Chicago Manual of Style.
All submissions—critical, creative, and those falling in between; no matter the format or medium—will be subject to a process of double-blind peer review.
For more information, please contact our guest editor, Dr Juliane Roemhild: J.Roemhild@latrobe.edu.au
Tilda Swinton by Brigitte Lacombe (2012)