Interview: Lorenzo Franceschi-Bicchierai

Gabriella Coleman: As you know well, the DNC [Democratic National Committee] hack and leak were quite controversial, with a batch of commentators and journalists debating whether the contents of the email were newsworthy, and another batch of commentators assessing their geopolitical significance. Our Limn issue features pieces that in fact assess the importance of the DNC hack in quite distinct ways: one author taking the position [that] the emails lacked consequential news, while another author forwards a public interest defense of their release. As someone who has covered these sorts of hacks and leaks, how important was the DNC-Podesta hack? And in what way? Does it represent a new political or technical threshold?

Lorenzo Franceschi-Bicchierai: They were definitely relevant from a geopolitical standpoint, if you will. All signs point to Russia. So, this was a nation-state hacking a legitimate target from the point of view of their interests; from the intelligence point of view, these were legitimate targets. So, that’s not too crazy, and this is something that would get a lot of people on Twitter saying, “Well, spies are gonna spy.” But I think it was interesting because, of course, it did cross a threshold or line, if you will. Because this wasn’t just hacking and spying on them, it was putting everything in the open. They published the stolen data through WikiLeaks, they published through their own leaking platforms, they had this website called DCLeaks, and they had the famous Guccifer 2.0. They had all kinds of channels, and they were actually very good at using multiple channels just to get as much attention as possible, even if the content wasn’t actually that compelling.

GC: What you’re suggesting is that the tradecraft of state spying has always worked through discretionary channels, that is, back channels that only the intelligence world has access to. And all of a sudden here’s this moment where they decide to move everything from the back stage to the front stage.

LFB: Yes, that’s definitely a good way to put it. Spies, by definition, work in the shadows. We know about intelligence operations when they leak or when someone talks, and sometimes it’s years later. At that point it’s not even that newsy. But in this case, it all unfolded in real time, which was very interesting. The big question in the DNC-Podesta hack that we’ll probably never know the answer to—if the DNC and CrowdStrike didn’t come out with the attribution, if they didn’t come out saying this is Russian intelligence—is: would the hackers and the Russian government have responded in the way that they have? By systematically leaking documents and slowly dripping information? I don’t know. Maybe they would, maybe they wouldn’t….

GC: Ah, that’s a good point. Did CrowdStrike call out Russia before the material was leaked?

LFB: Yes, CrowdStrike attributed the attack to Russia on June 14, and Guccifer 2.0 came out on June 15. But it’s important to note that there was another website, also linked to Russia, that started leaking stuff before that. The site was called DCLeaks and it started publishing stolen documents just a few days before CrowdStrike went public, but it’s almost like no one noticed it right away. DCLeaks published hacked emails from Hillary’s [Clinton] staff on June 8, according to the Internet Archive. This means that perhaps Russia was already going to leak documents, and CrowdStrike’s accusation only accelerated the plan. Perhaps they were planning to release the more interesting stuff closer to the election, but they felt like they somehow had to respond to the public accusation. Who knows!

GC: Your point is an important one because it suggests that perhaps the execution of this hack and leak was experimental, and it also seemed quite sloppy.

LFB: Maybe it was the plan all along. Even if it wasn’t, it definitely didn’t look very well planned at times. I think the best example is this Guccifer 2.0 persona. He—let’s say “he,” just because they claim to be male—showed up a day after the CrowdStrike and Washington Post reports, and it definitely seemed like the character was a little bit thrown together. He claimed to be a hacktivist trying to take the credit he deserved, which would have made sense if he really wasn’t a Russian spy or someone working for Russian spies. But then he chooses the name of another famous hacker as his own, simply appending “2.0” to it, and—you know this better than me—some hackers can have a big ego; why not just come up with a different name?

GC: True, they want recognition for their work.

LFB: Just like writers. You know, it’s like, “I wanna have people know that I did something that I think [is] awesome and worthy of recognition” thing. We all have our egos. And using the same name as another famous hacker from years ago just sounds very strange. I don’t think I’ve ever seen that before.

GC: It’s funny to imagine the meeting where this happened in some nondescript Russian intelligence office where someone’s like, “All right, we are looking for a volunteer to play the role of the hacker….” And whoever got nominated or volunteered didn’t do a very good job. Which is a little bit weird because Russia does seem to obviously have a lot of talent in this area.

LFB: Yeah, they seem to be very good with these information campaigns and deception campaigns, and stuff like that. It’s always possible that they contracted this out to someone. Maybe they thought this would be an easy job, but somehow it snowballed.

GC: Let’s turn to the next question, which is related to the first one. Many of these recent leaks, from Cablegate to the DNC leaks, are massive, and the journalistic field mandates quick turnaround so that you have to report on this material very quickly, right? What interpretive or other challenges have you faced when reporting on these hacks and leaks?

LFB: Yes, there are many. You definitely nailed one of the biggest ones: the quickness and the fast-paced environment. And I think that sources are catching up to it, or sources of leaks and publishers of leaks, I guess. There are still large data dumps that just drop out of the blue, and everyone scrambles to search through them. But, for example, WikiLeaks has become very good at staging leaks in phases. They slowly put out stuff because they know very well that this extends the time [an issue] gets covered and the attention they get. With the Podesta leaks, it was almost every day that there was something new.

GC: Right, that was very well timed and orchestrated.

LFB: And it wouldn’t have worked if they had just dumped everything the first day. Because we’re humans too and we get overwhelmed. And everyone gets—readers get overwhelmed too. And if you dump 3,000 emails, you’re just going to get a certain level of attention. If you do it in segments, and in phases, then you get more attention. I think sources are catching up to that.

LFB: But the other challenge is that sometimes you get things wrong, or you just assume that the documents are correct, and you publish the story based on the documents, saying, “Oh, this happened.” And maybe you haven’t had time to verify. There’s also competition. You always want to be the first. The ideal scenario is always getting something exclusively so you have the time to go through it. The advantage, though, of having stuff in the public is the crowdsourcing aspect. So, for example, when the Shadow Brokers data came out, pretty much everyone in the infosec world spent the entire day, all their free time, looking through what had come out. And they published their thoughts and their findings in real time on Twitter.

For example, one of these people was Mustafa Al-Bassam. So that’s something that maybe you can’t get if you have information exclusively. And then getting something exclusively obviously has its advantages, but that’s one of the drawbacks. You don’t get the instant feedback from a large community.

GC: And that seems to have happened with the recent CIA-WikiLeaks leak as well.

LFB: And it happened with the Hacking Team leak. It was very useful for me and others to keep an eye on Twitter and see what people found because there was just so much data… That’s also exactly what happened when the Shadow Brokers dumped hacking tools stolen from the NSA [National Security Agency]. These weren’t just emails or documents that a lot of people could look at and understand or try to verify. These were sophisticated pieces of code that needed people with a lot of technical skills to understand and figure out what they were used for and whether they worked. Luckily there’s a lot of very good infosec people on Twitter and just following their analysis on the social network was really useful for us journalists.

GC: Based on what you’ve seen and reported, do you think that we—not just lay people, but experts on the subject—are thinking clearly on vulnerability? Is there a focus in the right place on threat awareness, technical fixes, bug bounties, vulnerability disclosure, or do you think people are missing something or are misrepresenting the problem?

LFB: In the infosec world there’s sort of a fetish for technical achievements. And it’s understandable; it’s not the only field. But sometimes this fetish for the latest, amazing zero-day, or the new proof-of-concept way to put ransomware on a thermostat—which, you know, is tough, I wrote a story about it—sometimes makes us forget that these are still kind of esoteric, maybe even unrealistic, threats. In the real world, what happens usually is phishing, or your angry partner or ex-partner still knows the password to your email and gets in after you break up… stuff like that. Some cybersecurity expert might scoff at this and say, “That’s not hacking,” but that’s what hurts the most.

And I think that, for example, Citizen Lab has done a great job of highlighting some real-world cases of abuse, of hacking tools used against regular people, but also dissidents and human rights defenders. And in many of those cases, there was no fancy exploit, there was no amazing feat of coding or anything involved. It was just maybe a phishing email or phishing SMS [text message]. So I think that we could all—both journalists and the industry—do a better job of explaining the real risks to an average person and telling them what to do, because just scaring them is not going to help.

GC: Yeah, this is a great point and reminds me of public health–type campaigns: in this case, a concerted security hygiene program to teach everyday people the basics of security. The history of biomedical public health campaigns is instructive here. When the germ theory of illness was gaining ground, it took enormous effort and labor to convince people to change their habits, like washing their hands and covering their mouths when they sneezed. It took a few decades of public health campaigns both to convince people that there was something called bacteria that could make you sick, and that you had to change your behavior. So why wouldn’t we need something similar for computer security? But that’s obviously something that info security companies—rightfully so—are probably not going to invest in.

LFB: Yeah, there’s not a lot of money in that. But I think that we could demand more and expect more from companies that are maybe only tangentially in the infosec industry—like Google, Facebook, these big giants—that everyone uses, more or less. So they can really make a big difference. If Google made two-step verification mandatory, or if they just made it an option to choose when you create your account, that could make a huge difference in the adoption of these measures.

GC: That’s an excellent point.

GC: Let’s turn to another final question: Can you tell us a little bit about challenges you face writing on hackers and security?

LFB: One of the challenges is cutting through the noise. Infosec and cybersecurity have become so popular now that there’s so much noise. And it’s very easy to get lost in the daily noise. And as an online journalist, the risk is double because that’s kind of my job: I have to be on every day and see what happens every day. Let me give you an example: yesterday there was some revelation about a vulnerability in the web versions of Telegram and WhatsApp. It made a lot of noise. It wasn’t that big of a deal in the sense that we don’t know how many people are affected. Probably quite a few. But we don’t know how many people use the web versions of these apps.

Another challenge here is that so many people are trying to position themselves as experts in this field. As a journalist, it’s sometimes very hard to select your sources wisely because there are a lot of people that want to say something. They want to have their opinion broadcast; they just want to join the fray and talk about the latest infosec news.

GC: How do you go about resolving that noise? Are there some experts that you rely on more than others? Do you talk with colleagues?

LFB: Yes, I think it’s a combination of everything you said. Talking to colleagues helps. I work with a really great journalist, Joseph Cox, who you know as well. It helps sometimes to share…. We ask each other: who shall I talk to? That helps. It’s also just a matter of time. When I started out, it was really hard to tell [who to talk to]. You would go on Twitter or just…everyone seemed like an expert. It’s very easy to say “cybersecurity expert” or whatever, and make claims that sound more or less informed.

The PR and marketing machine behind the infosec world is also very strong. Every time there’s a breaking story, we get dozens of emails pitching random people saying stuff that is not even that interesting. But there’s a lot of money involved, and so marketing is very powerful in this world. I think after a while you just become very cynical—in a good way. If you smell the marketing campaign, then you’re like, okay, I should probably ignore this because it’s just marketing.

GC: Right. Is there sometimes a situation where it is a marketing campaign, but it is also a really cool important technology that has the potential to change things, or already has?

LFB: Yeah, sometimes attention is warranted. I’m trying to think of an example. I mean, for example, [the cybersecurity company] Kaspersky has a really big marketing side, and they do push their research very strongly through their marketing and PR people. Most of the time, their research is actually very interesting, so using marketing is not necessarily bad. There’s just too much of it now. The problem with marketing is mostly when the sources or companies try to make their research look too good or make unfounded claims. Obviously I understand that they’re trying to get attention. But I think that—they don’t realize it—that sometimes can backfire.

GC: Right, that’s a good point. And you know, I’m always thinking of potential PhD topics for my students; it would be really interesting to study the domain of infosec company research and the processes of knowledge vetting. How is it similar to or different from academic peer review? And as you say, there are a lot of very respected researchers, and the material coming out of there is often very strong and important. But from my understanding…they will limit what they release too. Right?

LFB: As a company, yeah.

GC: Right, because you don’t want people being able to take things from you. So there’s this fine line between researching, getting the data out there, but maybe not always being able to reveal everything.

LFB: And that’s why, for example, an average Citizen Lab report is more interesting than an average infosec company X and Y report, because—and this is the point that Ron Deibert, the director of Citizen Lab, made when I spoke to him recently—you know, we don’t have to hide anything. And they want to encourage other people to look at the data and look at it themselves.

Another big challenge is the anonymity and pseudonymity of sources. It’s almost like a default…I don’t have the numbers…but I think a big part of my sources and my colleagues’ sources are often anonymous or pseudonymous. They have a nickname, they have an alias. And the challenge sometimes is: Is this the same person I spoke to the other night? And the challenge there is not just verifying who they are, which is sometimes impossible; the challenge is sometimes keeping your head straight, and your sanity. Because the person sounds a little bit different. And “sounds” is probably the wrong word…because the tone is different…and you start thinking, is this a group? A friend of the guy or the lady that I spoke to the other day? But I think that when this happens, you have to focus on the content of the conversation, what they’re talking about, what documents they might be providing. The story might be there, although…it’s sometimes easy to forget, but what readers care the most about is people. So, the hacker, the hacktivist, is very often one of the most interesting parts of every story.

GC: Right, often there’s a lot of mystique around them or hacker groups. And…I know this well from my research about how difficult it can be to always be dealing with pseudonymous people. I thought Jeremy Hammond was an agent provocateur by the way he acted. And I was completely wrong, you know. It can be very hard to suss out these things.

LFB: Definitely. I think that’s one of the biggest challenges, for sure. But it’s also interesting in a way. I don’t fault them for trying to protect their identity. And that’s just how it is. And that’s not going to change anytime soon. Sometimes it is frustrating. Sometimes you wish you could have that certainty. In real life, you see a face, and that’s the person. But in these cases, there’s not really much to go on.

Interview conducted March 2017.

Lorenzo Franceschi-Bicchierai is a staff writer at Motherboard, where he covers hacking, information security, and digital rights.

Interview: Kim Zetter

Christopher Kelty: So our first question is: What kinds of technical or political thresholds have we crossed—and have you seen—in your time reporting on hacking and information security? Is Stuxnet [2010] a case of such a threshold, or the DNC [Democratic National Committee] hack? Since you’ve been doing this for a long time, maybe you have a particular sense of what’s changed, and when, over the last, say, decade or so?

Kim Zetter: I think we’ve crossed a number of thresholds in the last decade. And the DNC hack definitely is a threshold of a sort. But it’s not an unexpected threshold; there’s been a buildup to that kind of activity for a while. I think what’s surprising about it is really how long it took for something like that to occur. Stuxnet is a different kind of threshold, obviously, in the military realm. It’s a threshold not only in terms of having a proof of concept of code that can cause physical destruction—which is something we hadn’t seen before—but also in international relations, because it opens the way for other countries to view this as a viable option for responding to disputes instead of going the old routes: through the UN, or attacking the other nation, or sanctions, or something like that. This is a very attractive alternative because it allows you to do something immediately and have an immediate effect, and also to do it with plausible deniability because of the anonymity and attribution issues.

CK: Why do you say this is long overdue?

KZ: With regard to the DNC hack, we’ve seen espionage before, and political espionage is not something new. The only thing that’s new here is the leaking of the data that was stolen rather than, let’s say, the covert usage of it. Obviously, the CIA has been involved in influencing elections for a long time, and other intelligence agencies have as well. But it’s new to do it in this very public way, and through a hack, where it’s almost transparent. You know, when the CIA is influencing an election, it’s a covert operation—you don’t see their hand behind it—or at least that’s what a covert operation is supposed to be. You don’t know who did it. And in this way, [the DNC hack] was just so bold.

[Kim Zetter is the author of Countdown to Zero Day (Broadway Books, 2015), the definitive book on the Stuxnet virus.]

But we’ve seen a sort of step-by-step progression of this in the hacking world. We saw Anonymous hack HBGary [2011] and leak email spools there. We saw the Sony hack [2014], where they leaked email spools. And both of these put private businesses on notice that this was a new danger to executives. And then we saw the Panama Papers leak [2016], where it became a threat to wealthy individuals and governments trying to launder or hide money. And now that practice has moved into a different realm. So that’s why I’m saying that this is long overdue in the political realm, and we’re going to see a lot more of it now. And the DNC hack is a bit like Stuxnet in that it opens the floodgates—it puts a stamp of approval on this kind of activity for nations.

CK: This is at the heart of what I think a lot of people in the issue are trying to address. It seems that the nexus between hacking as a technical and practical activity and the political status of the leaks, the attacks, etc., is somehow inverting, so there’s a really interesting moment where hacking moved from being something fun…[with] occasionally political consequences to something political…[with] fun as a side effect.

KZ: Right. I’ve been covering security and hacking since 1999. And we started off with the initial hacks; things like the “I Love You” virus, things…that were sort of experimental, that weren’t necessarily intentional in nature. People just…testing the boundaries of this realm: the cyber realm. And then e-commerce took off after 2000 and it became the interest of criminals because there was a monetary gain to it. And then  we had the progression to state-sponsored espionage—instead of doing espionage in the old ways with a lot of resources, covert operatives, physical access, things like that. This opened a whole new realm; now we have remote destructive capabilities.  

CK: So, let me ask a related question: in a case like the DNC hack, do we know that this wasn’t a case of someone who had hacked the emails and then found someone, found the right person to give them to, or who was contracted to do the hacking?

KZ: Yes. I think that’s a question that we may not get an answer to, but I think that…you’re referring to something that we call “hybrid attacks.” There are two scenarios here. One is that some opportunistic hacker is just trying to get into any random system, finds a system that’s valuable, and then decides to go find a buyer, someone who’s interested in [what was obtained]. And then the stuff gets leaked in that manner. If that were the case in DNC, though, there probably would have been some kind of exchange for money, because a hacker—a mercenary hacker like that—is not going to do that for free.

But then you have this other scenario, where you have what I’m referring to now as hybrid attacks. We saw something similar in the hack of the Ukraine power grid [2015–2016], where forensic investigators saw very distinct differences between the initial stages of the hack, and the later stages of the hack which were more sophisticated. The initial hack, which was done through a phishing attack in the same way [as the DNC was hacked], got them into a system and they did some reconnaissance and they discovered what they had. And then it looks like they handed the access off to more sophisticated actors who actually understood the industrial control systems that were controlling the electrical grid. And they created sophisticated code that was designed to overwrite the firmware on the grid and shut it off and prevent them from turning it back on.

So there is a hybrid organization where front groups are doing the initial legwork; they aren’t necessarily fully employed by a government or military, but are certainly rewarded for it when they get access to a good system. And then the big guys come in and take over.

When you look at the hack of the DNC and the literature around it—the reporting around it—they describe two different groups in that network. They describe an initial group that got in around late summer or early fall of 2015. One group gets in, and then the second group comes in around March 2016. And that’s the group that ultimately leaked the emails. It’s unclear if that was a cooperative relationship or completely separate. But I think we’re going to have this problem more and more, where you have either a hybrid of groups cooperating, or problems with multiple groups independently being in a system. And this is because there are only so many really high-value targets, ones that could be of interest to a lot of different kinds of groups.

CK: What I find interesting about hacking are some of the parallels to how we’ve dealt with preparedness over the last couple of decades, independent of the information security realm. You know, thinking about very unlikely events and needing to be prepared, whether that’s climate change–related weather events or emerging diseases. Some of the work that we’ve done in Limn prior to this has been focused on the way those very rare events have been restructuring our capacity to respond and prepare for things. Is there something similar happening now with hacking, and with events—basically starting with Stuxnet—where federal agencies but also law enforcement are reorienting around the rare events? Do you see that happening?

KZ: I suppose that’s what government is best at, right? Those big events that supposedly we can’t tackle ourselves. So I think it’s appropriate if the government focuses on the infrastructure issues. And I don’t mean just the critical infrastructure issues like the power grid and chemical plants, but the infrastructure issues around the internet. I don’t think that we should give it over entirely to them. But in some cases, they are the only ones that actually can have an influence. One example is the FDA [U.S. Food and Drug Administration] and its recent rules around securing medical devices, aimed at the manufacturers and vendors who create them. It’s so remarkable to think that there was never a security requirement for our medical devices, right? It’s only in the last year that they thought it appropriate to actually even look at security. But it shouldn’t be a surprise, because we had the same thing with electronic voting machines.

CK: Yeah, it’s a shock-and-laughter moment that seems to repeat itself. Switching gears a little bit: one of the questions we have for you has to do with your experience in journalism, doing this kind of work. Do you see interesting new challenges emerging—issues of finding sources, verifying claims, getting in touch with people? What are some of the major challenges you’ve encountered as a journalist trying to do this work over the last couple of decades?

KZ: I think that one of the problems that’s always existed [in] reporting [about] hackers is that unlike most other sources they’re oftentimes anonymous. And so you are left as a journalist to take the word of a hacker, what they say about themselves. You obviously put things in context in the story, and you say, “According to the hacker,” or “He is a 20-year-old student,” or  “He’s based in Brazil.”  There’s not a lot of ways you can verify who you’re talking to. And you also have the same kind of difficulties in verifying their information. Someone tells you they hacked a corporation and you ask, “Can you give me screenshots to show that you have access inside this network?” Well, they can doctor screenshots. What else can they give you to verify? Can they give you passwords that they used, can they tell you more about the network and how they got in? Can they give you a sample of the data that they stole? And then of course you have to go out and verify that. Well, the victim in many cases is often not going to verify that for you. They’re going to deny that they were hacked; they’re going to deny that they had security problems that allowed someone in. They may even deny that the data that came from them is their data. We saw that with parts of the DNC hack.  And it was true that some of the data hadn’t come from them. It had come from someone else.

CK: Do you find that—do you think that—finding sources to tell you about this stuff is different for studying hacking than for other domains? Do you basically go back to the same sources over and over again once you develop a list of good people, or do you have to find new ones with every event?

KZ: In terms of getting comments from researchers, those are the kinds of sources I would go back to repeatedly. When you’re talking about a hacker, of course, you can only generally talk with them about the hacks that they claimed to have participated in. And then of course they can just disappear, like the Shadow Brokers. After that initial release and flurry of publicity, several journalists contacted the Shadow Brokers, got some interviews, and then the Shadow Brokers disappeared and stopped giving interviews. So that’s always the problem here. Your source can get arrested and disappear that way, or willfully disappear in other ways. You may only end up having part of the information that you need.

CK: We have a number of articles about the difficulty of interpreting hacks and leaks and the expectation that the content of the leaks will have an immediate and incontrovertible effect—Pentagon Papers-style, or even Snowden-style. A leak that will be channeled through the media and have an effect on the government. We seem to be seeing a change in that strategic use of leaks. Do you see that in your own experience here too? That the effectiveness of these leaks is changing now?

KZ: You know, I think we’re still working that out. We’re trying to figure out the most effective way of doing this. You have the WikiLeaks model that gets thousands of documents from Chelsea Manning, and then just dumps them online and is angry that no one is willing to sift through them to figure out the significance of them. And then you have the model, like the Snowden leak, where they were given in bulk to journalists, and then journalists sifted through them to try and find documents and create stories around them. But in that case, many of the documents were still published. Then we have the alternative, which is the Panama Papers, where the data is given to journalists, but the documents don’t get published. All we see are the stories around them. And so we’re left to determine from the journalists: Did they interpret them correctly? Do they really say what they think they say?

We saw that problem with the Snowden documents. In the initial story that the Washington Post published about the PRISM program, they said that, based on their interpretation of the documents, the NSA [National Security Agency] had a direct pipeline into the servers of these companies. And they misinterpreted that. But because they made the documents available, it was easy for the public to see for themselves and say, “I think you need to go back and re-look at this.” With the Panama Papers we don’t have that. So there are multiple models happening here, and it’s unclear which is the most effective. Also, with the DNC, we got a giant dump of emails, and everyone was sifting through them simultaneously. The same with the Ashley Madison emails: everyone was trying to find something significant. There is sort of a fatigue factor: if you do multiple stories in a week, or even two weeks, people stop reading them because it feels like another story exactly like the last one.

And that’s the problem with large leaks. On the one hand you expect that they’re going to have a big impact; on the other hand, the reading public can only absorb or care about so many stories at a time, especially when so many other things are going on.

CK: The DNC hacks also seem to have a differential effect: there was the sort of Times and Post readers who may be fatigued hearing about it and who fell away quickly. But then there’s the conspiracy theory–Breitbart world of trying to make something out of the risotto recipes and spirit cooking. And it almost feels like the hack was not a hack of the DNC, but a hack of the media and journalism system in a way.

KZ: Yeah, it was definitely manipulation of the media, but only in the sense that they knew what media would be interested in, right? You’re not going to dump the risotto recipes on the media (although the media would probably start up with that just a bit, just for the humor of it). But they definitely know what journalists like and want. And I don’t think that journalists should apologize for being interested in publishing stories that could expose bad behavior on the part of politicians. That exists whether or not you have leaked emails.  That’s what leaking is about. And especially in a campaign. There’s always manipulation of the media; government-authorized leaks are manipulation of the media as well.

CK: I think I like that connection, because what’s so puzzling to me is that to call the DNC hacks “manipulating the presidential election” suggests that we haven’t ever manipulated the presidential election through the media before, which would be absurd. [Laughter.] So there’s a sort of irony to the fact that we now recognize it as something that involves statecraft in a different way.

KZ: And also that it was from an outsider: I mean, usually it’s the opposite party that’s manipulating the media to affect the outcome. I think they’re all insulted that an outside party was much more effective at it than any of them were. [Laughter.]

CK: Okay, one last question. What’s happening to hacker talent these days? Who’s being recruited? Do you have a sense in talking to people that the sort of professional landscape for hackers, information security professionals, etc., has been changing a lot? And if so, where are people going? And what are they becoming?

KZ: The U.S. government has been recruiting hackers from hacker conferences since hacker conferences began. From the very first DEFCON, undercover FBI and military were attending the conferences not only to learn what the hackers were learning about, but also to find talent. The problem of course is that as the cybersecurity industry grew, it became harder and harder for the government and the military to hold onto the talent that they had. And that’s not going to change. They’re not going to be able to pay the salaries that the private industry can pay. So what you see, of course, is the NSA contracting with private companies to provide the skills that they would have gotten if they could have hired those same people.

So what’s always going to be a problem is that the government is not always going to get the most talented [people]. They may get them for the first year, or couple of years. But beyond that, they’re always going to lose to the commercial industry. Was that your question? I’m not sure if I answered it.

CK: Well, it was, but I’m also interested in what kinds of international recruitment, what shake-up in the security agencies is happening around trying to find talent for this stuff? I know that the NSA going to DEFCON goes all the way back, but now even if you’re a hacker and you’re recruited by NSA, you may also be recruited by other either state agencies or private security firms who are engaged in something new.

KZ: Right. In the wake of the Snowden leaks, there may be people who would have been…willing to work for the government before who aren’t willing to work there now. And certainly Trump is not going to help the government and military recruit talent in the way that past administrations might have been able to appeal to patriotism and, you know, national duty. I think that that’s going to become much more difficult for the government under this administration.

Interview conducted February 2017.

Kim Zetter is an award-winning, senior staff reporter at Wired covering cybercrime, privacy, and security. She is the author of Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon.

Strengthening Democracy Through Open Education

This blog post was originally published by Patrick Blessinger as an article on University World News – you can access it here. Open Education: International Perspectives in Higher Education can be read and downloaded for free here. Open education is …

Here Be Monsters: A Punctum Publishing Primer

by EILEEN A. JOY

“Caring for [ourselves] is not self-indulgence, it is self-preservation and that is an act of political warfare.” ~ Audre Lorde

But we can’t (and we won’t!) continue to be administered by a ruthless regime of technocrats that wants to turn everyone and everything into bodiless data, into sermons sent over the[...]

The Final Countdown

The Final Countdown: Europe, Refugees and the Left Edited by Jela Krečič. There is a commonly accepted notion that we live in a time of serious crisis that moves between the two extremes of fundamentalist terrorism and right wing populism. The latter draws its power from the supposed threat of immigrants: it proposes to resolve the immigrant crisis by placing …

Call for Contributions: The Effects of Climate Change on Life, Society, and the Environment in the Sahel

A collective book project provisionally coordinated by Florence Piron and Alain Olivier of Université Laval, with an (open) scientific committee composed of Fatima Alher (OSM Niger), Sophie Brière (Québec), Gustave Gaye (Université de Maroua), Moussa Mbaye (Enda Tiers-monde, Senegal), Amadou Oumarou (Université Abdou Moumouni, Niger), and André Tindano (Université de Ouagadougou, Burkina Faso).

Objective

With a view to cognitive justice, this multidisciplinary, multilingual, evolving, open-access collective work will address the effects of climate change on life, society, and the environment in the Sahel, as seen, experienced, and analyzed by researchers, students, associations, and inhabitants of all the regions concerned, from Senegal to Eritrea.

Rationale

The circulation of scientific research results from one university to another in francophone Africa is still very laborious, all the more so with anglophone Africa. The survey conducted by the SOHA action-research project on the scientific resources available to students in francophone Africa showed that master’s theses and doctoral dissertations very often remain on departmental shelves and are not accessible from one university to another, even when their topics may be closely related. This situation slows the development of local knowledge and reduces the quality of the science produced in these universities: it can be repetitive and less diverse or innovative than if results circulated more widely.

This is the case for research on the effects of climate change in the Sahel. In the course of the SOHA survey, we learned that the work of the Institut supérieur du Sahel at the Université de Maroua (northern Cameroon), which offers, among other things, a program in environmental sciences with a track in “desertification and natural resources” (http://uni-maroua.com/fr/ecole/institut-superieur-du-sahel), is little or not at all known at the Department of Geography of the UFR/SU at the Université de Ouagadougou 1 in Burkina Faso, and vice versa. Yet these units work on the same subject, one of crucial importance for both countries. Indeed, a great deal of research clearly shows the real effects of climate change throughout the Sahel, notably the increased unpredictability of rainfall, which disrupts the agricultural cycle, drives more sustained migration toward the cities, and has many other environmental, social, and economic consequences.

How does knowledge about this issue circulate? Scientific articles are mostly published in journals in the Global North that are rarely open access and that, for structural reasons, publish very little work by researchers based in Sahelian universities, and even less by the students who have written theses or dissertations there. As for books on the subject, few publishing houses agree to make them open access. Our project therefore aims, first of all, to offer scientists and students from the Sahelian regions, across all disciplines, who work on the effects of climate change in their countries a new means of showcasing and circulating the knowledge they produce: an open-access collective book, published under a Creative Commons license, printable on demand, in whole or by section.

We also want to include in this book the knowledge produced in farmers’ and local organizations, as well as in NGOs: important empirical knowledge that is often dismissed by science as mere “grey literature” or knowledge of lesser quality. We believe, on the contrary, that it is important to revalue this knowledge from the perspective of circulating ideas and information.

Our conception of the effects of climate change is broad, so as not to exclude any discipline or theme addressed in the research produced by Sahelian universities or associations: effects on agriculture, on livestock farming, on biodiversity (threatened plant and animal species), on access to water, but also on families, migration, employment, and so on.

Originality of the Project

  • an open-access collective work made up of numerous chapters that can be regularly updated or supplemented with new chapters, open to comments on the web and published under a Creative Commons license (which allows free reuse)
  • a work that can circulate as PDFs (the complete volume or individual sections) printed on demand in different countries
  • a diverse set of authors: men and women, young people and elders, students, researchers, members of associations, groups, and collectives, and ordinary citizens. The only requirement: to be from the Sahel (or to collaborate very closely with people from the Sahel) and to have close ties with at least one Sahelian university
  • a project seeking contributions from all francophone countries with a Sahelian region (Senegal, Mauritania, Mali, Burkina Faso, Niger, Cameroon, Chad) through their universities, research centers, and associations; anglophone contributions from the Sahel (Nigeria, South Sudan, Eritrea) are also welcome
  • chapters in French, which may also be translated into other languages (African or European), in full or as a long abstract
  • a multidisciplinary and encyclopedic scope
  • a diverse scientific committee and open, collaborative peer review aimed at the continuous improvement of the chapters.

How the Book Will Be Created

This book project is open to everyone, in a spirit that rejects any form of competition or exclusion. On the contrary, the book’s aim of cognitive justice leads us to open it to all forms of knowledge and all epistemologies, insofar as they help us understand its subject. We will therefore work with all authors who wish to join this adventure to improve their proposals or texts, so that the book becomes a valuable resource.

As for writing guidelines, it is entirely possible to include photos or other images. It is also possible to propose, as a chapter, the transcript of an interview or testimony, or a video for the online version, if this allows such knowledge to enter our book. However, to maximize the book’s accessibility and usefulness, we ask authors to limit the use of specialized jargon.

Circulating this call in all Sahelian universities is crucial to honoring the aims of cognitive justice and regional circulation of information. To that end, we are counting on everyone’s goodwill, and we will conduct an inventory of Sahelian research units working on climate change and of the associations interested in it, in order to recruit as many authors as possible.

Note that writing these chapters is voluntary and will not be remunerated. The authors’ reward will be to see their chapters circulate and be used in the service of the common good of Sahelian Africa.

Authors participating in the book will be invited to exchange throughout the writing and editing process in a Facebook or WhatsApp group, sharing ideas, references, and early drafts, in the spirit of mutual aid and collaboration promoted by cognitive justice.

Timeline

  • April–August 2017: Inventory of research units and associations, and circulation of the call
  • September 30: Deadline for submitting a proposal (an abstract of a few sentences)
  • September 2017 – January 2018: Receipt of chapters, editing, and publication online on a rolling basis (as soon as a chapter is ready, it goes online).
  • April 2018: Publication of a complete version and printing of copies on demand.

How to Participate

As soon as possible, send a message to propositions@editionscienceetbiencommun.org with a short biography (a few lines), the full contact details of your institution or association, and an abstract of the chapter (or chapters) you wish to propose. The abstract should present, in a few sentences, the content of the text you intend to submit, connecting it, as far as possible, to a specific Sahelian context (a region, city, village, research project, intervention, etc.).

The Values and Editorial Project of Éditions science et bien commun

Please read them carefully on this page.

The writing guidelines are on this page.

New Set of Books in Media and Communication Studies Unlatched

The Knowledge Unlatched (KU) project offers a library-sponsored model to ensure open access for monographs and edited collections in the arts & humanities and social sciences. Libraries can take part in Knowledge Unlatched by pledging for the offered title list. The KU project started in 2013 with a pilot of 28 books from 13 publishers, creating a platform where authors, publishers, libraries, and readers could all potentially benefit from open access for books. Authors see their work disseminated on a maximized global scale and, in the KU model, are not charged book processing charges (BPCs). Freely accessible books have been downloaded extensively, on top of the normal sales of the paper version. Citations do not necessarily increase, but they come faster.[1] Publishers can experiment with generating new revenue streams for open access books. Libraries pay (where that money should come from is open to debate), but in return they support open access for books and deliver accessibility for their researchers (online, and with a paper version acquired at a discount; see below). And readers can read and download the books for free.

KU is an example of a crowdfunded, or better, consortium-based open access funding model. This model spreads costs and offers broad access to books. It is currently the most important platform, and most likely the biggest in terms of scale, offering a constant stream of open access books. But is this model working?

I have mentioned in a previous post that some libraries [2] and commentators [3] believe the model could be vulnerable to double-dipping, and others have raised the issue of free-riding (non-paying members taking advantage of the open access books made available by paying members). KU is aware of these issues. As Frances Pinter, the founder of KU, points out in an interview: “in order to deal with the free rider issue, we’re giving the member libraries an additional discount. So, when they buy into the free and they buy the premium, the total will be less than any non-member would have to buy for a premium version.”[4] The collections offered are still fairly small compared to the global output, but we are still in the early days of open access monograph publishing. If more publishers get involved and participate in the growth of the entire collection, more libraries could become interested as well.

The KU project started in 2013 with a pilot (Pilot 1: 2013–2014). The pilot consisted of a collection of 28 new (front-list) books covering topics in the humanities and social sciences from 13 scholarly publishers, including the university presses of Amsterdam, Cambridge, Duke, Edinburgh, Liverpool, Manchester, Michigan, Purdue, Rutgers, and Temple, plus the commercial presses Bloomsbury Academic, Brill, and De Gruyter. The pilot was a success, and all 28 titles were made available in the OAPEN repository. OAPEN is an online platform for peer-reviewed academic books in the humanities and social sciences. In collaboration with the Directory of Open Access Books index, it offers services for the discovery, aggregation, and preservation of open access books and related metadata. The library recently passed the milestone of 4 million downloads since it started reporting COUNTER-compliant usage statistics (September 2013).

User statistics for the books unlatched by Knowledge Unlatched in the Pilot and Round 2 were published by KU in the fall of 2016. Just to give you an idea of the impact: the Round 2 collection contains 78 books, and these titles have reached just under 40,000 downloads. The average number of downloads per title (via OAPEN) is 503.[5]
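To make the arithmetic behind these collection-level figures concrete, here is a minimal sketch of how per-title COUNTER download counts roll up into a total and an average. The titles and counts below are invented for illustration; real figures come from OAPEN's usage reports.

```python
# Hypothetical per-title download counts (invented numbers, not KU data).
downloads = {
    "Title A": 620,
    "Title B": 480,
    "Title C": 410,
}

# Collection-level statistics: sum across titles, then divide by the
# number of titles to get the average downloads per title.
total = sum(downloads.values())
average = total / len(downloads)
print(f"total: {total}, average per title: {average:.0f}")
```

The same two-step aggregation, applied to KU's 78 Round 2 titles, yields the "just under 40,000 downloads, 503 per title on average" figures quoted above.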


Back to the yearly rounds of open access books. The second round (Round 2: 2015–2016) was much larger and consisted of 78 new titles from 26 scholarly publishers. In this round, the collection was built on five main disciplines: anthropology, literature, history, politics, and media & communications. Of course, I’m really happy with the last one being among the main disciplines of the KU book lists. This round was a success too, and all 78 books have been unlatched. Ten of them deal with the subject of media and communication. This collection of ten can be viewed and downloaded here.

The third round (2016–2017) includes 343 titles (147 front list and 196 backlist) from 54 publishers. It has recently been announced that sufficient libraries have pledged for this round.[6] This means that in the next few months the entire list will become available for free download.

The good news is that of those 343 books, the media and communication studies list contains 9 brand-new (front-list) titles and 13 back-list titles (none older than two years). I think it is a good move to add back-list titles as well, since we tend to focus only on the new and latest material. But as we all know, in the humanities and social sciences books have a long(er) life. Publishers of these 22 media and communication titles include, among others, Amsterdam University Press, Duke University Press, Intellect, transcript Verlag, UCL Press, University of Ottawa Press, and University of Toronto Press. The books of round 3 will be made available on the OAPEN platform. Note that some of these publishers don’t charge BPCs; they see the KU project as an addition to their business model and an option to publish books in open access. Others, like UCL Press and Amsterdam University Press, have a standard open access option for all their books and charge BPCs.[7]

Normally I don’t post links to open access publications, since we have other spaces for this (Film Studies for Free and the recently launched OpenMediaScholar), but for the sake of completeness I’m adding the following list of books that have been or will be published in the OAPEN library from early to mid-2017.

Frontlist

Backlist

*Update (05-02-2017): Added more links to books available in the OAPEN library.

Notes

[1] Montgomery, L. (2015). Knowledge Unlatched: A Global Library Consortium Model for Funding Open Access Scholarly Books. p.8. http://www.knowledgeunlatched.org/wp-content/uploads/sites/3/2015/04/Montgomery-Culture-8-Chapter.pdf 

[2] Blog by Martin Eve: On Open Access Books and “Double-Dipping”. January 31, 2015.

[3] Interview with Frances Pinter, Knowledge Unlatched, January, 2013.

[4] Some literature on this topic: Ferwerda, E., Snijder, R., & Adema, J. ‘OAPEN-NL – A Project Exploring Open Access Monograph Publishing in the Netherlands: Final Report’, p. 4.

Snijder, R., (2013). A higher impact for open access monographs: disseminating through OAPEN and DOAB at AUP. Insights. 26(1), pp.55–59. DOI: http://doi.org/10.1629/2048-7754.26.1.55

Snijder, R. (2014). The Influence of Open Access on Monograph Sales: The experience at Amsterdam University Press. LOGOS 25/3, 2014, page 13‐23, DOI: http://doi.org/10.1163/1878‐4712‐11112047  

[5] User Statistics for the KU Pilot Collection and Round 2

[6] http://www.knowledgeunlatched.org/2017/02/ku-unlatches/ (February 2017)

[7] For a list of publishers active in the field of media studies and their OA models, see the Resource page.

Image credit: Designed by Photoangel / Freepik

Journal Subscription and Open Access Expenditures: Opening the Vault

For years there was no overview of the total amounts paid for journal subscriptions, per institute or at the national level, due to restrictions in the contracts with publishers (the famous non-disclosure agreements). Information on universities’ subscription expenditures has therefore been secret up to now.

With the transition towards open access and the related recent (re-)negotiations with big publishers to secure an open access publishing option in their journals, there is growing attention to institutional and national expenditures. We need insight into these costs for several reasons, not least to know what the cost-benefit balance would ideally be in a full shift to open access. But above all, knowing what happens with tax money should simply be standard policy.

In Finland, the Netherlands, the U.K., and at some institutions in Switzerland, this data has been made public, because in these countries several Freedom of Information (FOI) and Government Information Act (WOB, in the Netherlands) requests have been submitted and, above all, granted.

The following information is to give you a quick overview of the status and the available data:

Finland

In 2016, information on the journal subscription costs paid to individual publishers by Finnish research institutions was released by the Finnish Ministry of Education and Culture and its Open Science and Research Initiative, funded 2014–2017 (Academic Publishing Costs in Finland 2010–2015). Since this data spans all expenditures, Finland is the first country to release such data for all of its institutions.


Total costs by publisher

More information on the dataset can be found here and here.

The Netherlands

In 2016, two requests for information were submitted. The first request arrived on 28 April 2016 and asked for publication of the total amounts that universities had spent annually over the past five years on subscriptions to academic journals and on the purchase of academic books.

This request was granted in September 2016, and the subscription cost data has been released here.


Costs incurred by universities, 2015

In September 2016, all Dutch universities received a second request, relating to the open access license deals. Negotiations with the big publishers about incorporating open access into the existing ‘big deals’ have been under way since 2015. The Netherlands is currently the only country where this is happening on such a united scale: all higher education institutions act as one party toward the publishers. Normally the details of these deals are covered by non-disclosure agreements, but this second request asked for publication of the open access contracts. It, too, was recently granted, and contract details of publishers such as Elsevier, Springer, Wiley, Taylor & Francis, ACS, Sage, Karger, Thieme, Walter de Gruyter, RSC, and Emerald have now been made public.[1]

A list of the publishers’ contracts can be found here.

U.K.

In the U.K., Stuart Lawson, a doctoral researcher at Birkbeck, University of London, has done some great work gaining insight into journal subscription expenditures at U.K. higher education institutions. Not all institutes are represented, but he managed to collect pricing data for 150 institutions with ten of the largest publishers from 2010–14. The raw data can be found here.

For the last three years (starting in 2014), for transparency reasons, he has also systematically collected APC (article processing charge) expenditure data from several research institutes. This data can be found here.
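For readers who want to work with such FOI-released datasets themselves, a minimal sketch of the typical first step, totalling payments per publisher across institutions, might look as follows. The column names and amounts here are invented for illustration; the real datasets (Lawson's included) use their own schemas.

```python
import csv
import io
from collections import defaultdict

# Hypothetical FOI-style pricing data, inlined for a self-contained example.
raw = """institution,publisher,year,amount_gbp
Univ A,Elsevier,2014,1200000
Univ A,Wiley,2014,450000
Univ B,Elsevier,2014,900000
"""

# Sum the reported payments per publisher across all institutions.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["publisher"]] += int(row["amount_gbp"])

for publisher, amount in sorted(totals.items()):
    print(f"{publisher}: £{amount:,}")
```

The same grouping could of course be done per institution or per year; the point is only that once the contracts are out of non-disclosure, a few lines of aggregation make the spending patterns visible.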

Switzerland

In 2015, also following an FOI request, ETH Zürich published an overview of the costs of journal subscriptions (2010–2014) with the three largest publishers: Elsevier, Springer, and Wiley.

There is some more data on the financial flows in Swiss academic publishing to be found in this report.

Image credit: Designed by Kjpargeter / Freepik

#WorldsUpsideDown Exhibition

#WorldsUpsideDown, 11 March – 2 April @ Firstsite, Colchester. Riots. Revolts. Revolution. All flashing moments which throw the world – and our relationship with it – into question. From the uprising against the Russian Czar one hundred years ago to the Arab Spring and protests against war, austerity, and the continuing failure of politics as usual, people have pinned their …