Hacker Madness

Hackers induce hysteria. They are the unknown, the terrifying, the enigma. The enigma that can breach and leak the deepest secrets you’ve carelessly accreted over the years in varied fits of passion, desperation, boredom, horniness, obsession, and jubilation on your computers, phones, and the internet. Maybe you’re the government, maybe you’re just some innocent schmuck—maybe you’re both. Maybe you don’t deserve to be exposed, maybe you do. The common fear is that you will never know who exposed you. Is it a he, a she, or an it? The FBI? The NSA? You feel vulnerable, and what happened feels like black magic because you understand nothing about how it was done. Terrifying, fascinating, excruciating black magic, practiced by an enigma.

Or maybe you do know how the enigma did it, and you feel stupid: because the enigma exposed your lazy information security. Maybe your password was just “1234”, or your birthday; maybe you logged into a public Wi-Fi network without a VPN; and maybe, just maybe, you used the same password for all your accounts. You’re a moron for doing that, and you know it; but it never occurred to you that anyone would bother to hack you at Starbucks. You’re hysterical over an enigma that could be anywhere in the world, or could be your roommate, child, or lover in your own home.

I regularly observe this hysteria. I’m a defense lawyer who represents hackers in federal courts across the United States. I’m writing this in an airport in Kentucky after the sentencing of a client. He and his colleague hacked a cheap high school football fan website to protest the rape of a minor in Steubenville, Ohio by members of the high school football team. They posted a video of my client in a Guy Fawkes mask decrying the rape, and they helped organize protests over the rape in the town. The protests attracted national media attention, and the hack led the federal government to indict my client for felony computer crime. The federal government never prosecuted anyone involved in the rape.

My client was part of a movement protesting what they viewed as the small town’s attempted cover-up of the extent of the rape. Much ire was directed at the local county prosecutor (not to be confused with the federal prosecutors in Kentucky who indicted my client) who initially handled the case. The perception was that she was intentionally limiting the scope of the prosecution because she was closely connected to the football team through her son. Social media postings of football team members seemed to implicate more than the two football players she initially went after. Eventually, she recused herself from the case. After this, the town’s school superintendent, the high school principal, the high school wrestling coach, and the high school football coach were indicted on various felony and misdemeanor charges including obstruction of justice and evidence tampering. It’s unlikely any of this would have happened without the attention my client, along with many others, helped bring to the case.

The local prosecutor wasn’t even the one who got hacked. That person, perhaps out of fear, stayed out of it. Yet this prosecutor, in a letter submitted to the court at my client’s sentencing, breathlessly condemned my client as a terrorist—yes, a terrorist—for bringing attention to the sordid details of the attempted cover-up of the extent of a 16-year-old girl’s rape. A rape in which the girl, incapacitated by alcohol, was publicly and repeatedly penetrated and urinated on by members of the football team, their jocular enthusiasm captured in the photos they posted on social media. No one died, and no one except the rape victim was physically hurt, yet my client was called a terrorist and thrown in jail because a $15 website with an easily guessed password got hacked. All of this because of the embarrassment, the shame, and the vulnerability—not that of the rape victim, but of a town whose dark secrets had been breached and leaked.

My client got two years; the two rapists got one and two years respectively. My client didn’t physically or financially harm anyone. At best the damage was reputational, and that was self-inflicted by people in the town. My client didn’t rape a minor. Metaphorically, the town did, and in reality, members of its high school football team did. Nonetheless, in that case and in most I deal with, the federal criminal “justice” system hysterically treats hackers on par with rapists and other violent felons.

Including the Steubenville rape case, I’ve now had two clients called “terrorists” in open court. In the second case, the former boss of a client of mine, in a moment that almost made me laugh out loud in court, called him a terrorist at his sentencing. I suspect the boss was a bit jealous of my client’s journalistic talent and was ruefully avenging his own feelings of inadequacy and loss of control. This particular client had quit his job in a fit of pique after justifiably accusing his boss at the local TV station of engaging in crappy journalistic practices. After departing his job, he helped hack (allegedly) the LA Times website, which was owned by the same parent company as the TV station and ran on the same content management system; a few words were changed in a story about tax cuts.

The edits (the government liked to refer to them as the “defacement”) were removed and the article restored to its original state within forty minutes. For this, the sentencing recommendation from pre-trial services was 7½ years, the government asked for 5, and the judge gave him 2. Again, no one was physically hurt, the financial loss claims were dubious, and the harm was reputational, at best. But my client was sentenced more harshly than if he’d violently, physically assaulted someone. In fact, he’d probably have faced less sentencing exposure if he’d beaten his boss with a baseball bat.

Unsurprisingly, his actions were portrayed as a threat to the freedom of the press. There was some pious testimony from an LA Times editor about the threat to a so-called great paper’s integrity. But when the cries of terrorism are stripped away, a more mundane explanation for all the sanctimony emerges: the “victim’s” information security sucked. They routinely failed to deactivate passwords and system access for ex-employees. After the hack, they discovered scores of still-active user accounts for ex-employees that took them months to sort through and clean up. They stuck my terrorist client with the bill for fixing their bad infosec, of course. All of this because of the embarrassment, the shame, and the vulnerability—not that of an employee, but of a powerful organization.

Another of my clients, who lived in a corrupt Texas border town, was targeted by a federal prosecutor. The talented young man had committed the egregious sin of running a routine port scan on the local county government’s website using standard, commercially available software. Don’t know what a port scan is? Don’t worry, all you need to know is that it’s black magic. This client had also gotten into a tiff with a Facebook admin, exchanged some testy emails with him, but walked away from it while the admin continued to send him emails. A routine internet cat-fight of little import that wouldn’t raise an eyebrow with anyone even mildly experienced with the internet’s trash talking and petty squabbles.
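For the curious, the black magic is mundane enough to fit in a dozen lines: a port scan simply asks each numbered port on a machine whether any service is listening there. Here is a minimal illustrative sketch in Python (off-the-shelf tools like nmap do the same thing at scale):

```python
import socket

def scan_ports(host, ports):
    """Try to open a TCP connection to each port; report the ones that answer."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)  # give up quickly on silent ports
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted the connection
                open_ports.append(port)
    return open_ports

# Knock on a few well-known service ports of your own machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

That, more or less, is the act my client would ultimately plead to a misdemeanor over: knocking on doors and noting which ones answer.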

But this client, like most of my clients, was purportedly affiliated with Anonymous. This led to an interesting state of affairs that demonstrates both the fear and the contempt the government has for enigmatic hackers. In essence, the FBI detained my client and threatened him with a felony hacking prosecution unless he agreed to hack the ruthlessly violent Mexican Zeta Cartel. Fearing for his loved ones and himself, my client sensibly declined this death wish. But the FBI persisted. The FBI specifically wanted a document that purportedly listed all the U.S. government officials on the take from the Zetas. No one even knew if this document existed, but the FBI didn’t much care about that. After my client declined, he was charged with 26 felony counts of hacking and 18 felony counts of cyberstalking, the latter based on his interaction with the Facebook admin.

Naturally, this case was brought to my attention. After examining the Indictment and engaging in a few interesting discussions with the federal prosecutor, my client pleaded guilty to a single misdemeanor count of hacking related to his port scanning of the local government website. Better to take a misdemeanor than run the risk of a federal criminal trial, where the conviction rate is north of 90%. But the fact that this hysterical prosecution was brought in the first place reflects poorly on the Department of Justice’s exercise of prosecutorial discretion in hacking cases. Again, no one was hurt and no one lost money, but my client was facing a maximum of 440 years in jail under the original Indictment.

My hands-down favorite example of hacker-induced hysteria was directed at me and my co-counsel in open court. I couldn’t hack my way out of a paper bag, but prosecutors love to tar me by association with my clients. In this instance, on the eve of trial on a Friday in open federal court, the prosecutor—along with the FBI agent on the case—accused my co-counsel and me of hacking the FBI, downloading a top-secret document, removing the top-secret markings on it, and then producing it as evidence we wanted to use at trial. Co-counsel and I were completely baffled, exchanged glances, and then told the court we would answer on Monday as to the document’s origins—and as to this criminal, law-license-jeopardizing accusation.

It turns out we’d downloaded the document in question from the FBI’s public website. The FBI had posted the document because it was responsive to a Freedom of Information Act request, and had removed the top-secret markings in doing so. Needless to say, we corrected the record on Monday. Pro tip for rookie litigators: if your adversary produces a document you have a serious question about, it’s best to confer with your adversary off the record before you cast accusations in open court that implicate them in felony hacking and Espionage Act violations. But such is the hysteria hackers induce that it spills over to the lawyers who defend them. How many lawyers who defend murderers are accused of murder?

The feelings of vulnerability, fear of the unknown, and embarrassment that feed the hysterical reaction to hackers also lead to the fetishizing of hackers in popular culture. TV shows like Mr. Robot and House of Cards, and movies like Live Free or Die Hard, where the hackers are both villains and heroes, all exacerbate this fetish. And this makes life harder for me and my clients, because we have to combat these stereotypes pre-trial, at trial, and during their incarceration should that come to be. Pre-trial, my clients are subjected to irrational, restrictive terms of release that rest on the assumption that mere use of a computer will lead to something nefarious. During trial, we have to combat the jury’s preconceptions of hackers. And if and when they’re put in jail, convicted hackers are often treated on par with the worst, most violent felons. Almost all of my incarcerated clients were thrown in solitary confinement for reasons rooted in the same irrational, hacker-induced hysteria. But those are stories for another day.

The hysteria hackers induce is real, and it is dangerous. It leads to poorly conceived, poorly drafted, draconian laws like America’s Computer Fraud and Abuse Act. It distorts our criminal justice system by causing prosecutors and courts to punish mundane information security acts on par with rape and murder. Often, I receive phone calls from information security researchers, fear in their voices, worried that some routine, normally accepted part of their profession is exposing them to felony liability. Usually I have to tell them that it probably is.

And the hysteria destroys the lives of our best computer talents, who should be cultivated, not thrown in jail for mundane activities or harmless pranks. All the good computer minds I’ve met have done both. Thus hacker-induced hysteria is not only detrimental to our criminal justice system, distorting traditional notions of fairness, justice, and punishment with irrational fears; it is fundamentally harmful to our national economy. And that should give even the most ardent defenders of the capitalistic order at the Department of Justice and the FBI pause, if not stop them dead in their tracks, before pursuing hysterical hacking prosecutions.

The best proof that this hysteria is unwarranted and unnecessary most of the time is the fate of persecuted hackers and hacktivists themselves. Most of those arrested for pranks, explorations, and even risky, hard-core acts of hacktivism aren’t a detriment to society; they’re beneficial to our society and economy. After their youthful learning romps, they’ve matured their technical skills—unlearnable in any other fashion—into laudable projects. Robert Morris wrote the Morris Worm, and he’s responsible for one of the earliest CFAA cases because his invention got out of his control and basically slowed down the internet, such as it was, in 1988. Now he’s a successful Silicon Valley entrepreneur and a tenured professor at MIT who has made significant contributions to computer science. Kevin Poulsen is an acclaimed journalist; Mark Abene and Kevin Mitnick are successful security researchers. And those are just the old-school examples from the ancient—in computer time—1990s.

Younger hackers are doing the same. From the highly entertaining hacker collective LulzSec, Mustafa Al-Bassam is now completing a PhD in cryptography at University College London; Jake Davis is translating hacker lore, culture, and ethics for the public at large; Donncha O’Cearbhaill is employed at a human rights technology firm and is a contributor to the open source project Tor (no relation); Ryan Ackroyd and Darren Martyn are also successful security researchers. Sabu, the most famous member of LulzSec, has of course enjoyed a successful career as a snitch, hacking foreign government websites on behalf of the FBI and generally basking in the fame and lack of prison time his sell-out engendered. And I’m not going to talk about the young, entertaining hackers that haven’t been caught yet. But the ones I care about, the ones I think are important, aren’t interested in making money off your bad infosec. They’re just obsessed with how the system works, and a big part of that is taking the system apart. Perhaps I share that with them as a federal criminal defense lawyer.

All these hackers exemplify the harm that hysteria can do: misdirecting the energy of exactly the people who can help test, secure, and transform the world we occupy in the name of public values that we share, values our own government should be defending instead of destroying.

Image credit: “The Troll on Karl Johan Street” by Theodor Kittelsen, 1892.

Tor’s parents are from Norway, hence his name. Yes, it’s real. The only reason you think it should have an “H” in it is because you’ve watched that movie. Tor is way sexier than Chris Hemsworth. His name also predates the invention of The Onion Router and his becoming a computer lawyer. Don’t know what The Onion Router is? That’s ok, just know it’s black magic. Tor didn’t know what it was until everyone started asking if Tor was his real name when he repped weev, one of the most famous internet trolls in the English language. They still talk, despite the fact that weev is basically a neo-Nazi and the Gestapo tortured Tor’s dad for four days and then threw him in a concentration camp. His dad taught him resistance techniques and the value of a sense of humor in the face of the moral smugness of the state. Since weev, Tor has represented a bunch of hackers in federal courts across the United States, and he is going to take the non-public part of that and his other off-the-record representations to his grave. At which point—the point of his death—perhaps there will be an information dump, just for the Lulz. Or his name isn’t Tor Ekeland.

Interview: Mustafa Al-Bassam

Gabriella Coleman: Based on what you’ve seen and reported, do you think we (not just lay people, but experts on the subject) are thinking clearly about vulnerability? Is our focus in the right place (e.g., threat awareness, technical fixes, bug bounties, vulnerability disclosure), or do you think people are missing something, or misinterpreting the problem?

Mustafa Al-Bassam: Based on the kind of vulnerabilities that we [LulzSec] were exploiting at Fortune 500 companies, I don’t think there is a lack of technology or knowledge to stop vulnerabilities from being introduced; the problem is that there is a lack of motivation to deploy such knowledge. We exploited extremely basic vulnerabilities, such as SQL injection, in companies like Sony; these are quite easy to prevent.
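[To make the flaw concrete, here is a minimal illustrative sketch (Python with SQLite, not code from any actual breach) of the SQL injection pattern Al-Bassam describes, alongside the standard parameterized-query fix.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name, password):
    # DANGEROUS: user input is pasted directly into the SQL string, so the
    # input "' OR '1'='1" rewrites the query's logic to match every row.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # The fix: parameterized queries. The driver treats input as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

print(login_vulnerable("alice", "' OR '1'='1"))  # True  -- logged in without the password
print(login_safe("alice", "' OR '1'='1"))        # False -- the injection attempt fails
```

[The fix is a single line; as Al-Bassam says, the missing ingredient at these companies was motivation, not knowledge.]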

I believe the key problem is that most companies (especially those that are not technology companies, like Sony) don’t have much of an incentive to invest money in making sure their systems are vulnerability-free, because security isn’t a key value proposition in their business model; it’s merely an overhead cost to be minimized. Sony fired their entire security team shortly before they got hacked over 30 times in 2011. For such companies, security only becomes a concern when it becomes a PR disaster. So that’s what LulzSec did: make security a PR disaster.

We’ve seen this before: when Yahoo! was breached in 2014, the CEO made the decision not to inform customers of the breach, because doing so would have been a PR disaster that might have seen them lose customers to their competitors, and so lose money.

That raises the question: how can we expect companies to do the right thing and inform customers of breaches, if doing the right thing will cause them to lose money? And why should companies bother to invest in keeping their systems free of vulnerabilities, if they can simply brush compromises under the carpet? After all, it is the customer that loses from having their information compromised, rather than the company, as long as the customer keeps paying.

So I think if we can incentivize companies to be more transparent about their security and breaches, customers can make better-informed decisions about which products and services to use, making it more likely for companies to invest in their security. One way this might happen in the future is through the rise of cybersecurity insurance, which more and more companies are signing up for. A standard cybersecurity insurance policy should require the company to disclose to its customers when a breach occurs. That way, it makes more economic sense for a company to disclose breaches and also to invest in security to get lower insurance premiums and avoid PR disasters.

GC: I wanted to ask about the rise of cybersecurity insurance: whether major firms have all already purchased policies, what the policies currently look like, and whether they actually prevent good security, since companies can rely on insurance to recoup their losses?

Christopher Kelty: Yes, I don’t actually understand what cybersecurity insurance insures against. Does it insure brand equity? Does it insure against government fines? Lawsuits against a corporation for breach of duty? All of these things? Just curious.

GC: Exactly, I don’t think many of us have a sense of what this insurance looks like, and if you can give us a picture, even a limited picture, of what you know and how the insurance works, that would be a great addition to our issue.

MAB: The current cybersecurity insurance market premium is $2.5 billion, but it’s still early days because insurance companies have very little data on breaches with which to calculate what premiums should be (Joint Risk Management Section 2017: 9). As a result, premiums are quite high and too expensive for small and medium-sized businesses, and this will continue to be the case until cybersecurity insurers get more data about breaches and can properly calculate the risks.

Cybersecurity insurance has been used in several high-profile breaches, most notably by Sony Pictures, which received a $151 million insurance payout for the large internal network breach alleged to be by North Korea (Joint Risk Management Section 2017: 4).

These policies cover a wide range of losses including costs for ransomware payments, forensic investigations, lost income, civil penalties, lost digital assets, reputational damage, theft of money and customer notification.

I think in the long term it’s unlikely that companies will adopt a stance where they stop investing in security and just rely on the insurance to recoup losses, because insurance companies will have a concrete economic interest in making sure that payouts happen as rarely as possible, and that means raising the premiums of companies that constantly get breached until they can’t ignore their security problems. Historically, this economic cost has been shifted to the customer, because it’s usually the customer that loses when their data gets breached and the company doesn’t report it.

If anything, I believe that cybersecurity insurance will make companies more likely to do the right thing when they are breached and inform customers, because the costs of customer notification and reputational damage would be covered by the insurance. At the moment, if a company does the right thing and informs its customers of a breach, the company suffers reputational damage, so there is little incentive to do the right thing. This would prevent incidents such as Yahoo!’s failure to disclose a data breach affecting 500 million customers for over two years (Williams-Alvarez 2017).

CK: I wonder if there is more of a spectrum here, from bug bounties to vulnerabilities equities processes (VEP) to cybersecurity insurance, all of them ways to formalize the knowledge of when and where vulnerabilities exist, or when they are exploited. What are the pros and cons of these different approaches? (I can imagine that a VEP is really overly bureaucratic and unenforceable, whereas insurance might produce its own incentives to exploit or to over- or under-report for financial gain.) Any thoughts on this?

MAB: Bug bounties and cybersecurity insurance policies are controlled purely by the market and are an objective way to measure the economic value or impact of vulnerabilities, whereas a VEP is a more subjective process, shaped by political objectives.

In theory a VEP should be a safeguard to be used in situations where it is in the public interest to disclose vulnerabilities that may otherwise be more profitable to exploit, but this is not the case in practice. Take the recent WannaCry ransomware attack, for example, which used an exploit developed by the National Security Agency, and affected hundreds of companies around the world as well as the UK’s National Health Service (NHS). You have to ask if the economic and social impact of that exploit falling into the wrong hands was really worth all the intelligence activities the NSA used it for. How many people died because the NHS couldn’t treat patients when their systems were offline?

GC: Do you have a sense of what the US government (and others around the world) are doing to attract top hacker talent—for good and bad reasons? Should governments be doing more? Should it be an issue that we (in the public) know more about?

MAB: In the UK, the intelligence services like Government Communications Headquarters (GCHQ) run aggressive recruitment campaigns to recruit technologists, even going so far as to graffiti ‘hipster’ adverts on the streets of a techy part of London (BBC NewsBeat 2015). They have to do this because they know that their pay is very low compared to London tech companies. In fact, Privacy International, a charity that fights GCHQ, will pay you more to campaign against GCHQ than GCHQ will pay you to work for them as a technologist.

So in order to recruit top tech talent, they have to try to lure people in with the promise that the work will be interesting and “patriotic”, rather than well paid. That is obviously becoming a harder line to toe, though, because the intelligence agencies are less popular with technologists in the UK than ever, given the government’s campaign against encryption. Their talent pool is extremely limited.

What I would actually like to see, however, is key decision makers in government becoming more tech savvy themselves. Technology and politics are so intertwined these days that I think it’s reasonable that at least a few Members of Parliament should have coding skills. Perhaps someone should run a coding workshop or class for interested Members of Parliament?

CK: I have trouble understanding how improved technical knowledge of MPs would lead to better political decisions if (given your answer to the first question) all the incentives are messed up.  This is a very old problem of engineers vs. managers in any organization.   The engineers can see all the problems and want to fix them; the managers think the problems are different or unimportant.  Just to play devil’s advocate, is it possible that hackers, engineers, or infosec researchers also need a better understanding of how firms and governments work?  Is there a two-way street here?

MAB: I mean this in a more general sense: politicians make poor political decisions when they deal with technical information security problems they don’t understand, as with the recent encryption debate. In the UK, the Investigatory Powers Bill was recently passed, which allows the government to force communications platforms based in the UK to backdoor their products if they use end-to-end encryption. Luckily most of these platforms aren’t based in the UK, so it will have little impact. But it has a harmful effect on the UK technology sector: no UK technology company can now guarantee that its customers’ communications are fully secure, which makes UK tech firms less competitive.

A classic example of a poor political decision in this area is the EU cookie law, which requires all websites to ask users before they place cookies on their computers (The Register 2017). In theory it sounds great, but in practice most users just agree and click yes, because the request dialogs are disruptive to their user experience. A saner way to implement such a policy would be to require the few mainstream browsers to set website cookies only after user approval, rather than asking millions of websites to change their code.

There are already plenty of hackers and engineers who are involved in politics, but there are very few politicians who are involved in technology. Even when engineers consult with the government on policies, their advice is often ignored, as we have seen with the Investigatory Powers Bill.

Mustafa Al-Bassam (“tflow”) is a doctoral researcher at the Department of Computer Science at University College London. He was also one of the six core members of the hacking collective LulzSec.

References

BBC NewsBeat. (2015). “Spy agency GCHQ facing fines for ‘hipster’ job adverts on London streets.” November 27th. Available at: link.

Joint Risk Management Section. (2017). “Cybersecurity: Impact on Insurance Business and Operations.” Report by the Canadian Institute of Actuaries, the Casualty Actuarial Society, and the Society of Actuaries. Available at: link.

The Register. (2017). “Planned ‘cookie law’ update will exacerbate problems of old law – expert.” March 1st. Available at: link.

Williams-Alvarez, Jennifer. (2017). “Yahoo general counsel resigns amid data breach controversy.” Legal Week, March 2nd. Available at: link.

What Is To Be Hacked?

At the beginning of 2017, information security researcher, Amnesty International technologist, and hacker Claudio (“nex”) Guarnieri launched “Security Without Borders,” an organization devoted to helping civil society deal with the technical details of information security: surveillance, malware, phishing attacks, etc. Journalists, activists, nongovernmental organizations (NGOs), and others are all at risk from the same security flaws and inadequacies that large corporations and states are, but few can afford to secure their systems without help. Here Guarnieri explains how we got to this stage and what we should be doing about it.

***

Computer systems were destined to drive a global cultural and economic revolution, one the hacker community long anticipated. We saw the potential; we saw it coming. And while we enjoyed a brief period of reckless banditry, playing cowboys of the early interconnected age, we also soon realized that information technology would change everything, and that information security would be critical. The traditionally subversive and anti-authoritarian moral principles of hacker subculture have increasingly been diluted by vested interests. The traditional distrust of the state is only meaningfully visible in some corners of our community. For the most part—at least in its most visible part—members of the security community/industry are enjoying six-figure salaries, luxurious suites in Las Vegas, business-class travel, and media attention.

The internet has morphed with us: once an unexplored space we wandered in solitude, it has become a marketplace for goods, the primary vehicle for communication, and the place to share cat pictures, memes, porn, music, and news as well as an unprecedented platform for intellectual liberation, organization, and mobilization. Pretty great, right? However, to quote Kevin Kelly:

There is no powerfully constructive technology that is not also powerfully destructive in another direction, just as there is no great idea that cannot be greatly perverted for great harm…. Indeed, an invention or idea is not really tremendous unless it can be tremendously abused. This should be the first law of technological expectation: the greater the promise of a new technology, the greater is the potential for harm as well (Kelly 2010:246).

Sure enough, we soon observed the same technology of liberation become a tool for repression. It was inevitable, really.

Now, however, there is an ever more significant technological imbalance between states and their citizens. As billions of dollars are poured into systems of passive and active surveillance—mind you, not just by the United States, but by every country wealthy enough to do so—credible defenses either lag or remain inaccessible, generally available only to corporations with deep enough pockets. The few ambitious free software projects attempting to change things face rather unsustainable funding models, which rarely last long enough to grow the projects to maturity.

Nation states are well aware of this imbalance and use it to their own advantage. We have learned through the years that technology is regularly used to curb dissent, censor information, and identify and monitor people, especially those engaged in political struggles. We have seen relentless attacks against journalists and activists in Ethiopia, the crushing of protest movements in Bahrain, the hounding of dissidents in Iran, and the tragedy that Syria became, all complemented by electronic surveillance and censorship. It is no longer hyperbole to say that people are sometimes imprisoned for a tweet.

As a result, security can no longer be a privilege, or a commodity in the hands of the few who can afford it. Those who face imprisonment and violence in the pursuit of justice and democracy cannot succeed if they do not communicate securely, or if they cannot remain safe online. Security must become a fundamental right to be exercised and protected. It is the precondition for privacy, and a key enabler of the fundamental freedom of expression. While the security industry is becoming increasingly dependent—both financially and politically—on the national security and defense sector, there is a renewed need for structured social and political engagement from the hacker community.

Some quarters of the hacker community have long been willing to channel their skills toward political causes, but the security community lags behind. Eventually some of us become mature enough to recognize the implications and social responsibilities we have as technologists. Some of us get there sooner, some later; some never will. Having a social consciousness can even be a source of ridicule among techies. You can experience exclusion when you become outspoken on matters that the larger security and hacking communities deem foreign to their competences. Don’t let that intimidate you.

As educated professionals and technicians, we need to recognize the privilege we have in our deep understanding of the many facets of technology; we must realize that we cannot abdicate the responsibility of upholding human rights in a connected society while continuing to act as its gatekeepers. Whether creating or contributing to free software, helping someone in need, or pushing internet corporations to be more respectful of users’ privacy, dedicating your time and abilities to the benefit of society is a concretely political choice, and you should embrace it with consciousness and pride.

***

Today we face unprecedented challenges, and so we need to rethink strategies and re-evaluate tactics.

In traditional activism, the concept of “bearing witness” is central. It is the practice of observing and documenting a wrongdoing, without interfering, on the assumption that exposing it to the world, causing public outcry, might be sufficient to prevent it in the future. It is a powerful tactic and, at times, the only available and meaningful one. This wasn’t always the case. In activist movements, the shift of tactics is generally observed in reaction to the growth, legitimization, and structuring of the movements themselves as they conform to the norms of society and of acceptable behavior.

Similarly, as we conform too, we also “bear witness.” We observe, document, and report on the abuses of technology, which is a powerful play in the economic tension that exists between offense and defense. Whether it is a journalist’s electronic communications being intercepted or computer compromised, or the censorship of websites and blocking of messaging systems, the exposure of the technology empowering such repression increases the costs of its development and adoption. By bearing witness, we ensure such technologies can be defeated or circumvented, and consequently must be re-engineered. Exposure can effectively curb their indiscriminate adoption, and in effect becomes an act of oversight. Sometimes we can enforce in practice what the law cannot in words.

The case of Hacking Team is a perfect example. The operations of a company that produced and sold spyware to governments around the world were more effectively scrutinized and understood as a result of the work of a handful of geeks tracking and repeatedly exposing to public view the abuses perpetrated through the use of that same spyware. Unfortunately, regulations and controls never achieved quite as much. At a key moment, an anonymous and politicized hacker known mostly by the moniker “Phineas Fisher” (Franceschi-Bicchierai 2016) arrived, hacked the company, leaked all its emails and documents onto the internet, and quite frankly outplayed us all. Phineas, whose identity remains unknown almost two years later, had previously hacked Gamma Group, a British and German competitor of Hacking Team, and became a sort of mischievous hero in hacktivist circles for these brutal hacks and the total exposure of the companies’ deepest secrets. In a way, one could argue that Phineas got more attention from the public, and better results, than anyone had previously, myself included. Sometimes an individual, using direct action techniques, can do more than a law, a company, or an organization can.

However, there is one fundamental flaw in the practice of bearing witness: it requires accountability to be effective. It requires naming and shaming. And when the villain is not an identifiable company or individual, neither is possible in the digital world. The internet provides attackers plausible deniability and an escape from accountability. It makes it close to impossible to identify them, let alone name and shame them. And in a society bombarded with information and increasingly reminded by the media of the risks and breaches that happen almost daily, the few stories we do tell are becoming repetitive and boring. After all, next to the “majesty” of the Mirai DDoS attacks (Fox-Brewster 2016), or the hundreds of millions of online accounts compromised every other week, or even the massive spying infrastructure of the Five Eyes (Wikipedia 2017c), who in the public would care about an activist from the Middle East, unknown to most, being compromised by a crappy trojan (Wikipedia 2017d) bought from some dodgy website for 25 bucks?

We need to stop, take a deep breath, and look at the world around us. Are we missing the big picture? First, hackers and the media alike need to stop thinking that the most interesting or flamboyant research is the most important. When the human rights abuses of Hacking Team or FinFisher are exposed, it makes for a hell of a media story. At times, some of the research I have coauthored has landed on the front pages of major newspapers. However, those cases are exceptions, and not particularly representative of the reality of technology used as a tool for repression by a state. For every dissident targeted by sophisticated commercial spyware made by a European company, there are hundreds more infected with free-to-download or poorly written trojans that would make any security researcher yawn. Fighting the illegitimate hacking of journalists and dissidents is a never-ending cat and mouse game, and a rather technically boring one. However, once you get past the boredom of yet another DarkComet (Wikipedia 2017b) or Blackshades (Wikipedia 2017a) remote administration tool (RAT), or a four-year-old Microsoft Office exploit, you start to recognize the true value of this work: it is less technical and more human.

I have spent the last few years offering my expertise to the human rights community. And while it is deeply gratifying, it is also a mastodontic struggle. Securing global civil society is a road filled with obstacles and complications. And while it can provide unprecedented challenges to the problem-solving minds of hackers, it also comes with the toll of knowing that lives are at stake, not just some intellectual property, or some profits, or a couple of blinking boxes on a shelf.

How do you secure a distributed, dissimilar, and diverse network of people who face different risks and different adversaries, and who operate in different places, with different technologies and different services? It’s a topological nightmare. We—the security community—secure corporations and organizations with appropriate modeling, by standardizing and tightening the technology used, and by watching closely for anomalies in that model. But what we—the handful of technologists working in the human rights field—often do is merely “recommend” one stock piece of software or another and hope it is not going to fail the person we are “helping.”

For example, I recently traveled to a West African country to meet some local journalists and activists. From my perennial checklist of technological solutionism to preach everywhere I go, I suggested to one of these activists that he encrypt his phone. Later that night, as we met for dinner, he waved his phone at me upon coming in. The display showed his Android software had failed the encryption process, and corrupted the data on his phone, despite his having followed all the appropriate steps. He looked at me and said: “I’m never going to encrypt anything ever again.” Sometimes the technology we advocate is inadequate. Sometimes it is inaccessible, or just too expensive. Sometimes it simply fails.

However, tools aside, civil society suffers from a fundamental lack of awareness and understanding of the threats it faces. The missing expertise, and the financial inability to access the technological solutions and services that are available to the corporate world, certainly aren’t making things any easier. We need to approach this problem differently, and to recognize that civil society isn’t going to secure itself.

To help, hackers and security professionals first need to become an integral part of the social struggles and movements that are very much needed in this world right now. Find a cause and help others: a local environmental organization campaigning against fracking, a citizen journalist group exposing corruption, or a global human rights organization fighting injustice. Security-minded hackers could make a significant impact, first as conscious human beings and only second as techies, especially anywhere our expertise is so lacking.

And second, we need to band together. Security Without Borders is one effort to create a platform for like-minded people to aggregate. While it might fail in practice, it has succeeded so far in demonstrating that there are many hackers who do care. Whatever the model turns out to be, I firmly believe that through coordinated efforts of solidarity and volunteering, we can make those changes in society that are very much needed, not for fame and fortune this time, but for that “greater good” that we all, deep down, aspire to.

Claudio Guarnieri, aka Nex, is a security researcher and human rights activist. He is a technologist at Amnesty International, a researcher with the Citizen Lab, and the co-founder of Security Without Borders.

References

Fox-Brewster, Thomas. 2016. “How Hacked Cameras are Helping Launch the Biggest Attacks the Internet Has Ever Seen.” Forbes, September 25. Available at link.

Franceschi-Bicchierai, Lorenzo. 2016. “Hacker ‘Phineas Fisher’ Speaks on Camera for the First Time—Through a Puppet.” Motherboard, July 20. Available at link.

Kelly, Kevin. 2010. What Technology Wants. New York: Viking Press.

Wikipedia. 2017a. “Blackshades.” Wikipedia, last updated March 23. Available at link.

———. 2017b. “DarkComet.” Wikipedia, last updated May 14. Available at link.

———. 2017c. “Five Eyes.” Wikipedia, last updated April 19. Available at link.

———. 2017d. “Trojan horse.” Wikipedia, last updated May 12. Available at link.

Image credit: “The Good Samaritan, after Delacroix [After WannaCry]” by Vincent van Gogh, 1890.

Interview: Lorenzo Franceschi-Bicchierai

Gabriella Coleman: As you know well, the DNC [Democratic National Committee] hack and leak were quite controversial, with one batch of commentators and journalists debating whether the contents of the emails were newsworthy, and another batch assessing their geopolitical significance. Our Limn issue features pieces that in fact assess the importance of the DNC hack in quite distinct ways: one author takes the position [that] the emails lacked consequential news, while another offers a public interest defense of their release. As someone who has covered these sorts of hacks and leaks, how important was the DNC-Podesta hack? And in what way? Does it represent a new political or technical threshold?

Lorenzo Franceschi-Bicchierai: They were definitely relevant from a geopolitical standpoint, if you will. All signs point to Russia. So, this was a nation-state hacking a legitimate target from the point of view of its interests; from the intelligence point of view, these were legitimate targets. So, that’s not too crazy, and this is something that would get a lot of people on Twitter saying, “Well, spies are gonna spy.” But I think it was interesting because, of course, it did cross a threshold or line, if you will. Because this wasn’t just hacking and spying on them; it was putting everything in the open. They published the stolen data through WikiLeaks, they published through their own leaking platforms, they had this website called DCLeaks, and they had the famous Guccifer 2.0. They had all kinds of channels, and they were actually very good at using multiple channels just to get as much attention as possible, even if the content wasn’t actually that compelling.

GC: What you’re suggesting is that the tradecraft of state spying has always worked on discreet channels, that is, back channels that only the intelligence world has access to. And all of a sudden here’s this moment where they decide to move everything from the back stage to the front stage.

LFB: Yes, that’s definitely a good way to put it. Spies, by definition, work in the shadows. We know about intelligence operations when they leak or when someone talks, and sometimes it’s years later. At that point it’s not even that newsy. But in this case, it all unfolded in real time, which was very interesting. The big question in the DNC-Podesta hack, one we’ll probably never know the answer to, is this: if the DNC and CrowdStrike hadn’t come out with the attribution, if they hadn’t come out saying this is Russian intelligence, would the hackers and the Russian government have responded in the way that they did? By systematically leaking documents and slowly dripping information? I don’t know. Maybe they would, maybe they wouldn’t….

GC: Ah, that’s a good point. Did CrowdStrike call out Russia before the material was leaked?

LFB: Yes, CrowdStrike attributed the attack to Russia on June 14, and Guccifer 2.0 came out on June 15. But it’s important to note that there was another website, also linked to Russia, that started leaking stuff before that. The site was called DCLeaks, and it started publishing stolen documents just a few days before CrowdStrike went public, but it’s almost like no one noticed it right away. DCLeaks published hacked emails from Hillary [Clinton]’s staff on June 8, according to the Internet Archive. This means that perhaps Russia was already going to leak documents, and CrowdStrike’s accusation only accelerated the plan. Perhaps they were planning to release the more interesting stuff closer to the election, but they felt they somehow had to respond to the public accusation. Who knows!

GC: Your point is an important one because it suggests that perhaps the execution of this hack and leak was experimental; it also seemed quite sloppy.

LFB: Maybe it was the plan all along. Even if it wasn’t, it definitely didn’t look very well planned at times. I think the best example is this Guccifer 2.0 persona. He—let’s say “he,” just because they claim to be male—showed up a day after the CrowdStrike and Washington Post reports, and it definitely seemed like the character was a little bit thrown together. He claimed to be a hacktivist trying to take the credit he deserved, which would have made sense if he really weren’t a Russian spy or someone working for Russian spies. But then he chooses the name of another famous hacker as his own, simply tacking the 2.0 onto the end of it, and—you know this better than me—some hackers can have a big ego; why not just come up with a different name?

GC: True, they want recognition for their work.

LFB: Just like writers. You know, it’s that “I wanna have people know that I did something that I think [is] awesome and worthy of recognition” thing. We all have our egos. And using the same name as another famous hacker from years ago just sounds very strange. I don’t think I’ve ever seen that before.

GC: It’s funny to imagine the meeting where this happened in some nondescript Russian intelligence office, where someone’s like, “All right, we are looking for a volunteer to play the role of the hacker….” And whoever got nominated or volunteered didn’t do a very good job. Which is a little bit weird, because Russia obviously does seem to have a lot of talent in this area.

LFB: Yeah, they seem to be very good with these information campaigns and deception campaigns, and stuff like that. It’s always possible that they contracted this out to someone. Maybe they thought this would be an easy job, but somehow it snowballed.

GC: Let’s turn to the next question, which is related to the first one. Many of these recent leaks, from Cablegate to the DNC leaks, are massive, and the journalistic field mandates quick turnaround so that you have to report on this material very quickly, right? What interpretive or other challenges have you faced when reporting on these hacks and leaks?

LFB: Yes, there are many. You definitely nailed one of the biggest ones: the quickness and fast-paced environment. And I think that sources are catching up to it, or sources of leaks and publishers of leaks, I guess. There are still large data dumps that just drop out of the blue, and everyone scrambles to search through them. But, for example, WikiLeaks has become very good at staging leaks in phases. They slowly put out stuff because they know very well that it extends the time they get coverage [of an issue], the time they will get attention. With the Podesta leaks, there was something new almost every day.

GC: Right, that was very well timed and orchestrated.

LFB: And it wouldn’t have worked if they had just dumped everything the first day. Because we’re humans too and we get overwhelmed. And everyone gets—readers get overwhelmed too. And if you dump 3,000 emails, you’re just going to get a certain level of attention. If you do it in segments, and in phases, then you get more attention. I think sources are catching up to that.

LFB: But the other challenge is that sometimes you get things wrong, or you just assume that the documents are correct, and you publish the story based on the documents, saying, “Oh, this happened.” And maybe you haven’t had time to verify. There’s also competition. You always want to be the first. The ideal scenario is always getting something exclusively so you have the time to go through it. The advantage, though, of having stuff in the public is the crowdsourcing aspect. So, for example, when The Shadow Brokers data came out, pretty much everyone in the infosec world spent the entire day, all their free time, looking through what had come out.  And they published their thoughts and their findings in real time on Twitter.

For example, one of these people was Mustafa Al-Bassam. So that’s something that maybe you can’t get if you have information exclusively. And then getting something exclusively obviously has its advantages, but that’s one of the drawbacks. You don’t get the instant feedback from a large community.

GC: And that seems to have happened with the recent CIA-WikiLeaks leak as well.

LFB: And it happened with the Hacking Team leak. It was very useful for me and others to keep an eye on Twitter and see what people found, because there was just so much data… That’s also exactly what happened when the Shadow Brokers dumped hacking tools stolen from the NSA [National Security Agency]. These weren’t just emails or documents that a lot of people could look at and understand or try to verify. These were sophisticated pieces of code that needed people with a lot of technical skill to understand and figure out what they were used for and whether they worked. Luckily there are a lot of very good infosec people on Twitter, and just following their analysis on the social network was really useful for us journalists.

GC: Based on what you’ve seen and reported, do you think that we—not just lay people, but experts on the subject—are thinking clearly about vulnerability? Is the focus in the right place—on threat awareness, technical fixes, bug bounties, vulnerability disclosure—or do you think people are missing something or misrepresenting the problem?

LFB: In the infosec world there’s sort of a fetish for technical achievements. And it’s understandable; it’s not the only field. But sometimes this fetish for the latest amazing zero-day, or the new proof-of-concept way to put ransomware on a thermostat—which, you know, is tough, I wrote a story about it—makes us forget that these are still kind of esoteric threats, maybe, and also unrealistic threats. In the real world, what usually happens is phishing, or your angry partner or ex-partner still knows the password to your email and after you break up they get into your email… stuff like that. Some cybersecurity expert might scoff at this and say, “That’s not hacking,” but that’s what hurts the most.

And I think that, for example, Citizen Lab has done a great job of highlighting some real-world cases of abuse, of hacking tools used against regular people, but also dissidents and human rights defenders. And in many of those cases, there was no fancy exploit, there was no amazing feat of coding or anything involved. It was just maybe a phishing email or phishing SMS [text message]. So I think that we could all—both journalists and the industry—do a better job of explaining the real risks to an average person and telling them what to do, because just scaring them is not going to help.

GC: Yeah, this is a great point and reminds me of public health–type campaigns: in this case, a concerted security hygiene program to teach everyday people the basics of security. The history of biomedical public health campaigns is instructive here. When the germ theory of illness was gaining ground, it took enormous effort and labor to convince people to change their habits: to wash their hands, to cover their mouths when they were sneezing. It took a few decades of public health campaigns to convince people both that there was something called bacteria that could make you sick, and that you had to change your behavior. So why wouldn’t we need something similar for computer security? But that’s obviously something that info security companies—rightfully so—are probably not going to invest in.

LFB: Yeah, there’s not a lot of money in that. But I think that we could demand more and expect more from companies that are only maybe tangentially in the infosec industry—like Google, Facebook, these big giants—that everyone uses, more or less. They can really make a big difference. If Google made two-step verification mandatory, or if they just made it an option to choose when you create your account, that could make a huge difference in the adoption of these measures.

GC: That’s an excellent point.

GC: Let’s turn to one final question: Can you tell us a little bit about the challenges you face writing on hackers and security?

LFB: One of the challenges is cutting through the noise. Infosec and cybersecurity have become so popular now that there’s so much noise, and it’s very easy to get lost in the daily noise. And as an online journalist, the risk is double, because that’s kind of my job: I have to be on every day and see what happens every day. Let me give you an example: yesterday there was some revelation about a vulnerability in the web versions of Telegram and WhatsApp. It made a lot of noise. It wasn’t that big of a deal in the sense that we don’t know how many people are affected. Probably quite a few. But we don’t know how many people use the web versions of these apps.

Another challenge here is that so many people are trying to position themselves as experts in this field. As a journalist, it’s sometimes very hard to select your sources wisely, because there are a lot of people who want to say something. They want to have their opinions broadcast; they just want to join the fray and talk about the latest infosec news.

GC: How do you go about resolving that noise? Are there some experts that you rely on more than others? Do you talk with colleagues?

LFB: Yes, I think it’s a combination of everything you said. Talking to colleagues helps. I work with a really great journalist, Joseph Cox, who you know as well. It helps sometimes to share…. We ask each other: who shall I talk to? That helps. It’s also just a matter of time. When I started out, it was really hard to tell [who to talk to]. You would go on Twitter or just…everyone seemed like an expert. It’s very easy to say “cybersecurity expert” or whatever, and make claims that sound more or less informed.

The PR and marketing machine behind the infosec world is also very strong. Every time there’s a breaking story, we get dozens of emails pitching random people saying stuff that is not even that interesting. But there’s a lot of money involved, and so marketing is very powerful in this world. I think after a while you just become very cynical—in a good way. If you smell the marketing campaign, then you’re like, okay, I should probably ignore this because it’s just marketing.

GC: Right. Is there sometimes a situation where it is a marketing campaign, but it is also a really cool important technology that has the potential to change things, or already has?

LFB: Yeah, sometimes the attention is warranted. I’m trying to think of an example. I mean, for example, [the cybersecurity company] Kaspersky has a really big marketing side, and they do push their research very strongly through their marketing and PR people. Most of the time, their research is actually very interesting, so using marketing is not necessarily bad. There’s just too much of it now. The problem with marketing is mostly when the sources or companies try to make their research look too good or make unfounded claims. Obviously I understand that they’re trying to get attention. But I think—and they don’t realize it—that this can sometimes backfire.

GC: Right, that’s a good point. And you know, I’m always thinking of potential PhD topics for my students; it would be really interesting to study the domain of infosec company research and the processes of knowledge vetting. How is it similar to or different from academic peer review? And as you say, there are a lot of very respected researchers, and the material coming out of there is often very strong and important. But from my understanding…they will limit what they release too. Right?

LFB: As a company, yeah.

GC: Right, because you don’t want people being able to take things from you. So there’s this fine line between researching, getting the data out there, but maybe not always being able to reveal everything.

LFB: And that’s why, for example, an average Citizen Lab report is more interesting than an average report from infosec company X or Y, because—and this is the point that Ron Deibert, the director of Citizen Lab, made when I spoke to him recently—they don’t have to hide anything. And they want to encourage other people to look at the data themselves.

Another big challenge is the anonymity and pseudonymity of sources. It’s almost like a default…I don’t have the numbers…but I think a big part of my sources and my colleague’s sources are often anonymous or pseudonymous. They have a nickname, they have an alias. And the challenge sometimes is: is this the same person I spoke to the other night? And the challenge there is not just verifying who they are, which is sometimes impossible; it’s sometimes keeping your head straight, and your sanity. Because the person sounds a little bit different. And “sounds” is probably the wrong word…because the tone is different…and you start thinking: is this a group? A friend of the guy or the lady that I spoke to the other day? But I think that when this happens, you have to focus on the content of the conversation, what they’re talking about, what documents they might be providing. The story might be there, although…it’s sometimes easy to forget, but what readers care most about is people. So the hacker, the hacktivist, is very often one of the most interesting parts of every story.

GC: Right, often there’s a lot of mystique around them or around hacker groups. And…I know this well from my research about how difficult it can be to always be dealing with pseudonymous people. I thought Jeremy Hammond was an agent provocateur because of the way he acted. And I was completely wrong, you know. It can be very hard to suss out these things.

LFB: Definitely. I think that’s one of the biggest challenges, for sure. But it’s also interesting in a way. I don’t fault them for trying to protect their identity. And that’s just how it is. And that’s not going to change anytime soon. Sometimes it is frustrating. Sometimes you wish you could have that certainty. In real life, you see a face, and that’s the person. But in these cases, there’s not really much to go on.

Interview conducted March 2017.

Lorenzo Franceschi-Bicchierai is a staff writer at Motherboard, where he covers hacking, information security, and digital rights.

Interview: Kim Zetter

Christopher Kelty: So our first question is: What kinds of technical or political thresholds have we crossed, and what have you seen, in your time reporting on hacking and information security? Is Stuxnet [2010] a case of such a threshold, or the DNC [Democratic National Committee] hack? Since you’ve been doing this for a long time, maybe you have a particular sense of what’s changed, and when, over the last, say, decade or so?

Kim Zetter: I think we have crossed a number of thresholds in the last decade. And the DNC hack definitely is a threshold of a sort. But it’s not an unexpected threshold. There’s been a build-up to that kind of activity for a while. I think what’s surprising about it is really how long it took for something like that to occur. Stuxnet is a different kind of threshold, obviously, in the military realm. It’s a threshold not only in terms of having a proof of concept of code that can cause physical destruction—which is something we hadn’t seen before—but also it marks a threshold in international relations, because it opens the way for other countries to view this as a viable option for responding to disputes instead of going the old routes: through the UN, or attacking the other nation, or sanctions or something like that. This is a very attractive alternative because it allows you to do something immediately and have an immediate effect, and also to do it with plausible deniability because of the anonymity and attribution issues.

CK: Why do you say this is long overdue?

KZ: With regard to the DNC hack: we’ve seen espionage before, and political espionage is not something new. The only thing that’s new here is the leaking of the data that was stolen rather than, let’s say, the covert usage of it. Obviously, the CIA has been involved in influencing elections for a long time, and other intelligence agencies have as well. But it’s new to do it in this very public way, and through a hack, where it’s almost transparent. You know, when the CIA is influencing an election, it’s a covert operation—you don’t see their hand behind it—or at least that’s what a covert operation is supposed to be. You don’t know who did it. And in this way, [the DNC hack] was just so bold.

[Image caption: Kim Zetter is the author of Countdown to Zero Day (Broadway Books, 2015), the definitive book on the Stuxnet virus.]

But we’ve seen a step-by-step progression of this in the hacking world. We saw it when Anonymous hacked HBGary [2011] and leaked email spools there. We saw the Sony hack [2014], where they leaked email spools. And both of these put private businesses on notice that this was a new danger to executives. And then we saw the Panama Papers leak [2016], where it became a threat to wealthy individuals and governments trying to launder or hide money. And now that practice has moved into a different realm. So that’s why I’m saying that this is long overdue in the political realm, and we’re going to see a lot more of it now. And the DNC hack is a bit like Stuxnet in that it opens the floodgates—it puts a stamp of approval on this kind of activity for nations.

CK: This is at the heart of what I think a lot of people in this issue are trying to address. It seems that the nexus between hacking as a technical and practical activity and the political status of the leaks, the attacks, etc., is somehow inverting, so there’s a really interesting moment where hacking moved from being something fun…[with] occasionally political consequences to something political…[with] fun as a side effect.

KZ: Right. I’ve been covering security and hacking since 1999. And we started off with the initial hacks; things like the “I Love You” virus, things…that were sort of experimental, that weren’t necessarily intentional in nature. People were just…testing the boundaries of this realm: the cyber realm. And then e-commerce took off after 2000, and it became the interest of criminals because there was a monetary gain to it. And then we had the progression to state-sponsored espionage—instead of doing espionage in the old ways with a lot of resources, covert operatives, physical access, things like that. This opened a whole new realm; now we have remote destructive capabilities.

CK: So, let me ask a related question: in a case like the DNC hack, do we know that this wasn’t a case of someone who had hacked the emails and then found someone, found the right person to give them to, or who was contracted to do the hacking?

KZ: Yes. I think that’s a question that we may not get an answer to, but I think that…you’re referring to something that we call “hybrid attacks.” There are two scenarios here. One is that some opportunistic hacker is just trying to get into any random system, finds a system that’s valuable, and then decides to go find a buyer, someone who’s interested in [what was obtained]. And then the stuff gets leaked in that manner. If that were the case with the DNC, though, there probably would have been some kind of exchange of money, because a hacker—a mercenary hacker like that—is not going to do that for free.

But then you have this other scenario, where you have what I’m referring to now as hybrid attacks. We saw something similar in the hack of the Ukrainian power grid [2015–2016], where forensic investigators saw very distinct differences between the initial stages of the hack and the later stages, which were more sophisticated. The initial hack, which was done through a phishing attack in the same way [as the DNC was hacked], got them into a system, and they did some reconnaissance and discovered what they had. And then it looks like they handed the access off to more sophisticated actors who actually understood the industrial control systems that were controlling the electrical grid. And they created sophisticated code that was designed to overwrite the firmware on grid equipment, shut the power off, and prevent operators from turning it back on.

So there is a hybrid organization where front groups are doing the initial legwork; they aren’t necessarily fully employed by a government or military, but are certainly rewarded for it when they get access to a good system. And then the big guys come in and take over.

When you look at the hack of the DNC and the literature around it—the reporting around it—they describe two different groups in that network. They describe an initial group that got in around late summer or early fall of 2015, and then a second group that came in around March 2016. And that’s the group that ultimately leaked the emails. It’s unclear whether that was a cooperative relationship or completely separate. But I think we’re going to have this problem more and more, where you have either a hybrid of groups cooperating, or problems with multiple groups independently being in a system. And this is because there are only so many targets that are really high-value targets, that could be of interest to a lot of different kinds of groups.

CK: What I find interesting about hacking are some of the parallels to how we’ve dealt with preparedness over the last couple of decades, independent of the information security realm. You know, thinking about very unlikely events and needing to be prepared, whether that’s climate change–related weather events or emerging diseases. Some of the work that we’ve done in Limn prior to this has been focused on the way those very rare events have been restructuring our capacity to respond and prepare for things. Is there something similar happening now with hacking, and with events—basically starting with Stuxnet—where federal agencies but also law enforcement are reorienting around the rare events? Do you see that happening?

KZ: I suppose that’s what government is best at, right? Those big events that supposedly we can’t tackle ourselves. So I think it’s appropriate if the government focuses on the infrastructure issues. And I don’t mean just the critical infrastructure issues like the power grid and chemical plants, but the infrastructure issues around the internet. I don’t think that we should give it over entirely to them. But in some cases, they are the only ones who actually can have an influence. One example is the FDA [U.S. Food and Drug Administration] and its recent rules requiring the manufacturers and vendors who create medical devices to secure them. It’s remarkable to think that there was never a security requirement for our medical devices, right? It’s only in the last year that they thought it appropriate to even look at security. But it shouldn’t be a surprise, because we had the same thing with electronic voting machines.

CK: Yeah, it’s a shock-and-laughter moment that seems to repeat itself. Switching gears a little bit: one of the questions we have for you has to do with your experience in journalism, doing this kind of work. Do you see interesting new challenges emerging: issues of finding sources, verifying claims, getting in touch with people? What are some of the major challenges you’ve encountered as a journalist trying to do this work over the last couple of decades?

KZ: I think that one of the problems that’s always existed [in] reporting [about] hackers is that, unlike most other sources, they’re oftentimes anonymous. And so you are left as a journalist to take the word of a hacker, what they say about themselves. You obviously put things in context in the story, and you say, “According to the hacker,” or “He is a 20-year-old student,” or “He’s based in Brazil.” There are not a lot of ways you can verify who you’re talking to. And you have the same kind of difficulties in verifying their information. Someone tells you they hacked a corporation and you ask, “Can you give me screenshots to show that you have access inside this network?” Well, they can doctor screenshots. What else can they give you to verify? Can they give you passwords that they used, can they tell you more about the network and how they got in? Can they give you a sample of the data that they stole? And then of course you have to go out and verify that. Well, the victim in many cases is not going to verify that for you. They’re going to deny that they were hacked; they’re going to deny that they had security problems that allowed someone in. They may even deny that the data that came from them is their data. We saw that with parts of the DNC hack. And it was true that some of the data hadn’t come from them. It had come from someone else.

CK: Do you find that—do you think that—finding sources to tell you about this stuff is different for studying hacking than for other domains? Do you basically go back to the same sources over and over again once you develop a list of good people, or do you have to find new ones with every event?

KZ: In terms of getting comments from researchers, those are the kinds of sources I would go back to repeatedly. When you’re talking about a hacker, of course, you can only generally talk with them about the hacks that they claimed to have participated in. And then of course they can just disappear, like the Shadow Brokers. After that initial release and flurry of publicity, several journalists contacted the Shadow Brokers, got some interviews, and then the Shadow Brokers disappeared and stopped giving interviews. So that’s always the problem here. Your source can get arrested and disappear that way, or willfully disappear in other ways. You may only end up having part of the information that you need.

CK: We have a number of articles about the difficulty of interpreting hacks and leaks and the expectation that the content of the leaks will have an immediate and incontrovertible effect—Pentagon Papers-style, or even Snowden-style. A leak that will be channeled through the media and have an effect on the government. We seem to be seeing a change in that strategic use of leaks. Do you see that in your own experience here too? That the effectiveness of these leaks is changing now?

KZ: You know, I think we’re still working that out. We’re trying to figure out the most effective way of doing this. You have the WikiLeaks model, which gets thousands of documents from Chelsea Manning and then just dumps them online and is angry that no one is willing to sift through them to figure out their significance. And then you have the model of the Snowden leak, where the documents were given in bulk to journalists, and the journalists sifted through them to try to find documents and create stories around them. But in that case, many of the documents were still published. Then we have the alternative, which is the Panama Papers, where the data is given to journalists but the documents don’t get published. All we see are the stories around them. And so we’re left to determine from the journalists: Did they interpret them correctly? Do they really say what they think they say?

We saw that problem with the Snowden documents. In the initial story that the Washington Post published about the Prism program, they said that, based on their interpretation of the documents, the NSA [National Security Agency] had a direct pipeline into the servers of these companies. And they misinterpreted that. But because they made the documents available, it was easy for the public to see it themselves and say, “I think you need to go back and re-look at this.” With the Panama Papers, we don’t have that. So there are multiple models happening here, and it’s unclear which is the most effective. Also, with the DNC, we got a giant dump of emails, and everyone was sifting through them simultaneously. The same with the Ashley Madison emails: everyone was trying to find something significant. There’s also a fatigue factor: if you do multiple stories in a week, or even two weeks, people stop reading them, because it feels like another story exactly like the last one.

And that’s the problem with large leaks. On the one hand you expect that they’re going to have big impact; on the other hand, the reading public can only absorb or care about so many at a time, especially when so many other things are going on.

CK: The DNC hacks also seem to have had a differential effect: there were the Times and Post readers, who may have been fatigued hearing about it and who fell away quickly. But then there’s the conspiracy theory–Breitbart world of trying to make something out of the risotto recipes and spirit cooking. And it almost feels like the hack was not a hack of the DNC, but a hack of the media and journalism system in a way.

KZ: Yeah, it was definitely manipulation of the media, but only in the sense that they knew what the media would be interested in, right? You’re not going to dump the risotto recipes on the media (although the media would probably play with that just a bit, just for the humor of it). But they definitely know what journalists like and want. And I don’t think that journalists should apologize for being interested in publishing stories that could expose bad behavior on the part of politicians. That exists whether or not you have leaked emails. That’s what leaking is about. And especially in a campaign. There’s always manipulation of the media; government-authorized leaks are manipulation of the media as well.

CK: I think I like that connection, because what’s so puzzling to me is that to call the DNC hacks “manipulating the presidential election” suggests that we haven’t ever manipulated a presidential election through the media before, which would be absurd. [Laughter.] So there’s a sort of irony to the fact that we now recognize it as something that involves statecraft in a different way.

KZ: And also that it was from an outsider: I mean, usually it’s the opposite party that’s manipulating the media to affect the outcome. I think they’re all insulted that an outside party was much more effective at it than any of them were. [Laughter.]

CK: Okay, one last question. What’s happening to hacker talent these days? Who’s being recruited? Do you have a sense in talking to people that the sort of professional landscape for hackers, information security professionals, etc., has been changing a lot? And if so, where are people going? And what are they becoming?

KZ: The U.S. government has been recruiting hackers from hacker conferences since hacker conferences began. From the very first DEFCON, undercover FBI agents and military personnel were attending the conferences, not only to learn what the hackers were learning about but also to find talent. The problem, of course, is that as the cybersecurity industry grew, it became harder and harder for the government and the military to hold onto the talent they had. And that’s not going to change. They’re not going to be able to pay the salaries that private industry can pay. So what you see, of course, is the NSA contracting with private companies to provide the skills that they would have gotten if they could have hired those same people.

So what’s always going to be a problem is that the government is not always going to get the most talented [people]. They may get them for the first year, or couple of years. But beyond that, they’re always going to lose to the commercial industry. Was that your question? I’m not sure if I answered it.

CK: Well, it was, but I’m also interested in international recruitment: what shake-up is happening in the security agencies around trying to find talent for this stuff? I know that the NSA going to DEFCON goes all the way back, but now even if you’re a hacker being recruited by the NSA, you may also be recruited either by other state agencies or by private security firms who are engaged in something new.

KZ: Right. In the wake of the Snowden leaks, there may be people who would have been…willing to work for the government before who aren’t willing to work there now. And certainly Trump is not going to help the government and military recruit talent in the way that past administrations might have been able to, by appealing to patriotism and, you know, national duty. I think that’s going to become much more difficult for the government under this administration.

Interview conducted February 2017.

Kim Zetter is an award-winning, senior staff reporter at Wired covering cybercrime, privacy, and security. She is the author of Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon.
