Taters Gonna Tate. . .But Do Platforms Have to Platform?

A white man holds a cigar in the center of the picture; his mouth, visible on the left edge of the picture, blows smoke rings.

In March 2025, Spotify removed the social media influencer and self-described misogynist Andrew Tate’s podcast, Pimping H**s Degree, for violating the platform’s policies. The removal came shortly after Tate returned to the United States from Romania, where he and his brother Tristan had been held under house arrest for two years after being charged with human trafficking, rape, and forming a criminal group to sexually exploit women.

According to the technology media outlet 404 Media, which first reported the news, some Spotify employees had complained in an internal Slack channel about the availability of Tate’s shows on their platform. “Pretty vile that we’re hosting Andrew Tate’s content,” wrote one. “Happy Women’s History Month, everybody!” wrote another. A change.org petition calling on Spotify to remove harmful Andrew Tate content, meanwhile, received over 150,000 signatures.

When asked for comment by the U.K. Independent, a Spotify spokesperson clarified that they removed the content in question because it violated the company’s policies, not because of any internal employee discussion. These policies state, in part, that content hosted on the platform should not “promote violence, incite hatred, harass, bully, or engage in any other behavior that may place people at risk of serious physical harm or death.”

Still, there is a veritable fire hose of Tate content available on Spotify. A search for the name “Andrew Tate” on the platform yields upwards of 15 feeds (and a music account) associated with the pro kickboxer-turned-self-help guru, many of which seem to be updated sporadically or not at all. Apple Podcasts, meanwhile, features an equally wide spectrum of shows with titles like Tatecast, Tate Speech, Andrew Tate Motivation, and Tate Talk. [Ed. Note: Normally there’d be links to this media–and the author has provided all of his sources–but we at SO! do not want to drive idle traffic to these sites or pingbacks to/from them. If you want to follow Andrew Salvati’s path, all these titles are readily findable with a quick cut-and-paste Google search.–JS]

With so many different feeds out there, wading into the Andrew Tate audio ecosystem can be a bewildering experience. There isn’t just one podcast; there’s a continuous unfolding of feeds populated by short clips of content pulled from other sources.

But this may be the point exactly.

Andrew Tate on Anything Goes With James English, CC BY 3.0 via Wikimedia Commons

As I learned from this article in the Guardian and these interviews with YouTuber and entrepreneur MrBeast (“MrBeast On Andrew Tate’s MARKETING” and “MrBeast Reveals Andrew Tate’s Strategy”), Tate achieved TikTok virality, in part, by encouraging fans to share clips of video podcast interviews – rather than the whole interview itself – on the platform.

“Now is the best time to do podcasts than ever before,” MrBeast said in one interview. “Now it’s like the clips are re-uploaded for months on months. It gets so many views outside of the actual podcast … I would call it the ‘Tate Model’ … Like I think if you’re an influencer, you should go on like a couple dozen podcasts. You should clip all the best parts and just put it on a folder and just give it to your fans. Like literally promote you for free.” Though it can be hard to tell exactly who uploaded a podcast to Spotify, it seems that something like this is happening on the platform – that fans of Tate are sharing their favorite clips of his interviews and monologues pulled from other sources.

In its “About” section, for instance, a Spotify feed called Andrew Tate Motivational Speech declares that “this is a mix of the most powerful motivational speeches I’ve found from Andrew Tate. He’s a 4 time [sic] kickboxing world champion and he’s been having a big impact on social media.” In another Spotify feed called Tate Therapy, posters are careful to note that they “do not represent Mr. Tate in any way. We simply love his message. So we put together some of his best speeches.”

Given that Spotify is increasingly a social media platform, rather than simply an audio streaming service–users can collaborate on playlists and see what their friends are listening to–it follows that this practice of clipping and sharing Tate content may expand the influencer’s online footprint. It may also serve as insurance against the company’s attempts to remove content or completely deplatform Tate: surely Spotify can’t police all the feeds that it hosts.

So, what is it that Andrew Tate is saying – and how is he saying it?

To get a sense of why he has been called the “King of Toxic Masculinity,” and a “divisive social media star,” I had a listen to several of the interviews and monologues posted to Andrew Tate Speech Daily on Apple Podcasts, which, of all of the Andrew Tate audio feeds, is the most consistently updated.

The first thing to take note of is his voice. It’s brisk, aggressive, and carefully enunciated – it’s like he’s daring you to take issue with what he, an accomplished and eloquent man, is saying. Above all, listening to Tate feels like being spoken to as an inferior, because that is precisely what he preys on. His accent, moreover – now British, now American – is unique, lending itself to some unusual pronunciations that can be considered part of his system of authority and charm.

One of Tate’s main arguments about what ails men today – and it is clear from his mode of address that he assumes he is talking to men exclusively – is that they are trapped in a system of social and economic “slavery” that he unimaginatively calls “The Matrix,” after the film series of the same name. Though he is somewhat vague in his descriptions, in the podcast episode “Andrew Tate on The Matrix” he explains that power, as it actually exists in the world, is held by elites who rely on systems of representation (language, texts) to effect their will. These systems of representation, however, are prone to abuse because they are ultimately subject to human fallibility. Tangible assets, like wealth, he reasons, are susceptible to control by “The Matrix,” as they can be taken away arbitrarily through the reversal of decisions and the printing and signing of documents. His example, though it is a little hard to follow, is that if someone says something that the government doesn’t like, a judge can simply order that their house be taken away. Tate argues that individuals can instead escape “The Matrix” by building intangible assets (here, he gives no examples), which cannot be taken away by elites and their bureaucracy. It is a difficult path, he cautions (and here, he sounds sympathetic), and one that not everyone has the discipline to endure.

Tate gets a little more specific in the episode “Andrew Tate on The Global Awakening. The Modern Slave System,” in which he asserts that elites are using the system of fiat currency – a term that cryptocurrency supporters like to use to disparage government-issued currencies – to keep individuals “enslaved.” In this modern version of enslavement, he explains, individuals are forced to work for currency but, since fiat currency is subject to inflation and other forms of manipulation, only end up making the bare amount they need to survive. The result, he argues, is a system in which the rich get richer and the poor get poorer (this of course ignores the real possibility of shitcoin and other crypto manipulation schemes). It’s quite a populist message for a guy who is famous for his luxurious lifestyle. Still, his message here is consistent: the proper amount of discipline, a willingness to speak truth to power, and faith in God (he converted to Islam in October 2023) will result in an awakening of consciousness that will finally end the stranglehold that elites have on power and will finally break “The Matrix.”

On the other hand, Tate deems women incapable of the discipline required to break out of “The Matrix” – he seems to think that they are too materialistic, too distractible, too enamored of the chains that elites use to bind individuals to the system to see beyond them (see “Andrew Tate on ‘Fun’”). In his view, women are better off at home bearing children or fulfilling male sexual desires. (In an apparent demonstration of male dominance, Tate’s “girlfriends” often appear in the background of his videos cleaning house).  

For his part, Tate claims that his own legal troubles, and his own vilification in the press, are part of a coordinated campaign of persecution against him for exposing the way that the world really works (see, for example, “Andrew Tate: Survival, Power, and the System Exposed”). From this vantage, Tate seems to be acting as what the ancient Greeks called a parrhesiastes, someone who, as Michel Foucault writes, not only sees it as his duty to speak the truth, but takes a risk in doing so, since what he says is opposed by the majority. Indeed, often congratulating himself on his bravery in the face of “The Matrix,” Tate has suggested that his role as a truth teller might get him sent to jail (“Andrew Tate on the Common Man”), or worse (“Survival, Power, and the System Exposed”). In such moments, he plays the martyr, adopting a quiet yet defiant voice.

Aside from the aspirational lifestyle he purveys – the fast cars, the money, the women, the flashy clothes, the jets, the mansions, the cigars, and the six pack – it seems to me that this parrhesia is a key part of what makes Tate popular among men and boys (as of February 2025, he had over 10 million followers on X [formerly Twitter]). What he reveals to them, though it is often muddled, is the way in which elites maintain social control under advanced capitalism. It’s all rather Gramscian in the sense that it is concerned with the hegemony of a dominant class, though, ironically, Tate seems too much of a capitalist himself to engage in Marxian social critique. Instead of offering a politics of class solidarity, Tate merely rehearses familiar neoliberal scripts about pulling oneself up by the bootstraps (see “You Must Constantly Build Yourself”), getting disciplined, going to the gym, developing skills, and starting a business. For Tate, life is a competition, a war, though most men don’t realize it.

And I think this is the key to understanding Tate’s parrhesia – it’s not only that he is speaking truth to power in his criticism of “The Matrix”; he also sees himself as speaking an uncomfortable truth to his listeners, truths that they might not be ready to hear. As in the movie, The Matrix, he says in “Andrew Tate on the Global Awakening,” some minds are not ready to have the true nature of reality revealed to them. In his perorations, therefore, Tate often takes a sharp and combative tone, accusing his listeners of being guilty of complacency and complicity in the face of “The Matrix.”

“If I were to explain to you right here, right now, in a compendious and concise way, most of you wouldn’t understand,” he says in “Andrew Tate on The Matrix.” “And those of you who do understand will not be prepared to do the work it takes to then actually genuinely escape. But those of you who are truly unhappy inside of your hearts, those of you who understand there’s something more to life, there’s a different level of reality you’ve yet to experience … But if your mind is ready to be free, if you’re ready to truly understand how the world operates and become a person who is difficult to kill, hard to damage, and escape The Matrix truly, once and for all, then I am willing to teach you.”

Tate on Anything Goes With James English, CC BY 3.0, via Wikimedia Commons

For those persuaded by this line of thinking, or who are otherwise made to feel guilty about their complicity in “The Matrix,” Tate offers a special “Real World” course at $49 per month, which teaches students how they can leverage AI and e-commerce tools to earn their own money and finally be free.

And that’s really what it’s all about – all the social media influencing, all the clip sharing, all the obnoxious antics, and deliberately controversial statements – they are all calculated to raise his public profile (good or bad) so that he can sell the online courses that have made him and his brother Tristan fabulously wealthy.

It is for this reason that I don’t think that Spotify’s deplatforming of one of Tate’s shows will ultimately do anything meaningful to stem his popularity. If anything, the added controversy will likely confirm to his fans that he has been right all along – that the elites who are in control of “The Matrix” are so threatened by the truth that he tells about the world and about women that they will first deplatform him and then send him to jail.

No, we will only rid ourselves of Tate when he becomes irrelevant. This may happen if he ends up going to prison in Romania or in the UK (where he also faces charges of rape and human trafficking). But even then, there are many vying to take his place.

Featured Image: Close-up and remixed image of Andrew Tate’s mouth and arm, Image by Heute, CC BY 4.0

Andrew J. Salvati is an adjunct professor in the Media and Communications program at Drew University, where he teaches courses on podcasting and television studies. His research interests include media and cultural memory, television history, and mediated masculinity. He is the co-founder and occasional co-host of Inside the Box: The TV History Podcast, and Drew Archives in 10.

This post also benefitted from the review of Spring 2025 Sounding Out! interns Sean Broder and Alex Calovi. Thank you!

REWIND! . . .If you liked this post, you may also dig:

Robin Williams and the Shazbot Over the First Podcast–Andrew Salvati

“I am Thinking Of Your Voice”: Gender, Audio Compression, and a Cyberfeminist Theory of Oppression: Robin James

DIY Histories: Podcasting the Past: Andrew Salvati

Listening to MAGA Politics within US/Mexico’s Lucha Libre –Esther Díaz Martín and Rebeca Rivas

Gendered Sonic Violence, from the Waiting Room to the Locker Room–Rebecca Lentjes


Irony is an opportunity for ambivalence: Interview with Maya Indira Ganesh about her Book Auto-Correct

In 2025, ARTez Press published Auto-Correct: The Fantasies and Failures of AI, Ethics, and the Driverless Car by Maya Indira Ganesh. I talked with Maya about the book — why and how technologies fail, the meaning of ethics within and outside technologies, and the ambivalence that comes with irony (as well as critique). The interview was recorded on April 15th over Zoom, automatically transcribed, and lightly edited for clarity.

In my PhD project, I keep thinking about how one can relate to the fact that algorithmic technologies err and fail all the time. All things fall apart and break down—that much is a truism. Yet, how we choose to make sense of it individually and collectively is a different matter. What initially drew my attention to Maya’s book is how she describes failures of self-driving cars as happening at different scales and moments in time. The idea of error-free technology is thus a dream, and yet not all failures are alike.

Dmitry Muravyov: Yeah, so to start us off, you mentioned how long this project has been going. For those doing PhDs and turning them into further projects or books, I’m curious: What question or intellectual concern has driven you throughout this process? Was there a thought you kept returning to—like, I need to put this out into the world because it’s important?

Maya Indira Ganesh: There are two dimensions to that. First, in Germany, you have to publish to complete your PhD; it’s not considered ‘done’ until you do. I used that requirement as a chance to turn my thesis into a printed book. Second, since finishing my PhD at Leuphana University, I’ve mostly been teaching. Almost exactly four years ago as I was handing in my thesis, I was also interviewing for a job at this university. I was hired to co-design and co-lead a new master’s program in AI, Ethics, and Society.

Teaching AI ethics made me aware of what I put on reading lists—how to bring critical humanities and social science perspectives into conversations about technology, values, and AI. I noticed gaps in the literature. Not that I’m claiming to fill those gaps with my book, but there’s a standard set of citations on the social shaping of technology, epistemic infrastructures, and AI’s emergence. Teaching working professionals—people building tech or making high-level decisions—pushed me to ask: “How do I make theory accessible without diluting it?” They wanted depth but weren’t academics. So, I thought, “What can they read that’s not tech journalism or long-form criticism?” That became a motivation.

The other thing I’ve wrestled with is the temporality of academic research versus the speed of AI innovation. It’s about the politics of AI time. A big question asked of AI in general, and driverless cars in particular, is: ‘When will it arrive?’ You don’t ask that about most technologies because, say, a car is tangible—you see it, you know it’s here. But so much AI operates invisibly in the background. Its rhetoric is all about it always being almost here, ‘just around the corner’.

Credit: Maya Indira Ganesh

As an academic, though, timing doesn’t matter—unless you’re under the delusion your work will “change everything,” which, let’s be honest, few believe. But also, no one had written about driverless cars this way. Most books are policy or innovation-focused. I thought, “Why not a critical cultural study of this artifact?”

Dmitry Muravyov: It really makes me think: especially when people talk about regulation, they so often reach for metaphors like “we’re lagging behind.”

I’m really interested in how technologies fail, and obviously that’s a huge theme in your book—it’s right there in the title. I’ve been trying to make sense of one of your chapters in a particular way, and I’d love to hear your thoughts. You talk a lot about how driverless cars are kind of set up to fail in certain ways, and how all these accident reports are always partial, always uncertain.

But reading Chapter 2, I noticed you sort of map out why these crashes happen, and I think I’ve got three main patterns. First, there’s the human-machine handover failure—like when the human just zones out for a second and can’t take over when they need to. Then there are the computer vision gaps, where the car’s system just doesn’t ‘get’ what it’s seeing— objects just don’t register properly. And third, there’s this mismatch between the car and its environment, where the infrastructure isn’t right for what the car needs to work.

But then you also show how the tech industry tries to deal with these failures, right? For the handover problem, they push this whole ‘teamwork’ idea in their PR — making the car seem more human, more relatable. For the vision gaps, there’s all this invisible data work going on behind the scenes to patch things up. And for the infrastructure issue, they’re literally reshaping cities to fit the cars—testing them in the real world, not just labs.

So, my question is: Would you say these are basically strategies to compensate for the cars’ weaknesses? And do you think it’s mostly the tech industry driving these fixes?

Maya Indira Ganesh: Wow, yeah—that’s such a good summary, and you’ve definitely read the book! [laughs] You’re completely right; this is exactly it.

And yeah, these rhetorical moves are chiefly coming from the tech industry, because they’re the ones who really see these problems up close. But the way they handle it is interesting—it’s like they’re working on two levels:

Making it seem human. At one level, they’re saying, “Look, it’s just like a person!” Whether it’s comparing driving to human cognition, or even calling the software the “driver” for the car’s “hardware”—like the CEO from Waymo does. If you make it feel human, suddenly people are more forgiving, right?

Then there’s Andrew Ng of Baidu, who says, “Hey, this tech is still learning, be considerate—cut it some slack”! Which, okay—but why should I feel concerned for a car? This works because cars feel familiar; cars are anthropomorphized anyway, and are distinctly gendered at that. Cars, like boats, are given monikers and are usually ‘she’. We tend not to do this with an invisible credit-scoring algorithm.

The other move is the strategy of blaming actual humans. This isn’t new. Back when cars were first introduced, jaywalking laws were invented to shift responsibility onto pedestrians for running out into the street and disrupting the space being claimed for experimental automobility and new drivers in city spaces. This was in the early days of automobility in the US, before traffic lights existed, when people were unaware of how this new technology worked and were more familiar with horse-drawn carriages. Rather than regulate cars and drivers, what happened was to blame the human for not crossing the road correctly: “That’s why the car hit you.” There is a similar playbook now, “praise the machine, punish the human,” as Tim Hwang and Madeleine Elish put it. It’s this endless cycle of “Oh, the tech’s fine; you’re the problem.”

Dmitry Muravyov: This whole process really seems to be about adaptation, right? We humans are fallible beings, but in this context of coexisting with technology, it feels like our failures are the ones that need adjusting—we have to change to fit driverless cars, for instance.

But I’m curious, could we distinguish between more and less desirable types of failure? If we accept that neither tech nor humans can be perfect—that we’re all prone to fail in some way—does that open up new ways to think about these systems differently?

Maya Indira Ganesh: Good question. Actually, I touch on this in the book’s epilogue about the “real vs. fantasy” worlds of technology. When you focus on the real world, you have to confront failure—that breakdown is crucial for understanding how systems interact with human society. That’s why these technologies have to leave their controlled “toy worlds” and enter our messy reality, where they inevitably fail. That failure gives us valuable data about how the system actually works.

But here’s the tension: By dwelling in the fantasy of what the technology could be—that idealized future where everything works perfectly—we avoid grappling with its real-world flaws. The driverless car is interesting because it’s too tangible for pure futurism—you can’t pretend its failures are just “speculative risks” like you might with AI doom scenarios. Yet even with AVs, there’s still this tendency to say “Oh, the real version is coming later” to deflect from today’s problems.

So, in short: If we obsess over the technology’s potential, we don’t have to account for how it’s actually failing in material, accountable ways right now.

Credit: Maya Indira Ganesh

Dmitry Muravyov: It makes me curious—is it possible to envision technologies that recognize their intrinsic fallibility and try to account for it? Maybe in certain ways, rather than others, as your discussion of existential risk shows.

Following up on that, you discuss ethics in the book so well. You interrogate the assumptions and limitations of machine ethics, showing how it localizes ethics within computational architecture, making it a design problem to solve. I love how you describe it: “the statistical becomes the technological medium of ethics”—and you contrast this with “human phenomenological, embodied, spiritual, or shared technologies for making sense of the world.” Could you talk more about this opposition?

Maya Indira Ganesh: I think machine ethics is really interesting because it’s such a niche field that people don’t talk about enough. But it actually does a great job of showing what people are trying to do when they try to embed values into machines—to make decisions that align with certain ethics. But the thing is, this approach works at small scales, not for complex systems like driverless cars in cities.

Of course, we want that in some cases—like removing violent extremism or child pornography online. That’s clear-cut. But then you get into nuances: What if it’s a GIF mimicking beheading, but with no real-world groups or ideologies attached? Suddenly it’s not so simple.

The problem is, machine ethics—and a lot of tech ethics—assumes technology can be totalizing, seamless. We don’t want to deal with breaks or failures, or messy systems talking to each other. Right now, every wave of digitization just gets called “AI.” For 15 years, we’ve had digitized systems working (or not working) in different ways—now AI is being patched on top, often in janky ways.

Take public sector AI in the UK—there are a number of projects trying to apply LLMs to correct doctors’ note-taking, to make casework more efficient. But this is just responding to earlier failures of digitization! We have PDFs that were supposed to make documents portable, but now we’re stuck with stacks of uneditable forms. Every “solution” creates new problems.

So maybe we shouldn’t even call it “ethics” anymore. What we really need is to ask: What values are driving our societies? Efficiency? Profit? Innovation? These are ideological choices that get normalized. The point of my book is that ethics can’t just live inside machines—we need to ask how we want to organize our cities and societies, with all their messiness. Maybe LLMs could help facilitate those conversations, rather than pretending to be the solution. But we’re still figuring that out.

Dmitry Muravyov: When I first thought about this question, I was thinking about how you position ethics in two ways. On the one hand, as something technological and localized within computational architecture (the machine ethics project), and, on the other hand, as something more embodied and societal.

You seem to criticize machine ethics for not being “ethics” in that fuller sense. But now I’m wondering—are you actually saying that machine ethics can serve a purpose, we just shouldn’t call it “ethics” to avoid confusion? Would that be accurate?

Maya Indira Ganesh: Yes, exactly. The framing of “ethics” hasn’t helped us reckon with what kind of society we want to build. It either gets reduced to designing machines that mimic human decision-making (as if machines could create the social through their choices) or becomes corporate self-regulation theater, which we’ve seen fail as companies discard ethics when inconvenient.

Now, I’ll admit: Terms like “ethics” do have power. When you call something unethical, it activates people—no one wants that label. But we’ve overused these concepts until they’re hollow.

But here’s the key point: People are remaking society through technology—just not with “ethics” as we’ve framed it. Look at the U.S., where companies can now ignore AI safety under Trump. This isn’t about not caring—it’s about competing visions of society.

The Elon Musks and Chris Rufos have very clear ideologies about the world they want. And that’s what we need to confront: Not “ethics” as a technical problem, but the values and power struggles shaping our technological future.

So yes—we need value discussions, just not under the exhausted banner of “ethics.”

Dmitry Muravyov: There’s this interesting contrast in your reply between the ethical and the social that I want to explore further. Let me bring in my own experience too—I also teach technology ethics courses to engineers and computer scientists. I’ll play devil’s advocate a bit here, because while your book offers strong (and often justified) criticism of engineering ethics, I want to push back slightly.

That emphasis on individual responsibility you critique—it’s a weak point. Students tell me (or, more often, I imagine they could): “These ideas are nice, but eventually I’ll need a job, a paycheck, and I’ll have defined responsibilities within an organization.” Many so-called “ethical” issues in tech may be better addressed through labor organizing and unions than through ethics courses.

But to defend ethics—even when we acknowledge how socially determined our positions are, there’s still an ethical weight to our decisions and relationships that doesn’t disappear. How do you see this tension between the social and ethical? Do you view ethics as having any autonomous space?

Maya Indira Ganesh: That’s a really good question, and it connects directly to what I was saying earlier. In teaching AI ethics to engineers, policy makers, even defense department staff, the core problem is treating ethics as something separable from the social, something we can formalize into machines. That’s why machine ethics fascinates me—it embodies this flawed approach.

Everything meaningful requires context. It resists automation. To your student’s dilemma—yes, we’re socially constrained, but there’s no substitute for personal reckoning. There are forms of social inquiry and ethical engagement that can’t—and shouldn’t—be automated.

This connects powerfully to Nick Seaver’s work about music recommendation algorithms. He studies these engineers who pride themselves on crafting “caring,” bespoke algorithms—until their startups scale. Suddenly, their intimate knowledge of musical nuance gets replaced by crude metrics and automated systems. What fascinates me is how they cope: Seaver finds that they perform this psychological reframing where the “ethical” part of their work migrates to some other more manageable domain so they can stomach the compromises required by scale.

Credit: Maya Indira Ganesh

Dmitry Muravyov: I think it’s an interesting way to think about it: ethics has to be somewhere, but at the same time, it can be in many places. So we can indeed ask: what is the place for ethics in this particular time and space?

The last thing I wanted to discuss was the irony you explore. The way I made sense of it was seeing the “irony of autonomy” as a type of technological critique. Often, the traditional critical move is one of suspicion—unmasking what’s actually going on under the hood. In technology studies and the humanities, we’ve seen rethinkings of critique—reparative critique, diffractive critique, post-critique.

But irony seems different. When I first read your piece introducing irony in the book, I caught myself smiling—it sparked something in me. How do you see this use of irony in relation to the history of technological critique? Especially given your earlier piece with Emanuel Moss about refusal and resistance as modes of critique.

Maya Indira Ganesh: The “irony of autonomy” (playing on Lisanne Bainbridge’s 1983 work on the ironies of automation) was my way of historicizing these debates, showing how we’re replaying similar responses to automation today. We perform this charade of pretending machines act autonomously while knowing how deeply entangled we are with them.

Over time, I’ve struggled with that irony, albeit not in a bad way. It connects to a melancholia in my other writing about our embodied digital lives, especially around gender and technology. There’s a strong cyberfeminist influence here—this Haraway-esque recognition of how technologies shape gendered existence.

I don’t think we’re meant to resolve this tension. Like Haraway and cyberfeminists suggest, we need to sit with that discomfort. Disabled communities understand this deeply—when you rely on technologies for basic existence, you develop a nuanced relationship with them. There’s no clean ethical position.

A disabled colleague once challenged me when I asked if she wanted better functioning tech: “Actually, no—if it works too smoothly, people assume it always will. The breakdowns create necessary moments to see who’s being left out.” In our resistance and refusal piece with Emanuel Moss, we were pushing back against overly literal critique. Resistance gets co-opted so easily—tech companies now use activists’ language! Refusal offers complexity, but isn’t a blueprint. You can’t exist outside these systems.

Irony is an opportunity for ambivalence: it is a politics of not turning away, while refusing to ever be fully reconciled with the digital.

Dmitry Muravyov: Sometimes I think when certain critical moves—like undermining or unmasking—are presented to audiences without humanities backgrounds, like computer science students… You can get this response where it feels like you’re taking the joy out of their work.

What I appreciate about irony as an alternative is that it lets people chuckle or smile first. Maybe through that smile, they can think: “Hey, maybe we shouldn’t automate everything.” That moment of laughter might plant the seed for a more ambivalent attitude.

Maya Indira Ganesh: Actually, I think critique has become largely about exposing corporate capture—it’s tied up with legal/regulatory battles now. I get this from friends and colleagues sometimes: “You’re not being hard enough on this.” But why can’t computing be fun? It is fun for many people. It creates beautiful things too.

That’s why I want that ambivalent space—to sit with both the problems and possibilities. If we open up how we think about our relationships with technology and each other… maybe we can make something different.

Dmitry Muravyov: There can still be joy at the end!

Biographies

Maya Indira Ganesh is Associate Director (Research Culture & Partnerships), co-director of the Narratives and Justice Program, and a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence (CFI). She draws on varied theoretical and methodological genres, including feminist scholarship, participatory methods, Media and Cultural studies, and Science and Technology Studies to examine how AI is being deployed in public, and how AI’s marginalised and expert publics shape the technology.

Dmitry Muravyov is a PhD Candidate at TU Delft, working in the AI DeMoS Lab. Drawing on philosophy of technology, STS, and media studies, he currently focuses on the political and ethical issues of algorithmic fallibility, a collectively shared condition of living with technology’s breakdowns, failures, and errors.

 

Band People with Franz Nicolay

Minor Compositions Podcast Episode 28: Band People with Franz Nicolay. This episode is a recording of a seminar held at the University of Essex with Franz Nicolay on his book Band People, in which he explores the working and creative lives of musicians and argues that to talk about the role of […]

16 May, 14:00 CEST Girl Online 🎀 Symposihmm #1 by The Hmm

[Repost, original event page here ]

Girl Online 🎀 Symposihmm #1

Performing as a girl online can be a powerful way to subvert the algorithm. And thanks to the whiplash of the girlboss epidemic, a meeker and cuter self-image is now taking hold. Trends like girl math, babygirl, and girl dinner reflect a tendency across genders to self-infantilise, a growing resistance to industrialized understandings of adulthood, often tied to economic strains and shifting life expectations, particularly amongst younger generations.

At the same time, the notion of girlhood itself is being questioned, reframed, and adopted in online spaces. As AI isolates our feeds even more by sorting us into predetermined categories, labels influence how we’re seen—and how we see ourselves. With machine learning gradually influencing more of our daily lives, how will our online actions and self-understandings change as a whole?

Afternoon programme 14.00 – 17.30

Today, we often make ourselves small online. Where the girlboss of yesteryear was on her grind to “have it all”, we now see a trend of flippantly shirking gendered responsibilities: we’re just girls, don’t expect us to cook a full meal every night (girl dinner). This trend of self-infantilisation is being embraced by men as well, who are posting about their boy apartments instead of man caves, well into their thirties. In a series of short talks and a panel discussion, we’ll explore online self-infantilisation. What is at the root of this phenomenon? And what are the benefits of this tactic?

With Maya B. Kronic, Mela Miekus and Mita Medri, and more…

  • 14.00 – 15.30, Workshop — Ink your Online Identity (few spots left!)
    • Explore the history of online identity and investigate digital self-presentation. Then design and apply temporary tattoos, reflecting critically on the digital self.
  • 14.00 – 15.30, Reading group (sold out)
    • Collective reading session delving into selected passages of Tiqqun’s text “Preliminary Materials for a Theory of the Young-Girl”. No prep needed!
  • 16.00 – 17.30, Panel — Self-infantilisation, with Maya B. Kronic, Mela Miekus, Mita Medri, and Jernej Markelj

౨ৎ Break ౨ৎ

Evening programme  19.00 – 21.30

Online, ‘girl’ is less a gender than a strategy—playful, ironic, and vulnerable behavior performs well under the algorithm. For this part of the program, we’ll explore ‘girl’ as a marketing tool, a power move, a form of desire, and a proven formula for online success. But is this strictly a product of today’s media environment, or does it echo earlier representations of girlhood? And what does the future of the girl look like in a world shaped by neural media?

  • Performance — Good Girl by Mireille Tap
  • Interview — Artist Martine Neddam about the Girl in 20th century media
  • Keynote lecture — K Allado-McDowell on the performance of girlhood and identity
  • Performance — djjustgirlythings

 

📅 Date: Friday 16 May 2025
🕗 Time: 14.00 – 21.30 CEST
📍 Location: SPUI25, Spui 25-27, Amsterdam, and online.
🎟 Tickets: Various categories from €7,50 to €27,50. Student and livestream tickets available ✨

Feel free to reach out to us at info@thehmm.nl for solidarity tickets.

Can’t join us in person in Amsterdam? Or just want to watch from the comfort of your laptop or phone? This event is hybrid so you can also buy a ticket to join Girl Online via our livestream website.

♿ Accessibility note

SPUI25 is located on the ground floor; there is a threshold at the door that staff are happy to help with. Unfortunately, there is no accessible toilet. During the event we can provide live closed captioning for those with hearing impairments and disabilities. Please reach out to us if you are joining on-site and have this access need, so that we can reserve a seat for you within view of the screen with captions. If you are joining online via our livestream, live captioning will be available as one of the streaming modes.

🎀 Girl Online is a full-day programme hosted by The Hmm, a platform for internet cultures, taking place across SPUI25 and University of Amsterdam locations on Friday 16 May. Expect talks, performances, workshops, and more. This first ever Symposihmm will dive into girl trends, self-infantilisation, girl as a strategy in digital spaces, and the future of girlhood. It is part of This is who you’re being mean to, The Hmm’s broader 2025 year theme, exploring gender expression online.

💙 This programme is kindly supported by the Creative Industries Fund NL, het Cultuurfonds, Amsterdams Fonds voor de Kunst, and the Netherlands Institute for Cultural Analysis, and made in partnership with University of Amsterdam Media Studies and the Institute of Network Cultures.

L’économie solidaire en Haïti : femmes, territoires et initiatives populaires

A book by Christophe Providence

To access the HTML version of the book, click here.
To download the PDF, click here (forthcoming).

Haiti is a country where the social and solidarity economy (SSE) and the informal sector are the principal engines of economic and social resilience. In a context marked by territorial inequalities, limited access to infrastructure, and a fragile institutional framework, these economic models allow millions of people, women in particular, to generate income, ensure their families’ survival, and energize their territories.

This book offers an in-depth analysis of rural and urban entrepreneurial dynamics in Haiti, highlighting the central role of women in commerce, agriculture, and community services. Through a multidisciplinary approach combining territorial economics, development economics, and new economic geography, it questions the effectiveness of Haitian public policies and proposes courses of action for better integrating the SSE into territorial and economic development strategies.

Why does the SSE remain underexploited in Haiti? What are the challenges and opportunities for structuring and strengthening the informal economy? How can public policies better support women entrepreneurs and local initiatives?

***

Print ISBN: forthcoming

PDF ISBN: forthcoming

DOI: forthcoming

236 pages
Cover design: Kate McDonnell
Publication date: 2025

***

Table of contents

Résumé / Rezime

Making & Breaking 4: Psychogeographies of the Present

Reworking the Situationist heritage and applying it to our time, many of the approaches presented here extend beyond the city and physical environments into the virtual dimensions of digital socialities, identifying new forces of power and potential sources of emancipation.

At a time when it has become fashionable to celebrate the looming apocalypse as post- or transhuman payback, we urgently need to reinvigorate our desire for the future. Approaching cultural production in psychogeographic terms might help identify what blockages are at play in constraining contemporary art and culture to addressing what feels like only a handful of topics, in a handful of ways.

Contributors include: Experimental Jetset, Max Haiven, Liam Young, !Mediengruppe Bitnik, Dan McQuillan, Image Acts, Total Refusal, and Tristam Adams. Edited and published by our friends at CARADT, Sebastian Olma and Jess Henderson.

Click here to access the latest issue of Making & Breaking: Psychogeographies of the Present

Free Jazz, Revolution and the Politics of Peter Brötzmann

Minor Compositions Podcast Episode 27: Free Jazz, Revolution and the Politics of Peter Brötzmann. For this episode we have a discussion of the book Peter Brötzmann: Free-Jazz, Revolution and the Politics of Improvisation with its author Daniel Spicer and long-time comrade and fellow radical theorist / free jazz musician Richard Gilman-Opalsky. In it we […]

April Newsletter

The Spring path on the island of Mainau during the tulip blooming. By Haka on Wikimedia Commons, 24 April 2010, CC BY-SA.

Welcome to our April Newsletter!  


We hope this newsletter finds you well and that your April has been rejuvenating. We write with publication announcements, new posts on academic freedom, and announcements about a book prize and a new series partnership.


Here's what happened this month:


We published five new books

Humans, Dogs and Other Beings: Myths, Stories, and History in the Land of Genghis Khan by Baasanjav Terbish

Active Speech: Critical Perspectives on Teresa Deevy by Úna Kealy and Kate McCarthy (eds.)

Tragedy and the Witness: Shakespeare and Beyond by Fred Parker

Women Writers in the Romantic Age by John Claiborne Isbell

Coral Conservation: Global Evidence for the Effects of Actions by Ann Thornton, William H. Morgan, Eleanor K. Bladon, Rebecca K. Smith, and William J. Sutherland

All of our titles are free to read and download. Explore our complete catalogue.


Photo by Mick Haupt on Unsplash

We shared two blog posts on academic freedom, censorship & open access

We have just published two posts—one written by our team, and one by our author Ash Lierman—reflecting on the increasing threats to academic freedom that we are seeing in the United States and elsewhere, and the role of open access in fighting back against these trends. We have released the posts today to coincide with the #DefendResearch day of action on the 100th day of the Trump presidency. Find out more about the Declaration To #DefendResearch Against U.S. Government Censorship, which OBP has signed.



We announced a new series partnership with the Philological Society

We are delighted that we have begun a partnership with the Philological Society, the oldest learned society in Great Britain devoted to the scholarly study of language and languages, to publish their book series: Publications of the Philological Society. We have listed the first two books we will publish as part of the series: A Grammar of Etulo: A Niger-Congo (Idomoid) Language by Chikelu I. Ezenwafor-Afuecheta and Benjamin Franklin, Orthoepist and Phonetician: Insights into the Genesis of Colonial American-English Phonology by Gary D. German. We look forward to sharing these and many more titles in the series via our Diamond open access model, with no barriers for readers or authors.



We received news of a prize-winning chapter in Prismatic Jane Eyre

We are thrilled to learn that “The Translatability of Love: The Romance Genre and the Prismatic Reception of Jane Eyre in Twentieth-Century Iran” by Kayvan Tahmasebian and Rebecca Ruth Gould, a chapter published as part of Prismatic Jane Eyre: Close-Reading a World Novel Across Languages by Reynolds et al., has won The Nineteenth Century Studies Association Article Prize. Huge congratulations to Kayvan and Rebecca!


NEW BOOK DISCOUNT: Enjoy 10% off when you spend £100 and 20% off when you spend £200 (or the equivalent in supported currencies) at OBP! The discount will be applied automatically at checkout.


That's all for this month!


Defending Academic Freedom in an Age of Censorship: Reflections from author Ash Lierman


By Ash Lierman, Instruction & Education Librarian at Campbell Library, Rowan University, New Jersey, USA

As an OBP author in the U.S., the impacts I personally experience from the current environment are multivalent. I am a university librarian, so aspects of my livelihood are at risk from massive cuts in funding to university research and in federal support for libraries. I support the faculty researchers, doctoral and master’s students, and pre-service teachers in my university’s College of Education, so I am keenly aware of the devastating impacts promised by the dismantling of the Department of Education. I am also an educational researcher in my own right, with a book published by OBP focused on disabled and neurodivergent students in higher education: one of the marginalized communities we are to be prohibited from referring to as such, facing the systemic oppression and discrimination we are to be prohibited from naming, who are sure to be even more disenfranchised than ever by attacks on their legal protections and the governmental bodies charged with their programs and services.

At the same time, I am also a disabled and transgender researcher. I was drawn in large part to my research because of the intersectional identities I share with many of its subjects, and in no way am I alone in this. Many scholars of topics increasingly identified as “politically sensitive,” and many of those recognized as the most brilliant luminaries of their fields, are invested in these topics in part because of their connections to these personal identities. Their scholarship is informed and enriched by their insider perspectives, and by the challenges these perspectives can present to normative framings and ways of knowing. For those of us who share the identities we study, research is more than only research. At its best, it is joining together with our communities to better the lives of their members, conducted with (not on) partners rather than subjects. More personally still, it is drawn from and inscribed upon our own bodies. We cannot be separated from our research; it is us. A necessary consequence is that, when the subjects of our work are made ineligible for funding and a risk to our institutions, when the language that describes them is made taboo, we are doubly erased: not just intellectually, but personally. We are the diversity that the university can no longer risk openly valuing – not to mention the “gender ideology,” in my case and those of many other underrepresented, precarious, and marginalized trans academics.

All of this was of course on my mind as I attended the stellar Thinking Trans // Trans Thinking conference, hosted by the department of Philosophy at Lafayette College, at the end of March. (Interdisciplinarity has never been optional for librarians; we are always called upon to develop a knowledge of the research landscape that transcends boundaries.) In the Methods panel that opened the second day, I had the privilege of hearing from Blas Radi, a philosopher and scholar of social epistemology and trans studies at the Universidad de Buenos Aires, as one of the invited speakers. Beyond the main content of his presentation, he also offered his perspective to the discussion of doing trans studies in the current repressive political climate of the U.S., positioning it alongside the extremely repressive political climate of Argentina in which he has worked for years. It served as a crucial reminder, to me and I think to others present, of what should not be forgotten: what scholars in the U.S. are now facing as a crisis is the least of what has been a normal situation across large parts of the world for decades, particularly in the Global South – and, in many cases, under regimes that were enabled by American exploitation and imperialism.

It is this, in turn, that leads me back to OBP, Diamond Open Access in general, and the potential that it holds now more than ever. OA has often been recognized as playing a vital role in democratizing research and publication, especially for researchers in the Global South and in other circumstances that may limit their access to funds and freedom of information. This is particularly true in the case of Diamond OA, which additionally supports equal access to publishing by scholars from the Global South by removing the barrier of publication fees. This equalizing power has been a driving force in its support from the library profession, and it was my values as a librarian around knowledge justice that led me to choose OA publishing for my own work. Now, with the threats that research faces in the U.S., OA offers to play the same role for many of us who have previously had the privilege of overlooking it.

In this aspect, as in others, this moment is actually an opportunity for American researchers: to find solidarity and build coalitions internationally, to learn with humility from those who have gone through this before us, to invest in structures that truly support knowledge production by and for all of us, and to work toward protecting and helping one another through oppressive structures and regimes. These are not small tasks to undertake, I acknowledge, especially when the stress and fear of the rapid changes in our situation feel so overwhelming. There are real and pragmatic threats that loom over our lives that must be managed, and ourselves and our loved ones to care for. I think it is also important to remember, though, that danger is not the only thing here for us in this moment. As the example of OA can demonstrate, there is also the potential to pursue renewal and reimaginings of how research is structured, and the chance that we could one day rise from what is broken with something stronger.

Read our stance on censorship: OBP has also written about open access and academic freedom from our perspective as an open access publisher in this post.

You can also read Ash’s book, The Struggle You Can’t See: Experiences of Neurodivergent and Invisibly Disabled Students in Higher Education.

Defending Academic Freedom in an Age of Censorship: Why Open Access Matters More Than Ever


By the OBP team

The recent wave of government censorship in America under the Trump administration has sent a chilling message to US scholars, librarians, universities and publishers alike: the freedom and stability we often take for granted in order to undertake and publish research is not guaranteed. As publishers committed to open access (OA), we regard the suppression, distortion, and erasure of research as antithetical to our core mission: to share rigorous academic work freely. This is why we have signed the Declaration to Defend Research Against U.S. Government Censorship, and we urge others to do the same.

The assault on academic freedom is not a hypothetical risk; it is happening now. Government agencies have restricted the terminology that can be used in government-funded research, frozen funding for politically ‘sensitive’ topics, and removed publicly available data from official sources. Researchers have been targeted for pursuing knowledge that challenges political narratives, while universities have been threatened with the removal of funding in order to coerce their obedience to the administration’s will. In response to these actions by the American government, we have even seen attempts by scholarly societies to censor published research without informing the author. These acts do not merely impact the individuals and institutions directly involved; they strike at the very heart of scholarly integrity and global knowledge production.

We believe that OA publishing is crucial because it is an act of resistance against censorship and control. When knowledge is published OA, it can be accessed and shared in many places with no restriction – making it much more difficult to smother once published. When knowledge is openly available, it cannot be erased.

At OBP, we do not limit research to a single proprietary platform. We distribute widely and openly, ensuring that scholarship is available across multiple external repositories, platforms, and archives, freely available to download and share. This decentralization means that if one source is lost or modified, the research remains available elsewhere, safeguarding it for future generations. It ensures that authors' hard work will not disappear due to political pressure or institutional instability. By publishing OA, authors have confidence that their contributions to knowledge will persist, no matter the challenges ahead.

Arguments for the benefits of OA often focus on the reader: the reader’s access is not inhibited by a paywall or a price for a physical copy that is too expensive to afford. But OA can also liberate and protect the author, ensuring global reach for their work, and safeguarding it from erasure under political or institutional pressure. However, for the freedom to publish and preserve one’s research to be meaningful, it must not be dependent on the author’s ability to pay a fee.  This is why we use a Diamond OA model that does not charge authors, and support other publishers in developing the same via our work on Copim’s infrastructures.

OA is often perceived as a risk: can a publisher risk shifting to an OA model; can libraries have confidence that their OA investments are worthwhile or that an OA initiative is ‘sustainable’; can authors take the risk of publishing with a less well-known Diamond OA press? But in a world where old certainties are crumbling – where censorship, platform instability and political interference pose real threats – OA offers something traditional publishing cannot: resilience. Once knowledge is openly available, it cannot easily be silenced. In this light, OA is not a gamble: it is a safeguard.

Open Book Publishers stands unequivocally against threats to academic freedom. We will continue to support researchers in sharing their work freely, without fear of suppression. We urge our colleagues to recognize the urgency of this moment and join us in this commitment.

What do our authors say? We share here a post from Ash Lierman, author of The Struggle You Can’t See: Experiences of Neurodivergent and Invisibly Disabled Students in Higher Education, who writes from their perspective as a US-based researcher, author and librarian, and as a disabled and transgender person. Read Ash’s post.

Sign the Declaration to Defend Research Against U.S. Government Censorship here. Use the hashtag #DefendResearch to spread the word.