Out Now! Duckrabbits Unveiled: A Sneak Peek at the Postartistic Theory and Practice

By Kacper Greń (visuals) and Kuba Szreder (text)

This comic tells the story of the duckrabbit, the spirit animal of postartistic practice. The coming of the duckrabbits was envisioned as early as 1971 by art theoretician Jerzy Ludwiński, when he wrote: ‘Perhaps, even today, we do not deal with art. We might have overlooked the moment when it transformed itself into something else, something which we cannot yet name. It is certain, however, that what we deal with offers greater possibilities.’ The duckrabbit emerged in the 2010s – a decade overshadowed by looming authoritarianism and multiplying crises – when it became the spirit animal of the Consortium for Postartistic Practices and the Office for Postartistic Services in Warsaw, Poland. Living and working inside and outside of museums, art history, objecthood, street protests, and artist studios, the duckrabbits found their habitats in unusual and ambivalent places, resistant to the dominant forces of the mainstream art world and political suppression.

It takes a duckrabbit village to bring a comic to life. The initial impulse came from duckrabbits-in-arms Sebastian Cichocki, Kuba Depczyński, Marianka Dobkowska, and Bogna Stefańska, curators of the Postartistic Congress. The first edition was commissioned by the Insitu Foundation. The narrative draws from years of making and thinking together with the Consortium for Postartistic Practices and the Office for Postartistic Services (co-run with the Bęc Zmiana Foundation in Warsaw). The initial version of this comic was drafted by Kacper Greń during the seminar ‘Art Beyond Art’ led by Kuba Szreder at the Department for Artistic Research and Curatorial Studies at the Academy of Fine Arts in Warsaw. The final print edition is published as an INC Zine by the Institute of Network Cultures, coordinated by Sepp Eckenhaussen.

Visuals: Kacper Greń
Text: Kuba Szreder
Coordination & production: Sepp Eckenhaussen
Print: GPS Group, Slovenia

Published by the Institute of Network Cultures, Amsterdam 2025
ISBN: 978-90-82520-91-9

Email: info@networkcultures.org
Web: www.networkcultures.org

This publication is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

Order a free copy of Duckrabbits Unveiled here: https://networkcultures.org/publications/order-inc-publications
Or download the PDF here: http://networkcultures.org/wp-content/uploads/2025/05/duckrabbits_ONLINE.pdf

Surrealism, Bugs Bunny, and the Blues

Minor Compositions Podcast, Episode 29: Surrealism, Bugs Bunny, and the Blues. This episode is a discussion with Paul Buhle, Abigail Susik, and Penelope Rosemont about the newly released book Surrealism, Bugs Bunny, and the Blues: Selected Writings on Popular Culture. This collection brings together legendary Chicago surrealist Franklin Rosemont’s writings on popular culture over […]

Thinking Face Emoji #1: Girlboss, Through the Years with The Hmm and Sam Cummins

This is the first episode of Thinking Face Emoji, a podcast miniseries by The Hmm, in collaboration with the Institute of Network Cultures, and supported by the Creative Industries Fund NL.

Hosts Margarita Osipian and Sjef van Beers from The Hmm are joined by Sam Cummins of Nymphet Alumni to discuss the girlboss. Overly familiar with the many critiques this online stereotype has received over the years, they shift the focus to the cultural and aesthetic environment that led to the girlboss, her inception, and the impact she has made on (online) culture today.

Mentioned in this episode:
What is a Girlboss? (Netflix): www.youtube.com/watch?v=ScpqleOv_o8
Ban Bossy, ‘I’m Not Bossy. I’m the Boss.’: www.youtube.com/watch?v=6dynbzMlCcw
Beyoncé at the 2014 MTV Video Music Awards: www.youtube.com/watch?v=6maPmEQIiQI
That Feeling You Recognize? Obamacore: www.vulture.com/article/obamacore…amala-harris.html
What Do Students at Elite Colleges Really Want? www.nytimes.com/2024/05/22/busine…tudents-jobs.html
Nymphet Alumni Ep. 113: Information Age Grindset w/ Ezra Marcus: www.nymphetalumni.com/p/ep-113-infor…grindset-w-fae
All-woman Blue Origin crew floats in space: www.youtube.com/watch?v=1looEUDCLsQ
In Space, No One Can Hear You Girlboss: pitchfork.com/thepitch/katy-perry-space/

Find The Hmm at: www.thehmm.nl
Find Sam and Nymphet Alumni at: www.nymphetalumni.com

Jingle and sound design by Jochem van der Hoek. Editing by Salome Berdzenishvili. Cover art by Aspirin

Taters Gonna Tate. . .But Do Platforms Have to Platform?: Listening to the Manosphere

A white man holds a cigar in the center of the picture; his mouth, visible on the left edge of the picture, blows smoke rings.

In March 2025, shortly after the social media influencer and self-described misogynist Andrew Tate returned to the United States from Romania, where he and his brother Tristan had been held under house arrest for two years after being charged with human trafficking, rape, and forming a criminal group to sexually exploit women, his podcast Pimping H**s Degree was removed from Spotify for violating that platform’s policies.

According to the technology media outlet 404 Media, which first reported the news, some Spotify employees had complained in an internal Slack channel about the availability of Tate’s shows on their platform. “Pretty vile that we’re hosting Andrew Tate’s content,” wrote one. “Happy Women’s History Month, everybody!” wrote another. A change.org petition calling on Spotify to remove harmful Andrew Tate content, meanwhile, received over 150,000 signatures.

When asked for comment by the U.K. Independent, a Spotify spokesperson clarified that they removed the content in question because it violated the company’s policies, not because of any internal employee discussion. These policies state, in part, that content hosted on the platform should not “promote violence, incite hatred, harass, bully, or engage in any other behavior that may place people at risk of serious physical harm or death.”

Still, there is a veritable fire hose of Tate content available on Spotify. A search for the name “Andrew Tate” on the platform yields upwards of 15 feeds (and a music account) associated with the pro kickboxer-turned-self-help guru, many of which seem to be updated sporadically or not at all. Apple Podcasts, meanwhile, features an equally wide spectrum of shows with titles like Tatecast, Tate Speech, Andrew Tate Motivation, and Tate Talk. [Ed. Note: Normally there’d be links to this media–and the author has provided all of his sources–but we at SO! do not want to drive idle traffic to these sites or pingbacks to/from them. If you want to follow Andrew Salvati’s path, all these titles are readily findable with a quick cut-and-paste Google search.–JS]

With so many different feeds out there, wading into the Andrew Tate audio ecosystem can be a bewildering experience. There isn’t just one podcast; there’s a continuous unfolding of feeds populated by short clips of content pulled from other sources.

But this may be the point exactly.

Andrew Tate on Anything Goes With James English, CC BY 3.0 via Wikimedia Commons

As I learned from this article in the Guardian and these interviews with YouTuber and entrepreneur MrBeast (“MrBeast On Andrew Tate’s MARKETING” and “MrBeast Reveals Andrew Tate’s Strategy”), Tate achieved TikTok virality, in part, by encouraging fans to share clips of video podcast interviews – rather than the whole interview itself – on the platform.

“Now is the best time to do podcasts than ever before,” MrBeast said in one interview. “Now it’s like the clips are re-uploaded for months on months. It gets so many views outside of the actual podcast … I would call it the ‘Tate Model’ … Like I think if you’re an influencer, you should go on like a couple dozen podcasts. You should clip all the best parts and just put it on a folder and just give it to your fans. Like literally promote you for free.” Though it can be hard to tell exactly who uploaded a podcast to Spotify, it seems that something like this is happening on the platform – that fans of Tate are sharing their favorite clips of his interviews and monologues pulled from other sources.

In its “About” section, for instance, a Spotify feed called Andrew Tate Motivational Speech declares that “this is a mix of the most powerful motivational speeches I’ve found from Andrew Tate. He’s a 4 time [sic] kickboxing world champion and he’s been having a big impact on social media.” In another Spotify feed called Tate Therapy, posters are careful to note that they “do not represent Mr. Tate in any way. We simply love his message. So we put together some of his best speeches.”

Given that Spotify is increasingly a social media platform rather than simply an audio streaming service–users can collaborate on playlists and see what their friends are listening to–it follows that this practice of clipping and sharing Tate content may expand the influencer’s online footprint. It may also serve as insurance against the company’s attempts to remove content or completely deplatform Tate: surely Spotify can’t police all the feeds that it hosts.

So, what is it that Andrew Tate is saying – and how is he saying it?

To get a sense of why he has been called the “King of Toxic Masculinity,” and a “divisive social media star,” I had a listen to several of the interviews and monologues posted to Andrew Tate Speech Daily on Apple Podcasts, which, of all of the Andrew Tate audio feeds, is the most consistently updated.

The first thing to take note of is his voice. It’s brisk, aggressive, and carefully enunciated – it’s like he’s daring you to take issue with what he, an accomplished and eloquent man, is saying. Above all, listening to Tate feels like being spoken to as an inferior, because that is precisely what he preys on. His accent, moreover – now British, now American – is unique, lending itself to some unusual pronunciations that form part of his system of authority and charm.

One of Tate’s main arguments about what ails men today – and it is clear from his mode of address that he assumes he is talking to men exclusively – is that they are trapped in a system of social and economic “slavery” that he unimaginatively calls “The Matrix” after the film series of the same name. Though he is somewhat vague in his descriptions, in the podcast episode “Andrew Tate on The Matrix,” he explains that power, as it actually exists in the world, is held by elites who rely on systems of representation (language, texts) to effect their will. These systems of representation, however, are prone to abuse because they are ultimately subject to human fallibility. Tangible assets, like wealth, he reasons, are susceptible to control by “The Matrix,” as they can be taken away arbitrarily by the redefinition of decisions and the printing/signing of documents. His example, though it is a little hard to follow, is that if someone says something that the government doesn’t like, a judge can simply order that their house be taken away. Instead, Tate argues that individuals can escape “The Matrix” by building intangible assets (here, he gives no examples), which cannot be taken away by elites and their bureaucracy. It is a difficult path, he cautions (and here, he sounds sympathetic), and one that not everyone has the discipline to endure.

Tate gets a little more specific in the episode “Andrew Tate on The Global Awakening. The Modern Slave System,” in which he asserts that elites are using the system of fiat currency – a term that cryptocurrency supporters like to use to disparage government-issued currencies – to keep individuals “enslaved.” In this modern version of enslavement, he explains, individuals are forced to work for currency, but, since fiat currency is subject to inflation and other forms of manipulation, they only end up making the bare amount they need to survive. The result, he argues, is a system in which the rich get richer and the poor get poorer (this, of course, ignores the real possibility of shitcoin and other crypto manipulation schemes). It’s quite a populist message for a guy who is famous for his luxurious lifestyle. Still, his message here is consistent: the proper amount of discipline, a willingness to speak truth to power, and faith in God (he converted to Islam in October 2023) will result in an awakening of consciousness that will finally end the stranglehold that elites have on power – will finally break “The Matrix.”

On the other hand, Tate deems women incapable of the discipline required to break out of “The Matrix” – he seems to think that they are too materialistic, too distractible, too enamored of the chains that elites use to bind individuals to the system to see beyond them (see “Andrew Tate on ‘Fun’”). In his view, women are better off at home bearing children or fulfilling male sexual desires. (In an apparent demonstration of male dominance, Tate’s “girlfriends” often appear in the background of his videos cleaning house).  

For his part, Tate claims that his own legal troubles, and his own vilification in the press, are part of a coordinated campaign of persecution against him for exposing the way that the world really works (see, for example, “Andrew Tate: Survival, Power, and the System Exposed”). From this vantage, Tate seems to be acting as what the ancient Greeks called a parrhesiastes, someone who, as Michel Foucault writes, not only sees it as his duty to speak the truth, but takes a risk in doing so, since what he says is opposed by the majority. Indeed, often congratulating himself on his bravery in the face of “The Matrix,” Tate has suggested that his role as a truth teller might get him sent to jail (“Andrew Tate on the Common Man”), or worse (“Survival, Power, and the System Exposed.”) In such moments, he plays the martyr, adopting a quiet, yet defiant voice. 

Aside from the aspirational lifestyle he purveys – the fast cars, the money, the women, the flashy clothes, the jets, the mansions, the cigars, and the six pack – it seems to me that this parrhesia is a key part of what makes Tate popular among men and boys (as of February 2025, he had over 10 million followers on X [formerly Twitter]). What he reveals to them, though it is often muddled, is the way in which elites maintain social control under advanced capitalism. It’s all rather Gramscian in the sense that it is concerned with the hegemony of a dominant class, though, ironically, Tate seems too much of a capitalist himself to engage in Marxian social critique. Instead of offering a politics of class solidarity, Tate merely rehearses familiar neoliberal scripts about pulling oneself up by the bootstraps (see “You Must Constantly Build Yourself”), getting disciplined, going to the gym, developing skills, and starting a business. For Tate, life is a competition, a war, though most men don’t realize it.

And I think this is the key to understanding Tate’s parrhesia – it’s not only that he is speaking truth to power in his criticism of “The Matrix”; he also sees himself as speaking uncomfortable truths to his listeners, truths that they might not be ready to hear. As in the movie The Matrix, he says in “Andrew Tate on the Global Awakening,” some minds are not ready to have the true nature of reality revealed to them. In his perorations, therefore, Tate often takes a sharp and combative tone, accusing his listeners of complacency and complicity in the face of “The Matrix.”

“If I were to explain to you right here, right now, in a compendious and concise way, most of you wouldn’t understand,” he says in “Andrew Tate on The Matrix.” “And those of you who do understand will not be prepared to do the work it takes to then actually genuinely escape. But those of you who are truly unhappy inside of your hearts, those of you who understand there’s something more to life, there’s a different level of reality you’ve yet to experience … But if your mind is ready to be free, if you’re ready to truly understand how the world operates and become a person who is difficult to kill, hard to damage, and escape The Matrix truly, once and for all, then I am willing to teach you.”

Tate on Anything Goes With James English, CC BY 3.0, via Wikimedia Commons

For those persuaded by this line of thinking, or who are otherwise made to feel guilty about their complicity in “The Matrix,” Tate offers a special “Real World” course at $49 per month, which teaches students how they can leverage AI and e-commerce tools to earn their own money and finally be free.

And that’s really what it’s all about – all the social media influencing, all the clip sharing, all the obnoxious antics, and deliberately controversial statements – they are all calculated to raise his public profile (good or bad) so that he can sell the online courses that have made him and his brother Tristan fabulously wealthy.

It is for this reason that I don’t think that Spotify’s deplatforming of one of Tate’s shows will ultimately do anything meaningful to stem his popularity. If anything, the added controversy will likely confirm to his fans that he has been right all along – that the elites who are in control of “The Matrix” are so threatened by the truth that he tells about the world and about women that they will first deplatform him and then send him to jail.

No, we will only rid ourselves of Tate when he becomes irrelevant. This may happen if he ends up going to prison in Romania or in the UK (where he also faces charges of rape and human trafficking). But even then, there are many vying to take his place.

Featured Image: Close-up and remixed image of Andrew Tate’s mouth and arm, Image by Heute, CC BY 4.0

Andrew J. Salvati is an adjunct professor in the Media and Communications program at Drew University, where he teaches courses on podcasting and television studies. His research interests include media and cultural memory, television history, and mediated masculinity. He is the co-founder and occasional co-host of Inside the Box: The TV History Podcast, and Drew Archives in 10.

This post also benefitted from the review of Spring 2025 Sounding Out! interns Sean Broder and Alex Calovi. Thank you!

REWIND! . . .If you liked this post, you may also dig:

Robin Williams and the Shazbot Over the First Podcast–Andrew Salvati

“I am Thinking Of Your Voice”: Gender, Audio Compression, and a Cyberfeminist Theory of Oppression–Robin James

DIY Histories: Podcasting the Past–Andrew Salvati

Listening to MAGA Politics within US/Mexico’s Lucha Libre –Esther Díaz Martín and Rebeca Rivas

Gendered Sonic Violence, from the Waiting Room to the Locker Room–Rebecca Lentjes


Irony is an opportunity for ambivalence: Interview with Maya Indira Ganesh about her book Auto-Correct

In 2025, ArtEZ Press published Auto-Correct: The Fantasies and Failures of AI, Ethics, and the Driverless Car by Maya Indira Ganesh. I talked with Maya about the book — why and how technologies fail, the meaning of ethics within and outside technologies, and the ambivalence that comes with irony (as well as critique). The interview was recorded over Zoom on April 15th, automatically transcribed, and lightly edited for clarity.

In my PhD project, I keep thinking about how one can relate to the fact that algorithmic technologies err and fail all the time. All things fall apart and break down—that much is a truism. Yet, how we choose to make sense of it individually and collectively is a different matter. What initially drew my attention to Maya’s book is how she describes failures of self-driving cars as happening at different scales and moments in time. The idea of error-free technology is thus a dream, and yet not all failures are alike.

Dmitry Muravyov: Yeah, so to start us off, you mentioned how long this project has been going. For those doing PhDs and turning them into further projects or books, I’m curious: What question or intellectual concern has driven you throughout this process? Was there a thought you kept returning to—like, I need to put this out into the world because it’s important?

Maya Indira Ganesh: There are two dimensions to that. First, in Germany, you have to publish to complete your PhD; it’s not considered ‘done’ until you do. I used that requirement as a chance to turn my thesis into a printed book. Second, since finishing my PhD at Leuphana University, I’ve mostly been teaching. Almost exactly four years ago, as I was handing in my thesis, I was also interviewing for a job at this university. I was hired to co-design and co-lead a new master’s program in AI, Ethics, and Society.

Teaching AI ethics made me aware of what I put on reading lists—how to bring critical humanities and social science perspectives into conversations about technology, values, and AI. I noticed gaps in the literature. Not that I’m claiming to fill those gaps with my book, but there’s a standard set of citations on the social shaping of technology, epistemic infrastructures, and AI’s emergence. Teaching working professionals—people building tech or making high-level decisions—pushed me to ask: “How do I make theory accessible without diluting it?” They wanted depth but weren’t academics. So, I thought, “What can they read that’s not tech journalism or long-form criticism?” That became a motivation.

The other thing I’ve wrestled with is the temporality of academic research versus the speed of AI innovation. It’s about the politics of AI time. A big question asked of AI in general and driverless cars in particular is: ‘When will it arrive?’ You don’t ask that about most technologies because, say, a car is tangible—you see it, you know it’s here. But so much AI operates invisibly in the background. Its rhetoric is all about it always being almost here, ‘just around the corner’.

Credit: Maya Indira Ganesh

As an academic, though, timing doesn’t matter—unless you’re under the delusion your work will “change everything,” which, let’s be honest, few believe. But also, no one had written about driverless cars this way. Most books are policy or innovation-focused. I thought, “Why not a critical cultural study of this artifact?”

Dmitry Muravyov: It really makes me think of how, especially when people talk about regulation, metaphors like “we’re lagging behind” come up so often.

I’m really interested in how technologies fail, and obviously that’s a huge theme in your book—it’s right there in the title. I’ve been trying to make sense of one of your chapters in a particular way, and I’d love to hear your thoughts. You talk a lot about how driverless cars are kind of set up to fail in certain ways, and how all these accident reports are always partial, always uncertain.

But reading Chapter 2, I noticed you sort of map out why these crashes happen, and I think I’ve got three main patterns. First, there’s the human-machine handover failure—like when the human just zones out for a second and can’t take over when they need to. Then there are the computer vision gaps, where the car’s system just doesn’t ‘get’ what it’s seeing—objects just don’t register properly. And third, there’s this mismatch between the car and its environment, where the infrastructure isn’t right for what the car needs to work.

But then you also show how the tech industry tries to deal with these failures, right? For the handover problem, they push this whole ‘teamwork’ idea in their PR — making the car seem more human, more relatable. For the vision gaps, there’s all this invisible data work going on behind the scenes to patch things up. And for the infrastructure issue, they’re literally reshaping cities to fit the cars—testing them in the real world, not just labs.

So, my question is: Would you say these are basically strategies to compensate for the cars’ weaknesses? And do you think it’s mostly the tech industry driving these fixes?

Maya Indira Ganesh: Wow, yeah—that’s such a good summary, and you’ve definitely read the book! [laughs] You’re completely right; this is exactly it.

And yeah, these rhetorical moves are chiefly coming from the tech industry, because they’re the ones who really see these problems up close. But the way they handle it is interesting—it’s like they’re working on two levels:

Making it seem human. At one level, they’re saying, “Look, it’s just like a person!” Whether it’s comparing driving to human cognition, or even calling the software the “driver” for the car’s “hardware”—like the CEO from Waymo does. If you make it feel human, suddenly people are more forgiving, right?

Then there’s Andrew Ng from Baidu, who says, “Hey, this tech is still learning, be considerate—cut it some slack”! Which, okay—but why should I feel concern for a car? This works because cars feel familiar; cars are anthropomorphized anyway, and distinctly gendered at that. Cars, like boats, are given monikers and are usually ‘she’. We tend not to do this with an invisible credit-scoring algorithm.

The other move is the strategy of blaming actual humans. This isn’t new. Back when cars were first introduced, jaywalking laws were created to shift responsibility onto pedestrians for running out onto the street and disrupting the space being claimed for experimental automobility in cities and for new drivers. This was in the early days of automobility in the US, before traffic lights existed; people were unaware of how this new technology worked and were more familiar with horse-drawn carriages. Rather than regulate cars and drivers, the response was to blame the human: “You didn’t cross the road correctly. That’s why the car hit you.” There is a similar playbook now: “praise the machine, punish the human,” as Tim Hwang and Madeleine Elish put it. It’s this endless cycle of “Oh, the tech’s fine—you’re the problem.”

Dmitry Muravyov: This whole process really seems to be about adaptation, right? We humans are fallible beings, but in this context of coexisting with technology, it feels like our failures are the ones that need adjusting—we have to change to fit driverless cars, for instance.

But I’m curious, could we distinguish between more and less desirable types of failure? If we accept that neither tech nor humans can be perfect—that we’re all prone to fail in some way—does that open up new ways to think about these systems differently?

Maya Indira Ganesh: Good question. Actually, I touch on this in the book’s epilogue about the “real vs. fantasy” worlds of technology. When you focus on the real world, you have to confront failure—that breakdown is crucial for understanding how systems interact with human society. That’s why these technologies have to leave their controlled “toy worlds” and enter our messy reality, where they inevitably fail. That failure gives us valuable data about how the system actually works.

But here’s the tension: By dwelling in the fantasy of what the technology could be—that idealized future where everything works perfectly—we avoid grappling with its real-world flaws. The driverless car is interesting because it’s too tangible for pure futurism—you can’t pretend its failures are just “speculative risks” like you might with AI doom scenarios. Yet even with AVs, there’s still this tendency to say “Oh, the real version is coming later” to deflect from today’s problems.

So, in short: If we obsess over the technology’s potential, we don’t have to account for how it’s actually failing in material, accountable ways right now.

Credit: Maya Indira Ganesh

Dmitry Muravyov: It makes me curious—is it possible to envision technologies that recognize their intrinsic fallibility and try to account for it? Maybe in certain ways, rather than others, as your discussion of existential risk shows.

Following up on that, you discuss ethics in the book so well. You interrogate the assumptions and limitations of machine ethics, showing how it localizes ethics within computational architecture, making it a design problem to solve. I love how you describe it: “the statistical becomes the technological medium of ethics”—and you contrast this with “human phenomenological, embodied, spiritual, or shared technologies for making sense of the world.” Could you talk more about this opposition?

Maya Indira Ganesh: I think machine ethics is really interesting because it’s such a niche field that people don’t talk about enough. But it actually does a great job of showing what people are trying to do when they try to embed values into machines—to make decisions that align with certain ethics. But the thing is, this approach works at small scales, not for complex systems like driverless cars in cities.

Of course, we want that in some cases—like removing violent extremism or child pornography online. That’s clear-cut. But then you get into nuances: What if it’s a GIF mimicking beheading, but with no real-world groups or ideologies attached? Suddenly it’s not so simple.

The problem is, machine ethics—and a lot of tech ethics—assumes technology can be totalizing, seamless. We don’t want to deal with breaks or failures, or messy systems talking to each other. Right now, every wave of digitization just gets called “AI.” For 15 years, we’ve had digitized systems working (or not working) in different ways—now AI is being patched on top, often in janky ways.

Take public sector AI in the UK—there are a number of projects trying to apply LLMs to correct doctors’ note-taking, to make casework more efficient. But this is just responding to earlier failures of digitization! We have PDFs that were supposed to make documents portable, but now we’re stuck with stacks of uneditable forms. Every “solution” creates new problems.

So maybe we shouldn’t even call it “ethics” anymore. What we really need is to ask: What values are driving our societies? Efficiency? Profit? Innovation? These are ideological choices that get normalized. The point of my book is that ethics can’t just live inside machines—we need to ask how we want to organize our cities and societies, with all their messiness. Maybe LLMs could help facilitate those conversations, rather than pretending to be the solution. But we’re still figuring that out.

Dmitry Muravyov: When I first thought about this question, I was thinking about how you position ethics in two ways. On the one hand, as something technological and localized within computational architecture (the machine ethics project), and, on the other hand, as something more embodied and societal.

You seem to criticize machine ethics for not being “ethics” in that fuller sense. But now I’m wondering—are you actually saying that machine ethics can serve a purpose, we just shouldn’t call it “ethics” to avoid confusion? Would that be accurate?

Maya Indira Ganesh: Yes, exactly. The framing of “ethics” hasn’t helped us reckon with what kind of society we want to build. It either gets reduced to designing machines that mimic human decision-making (as if machines could create the social through their choices) or becomes corporate self-regulation theater, which we’ve seen fail as companies discard ethics when inconvenient.

Now, I’ll admit: Terms like “ethics” do have power. When you call something unethical, it activates people—no one wants that label. But we’ve overused these concepts until they’re hollow.

But here’s the key point: People are remaking society through technology—just not with “ethics” as we’ve framed it. Look at the U.S., where companies can now ignore AI safety under Trump. This isn’t about not caring—it’s about competing visions of society.

The Elon Musks and Chris Rufos have very clear ideologies about the world they want. And that’s what we need to confront: Not “ethics” as a technical problem, but the values and power struggles shaping our technological future.

So yes—we need value discussions, just not under the exhausted banner of “ethics.”

Dmitry Muravyov: There’s this interesting contrast in your reply between the ethical and the social that I want to explore further. Let me bring in my own experience too—I also teach technology ethics courses to engineers and computer scientists. I’ll play devil’s advocate a bit here, because while your book offers strong (and often justified) criticism of engineering ethics, I want to push back slightly.

That emphasis on individual responsibility you critique—it’s a weak point. Students tell me (or, more often, I imagine it is something they could say): “These ideas are nice, but eventually I’ll need a job, a paycheck, and I’ll have defined responsibilities within an organization.” Many so-called “ethical” issues in tech may be better addressed through labor organizing and unions rather than ethics courses.

But to defend ethics—even when we acknowledge how socially determined our positions are, there’s still an ethical weight to our decisions and relationships that doesn’t disappear. How do you see this tension between the social and ethical? Do you view ethics as having any autonomous space?

Maya Indira Ganesh: That’s a really good question, and it connects directly to what I was saying earlier. In teaching AI ethics to engineers, policy makers, even defense department staff, the core problem is treating ethics as something separable from the social, something we can formalize into machines. That’s why machine ethics fascinates me—it embodies this flawed approach.

Everything meaningful requires context. It resists automation. To your student’s dilemma—yes, we’re socially constrained, but there’s no substitute for personal reckoning. There are forms of social inquiry and ethical engagement that can’t—and shouldn’t—be automated.

This connects powerfully to Nick Seaver’s work about music recommendation algorithms. He studies these engineers who pride themselves on crafting “caring,” bespoke algorithms—until their startups scale. Suddenly, their intimate knowledge of musical nuance gets replaced by crude metrics and automated systems. What fascinates me is how they cope: Seaver finds that they perform this psychological reframing where the “ethical” part of their work migrates to some other more manageable domain so they can stomach the compromises required by scale.

Credit: Maya Indira Ganesh

Dmitry Muravyov: I think it’s indeed an interesting way to think about it: ethics has to be somewhere, but at the same time it can be in many places. So we can ask: what is the place for ethics in this particular time and space?

The last thing I wanted to discuss was the irony you explore. The way I made sense of it was seeing the “irony of autonomy” as a type of technological critique. Often, the traditional critical move is one of suspicion—unmasking what’s actually going on behind the hood. In technology studies and humanities, we’ve seen rethinking of critique—reparative critique, diffractive critique, post-critique.

But irony seems different. When I first read your piece introducing irony in the book, I caught myself smiling—it sparked something in me. How do you see this use of irony in relation to the history of technological critique? Especially given your earlier piece with Emanuel Moss about refusal and resistance as modes of critique.

Maya Indira Ganesh: The “irony of autonomy” (playing on Lisanne Bainbridge’s work (1983) on the ironies of automation) was my way of historicizing these debates, showing how we’re replaying similar responses to automation today. We perform this charade of pretending machines act autonomously while knowing how deeply entangled we are with them.

Over time, I’ve struggled with that irony, albeit not in a bad way. It connects to a melancholia in my other writing about our embodied digital lives, especially around gender and technology. There’s a strong cyberfeminist influence here—this Haraway-esque recognition of how technologies shape gendered existence.

I don’t think we’re meant to resolve this tension. Like Haraway and cyberfeminists suggest, we need to sit with that discomfort. Disabled communities understand this deeply—when you rely on technologies for basic existence, you develop a nuanced relationship with them. There’s no clean ethical position.

A disabled colleague once challenged me when I asked if she wanted better functioning tech: “Actually, no—if it works too smoothly, people assume it always will. The breakdowns create necessary moments to see who’s being left out.” In our resistance and refusal piece with Emanuel Moss, we were pushing back against overly literal critique. Resistance gets co-opted so easily—tech companies now use activists’ language! Refusal offers complexity, but isn’t a blueprint. You can’t exist outside these systems.

Irony is an opportunity for ambivalence, it is a politics of not turning away, while refusing to ever be fully reconciled with the digital.

Dmitry Muravyov: Sometimes I think when certain critical moves—like undermining or unmasking—are presented to audiences without humanities backgrounds, like computer science students… You can get this response where it feels like you’re taking the joy out of their work.

What I appreciate about irony as an alternative is that it lets people chuckle or smile first. Maybe through that smile, they can think: “Hey, maybe we shouldn’t automate everything.” That moment of laughter might plant the seed for a more ambivalent attitude.

Maya Indira Ganesh: Actually, I think critique has become largely about exposing corporate capture—it’s tied up with legal/regulatory battles now. I get this from friends and colleagues sometimes, “You’re not being hard enough on this.” But why can’t computing be fun? It is fun for many people. It creates beautiful things too.

That’s why I want that ambivalent space—to sit with both the problems and possibilities. If we open up how we think about our relationships with technology and each other… maybe we can make something different.

Dmitry Muravyov: There can still be joy at the end!

Biographies

Maya Indira Ganesh is Associate Director (Research Culture & Partnerships), co-director of the Narratives and Justice Program, and a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence (CFI). She draws on varied theoretical and methodological genres, including feminist scholarship, participatory methods, Media and Cultural Studies, and Science and Technology Studies, to examine how AI is being deployed in public, and how AI’s marginalised and expert publics shape the technology.

Dmitry Muravyov is a PhD Candidate at TU Delft, working in the AI DeMoS Lab. Drawing on philosophy of technology, STS, and media studies, he currently focuses on the political and ethical issues of algorithmic fallibility, a collectively shared condition of living with technology’s breakdowns, failures, and errors.

 

Band People with Franz Nicolay

Minor Compositions Podcast, Episode 28: Band People with Franz Nicolay. This episode is a recording of a seminar held at the University of Essex with Franz Nicolay on his book Band People, in which he explores the working and creative lives of musicians. He argues that to talk about the role of […]

16 May, 14:00 CEST Girl Online 🎀 Symposihmm #1 by The Hmm

[Repost, original event page here ]

Girl Online 🎀 Symposihmm #1

Performing as a girl online can be a powerful way to subvert the algorithm. And thanks to the whiplash of the girlboss epidemic, a meeker and cuter self-image is now taking hold. Trends like girl math, babygirl, and girl dinner reflect a tendency across genders to self-infantilise, a growing resistance to industrialized understandings of adulthood, often tied to economic strains and shifting life expectations, particularly amongst younger generations.

At the same time, the notion of girlhood itself is being questioned, reframed, and adopted in online spaces. As AI isolates our feeds even more by sorting us into predetermined categories, labels influence how we’re seen—and how we see ourselves. With machine learning gradually influencing more of our daily lives, how will our online actions and self-understandings change as a whole?

Afternoon programme  14:00 – 17:30

Today, we often make ourselves small online. Where the girlboss of yesteryear was on her grind to “have it all”, we now see a trend of flippantly shirking gendered responsibilities: we’re just girls, don’t expect us to cook a full meal every night (girl dinner). This trend of self-infantilisation is being embraced by men as well, who are posting about their boy apartments instead of man caves, well into their thirties. In a series of short talks and a panel discussion, we’ll explore online self-infantilisation. What is at the root of this phenomenon? And what are the benefits of this tactic?

With Maya B. Kronic, Mela Miekus and Mita Medri, and more…

  • 14.00 – 15.30, Workshop — Ink your Online Identity (few spots left!)
    • Explore the history of online identity and investigate digital self-presentation. Then design and apply temporary tattoos, reflecting critically on the digital self.
  • 14.00 – 15.30, Reading group (sold out)
    • Collective reading session delving into selected passages of Tiqqun’s text “Preliminary Materials for a Theory of the Young-Girl”. No prep needed!
  • 16.00 – 17.30, Panel — Self-infantilisation, with Maya B. Kronic, Mela Miekus, Mita Medri, and Jernej Markelj

౨ৎ Break ౨ৎ

Evening programme  19.00 – 21.30

Online, ‘girl’ is less a gender than a strategy—playful, ironic, and vulnerable behavior performs well under the algorithm. For this part of the program, we’ll explore ‘girl’ as a marketing tool, a power move, a form of desire, and a proven formula for online success. But is this strictly a product of today’s media environment, or does it echo earlier representations of girlhood? And what does the future of the girl look like in a world shaped by neural media?

  • Performance — Good Girl by Mireille Tap
  • Interview — Artist Martine Neddam about the Girl in 20th century media
  • Keynote lecture — K Allado-McDowell on the performance of girlhood and identity
  • Performance — djjustgirlythings

 

📅 Date: Friday 16 May 2025
🕗 Time: 14.00 – 21.30 CEST
📍 Location: SPUI25, Spui 25-27, Amsterdam, and online.
🎟 Tickets: Various categories from €7,50 to €27,50. Student and livestream tickets available ✨

Feel free to reach out to us at info@thehmm.nl for solidarity tickets.

Can’t join us in person in Amsterdam? Or just want to watch from the comfort of your laptop or phone? This event is hybrid so you can also buy a ticket to join Girl Online via our livestream website.

♿ Accessibility note

SPUI25 is located on the ground floor, there is a threshold at the door that staff are happy to help with. Unfortunately, there is no accessible toilet. During the event we can provide live closed captioning for those with hearing impairments and disabilities. Please reach out to us if you are joining on-site and have this access need, so that we can reserve a seat for you within view of the screen with captions. If you are joining online via our livestream, live captioning will be available as one of the streaming modes.

🎀 Girl Online is a full-day programme hosted by The Hmm, a platform for internet cultures, taking place across SPUI25 and University of Amsterdam locations on Friday 16 May. Expect talks, performances, workshops, and more. This first ever Symposihmm will dive into girl trends, self-infantilisation, girl as a strategy in digital spaces, and the future of girlhood. It is part of This is who you’re being mean to, The Hmm’s broader 2025 year theme, exploring gender expression online.

💙 This programme is kindly supported by the Creative Industries Fund NL, het Cultuurfonds, the Amsterdams Fonds voor de Kunst, and the Netherlands Institute for Cultural Analysis, and made in partnership with University of Amsterdam Media Studies and the Institute of Network Cultures.

L’économie solidaire en Haïti : femmes, territoires et initiatives populaires

A book by Christophe Providence

To read the book in its HTML version, click here.
To download the PDF, click here (forthcoming).

Haiti is a country where the social and solidarity economy (SSE) and the informal sector are the main engines of economic and social resilience. In a context marked by territorial inequalities, limited access to infrastructure, and a fragile institutional framework, these economic models allow millions of people, women in particular, to generate income, ensure their families' survival, and energize their territories.

This book offers an in-depth analysis of rural and urban entrepreneurial dynamics in Haiti, highlighting the central role of women in commerce, agriculture, and community services. Through a multidisciplinary approach combining territorial economics, development economics, and new economic geography, it questions the effectiveness of Haitian public policies and proposes courses of action for better integrating the SSE into territorial and economic development strategies.

Why does the SSE remain under-exploited in Haiti? What are the challenges and opportunities for structuring and strengthening the informal economy? How can public policies better support women entrepreneurs and local initiatives?

***

ISBN for the print edition: forthcoming

ISBN for the PDF: forthcoming

DOI: forthcoming

236 pages
Cover design: Kate McDonnell
Publication date: 2025

***

Table of contents

Résumé / Rezime

Making & Breaking 4: Psychogeographies of the Present

Reworking the Situationist heritage and applying it to our time, many of the approaches presented here extend beyond the city and physical environments into the virtual dimensions of digital socialities, identifying new forces of power and potential sources of emancipation.

At a time when it has become fashionable to celebrate the looming apocalypse as post- or transhuman payback, we urgently need to reinvigorate our desire for the future. Approaching cultural production in psychogeographic terms might help identify what blockages are at play in constraining contemporary art and culture to addressing what feels like only a handful of topics, in a handful of ways.

Contributors include: Experimental Jetset, Max Haiven, Liam Young, !Mediengruppe Bitnik, Dan McQuillan, Image Acts, Total Refusal, and Tristam Adams. Edited and published by our friends at CARADT, Sebastian Olma and Jess Henderson.

Click here to access the latest issue of Making & Breaking: Psychogeographies of the Present