It’s been an eventful week in cyber sexy land – she says, on a Wednesday night. I was interviewed by the BBC’s amazing “unofficial porn expert” Thomas Fabbri for an article on Instagram censorship and quoted in The Next Web‘s piece alongside indie porn director Erika Lust. Salty World leaked Facebook’s community guidelines for nudity – apparently inspired by Victoria’s Secret ads (SMH). #EveryBODYVisible, the campaign I co-founded, was also recently featured in the Huffington Post and Daily Mail after women on Instagram began changing their gender to male at our suggestion and noticing more engagement with their posts. Aaand I won a flying, penis-shaped award at the 25th Sexual Freedom Awards for my anti-censorship and anti-male-gaze activism. So, since in just over a month I’ll be speaking about algorithm bias at the Algorithms For Her Conference at King’s College London, I thought I’d give you a preview of my main argument: that algorithms perpetuate something that is already rife in our society – whorephobia. Which is why I will hereafter refer to them as algwhoreithms.
The Democratising Promise of The Internet
So let’s put this all into context, shall we? The early 2010s gave us the Arab Spring, a variety of hashtag-based movements and changes in news consumption and human interaction. Suddenly, breaking the news via Twitter, sparking debate via Facebook and #breakingtheinternet via Instagram became a thing. There was this idea that, through social media, a new, better democracy was being made online, by private citizens, who suddenly had a voice to raise against the powerful. Fast-forward to 2019, after Cambridge Analytica and the Fappening, and it’s dawning on us: social media aren’t the heralds of democracy we thought they were. Could it be that, instead of giving voice to the voiceless, the social networks we know and love are actually enforcers of the status quo?
Instagram Censorship
Recently, queer brands and sex businesses have lamented being unable to advertise on social media platforms like YouTube and Instagram. The accounts of sex workers, adult performers and artists were disabled after incels campaigned to get their profiles taken off social media platforms in a morality ‘purge’. In August, Instagram (and, therefore, its parent company Facebook) had to apologise both to pole dancers, through this blog, and to Carnival dancers for censoring their posts. In both cases, the platform denied wanting to target specific communities, arguing content and hashtags were moderated “in error”. Yet, error or not, the censorship vs. freedom of expression dilemma at the heart of social media moderation today suggests that it’s social media’s own infrastructure that favours certain users over others.
In Facebook’s 2018 Blueprint for Content Governance and Enforcement, Mark Zuckerberg stated that the media giant had learnt how “borderline” inappropriate content generates more traffic. Therefore, Facebook now plans to regulate what it defines as “borderline” – e.g. nudity or violent speech – from the start, to prevent this interaction from ever happening. While this might sound great, in the Blueprint Zuckerberg also revealed that the platform’s algorithms trained to detect what’s “borderline” have become almost foolproof at detecting nudity… but don’t do nuance and are still ‘learning’ how to detect hate speech.
The repercussions on platforms like Facebook and Instagram have been felt strongly. On Instagram, this has resulted in the “vaguely inappropriate” content moderation policy, also known by users as the ‘shadowban’ – meaning posts aren’t shown on the Explore page, preventing users from discovering new content. This policy has been hitting artists, sex workers, brands and even fitness studios – who previously sold and advertised through Instagram – straight in their livelihoods, and has sparked debate about the algwhoreithms’ main targets.
Inappropriate or Discriminating?
The definition of inappropriateness seems to affect very specific users. Instagram defines the following as inappropriate:
- Content that shows sexual intercourse
- Genitals
- Close-ups of butts that are fully naked
- Some photos of female nipples (which ones isn’t clear), although photos of post-mastectomy scarring, women breastfeeding or pictures of breasts with the nipples censored are OK.
Instagram claim they do allow photos of paintings, sculptures and other art depicting nude figures. Yet male nipples don’t get censored by the platform, contrary to female ones. Throughout 2018 and 2019, hashtags such as #woman and #femalefitness were censored (and reinstated after users complained to the platform) while #malefitness and #man went untouched.
Furthermore, on a variety of platforms, cyberflashing (read: unsolicited dick pics) has become commonplace, with many women lamenting that they have felt violated or guilty for receiving them – often through platforms like Instagram or Facebook, but also via dating apps. Yet, since these pictures are usually sent via direct message, there is no function on platforms like Instagram to report them: reported profiles often aren’t disabled because, obviously, most users would not post that type of content on their feed.
The Failure To Regulate Hate Speech
The hate speech Facebook’s algorithm is still learning to regulate is defined by the Council of Europe’s Recommendation No. R (97) 20 of the Committee of Ministers to Member States on “Hate Speech” as:
“covering all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.”
Council of Europe
Hate speech can wreak havoc on public debate, making us accept the demonisation of individuals and groups as normal. The step from “I can say whatever I want about you” to “I can do whatever I want to you” isn’t that far-fetched – as proven by the many IRL atrocities that have been linked to online propaganda as of late.
After the shooting in El Paso, hosting platform Cloudflare removed 8chan – the extremist site the shooter and many other extremists visited – from its client list. However, as Aaron Sankin pointed out, a variety of hosting platforms refuse to censor hate speech and continue taking money from extremist sites in the name of freedom of expression. He writes: “America is a country without hate speech laws, one built on the premise that it’s not the government’s job to decide what types of speech should be prohibited,” and platforms themselves refuse to stop doing business with companies and groups that spread hate.
Unfriendly Infrastructure (for Women and Minorities)
By unfriendly or “aggressive infrastructure”, a variety of authors and I mean that social networks are inherently built to drive certain targets off them – and that even if they don’t do so intentionally, they reach that result by refusing to solve certain issues while prioritising others.
In short, the Internet’s own infrastructure is facilitating behaviours such as outright aggression – without platforms doing much to prevent it. Harvey, for instance, finds that IRL inequalities are being replicated online:
“inattention and indifference to how participation may be inhibited by designed affordances and functionality, likely due to how these interactions are linked to platform profitability, is what makes the Internet’s built environment one that is increasingly an ‘aggressive architecture.’”
She adds that “‘active inactivity’ in dealing with toxic and hateful speech and action in the regulation of these sites is what becomes aggressive architecture as the concerns, needs, and well-being of publics continue to go unaddressed despite their visibility.” Lawson, too, writes that more often than not an online harassment target’s IRL vulnerability has followed them online, showing how platforms’ own infrastructure leaves users vulnerable to harassment.
What I Mean By Algwhoreithms
By choosing to refer to algorithms as algwhoreithms, I mean that Internet infrastructure is replicating something already rife in society: whorephobia. And it’s doing so by spotting anything that might be even remotely related to sex work or female sexuality, then demonising and hiding it, either through the shadowban or through outright deletion – in short, through algwhoreithm bias.
In my interview with the East London Strippers Collective, the founder Stacey defined whorephobia as “when one woman judges another by her own personal standards, believing she is somehow better, cleaner or of higher morals because she would never… such-and-such.” I would like to extend that to the fear public figures, social media platforms, ‘respectable’ members of society have of anything remotely sexual, inevitably dubbing it as a taboo and as something shameful.
All the above leads me to say that it’s the social networks’ own infrastructure that doesn’t look after its most vulnerable users, who need a voice the most – women, sex workers, people who don’t conform to cis straight ideas of gender, the LGBTQIA+ community, people of colour. Everyone that has somehow been made to feel ‘taboo’. That’s aggressive infrastructure right there: platforms prefer censoring women’s bodies, minorities, queer accounts and any businesses even remotely related to sex work as opposed to investing in more nuanced moderation – replicating IRL inequalities.
Although social networks like Instagram have denied this, many link this obsession with nudity, as opposed to hate speech, back to FOSTA/SESTA, the package of bills approved in the US to tackle sex trafficking. While it has been argued that it does nothing of the sort, the legislation might change social media as we know it, making platforms liable for what’s posted on them.
Platforms got scared, and they went after nipples. And isn’t it worrying that the repercussions of a US-based law that has already been contested are being felt by users all over the world?
Instagram’s constant lack of clarity, and its refusal in interviews to explain how its algwhoreithms work, only contributes to more anger and more censorship. If users do not know what counts as inappropriate, they can’t profit from the platform. It’s like buying a product without the instruction leaflet: completely counterproductive – except that this product uses our content to make a profit.
The algwhoreithms’ difference in priorities leads one to think that women’s and minorities’ right to use their bodies as they please comes second to extremists’ right to share their inflammatory posts. The fact that platforms have decided to regulate nudity first paints bodies – especially female and queer bodies – as more problematic, and more of a threat, than hate speech. This is wrong.
We Need Nuance and International Law To Distribute Power
The fact that – algwhoreithms or not – social networks prefer to regulate nudity over hate speech says a lot about the platforms’ agendas. Nudity, in many ways, can be an expression of freedom and tolerance. Hate speech often makes us accept and normalise the most violent of words, with real-life consequences. In my opinion, it is no coincidence that, with the far-right turn politics is taking in countries such as Italy, Brazil, Britain and the US, regulating nudity is becoming a priority over hate speech – because hate speech has become a propaganda tool, a propaganda that can be fed through (and paid for on) the very social networks that are meant to be regulating it.
In my research on online abuse in high-profile criminal cases, I speak of striking a balance between freedom of expression and censorship, a balance based on context – be it the contextual vulnerability of targets, or the context in which some content is posted. I propose striking this balance using human rights law: according to Article 10 of the European Convention on Human Rights and Article 19 of the International Covenant on Civil and Political Rights, publishing content that is shocking or offensive (within reason) is covered by freedom of expression.
The moderating approach many social networks are currently adopting – such as Facebook and Instagram’s “Community Guidelines” – is lazy and unbalanced, tipping the scales towards the far right and hate speech and against women and minorities. That goes against what social media claimed to have been born for: discovering new, different voices; a tool for freedom. Right now, it only promotes one category’s freedom.
Allowing platforms to continue to regulate content through unspecified community guidelines is dangerous. Social networks are businesses – businesses that have been able to influence voter behaviour and change public opinion – and they will naturally look after their profits. Letting them decide what’s inappropriate or not, without targeted, fair and internationally agreed upon laws, would mean placing an awful and scary amount of power in their hands.
New Developments
It was recently revealed that Instagram will introduce an age gate for under-18s, related in particular to sexual content. It remains to be seen how teens will trick the algwhoreithm – and whether people sharing SFW posts will still be affected.
In the meantime, women all over the world began changing their gender on Instagram after a prompt from @everybodyvisible.
Instagram has so far denied this has anything to do with drops or spikes in engagement, recycling the same old comment to a variety of news outlets:
“We want to make sure the content we recommend to people on Instagram is safe and appropriate for everyone,” a spokesperson for Facebook said in a statement. “Ensuring women feel heard is an essential part of that effort. Gender information from profiles has no impact on content we filter from hashtags or the Explore page.”
IG in the Daily Mail
Oh, and in the latest version of Facebook’s community guidelines, the platform tells users they cannot offer or ask for nudes, sexual content or sex chat using “commonly sexual emojis” or “regional sexualised slang” – in short, no peaches or aubergines for you. Soz.
What I’ve Written About Algwhoreithms So Far
- My first interview with Instagram, where they fed me non-answers about community guidelines by way of their press team, denying the shadowban was a thing
- My interview with Exotic Cancer, where she talks about her experience of account deletion and shadowban
- An analysis of what the shadowban means for social media
- Comments from Instagram, explaining how they ban hashtags
- Instagram’s apology to pole dancers through this blog
- Behind the scenes of the #EveryBODYVisible campaign
Watch My YouTube Video Explaining The Shadowban
What Do You Think?
Do you think algwhoreithms voluntarily target women and minorities or do you think it’s part of the law of unintended consequences? Do you completely disagree? Would love to hear what you think [be nice please].