Instagram Quietly Admitted Algorithm Bias… But How Will They Fight It?

Following the #BlackLivesMatter protests, Instagram have pledged to do better at promoting diversity on their platform, making big promises for change. In doing so, they obliquely admitted that algorithm bias exists on the platform (something they had never done before). Following my experience with the anti-censorship campaign EveryBODYVisible and my research on online moderation, I've once again spoken with Instagram to try to understand what this means. Read on to find out.

Disclaimer

The issue of algorithm bias due to my race is something that I, as a white person, have not experienced. However, I have been shadowbanned because of nudity and/or pole dancing, something that has in the past happened to a variety of people of colour and to Carnival dancers (according both to previous Instagram apologies and to testimonies shared with us at EveryBODYVisible).

This post originates from my ongoing activism work, both through this blog and through the anti-censorship campaign EveryBODYVisible, for which I'm the head of research. It also draws on my own research on online moderation and algorithm bias, and on my past interviews with Instagram following the censorship of my posts and of posts by people in the pole dance industry.

Instagram’s Announcement on Algorithmic Bias

Instagram has been crucial during lockdown – not only as a way to keep in touch or to promote ourselves, but also as a channel for the anti-racism movement to share resources and for people to voice their support for #BlackLivesMatter. After all, Instagram is one of the biggest social networks people rely on for promotion or for raising awareness of issues – which is why it's so important that we know how it works, and whether it has inherent biases in how it distributes content.

On June 15, Head of Instagram Adam Mosseri published the following post.


We stand in solidarity with the Black community. But that’s not enough. Words are not enough. That’s why we’re committed to looking at the ways our policies, tools, and processes impact Black people and other underrepresented groups on Instagram. Addressing the feedback we get has always been an integral part of how we work, and has helped us build a better Instagram for everyone. We’re going to focus on four areas:

  * Harassment
  * Account verification
  * Content distribution
  * Algorithmic bias

It’s not enough to simply celebrate or amplify Black voices on Instagram. We need to make sure we’re doing everything we can to protect them as well, and doing so requires we address the specific ways they’re impacted. Our focus will start with Black community, but we’re also going to look at how we can better serve other underrepresented groups. Instagram should be a place where everyone feels safe, supported, and free to express themselves, and I’m hoping this will get us closer to that. Link in bio for more.


On top of announcing donations towards diversity causes, Mosseri provided more information in this article on IG’s news page, highlighting four areas in which Instagram will attempt to make sure Black voices are heard.

  1. Harassment: Any work to address the inequalities Black people face has to start with the specific safety issues they experience day to day, both on and off platform. Then we need to address potential gaps in how our products and policies protect people from those issues.
  2. Account verification: We’re looking into our current verification criteria and will make changes to ensure it’s as inclusive as possible. Verification is an area we constantly get questions on: what the guidelines are, and whether or not the criteria is favoring some groups more than others.
  3. Distribution: We’ll review how content is filtered on Explore and Hashtag pages to understand where there may be vulnerability to bias. On top of that, we need to be clearer about how decisions are made when it comes to how people’s posts get distributed. Over the years we’ve heard these concerns sometimes described across social media as ‘shadowbanning,’ filtering people without transparency, and limiting their reach as a result. Soon we’ll be releasing more information about the types of content we avoid recommending on Explore and other places.
  4. Algorithmic bias: Some technologies risk repeating the patterns developed by our biased societies. While we do a lot of work to help prevent subconscious bias in our products, we need to take a harder look at the underlying systems we’ve built, and where we need to do more to keep bias out of these decisions.
Picture by Claudio Schwartz on Unsplash
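Before breaking down these promises, it’s worth pausing on what “algorithmic bias” actually means in practice. As a purely illustrative sketch (mine, not Instagram’s; the group names, removal rates and code below are all invented), consider a toy moderation classifier trained on historically skewed removal decisions. Even when two communities post identical content, the model learns to flag one of them more often, because the bias lives in the training labels:

    # Toy example only: a naive Bayes text classifier trained on skewed
    # historical moderation decisions. Groups, rates and data are invented
    # for illustration; this is not Instagram's actual system.
    import math
    import random
    from collections import Counter

    random.seed(0)

    def make_history(n=2000):
        """Simulated past moderation: both groups post the same content,
        but group B's posts were historically removed far more often."""
        data = []
        for _ in range(n):
            group = random.choice("AB")
            tokens = ["dance", "video", "#community" + group]
            removed = random.random() < (0.30 if group == "B" else 0.05)
            data.append((tokens, removed))
        return data

    def train(history):
        token_counts = {True: Counter(), False: Counter()}
        label_counts = Counter()
        for tokens, removed in history:
            label_counts[removed] += 1
            token_counts[removed].update(tokens)
        return token_counts, label_counts

    def removal_score(tokens, token_counts, label_counts):
        """Log-odds the trained model assigns to 'remove this post'."""
        score = math.log(label_counts[True] / label_counts[False])
        for t in tokens:
            p_rem = (token_counts[True][t] + 1) / (sum(token_counts[True].values()) + 1)
            p_ok = (token_counts[False][t] + 1) / (sum(token_counts[False].values()) + 1)
            score += math.log(p_rem / p_ok)
        return score

    token_counts, label_counts = train(make_history())
    caption = ["dance", "video"]
    for group in "AB":
        s = removal_score(caption + ["#community" + group], token_counts, label_counts)
        print("identical caption with #community" + group + ":", round(s, 2))
    # The same caption scores far higher for removal with group B's hashtag:
    # the model has learned the historical bias, not anything about content.

None of this requires malicious intent: skewed training data alone is enough, which is why Mosseri’s pledge to take a harder look at the underlying systems matters more than any statement of values.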

Let’s Break It Down

Why is Mosseri’s post interesting? Because, in every interview they have given me so far, Instagram have denied discriminating against specific communities, and in February Mosseri himself said that the shadowban was not a thing – a claim the Huffington Post later called him out on as a lie.

The above pledges are an oblique way of admitting that the bias – which communities ranging from athletes to sex workers, and from sex educators to models, spent 2019 and 2020 raising awareness of – was actually in place.

While promises to improve content filtering and moderation to avoid the further marginalisation of communities are very much welcome – and they are what I, EveryBODYVisible and many others have been demanding since last summer – this is once again an example of Instagram’s sly and clever PR machine at work.

Picture by Kate Torline on Unsplash

Instagram have basically been gaslighting audiences into thinking that the shadowban, algorithm bias and censorship were just their imagination… only to admit, months later, that these existed, without ever quite saying so. By burying the admission in a convenient promise of action, made under the scrutiny brands have faced in the aftermath of #BlackLivesMatter, Instagram save face by promising change: the big news isn’t the bias, it’s the promise to do something about it.

Of course, I don’t mean that this bias is deliberate. But it has been felt by a variety of communities, and turning its acknowledgement into a PR device during the already critical #BlackLivesMatter battle is frustrating.

But How Will IG Actually Tackle Algorithm Bias?

While the above promises reflect research – including my own – documenting the issues with and consequences of algorithm bias, they essentially state the already obvious without clear action points. They do not outline the “gaps” that leave Black people and other users vulnerable to harassment. They don’t provide insight into how Instagram’s algorithm furthers that bias and that vulnerability. And they do not say how Instagram is going to keep bias out of its systems in the future. This is just another example of brands posting their variation of a black square without much more ‘meat’ to the promise.

Of course, Instagram cannot reveal too much: that would mean divulging their own business secrets. But the lack of clarity is striking. So, once again, I got in touch with their PR team who, after a hiatus of almost a year, actually replied. Unfortunately, once again, I received some more PR pledges and the promise that more information would be shared publicly in due course.

Instagram have told me that they are committed to better serving what they called underrepresented groups using the app, going beyond race to focus on other characteristics too.

They acknowledged the feedback received from communities such as LGBTQ+ groups, body positivity activists, artists and adult performers, saying this has helped them build a more inclusive product… I guess we’ll be the judge of that later on.

They concluded by saying that their goal is for IG to be a place where everyone feels safe, supported, and free to express themselves, and that they’re hoping this work will get us closer to that goal.

What Can Platforms Do To Improve Diversity and Avoid Excessive Censorship?

Instagram’s PR team referred to it as a “place”, which is a welcome definition as opposed to the murkier ‘platform’ – a term that, in the past, portrayed social networks as tools for freedom of expression while relieving them of most responsibilities around regulation. However, it’s worth thinking about what type of place, or space, Instagram is.

In one of my most recent papers, I describe social media platforms like Instagram as a spatial hybrid: a space that has assumed characteristics of civic spaces (e.g. hosting debate and being open to everyone with access to it) but is owned by corporate entities. A ‘corpo-civic’ space.

In offline corpo-civic spaces (like privately owned malls, theme parks, or public spaces surveilled by corporate CCTV firms), excluding or punishing citizens without legal grounds is considered a violation of human rights. So, in my paper, I use human rights law to define users’ rights and companies’ responsibilities within a corpo-civic space on social media. In the corpo-civic spaces social media users currently find themselves in, they should be able to:

  1. Post different, challenging, shocking opinions without being censored if they do not infringe or limit other people’s rights;
  2. Share content that is within community guidelines even if it is ‘borderline’;
  3. Not be discriminated against on the basis of gender, race, sexual orientation, religion and the like, in line with Article 14 of the European Convention on Human Rights;
  4. Receive accurate, thorough and fair explanations on why and how their content is being used or moderated;
  5. Appeal or opt out of decisions made about them if they think they are unfair.

The above points would be more consistent with international human rights standards governing other types of speech and expression, and would more successfully deliver on Instagram’s own promise of a space where users can express themselves. To achieve this, social media giants operating in a corpo-civic space should:

  1. Avoid harming and discriminating against their users;
  2. Be transparent about their decisions, moderation teams and bias;
  3. Provide as much clarity as possible, to individual users and to the wider world, about the ins and outs of their moderation and data usage, allowing for repeated appeals if necessary;
  4. Work with user communities to introduce moderation that is fair and diverse and represents as many perspectives as possible;
  5. Be accountable for their actions and decisions to users, the industry and governments.

More recommendations can be found in my latest paper, published in Feminist Media Studies and available open access here.
