Why Are Tumblr, Twitter, And YouTube Blocking LGBTQ+ Content?

By Jay Costello | The Establishment | July 20, 2017
https://theestablishment.co/why-are-tumblr-twitter-and-youtube-blocking-lgbtq-content-e54e7acf4b5c/

For years, LGBTQ+ content has been categorized as NSFW by social-media platforms.

Photo: Unsplash/Wesson Wang

Last month, Tumblr joined several other social-media outlets in actively celebrating Pride, sharing a post encouraging LGBTQ+ content. That same month, it also introduced a new measure called Safe Mode, intended to give users “more control over what you see and what you don’t.”

Ironically, it looked to many users like one of the things that triggered the new filter was LGBTQ+ content.

In theory, Safe Mode hid “sensitive content” (later clarified to mean “specifically, nudity”) from those who had the filter turned on. It was optional for most users, but mandatory for those under 18. In reality, and to be blunt, the algorithm simply did not work. It failed to block some nudity, sporadically hid everything from educational PowerPoints to gifsets of video games to pictures of cute animals — and, crucially, routinely censored LGBTQ+ content, regardless of how non-sexual it might be.

When the queer community pushed back, Tumblr apologized and hastily clarified that the problem was the unintentional result of what’s known as “false flags.” In a post, the company wrote:

“The major issue was some Tumblrs had marked themselves as Adult/NSFW (now Explicit) as a courtesy to their fellow users, and their perfectly safe posts were getting marked sensitive unintentionally.”

Essentially, the algorithm had initially assumed that every post shared by someone who self-identified as “Explicit” was sensitive, and this assumption was sweeping up some LGBTQ+ content. In response to the outcry, Tumblr removed it, so that posts are now judged only by their content rather than by who created or shared them. The company also gave some details on the algorithm itself, which uses photo recognition to detect nudity, and described it as “not perfect.”
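To make the reported change concrete, here’s a minimal sketch in Python of the two policies as Tumblr described them. Everything in it (the Post type, the image_contains_nudity stand-in, the function names) is a hypothetical illustration, not Tumblr’s actual code:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Post:
    author_marked_explicit: bool  # the blog self-identified as Adult/NSFW
    images: List[Any]

def image_contains_nudity(image: Any) -> bool:
    """Stand-in for the photo-recognition classifier that Tumblr
    itself described as "not perfect"."""
    raise NotImplementedError

# Original policy: any post from a self-flagged "Explicit" blog was
# treated as sensitive, regardless of what the post contained.
def is_sensitive_original(post: Post) -> bool:
    if post.author_marked_explicit:
        return True  # the "false flag": benign posts hidden by association
    return any(image_contains_nudity(img) for img in post.images)

# Revised policy after the outcry: judge the content itself,
# not whoever created or shared it.
def is_sensitive_revised(post: Post) -> bool:
    return any(image_contains_nudity(img) for img in post.images)
```

The one-line difference between the two functions is the whole of the “false flag” problem: sensitivity was inherited from the account instead of being computed from the post.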

Ongoing changes do appear to have made the algorithm more successful — or at least less twitchy. Still, people were right to worry.

For years, and across many of the most prominent social-media platforms, LGBTQ+ content has been categorized as NSFW. Back in March, YouTube creators found that LGBTQ+-adjacent videos they had created were being hidden from viewers by the company’s own safety option, called “Restricted Mode.” This included coming-out stories, educational content, and even a video titled “GAY flag and me petting my cat to see if youtube blocks this.”

For years, and across many of the most prominent social-media platforms, LGBTQ+ content has been categorized as NSFW.

YouTube has shared, vaguely, that it uses “community flagging, age-restrictions, and other signals to identify and filter out potentially inappropriate content.” In response to criticism over the blocking of LGBTQ+-related videos, YouTube sent out a statement that illuminated little about the problem itself, or about how exactly it hoped to address it.

The company did seemingly make some changes, as fewer videos are now being blocked. But months later, issues remain. The aforementioned cat video, for one, continues to come up as blocked. As of mid-July, searching “gay” with the filter off returned a wide variety of videos, with top results including documentaries, coming-out videos, and same-gender kisses with hundreds of thousands of views. With the filter on, those videos disappeared, and the remaining top results averaged a far smaller viewership.
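YouTube hasn’t published how those signals combine, but one plausible shape for “community flagging, age-restrictions, and other signals” is a simple scoring function over per-video inputs. The sketch below is an illustration only; the signal names, weights, and threshold are assumptions invented for clarity, not YouTube’s implementation:

```python
from dataclasses import dataclass

@dataclass
class VideoSignals:
    community_flags: int     # number of viewer reports
    age_restricted: bool     # uploader- or reviewer-applied restriction
    classifier_score: float  # 0.0-1.0 output of some content model

RESTRICTED_THRESHOLD = 0.5   # invented cutoff, for illustration only

def hidden_in_restricted_mode(video: VideoSignals) -> bool:
    """Decide whether Restricted Mode hides a video, in this sketch."""
    if video.age_restricted:
        return True
    # Each flag nudges the score upward, so a biased classifier or a
    # burst of coordinated flagging can push a benign coming-out video
    # past the threshold just as easily as actual adult content can.
    score = video.classifier_score + 0.05 * video.community_flags
    return score >= RESTRICTED_THRESHOLD
```

If anything like this is in play, it would also explain why the same video can flip between visible and hidden over time, as flags accumulate or models are retrained.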

Long before its latest hiccup, Tumblr had similar issues with filtration. In 2013, searching for tags like “gay” and “lesbian” returned no results on certain mobile versions of the site, because the tags were flagged as inherently NSFW. In response to backlash at the time, the company wrote that “the solution is more intelligent filtering which our team is working diligently on. We’ll get there soon.” According to a Reddit thread, this was still a problem in August 2016.
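The 2013 behavior is consistent with blanket tag filtering: a static list that treats identity terms as inherently NSFW and suppresses every result, no matter what the tagged posts actually contain. A toy sketch (the list contents and names are hypothetical, not Tumblr’s code) shows how that approach erases benign content wholesale:

```python
# A hypothetical blanket tag filter of the kind the 2013 behavior suggests.
NSFW_TAGS = {"gay", "lesbian"}  # identity terms treated as inherently unsafe

def tag_search(tag: str, posts: list) -> list:
    """Return posts matching a tag on the filtered mobile site."""
    if tag.lower() in NSFW_TAGS:
        return []  # coming-out stories, support threads, news: all suppressed
    return [post for post in posts if tag in post["tags"]]

# Under this policy, tag_search("gay", posts) is empty even when every
# matching post is a safe-for-work coming-out story.
```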

Twitter faced its own criticism this June for flagging tweets containing the word “queer” as potentially “offensive content.” Like YouTube, it replied vaguely, saying simply that it was “working on a fix.” As with Tumblr, the issue was particularly glaring for taking place during Pride Month, when the company was otherwise touting its LGBTQ+ inclusivity.


The question of why this keeps happening is a complicated one. Because social-media companies tend to be tight-lipped about how things work on the back-end, we have limited information on the source of these glitches. In some cases, as with Safe Mode, the problems do seem to stem from technical flaws. But it’s hard to imagine human bias not playing a role in at least some of this censorship.

In any case, social-media companies should be taking these filtration issues seriously—their impact on queer people, and especially queer youth, cannot be overstated.

Online, LGBTQ+ communities are far more likely to be welcoming to people of all ages and identities; they are places where minors can explore their sexuality and gender, learn about themselves, and get invaluable support. Tumblr in particular has long been the home of a robust LGBTQ+ community — it’s very difficult to use Tumblr without being exposed to the idea that not only are LGBTQ+ people everywhere, but they’re loud. This can show a questioning person that being LGBTQ+ is nothing to be ashamed of.

YouTube is another crucial resource for young LGBTQ+ people, as are a thousand smaller websites that anyone can access through a simple search — provided their search engine doesn’t categorize the query as “unsafe,” as the “kids-oriented” search engine Kiddle once did for “bisexual,” “transgender,” and other terms.

The internet’s imperfect algorithms are not created in a vacuum. They are built by people, and people bring their assumptions with them. When programmers make these kinds of choices or mistakes, they don’t only make finding resources more difficult. They undermine the hard work of content creators who often ask for nothing in return except to know that an LGBTQ+ kid feels a little better. Instead of getting a comforting word from an understanding person, that kid gets a warning that their identity is considered unsafe by an unfeeling corporation.

The internet’s imperfect algorithms are not created in a vacuum.

There’s also a troubling irony at play in all this — not only are these social-media platforms failing their LGBTQ+ users with their “security” settings, they’re doing it to disguise the fact that they aren’t truly making their sites safer. Tumblr is absolutely not a safe website. Bots that spam porn and literal Nazis abound, and Safe Mode does nothing to prevent direct person-to-person harassment, including harassment aimed at minors. Ditto Twitter, where white-supremacist activity has increased 600% since 2012, and Daesh continues to have a presence. Thousands of people, mostly women, LGBTQ+ people, and people of color, experience torrents of harassment every day. Twitter’s solution — hiding “sensitive content,” with filters blocking the word “queer” — has paradoxically hurt the very groups it should be working to protect.

Our community is rightfully angry at this continued erasure. We build our support, education, and love on platforms owned by corporations who don’t seem to care about us. Social-media platforms may pay lip service to inclusivity, but in the end, we’re left only with systems that flag our identity as wrong.

We will continue to fight back against this censorship, because we understand a fundamental truth: Our existence is not sensitive content.
