Social platforms failing to keep LGBTQ users safe, GLAAD says
A top LGBTQ advocacy organization, responding to an escalation of hateful attacks on platforms such as TikTok and Twitter, scored the major social media sites on their safety for vulnerable users. All of the networks received a failing grade.
The top five social networks all scored below 50% on safety for LGBTQ users, according to a new ranking in the Social Media Safety Index developed by GLAAD, an organization that fights hate against gay, lesbian, bisexual, transgender and queer people. The report, released Wednesday, ranks Instagram the least bad at 48%, followed by Facebook, Twitter, YouTube and TikTok.
“Basically, the entire industry is failing LGBTQ people,” said Jenni Olson, senior director of GLAAD’s social media safety program.
The new scores are based on factors ranging from whether a platform offers gender pronoun options on profiles to its content moderation practices and the diversity of its workforce. Companies have made some improvements over the past year, and GLAAD hopes its report will be a catalyst for more change, Olson said.
In February, TikTok updated its Community Guidelines to explicitly prohibit misgendering, deadnaming and misogyny, after prompting by GLAAD and UltraViolet, a female-empowerment organization. Deadnaming is the practice of referring to a trans person by the name they were assigned at birth. GLAAD calls it “an invasion of privacy that undermines the trans person’s identity, and can put them at risk for discrimination, even violence.”
On Twitter last Tuesday, actor Elliot Page’s deadname trended for 45 minutes before the platform removed it, according to BuzzFeed News, even though deadnaming is explicitly prohibited under Twitter’s hateful conduct policy.
A Twitter Inc. spokesperson said the company already takes feedback from GLAAD and has “engaged with GLAAD to better understand their recommendations and are committed to an open dialogue to better inform our work to support LGBTQ safety.”
A spokesperson for Alphabet Inc.’s Google, which includes YouTube, said the company has made “significant progress in our ability to quickly remove hateful and harassing content, and to prominently surface content in search results and recommendations from authoritative sources.”
A spokesperson for Meta Platforms Inc., which owns Facebook and Instagram, said the company prohibits violent or dehumanizing content directed against LGBTQ people and removes claims about someone’s gender identity upon their request.
“We also work closely with our partners in the civil rights community to identify additional measures we can implement through our products and policies,” the spokesperson said.
LGBTQ users face online harassment more than any other group, according to the Anti-Defamation League, which looks at civil rights broadly, including antisemitism and bias. This year’s results of the group’s annual Online Hate and Harassment survey show 66% of LGBTQ respondents experienced harassment, compared with 38% of non-LGBTQ respondents.
This connects to real-world anti-LGBTQ attacks and even legislation, GLAAD says. The group’s report states that Republican lawmakers have proposed 325 bills GLAAD considers anti-LGBTQ since the beginning of this year.
“There’s a direct line, in terms of the anti-LGBTQ rhetoric on social media platforms, particularly from powerful right-wing politicians and right-wing media accounts and pundits,” Olson said.
León Powell, bilingual community specialist for Trans Lifeline, a nonprofit trans hotline and microgrant organization, routinely removes comments from the organization’s social media posts that say things like: “It’s a mental illness,” “All trans people need to kill themselves,” and “You’re grooming children.” Powell, who uses they/them pronouns, spends hours every week deleting and reporting these comments, and estimates that three out of every five posts receive this kind of hate.
When Powell does report harmful comments to social media platforms, they said, the platforms often decline to act, responding that the comment was not hate speech but simply someone’s opinion.
Powell said seeing these comments is hard, but fortunately they have support resources that not everyone has.
“If you go to YouTube and look for just ‘transgender,’ you’re going to see as many if not more videos against trans people and trying to debunk transgenderism as something fake or people pretending,” they said. “And so I can only imagine how heartbreaking and confusing and damaging that can be for a young trans person or a trans person in general trying to come to terms with their identity.”
A YouTube spokesperson said the site surfaces mostly authoritative sources in search results for “transgender” and that if someone searches for “conversion therapy,” YouTube will provide context that it’s a dangerous and discredited practice.
Powell said some Trans Lifeline advertisements on Facebook get flagged as promoting hateful speech because they include the word “transgender.” When that happens, they have to resubmit the advertisement for human review. That’s why Powell thinks social media platforms need more humans monitoring content, versus an artificial intelligence system.
The GLAAD report calls on companies to disclose how they train content moderators on the needs of vulnerable users. Olson also called on companies to strengthen and enforce their community guidelines, respect data privacy and be more transparent about how their algorithms are designed.
“There needs to be some kind of regulatory oversight that will actually create accountability for these companies,” Olson said. “At the end of the day, the lack of civil discourse on platforms negatively impacts everyone.”