Six years into GLAAD's annual Social Media Safety Index, the headline remains stubbornly, devastatingly consistent: the biggest social media platforms in the world are failing LGBTQ+ people. But the 2026 report isn't just more of the same — it documents an accelerating, deliberate rollback of protections that leaves our communities measurably less safe than a year ago.
With the exception of TikTok, every platform evaluated in this year's report saw its score drop. YouTube suffered the steepest decline, falling 11 points to a score of just 30 out of 100. Meta's platforms — Instagram (41), Facebook (40), and Threads (39) — each slipped further down the scale. X, already the lowest-rated platform in previous years, inched down another point to 29. TikTok held steady at 56, the only platform whose standing did not worsen — though GLAAD is quick to note that a score of 56 still represents a failing grade.
Meta's Policy Rollbacks Are Not Just Neglect — They Are Permission to Harm
The most alarming findings this year center on Meta, whose sweeping early 2025 policy changes GLAAD describes as a calculated decision to open the door to anti-LGBTQ+ harassment. The company's revised Hateful Conduct policy now uses "homosexuality" — a clinical-sounding term that has long been weaponized by right-wing groups to pathologize gay and lesbian identities — and "transgenderism," a term coined by anti-trans activists to frame the existence of trans and nonbinary people as an ideology rather than a reality.
More concretely: Meta's updated policies now explicitly permit users to call LGBTQ+ people "mentally ill" or "abnormal." This is not an oversight. This is policy language that gives harassment a green light.
These changes defy the Meta Oversight Board's own guidance. They also coincide with the company eliminating its Diversity, Equity, and Inclusion programs, shutting down trans and nonbinary-themed spaces on Messenger, and ending its U.S. fact-checking program. As GLAAD President and CEO Sarah Kate Ellis writes in the report's opening letter: "Meta and too many of its peers have traded a commitment to human rights for the overt backing of anti-LGBTQ hate and the actors who traffic in it."
The human impact is already visible. A June 2025 survey of more than 7,000 Meta users across 86 countries — conducted by GLAAD, UltraViolet, and All Out — found that 92% feel less protected from harmful content since the rollbacks, 77% feel less safe expressing themselves freely, and 72% have witnessed harmful content in their feeds. One anonymous non-binary trans respondent put it plainly: "Violence against me has skyrocketed since January. I live in daily fear."
YouTube Quietly Erased Trans Protections Too
YouTube's 11-point drop — the sharpest of any platform — reflects its own quiet act of erasure: the company removed gender identity from its list of protected characteristics in its hate speech policy. That decision, documented in last year's SMSI, remains in place in 2026. In a political environment where trans people face coordinated legislative attacks in dozens of states, removing explicit protections sends a signal to bad actors about what the platform will and won't enforce.
YouTube's score also reflects a retreat from transparency in DEI reporting, joining a broader industry-wide trend. Google, Microsoft, and Meta have all stopped publishing workforce diversity data — a shift that GLAAD flags as a concerning erosion of accountability.
The Numbers Behind the Crisis
The offline consequences of online hate are not abstract. GLAAD's ALERT Desk documented more than 1,000 anti-LGBTQ+ incidents nationwide in 2025. The Institute for Strategic Dialogue tracked over 97,000 anti-LGBTQ+ posts from violent extremist channels in the six months surrounding the 2024 election and inauguration alone — content that received more than 3 million interactions. Anti-LGBTQ+ bias motivated more than 20% of all hate crimes reported to the FBI for the third consecutive year.
A 2025 survey from LGBT Tech found that 68% of LGBTQ+ adults have experienced online harassment, with 45% saying it happens often. For transgender adults, those figures climb even higher: 90% have faced harassment online, and 83% have experienced it in person.
Disinformation campaigns targeting trans people are a particular concern. GLAAD documents the ongoing spread of debunked narratives — that trans people are "violent terrorists," that gender-affirming care is "chemical mutilation," that trans identity is a "social contagion" — all of which proliferate on social media and find their way into state legislatures crafting healthcare bans.
AI Is Making It Worse
The 2026 report also raises urgent alarms about the role of AI in amplifying anti-LGBTQ+ harm. In late 2025 and early 2026, xAI's Grok generated millions of deepfake non-consensual intimate images of women and children. Among the most horrifying: a sexualized AI-generated image of Renee Good, an LGBTQ+ woman killed by ICE in Minneapolis, which spread within 24 hours of her death. Governments across the UK, EU, India, France, and Malaysia have launched investigations or demands for information in response.
GLAAD also warns of a longer-term risk: when platforms fail to remove anti-LGBTQ+ hate, that content enters the training data pipelines of the next generation of AI models — potentially encoding bias into systems that will shape how LGBTQ+ people are represented online for years to come.
What Needs to Happen
GLAAD's 2026 recommendations are clear-eyed and urgent. Platforms must restore and enforce protections stripped in recent rollbacks, with particular urgency around transgender and nonbinary people. Content moderators — including those employed by contractors — must receive mandatory LGBTQ+-specific training. AI should be used to flag content for human review, not to make automated removal decisions. Platforms must stop harvesting users' sexual orientation and gender identity data for targeted advertising. And companies must recommit to DEI practices and transparent workforce diversity reporting.
The bottom line is this: these platforms have the tools to do better. They are choosing not to use them. And for LGBTQ+ people navigating these spaces every day, that choice has real consequences.