Pennsylvania senators John Fetterman and Bob Casey recently withdrew their support for funding the William Way LGBT Community Center after the organization was targeted by Libs of TikTok — a right-wing social media account that continually posts inaccurate, inflammatory and anti-LGBTQ+ content.
In the post, Libs of TikTok claimed that a federal appropriations bill, which was set to be voted on, included “$1M of your tax dollars to go towards renovating an LGBTQ Center in PA which boasts rooms to try BDSM and s*x f*tishes and hosts BDSM and s*x k*nk parties.” The post refers to a monthly gathering hosted by Aviary, a kink community, which does not actually allow sex or penetration at events.
“Unfortunately, at the 11th hour my staff was made aware that funding for William Way, which was in the bill because I championed it, would not pass in the FY24 appropriations process,” Fetterman said in a statement. “The choice was either to pull it or watch it get stripped out, attacked by Republicans, and ultimately killed.”
This isn’t the first time Libs of TikTok and other sources of right-wing queerphobia on social media have had a tangible impact on political decisions. Misinformation continually guides the rhetoric that influences mainstream conservative news — and many of those narratives begin on social media.
The emerging false narrative of “trans terrorism”
This recent Libs of TikTok post came just days after Taylor Lorenz of The Washington Post interviewed the account’s creator, Chaya Raichik. During the interview, Lorenz confronted Raichik about a variety of topics — including her contributions to the spread of misinformation.
Lorenz said to Raichik, “You still have a post up accusing the Uvalde shooter of being trans. Obviously, that’s been debunked.”
Raichik wasn’t the only conservative with a large platform to share false information about the shooter. She and others took to social media to promote an inaccurate narrative as though it were fact without checking the veracity of their claims.
Nor was it the first time she or other conservatives had falsely accused violent actors of being trans. Right-wing influencers recently made the same claim about the Lakewood shooter.
Raichik and other conservatives also reported that last year’s July 4th shooter in Philadelphia was trans, even though they lacked credible evidence. Still, Libs of TikTok and other conservative outlets latched onto the claim — ignoring the shooter’s pro-gun, pro-Trump social media posts.
“The Uvalde shooter wasn’t trans,” Raichik finally admitted after Lorenz asked her directly. Raichik called that specific claim a “mistake” but still defended her post as “free speech.”
One of the problems with these kinds of mistakes is that they don’t get corrected on her platform — or by mainstream conservative news outlets, which often base their own reporting on social media. And the lack of apology or correction perpetuates false narratives.
This rhetoric has led to extremist claims of “trans terrorism,” an issue Libs of TikTok continues to push as though it is an observable pattern. Conservative podcast host Matt Walsh introduced one of his shows by saying, “Trans terrorism is becoming a major problem in our country. We need to talk about it.”
But research doesn’t support that claim. Trans people are rarely perpetrators of mass casualty events — with one study noting that as few as 0.11% of shooters are trans and underlining that trans people are more likely to be the victims of violence.
The trickle-down effect on queer youth
Teens and tweens are increasingly becoming heavy users across multiple platforms. According to Pew Research Center, nine out of 10 teens use YouTube, where anti-LGBTQ+ content is a problem in uploaded videos, comments sections, and advertisements — and over 60% use TikTok, whose algorithm was found to promote homophobic and transphobic posts on its “For You” page. TikTok was also criticized in 2020 after a report by the Australian Strategic Policy Institute found evidence that the platform shadowbanned LGBTQ+ content and enforced moderation policies too quick to censor it.
One Pennsylvania parent of a trans tween told PGN that anti-trans social media posts led to harassment that impacted her child’s school experience. At first, a particular classmate would show the child videos of an influencer who filmed “get-ready-with-me” style content while sharing the same misperceptions often touted by conservative political leaders. The family informed the school about the issue.
“I thought it could be an opportunity to teach them how to look out for things like this online — that the teachers could talk with the classes about misinformation and maybe how to fact-check things,” said the parent, who requested anonymity.
But she said the school was not receptive to this idea.
The parent believes it’s possible that the classmate didn’t have harmful intentions at first, but the culture of her child’s classroom shifted over time. In 2023, multiple classmates campaigned against the child’s use of a shared bathroom.
“The school took a neutral approach,” the parent said, explaining that the child was permitted to use the school nurse’s office as a restroom. This left her child feeling otherized and led classmates to treat them as though they weren’t to be trusted. One classmate started a rumor that the trans student had a “hit list,” a narrative this parent believes was influenced by claims of “trans terrorism” and other misinformation on social media.
“This all started with TikTok,” underlined the parent.
Why does this continue to happen?
Although individual communities can develop their own culture around social media use and media literacy, combating misinformation is also an important part of platform development. Some people believe it’s the responsibility of the platforms themselves to ensure misinformation doesn’t go unnoticed or unaddressed.
During the interview with Lorenz, Raichik claimed that a community note placed alongside her post about the Uvalde shooter would help users recognize her mistake. But Twitter’s Community Notes program has been deemed a failure by experts who say it doesn’t effectively combat the kind of misinformation that harms LGBTQ+ people and sometimes adds more false and biased information to posts.
“This is why content moderation is so challenging,” said Alex Popken, the VP of Trust and Safety at WebPurify — a service provider that advises companies on these issues and offers moderation services that merge artificial intelligence with human analysis.
“Both technology and people are imperfect, and you’re operating at these substantial scales in terms of the volumes that are processed by machines and people — so inevitably, there’s going to be a margin of error,” she said, noting that moderation systems can be more successful when they incorporate both artificial intelligence and human oversight.
“The way that we partner with these platforms is they have a policy that they need to enforce on their platform, and they need help doing so,” Popken explained about WebPurify.
“There are a number of factors and facets [policy teams] are considering when they are drafting policies,” explained Popken, who was previously the head of Trust and Safety Operations at Twitter. “What are the laws and regulations that need to inform policies? How can we consider the user experience and how to keep users safe and engaged on our platform?”
“And then of course, there’s constant iteration too,” Popken underlined. “Policy is largely informed by what’s happening in the world and new threat vectors — like, for example, generative AI — or even worldwide incidents, like wars, can create new vectors of misinformation. The policy landscape is constantly evolving and really requires pretty robust input from multiple third parties — even including people like academics, civil society, etc.”
“I think the way that a lot of platforms think about it is, how can they create an environment that feels welcoming and enjoyable for all users?” she said. But sometimes attempts to welcome “all” people create avenues that leave more marginalized people unprotected. Policies are sometimes drafted in a way that over-polices marginalized users and gives more freedom to influencers — who can be the main perpetrators of harm.
“Speaking specifically about marginalized communities, the online world can certainly be a haven for seeking support and understanding — but we also know that it can be a hellscape of discrimination and harassment and even disinformation,” Popken said. “And not all of these things are illegal — but they ultimately contribute to a really negative experience for users and also create I think broader implications societally.”
The top five social media platforms — Facebook, Instagram, TikTok, YouTube and Twitter — all received low or failing scores in GLAAD’s most recent annual Social Media Safety Index, which analyzes how well companies protect queer people from hate speech and tracks the real-world impact of online misinformation. Some advocates have turned to Congress in hopes that legislation will curb these issues.
When it comes to information that can negatively impact users but doesn’t cross a legal boundary, Popken explained that moderation is guided by a platform’s community standards — the kinds of policies WebPurify seeks to help companies enforce.
“We kind of call that lawful but awful speech,” she said, pointing to a GLAAD report showing that 84% of LGBTQ+ social media users say platforms do not feature enough protections to prevent discrimination, harassment or disinformation.
“I think that statistic is telling and I think it’s sort of a resounding message to platforms that this group does not feel super heard or protected and more needs to be done about that in general,” she underlined, noting that policies should be constantly reassessed for effectiveness.
“We know that platforms have long had hate speech policies in place — but misinformation policies are newer, and I think particularly for the queer community, we tend to see new misinformation trends that really need to be addressed and targeted,” Popken emphasized. “That would include things like dangerous and discredited practices around conversion therapy or misinformation around so-called ‘trans terrorism.’”
“These are things that crop up that need to be specifically addressed through policy and through content moderation,” she said, noting that this should be part of a platform’s design. “It really involves platforms leaning into these communities — and certainly internally hiring people who are experts and can speak authoritatively to these topics — but also really engaging with one’s users to understand, what would it take to make these users feel more protected on their platform?”
“We know and our data reflects that, for example, LGBTQIA+ users face disproportionate rates of online harm than other groups,” she said. “And as a result, I do think that a thoughtful and considerate approach to safety needs to be prioritized.”