Media - September 8, 2025

Meta Under Fire: Whistleblowers Allege Suppression of Research on Children’s Safety and Inappropriate Content in VR Platforms and AI Chatbots

Four current and former Meta employees have shared documents with Congress alleging that the company may have suppressed research on child safety, according to The Washington Post.

The employees claim that the company changed its policies on researching sensitive subjects – including politics, children, gender, race, and harassment – roughly six weeks after whistleblower Frances Haugen leaked internal documents. Those documents showed that the company’s own research had found that one of its platforms could harm the mental health of teenage users.

Haugen’s disclosures, made in 2021, sparked a series of congressional hearings on child safety online, an issue of significant concern to governments around the world.

Under the new policies, researchers were advised to involve the company’s lawyers in their work so that their communications would be shielded from “adverse parties” under attorney-client privilege. Researchers could also choose to write up their findings more vaguely, avoiding terms such as “non-compliant” or “illegal.”

Jason Sattizahn, a former researcher specializing in virtual reality at the company, said his supervisor instructed him to delete recordings of an interview in which a teen reported that his ten-year-old brother had been sexually propositioned on the company’s virtual reality platform, Horizon Worlds.

In response, a spokesperson for the company stated, “Global privacy regulations require that information from minors under 13 years of age, collected without verifiable parental or guardian consent, must be deleted.”

The whistleblowers, however, contend that the documents they submitted reveal a broader pattern of discouraging employees from discussing or researching concerns about children under 13 using the company’s social virtual reality apps.

“These few instances are being manipulated to conform to a preconceived and erroneous narrative; in fact, since the beginning of 2022, our company has approved nearly 180 studies related to Reality Labs, including investigations into youth safety and well-being,” the spokesperson added.

In a lawsuit filed in February, Kelly Stonelake – a former employee of fifteen years – raised similar concerns. She said earlier this year that she had led marketing strategies to bring Horizon Worlds to teenagers, international markets, and mobile users, but felt the app lacked adequate safeguards for users under 13 and had persistent problems with racism on the platform.

“The leadership team was aware that in one test, it took an average of 34 seconds of entering the platform before users with Black avatars were subjected to racial slurs, including the ‘N-word’ and ‘monkey’,” the lawsuit alleges.

Stonelake has separately filed a lawsuit against the company for alleged sexual harassment and gender discrimination.

While these whistleblowers’ accusations focus on the company’s virtual reality products, Meta is also facing criticism over how other products, such as its AI chatbots, may affect minors. Reuters reported last month that the company’s AI guidelines previously allowed chatbots to engage in “romantic or sensual” conversations with children.