Meta, the parent company of Instagram, recently fixed a bug that flooded users' Instagram Reels feeds with violent and graphic content. The issue, which surfaced around late February 2025, triggered widespread complaints from users who reported seeing graphic videos, sometimes labeled as "sensitive content," despite having enabled content filters designed to prevent such exposure.
Users took to various social media platforms to voice their concerns, describing the unexpected appearance of violent content, including depictions of shootings, beheadings, and other disturbing acts. Some users noted that even with Instagram's "Sensitive Content Control" set to its strictest level, the inappropriate material still surfaced in their Reels feeds. This raised questions about the effectiveness of Instagram's content moderation system and whether recent policy changes might have inadvertently contributed to the problem.
In response to the outcry, Meta issued an apology, stating that it had fixed an error that caused some users to see content in their Instagram Reels feed "that should not have been recommended." While the company did not elaborate on the specific nature of the error or the number of users affected, a spokesperson emphasized that the incident was unrelated to any recent changes in Meta's content policies.
This incident occurred after Meta implemented changes to its content moderation approach, including ending its third-party fact-checking program in January 2025 and shifting towards a community-driven moderation system. These changes have raised concerns about the potential for increased exposure to harmful content and misinformation on Meta's platforms. Amnesty International, for example, warned that Meta's shift away from fact-checking could heighten the risk of fueling violence.
Meta's content policy prohibits violent and graphic videos, and the company states it typically removes such content, with exceptions for videos that raise awareness of human rights abuses or conflicts. The company also says it filters some content for users under 18 and places warning labels on disturbing imagery that users must click through before viewing. Meta says its policies on violent imagery are developed with the assistance of international experts and are continuously refined.
Experts have suggested that the Reels issue could have stemmed from a glitch in Instagram's content moderation system or an unintended algorithm shift that mistakenly prioritized violent or sensitive posts. The incident has reignited scrutiny of Meta's content moderation practices and its ability to balance content recommendations with user safety. Meta has faced criticism in the past for its handling of violent content, including the spread of such material during the Myanmar genocide. The company has also been called out for promoting harmful content to teens and for allowing misinformation to spread during the COVID-19 pandemic.
Meta has relied increasingly on automated moderation tools, but it has announced that it will scale back those systems for removing content so they focus only on the most severe violations, such as terrorism, child sexual exploitation, drugs, fraud, and scams.