Meta, the parent company of Instagram, is actively addressing a recent surge of violent content appearing on Instagram Reels. This comes after widespread user complaints about encountering graphic videos, including depictions of violence, accidents, and other disturbing imagery, even with sensitive content filters enabled. The company has acknowledged the issue, attributed it to a "technical error" that caused some users to see content that should not have been recommended, and apologized for the mistake.
According to Meta, the error has been fixed. However, the incident has raised concerns about the effectiveness of Meta's content moderation systems and the balance between content recommendations and user safety. Users reported seeing disturbing videos, including shootings, beheadings, and people being struck by vehicles. Some videos were flagged with "Sensitive Content" warnings, while others were not.
Meta's official policy prohibits violent and graphic content, and the company states that it typically removes such content to protect users. Exceptions are sometimes made for videos that raise awareness about issues like human rights abuses or conflicts, though these may carry warning labels. To manage content across its technologies, Meta employs a "remove, reduce, inform" strategy. This involves removing harmful content that violates policies, reducing the distribution of problematic content that doesn't violate policies, and informing people with additional context to help them decide what to view or share.
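As a rough illustration only, the Python sketch below models how a "remove, reduce, inform" decision could be expressed as a simple triage function. The thresholds, field names, and categories are hypothetical and are not drawn from Meta's actual systems.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    REMOVE = "remove"   # violates policy: take the post down
    REDUCE = "reduce"   # borderline: leave it up, but demote it in recommendations
    INFORM = "inform"   # allowed with context: show it behind a warning label
    ALLOW = "allow"     # no intervention


@dataclass
class PostSignals:
    """Hypothetical classifier outputs for a single post."""
    violence_score: float   # 0.0-1.0 likelihood of graphic violence
    newsworthy: bool        # e.g. documents human rights abuses or a conflict


def triage(post: PostSignals) -> Action:
    """Map classifier signals to a remove / reduce / inform decision.

    Thresholds are illustrative only.
    """
    if post.violence_score >= 0.9:
        # Graphic violence is removed, unless it raises awareness of abuses,
        # in which case it stays up behind a warning label.
        return Action.INFORM if post.newsworthy else Action.REMOVE
    if post.violence_score >= 0.5:
        # Borderline content is not removed, but its distribution is reduced.
        return Action.REDUCE
    return Action.ALLOW


print(triage(PostSignals(violence_score=0.95, newsworthy=True)))  # Action.INFORM
```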
The recent influx of violent content has led to speculation about the cause. While Meta claims it was due to a technical error unrelated to policy changes, some experts suggest a malfunction in the content moderation system or an unintended algorithm shift could be to blame. Instagram's AI typically scans posts for sensitive material and restricts their visibility, so a failure in this system could lead to inappropriate content appearing in users' feeds.
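To make that failure mode concrete, here is a hypothetical sketch of a recommendation step that is supposed to drop sensitive candidates before they reach a Reels feed. The function names and scores are invented for illustration and do not describe Instagram's actual pipeline; the point is simply that a fail-open default after a classifier error would let unscreened content through.

```python
from typing import Callable, Iterable, List


def recommend(
    candidates: Iterable[str],
    classify: Callable[[str], float],   # stand-in for a sensitivity model service
    threshold: float = 0.5,
) -> List[str]:
    """Return only candidates whose sensitivity score is below the threshold.

    The try/except shows a fail-open bug: if the classifier call errors out
    and the score silently defaults to 0.0, unscreened reels flow straight
    into recommendations regardless of the viewer's filter settings.
    """
    safe: List[str] = []
    for reel_id in candidates:
        try:
            score = classify(reel_id)
        except Exception:
            score = 0.0  # fail-open: treats "unknown" as "not sensitive"
        if score < threshold:
            safe.append(reel_id)
    return safe


# Toy classifier with made-up scores; "reel_c" is missing, so its lookup fails.
fake_scores = {"reel_a": 0.1, "reel_b": 0.9}
print(recommend(["reel_a", "reel_b"], lambda r: fake_scores[r]))
# ['reel_a']
print(recommend(["reel_a", "reel_b", "reel_c"], lambda r: fake_scores[r]))
# ['reel_a', 'reel_c'] (reel_c slips through because its classifier call failed)
```

A safer design would fail closed, treating an unscorable candidate as ineligible for recommendation rather than assuming it is benign.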
This incident comes after Meta made significant changes to its content moderation policies, including ending its third-party fact-checking program in favor of community-driven moderation. In January 2025, Meta CEO Mark Zuckerberg announced that user-written “community notes” would replace third-party fact-checking and that the company's automated enforcement would focus on high-severity violations such as terrorism, fraud, and child exploitation. These changes have sparked debate, with critics fearing they could let more harmful content slip through the cracks. Amnesty International warned that Meta's changes could raise the risk of fueling violence.
Meta relies heavily on automated moderation tools, including AI systems that proactively search for harmful content. While the company states that its technology detects and removes the vast majority of violating content before it is reported, incidents like this raise questions about how reliable those tools are in practice. The company has faced similar criticism before over the trade-off between its recommendation systems and user safety.
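Meta's transparency reporting expresses the "before it is reported" claim as a proactive rate: the share of actioned content that its systems found before any user reported it. The short sketch below only illustrates the arithmetic of that metric; the figures are invented, not Meta's.

```python
def proactive_rate(found_proactively: int, total_actioned: int) -> float:
    """Share of actioned content detected by automated systems before any user report."""
    return found_proactively / total_actioned if total_actioned else 0.0


# Illustrative numbers only.
print(f"{proactive_rate(980_000, 1_000_000):.1%}")  # 98.0%
```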
Moving forward, Meta is likely to face increased scrutiny regarding its content moderation practices. The company says that it develops its policies around violent and graphic imagery with the help of international experts and that refining those policies is an ongoing process. It remains to be seen how Meta will adjust its strategies to prevent similar incidents from happening in the future and ensure a safer experience for its users.