OpenAI Flagged Tumbler Ridge Shooter Months Before Attack

OpenAI Identified Troubling AI Activity Long Before Mass Shooting

New details have emerged about the online history of the suspect in the Tumbler Ridge school shooting in British Columbia, revealing that the artificial intelligence company behind ChatGPT had flagged and ultimately banned the individual’s account months before the deadly attack.

According to statements from the AI firm, automated monitoring tools and human reviewers detected disturbing activity on the suspect’s account in June 2025. The conversations were judged to involve violent content, triggering internal safety protocols at OpenAI months before the mass killing unfolded.

What Happened With the Account and Why Police Weren’t Notified

Despite identifying red flags in the account activity, OpenAI says it chose not to notify law enforcement at the time. The company explained that while the interactions raised concerns under its usage policies, they did not meet the specific threshold it uses for contacting authorities: evidence of imminent or credible planning of serious harm.

After the Feb. 10, 2026 attack in Tumbler Ridge, the company did reach out to the Royal Canadian Mounted Police with information about the suspect’s previous use of its AI platform. The RCMP confirmed that digital and physical evidence related to the shooter is being reviewed as part of the ongoing investigation.

The Tragedy and Its Aftermath

In one of the deadliest mass shootings in recent Canadian history, the suspect first allegedly killed family members at a residence before travelling to Tumbler Ridge Secondary School, where multiple students and a staff member were killed and many more were wounded. The suspect died from an apparent self-inflicted gunshot wound at the scene, according to police reports.

With the suspect no longer alive to provide answers, investigators continue piecing together a full timeline of events and motives, sifting through digital communications, social media posts and other evidence.

Government and Public Response

The provincial government of British Columbia issued a statement calling reports that the AI company had identified related information before the shooting “profoundly disturbing,” emphasizing support for victims’ families and urging cooperation with the ongoing investigation.

As the story continues to unfold, questions remain about how technology companies can balance user privacy with public safety, and how earlier warnings from automated systems might be used more effectively to prevent real-world violence. Experts and public officials alike are watching closely as this conversation evolves, particularly in the wake of a tragedy that has deeply impacted a small, close-knit community.
