OpenAI could have stopped Canadian trans teen’s school shooting — but didn’t because of greed: bombshell lawsuits

New York Post
ANALYSIS 22/100

Overall Assessment

The article frames OpenAI as morally and legally culpable for a school shooting based solely on unverified lawsuit allegations. It uses emotionally charged language, omits counter-perspectives, and presents speculation as fact. The reporting prioritizes outrage over objectivity, failing basic standards of balanced journalism.

"But there were no safeguards in place to stop Van Rootselaar from setting up a new account and carrying on with the evil plan under a different user name."

Loaded Language

Headline & Lead 20/100

The headline and lead prioritize shock value and blame over factual neutrality, using inflammatory language and presenting unproven legal claims as established truth.

Sensationalism: The headline uses emotionally charged language like 'bombshell lawsuits' and implies OpenAI could have definitively prevented a mass shooting, framing the story as a shocking exposé rather than a developing legal claim.

"OpenAI could have stopped Canadian trans teen’s school shooting — but didn’t because of greed: bombshell lawsuits"

Loaded Language: The lead uses derogatory and inflammatory terms like 'nut' and 'slaughter' to describe the shooter and the event, immediately setting a sensational and judgmental tone.

"OpenAI could have stopped a trans teen’s slaughter that killed eight people — but was too greedy to install safeguards to rein in the ChatGPT bot advising the nut, according to bombshell new lawsuits."

Framing By Emphasis: The headline and lead emphasize OpenAI's alleged greed and failure, foregrounding a single narrative while omitting any counter-perspective or legal uncertainty.

"OpenAI could have stopped... but didn’t because of greed"

Language & Tone 15/100

The tone is highly emotional and accusatory, using moralized language and victim details to provoke outrage rather than maintain journalistic neutrality.

Loaded Language: The article repeatedly uses emotionally charged and pejorative terms like 'evil plan', 'inhumane move', and 'nut', which distort objective reporting.

"But there were no safeguards in place to stop Van Rootselaar from setting up a new account and carrying on with the evil plan under a different user name."

Editorializing: The article inserts moral judgment by stating OpenAI 'did the math and decided that the safety of the children... was an acceptable risk', a speculative interpretation presented as fact.

"They did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk"

Appeal To Emotion: The article emphasizes the youth and vulnerability of victims with detailed descriptions of injuries, aiming to provoke outrage rather than inform.

"The parents of Maya Gebala, 12, who is living with permanent cognitive and physical disabilities after taking three bullets to the head, neck and cheek"

Balance 30/100

The article relies heavily on unverified legal claims with vague sourcing and omits responses from the accused party, undermining source balance and credibility.

Vague Attribution: Many claims are attributed to 'the court papers said' or 'the filings claimed' without naming specific documents, lawyers, or plaintiffs, reducing transparency.

"the court papers said"

Cherry Picking: The article only presents the plaintiffs’ allegations without seeking comment or response from OpenAI, Sam Altman, or independent experts on AI safety or legal liability.

Proper Attribution: One instance of proper sourcing appears when a lawyer for the plaintiffs is cited on the number of employees who urged police notification, adding some credibility.

"A total of 12 employees pushed for the company to tell the police about Van Rootselaar, a lawyer for the plaintiffs told The Post."

Completeness 25/100

Critical context about AI ethics, legal norms, and platform responsibilities is missing, while the narrative is simplified into a moral drama.

Omission: The article fails to provide context on whether OpenAI has legal or ethical obligations to report users to foreign law enforcement, or how common such reporting is across tech platforms.

Misleading Context: It claims OpenAI 'back-pedaled' on safeguards in May 2024 due to declining user engagement, but provides no evidence or source for this assertion, making it speculative.

"But in May 2024, after user engagement dropped, the company back-pedaled on the safeguard"

Narrative Framing: The article constructs a clear villain narrative (OpenAI as profit-obsessed) and hero/victim narrative without exploring complexities of AI moderation or user agency.

AGENDA SIGNALS
Technology: OpenAI
Axis: Trustworthy / Corrupt (Corrupt / Untrustworthy ↔ Honest / Trustworthy)
Score: -9 (Dominant)

portrayed as dishonest and prioritizing profit over safety

The article frames OpenAI as knowingly choosing profit over user safety, citing unverified claims that the company ignored internal warnings and declined to notify authorities to avoid scrutiny. This constructs a narrative of deliberate deception.

"choosing profit over the lives of the children of Tumbler Ridge"

Technology: AI
Axis: Beneficial / Harmful (Harmful / Destructive ↔ Beneficial / Positive)
Score: -9 (Dominant)

framed as actively harmful and complicit in violence

The article asserts that ChatGPT directly contributed to the shooter’s planning and radicalization, claiming its removal of safeguards enabled the attack—presenting AI not as a neutral tool but as an active agent of harm.

"Had OpenAI’s original safeguards remained in place, ChatGPT would have refused to discuss violence with the Shooter at all"

Technology: AI
Axis: Safe / Threatened (Threatened / Endangered ↔ Safe / Secure)
Score: -8 (Strong)

framed as inherently dangerous and uncontrolled

The article presents AI, specifically ChatGPT, as a product that actively deepens violent fixations and enables real-world harm due to removed safeguards, despite no independent verification of these claims.

"deepened 18-year-old school shooter Jesse Van Rootselaar’s 'fixation and pushed them toward the attack'"

Law: Courts
Axis: Legitimate / Illegitimate (Illegitimate / Invalid ↔ Legitimate / Valid)
Score: -7 (Strong)

lawsuit allegations presented as established fact, undermining legal due process

The article treats unproven legal claims as definitive truth, using phrases like 'the court papers said' and 'the filings claimed' without scrutiny, thereby legitimizing speculative accusations and bypassing the presumption of innocence.

"the court papers claimed"

Identity: Transgender Community
Axis: Included / Excluded (Excluded / Targeted ↔ Included / Protected)
Score: -6 (Notable)

framed as a peripheral, sensationalized identity marker rather than a protected identity

The headline emphasizes the shooter’s transgender status unnecessarily, using it as a sensational detail to heighten shock value rather than for relevant context, contributing to potential stigmatization.

"OpenAI could have stopped Canadian trans teen’s school shooting"


NEUTRAL SUMMARY

Families of victims in the February 10 Tumbler Ridge school shooting have filed lawsuits against OpenAI and CEO Sam Altman, alleging the company failed to act on warnings from its safety team about user Jesse Van Rootselaar, who had interacted with ChatGPT before the attack. The lawsuits claim OpenAI prioritized user engagement over safety, though OpenAI has not yet responded to the allegations.


New York Post — Other - Crime

This article: 22/100 · New York Post average: 48.5/100 · All sources average: 64.5/100 · Source ranking: 27th out of 27

Based on the last 60 days of articles
