Let’s get this out of the way right away. The shooting in Tumbler Ridge, British Columbia was a heartbreaking tragedy. My heart goes out to the victims, their friends and family, and the community. No part of me wants something like this to ever happen again. Please keep that in mind as you read this.

There have been two news stories recently related to AI and surveillance.

The first was about Nest’s Super Bowl advertisement, which showed how Nest cameras on homes could be used to find lost dogs. Nest claimed that people who had lost their dogs could upload photos, and its AI would scan images from customers’ cameras to help find the dog. This was immediately recognized as a mass surveillance network built from customers’ home Nest cameras, one with obvious uses beyond finding dogs. It was almost universally rejected for what it was: more intrusive surveillance of the public.

Recently we learned that OpenAI employees had flagged the Tumbler Ridge shooter’s interactions with its ChatGPT chatbot. In June of 2025, the company’s AI abuse detection system flagged the shooter’s account for “furtherance of violent activities” and banned the account for violating its usage policy.

The company considered notifying the RCMP but ultimately decided the behaviour didn’t meet the threshold for notifying law enforcement, stating that it did not identify the behaviour as credible evidence of imminent planning. The general public response to this incident has been outrage aimed at OpenAI, quite the opposite of the outrage rightfully directed at Nest. In the first case, we were angry at a corporation for clearly advocating general public surveillance; in the second, we are collectively angry at a corporation for not sharing the results of general public surveillance with law enforcement.

So my question here is: where do we draw the line? How much surveillance is too much? For decades now, governments have grown more and more intrusive with regard to our privacy: cameras on street corners, firms like Palantir actively spying on us at every opportunity.

We are told it is for our safety! We are told these systems identify threats before anything bad can happen. It’s okay, you’re fine if you don’t do anything wrong. Why do you care if the police have your fingerprints or DNA if you never commit a crime, right? This is all very Minority Report to me.

Let me restate: what happened in Tumbler Ridge was a terrible tragedy. I wish it had never happened, and I hope it never happens again. So I am struggling here, because emotionally I wish OpenAI had notified the RCMP. Pragmatically, though, I view these two news stories as similar, yet met with opposite reactions. Again, where do we draw the line between too much surveillance and not enough?

And furthermore, what do we expect the response would have been had the RCMP been notified of the shooter’s ChatGPT behaviour? Would they have been arrested? For what? Having bad thoughts they typed into a chatbot? Again, very Minority Report. Would they have been offered mental health supports? Given how poor our mental health supports actually are, that is both unlikely to have happened and unlikely to have helped if it had.

Would it have prevented the shooting? I am skeptical. I don’t think there are easy solutions to these problems, and assuming that the RCMP knowing about these chat interactions would have prevented the shooting is optimistic to say the least.

I get the raw emotional response: “Oh my God, they knew and let it happen!” It is a totally understandable immediate reaction. But that still raises the question of where we draw that line. OpenAI had a line where it felt the behaviour didn’t pose an imminent threat. Clearly that line was in the wrong place; we know that now. But how comfortable are you with moving that line?

Should the government or corporations be allowed to read your emails to assess threats? To listen to your phone calls, surveil your personal conversations via Alexa, read your personal journals, or watch your behaviour via your webcam?

Again, dystopian fiction seems to be the direction we are headed: Minority Report, 1984, and so on. Constant public surveillance is dangerous, and the outcry over this particular case is, in my mind, exactly what they want us to feel. They want us to justify their watching our every move to “keep us safe.” I don’t want that, and I don’t think you should either.

I don’t have the answers on how we prevent tragedies like Tumbler Ridge. I do think they start with better mental health supports, though, not more public surveillance.
