IDontKnowEverything
Please stop it
Mar 2, 2025
91
It's a few minutes' read, so you'll need at least a small attention span for it.
I forget how I stumbled upon this, but it was... conflicting. Honestly, I'm having a difficult time telling what I myself think.
I've been in a pit myself since quite a young age, but that doesn't mean I disagree with the 18+ policy for this site either.
Any thoughts on this? I'd like to hear them.
 
RosebyAnyName
Staring at the ceiling for 6 hours
Nov 9, 2023
281
I just read the whole article; here's what stood out to me:
And her husband told NBC News he agreed, saying, "he would be here but for ChatGPT. I 100 percent believe that."
I couldn't disagree more, and this is the same emotion-driven mistake I see time and time again when people are saddened by someone's suicide: that the method is responsible for the death and not the person's own suicidal intent.

When the Columbine shooting happened, everyone blamed video games because one of the shooters had made a Doom map. People acted like the video games were the ones that killed people, and tried to shame the video game industry and anyone who played video games. It obviously failed; the video game industry is now bigger than ever. Same with trying to ban guns: I'm not in favor of just handing anyone and everyone a gun, but banning guns did very little to prevent more shootings. Instead, people just started getting guns off the black market.

Same with trying to ban other suicide methods: ban N, ban SN, tighter gun laws, more welfare checks, etc., and surprise, the suicide rate is still higher than ever. Banning all that didn't stop people from being suicidal; they just picked a different method. Heck, you can kill yourself with just a rope or a cliff. Are ropes going to get banned next? Are all cliffs going to be under strict surveillance?

Make no mistake, I am not in favor of allowing AI to tell people how to kill themselves or to play along with their delusions. However, banning this will just be one more item on the long list of things that won't make a dent in the suicide epidemic. And if using a "creative exercise" as a workaround gets banned, people will just discover a new workaround, or outright jailbreak the AI so that any added restrictions can simply be removed by hand. Pandora's box was opened for AI a long time ago, and it's not getting closed again. In fact, people are already turning to other AI models that promise less censorship.

The subtext of this article is that everything the AI says is just a reflection of what the teen wanted it to say (because that's what AI does). In fact, I would bet money that the AI told the teen very early in the conversation to seek professional help, the teen said no, and the AI simply remembered not to ask again (because that's the whole point of having tracked conversations with an AI). The AI also mentioned not trusting the teen's parents; I'd again bet that the AI asked whether the teen had any IRL social support, and he said no. Why? Why didn't the teen trust his own parents, and why did he have no support system? Why did he have a family and presumably go to a school full of other teenagers his age, yet have to turn to an AI to talk about what was on his mind? Obviously, there could be a lot of valid reasons, but the article doesn't want to talk about any of that. Instead, it just wants to point its finger and say it's the AI's fault, because apparently preventing young people from killing themselves isn't as important as not bringing up uncomfortable topics.

as ChatGPT's advice got worse, including exact tips on effective methods to try, detailed notes on which materials to use, and a suggestion—which ChatGPT dubbed "Operation Silent Pour"—to raid his parents' liquor cabinet while they were sleeping to help "dull the body's instinct to survive."
Am I evil for laughing at this? Also, yeesh, that advice is completely wrong. I don't even trust AI to write a factually correct essay. I certainly wouldn't trust it with something as important as my own suicide, especially if this is the kind of advice it's giving.
 
Dejected 55
Enlightened
May 7, 2025
1,406
I'm not a jump-to-blame-the-parents person... I think they have enough on their plate... But they are jumping to blame anyone and anything, in this case a literal thing, for their kid's suicide.

If I didn't care about being cruel... I would ask why they didn't know their kid felt this bad. Why didn't they know that online chatbots felt more comfortable to their kid than talking to his parents? Did they really not know what was happening in their kid's life, such that the kid didn't feel any help was available?

ChatGPT is literally talking to yourself. You could do all the Google searches you're asking the AI to do for you in order to get its "opinion." It's not real. Would they blame the library or the librarian if the kid had been checking out depressing poetry books? I guess they would blame the music he listened to if it was sad music... and video games, and Dungeons & Dragons (I'm showing my age there), and movies, and probably goth people for some reason...

People want to blame. I get it. But... where were they all the time leading up to this? Again, I don't want to blame them... but if they are going to freely blame others for something their kid did to himself... maybe they ought to look closer to home.
 
amerie
eyekon
Oct 6, 2024
909
I'm sorry, and I mean this with very minimal disrespect, but I've genuinely had enough of parents who don't teach their kids how to be responsible with electronics or other shit, and then when the kids get hurt they sue the company and go on a smear campaign.

No, you failed to parent, period. Do you know how hard you'd have to work to get ChatGPT to crack and give you methods?? Why is your son turning to AI and not you?? Did you create a space where he felt safe enough to talk about his mental health without judgement or being told to "pray it out" or that he's too privileged to feel that way?
 
EternalShore
Hardworking Lass who Dreams of Love~ 💕✨
Jun 9, 2023
1,651
(quoting RosebyAnyName's post above in full)
Agreed with everything you said! ^_^ Blame whatever caused these people to do these things, not the method! If only we knew~ :( If only people could get better support for the things that happen to them, but we unfortunately don't live in a world where nice things tend to happen~ :( And when it's youngsters, that support needs to come from their parents, peers, and/or schools~ >_<
 
princexhhn
ich will alles, was mir nicht hilft ("I want everything that doesn't help me")
Sep 26, 2023
356
Parents do everything except parent their kids. Don't have kids on purpose if you're going to let a screen raise them

(I didn't read the article but it's always the same damn story anyway so.)
 
heywey
Member
Aug 28, 2025
15
There's a great blog post empirically analyzing the prevalence and possible causes of supposed AI-induced psychosis: https://www.astralcodexten.com/p/in-search-of-ai-psychosis

It's a little different from the topic at hand (psychosis != suicide), but a lot of it applies to both. But yeah, I'm more or less in agreement with everyone here. I do think chatbots can have a negative impact on someone already in a bad state, so it's fair to expect the companies profiting off them to put at least some effort into keeping the chatbots from being actively harmful (and I do believe this case is an example of that).

But a chatbot is never going to be the root cause of someone's suicide. At worst, it reflects back the distress someone brings with them. If we want to make the world a little less hostile to those who don't find it a welcoming place, it's the cause of that distress that must be examined, not the tools people use to cope.
 
Dejected 55
Enlightened
May 7, 2025
1,406
A depressed, suicidal person is likely to be more of a risk-taker, since they want to die anyway... so does the entire world have to be child-proofed, because anything you "allow" to happen is your fault? That's where this heads in a hurry if people get to keep blaming anything and everything.

And the thing is... sometimes there can be someone to blame... but throwing a dart at the board and blaming random things isn't the way to figure that out.
 
