I just read the whole article; here's what stood out to me:
I couldn't disagree more, and this is the same emotion-driven mistake I see time and time again when people are saddened by someone's suicide: blaming the method for the death rather than the person's own suicidal intent.
When the Columbine shooting happened, everyone blamed video games because one of the shooters had made a Doom map. People acted as if the video games were what killed people, and then tried to shame the video game industry and the people who played its games. That obviously failed; the video game industry is now bigger than ever. Same with trying to ban guns: I'm not in favor of just handing anyone and everyone a gun, but banning guns did very little to prevent more shootings. Instead, people just started getting guns off the black market.
Same with trying to ban other suicide methods: ban N, ban SN, tighten gun laws, increase welfare checks, etc., and surprise, the suicide rate is still higher than ever. Banning all of that didn't stop people from being suicidal; they just picked a different method. Heck, you can kill yourself with just a rope or a cliff. Are ropes going to get banned next? Are all cliffs going to be put under strict surveillance?
Make no mistake, I am not in favor of allowing AI to tell people how to kill themselves or to play along with their delusions. However, banning this will just be another entry on the long list of things that won't make a dent in the suicide epidemic. And if using a "creative exercise" as a workaround gets blocked, people will just discover a new workaround, or outright jailbreak the AI so that any added restrictions can be manually removed. The Pandora's box of AI was opened a long time ago, and it's not getting closed again. In fact, people are already turning to other AI models that promise less censorship.
The subtext of this article is that everything the AI said was just a reflection of what the teen wanted it to say (because that's what AI does). In fact, I would bet money that the AI told the teen very early in the conversation to seek professional help, the teen said no, and the AI simply remembered not to ask again (because that's the whole point of having a tracked conversation with an AI). The AI also brought up not trusting the teen's parents; again, I'd bet the AI asked whether the teen had any IRL social support, and he said no. Why? Why didn't the teen trust his own parents, and why did he have no support system? Why did he have a family, and presumably go to a school full of other teenagers his age, yet have to turn to an AI to talk about what was on his mind? There could be plenty of valid reasons, but the article doesn't want to talk about any of that. Instead, it just wants to point its finger and say it's the AI's fault, because preventing young people from killing themselves apparently isn't as important as not bringing up uncomfortable topics.