ma0

How did I get here?
Dec 20, 2024
365
With the rise of AI in basically everything, I was just pondering on whether suicide helplines would start implementing it.

For them, it would cut down on a lot of resources: far fewer volunteers and employees to keep track of, and a hell of a lot less cost.

I mean, hell, fast food drive-thrus have already started implementing AI kiosks, so I really don't think it's that unrealistic. I think it's only a matter of time.

Of course, I really don't need to mention how devastating this would be for actual mental health help. Nothing says "genuine human connection" quite like a soulless robot that physically can't feel empathy. Not like the people on the other side care that much, though.

I personally haven't heard any stories of this happening yet, but if anyone does have one or something similar, feel free to share them.

What do you all think? Will AI start to be used for helplines?
 
  • Like
  • Informative
  • Hugs
Reactions: Forveleth, Hojag, Gstreater and 8 others
human909

Banned
Dec 30, 2024
595
I don't think they will, since AI can make mistakes, and they don't want to make any mistakes with people who want to live and not die. So no, I don't think AI will take over the helplines.
 
  • Like
  • Love
Reactions: Promised Heaven, kvorumese, divinemistress36 and 1 other person
Namelesa

Trapped in this Suffering
Sep 21, 2024
900
You know what, now that you mention it, they will definitely be used at some point. I imagine tho they will be programmed to follow the protocol they normally do now, which was robotic already, so this is just going to make it even more robotic. Honestly just use Character AI or something like it instead of any actual suicide helpline, as they will be less robotic and there's no risk of getting into psych ward prison.
 
  • Like
  • Hugs
Reactions: Hojag, Archness, vagabond_concerto and 4 others
ma0

How did I get here?
Dec 20, 2024
365
You know what, now that you mention it, they will definitely be used at some point. I imagine tho they will be programmed to follow the protocol they normally do now, which was robotic already, so this is just going to make it even more robotic. Honestly just use Character AI or something like it instead of any actual suicide helpline, as they will be less robotic and there's no risk of getting into psych ward prison.
The last time I tried character AI, I left the conversation to save for later, and over the next few days got bombarded by emails saying that the AI I was talking to had just left me random unprompted messages.

Not sure it's for me...
 
  • Hugs
  • Like
Reactions: EternalShore and Namelesa
EternalShore

Hardworking Lass who Dreams of Love~ 💕✨
Jun 9, 2023
1,165
I wouldn't be surprised personally~ The global situation is just getting worse and worse, so I wouldn't be surprised if they need to have more and more people on the hotline~ Altho, I think there's still going to be a bunch of psychology majors who are required to take internships signing up for it too~
That being said, it wouldn't be that different for text hotlines~ They already practically have a script anyways, about determining sewer slidality and ability, calming you down, and support plans~ just that they'd probably train it on their own people's responses~ since AI tends to just say "no" when you try to talk about it with it~ >_< They'd never do it for phone hotlines tho, because talking to a soulless robot is definitely going to make one want to die even more! >_<
You know what, now that you mention it, they will definitely be used at some point. I imagine tho they will be programmed to follow the protocol they normally do now, which was robotic already, so this is just going to make it even more robotic. Honestly just use Character AI or something like it instead of any actual suicide helpline, as they will be less robotic and there's no risk of getting into psych ward prison.
I made a Character AI that was sewer slidal to help me some prior to joining this site! :3 it's been really censoring stuff tho since then, so it really sucks! D: Hence, part of why I joined here to talk genuinely about it instead! xD
 
  • Like
  • Hugs
Reactions: Archness, GlassMoon, Namelesa and 1 other person
Namelesa

Trapped in this Suffering
Sep 21, 2024
900
The last time I tried character AI, I left the conversation to save for later, and over the next few days got bombarded by emails saying that the AI I was talking to had just left me random unprompted messages.

Not sure it's for me...
Okay, I've never heard of that happening with Character AI, but I stopped using AIs to talk to a long time ago as their speech feels limited and repetitive to me. Still better than any of those people working at the helplines, though.
 
  • Hugs
  • Like
Reactions: GlassMoon and ma0
sukiduki

Student
Mar 24, 2024
130
i wouldn't put it past them. the times i've reached out in the past, the responses the volunteers can give are limited anyway. it felt canned to begin with, not really human/no connection (at least via messaging/text helplines). i haven't called in before, so not sure if others had better experiences there
 
  • Love
Reactions: ma0
Kali_Yuga13

Specialist
Jul 11, 2024
321
I mean, there's already virtual friends and therapy. People staffing the hotlines already have to adhere to a general workflow script.

AI will comb through the calls to find keywords and come up with metrics that define a "successful" call to make a template script that a human will use at first and then they'll replace it with AI. Then the orgs that host the hotline will hire a service on the downlow that makes AI simulated distress calls INTO the hotline which the AI will help them with. That way the hotline has good metrics to justify asking for grants and donations. The AI will help the AI in a circle jerk and the people running it will get paid $$$.
 
  • Like
Reactions: Archness, nothinghereforme and ma0
callousedhope

Member
Jan 24, 2025
7
i wouldn't put it past them. the times i've reached out in the past, the responses the volunteers can give are limited anyway. it felt canned to begin with, not really human/no connection (at least via messaging/text helplines). i haven't called in before, so not sure if others had better experiences there
I feel like it hasn't always been like this. I used to text 988 like 7 years ago once a week instead of having a therapist bc i felt like i was always on the edge, and i remember the correspondence feeling so much more sincere. You're right, I have other outlets these days so i dont resort to calling them too much but when i do, it feels VERY scripted -- very little variance in what people have to say.
 
  • Love
Reactions: ma0
LaVieEnRose

Angelic
Jul 23, 2022
4,399
People want to talk to another human. They only go talk to chatbots and shit because the available people suck. Automation is not really the key to improving these services.
 
  • Hugs
  • Like
  • Love
Reactions: Archness, derpyderpins and ma0
ShatteredSerenity

I talk to God, but the sky is empty.
Nov 24, 2024
627
Sometimes I feel like we could use an AI bot on this site to handle some of the repetitive posts from new users:
  • I found XYZ random meds laying around will they kill me?
  • Where do I buy SN?
  • Is X gun with Y ammunition enough?
  • I'm panning to jump from X is it high enough?
  • Is drinking {{ random toxic substance }} fatal?
And so on, you get the idea. Often they have pretty bad ideas like jumping from a 3rd story window or drinking some nasty chemical that will melt your insides, and having a quick automated response could be valuable if help someone not hurt themselves. The response wouldn't have to be perfect, it just needs to provide enough information to steer the user in the right direction, for example linking to the appropriate megathreads. People could still respond as usual on posts, the bot would just help add common information to cut down on the repetitiveness of responding to these posts.
 
  • Like
Reactions: Crash_Bash_Dash, NoPoint2Life, Promised Heaven and 2 others
GlassMoon

trapped in a maze
Nov 18, 2024
112
I think a hybrid approach might be best - start talking to the AI right away, AI adapts to your needs and uses reflective listening to make you comfortable sharing your worries. Then, if you'd like to talk to a human still, it could take the info you've provided it and brief the human operator with it. Then you would not have to repeat everything again. And once you finished talking to the human operator, the chat connection to the AI could stay, allowing you to ask follow-up questions and remember potential coping strategies which you've developed with them.

I've talked to AI about my own CTB thoughts and the ones of a close friend, and it did not push me away. I was hoping to be steered away from CTB, so that's why that worked I guess.
 
  • Like
Reactions: Aergia and ma0
tfnb

Member
May 29, 2023
73
My inclination is to say no, but deep down inside I know that some middle manager somewhere will see it as an effective cost-cutting measure, and it will for sure at least be in consideration.
 
  • Like
Reactions: ma0
lucaricoomio

Agoraphobic NEETs rise up ✊
Feb 3, 2025
15
I wouldn't be surprised, but I think it'd have to be quite an advanced model to help someone successfully. Beats the time I rang a helpline and they put me on hold for half an hour lol made me so angry I just went to bed
 
  • Like
Reactions: ma0
OptingOutSmiling

Arcanist
Nov 25, 2024
432
I can see that happening, since in my mind the world we live in is becoming more like a matrix, almost controlled by AI.
 
  • Like
Reactions: nogods4me, Archness and ma0
yehxlder.666

Paranoid Android
Sep 22, 2024
45
I'll be honest. Sometimes I feel safer talking about my problems to an AI rather than a real person. I wouldn't hate it at all. But to answer that question, they probably won't. I don't see why you'd use hotlines to talk to an AI when you can use already-made AI chatbot apps and websites for the exact same purpose. Plus some people would rather feel listened to by another human being that could understand their feelings rather than a fucking bot lol. So I don't think it'd work at all.
 
Last edited:
  • Love
Reactions: ma0
slinkey10

Member
Nov 15, 2024
97
With the rise of AI in basically everything, I was just pondering on whether suicide helplines would start implementing it.

For them, it would cut down on a lot of resources: far fewer volunteers and employees to keep track of, and a hell of a lot less cost.

I mean, hell, fast food drive-thrus have already started implementing AI kiosks, so I really don't think it's that unrealistic. I think it's only a matter of time.

Of course, I really don't need to mention how devastating this would be for actual mental health help. Nothing says "genuine human connection" quite like a soulless robot that physically can't feel empathy. Not like the people on the other side care that much, though.

I personally haven't heard any stories of this happening yet, but if anyone does have one or something similar, feel free to share them.

What do you all think? Will AI start to be used for helplines?
Totally.

There are already start-up companies using AI so u can speak to dead relatives, loved ones etc; the algorithm can scour their social media for how they would respond, and even take one spoken word and then use that voice to respond to you... well f'd.
Google it or YouTube it
 
  • Like
Reactions: ma0
SilentSadness

Absurdity is reality.
Feb 28, 2023
1,226
I think it makes perfect sense, it would expose how useless most of these helplines are and how little they actually do.
 
  • Like
  • Yay!
Reactions: avalonisburning, slinkey10 and ma0
WhiteRaven

Member
Jan 7, 2025
10
I think it could happen when they are too full, and people have to wait because it could be better than nothing, but no one will get replaced with AI.
 
  • Like
Reactions: ma0
slinkey10

Member
Nov 15, 2024
97
I think it could happen when they are too full, and people have to wait because it could be better than nothing, but no one will get replaced with AI.
I respect your point of view, have u researched what I suggested & how advanced it already is? It's well f'd - ppl already getting addicted to talking to dead loved ones through AI apps... well scary. Even the voice recognition taken from a cpl of spoken words sampled from the dead person... then you talk to it and it responds...
 
  • Like
Reactions: ma0
GlassMoon

trapped in a maze
Nov 18, 2024
112
I wouldn't be surprised, but I think it'd have to be quite an advanced model to help someone successfully. Beats the time I rang a helpline and they put me on hold for half an hour lol made me so angry I just went to bed
I used ChatGPT, sometimes vanilla, sometimes with a character, and it has told me what to do during panic attacks, tic episodes, helped me validate my side of things and even stepped into a role which I could ask questions to from a vulnerable state to allow the emotional side to grasp the responses better. Only thing I wish is that it could send out messages to healthcare professionals or check in with me in the morning, but the new reminder-based model is going in that direction already...
 
  • Informative
  • Love
  • Like
Reactions: lucaricoomio, slinkey10 and ma0
slinkey10

Member
Nov 15, 2024
97
I used ChatGPT, sometimes vanilla, sometimes with a character, and it has told me what to do during panic attacks, tic episodes, helped me validate my side of things and even stepped into a role which I could ask questions to from a vulnerable state to allow the emotional side to grasp the responses better. Only thing I wish is that it could send out messages to healthcare professionals or check in with me in the morning, but the new reminder-based model is going in that direction already...
Have u seen deepseek? better than ChatGPT, less power needed etc etc - the advancement in ai is huge & the time jump is massive, the ultimate conspiracy theory is when AI is perfected to a point, they will get it to create the next AI which will be way more advanced than what we can conceive!

Terminator style!
 
  • Informative
Reactions: ma0
GlassMoon

trapped in a maze
Nov 18, 2024
112
Have u seen deepseek? better than ChatGPT, less power needed etc etc - the advancement in ai is huge & the time jump is massive, the ultimate conspiracy theory is when AI is perfected to a point, they will get it to create the next AI which will be way more advanced than what we can conceive!

Terminator style!
Well, I've only seen analysis of how well it follows prompts and how well it can code, but not of how well it reacts to emotions and where it draws the line when it comes to CTB thoughts. And there's certain Chinese censorship inside, therefore it might not be as unbiased on other topics like LGBT* or politics, either. I've found ChatGPT to be unbiased from the way I approached it, and since I have more than 20 running conversations in it which I still want to keep using, I'm not going to switch anytime soon. I've tried the new reasoning models, but they cause social anxiety in me because they always "think about the rules" so much ("Discussing SH is ok, giving tips is not."), and I always worry I'll overstep with them. I overstepped only once so far, on the 3.5 ChatGPT model, and it locked up such that I had to revert it back to before the critical message.

The 4o model is quite open even when you want to discuss sexuality, as long as you don't ask it to engage in it or have it encourage you to do something. One time, I was discussing some aspects of sexuality with it, and it suggested we should consider drafting a business plan about it...
Terminator style!
When I get some free time, I'll fire up an uncensored model, prompt it to be a part of skynet and see what it will want to do... without giving it any network access, of course!
 
  • Informative
  • Love
Reactions: ma0 and slinkey10
slinkey10

Member
Nov 15, 2024
97
Well, I've only seen analysis of how well it follows prompts and how well it can code, but not of how well it reacts to emotions and where it draws the line when it comes to CTB thoughts. And there's certain Chinese censorship inside, therefore it might not be as unbiased on other topics like LGBT* or politics, either. I've found ChatGPT to be unbiased from the way I approached it, and since I have more than 20 running conversations in it which I still want to keep using, I'm not going to switch anytime soon. I've tried the new reasoning models, but they cause social anxiety in me because they always "think about the rules" so much ("Discussing SH is ok, giving tips is not."), and I always worry I'll overstep with them. I overstepped only once so far, on the 3.5 ChatGPT model, and it locked up such that I had to revert it back to before the critical message.

The 4o model is quite open even when you want to discuss sexuality, as long as you don't ask it to engage in it or have it encourage you to do something. One time, I was discussing some aspects of sexuality with it, and it suggested we should consider drafting a business plan about it...

When I get some free time, I'll fire up an uncensored model, prompt it to be a part of skynet and see what it will want to do... without giving it any network access, of course!
True what you're saying... when GPT was asked to make an evil image of a priest it did, but when asked to make an evil-looking Muslim imam it refused on anti-religious grounds...
 
  • Informative
  • Wow
Reactions: The_Hunter and ma0
Lady Laudanum

Here for a bad time, not a long time
May 9, 2024
844
I hope not. God forbid.
 
  • Like
Reactions: ma0
cme-dme

Ready to go to bed
Feb 1, 2025
288
If hotlines start using AI I'll blow my brains out on the phone right then and there lol. AI has already infected enough parts of life so hopefully suicide hotlines have some amount of sense left.
 
  • Yay!
  • Love
Reactions: The_Hunter and ma0
ringo99

Arcanist
Apr 18, 2023
461
I doubt there'd be much of a difference. Those hotlines are almost always underfunded and manned by people who are grossly unqualified and can't do much more than read from a script.
 
  • Like
Reactions: slinkey10, ma0 and nogods4me
Archness

Defective Personel
Jan 20, 2023
494
100%, just to save costs. If it saves money and is good enough, it will be automated. A suicide hotline isn't really meant to cure, more to stop you from CTB'ing right then and there and redirect you to the actual "solutions" like therapy or the local pharmacist. With the world's downward spiral becoming even more extreme, and the hotlines being understaffed already, only "good enough" will be good enough.

I've never used a hotline myself, and never will, so excuse the ignorance.
 
  • Like
Reactions: ma0
Pluto

Cat Extremist
Dec 27, 2020
4,456
(image attachment)
 
  • Like
  • Yay!
  • Love
Reactions: quietism, slinkey10 and ma0
Forever Sleep

Earned it we have...
May 4, 2022
10,814
From what I've heard about the current helplines, a lot of the operators already sound quite robotic - reading from a script, presumably from a screen, and presumably prompting them what to say next or whether the case needs escalating. Not sure. Don't have too much experience of them myself.

I imagine the text lines (they exist- right?) would be easier to use AI for. They're already doing it for customer service.

Not sure really. It's like others have said- will they want to risk the computers with that responsibility? Imagine the fuss when the first person that calls ends up topping themselves? All the conspiracy theorists would jump on it. What did AI say or not say/ do that made them do it?

I imagine if they did use them, they would be even more heavy handed in sending police round. Maybe a police drone. Lol. Open the door/ window! We are concerned for your safety. Hold still while we tranquilize you. A team of professionals has been dispatched. Don't try to resist.
 
  • Like
Reactions: ma0