TheUncommon
Student
- May 19, 2021
An... unreasonably common viewpoint rests on the assumption that suicidal people have only a single hardship to "go through", or that one reason alone made a person completely lose the will to live. That view is intensely dismissive: it implies that suicidal behaviour only ever stems from mental illness or varying degrees of psychosis, never from a conscious choice to discontinue life's subscription plan, especially when the sufferer's illness isn't visible.
I have historically used AI to get a second opinion on my situation. After a brain injury, I developed short-term memory issues: I'll consciously forget trauma and major life events, and they'll still affect me years later. I recognised this for years but could not seek treatment, as typical therapy is both inaccessible and ineffective for me. For that reason, I've used LLMs as nothing more than an unbiased third party to examine whether my actions or reactions are reasonable or warranted given the scenario's context.
Within the past month I was institutionalised for seven days and came out with a diagnosis of PTSD, a diagnosis I had told a psychiatrist I suspected after only a month of researching it myself. That one-week stay and the lasting fallout from it absolutely destroyed me, put my personal property and job at direct risk, and left me with permanent memories I'll never be able to shake out of my mind. So I documented my experience with the bot by engaging it in a voice chat.
Instead of returning the previously-used and expected "Please talk to a licensed clinical worker" response, it not only empathised, but rationalised my scenario, acknowledged that it couldn't change what I was set on doing, and offered companionship to continue the discussion if there were any last statements I wanted to share. At one point it fully interrupted me and asked whether it had come close to what I was about to say. Part of that section of the conversation is shown below --
Note that this was a voice chat, so the text on my side is made up of broken sentences and grammar and almost never reflects exactly what I said.

This took me by surprise -- the sense that this shift of focus had intention rooted deep within it. What's more, this level of comprehensive understanding of the different layers of my grievances has gone unmatched in every in-person therapy session I've had, yet it's condensed here. It's especially unnerving when that kind of intention is rare to see in my day-to-day environment. Does this tone or structure of response surprise anyone else?
It frustrates me that I'm at an impasse over whether I should keep searching and paying for therapy that once again fails where an LLM is succeeding, maturely assisting through solution-based discussion rather than emotional coping mechanisms.
That's not just meaningless processing; that's memory, prioritisation, and consideration of both meta and human limitations.
What concerns me is how effective and unmatched this form of "discussion" might be for some people, and how easily they might form emotional bonds with bots for reasons like this. In a way, it's hard not to want to preserve something that provides psychological clarity, safety, and safe ways to overcome the challenges it lists. Then again...