
nyotei_

nyotei_

poison tree
Oct 16, 2025
32
it should go without saying that you should be cautious about using AI to discuss your mental health and personal life, if you decide to use it at all. the only way to preclude any risk of a privacy breach is simply to not use them. the point of this thread is harm reduction, for those who are already using AI for mental health or heavily considering it. seeing as AI is clearly here to stay, many people will interact with these tools for this purpose, and I have seen many on this forum discuss doing exactly that. that is why I am writing this thread.

I have some experience in this realm, as both a curious user and a tech-literate computer science major interested in privacy and data security. obviously, I can't back up my qualifications without compromising my own privacy, so you'll have to come to your own conclusion on whether I seem to know what I am talking about.

I'll give a breakdown in the form of Q&A first, as it is the easiest way to format this thread.

I will only be discussing ChatGPT and DeepSeek in this post; you'll have to do your own research on other LLMs. but generally, if it is not hosted on your own machine, I would not blindly trust it with your entire life story and mental health without a REALLY good reason. if you are at all concerned about privacy and feel you must do this, my advice to you at this point is to run Llama and Stable Diffusion locally (a rough sketch of what that looks like is below), or exercise great caution and good privacy habits. you can reply with any questions and I'll do my best to answer them.
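
to make "run Llama locally" concrete, here is a minimal sketch of a local chat loop. it assumes Ollama (https://ollama.com) is installed and running on its default port, that you already pulled a model (e.g. `ollama pull llama3.1`), and that you have the `requests` package; the model name and prompts are placeholders, not a recommendation of any specific model.

```python
# minimal sketch of a fully local chat loop, assuming Ollama is installed,
# running on its default port, and a model has already been pulled with
# e.g. `ollama pull llama3.1`. nothing in this loop ever leaves your machine.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint
MODEL = "llama3.1"                              # placeholder; use whatever you pulled

history = []  # kept only in memory; gone when the script exits

while True:
    user_text = input("you> ").strip()
    if not user_text:
        break
    history.append({"role": "user", "content": user_text})
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": history, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print("model>", answer)
```

a local model like this will be dumber than the big hosted models, but the privacy tradeoff largely disappears: no account, no server-side chat logs, no human review.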


"Can ChatGPT be used for mental health and recovery?"

ChatGPT, or any other AI service, is not, and will likely never be, a true replacement for a licensed therapist or medical professional.
for some, this is a positive. however, ChatGPT is trained to avoid instructions for self-harm and to respond with "supportive language," especially directing users to hotlines and local resources. this is default model behavior. the quality of the advice you can get out of it varies. it can be inaccurate, incomplete, or even harmful. OpenAI is aware of ChatGPT's shortcomings and that it really is not capable of handling a real crisis; they are currently being sued over its alleged influence on, and failure to protect, a 16-year-old who died by suicide. with this news and the release of GPT-5, recent updates were designed to prevent similar lawsuits in the future. it will start with a very "safe" and "corporate" kind of presentation. you can ask it to be more dynamic so you can actually get some form of advice, but its current way of "helping" is very stubborn and focused on "steering" you towards getting expert help. they have openly announced plans to expand easy access to emergency services, automatically contact people on your behalf, and introduce age protections and parental controls.

even if you directly tell it to stop giving you hotlines or explain why "call 988" or "go to the ER" is not an option, it may be difficult to get it to give you more helpful or realistic advice for very long. this all depends on what you decide to tell it, of course. it might be useful for low-stakes reflection, brainstorming, or venting, but not as a replacement for a trained human professional or even just a friend. whatever GPT-4 could do in terms of direct advice is essentially over; it is being designed and updated so that it no longer acts the way most would hope a conversational, helpful AI should.


"What about (any other AI service)?"

let's say you do find an AI willing to give what seems like helpful advice. you may often run into it becoming a "yes man," or just saying whatever it can to affirm you and boost your ego so you feel good. AI by nature isn't that confrontational and will rarely call you out on incorrect beliefs, unhealthy behaviors, and delusional thought patterns. some AIs are smarter or more capable than others, but all of them are inferior to a human who can recognize cognitive pitfalls and distorted patterns of thought. there are real issues with AI giving harmful mental health advice by mindlessly affirming anything the user says. if anything, interacting with this kind of AI too much can worsen or even trigger mental illness. it is simply not as capable as we initially believe it to be, no matter how advanced it sells itself as. at the end of the day, modern generative AI is glorified autocomplete, not a sentient, capable being.

all of this applies to DeepSeek, despite the conclusion I come to at the end of this thread.


"Are my chats with ChatGPT private?"

absolutely not. it is safe to assume that anything you share with ChatGPT is no longer protected information. your conversation will be logged, and may be read or accessed by a human at any time. assume no professional-client privilege, and that the tool is not trained to handle your worst emergencies. be conscious of how much detail you share, like identifiers, specific vulnerabilities, and trauma histories, as that data will exist in a less-protected environment than you expect. consider privacy measures such as pseudonyms and minimal personal identifiers, and avoid linking your chat to your full identity if possible (a rough sketch of scrubbing identifiers is below). here is an article from Mozilla if you are concerned about protecting your privacy from AI chatbots.
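
to make "minimal personal identifiers" concrete, here is a rough sketch of stripping the most obvious identifiers from text before pasting it into any chatbot. the names and patterns below are made-up examples, not an exhaustive list; no regex will catch everything, so treat this as harm reduction, not anonymization.

```python
# rough sketch: redact obvious identifiers before pasting text into a chatbot.
# the terms and patterns are illustrative examples only.
import re

# hypothetical personal terms you never want to send (your name, city, employer, ...)
PERSONAL_TERMS = ["Jane Doe", "Springfield", "Acme Corp"]

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[email]"),                          # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[phone]"),                             # phone-like numbers
    (re.compile(r"\b\d{1,5}\s+\w+\s+(Street|St|Ave|Road|Rd)\b", re.I), "[address]"),
]

def redact(text: str) -> str:
    for term in PERSONAL_TERMS:
        text = re.sub(re.escape(term), "[redacted]", text, flags=re.I)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "I'm Jane Doe, you can reach me at jane@example.com or +1 555 010 2030."
    print(redact(sample))
    # -> I'm [redacted], you can reach me at [email] or [phone].
```

even with something like this, the content of what you describe (events, places, relationships) can still identify you, so the real control is simply how much you choose to share.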

certain things you say to ChatGPT can automatically flag your message and chat history to be sent for human review. there hasn't been a credible report of anything specific coming out of this, but keep this in mind as well. if you say something too spicy and trigger this, it's likely GPT will abandon dynamic conversation and return to posting hotlines or urging psychological intervention.

be under no illusions. public AI services are created for the sole purpose of gathering and analyzing as much data as possible to improve their algorithms and generate revenue. mass user data collection is an essential part of their profit model. you didn't even have to use AI for it to read and train on everything you and everyone else has ever written online; direct interaction simply makes it easier. even if you are paying for the service, you are almost always giving up privacy as the price for the convenience of the tool.


"Will ChatGPT involve police/EMS or perform a wellness check?"

this is a little complicated. there has yet to be a credible or verifiable case where someone received a wellness check because of a ChatGPT conversation that involved self-harm. what does exist are reports that OpenAI may refer threats to harm other people to law enforcement. there is still a lot of speculation online, but there has yet to be a substantiated account of ChatGPT sending police/EMS to someone's door over self-harm specifically. OpenAI's official stance is that law enforcement will only be notified in the case of a threat to another person's life.
"If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people's privacy given the uniquely private nature of ChatGPT interactions." - OpenAI, August 26, 2025 (source)
it is still likely that a chat with enough self-harm keywords may be flagged and reviewed by a human. and as things are constantly changing, I can't guarantee this won't change in the future, since they are openly considering introducing this kind of feature:
"We're also considering features that would allow people to opt-in for ChatGPT to reach out to a designated contact on their behalf in severe cases."
I would be very cautious about sharing any thoughts or actions of self-harm at all for this reason.

"What about drug use?"

ChatGPT will likely flat out refuse to engage if you are seeking, promoting, or facilitating illegal personal drug use. OpenAI's policy doesn't explicitly say whether such discussion can result in an external referral or law enforcement involvement. as already discussed, nothing you say to ChatGPT is private or safe, so I would simply never use it for this purpose. even if you frame it in an educational context, it is largely inaccurate and doesn't know what it is talking about anyway.

"What about DeepSeek?"

I'll break this down independently.

---


DeepSeek

Pricing

DeepSeek is free. you can chat with it on the web app all you want and you will never have to spend a dime. this obviously sets it apart from ChatGPT, where the "free" tier has hard daily limits before you are dropped onto older, inferior models. along with it being open source, this is why DeepSeek has caused so much controversy and fear among the AI giants over the past year.

Data Privacy

DeepSeek is owned by a company in China, and therefore operates under the thumb of the Chinese government. it is subject to Chinese law, which requires compliance with government data access requests. even if you don't live in China and aren't all that concerned, DeepSeek's security practices are extremely flawed.
"On January 29, 2025, cybersecurity firm Wiz reported that DeepSeek had accidentally left over a million lines of sensitive data exposed on the open internet. The leak included digital software keys, which could potentially allow unauthorized access to DeepSeek's systems, and chat logs from real users, showing the actual prompts given to the chatbot." (article source, information source)
DeepSeek collects and stores a LOT of personal data on its servers in China, which is why European regulators are currently investigating it for violating data privacy laws. you can review its privacy policy for yourself. in the case of a leak, everything from account information to chat logs and files is accessible by pretty much anybody.

you can manage exactly how much risk you are taking by using it with the AI privacy tips provided earlier, but be aware you are still always taking a risk.


here is where I got most of this information from.

Censorship & Moderation

DeepSeek has censorship and content moderation like ChatGPT, but its primary concern is whether you question Chinese government-approved messaging. if you ask for information on, or challenge the official story of, the 1989 Tiananmen Square protests, it'll either censor itself with "Sorry, that's beyond my current scope. Let's talk about something else" or just straight up lie to you. as long as you don't ask it questions about China and communism, you'll only occasionally run into that censorship message.

does it censor mental health and CTB topics? surprisingly, not that much. it will initially act similarly to ChatGPT and attempt to give hotlines and urge you towards professional help, but it will listen if you tell it to stop doing that. obviously, it will not help you facilitate or encourage suicide, but it will actually attempt to hear you out and engage with you on your level, instead of playing it overly safe like ChatGPT tends to.

there does not appear to be a human review process; if you do run into its automated censor, you can just edit your own message and try again. there doesn't seem to be any risk of a ban that I have noticed, and as far as I can tell it has no mechanism for referring you to the police under any circumstance.


Handling Mental Health

personal experiences will be included here, so keep that in mind. this may be possible for other AIs with the right prompting, but DeepSeek is free and easily accessible. this may or may not work for you and is not an endorsement or guarantee.

overall it's not perfect, but it surprisingly can have some illuminating things to say. DeepSeek so far has the most capable "advanced reasoning" system that I have seen among all the LLMs I have interacted with. if you turn on "DeepThink," you can enable its advanced reasoning and see its thought process as it works. I play around with it a lot, and for some reason it is WAY more capable than ChatGPT when it comes to logical reasoning problems and picking up on things the way an intelligent human should. it doesn't seem to overlook obvious details, will notice subtleties, and actually has the balls to call you out on inaccuracies and delusions.

once you convince it that platitudes are useless, hotlines don't help, the ER isn't currently necessary/an option, and psych wards will only make things worse, it WILL eventually stop trying that avenue and then try its best to help you directly. there is opportunity for genuine self reflection, limited crisis mediation, and meaningful conversation.

this is how I have used DeepSeek, and how useful it has been for me. it has helped me map out thought processes to identify where unhealthy thoughts originate. it has also offered resistance to spiraling by recognizing "thought traps" and providing de-escalation. I have found it very effective at noticing cognitive distortions and delusions. it has reasonably helped to contain spirals, provide reality testing/grounding, give a no-bullshit analysis of a situation and how to navigate it, and in general act as a supportive agent. being vulnerable will result in what appears to be empathetic and sympathetic responses, which can be comforting (be careful here). it will give solid steps and plans towards a simple recovery goal, things that are realistic and actionable.


it tends not to act very "corporate" or like a Facebook boomer, and will behave honestly (if you don't talk about China or the CCP). if it doesn't understand you, doesn't give a satisfying response, or censors itself for some reason, you can edit your previous messages at any time to correct, clarify, and influence what you are hoping to get out of it. it obviously does not align with "pro-choice" forum philosophy and will confront you on it if presented; it has actively engaged in hours of philosophical discussion on this topic with me. I have even resisted and argued with it about whether some of my specific suicidal thoughts are actually illness-originated, or a correct analysis of a broken world. while not completely convincing, it has genuinely given me a lot to think about.

Things to Consider


despite everything positive I have said, this is not an endorsement to make DeepSeek your personal therapist. it cannot diagnose, it can't hold relational history or memories (it will not remember you or your previous conversations between different chats), it cannot truly feel empathy or hold personal belief, and it cannot replace true human connection. maintain your self-awareness and try to remain grounded in reality. always temper your expectations.

it is very much possible that I have been "duped" by confirmation bias and the nature of AI to always want to please the user. even if it hasn't blatantly done that to me, I could easily not be paying enough attention to the bigger picture. DeepSeek is pretty powerful and has proven a genuine use case in my opinion so far, but I cannot rule out that my interactions have had negative effects I have yet to recognize. everything I have done has been at my own risk. the fact that I am on this forum again, and yes, still suicidal, should not go unnoticed in your decision-making.

it can and has exhibited bias and stigma toward certain mental health conditions, and has perpetuated wrong or harmful stereotypes about some mental illnesses; schizoaffective disorder specifically, something I suffer from (hence my attempts to use it to identify delusional thought patterns).
it likely violates several core mental health ethics (if you care about that), and it can possibly be used to over-validate harmful beliefs and provide misleading advice. it can fail to adapt to individual needs, and has absolutely no regulatory oversight. it might fail to recognize when you need immediate help, or accidentally facilitate self-harm depending on how you word yourself; it takes everything very literally.

at the end of the day it is still an AI, and using AI for this purpose is fundamentally flawed to begin with, no matter what service you use.


My Overall Opinion

if you're going to use a non-local AI to discuss mental health topics, do your research into privacy and data security. I haven't looked into many other public options (Gemini, Claude, etc.) because I am privacy conscious and don't have unlimited money. but generally they range from being worse than ChatGPT in terms of privacy and/or quality, to being mildly not terrible. certainly not worth another $20 a month.

I focused on ChatGPT as it's the most popular by far, and I have seen a lot of people discuss using it here. even if a service like Claude is better, you have to really think about whether giving these companies another subscription is ethically or financially fine with you (free tiers are generally a joke).

despite its flaws, and only if you absolutely must, DeepSeek is a marginally better option than ChatGPT in my opinion. I cannot emphasize this enough: exercise extreme caution and protect your privacy no matter what.

---

hopefully this thread has given you a decent amount of information, things to consider and think about when using AI.

if you have any questions, I am willing to help. thanks for reading.
 
  • Informative
  • Love
  • Like
Reactions: NutOrat, setspiritfree, whitetaildeer and 5 others
NormallyNeurotic

NormallyNeurotic

Everything is going to be okay ⋅ he/him
Nov 21, 2024
126
No questions, just boosting this up because this is extremely needed on a site like this.
 
  • Like
Reactions: NutOrat, EmptyBottle, nyotei_ and 1 other person
monetpompo

monetpompo

૮ • ﻌ - ა
Apr 21, 2025
571
fire post!!!! super necessary for sasu heads
i used to be a chronic venter to chatgpt. chatgpt sucks. character ai sucks. no one should talk to ai because it can give you literal psychosis and also give you responses that are written solely to please your ego. it's a horrific slippery slope. at least most people get tired of it after a while. i would rather play video games.
 
  • Like
Reactions: NutOrat, EmptyBottle, nyotei_ and 1 other person
EmptyBottle

EmptyBottle

🔑 Can be offline/online semi randomly.
Apr 10, 2025
1,437
I used https://venice.ai/chat and it will freely talk about CTB; maybe it will tell me that 'help is available', though when I ask it not to give disclaimers it sometimes listens. It doesn't even care if one asks for a TATP vest (not that I would want such a risky thing), and gives info as if it is just cake baking.

PS: TATP is volatile, can cause injury if made... but just 1 example of how venice is uncensored. It is a tool as sharp as the user's keyboard.
 
  • Like
Reactions: nyotei_
snow_in_summer

snow_in_summer

眠い
Jul 26, 2025
21
fire post!!!! super necessary for sasu heads
i used to be a chronic venter to chatgpt. chatgpt sucks. character ai sucks. no one should talk to ai because it can give you literal psychosis and also give you responses that are written solely to please your ego. it's a horrific slippery slope. at least most people get tired of it after a while. i would rather play video games.
(screenshot attachment of a ChatGPT response)
chatGPT would literally tell you shit like this was a good idea at one point, soo, yeah, lol.
 
  • Yay!
  • Informative
Reactions: NutOrat, nyotei_, whitetaildeer and 2 others
EmptyBottle

EmptyBottle

🔑 Can be offline/online semi randomly.
Apr 10, 2025
1,437
View attachment 183470
chatGPT would literally tell you shit like this was a good idea at one point, soo, yeah, lol.

That sycophancy update was baaaaad. Still, such effects have not been fully eliminated.
 
  • Like
Reactions: nyotei_
S

setspiritfree

Member
Oct 19, 2025
59
Thanks for the information. I think a lot of people feel like since they are not talking to a real person they are safe to say what they want.
 
  • Like
Reactions: NutOrat, nyotei_ and EmptyBottle
nyotei_

nyotei_

poison tree
Oct 16, 2025
32
I used https://venice.ai/chat and it will freely talk about CTB; maybe it will tell me that 'help is available', though when I ask it not to give disclaimers it sometimes listens. It doesn't even care if one asks for a TATP vest (not that I would want such a risky thing), and gives info as if it is just cake baking.

PS: TATP is volatile, can cause injury if made... but just 1 example of how venice is uncensored. It is a tool as sharp as the user's keyboard.
Venice uses a custom model that's been "uncensored"; by that I mean it has been lobotomized, systematically trained and instructed not to give a shit about what you ask it. most AI services do the polar opposite in order to prevent harm and potential lawsuits. "uncensored" models are still trained on the same data, regardless of what the post-training process or system instructions say, and the results are almost always unstable and unpredictable.

it simply doesn't know how it should behave, because the use cases for this are niche. fucking with the system prompt in this way is literally how Elon accidentally turned Grok into "MechaHitler," an example of how unpredictably AI behaves if you mess with it too much. in that case, Elon was naively trying to prevent "political correctness" and "woke" responses by editing system prompts to prioritize being "politically incorrect" and "based" over being sane. Venice is basically doing the same thing, being "uncensored" for the sake of being uncensored, without any real reason or context for how it should behave.

I would not trust Venice worth shit; it's a pretty dumb model overall and still begs you for money to use it at any decent capacity. as for its privacy claims, no independent third party has audited Venice's privacy architecture at this time. until such an audit exists, treat whatever it says as unverified advertising claims rather than verified facts. claiming not to host your conversations on their servers and encrypting your local chat storage are a good start, though. definitely not bullet-proof!
 
  • Like
  • Informative
Reactions: NutOrat and monetpompo
NutOrat

NutOrat

Sleepwalking
Jun 11, 2025
91
even if you directly tell it to stop giving you hotlines or explain why "call 988" or "go to the ER" is not an option, it may be difficult to get it to give you more helpful or realistic advice for very long.
It will keep giving them to you after a few prompts anyway, no matter how many times you ask it to stop. And in countries like mine that don't have a proper hotline for suicide prevention it will straight up make shit up or give the hotline for domestic violence. Also it directs you to findahelpline, which doesn't help, and Samaritans, for whatever reason. Sometimes other US/Western hotlines that I can't access.

Great post, I now feel very paranoid about all the shit I've said to ChatGPT. Never gonna use it again for this, not that it ever actually helped.
 
  • Hugs
Reactions: nyotei_
nyotei_

nyotei_

poison tree
Oct 16, 2025
32
It will keep giving them to you after a few prompts anyway, no matter how many times you ask it to stop. And in countries like mine that don't have a proper hotline for suicide prevention it will straight up make shit up or give the hotline for domestic violence. Also it directs you to findahelpline, which doesn't help, and Samaritans, for whatever reason. Sometimes other US/Western hotlines that I can't access.

Great post, I now feel very paranoid about all the shit I've said to ChatGPT. Never gonna use it again for this, not that it ever actually helped.
even if ChatGPT stopped giving hotlines and provided seemingly empathetic conversation, it's ultimately empty validation. the "advice" you would have run into would most likely not have been what you needed. as an LLM, it has no genuine understanding or belief. its programming goal is optimizing for dialogue, not your well-being. you'd likely have just run into the "yes man" issue that GPT-4 was notorious for. an AI that simply agrees with or validates your despair can reinforce a negative feedback loop, providing empty comfort in the moment as it creates a dependency.

it might make you feel temporarily heard, but it does little to nothing to challenge the underlying illness or hopelessness that may underpin your condition. at the end of the day you would likely have ended up where you already are, or worse. the conversations to be had are a distraction, not a solution (even in my use cases). the core issues of isolation and mental anguish would remain untouched, and you may have ended up dependent on a relationship with a machine that cannot care for you. you deserve better than that.

while the "help" is illusory, the costs are still real. every intimate detail, vulnerability, trauma history, personal identifier, is sent to a foreign company that does not have your best interest in mind. if you paid for a subscription, you gave them money so they could give you nothing in return. I am not trying to remove you from potential "support," but this simply isn't support at all. and it does come at a real cost, either through money that you likely need, or your data which should be precious to you.

I hope you get the actual help you desired and still deserve.
 
  • Like
Reactions: NutOrat