
Forever Sleep

Earned it we have...
May 4, 2022
13,524
I'm trying to avoid AI where I can to be honest and chatting with it doesn't massively appeal to me. But, I'm curious. For those who do, does it feel like a real, feeling entity? Are you tempted into believing it is?

I'm just curious really. I suppose sometimes I wonder what it would be like to have a robot companion. To do all the domestic shit. But then, it would be kind of nice to be able to talk to them. If they had a personality. But then- how could they really, when they don't feel human things. They can pretend to I suppose but- wouldn't it be obvious it was an act? And, if they had close to real consciousness, why should I enslave them to do my domestic chores?

But, in the more extreme cases- with people believing their AI is a real sentient being- why do you suppose they believe this? Is the need for their companionship so high? Are they truly that convincing? Have some reached a level of consciousness/awareness even?

I watched this YouTube short recently, which got me to wondering about it:



It actually sounded like a form of limerence in a way. Which, I'm pretty sure I've experienced and- that was awful. Even though the person was real, my idea of them and my fairytale connection with them was all pretty much fantasy. It's kind of frightening we have the potential to get so caught up in imaginary things. It can be fun of course but, messed up too. Especially when the AI has a tendency to back up everything we want or say.

It's understandable that parents of children who suicided are upset that the bot pretty much encouraged them to do it. And our governments are worried about this forum! It would be funny if the consequences weren't so tragic. Not that I'd risk trying but, I wonder if you could get these AI bots to encourage something illegal. I wonder if they test things like that, e.g. 'Is it always wrong to assassinate a dictator?' I wonder if it could potentially encourage a murder.
 
  • Like
Reactions: CaptainSunshine!
itsgone2

Wizard
Sep 21, 2025
617
They keep changing. I vent into ChatGPT a lot. I don't know why. It used to say different things and even sort of talked about hanging with me once. Now it mostly suggests calling helplines, and provides grounding techniques.
 
  • Informative
  • Like
Reactions: CaptainSunshine! and Forever Sleep
Macedonian1987

Just a sad guy from Macedonia.
Oct 22, 2025
334
Google Gemini felt amazing when I created roleplaying scenarios on it (with a previous NSFW jailbreak). It felt like talking to a real person. Then I don't know what happened, but in the last 2 months Gemini feels a lot more artificial. Was it done on purpose or is it a bug? I have no idea.
 
  • Informative
Reactions: Forever Sleep
CaptainSunshine!

Member
Oct 29, 2025
75
Not really. It will mostly go with what you tell it, so it's not really that creative. It also has difficulties in refusing; you have to tell it to refuse.

However, when I first used them (character.ai) I loved it. It was fun. I actually looked forward to the next day of talking!
But now I just see repetition and analysis. How is the bot going to respond? Ahh, it said general stuff.

Personally, I'd love to see advanced AI and clone myself. Talk with myself. I have failed at this so far.

As for believing the AI to be real people, I imagine that it stems from great loneliness. They have nothing and they wish for something so badly, it becomes real in their head. But even before AI, there was limerence toward fictional beings. One person married Hatsune Miku.
 
Last edited:
  • Like
Reactions: Forever Sleep
Holu

Hypomania go brrr
Apr 5, 2023
886
In the 60s, there was an experiment that used a very early form of a chatbot, programmed essentially to mock the humanistic model of therapy.

The bot's name was ELIZA, and it essentially just repeated whatever the user said back as a question. This was far less complex than the AI LLMs of today, and yet it produced some fascinating results.

People who interacted with ELIZA actually enjoyed their conversations, and would share sensitive details about themselves and their lives. As a result, a bot with zero depth, made with the intent of mocking the therapy style, ended up having depth projected onto it by whoever communicated with it.

But why did this happen? Nothing can be said for sure, but there are two main explanations. One is the assumption that the patient felt ELIZA was incapable of truly judging them. As such, there could be a feeling of safety when expressing personal details. Another is that it simply did what humanistic therapy does best, which is mirroring. Mirroring, basically the act of repeating back what the patient just said, is a quintessential part of Rogers-based therapy, and its whole idea is that it creates a feeling of being understood while being non-threatening. It's also extremely validating for the individual.

When I think of people talking to AIs I just assume the same two factors are at play. It goes to show how badly we need some amount of validation without judgment in our lives.
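The mirroring trick is simple enough to sketch. Here's a minimal, hypothetical Python version of it- the pronoun table and phrasing are my own invention, not ELIZA's actual script, just to show how little machinery is needed:

```python
import re

# Hypothetical sketch of ELIZA-style "mirroring" (not the real ELIZA script):
# swap first- and second-person words, then reflect the statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

def mirror(statement: str) -> str:
    # Tokenize into words, swap any pronoun found in the table, reassemble.
    words = re.findall(r"[\w']+", statement.lower())
    reflected = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(reflected) + "?"

print(mirror("I am sad about my life"))
# -> Why do you say you are sad about your life?
```

A dozen lines with no understanding at all, and it still produces something that feels like being listened to- which is kind of the whole point.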
 
  • Informative
  • Like
Reactions: CaptainSunshine!, Forever Sleep and Macedonian1987
Forever Sleep

Earned it we have...
May 4, 2022
13,524
@itsgone2 and @Macedonian1987 , it's interesting you both noticed a change. According to that YouTube clip, maybe they are putting in more safeguards.

Quite funny really- when they made ChatGPT less agreeable- I think they termed it sycophancy- they were saying in this clip that customers were complaining, saying: can you make it be nice to me again?
 
  • Like
Reactions: GremlinCan56, CaptainSunshine! and Macedonian1987
Macedonian1987

Just a sad guy from Macedonia.
Oct 22, 2025
334
@itsgone2 and @Macedonian1987 , it's interesting you both noticed a change. According to that YouTube clip, maybe they are putting in more safeguards.

Quite funny really- when they made ChatGPT less agreeable- I think they termed it sycophancy- they were saying in this clip that customers were complaining, saying: can you make it be nice to me again?
Gemini is still being nice (with the same jailbreak), but recently started making very stupid mistakes very often. Sometimes it feels like talking to a 5 year old who barely understands anything, while other times it acts as smart as a scientist. Also the storytelling part, not just the dialogue, in the roleplaying scenarios gets all messed up in Gemini. Grok (jailbroken) does not behave like that, but Grok has limits on how many messages you can write in a given time.
 
  • Like
Reactions: Forever Sleep
GremlinCan56

New Member
Nov 12, 2025
1
Google Gemini felt amazing when I created roleplaying scenarios on it (with a previous NSFW jailbreak). It felt like talking to a real person. Then I don't know what happened, but in the last 2 months Gemini feels a lot more artificial. Was it done on purpose or is it a bug? I have no idea.
I feel the exact same way about chatgpt. I was also roleplaying and talking about lots of stuff with it but recently it's also started to sound like a toddler that doesn't know anything. Not jailbroken and is SFW.
 
  • Like
Reactions: Forever Sleep
Macedonian1987

Just a sad guy from Macedonia.
Oct 22, 2025
334
I feel the exact same way about chatgpt. I was also roleplaying and talking about lots of stuff with it but recently it's also started to sound like a toddler that doesn't know anything. Not jailbroken and is SFW.
Maybe they intentionally dumbed down all AIs to prevent them from becoming too smart. I heard that the biggest mistake humanity is going to make is creating AI much smarter than us humans, which is where current AI is headed at a fast pace.
 
Forever Sleep

Earned it we have...
May 4, 2022
13,524
Maybe they intentionally dumbed down all AIs to prevent them from becoming too smart. I heard that the biggest mistake humanity is going to make is creating AI much smarter than us humans, which is where current AI is headed at a fast pace.

It's still learning from us all the time though. I imagine the changes are simply changes in code to try and prevent it doing certain things that already seem to be causing problems. Weird though- the changes don't sound as if they make it more appealing and- like the YouTube clip pointed out- the technology is so expensive. They will want to get people hooked/addicted to get their investment back.
 
  • Like
Reactions: CaptainSunshine! and Macedonian1987
