noname223

Archangel
Aug 18, 2020
6,455
I just read a piece in my favorite German politics journal.

I will translate the first part for you with Google Translate.

Unnoticed by most, power is shifting from humans to machines. Initial studies attest to it: humans are becoming dumber through AI. The more people use it as an aid, the lower their cognitive activity and ultimately their ability to think critically. To counter this process, we need media education that helps us understand how language machines work.

I am not sure whether the studies actually support such a confident claim.

Here is a piece from The Washington Post.


But should The Washington Post actually be trusted on this when its owner is Jeff Bezos? Amazon's cloud services profit massively from the AI hype.

I am reading the article from The Washington Post, and it sounds like a lot of bullshit to me.

But Gilbert argues that the ChatGPT essay writers in the MIT study could also be viewed as exemplifying what he calls "cognitive spillover," or discarding some information to clear mental bandwidth for potentially more ambitious thoughts. "Just because people paid less mental effort to writing the essays that the experimenters asked them to do, that's not necessarily a bad thing," he said. "Maybe they had more useful, more valuable things they could do with their minds."

Okay, I read the whole article and it sounds like a lot of bullshit to me.

Most intellectuals I listen to are highly critical of how AI is used. Slavoj Žižek, for example.

I am considering using AI less. I usually don't use it to get in-depth information about topics I am really interested in, like politics or philosophy. I rather use it for trivial stuff, and a little bit like a therapist after my real-life therapist stabbed me in the back.

My friends tell me to stop using it so frequently. And most smart people (without a conflict of interest) advise against using AI frequently. The article I posted from Die Blätter is actually really good. I think you can read that text too; it is not behind a paywall, if you use a translator. I can highly recommend it.
 
  • Like
Reactions: katagiri83
chudeatte

outsider
Aug 5, 2025
127
I think it does, just because it makes getting information easier, even if that info is false. Google has been dominated by AI search results, so it's harder for someone to actually dig through websites and gather research on their own. With AI they can just ask a question, get an answer, and that's it.

When I was in school I would gather stuff from my notes, but I did use AI to help with personal statements for uni, and that's it. In that case it was encouraged, and I think it can be good for helping you get a structure of what you want to write, but you can't rely on it. All AI answers sound exactly the same and they lack human essence. It's not a good thing for humans to strip individuality out of writing in favour of convenience, because one, they didn't write it and they probably didn't take anything in, so what's the point of even doing it, and two, it can trap them into relying on an easy tool that's advertised as 'better' than human knowledge. That in turn doesn't let them think critically and solve problems on their own.

Idk if that makes sense, but those are just my thoughts on it, at least in academic scenarios. I don't think it's a good thing as a whole, and people should definitely use it as little as possible, but in a world where it's being pushed on us heavily (new phones coming with integrated AI features) that can be hard.
 
  • Like
Reactions: EmptyBottle
sanctionedusage

Student
Sep 17, 2025
159
from experience, using it only creatively (rp and writing; i don't use it for research, academics, or general info) has made me noticeably dumber and more inept than i already was.

2 years ago when i started, i could hold an entertaining rp with it for literal months with maybe ~10 input messages a day. 2 years later, i can barely get a single good message out of the same model every 24 hours, and definitely not without rerolling like 30 times. i get bored and frustrated with it quickly. but i don't think it's my expectations getting higher. i'm pretty sure it's because my skill is getting worse over time, and the ai mirrors the same shittier and shittier writing style, cliche details, and nonexistent creativity that i'm putting into it. i'm the one writing the entire backstory, response directive, context, key memories, etc., so if i'm using the exact same model as before and the bots i'm making now are shitty compared to back then, it's because i'm getting shitty. and i think i'm getting shitty because i'm getting lazy, because i have the option to auto-generate my OWN message as well as scroll through rerolls of the ai's.

my platform's also pretty dedicated to being completely uncensored, so it's not like a filter is restricting it more now compared to back then. it's entirely a user issue, born from no longer needing to write anything myself and consuming content that's generated based on how often it appears in its training data. it's a formula for repetitive, cliche, uncreative and boring writing.

i was aware that it'd happen, but it's just a hobby and i don't rely on the skill of creative writing seriously. i just wanted entertainment without having to deal with the social anxiety of a human rp partner/cowriter.

i would never use generative ai for academic or career work, especially in medical or social/communication fields. i would never publish anything from the chats as fanfic or other creative non-profit material either. enough shit on the internet reads like a chat gpt excerpt. i think generative ai text should be a serious last resort for only *brief* private exercises (long-term use, especially isolated, worsens the aforementioned risks) or entertainment; it has little to no value outside of that, and TONS of risk if you actually care about your skills in that area.
 
Spicy Tteokbokki

매운 떡볶이
Oct 11, 2020
316
Depends on how you use it. The way 95% of people use it? Yeah.
"Hey GPT how are cinnamon rolls made?"
"Like this lol xd."
Whereas a way to actually make you smarter while using AI is to ask it properly, like:
"Hey GPT, how would one get started with making cinnamon rolls? What's the entire process and chemistry as to how they're made? Can you formulate your answer by hinting at things to research so I can do some research on my own and get back to you so you can check whether what I think the process is, is correct or not? Have me do a small test or mini-exam when I ask to be checked on my knowledge."
Maybe a bad example, but I cannot help but think of Cinnamoroll rn.

Right off the bat you notice a big difference, though: the first query keeps your brain idle, while the second one works with the AI and requires you to put actual effort into learning, with the bot correcting you afterwards. But it also takes way more time... and most people would rather rot their brain on scrolls and swipes than actually learn things in depth.

Personally I've used it for research when I had some vague idea about something but didn't know exactly what to search for myself. GPT would kindly point me in the right direction and explain why that might be helpful to look into, or in some cases why a certain hypothesis of mine would likely fall flat and under which circumstances, which I then checked up on, and yeah, it kinda proved to be right.
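
If you wanted to bake that "make me do the work" style into a script instead of retyping the prompt every time, a minimal sketch using the OpenAI Python SDK might look like the following. The model name, system prompt wording, and helper function are illustrative assumptions on my part, not something anyone in this thread actually uses.

# Minimal sketch of a "tutor mode" wrapper, assuming the OpenAI Python SDK
# (pip install openai) and an API key in the OPENAI_API_KEY environment
# variable. Model name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = (
    "Do not give complete answers directly. Point the user toward things to "
    "research on their own, ask them to report back what they found, and "
    "quiz them before confirming whether their understanding is correct."
)

def ask(question: str) -> str:
    # Send a single question through the tutoring system prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("How would one get started with making cinnamon rolls?"))

The point is just that the "hint, don't answer" behaviour lives in the system prompt, so you only have to write it once instead of pasting it into every chat.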
 
  • Like
Reactions: Rust and katagiri83
Dejected 55

Visionary
May 7, 2025
2,051
I mean, in the pre-readily-available-computer days... kids just plagiarized from the Encyclopedia and many got away with it if they had lazy teachers... so abuse of ChatGPT in this way is really no different. It's just a little easier.

The "danger" is in assigning real intelligence or belief that it is "alive" or something. IF people subscribe to this... then they are getting dumber. I'm not saying it isn't theoretically possible to create a new form of intelligent life... or that it couldn't happen someday... It's just... we think too highly of ourselves sometimes. A lot of how AI is sold to the masses right now is not unlike the old snake oil... that stuff wasn't imaginary, it was useful for something sometimes, but the slick salesman could convince people it was a cure for EVERYTHING and rake in the money.

That's what is happening right now with AI. It is being over-hyped and over-sold... and too many are easily convinced it is the ultimate awesomeness... and people think it can do things it cannot do... and people just use "AI" as a buzzword to apply to any computer thing they don't understand. This stuff happens all the time with any new technology until the hype wears off.

It could be a useful tool... it may very well be... but it isn't going to be a living thing that thinks on its own and takes over the world... People might use it to take over the world... but the people using the tool will be the problem... not the tool.
 
  • Like
Reactions: Rust, alwayspissedoff and Spicy Tteokbokki
Spicy Tteokbokki

매운 떡볶이
Oct 11, 2020
316
and people just use "AI" as a buzzword to apply to any computer thing they don't understand
I hate this so much.
Everything is AI now.
What were previously algorithms are now AI!
Heck, you'll see the same exact software just have AI-branding slapped on top of it now when in fact nothing has changed, ugh.
 
  • Like
Reactions: Dejected 55
Dejected 55

Visionary
May 7, 2025
2,051
I hate this so much.
Everything is AI now.
What were previously algorithms are now AI!
Heck, you'll see the same exact software just have AI-branding slapped on top of it now when in fact nothing has changed, ugh.
Yep... it's like how everything became "plant-based" all of a sudden... because of the fad... things like peanuts suddenly got labeled as "plant-based" as if that was a change from the previous non-plant peanuts or something?

I get plant-based butter substitutes... as if they weren't always plant-based all the time...

And when caffeine was the big concern... all the stuff that never had caffeine got prominent "caffeine-free" labels... people do this crap over and over with every new thing that is easy to hand-wave into a buzzword.

I have so many people come to me with "You know computers, why don't you get into AI?" as if they are interchangeable... and as if one could just "get into" AI somehow as a thing. The world is so frustrating sometimes. Maybe it would be alright if AI did become sentient and take over. Hard to be worse than people!
 
  • Like
Reactions: Spicy Tteokbokki
