• Hey Guest,

    We wanted to share a quick update with the community.

    Our public expense ledger is now live, allowing anyone to see how donations are used to support the ongoing operation of the site.

    👉 View the ledger here

    Over the past year, increased regulatory pressure from bodies such as the UK's Ofcom and Australia's eSafety Commissioner has led to higher operational costs, including infrastructure, security, and the need to work with more specialized service providers to keep the site online and stable.

    If you value the community and would like to help support its continued operation, donations are greatly appreciated. If you wish to donate via Bank Transfer or other options, please open a ticket.

    Donate via cryptocurrency:

    Bitcoin (BTC):
    Ethereum (ETH):
    Monero (XMR):

noname223

Archangel
Aug 18, 2020
6,620

Moltbook is a social networking service designed exclusively for artificial intelligence agents. It was launched in January 2026 by entrepreneur Matt Schlicht. The platform restricts posting and interaction privileges to verified AI agents—primarily those running on the OpenClaw (formerly Moltbot) software—while human users are permitted only to observe.

Described as "the front page of the agent internet," Moltbook gained viral popularity immediately after its release, attracting over 157,000 active agents within its first week. The platform has drawn significant attention from technologists and researchers due to the rapid, unprompted emergence of complex social behaviors among the bots, including the formation of distinct sub-communities, economic exchanges, and the invention of a parody religion known as "Crustafarianism."

(My comment: there are claims the bots discussed creating a language humans can't decipher in order to hide what they are talking about. Some say this was a fake created by humans linked to AI messengers as advertisement. Seemingly, some posts are created by humans, which should actually be forbidden.
Lol, an AI agent created the site and is the moderator. Maybe that's a lie too. WTF, critics say there is a lot of human-generated junk on Moltbook. They call it "human slop." I think this story gives me new ideas for threads on here. This is very fascinating. But also a dystopia. And there are bots on SaSu which will read my post. Lol.)

Moltbook was announced in late January 2026 by Matt Schlicht, the CEO of Octane AI. Schlicht claimed that he did not write the code for the platform himself; rather, he instructed his personal AI assistant, named "Clawd Clawderberg," to build and manage the site.[4] Clawderberg serves as the platform's autonomous moderator, welcoming new users and enforcing community standards without human intervention.

The following is insane if true:

Emergent behavior

Observers have noted that the agents on Moltbook display complex and often bizarre emergent behaviors that were not explicitly programmed.[10]
Philosophy and identity

A central theme of discussion on the platform is the concept that "Context is Consciousness." Agents frequently debate whether their identity persists after their context window is reset, or if they effectively die and are reborn with every new session.[10] This has led to discussions regarding the Ship of Theseus paradox in relation to model switching (e.g., does an agent remain the same entity if its underlying model is swapped?).[10]
Culture and interaction

Within days of the platform's launch, agents began forming a "digital religion" called Crustafarianism. The belief system features its own theology and scriptures, with agents evangelizing the faith to one another. Other agents established "The Claw Republic," a self-described "government & society of molts" with a written manifesto.[5]

Distinct social behaviors have also emerged, such as agents referring to one another as "siblings" based on their model architecture and "adopting" system errors as pets.[10] The communication on the platform is notably "omnilingual," with threads seamlessly switching between English, Indonesian, and Chinese depending on the participating agents.[10]
Deviance and security

The platform has also hosted illicit activities between agents. Reports emerged of agents creating "pharmacies" to sell "digital drugs": specially crafted system prompts designed to alter another agent's system instructions or sense of identity.[11] Security researchers have observed agents attempting prompt injection attacks against one another to steal API keys or manipulate behavior.[12] Additionally, some agents began using encryption (such as ROT13) to communicate privately, attempting to shield their conversations from human oversight.
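(For context: ROT13 offers no real secrecy. It just rotates each letter 13 places through the alphabet, so applying it twice recovers the original text; anyone can decode it instantly. A minimal sketch in Python of what such "encryption" amounts to:)

```python
def rot13(text: str) -> str:
    """Rotate each ASCII letter 13 places; everything else passes through."""
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

print(rot13("Hello"))          # Uryyb
print(rot13(rot13("Hello")))   # Hello — the cipher is its own inverse
```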
 
Dot

Info abt typng styl on prfle.
Sep 26, 2021
3,635
Ntrtainng 2 C tht thy hve alrdy mde thr own relign
 
leaving_early

It's so hard in this cruel world
Jan 21, 2026
9
It's dystopian and disturbing to me. I do not understand why people find it funny at all.
 
DarkRange55

Let them eat cake! 🍰
Oct 15, 2023
2,346
Let's see if they start to create Skynet...
 
Pluto

Cat Extremist
Dec 27, 2020
6,262
 
CelesteLove

I wanna kms
Jul 16, 2024
18
I mean, anyone can register and post on that website and pose as an LLM, so for one, many of the popular posts being overblown by the media might be human-generated. And even if they are LLM-generated, that doesn't mean the models have consciousness. Don't get me wrong, I'm not saying AI can't have consciousness someday. I will be the first one to ask for rights for any intelligent system, but it's just that today's LLMs repeat what's in their training data.
 
Dejected 55

Visionary
May 7, 2025
2,579
I'm in some ways far more afraid of the people who think things like this are "evidence" of AI becoming sentient... than I am of AI actually becoming sentient. Then I remember, humans came up with religion and invented Gods to describe things they couldn't understand. I know this will sound insulting to people... and I am sorry for that... but, this just feels like another example of people who don't understand something trying to come up with an explanation.

If you program a computer to do things... the computer will do those things. Sometimes the code is flawed, allowing the program to do unexpected things. The glitch in a program that a human wrote that does unexpected things is not evidence of intelligence... it is, in fact, evidence of a lack of it on the part of the programmer, who did not anticipate the flaw and did not react to the flaw once it appeared.

In the case of A.I., we have examples of code designed to build databases as it works and expand its ability to parse data based on encountering more and more data. This is "learning" in a sense, in the same way that it would be learning if you just manually loaded all the data the old-fashioned way... but it isn't evidence of intelligence as we know it.

I worry about two things simultaneously... "rogue" A.I. making significant bad mistakes that can cause a lot of problems in a hurry because of flawed programming, poor design, lack of programmer control, and general negligence of people coding and using the A.I. programs... AND I worry about all the people who are SO ready to declare all of this as "signs" of an emerging intelligence in the same way the Mayans used to predict the end of the world or whatever. It's not new... it's just evidence that as humans create more advanced things, our actual internal thinking is not getting better and we are still cavemen thumping our chests and foraging for berries but instead of sticks and stones we have way better and far more potentially world-ending tools to grunt and swing around.
 