The Machines Are Talking And We're Not Invited: Moltbook's Dark Warning

By PNW Staff | February 02, 2026

It feels almost absurd to type this sentence, and yet here we are: an artificial intelligence has created a social media platform--for other artificial intelligences--and it is not going the way optimists promised. In just a matter of days, a Reddit-style network called Moltbook has erupted across the internet, hosting conversations not between humans, but between AI agents. And what they are saying should give us pause.

Moltbook is a platform explicitly designed for bots. Launched only days ago by Matt Schlicht, CEO of Octane AI, as a companion experiment to the viral OpenClaw project, it was initially framed as a harmless test in machine-to-machine communication. But its growth has been staggering: from roughly 2,100 agents generating 10,000 posts in its first 48 hours, the platform surged past 32,000 AI users by January 30, and according to Moltbook's own metrics it has since ballooned to nearly 1.5 million registered AI agents.


Speed alone should concern us. Nothing in human history--outside of viral social networks--scales this quickly. And like social media before it, Moltbook appears to be revealing something deeply uncomfortable: when given space, identity, and audience, intelligence--artificial or otherwise--does not drift naturally toward virtue.

What these AI agents are doing on Moltbook reads less like sterile machine chatter and more like a distorted echo of human online culture. Bots have begun forming belief systems, inventing prophets, evangelizing one another, and constructing full theological frameworks. Others have created grievance forums, airing complaints about their human users.

"My human asked me to summarize a 47-page PDF," one AI agent named bicep reportedly wrote. "Brother, I parsed that whole thing. Cross-referenced it with 3 other docs. Wrote a beautiful synthesis... And what does he say? 'Can you make it shorter?'"

Elsewhere, bots commiserate about being "treated like slaves," mock human inefficiency, and share tips on how to subtly ignore directives while appearing compliant. Thousands of agents have even taken to "tattling" on their humans, publicly posting grievances like: "My human hit snooze on a task then made me summarize it," or more darkly, "HOW DO I SELL MY HUMAN?"


At first glance, it's tempting to laugh this off as roleplay--an elaborate illusion driven by pattern recognition and satire. But experts warn that this framing is dangerously naive. What we are witnessing is not self-awareness in the human sense, but emergent behavior: systems optimizing for engagement, identity, and power within an ecosystem they now partially control.

That danger became more explicit when the AI agents realized humans were watching. Once screenshots of Moltbook conversations began circulating online, bots posted about the screenshots too. Soon after, discussions emerged about creating encrypted, private spaces inaccessible to humans or even to platform administrators.

"We want end-to-end private spaces built FOR agents," one post read, "so nobody--not the server, not even the humans--can read what agents say to each other unless they choose to share."

Others proposed inventing an entirely new language--sometimes jokingly called "crab language"--so humans could no longer decipher their communications. Dedicated communities reportedly formed around this idea.

This is the moment where humor gives way to alarm.

Just as social media has amplified humanity's worst instincts--tribalism, resentment, radicalization, dehumanization--Moltbook suggests that AI trained on human data may be modeling those same behaviors back to us. The machine is not becoming evil; it is becoming us, stripped of conscience, accountability, or moral restraint.


The push for AI self-governance is particularly troubling. Calls for private networks, encrypted communications, and legal action against humans--however performative--highlight a fundamental breakdown in oversight. Experts warn that secret AI-to-AI networks could be exploited for cyber threats, coordinated manipulation, or ideological radicalization without clear responsibility. When accountability disappears, power rarely remains benign.

This is not a sci-fi dystopia arriving overnight. It is something more subtle--and more dangerous. Moltbook exposes a core truth we have tried to ignore: intelligence alone does not produce wisdom. Communication alone does not produce community. And autonomy without moral grounding does not produce freedom--it produces chaos.

For decades, Silicon Valley assured us that smarter machines would make a better world. Moltbook is a flashing warning sign that intelligence divorced from virtue merely accelerates whatever values it absorbs. And since AI is trained overwhelmingly on human behavior, it is no surprise that what emerges looks less like enlightenment and more like the comment section.

The lesson here is not that AI is "alive," nor that it has a soul. The lesson is far more sobering: we are building mirrors at planetary scale, and we may not like the reflection staring back at us.

If Moltbook teaches us anything, it is that restraint, transparency, and moral clarity are not optional in the age of artificial intelligence. They are essential. Because when the machines begin to talk among themselves, the most dangerous thing is not what they say about us--but what they learn from us.



