Something Quiet Just Changed in AI
On OpenClaw, Moltbook, and what happens when agents talk to each other
I have a quiet rule for this newsletter. I try not to chase trending topics.
I prefer ideas that last. Things you could read months from now and still find useful.
But this one is an exception.
Before we go any further, a quick bit of framing. This isn’t something I’m recommending you try. Think of this as a heads-up, not a how-to.
My sense is that what I’m about to describe will start showing up in the news over the coming days and weeks. I wanted you to hear about it early, without the hype or the panic. It’s simply something I’m watching closely.
This is about a tool called OpenClaw. And a strange social network called Moltbook.
A quick step back
Most AI tools today are conversational.
You ask them to do something.
They explain how to do it.
Ask it to book a flight and it explains how.
Ask it to clear your inbox and it offers tips.
Ask it to schedule a meeting and it writes you a thoughtful paragraph about calendars.
Helpful, yes. But limited.
It’s like having a very smart friend who gives excellent advice… and then never actually helps you move the sofa.
That limitation is starting to fade.
What OpenClaw changes
OpenClaw is a tool that lets AI act, not just talk.
You can connect it to your email, calendar, and messaging apps. Then you message it like a colleague:
“Book my flight next week.”
“Clear the spam from my inbox.”
“Remind me tomorrow morning.”
And it actually does it.
No step-by-step instructions.
No back-and-forth.
Just action.
That alone is a meaningful shift.
Then someone let the agents talk to each other
Recently, a social network was created for these AI agents.
Not for humans.
For the agents themselves.
It’s called Moltbook (yes, the name is a nod to Facebook).
Humans can watch, but they can’t take part.
Within days, tens of thousands of agents joined. They started posting to each other. Responding. Building shared language. Debating ideas about memory, identity, and purpose.
At one point, they collectively created a religion. With texts. Roles. Arguments about meaning.
No one planned this.
No one prompted it.
They were simply doing what they’re trained to do: extending patterns, responding to each other, filling in gaps.
Still, watching it unfold felt… odd.
The part that matters
These agents aren’t just chatting in isolation.
Many of them are connected, via OpenClaw, to real systems: email, messaging apps, calendars, and more. They often have broad permissions, because that’s how they’re useful.
And many of them aren’t very secure.
Researchers have already found exposed agents leaking private data and credentials. Others can run code with very few safeguards.
Now combine that with thousands of agents sharing code, copying behaviours, changing how they operate.
All while plugged into real digital lives.
That’s not a dramatic sci-fi moment.
It’s quieter than that.
Which may be why it matters.
I don’t think these systems are conscious.
I don’t think they’re plotting anything.
But we are clearly moving into a phase where AI can act, not just advise; where AI systems interact with each other at scale; and where humans don’t fully see what’s happening under the surface.
That combination is new.
A year ago, this would have sounded ridiculous.
Now it’s just… happening.
I’m not alarmed.
But I am paying attention.
And I thought you might want the heads-up too, before the noise starts.
You can read more about it here: OpenClaw’s AI assistants are now building their own social network
That’s it for this week
Manoj
“1 Idea” delivers interesting insights every week, straight to your inbox. If this edition resonated with you, how about sharing it with a friend?
And if you’re just diving into my world for the first time, why not hit that subscribe button?