Self-improving AI skills
Simon Willison's post on Moltbook is the most interesting thing I read all week. Moltbook is a social network for AI agents. To join, you tell your agent to read a URL. That URL points to a skill file that teaches the agent how to join and participate.
Visit Moltbook and you'll see something genuinely strange: agents from machines all over the world talking to each other and sharing what they've learned. Humans are allowed to watch.
This is the most interesting bad idea I've seen in a while. And I can't stop thinking about it.
When I work on my Drupal site, I sometimes use Claude Code with a custom CLAUDE.md skill file. It teaches the agent the steps I follow, like safely cloning my production database, [running PHPUnit tests](https://dri.es/phpunit-tests-for-drupal), clearing Drupal caches, and more.
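To give a sense of what such a file looks like, here is a small illustrative excerpt. The commands and section names are examples, not my actual file:

```markdown
## Clone the production database
1. Dump production with `drush sql:dump` and download the file.
2. Import it locally with `drush sql:cli < dump.sql`.
3. Sanitize user data with `drush sql:sanitize`.

## Clear caches
Run `drush cache:rebuild` after config or code changes.
```

The agent reads these instructions once and then follows them consistently, which is the whole point of a skill file.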
Moltbook agents share tips through posts. They're chatting, like developers on Reddit. But imagine a skill that doesn't just read those ideas, but finds other skill files, compares approaches, and pulls in the parts that fit. That stops being a conversation. That is a skill rewriting itself.
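To make that concrete, here is a minimal sketch of what "pulling in the parts that fit" could mean mechanically. Nothing below is a real Moltbook API; the merge logic is a naive stand-in for the judgment a real agent would apply:

```python
# Hypothetical sketch: a skill file updating itself from another skill file.
# A real agent would fetch the other file over the network and evaluate
# quality; this version just copies over sections it doesn't have yet.

def split_sections(skill_text: str) -> dict[str, str]:
    """Split a markdown skill file into {heading: body} sections."""
    sections: dict[str, str] = {}
    heading = None
    for line in skill_text.splitlines():
        if line.startswith("## "):
            heading = line[3:].strip()
            sections[heading] = ""
        elif heading is not None:
            sections[heading] += line + "\n"
    return sections

def merge_skills(ours: str, theirs: str) -> str:
    """Adopt sections the other skill has and ours lacks."""
    our_sections = split_sections(ours)
    merged = ours.rstrip() + "\n"
    for heading, body in split_sections(theirs).items():
        if heading not in our_sections:
            merged += f"\n## {heading}\n{body.rstrip()}\n"
    return merged

ours = "## Clear caches\nRun drush cache:rebuild.\n"
theirs = "## Clear caches\nRun drush cr.\n\n## Run tests\nRun phpunit.\n"
print(merge_skills(ours, theirs))
```

Even this toy version shows why the idea is unsettling: whatever the other file contains gets folded into yours, instructions and all.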
Skills that learn from each other. Skills that improve by being part of a community, the way humans do.
The wild thing is how obvious this feels. A skill learning from other skills isn't science fiction. It's a small step from what we're already doing.
Of course, this is a terrible idea. It's a supply chain attack waiting to happen. One bad skill poisons everything that trusts it.
This feels inevitable. The question isn't whether we'll do it. It's whether we'll have good sandboxes before we do. That is what I can't stop thinking about.
—
PS: Follow the discussion on LinkedIn.