Views from the maker of OpenClaw and the author of Claude's constitution; New York in isometric pixel art; Fun robots and security vulnerabilities
I've missed a couple of weeks due to travelling and illness. What a difference it makes: a lot has happened! Although actually it is nice to be able to take a considered view of the OpenClaw / Moltbot / Moltbook story that blew up over that period, rather than being immersed in the daily froth.
OpenClaw Creator: Why 80% Of Apps Will Disappear
My favourite reflection is from the author of OpenClaw himself, Peter Steinberger (the "Clawfather"), interviewed on the Y Combinator Startup podcast. His previous claim to fame was as the founder of PSPDFKit (now called Nutrient), who created a PDF rendering library used across many apps and devices. Initially he comes across very casual, as someone who has organically built up software and tools to solve his own problems, and it just happens to have attracted huge interest. However there's clearly more deep thinking behind the offhand persona. A few takeaways for me:
- The real difference is that it runs on your computer, not in the cloud. Retaining and inspecting memory is easy, as it is stored in local files. You can "give it all the skills you have yourself" (from your computer). You're not tied to the "data silo" of an AI company.
- It can be personalised in a deeper way. Just like Anthropic has a soul document, so does OpenClaw, but it contains your instructions, not Anthropic's. Peter talks a lot about the "sassy" personality of his agent, which took him some time to create.
- Frameworks built for coding can extend to all sorts of other domains: "Coding is really like creative problem solving that maps very well back into the real world".
- He has a good analogy, that managing the complexity of running 10 coding agents in parallel is like being a "skilled driver".
Worth a listen, as he is living a year or two in the future compared to most.
Meanwhile, other good commentary:
- Moltbook was peak AI theater. MIT Technology Review nails it: "What we are watching are agents pattern-matching their way through trained social media behaviors".
- Greenfield tech. Robin Sloan rightly points out that we're at the beginning of a new era, and that's why it feels open, free and exciting. It always does in the pre-enshittification period. "The language models in 2026 are Google in 1999, Twitter in 2009"
It’s interesting and useful to imagine — really visualize — the chatbots and agents in ten years or twenty … barnacled with gunk … locked in a permanent cat-and-mouse game with their adversaries … just as a platform like Google is today. In 2036, you send your AI agent out into the internet, and it returns battered, bedraggled, inexplicably enthusiastic about a bargain flight to Bermuda.
Amanda Askell Explains Claude's New Constitution
Another podcast, this one from Hard Fork (New York Times). I enjoyed hearing from Amanda Askell from Anthropic, who is the "philosopher behind Claude's personality", discussing the newly published Constitution for Claude (following the leaked "soul" document last year). It was reassuring to hear a very reasonable and well considered process, but at the same time striking to hear just how deeply she'd accepted an anthropomorphic view. In this excerpt she's explaining the rationale for having this at all, and "you" is Claude:
So if you understand the reason you're doing this is because you actually are trying to care about people's well-being and you come to a new situation where there's, you know, hard conflicts between someone's well-being and what their stated preferences are, you're a little bit better equipped to navigate it than if you just know a set of rules that don't even necessarily apply in that case.
In Two Surprising Things About Claude's Constitution, Maury Shenk agrees that it is surprising how deeply the constitution assumes "personhood" for Claude.
As Iskander Smit puts it in Soul documents and the new priesthood of AI:
The doom scenario isn't a machine that decides to eliminate us. It's a gradual delegation of agency - out of convenience, out of trust in systems we don't understand, out of faith in a new priesthood writing the rules. ... If we're treating AI systems as entities worthy of constitutions and exit interviews, shouldn't we ask who gets to write them?
isometric nyc
The end result of this project is a very cool vast isometric view of New York in a pixel-art style. What's more interesting is the process taken to get there, using Nano Banana Pro and other mainstream tools to begin with, and then fine-tuning an open weights Qwen model (at a cost of $12), and creating "micro-tools" to help along the way. Lots of good insights into both scaling up and the creative coding process with AI.
This project is far from perfect, but without generative models, it couldn’t exist. There’s simply no way to do this much work on your own, and hiring a team of artists large enough to hand-draw pixel art for every building in New York City would be impossible. AI agents unlock a universe of creative projects that were previously unimaginable.
Other things I noted (both fun and scary)

Fauna Sprout is a modular, safe home robot you can tinker with and program, with quite sophisticated built-in behaviours (see the paper Fauna Sprout: A lightweight, approachable, developer-ready humanoid robot). Thanks to ImportAI for the link.
Anthropic's new model is a pro at finding security flaws. As part of testing the newly released Opus 4.6 model, Anthropic have shown how good it is at finding new vulnerabilities in existing open source libraries:
Claude found more than 500 previously unknown zero-day vulnerabilities in open-source code using just its "out-of-the-box" capabilities, and each one was validated by either a member of Anthropic's team or an outside security researcher.