Transformation of work; OpenAI in a pickle; Training customer support agents with the Diplomacy board game
The next great transformation ⊗ AI and the futures of work
A thought-provoking curation of material in Patrick Tanguay's Sentiers newsletter this week. The main piece frames AI as a disembedding shock: economic activity pulled out of the social institutions that have historically absorbed change. Framing it as an AI race misses the point; rather, we need to consider which model of social embedding can integrate AI without tearing society apart.
Also, as an aside, a heartfelt capture of today's zeitgeist that resonated with me:
Some days I feel very very tired. Like when within an hour I read that family deepfakes help people celebrate and grieve in India, that a judge had to scold Mark Zuckerberg’s team for wearing Meta glasses to their social media trial, or that some people are talking about using millions or even billions of LLM tokens a day without mentioning energy, or electricity once, or that WD and Seagate confirmed that hard drives for 2026 are sold out, because hyperscalers are outspending the rest of the world. Nicholas Carr might be right, perhaps AI is the paperclip:
Bostrom’s story (of the paperclip maximizer), I would argue, becomes compelling when viewed not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?
How will OpenAI compete?
Some great insights from Benedict Evans on the weird dynamics of frontier AI companies. They have to invest a long way ahead of their predicted demand, in power plants and datacentres and chips. They need to balance what they allocate to training their new models with what they allocate to inference and serving customers. There's no network effect in the consumer products. And to the average user, the products are not differentiated.
On OpenAI being in a pickle:
So: you don’t know how you can make your core technology better than anyone else’s. You have a big user base but one that has limited engagement and seems really fragile. The key incumbents have more or less matched your technology and are leveraging their product and distribution advantages to come after the market. And, it looks like a lot of the value and leverage will come from new experiences that haven’t been invented yet, and you can’t invent all of those yourself. What do you do? For a lot of last year, it felt like OpenAI's answer was “everything, all at once, yesterday”. An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I've forgotten! And, of course, trillions of dollars of capex announcements, or at least capex aspirations.
There are no easy answers, but the analysis is well worth reading.
We Trained an AI on a Board Game. It Became a Better Customer Support Agent
The title says it all. Obvious in retrospect: reinforcement learning transfers across similar or analogous domains. Lovely that it actually works!
This transfer of skills from games to other domains works for AI, too—and we can measure it. (The game) Diplomacy trains context-tracking, shifting priorities, and strategic communication. Customer support, where information is often incomplete and requests shift, needs the same capabilities.
Jargon Watch
- Agents as exoskeletons with humans inside, augmenting human abilities (via Iskander Smit)