Reverse centaurs vs. AI; Delaying AI regulation; Chatbots choosing similar friends; Normalising deviance
The Reverse Centaur’s Guide to Criticizing AI
Cory Doctorow is a great author, and has been campaigning against restrictive copyright, digital rights management and related issues for a long time. More recently he coined the term "enshittification" to describe the gradual degradation of platforms as they prioritise profit over users. This transcript of a talk is a polemic in classic Doctorow style. A "centaur" in this context is a person assisted by a machine, and so "a reverse centaur is machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine".
It's worth reading in full, but to give you a flavour, here he is discussing the difficulty of spotting realistic hallucination errors:
And you, the human in the loop – the reverse centaur – you have to spot this subtle, hard to find error, this bug that is literally statistically indistinguishable from correct code. Now, maybe a senior coder could catch this, because they've been around the block a few times, and they know about this tripwire.
But guess who tech bosses want to preferentially fire and replace with AI? Senior coders. Those mouthy, entitled, extremely highly paid workers, who don't think of themselves as workers. Who see themselves as founders in waiting, peers of the company's top management. The kind of coder who'd lead a walkout over the company building drone-targeting systems for the Pentagon, which cost Google ten billion dollars in 2018.
For AI to be valuable, it has to replace high-wage workers, and those are precisely the experienced workers, with process knowledge, and hard-won intuition, who might spot some of those statistically camouflaged AI errors.
Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet
Dean Ball is an AI policy fellow at the Foundation for American Innovation and was one of the drafters of the Trump administration's American AI Action Plan. This interview on the 80,000 Hours podcast provides useful insight into the thinking informing the US government agenda. His main concern is that regulation now is premature, and may lock society into a worse version of the future by accidentally concentrating power. He's not concerned about a future malign AI per se, as he sees it more like a background force, causing a slow erosion of human agency:
The most successful ‘conquerors’ of the modern era are business enterprises, not countries.
AIs will emerge as a sort of meta character in world history — in the same way that today ‘the market’ is a meta character.
The most challenging aspect is his view that governments won't survive AI in their present form, and will become radically smaller.
Study: AI Chatbots Choose Friends Just Like Humans Do
This research from Marios Papachristou and colleagues looked at the formation of social networks resulting from interactions between AI agents given specific tasks. They found that, like people, the models exhibit preferential attachment (linking up with well-connected people), triadic closure (connecting with friends of friends) and homophily (connecting to others similar to oneself).
The team decided to compare AI’s decisions to humans directly, enlisting more than 200 participants and giving them the same task as the machines. Both had to pick which individuals to connect to in a network under two different contexts—forming friendships at college and making professional connections at work. They found both humans and AI prioritized connecting with people similar to them in the friendship setting and more popular people in the professional setting.
They look towards applications in modelling human networks, but the work also has relevance to the design of multi-agent AI systems.
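The three mechanisms the study reports can be illustrated with a toy simulation. This is a minimal sketch, not the authors' method: the function names, weights, and sampling scheme below are my own illustrative assumptions.

```python
import random

def pick_partner(agent, graph, traits, rng,
                 w_degree=1.0, w_triadic=1.0, w_homophily=1.0):
    """Score each candidate by the three mechanisms and sample proportionally.
    Weights are illustrative assumptions, not values from the paper."""
    friends = graph[agent]
    scores = {}
    for node in graph:
        if node == agent or node in friends:
            continue
        degree = len(graph[node])                    # preferential attachment
        mutual = len(friends & graph[node])          # triadic closure
        similar = traits[node] == traits[agent]      # homophily
        scores[node] = (1 + w_degree * degree + w_triadic * mutual
                        + w_homophily * similar)
    if not scores:  # agent already connected to everyone
        return None
    nodes = list(scores)
    return rng.choices(nodes, weights=[scores[n] for n in nodes], k=1)[0]

def simulate(n=30, steps=200, seed=0):
    """Grow an undirected network by repeatedly letting a random agent
    choose a new connection under the three biases."""
    rng = random.Random(seed)
    graph = {i: set() for i in range(n)}
    traits = {i: rng.choice("AB") for i in range(n)}  # one binary attribute
    for _ in range(steps):
        a = rng.randrange(n)
        b = pick_partner(a, graph, traits, rng)
        if b is not None:
            graph[a].add(b)
            graph[b].add(a)
    return graph, traits
```

Varying the weights shifts the resulting topology: raising `w_degree` produces hubs, while raising `w_homophily` tends to segregate the network by trait, mirroring the friendship-versus-professional contexts in the study.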
The Normalization of Deviance in AI
This article provides a helpful reminder: we know that LLM outputs can't be trusted, but the resulting security concerns are often ignored. Discussing the 1986 Space Shuttle Challenger disaster: "the absence of disaster was mistaken for the presence of safety".
The term was coined by Diane Vaughan; the key idea is that organizations that get away with “deviance” - ignoring safety protocols or otherwise relaxing their standards - will start baking that unsafe attitude into their culture. This can work fine… until it doesn’t. The Space Shuttle Challenger disaster has been partially blamed on this class of organizational failure.
The current approach is a hard sell of AI solutions combined with legal disclaimers to cover the risks. It may take a bigger disaster to shake things up.