When, exactly, should we consider humanity to have properly “lost the game”, with respect to agentic AI systems?
The most common AI milestone concepts seem to be “artificial general intelligence”, followed closely by “superintelligence”. Sometimes people talk about “transformative AI”, “high-level machine intelligence”, or “full automation of the labor force”. None of these are well-suited for pointing specifically at the capabilities that would spell a “point of no return” for humanity. In fact, they’re all designed to be agnostic to exactly which capabilities will matter.
When working to predict and mitigate existential risks from AI agents, we should try to be as clear as possible about which capabilities we’re concerned about. To that end, I think we should focus on “strategically superhuman AI agents”: AI agents that are better than the best groups of humans at real-world strategic action.
Skill at real-world strategic action is context-dependent, and isn’t a single capability any more than “intelligence” is a single capability: it refers to a broad space of situated skills. Among humans, these skills tend to be those possessed by world-class CEOs, military officers, and statesmen.
In the current strategic environment, real-world strategic capacity typically encompasses at least:
I claim that we will face existential risks from AI no sooner than the development of strategically human-level artificial agents, and that those risks are likely to follow soon after.
If we are going to build these agents without “losing the game”, either (a) they must have goals that are compatible with human interests, or (b) we must (with increasing accuracy) model and enforce limitations on their capabilities. If there’s ever a day when an AI agent is created that meets neither of these conditions, that’s the day I’d consider humanity to have lost. We might not be immediately wiped out by a nanobot swarm, but from that time forward humans will be more like pawns than players, and once our replacement actuators have been built, we’ll likely be left without the resources we need to survive.
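To make the structure of that claim explicit, here it is in ad-hoc shorthand (the predicate names are just my labels for the two conditions above, not established notation):

$$
\text{Lost}(t) \;\iff\; \exists A \text{ created at } t:\; \neg\big(\text{CompatibleGoals}(A) \,\vee\, \text{EnforcedLimits}(A)\big)
$$

where $A$ ranges over strategically superhuman AI agents, and humanity has lost once $\text{Lost}(t)$ holds for any day $t$.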
Here are some questions I think are interesting, along with my answers:
Is this concept still fuzzy? Yes; I’m interested in trying to make it crisper. I do think it gets closer to the heart of the problem than “AGI” or “superintelligence”, and that seems like an important step.
Maybe, depending on details that aren’t obvious to me.
Doesn’t “AGI” already imply this? Sure, a system that’s better-than-the-best-human in all domains is by definition better-than-the-best-human in real-world strategy. But I don’t think people have a consistent definition of AGI, and a system that’s better-than-the-best-human in all domains will also have a bunch of irrelevant capabilities that might actually be harder for AI systems to achieve than strategic capabilities.
Wouldn’t recursive self-improvement produce strategically superhuman agents? At least in principle, you could have recursive self-improvement that wasn’t able to, or wasn’t aiming to, achieve superhuman strategic capabilities. E.g., an extremely fast AI R&D iteration loop would have to do almost all of its learning about humans “off-policy” (i.e., without getting to interact with real-time humans during training), and (while I don’t think this is plausible) it seems possible that you can’t reach superhuman strategic ability this way within realistic resource constraints.
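To make the off-policy constraint concrete, here is a minimal Python sketch. It is purely illustrative: every name and number in it (logged_interactions, the 30-second human latency, the step counts) is an assumption of mine, not a claim about real systems. The point is just the arithmetic: a loop fast enough to matter can replay logged human reactions millions of times, but cannot wait on live humans at each step.

```python
# A minimal sketch (my illustration) of the "off-policy" constraint:
# a loop iterating far faster than humans respond can only learn about
# humans from a fixed log of past interactions.

import random

HUMAN_LATENCY_S = 30.0  # assumed wall-clock time for one live human reply

# Pre-recorded human reactions, gathered before the fast loop started.
logged_interactions = [
    {"situation": f"s{i}", "human_reaction": f"r{i}"} for i in range(10_000)
]

def off_policy_step(model):
    # Off-policy: reuse the static log. Cheap, but reveals nothing about
    # how humans react to the *current* model's novel behavior.
    _example = random.choice(logged_interactions)
    model["updates"] += 1
    return model

def on_policy_step(model, ask_live_human):
    # On-policy: query a live human about the current model. Informative,
    # but every step is gated on real-time human latency.
    # (Shown for contrast; never called below.)
    _reaction = ask_live_human("react to my current behavior")
    model["updates"] += 1
    return model

model = {"updates": 0}
n_steps = 1_000_000
for _ in range(n_steps):
    model = off_policy_step(model)  # ~seconds of compute for a million steps

# The same number of on-policy steps would cost n_steps * HUMAN_LATENCY_S
# seconds of human time -- on the order of a year of wall clock.
years = n_steps * HUMAN_LATENCY_S / (86_400 * 365)
print(f"off-policy steps taken: {model['updates']}")
print(f"on-policy equivalent:   ~{years:.1f} years of human-latency wall clock")
```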
Isn’t this the threshold people already have in mind? I don’t think so: people do not in fact seem to be orienting to this type of threshold, even though it seems far more important than the thresholds they are orienting to.
Thanks to Gretta Duleba, Alex Vermeer, Joe Rogero, David Abecassis, and Mitchell Howe for looking over a draft of this post.