People are building nonhuman things that look on track to surpass all humans in all economically relevant intellectual capabilities. They will also be cheaper to deploy than humans (who cost about 0.5 kg of oxygen and 2000 kcal per day, at least some of it as protein and fats, plus a laundry list of micronutrients, and who need to be kept within some pretty specific conditions in order to be productive; all told, on the order of $1-100 per day).
If we stay on that track, humans will no longer be programmers or accountants. Humans will no longer be engineers or managers or even CEOs. Humans might still be lawmakers and laborers and even capitalists for a while. But AI systems will be the strategists of the world, and they will ultimately decide where resources are spent. Humans trying to compete with AIs for resources will lose.
Nobody knows exactly what these superhuman systems will be like. Nobody really knows exactly what the systems we already have are like, either.
We might get lucky: Maybe the alien minds will be better people than us.
It feels to me like we probably won’t get lucky. We’ll do “our best” to get them to be better than us, and it probably won’t be enough, because we don’t have (and don’t know how to build, and aren’t even really trying to build) the right kind of feedback loop. We can tweak and coerce the massive matrices into extreme levels of capability: you can just check how well the AI did at writing that code, making that trade, or winning that game. But if it’s smart enough, and situationally aware enough, you can’t check why it wanted to do that.
If AIs are merely as good as humans, morally, that’s probably not sufficient. Humans tend to treat our friends and family well, and we sure know how to be nice to potential trading partners, but we also build massive unfeeling bureaucracies that enslave or murder other humans, and inhumanely raise and slaughter animals for us to eat. If you’re not strategically useful to powerful groups of humans, you should probably fear them.
I think we should probably stop trying to build strategically superhuman AI, if we can.