But some people still talk about the risk of AI being too smart: tricking us, simply not delivering what we expect, or using any power it is given to manipulate us in ways more sophisticated than anything we could anticipate.
Do all of the people who have these fears buy into that silly bootstrapping theory of AI advancement? (This is the idea that once an AGI becomes "smart" enough to redesign itself, it will spiral off to become... well, a god, basically.)
2/