Where is Accountability When Governments Deploy AI?
Governments are now deploying AI systems that do not merely assist decision-making but structure how public authority is exercised. The most persistent myth in contemporary AI governance is that accountability can be preserved by keeping a human “in the loop.” The reassuring assumption is that human review anchors responsibility. In practice, however, that human often appears only at the end of a decision chain, reviewing outputs shaped by upstream objectives, constraints, and optimization logic.
If a person reviews outputs, approves recommendations, or retains override authority, responsibility is presumed to remain human. That presumption made sense when automated systems supported discrete, bounded decisions. But contemporary AI systems learn from data, update their internal models, and optimize across multiple variables without explicit human instruction at each step. Agentic systems go further, executing ongoing processes and setting their own intermediate objectives. Such systems shape how judgment is exercised rather than merely informing individual choices. In this environment, the assumption that downstream review can sustain meaningful public accountability is increasingly untenable.