According to the HHS AI use case inventory, almost 10% of the CDC's roughly 100 artificial intelligence deployments are already agentic: tools that autonomously carry out specific tasks like literature reviews and data synthesis. The CDC's acting chief AI officer, Travis Hoppe, describes these as moving beyond email summarization into territory where the system actively informs decision-making. At the same time, state and local governments serving populations above 50,000 face an April 24, 2026 deadline to bring their websites into compliance with WCAG 2.1 Level AA under Title II of the ADA. The White House has released a seven-pillar regulatory framework for AI that covers everything from intellectual property to energy costs; what it does not cover is the intersection of these two realities. We have autonomous systems producing content for the public and a legal mandate requiring that content be accessible. Nobody is talking about what happens when those two obligations collide.

Agentic output is not exempt from accessibility

When we talk about agentic AI in a government context, we are talking about systems that generate summaries, compile research, and surface recommendations without a human touching the output before it reaches a user. The CDC's deep research tool, for example, condenses what would be a three-to-four-hour literature review into an automated deliverable. That is genuinely useful. It is also a new category of accessibility risk.

In A Project Guide to UX Design, we write about Microsoft's Persona Spectrum as a framework for understanding how constraints vary across permanent, temporary, and situational contexts. A user who is permanently blind relies on screen readers to parse page structure. A user with a broken wrist temporarily cannot use a mouse. A user holding a child in one arm is situationally limited to one-handed interaction. All three need the same structural markup to consume AI-generated content; none of them care whether a human or an agent produced it.

If an agentic system generates a health summary without semantic HTML structure, heading hierarchy, or alt text for embedded visuals, that output fails the same WCAG criteria the agency's website is legally required to meet. The system looks functional to administrators reviewing it on a desktop monitor. It is completely opaque to the user navigating it with a screen reader. That gap does not shrink because the content was produced faster.
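Two of those failures are mechanical enough to detect the moment content is generated. The sketch below, using only Python's standard library, flags images missing alt text (WCAG SC 1.1.1) and skipped heading levels that break screen-reader navigation (SC 1.3.1). It is a minimal illustration of the category of check we mean, not a full WCAG audit; the `audit` function and its rules are our own construction, not any agency's tooling.

```python
from html.parser import HTMLParser

class AccessibilityAudit(HTMLParser):
    """Flags two basic structural WCAG failures in generated markup:
    images with no alt attribute (SC 1.1.1) and heading levels that
    skip, e.g. h1 followed by h3 (SC 1.3.1)."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.last_heading = 0  # 0 = no heading seen yet

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # alt="" is a legitimate signal for decorative images,
        # so we only flag images where the attribute is absent.
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            level = int(tag[1])
            if self.last_heading and level > self.last_heading + 1:
                self.issues.append(
                    f"heading jumps h{self.last_heading} to h{level}"
                )
            self.last_heading = level

def audit(html: str) -> list[str]:
    """Return a list of structural accessibility issues found in html."""
    parser = AccessibilityAudit()
    parser.feed(html)
    return parser.issues
```

A summary that looks fine on a desktop monitor can still fail both checks: `audit('<h1>Summary</h1><h3>Findings</h3><img src="chart.png">')` reports the skipped heading level and the missing alt text, which is exactly the gap a sighted administrator reviewing the output would never see.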

The deadline applies now, and it applies to outputs

The April 2026 deadline under Title II of the ADA targets state and local government websites specifically. The Department of Justice estimated compliance costs in the hundreds of millions of dollars nationwide; enforcement will likely be complaint-driven, with courts showing leniency toward good-faith efforts. But good faith gets harder to argue when the non-compliant content was generated by a system the agency chose to deploy. A legacy page built in 2014 with poor markup is a known debt. An agentic tool producing inaccessible output in 2026 is a design failure.

We have spent years in this field arguing that accessibility is not a retrofit. Chapter 3 of A Project Guide to UX Design frames ContentOps as the engine for quality control and regulatory adherence: fact-checking, editing, and compliance verification happen before content reaches the public, not after. Agentic AI inverts that model. The content is generated and delivered in one motion. If accessibility constraints are not built into the generation step, there is no second pass to catch it.

This is the part that concerns us most. The CDC's own guidance recommends using agentic tools when there is a clearly defined scope and an expert can validate responses. Validation implies review. But the speed advantage of agentic AI evaporates if every output requires a human accessibility audit before it ships. Agencies will be tempted to skip that step; the deadline pressure makes it almost certain. The result is a class of government content that was never reviewed for the people most likely to need it.

The regulatory gap is real

The White House framework addresses innovation, workforce readiness, creator protections, and data center energy costs. It does not address what happens when an autonomous system produces public-facing content that violates accessibility law. That omission is one no agency deploying these tools can afford to ignore.

State and local agencies adopting agentic tools need to treat WCAG compliance as a generation constraint, not a post-production filter. The output templates must enforce semantic structure. The summarization models must be tested against assistive technology. The validation layer the CDC recommends must include accessibility as a non-negotiable criterion, not an item on a checklist someone reviews quarterly.
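Treating compliance as a generation constraint means the check sits between the model and the public, and failing output never ships. A minimal sketch of that gate follows; `generate` and `validate` are hypothetical stand-ins for the agentic model and a WCAG checker, and the retry policy is an assumption of ours, not a documented CDC practice.

```python
def accessibility_gate(generate, validate, max_attempts=3):
    """Run the generator, but treat accessibility as a hard constraint:
    output that fails validation is regenerated, and if it still fails
    after max_attempts it is rejected rather than published."""
    problems = []
    for _ in range(max_attempts):
        draft = generate()
        problems = validate(draft)
        if not problems:
            return draft
    raise RuntimeError(f"output rejected by accessibility gate: {problems}")

# Toy stand-ins for illustration only; a real deployment would call
# the agentic model and a full WCAG validator here.
def toy_agent():
    return "<h1>Flu Summary</h1><h2>Key Findings</h2>"

def toy_validator(html):
    return [] if html.startswith("<h1>") else ["missing top-level heading"]
```

The point of the structure is where the check lives: `accessibility_gate(toy_agent, toy_validator)` returns the draft only because it passes, and a generator that never passes raises instead of shipping. The quarterly-checklist model has no equivalent failure mode; non-compliant content simply goes out.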

We wrote in Liftoff! that autonomy without alignment creates drift. The same principle applies here: agentic AI without accessibility constraints does not produce innovation. It produces exclusion at a speed and scale that manual processes never could. An agency that deploys a tool its own citizens cannot use has not modernized. It has automated the barrier.

Additional Reading