From Predictive Models to Public Value: AI Theory in Action
Written by NCAC Board Member Ryan Heimer
As May ushers in a season of renewal, marked by Public Service Recognition Week and Memorial Day, it offers a moment to reflect not only on the enduring mission of public service, but also on the forces reshaping it. Among these, artificial intelligence (AI) stands out as more than a technological advancement; it represents a structural shift in how governments operate, make decisions, and deliver value. To understand AI’s implications, we must examine a deeper question: what is intelligence, how has it evolved, and what responsibilities does it now place on public institutions?
In A Brief History of Intelligence, Max Bennett presents intelligence as an adaptive process rooted in survival. He traces this evolution from simple organisms, such as bacteria responding to chemical gradients, to increasingly complex nervous systems capable of learning and prediction. One key example is reinforcement learning in animals, where behaviors are strengthened or weakened based on outcomes. This biological principle mirrors modern AI systems, particularly those used in predictive analytics and optimization: just as a rat learns to navigate a maze through reward signals, AI models learn to optimize outcomes through data feedback loops.
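This feedback-driven learning can be made concrete with a minimal sketch. The example below is illustrative only (it is not from the article or from Bennett's book): a simple epsilon-greedy "bandit" learner that, like the rat in the maze, starts with no knowledge of which of three hypothetical actions pays off best and converges on the right answer purely through reward feedback.

```python
import random

def run_bandit(true_rewards, steps=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: learns each action's value from noisy reward feedback."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # current value estimate per action
    counts = [0] * len(true_rewards)       # times each action was tried
    for _ in range(steps):
        # Mostly exploit the best-known action; occasionally explore at random.
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rewards))
        else:
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # The environment returns a noisy reward signal for the chosen action.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        # Incrementally update the running average for that action.
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# Three hypothetical actions with unknown average payoffs of 0.2, 0.5, and 0.8.
values = run_bandit([0.2, 0.5, 0.8])
best = values.index(max(values))  # the learner identifies action 2 as best
```

The learner never sees the true payoffs directly; it recovers them from repeated trial and feedback, which is the same loop, at vastly greater scale, that underlies modern predictive systems.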
For public administration, this insight is more than theoretical. Government systems operate under similar principles. Policies act as “stimuli,” and public responses serve as feedback. Consider regulatory enforcement within agencies like the Mine Safety and Health Administration (MSHA): inspections, citations, and compliance assistance function as feedback mechanisms that shape behavior in high-risk environments. If penalties are too weak, unsafe practices persist; if overly punitive, they may encourage concealment rather than compliance. Like biological systems, effective governance depends on calibrating feedback to produce desired outcomes.
This behavioral dynamic aligns closely with the work of Daniel Kahneman, whose distinction between intuitive (“System 1”) and analytical (“System 2”) thinking highlights the limits of purely rational policymaking. For example, safety compliance in mining is not driven solely by written regulations but also by habits, heuristics, and cultural norms underground. AI systems, particularly those using machine learning, now replicate these patterns by identifying correlations in behavior and predicting outcomes—often faster and at greater scale than human analysts.
Bennett’s concept of layered intelligence further enhances this understanding. He describes the brain as a hierarchical system in which older, reactive structures coexist with newer, deliberative ones. This layering is evident in government as well. At the operational level, agencies respond to immediate demands—emergency response, inspections, and frontline service delivery. At the institutional level, they enforce rules and ensure accountability through regulatory frameworks. At the strategic level, they analyze data, develop policy, and plan for the future.
A clear example of this layered governance can be seen in public health responses during crises. During the COVID-19 pandemic, local governments combined real-time operational decisions (e.g., hospital capacity management), institutional rules (e.g., mask mandates), and strategic modeling (e.g., infection projections). AI enhanced this process by providing predictive analytics, helping leaders anticipate case surges and allocate resources more effectively. The lesson is clear: AI does not replace governance layers; it strengthens their integration.
However, the promise of AI is not evenly distributed. As emphasized by Brenna Isman of the National Academy of Public Administration, the most significant impacts of AI will occur not in national capitals, but on “Main Street.” For example, municipalities are already using AI to improve service delivery—chatbots handling citizen inquiries, predictive maintenance systems identifying infrastructure failures, and automated permitting processes reducing administrative delays. In Kansas City, AI has been used to streamline loan processing, expanding access to capital for small businesses. Meanwhile, in California, AI-driven automation has improved recycling operations, increasing efficiency while reducing costs.
Yet these benefits require foundational investments. Communities lacking broadband access or technical expertise cannot effectively adopt AI. This challenge is particularly relevant in rural and post-industrial regions such as Appalachia. Here, the insights from Jump-Starting America become critical. Gruber and Johnson argue that innovation in the United States has become concentrated in a few metropolitan hubs, leaving many regions behind. They propose establishing new “growth centers” anchored by research institutions, federal investment, and private-sector partnerships.
Applied to AI, this suggests that federal and state governments should actively invest in regional AI ecosystems by supporting universities, workforce training programs, and local innovation hubs. For example, a partnership between a land-grant university and local government could create AI training pipelines for public sector employees, enabling smaller communities to leverage technology without relying entirely on external vendors. This approach not only promotes equity but also strengthens national competitiveness.
At the same time, AI cannot be separated from the physical infrastructure that enables it. As Chris Miller details in Chip War, semiconductors are the backbone of modern computing. The global competition for chip production, particularly between the United States and China, illustrates how technological capability is tied to geopolitical power. For instance, Taiwan’s dominance in advanced chip manufacturing has made it a focal point of international strategy. Disruptions in this supply chain could significantly impact AI deployment across sectors, including government.
For public administrators, this underscores the importance of aligning AI strategy with industrial policy. Investments such as the CHIPS and Science Act represent efforts to rebuild domestic semiconductor capacity, ensuring that critical technologies remain accessible and secure. Without such investments, even the most advanced AI strategies could be constrained by external dependencies.
While Chip War highlights structural dependencies, Recoding America exposes internal barriers within government itself. Jennifer Pahlka provides numerous examples of how overly complex systems hinder effective service delivery. One notable case is the rollout of HealthCare.gov, where technical failures were exacerbated by fragmented authority and rigid procurement processes. The issue was not a lack of technical expertise, but a system that prevented effective coordination and problem-solving.
This lesson is directly applicable to AI adoption. Without institutional reform, AI risks becoming another layer of complexity rather than a solution. For example, if an agency deploys an AI tool for case processing but retains outdated approval workflows, the overall system may remain inefficient. Successful implementation requires rethinking processes, empowering frontline workers, and aligning policy design with operational realities.
These challenges are not new. As explored in Accessory to War, technological advancement has long been intertwined with national priorities. Tyson and Lang demonstrate how innovations, from celestial navigation to satellite systems, were often driven by military and strategic needs. For example, the development of accurate star charts enabled naval dominance, while Cold War investments in space technology led to advancements that now underpin modern GPS systems.
The implication for AI is clear: technological progress is rarely neutral. It reflects the priorities and values of the societies that invest in it. Today, AI development is shaped by both economic competition and national security concerns. Public administrators must therefore ensure that AI is guided not only by efficiency, but by democratic values.
This perspective aligns with the concept of The Technological Republic, which calls for aligning technological innovation with public purpose through coordinated national effort. In this framework, AI becomes a national project—similar to the interstate highway system or the Apollo program. Such projects require long-term investment, cross-sector collaboration, and a clear commitment to public outcomes.
Importantly, this national project must incorporate place-based strategies, as emphasized in Jump-Starting America. It must also address infrastructure dependencies highlighted in Chip War and institutional barriers identified in Recoding America. Without integrating these elements, AI adoption risks being fragmented, inequitable, and ineffective.
Ethical considerations further reinforce the need for a coordinated approach. Public trust is the foundation of governance, and AI must strengthen that trust. This includes addressing algorithmic bias—for example, ensuring that predictive policing models do not disproportionately target certain communities—and promoting transparency so that decisions can be understood and challenged. Accountability mechanisms must also be established to ensure that AI systems operate within legal and ethical boundaries.
As explored in Ray Kurzweil’s The Singularity Is Near and The Singularity Is Nearer, the pace of technological change is accelerating. While these works often focus on long-term possibilities, their relevance to public administration is immediate. Governments must operate in an environment where innovation outpaces regulation, requiring adaptive governance frameworks capable of responding to rapid change.
As the United States approaches its 250th anniversary, the integration of AI into governance represents a defining moment. The nation’s founding principles (democracy, accountability, and service) must guide how these technologies are adopted. The question is not whether AI will transform government, but whether that transformation will advance the public good.
In conclusion, the evolution of intelligence—from simple biological systems to advanced artificial models—provides a powerful framework for understanding AI’s role in governance. Intelligence is not about perfection, but about the capacity to learn and adapt. Public administration must embrace this mindset, leveraging AI to enhance decision-making, strengthen feedback systems, and improve outcomes.
At the same time, it must recognize that technology is embedded within broader economic, institutional, and geopolitical systems. By integrating insights from A Brief History of Intelligence, Jump-Starting America, Chip War, Recoding America, and Accessory to War, public leaders can approach AI not as an isolated tool, but as part of a larger national project.
By treating AI as a shared public endeavor—grounded in equity, accountability, and strategic coordination—the United States can ensure that this transformative technology serves as a cornerstone of a modern technological republic, advancing opportunity, resilience, and public value for generations to come.