Artificial Intelligence Is Not a Race
It Is a Struggle Over Compute, Responsibility, and Democratic Sovereignty
Author’s note. This essay is the first of three reflections born from the notes, interviews, questions, and conversations that emerged during the study tour “Emerging Tech & the Future of Geopolitics,” organized by the Friedrich Naumann Foundation for Freedom from February 28 to March 7, 2026, with visits and discussions in Washington, D.C. and Texas. The second essay will focus on blockchain and Bitcoin. The third will examine the increasingly strategic relationship between energy, artificial intelligence—especially data centers—and blockchain.
I must ask the reader’s indulgence for the length of this essay—and of the two that will follow. The subject is not merely technical. It is political, moral, institutional, and civilizational all at once. Compression, in a field like this, too easily becomes distortion. These essays are long because the conversations that inspired them were rich, layered, and too important to flatten into slogans.
We have fallen into the habit of speaking about artificial intelligence as if it were a race. The phrase sounds dynamic, strategic, even patriotic. It suggests urgency. But it also distorts. A race assumes a single track, one finish line, and a clear winner. None of those conditions applies here. Artificial intelligence is not a sprint toward one endpoint. It is a reorganization of power: power over compute, over standards, over infrastructure, over labor markets, over public truth, and ultimately over the state’s capacity to govern technologies it did not invent and does not fully control.
The first intellectual error in the public debate is to talk about “AI” as if it were one thing. It is not. What we call AI includes general-purpose generative models, automated decision systems in credit or hiring, predictive systems, computer vision and biometric tools, and increasingly agentic systems that do not merely answer questions but can plan, call tools, take actions, and pursue sub-goals over time. Add to that the material layer beneath them—advanced chips, cloud infrastructure, data centers, water, and energy—and the object of governance becomes more complex still. Stanford’s 2025 AI Index reports that nearly 90% of notable AI models in 2024 came from industry, and that training compute for notable models continues to double roughly every five months. In other words, this is not simply an argument about ideas; it is an argument about industrial concentration and strategic bottlenecks.
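To make the scale of that compute growth concrete, a quick back-of-envelope calculation (a sketch, assuming only the doubling figure cited above) shows what a five-month doubling period implies annually:

```python
# Back-of-envelope check of the AI Index doubling figure cited above.
# If training compute doubles every ~5 months, the implied annual growth
# factor is 2^(12/5). This is arithmetic on the cited number, not a forecast.
doubling_period_months = 5
annual_multiplier = 2 ** (12 / doubling_period_months)
print(f"Implied growth: ~{annual_multiplier:.1f}x per year")  # ~5.3x per year
```

A fivefold-plus annual increase in the key input is precisely what makes compute a strategic bottleneck rather than a mere engineering detail.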
One of the most useful distinctions I heard during the study tour was also one of the least appreciated in public debate: the distinction between upstream and downstream regulation. Upstream regulation attempts to shape the design and training of a model so that it does not generate harm in the first place. Downstream regulation is concerned with consequences: once harm occurs, who is accountable, how is damage remedied, what obligations are triggered, and what safeguards must follow. This is not a semantic nuance. It is the difference between trying to eliminate all risk at the point of creation and building institutions capable of managing harm in the real world. In practice, upstream regulation faces severe limits, not least because regulators rarely have meaningful access to the datasets, model choices, or internal design decisions that firms treat as commercially strategic. NIST’s Generative AI Profile reflects this reality by organizing its guidance around governance, pre-deployment testing, content provenance, and incident disclosure rather than pretending that all risk can be neutralized at the source.
This is where the geopolitical argument enters. One of the most influential positions in AI policy today is that regulation must be minimal because overregulation would slow innovation and allow China to “win the AI race.” That logic has proven especially attractive to those who think in zero-sum strategic terms. It also happens to align neatly with the interests of large technology firms that prefer a lighter regulatory environment. That alignment does not require a conspiracy to be politically consequential. It is enough that national-security hawks and dominant tech actors often find themselves wanting the same thing: fewer constraints, faster deployment, and greater tolerance for uncertainty. Public Citizen found that more than 3,500 federal lobbyists—about one quarter of all federal lobbyists—worked on AI issues in 2025. The pressure to shape the rulebook is no longer hidden at the margins; it is now one of the central facts of AI governance.
At the international level, the landscape remains thin. There are now many declarations, principles, and normative statements on acceptable and unacceptable uses of AI, but most remain voluntary. The major exception is the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, adopted in May 2024 and opened for signature in September 2024—the first legally binding international treaty on AI. That matters symbolically and legally, but it does not yet amount to a truly coercive global regime. In other words, international AI governance still relies far more on signaling than on enforcement.
At the national level, three broad models remain especially important. The European Union has moved furthest toward a comprehensive framework. The AI Act entered into force in August 2024, applies its first prohibitions and literacy obligations from February 2025, and applies obligations for general-purpose AI models from August 2025, with the General-Purpose AI Code of Practice published in July 2025 to help operationalize those duties. The law’s core contribution is not merely that it regulates AI; it is that it distinguishes among kinds of risk. A medical device and a chatbot are not treated as if they were the same object. That is a genuine conceptual advance. At the same time, the European debate has shifted from legislative triumph to implementation friction: scope, codes of practice, compliance sequencing, and the practical meaning of systemic-risk obligations.
The United States still presents the opposite picture: enormous technical dynamism combined with a fragmented regulatory architecture. One recurring observation I heard during the tour was that American rhetoric often sounds more deregulatory than its actual policy instruments. That is an important nuance. The country has not produced a single comprehensive federal AI law, and in the vacuum, the states have moved aggressively. According to the National Conference of State Legislatures, all 50 states introduced AI-related legislation in 2025, and 38 states enacted or adopted around 100 measures. That is enough to create what many now describe as a patchwork regime—one that creates complexity for firms, but also reflects the reality that governance does not stop simply because Congress has not acted.
China is different again. It is not well described as simply “between” Europe and the United States. It has built a more targeted but legally binding model, especially around recommendation algorithms, online content, and generative AI services. China’s interim measures for generative AI took effect in August 2023, and official reporting stated that by March 31, 2025, 346 generative AI services had been filed with the Cyberspace Administration of China. This is not laissez-faire. It is a sectoral, administrative, and politically integrated regime—less comprehensive in one sense than the EU model, but in some respects more direct in its filing, labeling, and compliance logic.
Still, the most underdeveloped part of the governance debate is not Europe, America, or China. It is the problem of agents. Too many people still speak about agentic systems as if they were merely chatbots with extra ambition. They are not. They are systems that can plan, invoke tools, move across interfaces, trigger actions, and generate intermediate decisions that may never be legible to the user in real time. The UK AI Security Institute reports that frontier systems now succeed at apprentice-level cyber tasks roughly half of the time, and that the length of tasks they can complete without assistance continues to grow rapidly. A growing governance literature argues that the central challenge is no longer only model output, but how to govern systems that act across chains of tools and permissions faster than ordinary legal intuitions were built to handle.
That is why the old question—“who is responsible?”—becomes more difficult, not less. When a base model is adapted by one actor, integrated by another, connected to tools by a third, and deployed in a high-stakes setting by a fourth, the simple idea of one liable developer no longer fits. Direct and unrestricted liability on the original developer is not necessarily the right answer, because the harm often emerges from a distributed system rather than from one isolated act of design. The right response, in my view, is to create a distinct governance category for systems of agentic action—that is, systems able to alter rights, assets, access, code, records, or safety-relevant states in external systems without human approval at each step. Such systems should be subject to persistent identity requirements, robust logging, bounded permissions, interruptibility, and a default matrix of responsibility that does not allow legal accountability to evaporate into technical complexity. NIST’s own guidance points in this direction by stressing content provenance, pre-deployment testing, and incident disclosure, but it does not yet offer a complete doctrine for agentic responsibility.
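To make those requirements concrete, here is a minimal Python sketch of what persistent identity, bounded permissions, robust logging, and interruptibility could look like at the level of a single agent session. Every name in it is hypothetical; it illustrates the governance category proposed above, not any existing standard or library.

```python
import logging
from dataclasses import dataclass, field

# Hypothetical sketch only: the class and field names are invented to
# illustrate the four properties named in the text, not a real framework.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

@dataclass
class AgentSession:
    agent_id: str                       # persistent identity
    allowed_actions: set                # bounded permissions
    halted: bool = False                # interruptibility flag
    audit_trail: list = field(default_factory=list)  # robust logging

    def request(self, action: str, target: str) -> bool:
        """Gate every externally visible action and record the decision."""
        if self.halted:
            decision = "DENIED (session interrupted)"
        elif action not in self.allowed_actions:
            decision = "DENIED (outside permission matrix)"
        else:
            decision = "ALLOWED"
        entry = f"{self.agent_id}: {action} -> {target}: {decision}"
        self.audit_trail.append(entry)  # accountability does not evaporate
        log.info(entry)
        return decision == "ALLOWED"

    def interrupt(self) -> None:
        """A human supervisor can halt the agent at any step."""
        self.halted = True

# Usage: permissions are explicit, every request is logged, and a halt
# takes effect before the next action.
session = AgentSession(agent_id="agent-001", allowed_actions={"read_record"})
session.request("read_record", "case-42")   # ALLOWED, logged
session.request("alter_record", "case-42")  # DENIED (outside permission matrix)
session.interrupt()
session.request("read_record", "case-42")   # DENIED (session interrupted)
```

The point of the sketch is institutional, not technical: if every action must pass through an identifiable, logged, revocable gate, then a default matrix of responsibility has something to attach to.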
This brings us back to compute. The AI Act uses a 10^25 FLOP threshold as a presumption for systemic-risk GPAI models, and the Commission has made clear that the threshold is under review and can be rebutted or supplemented by other criteria. That is important, because compute should not be fetishized. It is useful as a trigger for scrutiny, not as a complete theory of risk. A well-governed regime should treat compute as an early-warning signal, then combine it with evidence about capabilities, access to tools, adoption scale, and downstream incidents. The goal is not to worship the threshold, but to use it as one instrument among several in a wider supervisory architecture.
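A minimal sketch of that supervisory logic, assuming invented thresholds and weights (only the 10^25 FLOP presumption comes from the Act itself), might look like this:

```python
# Illustrative triage only: the 1e25 FLOP presumption is the AI Act's;
# every other threshold and category here is invented for this sketch.
SYSTEMIC_RISK_FLOP_PRESUMPTION = 1e25

def triage(train_flops: float, has_tool_access: bool,
           monthly_users: int, incident_count: int) -> str:
    """Treat compute as an early-warning trigger, corroborated by other evidence."""
    compute_flag = train_flops >= SYSTEMIC_RISK_FLOP_PRESUMPTION
    corroboration = sum([
        has_tool_access,             # capability to act through external tools
        monthly_users > 10_000_000,  # adoption scale (invented cut-off)
        incident_count > 0,          # downstream incidents on record
    ])
    if compute_flag and corroboration >= 2:
        return "full systemic-risk review"
    if compute_flag or corroboration >= 2:
        return "enhanced scrutiny"
    return "baseline obligations"

# A sub-threshold model with wide tool access, mass adoption, and a
# recorded incident still attracts attention:
print(triage(5e24, True, 20_000_000, 1))  # -> enhanced scrutiny
```

The design choice matters: in this logic, crossing the FLOP line opens a review rather than settling one, and evidence from deployment can escalate a model the raw compute number would miss.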
Open-weight and open-source models require similar nuance. The political temptation is to divide the world into heroic openness and sinister closure. Reality is harder. The AI Security Institute notes that open models are substantially harder to safeguard because their defenses can often be removed, and that the capability gap between open and closed models has narrowed significantly. The right question is not whether openness is good in the abstract. It is what obligations should attach to a release once a model is materially capable, broadly distributed, and easy to transform into a high-risk or agentic system. A university researcher fine-tuning a model for a narrowly bounded, public-interest use should not be regulated as though they were a hyperscale platform. But a firm releasing globally distributed, highly capable weights should not be able to evade serious duties merely by invoking the moral halo of openness.
The deeper economic question is even more consequential. Technological disruption is often easier to identify in direction than in tempo. Crises accelerate other crises. Strategic forecasts may see the trend but miss the speed. That is exactly the right warning for AI. We are not simply adding another tool to the post-industrial economy; we are entering a phase in which cognitive and administrative labor itself becomes more systematically automatable. The historical analogies invoked in many of the interviews and lectures during the study tour—the steam engine, the printing press, electricity—matter not because AI is identical to those technologies, but because each forced societies to invent new institutions after the old settlement no longer fit.
Here Karl Polanyi becomes newly relevant. His great insight was that social order fractures when economic transformations outrun the institutions that embed and constrain markets. The transition from agrarian society to industrial production produced one kind of crisis; the long move from production to services produced another; and the automation of service-sector and knowledge work may now be generating a third. That does not mean AI inevitably causes populism. It means that if productivity, capital investment, and wealth creation continue to decouple from broad-based employment, social cohesion will depend less on rhetoric about innovation and more on whether democracies can build a new fiscal and distributive settlement adequate to a new productive order.
That is also why labor statistics must be handled carefully. The ILO–NASK global index states that one in four jobs worldwide is potentially exposed to generative AI, but it explicitly emphasizes that exposure is not the same thing as actual job loss and that transformation of tasks is more likely than elimination of whole occupations. By contrast, the “GPTs are GPTs” literature asks a different question and estimates that around 80% of the U.S. workforce may have at least 10% of tasks exposed, with about 19% having at least 50% of tasks affected. These are not interchangeable metrics. They describe different units of analysis. Meanwhile, early empirical labor evidence remains mixed: NBER work from Denmark found broad adoption of chatbots but small or null wage and hours effects in the short run, while earlier work on customer support found large productivity gains, especially for less experienced workers. The sober conclusion is not that AI will destroy work, nor that nothing important is happening. It is that we are in an uneven transition whose institutional consequences are likely to be larger than its first wage data suggest.
The problem of public truth may be even more serious. The public conversation still often frames this as “misinformation,” but the more precise danger is scalable conversational persuasion combined with authenticity collapse. A study in Nature Human Behaviour found that GPT-4, when given basic sociodemographic information, was more persuasive than human interlocutors in 64.4% of non-tied debate cases. At the same time, a growing literature on sycophancy shows that language models often bend toward what users want to hear, even when doing so undermines correctness. NIST’s emphasis on provenance is therefore not cosmetic. In a world where images, audio, documents, and dialogue can all be synthetically generated at scale, democracies will need stronger authentication norms, especially in elections, courts, finance, and public administration. The challenge is no longer simply falsehood; it is the erosion of shared criteria for verification.
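At its technical core, provenance is a signing problem: bind content to a verifiable record of origin. The sketch below is a deliberately naive illustration using a shared secret; real provenance standards such as C2PA use asymmetric signatures, certificate chains, and far richer manifests. It shows only the bare mechanics:

```python
import hashlib, hmac, json, time

# Naive illustration only: a real deployment would use asymmetric keys,
# hardware-backed key storage, and a standardized manifest format.
SIGNING_KEY = b"publisher-signing-key"

def attach_provenance(content: bytes, publisher: str) -> dict:
    """Produce a signed manifest binding the content hash to its origin."""
    manifest = {
        "publisher": publisher,
        "sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content is unmodified."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and hashlib.sha256(content).hexdigest() == unsigned["sha256"])

article = b"Official election notice ..."
manifest = attach_provenance(article, "electoral-commission.example")
print(verify_provenance(article, manifest))         # True
print(verify_provenance(article + b"!", manifest))  # False: content altered
```

Even this toy version captures the institutional shift: the question moves from “does this look authentic?” to “does this verify?”, which is exactly the criterion elections, courts, and public administration will need.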
The Global South cannot remain an afterthought in this debate. AI geopolitics is not exhausted by the triangle of Washington, Brussels, and Beijing. Two issues stand out. The first is data sovereignty: when data centers, cloud infrastructure, or full AI stacks are exported into middle-income and poorer countries, who owns the resulting data—the user, the host country, the cloud operator, or the foreign company running the stack? The second is more basic still: many societies lack the digital infrastructure, data ecosystems, and local-language resources required to make AI genuinely useful on their own terms. In that sense, the problem is not merely exclusion from the frontier; it is exclusion from the prerequisites of meaningful participation.
That is why sovereignty in AI should not be reduced to the fantasy that every country must build a frontier model of its own. A more realistic and democratic understanding of sovereignty would include four capabilities: the ability to audit foreign systems; meaningful access to shared compute; the development of language and data infrastructure in local contexts; and sustained presence in standards-setting forums. The African Union’s 2024 Continental AI Strategy explicitly advances an Africa-centric, development-focused approach. India’s AI Compute Portal now reports more than 38,000 GPUs and 1,050 TPUs at subsidized rates. And ISO/IEC JTC 1/SC 42, one of the key standards forums in this field, reports 48 published standards, 54 under development, and just 26 participating members. The lesson is plain: countries absent from standards bodies do not escape governance; they merely inherit governance written by others.
For that reason, the program for middle-income democracies should be sequenced rather than romanticized. First, fix public procurement and build state audit capacity; governments should stop buying opaque systems on blind trust. Second, invest in shared compute rather than prestige nationalism. Third, fund local-language corpora, evaluation tools, and public-interest datasets. Fourth, create regional clubs for technical diplomacy, procurement, and standards participation. And beyond these domestic priorities, the Global South needs institutional voice: not symbolic consultation, but funded, permanent technical representation in standards bodies and multilateral processes. A regional representation fund and pooled procurement clubs would be more useful than many conferences on “inclusion.” Power in technology governance belongs to those who arrive with expertise, budget, and the capacity to say no.
This is why artificial intelligence is not a race. A race flatters speed. What matters here is not speed alone but architecture: who controls compute, who bears liability, who writes standards, who captures rents, who authenticates truth, who governs agents, and who is forced to live under rules designed elsewhere. The countries that navigate this transition best will not necessarily be those that train the biggest model first. They will be the ones that learn to govern technological power before it hardens into unaccountable structure.
This is the first of three essays because one essay cannot bear the full weight of this subject. The next will turn to blockchain and Bitcoin. The third will examine the strategic nexus among AI, energy, data centers, and blockchain. Together, they are not separate stories. They are parts of the same emerging political economy.