Why we (don't) need export control
Introduction - Dario Amodei's perspective
Dario Amodei, formerly at OpenAI and now co-founder and CEO of Anthropic, recently published a short blog post titled On DeepSeek and Export Controls. The post opens with a section containing several interesting considerations on AI models, scaling laws and "shifting the curve", followed by a second section analyzing the two latest DeepSeek releases (DeepSeek-V3 and R1). While these first two sections, apart from some considerations that may sound a little too subjective, are not far from what we have been hearing about DeepSeek in recent days from the most critical fringes of experts, the third section, in which Amodei dives deep into the reasons why the US should enforce export controls on chips against China, is not only controversial but also misses some important points related to DeepSeek and, more generally, to the whole open-source ecosystem.
In this brief post, I would like to address some of the most important points in the line of argument of Anthropic's CEO, along with my thoughts and some considerations that I deem important to the matter.
Theses and Antitheses
In this section I will proceed as follows: I will take a claim made by Amodei in his post, verbatim, and report my point of view on it. Each thesis-antithesis pair is separated by a line.
- "To the extent that US labs haven't already discovered them, the efficiency innovations DeepSeek developed will soon be applied by both US and Chinese labs to train multi-billion dollar models."
This is the main point that Anthropic's CEO misses in his post: DeepSeek's innovations are open and reproducible, because the model is open source and accompanied by a technical paper that details the techniques the DeepSeek team used to optimize training and make it more efficient. Amodei understates the power of this information, and the impact it may have on the scientific community, when he says "to the extent that US labs haven't already discovered them"; the effect of DeepSeek's paper is already visible: companies such as Hugging Face have started to openly reproduce DeepSeek R1, and others have started building large synthetic datasets based on DeepSeek's reasoning traces, such as OpenThoughts-114k
by OpenThoughts or Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B by Magpie-Align. Furthermore, simply searching "deepseek" on the Hugging Face Models Hub returns more than 3000 entries, and almost 300 if we type the same query in the Datasets Hub. The impact is evident, and it does not only affect "US labs": it is pervasive, extending to individual developers and other labs around the world, giving them access to a powerful and reproducible technology and accelerating the democratization of AI. The same cannot be said of Claude (Anthropic's flagship model), nor of most models from OpenAI: they are kept behind the curtains of closed source, and are therefore not fully reproducible.
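To make the point about accessibility more concrete, here is a minimal Python sketch, using the `huggingface_hub` client, that runs the same "deepseek" search programmatically; the exact counts will naturally drift over time, and the OpenThoughts repository id in the final comment is assumed from the dataset name cited above.

```python
# Minimal sketch: querying the Hugging Face Hub for "deepseek" artifacts.
# Requires: pip install huggingface_hub
from huggingface_hub import HfApi

api = HfApi()

# list_models / list_datasets accept a free-text `search` argument and
# return iterators over the matching repositories.
models = list(api.list_models(search="deepseek"))
datasets = list(api.list_datasets(search="deepseek"))

print(f"Models matching 'deepseek':   {len(models)}")
print(f"Datasets matching 'deepseek': {len(datasets)}")

# One of the derived reasoning datasets could then be pulled with the
# `datasets` library (repo id assumed from the name cited above):
#   from datasets import load_dataset
#   ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
```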
"This means that in 2026-2027 [when Amodei predicts we will reach AI smarter than humans ndr] we could end up in one of two starkly different worlds. In the US, multiple companies will definitely have the required millions of chips (at the cost of tens of billions of dollars). The question is whether China will also be able to get millions of chips.
- If they can, we'll live in a bipolar world, where both the US and China have powerful AI models that will cause extremely rapid advances in science and technology — what I've called "countries of geniuses in a datacenter". A bipolar world would not necessarily be balanced indefinitely. Even if the US and China were at parity in AI systems, it seems likely that China could direct more talent, capital, and focus to military applications of the technology. Combined with its large industrial base and military-strategic advantages, this could help China take a commanding lead on the global stage, not just for AI but for everything.
- If China can't get millions of chips, we'll (at least temporarily) live in a unipolar world, where only the US and its allies have these models. It's unclear whether the unipolar world will last, but there's at least the possibility that, because AI systems can eventually help make even smarter AI systems, a temporary lead could be parlayed into a durable advantage. Thus, in this world, the US and its allies might take a commanding and long-lasting lead on the global stage."
Amodei uses the "two worlds" scenario, a well-known debate technique aimed at confronting the audience with two starkly opposed perspectives, one in favor of the author's thesis and one against it. This technique is particularly effective because: (a) it narrows all the possible scenarios down to two, reducing complex issues to two (often simpler) ways of interpreting reality, so that all the grays get polarized into black and white; (b) it forces the audience to choose a side, often on the basis of powerful rhetorical imagery and emotionally charged contrasts. In these two scenarios, Amodei depicts a bipolar world, in which China eventually catalyzes talent, capital and resources to assert its dominance, and a unipolar world, in which the US holds the AI power. Let's break this down.
- The two scenarios are not the only possible ones: when AI is open source (as in China's case, with DeepSeek but also with Qwen and other companies), science can advance and many actors, from single individuals to laboratories, can reproduce it. Although it is undeniable that China and the US have a great advantage over the rest of the world, in a non-unipolar world that fosters open source there is still the potential for a decentralized and democratic AI ecosystem.
- The assumption that China will channel AI advancements into the military field completely overlooks the fact that the US is already doing it: in November, Meta and Anthropic itself (Amodei's own company) gave the US government access to their models for security and defense applications; you can read a good summary in a Washington Post article by Gerrit De Vynck, and there are also posts by Meta and Palantir (Anthropic's partner in the deal) about the issue. Amodei himself, in an essay from October 2024, states: "My current guess at the best way to do this is via an “entente strategy”, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”)."
- But let's play Amodei's game on this: let's assume that the US needs to hold the power. Amodei writes about a "coalition of democracies", yet from the perspective of his blog post on DeepSeek the world will be unipolar, led by the "US and its allies": de facto, it will be the US alone, as none of its allies, especially in the EU, holds enough power to contribute significantly to this dominance (which is also why he says unipolar). The point is: is the US that trustworthy? Should we really leave the power of developing (mostly closed-source) AI to providers that bow their heads to the current political leadership and change perspective with every change of government? Obviously, I am not here to deny China's crimes against human rights, its mass control policies and its lack of democracy, which are much worse than what generally happens in the US, but I need to make one point clear: if we have a multipolar world, the checks and balances and the self-correcting mechanisms of science can intervene to identify and correct the problems and biases that Chinese models, as well as European and American ones, have, especially if the development of those models has been open sourced. If a few companies hold AI in their hands, develop it behind the curtains of closed source and control their models from the inside, without getting validation from the wider scientific community, we could see more frequent biases and errors, which would simply go unnoticed because no one is there to check the weights, the training process and the techniques used.
- "Well-enforced export controls are the only thing that can prevent China from getting millions of chips, and are therefore the most important determinant of whether we end up in a unipolar or bipolar world."
Well-enforced export controls can make the difference between a unipolar and a bipolar (or, as I see it, multipolar) world, and that is undeniable. What is less obvious are the implications of this action. Imagine that the next company to release a model that "threatens" US-held AI superiority comes from Europe: following the same logic, if the United States wants to maintain its leadership in this technology, it will impose export restrictions on Europe as well. Perhaps they will not be as harsh as those imposed on China, because most European countries conform to the notion of "democracy" that underpins Amodei's post, but they will be enough to relegate any European company to second, third or fourth place, securing the podium, or at least the first place, for American companies. Setting "well-enforced export controls" on China today creates a dangerous precedent that might see the US doing it again (although perhaps on a different scale) against Europe, or against whoever it feels could be a threat to its leadership in the future.
Conclusion
In light of all that I have written, there is a very important question that I feel compelled to ask after reading Amodei's perspective on multipolarity:
How would unipolarity help science go further? Isn't modern science, by definition, built on multipolarity, sharing and self-correction mechanisms based on review by the scientific community?
When, between the Middle Ages and the 17th century, Europeans left the control of knowledge to Church institutions, we had unspeakable atrocities, including massacres, persecutions and widespread discrimination based on ethnicity, gender and religious beliefs. Between the 17th and the 18th century, the Scientific Revolution first, and the Age of Enlightenment after it, contributed to the democratization of knowledge (think of the Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers, by Diderot and colleagues), which paved the way for the foundations of modern democracy, human rights and modern-day science. In spite of some side effects that still accompanied the transition to modern science and shared knowledge, opening up knowledge led to improvements in the world we live in: from 700 to 1700 AD most Europeans were illiterate, poor and abused by those in power; in just 300 years, the lives of approximately 800 million people changed radically, mostly for the better. I think this is an interesting example to keep in the back of our heads for the future.
I will close with a very brief conclusion that summarizes my main point of view: we need more open source and more companies like Hugging Face 🤗. It is the only way we can really advance AI, because progress is not made by wars and restrictions; it is made by collaboration.