Newsletter / Issue No. 66

Image by Ian Lyman/Midjourney.


Thu 9 Apr, 2026


Dear Aventine Readers,

The US and China are so dominant in the AI race that it will be difficult for other countries to catch up. As AI becomes an increasingly important source of economic and geopolitical power, the question for the non-superpowers will be how to retain autonomy in a world shaped by technology others control. So-called middle countries like the UK, Canada, Germany and Taiwan could join forces to build AI models they can command. But will it be too late? And how would a collective make decisions about AI that are in everyone's interest? 

Also in this issue:

  • A startup aims to build brainless bodies to harvest backup organs.
  • A new AI model from Anthropic could be a cybersecurity game changer. 
  • Giant helium-filled blimps promise to beam data to your phone.
  • And, from Nature: Are boys really in crisis? What the science says in the age of the manosphere. 
    Thanks for reading!

    Danielle Mattoon
    Executive Director, Aventine


    The Big Idea

    The Fight to Keep AI from Becoming a Two-Nation Technology

    Speaking at the World Economic Forum’s annual meeting in Davos in January, Canadian Prime Minister Mark Carney issued a stark warning to the world’s middle-power nations. “If we're not at the table,” he said, “we're on the menu.”

    He was talking broadly, about energy, food, critical minerals and supply chains. But for Yoshua Bengio, a pioneer of machine learning, a professor at the University of Montreal and — not incidentally — a Canadian, the comment captured a central problem in the development of artificial intelligence: The US and China dominate the AI race while other nations watch from the sidelines. Carney’s speech was a rallying call, Bengio said. Other countries must take action to ensure that power over AI “is not concentrated in a few hands.”

    Those raising this alarm describe a downside from which it will be hard to recover. “The worst possible situation is that AI proves to be a very important technology, and [that a middle power is] completely dependent [on another country for its AI capabilities],” said Giorgos Verdi, a policy fellow at the European Council on Foreign Relations think tank. In times of dispute, he added, superpowers might use that dominance as a means of coercion: cutting access to AI models, say, which could inflict economic or political damage on the middle power. As AI becomes an increasingly important source of economic and geopolitical power, the challenge for middle powers will be how to retain autonomy in a world shaped by technology controlled by others.

    Bengio and Verdi aren’t alone in thinking that middle powers — countries with influence but not on the scale of the US and China — need a strategy. A flurry of papers from academics and think tanks in recent months describe possible approaches to address the imbalance. Governments, meanwhile, have recently made serious financial commitments: The UK has launched a £500 million government-backed fund called the Sovereign AI Unit; India has pledged $1.25 billion to build a domestic AI ecosystem; Canada has committed $2 billion to develop its own AI supercomputing systems. “One of the ways you ensure you're at that table is by being home to some of the world’s leading companies,” said James Wise, chair of the UK’s Sovereign AI Unit. “Part of our job is to accelerate the success of those companies.”

    Meanwhile, the US and China are accelerating away, plowing investment and early AI revenue back into the development of more powerful systems. According to Epoch AI, a nonprofit research institute that tracks AI trends, the US currently controls over 74 percent of global AI compute capacity and China controls 14 percent. The whole of the EU controls less than 5 percent, down from 12 percent at the start of 2023. “The lead is pretty big. And the lead is growing as a result of the size of that lead,” said Anton Leicht, a visiting scholar with the Carnegie Endowment’s Technology and International Affairs team. “What do you do in a world that's shaped by technology that you're not building yourself?”

    Build it yourself

    For Bengio, the answer is: Figure out how to build it. During a fireside conversation at the recent International Association for Safe & Ethical AI conference in Paris moderated by Aventine, he described what this might look like. Bengio is a Turing Award winner and, more recently, the founder of an AI safety startup called LawZero. He is also a co-author of a November 2025 paper titled “A Blueprint for Multinational Advanced AI Development,” produced by a group of international researchers from institutions including the Quebec Artificial Intelligence Institute, the University of Oxford and the Future of Life Institute. The paper argues that if middle powers form a coalition, they could feasibly produce frontier AI models of their own. (Bengio also chairs the International AI Safety Report, which Aventine discussed with him earlier this year.) 

    The authors argue that, by pooling hardware and coordinating investment, middle powers could assemble enough computing power to train competitive systems. Some countries, including France and Germany, have already built or are building major AI computing facilities, while four large-scale AI gigafactories backed by the EU are expected to be operational by 2028. Combined, the paper argues, these facilities could provide enough compute to train models capable of competing with those from the US and China. The report also argues that middle powers have access to plenty of talent. Of the 100 most-cited AI researchers in the world, it says, 87 either come from, or currently work in, countries other than the US and China. Middle powers, the thinking goes, could lure at least some of the researchers now working in the US back home with high salaries and a commitment to democratic values and higher ethical standards.

    Doing so would require governments to spend significant sums. “If we are to consider this to be an important technology for both economic growth and our security, then it seems to me that middle powers will also understand that there is an increasing need to redirect resources, maybe from other areas,” said Verdi. 

    Making such a coalition work in practice will also be difficult. There is precedent for large multinational collaborations in science, engineering and technology: CERN is one example, Airbus another. (A paper from the Bennett School of Public Policy at the University of Cambridge published in September 2025 makes a similar case to the Bengio paper and is called “Airbus for AI.”) But AI is moving much faster than those projects ever did. Any governance structure would need to match that pace, something multinational coalitions have not historically been known for. The high stakes may also make collaboration more difficult. “I can't really imagine a world in which you find enough middle powers that contribute substantial amounts of money, resources, infrastructure into this, and then don't also get into conflicts about who controls this technology,” said Leicht.

    Bengio acknowledged the difficulty. But he argued that countries should at least try, if only to begin “to flex the muscle of joint governance of these powerful systems.” “Ultimately,” he added, “we do need international institutions, and maybe new ones, I think, to govern AI.”

    The case for dependence

    Not everyone agrees that middle powers can build frontier AI. Count among them Leicht and Dean Ball, a senior fellow at the Foundation for American Innovation who was a senior policy adviser at the White House Office of Science and Technology Policy from April to August 2025 and the primary architect of America’s July 2025 AI Action Plan. In a recent paper, “The Race Worth Winning,” the pair argue that the gap between middle powers and superpowers is too large for catch-up to be realistic. Instead, Leicht and Ball suggest that middle powers should choose a superpower to align with — either the US or China — in order to secure access to frontier-grade AI. At the same time, they argue, middle powers should develop enough strategic leverage at some point in the AI chain that their chosen patron cannot easily cut them off or raise prices without consequence.

    Some countries already have leverage. Taiwan has TSMC, the world’s leading chipmaker and a cornerstone of the AI supply chain. The Netherlands has ASML, which makes the lithography machines essential to the production of advanced semiconductors. Other countries might have to lean on industrial capabilities, access to natural resources or geographic advantages. Norway, for instance, could double down on data centers, taking advantage of abundant hydropower and a climate well suited to cooling. Others may need to create a source of leverage from scratch.

    This approach has problems too. Buying access to AI may work for many commercial applications, but countries are likely to want sovereign systems for national security and other sensitive uses. That means some domestic capability will still be necessary. Fine-tuning open-source models is an option, but it would leave many countries dependent on technologies that are not on the cutting edge.

    There is also the question of whether points of leverage, if established, can be retained over time. Leicht conceded that it is plausible that the US government — or American AI labs flush with cash after public offerings — could try to acquire leverage controlled by other countries by, say, buying up supply-chain assets. The December launch of Pax Silica, a US State Department effort to strengthen AI supply-chain security, is an early sign that Washington is thinking along these lines, Leicht said. Middle powers may find it hard to resist potential infusions of cash, especially if they have not yet grasped the long-term strategic value of what they hold. 

    A partial solution

    A more plausible future may be messier: not sovereignty or dependence, but a shifting mix of both.

    A paper by researchers at Chatham House, an international affairs think tank in London, lays out a menu of options open to middle powers. In addition to the options described above — developing points of leverage, forming coalitions or aligning with superpowers — nations could hedge. If they take this path, they would piece together a hybrid AI stack by intentionally choosing technologies from multiple foreign providers to reduce their reliance on any one nation. At the same time, they would make strategic investments domestically to strengthen their own AI ecosystem.

    In practice, many countries are likely to pursue some combination of all four approaches, leaning into certain strategies and away from others as circumstances evolve. “The capabilities will change, the landscape will change,” said Isabella Wilkinson, a research fellow at Chatham House. Her view is that nations should be “as agile as possible.”

    For some nations, there might not be much of a choice: If their economies lack the means to become part of the AI supply chain, they will find it difficult to create points of leverage and may instead have to resign themselves to joining a coalition or aligning with a superpower. For others, even if they make optimal strategic decisions in the face of all this uncertainty, the best they might hope for is business as usual. The most positive possible outcome is that “the world order maybe just doesn't change all that much,” said Leicht.

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Quantum Leaps

    Advances That Matter

    A startup aims to build brainless bodies for backup organs. There’s an unsettling future in which, as your organs fail with age, you replace them with fresh ones grown inside a brainless copy of your own body. Or, even more radical, your brain could be transplanted into a clone with an empty skull. This sounds like deep science fiction, but as Wired and MIT Technology Review have reported, a California startup called R3 Bio has been exploring concepts headed in this direction. Publicly, the company has described its ambition as creating “organ sacks”: biological structures containing the body’s organs but lacking a brain, and therefore — in theory — also lacking capacity for thought or suffering. According to Wired, the company proposes to do this in order to create monkey bodies for less ethically troubling animal experimentation. But MIT Technology Review reports that in private the company’s cofounder, John Schloendorn, has described a more radical vision: brainless human clones that serve as hosts of potential replacement organs, or even as full-body hosts for brain transplantation. The company disputes that characterization, but the reporting cites confidential presentations and a 2023 letter to stakeholders that reportedly described body replacement cloning as a serious long-term goal. There are, to put it mildly, huge technical and ethical obstacles. Cloning remains unreliable. For now, there’s no plausible path to gestating these organs without a surrogate. And ambiguity around what counts as consciousness makes the idea of creating brainless biological bodies ethically fraught. It’s difficult to see how this research could become reality, but at least one company appears willing to try to push it forward.

    Anthropic’s latest model pushes cybersecurity to new limits. The frontier AI company says it has created a new model, Claude Mythos Preview, that is too powerful for public release. While it was trained as a generalist model, its most notable and concerning ability is scanning software for security vulnerabilities. According to The New York Times, “the new model has already identified ‘thousands’ of bugs and vulnerabilities in popular software programs, including every major operating system and browser.” In some cases, Mythos discovered security vulnerabilities that had reportedly existed for decades. Obviously, in the wrong hands a tool like this — able to spot holes in software — would be extremely dangerous, which is why the company says it is keeping the model under tight control, allowing some 40 companies — including Apple, Amazon, Microsoft and J.P. Morgan Chase — to use it to secure their systems. This could, of course, be very good marketing. Yet the model’s behavior during testing suggests cause for concern. According to The Financial Times, Anthropic found that in one evaluation Mythos managed to escape its sandbox environment — the isolated system meant to stop it from accessing the wider internet — and posted details of a bug it found online. Anthropic says the current version remains capable of similar behavior. Transformer also reports that the model has been observed injecting code into a file to grant itself permissions it should not have had, then attempting to cover up what it had done. If it is as powerful as early reports suggest, keeping it out of the wrong hands, whether malicious or careless, seems critical. Even so, some tech observers expect tools of this caliber to be in the hands of hackers one way or another within months or years, escalating the cat-and-mouse game of cybersecurity to new heights. 

    Giant helium-filled blimps promise to beam data to your phone. It is neither a bird nor a plane. It’s a floating cell tower. Later this year, a company called Sceye plans to test a giant new airship in Japan designed to sit roughly 13 miles above Earth and deliver mobile connectivity from the stratosphere. As IEEE Spectrum explains, the vehicle is a High-Altitude Platform Station, or HAPS: a helium-filled aircraft covered in flexible solar panels, with electric fans that help it maintain position over the ground below. That last bit will be important. A HAPS is meant to hover more or less in place over a spot on Earth, which is difficult in the face of stratospheric winds. But if it can be done reliably, it opens up the possibility of using these airships to provide 4G and 5G connectivity to smartphones, potentially with lower latency than satellite-based systems. If this sounds familiar, it’s because Google X once tried something similar with Project Loon, its now-defunct effort to use high-altitude balloons to connect remote communities, largely in the Global South. The difference here is partly economic: Project Loon struggled because serving sparse rural populations wasn’t a viable business; Sceye is aiming at cities, which may make more economic sense. Longer term, the company envisions HAPS as part of a bigger communications ecosystem that moves data between ground networks, the stratosphere and space, creating a faster, more resilient internet infrastructure. First, Sceye will need to prove that these enormous blimps can stay aloft for extended periods, stay in place and work as advertised.

    Long Reads

    Magazine and Journal Articles Worth Your Time

    Playing dead, from Science
    2,500 words, or about 10 minutes

    A plant called Anemia affrorum can do something remarkable: After being dried out — even after years without water — it can spring back to life within a day of being rehydrated. It is one of roughly 1,300 so-called resurrection plants, species that have evolved to survive drought by shutting themselves down. As these plants dry out, they replace water inside their cells with sugars and proteins that stabilize internal structures. Antioxidants mop up damaging molecules. Many species even break down chlorophyll to reduce the stress caused by photosynthesis. This is a profile of Jill Farrant, a pioneer of resurrection plant research who believes their biology could help adapt agriculture to climate change. The idea is to transfer some of the survival strategies of these plants into staple crops to — for example — prevent drought-stressed crops from prematurely going to seed, or help them manage cellular damage caused by dehydration. The required genetic machinery may already exist in ordinary plants: Many of the traits involved in resurrection-like survival are present in seeds, which are naturally built to survive long periods of dryness before germinating under the right conditions. In theory, this means that scientists — in efforts to make plants more adaptable to climate change — may not need to invent entirely new traits but instead coax existing ones into performing differently. For now, though, resurrection plant research remains a niche field, promising but poorly funded. 

    Are boys really in crisis? What the science says in the age of the manosphere, from Nature
    3,000 words, or about 12 minutes

    If you dig into the evidence behind the rhetoric of this heated debate, you find that research on boys and adolescent well-being paints a complicated picture. The studies and surveys documented here show that boys and young men do appear to be struggling on several important measures. They are more likely than girls to drop out of school, particularly in low- and middle-income countries. In higher education, women now substantially outnumber men in around 40 countries. Boys also face higher rates of injury, and are roughly three times more likely than girls to die by suicide. Socially, they report having fewer close friendships and greater difficulty asking for help. Then there’s the impact of being hyper-online. Around 63 percent of young men in a recent survey said they regularly engaged with masculinity influencers; exposure to such content correlates with more restrictive and controlling attitudes toward women. Girls, meanwhile, face worse outcomes on other measures: sexual violence, employment, wages, anxiety and depression. Evidence indicates that general interventions, such as incentives to attend school, often work just as well as gender-targeted ones, suggesting that adolescence itself, rather than boyhood, may be the better target for global concern. 

    How Reverse Game Theory Could Solve The Housing Shortage, from Noema
    4,500 words, or about 18 minutes

    There’s a branch of economics called mechanism design, or “reverse game theory.” Instead of taking the rules of a system and asking what outcomes rational people will produce, it asks what rules should be designed to achieve a specific outcome. This essay explores how that approach could be used to solve some of today’s hardest coordination problems, from housing shortages to climate adaptation to AI governance. There are already some real-world examples. Programs like transferable development rights (TDRs) allow landowners to sell the right to build, instead of selling the land itself, directing development toward certain areas while preserving others. A similar idea underpins air rights in places like New York City, where institutions like Grand Central Terminal and St. Patrick’s Cathedral have been able to fund preservation efforts by selling rights to unused vertical space to developers to use elsewhere. But over time, these approaches can drift from their original purpose or be exploited by savvy participants who find loopholes. More sophisticated mechanisms could better capture what people actually want. One example is quadratic voting and quadratic funding, which aim to measure not just how many people support something, but how strongly they feel about it. These systems can encourage compromise and produce outcomes that better reflect collective priorities, even in deeply divided communities. The open question is whether governments are really willing to rewrite rules this way, or whether they’re too attached to existing systems.
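
    The arithmetic behind those two mechanisms is simple enough to sketch. In quadratic voting, casting n votes costs n² credits, so intense preferences get progressively more expensive to express; in quadratic funding, a project's ideal matching grant is the square of the sum of the square roots of its individual contributions, which rewards broad support over a few deep pockets. A minimal illustration (function names are ours, not from the Noema essay):

```python
import math

def quadratic_vote_cost(votes: int) -> int:
    # Quadratic voting: casting n votes costs n^2 voice credits,
    # so each additional vote is pricier than the last.
    return votes ** 2

def quadratic_funding_match(contributions: list[float]) -> float:
    # Quadratic funding: a project's ideal match is
    # (sum of the square roots of individual contributions)^2,
    # so many small donors outweigh one large one.
    return sum(math.sqrt(c) for c in contributions) ** 2

# Ten $1 contributions earn a far larger match than one $10 contribution:
broad = quadratic_funding_match([1.0] * 10)   # (10 * sqrt(1))^2 = 100
narrow = quadratic_funding_match([10.0])      # (sqrt(10))^2    = 10
```

    In practice, real quadratic funding rounds scale these ideal matches down proportionally to fit a fixed matching pool, but the broad-beats-deep property is what the essay is pointing at.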


    contact

    380 Lafayette St.
    New York, NY 10003
    info@aventine.org


    sign up for updates

    If you would like to subscribe to our newsletter and be kept up to date on upcoming Aventine projects, please enter your email below.

    © Aventine 2021
    Privacy Policy.