Dear Aventine Readers,
More AI! This week we unpack the stakes in the AI race, which go a long way toward explaining why the top AI companies are working overtime to show how different they are from one another. Whichever company figures out how to make frontier AI a profitable business could heavily influence the future of the technology.
Also in this issue:
Advances That Matter: AI chips away at long-unsolved math problems, seismic “nodes” speed up mineral prospecting, and 3D printing squeezes batteries into awkward spaces.
Magazine and Journal Articles Worth Your Time: AI hunts for new physics at the Large Hadron Collider, why America isn’t ready for what AI will do to jobs, and the molecular circuits that appear to learn.
See you next week,
Danielle Mattoon
Executive Director, Aventine
AI is Turning into a Brand Battle. Here's Why.
If it weren't already obvious, the Super Bowl ads left no room for doubt: Two of the leading AI companies — Anthropic and OpenAI — are taking their quest for dominance public. Anthropic's ad — a creepy bot come to life selling height-enhancing insoles alongside workout tips — was a jab at OpenAI's plan to sell ads through ChatGPT. OpenAI's spot highlighted the promise of limitless, life-enhancing creation through AI.
Since the advent of ChatGPT more than three years ago, AI driven by large language models has often looked like a single wave of disruption and opportunity. The various chatbots — ChatGPT, Claude, Gemini, Grok and others — astounded and dismayed in equal measure. And it's true that each company's underlying models perform nearly identically on benchmark tests.
But the past 12 months have been about more than upgrade one-upmanship. Through various releases, from AI-powered shopping tools to systems that automate software development, these companies are trying to differentiate themselves in ways that go beyond performance: the products they build, the customers they target and the business models they pursue.
The stakes are high. Building products that can generate enough revenue to help cover the astronomical costs of AI research isn’t just desirable, it’s existential. Companies are burning through billions of dollars on computing costs and staff annually; OpenAI’s own projections show it losing more than $100 billion over the next three years. Which companies succeed and which fail will shape not just the business playbooks of AI companies but the trajectory of the technology itself.
Diverging strategies
The companies’ differentiation has crystallized into distinct strategies. Anthropic is laser-focused on enterprise customers, selling tools for developers and specific industries, with business spending accounting for about 80 percent of its revenues. OpenAI, once the consumer champion — with consumer spending still making up roughly 70 percent of its revenue — seems to be increasingly hedging its bets with a flurry of products aimed at both everyday users and businesses. Google, meanwhile, is embedding AI across its existing empire, patient enough to let AI contribute to Alphabet’s $400 billion annual revenue rather than developing entirely new income streams.
One of the clearest examples of the direction Anthropic is taking is Claude Code, a coding assistant that became widely available in May 2025 and has since exploded in popularity, generating $1 billion in run-rate revenue within just six months. OpenAI's competing tool, Codex — whose most recent and impressive version was released this month — performs similarly on technical benchmarks. But Claude Code is designed to work collaboratively with developers, keeping them in the loop and asking clarifying questions — a subtle product distinction that has clearly resonated with its target market.
Anthropic has also built industry-specific tools for law, finance, health care and life sciences. When it released add-ons for legal work, stock prices of traditional legal software companies tanked. Notably, Anthropic still doesn't offer image generation — unlike OpenAI's DALL-E or Google's Nano Banana — underscoring its focus on commercially valuable business tasks over more whimsical consumer demands. The strategy appears to be working: Anthropic is generating $9 billion annually as of late 2025, double its revenue six months ago.
OpenAI has long dominated the consumer space. It claims over 700 million weekly active users, making it one of the most visited websites in the world, according to data provider Semrush, behind Google, YouTube, Facebook and Instagram. It has leaned hard into that audience with a suite of consumer products: Sora, a social network for AI-generated videos; an AI-powered web browser; a shopping assistant; health and wellness tools. Now it’s testing ads on its free and new low-price tiers.
Yet increasingly, OpenAI is flooding the business market with new tools too: Codex; Frontier for managing teams of agents; Prism for scientific collaboration. It's also, according to The Information, hiring hundreds of consultants to boost enterprise sales. The scattershot approach — chasing both consumers and enterprises across dozens of use cases — could allow the company to test multiple business plans at once in order to find the most profitable ones, but at present industry watchers are not impressed. The approach, said Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School who studies AI’s impact, is "a little bit of a mess."
Google occupies a different position entirely. Rather than building a raft of stand-alone AI products, it is embedding its Gemini model everywhere users already are: in Search, Gmail, Android and so on. That approach is gaining traction: Its AI-generated search results, for example, were once heavily ridiculed but now seem to be widely embraced. AI contributes to Google's overall revenue rather than creating distinct income streams, and that gives the company the luxury of patience. "It can absorb much higher losses if it needs to," explained Mollick, who in addition to teaching also writes about AI on a Substack called One Useful Thing. And because Google isn't under pressure to turn AI into hundreds of new products immediately, it can invest in R&D at its own pace.
Shared realities
In a sign of a maturing market, the competitive landscape in which these companies operate is shifting, and OpenAI, whose ChatGPT became synonymous with AI as soon as it launched, is no longer ascendant. ChatGPT's share of daily US users fell from 69 percent in January 2025 to 45 percent in January 2026, according to Apptopia data reported by Big Technology. In the same period Google Gemini's share climbed from 15 to 25 percent, while Claude's consumer presence remained negligible — under 2 percent. The enterprise story is even more dramatic. OpenAI commanded 50 percent of business spending on large language models in 2023, according to estimates from VC firm Menlo Ventures, but that share dropped to 27 percent by the end of 2025. Anthropic’s share went from 12 to 40 percent over the same period, while Google’s grew from 7 to 21 percent.
The field now seems wide open. What happens next depends on how fast these companies can develop more advanced AI, how much money they lose along the way and — particularly in the case of Anthropic and OpenAI — what they can do to stanch those losses.
All three companies are pursuing what insiders call "recursive self-improvement," said Mollick — building AI systems capable of independently conducting high-quality AI research to improve themselves, with the goal of creating a virtuous cycle that could lead to superintelligence. Leaving questions of superintelligence aside, the self-improvement part of this quest seems to be working: Anthropic has said that "we build Claude with Claude," and OpenAI reports that its latest model "was instrumental in creating itself." The implicit theory, that whichever company achieves this first could establish not just market dominance but a lasting technological moat, goes a long way toward explaining why these companies are willing to burn so much cash.
The immediate financial realities these companies face could force a reckoning before they reach their goals. This is particularly true for OpenAI and Anthropic, since Alphabet, Google's parent company, has already recorded more than $400 billion in annual revenue, more than 80 percent of which comes from Google ad sales. Training a single frontier model costs roughly $1 billion today, and these models don't appear to turn a profit before they're replaced by newer versions. Salaries are astronomical. OpenAI, despite claiming $20 billion in annual revenue, is projected to lose $17 billion in 2026, $35 billion in 2027 and $47 billion in 2028 — about $100 billion over three years — according to The Information. The Wall Street Journal reported that OpenAI won't be profitable until 2030, with Anthropic expected to break even in 2028.
All of this explains the current scramble: ads and shopping referrals for consumers, specialized enterprise tools for businesses, anything that might close the gap between revenue and costs before investors lose patience. Anticipated initial public offerings for Anthropic and OpenAI — potentially this year — would inject fresh capital to sustain the burn rate until profitability arrives. That is particularly important as these companies risk running out of private investors with pockets deep enough, and goodwill enough to wait indefinitely for an exit, to sustain the spiraling costs of their businesses.
There's a deeper tension here too. Anthropic in particular has differentiated itself partly on safety, promising not to release models it deems unsafe due to potential misuse. But its latest model, Opus 4.6, is approaching or exceeding the safety thresholds the company itself defined. That means its models appear to be increasingly capable of helping to execute major cyberattacks or to facilitate the development of potent bioweapons, in direct contradiction to Anthropic’s stated focus on safety. "Eventually they'll have to release something relatively risky to stay competitive," wrote Mike Knoop, a co-founder of the automation software company Zapier and a new AI lab called Ndea, in an email to Aventine. "Will they?"
That is one among many uncomfortable questions that these companies will face in the coming months. Can they serve their customers, survive financially and maintain their stated principles as they race to find a sustainable business model and perhaps build superintelligence? The answer will shape not just the business playbooks of these companies, but the trajectory of AI itself.
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Advances That Matter
AI is starting to chew through the long tail of open math problems. Paul Erdős, the prolific mathematician who died in 1996, left behind more than a thousand deceptively simple unsolved questions, and so far only about 41 percent of them have been solved. Now AI-based approaches are beginning to pick off the easier ones. Since the start of the year, AI has solved two problems for which the math literature contained no full or partial solutions, and has contributed to several others. Terence Tao, the Fields Medal-winning UCLA mathematician, wrote that “many of these easier Erdős problems are now more likely to be solved by purely AI-based methods than by human or hybrid means,” as models can be applied systematically to obscure problems that few specialists might find time to work on. Meanwhile, a startup called Axiom has developed an AI math tool called AxiomProver, which it claims has generated proofs for four previously unsolved results in areas including algebraic geometry and number theory. Even if these tools are picking off problems that remain unsolved partly because they weren't deemed worth an expert’s time, mathematicians argue that the falling marginal cost of answering them matters, because it will help broaden our grasp of mathematics.
Small, high-tech devices are speeding up mineral prospecting. Last summer in Utah’s Tushar Mountains, crews hired by the mining company MAG Silver planted roughly 200 basketball-size, 20-pound “nodes” across the hillsides. Made by an Australian company called Fleet Space Technologies, the devices sit in shallow holes for days or weeks recording naturally occurring ground vibrations, reports The Atlantic. The data is then processed into high-resolution underground maps that indicate where reserves of metals including silver, gold, copper, cobalt, nickel and lithium might lie beneath the surface, a method known as ambient-noise tomography. Compared with traditional methods, which search for underground deposits using heavy equipment or explosives that can cause long-term harm to ecosystems, the nodes leave only small holes when removed. Because US land use rules often hinge on whether a prospecting activity causes “significant” surface disturbance, the nodes could increase prospecting opportunities and move projects from discovery to development faster than in the past.
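To make the method a little more concrete, the core signal-processing trick in ambient-noise work is cross-correlation: two sensors record the same random ground vibrations, and the lag at which their recordings best line up reveals how long waves take to travel between them. Below is a minimal Python sketch of that idea; the sampling rate, delay and noise model are invented for illustration, and this shows the general principle rather than Fleet Space's actual pipeline.

```python
import numpy as np

# Two sensors record the same ambient ground noise, offset by the time
# the waves take to travel through the rock between them.
rng = np.random.default_rng(0)
fs = 500                           # sampling rate in Hz (hypothetical)
n = fs * 60                        # one minute of recording
true_delay = 40                    # travel time between sensors, in samples

wavefield = rng.standard_normal(n + true_delay)
sensor_a = wavefield[true_delay:]  # the noise reaches sensor A first...
sensor_b = wavefield[:n]           # ...and sensor B 40 samples later

# Cross-correlate the recordings; the peak sits at the travel-time lag.
xcorr = np.correlate(sensor_b, sensor_a, mode="full")
estimated_delay = np.argmax(xcorr) - (len(sensor_a) - 1)

print(estimated_delay / fs)        # recovers 40 samples, i.e. 0.08 seconds
```

With the distance between sensors known, each recovered travel time implies a wave speed, and repeating this across the many pairs among roughly 200 nodes is what lets the data be inverted into a 3D picture of the subsurface.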
3D printing could squeeze better batteries into tight spaces. Mass-produced batteries tend to be unforgivingly rectangular or cylindrical, but not every device is a neat shape. So what happens if you’re trying to pack maximum capacity into, say, the arms of smart glasses or the frame of a next-generation drone? IEEE Spectrum reports that startups including Material in Florida and Sakuu in California are using 3D printing approaches to make batteries in custom shapes. In Material’s case, its platform can print a full battery stack — anode, cathode, separator and casing — without molds or special tooling. In a proof-of-concept project with drone maker Performance Drone Works, Material says its 3D-printed batteries delivered about 50 percent higher energy density than filling the same space with standard cells. The catch is scale: The approach is currently limited to small batteries and specialized applications, and costs won’t compete with conventional cells unless production ramps up dramatically. But for devices with awkward internal geometry where cost is less of an issue, building batteries to fit the space — not the other way around — could unlock more power.
Magazine and Journal Articles Worth Your Time
AI Hunts for the Next Big Thing in Physics, from IEEE Spectrum
5,100 words, or about 20 minutes
Particle physics has a problem. The Large Hadron Collider has generated mountains of data, but there’s been little sign of the “new physics” that many hoped would emerge from all that information. Now, researchers are betting that a different approach could help. IEEE Spectrum reports on the growing interest in physics around an AI approach called unsupervised learning — similar to the anomaly-detection tools used in cybersecurity — that doesn’t start with a specific goal. Instead, these systems learn what “normal” looks like in data and flag patterns that don’t fit, potentially pointing physicists to phenomena their existing models didn’t predict. The risk is false alarms, which would alert physicists to results that aren’t actually meaningful. But for a field hunting for new science, AI could be the best shot in decades.
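The underlying move, learning what "normal" looks like and flagging whatever doesn't fit, can be sketched in a few lines. The Python example below uses scikit-learn's IsolationForest on made-up event features; real analyses rely on far richer detector data and often on neural networks, so treat this as the shape of the idea rather than the physicists' actual method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Stand-in for features of "normal" collision events (all values invented).
normal_events = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))

# Learn the shape of "normal" without ever specifying a signal to look for.
detector = IsolationForest(contamination="auto", random_state=0)
detector.fit(normal_events)

# New data: mostly bulk events, plus a tiny cluster standing in for new physics.
new_events = np.vstack([
    rng.normal(0.0, 1.0, size=(995, 4)),
    rng.normal(6.0, 0.5, size=(5, 4)),
])
scores = detector.decision_function(new_events)  # lower score = more anomalous
flagged = np.argsort(scores)[:10]                # candidates worth a closer look
print(flagged)
```

The trade-off the article describes shows up directly here: loosen the flagging threshold and you catch subtler anomalies, but you also drown physicists in false alarms.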
America Isn’t Ready for What AI Will Do to Jobs, from The Atlantic
7,500 words, or about 30 minutes
In this cover story for The Atlantic, Josh Tyrangiel tries to cut through some of the debates about whether AI will bring abundance, mass unemployment or just a new form of business-as-usual to a more immediate reality: Individual institutions aren’t built to understand how this will play out. Economics, he suggests, is constrained by models built on historical precedent, which means “driving while looking only at the rearview mirror.” Corporations have a hard time imagining what work inside firms will look like as AI spreads. Policymakers, meanwhile, lack both a sense of urgency and basic visibility into what’s changing. Tyrangiel makes a modest proposal: Systematically measure how AI is actually reshaping tasks, roles and wages in real time so that governments and businesses can respond to present-day reality, not what yesterday’s economy would predict.
How teaching molecules to think is revealing what a 'mind' really is, from New Scientist
2,300 words, or about 11 minutes
Learning might not be the exclusive preserve of organisms with brains. In this story, New Scientist reports on work suggesting that gene regulatory networks — the molecular circuits that control when and how genes turn on and off — can display associative conditioning akin to Pavlov’s experiments with dogs. In recent research from Tufts, such networks were “trained” by repeatedly pairing a neutral drug with a functional one that brings about a physical response; after training, the neutral drug alone could elicit the response, implying a memory-like change in the network’s behavior. The claim isn’t that molecules are somehow conscious, but that learning might exist on a continuum, and can emerge in systems far simpler than creatures with brains. If the idea holds up experimentally, it could point toward therapies that exploit learned molecular responses — for example, reducing tolerance or triggering effects with gentler cues.
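For intuition about the conditioning claim, here is a deliberately crude toy model in Python, not the Tufts gene-network experiments: a single response unit driven by two inputs, where a Hebbian-style update strengthens the neutral stimulus's coupling each time it is paired with the functional one, until the neutral stimulus alone crosses the response threshold. All weights, rates and thresholds are invented.

```python
# Toy associative conditioning: "us" is the functional drug (unconditioned
# stimulus), "cs" the neutral one (conditioned stimulus).
us_weight, cs_weight = 1.0, 0.0   # initial coupling to the response
learning_rate = 0.2
threshold = 0.5                   # activation needed to elicit the response

def responds(us: float, cs: float) -> bool:
    return us * us_weight + cs * cs_weight >= threshold

# Training: repeatedly pair the neutral drug with the functional one.
for _ in range(10):
    if responds(us=1.0, cs=1.0):  # response fires while the CS is present...
        # ...so strengthen the CS link (a saturating, Rescorla-Wagner-style update).
        cs_weight += learning_rate * (1.0 - cs_weight)

print(responds(us=1.0, cs=0.0))   # True: the functional drug works as always
print(responds(us=0.0, cs=1.0))   # True: the neutral drug alone now elicits the response
```

The point of the research is that gene regulatory networks may implement something with this logic chemically, with no neurons involved.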