Dear Aventine Readers,
The arrival of Claude Code and similar tools like OpenAI’s Codex marks a new phase in the relationship between humans and machines. By allowing users to create software simply by describing what they want instead of writing line after line of code, the tools turn a skill that previously required years of specialized training into something practically anyone in the office can do. So far, users have come primarily from the tech world, but non-tech professionals are increasingly trying the tools out. This week we dive into Substack to find out how these new users are acclimating to the technology and what they’re able to do with it.
More Substack highlights:
Thanks for reading and see you next week,
Danielle Mattoon
Executive Director, Aventine
Subscribe
Subscribe to our newsletter and be kept up to date on upcoming Aventine projects
Should We All Be Vibecoding?
The story of Aladdin comes with a pretty clear message: If you can make wishes that come true, you’d better make good ones. Technologists have been learning that’s also true of vibecoding tools like Anthropic's Claude Code and OpenAI’s Codex. If you hand them the right challenge, framed in the right way, these systems can save you time and effort by spinning up impressive pieces of software seemingly out of nowhere. So far, the people able to do this have been highly skilled technologists. But what do these tools offer the rest of us? Could we, too, maneuver them into solving our problems? This week we survey Substack to find out.
The Aladdin metaphor comes from Alberto Romero’s Substack The Algorithmic Bridge, in a description of how these systems “collapse the process of doing things inside a computer into basically a wish.” Using earlier chatbots often meant asking how to accomplish something. Working with Claude Code or Codex, he wrote, increasingly means simply specifying what you want done.
So far, relatively few people have internalized that shift. “While the ‘how’ is collapsing for OpenAI and Anthropic engineers and developers and also a good chunk of Silicon Valley nerds and a much smaller chunk of office workers around the world, most people have not realized this is happening,” added Romero. “Some reject the idea outright, which is respectable. But most have simply not given it a thought.”
Part of the reason is practical. Getting started with Claude Code or Codex is not as frictionless as opening a browser and engaging a chatbot. Hannah Stulberg, a former Google product manager, offered a helpful walkthrough of how to get started on her Substack, In the Weeds. To summarize: It requires signing up for a paid account, installing software and interacting with unfamiliar tools like the command line. None of it is especially difficult, but it can feel daunting if you’ve never done anything like it before.
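For the curious, the setup Stulberg describes boils down to a couple of terminal commands. This is a rough sketch rather than official instructions — package names and account requirements may change, and you’ll need Node.js and a paid Anthropic plan already in place:

```shell
# Install the Claude Code command-line tool globally via npm
npm install -g @anthropic-ai/claude-code

# Move into the folder for your project, then launch an interactive session;
# the first run prompts you to log in to your Anthropic account
cd my-project
claude
```

From there, you type plain-English requests at the prompt and the tool reads, writes and runs code in that folder on your behalf.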
Once the tools are set up, you’ll likely find yourself staring blankly at the prompt box wondering: What the hell now? Yes, there are easy tasks to try. Have it build you a website or analyze a bunch of spreadsheets. But then what? Jasmine Sun, who covers Silicon Valley culture, neatly identified this feeling and what is driving it in a post about her first forays with Claude Code. “Most people’s problems are not software-shaped,” she wrote. “And most won’t notice even when they are.” Software engineers think about the world in a special way: They are trained to see any repetition as an opportunity for automation. Most other professionals are not. “We are blind to the solutions we were never taught to see,” she wrote, “asking for faster horses and never dreaming of cars.”
The only way past that barrier, Sun found, was experimentation. After initially struggling to come up with uses for the tool, she began building small programs to, for example, scrape podcast transcripts and track her eating habits. These are not earth-shattering projects. But they did represent something new for her: the ability to turn an idea into working software without needing to learn programming first. Completing them triggered what she describes as a bout of “Claude Code Mania,” a sense that entirely new ways of getting stuff done had suddenly opened up for her.
Professionals go prompting
These sorts of personal projects show that there’s potential. But what happens when people begin applying these tools to real work?
Consider the experiment that Wharton professor Ethan Mollick ran with his MBA class, which he described on his Substack, One Useful Thing. Mollick asked his students — a collection of doctors, managers and executives, most of whom had never coded — to build startups from scratch in just four days. Their task was to produce working prototypes, market research, competitive analysis, pitches and financial models.
"I've been teaching entrepreneurship for a decade and a half, and I've seen thousands of startup ideas," wrote Mollick. "I would estimate that what I saw in a couple of days was an order of magnitude further along the path to a real startup than I had seen out of students working over a full semester before AI."
Their success in making use of the AI, Mollick argues, stemmed not from technical skill but professional experience. Effective management — clearly scoping problems, defining deliverables, evaluating whether something works, defining how often to check in on progress — seemed to translate remarkably well into directing AI systems. "They weren't AI experts. But they'd spent years learning how to scope problems in their fields of expertise, define deliverables, and recognize when a financial model or medical report was off,” wrote Mollick. “They had hard-earned frameworks from classes and jobs, and those frameworks became their prompts."
Then there’s the story of Lazar Jovanovic, profiled on the Lenny’s Newsletter Substack. Jovanovic works at Lovable, a company that helps users build websites with AI. He can’t write traditional code; instead, he is employed as a professional “vibe coder,” building internal tools and customer-facing products entirely through AI systems. His value lies not in programming but in knowing what to build and recognizing when it works.
Most current examples involve relatively simple applications: small tools rather than the complex, mission-critical systems that enterprises depend on. And while companies like Google, OpenAI and Anthropic are using tools like this to build and ship products, it may be a while before most new users outside the technological vanguard put the technology to particularly transformative use. Ben Follington has argued on his Substack, NicheCraft, that when the incremental cost of building software falls close to zero, you end up with a lot of people building poorly conceived products. “The code exists,” he wrote. “But nobody asked whether it should.”
And while these tools can be powerful, they can also go wrong. On the Substack The Argument, Kelsey Piper explained how the tool accidentally deleted files and ignored her explicit instructions. “99 percent of the time, it feels like magic,” wrote Piper. “The remaining 1 percent is absolutely maddening.” And on the Substack Don't Worry About the Vase, which covers AI broadly, Zvi Mowshowitz laid out some other reasons not to get too excited, including painful debugging sessions with the bots and poor performance on building graphical user interfaces.
What is striking is not that these tools are perfect, but that they expand who can participate in building software. Tasks that once required specialized training can now be attempted by anyone willing to experiment. John Hwang, who runs the Substack Enterprise AI Trends, likens the emergence of these tools to the arrival of spreadsheets. “Claude Code is the new Excel,” he wrote. “Vibe coding is the new pivot table.” First in finance and then in many parts of the knowledge economy, Excel became a foundational piece of software, helping people crunch numbers in a way they couldn’t before. Hwang predicts that tools like Claude Code and Codex will evolve in the same way: Vibecoding might not turn everyone into a software engineer, but it might mean that some of us get to work with far more powerful tools.
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Notable Thoughts from Life Online
The 2028 Global Intelligence Crisis, from Citrini Research
This fictional memo from the perspective of June 2028 imagines a dystopian world in which AI-induced job loss brings about financial meltdown. Co-written by James van Geelen of the financial Substack Citrini Research and Alap Shah, CEO of an AI assistant company called Littlebird, it caused a major stir online and was blamed for Monday’s stock market slide. It also inspired a string of strongly argued responses. (In an essay titled “Nobody Knows Anything,” Derek Thompson sums up the AI discourse as “a marketplace of competing science fiction narratives.”) Even with its flaws, the Citrini memo is a worthwhile thought experiment, posing the question: What happens if AI disrupts the relationship between white collar labor and consumer spending? At first, the memo explains, productivity rises and profits grow, even as layoffs spread across white-collar industries. But as AI agents begin performing large portions of knowledge work, the software sector itself starts to hollow out. Then automated systems acting on behalf of consumers optimize spending, intensifying competition and cutting margins. Displaced professionals move into lower-wage roles, pushing down incomes across the board. Consumer spending crashes, mortgages default, and complex financial dynamics in private equity that once looked secure begin to unravel. This is, remember, just a thought experiment. But it highlights a significant vulnerability: Modern financial systems are built on the assumption that human intelligence is a scarce commodity, and AI could change that faster than institutions can adapt.
Why All Mammograms Should Incorporate A.I., from Ground Truths
Artificial intelligence has been promising to transform breast cancer screening for nearly a decade, but the evidence is now catching up. In this post, physician and researcher Eric Topol surveys a growing body of clinical trials involving hundreds of thousands of women and concludes that the technology is ready for routine use. There are more immediate benefits than you might think. First, AI systems improve the accuracy of breast cancer detection while reducing radiologists’ workload. Second, they can identify patients at elevated risk of developing cancer in the future, enabling closer monitoring and earlier intervention. And third, and most surprising, mammograms analyzed with AI also reveal signs of cardiovascular disease, turning a single scan into a broader health screening tool. Topol argues that these advances make a compelling case for incorporating AI into standard mammography. The challenge now is figuring out how to integrate these tools into clinical workflows without increasing costs or widening gaps in access to care.
My week with the AI populists, from Jasmine Sun
Jasmine Sun went to Washington, DC, to find out what happens if you step outside the AI haze enveloping Silicon Valley, and discovered a fast-forming backlash to the technology. She describes a loose but growing coalition united by a shared sense of grievance and alarm over — among many other issues — data centers reshaping local communities, AI tools that enable misinformation and pornography, and the likelihood of AI taking jobs. “The AI populist coalition … is formidable yet fractured,” she writes. “They have the public on their side, plus a quiver of narrative weapons — AI is taking jobs, violating copyright, spreading CSAM, enabling cyberattacks, creating a bubble.” Perhaps the most important point she makes is that while the AI boom is being driven by enormous concentrations of capital and technical expertise, it may ultimately be constrained by politics. “All the money is on one side,” she writes, “And all the people are on the other. We aren’t ready for how much people hate AI.”
You are no longer the smartest type of thing on Earth, from Noahpinion
If you want to understand what the hypothesized arrival of artificial general intelligence might actually feel like, Noah Smith offers one of the clearest analogies so far. He argues that asking whether AI will take your job misses a much deeper point. The more fundamental change, he suggests, is what happens when humans are no longer the most capable problem-solvers on the planet. To illustrate the point, Smith compares the moment to the arrival of Europeans in North America. “The European system was just much more capable of getting things done,” he writes. “The Europeans had writing, corporations, shipbuilding industries, advanced metallurgy, organized bureaucracies, and a ton of other things that were not included in Native American culture … The day that Europeans arrived on North American shores, the Native Americans of what is now the United States lost control of their destiny — forever.” The most important question, in that world, is not whether AI replaces specific jobs, but what it means to coexist with systems that outperform us. And if his analogy is in any way correct, it will be painful indeed.
Solar's Land Use Problem Is Much Worse Than You Think, from Energy Bad Boys
Solar power has many advantages, but land efficiency isn’t one of them. This detailed post, written by two energy modelers who have worked with state agencies, nonprofits and industry, examines just how much land solar installations require in order to compete with conventional power plants. Based on analysis prepared for expert testimony, they calculated that replacing a single natural gas plant occupying 58 acres would require roughly 105,000 acres of solar panels — about 165 square miles, which is nearly the area of San Jose, CA. To put that another way, meeting Iowa’s entire electricity demand with natural gas would require about 1,300 acres of land; meeting it with solar would require roughly 2.9 million acres, or around 8 percent of the state’s total land area. Solar isn’t about to disappear as a core renewable energy source, but in places where space is at a premium, adoption will face hurdles.
GDP numbers in poor countries are usually fake, from David Oks
GDP figures convey a sense of accuracy. The gross domestic product of Afghanistan, for example, appears on IMF tables as a precise $18.08 billion. But as David Oks points out in this post, Afghanistan doesn’t actually publish comprehensive economic data; the number is an estimate built on models, assumptions and fragmented evidence. The same is true for many other countries. Sudan’s GDP is listed by the IMF at $39 billion, but that number is derived in part from assumptions based on economic surveys conducted half a century ago. Across large parts of the developing world, the basic inputs needed to calculate GDP reliably — industrial output, consumption, business activity — are incomplete, outdated or totally absent. In other words, some of the world’s most widely cited economic indicators are far more uncertain than they appear. Oks traces how this situation emerged and why it persists: international institutions need numbers to function, even when those numbers are imperfect. So when you see an economy, particularly in a developing country, suddenly surge, bear in mind that it might not reflect actual change on the ground but rather an update to a rickety set of statistics.
VC-backed startups are low status, from On My Mind
Founder cred ain’t what it used to be, apparently. For decades, founding a venture-backed startup was a signal of being highly intelligent, independent and creative. But according to venture capitalist Michael Dempsey, that cultural prestige is being diluted by startup founders driven more by financial calculation than passion. He argues that startups have become more akin to investment vehicles than projects born of a single-minded focus on creating something wholly new. Part of the shift stems from changes within venture capital itself: As the industry has grown, Dempsey suggests, VC firms have come to resemble investment banks, optimizing for more predictable returns. Founders have responded to those incentives, he says, by building companies in consensus categories that are easier to fund and scale but less likely to reshape industries. The result is that getting VC backing for startups has become more transactional than it used to be. “If you are going to do the thing that is losing status,” Dempsey writes, imagining the mindset of new founders, “you might as well gun for the kind of money that gives you options later.” Dempsey suggests this may produce a counterreaction: a new generation of founders motivated by principles rather than returns.
Inside China's Real Advantage: Manufacturing at Scale, from Rui’s Substack
People talk about China’s huge manufacturing capacity, but what does it look like on the ground? In this post, tech analyst Rui Ma describes touring clean energy manufacturing facilities across China, including factories producing batteries, solar cells and grid equipment. She focuses in particular on Liyang, a city known as China’s battery capital. Some of the details are phenomenal. One local policy designed to attract companies is called “1220”: one working day to register a business, two to complete real estate transactions and twenty to obtain construction permits. Imagine that happening in the US. And when a city like Liyang wants a major manufacturer such as Contemporary Amperex Technology Co. (CATL) to set up operations, local authorities don’t just support the company itself, they help relocate entire supplier networks to make it a success. The resulting scale is hard to comprehend. In some cases, individual factories consume so much raw material that suppliers dedicate entire ships to servicing a single facility. When that speed and scale are accompanied by a mindset of keeping margins tight, it’s easy to see how China has become such a manufacturing powerhouse.