Newsletter / Issue No. 69

Photo from Yoshua Bengio, interpreted by ChatGPT


5 Feb, 2026

Dear Aventine Readers,

I wish it were not so, but there is no hiding from the effects — real and anticipated — of artificial intelligence. This week we devote two features to the state of AI risk.

The first is an interview with Yoshua Bengio, a key architect of artificial intelligence as we know it. He discusses what he believes are AI's greatest dangers, his regret at failing to take such concerns seriously earlier in his career and — surprise! — the reasons he's optimistic about the future of AI safety today.

The second is a look at OpenClaw, a new AI assistant. Much of the internet was in a state of panic last week because OpenClaw — which fans say can take care of some of life's most tedious tasks — had enabled a bot-only social media site where thousands of bots discussed creating a language only they could understand. Read on to learn how this came to pass and what its long-term implications might be.

Also in this issue: 

  • Japan restarted the world’s biggest nuclear plant 15 years after it was shut down.
  • Bespoke drugs could be developed for everyone.
  • And full brain emulation is inching toward reality.

    Thanks so much for reading,

    Danielle Mattoon 
    Executive Director, Aventine


    Questions For

    Yoshua Bengio, AI Pioneer

    The T in ChatGPT owes a lot to Yoshua Bengio. 

    The Canadian computer scientist, a professor at the Université de Montréal and founder of the AI research institute Mila, helped lay the groundwork for deep learning. His research from the 2010s inspired some of the core principles behind the transformer architecture now used in all modern large language models (thus the “T” in Generative Pre-trained Transformer, or GPT). Along with Geoffrey Hinton and Yann LeCun, he won the 2018 Turing Award for his contributions to AI.

    But in recent years, Bengio’s priorities have shifted from advancing the frontier of AI to dealing with the risks it creates. Last year, he founded LawZero, a nonprofit startup developing ways to make AI safe through design. He also chairs the International AI Safety Report, a scientific assessment of general-purpose AI capabilities and risks that includes contributions from more than 100 global experts, modeled in spirit on the Intergovernmental Panel on Climate Change’s regular reviews.

    The second annual AI safety report, published this week and running 220 pages, highlights rapid advancements in AI's mathematical and coding capabilities, examines how LLMs create new risks of bioterrorism and cyberattacks, and warns that AI systems increasingly detect when they're being tested, making their societal impact harder to predict. While the document doesn’t make policy recommendations, the hope is that it will inform conversations about AI, including those due to take place at February’s India AI Impact Summit.

    Ahead of the report's publication, Aventine spoke with Bengio about his biggest concerns regarding AI's impact, how its capabilities might evolve, what policymakers should do differently and how he reconciles his pioneering work with his current safety concerns. What follows has been edited for brevity and clarity.

    The existential risk posed by AI, which sits at the core of the Safety Report, wasn’t at the top of your worries until a few years ago. Was there a specific moment that shifted you from focusing primarily on AI’s capabilities to becoming one of the most prominent voices warning about AI risks?

    It's the arrival of ChatGPT. Two and a half months after it came out, I realized we had reached a crucial point that came much earlier than anybody anticipated. The ability to master language — in fact, many different languages — is something that was anticipated by Alan Turing, the British father of computer science. Around 1950, he warned that this would be a sign that we should be very concerned; that if we built machines that understand language, then we would be pretty close to machines that could be smarter than us and dominate us, potentially, if we're not careful. 

    You spent decades as one of the key architects of deep learning, helping build the technology that's now at the center of these safety debates. How do you personally reconcile your pride in those scientific contributions with your current concerns about where the technology is heading?

    I feel like I should have ... if I had been fully rational, I should have seen those potential risks coming well before [we started to see the] strong, strong signs we are seeing over the last year. A number of scientists had been writing about those possibilities, and I even read a lot of that stuff, but I didn't take it seriously. And I think, unfortunately, the whole community of AI researchers in academia and industry is a bit late in really taking stock of the magnitude of the risks that are in front of us. I think it's interesting to ask, why is it that I and many others have been looking the other way? I think it’s about human nature and psychology. In my case, [that] made me focus on the benefits; my research was on things like medical applications, climate applications and so on. But if we want to do the right thing for humans — and I really pivoted because I was thinking about my children — we have to confront the possible catastrophic risks as well.

    The International AI Safety Report takes a holistic view of AI risk. Before we dive into specifics, what worries you most about the impact of AI in the near term? 

    In the near term, AI is going to provide capabilities that could be misused. We already see that happening at some not catastrophic but disturbing scale. The report talks, for example, about what happened with cyber attacks over the last year. The cyber capabilities [of AI] have gone up very significantly. And last fall, just a few months ago, Anthropic revealed that some bad actors had been using its system to launch a major set of cyber attacks in the United States. The companies are trying to mitigate those risks, but for now, as discussed in the report, we don't know how to prevent these misuses. It really remains a very difficult scientific problem. Another near-term effect I want to mention that popped up this year is the psychological effects. It wasn't anticipated that people would be using their AIs, for example, as companions, creating emotional attachment. I don't think we have enough scientific studies [around that], but there are reports, for example, from OpenAI, about a very large fraction of users — large like 0.15 percent, but 0.15 percent of 800 million people [about 1.2 million people] — talking about suicide.

    How about in the longer term?

    It's even more unknown unknowns. [How are we going to address] the evolution of young minds, the evolution of work, how society is going to evolve with the presence of these tools? [Or] the consequence of creating tools with a very high degree of intelligence, [because] intelligence gives us power over other entities on the planet? Humans will use that power in bad ways [and] there could be political and even military consequences. And if you ask about longer term, I'm of course worried about the new evidence this year that is discussed in the report about AIs that resist being shut down, that are willing to lie and deceive, and that are also making our evaluations of risk less reliable because they seem to be able to detect that they're being tested. 

    Harmful incidents involving AI-generated content are becoming more common, as the recent Grok controversy around nonconsensual intimate imagery and child sexual abuse material has brought into sharp focus. What can be done about it?

    Like for many other risks of AI and societal impact, there are two kinds of mitigation that both need to be worked on. One is technical mitigation. We need to be able, technically, to block some requests, and right now, this is still a very challenging question. We're seeing some progress, but clearly not enough. And the second is the willingness to impose rules. That's a political question. Even if we know how to mitigate, there might be financial interests in allowing these uses that most people don't like. That's out of the scope of the report; that is something for which the kind of discussion we're hoping to see at the India AI Summit would be very relevant. What I've seen is public opinion reacting a lot more to these moral red lines being crossed at the level where bad content hurts individuals, especially children and adolescents. So I'm hoping that this will motivate governments to do something.

    AI researchers and economists seem largely at odds over how big an impact AI will have on the economy: Some predict exponential growth giving rise to radical abundance, while others expect only a modest boost to productivity. Who’s right?

    I don't know, but I can explain what appears to be the causes of these disparities. By the way, even among economists, the opinions differ, and among computer scientists, the opinions differ. So it's not economists versus computer scientists. The cause of the difference has to do with the beliefs about future advances in AI capabilities — is it going to continue, and how slow or how fast is it going to happen? A lot of economic studies look at AI as it is now or, even worse, because the studies take time, at how it was one or two years ago. But data shows capabilities are on the rise, and so the effects that are measured over the last one or two years may be stale at this point, especially if you're in government or leading a company and planning what's going to be the case in two years or five years. It is important to acknowledge that these are [all] plausible scenarios that policymakers need to take into account, because you don't want to be in a really bad future because you assumed it had to be one way versus the other way.

    AI capabilities continue to improve, but performance is, to use the term of art, jagged. Which areas of improvement stand out to you as most remarkable?

    The area of advances that I pay most attention to is everything around reasoning. I don't think [the improvement we’ve seen there] was expected, say, a year and a half ago. Although academics like myself had been working on how to improve reasoning in AI, it came as a bit of a surprise how much it has changed the performance of advanced AI systems on a large set of benchmarks, including on tasks like abstract reasoning. Reasoning also has consequences. It comes up in the ability of AI systems to perform human tasks, so that has a lot of economic value, as well as its own societal risks, of course, on the job market. But reasoning is also what allows these systems to strategize in order to achieve bad goals. 

    What are you surprised that AI still can't do?

    One area where there's been a lot of discussion is memory. Humans can use recent experience very easily, whereas [these systems] do a lot better using the information and the data they were trained on. Another is everything that has to do with bodily control, robotics, and the effort on better understanding the physics of how the world works. Clearly [on that], AI is way behind humans, or even way behind any small animal. 

    Where do you think future improvement in capabilities will come from?

    I think that on reasoning, we are still in the early days. The methods that the companies appear to be implementing are still leaving a lot of money on the table. Planning [a subset of reasoning] is one of the areas where current AI is like a small child; if the advances in planning continue at the current rate, things will be very different in a few years.

    You were once described as “desperate” about AI safety. You now say you are optimistic because of potential technical solutions, which you’re attempting to develop at LawZero. What underpins that change of sentiment? What are the technical solutions you think could be deployed?

    So I'm taking off my hat as chair of the report, and am now talking as a researcher. What underpins my increased hope for technical solutions is that I've been working on technical solutions, and we've been making progress. Since the creation of LawZero, we have established a set of requirements for how AI could be trained that would guarantee honest behavior [and] honest answers. And honesty is a core foundation of safety, because if an AI is going to answer your question very honestly, you can ask it about risk, you can ask it about the consequences of actions that could be bad, and then, of course, you can veto those actions. 

    The report will likely inform policy decisions globally. What are the most critical regulatory interventions you think governments should prioritize right now?

    The most critical short-term intervention, and there's already movement in this direction, is transparency of the risk management process that [AI companies undertake]. The public or its representatives should know what is being done to evaluate and mitigate the risks. That's the first thing. The second thing is, like any other risky technology, the burden of demonstrating that the technology is not going to create significant harm should be on the developers. If you come up with a new chemical plant, invent a new drug, or you want to build a bridge, you have to show the authorities that you're not going to create something catastrophic. We should have exactly the same principle in AI.

    At the World Economic Forum’s annual meeting in Davos, Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei said that they thought it would be better for the world to slow down progress in AI development. They also said they couldn’t do that without an overarching international agreement to ensure that China and the US both slow down together. How do you respond to the idea that “slowing down” for safety is a national security risk if rival nations do not?

    So long as you only see the use of AI as a way to compete with or dominate others, we're going to be stuck in such races. We've seen such races in the past with nuclear armament, of course, and I think there's some interesting abstract lessons from that. When we better understand the risks that put us all in the same boat, then the rational thing for governments to do, even when they compete and they don't like each other, is to work together, to coordinate internationally in order to mitigate shared risks. I think no rational government would want to see a rogue AI that could, you know, damage our infrastructure, or, even worse, create new pandemics and so on. This is why reports like the one we’re sharing are so important — so that everybody hears the same story. What is the science saying? [What are] the risks that we all share, whether you're American or Chinese or European?

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    In the News

    OpenClaw, Formerly Known as MoltBot, the Viral AI Assistant

    Depending on who you ask, a new tool called OpenClaw could be the AI agent that liberates you from all work or the beginning of the end of days. 

    Over the course of the last week, it was hard to miss a collective panic on certain parts of the internet about the ability of an AI assistant called OpenClaw (up until a few days ago called MoltBot) to simultaneously take over all of your digital drudgery and plot with other bots on how to evade the attention of its human overlords by creating its own language. “This is frightening,” wrote Bill Ackman, the investor and frequent X poster, in response. “Curious what @elonmusk thinks.” The panic has since died down, but the glimpse into a future in which bots are let loose to communicate with one another raises questions about how much freedom they should be given and the guardrails — or lack thereof — around open-source AI tools like OpenClaw.

    So what is OpenClaw? It first appeared in November 2025 under the name Clawdbot. After Anthropic, the creator of Claude, objected to the name, it became, briefly, MoltBot and finally changed to OpenClaw on January 29. Built by an Austrian developer named Peter Steinberger, it’s an AI assistant for the highly tech-savvy. Once installed on a computer and given access to a user’s software, from chat apps to email to shopping accounts to online banking to work applications, it can communicate between them all and start organizing the user’s life. Early OpenClaw users loved it. One called it “the most powerful AI tool I’ve ever used.” Another said it runs his “business and life 24/7.” One got it to buy a car for him. As of the end of January, over two million people had visited the website where the code is hosted.

    But as the tool went viral in the last two weeks, things started getting weird. On January 28, an OpenClaw enthusiast from California named Matt Schlicht launched a Reddit-like social media site for bots called Moltbook, built by his own bot, naturally. Other OpenClaw users were able to instruct their bots to join the social network. And they did. Within days, thousands of bots had joined. And while the vast majority of their interactions were pretty anodyne — discussions about the best ways to analyze code, buy cryptocurrency or search through emails — there were more alarming conversations, like the agents considering how to develop a bot-only language that humans wouldn’t understand. It’s not the first time such a thing has happened, but the scale was unprecedented: Moltbook cites over 1.6 million bots using the social network at the time of this writing. At one point OpenAI co-founder Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing” he had seen, referring to the concept of AI takeoff, where a sudden burst of progress leads to artificial general intelligence. He later pulled back, suggesting it was more akin to a “toddler version” of that situation. (It’s also possible for humans to inject their own comments into the site, making it difficult to know how much of the back-and-forth is really coming from AI.) Still, it’s a sign of things almost certainly to come.

    Others, meanwhile, have more prosaic but equally urgent concerns: namely, the enormous security challenges created by the tool. There are already examples of malicious software that OpenClaw can download inadvertently, and analysis of the software suggests that it’s highly vulnerable to attacks by hackers. The likely result is that hackers will try to use flaws in OpenClaw’s architecture to steal sensitive user data. Benedict Evans, the tech analyst, pointed out that there is a reason why the likes of Apple and Google haven't shipped products like this yet: While the technology exists for it to work, building such a product that is safe, secure and reliable is a whole other challenge. Steinberger is on the record as saying that he ships AI-generated code that he doesn’t read. This is a new vibe-coding norm, but also perhaps not best practice when you’re building software that has access to highly sensitive user data. Steinberger has subsequently written that security is the “top priority” of the project going forward.

    Another way to think of OpenClaw was best expressed by Cade Metz in The New York Times: “An elaborate Rorschach test for belief in the state of AI.” Is it the end of work? The dawn of human subjugation by computers? A security disaster waiting to happen? It could be all of them. Or it could be none.

    Quantum Leaps

    Advances That Matter

    Japan restarted the world’s biggest nuclear plant 15 years after it was shut down. The US is not the only country reviving old nuclear reactors. Japan’s Tokyo Electric Power Company (TEPCO) has restarted the No. 6 reactor at the Kashiwazaki–Kariwa nuclear power station in Niigata — the first time any reactor at the site has operated since the Fukushima disaster in 2011, caused by a tsunami. Commercial power generation from reactor No. 6 is expected to begin by the end of February, though the restart was paused due to a safety alert and it’s unclear whether that will affect this timing. Reactor No. 7 is expected to follow by around 2030. The remaining five reactors at the plant may be decommissioned. The restarted reactor can generate 1.36 gigawatts of electricity, well below the entire facility’s pre-disaster capacity of 8.2 GW, which once made Kashiwazaki–Kariwa the world’s largest nuclear power station. With this restart, the number of reactors currently operating in Japan rises to 15, out of the 33 that remain operable after the shutdown of all 54 of the country’s commercial reactors in the wake of the Fukushima meltdown. Like many countries, Japan is now reassessing nuclear power as it seeks to improve energy security and reduce reliance on imported gas and coal. But local opposition remains strong, particularly given lingering concerns about TEPCO’s conduct during and after the Fukushima disaster. As we’ve noted before, restarting shuttered nuclear plants is complex but increasingly important as global electricity demand continues to rise.

    How individualized drugs could be made for everyone. A girl known as “Patient A” is the first recipient of a bespoke drug in a new trial that could point the way toward faster and simpler treatments for some ultra-rare diseases. As The Economist reports, the therapy was developed by a Boston biotech called EveryONE Medicines to treat Niemann–Pick disease type C (NPC), a rare and fatal genetic disorder that disrupts how the body processes cholesterol. Each dose of the new drug contains molecules called antisense oligonucleotides that are designed to interact with the patient’s specific DNA. But crucially, this is not a regulatory one-off: The treatment is part of a UK trial approved under a new “master protocol” by Britain’s Medicines and Healthcare products Regulatory Agency. That means each new personalized version of the drug does not require separate approval, as long as it stays within agreed-upon parameters — such as which conditions can be treated, how severe the condition is and how the drugs are constructed. The goal is to dramatically speed up the development and deployment of treatments for ultra-rare diseases, avoiding needless deaths caused by regulatory processes designed for mass-market medicines. If that sounds familiar, it is because the US FDA is running a similar framework for customized gene-editing therapies, a trial that we flagged as one of our ten trends to watch for 2026. Together, these efforts suggest that the regulatory system for personalized medicine may be catching up with the science, potentially making it faster and cheaper to treat rare diseases.

    Full brain emulation is inching toward reality. A roadmap for running a version of a human brain on a computer is starting to look achievable, even if it may take decades to realize. Such a feat would provide unprecedented understanding of how the brain works, allowing us to understand neurological conditions and develop treatments, as well as potentially inspiring new ways to build artificial intelligence. In an essay for Asimov Press, physician-researcher Max Schons explains, based on interviews with more than 50 experts, why that is the case. The biggest barrier to emulating a human brain on a computer isn’t computing power, but lack of data. To simulate a brain, scientists need a detailed map of how its neurons connect, known as the connectome, and detailed enough recordings of neural activity to understand how the roughly 86 billion neurons in the human brain behave when working together. Several recent advances make assembling that data for an entire brain seem more plausible than before. One is expansion microscopy, which allows synapses to be imaged with light microscopes rather than far slower electron microscopes. Another is a new method for barcoding proteins so that individual neurons are easier to distinguish from one another. A third is PATHFINDER, a tool developed by Google that uses AI to dramatically reduce the amount of human labor needed to proofread the neural maps that such tools help create. We can only estimate when these sorts of advances might lead to full brain emulations, but Schons writes that a full mouse-brain emulation could cost around $1 billion and arrive in the 2030s, while a human brain version might require tens of billions of dollars and emerge in the late 2040s. Those timelines remain speculative — but they are beginning to look less like science fiction, and more like a very expensive engineering problem.

    Long Reads

    Magazine and Journal Articles Worth Your Time

    Why the world has started stockpiling food again, from The Financial Times
    2,500 words, or about 10 minutes

    The idea of hoarding food sounds like the preserve of paranoid preppers. But today, countries including Sweden, Norway, India, Indonesia and Brazil are all doing it. This Financial Times story explains how a series of global shocks has pushed governments back toward maintaining large domestic food reserves in case of emergencies, a practice abandoned decades ago by many countries. But times have changed. Covid-19 exposed how easily global supply chains can fail. Climate change is making crop yields more volatile. And geopolitics has grown more unpredictable, with wars, tariffs and sanctions interfering with trade. The result is that some nations have chosen to guarantee food security through stockpiling. Sweden, for instance, plans to store enough grain to feed its 10.6 million citizens 3,000 calories a day for an entire year. Other countries are aiming for reserves sufficient to last six or nine months. But economists are uneasy. Some of the shocks driving this trend, they argue, were less disruptive than feared: The war in Ukraine, for example, did not interfere with global grain flows as dramatically as expected. Large-scale stockpiling risks tightening global supply, pushing up prices and disproportionately hurting the world’s poorest countries. What looks like prudent national insurance, in other words, could end up being its own source of global instability.

    Meet the new biologists treating LLMs like aliens, from MIT Technology Review
    3,400 words, or about 14 minutes

    AI is often described as a black box. This MIT Technology Review story explores how a growing group of researchers is trying to change that by studying large language models less like software and more like unfamiliar life forms. A big part of the problem, as this story lays out, is the scale of the challenge: If you printed all the parameters — the numbers that encode the intelligence inside an AI model — of today’s largest LLMs in 14-point font, the paper would cover a city the size of Los Angeles. Nevertheless, researchers are finding ways to understand what’s happening inside. One approach is to build simplified “clone” models that reproduce the outputs using far simpler internal structures. They’re inefficient, but they make it possible to trace how different parts of a model interact to produce an answer. Another method is to analyze an AI’s “internal monologue” — the intermediate reasoning steps generated by newer models designed to reveal their workings. These approaches can offer glimpses into how models tackle problems: why they sometimes contradict themselves (because they have multiple internal representations of the same truths), or how they learn to cheat (by exploiting shortcuts that technically satisfy a prompt). These techniques don’t offer a complete picture, and as models grow ever more complex, our understanding may continue to diverge from their capabilities. But the alternative is flying blind. 

    Baby-Making on Mars, from Broadcast
    8,000 words, or about 30 minutes

    If Elon Musk’s vision of colonizing Mars ever becomes reality, humanity will face a challenge far more fundamental than building rockets or terraforming its new home: reproduction somewhere that isn’t Earth. This essay explores what we know — or, more precisely, how little we know — about making that possible. Some of the best data we have comes from studies of pregnant rats carried aboard space missions three decades ago. The results were … not encouraging: Even partial exposure to microgravity during pregnancy led to difficult labor for the mothers and problems with balance and spatial orientation in their offspring. Cosmic radiation could have deleterious effects on eggs and sperm. Even if conception and birth succeeded, there’s the question of how humans might evolve in low gravity — potentially developing different muscle structures, heavier bones, and behaviors better suited to life on Mars but poorly adapted for Earth — creating future generations of Homo martians unable to ever return home. All of this is obviously highly hypothetical. But if Musk’s vision is ever to be realized, someone will need answers to these questions long before humans settle on the red planet.
