Newsletter / Issue No. 68

Photo from Sinan Aral, interpreted by ChatGPT.


Thu 23 Apr, 2026


Dear Aventine Readers, 

This week we speak with MIT professor Sinan Aral about his new research on AI’s effects on companies, employees and productivity. Among his findings: AI will boost productivity for some workers while limiting it for others, and different AI personality types can make employees either more or less productive. In short: a one-size-fits-all approach to AI adoption could end up hurting the bottom line. Read on to find out what companies and employees can do to maximize the benefits of AI and minimize risk. (Also, for those of you who haven’t been with Aventine for the long haul, we interviewed Aral for our podcast almost five years ago about The Hype Machine, his book about the pervasive effects of social media on individuals and society.)

Also in this issue: 

  • Autoimmune therapies could help treat some mental illnesses. 
  • Meta is developing an AI version of Mark Zuckerberg. 
  • And, speaking of AI and jobs, there’s a $10 billion startup training AI to replace white-collar workers! 
Sincerely, 

Danielle Mattoon 
Executive Director, Aventine


    Questions For

    Sinan Aral on How Companies Could Get AI Adoption Wrong

    Sinan Aral spent more than a decade studying the mechanisms and effects of what may be the most culturally influential technology of the early 2000s: social networks. If you are aware that lies spread faster and more broadly than the truth on Twitter, it is thanks to Aral and his colleagues, who — back in 2006 — began tracking the spread of over a hundred thousand online rumors. 

This line of investigation, which helped reveal how social platforms amplify certain kinds of content through their design and business models, culminated in his 2020 book, The Hype Machine. Through large, randomized real-world trials on social networks, his team’s research helped academics understand the flow of misinformation, helped businesses predict consumer behavior and gave policymakers a framework for understanding how to regulate the companies behind the systems. 

    Now he’s using similar tools and strategies to understand the impact of AI, which he describes as “probably the most important technological revolution in human history.” Already, his work is revealing the specific ways that AI can boost long-term productivity for some workers while stifling it for others, how the personality of an AI can hinder or enhance an employee’s performance and how collaborations between humans and AI lead to better creative outcomes than AI alone. 

    Aventine spoke with Aral about his new research and how it could be used to shape the way artificial intelligence systems are designed, deployed and governed. 

    What follows has been edited for clarity and length.

    You spent years studying how social networks shape human behavior. Now you’re studying AI. Do you see this as another important technology platform or something categorically different? 

    AI is an evolution of decades of research and computer science. That said, today's frontier models are a fundamental shift along that trajectory. What we know today as AI — whether it's [Anthropic's] Claude or OpenAI’s [ChatGPT] — is probably the most important technological revolution in human history. I don't say that lightly. I think it's along the lines of the Industrial Revolution, the Agrarian Revolution, and I think potentially even more impactful than those two in terms of its sociotechnical and socioeconomic effects. 

    There are so many ways humans are interacting with AI. How do you decide what to zero in on? 

We're very interested in business and economic impacts: the labor market, how businesses are going to change, sociocognitive impacts, how it's going to affect skills, how it's going to affect the way humans think. One really important message that I'm trying to tell everybody is that we face a fork in the road. We have clear evidence that humans plus AI is better than humans alone. We also have pretty significant evidence that more than half the time humans plus AI are beaten by AI alone. That is not a good place to be. When Dario Amodei says 50 percent of all entry-level jobs will disappear in the next two to five years, it's because we don't have a science of human-AI collaboration that is going to lift the marginal productivity and performance effects of humans plus AI above AI alone. That is what I would say is the mission of my lab right now. It's something I call integrated intelligence: How do we tune this machine of human-AI synergy to become better and better, such that it's better than just adopting AI and leaving humans to have universal basic income?

    In your most recent paper, “The Augmentation Trap,” you argue that AI can enhance skills when it complements expertise and erode skills when it substitutes for it. How does this play out? 

Substitution is when you essentially tell AI to do the task, and then, generally speaking, just have that be the output. The substitutional way to use it is to say, “Hey, AI, write this thing for me.” It's cognitive offloading and it's skill eroding, because I don't get the benefit of practicing coding or writing in producing that output. A complementary way to use it is: I'm doing the writing, and AI is helping me write as sort of an assistant. If I'm constantly in the process of coding, checking the AI's code, checking my own code, thinking [for] myself about how things should fit together, then I'm actually augmenting my skills. I am learning from the AI how to be better at doing the thing rather than having the AI do the thing. That is the difference. 

    And how do they affect worker skills differently?

This is related to the stock of expertise that exists in experienced versus less-skilled workers. Experienced workers get more benefit from complementarity: They are more likely to build skills by using AI, and they are less likely to substitute AI for their own judgment, because the productivity benefit of doing that is less for them than it is for the lower-skilled workers. When you substitute AI for the low-skilled workers, you get this output that looks like high-skilled output, but the low-skilled workers didn't earn the cognitive benefit of doing [the work].

    You wrote your latest paper with one of your PhD candidates. Did the two of you notice the dynamics you were studying play out as you did the research? 

Yes. I do notice that PhD candidates now are a lot more productive than they used to be. They are able to explore areas very quickly, because AI can be an assistant to so many aspects of research, from understanding where gaps are in the literature, to thinking through hypotheses and ideas, all the way through to [writing code]. I am very careful to advise them that we should eat our own dog food in terms of the results that we're seeing on skills erosion, because they are at a moment where they are building their skills. When I was going through the PhD program at MIT, I remember thinking to myself, “Wow, I am building something valuable right now that is going to be very hard for someone else to build.” I'm urging them to not miss the opportunity to build those skills while they get the benefit of the complementarity of AI.

    We’ve heard a lot of employers say that they’re eager to hire young people who are AI natives. But your research shows that once these young employees are part of an institution, AI could inhibit their career development. Are employers aware of this? Is this a solvable problem? 

    I think employers are generally focused on short-term goals, because the market requires them to do that. They're focused on the stock price and providing results to their board, and they can do that by taking a short-term view of AI [to boost productivity]. That's rational for the firm. It’s also rational for the worker because they can be more productive than the next person if they use AI. Well, that's all true, [but to avoid skill erosion] what we say in the paper is that both for the worker and for the firm, it's important to make goals less focused on short-term evaluation and build in long-term evaluation; to have a measurement of skills so that you know where you're at if you're a worker, or where your employees are at if you're a firm; and to use AI in ways that complement skills, rather than substitute skills. So for example, doing a task periodically without AI, so that you can maintain your skills as well as get the productivity benefit of AI, and so you can measure whether your skills are increasing or eroding as you use AI.

    Tell us about the study you did comparing the productivity and creativity of human teams versus humans using AI. 

    That experiment is the largest randomized, controlled experiment with fully multimodal and fully frontier AI agents that exists in any paper that I have seen. And the great thing about it, in my mind, is that not only did we have thousands of participants that made tens of thousands of ads, but we measured these ads and their quality by having humans rate them on text quality, image quality and likelihood of click-through. Then we ran the ads and we got back click-through rates, view-through rates, cost per click, all the performance of the ads. This is a real task for a real organization, a real thing that people would use AI for in marketing. 

It seems like you discovered a trade-off. Humans plus AI produced ads that performed better on some metrics, but they were less creative than the work created by humans alone.

We found a couple of things. We found, number one, that there was this jagged frontier: AI plus humans improve the text but perform worse than human-only teams on images. We also found that humans who interact with AI in a more task-oriented way perform better than humans who collaborate with AI the way humans typically collaborate with each other, which is more interpersonal and focused on social cohesion. [And, yes,] we find very clearly this result of diversity collapse, meaning when humans work with AI, it creates a homogenization of the creativity and innovation of the ads themselves, which is a big red flag in terms of how managers should think about human-AI collaboration [because] creativity and innovation [are] a source of product differentiation and a source of growth. 

    What can leaders do to encourage people to avoid that kind of homogenization?

I think that managers can take some of the lessons from “The Augmentation Trap” paper, about workers performing tasks without AI to maintain skills. This [process of working out how to make the best use of technology] is not new. Recall the invention of the electric dynamo and the replacement of the steam engine: at first, you didn't see productivity and performance benefits. In fact, it created disruption. But once we started learning how to reorganize our business processes and use the technology better, then we saw a lift in productivity and performance and innovation and creativity. We're at that moment in the age of AI, and so we need to go through the challenging work of figuring out how to do it well.

    Anyone who uses AI a lot has noticed how jarring the shifts in different model personalities can be. But it’s more than simply jarring. Your work suggests that an AI’s interaction style can materially affect performance, depending on the user. Can you give us some examples of what you found? 

It doesn't seem immediately obvious when you think about it as an AI. But when you think about it from the perspective of the science of human teaming, you are not surprised to say, “Wow, the personality fit of my team makes a huge difference in the performance.” It goes all the way back to Apollo teams from the 1960s, where you found that if you just put the smartest people on a team, it doesn't perform as well, and in fact, it can detract. 

[Similarly] different types of people need different types of AI to create the best collaborative outcome. This is not specifically taken from the paper, but to give you an example, there may be someone who thinks they're amazing, very self-confident, but they're not as good as they think they are at the task. In that case, you would want an AI personality that is willing to push back in order to optimize the outcome. Take another person who is less self-confident, more self-conscious, but great at the task; that AI may want to encourage them to trust themselves more than they normally do, in order to maximize the outcome. What we find [in the paper is that] extroverted people work a lot worse with conscientious AI [and] conscientious people work a lot worse with agreeable AI. So fitting the personality of the AI to the person improves the quality of the output.

    One theme across several of your papers is that AI doesn’t affect everyone in the same way: Some workers build skill, others lose it; some personality pairings work, others don’t. Is that a technical task that model developers can address, or is it a cultural task that businesses and leaders need to tackle?

I have to mention that we have started a company called Pairium AI that does this. [You have to do it] from both sides. You need a system that fits itself to the person well, and we're developing that technology at Pairium based on the research that we've been doing. And you need to train people to fit themselves well to the AI. There are aspects of this integrated intelligence science that are about training people to learn how to use their specific models in ways that are most productive: the way you're task-oriented versus social and emotional in your communication, the degree to which you delegate appropriately. We believe that [the big AI companies like Anthropic and OpenAI] are still focused on the old Silicon Valley business model of engagement: They are building models that get you to continue using the models. That's why we had a sycophancy problem. So it's going to take effort to get that personalization correct.

    Your research surfaces cases in which the people making decisions about AI deployment — managers, say, or companies — do not bear the full long-term costs of those decisions. In the meantime, workers lose skills and consumers can end up trusting bad information. Is there a way to change that?

Look, I think that we were at a disadvantage [with previous technological shifts]; we didn't have the benefit of hindsight from the last 20 years of digital technology development. We should have learned a lot since about 1995 and the invention of the internet, and a lot of questions are similar to the ones that we'll be asking [about AI]. Now, they're not identical questions, but we should be able to transfer some of that knowledge into the age of AI, and we don't have the same excuse now if we don't do that. So at MIT, and in my lab, at Pairium and so on, we are trying to bring to bear all of that knowledge [and ask]: How can we get ahead of this wave and solve some of the issues before they become big problems? I think that incentives matter a lot, from the perspective of the person using AI, the employee using AI in the firm, the manager and the owner of the business, the government and so on. 

To some extent we’ve grappled with these sorts of changes before — around the Industrial Revolution, as you point out. Is AI different from previous technological revolutions, or is this just the latest version of the same story?

Most of the revolutions of the past happened at a pace that we were better able to adapt to. This one is happening at a pace that we've never seen before. And the question is, will our social, sociotechnical, sociocognitive and economic systems be able to adapt fast enough to technological advances that are arriving much faster than before? I think speed is a big, big issue there. I also think that this technology is different. You hear, “This technology isn't developed, it's grown.” It grows through training on data, through reinforcement learning and so on. That uncertainty [in development] combined with the speed is what makes it challenging, I think, and different from prior revolutions.

    If you step back from all of these projects, what do you think is the central thing AI is changing in society?

    I think it will change the way we as a species think and decide. That encompasses a lot: how we get information, how we sort through information, how quickly and at what scale we can evaluate information, how we then act on all of that processing, both alone and in combination with the technology. This is the first time that we have really found ourselves integrating with a new intelligence that affects all of those aspects of thinking and acting.

    Across your career, you’ve studied systems that shape human behavior at a huge scale — social networks, information markets, platforms and now AI. What have you come to believe about how AI can help humanity, and when it can weaken it?

First of all, it is extraordinarily augmenting, in the sense that we can potentially solve diseases, discover new drugs, figure out how to massively amplify our cognitive abilities and so on. The way we can steer toward the good rather than the deleterious effects is essentially by maintaining the alignment of the development and use of this technology with human values, with human goals, with human objectives. And I think that is extraordinarily important. We are still at the beginning of figuring out how to do that at scale.

    How hopeful are you that we can steer it in the right direction?

    I am ultimately a very hopeful person. I do believe that this revolution will be a positive revolution for humanity. We will need to, and therefore we will, succeed at doing the things that we need to do to make this, on balance, a massively productive, innovative and welfare-creating revolution, rather than a welfare-destroying revolution.

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Quantum Leaps

    Advances That Matter

NASA is going nuclear. The revival of nuclear power on Earth looks set to extend beyond the planet, with new plans to use fission reactors to power space exploration. Ahead of the Artemis II mission, NASA administrator Jared Isaacman announced that a spacecraft using a nuclear system — Space Reactor-1 Freedom (SR-1) — could head to Mars before the end of 2028. Soon after, Wired reports, the White House Office of Science and Technology Policy outlined broader plans for space-based nuclear power, including collaboration between NASA, the Department of Defense and the Department of Energy to build a reactor for use on the moon’s surface by 2030. There are some major selling points. Nuclear fuel is incredibly energy-dense, making it far more efficient than alternatives. It can provide continuous power in environments where solar energy falters, such as the long lunar night. And nuclear-powered propulsion systems, which use reactor-generated power to ionize and accelerate gases that provide thrust, could produce faster spacecraft, reducing mission duration and strain on astronauts. MIT Technology Review points out that there is precedent. Both the US and the Soviet Union experimented with nuclear-powered space systems in the 1960s. And while the idea of launching nuclear power raises safety concerns, the fuel itself is relatively stable until the reactor is activated. Current plans would only initiate fission once safely in orbit. The biggest question mark is timing. NASA aims to begin hardware development this summer, with initial orbital demonstrations targeted for October 2028 — an “aggressive” timeline according to experts speaking to MIT Technology Review. 

    Autoimmune therapies could help treat some mental illnesses. A growing body of research suggests that the immune system may play a larger role in psychiatric conditions than previously thought, raising the possibility that some forms of mental illness could be treated with immunological drugs rather than traditional psychiatric ones. Back in the late 2000s researchers discovered a disorder called autoimmune encephalitis, which occurs when antibodies attack receptors in the brain, triggering symptoms such as delusions and hallucinations that can be indistinguishable from schizophrenia. By treating the underlying immune response it’s possible to reverse those symptoms. Now, New Scientist reports, researchers are exploring whether similar mechanisms might underpin other conditions such as OCD, PTSD, depression and even dementia. Teams at King’s College London and Charité-Universitätsmedizin Berlin are working to map how other antibodies may contribute to psychiatric symptoms, with some early success. Meanwhile, broader screening efforts for autoimmune encephalitis are underway, which could help establish how widespread immune-driven psychiatric conditions really are. No one expects this to explain most mental illness: Researchers believe only a small fraction of cases have autoimmune roots. But for some patients, particularly those who do not respond to conventional treatments, the implications could be life-changing.

Meta is developing an AI version of Mark Zuckerberg. According to The Financial Times, Meta is building photorealistic, AI-powered 3D avatars for users, with an initial focus on one that replicates Mark Zuckerberg himself. The idea is to create a kind of always-available proxy for the CEO. The system is reportedly being trained on Zuckerberg’s mannerisms, tone, public statements and thinking about company strategy. In theory, it would allow employees to test ideas, decisions or proposals against a model of how Zuckerberg might respond without needing to actually speak to him. Zuckerberg is said to be personally involved in the project. Earlier this year, The Wall Street Journal reported that he was also involved with a project to build a personal AI agent, designed to surface information instantly and reduce the need to rely on human intermediaries. It’s tempting to dismiss all of this as more tech narcissism. But these efforts foreshadow products that we may all use one day to help us make decisions and share our perspectives more broadly. For now, Zuckerberg just happens to have a large team of engineers at his disposal.

    Long Reads

    Magazine and Journal Articles Worth Your Time

    The $10 Billion Startup Training AI to Replace the White-Collar Workforce, from Bloomberg Businessweek
    4,700 words, or about 19 minutes

    The latest issue of Bloomberg Businessweek — with the coverline “Young, Educated, Jobless” — is stuffed with stories about how AI is reshaping white-collar work. The best is this one, about Mercor, a fast-growing startup that recruits large numbers of skilled professionals to help teach AI how to do their jobs. Doctors, lawyers, bankers, consultants, software developers, journalists — they all log onto Mercor’s platform, often after their workday ends, to write prompts, evaluate outputs and refine model behavior for companies like Anthropic and OpenAI. Pay ranges from minimum wage to as much as $300 per hour. And for a sense of the scale of its operations, Mercor says it is paying out more than $2 million a day to tens of thousands of contractors. Founded by three entrepreneurs in their early 20s, none with conventional corporate experience, Mercor has raised nearly $500 million and was valued at $10 billion as of October. It also claims to have been profitable from the start. But there are issues. Contractors complain about chaotic project management and abrupt cancellations. The company faces multiple class-action lawsuits over worker classification. And a recent data breach was serious enough that Meta paused its relationship with the company. More existential questions hang over it, too. Perhaps the biggest is whether the endeavor will bear the fruit it promises. If it does, these contractors are contributing to the automation of jobs in their own professions. Mercor’s pitch — echoed by some workers — is that AI will augment human expertise, allowing people to focus on higher-value tasks and stay relevant in a changing labor market. As Mercor begins to partner with corporations to deploy AI agents, we may find out if that holds up.

    Jammed phone lines. Burned-out dispatchers. Can AI ease a strained 911 system? from Be Giant
    5,500 words, or about 22 minutes

    Most calls to 911 aren’t genuine emergencies. The result is thousands of systems under strain, in which callers often wait minutes before reaching a dispatcher. A Canadian startup, Hyper, thinks it has a solution. Its AI agent, Betty, is designed to listen to callers and decide how to route them. For now, it’s being deployed on nonemergency lines such as 311, where it can handle routine requests and free up human dispatchers to focus on more important calls. But the longer-term ambition is for Betty to take on emergency calls. On paper, the technology sounds impressive. It can translate conversations in real time across more than 20 languages, distinguish between similar-sounding but very different scenarios (“fire” versus “fireworks”) and even identify and interpret coded language used by people in distress, such as victims of abuse. The problem is what happens when it gets things wrong. Hyper has been reluctant to share details about failure rates, though early deployments have revealed glitches: offering to dispatch officers when none were needed, for instance, or asking a single question over and over. Other concerns include potential bias in how calls are interpreted, how to manage privacy around highly sensitive conversations and who assumes responsibility if Betty mishandles a life-or-death situation. There is also a question of whether, in moments of crisis, many callers may simply prefer a human on the other end of the line. The idea of using AI to help triage overloaded emergency systems seems like a no-brainer. But the question of how far into the process its tentacles should reach will be harder to answer.

    Inside Chernobyl, 40 years after nuclear disaster, from New Scientist
    4,300 words, or about 17 minutes

On April 26, 1986, reactor No. 4 at the Chernobyl Nuclear Power Plant exploded, reshaping the future of nuclear energy. Estimates of the long-term death toll range from 4,000 to many tens of thousands. Four decades on, this New Scientist visit to the site offers a sobering look at what remains. Roughly two-thirds of the exclusion zone is now considered technically safe, but the site is far from stable; the war in Ukraine has set back cleanup efforts significantly. Russian troops reportedly dug trenches in contaminated soil, looted laboratories and destroyed valuable scientific data. A drone strike has also damaged the €1.5 billion New Safe Confinement — the structure that now covers the destroyed reactor — sparking a weekslong fire and damaging infrastructure built to help with decommissioning. Yet life persists on the site. The exclusion zone is now technically a conservation area, home to wildlife that includes wolves, bears and lynx. In many ways, it’s a uniquely valuable scientific site: a place to study how radiation affects animals, waterways and landscapes over long periods, and to test new monitoring and safety technologies in extreme conditions. The most unsettling part of this story is that some experts project a nuclear disaster every 25 years or so, which makes the study of Chernobyl less about understanding history and more about preparing for the future.
