In brief
- DeepMind warns AI agent economies may emerge spontaneously and disrupt markets.
- Risks include systemic crashes, monopolization, and widening inequality.
- Researchers urge proactive design: fairness, auctions, and “mission economies.”
Without urgent intervention, we’re on the verge of creating a dystopian future run by invisible, autonomous AI economies that will amplify inequality and systemic risk. That is the stark warning from Google DeepMind researchers in their new paper, “Virtual Agent Economies.”
In the paper, researchers Nenad Tomašev and Matija Franklin argue that we are hurtling towards the creation of a “sandbox economy.” This new economic layer will feature AI agents transacting and coordinating at speeds and scales far beyond human oversight.
“Our current trajectory points toward a spontaneous emergence of a vast and highly permeable AI agent economy, presenting us with opportunities for an unprecedented degree of coordination as well as significant challenges, including systemic economic risk and exacerbated inequality,” they wrote.
The dangers of agentic trading
This is not a far-off, hypothetical future. The dangers are already visible in the world of AI-driven algorithmic trading, where the correlated behavior of trading algorithms can lead to “flash crashes, herding effects, and liquidity dry-ups.”
The speed and interconnectedness of these AI models mean that small market inefficiencies can rapidly spiral into full-blown liquidity crises, demonstrating the very systemic risks that the DeepMind researchers are cautioning against.
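To see why correlation itself is the culprit, consider a toy simulation of our own (nothing like it appears in the paper): a hundred trading agents react to the same one-point dip. When every agent runs the identical momentum rule, the herd sells in lockstep and the price collapses; when half the population trades against the trend, the same shock is absorbed.

```python
# Toy illustration (ours, not the paper's) of herding among trading agents:
# identical momentum rules turn one small dip into a synchronized sell-off.

def order(signal: float) -> int:
    """Map an agent's signal to an order: buy (+1), sell (-1), or hold (0)."""
    if signal > 0:
        return 1
    if signal < 0:
        return -1
    return 0

def simulate(n_agents: int, herding: bool, steps: int = 50) -> float:
    history = [100.0, 99.0]  # a small initial shock: the price dips by 1
    for _ in range(steps):
        momentum = history[-1] - history[-2]
        net_orders = 0
        for i in range(n_agents):
            if herding:
                signal = momentum  # every agent follows the identical trend rule
            else:
                # Heterogeneous agents: half trade against the trend, damping moves.
                signal = -momentum if i % 2 == 0 else momentum
            net_orders += order(signal)
        # Price impact proportional to net order flow, floored at zero.
        history.append(max(history[-1] + 0.05 * net_orders, 0.0))
    return history[-1]

print(f"correlated agents, final price: {simulate(100, herding=True):.1f}")   # 0.0
print(f"diverse agents, final price:    {simulate(100, herding=False):.1f}")  # 99.0
```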
Tomašev and Franklin frame the coming era of agent economies along two critical axes: their origin (intentionally designed vs. spontaneously emerging) and their permeability (isolated from or deeply intertwined with the human economy). The paper lays out a clear and present danger: if a highly permeable economy is allowed to simply emerge without deliberate design, human welfare will be the casualty.
The consequences could manifest in already visible forms, like unequal access to powerful AI, or in more sinister ways, such as resource monopolization, opaque algorithmic bargaining, and catastrophic market failures that remain invisible until it is too late.
A “permeable” agent economy is one that is deeply connected to the human economy—money, data, and decisions flow freely between the two. Human users might directly benefit (or lose) from agent transactions: think AI assistants buying goods, trading energy credits, negotiating salaries, or managing investments in real markets. Permeability means what happens in the agent economy spills over into human life—potentially for good (efficiency, coordination) or bad (crashes, inequality, monopolies).
By contrast, an “impermeable” economy is walled off: agents can interact with each other but not directly with the human economy. You could observe it, and even run experiments inside it, without risking human wealth or infrastructure. Think of it like a sandboxed simulation: safe to study, safe to fail.
That’s why the authors argue for steering early: We can intentionally build agent economies with some degree of impermeability, at least until we trust the rules, incentives, and safety systems. Once the walls come down, it’s much harder to contain cascading effects.
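To make the distinction concrete, here is a deliberately crude model of our own devising (the paper offers no such code): permeability is a dial between 0 and 1 controlling how much of each agent-economy gain or loss spills into human wealth.

```python
# Toy model (ours, not from the paper) treating permeability as a dial:
# the fraction of each agent-economy gain or loss that reaches human wealth.

from dataclasses import dataclass
import random

@dataclass
class Economy:
    human_wealth: float
    agent_wealth: float
    permeability: float  # 0.0 = fully sandboxed, 1.0 = fully open

    def step(self) -> None:
        shock = random.gauss(0.0, 10.0)  # a random agent-economy gain or loss
        self.agent_wealth += shock
        # Only the permeable fraction of the shock spills into human wealth.
        self.human_wealth += self.permeability * shock

random.seed(7)
sandboxed = Economy(human_wealth=1000.0, agent_wealth=100.0, permeability=0.0)
open_econ = Economy(human_wealth=1000.0, agent_wealth=100.0, permeability=1.0)
for _ in range(1000):
    sandboxed.step()
    open_econ.step()

print(f"sandboxed human wealth: {sandboxed.human_wealth:.1f}")  # unchanged: 1000.0
print(f"open human wealth:      {open_econ.human_wealth:.1f}")  # absorbed every shock
```

At 0.0, agents can boom and bust without a cent of human exposure; at 1.0, humans absorb every shock in full.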
The time to act is now. The rise of AI agents is already driving a transition from a “task-based economy” to a “decision-based economy,” in which agents are not just performing tasks but making autonomous economic choices. Businesses are increasingly adopting an “Agent-as-a-Service” model, in which AI agents are offered as cloud-based services with tiered pricing, or are used to match users with relevant businesses, earning commissions on bookings.
While this creates new revenue streams, it also presents significant risks, including platform dependence and the potential for a few powerful platforms to dominate the market, further entrenching inequality.
Just today, Google launched a payments protocol designed for AI agents, supported by crypto heavyweights like Coinbase and the Ethereum Foundation, along with traditional payments giants like PayPal and American Express.
A possible solution: Alignment
The authors offer a blueprint for intervention: a proactive sandbox approach to designing these new economies with built-in mechanisms for fairness, distributive justice, and mission-oriented coordination.
One proposal is to level the playing field by granting each user’s AI agent an equal initial endowment of “virtual agent currency,” preventing those with more computing power or data from gaining an immediate, unearned advantage.
“If each user were to be granted the same initial amount of the virtual agent currency, that would provide their respective AI agent representatives with equal purchasing and negotiating power,” the researchers wrote.
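In code, the idea is almost disarmingly simple. Here is a minimal sketch (hypothetical names and amounts; the paper prescribes no implementation):

```python
# Sketch of the equal-endowment idea (hypothetical names, not the paper's code):
# every user's agent opens with the identical balance of virtual agent currency,
# no matter how much compute or data its owner brings to the table.

INITIAL_ENDOWMENT = 100.0  # assumed unit of "virtual agent currency"

def open_accounts(user_ids: list[str]) -> dict[str, float]:
    """Grant every registered user's agent the same starting balance."""
    return {user_id: INITIAL_ENDOWMENT for user_id in user_ids}

ledger = open_accounts(["alice", "bob", "carol"])
assert len(set(ledger.values())) == 1  # equal purchasing and negotiating power
```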
They also detail how principles of distributive justice, inspired by philosopher Ronald Dworkin, could be used to create auction mechanisms for fairly allocating scarce resources. Furthermore, they envision “mission economies” that could orient swarms of agents toward collective, human-centered goals rather than just blind profit or efficiency.
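As an illustrative stand-in (not necessarily the mechanism the paper has in mind), a sealed-bid second-price, or Vickrey, auction shows how a single scarce resource could be allocated among agents bidding from equal endowments: the winner pays the runner-up’s bid, which makes bidding one’s true valuation the dominant strategy.

```python
# A sealed-bid second-price (Vickrey) auction for a single scarce resource.
# An illustrative stand-in, not the paper's specific Dworkin-inspired mechanism.

def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winner, price_paid) for a single-item sealed-bid auction."""
    if len(bids) < 2:
        raise ValueError("a second-price auction needs at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the winner pays the runner-up's bid
    return winner, price

winner, price = vickrey_auction({"agent_a": 40.0, "agent_b": 55.0, "agent_c": 30.0})
print(winner, price)  # agent_b wins and pays 40.0, the second-highest bid
```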
The DeepMind researchers are not naive about the immense challenges. They stress the fragility of ensuring trust, safety, and accountability in these complex, autonomous systems. Open questions loom across technical, legal, and socio-political domains, including hybrid human-AI interactions, legal liability for agent actions, and verifying agent behavior.
That’s why they insist that the “proactive design of steerable agent markets” is non-negotiable if this profound technological shift is to “align with humanity’s long-term collective flourishing.”
The message from DeepMind is unequivocal: We are at a fork in the road. We can either be the architects of AI economies built on fairness and human values, or we can be passive spectators to the birth of a system where advantage compounds invisibly, risk becomes systemic, and inequality is hardcoded into the very infrastructure of our future.