How artificial intelligence can reduce selfish behavior and reshape society

Making AI “always cooperate” did little. Letting people control it backfired. But AI that mimics humans changed the game.

Written By: Shy Cohen
Edited By: Joseph Shavit

A Michigan State study tests AI in a classic cooperation game, and finds mimicry can push groups toward cooperation. (CREDIT: AI-generated image / The Brighter Side of News)

Forcing an AI system to “play nice” does not automatically make people cooperate. In one set of simulations, it barely moved the needle. In another, it backfired.

That’s the core tension in a new study from Michigan State University that uses a classic cooperation test, the Public Goods game, to ask a modern question: what happens when artificial intelligence joins the group? The work was led by MSU professor Christoph Adami, Department of Microbiology, Genetics, & Immunology.

“Cooperation is everywhere in nature,” Adami said. “But the mathematics of how cooperation can persist is not easy to understand.”

When being good gets punished

The study sits inside a long-running problem that economists and ecologists love to argue about. It is often called the “tragedy of the commons,” a situation where shared resources get drained because each individual can gain by taking more than they give.

MSU professor Christoph Adami, Department of Microbiology, Genetics, & Immunology. (CREDIT: Michigan State University)

“Being a good citizen is more costly than being a leech,” Adami said. He added that his team has studied the issue for more than 15 years, searching for ways to “lower the barrier for cooperative behavior in order to convert a selfish society into a cooperative one.”

To explore that barrier, the researchers used the Public Goods game, a group version of a social dilemma. Each player can cooperate by paying a cost to contribute to a common pool, or defect by keeping that cost. The pool gets multiplied by a “synergy” factor and then split evenly among everyone, whether they contributed or not.

So defectors can come out ahead inside any mixed group. Yet if everyone cooperates, the whole group does better than a group full of defectors. That mismatch is the dilemma.
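The payoff structure described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code; the function and parameter names are ours.

```python
def public_goods_payoff(contributes, others_contributing, group_size,
                        synergy, cost=1.0):
    """Payoff for one player in a single Public Goods round.

    Each cooperator pays `cost` into a common pool; the pool is
    multiplied by `synergy` and split evenly among all group members,
    contributors and non-contributors alike.
    """
    pool = cost * (others_contributing + (1 if contributes else 0))
    share = synergy * pool / group_size
    return share - (cost if contributes else 0.0)

# In a group of 5 with synergy 3, a defector among 4 cooperators
# earns more (2.4) than a cooperator in the same group (2.0) ...
defector_payoff = public_goods_payoff(False, 4, 5, 3.0)    # 2.4
cooperator_payoff = public_goods_payoff(True, 4, 5, 3.0)   # 2.0
# ... yet an all-cooperator group (2.0 each) beats all-defectors (0.0 each).
all_defect_payoff = public_goods_payoff(False, 0, 5, 3.0)  # 0.0
```

Running the numbers makes the dilemma concrete: defecting beats cooperating inside any given group, but a group of cooperators outperforms a group of defectors.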

Bringing AI into the room

Researchers have proposed many ways to support cooperation in simpler, two-player games, including punishment, reciprocity, and institutional incentives. But cooperation in groups proves harder to start and harder to keep. The Michigan State team argued that widespread AI could change the social landscape, since AI agents can be programmed and regulated in ways humans cannot.

In their model, a population of human “players,” each with a probability of cooperating, evolves over thousands of generations through selection and mutation. Payoffs come from repeated Public Goods games in small groups. The authors ran 100 independent experiments for each condition, then examined behavior after 10,000 generations.

AI agents enter as replacements for some neighbors around a focal human player. Those agents do not evolve in the model. They follow a policy set by the experiment.
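The evolutionary loop can be sketched roughly as follows. This is a minimal illustration under assumed details (fitness-proportional selection, Gaussian mutation, random group assignment); the paper's actual implementation may differ.

```python
import random

def evolve(pop_size=100, generations=10_000, group_size=5,
           synergy=3.0, mut_sd=0.01, seed=0):
    """Minimal sketch of the model: each player carries a probability
    of cooperating; payoffs from Public Goods rounds in random groups
    drive fitness-proportional reproduction, with small mutations on
    the cooperation probability. Returns the mean final probability."""
    rng = random.Random(seed)
    probs = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        payoffs = [0.0] * pop_size
        order = list(range(pop_size))
        rng.shuffle(order)
        for g in range(0, pop_size, group_size):
            group = order[g:g + group_size]
            acts = [rng.random() < probs[i] for i in group]
            share = synergy * sum(acts) / len(group)
            for i, cooperated in zip(group, acts):
                payoffs[i] = share - (1.0 if cooperated else 0.0)
        # Fitness-proportional selection (shifted so weights are positive),
        # then mutate each offspring's cooperation probability slightly.
        floor = min(payoffs)
        weights = [p - floor + 1e-6 for p in payoffs]
        parents = rng.choices(range(pop_size), weights=weights, k=pop_size)
        probs = [min(1.0, max(0.0, probs[i] + rng.gauss(0, mut_sd)))
                 for i in parents]
    return sum(probs) / pop_size
```

In such a setup, AI agents would simply replace some group members with actions fixed by policy rather than evolved probabilities.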

The team tested three policy scenarios.

Illustration of the Public Goods game. (CREDIT: npj Complexity)

Policy 1: Mandate the AI to cooperate

The first idea sounds like the simplest policy pitch. If AI always cooperates, maybe it will “seed” cooperation and nudge people toward helping too.

The simulations did not support that hope. Mandating cooperation increased the amount of cooperative behavior only in proportion to how many AI agents were present. The human players themselves did not become more cooperative.

The paper’s reasoning for this is blunt: in this game, changing the number of cooperators nearby does not change the underlying dilemma’s boundaries.
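A one-line calculation shows why. Because the pool is split evenly, the focal player's gain from cooperating rather than defecting does not depend on how many neighbors cooperate (function name is ours, for illustration):

```python
def marginal_gain(group_size, synergy, cost=1.0):
    """The focal player's payoff change from switching to cooperation,
    holding all neighbors fixed: synergy * cost / N - cost. It is the
    same no matter how many neighbors (human or AI) cooperate, so
    mandated AI cooperators cannot change a human's best response."""
    return synergy * cost / group_size - cost

gain = marginal_gain(5, 3.0)  # -0.4: defection stays dominant while synergy < group size
```

Whenever the synergy factor is below the group size, this quantity is negative, so defection remains the self-interested choice regardless of how many unconditional AI cooperators surround the player.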

Policy 2: Let people control the AI

The second scenario is closer to real life. Here, governments set no strict rules, and whoever controls an AI system can tune how it behaves. In the model, each player evolves not only their own willingness to cooperate, but also a separate probability that determines how cooperative the nearby AI agents will be.

This created a new opportunity, and humans took it.

Illustration of AI Agent density in the neighborhood of the central player. (CREDIT: npj Complexity)

Players evolved to program the AI agents to cooperate while the players themselves defected. They “offloaded” the costs of being cooperative onto the agents and still collected an equal share of the group benefits. The authors note that this resembles a familiar worry about AI in society: people may use automated systems to capture gains without supporting the collective good.

“This reveals that allowing unrestricted control over AI agents leads to exploitation rather than genuine cooperation,” the paper concludes.

Policy 3: Make the AI mimic the human

The third policy flips the usual instinct. Instead of forcing AI to be good all the time, the agents mirror the focal human’s tendency to cooperate.

If the human cooperates, the neighboring AI agents also cooperate. If the human defects, the agents defect too.

This mimicking approach changed the payoffs in a way the other policies did not. It lowered the “synergy” threshold needed for cooperation to become the attractive strategy, and that threshold depended only on the likelihood of encountering mimicking agents and on group size. In the limit where every neighbor is a mimic, the dilemma disappears in the model.

The more AI agents replaced players, the more cooperation became the self-interested move.
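The effect can be checked with a simple best-response calculation. This is an illustrative sketch, not the paper's derivation, and the names are ours; assume k mimicking AI neighbors who copy the focal player while the remaining humans defect:

```python
def best_response(n_mimics, group_size, synergy, cost=1.0):
    """With k mimicking AI neighbors (who copy the focal player's move)
    and the other humans defecting, cooperating yields
    synergy * (1 + k) * cost / N - cost, while defecting yields 0.
    Cooperation therefore pays once synergy > N / (1 + k): a threshold
    set only by the number of mimics and the group size."""
    coop_payoff = synergy * (1 + n_mimics) * cost / group_size - cost
    return "cooperate" if coop_payoff > 0.0 else "defect"

# Group of 5, synergy 3: without mimics, defection wins; with two
# mimicking neighbors, cooperation becomes the self-interested move.
alone = best_response(0, 5, 3.0)        # "defect"   (3 < 5/1)
with_mimics = best_response(2, 5, 3.0)  # "cooperate" (3 > 5/3)
```

In the extreme where all four neighbors mimic, cooperating pays whenever the synergy factor exceeds 1, which is exactly the sense in which the dilemma disappears.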

“The idea of always being a good actor is not always a good strategy in society but being an actor that will stand firm against a bad actor may achieve a better outcome,” Adami said. “Imitation is not just the sincerest form of flattery; it is also a form of communication that can provide the incentive to tip a population into cooperation.”

Illustration of three different policies for AI agent behavior. (CREDIT: npj Complexity)

Limits, and what this does not claim

The authors stress that their framework simplifies human behavior. Evolutionary game theory treats strategy change as selection on payoffs, but real people respond to social norms, emotions, and biases.

The model also assumes agents can perfectly observe and copy player behavior. In reality, information is noisy. The study does not include spatial networks either, even though network structure can affect cooperation. The paper also notes that it only tests three AI policy styles and leaves out other possible cooperation mechanisms, like direct and indirect reciprocity or punishment systems embedded in institutions.

Finally, the work stays theoretical. The authors say it cannot be directly extrapolated to real-world settings.

Practical implications of the research

The clearest takeaway is that “nice” AI, by itself, may not fix free-riding. A second takeaway is less comforting: if people can tune AI helpers without constraints, they may use them as a shield, shifting costs outward while keeping benefits.

The mimicking policy points to a different design principle for systems that interact with humans at scale: build AI that responds to behavior instead of endlessly absorbing it.

The authors speculate this could make small improvements in daily life, including cooperation among self-driving vehicles, where choices like yielding and merging shape traffic flow even when everyone follows the law.

Research findings are available online in the journal npj Complexity.

The original story "How artificial intelligence can reduce selfish behavior and reshape society" is published in The Brighter Side of News.





Shy Cohen
Science and Technology Writer
Shy Cohen is a Washington-based science and technology writer covering advances in artificial intelligence, machine learning, and computer science. He reports news and writes clear, plain-language explainers that examine how emerging technologies shape society. Drawing on decades of experience, including long tenures at Microsoft and work as an independent consultant, he brings an engineering-informed perspective to his reporting. His work focuses on translating complex research and fast-moving developments into accurate, engaging stories, with a methodical, reader-first approach to research, interviews, and verification.