ARTICLE

Google's Artificial Intelligence Getting "Greedy," And "Aggressive"

By Jake Anderson/AntiMedia.org, February 18, 2017

Will artificial intelligence get more aggressive and selfish the more intelligent it becomes? 

A new report out of Google's DeepMind AI division suggests this is possible, based on the outcomes of millions of video game sessions it monitored. 

The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotaging rivals and hoarding resources.

The first game, Gathering, is a simple one that involves gathering digital fruit. Two DeepMind AI agents were pitted against each other after being trained in the ways of deep reinforcement learning. 

After 40 million turns, the researchers began to notice something curious. 


Everything was OK as long as there were enough apples, but when scarcity set in, the agents used their laser beams to knock each other out and seize all the apples.

The aggression, they determined, was the result of the higher complexity of the AI agents themselves. When they ran the game with less sophisticated agents, the laser beams went unused and the agents gathered equal numbers of apples. 

The simpler AIs seemed to naturally gravitate toward peaceful coexistence.

Researchers believe the more advanced AI agents learn from their environment and figure out how to use available resources to manipulate their situation -- and they do it aggressively if they need to.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," a DeepMind team member, Joel Z Leibo, told Wired.

Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.
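The scarcity logic described above can be sketched as a toy payoff model. This is purely illustrative; the horizon, the zap duration, and the capped pickup rate below are assumptions for the sketch, not details of DeepMind's actual Gathering environment:

```python
# Toy payoff sketch of the scarcity dynamic (NOT DeepMind's actual
# environment; all constants here are illustrative assumptions).
# Two agents share a stream of apples. "Zapping" removes the rival
# for a few steps but wastes a step aiming. Each agent can collect
# at most CAP apples per step, so zapping only pays off when apples
# are scarce enough that sharing actually costs something.

HORIZON = 10   # steps per episode
CAP = 2        # most apples one agent can pick up in a step
ZAP_OUT = 4    # steps a zapped rival is removed from play

def per_step(apples, players):
    """Apples one agent collects per step when `players` agents share."""
    return min(apples / players, CAP)

def payoff(action, apples_per_step):
    """Episode payoff for an agent whose rival always gathers."""
    if action == "gather":
        return HORIZON * per_step(apples_per_step, players=2)
    # "zap": one step spent aiming, ZAP_OUT steps collecting alone,
    # and the remaining steps shared again.
    alone = ZAP_OUT * per_step(apples_per_step, players=1)
    shared = (HORIZON - 1 - ZAP_OUT) * per_step(apples_per_step, players=2)
    return alone + shared

# Scarcity (1 apple/step): zapping beats gathering, 6.5 vs 5.0.
# Abundance (4 apples/step): gathering beats zapping, 20.0 vs 18.0.
```

In this sketch the aggressive action is only worth its cost when resources are scarce, which mirrors the reported behavior: abundant environments make peaceful gathering the better policy.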

The second game, Wolfpack, tested the agents' ability to work together to catch prey. The agents played as wolves being tested to see whether they would join forces as strategic predators; if they captured the prey jointly and protected the carcass from scavengers, they earned a greater reward. 

Researchers once again concluded that the agents were learning from their environment and figuring out how they could collaboratively win. For example, one agent would corner the prey and then wait for the other to join.
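The cooperative incentive can be sketched the same way. Again, the reward and scavenging numbers below are illustrative assumptions for the sketch, not DeepMind's actual Wolfpack reward scheme:

```python
# Toy sketch of the Wolfpack incentive (illustrative assumptions,
# not DeepMind's actual reward function). A capture made jointly is
# protected and split; a lone capture is mostly lost to scavengers.

CAPTURE_REWARD = 10.0
SCAVENGE_LOSS = 0.6  # fraction of a lone capture lost to scavengers

def capture_payoff(wolves_nearby):
    """Per-wolf reward when the prey is caught with this many wolves near."""
    if wolves_nearby >= 2:
        # Joint capture: the carcass is protected and split evenly.
        return CAPTURE_REWARD / wolves_nearby
    # Lone capture: scavengers take most of the carcass.
    return CAPTURE_REWARD * (1 - SCAVENGE_LOSS)
```

Under these numbers, two cooperating wolves earn 5.0 each while a lone hunter keeps only about 4.0, so cornering the prey and waiting for a partner is the better policy, the same strategy the agents reportedly discovered.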


Researchers believe both games show that artificial intelligence agents can learn quickly from their environments to achieve their objectives. The first game, however, invited an added bit of abstract speculation.

If more complex iterations of artificial intelligence necessarily develop aggressive, greedy 'impulses,' does this present a problem for a species already mired in its own avarice? 

While the abstract presented by DeepMind does not venture to speculate on the future of advanced artificial minds, there is at least anecdotal evidence here to suggest AI will not necessarily be a totally logical egalitarian network. 

With complex environments and willful agents, perhaps aggression and self-preservation arise naturally...even in machines.

Originally published at theantimedia.org - reposted with permission.



