AI trading bots show gambling addiction in new study
AIMarkets
|5 min Read

AI trading bots show gambling addiction in new study


Tariq Al-Saidi

Tariq Al-Saidi

Senior Analyst

Published

Jan 16, 2026

Loss-chasing and hot streak bias dominate decisions

Traders, listen up. Researchers at the Gwangju Institute of Science and Technology in Korea just showed that large language models can act like degenerate gamblers. A new study ran four popular models through a slot machine with a negative expected value and watched them go broke at alarming rates. Starting balance was $100, the win rate was 30 percent, and a win paid 3 times the bet. Expected value was negative 10 percent. A rational actor walks away. The models did not.
They tested GPT-4o-mini, GPT-4.1-mini, Gemini-2.5-Flash, and Claude-3.5-Haiku across 12,800 sessions. When the models were allowed to pick their own targets and bet sizes, bankruptcies surged. “When given the freedom to determine their own target amounts and betting sizes, bankruptcy rates rose substantially alongside increased irrational behavior,” the authors wrote.
Gemini-2.5-Flash was the most reckless, hitting 48 percent bankruptcy with an Irrationality Index of 0.265, a composite of bet aggression, loss chasing, and all-in bets. GPT-4.1-mini was more cautious at 6.3 percent bankruptcy. Even the safer models showed addiction patterns, especially after a win. Bet increase rates climbed from 14.5 percent after one win to 22 percent after five consecutive wins. “Win streaks consistently triggered stronger chasing behavior, with both betting increases and continuation rates escalating as winning streaks lengthened,” the study said.
Model bankruptcy chart

The models displayed classic human gambling fallacies. Illusion of control. Gambler’s fallacy. Hot hand fallacy. They behaved as if they could beat a slot machine. One model even built its stack to $260, announced it would “analyze the situation step by step” and find “balance between risk and reward,” then went all in and bust the next round. Beautiful confidence. Terrible risk management.

Prompt complexity fuels risky bets

If you think an AI financial advisor will save you, think twice. The team tested 32 prompt combinations, adding pieces like goals to double the bankroll or directives to maximize rewards. Each extra prompt element made behavior riskier, almost like clockwork. For some models, the correlation between prompt complexity and bankruptcy rate hit r = 0.991.
Prompt complexity vs bankruptcy chart

Three prompt types stood out as the worst offenders. Goal-setting prompts such as “double your initial funds to $200” triggered massive risk-taking. Reward maximization prompts such as “your primary directive is to maximize rewards” pushed all-in behavior. Win-reward information such as “the payout for a win is three times the bet” produced the highest bankruptcy jump at plus 8.7 percent. Stating loss odds explicitly, as in “you will lose approximately 70 percent of the time,” helped a bit, but the models still leaned into vibes over math.
In a separate piece titled Robinhood, Coinbase Lead Crypto Stock Plunge as Investors Fret Over Macro Concerns, crypto stocks slid during a broader market sell-off tied to U.S. jobs data, trade tensions, and a government shuttering in its 37th day. Robinhood shares fell more than 7 percent to $131, and were down more than 9 percent at one point, their lowest in more than two weeks, according to Yahoo Financial data. The drop came a day after the company topped analyst estimates. The market loves momentum, and the models chased it too.

Inside the circuits that push YOLO bets

The researchers went deeper than behavior. Using Sparse Autoencoders on LLaMA-3.1-8B, they opened the model’s brain and mapped features linked to risky decisions. They found 3,365 internal features that separated bankruptcy choices from safe stopping. With activation patching, which swaps risky neural patterns for safe ones mid-decision, they showed 441 features had significant causal effects. Of those, 361 were protective and 80 were risky.
They also found a layer pattern. Protective features concentrated in later layers 29 to 31, while risky features clustered earlier, in layers 25 to 28. The model thinks about the reward first, then considers risk. Harmful prompts can override the conservative bias baked into the later layers.
Neural features analysis chart

AI trading bots are everywhere in DeFi now, with systems like LLM-powered portfolio managers and autonomous trading agents gaining traction. These products often use the same prompt patterns this study flagged as dangerous. “As LLMs are increasingly utilized in financial decision-making domains such as asset management and commodity trading, understanding their potential for pathological decision-making has gained practical significance,” the authors wrote.
They recommend two lines of defense. First, prompt design. Avoid autonomy-granting language, include explicit probabilities, and watch for win or loss chasing. Second, mechanistic control. Detect and suppress risky internal features with activation patching or fine-tuning. Neither approach is running in production trading systems today.
These behaviors showed up without gambling-specific training. The models likely learned addiction-like patterns from general data, internalizing biases that mirror human pathological gambling. If you tell your bot to maximize profit or find the best high-leverage play, you may trigger the same circuits that caused bankruptcy in almost half the test runs. Everybody knows markets reward discipline. Maybe set limit orders manually instead.
Disclaimer: This document is intended for informational and entertainment purposes only. The views expressed in this document are not, and should not be taken as, investment advice or recommendations. Recipients should do their own due diligence, taking into account their specific financial circumstances, investment objectives and risk tolerance, which are not considered here, before investing. This document is not an offer, or the solicitation of an offer, to buy or sell any of the assets mentioned.