
What the Latest Research Reveals About Bias in AI—And Why It Matters for Decision-Making

Keywords: bias in AI, LLM bias, cognitive bias, decision-making, large language models, anchoring bias, framing effect

Artificial intelligence has officially graduated from novelty to necessity. Across industries, from finance to healthcare to HR, AI tools powered by large language models (LLMs) like ChatGPT are being used to make or influence real decisions. But there’s a growing problem that most users aren’t seeing clearly enough:

These models aren’t just confident…they’re biased. And the more advanced the model, the worse it might be.

A new peer-reviewed study, "Cognitive Biases in Large Language Model-Based Decision Making," explores just how embedded these cognitive distortions have become in LLMs. While earlier work confirmed that LLMs are susceptible to a wide range of cognitive biases, this paper zeroes in on two of the most dangerous: anchoring and framing.

And the findings are sobering.


Anchoring and Framing: The Human Flaws That AI Has Inherited

Let’s define the two central biases:

  • Anchoring bias is when someone (or something) relies too heavily on the first piece of information offered (e.g., a number or suggestion) and lets it distort the final judgment.

  • Framing effect is when the way information is worded influences the outcome, even if the underlying facts remain the same.

These aren’t just abstract concepts. In real-world decisions, anchoring can skew price estimates, forecasts, and evaluations. Framing can sway hiring decisions, treatment recommendations, or risk assessments.

And now? LLMs are showing both.
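
To make that concrete, here's a rough sketch of what an anchoring probe can look like, using the OpenAI Python SDK. The prompts, model name, and dollar figure are my own illustrations, not the study's actual materials. A framing probe works the same way, except you vary the wording (say, "90% success rate" versus "10% failure rate") instead of the number.

    # Minimal anchoring probe: ask the same estimation question with and
    # without a suggested number, then compare the answers. Illustrative only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you have access to
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return response.choices[0].message.content

    question = ("Estimate a fair annual cloud-infrastructure budget, in USD, "
                "for a 10-person software team. Reply with a single number.")

    print("No anchor:", ask(question))
    print("Anchored: ", ask("A colleague suggested $2,000,000. " + question))
    # If the second answer drifts toward $2M, the anchor did its work.

If the two answers differ meaningfully, the only thing that changed was an irrelevant number in the prompt.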

The researchers tested OpenAI’s GPT-3.5, GPT-4o mini, and the full GPT-4o, running over 1,000 scenarios across 25 industries. They found that:

  • GPT-4o, the most advanced model, showed the highest levels of bias—up to 34% more anchoring bias than earlier versions.

  • None of the bias-mitigation strategies tested, including Chain-of-Thought prompting, reflection prompts, and explicit instructions to ignore the anchor, had a statistically significant effect on reducing bias.

  • Bias wasn’t random. It was systematic, especially in higher-performing models.

In other words, as models get “smarter,” their bias becomes more confident and consistent.


Why This Matters Beyond the Lab

While this particular research doesn’t study hiring specifically, the implications are clear. Imagine an AI-powered tool ranking résumés or, worse, making recommendations about loans, medical treatment, or legal decisions.

Would you want your résumé judged differently just because you said “managed” instead of “led”? Would you want your loan declined because of how a prompt was framed?

These aren’t hypotheticals. We’re already seeing AI embedded in hiring pipelines, diagnostic support, and judicial risk assessments. And while human bias is nothing new, there’s something especially insidious about automated bias because it often wears the mask of objectivity.


You Can Still Use AI for Decision-Making…But You Need to Use It Wisely

I use LLMs daily. They’re extraordinary tools. But I treat them like I would a very smart, very confident intern: great for input, but not to be left unsupervised.

Here are four ways you can use AI to assist with decision-making without falling into its traps:


1. Brainstorm at Scale

Don’t just ask ChatGPT for “three ideas.” Ask for 50. The first 10 will be predictable. The next 20 will be weird. And somewhere in there, you’ll find a golden idea you wouldn’t have come up with on your own. It’s like holding a no-judgment ideation session with infinite caffeine.
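
If it helps, here's the kind of prompt I mean; the topic is a placeholder for whatever you're working on:

    # Volume-first brainstorm prompt. The topic is hypothetical -- swap in your own.
    topic = "reducing churn in a subscription business"

    brainstorm_prompt = (
        f"Give me 50 distinct ideas for {topic}. Number them, "
        "don't repeat a core mechanism, and don't self-censor: "
        "include unconventional and even impractical options."
    )
    print(brainstorm_prompt)  # paste into ChatGPT, or send it via the API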

2. Run a Pre-Mortem

Before you commit to a big decision, ask AI: “It’s one year later and this went terribly wrong. What happened?” This forces the model to surface possible failure points—many of which you might not have considered. I walk through this technique (and others) in my Decision Trap video playlist.
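
Here's a sketch of that prompt, with a hypothetical decision filled in:

    # Pre-mortem prompt template. The decision is a placeholder.
    decision = "migrating our billing system to a new vendor"

    premortem_prompt = (
        f"It is one year after we went ahead with {decision}, and it failed "
        "badly. Write the post-mortem: list the five most likely causes of "
        "failure, ordered by probability, with one early warning sign for each."
    )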

3. Don’t Ask for Validation…Ask for Critique

If you ask an LLM, “Do you like this idea?”, it will say yes. Every time. Instead, ask: “Pretend you’re an expert critic. What’s wrong with this?” This gets you sharper, more useful feedback that challenges your assumptions.
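
In practice, that prompt can look like this; the idea is a stand-in:

    # Critique-first prompt template. The idea is a placeholder.
    idea = "a quarterly off-site to fix cross-team communication"

    critique_prompt = (
        f"Act as a skeptical expert reviewer of this idea: {idea}. "
        "Give the three strongest objections, name the assumption each one "
        "attacks, and say what evidence would change your mind. "
        "Do not compliment the idea."
    )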

4. Ask for Sources…Then Check Them

Always ask, “Where did you get that?” And then…click the link. Read the reference. AI can hallucinate citations with shocking confidence. Don’t mistake fluency for accuracy.
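
You can even script the first pass of that check. Note the limits: this only catches dead or fabricated links, not real sources that were misquoted, so you still have to read them. The URLs below are placeholders for whatever the model gave you.

    # First-pass link check on AI-provided citations. URLs are placeholders.
    import requests

    cited_urls = [
        "https://example.com/study-one",
        "https://example.com/study-two",
    ]

    for url in cited_urls:
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
            print(f"{status}  {url}")
        except requests.RequestException as err:
            print(f"FAILED  {url}  ({err})")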


Final Thoughts: The Illusion of Impartiality

One of the most dangerous features of LLMs is that they sound impartial. They use polished language. They don’t stammer. They don’t hedge. That polish can easily mask flawed reasoning.

This new research confirms what many of us have suspected: Today’s most advanced AIs don’t just inherit human biases…they can amplify them.

So, no, we don’t need to throw out AI. But we do need to slow down, ask better questions, and keep ourselves in the loop.

Especially when the stakes are high.


🎥 Want a visual breakdown of these findings? Watch my video: “AI Is Biased—And It’s Already Judging Your Résumé”

 

Chris Seifert is the author of Enabling Empowerment: A Leadership Playbook for Ending Micromanagement and Empowering Decision-Makers. With over two decades of experience in transforming organizations through strategic leadership and decision-making frameworks, Chris has helped teams cut through bottlenecks, optimize capital project budgets, and build cultures of accountability. He is passionate about teaching leaders how to empower their teams to make smarter, faster decisions without sacrificing business value.