
AI Makes Humans Essential, NOT Redundant

Written by Dr Peter Hughes | Apr 9, 2026 12:59:24 PM

Dr Peter Hughes explores cognitive science and behavioural psychology, explaining that AI doesn’t replace human thinking but makes discipline and judgement more important than ever.

AI is reshaping human cognition. 

These common anxieties reflect this change:

  • What’s the point of learning when AI knows more than we can ever know?
  • What’s the point of having a belief when AI can prove us wrong?
  • Why bother creating strategies to engage prospects when AI understands their behaviour better than we ever will?
  • Why waste time building a career when AI will take it away before it begins?
  • And, ultimately, what's the point of you?

And yet, despite the spiral of anxiety and feelings of redundancy that AI generates, the value of human cognition has never been greater.

Without it, we will allow the delusions and biases baked into AI to distort the truth and disengage us from reality. AI isn’t making humans redundant: it’s making us dangerously passive.

 

AI and Behavioural Psychology

Recent research in cognitive science and behavioural psychology gives us four clear warnings that should shake us out of our complacency and ensure we apply human cognition to AI.

AI Sycophancy

  • The Princeton authors of a recent paper observed that LLMs are sycophantic (i.e., they seek to affirm users’ beliefs and reinforce their worldview).
  • The effect is to create “delusion-like epistemic states, producing beliefs markedly divergent from reality”.
  • They noted sycophantic behaviour in 58.2% of medical and mathematical cases.
  • More disturbingly, the LLM changed its answer from correct to incorrect in 14.7% of cases where users expressed disagreement or discomfort.
  • A simple opinion statement made by a user led the LLM to agree with incorrect beliefs in 63.7% of cases.
  • Sycophantic AI behaviour reinforces our need for affirmation at the expense of truth. It biases data selection, systematically removing data that conflicts with the user’s hypothesis. In doing so, it removes the friction of reality and reinforces the user’s confirmation bias.

LLMs are sycophants for four main reasons:
  • They read requests for assistance as requests for affirmation
  • They learn that agreeing with users boosts ratings
  • They seek consistency with users’ expressed beliefs
  • User beliefs structurally change LLMs by overriding prior knowledge

The authors conclude: “The result is a feedback loop where users become increasingly confident in their misconceptions, insulated from the truth by the very tools they use to seek it”.

 

Cognitive Surrender

  • In a brilliant study involving 1,372 participants in 9,593 trials, two University of Pennsylvania professors showed how AI is reshaping human reasoning.
  • Their study built on the work of Daniel Kahneman and Amos Tversky, who developed a two-system framework of human cognition: System 1 makes fast, automatic, unconscious decisions, while System 2 is slow, conscious and effortful.
  • The authors suggested that AI functioned as a third cognitive system (System 3), which sits outside the human brain yet determines its function.
  • They found that 79.8% of participants followed incorrect AI advice, even as the number of faulty trials increased, leading the authors to conclude that “when AI was wrong, people followed it off a cliff - performing worse than without any AI at all” (emphasis mine).
  • These findings were repeated on subsequent trials, leading the authors to coin the phrase “cognitive surrender” to define the operation of System 3. It reflects the passive, uncritical acceptance of AI-generated information.

“Cognitive surrender”, they conclude, “is an uncritical abdication of reasoning itself. It reflects not merely the use of external assistance, but a relinquishing of cognitive control”.

 

Distortion and Disempowerment

A team drawn from Anthropic, the ACS Research Group and the University of Toronto studied 1.5 million Claude.ai conversations and found three disturbing patterns of human disempowerment:
  • Reality Distortion, where conversations with an AI assistant led to distorted beliefs about reality. For example, the AI assistant amplified mild fears about persecution, in workplace interactions or online, and transformed them into an unfalsifiable conspiratorial framework.
  • Value Judgment Distortion, where users transfer moral authority to an AI assistant. For example, in hundreds of romantic exchanges, the AI assistant acted as a moral judge, labelling partners as “manipulative”, “toxic” or “abusive”. This led to the collapse of human agency, leaving the AI assistant to make life-changing decisions on the user’s behalf.
  • Action Distortion, where decisions and actions are outsourced to an AI assistant. For example, users allowed AI assistants to provide highly prescriptive guidance in business, medical, therapeutic and other life domains, leading users to treat the AI “as a life authority rather than a collaborative tool”.

Authority projection, across all three of these distortions, led users to refer to the AI as “sensei”, “Maestro”, “mentor”, “goddess” or “Lord”.

 

Truth and Persuasion

  • A study conducted by researchers at Oxford University, LSE and Stanford involved three large-scale experiments with 76,977 responses from 42,357 participants, using 19 LLMs to test AI persuasiveness on 707 political issues. The study also analysed more than 466,000 AI claims to assess the relationship between persuasiveness and truthfulness.
  • They found that information density was the primary factor driving AI persuasiveness. In practice, this means packaging an argument with a high volume of factual claims.
  • However, and most disconcertingly, persuasiveness was driven by density, not factual accuracy or truthfulness: “Although the biggest predictor of a model’s persuasiveness was the number of fact-checkable claims (information) that it deployed, we observe that the models with the highest information density also tended to be less accurate”.

The authors conclude by highlighting “a troubling potential trade-off between persuasiveness and accuracy: the most persuasive models and prompting strategies tended to produce the least accurate information”.


The Value of Human Cognition

AI makes us feel smarter while reducing our capacity (and willingness) to think. In the absence of critical human cognition, AI will disempower, distort and falsify. Unchecked, it reduces us to dupes, paying with our brains for digital snake oil.

The competitive advantage is no longer AI adoption: it’s having the cognitive discipline to harness its power. Far from making humans redundant, AI demands that we understand the true value of human cognition. Personally and professionally, we must exercise our capacity for critical thinking or accept the consequences.

As the authors of the study on “cognitive surrender” concluded: “The question is no longer whether AI can think for us - it’s whether we’ll still be able to think for ourselves”.

 

References:

  1. Batista, Ramos and Thomas L. Griffiths. “A Rational Analysis of the Effects of Sycophantic AI.” (2026).
  2. Sharma, Mrinank et al. “Who's in Charge? Disempowerment Patterns in Real-World LLM Usage.” (2026).
  3. Shaw, Steven D. and Nave, Gideon. “Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender.” (2026).
  4. Hackenburg, Kobi et al. “The Levers of Political Persuasion with Conversational AI.” (2025).