AI and Human Biases: Is Behavioural Science the Answer?



Mustafa Suleyman, co-founder of Google DeepMind, wrote of AI that its ‘vast, almost instantaneous consumption of information is not just difficult to comprehend; it's truly alien'.

However, such cognitive power is no defence against bias, and AI will, he warned, 'casually reproduce and indeed amplify the underlying biases and structures of society, unless they are carefully designed to avoid doing so'.

But how is human intelligence, riddled with some 300 documented biases and heuristics, meant to provide an impartial, bias-free foundation for the evolution of AI?

Let’s begin with the risk of ideological bias. Perhaps the most famous example was the ill-fated launch of Gemini’s image generation. While few would dispute the need to ensure that a diverse society is not represented as monolithic, if an AI is coded to prioritise diversity above historical accuracy, you end up with severe factual distortions. The image below, for example, was Gemini’s rendering of a typical couple in 1820s Germany:

[Image: Gemini-generated depiction of a couple in 1820s Germany]

 

Similarly, a 2018 MIT study found that the under-representation of darker-skinned faces in AI training datasets produced an error rate of almost 35% when identifying darker-skinned women in facial recognition tests. The error rate for lighter-skinned men was just 0.8%.

For brand and marketing strategists, the risk of uncritically accepting algorithmic bias is enormous. Here are just a few examples:

  • Humans tend to over-generalise from the prevalent traits in a dataset. This is known as the representativeness heuristic, and it can affect ad delivery systems. For example, even when Meta advertisers selected neutral targeting for their job ads, the algorithm delivered them along stereotyped lines: nursing ads were shown mostly to women, and ads for plumbers mostly to men.
  • Amazon encountered a similar problem when using AI to filter job applicants. The CVs used to train the system were drawn disproportionately from male applicants. As a result, CVs that included terms such as ‘women’s college’ were downgraded.
  • The success of campaigns tends to be assessed using metrics like click-through rate (CTR). But because content that confirms existing stereotypes often earns the most clicks, optimising for these metrics can deepen an algorithm’s bias towards stereotypes.
  • AI training involves labelling datasets. There is inevitably a subjective component to this process, creating a ‘labelling bias’ that places value judgements on symbols or language patterns. For example, associating a rational, unemotional tone with the responsible exercise of authority can reinforce narrow norms about leadership and ignore the importance of emotional intelligence.
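The ad-delivery skew described in the first bullet is straightforward to surface in your own reporting. The sketch below uses hypothetical impression counts and an invented helper, `delivery_skew`; in practice the figures would come from your ad platform’s delivery reports, broken down by audience segment.

```python
# Minimal sketch of an ad-delivery skew check. The impression counts
# below are hypothetical, stand-ins for a real delivery report.

def delivery_skew(impressions: dict[str, int]) -> dict[str, float]:
    """Return each segment's share of total impressions."""
    total = sum(impressions.values())
    return {seg: count / total for seg, count in impressions.items()}

# Hypothetical delivery report for a nursing job ad with neutral targeting:
nursing_ad = {"women": 8200, "men": 1800}

shares = delivery_skew(nursing_ad)
for segment, share in shares.items():
    # If targeting was neutral but delivery is heavily skewed,
    # the algorithm, not the advertiser, introduced the imbalance.
    print(f"{segment}: {share:.0%}")
```

A check like this doesn’t fix the bias, but it makes the skew visible before a campaign has run its course.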

The key takeaway from these examples: brand owners should always remember that although artificial intelligence has vastly more processing power than human intelligence, that does not make it free from human biases. On the contrary, it amplifies those biases at enormous scale.

The tendency among humans, however, is to treat AI as an accurate, bias-free intelligence. This makes us lazy in our interactions with AI, leading to a progressive decline in our critical faculties. A recent study from MIT’s Media Lab found that the brains of frequent ChatGPT users aged 18-39 “consistently underperformed at neural, linguistic, and behavioral levels.”

How, then, can brand owners ensure that the AI-generated data and content they use to engage target audiences is critically evaluated rather than uncritically accepted?

The answer lies in behavioural science. You can apply these four steps to mitigate AI bias in marketing and branding:

  1. Approach AI in the knowledge that its intelligence is as biased as yours.
  2. Master the basics of behavioural science by identifying the key biases (e.g. confirmation bias, loss aversion, the narrative fallacy) that drive human communication and decision-making.
  3. Use behavioural science to create an organisational culture that takes a critical approach to AI output. For example, undertake regular AI Bias Audits to evaluate biases embedded in AI output.
  4. Above all, don’t allow AI to reduce you to a state of intellectual resignation and passivity. 
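The AI Bias Audit mentioned in step 3 can start very simply: compare a model’s error rates across demographic groups, as the MIT facial-recognition study did. The sketch below uses hypothetical counts and an invented helper, `error_rates`; in a real audit the numbers would come from a labelled evaluation set representative of your audience.

```python
# Minimal sketch of one AI Bias Audit check: per-group error rates.
# All counts are hypothetical illustrations, not real audit data.

def error_rates(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """results maps group -> (errors, total); returns error rate per group."""
    return {group: errors / total for group, (errors, total) in results.items()}

# Hypothetical evaluation counts, loosely echoing the disparity
# pattern reported in the 2018 MIT facial-recognition study:
audit = {
    "darker-skinned women": (87, 250),
    "lighter-skinned men": (2, 250),
}

rates = error_rates(audit)
gap = max(rates.values()) - min(rates.values())
for group, rate in rates.items():
    print(f"{group}: {rate:.1%}")
print(f"disparity gap: {gap:.1%}")  # a large gap flags bias to investigate
```

The point of the audit is not the arithmetic but the discipline: if the gap between groups is large, the output should not reach an audience until the cause has been examined.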

AI is autonomous and its power is growing exponentially, but it still needs humans to apply our unique blend of emotional intelligence and conscious reflection to get the best out of it. The future lies neither with AI nor with behavioural science alone: it’s at the intersection of the two disciplines. We can’t redesign the AI we work with, but we can mitigate its biases. Doing so will make its outputs more effective at creating meaningful engagement with internal and external audiences. That, in turn, will separate the brands that thrive from those that fail.

 
