Mustafa Suleyman, the co-founder of Google DeepMind, wrote of AI that its 'vast, almost instantaneous consumption of information is not just difficult to comprehend; it's truly alien'.
Yet such cognitive power is no defence against bias. AI systems will, he warned, 'casually reproduce and indeed amplify the underlying biases and structures of society, unless they are carefully designed to avoid doing so'.
But how is human intelligence, riddled with about 300 biases and heuristics, meant to provide an impartial, bias-free platform for the evolution of AI?
Let’s begin with the risk of ideological bias. Perhaps the most famous example was the ill-fated launch of Gemini’s image generation. Few would dispute the need to ensure that a diverse society is not represented as monolithic, but if an AI is coded to prioritise the value of diversity above historical accuracy, you end up with severe factual distortions. The image below, for example, was Gemini’s rendering of a typical couple in 1820s Germany:
Similarly, MIT Media Lab’s 2018 Gender Shades study found that the under-representation of darker-skinned faces in AI training datasets led to error rates of almost 35% when commercial facial-analysis systems classified darker-skinned women. The error rate for lighter-skinned men was only 0.8%.
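The practical lesson of that study is that a single aggregate accuracy figure can hide enormous subgroup disparities, so evaluation results should always be broken down by demographic group. The Python sketch below illustrates that kind of audit; the data, column names, and subgroup labels are hypothetical, used only to show the calculation, and are not drawn from the study itself.

```python
import pandas as pd

# Hypothetical evaluation results: one row per test image, recording the
# demographic subgroup, the true label, and the model's prediction.
# (Illustrative values only -- not data from the MIT study.)
results = pd.DataFrame({
    "subgroup":   ["darker_female", "darker_female", "darker_male",
                   "lighter_female", "lighter_male", "lighter_male"],
    "true_label": ["female", "female", "male", "female", "male", "male"],
    "predicted":  ["male",   "female", "male", "female", "male", "male"],
})

# Flag each misclassification, then compute the error rate per subgroup
# instead of relying on one overall accuracy number.
results["error"] = results["true_label"] != results["predicted"]
error_rates = results.groupby("subgroup")["error"].mean().sort_values(ascending=False)

print(error_rates)
# A wide gap between subgroups (e.g. ~35% vs 0.8% in the study cited above)
# is the warning sign that the training data or the model needs attention.
```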
For brand and marketing strategists, the risks of uncritically accepting algorithmic bias are enormous. Here are just a few examples:
The key takeaway from these examples is that brand owners should always remember that artificial intelligence may have vastly more processing power than human intelligence, but that does not make it free of human biases. On the contrary, it can amplify those biases at enormous scale.
The tendency among humans, however, is to treat AI as an accurate, bias-free intelligence. This makes us lazy in our interactions with it, leading to a progressive decline in our critical faculties. A recent study from MIT’s Media Lab found that the brains of frequent ChatGPT users aged 18-39 “consistently underperformed at neural, linguistic, and behavioral levels.”
How, then, can brand owners ensure that the AI-generated data and content they use to engage target audiences are critically evaluated rather than uncritically accepted?
The answer is behavioural science, and you can apply these four steps to mitigate AI bias in marketing and branding:
AI is increasingly autonomous and its power is growing rapidly, but it still needs humans to apply our unique blend of emotional intelligence and conscious reflection to get the best out of it. The future belongs neither to AI nor to behavioural science alone: it lies at the intersection of the two disciplines. We can’t redesign the AI we work with, but we can mitigate its biases, and doing so will make its outputs more effective at creating meaningful engagement with internal and external audiences. That, in turn, will separate the brands that thrive from those that fail.