AI's gender bias is even worse than reality
Can designers fix it?

AI is revolutionising design, streamlining workflows, and generating content at a speed humans could never match. But here’s the problem: it’s also one of the biggest bias machines we’ve ever created.
AI doesn’t just reflect gender bias - it amplifies it. It can churn out stereotypical imagery, sexist assumptions, and regressive branding choices that would make Mad Men look progressive.
And the kicker? It’s doing it at scale. With International Women’s Day upon us, we chatted with industry experts to get the inside track.
AI bias isn’t just mirroring human prejudice - it’s cranking it up to 11. As Jen Robbie, Graphic Designer at Digital Skills Education, explains, because these systems are trained on millions of labeled images, they don’t just inherit bias - they amplify it. And once biased AI-generated content floods the internet, newer AI models start training on those same skewed images, locking us into a feedback loop that makes things worse with every iteration.
She’s not exaggerating. Robbie ran basic AI image generation tests and the results were alarmingly predictable:
"Lawyer"? A young, white man. "Doctor"? Also a man. "Nurse”? Female, of course. It’s the same bias we see baked into the real world.
Nat Maher, founder of Kerning the Gap, an organisation advocating for gender diversity in design leadership, has seen this first-hand.
“You only have to spend five minutes with ChatGPT asking it to generate images of a Creative Director and Senior Graphic Designer (white men) then to try a receptionist at this same agency, a nurse and a parent putting a baby to bed (all white women) to be reminded of the inherent bias carried by AI," Maher explains.
"Given that AI is fundamentally driven by the data that has been input by humans, and that 75% of people carry a bias that ‘careers are for men, and families are for women’, this isn’t surprising. Even in our own industry, only 22% of leadership roles are held by women, so AI is playing our own issues back at us."
And this isn’t just a quirk of a single AI tool. Studies show AI-generated art consistently sexualises women and lightens their skin tones. Some generators even block prompts like ‘plus-size woman’, as if diverse body types don’t exist.
It gets worse. AI isn’t just replicating bias - it’s exponentially reinforcing it. With the internet now overflowing with AI-generated content, new AI models train on this already-skewed data, compounding the problem. It’s a bias feedback loop - and we’re all in danger of getting stuck in it.
The UK Design Industry: Slow to Act?
You’d think the design world - full of progressive, forward-thinking creatives - would be on top of this. It’s not. It’s as open to manipulation as everyone else.
"Many artists and designers are confused and concerned about AI and its impact," says Robbie. "AI-generated content remains a gray area, filled with uncertainties and loopholes, and it seems to be taking a long time to implement regulations or laws that address issues like bias."
Nat Maher sees the same pattern. "We’ve still got a long way to go with tackling our human biases and how they play out in the disparity of gender equality across our industry."
And part of the problem? Women are 16% less likely to use AI than men.
"With all AI outputs there is still the fundamental need for users to exercise their own judgment," Maher explains. "Designers need to be aware of their own biases, so that they are conscious enough to make those judgments. Taking the Harvard Implicit Association test is a really good place to start, and will humble anyone that thinks they don’t have any (me included)."
Yes, the UK government launched the Fairness Innovation Challenge, throwing £400,000 at AI bias research. Cool. But let’s be real: most designers aren’t waiting around for policy papers.
Your AI Tools Are Already Screwing Up Your Brand
This isn’t just about pretty pictures. AI bias is actively shaping branding, advertising, and UX design in ways companies don’t even realise.
As brands increasingly use AI to generate product mockups, stock images, and ad campaigns, those outputs shape public perception - even if no one in the marketing department intended it.
Amazon’s AI-powered recommendation engine has been found to gender-code products - associating ironing boards with women and power tools with men. And remember when Amazon had to scrap its AI-driven hiring tool because it downgraded resumes that included the word ‘women’s’?
And it isn’t just imagery, as Nayan Jain, Executive Director of AI at ustwo, explains: "In branding and advertising, AI-generated copy tends to default to gendered language. For instance, AI-written product descriptions for beauty and fashion often use words like ‘elegant’ or ‘delicate,’ reinforcing the idea that femininity is tied to softness, while men’s products lean into words like ‘bold’ or ‘powerful.’"
That’s your AI at work. And yet, when bias like this surfaces, companies act surprised. As if AI operates in some magical black box. As if brands aren’t ultimately responsible for the results.
Who’s Accountable When AI Gets It Wrong?
AI bias doesn’t just happen - as we’ve seen, it’s baked in.
"Bias can come from people, from the use of AI, and from a mix of the two," says Zoë Webster, AI and innovation expert and one of AI Magazine’s Top 10 Women in AI in the UK and Europe (2024). "It can stem from the data on which AI is trained and from the way choices are made by the underlying algorithm. Both are controlled by people - not the AI itself."
Yet when bias shows up - whether in an AI-generated ad, a chatbot response, or a brand’s automated design tools - who takes responsibility?
"As a consumer, it isn't always easy to tell what is AI-generated and what is not," Webster notes. "Where there looks to be bias, it can be difficult to know exactly where the bias has come from. However, a person (or team) should be accountable for content and communications, so they should be considering the potential for bias from the very start."
How Designers Can Stop AI From Screwing Up
The good news? Designers are not powerless. And awareness is growing.
One way to fight AI bias is to write smarter prompts - spelling out diversity explicitly rather than relying on AI’s defaults. Stop accepting default AI results. If Midjourney, DALL·E, or Copilot keep generating white men in suits, push back and tweak the input.
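As a rough illustration of what "spelling out diversity" means in practice (the prompt wording and helper function here are hypothetical, not taken from any particular tool), the idea is simply to state the attributes you want rather than leaving them to the model's stereotyped defaults:

```python
# Illustrative sketch only: build an image-generation prompt that makes
# demographic attributes explicit instead of leaving them to defaults.

def make_explicit(base_prompt: str, attributes: list[str]) -> str:
    """Append explicit attributes to a base prompt."""
    return base_prompt + ", " + ", ".join(attributes)

# Left alone, "a creative director" tends to come back as a white man.
prompt = make_explicit(
    "studio portrait of a creative director",
    ["a Black woman in her 50s", "natural lighting"],
)
print(prompt)
# studio portrait of a creative director, a Black woman in her 50s, natural lighting
```

The point isn't the code - it's the habit: treat the default output as a first draft, and keep specifying until the result reflects reality rather than the training data's skew.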
Other solutions:
- Use AI less. "A more ethical option is to use human illustrators and photographers to source images for content," says Robbie.
- Train designers on AI’s limitations, says Webster. If they don’t know bias exists, they won’t know what to fix.
- Interrogate the data. AI learns from whatever it’s fed, and bad data equals bad AI.
- Own the output. No more hiding behind "the algorithm." If AI generates something biased, someone needs to be accountable.
- Ensure genuine diversity and inclusion in the teams making decisions about the tools to use and how these are applied, says Webster.
- Track regulatory changes. The UK’s Information Commissioner’s Office (ICO) and the Equality and Human Rights Commission (EHRC) are tightening rules around AI fairness, and brands need to stay ahead of this.
Because here’s the truth: If your brand gets AI bias wrong, consumers won’t blame the machine - they’ll blame you.
We’re at a crossroads. AI can enhance creativity - or it can lock us into decades-old biases that stifle progress and diversity. "AI bias is not going away," says Robbie. "But we can choose how much power we give it."
Simon is a writer specialising in sustainability, design, and technology. Passionate about the interplay of innovation and human development, he explores how cutting-edge solutions can drive positive change and better lives.