23 September 2025

Opinion: Breaking the Bias: From History to AI and the Digital Future

 

Bias is not new. For centuries it has shaped societies, policies, and opportunities. What is new is the way those same biases are now embedded into technology, particularly artificial intelligence (AI), with the potential to amplify inequality at scale.

Understanding bias, where it comes from, and how to address it is critical if we are to create fair, inclusive systems that work for everyone. But before we consider AI, it is worth reflecting on how recently women's rights were restricted, because those lessons show exactly why vigilance is so important, as Lisa Goode, Consultant at Apira, an IQVIA business, explains…

 

A Century of Change, But Not Enough

It can be shocking to remember just how recently women were denied basic rights. A glance at history shows how systemic inequality has shaped daily life:

  • The vote: Before 1918, women in the UK were unable to vote in parliamentary elections. Even then, only women over 30 who met property requirements were enfranchised (about 40% of women). It wasn’t until 1928 that the voting age for women aligned with men at 21.
  • Property: For most of history, married women could not own property independently; ownership passed automatically to their husbands. The Married Women’s Property Act of 1882 first allowed married women to own and control their own assets.
  • Banking & Credit: Until 1975, a woman in the UK needed a man’s signature to open a bank account. Mortgages in her own name were similarly out of reach until the mid-1970s.
  • Employment: Many women were forced to leave their jobs upon marriage or motherhood, effectively removing them from professional life.
  • Professions & Jury Service: The Sex Disqualification (Removal) Act of 1919 opened the door for women to enter law, accountancy, medicine, and jury service, but in practice real equality took decades.
  • Everyday rights: As late as 1982, pubs could legally refuse to serve women. Equal pay for work of equal value only became law in 1983, and marital rape was not criminalised until 1991.

These milestones are uncomfortably recent. They show how long-standing and ingrained bias can be, and how slowly society has responded to correcting it.

When Design Doesn’t Fit Everyone

Bias also shows up in design. Take car safety. For decades, crash-test dummies were built to represent the “average” man, leaving women at greater risk of injury because seatbelts, airbags, and impact models didn’t reflect female physiology.

Only in 2022 did Swedish engineers develop a crash-test dummy based on the average woman. Yet even now there is no legal requirement to use female models in rear-impact testing, despite the fact that women influence most car purchases and, in the UK, more women than men hold driving licences.

When the design of life-saving systems overlooks half the population, the cost of bias becomes starkly visible.

Healthcare: A History of Exclusion

Healthcare has its own deeply ingrained biases. Drug trials were historically conducted almost exclusively on men. Dosages, side effects, and treatment pathways were therefore designed with male bodies in mind, even though the resulting treatments were prescribed to women too.

The consequences remain clear today. Women’s heart disease symptoms, for example, are more likely to be misdiagnosed or dismissed. And it was not until 2023 that menstrual products were first tested with real blood.

AI: The New Frontier of Old Biases

AI is often described as objective, but it is only as fair as the data it is trained on. When that data reflects decades of social, cultural, and institutional bias, the outputs will too. Consider some recent examples:

  • Hiring algorithms: Amazon’s experimental recruitment tool was scrapped when it was found to disadvantage CVs that included the word “women’s,” reflecting historical hiring patterns.
  • Policing: In Chicago, predictive policing algorithms were trained on stop-and-search data that disproportionately targeted Black residents. The AI then reinforced these patterns, creating a feedback loop of over-policing.
  • Healthcare: Diagnostic tools trained predominantly on male data sets risk misdiagnosing female patients.

Even generative AI reflects bias. Studies show that when asked to generate images of doctors, AI tools often overwhelmingly produce white male representations, despite the reality of diverse medical workforces. This isn’t because AI “believes” doctors are men; it’s because training datasets (stock photography, advertising, media) have historically presented them that way.

In other words: if bias goes in, bias comes out.
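
The mechanism is easy to demonstrate. The sketch below is a deliberately tiny, hypothetical word-scoring model (not Amazon’s actual system, and far simpler than any real recruitment tool) trained on skewed hiring records; because past hires lacked the word “women’s,” the model learns to penalise it:

```python
from collections import Counter

# Hypothetical historical hiring records: past hires skew male, so
# the word "women's" appears only in rejected applications.
past_cvs = [
    ("captain of chess club", "hired"),
    ("led robotics team", "hired"),
    ("captain of women's chess club", "rejected"),
    ("led women's robotics team", "rejected"),
]

def train(records):
    """Naive word-score model: +1 for each appearance of a word in a
    hired CV, -1 for each appearance in a rejected one."""
    weights = Counter()
    for text, label in records:
        for word in text.split():
            weights[word] += 1 if label == "hired" else -1
    return weights

def score(weights, text):
    """Sum the learned weights of the words in a CV."""
    return sum(weights[word] for word in text.split())

weights = train(past_cvs)

# Two otherwise identical CVs; the only difference is one word.
print(score(weights, "captain of chess club"))          # → 0
print(score(weights, "captain of women's chess club"))  # → -2
```

Every neutral word cancels out across hires and rejections, but “women’s” ends up with a negative weight, so an identical CV scores lower purely for containing it. The same feedback loop, at vastly greater scale and opacity, is what the real-world examples above describe.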

Why This Matters for the Digital Future

AI is increasingly embedded in critical decisions: hiring, credit approvals, medical diagnostics, education, and even justice. If we allow existing biases to remain unchecked, we risk hardcoding inequality into the systems that will define the next century.

There are reasons for optimism. Some organisations are stepping up: Microsoft’s AI & Ethics in Engineering & Research group and Meta’s independent oversight board have outlined frameworks focused on accountability, transparency, fairness, safety, privacy, and inclusiveness.

But ethical frameworks alone won’t solve the problem. Progress requires diverse voices, rigorous oversight, and deliberate effort.

What We Can Do

The lesson from history is clear: change doesn’t happen by accident. It takes awareness, advocacy, and action. So, what does tackling AI bias look like in practice?

  1. Increase representation in development: AI teams must reflect the diversity of the societies they serve. Different perspectives help catch blind spots.
  2. Strengthen data integrity: Clinical trials, safety tests, and datasets must represent whole populations, not just historical defaults.
  3. Demand accountability: Algorithms that make decisions about people’s lives should be transparent, explainable, and held to account.
  4. Encourage active participation: Individuals can make a difference by engaging with research, providing feedback in digital systems, and challenging biased defaults.
  5. Shift culture: Recognise bias as a social issue, not just a technical one. Amplify underrepresented voices in leadership, academia, and technology.
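
Point 2 above can be made concrete. The sketch below is a minimal, hypothetical representation check (the group shares and tolerance are illustrative assumptions, not real figures) that flags when a dataset under- or over-represents a group relative to the population it is meant to serve:

```python
# Assumed target shares for the population a system serves,
# and the shares actually present in a legacy dataset.
population = {"female": 0.51, "male": 0.49}
dataset    = {"female": 0.28, "male": 0.72}  # e.g. a historical trial cohort

def representation_gaps(dataset_shares, population_shares, tolerance=0.05):
    """Return each group whose dataset share deviates from its
    population share by more than `tolerance`, with the signed gap."""
    return {
        group: round(dataset_shares.get(group, 0.0) - target, 2)
        for group, target in population_shares.items()
        if abs(dataset_shares.get(group, 0.0) - target) > tolerance
    }

print(representation_gaps(dataset, population))
# → {'female': -0.23, 'male': 0.23}
```

A check like this is trivial to run before training or procurement, and it makes the “historical default” visible as a number rather than an assumption, which is exactly the kind of deliberate, auditable step the list above calls for.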

How Apira, an IQVIA Business, Can Help

At Apira, an IQVIA business, we’ve seen first-hand how digital transformation can unlock opportunity, but also how easily bias can creep in if it isn’t addressed early.

Our work with NHS trusts and other public-sector and private healthcare organisations gives us a unique perspective: technology is never just about systems, it’s about people. That means considering representation, fairness, and accessibility at every stage of the digital journey.

  • We help organisations assess risks of bias in digital tools and processes.
  • We support the design of inclusive data strategies that reflect diverse populations.
  • We bring together technology expertise and deep healthcare knowledge to ensure AI is used responsibly, safely, and effectively.
  • We work with leadership teams to embed governance and accountability, so digital transformation serves everyone, not just the “average” user.

The future of AI doesn’t have to repeat the mistakes of the past. By combining expertise, ethics, and inclusivity, we can create systems that reflect our best values, not our worst habits. Contact us today at info@apira.co.uk to find out more.

A Final Thought

History teaches us that progress is possible, but never inevitable. Women gained the right to vote, to own property, to work, and to live with autonomy because generations before us refused to accept inequality as “normal.”

The same is true now. AI will shape the next century as profoundly as suffrage shaped the last. If we want it to be fair, inclusive, and humane, we must demand it and design it that way.

Bias in AI is not just a technical flaw; it’s a mirror of society. The question is: what reflection do we want to see?

About the author

Lisa has many years’ experience of programme and project management, alongside business change, within the NHS and software-supplier environments. Lisa has implemented EPRs and clinical systems (such as theatres, maternity and ED) across the country, from Cornwall to Liverpool.

Lisa is also a qualified Neuro Linguistic Programming Practitioner and Hypnotherapist. NLP is used to help us understand how the language we use influences the way we think, and can bring positive outcomes.

Outside of work, Lisa has a daughter and two grandchildren who live in France, and a daughter who lives on a farm on the edge of the New Forest. Lisa loves to craft, with sewing being one of her favourites.