In 2020, the European Commission released a white paper titled “On Artificial Intelligence – A European Approach to Excellence and Trust”. While it sparked debates on regulation, technology, and competition, one crucial aspect went largely unnoticed: the urgent need to prevent AI from perpetuating prohibited discrimination. Fast forward three years to the rise of ChatGPT and the increasing investment in artificial general intelligence (AGI), and it becomes clear that gender bias in AI demands our immediate attention.
As AI seeps further into every aspect of business and society, we cannot afford to ignore the gender conversation. Failing to address this issue risks entrenching discrimination and bias in our systems. Addressing it requires a thorough examination of how AI is developed, the data it relies on, and how we identify and tackle bias within its code.
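To make the idea of identifying bias in an AI system concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from the article: the data is synthetic, the “screening model” is hypothetical, and the check shown (comparing outcome rates across gender groups, a demographic-parity-style measure) is just one of many possible fairness checks.

```python
# Illustrative sketch with synthetic data: compare outcome rates
# across gender groups in a hypothetical screening model's output.
from collections import defaultdict

# Hypothetical (gender, predicted_hire) pairs.
predictions = [
    ("female", 0), ("female", 1), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

totals, positives = defaultdict(int), defaultdict(int)
for gender, predicted_hire in predictions:
    totals[gender] += 1
    positives[gender] += predicted_hire

# Selection rate per group and the gap between groups
# (a demographic-parity-style check).
rates = {g: positives[g] / totals[g] for g in totals}
gap = abs(rates["female"] - rates["male"])

print(f"Selection rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")
```

A large gap in this toy example would flag that the model favours one group; real audits would go further, looking at the training data, error rates, and downstream impact rather than a single number.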
The issue has also been raised by Gabriela Ramos, Assistant Director-General for the Social and Human Sciences at UNESCO, at the World Economic Forum (WEF). Ramos highlights a critical concern: women are excluded at every stage of the AI lifecycle, and the resulting gender gap risks creating an immensely unequal economic and technological system in an era of rapid digitalisation.
The statistics surrounding this issue are alarming: male ICT graduates outnumber women by 400%, women represent only around 33% of the workforce at large global technology firms and just 22% of AI professionals, women author only 14% of AI research papers, and women-led firms receive a paltry 2% of venture capital funding. These figures underscore the urgent need to address gender bias in AI and to promote inclusivity and diversity in the field.
The presence of gender bias is evident in these statistics and raises legitimate concerns that it will be reflected in AI systems. We must acknowledge that this bias exists, recognise the significant risk it poses of further entrenching discriminatory practices, and unite in our efforts to actively address and mitigate it, says Collard. By fostering awareness, collaboration, and proactive measures, we can work towards a more equitable and inclusive AI landscape that benefits everyone.
There are already several women raising red flags in this space. In an article published in the Gender, Technology and Development journal, Subadra Panchanadeswaran, a professor at the Adelphi University School of Social Work, and Ardra Manasi from the Centre for Women’s Global Leadership at Rutgers University, raised concerns about gender bias in AI. They emphasised the need for ethical frameworks and the inclusion of gender equality in the development of AI, government policies, and overall approaches to equality.
Anu Madgavkar, in a McKinsey analysis, shares a similar sentiment. She argues that it is crucial to look beyond the code and the data AI learns from, and to examine how the technology affects women’s lives, jobs, and careers. Madgavkar notes that women, along with other groups, will need to adapt to and learn these technologies, with an estimated 160 million women requiring occupational transitions due to AI.
AI has the potential to revolutionise various industries and improve our lives in many ways, but it is crucial to address the risks that come with it, such as gender and racial bias. By acknowledging these concerns, we can work towards AI systems that challenge gender norms rather than reinforce them, and make freedom from bias and the promotion of diversity fundamental design objectives.
This is the fervent hope of most researchers and analysts as they look ahead. AI has the potential to help dismantle the gender divide and deliver the equality that has remained elusive for so long. However, this requires conscious effort to remove bias from data collection and to bring more diversity to technology and AI development, and overcoming this challenge remains a priority for the field as a whole.
In her book Invisible Women: Exposing Data Bias in a World Designed for Men, author Caroline Criado Perez describes the adverse effects on women caused by gender bias in big data collection.
As described in Perez’s book, women are 47% more likely than men to sustain serious injuries in a comparable car crash, largely because car safety features such as seatbelts are designed around male-centric data. Now imagine those biases being replicated across various layers of society and business by AI, and the implications become all too real.