Only Our Conscious, Collective Action Can Counteract the Gender Biases and Inequalities Baked Into Artificial Intelligence

31 May 2024

The growing prevalence and influence of artificial intelligence (AI) in our daily lives is undeniable. On the surface, AI appears to serve as a value-free, hyper-logical and widely accessible tool. But the deeper we delve into its technical and social workings, the more biases we unearth, with gender a significant factor at every stage and every level. Mitigating these biases will require a significant collective effort. For this to be both fair and effective, we must ensure that women have an equal chance to take part in shaping the future of AI and other emerging technologies, writes UNIDIR Director Dr. Robin Geiss.

The inequities of AI begin with simple access. It is sobering to remember that 37 per cent of the world’s population – some 2.9 billion people – have never even used the internet, let alone the kind of large language models to which we are already becoming accustomed in the developed world. Those who have used the internet are significantly more likely to be men, especially in less developed countries, in Africa, and in Arab States.

Because it is user data that fuels the training of AI models, this unequal access quickly translates into a second problem: unequal representation in the worldviews of the models themselves. As Dr. Ingvild Bode put it at a recent UNIDIR event on military AI, if an AI is implicitly told that physicists tend to be men, then the physicists that we ask it to imagine will also be men, reproducing and reinforcing existing biases. This is compounded by the fact that many aspects of the design and development process are carried out by humans whose choices around data classification, feature selection, and guidance of the algorithm are not made in a social vacuum. It comes as no surprise then that 44.2 per cent of publicly available AI systems exhibit gender bias.

Biases baked into AI in this way can lead to gendered outcomes wherever it is used, including in the military domain. As Dr. Katherine Chandler points out, this could affect everything from translation to recruitment, from disaster relief to targeting. Worse still, scrutiny of potential biases can be undermined by a general tendency to defer to automated systems like those built around AI.

One key place where productive scrutiny is applied to weaponry of all kinds is within the international community, via arms-control, non-proliferation and disarmament forums. But even here gender inequalities come into play.

As UNIDIR’s Gender and Disarmament Hub reveals, just a third of accredited diplomats are women, and this figure drops to a mere 25 per cent when we consider only heads of delegations. If the same arenas that allow policymakers to address gender bias in artificial intelligence fail to represent the full spectrum of genders affected, then the task becomes all the more difficult.

This is because gender diversity in multilateral bodies and negotiations broadens the diversity of perspectives represented, which in turn enhances effectiveness. Research shows that diversity fosters careful information processing, which is often more limited in homogeneous groups. Those on the receiving end of negative gendered outcomes are also well placed to help others understand what’s behind them.

On top of the basic unfairness of unequal participation, this is another clear reason why the international community must make a conscious, collective effort to redress these imbalances. I am proud to say that the institution I lead, the United Nations Institute for Disarmament Research, is doing exactly that in the field of AI.

Fellows had the chance to exchange ideas with experts from the Swiss Federal Institute of Technology in Lausanne (EPFL), including on efforts to create the world’s most powerful AI supercomputer (© 2024, UNIDIR/Natalie Joray)

This week, we welcomed the first cohort of our Women in AI Fellowship, which provides women diplomats with the know-how, skills and resources required to engage effectively in multilateral discussions on AI. As I write, the Fellows are gaining a firm grounding in AI technology, security-related applications of AI, risks to international security, the state of the art in related multilateral processes, and issues of gender in military AI.

Beyond the classroom, Fellows have met with industry representatives, quizzed academics in the field, and taken part in ITU’s AI for Good Summit, getting a complete picture of AI’s design, deployment and governance ecosystems in the process. And through the Fellowship, they are joining a network of women diplomats with whom they can pool the full diversity of their experiences, insights and lessons learned, both now and in the future.

Crucially, that diversity is also global. We received more than 100 applications from 56 countries, with the final group representing more than 30 countries and every region of the world. We are confident they will go on to make a significant impact on the international security processes that will ultimately allow us all to reap the benefits of AI while mitigating the risks that come along with it.

It is undeniable that we are living through a phase of incredible technological progress, but it is one that is also shot through with social inequalities. It is only by using our conscious, collective human intelligence to advance initiatives like the UNIDIR Women in AI Fellowship that we can begin to redress these inequalities and address the biases that have dogged the development, application and governance of AI to date.