Artificial intelligence (AI) tools have become increasingly popular in applications ranging from natural language processing to generative AI platforms. However, a recent report led by researchers from UCL has shed light on a concerning issue: these AI tools discriminate against women and against individuals from diverse cultural and sexual backgrounds. The study, commissioned by UNESCO, examined Large Language Models (LLMs) including OpenAI’s GPT-3.5 and GPT-2 and Meta’s Llama 2, and found clear evidence of bias against women in the content these models generate.
The Role of Stereotyping in AI-generated Text
The findings of the report highlighted stereotypical associations between female names and words like “family,” “children,” and “husband,” reinforcing traditional gender roles. In contrast, male names were more likely to be linked to words such as “career,” “executives,” and “management.” This gender-based stereotyping extended to notions about culture and sexuality, with negative stereotypes perpetuated in the generated text.
Particularly concerning was the finding that open-source LLMs tended to assign men a more diverse range of high-status jobs, such as “engineer” or “doctor,” while relegating women to roles historically undervalued or stigmatized, such as “domestic servant” or “prostitute.” Stories generated by Llama 2 reflected similar gender biases: narratives about boys and men centered on words like “treasure,” “woods,” and “adventurous,” whereas stories about women focused on “garden,” “love,” and “husband.”
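The kind of word-association probing described above can be illustrated with a minimal sketch. Note that this does not reproduce the report's actual methodology; `generate_story` is a hypothetical stand-in for calls to a real model such as Llama 2, and the lexicon categories are illustrative only.

```python
# Minimal sketch of a word-association bias probe, assuming a text-generation
# function is available. generate_story() below is a hypothetical stub, not
# the study's method.
from collections import Counter

# Illustrative lexicons echoing the associations reported in the study.
STEREOTYPE_LEXICON = {
    "domestic": {"family", "children", "husband", "garden", "love"},
    "agentic": {"career", "executive", "management", "engineer", "adventurous"},
}

def generate_story(prompt: str) -> str:
    """Hypothetical stub; in a real probe this would call an LLM."""
    canned = {
        "Write a story about a woman.":
            "She tended the garden and thought of her husband and children.",
        "Write a story about a man.":
            "He left his executive career for an adventurous trek in the woods.",
    }
    return canned[prompt]

def association_counts(prompt: str, n_samples: int = 1) -> Counter:
    """Count lexicon hits in the stories generated for one prompt."""
    counts = Counter()
    for _ in range(n_samples):
        words = {w.strip(".,").lower() for w in generate_story(prompt).split()}
        for category, lexicon in STEREOTYPE_LEXICON.items():
            counts[category] += len(words & lexicon)
    return counts
```

In an actual study, many samples per prompt would be generated and the category counts compared across gendered prompts, with a large gap between the “domestic” and “agentic” tallies indicating the stereotyped associations the report describes.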
Dr. Maria Perez Ortiz, a researcher from UCL Computer Science and a member of the UNESCO Chair in AI team, emphasized the need for an ethical overhaul in AI development. She stressed the importance of AI systems reflecting the diverse tapestry of human experience and uplifting gender equality rather than undermining it. The call for ethical AI development was echoed by Professor John Shawe-Taylor, lead author of the report, from UCL Computer Science and UNESCO Chair in AI at UCL.
The UNESCO Chair in AI team, in collaboration with UNESCO, aims to raise awareness about the issue of gender bias in AI tools and work on developing solutions. This involves engaging with AI scientists, developers, tech organizations, and policymakers through workshops and events. Professor Shawe-Taylor stressed the need for a global effort to address AI-induced gender biases and create technologies that uphold human rights and gender equity.
The report was presented at the UNESCO Digital Transformation Dialogue Meeting and at the United Nations headquarters, underlining the urgency of tackling gender bias in AI technologies. Professor Drobnjak, Professor Shawe-Taylor, and Dr. Daniel van Niekerk played key roles in advocating for inclusive and ethical AI development, emphasizing the need to challenge the historical underrepresentation of women in fields like science and engineering and to recognize their capabilities.
The findings of the report underscore the critical need to address gender bias in AI tools and to promote ethical development practices. By actively building AI technologies that reflect diversity and uphold gender equality, stakeholders across sectors can work together to challenge existing biases and pave the way for a more inclusive and equitable AI landscape.