Gender Bias in Artificial Intelligence and Digital Feminism: Empowering Women Through Digital Literacy

Premier Science Research Team
Premier Journal of AI & Society

This paper investigates the interplay between gender bias in AI systems and the potential of digital literacy to empower women in technology. Synthesizing research from 2010-2024, it examines how gender bias manifests in AI and the effectiveness of digital literacy initiatives.

📋 Abstract

Systemic gender biases are embedded in AI applications across diverse domains, such as recruitment, healthcare, and financial services. Only 22% of AI workers globally are women, 44% of AI systems show gender bias, and 25% exhibit both gender and racial bias.

🔑 Keywords

AI bias; digital feminism; algorithmic discrimination; digital literacy; technology critique

Introduction: Gender Justice in the Algorithmic Age

Artificial intelligence has become an integral part of everyday life, permeating nearly every aspect of society at a global scale. Given this permeation, combined with concerns about bias in and through AI, critical scholarship on AI has gained traction in the humanities and social sciences. Among the most powerful critiques of AI are scholarly works inspired by U.S. Black feminism.

A narrative review published in January 2025 investigates the interplay between gender bias in artificial intelligence (AI) systems and the potential of digital literacy to empower women in technology. The study examines how gender bias manifests in AI, its impact on women’s participation in technology, and the effectiveness of digital literacy initiatives in addressing these disparities by synthesizing research from 2010 to 2024.

Manifestations of AI Gender Bias

Historical Bias in Data

One of the core problems with AI systems is their reliance on historical data for training. This data often reflects and perpetuates existing social biases. When AI systems are trained on historically biased data, they can reproduce and even amplify gender disparities.

A study by the Berkeley Haas Center for Equity, Gender and Leadership analyzed 133 AI systems across different industries and found that about 44 percent showed gender bias, and 25 percent exhibited both gender and racial bias. These biases are not accidental glitches but structural outcomes of how systems are designed and trained.

Gender Stereotypes in Language Models

Analysis of large language models reveals disturbing patterns. Women were described as working in domestic roles far more often than men—four times as often by one model—and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary”, and “career”.

These associations are not merely harmless statistical patterns. They influence how AI systems make decisions that affect real people—from hiring recommendations to loan approvals, from medical diagnoses to legal judgments.
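Associations like these can be quantified. The sketch below is illustrative only: it uses an invented mini-corpus of four sentences standing in for sampled model outputs, and simply counts sentence-level co-occurrences between gendered pronouns and stereotyped attribute words. A real audit would sample many generations from the model under test and use a statistical association measure.

```python
from collections import Counter

# Hypothetical mini-corpus standing in for sampled model outputs.
corpus = [
    "she stayed home with the family and children",
    "he discussed salary and career at the business",
    "she managed the home while he chaired the executive meeting",
    "he negotiated a salary raise for his career",
]

FEMALE, MALE = {"she", "her"}, {"he", "his"}
DOMESTIC = {"home", "family", "children"}
PROFESSIONAL = {"business", "executive", "salary", "career"}

def cooccurrence(pronouns, attributes):
    """Count sentences where a pronoun and an attribute word co-occur."""
    count = 0
    for sentence in corpus:
        words = set(sentence.split())
        if words & pronouns and words & attributes:
            count += 1
    return count

# A skew in these counts mirrors the stereotyped associations described above.
print("female-domestic:", cooccurrence(FEMALE, DOMESTIC))
print("male-professional:", cooccurrence(MALE, PROFESSIONAL))
```

Even on this toy scale, the asymmetry in the counts is visible; at corpus scale, such skews feed directly into model behavior.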

Industry-Specific Biases

Recruitment and HR: AI-based hiring tools can systematically disadvantage women. Amazon famously abandoned an AI recruiting tool that showed bias against women, downgrading resumes that included the word “women’s” (as in “women’s chess club captain”).

Voice Recognition: Voice recognition systems often perform worse on female voices. Studies show that leading voice recognition systems have accuracy rates 13% higher for male voices than female voices.
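Accuracy gaps of this kind are typically measured with word error rate (WER), computed per demographic group. Below is a minimal, generic WER implementation using word-level edit distance; the transcripts are invented examples, and a real evaluation would aggregate WER over many utterances grouped by speaker gender.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical transcript pair: one substitution in five words.
print(wer("turn on the kitchen lights", "turn on the kitten lights"))  # 0.2
```

Comparing the mean WER for female-voiced utterances against male-voiced ones makes the reported accuracy gap directly measurable.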

Healthcare: AI diagnostic tools may misdiagnose or underdiagnose women’s health conditions based on biased training data from historical underrepresentation of women in medical research.

Financial Services: Credit scoring algorithms and loan approval systems may systematically disadvantage women, particularly those with caregiving responsibilities or non-linear career paths.

Structural Causes Analysis

Underrepresentation in the AI Workforce

Only 22% of AI workers around the world are women. This underrepresentation creates cascading effects:

Design Blind Spots: Male-dominated teams may not be aware of or prioritize issues affecting women.

Insufficient Testing: Products may not be adequately tested across diverse user groups, leading to poorer performance for certain demographics.

Cultural Biases: Workplace cultures may inadvertently exclude or marginalize women, perpetuating the cycle of underrepresentation.

Algorithmic Design Choices

Gender biases stem from factors including the under-representation of women in AI development teams, biased training datasets, and algorithmic design choices. These are not neutral technical decisions but choices that reflect and reinforce social values and priorities.

For example, algorithms that optimize for engagement or click-through rates may inadvertently promote content that reinforces gender stereotypes, as such content often generates higher engagement due to confirmation bias.

The Rise of Digital Feminism

Technology as a Tool of Resistance

Despite these challenges, digital technology has also become a powerful tool for feminist resistance. Digital feminism encompasses the use of digital platforms and tools to advance gender equality and challenge patriarchal structures.

The activism of computer scientists, authors, and activists Joy Buolamwini and Timnit Gebru, who explicitly promote a Black feminist perspective while also raising awareness of the experiences of trans users of technology, has shaped regulatory efforts around facial recognition technology. Buolamwini has engaged the central institutions of the American state by testifying in Congress about racial and gender bias in facial recognition technology.

Female Perspectives and Activism

Empirical data gathered from female users’ digital dialogues highlight that the most prominent topics of interest are the future of AI technologies and the active role of women in guaranteeing gender-balanced systems. Algorithmic bias shapes how female users respond to injustice and inequality in algorithmic outcomes: they share topics of interest and lead constructive conversations with profiles affiliated with gender or race empowerment associations. Women challenged by stereotypes and prejudice are also likely to fund entrepreneurial solutions that create opportunities for change.

Social Media as Organizing Space

A 2025 study by Fraile-Rojas and colleagues explores the use of natural language processing (NLP) techniques and machine learning (ML) models to discover underlying concepts of gender inequality applied to artificial intelligence (AI) technologies in female social media conversations.
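The study's actual NLP pipeline is not reproduced here, but the core idea of surfacing recurring concepts from conversation text can be sketched with simple term-frequency analysis. The posts below are invented stand-ins for the study's social media corpus, and the stopword list is an ad hoc example; real work would use a proper tokenizer, a full stopword list, and topic modeling or clustering.

```python
import re
from collections import Counter

# Hypothetical posts standing in for a social media corpus.
posts = [
    "The future of AI must include women in every design team",
    "Algorithmic bias in hiring hurts women and minorities alike",
    "We need gender balanced AI systems, not biased training data",
    "Women in tech communities are organizing against AI bias",
]

# Ad hoc stopword list for this sketch only.
STOPWORDS = {"the", "of", "must", "in", "every", "and", "we",
             "need", "not", "are", "against", "alike"}

def top_terms(texts, k=5):
    """Tokenize, drop stopwords, and return the k most frequent terms."""
    tokens = []
    for text in texts:
        tokens += [w for w in re.findall(r"[a-z]+", text.lower())
                   if w not in STOPWORDS]
    return Counter(tokens).most_common(k)

print(top_terms(posts))
```

Terms like “women”, “ai”, and “bias” dominate even this toy corpus, which is the kind of signal that more sophisticated ML models then organize into coherent discussion themes.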

Social media platforms have become spaces for:

Awareness Raising: Sharing information and experiences about AI bias.

Collective Action: Organizing campaigns and initiatives to challenge sexism in tech.

Knowledge Production: Crowdsourcing research and documentation of bias instances.

Support Networks: Building communities and mentorship for women in tech.

Digital Literacy as Intervention

Transformative Tool

Digital literacy programs emerge as a promising intervention, fostering a critical awareness of AI bias, encouraging women to pursue AI careers, and catalyzing growth in women-led AI projects. Although gender bias in AI poses significant challenges, this review highlights digital literacy as a transformative tool for achieving gender equity in AI development and application.

Digital literacy is not just about technical skills; it encompasses:

Critical AI Literacy: Understanding how AI systems work, their limitations, and how bias can be embedded in them.

Data Literacy: Understanding how data is collected, processed, and used in decision-making.

Algorithmic Awareness: Recognizing when algorithms are making decisions that affect people’s lives.

Digital Rights Knowledge: Understanding privacy, consent, and rights in digital environments.

Successful Digital Literacy Programs

Effective digital literacy programs share several key features:

Intersectional Approach: Recognizing that gender intersects with race, class, age, and other identity factors.

Hands-on Learning: Providing practical experience with AI tools and technologies.

Critical Perspective: Encouraging questioning and challenging of existing systems, not just adaptation to them.

Community Building: Creating support networks and mentorship opportunities.

Policy Engagement: Connecting technical skills with advocacy and policy change.

Institutional and Policy Responses

Regulatory Frameworks

Governments and international organizations worldwide are developing frameworks to address AI bias:

EU AI Act: Includes specific provisions against discrimination and requires transparency in high-risk AI systems.

UNESCO Recommendation on the Ethics of AI: Emphasizes gender equality and inclusion of women at all stages of AI development.

National Initiatives: Countries are developing their own AI ethics guidelines and regulations, increasingly incorporating gender considerations.

Corporate Responsibility

Tech companies face increasing pressure to address gender bias in their products:

Diversity Initiatives: Programs to increase women’s representation in AI teams.

Bias Audits: Regular testing and evaluation of AI systems for discriminatory outcomes.
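One widely used audit metric is the disparate impact ratio: the selection rate for a protected group divided by that of the most favored group. The sketch below uses hypothetical hiring decisions; a real audit would draw on the system's logged outcomes broken down by demographic group.

```python
# Hypothetical decisions from a hiring model, labeled by group.
decisions = [
    {"group": "women", "hired": True},  {"group": "women", "hired": False},
    {"group": "women", "hired": False}, {"group": "women", "hired": False},
    {"group": "men",   "hired": True},  {"group": "men",   "hired": True},
    {"group": "men",   "hired": False}, {"group": "men",   "hired": False},
]

def selection_rate(records, group):
    """Fraction of candidates in `group` receiving a positive outcome."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in subset) / len(subset)

# The US EEOC "four-fifths rule" flags ratios below 0.8 as potential
# evidence of adverse impact.
ratio = selection_rate(decisions, "women") / selection_rate(decisions, "men")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.50 = 0.50 -> flagged
```

Running such checks on every model release, across multiple demographic slices, is what turns a one-off fairness review into a routine bias audit.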

Transparency Reports: Publishing data on workforce diversity and AI system performance.

Ethics Boards: Establishing oversight bodies to review AI development and deployment.

Global Perspectives and Cultural Considerations

Challenges in the Global South

Women in developing countries face compounded challenges regarding AI and digital technology:

Infrastructure Barriers: Limited internet access and digital devices.

Educational Gaps: Fewer opportunities for STEM education.

Cultural Norms: Social expectations that limit women’s participation in technology.

Economic Constraints: Cost of digital tools and education.

Indigenous and Decolonized Approaches

There is growing recognition of the need for diverse approaches to technology development that don’t simply replicate Western models:

Indigenous AI: Incorporating indigenous knowledge systems and values into AI development.

Community-Led Solutions: Supporting local initiatives that address specific gender inequalities.

Linguistic Diversity: Developing AI systems that work for non-English languages and cultures.

Future Directions and Recommendations

Inclusive AI Design

The study highlights the importance of inclusive AI design, gender-responsive education policies, and sustained research efforts to mitigate bias and promote equity.

Inclusive design principles include:

Diverse Teams: Ensuring gender, racial, and background diversity in AI development teams.

Participatory Design: Involving diverse user groups in the design process.

Intersectional Testing: Evaluating AI systems’ impact on different intersectional identities.

Continuous Monitoring: Implementing systems to detect and correct biases as they emerge post-deployment.

Educational Reform

Fundamental changes to education systems are needed:

Early Intervention: Introducing computational thinking and AI literacy from primary school.

Curriculum Reform: Integrating gender studies and ethics into technical education.

Teacher Training: Equipping educators with tools to address gender bias in classrooms.

Role Models and Mentorship: Highlighting successful women technologists and creating mentorship programs.

Research Agenda

Future research should focus on:

Long-term Impact: Tracking long-term outcomes of digital literacy interventions.

Intersectional Analysis: Examining how gender intersects with other identity factors to affect AI experiences.

Global Comparative Studies: Understanding how gender bias manifests in different cultural contexts.

Technical Innovation: Developing new tools and methods to detect and mitigate bias.

Conclusion: Toward a Gender-Just AI Future

The research from 2025 demonstrates that while gender bias in AI remains a significant challenge, there is growing academic attention to digital feminism approaches, with emphasis on digital literacy, inclusive design, and intersectional frameworks as key strategies for addressing algorithmic discrimination.

Ethical AI involves taking an intersectional approach when addressing questions around gender, race, ethnicity, socioeconomic status, and other determinants, in addition to adopting a human rights-based approach to AI governance premised on transparency, accountability, and human dignity. Different stakeholders, including business and corporate entities, tech companies, academia, UN entities, civil society organizations, media, and other relevant actors should come together and explore joint solutions.

As the research shows, while the challenges are real and significant, the potential for change through coordinated effort, critical awareness, and commitment to equity is equally powerful. The future is not predetermined; by recognizing and addressing gender bias in AI, we can work toward a more just digital future where technology serves everyone, regardless of gender.

The path to gender-just AI requires a combination of technical innovation, policy intervention, educational reform, and cultural change. It is not merely a technical challenge but a social one that requires collective action from all stakeholders. Only through this comprehensive approach can we ensure that the AI revolution does not perpetuate the inequalities of the past but instead contributes to creating a more equitable future.
