More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech

A powerful work by Meredith Broussard, NYU professor, data scientist, and one of the few Black women working in AI research. The book reveals how tech neutrality is a myth and why algorithms need accountability, drawing on examples that range from facial recognition trained only on lighter skin tones, to mortgage algorithms that encourage discriminatory lending, to dangerous feedback loops in medical diagnostic algorithms. The solution isn’t making omnipresent tech more inclusive, but rooting out algorithms that treat certain demographics as “other.”

📝 Book Review

When technology reinforces inequality, it’s not just a glitch—it’s a signal that we need to redesign our systems to create a more equitable world. The word “glitch” implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren’t just bugs in mostly functional machinery—what if they’re coded into the system itself? In the vein of heavy hitters such as Safiya Umoja Noble, Cathy O’Neil, and Ruha Benjamin, Meredith Broussard demonstrates in “More Than a Glitch” how neutrality in tech is a myth and why algorithms need to be held accountable. Broussard, a data scientist and one of the few Black female researchers in artificial intelligence, masterfully synthesizes concepts from computer science and sociology. She explores a range of examples: from facial recognition technology trained only to recognize lighter skin tones, to mortgage-approval algorithms that encourage discriminatory lending, to the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Even when such technologies are designed with good intentions, Broussard shows, fallible humans develop programs that can result in devastating consequences.

Meredith Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology. Her research focuses on investigative reporting with and about artificial intelligence. Broussard arrived at Harvard College in 1991, initially studying computer science as one of only six undergraduate women in that concentration, but left computer science and graduated with a degree in English, having also taken courses in African American studies. She was previously a features editor at The Philadelphia Inquirer and a software developer at AT&T Bell Labs and MIT Media Lab. Her first book, “Artificial Unintelligence: How Computers Misunderstand the World” (published April 2018 by MIT Press), examines technology’s limits in solving social problems. The book was awarded the Hacker Prize by the Society for the History of Technology and the 2019 PROSE Award for best book in computing & information sciences by the Association of American Publishers. Her second book, “More Than a Glitch,” was published in March 2023. Broussard has published features and essays in The Atlantic, Harper’s Magazine, Slate Magazine, and numerous other outlets. She describes herself as an AI ethics scholar, data journalist, and educator with an extensive technology background. Broussard has called algorithmic bias “the civil rights issue of our time” and appears in the 2020 documentary “Coded Bias.”

The word “glitch” in tech contexts typically means a small error, an anomaly, a problem that can be quickly fixed. When your app crashes or your screen flickers, you say “there’s a glitch.” The word suggests the problem is temporary, incidental, superficial—the underlying system is good, just something went slightly wrong somewhere. But Broussard’s argument is that when it comes to race, gender, and ability bias in AI and algorithmic systems, the word “glitch” seriously underestimates the problem’s nature and severity. These aren’t random errors fixable with patches; they’re systematic, structural, often the result of intentional design choices. They’re not “glitches” but features—core to how these technological systems operate.

Broussard tells this story from her unique position as one of the few Black women AI researchers. In a field dominated by white men, her very existence is an intervention. She critiques AI not only from outside but from inside—as someone who worked at Bell Labs and MIT Media Lab, possessing deep technical expertise. She knows how code is written, algorithms trained, systems deployed. This insider knowledge makes her critique sharper and more credible. She cannot be dismissed as a layperson who doesn’t understand technology; she fully understands technology, and precisely because of this, she can critique it so effectively.

The book’s core argument is: tech neutrality is a myth. There’s a widespread belief that technologies themselves are neutral tools—they can be used for good or bad purposes, but the technology itself has no values or biases. An algorithm is just a series of instructions, a dataset just a collection of numbers, an AI system just a mathematical model—how could they be racist or sexist? But Broussard shows this view ignores how technology is created, by whom, for whom, and in what context. Every technological system embodies its creators’ assumptions, values, and biases. Every algorithm reflects the data it was trained on, and that data reflects the inequalities and prejudices of the society that collected it. Every deployment decision involves judgments about who matters, whose needs are prioritized, who can be marginalized or sacrificed.

Broussard illustrates this argument through a series of compelling and disturbing case studies. She begins with facial recognition technology, a field that has been widely reported on but is still often misunderstood. Facial recognition systems are highly accurate at identifying white male faces but frequently err when identifying women, especially darker-skinned women. Why? Because these systems were trained on datasets consisting primarily of white male faces. The algorithm learns what a “face” looks like from the examples it sees; if most of those examples are white men, it treats them as the standard and everything else as deviation. Researcher Joy Buolamwini found that some commercial facial recognition systems had error rates as high as 34% for darker-skinned women while erring on white men less than 1% of the time.
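
Buolamwini’s finding is easiest to appreciate when accuracy is broken out by group rather than reported as one aggregate figure. Here is a minimal sketch of that kind of disaggregated audit; the groups mirror the ones she studied, but every count is invented for illustration and none of these numbers come from the book or from her study.

```python
# Minimal sketch of a disaggregated error-rate audit: report error rates per
# demographic group instead of a single aggregate number. All counts invented.
eval_counts = {
    # group: (number of test faces, number misidentified)
    "lighter-skinned men":   (1000, 8),
    "lighter-skinned women": (1000, 70),
    "darker-skinned men":    (1000, 120),
    "darker-skinned women":  (1000, 340),
}

total_faces = sum(n for n, _ in eval_counts.values())
total_errors = sum(e for _, e in eval_counts.values())
print(f"aggregate error rate: {total_errors / total_faces:.1%}")  # one number hides the range below

for group, (n, errors) in eval_counts.items():
    print(f"{group:24s} error rate: {errors / n:.1%}")
```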

This isn’t merely a technical problem; it has real consequences. Facial recognition is used in law enforcement, border control, employment screening, financial services, and more. When these systems more easily misidentify Black women, this means these women are more likely to be wrongly arrested, denied services, falsely flagged as suspicious. In the US, there have been multiple cases of Black men wrongly arrested due to facial recognition system mismatches. These errors aren’t “glitches”; they’re foreseeable results of a system designed to work best on white males being deployed in a racialized law enforcement environment.

Broussard also explores mortgage approval algorithms, revealing more insidious forms of algorithmic discrimination. On the surface, using algorithms to decide who gets loans seems like a good way to eliminate human bias—algorithms won’t be influenced by racist assumptions like human loan officers, right? But it turns out algorithms can and do perpetuate and amplify systemic racism. Research shows that even controlling for income, credit scores, and other “legitimate” factors, Black and Latino applicants are still more likely to be denied mortgages or charged higher interest rates. Why? Because algorithms are trained on historical data reflecting decades of discriminatory lending practices. If banks historically rejected applicants from certain zip codes (which happen to be predominantly Black), the algorithm learns zip codes are predictors of loan default, thus perpetuating the same discriminatory patterns.

More insidiously, algorithms can find proxy variables—variables correlated with race or gender but ostensibly “neutral”—to achieve discriminatory outcomes without obviously violating anti-discrimination laws. For example, an algorithm might consider your social media friend network, language you use, websites you visit—all highly correlated with race and class, but none explicitly “race.” The result is a system that appears objective and data-driven but actually reinforces and automates racism and sexism.
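
To make the proxy mechanism concrete, here is a small synthetic sketch (every feature, rate, and number is invented, and this is not code from the book): an approval rule that never sees race, only zip code, still reproduces the historical disparity, because residential segregation makes zip code a stand-in for race.

```python
# Synthetic sketch: a "race-blind" rule built from biased historical approvals
# still discriminates, because zip code acts as a proxy for group membership.
import random

random.seed(0)

def applicant():
    group = random.choice(["A", "B"])
    # Residential segregation: group strongly predicts zip code.
    zip_code = "10001" if random.random() < (0.9 if group == "A" else 0.1) else "10002"
    # Historical lending was biased: group A was approved far more often.
    approved_before = random.random() < (0.95 if group == "A" else 0.30)
    return group, zip_code, approved_before

history = [applicant() for _ in range(10_000)]

def past_approval_rate(z):
    rows = [a for a in history if a[1] == z]
    return sum(a[2] for a in rows) / len(rows)

# "Model": approve applicants from the zip code with the better track record.
favored_zip = max(("10001", "10002"), key=past_approval_rate)

for g in ("A", "B"):
    rows = [a for a in history if a[0] == g]
    new_rate = sum(a[1] == favored_zip for a in rows) / len(rows)
    print(f"group {g}: approval rate under the 'race-blind' rule: {new_rate:.0%}")
```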

In healthcare, Broussard documents the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Many medical AI systems were developed on data primarily containing white male patients, because historically clinical trials mainly recruited white male participants. The result is algorithms that diagnose diseases well in white men but less accurately in women, people of color, or disabled people. For example, heart disease often presents differently in women than men, but if the algorithm is primarily trained on male data, it might miss heart disease symptoms in women. Skin cancer detection algorithms trained on lighter skin might fail to recognize cancer signs on darker skin.

Worse, this creates a self-reinforcing cycle. If algorithms perform poorly at diagnosing certain groups, those groups may lose confidence in the technology and participate less in future research or stop using the services. That in turn means less data is collected about them, making the algorithms harder to improve. Meanwhile, if algorithms misdiagnose certain groups, doctors may begin to doubt the symptoms those patients report, assuming “the technology says it’s fine, so the problem must be with the patient.” This perpetuates patterns of racism and sexism that already exist in medicine, where the pain and symptoms of women and people of color have long been minimized or dismissed.
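
A toy simulation makes the loop visible. Everything here is invented, including the crude assumption that a group’s diagnostic accuracy rises and falls with its share of the training data, but it shows how a modest initial gap compounds year over year.

```python
# Toy feedback-loop simulation (all numbers and assumptions are invented):
# underrepresentation lowers accuracy, lower accuracy discourages participation,
# less participation shrinks the group's share of future training data.
group_share = 0.20  # underrepresented group's share of the training data

for year in range(1, 6):
    accuracy = 0.50 + 0.50 * group_share      # crude stand-in: accuracy tracks data share
    participation = accuracy / 0.95           # poorer results discourage future use
    group_share *= participation
    print(f"year {year}: group accuracy {accuracy:.0%}, next year's data share {group_share:.1%}")
```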

Broussard also explores automated hiring systems using algorithms to screen resumes and evaluate job candidates. Amazon once developed an AI hiring tool but had to abandon it because it systematically discriminated against female applicants. Why? Because it was trained on the company’s past decade of hiring data, and during that time, the company primarily hired men (especially in technical positions). The algorithm learned what a “good candidate” looked like based on past hiring decisions; because past decisions were sexist, the algorithm became sexist too. It penalized resumes containing the word “women” (e.g., “women’s chess club captain”) and downgraded candidates from women’s colleges.

This example reveals a key point: algorithms cannot fix biased data. If you train an algorithm on data reflecting sexist hiring practices, you get a sexist algorithm. Simply adding more data or “debiasing” the algorithm isn’t enough if the underlying problem—the company’s sexist culture and practices—isn’t addressed. Technology cannot solve social problems; it can only reflect and often amplify them.
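
A tiny synthetic example shows how this happens. The resumes and decisions below are invented, and the “model” is just smoothed word-level log-odds rather than anything Amazon actually built, but any model fit to biased labels learns the same lesson: words associated with women acquire negative weight.

```python
# Sketch: a screener fit to biased historical decisions learns to penalize
# gendered words. Resumes and outcomes are synthetic; the scoring is a
# simple smoothed log-odds, not Amazon's actual system.
import math
from collections import Counter

history = [
    ("captain of men's rugby team, java developer", True),
    ("java developer, hackathon winner", True),
    ("women's chess club captain, java developer", False),
    ("women's engineering society, python developer", False),
    ("python developer, open source contributor", True),
]

hired_words, rejected_words = Counter(), Counter()
for resume, hired in history:
    (hired_words if hired else rejected_words).update(resume.replace(",", "").split())

def score(word):
    """Smoothed log-odds of being hired, given that the resume contains `word`."""
    return math.log((hired_words[word] + 1) / (rejected_words[word] + 1))

for word in ("java", "python", "men's", "women's"):
    print(f"{word:8s} {score(word):+.2f}")
# "women's" ends up with a negative weight purely because past decisions were biased.
```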

Broussard pays particular attention to intersectionality: how race, gender, class, ability, and other axes of identity interact to create unique forms of oppression and marginalization. She notes that when we talk about algorithmic bias, we can’t simply consider “racial bias” or “gender bias” as if these were separate issues. Black women don’t just experience racism plus sexism; they experience a specific form of discrimination that is the product of both and reducible to neither. Similarly, disabled women of color, queer trans people, and elderly immigrants all sit at the intersection of multiple marginalizations, and algorithmic systems often fail worst precisely at those intersections.

This is partly because people at these intersections are usually the most underrepresented in training data. If your dataset underrepresents women, it likely contains even fewer Black women, and fewer still Black disabled women. If you try to “debias” your algorithm by ensuring gender balance but only consider white women, you haven’t addressed racial bias. If you try to address racial bias by including more Black participants but they’re all men, you haven’t addressed gender bias. True inclusivity and fairness require attention to intersectionality: recognizing that people have multiple, interacting identities that shape how they engage with technological systems and how those systems affect them.
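
The numbers below are hypothetical, but they illustrate why per-axis checks are not enough: a model can look only mildly unequal along gender and along race separately while failing far more severely at their intersection.

```python
# Sketch: marginal (per-axis) accuracy understates how badly the worst-off
# intersectional group fares. All numbers are invented for illustration.
accuracy = {
    ("white", "men"):   0.95,
    ("white", "women"): 0.95,
    ("Black", "men"):   0.95,
    ("Black", "women"): 0.75,   # the failure is concentrated here
}
counts = {group: 250 for group in accuracy}  # pretend a balanced test set

def marginal(value, axis):
    """Accuracy averaged over every group matching `value` on the given axis."""
    keys = [k for k in accuracy if k[axis] == value]
    total = sum(counts[k] for k in keys)
    return sum(accuracy[k] * counts[k] for k in keys) / total

for value, axis in [("men", 1), ("women", 1), ("white", 0), ("Black", 0)]:
    print(f"{value:>6}: {marginal(value, axis):.0%}")
print(f"Black women (intersection): {accuracy[('Black', 'women')]:.0%}")
```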

Broussard also critiques “technological solutionism”—the belief that every problem has a technological solution. In Silicon Valley and the tech industry, there’s a tendency to frame social problems as technical problems solvable with better software, smarter algorithms, more data. Poverty? There’s an app for that. Discrimination? Train an algorithm. Inequality? Optimize the system. But Broussard argues this approach fundamentally misunderstands the nature of social problems. Racism, sexism, and ableism aren’t “glitches” fixable with technology; they’re systemic forms of oppression deeply rooted in historical, cultural, economic, and political structures.

Worse, technological solutionism often diverts attention from real solutions. If we focus all our energy on “debiasing” algorithms, we might ignore the need to address the structural inequalities producing biased data. If we focus on making AI more “fair,” we might avoid questioning whether we should use AI in certain high-stakes domains in the first place. If we try to technologically fix discrimination, we might avoid doing the necessary political and social transformation to achieve genuine justice.

So what’s Broussard’s proposed solution? Her argument isn’t to make omnipresent tech more inclusive but to root out algorithms that target certain demographics as “other” to begin with. This is a radical claim: not “let’s fix biased AI” but “let’s question whether we should deploy these AI systems in the first place.” She calls for stricter regulation and accountability of technology. Algorithms shouldn’t be treated as “black boxes” whose operations are opaque to scrutiny. There should be laws requiring transparency: companies should disclose how their algorithms work, what data they’re trained on, how they make decisions. There should be impact assessments: before deploying algorithms, rigorous testing should identify potential biases and harms, especially to marginalized communities. There should be accountability mechanisms: when algorithms cause harm, there should be clear recourse, and companies and developers should be held responsible.
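
As one concrete example of what such pre-deployment testing could include (a generic auditing heuristic, not a procedure taken from the book), the “four-fifths rule” from US employment-discrimination analysis flags any group whose selection rate falls below 80 percent of the most-favored group’s rate.

```python
# Sketch of a pre-deployment disparate-impact check using the four-fifths rule.
# Selection rates are hypothetical; in practice they come from a pilot evaluation.
selection_rates = {"group A": 0.60, "group B": 0.42}

reference = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / reference
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```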

Broussard also calls for diversity in the tech industry: not just token representation but genuine power and decision-making authority. If AI systems are designed only by white men, they will continue to reflect white male assumptions and priorities. We need people from diverse racial, gender, ability, and class backgrounds involved in every stage of technology development, from research to design to deployment to evaluation. And we need these diverse voices not merely “included” but given actual power to shape decisions. This means changing tech education to be more welcoming and supportive of students from underrepresented groups. It means changing hiring and promotion practices and addressing the discrimination and harassment that remain widespread in the tech industry. And it means valuing different kinds of expertise, recognizing that lived experience with inequality and marginalization is a crucial asset in designing equitable technology.

But Broussard also acknowledges diversity alone isn’t a panacea. It’s possible to have a diverse team yet still produce biased technology if structural constraints and incentives remain unchanged. If priorities remain profit maximization, if metrics remain engagement and growth, if culture remains “move fast and break things,” then even with a more diverse team, results may still be harmful. Real change requires not just changing who’s in the room but changing the room’s rules—changing power structures, decision-making processes, values and goals.

Broussard also emphasizes the importance of community engagement and autonomy. Communities most affected by technological systems should have a say in these systems’ design and deployment. This means consulting with communities early in the tech development process, not just soliciting feedback after systems are already built. This means respecting communities’ concerns and priorities even when they conflict with tech companies’ business interests. This means communities should have the right to refuse unwanted technology, opt out of systems that might harm them.

One of the book’s most powerful moments is Broussard’s discussion of facial recognition use in law enforcement. Many civil rights organizations call for a moratorium or outright ban on law enforcement’s use of facial recognition technology due to its low accuracy rates, racial bias, and potential for use in surveillance and targeting communities of color. Broussard supports these calls, arguing there’s no way to make this technology “good enough” or “fair enough” to justify its use in this context. Even if we could improve facial recognition accuracy to 99%, there would still be errors—and these errors would disproportionately affect communities already unfairly targeted by police. And the technology’s very existence changes power dynamics, creating possibilities for mass surveillance and control incompatible with a free society.

This example illustrates Broussard’s broader argument: sometimes the solution isn’t “fixing” technology but not deploying it. Not every problem needs a technological solution, not every technological solution should be implemented even if it’s technically feasible. We need more thoughtful ethical judgments about technology use, recognizing some applications—no matter how “improved” or “debiased”—are fundamentally unacceptable because of the harms they cause.

Broussard’s writing style makes complex technical concepts accessible to non-specialist readers while maintaining rigor and depth. She uses clear examples and analogies to explain how algorithms work, why they can be biased, and how those biases manifest. She also weaves in personal narratives: her own experiences as a Black woman in tech, moments when she encountered bias and discrimination, and how she learned to critically examine a field she once loved. These personal touches make the book emotionally as well as intellectually engaging.

The book converses with works like Safiya Umoja Noble’s “Algorithms of Oppression,” Cathy O’Neil’s “Weapons of Math Destruction,” and Ruha Benjamin’s “Race After Technology,” all critically examining bias and discrimination in technology. But Broussard’s unique contribution lies in her emphasis on intersectionality—considering not just race or gender but ability, class, and how these categories interact. She also particularly focuses on the “glitch” framing itself, challenging our tendency to view technological harms as fixable errors rather than systemic design choices.

For Chinese readers, this book has profound relevance. Though many examples come from American contexts, the questions it raises are global. China is also rapidly deploying AI and algorithmic systems—from facial recognition to social credit scoring to automated decision-making. What assumptions and values do these systems embody? What are their impacts on different groups? Who designs them, who’s affected? Broussard’s analytical framework—recognizing technology isn’t neutral, algorithms reflect and amplify social inequalities, we need to critically examine technology deployment—applies equally to Chinese contexts.

Simultaneously, this book reminds us technology justice is part of global justice. Many AI systems designed in the West are deployed globally, exporting American or European biases and assumptions to the rest of the world. Facial recognition systems deployed in African countries, trained mainly on white faces; lending algorithms used in the Global South, based on Global North economic assumptions. These forms of technological colonialism are the global dimension of tech bias, requiring global cooperation to address.

“More Than a Glitch” is ultimately a book about power and justice. It demands we recognize technology isn’t just about efficiency or innovation but about who has power, who benefits, who’s harmed. It challenges us to move beyond “fixing glitches” thinking to question the systems themselves—their purposes, their assumptions, their impacts. It calls us to see tech justice as inseparable from social justice, recognizing that in a world increasingly shaped by algorithms, the fight against algorithmic bias is part of the broader struggle against racism, sexism, and all forms of oppression.

Meredith Broussard, with her technical expertise as a data scientist and her lived experience as a Black woman, has written a book that is both rigorous and accessible, both critical and constructive. “More Than a Glitch” is essential reading for anyone concerned with technology’s future or with social justice, or who simply wants to understand the algorithmic world we live in. It is a challenge to the tech industry to do better; a call to policymakers for regulation and accountability; and a reminder to everyone that technology isn’t destiny: we can and must shape it to serve justice and equity rather than perpetuate inequality. In an era when technology is often presented as an inevitable force of progress and liberation, Broussard reminds us that genuine progress requires critically examining technology, challenging its biases and harms, and fighting for a world where technology truly serves everyone.

Book Info

Original Title: More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
Author: Meredith Broussard
Published: March 7, 2023
ISBN: 9780262047654
Language: English
