More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
A powerful work by Meredith Broussard, an NYU professor, data scientist, and one of the few Black women working in AI research. The book reveals how tech neutrality is a myth and why algorithms need accountability, moving from facial recognition trained only on lighter skin tones, to mortgage algorithms that encourage discriminatory lending, to dangerous feedback loops in medical diagnostic algorithms. The solution isn't making omnipresent tech more inclusive, but rooting out algorithms that target certain demographics as 'other.'

Book Review
When technology reinforces inequality, it's not just a glitch; it's a signal that we need to redesign our systems to create a more equitable world. The word 'glitch' implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren't just bugs in mostly functional machinery? What if they're coded into the system itself? In the vein of heavy hitters such as Safiya Umoja Noble, Cathy O'Neil, and Ruha Benjamin, Meredith Broussard demonstrates in 'More Than a Glitch' how neutrality in tech is a myth and why algorithms need to be held accountable. Broussard, a data scientist and one of the few Black female researchers in artificial intelligence, masterfully synthesizes concepts from computer science and sociology. She explores a range of examples: from facial recognition technology trained only to recognize lighter skin tones, to mortgage-approval algorithms that encourage discriminatory lending, to the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Even when such technologies are designed with good intentions, Broussard shows, fallible humans develop programs that can result in devastating consequences.
Meredith Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology. Her research focuses on investigative reporting with and about artificial intelligence. Broussard arrived at Harvard College in 1991, initially studying computer science as one of only six undergraduate women in that concentration, but left computer science and graduated with a degree in English, having also taken courses in African American studies. She was previously a features editor at The Philadelphia Inquirer and a software developer at AT&T Bell Labs and MIT Media Lab. Her first book, 'Artificial Unintelligence: How Computers Misunderstand the World' (published April 2018 by MIT Press), examines technology's limits in solving social problems. The book was awarded the Hacker Prize by the Society for the History of Technology and the 2019 PROSE Award for best book in computing and information sciences by the Association of American Publishers. Her second book, 'More Than a Glitch,' was published in March 2023. Broussard has published features and essays in The Atlantic, Harper's Magazine, Slate, and numerous other outlets. She describes herself as an AI ethics scholar, data journalist, and educator with an extensive technology background. Broussard has called algorithmic bias 'the civil rights issue of our time' and appears in the 2020 documentary 'Coded Bias.'
The word 'glitch' in tech contexts typically means a small error, an anomaly, a problem that can be quickly fixed. When your app crashes or your screen flickers, you say 'there's a glitch.' The word suggests the problem is temporary, incidental, superficial: the underlying system is good, something just went slightly wrong somewhere. But Broussard's argument is that when it comes to race, gender, and ability bias in AI and algorithmic systems, the word 'glitch' seriously underestimates the problem's nature and severity. These aren't random errors fixable with patches; they're systematic, structural, often the result of intentional design choices. They're not 'glitches' but features, core to how these technological systems operate.
Broussard tells this story from her unique position as one of the few Black women AI researchers. In a field dominated by white men, her very existence is an intervention. She critiques AI not only from outside but from inside, as someone who worked at Bell Labs and the MIT Media Lab and possesses deep technical expertise. She knows how code is written, algorithms trained, systems deployed. This insider knowledge makes her critique sharper and more credible. She cannot be dismissed as a layperson who doesn't understand technology; she fully understands technology, and precisely because of this, she can critique it so effectively.
The book's core argument is that tech neutrality is a myth. There's a widespread belief that technologies themselves are neutral tools: they can be used for good or bad purposes, but the technology itself has no values or biases. An algorithm is just a series of instructions, a dataset just a collection of numbers, an AI system just a mathematical model; how could they be racist or sexist? But Broussard shows this view ignores how technology is created, by whom, for whom, and in what context. Every technological system embodies its creators' assumptions, values, and biases. Every algorithm reflects the data it was trained on, and that data reflects the inequalities and prejudices of the society that collected it. Every deployment decision involves judgments about who matters, whose needs are prioritized, who can be marginalized or sacrificed.
Broussard illustrates this argument through a series of compelling and disturbing case studies. She begins with facial recognition technology, a field that has been widely reported on but is still often misunderstood. Facial recognition systems are highly accurate at identifying white male faces but frequently err when identifying women, especially darker-skinned women. Why? Because these systems were trained on datasets primarily containing white male faces. The algorithm learned what a 'face' means from the examples it saw; if most of what it saw were white men, it treated that as the standard and viewed everything else as deviation. Researcher Joy Buolamwini found that some commercial facial recognition systems had error rates as high as 34% for darker-skinned women and less than 1% for lighter-skinned men.
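The kind of audit Buolamwini ran boils down to disaggregation: reporting error rates per demographic subgroup instead of one aggregate accuracy figure. A minimal Python sketch of that idea, using a handful of invented records rather than the actual Gender Shades benchmark, might look like this:

```python
from collections import defaultdict

# (group, was_prediction_correct) pairs, e.g. from a labeled benchmark set;
# these records are invented for illustration, not Gender Shades data.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Disaggregated error rates expose disparities an aggregate figure would hide.
for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.0%} over {n} samples")
```

An overall accuracy computed across all records would mask exactly the gap the per-group breakdown exposes.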
This isn't merely a technical problem; it has real consequences. Facial recognition is used in law enforcement, border control, employment screening, financial services, and more. When these systems more easily misidentify Black women, those women are more likely to be wrongly arrested, denied services, or falsely flagged as suspicious. In the US, there have been multiple cases of Black men wrongly arrested due to facial recognition mismatches. These errors aren't 'glitches'; they're the foreseeable results of a system designed to work best on white men being deployed in a racialized law enforcement environment.
Broussard also explores mortgage approval algorithms, revealing more insidious forms of algorithmic discrimination. On the surface, using algorithms to decide who gets loans seems like a good way to eliminate human bias; algorithms won't be influenced by racist assumptions the way human loan officers are, right? But it turns out algorithms can and do perpetuate and amplify systemic racism. Research shows that even after controlling for income, credit scores, and other 'legitimate' factors, Black and Latino applicants are still more likely to be denied mortgages or charged higher interest rates. Why? Because the algorithms are trained on historical data reflecting decades of discriminatory lending practices. If banks historically rejected applicants from certain zip codes (which happen to be predominantly Black), the algorithm learns that zip codes are predictors of loan default, thus perpetuating the same discriminatory patterns.
More insidiously, algorithms can find proxy variables, features that are correlated with race or gender but ostensibly 'neutral,' and use them to reach discriminatory outcomes without obviously violating anti-discrimination laws. For example, an algorithm might consider your social media friend network, the language you use, the websites you visit: all highly correlated with race and class, but none of them explicitly 'race.' The result is a system that appears objective and data-driven but actually reinforces and automates racism and sexism.
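To make the proxy mechanism concrete, here is a small Python sketch over synthetic data in which historical lending decisions encode discrimination and residential segregation ties an area code to race; every number and label is invented purely for illustration:

```python
# Sketch: dropping a protected attribute does not remove its signal when a
# correlated proxy remains. All data here is synthetic and illustrative.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    race = random.choice(["group A", "group B"])
    # Residential segregation: each group lives mostly in one area.
    if race == "group B":
        area = "AREA-1" if random.random() < 0.9 else "AREA-2"
    else:
        area = "AREA-2" if random.random() < 0.9 else "AREA-1"
    # Historical decisions encode discrimination against group B.
    approved = random.random() < (0.8 if race == "group A" else 0.4)
    rows.append((area, approved))

# Even with race removed from the features, the area column carries it:
for a in ("AREA-1", "AREA-2"):
    subset = [approved for area, approved in rows if area == a]
    print(f"{a}: historical approval rate {sum(subset) / len(subset):.0%}")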
In healthcare, Broussard documents the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Many medical AI systems were developed on data primarily containing white male patients, because historically clinical trials mainly recruited white male participants. The result is algorithms that diagnose diseases well in white men but less accurately in women, people of color, or disabled people. For example, heart disease often presents differently in women than men, but if the algorithm is primarily trained on male data, it might miss heart disease symptoms in women. Skin cancer detection algorithms trained on lighter skin might fail to recognize cancer signs on darker skin.
Worse, this creates a self-reinforcing cycle. If algorithms perform poorly at diagnosing certain groups, those groups may lose confidence in the technology and participate less in future research or stop using these services. That in turn means less data is collected about them, making the algorithms harder to improve. Meanwhile, if algorithms misdiagnose certain groups, doctors might begin to doubt the symptoms reported by patients from those groups, assuming that 'the technology says it's fine, so the problem must be with the patient.' This perpetuates patterns of racism and sexism that already exist in medical systems, where the pain and symptoms of women and people of color have long been minimized or dismissed.
Broussard also explores automated hiring systems that use algorithms to screen resumes and evaluate job candidates. Amazon once developed an AI hiring tool but had to abandon it because it systematically discriminated against female applicants. Why? Because it was trained on the company's past decade of hiring data, and during that time the company primarily hired men (especially in technical positions). The algorithm learned what a 'good candidate' looked like based on past hiring decisions; because past decisions were sexist, the algorithm became sexist too. It penalized resumes containing the word 'women's' (e.g., 'women's chess club captain') and downgraded candidates from women's colleges.
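The general mechanism, a screener absorbing bias from historical labels, can be shown with a toy scorer that rates resumes by the past 'hire rate' of their words. This is an invented illustration of the pattern, not a description of Amazon's actual system:

```python
# Toy illustration: a resume scorer that learns from biased historical labels.
# The data and scoring rule are invented to show the mechanism only.
from collections import defaultdict

# (resume tokens, hired?) pairs reflecting a past skewed toward hiring men.
history = [
    (["chess", "club", "captain"], True),
    (["chess", "club", "captain"], True),
    (["womens", "chess", "club", "captain"], False),
    (["womens", "debate", "team"], False),
    (["debate", "team"], True),
]

hired, seen = defaultdict(int), defaultdict(int)
for tokens, was_hired in history:
    for t in set(tokens):
        seen[t] += 1
        hired[t] += was_hired

def score(tokens):
    """Average historical hire rate of a resume's known tokens."""
    rates = [hired[t] / seen[t] for t in tokens if t in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Identical resumes except for one gendered token get different scores.
print(score(["chess", "club", "captain"]))            # higher
print(score(["womens", "chess", "club", "captain"]))  # lower
```

Two resumes that differ only by a gendered token receive different scores, because the labels the scorer learned from were themselves skewed.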
This example reveals a key point: algorithms cannot fix biased data. If you train an algorithm on data reflecting sexist hiring practices, you get a sexist algorithm. Simply adding more data or 'debiasing' the algorithm isn't enough if the underlying problem, the company's sexist culture and practices, isn't addressed. Technology cannot solve social problems; it can only reflect and often amplify them.
Broussard pays particular attention to intersectionality: how race, gender, class, ability, and other identity axes interact to create unique forms of oppression and marginalization. She notes that when we talk about algorithmic bias, we can't simply consider 'racial bias' or 'gender bias' as if these are separate issues. Black women don't just experience racism plus sexism; they experience a specific form of discrimination that's the product of both, irreducible to either. Similarly, disabled women of color, queer trans people, elderly immigrants: all sit at intersections of multiple marginalizations, and algorithmic systems often fail most at these intersections.
This is partly because intersectional subjects are usually the most underrepresented in training data. If your dataset already lacks women, it likely contains even fewer Black women, and fewer still Black disabled women. If you try to 'debias' your algorithm by ensuring gender balance but only consider white women, you haven't addressed racial bias. If you try to address racial bias by including more Black participants but they're all male, you haven't addressed gender bias. True inclusivity and fairness require attention to intersectionality: recognizing that people have multiple, interacting identities that shape how they interact with technological systems and how these systems affect them.
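The data-sparsity point is easy to see in a quick count over a hypothetical dataset whose marginal gender split looks tolerable while an intersectional subgroup is nearly absent; the counts below are invented for illustration:

```python
# Sketch: marginal balance can hide intersectional gaps.
# Counts are invented to illustrate the point, not drawn from a real dataset.
from collections import Counter

records = (
    [("white", "woman")] * 450 + [("white", "man")] * 430 +
    [("Black", "man")] * 100 + [("Black", "woman")] * 20
)

print(Counter(g for _, g in records))  # gender alone looks roughly balanced (470 vs. 530)
print(Counter(r for r, _ in records))  # race alone already shows a gap (880 vs. 120)
print(Counter(records))                # Black women: only 20 of 1,000 records
```

A naive check of the marginal tallies would pass a 'balance' test while the intersectional gap goes unnoticed.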
Broussard also critiques 'technological solutionism,' the belief that every problem has a technological solution. In Silicon Valley and the tech industry, there's a tendency to frame social problems as technical problems solvable with better software, smarter algorithms, more data. Poverty? There's an app for that. Discrimination? Train an algorithm. Inequality? Optimize the system. But Broussard argues this approach fundamentally misunderstands the nature of social problems. Racism, sexism, and ableism aren't 'glitches' fixable with technology; they're systemic forms of oppression deeply rooted in historical, cultural, economic, and political structures.
Worse, technological solutionism often diverts attention from real solutions. If we focus all our energy on 'debiasing' algorithms, we might ignore the need to address the structural inequalities producing biased data. If we focus on making AI more 'fair,' we might avoid questioning whether we should use AI in certain high-stakes domains in the first place. If we try to technologically fix discrimination, we might avoid doing the necessary political and social transformation to achieve genuine justice.
So what is Broussard's proposed solution? Her argument isn't to make omnipresent tech more inclusive but to root out algorithms that target certain demographics as 'other' to begin with. This is a radical claim: not 'let's fix biased AI' but 'let's question whether we should deploy these AI systems in the first place.' She calls for stricter regulation and accountability of technology. Algorithms shouldn't be treated as 'black boxes' whose operations are opaque to scrutiny. There should be laws requiring transparency: companies should disclose how their algorithms work, what data they're trained on, how they make decisions. There should be impact assessments: before deploying algorithms, rigorous testing should identify potential biases and harms, especially to marginalized communities. There should be accountability mechanisms: when algorithms cause harm, there should be clear recourse, and companies and developers should be held responsible.
Broussard also calls for diversity in the tech industry: not just token representation but genuine power and decision-making authority. If AI systems are designed only by white men, they'll continue to reflect white male assumptions and priorities. We need more people from diverse racial, gender, ability, and class backgrounds involved in all stages of technology development, from research to design to deployment to evaluation. And we need these diverse voices not merely 'included' but given actual power to shape decisions. This means changing tech education to be more welcoming and supportive of students from underrepresented groups. It means changing hiring and promotion practices and addressing the discrimination and harassment that remain rampant in the tech industry. And it means valuing different types of expertise, recognizing that lived experience with inequality and marginalization is a crucial asset in designing equitable technology.
But Broussard also acknowledges that diversity alone isn't a panacea. It's possible to have a diverse team yet still produce biased technology if structural constraints and incentives remain unchanged. If priorities remain profit maximization, if metrics remain engagement and growth, if the culture remains 'move fast and break things,' then even with a more diverse team, the results may still be harmful. Real change requires not just changing who's in the room but changing the room's rules: changing power structures, decision-making processes, values and goals.
Broussard also emphasizes the importance of community engagement and autonomy. Communities most affected by technological systems should have a say in those systems' design and deployment. This means consulting with communities early in the tech development process, not just soliciting feedback after systems are already built. It means respecting communities' concerns and priorities even when they conflict with tech companies' business interests. And it means communities should have the right to refuse unwanted technology and to opt out of systems that might harm them.
One of the book's most powerful moments is Broussard's discussion of facial recognition use in law enforcement. Many civil rights organizations call for a moratorium on, or an outright ban of, law enforcement's use of facial recognition technology because of its low accuracy rates, racial bias, and potential for use in surveillance and the targeting of communities of color. Broussard supports these calls, arguing there is no way to make this technology 'good enough' or 'fair enough' to justify its use in this context. Even if we could improve facial recognition accuracy to 99%, there would still be errors, and those errors would disproportionately affect communities already unfairly targeted by police. And the technology's very existence changes power dynamics, creating possibilities for mass surveillance and control incompatible with a free society.
This example illustrates Broussard's broader argument: sometimes the solution isn't 'fixing' the technology but not deploying it. Not every problem needs a technological solution, and not every technological solution should be implemented even if it's technically feasible. We need more thoughtful ethical judgments about technology use, recognizing that some applications, no matter how 'improved' or 'debiased,' are fundamentally unacceptable because of the harms they cause.
Broussard's writing style makes complex technical concepts accessible to non-specialist readers while maintaining rigor and depth. She uses clear examples and analogies to explain how algorithms work, why they might be biased, and how those biases manifest. She also weaves in personal narratives: her own experiences as a Black woman in tech, moments when she encountered bias and discrimination, how she learned to critically examine a field she once loved. These personal touches make the book not only intellectually but also emotionally engaging.
The book converses with works like Safiya Umoja Noble's 'Algorithms of Oppression,' Cathy O'Neil's 'Weapons of Math Destruction,' and Ruha Benjamin's 'Race After Technology,' all of which critically examine bias and discrimination in technology. But Broussard's unique contribution lies in her emphasis on intersectionality: considering not just race or gender but ability, class, and how these categories interact. She also focuses in particular on the 'glitch' framing itself, challenging our tendency to view technological harms as fixable errors rather than systemic design choices.
For Chinese readers, this book has profound relevance. Though many examples come from American contexts, the questions it raises are global. China is also rapidly deploying AI and algorithmic systems, from facial recognition to social credit scoring to automated decision-making. What assumptions and values do these systems embody? What are their impacts on different groups? Who designs them, and who is affected? Broussard's analytical framework, recognizing that technology isn't neutral, that algorithms reflect and amplify social inequalities, and that we need to critically examine technology deployment, applies equally to Chinese contexts.
At the same time, the book reminds us that technology justice is part of global justice. Many AI systems designed in the West are deployed globally, exporting American or European biases and assumptions to the rest of the world. Facial recognition systems deployed in African countries are trained mainly on white faces; lending algorithms used in the Global South are built on Global North economic assumptions. These forms of technological colonialism are the global dimension of tech bias, and they require global cooperation to address.
'More Than a Glitch' is ultimately a book about power and justice. It demands we recognize that technology isn't just about efficiency or innovation but about who has power, who benefits, and who is harmed. It challenges us to move beyond 'fixing glitches' thinking to question the systems themselves: their purposes, their assumptions, their impacts. It calls us to see tech justice as inseparable from social justice, recognizing that in a world increasingly shaped by algorithms, the fight against algorithmic bias is part of the broader struggle against racism, sexism, and all forms of oppression.
Meredith Broussard, with her technical expertise as a data scientist and her lived experience as a Black woman, has written a book that is both rigorous and accessible, both critical and constructive. 'More Than a Glitch' is essential reading for anyone concerned with technology's future, with social justice, or simply with understanding the algorithmic world we live in. It is a challenge to the tech industry to do better; a call to policymakers for regulation and accountability; and a reminder to everyone that technology isn't destiny: we can and must shape it to serve justice and equity rather than perpetuate inequality. In an era when technology is often presented as an inevitable force of progress and liberation, Broussard reminds us that genuine progress requires us to critically examine technology, challenge its biases and harms, and fight for a world where technology truly serves everyone.