

The projects I am involved with bring ideas from academia - mainly at the intersections of feminist theory, critical race theory, and technology - to industry partners and wider audiences.


AI Ethics: Large-Scale Industry Collaboration

I have spent the last two years working on a cutting-edge AI Ethics project, a collaboration between the University of Cambridge and a technology multinational. The study is based on interviews with over sixty employees and is the largest of its kind to date. The project provides both its industry partner and the wider AI sector with practical tools for creating more equitable AI, informed by intersectional feminist knowledge. It aims to reboot ethics work in AI by providing new data for policy interventions that can bridge the divide between the people who make and use AI in industry and government and policy stakeholders. Published work will be available here soon.


Nothing in a face ©Suture Blue

The Good Robot Podcast

I am the Co-Founder and Co-Host of The Good Robot podcast, which asks: what is good technology? Is 'good' technology even possible? And what is feminism contributing? My co-host Dr Kerry Mackereth and I bring together the people who make and study technology with the rest of us, who use these technologies and experience their effects. We offer the public a variety of conversations with both the people at the cutting edge of technological innovation and the most persuasive critics of these new technologies.

Our expansive and inclusive definition of feminist work includes a wide range of projects that challenge unjust technological practices, from Buddhist approaches to AI through to youth activism against algorithmic oppression. Guests include bestselling writers and eminent philosophers - the Buddhist monk The Venerable Tenzin Priyadarshi, Rosi Braidotti, Jack Halberstam, N. Katherine Hayles, and Anne Anlin Cheng - as well as feminist app developer Priya Goswami, disability design expert Cynthia Bennett, and Googler Blaise Aguera y Arcas. We provide a candid window into the minds of these thinkers as they explore how feminist ideas are producing better, fairer, and more equitable technologies.

Each has a very different take on what feminism is and how it relates to technology. For some, feminism is about women, for others, it’s about standing up for marginalised groups. Some think the benefits of big technology should be embraced; others say ‘resist!’ In this modern incarnation of the culture wars it is more important than ever to bring these opinions into respectful conversation with one another. 


The guests offer not only intergenerational opinions but perspectives from all corners of the globe - including Egypt, India, Nigeria, Germany, the USA, Canada, Uganda, Puerto Rico, Italy, Taiwan, and Portugal, as well as Indigenous Hawaiian perspectives.

Each interview comes with a reading list of fiction and non-fiction texts that have inspired our guests.

Helping AI Companies Respond to the EU AI Act

I am the Principal Investigator on a project funded by Ammagamma, an Italian AI service provider. The project's primary goal is to create a tool that helps AI companies create products that conform to the EU AI Act. It is oriented primarily towards Project Managers on AI projects and other roles that take a global view. Ammagamma will build the tool, with CFI helping them interpret the EU AI Act and supplying the conceptual framework for the tool. The project is committed to an in-depth response to regulation that goes beyond mere compliance by working towards a pro-justice and sustainable future with AI.

Who Makes AI? AI Creators in Film 

A relentless stream of movies, from Iron Man to Ex Machina, has helped entrench systemic gender inequality in the artificial intelligence industry by portraying AI researchers almost exclusively as men. I was interviewed by the BBC World Service and RTÉ Radio about our study and the accompanying report, which show that:

- Just 8% of all depictions of AI professionals from a century of popular film are women – and more than half of these are shown as subordinate to men. 

- This gender imbalance is even bigger than in the real-world AI industry, in which 20% of AI professionals are women.

- Not a single influential AI film in history was directed solely by a woman.


The study discusses a number of consequences of the underrepresentation of women in portrayals of AI scientists, including the negative influence on career choices, hiring practices, and the treatment of women in AI workplaces. All these factors lead to fewer women entering or staying in the AI field, which is both unjust in itself, and risks contributing to the development of discriminatory technology. 


AI and Policing

In the US, dystopian technology deployed by police has made it easier for law enforcement to shut down Black Lives Matter and antifascist protests, and to target members of specific groups, religions, and causes.


The same tech is now being used by the UK government as part of their efforts to crack down on protests. I have created a TikTok series on this issue. You can also catch it here on YouTube.  

The series is based on research with Dr Federica Frabetti that explores the racializing capabilities of these AI tools when they are used for protest detection. It is well known that many AI-powered systems exacerbate social inequalities by racializing certain groups and individuals. We focus on Geofeedia and Dataminr, two companies that claim to be able to 'predict' and 'recognize' the emergence of dangerous protests, to show how their tools produce the very danger they are supposed to observe. Drawing on Judith Butler and Karen Barad, we argue that understanding these tools as performative offers a more comprehensive way to expose and contest the harms wrought by EDAC than 'de-biasing' mechanisms.

Our paper, "The Performativity of AI-Powered Event Detection: How AI Creates a Racialized Protest and Why Looking for 'Bias' Is Not a Solution", is forthcoming in Science, Technology, & Human Values.


Recruitment AI 

Recruitment AI is on the rise. Candidates are increasingly asked to submit online video interviews, which are assessed by AI. I created a TikTok video series on these hiring tools for Carole Cadwalladr's data rights group, The Citizens. I explore whether these tools live up to their claim to 'strip' race and gender from candidates in the hiring process by 'just' looking at their personality. But is personality race- and gender-free? Can race and gender be 'stripped' from a candidate's profile? And can candidates be reduced to neutral data points?


I led a team of computer science students at the University of Cambridge in creating an interactive tool, available online, that allows participants to see for themselves how AI is used. You'll see that when you change an image's saturation and brightness, it also changes your personality score!
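A pure-Python toy sketch (not the actual Cambridge tool, and not any real vendor's scoring model) can illustrate the underlying point: if a "personality score" is computed from raw pixel statistics, then adjusting brightness shifts the score even though the person in the frame has not changed. All names and values below are hypothetical.

```python
# Toy illustration of why image preprocessing matters to AI scoring:
# a hypothetical "personality score" derived from pixel statistics
# shifts when brightness changes, though the candidate does not.

def adjust_brightness(pixels, factor):
    """Scale every channel by `factor`, clamping to the 0-255 range."""
    return [tuple(min(255, int(c * factor)) for c in p) for p in pixels]

def toy_personality_score(pixels):
    """Hypothetical stand-in: mean channel intensity mapped to [0, 1]."""
    total = sum(sum(p) for p in pixels)
    return total / (3 * 255 * len(pixels))

# Synthetic stand-in for a single video frame (a flat warm tone).
frame = [(120, 100, 90)] * 64

base = toy_personality_score(frame)
brighter = toy_personality_score(adjust_brightness(frame, 1.5))

# Same "face", different score, purely because of lighting.
print(round(base, 3), round(brighter, 3))  # → 0.405 0.608
```

The sketch makes no claim about how commercial tools compute their scores; it only shows that any model consuming raw pixels inherits sensitivity to lighting conditions that have nothing to do with personality.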

This work is based on my study with Dr Kerry Mackereth, "Does AI Debias Recruitment? Race, Gender, and AI's 'Eradication of Difference'". It was featured by the BBC, Forbes, The Telegraph, and international outlets.



GRACE Project

I previously worked on the EU Horizon 2020 ETN-ITN-Marie Curie project GRACE (Gender and Cultures of Equality in Europe), where I was part of a team that developed the feminist app 'Quotidian'. This interactive and multi-sensorial app provides users with intersectional feminist quotes from around the world - crowdsourced from volunteers - and allows participants to add their own favourite quotes. Quotidian's aims, principles, and functionalities were developed through bottom-up participatory processes.

