Ethical AI: How Black Women Are Shaping Industry Standards

Artificial intelligence, a rapidly growing technology, is becoming an indispensable tool for many. Alongside this growth, questions about accountability, transparency, and bias have emerged. As a result, AI ethics has become an important field, working to set guidelines for responsible AI development and use.
Diverse perspectives are essential to the development of fair and ethical AI. Unfortunately, women make up less than 22% of global AI talent and hold fewer than 15% of senior executive positions in the field. Without better representation, issues such as racial or gender bias can go unnoticed in AI. The Black women below are stepping up to fill the representation gap, using their expertise to advocate for ethical AI.
Mutale Nkonde
A researcher and policy advisor, Mutale Nkonde believes the issues surrounding ethical AI development are issues of humanity. She has worked to shape policy, advocating for the Algorithmic Accountability Act and serving on TikTok's U.S. Content Advisory Council. Mutale also contributed to one of Senator Schumer's AI Insight Panels and helped the Congressional Black Caucus develop its AI policy platform.
As part of her advocacy, Mutale founded AI for the People, a public interest organization for responsible AI. Its mission is to reduce algorithmic bias through advocacy for ethical policies and practices.

Rachel Gillum
Rachel Gillum, the VP of Ethical and Humane Use of Technology at Salesforce, works to ensure AI usage is safe. She has also brought her expertise to public policy as a commissioner in the U.S. Chamber of Commerce’s bipartisan AI Commission on Competition, Inclusion, and Innovation. The commission examined the real-world impact of AI, releasing a detailed report with recommendations for AI regulation.
Rachel’s background in national security and intelligence adds weight to her advocacy for the ethical development and governance of emerging technologies. She has stressed the importance of making sure AI does not perpetuate existing biases, emphasizing intentional design and data collection.
Timnit Gebru
An AI ethicist and founder of the Distributed AI Research Institute (DAIR), Timnit Gebru advocates for DEI in AI. She previously worked at Google as the co-lead of the Ethical AI team, working to ensure Google's AI products were not racially biased. After she co-wrote a paper raising concerns about the ethics of large language models, including the biases being built into them, Timnit was fired.
She founded DAIR shortly after and has continued to advocate for transparency and equity in AI. DAIR’s mission centers around the belief that AI can be beneficial if it is developed intentionally with input from diverse perspectives. DAIR’s research is community-based and focuses on lived experiences.

Dr. Joy Buolamwini
A prominent figure in AI ethics and the founder of the Algorithmic Justice League (AJL), Dr. Joy Buolamwini considers herself a poet of code. By blending art and research, she educates people on the potential social harms of unregulated AI. Her advocacy has earned her a place on several lists, including Time’s “100 Most Influential People in AI”.
Joy continues to use her voice to highlight the importance of ethical AI development. Her widely viewed TED Talk on algorithmic bias gave audiences a clear explanation of the dangers of bias in AI. She is a best-selling author, and her documentary, Coded Bias, received several nominations and awards.
Tiffany Martin Deng
Chief of Staff for Responsible AI at Google, Tiffany Martin Deng brings a background in public service to her work on ethical AI. Her experience includes working as a U.S. Army intelligence officer, a Pentagon and State Department consultant, and a privacy and algorithmic fairness specialist at Meta.
Tiffany sees her work in developing AI responsibly as a continuation of her public service. At Google, she is responsible for making sure the company’s research and products align with its AI principles. This includes developing safeguards, protecting user privacy, and conducting adversarial testing.