The AI & Equality Human Rights Toolbox: course open

By Caitlin Kraft-Buchman, CEO of Women at The Table

How We Started

The AI & Equality Human Rights Toolbox began in 2017 as a luncheon convened by Women at The Table and Gender & AI in Geneva. Also in attendance were the Women's Rights Division of the UN Office of the High Commissioner for Human Rights (OHCHR) and their colleagues, along with professors from the Swiss Federal Institute of Technology Lausanne (EPFL), the Swiss Federal Institute of Technology Zurich (ETHZ), and the University of Geneva.

After a prescient and chilling presentation by EPFL Professors Nisheeth Vishnoi and Elisa Celis (now at Yale University) on bias in data sets and its implications for machine learning, another professor exclaimed, "I am so glad I am a mathematician! I only deal with numbers! Numbers are pure! Numbers have no such bias!" There was a small silence before another professor replied, "But professor, who picks the 3, and who picks the y?"

It is a common belief in the engineering and data science community, as well as among much of the public and among policy makers (including some professors), that data is neutral and represents truth untouched by human bias. In fact, data and AI systems are both relative and contextual. The mechanisms and culture of AI development, which need data to thrive, can carry and transfer biases and inequalities into AI systems.

As Marshall McLuhan is famously quoted as saying, “We shape our tools, and thereafter our tools shape us”.

Shortly after that event we started looking at AI systems from an intersectional gender and human rights perspective and created the first AI and gender workshops at EPFL. Our hope was to share insights and an intersectional, human rights-based approach to AI with computer and data science students.

While hosting successful workshops between 2019 and 2022, we began to see the need for policy makers and other concerned people to be able to interact with the Toolbox, so that we could build a community around these issues.

How We Grew

After presenting the Toolbox at MozFest, and with the generous support of the Swiss FDFA Human Rights Division, the Oak Foundation, and Canada's International Development Research Centre, we were able to host workshops around the world, from Ghana to Chile to Thailand, with dozens more stops along the way.

At this point we decided to develop an online course so that more people could benefit from the workshop materials. We signed an MOU with the Sorbonne Center for AI (SCAI). Team members Emma Kallina (a PhD candidate in Responsible AI at Cambridge University) and Anna-Maria Georguieva (a Data Science major at UC Berkeley) joined the AI & Equality Human Rights Toolbox team alongside Sofia Kypraiou, Pilar Grant, and me, Women at The Table's CEO.

We worked hard at the beginning of 2024 to put the finishing touches on the course, and in February 2024 the five-module online course was launched! We also formed an online community to share, discuss, debate, and collaborate on the human rights-based approach to AI development.

Where We Are Now

That is where you come in! We are so excited to share this free five-module course with you. You can complete it at your own pace, or along with us month by month on the AI & Equality Community site. At the end you'll receive a certificate from the Sorbonne Center for AI.

The five modules are:

1. Human Rights & AI Systems
2. How Harms to Human Rights Enter the AI Lifecycle
3. Fairness Metrics: Technical Measures Are Not Neutral
4. A Human Rights-Based Approach to AI Development
5. Putting the Human Rights-Based Approach into Practice

We hope you will join us in discovery, conversation, and collaboration in an all-of-society discussion about the AI we want, the AI we deserve, and the tech future we want to create together. For more information, and to register for the course, please visit the AI & Equality Human Rights Toolbox website: https://aiequalitytoolbox.com
