
Responsible AI and the Role of Participatory AI-Auditing

Responsible AI and the Role of Participatory AI-Auditing. Friday 21st March – online

AI auditing is the practice of testing whether an AI system works as intended. It involves reviewing how an AI system makes decisions, checking whether those decisions are fair, accurate and safe, and ensuring they follow ethical guidelines. In the same way that businesses are audited for their finances, AI systems are audited to make sure they are not biased, unreliable or acting in unexpected ways.

Auditing an AI system currently requires a level of expertise that most organisations do not have available to them. The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project aims to change that. During this session, members of the PHAWM team will share their thinking and reflections from their ongoing work to create co-designed tools and methodologies for auditing the harms and fairness of predictive and generative AI across a range of sectors.

The project has four use cases: Health, Media Content, Cultural Heritage, and Collaborative Content Generation. Speakers include experts in computer and social sciences.

By the end of this session, we hope you will better understand what AI auditing is and why it is important to involve non-AI experts, including members of the public, practitioners, and end-users, in the process.

Speakers:

  • Dr Mark Wong – Senior Lecturer and Joint Head of Social and Urban Policy, the University of Glasgow

  • Cari Hyde-Vaamonde – Lawyer and Researcher, King’s College London

  • Dr Patrizia Di Campli San Vito – Postdoctoral Research Associate, the University of Glasgow
