People Powered AI – Challenges and Opportunities in Responsible and Trustworthy AI Development

The operationalisation of ethical principles, current and emerging legislation, and the understanding and mitigation of potential consequences to individuals and society are key challenges in the design, development and deployment of Artificial Intelligence (AI) driven systems. Organisations developing AI solutions as a service or innovating novel applications will need to openly address ethical principles such as bias, fairness, explainability, transparency, data privacy, accountability, and safety through AI Governance. In this talk I will first give a brief overview of ethical issues and current and emerging legislation related to AI systems and their impact on people and society. Responsible innovation and ethical tech are essential to build public trust and meet legal obligations, yet how can businesses and researchers in academia harness People Power through participatory AI (co-design, co-production and public engagement)? I will give some examples of research projects that bridge the gap between businesses, academia, and people, and show how the use of ethical toolkits such as consequence scanning and harms modelling can help build public confidence and enhance how academics undertake research.

Keeley Crockett SMIEEE SFHEA is a Professor in Computational Intelligence at Manchester Metropolitan University and Chair of the IEEE Technical Committee SHIELD (Ethical, Legal, Social, Environmental and Human Dimensions of AI/CI). She has over 27 years' experience of research and development in ethical and responsible AI (working with SMEs and as an advocate for citizen voice) and in computational intelligence algorithms and applications, including adaptive psychological profiling, fuzzy systems, semantic similarity, and dialogue systems. Keeley has led work on place-based practical Artificial Intelligence, facilitating a parliamentary inquiry with Policy Connect and the All-Party Parliamentary Group on Data Analytics (APGDA), leading to the inquiry report "Our Place Our Data: Involving Local People in Data and AI-Based Recovery". She secured Strength in Places policy funding for engagement work with Greater Manchester businesses on "SME Readiness for Adoption of Ethical Approaches to AI Development and Deployment", and has contributed to the recent APGDA AI and Ethics report. She is currently the Principal Investigator (PI) on the EPSRC grant "PEAs in Pods: Co-production of community based public engagement for data and AI research", Co-I on The Alan Turing Institute project "People-powered AI: responsible research and innovation through community ideation and involvement", PI on an Innovate UK Knowledge Transfer Partnership with My First Five Years, and Co-I on an Innovate UK Knowledge Transfer Partnership with COUCH. She is Co-Lead of the AI Team within the Centre for Digital Innovation. Other projects include establishing The People's Panel for AI, funded by The Alan Turing Institute, with a second grant funded by Manchester City Council. She is a member of the IEEE Computational Intelligence Society AdCom (2023-25), Co-Chair of IEEE Women in Engineering Educational Outreach, part of an IEEE working group on ethics, and a UK STEM Ambassador.