Humanising Trust in the Age of AI

Professor Keeley Crockett SFHEA, SMIEE


To human beings, trust is very personal, often domain specific, and influenced by lived experience. Traditionally, trust has centred on human-to-human relationships, based upon a person's integrity, honesty, dependability and the belief that they will not cause harm. But what about Trustworthy Artificial Intelligence? How can we assess that? This topic will be discussed in Dr Emily Collins' Manchester Lit & Phil talk on 2nd May 2024, framed around trustworthy and responsible robotics.

The development of global ethical Artificial Intelligence (AI) principles and guidelines, followed by the explosion of generative AI into the public domain in 2022, has led to a scramble to legislate for AI around core ethical principles. The EU AI Act, the first comprehensive AI legislation, built on a risk-based approach, was formally adopted in March 2024.

At the heart of the UK's pro-innovation approach are five cross-sectoral principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. It is currently down to individual regulators to interpret these principles, but what will that mean in practice for the rights of individuals, the wider public and marginalised communities?

Human trust is at the heart of responsible and ethical AI in society. In March 2024, the UK Government published its guidance on AI Assurance, which emphasises the importance of trust, defining justified trust as arising "where a person or group trust the use of an AI system based on reliable evidence". The guidance gives organisations a toolkit for measuring, evaluating and communicating AI assurance, supported by practical guidance. Some progress in this area is certainly being made.

However, to the person on the street, who may have little awareness of how AI is used in their everyday life and how it affects them, understanding the risks and benefits of the AI elements of a product or service before using it may be overwhelming, and could widen the digital divide in society.

So how do we ensure that people have confidence and trust in AI, and that it is accessible to everyone?

The Peoples Panel in Artificial Intelligence was first funded as a project by The Alan Turing Institute in 2022, and has since been adopted by Manchester City Council as part of Doing Digital Together. The original panel was formed of community volunteers from Salford and Stockport, recruited through a series of community AI roadshows designed to reach and engage traditionally marginalised communities and to develop a common language and understanding around AI.

Community volunteers undertook two days of training, practically exploring ethical AI principles and learning consequence-scanning techniques to examine how AI and data were used. They then scrutinised researchers and businesses in a series of live panels on new and emerging AI products. Volunteers' confidence was shown to increase, and they became advocates for debating and discussing AI in their own communities.

A second project, PEAS in PODS, trained researchers across three universities as Public Engagement Ambassadors (PEAs) in public engagement and co-production. The PEAs are currently immersed in three co-produced AI-related projects at Back on Track (Manchester), Inspire (Stockport) and The Tatton (Ordsall), each led by the communities themselves. One such project is co-developing a Peoples Charter for AI, focused on what assurances people want from organisations that adopt AI.

There is hope for the future: people's voices, especially those that are hardest to reach, are being heard.

And a bill on the regulation of artificial intelligence is currently making its way through the House of Lords. It is significant because it specifically mentions the role of meaningful public engagement, stating that "AI and its applications should … meet the needs of those from lower socio-economic groups, older people and disabled people".

As humans are unique, how we build trust in AI is also unique. But first, we need a mutual language of understanding about AI for everyone.

Professor Keeley Crockett SFHEA, SMIEE

Keeley Crockett is a Professor in Computational Intelligence at Manchester Metropolitan University and Chair of the IEEE Technical Committee SHIELD (Ethical, Legal, Social, Environmental and Human Dimensions of AI/CI). She has over 27 years' experience of research and development in ethical and responsible AI, working with SMEs and advocating for the citizen's voice, and in computational intelligence algorithms and applications, including adaptive psychological profiling, fuzzy systems, semantic similarity, and dialogue systems.

Keeley has led work on place-based practical Artificial Intelligence, facilitating a parliamentary inquiry with Policy Connect and the All-Party Parliamentary Group on Data Analytics (APGDA) that led to the inquiry report "Our Place Our Data: Involving Local People in Data and AI-Based Recovery". She also obtained Strength in Places funding for policy engagement work with Greater Manchester businesses on "SME Readiness for Adoption of Ethical Approaches to AI Development and Deployment".

She is currently involved in a number of national and international AI projects, in which she plays key roles pushing for responsible research, public engagement, and ethical digital innovation. Keeley is passionate about people and is a UK STEM Ambassador.
