Archive for April, 2024

David Higginson, President of the Society 2009-2011

Posted on: April 25th, 2024 by mlpEditor

We were very sad to hear of the recent death of David Higginson. He passed away peacefully on 7 March, in hospital, after being unwell for the past few months. We only heard the news the day after his funeral (which was on 15 April), but I have since spoken at length with his sister Margaret, who reported that it had been a very dignified service, celebrating his long and full life.

David joined the Lit & Phil in 1989 and was a very regular attender, always asking at least one question at talks! He had a successful career as a lawyer in Manchester and a wide range of interests. He served as President from 2009 to 2011.

Covid, and then increasing frailty, prevented him from attending much over the last few years, but he maintained a lively interest in Lit & Phil events. He will be much missed.


Dr Susan Hilton

22 April 2024

Humanising Trust in the Age of AI

Posted on: April 23rd, 2024 by mlpEditor

To human beings, trust is very personal, often domain-specific, and influenced by lived experience. Traditionally, trust has centred on human-to-human relationships, built on a person’s integrity, honesty, dependability and the belief that they will not cause harm. But what about Trustworthy Artificial Intelligence? How can we assess that? This topic will be discussed in Dr Emily Collins’ Manchester Lit & Phil talk on 2nd May 2024, framed around trustworthy and responsible robotics.

The development of global ethical Artificial Intelligence (AI) principles and guidelines, followed by the explosion of generative AI in the public domain in 2022, has led to a scramble to legislate for AI around core ethical principles. The EU AI Act – the first comprehensive AI legislation, built on a risk-based approach – was formally adopted in March 2024.

At the heart of the UK’s pro-innovation approach are five cross-sectoral principles: Safety, security and robustness; Appropriate transparency and explainability; Fairness; Accountability and governance; and Contestability and redress. It is currently down to individual regulators to interpret these principles – but what will this mean in practice for individuals within society, the wider public and marginalised communities, in terms of their rights?

Human trust is at the heart of responsible and ethical AI in society. In March 2024, the UK Government published its guidance on AI assurance, which emphasises the importance of trust, defining justified trust as “where a person or group trust the use of an AI system based on reliable evidence”. The guidance gives organisations a toolkit for measuring, evaluating and communicating AI assurance, supported by practical advice. Some progress in this area is certainly being made.

However, the person on the street may have little awareness of how AI is used in their everyday lives and how it affects them. For them, weighing the risks and benefits of the AI elements of a particular product or service before using it may be overwhelming, and could widen the digital divide in society.

So how do we ensure that people have confidence and trust in AI, and that it is accessible to everyone?

The Peoples Panel in Artificial Intelligence was first funded by The Alan Turing Institute in 2022 and has since been adopted by Manchester City Council as part of Doing Digital Together. The original panel was recruited from community volunteers in Salford and Stockport through a series of community AI roadshows, designed to reach and engage traditionally marginalised communities and to develop a common language and understanding around AI.

Community volunteers undertook two days of training, exploring ethical AI principles in practice and learning consequence-scanning techniques for examining how AI and data are used. They then scrutinised researchers and businesses in a series of live panels on new and emerging AI products. Volunteers’ confidence was shown to increase, and they became advocates for debating and discussing AI in their own communities.

A second project, PEAS in PODS, trained researchers across three universities as Public Engagement Ambassadors (PEAs), with a focus on public engagement and co-production. The PEAs are currently immersed in three co-produced, AI-related projects – at Back on Track (Manchester), Inspire (Stockport) and The Tatton (Ordsall) – each led by the community itself. One such project is co-developing a Peoples Charter for AI, focused on what assurances people want from the organisations that adopt AI.

There is hope for the future: people’s voices – especially those that are hardest to reach – are being heard.

And a bill on the regulation of artificial intelligence is currently making its way through the House of Lords. It is significant because it specifically mentions the role of meaningful public engagement, stating that “AI and its applications should … meet the needs of those from lower socio-economic groups, older people and disabled people”.

As humans are unique, how each of us builds trust in AI is also unique. But first, we need a shared language of understanding about AI for everyone.
