Q: You work at the intersection of Psychology and Computer Science. Why is it important to consider the relationship between human behaviour and machines?
A: If we want technology to work for us, we have to study how we use it. Theories of how we think, feel and behave in general terms can be useful in informing the design process. But they only go so far. This is because using technology itself changes our behaviour. It’s essential to study these things empirically.
“We need to continually reflect on whether technology is working as it should and whether it is having a positive or negative impact on society.”
Q: What can the structure of the machines we build tell us about ourselves as individuals and as a society?
A: We think of machines as objective, but we forget that they are designed by humans. They map very closely to our thought patterns and behaviours. Inadvertently, Artificial Intelligence has proved remarkably good at reflecting back to us some of society’s deep-rooted inequalities.
An example of this is facial recognition technology, which works better for White people than for Black people. This is because of the assumptions made by the engineers who created it. It is important to learn from these situations. We need to apply those lessons to design better technology and, more importantly, a fairer society.
Q: You’ve been studying Psychology and Computer Science for two decades. Have there been any standout, pivotal moments for both your personal research and that of your peers during this time?
A: For many years, Artificial Intelligence has relied on large amounts of data and significant computational power. It is very energy intensive. This is not environmentally sustainable, so we really need to change our approach.
My thinking around this has been influenced by Cynthia Rudin at Duke University, and my close collaborator Alaa Alahmadi. Our work has shown that using human expertise and cognition to inform the design of AI can vastly reduce the amount of data and computation it requires to make decisions. The way in which this kind of AI works is also much more transparent and easier for humans to understand. This goes against the current rhetoric that says ‘the computer knows better than us – give it the data and it will be more efficient and effective’.
Of course, current AI can do some things more effectively than humans, but it is seriously limited in other ways.
“I think in the future hybrid approaches to AI, where humans and machines work together, will become much more widespread.”
Q: What do you think our relationship with technology will look like in 50 years’ time? For example, will the use of technology be more democratised?
A: I’d like to think technology use will become more democratised but, at the moment, it’s becoming more polarised. People who don’t have access to technology for social or economic reasons are becoming excluded from important aspects of society, like banking and education.
As we develop new technologies, we must make sure everyone’s voices are heard, and that people are able to consider how those technologies would affect their lives.
“I’d like to see much greater use of responsible research and innovation practices.”
These practices ought to evaluate the benefits new technologies might bring, and the harms they might pose, for everyone in society. They should be applied earlier in the design process, so they are not just an add-on but can truly direct development.