The report was produced by the University of Bristol's Research Institute for Sociotechnical Cyber Security (RISCS) and arts charity Cheltenham Festivals, and emerged from the ChelTechne summit, hosted in June as part of Cheltenham Science Festival.
The symposium brought together senior academics, representatives of government and the intelligence services, science and technology professionals, startups, creative professionals and industry leaders to consider how society should imagine, discuss and take forward the narratives around artificial intelligence.
“Narrow Understandings”
Its findings focus on the critical importance of popular and media narratives around AI, and how these can influence - and distort - the way not just the public but also professionals understand, represent and develop AI technologies. Correcting this, the report suggests, will require scientists and engineers to join forces with storytellers and philosophers to produce better stories about technology.
Across a range of axes - including applying human qualities to AI, assigning it super-human levels of power and agency, and assuming either dystopian or utopian futures created by the technology - these narratives impact on society's capacity to plan for and manage AI's risks and opportunities.
"AI narratives and debates can be driven by unhelpful binaries and quite narrow understandings of what AI really is," explains Professor Genevieve Liveley, Turing Fellow at the University of Bristol and co-author of the report alongside Reid Derby, Head of Entrepreneurship for the Office of the Chief Scientific Adviser, and Dr Marieke Navin, Head of Programming at Cheltenham Science Festival.
"We need instead to grapple with AI's realities, by developing broader, better and more specific narratives around this new technology. For example, we urgently need to develop global leadership and collaboration on AI ethics to achieve greater pluralism and diversity."
Misplaced Fears?
The report is published to coincide with Cheltenham Literature Festival, where an event on Wednesday 11 October will explore just these questions. "What If AI Doesn't Change the World" will feature The Times technology business editor Katie Prescott, Cambridge Professor of Politics David Runciman, Oxford AI systems expert Michael Wooldridge and AI ethicist Kanta Dihal, examining the promise and peril of AI and asking whether our fears for the future are in fact misplaced.
"As a charity, Cheltenham Festivals exists to spark curiosity and bring communities together in productive debate to inspire positive change," says Ali Mawle, co-CEO of the organisation which produces Cheltenham Science and Literature Festivals each year. "With its thriving cyber and tech sectors and vibrant cultural scene, Cheltenham is an ideal place to reimagine the conversation around emerging technologies like AI. Bringing people together across sectors and industries to ask urgent questions is what Festivals like ours are for."
“Challenges to Public Trust”
The ChelTechne report offers several signposts for the future, asking what regulatory framework is needed to help govern AI developers and companies, how to promote AI literacy among the general public, and which individuals and organisations are best placed to help achieve a high-quality response to its challenges.
"The report finds that biases are inherent in AI systems, but also in the narratives we use to talk about them," comments Dr Navin. "These narrative biases present challenges to public trust in AI and to the UK’s ambition to become a global science super-power. This report - and all our work at ChelTechne this year and in the future - is intended to help face those challenges, and enhance our understandings of this crucial - and sometimes confounding! - technology."
For more unmissable events, read our ultimate annual Cheltenham festivals guide.