These questions and more are set to be debated at this year’s Cambridge Festival (31 March – 10 April), during a series of events that explore the impacts of our ever-increasing dependence on technology.
The Cambridge Festival is run by the University of Cambridge and hosts over 350 events that tackle the issues affecting all of us, from the multiple crises in politics, health and climate change to global economics and human rights. Almost all the events are free.
The full programme launches on Monday 28 February, with bookings opening the same day.
Opportunities from new digital products and services can drive economic growth and tackle some of the biggest challenges we face – from climate change and COVID-19 to conflict and poverty. But how can we also curb the harms these technologies create? Undoubtedly, when it comes to online interactions, the balance between protecting freedom of expression and tackling harmful content is a complicated one, as evidenced by the Government’s Online Safety Bill. The current approach barely scratches the surface, and strategies still largely rely on reporting and deletion. However, this method has been criticised by those who fear that one of our most basic human rights, freedom of speech, is at stake.
In Combatting harmful content online: the potential of Counterspeech (4 April, online), Dr Stefanie Ullmann from the 'Giving Voice to Digital Democracies' project at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), University of Cambridge, explores the ethical and technological considerations of Counterspeech. This is a tactic that counters hate speech or misinformation by presenting an alternative narrative rather than through censorship. The great power of Counterspeech – whether performed by individuals, organisations or a chatbot – lies in its potential to reach and positively influence bystanders and silent followers of a discussion. Some of the questions Dr Ullmann addresses include: What are the different forms of Counterspeech? When is Counterspeech effective? Who should or can perform Counterspeech? Can and should we deploy technology to assist or take over the role of Counterspeakers online?
Another current approach to stopping the spread of harmful content online is the use of AI-based techniques to detect hateful social media posts. However, these text-based approaches are limited, since online hate speech often involves both text and images. In Online harms: how AI can protect us (7 April, live stream), Dr Marcus Tomalin and Howard Mei from the 'Giving Voice to Digital Democracies' project at CRASSH, University of Cambridge, show how state-of-the-art automated systems can identify harmful content that combines text and images, helping to make online interactions safer for those who are more likely to suffer abuse or discrimination.
What about the tech giants themselves? The networked world is dominated by a small number of global technology companies, and Western democracies have finally woken up to the dangers of such a powerful and largely unregulated industry. To date, however, attempts to rein in tech power have yielded few results. This is in sharp contrast to the situation in China, where the huge tech industry seems to have been brought under government control. Which prompts the question: can tech giants only be successfully regulated by authoritarian regimes? This question is debated during a panel discussion, Can liberal democracies regulate tech giants? (6 April, in person, live stream). The panel features University of Cambridge academics David Runciman, Professor of Politics; Professor Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy; and Dr Jennifer Cobbe, an expert in legal responses to new technologies at the Department of Computer Science and Technology. It is chaired by Professor John Naughton of Wolfson College and the Minderoo Centre for Technology and Democracy.
The Royal College of Radiologists states that “AI potentially represents one of the most fundamental changes in medical care since the inception of the NHS”. During the panel discussion, Using AI to diagnose ovarian cancer – would you trust a machine to help find tumours? (5 April, in person), University of Cambridge researchers Dr Cathal McCague, a radiologist and PhD student; Dr Lorena Escudero Sánchez, a particle physicist and image analysis expert; and Thomas Buddenkotte, a PhD student at the Department of Applied Mathematics and Theoretical Physics (DAMTP), provide an overview of their research to develop an algorithm that can accurately detect ovarian cancer on CT scans. If successful, this could mark the first AI tool of its kind. AI is not currently used to detect ovarian cancer tumours in routine clinical practice, but tools such as these could become part of the clinical workflow in the coming years.
From AI diagnosing cancer to robots that help us feel happier: imagine having a wellbeing coach available 24/7 who can reliably improve your mood. You never feel judged, the coach is always on top form, and sessions are tailored to your personality to keep you mentally well. Researchers in the Affective Intelligence and Robotics Lab, Department of Computer Science and Technology at the University of Cambridge, hope to make this vision a reality – in robot form. During the event, Meet Pepper and Nao, the wellbeing robots (9 April, in person), PhD students Nikhil Churamani and Minja Axelsson, who work on the robots with Professor of Affective Intelligence & Robotics Hatice Gunes, are on hand to discuss their research and facilitate short, one-to-one sessions with Pepper, the robot wellbeing coach.
Professor Gunes has also been working with a childlike robot, ‘Nao’, to see whether robots could effectively be used to support children’s wellbeing. Nao is part of a collaborative research project between the Department of Computer Science and Technology and the Department of Psychiatry at the University of Cambridge. PhD student Nida Abbasi and postdoctoral researcher Micol Spitale introduce Nao and offer people the chance to interact with the robot in a short one-to-one session.
Though AI is ubiquitous in our homes and workplaces, there is widespread misunderstanding of what it really is. Professor Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, encourages us to reframe our view of AI during his talk, AI: Reclaiming Control (9 April, in person). He said, “If we devolve decision-making to intelligent machines, we depend on those machines to accommodate our needs. However, we often don’t understand how they function, mistakenly seeing machine intelligence as a reflection of our own, when in fact it operates in a fundamentally different way. We need to better understand intelligent machines, and the influence their decisions have on our lives; otherwise we will lose control over our destiny.”
For the full programme and bookings, please see the Festival website: www.festival.cam.ac.uk
Keep up to date with the Festival on social media: Instagram @Camunifestivals | Facebook: @CambridgeFestival | Twitter: @Cambridge_Fest
The Festival sponsors and partners are AstraZeneca and RAND Europe. The Festival media partners are BBC Radio Cambridgeshire and Cambridge Independent.