Speaker Spotlights: Professor Neil Lawrence

Professor Neil Lawrence is Senior Turing AI Fellow and the DeepMind Professor of Machine Learning at the University of Cambridge. During his event, AI: RECLAIMING CONTROL (2:30pm-3:30pm on Saturday 9 April, in person), Neil explores what AI is and isn’t, and how we can maximise the benefits and minimise the harms of these technologies.

Cambridge Festival: AI is often portrayed as either the great saviour that will cure all our modern-day ills, from climate change and COVID-19 to conflict and poverty, or a harbinger of doom where the machines take over the world. Is the truth somewhere between those two extremes?

Neil Lawrence: In many respects, machines have already taken over the world. Since the industrial revolution, machines of different forms have increasingly influenced our lives. I think the main concern is less whether they're taking over than the manner of our interaction with them. Machines are less adaptable than humans, and that means the tendency has been for us to adapt to the machine. The term AI gives the mistaken impression that we've developed a technology that will adapt to us, in the way other humans, or even our pets, adapt to us. That isn't the case.

However, AI has tremendous potential to solve a great many problems; for that to happen, we need to 'do AI' better. There have been some significant failures and problems in AI solutions to date, often because the AI is being deployed within complex software systems. These systems are difficult to understand even when they are not using machine intelligence. As a society, we need a better understanding of how these machines are making decisions and how those decisions influence our lives; otherwise we are eroding our autonomy.

CF: Currently, what can AI technologies do and not do for us?

NL: AI is very good at handling large volumes of data, so it's good at making decisions derived from data. But it's bad at dealing with individual human beings. Machines don't share our experience of being human, so they can't empathise with us at the level that we empathise with each other. So computers make decisions about us without understanding the human condition, and inevitably that leads to problems such as unfairness.

CF: Where do you see this going in the future? Do you think half the hyped predictions about AI will come to fruition? Or could it far exceed where we think it will go?

NL: Separating the hope from the hype is one of the major challenges of artificial intelligence. We're in a period where start-up companies can still command large investments by over-promising what the technology will deliver. Inevitably, there will be disappointments, as we've already seen with driverless cars. But there will be advances, and those advances will lead to an improved understanding of how to deploy these systems in an explainable and maintainable way. The AI revolution is a continuation of the information revolution, which has brought us radical change over the last two decades. I think we can imagine similar change over the next two decades.

CF: What do you think AI will never be able to do?

NL: What makes humans unique, and different from AI, is our weaknesses, not our strengths. Those weaknesses are at the core of the human condition: the fact that we struggle to share our ideas, the fact that our lives are limited, the fact that we've co-evolved with our fellow humans over millions of years. All of this is at the core of our intelligence. An AI doesn't have the same limitations, so it can't participate in this social context in the same way.

CF: Are there risks and dangers we should be wary of right now?

NL: Yes. Just as car engines produce prodigious power from petrol, machine intelligences deliver decisions derived from data. In both cases, the scale of consumption enables a speed of operation that is far beyond the capabilities of their natural counterparts. Unfettered energy consumption has consequences in the form of climate change. I wonder whether unbridled data consumption also has consequences for us.

I also argue we should be more careful about the personal data we give away. With colleagues in the Department of Computer Science & Technology, we are researching what happens to all the personal data collected by the 'smart' devices we're wearing or using in our homes. This can be highly personal information, but it's often unclear where it goes and how it's used. We're looking at new forms of data intermediary, called Data Trusts, that can speak up for individuals and mediate the use of their data. Without these data intermediaries, we are vulnerable to how our data is being used. We give away an enormous amount of information about ourselves. We should be wary of doing so.

CF: Who do you think AI is really serving?

NL: There's a common misconception with AI that we are building the first wave of automation that will adapt to us, instead of expecting us to adapt to it. We think that because it's intelligent, AI will accommodate us. But in the history of automation, this has never happened. All automation so far has needed humans to do the accommodating (with factories requiring us to turn up on time, roads shifting us towards travelling by car, and so on). The way AI is currently employed is more of the same. It isn't really adapting to us. That's why I think we need to be much better at understanding how machine decision-making works, so that we can use it as a tool and have it serve our interests, and not the other way around.

CF: Should there be more regulation for the use of AI by big tech and other bodies?

NL: Everyone wants to talk about the big tech companies and what they are doing. But firms like Google and Facebook are highly incentivised to align their activities with what the public wants. And where there have been incidents – for example, the ways Russia used social media platforms to meddle in the last US election – they have tended to be failures of process at the companies rather than conscious attempts to break the law. The enemies we really need to be worried about are the ones working behind the scenes that we are much less conscious of. We need to think about regulation, but we should be building on existing law, and we should be wary of short-term fixes that don't address the systemic issues.