skip to content
 

Can we trust AI to use our computers?

11:30am-12:30pm on Saturday 21 March
1:30pm-2:30pm on Saturday 21 March
3:00pm-4:00pm on Saturday 21 March

Times shown are in GMT (UTC +0) up to the 26th March. For events on or after 27th March times are in BST (UTC +1).

Computer Laboratory, William Gates Building 15 JJ Thomson Avenue, CB3 0FD

Current AI agents are powerful but passive; they read what we say and make suggestions to us. But the next AI agents being developed are more active. Called ‘Computer Use Agents’, they don’t just give us answers, they go off and do tasks for us on our computer. But there’s a problem: they can’t distinguish between the instructions we give them, and malicious instructions that might be embedded in the data they read online. So how, and why, should we trust them?

Currently, many of us use AI without even thinking about it – for example, when we search Google maps for a route, get a music recommendation from Spotify, use a spam filter, or set up face recognition on our smartphones. And we also use AI’s like Claude or CoPilot or Chat GTP that read what we say and respond with suggestions. These AI’s are powerful but also passive. However, the next AI coming down the track is active. Computer Use Agents, as they’re called, don’t just tell you the answer, they actually go off and do it for you on your computer. This is AI that literally sees your screen and clicks, types, and scrolls. But it comes with a problem: it can’t distinguish between what you ask it to do, and malicious instructions that might be embedded in the data it’s reading.

You can ask a Computer Use Agent to find you a one-month weather forecast for your local area or set up a favourites folder on your browser bar. While such AI agents can help us get boring stuff done – like filling in forms or sending routine emails – they have a big drawback: they can get side-tracked or even be deliberately thrown off course by hackers feeding it malicious instructions. “As an analogy,” says researcher Hanna Foerster, “imagine you’ve hired a personal assistant, but every letter they open might contain a hypnotic command that they will obey (whether it’s a command from you or a malicious one).”

This is a problem that Hanna is currently researching and developing solutions for. Come and hear her talk about why she doesn’t trust AI – and offer tips for anyone interested in using or developing AI that’s safer to use.

These talks are taking place as part of the Department of Computer Science and Technology Open Day.

Advance booking is not required. Please just come along on the day.

Booking required:
RECOMMENDED

Additional Information

Booking required:
RECOMMENDED
Age: 12 – 18 years, Adults
Format: Talk
Timing: In person
Cost: Free
Event Capacity: 50
Theme: Society, Discovery
Accessibility: Accessible toilet, Step-free access, Lift

Sign up for email updates

Get all the Cambridge Festival news straight from us to your inbox. Sign up to our mailing list now.

Festival FAQs

Got a question? Check out our FAQs here.

You might also like...

Read more at: Chemistry in action in the Yusuf Hamied Department of Chemistry

Chemistry in action in the Yusuf Hamied Department of Chemistry

10:00am-4:00pm on Saturday 21 March
Timing: 
In person
Format: 
Talk
Hands-on Activity
Open Day
Other
Age: 
12 – 18 years
Children under 12
Adults
Family Group

Visit the Department of Chemistry to try some hands-on chemistry experiments and enter the world of science. Once kitted out in a lab coat and...

Follow us on socials