

EXPERT COMMENT: Conquering the AI Safety Summit: why abandon previous expeditions?

19th October 2023

By Professor Marion Oswald, Professor Gitanjali N Gill, Professor Michael Stockdale

We in the UK already know a lot about how technology should be regulated and controlled, and researchers have been considering the implications of AI for decades. Yet the UK AI Safety Summit treats the risks of AI as if they were newly discovered. Are we in danger of abandoning the knowledge of previous expeditions in mounting this latest attempt on the summit, only to find ourselves stranded half way up?

The history of the law shows us that if you don't clearly limit, licence or stop something happening, people will do it. We might think about factory machines without guardrails, driving while using a mobile phone, or firearms licensing requirements: activities that the law has limited, licensed or controlled where it has not banned them outright. Of course, people may choose to do things even if unlawful or controlled, but fewer will - law and regulation sends a clear message about acceptable behaviour. And where things are not regulated, bad things can happen.

Humans are creatures of habit, so it’s not unreasonable to think that the same will happen with AI. People will develop and use AI where they think it will benefit them, and damn the consequences (because initially there don't seem to be many, or even any). Sometimes this won't matter, even if the tech is of dubious validity. We might not mind too much if our Alexa gets a few queries wrong. (Though we would mind if it shared our personal data illegitimately).

But there are real risks in some spheres - existing now - that could impact people's lives and liberty. Imagine if a large language model incorporated into an auditing tool, but hidden from view, generates erroneous conclusions about illegal behaviour, and people act on those conclusions - we could easily have another Post Office Horizon-style scandal.

Much of the promotional talk about ‘foundation’, 'frontier' or 'general' AI implies that these tools have some sort of omniscience in terms of capabilities and purposes, but this is just not true. It is true that these models can be built into a variety of tools, and that this is not new, but fundamentally they are trained to analyse how narrative, image, audio and other data are constructed. They are good at building sentences and detecting patterns, but their output could be utter hogwash (or insulting, libellous, dangerous or otherwise inappropriate). If we do not take control, we even risk poisoning the veracity and reliability of digital and online information, and therefore our historical records. If we wanted to, we could do something about the hogwash-risk by developing a GPT-like capability that’s only trained on pre-selected trustworthy material, but maybe this isn’t boundary-pushing enough for some.

Furthermore, these models do not create their own role in our world! Humans ultimately control how these models are used, by whom and for what purposes. And that does not only mean use by some theoretical 'bad actor', but by our own governments, public sector bodies and commercial companies. It's easy, and lazy, just to talk about a vague future existential threat - but it attracts headlines on slow news days. If we want to take control, this inevitably means considering the aims, actions and policies of the organisations wanting to make use of AI. And that can raise some uncomfortable and potentially contentious issues. Is it in the 'public good', for example, to use ‘predictive’ AI to single out people for further investigation for suspected benefit fraud? What is missing from the AI system that we would need to know in order to make the decision fair? And who should be the arbiter of whether AI is being used for public good? Government ministers, parliament, multi-national corporations or a wider public? Which government, parliament, multi-national corporation or public? The UK’s, the EU’s, China’s?

The fundamental question for deployment of any AI should be whether it is effective in helping the human decision-maker uphold our longstanding principles of fair process, transparency, justice and proportionality. We can use our existing knowledge to answer this question, but we must involve breadth of experience across disciplines so that the identification of public good is done in a legitimate and collective way. There must be a genuine acknowledgement that certain actions informed by AI might well have serious detrimental consequences, and these should be prevented. Acceptance that AI is with us to stay and that it is likely to have huge benefits in appropriate contexts does not mean that controls should not be put in place. Bearing this in mind, the recent bonfire of the advisory boards, in favour of a tech-entrepreneur-centric assembly with some commercial irons in the fire, would appear to be a retrograde step.
