The rise of AI could be a great British story. But let’s do it the right way


SOURCE: THEGUARDIAN.COM
FEB 13, 2022

It’s easy to miss good news amid coverage of the pandemic, the rising cost of living and the, ahem, rest. However, the United Kingdom is getting something right.

On Thursday, the government announced that it is investing up to £23m to boost artificial intelligence (AI) skills by creating up to 2,000 scholarships across England. This will fund master's conversion courses for people with non-Stem (science, technology, engineering and mathematics) degrees.

“This will attract a less homogeneous group,” explains Tabitha Goldstaub, who chairs the government’s AI council and advises the Alan Turing Institute, “which means the UK AI ecosystem benefits from graduates with different backgrounds, perspectives and life experiences”.

This investment in widening education and opportunity is just one of several steps in the 10-year national AI strategy, which aims to make Britain a world leader in AI. We're not the only ones with this ambition; as the AI dashboard at the Organisation for Economic Co-operation and Development (OECD) shows, many other countries have their eye on the same prize.

The frontrunners in this race, the United States and China, have bigger populations and deeper pockets, while the European Union has an impressive record in setting global norms and rules for data protection. To have any hope of keeping up, at the very least the UK must find a way to punch above its weight.

The signs are promising. AI is already an unstoppable force in our economy. According to Tech Nation, there are more than 1,300 AI companies in the United Kingdom. Research commissioned by the government and published last month shows UK businesses spent around £63bn on AI technology and AI-related labour in 2020 alone. This figure is expected to reach more than £200bn by 2040, when it is predicted more than 1.3m UK businesses will be using AI.

Even so, to make the most of the opportunities that this offers – and to understand the risks – we will need to upgrade how we educate and train our workforce. This will be tricky because AI is surrounded by a lot of hype and mixed messages. Depending on who's talking, AI will be a “more profound change than fire or electricity” (Google CEO Sundar Pichai), it could “spell the end of the human race” (Professor Stephen Hawking) or help us “save the environment, cure disease and explore the universe” (Demis Hassabis, co-founder of London-based DeepMind).

Some AI researchers strike a more cautious tone, arguing that AI is just “statistics on steroids” (Dr Meredith Broussard) and “neither artificial nor intelligent” (Dr Kate Crawford). All agree that AI is transforming how we work, live, wage war and even understand what it means to be human, as Professor Stuart Russell explored in his BBC Reith Lectures in December.

As we aim to become a world leader in AI, the United Kingdom must choose between putting ethics at the core of our strategy and leaving it as an option – a bolt-on at best. This is not a choice between being ethical and unethical; rather, the hesitation reflects a fear that regulation risks stifling innovation, especially if other countries do not prioritise ethics in their approach to AI.

However, ethics is about more than laws and regulations, compliance and checklists. It’s about designing the world we want to live in. As Sir Tim Berners-Lee, who created the world wide web, explained in 2018: “As we’re designing the system, we’re designing society… Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society.”

Again, he was ahead of his time. A new role is emerging in our economy: technology ethicist. Its contours are still being shaped. Is it a technologist who works in ethics? An ethicist who works in technology? Can anyone call themselves a technology ethicist or is it an anointed position?

Rather than focus on what technology ethicists are, let's consider what they do. They might have trained in law, data science, design or philosophy, or as artists. They might be employed by universities (and not just in the philosophy and computer science departments) or work in thinktanks, NGOs, private companies or any part of government. They may infuse new meaning into existing roles, such as researcher, software developer and project manager. Or they might hold newly created roles, such as responsible AI lead, algorithmic reporter or AI ethicist.

They are working daily to ensure that government websites are accessible to all UK inhabitants or fighting to force the government to reveal the algorithm it is using to identify disabled people as potential benefit fraudsters, subjecting them to stressful checks and months of frustrating bureaucracy. They are doing open-source intelligence investigations into crime, terrorism and human rights abuses, or improving healthcare delivery, or protecting children online. They are working in virtual reality and augmented reality and building – and warning about – the metaverse.

Some of the leading technology ethicists in the world were either educated and trained in the UK or are living and working here now. This presents us with a unique opportunity to draw on their talents to ensure that ethics is embedded into our AI strategy, rather than treated as an elective or a bolt-on.

This is about more than redesigning our education curriculum or new ways of working. It’s about creating the future.

Stephanie Hare is a researcher and broadcaster. Her new book is Technology Is Not Neutral: A Short Guide to Technology Ethics