The time has come: humanists must define the values that will underpin our AI future

14 December 2022

As we look to the future of artificial intelligence (AI) in the UK, there is a sense of both excitement and uncertainty. With its potential to revolutionise industry and transform a wide range of skilled professions, the recent period of rapid advancement in AI technology raises a great number of ethical challenges and questions that policymakers and citizens alike will need to get to grips with.

Those in the field are already issuing urgent wake-up calls. Many suggest that the speed of social and industrial change driven by AI will be faster than most people – especially politicians – have even begun to appreciate. Their sober prediction is that this could be an ‘overnight’ technological revolution: one that begins to change our lives in weeks or months, rather than years or decades. And unlike the gradual adoption of Internet-based services in the 1990s and 2000s, which brought slow and steady changes to everyone’s lives, this one has the potential to feel less like a steady climb and more like a sudden whiplash – precisely because the public is not expecting it.

AIs are already at work in many workplaces, writing plausible, well-worded, uncannily accurate articles. Marketers, fundraisers, computer programmers, web designers, HR professionals, campaigners, researchers, contract lawyers, administrators and many more besides are already beginning to grasp what this might mean for their own jobs too – and the potential for efficiency improvements seems vast. But as with the earliest advances in the web in the 1990s, or smartphone technology in the 2000s, we are only scratching the surface of what these changes will ultimately mean for our way of life.

To bring the potential dangers into sharper relief: if not stewarded adequately, the changes brought about by AI could have much more in common with the sudden closure of mines in England in the 1980s, which scarred communities and left a legacy of intergenerational poverty, than with, say, the rapid but harmless decline of VHS in the late 1990s and early 2000s.

This stark potential means that if we want a positive transition to a bountiful new future, rather than a violent rupture in our lives, we as individuals need to take a proactive interest in making sure the benefits of AI are realised equitably, productively, and positively. We should be prepared to ask bold questions that previous generations never dared ask about the nature of work and the pursuit of true human happiness and fulfilment. AI could mean less work, but a richer lifestyle, for just about everyone. Or it could mean work itself becomes less secure for huge swathes of the workforce, while underlying economic realities remain unchanged. Or anything between those two poles!

Humanists need to take an active role in helping publicly articulate the values and principles that will shape that process of stewardship, both in the UK and internationally. These are, after all, technologies that will touch the lives of every human being on Earth for generations to come. As time goes on, AI will almost certainly open up new ethical dilemmas, and humanist thought will need to encompass these new realms and open up new discussions about sentience, intelligence, and rights.

AI has the potential to revolutionise many aspects of our lives, from healthcare and transportation to education and the workplace. But at present there are also legitimate concerns about the risks and challenges it poses, from the loss of jobs and the erosion of privacy to the potential for AI to be used for malicious purposes. For example, AIs can already generate perfectly believable scripts in an individual’s distinctive idiom and tone of voice, or true-to-life ‘film’ footage of an incriminating event that never actually took place. The technologies needed to adequately detect such fraud and defamation will remain generations behind unless governments choose to invest in them now.

At Humanists UK, we think that the key to realising the potential of AI will be to approach it with a spirit of both curiosity and caution. Not gung-ho, rip-roaring enthusiasm, but constant vigilance tempered by open-mindedness, a fair dose of optimism, and the will to make it work for the good of human beings.

One thing is certain: we cannot put the genie back in the bottle. This technology is already with us. The key question it poses for humanists is how we can fashion an AI-enhanced future that improves human lives rather than hindering human progress. Properly harnessed, AI could deliver game-changing innovations with the potential to improve our lives in many ways, even providing novel solutions to long-standing problems in academia, health, and public policy.

A process of introspection and discussion will need to take place within every community and every political tradition. It may be the first time in many decades that industrial strategy has taken on an ethical dimension of this size and scale, with the potential to rewrite the battle lines of political debates. The fact that these ethical frontiers are new and challenging will compound our difficulty in achieving political consensus. It all adds up to a democratic and moral imperative to get involved.

Because while the timeline for the more sweeping changes on the horizon remains uncertain, the fact that these changes are already happening is not. In January 2023, many workplaces in the UK will already be putting this technology to use, or reacting to the new status quo. Many teachers and lecturers, for example, will need to design new ‘AI-proof’ assignments if they are to ensure genuine learning remains possible in a world where a machine can freely and instantly proffer high-quality homework in any writing style you ask for. If we as a society are to define our direction of travel, that means thinking carefully about the ethical principles that should underpin AI, and making the rules, regulations, values, and human biases built into algorithms, code, and machine learning datasets both inclusive and transparent. Only then can we hope to realise AI’s full potential while mitigating its potential harms to humanist priorities like freedom, happiness, and equality.

Notes:

Humanists UK is the national charity working on behalf of non-religious people. Powered by 100,000 members and supporters, we advance free thinking and promote humanism to create a tolerant society where rational thinking and kindness prevail. We provide ceremonies, pastoral care, education, and support services benefitting over a million people every year, and our campaigns advance humanist thinking on ethical issues, human rights, and equal treatment for all.

This article was written by Liam Whitton, Humanists UK’s Director of Communications and Development. In 2023, Humanists UK will be publishing more articles and stories of this nature spotlighting humanist views on contemporary issues, challenges, and ideas.