On November 2, the last ever Beatles song, Now and Then, was released and made history by reaching the number one slot 60 years after the band’s first chart topper. Like Free As A Bird and Real Love - the reunion recordings released some 15 years after John Lennon’s death in 1980 - the new track was built around a Lennon demo. This time, finishing the song in 2023, the producers were able to use AI to separate Lennon’s voice from the piano on the demo, enabling a higher-quality mix.
AI stands for Artificial Intelligence, a form of computer automation that mimics human behaviour and ‘learns’ by adapting to the input it receives. It is already capable of carrying out some complex tasks more quickly - and sometimes better - than people.
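For readers who want to see what that ‘learning’ looks like in practice, here is a toy sketch in Python using the scikit-learn library. The scenario and every number in it are invented purely for illustration: rather than being programmed with rules, the model infers a pattern from example data and then applies it to input it has never seen.

```python
# A toy illustration of machine "learning": the model is given no rules,
# only examples, and adapts itself to them (all data below is invented).
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours of sunshine, rainfall in mm] -> picnic weather? (1 = yes, 0 = no)
X = [[8, 0], [7, 1], [6, 2], [2, 12], [1, 20], [0, 30]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)  # the "learning" step: adapt to the input
print(model.predict([[5, 3]]))              # apply the learned pattern to unseen input -> [1]
```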
For instance, cutting-edge AI technologies are an exciting development in surgery and procedural medicine. Machine learning, natural language processing and the detailed analysis afforded by computer vision are being harnessed to help surgeons improve patient outcomes.
Advertising copy and journalism can now be produced by AI. One company’s AI Content Generator Tool boasts that it “creates captivating content for a wide range of professional needs.”
“From headlines to entire articles, you can generate just about any type of copy,” the Anyword website claims. Tempting, but could this mean that copywriters and reporters will soon be out of a job?
With tools like Bing’s Image Creator, artwork and photo compositions like the image above can be produced in just a few seconds. A student at Pembrokeshire College told us of her worry that, as AI technology progresses, her chosen career in design will no longer exist.
Her fears are echoed by Elon Musk, who recently told Prime Minister Rishi Sunak that AI has the potential to become the “most disruptive force in history.”
“There will come a point where no job is needed,” he added.
The tech billionaire is excited by the prospect. But it’s not hard to imagine this playing out like an Orwellian dystopia, where the masses are judged ‘useless’ and forced to live on meagre subsidies provided by the ruling elite - Big Tech companies.
Moreover, as Bing Image Creator, Anyword and the new Beatles single all show, AI is already very good at producing a convincing fake, and this extends to video footage, too.
A deepfake is a video manipulated so that one person appears to be someone else - the digital equivalent of Harry Potter’s Polyjuice Potion, with potentially defamatory results.
Two weeks ago, delegates gathered at Bletchley Park - one of the birthplaces of computer science - for the UK’s AI Safety Summit.
At the conference, the Prime Minister was shown a video of Keir Starmer appearing to insult his colleagues as an example of an AI deepfake.
The main focus of the summit was ‘frontier’ AI - future models potentially more powerful than those in use today - but a poll by the global foundation Luminate showed that the public are uneasy about the more immediate impacts of AI.
According to the poll, many feel the pace of AI innovation is already unsafe; only 17 per cent think otherwise. Respondents are equally worried about AI automation, deepfakes in next year’s elections, discrimination, and the prospect of AI making social media even more addictive.
A striking 71 per cent say they would prefer a slower and safer AI rollout.
The poll also shows that while cybersecurity is the public’s top concern when it comes to AI, just as many people worry about a combination of other immediate harms, such as election tampering through AI disinformation, losing their jobs to AI automation, and AI-induced discrimination.
The Association of Chartered Certified Accountants (ACCA) has also expressed concern about potential inaccuracy and misinformation, and points to a ‘magnification effect’, whereby a single AI error could be far more serious than a human one.
Lloyd Powell, head of ACCA Cymru/Wales, said: “Understandably among all sectors in Wales there is much confusion and uncertainty in the business community – especially SMEs – and they are looking for guidance on how to implement practical safeguards.”
On November 6, Elon Musk introduced Grok, an AI chatbot developed by his new company xAI, poised to challenge existing AI services. Named after an alien word from Robert A. Heinlein’s “Stranger in a Strange Land,” Grok is designed to respond with wit to queries that might stump other bots.
Grok harnesses data from X (formerly Twitter) to offer potentially more timely responses than its competitors. Despite this “advantage,” xAI admits that Grok is susceptible to the typical pitfalls of any large language model (LLM), including the generation of false information.
Another example of AI promoting false information comes from CNN’s business section on November 2, in an article headlined ‘How Microsoft is making a mess of the news after replacing staff with AI’. In it, Donie O’Sullivan and Allison Gordon - we’re taking it on trust that they’re real people - list examples of news stories containing false claims: that Joe Biden fell asleep during a moment of silence; that Covid-19 was orchestrated by the Democratic Party; and an obituary that described a sports player as “useless”.
Microsoft once employed more than 800 editors to help select and curate news stories for its home page. But by 2020, dozens of journalists and editorial workers had been laid off in the company’s push to rely on artificial intelligence instead.
AI may be clever - it may learn - it may seem almost human - but it has no common sense or scruples. Its pitfalls were laid bare recently when The Guardian published an article about Lilie James, a 21-year-old woman who was found dead with serious head injuries at a school in Sydney, Australia.
“James’ murder led to an outpouring of grief and prompted a national conversation in Australia about violence against women,” O’Sullivan and Gordon write. “But when MSN republished The Guardian’s story, it accompanied it with an AI-generated poll asking readers, ‘What do you think is the reason behind the woman’s death?’ and listed three options: murder, accident, or suicide.” Microsoft admitted that a poll should not have appeared alongside an article of this nature and said it is taking steps to prevent similar failures in the future.
The News Media Association reports that 97 per cent of editors agree that the risk to the public from AI-generated misinformation is greater than ever before. Editors also have grave concerns over the anti-competitive business practices of the tech platforms, with 90 per cent believing that Google and Meta pose an existential threat to journalism.
Katie French, regional group editor at Newsquest, said: “Our very presence is giving credibility to these platforms that otherwise would be filled with clickbait nonsense and unregulated information.”
In the words of Dr Ritumbra Manuvie, Executive Director of The London Story, “Social media companies with a shameful reputation of disregarding human rights and allowing divisive rhetoric to fester across the world are now leading the rollout of generative AI… We can’t allow Big Tech to further distort our information environments.”