Tag Archives: artificial intelligence

A future written by Generative AI looks miserable

I’ll start by thanking Hugh Andrew for his excellent LDV post from the 23rd April – ‘A thief in the night’ – which I completely agree with. I’m old enough to remember the Napster file-sharing era, when ordinary people started downloading music over the internet for free. This mightily offended big business in the form of the music industry, which, pretending to care about the artists it profited from, declared this was stealing and successfully lobbied Governments to change the law and make it easier to prosecute file-sharers.

Fast-forward 20 years, and now other big companies are downloading creative works over the internet for free, often created by ordinary people who are aspiring or actual artists, writers or musicians. This is also stealing, but those big companies are once again lobbying Governments to change the law, weaken copyright in their favour and legitimise what they are already doing anyway. And Governments, forever in thrall to the lure of the ‘next big thing’, are listening to them.

Where does this leave creatives such as artists, musicians, writers and academics? An aspiring musician might now put their work on Spotify, which will typically pay the princely sum of $0.004 per stream. A new author self-publishing on Amazon might earn a couple of quid per Kindle download of their book. A talented or lucky few may create a buzz, go viral or build a following that allows them to make a living doing what they love. However, the vast majority will earn peanuts, but at least their work is out there to take pride in and get credit for, and those who enjoy it will know the creator’s name.

Or so we thought. Now their creative work could be swallowed by a machine and regurgitated without credit by anyone who can type the right prompt into an AI model.


“I said not one word of that”. When AI puts words in our mouths

Last December, the Conservative peer Charlotte Owen introduced the Non-Consensual Sexually Explicit Images and Videos (Offences) Bill, which has made its way through the House of Lords. This followed the 2023 Online Safety Act, which not only made it a criminal offence to share, or threaten to share, images or videos of someone in an intimate state, but also included digitally manipulated ones, known as deepfakes, appearing to show someone in such a state.

I am entirely on her side, but I would also like to see it cover content of a non-sexual nature, audio as well as video, which can cause similar humiliation and distress to those targeted, male as well as female. While AI cannot literally put words in our mouths, it can do so virtually or digitally, cloning our voices as well as our faces.

Back in 2023, Stephen Fry asked his audience to compare his voice, from a clip of a documentary about the Dutch resistance he narrated in English, with an AI-generated version of it, only to tell them that was the AI: ‘I said not one word of that.’ His agents, unaware such technology existed, went ballistic, but he knew there was more to come: ‘You ain’t seen nothing yet.’

The Mayor of London, Sadiq Khan, however, will have seen quite enough, after hearing deepfake audio early last year that sounded like him saying inflammatory things about Remembrance weekend, calling for pro-Palestinian marches and declaring that the Metropolitan Police did what he told them to do. It certainly sounded like him, with the London accent, complete with glottal stop; a part of his identity had been stolen and subverted, which understandably angered him.


Artificial intelligence – golden opportunity or massive threat?

You can hardly read the news, listen to the radio or scan your preferred social media without hearing about AI.  Or experiencing it in practice, whether you are aware of it or not.  

On the one hand, it is seen as offering huge potential to transform business and other organisations, reducing costs and creating entirely new capabilities.  

On the downside we hear of threats to democracy with a surge in fake videos and information; the potential for mass job losses as AI systems replace employees to reduce costs; at the extreme, dire tales of AI systems taking over humanity altogether.  

One important concern is the concentration of AI development in too few powerful hands and the struggle of governments and international bodies to regulate them.  

To discuss these issues, Green Book Pod is back with another episode in our series of discussions on key issues for the Liberal Democrats, now available on Lib Dem Podcast and on YouTube.

In this podcast we try to provide a balanced view and to give a sense of what you need to know, and perhaps what you do and do not need to be concerned about. That includes what to look for from both governments and business, and how to balance the need for regulation with desirable innovation.

Podcast Guests


Artificial Intelligence means we need a Universal Basic Income

There continues to be severe pressure on various UK public services, such as health, criminal justice, and social care. Reform of these services is badly needed, to improve outcomes for service users, patients, and victims, as well as providing value to the public purse.

However, the UK’s poor economic outlook makes reform challenging. The UK’s productivity is lower than that of France and Germany, the number of long-term sick has risen by 500,000 since Covid, and the annual cost of servicing Government debt is £83 billion in interest payments alone.

Artificial Intelligence

Yet there is a potential solution – Artificial Intelligence (AI) – which consultancy PwC says could grow the UK economy by 10% (£232 billion) by 2030. The predicted gains are from boosting innovation and increasing productivity, which can then make public services more effective.

Examples of how AI can reform public services include smart transportation, fraud detection, energy management, remote monitoring and more.

For example, AI could be used in healthcare to anticipate when someone may need preventative support, while internet-of-things devices could be used to help support the elderly. Predictive analytics could be used to identify businesses best-placed to receive grants and loans.

The introduction of AI is already happening, and the Tony Blair Institute for Global Change has proposed a range of recommendations to improve the UK’s readiness and abilities in this area.

The potential knock-on effects of AI – such as on the protection of personal data and the jobs market – do need careful oversight and solutions.


AI is being used by students to produce essays and projects

A story has been appearing in the regional press about a cross-bench peer, John Pakington (Baron Hampton), who is, unusually, also a working teacher. He is concerned that students are using Artificial Intelligence systems to produce essays, technical designs and even works of art and then passing them off as their own. He says:

There is a lot of anecdotal evidence, at the moment, that suggests that students are using AI for everything from essays and poetry to university applications and, rather more surprisingly, in the visual arts subjects. Just before Christmas, one of my product design A-level students came up to me and showed me some designs he’d done.

He’d taken a cardboard model, photographed it, put it into a free piece of software, put in three different parameters and had received, within minutes, 20 high-resolution designs, all original, that were degree level – they weren’t A-level, they were degree level. At the moment, it’s about plagiarism and it’s about fighting the software – I would like to ask when the Government is planning to meet with education professionals and the exam boards to actually work out (how) to design a new curriculum that embraces the new opportunity rather than fighting it.

Tim Clement-Jones is our Digital spokesperson in the House of Lords and he agreed with his fellow peer:

This question clearly concerns a very powerful new generative probabilistic type of artificial intelligence that we ought to encourage in terms of creativity, but not in terms of cheating or deception.

Some 30 years ago I was studying AI as part of my Masters degree. Many of the same tropes were circulating then as now: “AI will make people lazy”, “Many jobs will be lost to machines” – similar sentiments have been expressed whenever there is a substantial shift in technology, from Jacquard looms to automated car production. But this time there is the added fear that AI will “take over” and we will become the redundant playthings of super machines. In practice, many of the techniques that I was looking at then are now embedded in our technologies; they improve productivity and are hugely beneficial to society. They support and amplify our activities rather than replace them, although, as this evidence suggests, they can also present new challenges.

Andy Boddington had some fun with the latest AI chatbot, ChatGPT, and generated a passable short essay and some rather dubious poetry. When I say “passable” I mean that it is almost impossible to tell that it has been generated by software and not by a real person.  It is also possible that ChatGPT could pass the Turing Test and win the Loebner Prize.

Of course, the issue of plagiarism has dogged educational assessment for many years. Academics routinely use plagiarism detection systems for essays, and I have a couple of examples from my own professional experience.


I asked the chat bot about UK politics and which side to butter my toast

Chat GPT is becoming a favourite internet game. It has serious possibilities for learning, writing, and cheating.

The AI writer generates errors, mostly because the current version’s knowledge of the world stops in 2021. For example, it thinks that Boris Johnson is still prime minister – or has it been hacked by the BJ camp?

Development of artificial intelligence has been underway for decades. From primitive beginnings, it has been growing in power and in “humanness”. Contact your bank or your council and in many cases, you’ll be talking to AI by voice or online. But no one thinks these have intelligence. Mutter something unexpected to your bank’s bot like “which side should I butter my toast” and you can cut through to a real human operator. At least I think it is a real human operator.

Chat GPT, released to the public a few weeks ago, is remarkable and some commentators think it fulfils the Turing Test, which is passed when a computer’s responses cannot be distinguished from those that would have been made by a human. However, Chat GPT itself is dismissive of the idea:

“It is difficult to say whether Chat GPT, or any other language model, would pass the Turing Test.”

AI is potentially a powerful tool for politics. Could we replace phone banking with AI bots calling? Could we get AI to write campaign literature?

I asked Chat GPT: “Tell me about party politics in the UK in 750 words”. The results are impressive. It would pass as a student essay despite a couple of errors. I also asked the bot to write a poem about the Liberal Democrats. It is remarkably good if close to doggerel.

By the way, Chat GPT tells me which side you butter your toast is a matter of personal preference, including whether you butter it both sides. Does anyone do that?


Should the mass proliferation of AI-driven war drones and robots be left uncontrolled?


The Chinese government, and I hope our MoD too, will have noticed Nicholas Chaillan’s important article ‘To catch up with China, the Pentagon needs a new #AI strategy’, published in the Financial Times on Tuesday 23rd November.

As indicated, Mr Chaillan was formerly the first chief software officer at the US Air Force and Space Force. He is now the chief technology officer at cybersecurity firm Prevent Beach. The article points out the shocking “kindergarten-level” state of US military/Pentagon cybersecurity and of the US’s critical national infrastructure, and argues that the next war will be software-defined: “It won’t be won with a $1.7tn programme of fifth generation F35 fighter jets or $12bn aircraft carriers. China can take down our power grid without firing a single shot”.


Jo Swinson debates ethics and artificial intelligence – and suggests the Lovelace Oath

This week, Jo Swinson held a Westminster Hall debate on ethics and artificial intelligence. While recognising the huge advantages of AI, she argued that there are some ethical challenges we need to do something about. Jo looked at this from a very liberal perspective, as you would imagine. Here are some of the highlights of her speech. You can read the whole debate here.

I would like to start with the story of Tay. Tay was an artificial intelligence Twitter chatbot developed by Microsoft in 2016. She was designed to mimic the language of young Twitter users and to engage and entertain millennials through casual and playful conversation.

“The more you chat with Tay the smarter she gets,” the company boasted. In reality, Tay was soon corrupted by the Twitter community. Tay began to unleash a torrent of sexist profanity. One user asked, “Do you support genocide?”, to which Tay gaily replied, “I do indeed.”

Another asked, “is Ricky Gervais an atheist?”
The reply was, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.

Those are some of the tamer tweets. Less than 24 hours after her launch, Microsoft closed her account. Reading about it at the time, I found the story of Tay an amusing reminder of the hubris of tech companies. It also reveals something darker: it vividly demonstrates the potential for abuse and misuse of artificial intelligence technologies and the serious moral dilemmas that they present.

And then there was this:

How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?

And more:

…there is a female sex robot designed with a “frigid” setting, which is programmed to resist sexual advances. We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned. AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.

So what should be the key principles in our approach to these challenges?

I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI.

How do they work?
