Tag Archives: artificial intelligence

Artificial intelligence – golden opportunity or massive threat?

You can hardly read the news, listen to the radio or scan your preferred social media without hearing about AI, or experiencing it in practice, whether you are aware of it or not.

On the one hand, it is seen as offering huge potential to transform business and other organisations, reducing costs and creating entirely new capabilities.  

On the downside, we hear of threats to democracy from a surge in fake videos and misinformation; the potential for mass job losses as AI systems replace employees to cut costs; and, at the extreme, dire tales of AI systems taking over humanity altogether.

One important concern is the concentration of AI development in too few powerful hands and the struggle of governments and international bodies to regulate them.  

To discuss these issues, Green Book Pod is back with another episode in our series of discussions on key issues for the Liberal Democrats, now available on Lib Dem Podcast and on YouTube.

In this podcast we try to provide a balanced view and give a sense of what you need to know, and perhaps what you do and do not need to be concerned about. That includes what to look for from both governments and business, and how to balance the need for regulation against desirable innovation.



Artificial Intelligence means we need a Universal Basic Income

There continues to be severe pressure on UK public services such as health, criminal justice, and social care. Reform of these services is badly needed, to improve outcomes for service users, patients, and victims, as well as to provide value to the public purse.

However, the UK’s poor economic outlook makes reform challenging. The UK’s productivity is lower than that of France and Germany, the number of long-term sick has risen by 500,000 since Covid, and the annual cost of servicing Government debt is £83 billion in interest payments alone.

Artificial Intelligence

Yet there is a potential solution – Artificial Intelligence (AI) – which consultancy PwC says could grow the UK economy by 10% (£232 billion) by 2030. The predicted gains are from boosting innovation and increasing productivity, which can then make public services more effective.

Examples of how AI can reform public services include smart transportation, fraud detection, energy management, remote monitoring and more.

For example, AI could be used in healthcare to anticipate when someone may need preventative support, while internet-of-things devices could be used to help support the elderly. Predictive analytics could be used to identify businesses best-placed to receive grants and loans.

The introduction of AI is already happening, and the Tony Blair Institute for Global Change has proposed a range of recommendations to improve the UK’s readiness and capabilities in this area.

The potential knock-on effects of AI – such as on the protection of personal data and the jobs market – do need careful oversight and solutions.


AI is being used by students to produce essays and projects

A story has been appearing in the regional press about a cross-bench peer, John Pakington (Baron Hampton), who is, unusually, also a working teacher. He is concerned that students are using Artificial Intelligence systems to produce essays, technical designs and even works of art, and then passing them off as their own. He says:

There is a lot of anecdotal evidence, at the moment, that suggests that students are using AI for everything from essays and poetry to university applications and, rather more surprisingly, in the visual arts subjects. Just before Christmas, one of my product design A-level students came up to me and showed me some designs he’d done.

He’d taken a cardboard model, photographed it, put it into a free piece of software, put in three different parameters and had received, within minutes, 20 high-resolution designs, all original, that were degree level – they weren’t A-level, they were degree level. At the moment, it’s about plagiarism and it’s about fighting the software – I would like to ask when the Government is planning to meet with education professionals and the exam boards to actually work out (how) to design a new curriculum that embraces the new opportunity rather than fighting it.

Tim Clement-Jones is our Digital spokesperson in the House of Lords and he agreed with his fellow peer:

This question clearly concerns a very powerful new generative probabilistic type of artificial intelligence that we ought to encourage in terms of creativity, but not in terms of cheating or deception.

Some 30 years ago I was studying AI as part of my Master’s degree. Many of the same tropes were circulating then as now: “AI will make people lazy”, “Many jobs will be lost to machines” – similar sentiments have been expressed whenever there is a substantial shift in technology, from Jacquard looms to automated car production. But this time there is the added fear that AI will “take over” and we will become the redundant playthings of super machines. In practice, many of the techniques I was looking at then are now embedded in our technologies; they improve productivity and are hugely beneficial to society. They support and amplify our activities rather than replace them, although, as this evidence suggests, they can also present new challenges.

Andy Boddington had some fun with the latest AI chatbot, ChatGPT, and generated a passable short essay and some rather dubious poetry. By “passable” I mean that it is almost impossible to tell that it was generated by software rather than by a real person. It is also possible that ChatGPT could pass the Turing Test and win the Loebner Prize.

Of course, the issue of plagiarism has dogged educational assessment for many years. Academics routinely use plagiarism detection systems for essays, and I have a couple of examples from my own professional experience.


I asked the chatbot about UK politics and which side to butter my toast

ChatGPT is becoming a favourite internet game. It has serious possibilities for learning, writing, and cheating.

The AI writer generates errors, mostly because the current version’s knowledge of the world stops in 2021. For example, it thinks that Boris Johnson is still prime minister – or has it been hacked by the BJ camp?

Development of artificial intelligence has been underway for decades. From primitive beginnings, it has been growing in power and in “humanness”. Contact your bank or your council and, in many cases, you’ll be talking to AI by voice or online. But no one thinks these systems have intelligence. Mutter something unexpected to your bank’s bot, like “which side should I butter my toast?”, and you can cut through to a real human operator. At least I think it is a real human operator.

ChatGPT, released to the public a few weeks ago, is remarkable, and some commentators think it passes the Turing Test, which a computer passes when its responses cannot be distinguished from those a human would have given. However, ChatGPT itself is dismissive of the idea:

“It is difficult to say whether Chat GPT, or any other language model, would pass the Turing Test.”

AI is potentially a powerful tool for politics. Could we replace phone banking with AI bots calling? Could we get AI to write campaign literature?

I asked ChatGPT: “Tell me about party politics in the UK in 750 words”. The results are impressive. It would pass as a student essay, despite a couple of errors. I also asked the bot to write a poem about the Liberal Democrats. It is remarkably good, if close to doggerel.

By the way, ChatGPT tells me that which side you butter your toast on is a matter of personal preference, including whether you butter both sides. Does anyone do that?


Should the mass proliferation of AI-driven war drones and robots be left uncontrolled?


The Chinese government, and I hope our MoD too, will have noticed Nicholas Chaillan’s important article, “To catch up with China, the Pentagon needs a new #AI strategy”, published in the Financial Times on Tuesday 23 November.

As indicated, Mr Chaillan was formerly the first chief software officer of the US Air Force and Space Force. He is now the chief technology officer at the cybersecurity firm Prevent Breach. The article points out the shocking “kindergarten-level” state of US military and Pentagon cybersecurity, and of the US’s critical national infrastructure, and argues that the next war will be software-defined: “It won’t be won with a $1.7tn programme of fifth generation F35 fighter jets or $12bn aircraft carriers. China can take down our power grid without firing a single shot”.


Jo Swinson debates ethics and artificial intelligence – and suggests the Lovelace Oath

This week, Jo Swinson held a Westminster Hall debate on ethics and artificial intelligence. While recognising the huge advantages of AI, she argued that there are ethical challenges we need to address. Jo looked at this from a very liberal perspective, as you would imagine. Here are some of the highlights of her speech. You can read the whole debate here.

I would like to start with the story of Tay. Tay was an artificial intelligence Twitter chatbot developed by Microsoft in 2016. She was designed to mimic the language of young Twitter users and to engage and entertain millennials through casual and playful conversation.

“The more you chat with Tay, the smarter she gets,” the company boasted. In reality, Tay was soon corrupted by the Twitter community and began to unleash a torrent of sexist profanity. One user asked, “Do you support genocide?”, to which Tay gaily replied, “I do indeed.”

Another asked, “is Ricky Gervais an atheist?” The reply was, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.

Those are some of the tamer tweets. Less than 24 hours after her launch, Microsoft closed her account. Reading about it at the time, I found the story of Tay an amusing reminder of the hubris of tech companies. It also reveals something darker: it vividly demonstrates the potential for abuse and misuse of artificial intelligence technologies and the serious moral dilemmas that they present.

And then there was this:

How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?

And more:

…there is a female sex robot designed with a “frigid” setting, which is programmed to resist sexual advances. We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned. AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.

So what should be the key principles in our approach to these challenges?

I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI.

How do they work?
