Tag Archives: artificial intelligence

AI is being used by students to produce essays and projects

A story has been appearing in the regional press about a cross-bench peer, John Pakington (Baron Hampton), who is, unusually, also a working teacher. He is concerned that students are using artificial intelligence systems to produce essays, technical designs and even works of art, and then passing them off as their own. He says:

There is a lot of anecdotal evidence, at the moment, that suggests that students are using AI for everything from essays and poetry to university applications and, rather more surprisingly, in the visual arts subjects. Just before Christmas, one of my product design A-level students came up to me and showed me some designs he’d done.

He’d taken a cardboard model, photographed it, put it into a free piece of software, put in three different parameters and had received, within minutes, 20 high-resolution designs, all original, that were degree level – they weren’t A-level, they were degree level. At the moment, it’s about plagiarism and it’s about fighting the software – I would like to ask when the Government is planning to meet with education professionals and the exam boards to actually work out (how) to design a new curriculum that embraces the new opportunity rather than fighting it.

Tim Clement-Jones is our Digital spokesperson in the House of Lords, and he agreed with his fellow peer:

This question clearly concerns a very powerful new generative probabilistic type of artificial intelligence that we ought to encourage in terms of creativity, but not in terms of cheating or deception.

Some 30 years ago I was studying AI as part of my Master’s degree. Many of the same tropes were circulating then as now: “AI will make people lazy”, “Many jobs will be lost to machines” – similar sentiments have been expressed whenever there is a substantial shift in technology, from Jacquard looms to automated car production. But this time there is the added fear that AI will “take over” and we will become the redundant playthings of super machines. In practice, many of the techniques I was looking at then are now embedded in our technologies; they improve productivity and are hugely beneficial to society. They support and amplify our activities rather than replace them, although, as this evidence suggests, they can also present new challenges.

Andy Boddington had some fun with the latest AI chatbot, ChatGPT, and generated a passable short essay and some rather dubious poetry. When I say “passable” I mean that it is almost impossible to tell that it was generated by software rather than by a real person. It is also possible that ChatGPT could pass the Turing Test and win the Loebner Prize.

Of course, the issue of plagiarism has dogged educational assessment for many years. Academics routinely use plagiarism detection systems for essays, and I have a couple of examples from my own professional experience.


I asked the chatbot about UK politics and which side to butter my toast

ChatGPT is becoming a favourite internet game. It has serious possibilities for learning, writing, and cheating.

The AI writer generates errors, mostly because the current version’s knowledge of the world stops in 2021. For example, it thinks that Boris Johnson is still prime minister – or has it been hacked by the BJ camp?

Development of artificial intelligence has been underway for decades. From primitive beginnings, it has been growing in power and in “humanness”. Contact your bank or your council and, in many cases, you’ll be talking to AI by voice or online. But no one thinks these systems have intelligence. Mutter something unexpected to your bank’s bot, like “which side should I butter my toast”, and you can cut through to a real human operator. At least I think it is a real human operator.

ChatGPT, released to the public a few weeks ago, is remarkable, and some commentators think it fulfils the Turing Test, which is passed when a computer’s responses cannot be distinguished from those that would have been made by a human. However, ChatGPT itself is dismissive of the idea:

“It is difficult to say whether ChatGPT, or any other language model, would pass the Turing Test.”

AI is potentially a powerful tool for politics. Could we replace phone banking with AI bots calling? Could we get AI to write campaign literature?

I asked ChatGPT: “Tell me about party politics in the UK in 750 words”. The results are impressive. It would pass as a student essay, despite a couple of errors. I also asked the bot to write a poem about the Liberal Democrats. It is remarkably good, if close to doggerel.
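For the curious, the same kind of prompt can be scripted rather than typed into the chat window. Here is a minimal sketch in Python, assuming the official openai client library (version 1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative, and at the time of writing only the web interface was publicly available, so treat this as the general pattern rather than a recipe.

    # A minimal sketch, assuming the official "openai" Python package (v1+)
    # and an API key set in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    # Send the same prompt I typed into the chat window.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "user",
             "content": "Tell me about party politics in the UK in 750 words"},
        ],
    )

    # Print the generated essay.
    print(response.choices[0].message.content)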

By the way, ChatGPT tells me that which side you butter your toast on is a matter of personal preference, including whether you butter both sides. Does anyone do that?


Should the mass proliferation of AI-driven war drones and robots be left uncontrolled?


The Chinese government, and I hope our MoD, will have noticed Nicholas Chaillan’s important article, “To catch up with China, the Pentagon needs a new #AI strategy”, published in the Tuesday 23 November issue of the Financial Times.

As indicated, Mr Chaillan was formerly the first chief software officer of the US Air Force and Space Force. He is now the chief technology officer at the cybersecurity firm Prevent Breach. The article points out the shocking “kindergarten-level” state of US military and Pentagon cybersecurity, and of the US’s critical national infrastructure, and argues that the next war will be software-defined: “It won’t be won with a $1.7tn programme of fifth generation F35 fighter jets or $12bn aircraft carriers. China can take down our power grid without firing a single shot”.


Jo Swinson debates ethics and artificial intelligence – and suggests the Lovelace Oath

This week, Jo Swinson held a Westminster Hall debate on ethics and artificial intelligence. While recognising the huge advantages of AI, she highlighted some ethical challenges we need to do something about. Jo looked at this from a very liberal perspective, as you would imagine. Here are some of the highlights of her speech. You can read the whole debate here.

I would like to start with the story of Tay. Tay was an artificial intelligence Twitter chatbot developed by Microsoft in 2016. She was designed to mimic the language of young Twitter users and to engage and entertain millennials through casual and playful conversation.

“The more you chat with Tay the smarter she gets,” the company boasted. In reality, Tay was soon corrupted by the Twitter community and began to unleash a torrent of sexist profanity. One user asked, “Do you support genocide?”, to which Tay gaily replied, “I do indeed.”

Another asked, “is Ricky Gervais an atheist?” The reply was, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.

Those are some of the tamer tweets. Less than 24 hours after her launch, Microsoft closed her account. Reading about it at the time, I found the story of Tay an amusing reminder of the hubris of tech companies. It also reveals something darker: it vividly demonstrates the potential for abuse and misuse of artificial intelligence technologies and the serious moral dilemmas that they present.

And then there was this:

How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?

And more:

…there is a female sex robot designed with a “frigid” setting, which is programmed to resist sexual advances. We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned. AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.

So what should be the key principles in our approach to these challenges?

I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI.

How do they work?
