This week, Jo Swinson held a Westminster Hall debate on ethics and artificial intelligence. While recognising the huge advantages of AI, she argued that there are some ethical challenges we need to address. Jo looked at this from a very liberal perspective, as you would imagine. Here are some of the highlights of her speech. You can read the whole debate here.
I would like to start with the story of Tay. Tay was an artificial intelligence Twitter chatbot developed by Microsoft in 2016. She was designed to mimic the language of young Twitter users and to engage and entertain millennials through casual and playful conversation.
“The more you chat with Tay, the smarter she gets,” the company boasted. In reality, Tay was soon corrupted by the Twitter community and began to unleash a torrent of sexist profanity. One user asked, “Do you support genocide?”, to which Tay gaily replied, “I do indeed.”
Another asked, “is Ricky Gervais an atheist?”
The reply was, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.
Those are some of the tamer tweets. Less than 24 hours after her launch, Microsoft closed her account. Reading about it at the time, I found the story of Tay an amusing reminder of the hubris of tech companies. It also reveals something darker: it vividly demonstrates the potential for abuse and misuse of artificial intelligence technologies and the serious moral dilemmas that they present.
And then there was this:
How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?
And more:
…there is a female sex robot designed with a “frigid” setting, which is programmed to resist sexual advances. We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned. AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.
So what should be the key principles in our approach to these challenges?
I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI.
How do they work?