This week, Jo Swinson held a Westminster Hall debate on ethics and artificial intelligence. While recognising the huge advantages of AI, she argued that there are ethical challenges we need to confront. Jo looked at this from a very liberal perspective, as you would imagine. Here are some of the highlights of her speech. You can read the whole debate here.
I would like to start with the story of Tay. Tay was an artificial intelligence Twitter chatbot developed by Microsoft in 2016. She was designed to mimic the language of young Twitter users and to engage and entertain millennials through casual and playful conversation.
“The more you chat with Tay the smarter she gets,” the company boasted. In reality, Tay was soon corrupted by the Twitter community. Tay began to unleash a torrent of sexist profanity. One user asked, “Do you support genocide?”, to which Tay gaily replied, “I do indeed.”
Another asked, “is Ricky Gervais an atheist?”
The reply was, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”. Those are some of the tamer tweets. Less than 24 hours after her launch, Microsoft closed her account. Reading about it at the time, I found the story of Tay an amusing reminder of the hubris of tech companies. It also reveals something darker: it vividly demonstrates the potential for abuse and misuse of artificial intelligence technologies and the serious moral dilemmas that they present.
And then there was this:
How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?
And more:
…there is a female sex robot designed with a “frigid” setting, which is programmed to resist sexual advances. We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned. AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.
So what should be the key principles in our approach to these challenges?
I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI.
How do they work?
Let us suppose that we suspect that an algorithm is biased towards candidates of a particular race and gender. If the decision-making process of the algorithm is opaque, it is hard to even work out whether employment law is being broken—an issue I know will be close to the Minister’s heart. Transparency is crucial when it comes to the accountability of new AI. We must ensure that when things go wrong, people can be held accountable, rather than shrugging and responding that the computer says “don’t know”.
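To make the transparency point concrete: even when an algorithm's inner workings are hidden, its outputs can still be audited from the outside. Here is a minimal, hypothetical sketch of such an audit. Everything in it is an illustrative assumption rather than anything from the debate: the candidate data and black-box decision rule are invented, and the 0.8 threshold is the "four-fifths" rule of thumb borrowed from US employment practice.

```python
# Minimal sketch: auditing an opaque hiring model from the outside.
# The data, the decision rule and the four-fifths threshold are all
# illustrative assumptions, not a real system.

from collections import defaultdict

def selection_rates(candidates, decide):
    """Return the fraction of candidates selected, per group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for c in candidates:
        total[c["group"]] += 1
        if decide(c):
            selected[c["group"]] += 1
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the best-treated group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Toy example with a black-box decision rule we cannot see inside.
candidates = (
    [{"group": "A", "score": s} for s in (70, 80, 90, 60)] +
    [{"group": "B", "score": s} for s in (65, 55, 85, 50)]
)
rates = selection_rates(candidates, lambda c: c["score"] >= 75)
print(rates)                          # {'A': 0.5, 'B': 0.25}
print(disparate_impact_flags(rates))  # ['B'] -- below 80% of A's rate
```

Nothing in this sketch needs access to the model's internals; that is exactly why transparency about inputs and outcomes, at a minimum, matters if employment law is to be enforced at all.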
The Lovelace Oath
Jo suggested a way to embed ethical principles in the work of every AI developer.
Every doctor who enters the medical profession must swear the Hippocratic oath. Perhaps a similar code or oath of professional ethics could be developed for people working in AI—let me float the idea that it could be called the Lovelace oath in memory of the mother of modern computing—to ensure that they recognise their responsibility to embed ethics in every decision they take. That needs to become part and parcel of the way industry works.
Jo criticised the Government for not including anything about ethics in a recent report on the issue and concluded:
We want the UK to continue to be a world leader in artificial intelligence, but it is vital that we also lead the discussion and set international standards about its ethics, in conjunction with other countries. Technology does not respect international borders; this is a global issue. We should not underestimate the astonishing potential of AI—leading academics are already calling this the fourth industrial revolution—but we must not shirk addressing the difficult questions. What we are doing is a step in the right direction, but it is not enough. We need to go further, faster. After all, technology is advancing at a speed we have not seen before. We cannot afford to sit back and watch. Ethics must be embedded in the way AI develops, and the United Kingdom should lead the way.
While the Minister’s response to Jo was friendly, and she said she had taken Jo’s comments to heart, Tories are by their nature a bit laissez-faire on this. It’s important that they properly consider the ethical aspects of AI development.
* Caron Lindsay is Editor of Liberal Democrat Voice and blogs at Caron's Musings
Comments
I watched the full thing, and was reminded that Westminster isn’t all fighting and point scoring, but that these kinds of debates show the influence of opposition MPs and can raise very important issues where people of all political hues can find common ground. The politeness with which they thank and praise each other for their interventions is a bit disorientating, but a pleasant change from the heckling that is common in the main chamber.
I was also reminded how lucky we all are to have Jo back in Parliament and was impressed at how well she articulated the issues, and presented a way forward. However, I do take Caron’s point that the Minister’s warm words are easy enough to give and easy enough to forget once something else comes along.
I love the idea of the Lovelace oath. It’s not going to be quite as simple as the medical profession to ensure that those that ought to take it do take it, but there would be no reason why it couldn’t be included as a requirement when government and responsible companies commission IT projects.
It’s certainly an area that needs much more thought than it is currently getting, but the phrase “AI systems are making decisions that we find shocking and unethical” also poses part of the problem: shocking is not necessarily unethical. The example of the sex robot is to an extent different from the others, as it is doing exactly what it was designed to do, rather than producing an unintended consequence. The others are also potentially causing harm – the usual touchstone of a liberal assessment of what should be prohibited. If a couple were engaged in consensual roleplay where one partner was resistant or unresponsive to the other’s advances, then I would hope that liberals would respect their choices in regard to sexual expression. It’s not clear where the difference would lie with a sex robot doing the same. The issue of sex robots will throw up ethical questions – but those need to be considered on ethical grounds, not on the basis of taste and shock.
About the beauty contests. Not sure what data was used to train the computer, but presumably it would be past beauty contest results. It’s 17 years since an African won Miss World, even though Africans are one sixth of the world population.
This might be a case of shooting the messenger when the computer tells us something about ourselves we don’t want to hear.
Most people have a mistaken understanding of what ‘bias’ means:
https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what-journalists-want-you-to-think-it-means/
Richard, the point you raise is discussed during the debate. It is one of many dangers of such technology that the biases of previous, human decision-making become absorbed into the computer decision-making. This is precisely the sort of thing that a responsible computer programmer should look out for, and the sort of thing we all need to be aware of.
I read something recently about facial recognition software being used by some US law enforcement agencies. For various reasons, the software is much better at distinguishing between white faces than between black faces, and to cut a long story short, this means that black people are more likely to be falsely identified by the software as the person who was caught on camera committing a particular crime.
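The arithmetic behind that is worth spelling out: a higher per-comparison false-match rate, multiplied across a large database of faces, yields proportionally more innocent “hits”. A toy calculation, with invented figures (neither the database size nor the rates come from any real system):

```python
# Toy calculation: how a higher per-comparison false-match rate turns
# into more wrongful identifications. All figures are invented purely
# for illustration, not measured error rates.

database_size = 100_000  # faces each probe image is compared against
false_match_rate = {"group A": 0.0001, "group B": 0.0005}

for group, fmr in false_match_rate.items():
    expected = database_size * fmr
    print(f"{group}: ~{expected:.0f} expected false matches per search")
# group A: ~10, group B: ~50 -- same search, five times the false leads.
```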
Some of these situations pose no great difficulties. If a human weights decisions in beauty contests or job interviews or university place decisions against Black people (or women, for example), that is illegal. If a human hands over decisions on such things to an artificial intelligence which has the same bias, that’s also illegal, but of course it’s the human rather than the artificial intelligence in the firing line. So anyone considering handing over decisions would be well advised to have the AI checked for bias. The same thing happened years back with decisions over places at a UK medical school, involving not AI but a formula. Staff were asked to develop a formula that mimicked previous decisions: the only way they could succeed was to import an ethnic factor. Result: suspicious people tested the system with fake applications and the bias was uncovered. It had always existed, but the attempt to mechanise decision-making had led to its discovery and correction.
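That medical-school story generalises neatly: any formula or model fitted to mimic past decisions will absorb whatever factors actually drove those decisions, bias included. A minimal sketch with synthetic data (the grades, cutoffs and group labels are all invented for illustration):

```python
# Sketch of the medical-school effect: a formula fitted to biased
# historical decisions can only match them by importing the bias.
# All data and cutoffs here are synthetic and purely illustrative.

import random
random.seed(0)

# Synthetic "historical" admissions: the (biased) past rule admitted
# candidates with grade >= 70, but demanded grade >= 80 from group "B".
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    grade = random.randint(40, 100)
    history.append((grade, group, grade >= (80 if group == "B" else 70)))

def agreement(rule):
    """Fraction of historical decisions a candidate formula reproduces."""
    return sum(rule(g, grp) == adm for g, grp, adm in history) / len(history)

# A group-blind formula can only pick a single grade cutoff...
best_blind = max(agreement(lambda g, grp, c=c: g >= c) for c in range(40, 101))
# ...whereas a formula that imports the group factor mimics history exactly.
with_group = agreement(lambda g, grp: g >= (80 if grp == "B" else 70))

print(f"best group-blind formula matches history: {best_blind:.0%}")  # ~92%
print(f"formula using the group factor:           {with_group:.0%}")  # 100%
```

The group-blind formula tops out below full agreement; the only way to reproduce history exactly is to import the group factor, which is precisely how the bias at the medical school was uncovered.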