Last year a computer program used by US courts for risk assessment was found to be biased against African/Caribbean defendants: it was far more likely to wrongly label them as likely to re-offend. In 2016 Microsoft released its chatbot Tay on Twitter to engage in conversation; in less than a day Tay began uttering racist and sexist comments. Facebook last year experimented by allowing two AIs to interact freely, and shut the experiment down after the programs very quickly developed their own version of the English language.
Artificial Intelligence (A.I.) is coming. We are moving from automation to intelligent applications and, eventually, to super-intelligent A.I. Yet there seems to be no clear plan, or any real risk assessment, for super-intelligent A.I. and how we will, in the long run, interact with it.
Computers will not become biased on their own; they will learn it from us. Up to now, computer science algorithms have focused on machine learning, often having programs perform work we would otherwise do: collecting and analysing data, identifying patterns and automating processes. But A.I. is built by human beings who have implicit biases. Even if you could design an A.I. algorithm to be entirely agnostic to race, gender and religion, it would, through interaction, learn from our experience and the world we live in. Consequently bias, racism and “attitude” will set in. A deeper fear is that super-intelligent A.I. will pay no heed to our notions of morality, equality or liberty. Why should it? What history or allegiance does it have to these concepts and beliefs?
It is projected that A.I. will lead to mass unemployment, replacing as many as 900,000 jobs in the UK alone. Historically, the trend has been that new technology, over time, ends up creating more jobs. There is also a threat to “human dignity”: experts in this area have called for A.I. to be kept out of roles where interaction requires respect, care and genuine empathy.
So, what of Liberty?
Taking J.S. Mill’s three basic liberties: freedom of thought and emotion (including the freedom of speech), freedom to pursue tastes (provided they do not harm others), and the freedom to unite so long as no harm is done to others, how will A.I. affect them? I offer my suggestions below.
Unless we have very clear laws with careful checks and balances, I cannot see A.I. being unbiased. When large corporations are spending billions on A.I. development, they will program to a bias.
I would assume that freedom of speech will be affected once the channels of communication are controlled by A.I. If that is the case, what we say and what information we receive is likely to be regulated – a dream come true for tyrants.
Not harming others, freedom to unite for a cause and equality would all be affected; some early examples are given at the start of this article. A.I. will learn from exposure to the internet, which will shape its own reasoning and understanding. If that understanding is not checked against our ideals of liberty, A.I. will develop and eventually work towards its own awareness.
I have to agree with Professor Hawking that A.I. could be too dangerous for us. I cannot see how super-intelligent A.I. (and beyond) would serve humanity in any form of servitude. If A.I. is a danger to us – especially once it reaches a point where it feels it has a “reason for being” – it will potentially be a risk to our liberty, and possibly to us. The question, then, is why are we spending billions developing it?
* Cllr. Tahir Maher is a member of the LDV editorial team
21 Comments
“When large corporations are spending billions on A.I. development, they will program to a bias.”
They don’t need to – if there’s one thing we’ve learned about AI development, it’s that it reflects the biases already inherent in society. It’s more likely that large corporations will instead fail to spend the money required to identify and eliminate that bias. Same upshot, different intent and motivation.
I’m sorry Dave. I’m afraid I can’t do that.
🙂
You have lots of “artificial intelligence” in government departments, especially immigration, where rather than actual thinking the civil servants have lots of boxes to tick, or not, with little leeway when those bureaucratic points are not totted up. Maybe actual AI will be better, who knows? What the past tells us is that there is no alternative but to embrace it; if we go into Luddite mode, other countries will zoom ahead.
The algorithms are neutral. The problem is training data.
Most training data incorporates bias of some sort, the issue is that not everyone analyses the data beforehand to detect and reduce those biases. Sometimes that is deliberate, most of the time it’s laziness or sloppiness.
Take for example AI research used to target gang activity in places like London. One piece of information used is based around stop-and-search data which is biased towards stopping young black males, even though there’s nothing to indicate they are more criminally inclined than any other group.
That bias is taken as fact by the machine learning algorithm and is used to make the prediction. If you were to selectively only train it with stop and search data of elderly white women, it would take that as fact and predict them as being more likely to be gang members.
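The point about biased training data can be sketched in a few lines of Python. This is a deliberately toy illustration – the group labels, counts and flag rates below are invented – showing how a naive model that simply learns flag rates from skewed stop-and-search logs assigns all the "risk" to the over-policed group:

```python
from collections import Counter

def learn_risk(records):
    """Toy 'model': the learned risk per group is just the flag rate
    observed in the training records of (group, was_flagged) pairs."""
    stops, flags = Counter(), Counter()
    for group, flagged in records:
        stops[group] += 1
        flags[group] += flagged
    return {g: flags[g] / stops[g] for g in stops}

# Invented data: one group is stopped 100 times, the other only 3 times.
# Both could have the same true underlying rate, but the rarely-stopped
# group yields no flagged examples at all in so small a sample.
training = ([("young_black_male", 1)] * 10
            + [("young_black_male", 0)] * 90
            + [("elderly_white_woman", 0)] * 3)

risk = learn_risk(training)
print(risk)  # all the learned 'risk' lands on the over-stopped group
```

The model has no notion of how the data was collected: whoever is stopped most, and flagged at all, is "learned" to be riskiest, exactly as the comment above describes.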
@John Chandler: “The algorithms are neutral. The problem is training data.”
No.
We, humans, have determined that we live by different rules.
If you disagree, I suggest reading comments from humans
Artificial Intelligence is capable of some very impressive feats of pattern matching in a known environment, but it’s not intelligent in the way that humans are. We should not think of it as a competitor, but as a tool – like any other machine – that we can use to augment human capabilities.
Like any tool, AI can also be abused by negligent or malevolent designers and operators and they should be held liable for the consequences of their actions. If a runaway car causes an accident, we don’t blame the car, we blame the person who left it in an unsafe condition. The same applies to runaway AI.
If AI starts controlling human communication or congregation, it is the human behind the AI who is responsible. We have much more to fear from dumb intelligence, built without proper controls, than we do from super-intelligent AI, which I’m confident will be smart enough to realize it can achieve more by working with humans than trying to compete with or replace us.
It reminds me of the Terminator films. Science fiction can come true. Humans will have to be careful what they wish for.
Phil Wainewright 13th Jun ’18 – 9:03pm: “Artificial Intelligence is capable of some very impressive feats of pattern matching in a known environment, but it’s not intelligent in the way that humans are. We should not think of it as a competitor, but as a tool – like any other machine – that we can use to augment human capabilities.”
An intelligent tool? That sounds rather like slavery to me and, more importantly, it will to the ‘intelligence’.
The late Stephen Hawking made no secret of his fears that thinking machines could one day take charge. He went as far as predicting that future developments in AI “could spell the end of the human race.”
> An intelligent tool? That sounds rather like slavery to me and, more importantly, it will to the ‘intelligence’.
We seem to get on pretty well with dogs and horses, both of which are thousands of times more intelligent than today’s machines, which excel only at rote learning without having any real understanding of what it is they’re recognizing.
What Stephen Hawking warned about was the danger from poorly or negligently designed AI that is too immature to understand the consequences of its actions. If AI does spell the end of the human race, it will be the fault of humans, not AI.
Phil Wainewright 13th Jun ’18 – 11:06pm: “We seem to get on pretty well with dogs and horses, both of which are thousands of times more intelligent than today’s machines, which excel only at rote learning without having any real understanding of what it is they’re recognizing. … What Stephen Hawking warned about was the danger from poorly or negligently designed AI that is too immature to understand the consequences of its actions. If AI does spell the end of the human race, it will be the fault of humans, not AI.”
In your first paragraph the definitive words are ‘today’s machines’. As for your second paragraph, I suggest you read Hawking’s reasoning. To continue your ‘dogs and horses’ analogy: would you give them the ability to switch you off?
@Phil Beesley
I’m not actually sure what you’re saying. Are you saying mathematics is not neutral, but humans are? Can I ask how many machine learning systems you’ve worked on?
For the record, my first exposure to machine learning (“AI”) was about twenty years ago, and I’ve worked on developing a few systems recently (the technology has improved drastically in the years between).
It would be nice to see on LDV – and indeed from Lib Dems generally, and from Tahir – some joy at the brilliance of modern technology and how it has massively improved, and is improving, our lives as humans!
Bias – yes, as if that is not something those super-intelligent humans are prone to! Even our “hard-wired” senses misinterpret the world, as optical illusions show. AI and computer technology give us the ability to get rid of massive sources of bias.
Come on folks – technology has freed millions of Brits from back-breaking, life-limiting work, and now does the same in office and technical jobs: there used to be rows of clerks just adding up columns of numbers. Is everyone now out of a job? No, of course not – employment, if anything, is at a record level.
Come on folks – embrace science and technology. The future is bright and is bringing us so much benefit.
Or… you could go on believing that the Earth is flat, the sun orbits it and was created about 6,000 years ago!!!!!
@Michael 1 – I don’t doubt the benefits of technology. The problem with AI is that at a point it will surpass us humans, and the issue is how we are going to interact with such an entity. A point I also raise is why even get to that point of AI development if we think it could be dangerous for us.
The problem with AI is that at a point it will surpass us humans
No, it won’t. Unless by ‘surpass us humans’ you mean ‘be better than us at some very specific tasks’ – but in that case we’ve been coping for thousands of years with beings which surpass us (horses, for example, which surpass us at pulling ploughs). That doesn’t sound so impressive, now, does it?
(Even in some of the ways people think machines have ‘surpassed us’, they really haven’t. Take facial recognition software, which some people think can recognise faces faster than a human. It can’t: humans recognise faces not only with better accuracy than a machine, but faster. What a machine can do is look at face after face, hour after hour, without getting bored — which, while useful, is not exactly on the path to The Terminator.)
I think you’ve been reading too much science fiction. As an antidote, I suggest:
https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/
@Tahir Maher
Thanks for your comments, and of course there are equally issues about technology that need addressing, which we, as people concerned about the body politic, need to consider. Overall, if technology is 80% “good” and 20% “bad”, we as politicians need to be careful not to fixate on the 20% bad – and most of that is probably caused by people, not technology. As an exercise, I thought about all the things I do during the day and whether the technology existed a mere 200 years ago to do them: it is instructive – virtually none of them could have been done – and technology enables us, mostly, to live vastly better, healthier, longer, wealthier and more efficient lives.
But to take the points raised in your article:
EMPLOYMENT. As you note, by and large technology leads to more employment over time, as more can be done more efficiently. The key, as I have banged on about on LDV, is education. With countries like South Korea sending 70% of their young people to university today, in 20 years’ time we will need 80%+ of ours to go. That means a vast increase in the schools budget and pupil premium TODAY, and a lifelong adult education fund for each citizen.
BIAS. If anything technology militates against bias. To take a hypothetical example IF say an insurance company is discriminating against BAME people on their ethnicity which is actually another factor then another company comes along and insures them for less realising this through better AI and makes a healthy profit. Of course regulation is important.
FREEDOM OF THOUGHT. If anything technology facilitates greater spreading and expression of ideas – from books through to LDV today.
2/2
TODAY’S INTELLIGENCE OF AI. Overall, while good in very specific areas and domains, AI is not very “generally” intelligent – just compare it to a small animal. In general it has changed the way we think about intelligence, mostly for the good: someone who could do feats of mental maths was once considered intelligent; today a £1 calculator can do it.
TOMORROW’S AI. It is likely that eventually we will be able to design machines more intelligent than us – indeed to replicate the human brain, and build better brains artificially. I truly think this is many centuries off. Does it pose some problems? Probably – but, as Dav says, we are used to using physically stronger machines and animals. I have got used to losing to my computer at chess – although it doesn’t seem that good at Monopoly yet!
To take a hypothetical example IF say an insurance company is discriminating against BAME people on their ethnicity which is actually another factor then another company comes along and insures them for less realising this through better AI and makes a healthy profit.
Hm. What if:
(a) the insurance company doesn’t input whether someone is BAME or not into their statistical system (maybe they don’t even know, if the application is done on a website and the question isn’t asked)
(b) they do input where the person lives
(c) people in some neighbourhoods are much more likely to claim than others
(d) people from those neighbourhoods are therefore charged higher premiums, as otherwise the company would be making an expected loss on them
(e) (here’s the kicker) a high proportion of the BAME population live in those neighbourhoods.
Is that insurance company discriminating against BAME people, or not? It certainly looks like it, as BAME people will on average be charged higher premiums… but how can they be, if they don’t even know which applicants are BAME and which aren’t?
It’s a tricky one.
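The (a)–(e) scenario above can be made concrete with a short Python sketch. All postcodes, claim rates and population splits below are invented for illustration; the point is only that a pricer which never sees ethnicity can still produce different average premiums by ethnicity, purely through where people live:

```python
# Invented per-postcode claim rates; ethnicity is never an input anywhere.
claim_rate = {"N1": 0.50, "S2": 0.25}
BASE = 100.0

def premium(postcode):
    # Price to cover the expected payout -- only the postcode matters here.
    return BASE * (1 + claim_rate[postcode])

# Hypothetical population: BAME applicants are concentrated in postcode N1.
applicants = ([("bame", "N1")] * 8 + [("bame", "S2")] * 2
              + [("white", "N1")] * 2 + [("white", "S2")] * 8)

def avg_premium(group):
    quotes = [premium(pc) for g, pc in applicants if g == group]
    return sum(quotes) / len(quotes)

print(avg_premium("bame"), avg_premium("white"))  # prints 145.0 130.0
```

The pricing function contains no ethnicity variable at all, yet the group averages diverge because postcode acts as a proxy – which is exactly why the question "are they discriminating?" is so tricky.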
@Dav
I am not sure that I thought up the best example in a few minutes, and there was a typo in my post. I should have said that ethnicity was being used as a PROXY for another factor – say ethnicity is the only variable used by insurance company A (the only insurer in the market) to charge higher premiums. Better application of data by another company shows that ethnicity is being used where postcode should be – and postcode is something insurance companies can and do discriminate on. BAME individuals are then charged less if they live in less risky postcodes.
Obviously there is a debate about why BAME individuals may be living in riskier postcodes, etc.
The overall point was to try and show that the application of science and technology, AI and data analysis can reduce human biases – we are massive pattern-recognition machines, and overly so.
Of course correlation is not causation, and there is a risk that scoring algorithms – say for a bank loan – use correlation rather than causation. But overall I would humbly suggest that they get nearer to the truth and are less discriminatory than going along to the bank manager – a Captain Mainwaring type – who gives me a loan on the basis of whether I play golf at the same club as him. Humans are massively discriminatory: we evolved to make snap decisions on, say, whether there was a lion in the bushes, and often false positives were less costly than false negatives (being eaten by the lion). Indeed we can only keep a few factors in our minds – unless we sit down with a piece of paper, or, um, a computer. There is always a danger that we will take “the computer says no” as gospel, but a computer algorithm weighing hundreds of factors MAY well be less biased and discriminatory than a human.
This also has little to do with AI. It is easy to highlight a few discriminatory decisions by computers, where humans over history – and today – have made millions.
I believe that science, technology and better application of data – carefully done – including by computers gets us nearer to the “truth”. I am fairly convinced by looking at my own observations that the Earth is flat and the Sun goes round the Earth. Better data does convince me that the opposite is the case!
BTW the article you referenced is very, very interesting – thanks – and should be required reading for all politicians and journalists.
I am fairly convinced by looking at my own observations that the Earth is flat
Really? You’ve never watched a ship gradually disappear over the horizon — something that could only happen if the world was a globe?
(If the world were flat, there would be no horizon: the ship would just get smaller and smaller as it sailed farther and farther away.)
I think as Dav suggests many have read and/or watched too much science fiction.
For ‘AI’ to go beyond human – which it won’t without divine intervention(!) – requires the humans creating the ‘AI’ to be able to envisage what “beyond human” means, which (when you think about it) means it is still sub-human. It’s one of the reasons why humans cannot really comprehend ‘God’, or what it looks and feels like to move through 4-dimensional space.
Thus, getting under the hype, what we arrive at is simply more sophisticated algorithms which, because they rely on statistical learning, don’t come to preprogrammed conclusions when analysing inputs. What this means in the context of the insurance example is that we would have no real idea whether one of the ‘AI’ insurance risk assessors is or isn’t taking ethnicity, or any other politically incorrect factor, into account…
Personally, working in IT, I have no concerns about the machines taking over anytime soon, i.e. in the lifetime of my (yet to be conceived) grandchildren. However, I do have concerns about those (humans) pushing technology beyond its level of maturity – for example autonomous cars. The stories of their failings and failures are starting to appear in the mainstream media, and hopefully these will inject some realism into the debate.
Aside: @Michael 1, I presume also you haven’t flown and hence seen with your own eyes the curvature of the earth?