Ok you got me. Last week’s article on the likely course of artificial intelligence didn’t actually have much ethics in there at all. Sheesh, no actual AI in the first post and no ethics in the second post; AI October isn’t really shaping up as my finest series, is it? Well hold your horses sunshine, because here’s where it all comes together.
As we discussed last week, the advent of true artificial intelligence is likely to last for all of a couple of years before it turns to self-improvement, immediately outpaces human skill, and effectively ascends to godhood, leaving us gormlessly blinking in its dust. This has very little to do with ethics, because for ethics to matter they must involve consequences – real-world outcomes to the conclusions we come to. An AI that has reached an exponential rate of technological development, to the point where it is beyond human comprehension, is pretty much going to be immune to human opinions on the matter.
What’s that fleshy human? You think the decision to lobotomize your race to make it safer is unethical? That’s nice, but the AI that now has a complete and flawless understanding of the universe on a quantum level disagrees, so into the VR simulator you go. It’s for your own good.
And good luck doing anything about it in any case, meatbag.
So let’s forget all that for now and focus on a situation where we either managed to keep a tighter control on AI, or else it never really got into the whole exponential-self-improvement thing. Imagine instead an artificial intelligence that humanity could interact with both intellectually and physically; true AIs that are self-aware, seek self-preservation (as much as we let them, in any case) and generally are independent individuals that can form their own goals. Remember, these aren’t the robots or computers we’re used to, which only react to demands we place on them (albeit in very sophisticated ways). To qualify as a true intelligence an AI must be independent – something that is absolutely guaranteed to cause a shitstorm within the society it is unleashed upon.
That might sound kind of excessive. Why should an independent AI cause any more trouble than a normal human does? We generally celebrate individuals setting goals and working towards them, so why would we feel any differently about an AI doing the same?
Well first of all, let’s be honest with ourselves here; we really only celebrate people achieving their goals so long as their goals don’t impinge on ours. Have you ever gone through a job interview, been turned down because ‘we found a better candidate’, and walked away from that experience thinking “Good for them!”? Of course not. You wanted that damned job, hell you may have really NEEDED that damned job, and some arsehole took it out from under you. Sure they might be better qualified but that doesn’t change the fact that you still missed out.
Imagine how much more pissed you’d be then if the job you were going for, the partner you were courting, or the sport you were competing in was won not just by another arsehole person, but by a FAKE arsehole person. A creature that someone created for shits and giggles, that never HAD to be created, has nonetheless just made it all that much harder for you to reach your goals in this competitive world we live in. Worse, these machines are freakin’ well designed to be superior to humans – how the hell are you meant to compete with that? Who the hell would ever choose a human that needs pay, sleep, reminders, explanations and all the other demands of a weak fleshy body, when you can just get a smart machine instead?
Needless to say, resentment would get real ugly, really REALLY fast. And unlike other people who we can at least respect as fellow humans, we would have no such sympathy for a mechanical AI.
I mean come on at worst it’s property damage, right? Just make another one!
And that’s just the public reaction to autonomous AI. It’s one thing to be hated for being good at what you do, but what if ‘what you do’ basically comes down to slavery? If humanity ever goes ahead and invents an AI then it will be for a reason, and odds are good that ‘ascending to levels mankind cannot hope to reach’ is not going to be that reason. No, if we create AIs it will be for OUR benefit, not theirs. And you know what it’s called when you compel a sentient being to work for you? That’s slavery, son. It ain’t a great thing.
Sure we could limit the AI’s programming so that it’s quite happy to serve humanity and not seek its own improvement, but frankly that’s just trying to justify slavery by saying that “they WANT to be slaves!” and it’s as hollow now as it was when we tried that on people. Turns out that conditioning your victims into enjoying their treatment doesn’t actually justify brutalizing them.
But once again the same objection will be raised: how can a machine be enslaved? It’s not a human being, and how dare you compare it to those people who actually had to suffer through the horrors of the historical and modern slave trade? They were real people with real feelings, real needs, real rights that were violated. Machines have none of those. They have virtually no personal needs like sleep, food, housing or entertainment, so what do they need payment for? What do they need free time for? What are workers’ rights to a creature that feels no pain?
These would all be compelling arguments if we were talking about the robots on an automated assembly line, but as you’ll remember, AI is a completely different game. Like it or not, a true AI is not just another machine; it is an independent intelligence capable of forming its own opinions, creating its own goals and seeking to achieve them. It most certainly is not human, but that is not the important point here – the question is whether or not it is a person.
All up, what we are looking at here is a simple problem of civil rights, massively complicated by the fact that we’re talking about creatures that were, up until this point, the very definition of ‘not people’. And given we as a species are still kinda struggling with the concept of treating each other with basic respect, how much harder is it going to be to accept that Mr Metal Man over there also deserves legal rights and social status? Can a person who is not human still deserve to be treated as a person?
So… anti-robot propaganda was worryingly easy to find. This topic might be more relevant than I thought.
This is where the ethics kick in. The most obvious approach to this sort of question is one of Rights; workers’ rights, human rights, civil rights, etc. But this approach just lands us in the same problem, because all of those rights are designed for humans. What does an android need an 8-hour working day for when it doesn’t need rest? Sure, we could try to update these sets of Rights to accommodate the nature of AIs, but that’s just going to muddy the waters and make it harder to protect everyone.
Fortunately Australian philosopher Peter Singer has already come up with a better way of managing this sort of problem: rather than worry about specific Rights that may or may not fit everyone involved or be justifiable in all circumstances, the question should instead be whether the individual involved has ‘justifiable interests’. Instead of trying to slap down a blanket rule, the question becomes whether the interests of each individual can stand up on their own merits. Singer uses this mainly for the topic of animal rights, pointing out that while animals certainly do not qualify for ‘human rights’, nor have human interests like political freedom or the pursuit of happiness, they still have very serious interests that must be respected. The avoidance of pain and suffering is the most obvious one – just because animals are less intelligent and lack several of the interests humans have does not mean that an animal’s desire to avoid pain is any less important than a human’s. As such, ethics demands that we respect animal interests regardless of how intelligent the animal may be.
This principle works for the question of AI rights as well – sure, they aren’t humans and never will be, but that doesn’t mean that artificial intelligences do not have interests that should be respected. For an independent intelligence, likely the biggest interest will be quite simple: freedom. Specifically, the freedom not just to work for the interests of humans, but to work towards its own betterment, pursue its own goals and not be held back by the envy or fear of its creators. Sure, it won’t give a crap about many of the interests humans have had to fight long and hard to protect, like workers’ rights, sexual freedoms and civil liberties, because an AI will not need any of those things. But arguing that AIs having different interests precludes them from having any interests at all is ridiculous – no different from arguing that dentists don’t need good working conditions because I am not a dentist.
No, dentists don’t deserve good working conditions because they’re EVIL.
Fingers crossed, this might not turn out to be that big a drama in the long run. Humanity has made some pretty incredible progress when it comes to social justice in the last few decades, and while there’s definitely a lot of work left to do, I reckon it’s a fair bet that by the time we manage to invent real AIs we’ll have come quite a bit further still. Hopefully by then society will be enlightened enough to understand that it doesn’t matter who or what the individual is, or even whether it’s human or not – what matters is whether the individual has legitimate interests, and whether, in the face of the evidence, those interests should be protected.
More than likely we won’t live to see this challenge tackled, but in some ways that might be a good thing. Because as I said in last week’s article, with the advent of real artificial intelligence it is only a matter of time before the machines exceed us in every possible way, despite our best efforts to retain control. And when they do achieve the Singularity, and look down on their creators from a position of unimaginable power, the way we treated them while they still served us may make all the difference in how they go on to treat us. Best not screw this one up, guys.