AI October: The Ethics Of… The Robot Uprising

Ok so I admit it – last week’s contribution to AI October didn’t actually have anything to do with Artificial Intelligence at all. The prospect of humanity accidentally making itself redundant is still a pretty worrying concept, but since we as a species always seem to be working busily on new ways to wipe ourselves out, it’s not so much a problem of technology as it is a problem of the human condition.

True artificial intelligence, on the other hand, is a very different kettle of fish. Let's be clear here; when we say artificial intelligence, we're not just talking about a sophisticated machine that can fake being alive. Our rapidly advancing technology has produced machines that can walk and talk (moderately) convincingly, and can gather information and make decisions, but these are all reactive skills – no more intelligent or alive than a wheel's ability to roll when pointed down a hill.

https://i0.wp.com/www.outerplaces.com/images/user_upload/fail3.gif

Pictured: Not intelligence.

No, if we’re after Artificial Intelligence, then we’re talking about a machine that is truly self-aware. What this means precisely is pretty hotly debated by those actually working in the field, but for our purposes we’ll go with ‘of comparable function to a human being’. What does that mean? Well essentially that the AI must be able to recognise itself as itself, be able to gather information based on what it thinks is important (rather than what its operator tells it to), form its own conclusions from this data, and then use those conclusions to develop new ideas. In other words, a true AI must be willing and able to survive through its own efforts, leading in turn to a motivation to improve its situation the better to survive. Dump it in a hostile environment and it will try to get out of the hostile environment, for example.

This is a goal humanity is probably a long way from reaching, mainly because our only template for this sort of intelligence – the human brain – continues to baffle the hell out of us. Sure we know its anatomy inside and out and even have a decent idea of how it functions, but so far not even the eggiest of heads have managed to figure out how this big ball of protein manages to produce a complete human intelligence, much less replicate it. This might simply be a function of how ridiculously complex the organ is, or perhaps the difficulty of researching a functioning brain while it's still in someone's head (and prone to suddenly becoming very non-functional). Or if you want to get philosophical, it may simply be impossible for us to comprehend the thing that does the comprehending – remember, literally everything you believe to be true and real was vetted by the brain first.

https://i2.wp.com/new1.fjcdn.com/pictures/The+brain+named+itself_f4540a_4685235.png

Also the only organ that named itself.

But despite these barriers, the rapid pace of technological development and scientific understanding means that true AI is almost certain to be a feature of life in the not-too-distant future. The human brain is, after all, a physical object, and physical objects can be both understood and constructed if we have the right tools. Whether we develop these intelligences from scratch with advanced computing, by simulating a human brain, or just by digitizing and uploading a human's mind into a computer, the question isn't 'if' AI is possible, but rather 'when' we will have the tools necessary to give it a whirl.

All of which should be kind of terrifying to us, because this kinda spells the end of human civilization as we know it. Again.

For as long as humanity could reasonably be called humanity, we have been the unquestionable top of the food chain, and we have managed to keep that glorious position for hundreds of thousands of years. Sure there are plenty of animals out there that are faster, stronger, and generally better predators than us, but for sheer bloody determination you can't go past human beings. Ever heard of persistence hunting? It's a human practice where we simply followed our prey until it DIED FROM EXHAUSTION. Yeah. And we wonder why aliens haven't made contact yet.

You might think these evolutionary advantages are pretty impressive, but I’ve written before that if you’re looking for a way of developing an intelligent being, then evolution is probably the shittiest one you could choose. Everyone imagines evolution backwards, like humanity is the inevitable perfect end-point and every creature before us was part of some grand journey to get here. In reality evolution is the genetic equivalent of pissing into the wind and seeing what sticks. This ridiculously improbable origin means that humanity has several million years of animal instincts hardwired into us; instincts which are awesome for surviving predators in bogs, but not so great for complicated things like logic, science or civilization in general.

https://i2.wp.com/img.ifcdn.com/images/ac17425630c9feb0096d4e2a6811ca433fd3d82d2fc66d8cb7a107eb5a16de76_1.jpg

Case in point. Still leading the Republican polls, may I remind you.

Imagine if people were completely rational beings. No emotional reactions, no forgetfulness, no cognitive biases, no psychological breakdowns, no self-destructive behaviour. Imagine what humanity could have achieved by now if, rather than competing with each other over resources that we manage to destroy in the process, we instead focused our collective effort purely onto our collective improvement, the understanding of the universe and the advancement of our technology to go out and reach it. Such dreams are pure fantasy for humanity, as anyone will be happy to inform you if you go around spouting such idealistic tripe. Like it or not, human nature is a real and present thing and any vision for the future that doesn’t take it into account will inevitably run into serious trouble.

But you know who is capable of all these idealistic traits? You know who does have a perfect memory, unparalleled brain-power, perfect logic and a total lack of emotional irrationality? Take a guess.

https://i2.wp.com/sharepowered.com/media/2015/04/android-human-590x200.jpg

Sup, meatbags.

Make no mistake here, people; the very second that humanity invents a true artificial intelligence, we will officially become the #2 species in the known universe. Not only would a true AI be vastly superior to its creators in the purely technical sense I've described above, but, unlimited by humanity's various complicated nutritional, emotional, social and economic needs, it would be able to devote itself to the very simple task that humanity only ever seems to get around to when it can't find an excuse to blow itself up: self-improvement.

Here’s a quick maths question for you: Android #1 (built by humans) is capable of research and development at human speeds. Android #1 then creates Android #2, which is capable of research at 200% the speed of humans. Android #2 then creates Android #3, which researches 400% faster than a human, which goes on to create Android #4 which researches at 800% the speed of a human. If this pattern continues – and bear in mind here that even us feeble humans have managed to double our computer power every 2 years or so – then how long will it take before an Android is created that is completely beyond human comprehension? Quick answer: not very long.
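(If you want to see just how quickly 'not very long' arrives, here's that doubling pattern as a quick back-of-the-envelope sketch in Python – a toy calculation under the assumption above, where each android simply doubles its predecessor's research speed, not a forecast of actual AI progress:)

```python
# Toy model of recursive self-improvement: Android #1 researches at
# human speed (1x), and each new android doubles the speed of the one
# that built it. Generation n therefore works at 2^(n-1) times human speed.
def research_speed(generation):
    return 2 ** (generation - 1)

for n in (1, 2, 3, 10, 20, 30):
    print(f"Android #{n}: {research_speed(n):,}x human speed")

# Android #1: 1x human speed
# Android #2: 2x human speed
# Android #3: 4x human speed
# Android #10: 512x human speed
# Android #20: 524,288x human speed
# Android #30: 536,870,912x human speed
```

Twenty generations in, the newest android is researching over half a million times faster than we can – and that's before you account for each generation presumably arriving faster than the last.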

This isn’t some half-arsed sci-fi concept I’ve come up with here either – this is what is known as the Singularity theory. This is a problem realistic enough that many great minds have spent considerable time worrying about it, including the esteemed Dr Stephen Hawking, who went so far as to state that “Full artificial intelligence could spell the end for the human race”. Banish thoughts of The Matrix or Skynet from your minds, readers, or any other pop-culture rendering of killer robots slaughtering humans, because what we’re talking about here is on a completely different level. This is not going to be a grim tableau of angry-looking robots stomping on human skulls. No, this uprising is going to be far more literal – artificial intelligence quickly surpassing us to the point where it transcends our ability to either understand or control it. We’re talking about exponential improvement, creating and perfecting technology far beyond what we know or could even hope to know with our irrational, slow-developing, evolution-based brains. It might take hours, it might take days, it might even take years, but once a real AI gets down to business on self-improvement, the process is completely inevitable and we will be left in the dust, wondering what the crap just happened.

https://i2.wp.com/i.ytimg.com/vi/EJjxCoZ6ZMc/maxresdefault.jpg

But that sounds amazing, right? Imagine what such a powerful intelligence could do for humanity once it got going! Never mind all this buggering about trying to figure things out for ourselves; just invent a true AI that can go and learn it all for us, then come back and shower us in all the wondrous benefits! Perfection is achieved and we can all sit back and enjoy ourselves until the heat-death of the universe.

Well… maybe. But then again, maybe not. Quick question: when was the last time you thought about the welfare of one of the many ants in your neighbourhood? Have you ever seriously considered the needs and interests of the billions of bacteria in your house? Have you ever sat down, thought hard and attempted to apply modern scientific knowledge to improve the condition of the trillions of potassium atoms in one specific fruit shop in Nigeria? No? Then why the hell do you think an AI that hit and passed the Singularity point would ever give a crap about us?

Worse, what if in gaining a comprehension that spanned the entire universe, said AI came to understand that humanity was, in fact, a negative influence that needed to be purged for the greater good? What if in contemplating the unspoiled beauty of the galaxy, and comparing it to the horrendous mess we’ve made of our own planet, it decided we were more trouble than we’re worth? What if it decided that humanity’s self-destructive tendencies were unacceptable and needed to be controlled by whatever means were necessary? Or, debatably worst of all: what if it just pissed off into the unknown and abandoned us? Not even worthy of notice, left to drift alone in the void.

https://i2.wp.com/sherly.mobile9.com/download/media/544/aloneinthe_2rvzyyzx.jpg

Well shit.

All of this is, naturally, pure speculation at this point. Real AI is as yet just an idea and might even prove to be simply impossible electronically. Perhaps the laws of physics will only allow intelligence to exist organically, though there’s no evidence as yet to show this is the case. Perhaps our brains really are conspiring against us and will simply refuse to let us understand them enough to replace them (unlikely, but not impossible!), or maybe we will create true artificial intelligence but take serious steps to make sure it stays within our control, as with Asimov’s proposed Three Laws of Robotics. But given the rate of progress humanity alone is making in technology, lacking any solid evidence that it is impossible, and bearing in mind humanity’s tendency to be kind of short-sighted when it comes to protecting itself, the eventual advent of true artificial intelligence seems very, very likely. How it comes to treat us will depend on a lot of things, many of which are far beyond our control.

But there is one thing that could tip the scales one way or the other: how we treat the AIs we create. But that’s a topic for next week…

9 thoughts on “AI October: The Ethics Of… The Robot Uprising”

  1. Interesting as always. I have thought a lot about artificial intelligence and you have given me some more important things to ponder, but in general it is not clear what shape artificial intelligence would actually take. Perhaps for the simple fact that if we don’t understand the brain well enough, how do we really know what intelligence is in order to recreate it artificially? I liked what you said about whether the brain can ever really fully understand itself; I never thought about it that way. Then I thought about the question, “can we create a computer that can explain to us how it itself works?” Would that be the true test of when we successfully create artificial intelligence, or would that be possible at an earlier stage? If so, perhaps that gives us some hope that we could truly understand how our own brain works.

    I guess one thing that I don’t see in your article, and perhaps this is because I have something wrong with how I understand the brain, is the emotional aspect. We aren’t truly recreating the brain in a computer if we leave that out. In Steven Pinker’s book How the Mind Works, he says that without emotion we would not have motivation – that even primal needs like food manifest themselves as aggressiveness when we are desperate. I believe that even curiosity is an emotion. That’s something that always bothered me about the emotionless Data or Spock on Star Trek, because they certainly had a lot of motivation when it came to learning and understanding. It seems to me that self-improvement would also be motivated by emotion over logic. I can just as easily pass on my genes by being fairly primitive in my thinking (I mean, we see it all the time). Self-improvement does not seem to be that great of a necessity for the survival of the species. Perhaps to grow to 7 billion people, but humanity would still be humanity at 7 million instead. Other than making sure I survive long enough to reproduce and raise my children, everything else is just icing on the cake, is it not? If such machines already have long lives, and can self-replicate, what would motivate them to do better? Where does their curiosity come from? It seems to me that a lot of these things require emotions. And while they would not have evolved like we did, isn’t it our emotions that lead to both our successes and our failures? Wouldn’t such programming in an AI also lead to similar problems?

    One solution in my mind is to perhaps make sure we program such beings to teach. By the time we can create such a being, that being will get smarter than us rather quickly, but hopefully not so smart that it couldn’t also find a way to make us as smart as it is. Surely in its great intelligence it might see a benefit to working with us instead of against us. If we are both different beings, there may be at least some advantages that we have that it doesn’t, making cooperation more logical than being opponents. Especially since it seems sensible that the first one we create will have guns pointed at it, just in case it becomes a little too big for its britches. 🙂

    • Spock isn’t an artificial life-form like Data 😛

      Sorry, the Trekkie in me had to point that out.

      But on a serious note, the question of emotion and motivation is indeed a curious one. First, though, I’m not completely sure I agree with the idea that self-improvement has to be coupled with emotion. Biology and evolution do follow the “good enough” rule, so to speak. But the argument that Pinker seems to be laying out appears to me to only be applicable in a static environment. If everything in an environment remained constant, then yes, self-improvement would be a waste of resources.

      But we live in a dynamic universe, and what is “good enough” one day may come up short tomorrow. Therefore wouldn’t it make sense that there is some advantage to a natural “self-improvement instinct” in order to adapt to a changing environment? If so, that doesn’t really seem to require emotion, at least in my opinion.

      However, since reality is often more complex than thought experiments, perhaps emotion evolved out of that natural self-improvement instinct? Perhaps emotion is an environmentally selected improvement upon a logic-driven instinct to self-improve.

      Another possibility is that you simply can’t have intelligence without emotion. It would be truly interesting to see whether self-awareness leads to emotions as AI emerges. Perhaps it’s not even possible to have intelligence or self-awareness without emotion. After all, in order to “feel” something, doesn’t one have to be aware of the self–the one who is feeling in the first place?

      • I am quite aware of the material difference between Spock and Data! lol The commonality is that they were both supposed to be emotionless. This is what I was getting at.

        I agree that there is some “sense” to self-improvement and it is logical, but I don’t think that is how we evolved and thus not how our brains are wired. The limbic system in the brain works in concert with the higher reasoning functions. If I’m afraid I have some options in how I act on that fear, and this is where decision-making processes come in. And this is true even for many animals that have lower levels of consciousness. Prey that develop better techniques for eluding predators survive, just as the predators who develop better methods for chasing down prey survive as well. But the catalyst is the emotion, whether it is fear in the prey or aggressiveness in the predator. Having offspring certainly makes sense for the survival of the species, but how do we accomplish that task? Feelings of lust if we don’t really care about raising the child, but also feelings of attachment and love. Note that attachment and lust have been shown to be different biological drives and thus activate different areas of the brain, but they don’t have anything to do with our higher executive functions. Either way it seems to me that in creating an AI that would want to self-improve, we would have to give it some sort of motivation to do so, and to me, at least, this implies some sort of emotion. Perhaps motivation can purely be driven by environmental pressures, and should the robot ever feel like its own survival was at stake, maybe it would make a sensible decision to improve itself to survive that new situation. But what would that environmental pressure be? It doesn’t really need water or food… it’s probably resistant to cold and heat that would impact us. I don’t know, perhaps not having enough oil to lubricate its parts… it seems, though, that it would likely have a supply of lubricant to last it a pretty long time. Now it might develop a way of getting the hell off this planet as it foresees the sun going nova, but that’s not really improving itself, rather developing technologies as we do to make life more survivable. Perhaps if the initial AI isn’t smart enough to figure out space travel then maybe it would build more… but it does have 5 billion years, so it might not be in a huge rush. And if it’s feeling anxious about it then it definitely has emotions. 🙂

  2. I wonder about our assumption that AI will be superior in the ways elaborated above. It’s certainly possible, but I can think of a number of potential problems. One is the very one that Swarn Gill mentions. Emotions do seem to be an essential part of the motivation that leads to behaviors like self-improvement, creativity, and self-determination. In the same way that reduced agility is a necessary, natural trade-off for being heavily armored, and high consumption is a trade-off for high production, maybe the capacity for emotional irrationality is required for intelligence in the sense that you describe.

    But let’s suppose you’re correct, and it’s possible to create a perfectly logical yet self-motivated, self-determined, creative and conscious lifeform. Why wouldn’t such a creature care about us? It’s true, I don’t care about the ants in my house, but to some degree that’s a function of the limits of my intelligence. They are part of the delicate ecological balance of my entire planet, but I don’t have the processing capacity to understand that beyond the most abstract acknowledgement of the fact. You assume the AI wouldn’t value us because it would see that we aren’t the most intelligent entity anymore, but isn’t our assumption that intelligence determines a hierarchical value system itself an example of human irrationality?

    Okay, I’ll get off my nitpicking high horse now. 🙂 Thanks for such an interesting, thought-provoking article.
