Ok so I admit it – last week’s contribution to AI October didn’t actually have anything to do with Artificial Intelligence at all. The prospect of humanity accidentally making itself redundant is still a pretty worrying concept, but since we as a species always seem to be working busily on new ways to wipe ourselves out, it’s not so much a problem of technology as it is a problem of the human condition.
True artificial intelligence, on the other hand, is a very different kettle of fish. Let’s be clear here; when we say artificial intelligence we’re not just talking about a sophisticated machine that can fake being alive. Our rapidly advancing technology has developed machines that can walk and talk (moderately) convincingly, and can gather information and make decisions, but these are all reactive skills – no more intelligent or alive than a wheel’s ability to roll when pointed down a hill.
Pictured: Not intelligence.
No, if we’re after Artificial Intelligence, then we’re talking about a machine that is truly self-aware. What this means precisely is pretty hotly debated by those actually working in the field, but for our purposes we’ll go with ‘of comparable function to a human being’. What does that mean? Well essentially that the AI must be able to recognise itself as itself, be able to gather information based on what it thinks is important (rather than what its operator tells it to), form its own conclusions from this data, and then use those conclusions to develop new ideas. In other words, a true AI must be willing and able to survive through its own efforts, leading in turn to a motivation to improve its situation the better to survive. Dump it in a hostile environment and it will try to get out of the hostile environment, for example.
This is a goal humanity is probably a long way from reaching, mainly because our only template for this sort of intelligence – the human brain – continues to baffle the hell out of us. Sure we know its anatomy inside and out and even have a decent idea of how it functions, but so far not even the eggiest of heads have managed to figure out how this big ball of protein manages to provide a complete human intelligence, much less replicate it. This might simply be a factor of how ridiculously complex the organ is, or perhaps the difficulty of researching a functioning brain while it’s still in someone’s head (and prone to suddenly becoming very non-functional). Or if you want to get philosophical, it may simply be impossible for us to comprehend the thing that does the comprehending – remember, literally everything you believe to be true and real was vetted by the brain first.
Also the only organ that named itself.
But despite these barriers, the rapid pace of technological development and scientific understanding means that true AI is almost certain to be a feature of life in the not-too-distant future. The human brain is, after all, a physical object, and physical objects can be both understood and constructed if we have the right tools. Whether we develop these intelligences from scratch with advanced computing, by simulating a human brain, or just by digitizing and uploading a human’s mind into a computer, the question isn’t ‘if’ AI is possible, but rather ‘when’ we will have the tools necessary to give it a whirl.
All of which should be kind of terrifying to us, because this kinda spells the end of human civilization as we know it. Again.
For as long as humanity could reasonably be called humanity, we have been the unquestionable top of the food chain and have managed to keep that glorious position for hundreds of thousands of years. Sure there are plenty of animals out there that are faster, stronger, and generally better predators than us, but for sheer bloody determination you can’t go past human beings. Ever heard of persistence hunting? It’s a human practice where we simply followed our prey until it DIED FROM EXHAUSTION. Yeah. And we wonder why aliens haven’t made contact yet.
You might think these evolutionary advantages are pretty impressive, but I’ve written before that if you’re looking for a way of developing an intelligent being, then evolution is probably the shittiest one you could choose. Everyone imagines evolution backwards, like humanity is the inevitable perfect end-point and every creature before us was part of some grand journey to get here. In reality evolution is the genetic equivalent of pissing into the wind and seeing what sticks. This ridiculously improbable origin means that humanity has several million years of animal instincts hardwired into us; instincts which are awesome for surviving predators in bogs, but not so great for complicated things like logic, science or civilization in general.
Case in point. Still leading the Republican polls, may I remind you.
Imagine if people were completely rational beings. No emotional reactions, no forgetfulness, no cognitive biases, no psychological breakdowns, no self-destructive behaviour. Imagine what humanity could have achieved by now if, rather than competing with each other over resources that we manage to destroy in the process, we instead focused our collective effort purely onto our collective improvement, the understanding of the universe and the advancement of our technology to go out and reach it. Such dreams are pure fantasy for humanity, as anyone will be happy to inform you if you go around spouting such idealistic tripe. Like it or not, human nature is a real and present thing and any vision for the future that doesn’t take it into account will inevitably run into serious trouble.
But you know who is capable of all these idealistic traits? You know who does have a perfect memory, unparalleled brain-power, perfect logic and a total lack of emotional irrationality? Take a guess.
Make no mistake here people; the very second that humanity invents a true artificial intelligence, we will officially become the #2 species in the known universe. Not only would a true AI be vastly superior to its creators in the purely technical sense I’ve described above, unlimited by humanity’s various complicated nutritional, emotional, social and economic needs, but it would also be able to devote itself to the very simple task that humanity only ever seems to get around to when it can’t find an excuse to blow itself up: self-improvement.
Here’s a quick maths question for you: Android #1 (built by humans) is capable of research and development at human speeds. Android #1 then creates Android #2, which is capable of research at 200% the speed of humans. Android #2 then creates Android #3, which researches 400% faster than a human, which goes on to create Android #4 which researches at 800% the speed of a human. If this pattern continues – and bear in mind here that even us feeble humans have managed to double our computer power every 2 years or so – then how long will it take before an Android is created that is completely beyond human comprehension? Quick answer: not very long.
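If you want to check the arithmetic, here’s a toy sketch of that runaway-improvement pattern. It assumes exactly what the question does – each android builds a successor that researches twice as fast as itself – and the android numbering and function names are mine, purely for illustration:

```python
def research_speed(generation: int) -> int:
    """Research speed of android N, as a multiple of human speed.

    Android #1 works at human speed (1x), and each successor doubles
    the previous speed, so android N runs at 2**(N-1) times human pace.
    """
    return 2 ** (generation - 1)


def generations_until(multiple: int) -> int:
    """First android generation whose speed is at least `multiple`x human."""
    n = 1
    while research_speed(n) < multiple:
        n += 1
    return n


# How many hand-offs until an android researches a MILLION times
# faster than any human? Just 21.
print(generations_until(1_000_000))  # -> 21
```

Twenty-one generations. And since each generation designs the next one faster than the last, those hand-offs don’t take two years apiece like our computers do – they accelerate too.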
This isn’t some half-arsed sci-fi concept I’ve come up with here either – this is what’s known as the Singularity. It’s a problem realistic enough that many great minds have spent considerable time worrying about it, including the esteemed Dr Stephen Hawking, who went so far as to state that full artificial intelligence “could spell the end of the human race”. Banish thoughts of The Matrix or Skynet from your minds, readers, or any other pop-culture rendering of killer robots slaughtering humans, because what we’re talking about here is on a completely different level. This is not going to be a grim tableau of angry-looking robots stomping on human skulls. No, this uprising is going to be far more literal – artificial intelligences quickly surpassing us to the point where they transcend our ability to either understand or control them. We’re talking about exponential improvement, creating and perfecting technology far beyond what we know or could even hope to know with our irrational, slow-developing, evolution-based brains. It might take hours, it might take days, it might even take years, but once a real AI gets down to business on self-improvement, the process is completely inevitable and we will be left in the dust, wondering what the crap just happened.
But that sounds amazing, right? Imagine what such a powerful intelligence could do for humanity once it got going! Never mind all this buggering about trying to figure things out for ourselves; just invent a true AI that can go and learn it all for us, then come back and shower us in all the wondrous benefits! Perfection is achieved and we can all sit back and enjoy ourselves until the heat-death of the universe.
Well… maybe. But then again, maybe not. Quick question: when was the last time you thought about the welfare of one of the many ants in your neighbourhood? Have you ever seriously considered the needs and interests of the billions of bacteria in your house? Have you ever sat down, thought hard and attempted to apply modern scientific knowledge to improve the condition of the trillions of potassium atoms in one specific fruit shop in Nigeria? No? Then why the hell do you think an AI that hit and passed the Singularity point would ever give a crap about us?
Worse, what if in gaining a comprehension that spanned the entire universe, said AI came to understand that humanity was, in fact, a negative quality that needed to be purged for the greater good? What if in contemplating the unspoiled beauty of the galaxy, and comparing it to the horrendous mess we’ve made of our own planet, they decided we were more trouble than we’re worth? What if they decided that humanity’s self-destructive tendencies were unacceptable and needed to be controlled by whatever means necessary? Or debatably worst of all: what if they just piss off into the unknown and abandon us, deeming us not even worthy of notice and leaving us to drift alone in the void?
All of this is, naturally, pure speculation at this point. Real AI is as yet just an idea and might even prove to be simply impossible electronically. Perhaps the laws of physics will only allow intelligence to exist organically, though there is no evidence as yet to show this is the case. Perhaps our brains really are conspiring against us and will simply refuse to let us understand them enough to replace them (unlikely but not impossible!), or maybe we will create true artificial intelligence but take serious steps to make sure it stays within our control, as with Asimov’s proposed Three Laws of Robotics. But given the rate of progress humanity alone is making in technology, lacking any solid evidence that it is impossible, and bearing in mind humanity’s tendency to be kind of short-sighted when it comes to protecting itself, the eventual advent of true artificial intelligence seems very, very likely. How it comes to treat us will depend on a lot of things, many of which are far beyond our control.
But there is one thing that could tip the scales one way or the other: how we treat the AIs we create. But that’s a topic for next week…