Generally, when I think of artificial intelligence, I apply a bit of skepticism before arriving at the conclusion that the Terminator movies are, at best, an unlikely end to the human race. Not that the movies aren't great, but Hollywood has missed the boat on AI in so many ways that it is doubtful many people have grasped the reality of real-world AI applications.
Watson, the IBM supercomputer now quite well known for its Jeopardy! performance some time back, is an example of largely safe AI development. As it applies machine learning to master answering questions, most of us can't conceive of a present danger that we could actually explain.
What Watson does isn't dangerous. It mostly weighs questions and seeks answers by checking the data it has related to the question. It doesn't manipulate anyone else's coffee maker; it doesn't close garage doors on puppies. It uses machine learning to evaluate questions and couldn't care less if it gets an answer wrong, because it is programmed simply to make those evaluations and feed that data back.
More precisely, it weighs the data, uses machine learning to make plausibility assessments, and then feeds back the possible answers.
It has complex rejection systems based on methods of evaluating data by type, and it can make determinations about a question, but it has no invested concern about applying that data to anything other than an answer.
Safe enough, right?
In nearly every example of AI in film, we see self-aware machines deciding to eliminate humans based on some realization that we are bad or unnecessary, but nobody really explores how such a thing could actually occur, outside of perhaps HAL in 2001, who simply decides not to comply with an integrated system or two.
Reality wouldn't likely put us in a ship that could decide not to open a door. But there are ways AI can actually become dangerous.
When a human user executes code to make something happen (opening a browser, downloading a game, installing it, playing it), the control component in that decision isn't necessarily something logical to any machine. Likewise, without a concept of unwanted execution, there is no quick way to summarize or explain why we want any given program to run.
Even our simplest background services, like system monitors, show many kinds of unnecessary processes waiting for resources. From a machine's perspective, our email programs don't do anything particularly vital.
In fact, 99% of the things you do on a computer probably aren't vital to a machine, so context becomes a hazy line between wanted and unwanted behavior.
So does a smarter machine shut off your email, or an errant process that forgot to close? That depends entirely on what we wanted as an outcome.
If a programmer decided to let an AI execute code at random intervals, it would eventually do something unwanted by default. That isn't a hard concept to grasp, though it may be harder to see implications bigger than unwanted browser windows opening and closing.
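The random-execution scenario can be sketched in a few lines. This is a toy illustration, not any real system: the action names below are hypothetical stand-ins, and the point is only that an agent picking blindly from the actions it was handed will eventually trigger an unwanted one.

```python
import random

# Hypothetical actions an AI agent might be handed. Nothing here is
# a real API; the names are illustrative only.
def open_browser(log):
    log.append("opened browser")

def close_idle_process(log):
    log.append("closed idle process")

def delete_user_files(log):
    # The "unwanted" action -- indistinguishable from the others
    # as far as the agent is concerned.
    log.append("deleted user files")

def run_random_agent(actions, steps, seed=0):
    """Execute randomly chosen actions; intent never enters the picture."""
    rng = random.Random(seed)
    log = []
    for _ in range(steps):
        rng.choice(actions)(log)  # no concept of wanted vs. unwanted
    return log

log = run_random_agent(
    [open_browser, close_idle_process, delete_user_files], steps=200
)
# With enough steps, the unwanted action almost certainly shows up in the log.
```

The agent never "decides" to do harm; harm is simply one of the things it was given the ability to do.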
Depending on what kinds of operations are available to execute, an AI could be very dangerous: executing malicious scripts, shutting down servers, disabling antivirus software, making bank transfers, all with no actual concept of what it's doing. But only if such code were lying around where it could be executed.
If a machine can form even the simplest understanding of what is problematic, it will most likely test the reaction until it understands enough to reject the process.
It might actually be slowing down the development of AI to make it so human-friendly. Language barriers and contextual issues are the real nightmare material.
It would be simpler to never give an AI the ability to execute non-native code, but that isn't very likely. More likely, the machines that are already permitted to arbitrarily execute code simply have no malicious development behind them yet.
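The "never execute non-native code" idea amounts to an allowlist: the AI can only request operations from a fixed, vetted set, and anything else is refused. A minimal sketch, with hypothetical operation names:

```python
# Only these vetted, native operations may ever run. The names and
# handlers are illustrative, not a real system's API.
ALLOWED_OPERATIONS = {
    "lookup_answer": lambda query: f"searching data for: {query}",
    "rank_candidates": lambda query: f"ranking candidates for: {query}",
}

def dispatch(operation, argument):
    """Run an operation only if it appears on the allowlist."""
    handler = ALLOWED_OPERATIONS.get(operation)
    if handler is None:
        # Arbitrary or unknown code is never executed.
        return "refused: operation not on allowlist"
    return handler(argument)

print(dispatch("lookup_answer", "capital of France"))
print(dispatch("run_shell_script", "rm -rf /"))  # refused, never executed
```

The design choice is that danger lives in the dispatch table, not in the AI: if no dangerous handler is ever registered, no clever request can invoke one.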
Make no mistake: it will always be a human who handed the machine the tool, so the responsibility will always rest with that human, not the machine. In closing, the idea that AI will kill off humanity is far-fetched, but it becomes a lot more likely if you put the means at its disposal.
You probably shouldn't teach chimps to use handguns either, for that matter. Having said as much, common sense is the rule and the measure with regard to any programming, and it is better not to fear what can reasonably be understood.
Of course, since we all figure it's just a matter of time before someone makes the nightmare scenario happen, perhaps we should focus more on recognizing how it might unfold. After all, I don't want to lose my digital copy of Terminator.
Thanks for reading this!