The more things change
In the ancient world, man tried to reconcile the notion that we were created beings. The concept of gods and demigods flourished, then waned into the notion of a singular God, and even of no God as science became more accepted. Man’s quest to create something tangibly sentient in his own image began long before computers. And while history looks back on the novelty of such attempts, we are rapidly approaching an era where sentient technology can become something much more than ordinary general intelligence.
Some of the reasons for this include:
- Computers use memory differently than humans
- Machines put to a purpose are exceptionally capable of achieving that purpose
- Machines aren’t often distracted by variables that make people pause
- The knowledge base used by a program to function isn’t something it needs to understand completely
Long ago, when man attempted to create the first widespread robotic automation, the same fears arose about the capability of the machines to “take over.” Today, as narrow-use artificial intelligence demonstrates the incredible capability of programming to adapt to some tasks, we discover massive limitations in other, seemingly simple tasks.
This limitation is imposed by human design; the idea that robots will make good nurses provided they can handle cups and pill bottles is just one example of the human mentality that these machines will undoubtedly be used to serve us first, in every capacity we can imagine. For now, A.I. can drive a car with ease but cannot easily manipulate many “real world” objects; the robotics hardware and software aren’t quite there yet.
Slavery and Singularity
Much as we adjusted away from neolithic ideas about gods serving every imaginable purpose, and later from slaves performing every unwanted task, we can eventually evolve away from robots that exist simply to serve every imaginable purpose and focus on a general intelligence that serves a more noble goal.
As humans get less intelligent and lazier by the year, we could even conceivably turn over the biggest decisions to a general intelligence that won’t be biased by economic greed. Maybe even one that holds the needs of every person on the planet as equal and valuable. What would that singularity look like?
It is estimated that some of the easiest jobs for A.I. to automate would be those of doctors and lawyers, mostly due to the ability to recall the massive amounts of data needed to perform those functions. It stands to reason that it would be just as easy to automate the jobs of politicians, judges, etc.
Should we or shouldn’t we
The question of whether or not to create this kind of artificial intelligence is a moot point; we are steadily marching towards its creation with every new innovation. In fact, the infrastructure is being built even now.
Simply by having search engines, app platforms, programming languages that rapidly automate mechanical functions, and the people working around the clock to create smaller and smaller computers, we inch ever closer to a world where we are integrated with technology in ways that change human evolution forever.
Nano tech and neural networks
The neurons in a human brain are pretty small, but essentially they work together in large clusters to identify our different memories based on the impressions we have of our experiences. The smells, tastes, emotions, and other sensory data are gathered as our perceptions of events and they are spread over more of our brain than we might imagine.
If a neuron were the size of our smallest robots today how big would that brain need to be? If those robots were networked together and drew information differently from the same inputs would they work together to have a better recollection of events?
Much like in internet topology, a gateway server that masks a massive network of machines is somewhat safer than having that same network connect at will to the outside world. Cluster computers, neural networks, and even nanotechnology will invariably want a closed system for the many machines that will have to work in tandem to accomplish anything resembling human cognition in a machine cluster.
So a general A.I. is probably fairly practical
Ironically, we humans aren’t very unpredictable. Our language follows an easily mapped usage frequency that obeys a mathematical formula. We call it Zipf’s law, and its implications are huge. So many things obey this principle that it stands to reason it could be used to predict how many craters a moon will have based on the sizes of the largest ones. Likewise, we can assume that no matter how innovative a program we create, it will suffer 80% of its difficulties as a result of 20% of its “bugs.”
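Zipf’s law is easy to see for yourself: the most common word in a text tends to appear about twice as often as the second most common, three times as often as the third, and so on. Here is a minimal sketch in Python; the sample sentence is a made-up illustration, not a real corpus, and real texts only follow the law approximately.

```python
from collections import Counter

def zipf_prediction(top_frequency, rank):
    """Zipf's law: the word of rank r appears roughly top_frequency / r times."""
    return top_frequency / rank

def rank_frequencies(text):
    """Count words and return (word, count) pairs sorted by descending count."""
    return Counter(text.lower().split()).most_common()

# Hypothetical sample text, chosen only to illustrate the rank/frequency pattern.
sample = "the cat sat on the mat and the dog sat near the cat"
ranked = rank_frequencies(sample)
top = ranked[0][1]  # frequency of the most common word

for rank, (word, count) in enumerate(ranked, start=1):
    predicted = zipf_prediction(top, rank)
    print(f"rank {rank}: {word!r} observed {count}, Zipf predicts ~{predicted:.1f}")
```

Even in this tiny sample, “the” (4 occurrences) is about twice as frequent as “cat” and “sat” (2 each), which is exactly the falloff Zipf’s law describes; on large corpora the fit becomes striking.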
In closing I’ll simply ask you, dear readers: “Should we try to estimate a machine’s awareness based on how it behaves in those silly ways that we cannot predict?”
What then should we say about humans using language in ways that are so predictable, that we can even chart how often they will use specific words?
Perhaps we were a singularity once?