Ones and zeros, arranged in endlessly varying patterns, make up nearly the entirety of what modern computers do. Though it was not the only system ever proposed, binary remains the predominant underlying code upon which all other coding is based.
Layer upon layer of interpretation happens within an operating system, as even the most passive bit of higher-level code gets translated downward (often through languages like C) into machine instructions and, ultimately, binary.
The result is impressive but has a few drawbacks, for example the inability of an ordinary computer to generate truly random numbers on its own. While the numbers that pour from simple programs designed to generate them might seem random, in truth they are not: such generators follow deterministic algorithms, so ultimately everything breaks down into a predictable pattern when what lies underneath is something so simple as merely a zero or a one.
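This determinism is easy to demonstrate. The sketch below uses Python's standard `random` module, a typical software pseudorandom generator: give it the same seed twice and it produces the exact same "random" sequence both times.

```python
import random

# A pseudorandom generator is a deterministic algorithm. Seeding it
# fixes its internal state, so the same seed always yields the same
# sequence of "random" numbers.
random.seed(42)
first_run = [random.randint(0, 9) for _ in range(5)]

random.seed(42)  # reset to the identical starting state
second_run = [random.randint(0, 9) for _ in range(5)]

# The two "random" sequences are identical.
print(first_run == second_run)
```

Running this prints `True` every time, which is exactly the predictability the paragraph above describes.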
Other systems have been developed, including balanced ternary, a system which adds a value of -1 to the mix. Building the components proved challenging enough to dissuade the earliest attempts at making the complex hardware "switches" operate in more than two states.
It would be like a light switch suddenly having a third setting, and one for which new abstract code would need to be developed.
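To make that third setting concrete, here is a minimal sketch (in ordinary binary-machine Python, purely for illustration) of how an integer would be written in balanced ternary, where each digit is -1, 0, or +1 rather than 0 or 1. The digit symbols `-`, `0`, `+` are my own notation for the three states.

```python
def to_balanced_ternary(n):
    """Render an integer using the digits -1, 0, +1 (shown as '-', '0', '+')."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append("0")
        elif r == 1:
            digits.append("+")
            n -= 1
        else:
            # A remainder of 2 is represented as a -1 digit plus a carry.
            digits.append("-")
            n += 1
        n //= 3
    return "".join(reversed(digits))

# 5 comes out as "+--", i.e. (+1)*9 + (-1)*3 + (-1)*1 = 5
print(to_balanced_ternary(5))
```

One consequence of the extra state: negative numbers need no separate sign bit, since the digits themselves carry sign, which is part of why balanced ternary intrigued early designers.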
Future systems will eventually evolve to rely on completely different code structures and hardware; by comparison, those switches will be more like dimmer switches. And this will likely happen during our lifetime.
The operating systems that run on top of such hardware (and system software) will potentially be far closer to optimal for generating anything truly random. Likewise, the only real chance artificial intelligence has of evolving is to move towards such randomness.
If we were to develop computers from abstract models and change every aspect, from the hardware and underlying coding to the operating system, what would they be like? Would we use similar components to accomplish similar tasks?
All we can say for certain is that, for now, even what is encoded on the bottom of a DVD is a version of binary. If the digital infrastructure for a different base code were written today, then by next year a DVD could theoretically hold many more layers of coding, and thus more storage, than could currently be conceived of.
Imagine a holographic cube of zeros and ones, all displayed at different levels of brightness due to a voltage difference: code that can be read as it is prompted to reach the correct voltage, and thus remains non-conflicting.
Likewise, programs which operate using one type of code could interact with programs using another on the same machine, and perhaps even utilize existing memory simultaneously without any conflicts.
Some of this might sound like science fiction, but it is likely that you'll see existing hardware capable of holding more and more layers of storage as ideas like this become reality. Processing speeds will multiply quickly, and what today we call cluster computers will become standard offerings on a single motherboard.
The biggest hurdles may not be the hardware but rather the operating systems themselves, as we have yet to create any such systems that work effectively on clustered hardware. There are programs that make use of clusters, and ways of integrating systems between them, but as of yet no operating system utilizes all of their resources in tandem the way a normal desktop operating system does with a single machine.
The moral of the story is that technology is still in its infancy, and you may get to see it take its first real steps in this next decade. I hope this gives some of you ideas, and causes others to debate the efficacy of what I am suggesting we pursue. Beyond all of that, I hope that developers like Linus Torvalds offer some ideas about how to construct such systems.
Because if it isn't open source and it crosses certain boundaries, we may find ourselves debating the morality of all of this. I for one think such technology should emerge from open source hardware and software communities.
When the time is right.