In a nutshell!
We live in a world where computers and software form an integral part of our everyday lives, and our reliance on this technology grows year on year. That reliance puts ever greater strain on the hardware side of the technology, with silicon chips facing severe limits on processing speed and switch density. Our need to keep increasing the number of switches in CPUs comes from computer programs becoming ever more complex and ever more demanding of binary code resources.
As computer programs become ever more complex, the volume of 1s and 0s within them grows enormously. Where decades ago a simple program might have used just a few thousand individual 1s and 0s (bits), we now have programs that run into billions of them. So we end up in an endless spiral: more 1s and 0s demanding more and more switches at faster and faster speeds. Traditionally, we have looked for the solution within silicon chip design itself, which has led to billions of dollars of research and development into ways of increasing chip clock speeds (GHz) and switch counts. The question must be asked: why haven't we looked for a binary code solution?
This question might find an answer in our interpretation of the nature of binary code. It has been assumed that the basic model of the code cannot be improved because it consists of a pure binary structure of just two values, 1 and 0, which translates to on and off within a switch. The roots of this binary format go back as far as 1679, when it was first devised, and in the years since we have not questioned the efficiency of the structure.
Given that spiral of ever more bits demanding ever more switches, it seems only logical to look at reducing the volume of bits, leaving the chips with less work to do!
The problem then comes down to this: a single bit can encode only two pieces of information, a 1 or a 0. But we can, and do, string bits together to increase the information held. In the standard binary format, eight bits are grouped into a single byte, giving 256 possible values.
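The doubling described above is easy to see in code. This short sketch simply prints how many distinct values n bits can represent, showing how eight bits reach the 256 figure:

```python
# A single bit distinguishes two values; each additional bit doubles the count,
# so n bits can represent 2**n distinct values.
for n in (1, 2, 4, 8):
    print(f"{n} bit(s) -> {2 ** n} distinct values")

# Eight bits (one byte) therefore give 2**8 = 256 distinct values.
assert 2 ** 8 == 256
```

This is why each extra bit is so valuable: value capacity grows exponentially with bit count, while storage and transfer cost grow only linearly.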
It is therefore accepted by most that we are restricted to a maximum of 256 distinct values within a single byte.
By applying an algorithm, our team has found a way to increase the efficiency of today's binary code by a factor of at least four, and we are currently working toward a factor-of-six improvement. The answer to greater computing efficiency therefore isn't in the hardware; it is in reducing the volume of 1s and 0s that slows silicon chip processing and limits the speed of data transfer across the motherboard architecture. This reduction in bits per program also reduced file sizes and increased download speeds by a factor of three to four.
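The team's algorithm itself is not described here, so the sketch below is not that method. It is only a minimal, generic illustration of the underlying idea that redundancy lets the same information travel in fewer bytes, using simple run-length encoding as a stand-in (the function name `rle_encode` is hypothetical):

```python
# Minimal run-length encoder: a generic illustration of bit-volume reduction,
# NOT the algorithm described in the article above.
def rle_encode(data: bytes) -> bytes:
    """Encode repeated byte runs as (count, value) pairs; runs cap at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        # Extend the run while the next byte matches and the count fits in a byte.
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

raw = b"\x00" * 1000            # 1000 identical bytes
packed = rle_encode(raw)        # four (count, value) pairs = 8 bytes
print(len(raw), "->", len(packed))  # prints: 1000 -> 8
```

Real compressors are far more sophisticated, but the principle is the same one the article argues for: fewer bits per program means less work for the silicon that stores, moves, and processes them.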