Binary numbers – seen as strings of 0's and 1's – are often associated with computers. But why is this? Why can't computers just use base 10 instead of converting to and from binary? Isn't it more efficient to use a higher base, since binary (base 2) representation uses up more "spaces"?
I was recently asked this question by someone who knows a good deal about computers. But this question is also often asked by people who aren't so tech-savvy. Either way, the answer is quite simple.
What is "digital"?
A modern-day "digital" computer, as opposed to an older "analog" computer, operates on the principle of two possible states of something – "on" and "off". This corresponds directly to an electrical current being either present or absent. The "on" state is assigned the value "1", while the "off" state is assigned the value "0".
The term "binary" implies "two". Thus, the binary number system is a system of numbers based on two possible digits – 0 and 1. This is where the strings of binary digits come in. Each binary digit, or "bit", is a single 0 or 1, which directly corresponds to a single "switch" in a circuit. Add enough of these "switches" together and you can represent larger numbers; string 8 of them together and you have a byte. (A byte, the basic unit of storage, is simply defined as 8 bits; the well-known kilobytes, megabytes, and gigabytes are derived from the byte, and each is 1,024 times as large as the previous one. The difference is 1,024-fold rather than 1,000-fold because 1,024 is a power of 2 and 1,000 is not.)
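To see how quickly those switches add up, here is a small Python sketch (just an illustration, not anything taken from actual hardware):

```python
# Every extra bit doubles the number of values a group of switches can represent.
for bits in (1, 4, 8):
    print(bits, "bits can represent", 2 ** bits, "different values")

# A byte is 8 bits; each larger unit is 1,024 (2**10) times the previous one.
bytes_in_kilobyte = 1024
bytes_in_megabyte = 1024 ** 2
print(bytes_in_kilobyte, bytes_in_megabyte)
```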
Does binary use more storage than decimal?
At first glance, it seems like the binary representation of a number, 10010110, uses up more space than its decimal (base 10) representation, 150. After all, the first is 8 digits long and the second is 3 digits long. However, this is an invalid argument in the context of displaying numbers on screen, since they're all stored in binary regardless! The only reason that 150 is "smaller" than 10010110 is because of the way we write it on the screen (or on paper).
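You can verify this with a couple of lines of Python (a minimal sketch; the calls simply convert between two written forms of the same stored value):

```python
n = 150
print(bin(n))              # 0b10010110 -- the same value, written out in base 2
print(int("10010110", 2))  # 150        -- and converted back again
# Either way, the computer stores the same byte; only the on-screen text differs.
```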
Increasing the base will decrease the number of digits required to represent any given number, but following directly from the previous point, it is impossible to create a digital circuit that operates in any base other than 2, since there is no state between "on" and "off" (unless you get into quantum computers... more on this later).
What about octal and hex?
Octal (base 8) and hexadecimal (base 16) are simply "shortcuts" for representing binary numbers, as both of these bases are powers of 2. Each octal digit stands for 3 bits and each hex digit stands for 4 bits, so 2 hex digits = 8 binary digits = 1 byte. It's easier for the human programmer to represent a 32-bit integer, often used for 32-bit color values, as FF00EE99 instead of 11111111000000001110111010011001. Read the Bitwise Operators article for a more in-depth discussion of this.
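For example, in Python (a quick sketch using the same color value as above):

```python
color = 0xFF00EE99               # a 32-bit value written in hex
print(format(color, "032b"))     # 11111111000000001110111010011001
print(format(color, "X"))        # FF00EE99
# Each hex digit corresponds to exactly 4 bits, so the two forms line up digit for digit.
```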
Non-binary computers
Imagine a computer based on base-10 numbers. Then, each "switch" would have 10 possible states. These can be represented by the digits (known as "bans" or "dits", meaning "decimal digits") 0 through 9. In this system, numbers would be represented in base 10. This is not possible with regular electronic components of today, but it is theoretically possible on a quantum level.
Is this system more efficient? Assuming the "switches" of a standard binary computer take up the same amount of physical space (nanometers) as these base-10 switches, the base-10 computer would be able to fit considerably more processing power into the same physical space. So the question of binary being "inefficient" does have some validity in theory, but not in practical use today.
Why do all modern-day computers use binary then?
Simple answer: Computers weren't initially designed to use binary... rather, binary was determined to be the most practical system to use with the computers we did design.
Full answer: We only use binary because we currently do not have the technology to create "switches" that can reliably hold more than two possible states. (Quantum computers aren't exactly on sale at the moment.) The binary system was chosen only because it is quite easy to distinguish the presence of an electric current from the absence of an electric current, especially when working with trillions of such connections. Using any other number base in this system would be ridiculous, because the system would need to constantly convert between bases. That's all there is to it.
Comments
Yeah, I do wonder what future computers will be like.
Like you said, binary has two modes, on/off, so it makes sense to use it in a computer.
Who would even use a quantum computer?
On the topic of Moore's law:
if you go by definition, then Moore's law isn't really a law; it is just an uncannily accurate observation and prediction. Why is it called Moore's law when it should be called Moore's Theory of computer miniaturization?
Also, no one has mentioned the different paths idea.
The "different paths" idea still reduces to 2 discrete voltage states, making it expressible using binary. (Please correct me if I'm wrong here.)
Those are pretty cool ideas though, and I guess I'll be dealing with huge paradigm shifts in computers as I get older and the limits of current computers are reached.
Quantum computers would be used in labs for a long while as they refine them to handle end users and, eventually, become small enough for a home desktop.
Binary is far more efficient for today's computers to process because it's easy to build a reliable circuit that has distinct "on" and "off" states. Two values in the real world imply the use of a 2-valued number system.
Binary is also simpler; the basic operations of addition and subtraction only involve 3 possible states (0, 1, carry/borrow) versus 11 possible states for decimal.
Decimal is significantly more intuitive for humans, however, since a) we normally have 10 fingers and b) our language was built around a base-10 system.
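To make that "only 3 possible states" point concrete, here is a minimal one-bit full adder in Python (a sketch for illustration; real hardware does this with a handful of logic gates):

```python
def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry; every input and output is just a 0 or a 1."""
    total = a + b + carry_in
    return total % 2, total // 2   # (sum bit, carry out)

print(full_adder(1, 1, 0))  # (0, 1): 1 + 1 = 10 in binary
print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 11 in binary
```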
S 0 R
s = sender, r = receiver, 0 = glass or some similar material.
The 0 would be kind of like glass, but movable by the computer to form any character.
There are still limits as to how much information you can send -- namely, the speed of light, and the attenuation caused by a non-100%-clear cable over longer distances. It's still significantly faster than copper wire and has the additional benefit of not being influenced by EM interference or varying ground potential (over long distances).
Light is still used digitally though, through on/off pulses. It's easy to send tons of fast, timed on/off pulses, but it's much more difficult and error-prone to measure analog levels of light. You need something that has well-defined quantum states, and unless we're talking positions of an electron or something like that, the only ones that can be easily achieved are "off" and "on."
Why do we use binary numbers in computers?
Good job, thanks a lot.
Please tell me the advantages of quantum computers over classical computers.
I really appreciate the blog.
Oh, and if higher bases mean faster processing speeds, would a base-3 system be faster than binary? And then technology could improve from there: base 4, 5, 6...
If you're talking about using 10 different wires for each signal (1 wire on = 1, 2 wires on = 2), it would simply be inefficient. Would you rather have 10 possible combinations or 2^10 = 1024 possible combinations for a given hardware cost?
Higher numbers do not necessarily mean faster processing speeds. It greatly depends on how the rest of the system is set up. Think about it this way: are 5 light bulbs brighter than 1? Not if there are five nightlight bulbs versus 1 100W bulb!
https://en.wikipedia.org/wiki/History_of_processors#1950s:_early_designs
base, since this base greatly reduces the size of any astronomical calculation. As an example, you mention that a number can be represented by 0s and 1s in binary, so it seems logical that it takes more digits and therefore should take more storage space, although our present technology doesn't allow us to do it any other way. Well, long ago the church stated that the earth was flat too, because of their lack of technology. If you take any mathematical function, like adding or dividing a number, you can easily see that the process is a lot faster using decimal base 10, even more so if you try to reduce or classify it. My intuition says that there is a way to use decimal base 10 in a computer; you only need a computer that could think like a human. Best Regards
Thing is, there is nothing inherently "better" about base 10 -- you can perform calculations in any base, it's just harder for a human to use different bases since they're not intuitive.
Base-10 computers are possible but they require technology that provides 10 distinct quantum states. Currently most computers use binary because it's easy to provide 2 quantum states with a digital circuit. Analog computers have used different bases in the past but they were also far more susceptible to interference.
Nevertheless, I don't agree with this statement: "Thing is, there is nothing inherently 'better' about base 10 -- you can perform calculations in any base, it's just harder for a human to use different bases since they're not intuitive." Here I'm afraid that you have to prove it, mathematically speaking. And I can prove it with a simple problem: try to classify a number with "x" digits using a mathematical operation so the result can be kept in a constant series of ordinal numbers.
I believe that using base 10 is the best option to do this, even more exact than base 12, which is known as the best base to use.
Best Regards
Let's say you have twelve cookies. You can write it as:
"I have 12 cookies"
"I have 1100 cookies" (base 2)
"I have C cookies" (base 16, using A-F as additional digits)
"I have 111111111111 cookies" (base 1, count the lines)
Regardless of how you write it, the actual number of cookies in your hand does not change. Therefore the bases are equally good at REPRESENTING the meaning of the numbers.
As for efficiency and speed of calculation... your math skills are optimized towards base 10 because that's the way they taught you (see first point). Binary also SEEMS less efficient at first glance because it takes more digits to write out the numbers ON PAPER, but by that same logic, the most efficient base would be the highest one practical -- if you have 10,000 unique characters usable as "digits", you can then represent every number from 0 to 9,999 using ONLY ONE DIGIT!
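Here is the same point in code (a small Python sketch; `to_base` is a hypothetical helper written just for this example):

```python
def to_base(n, base, digits="0123456789ABCDEF"):
    """Write n in the given base (2 through 16)."""
    out = ""
    while n:
        out = digits[n % base] + out
        n //= base
    return out or "0"

for base in (10, 2, 16):
    print("base", base, "->", to_base(12, base))
# base 10 -> 12, base 2 -> 1100, base 16 -> C -- the same twelve cookies every time
```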
A computer, on the other hand, can be built to operate very quickly on many base-2 lines in a digital circuit because it's very easy for it to represent 2 states (on and off) but much more difficult to reliably represent multiple states. In a perfect world it can use multiple voltage levels (i.e. 0V = 0, 1V = 1, 2V = 2...) but this presents 2 challenges.
1) Circuitry for distinguishing these multiple voltage levels is much more complicated to build, and thus more expensive
2) In the real world, circuits are imperfect and are susceptible to noise. TTL circuits for example operate at 5 volts, but signals can vary quite dramatically, so 0 to <1V is defined as "off" and 2-5V is defined as "on". This allows the computer to be far more certain of the results even if voltage fluctuates. In the 10-volt base 10 system, a 5-volt signal (representing 5) can drop down to 3V, and this messes up the calculation because the other chip "sees" a 3 on the line. Possible to build? Yes. Practical for general use? No.
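A toy model of that thresholding, sketched in Python (the voltage limits are just the approximate TTL figures mentioned above, not a precise specification):

```python
def ttl_read(voltage):
    """Interpret a noisy TTL-style signal: below ~1 V reads as 0, above ~2 V reads as 1."""
    if voltage < 1.0:
        return 0
    if voltage > 2.0:
        return 1
    return None  # undefined region -- a well-designed circuit keeps signals out of it

print(ttl_read(0.3), ttl_read(4.8), ttl_read(3.1))  # 0 1 1 -- a sagging 5 V signal still reads as "on"
```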
There is certainly a way to make a base-10 computer, at least in theory. A quantum computer does not operate in base 2 because there are multiple distinct states (analogous to "on" and "off", as opposed to "partially on"). This allows it to reliably operate in a higher base.
The link :
http://www.dozenalsociety.org.uk/basicstuff/osburns.html
It's difficult for a human to deal with 100-digit strings of binary numbers. But for a computer, it's trivial to print more digital logic lines into a microchip – far easier than trying to build a reliable analog circuit with 10 "distinct" states instead of 2.
Again, this article covers why computers use binary, and in no way implies that binary is universally superior.
Have you researched memristors at all? They're a relatively new technology with big implications for computing: they can "remember" any frequency of electrical charge, meaning they could be used to make a computer that runs on base 10 or larger. Pretty interesting stuff. You should look it up. Thanks for this article, very helpful and concise!
-Roy
Even with the additional load of calculating the check sum, one could imagine that the increased efficiency would more than make up for the additional calculations.
A true base-10 computer would need to be made up of components that have 10 distinct, unambiguous states – something akin to the quantum states of an electron (hence the term "quantum computer"). Such a computer will not be subject to data corruption in the way that an analog one will be.
There are also other costs involved. If it costs X units to produce a 1-bit (binary) circuit element and 10X units to produce a 1-dit (decimal) circuit element, it will be possible to build a 10-bit circuit that's capable of holding 1024 possible values for the same price as a "more efficient" decimal circuit that can only hold 10 possible values. Thus the binary computer is actually more "efficient" for a given price point.
@Roy Memristors are still analog devices – they're quite fascinating, but from what I know, they have the potential for improving computers by reducing the size of a memory element (which currently requires a much more complicated circuit) but it would still be a 2-state device (has resistance vs. doesn't have resistance). Attempting to define "states" of resistance will result in the same problems that plague analog computers in general.
(I know, mitochondria are supposedly only good for making ATP from ADP by pumping protons --- the opposite of electron flow. Some guy got a Nobel prize for saying that! If that's true, then a Toshiba is not a very good space heater.)
I have a question about the binary base in general.
I know it is a bit off-topic, but you seem to understand numbers well, so I hope you answer.
I find it misleading how binary, and any other base, treats zero.
Zero represents "nothing", but the way it is used, it doesn't represent "nothing", but is just used as another symbol.
Here is an example in binary.
0 is nothing.
1000 is |||||||| or 8 in decimal base. Here 0 is not nothing and could be replaced with any other symbol, for example, @.
So
1@@@ will also be equal to |||||||| or 8 in decimal base.
My point is that, though it is called binary, there are actually 3 symbols:
"0" to represent "nothing"
"1" to represent "one"
"0" to represent the idea of "position" in numbers.
Why do you think it was called binary though? It applies to other bases as well.
Thank you!
Note that binary, or any other number system for that matter, is simply a way to represent numbers, so the symbol "0" doesn't mean anything by itself.
A representation of a number consists of an infinite number of "place values" starting with the units (1). Each place value has a numeric value equal to the base raised to the place value's position. With this in mind, the 0 simply indicates "the number doesn't contain anything from this place value" -- we write in the zeros to make sure the actual place values don't get shifted over.
You can write the number 5 in binary in 3 different ways:
101
0101
000000000000000000000000000000101
They still represent the exact same number.
101 = 1*(2^2) + 0*(2^1) + 1*(2^0) = 5
0101 = 0*(2^3) + 1*(2^2) + 0*(2^1) + 1*(2^0) = 5
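The same expansion in Python, if you want to check it (a tiny sketch, nothing more):

```python
def from_binary(bits):
    """Sum each digit times its place value (2 raised to the digit's position)."""
    return sum(int(d) * 2 ** i for i, d in enumerate(reversed(bits)))

print(from_binary("101"), from_binary("0101"), from_binary("000101"))  # 5 5 5
```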
Thank you!
http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Zero.html
Maybe it will be interesting for others as well.
Actually, it could not have more than one current. They would have to translate back to a binary sort of system to unify them, or else they would all have to connect.
Please help me .....
thanks a lot
@KK Sohaib
I'm following right along with your comments about binary versus any other set. I understand the 'on'/'off' and present/not present and the reliability of such. So I understand why we are currently using binary. From your comments I see why what I wonder about will be difficult and likely some time off. But I would like your thoughts on this.
With my limited knowledge of the human brain and its structure, I understand that within the brain one node (bit, in computer terminology) can have one or several connecting nodes. Depending on the synapses (voltage, if using computer terminology), none, one, all, or any combination could be "opened".
It seems to me, that even though it is not difficult to quickly pass through many bits of data, it would still be quicker to pass through fewer. Also, the brain may store some data regarding the position in the string such that the same node can be used in several data strings.
I'm not sure how one would manage such a system, but based on what you know, would it be possible one day to structure a processor to use the same bit in multiple data strings? What about fewer nodes and more connections? This would relate to binary versus other bases.
I wonder if the human brain uses a numbering (naming) system of 10,000 symbols (pick whatever number you wish) such that one node is required for each symbol. Thus one full thought can be done with only a few nodes. For computers to become even smaller and faster, it may take a completely different architecture than the long strings of 0 and 1 that we currently use.
Your thoughts.
I have been working out ways to achieve decimal computation, using standard electrical circuitry, for many years now. Not only do I believe that it is very possible, but also that it should be pursued as soon as possible, to help usher in the age of quantum computation and to relieve the internet data congestion we have today.
For starters, it is a bit of a leftover myth that electrical circuits would not be able to properly distinguish ten states of voltage (amplitude). An analog-to-digital converter (A/D, or ADC) already does this with startling precision. Your computer microphone sends a constantly fluctuating voltage (amplitude) into the A/D as an analog signal. The A/D then discerns and carefully calculates the voltage of the wave at any given point (depending on the sample rate) and converts that value into a binary string. Modern A/Ds are extremely accurate and reliable; this is why the music, film, and measurement industries are able to use them professionally... It is also why I can use my computer as a legitimate and accurate oscilloscope.
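A rough Python sketch of that idea, quantizing an analog voltage into ten levels (the 0 to 9 V scale and the helper name are assumptions made purely for illustration, not how any real ADC is specified):

```python
def quantize(voltage, levels=10, full_scale=9.0):
    """Map an analog voltage (0 to full_scale volts) onto one of `levels` discrete values."""
    step = full_scale / (levels - 1)
    return min(levels - 1, max(0, round(voltage / step)))

print(quantize(4.97), quantize(0.2), quantize(8.9))  # 5 0 9
```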
I do agree that a decimal processing computer would, in fact, use far more space and circuitry, but this is no longer a problem thanks to the miniaturization of electronics. When I got my first desktop computer, the case had very little free space inside of it. However, that has changed dramatically, and now my case is almost all free space (making me wonder why cases have not really changed size from the standard ATX form factor). The smartphone and tablet further this miniaturization. So, when base-10 computing was last given any real consideration by the industry, its size made it impossible, but now a base-10 computer would probably just fill my case the way a binary system used to.
Now, some are looking forward to quantum computing as the next leap, and I am on board and excited for this to happen, but it's not around the corner as many hope. The problem with this is that it may still take another decade or two to actually build the quantum base-10 computer circuits that make this possible, but it will also still take another decade or two for software and ancillary hardware developers to do their part once they actually have the hardware to develop with. Because this is such a monumental leap in the very foundations of how things will be processed, it will require re-writing everything... You won't be able to just install a binary version of Windows or Linux on it without an emulator, and even if you did use an emulator, you would lose all the benefit of operating in base-10... So, the only way to prepare for that is with a hybrid technology that gets developers developing in base-10 as soon as possible. If programmers and developers had already written and worked out the foundations for base-10 computing on a standard electrical processor by the time the quantum computer was released, it would only take a few years to adapt it for this as opposed to whole decades otherwise, meaning that we could have a quantum computer on our computer desk much sooner.
I have designed a number of crude concepts to make base-10 possible with standard electronics, including a design for a 10-state transistor (a decistor?) that would be at the core of switching for this computer, depending on the state of voltage given to it (two inputs, nine outputs [only nine because zero would need no output]). If this design works, it means that you could send one signal into it (from 0 VDC to 9 VDC, +/- 0.05 VDC) and work with each of the nine outputs discretely (if the signal is a 5, then activate the part of the circuit designed to react to 5). This means that each input pin of a base-10 processor (loaded with decistors and transistors) could take any input between 0 and 9, process it, and spit out a result on the output pins that is again 0 to 9. While there may be various degrees of logical binary operators between the decistors, it would still flip-flop a single cycle in base 10. The advantages would be almost immeasurable... I've got most of this base-10 computer sort of figured out, but I am still stuck on finding an efficient way to store the values in a single memory cell without having to use 9 capacitors...
Either way, interesting article and discussion worth having and seeking solutions to, in my opinion... Feel free to email me for further discussion.
In binary switches, you have ONE wire from ONE switch. With 8 switches you can count up to 255 (128+64+32+16+8+4+2+1).
With base-ten switches, you would maybe need 255 wires and switches, OR you could pulse the electricity so quickly or in some kind of pattern that you would only need ONE switch, OR you could control the STRENGTH of the current so that the computer knows precisely which number you mean.
MY LIGHT SWITCH AT HOME HAS AT LEAST 1 MILLION SETTINGS USING AN ANALOGUE DIMMER. IT IS NOT SIMPLY OFF AND ON.
Please Help me...
Binary originates from a time when classical science was applied to create a simple code for the on and off states of a current.
on = 1
off = 0
Our numerical system originates from a time when we started to count on our fingers, with 10 being the highest.
We have 10 fingers, and somewhere in our history we determined the iconology of zero to be 0.
Obviously this makes logical sense from a human perspective, yet I can also imagine it to be somewhat counterintuitive... but only from an iconology and symbology perspective.
With rapid developments in quantum coding, I am curious whether we would decide to keep binary as a standard for these tasks... or would we perhaps decide to create a more "universal" and easier approach?
For example:
on: -
off: |
both: /
I know that I would find it intuitive.
How would a high school student reply to the following question?
State three reasons why computers use the binary system to represent data (3 marks)
You can also group numbers as needed -- you get base 16 (0-F) by grouping 4 binary digits (0-1), and in fact this is used for representing numbers in many programming languages. It's much easier to write "19F" than "000110011111", although they represent the same quantity.
Binary is a fact of circuits -- you either have current flowing or you don't, so there are 2 distinct states. It's very simple to make something that uses binary. If you want a variable 0-4 (say, 0V, 5V, 10V, 15V) it will work but it will be much harder (electrically) to build and will be much more susceptible to noise. A weak 10V signal ("2") might appear as a strong 5V signal ("1") and scramble information easily, whereas in a TTL based (0-5V) binary system, a 2V signal is still seen as a "1".
Likewise you can use quantum states to build a quantum computer which is not binary-based. The key here is that quantum states are distinct -- there's no doubt about the position.
It is very possible to represent more than 2 states with totally normal technology. For starters, representing 3 states instead of 2 is a piece of cake -- +5V, 0V, -5V. Done.
Then, you don't need "Quantum Computers" to simply use an analog voltage (say, between 0 and 2.55V) to represent a byte, instead of using 8 bits. Such a computer may not always be totally exact depending on the temperature of the circuit etc., but neither is the brain, and that guy works quite well and certainly has much more power than binary computers!
Today’s fundamental electronic components can easily represent the presence of potential (i.e., positive voltage) and the absence of potential (i.e., no voltage present). Today the most efficient way to do this is via solid state “gates”, such as silicon diodes and transistors. In this technology, when potential is applied, a gate will “open”, allowing current to pass. In the absence of potential, no current passes.
In theory, one can create fundamental components that will pass different voltage amplitudes, where each amplitude value represents a different state. 0 continues to represent "no potential present" and 1 continues to represent "potential"; however, it now represents a potential of "1", i.e., a "1 volt" state. Extending this, "2" represents the "2 volt" state, "3" represents the "3 volt" state, and so on and so forth.
With current solid state physics and components, building an "n-ary" computer would be quite costly, with the estimated complexity being AT LEAST Order(n×n), or O(n^2), or "n squared", where "n" is the n-ary machine desired. In fact, depending on the design and application, the complexity could reach O(n^n), or n raised to the nth power, simply because every output of a previous stage can be input to one or more (up to n) of the next n stages.
One solution is to build a "virtual n-ary" machine that emulates n-ary arithmetic on binary hardware.
Would anyone be so kind as to help me with a simple explanation?
Thanks
1 - In electronics, it's easier to differentiate between two widely separated voltages than among many values, and so you get more reliability. Circuits use 0V and 5V, meaning 0 or 1, true or false, yes or no, etc.
2 - Electronic computers use Boolean algebra through logic circuits that admit only two states (https://en.wikipedia.org/wiki/Boolean).
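For point 2, a short Python sketch of the kind of two-state Boolean operations those logic circuits implement (just an illustration):

```python
def AND(a, b): return a & b   # 1 only when both inputs are 1
def OR(a, b):  return a | b   # 1 when at least one input is 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", AND(a, b), "OR:", OR(a, b))
```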