The following tutorial was first published on my website.

Copyright © Shawn South 2002. All rights reserved.

### Computers only understand numbers

The first thing to understand about computers is that they are nothing more than powerful, glorified calculators. The only thing they know – the only thing they understand – is numbers. You may see words on the screen when you’re chatting with your friend via Facebook, or breathtaking graphics while playing your favorite game, but all the computer sees are numbers. Millions and millions of numbers. That is the magic of computers – they can calculate numbers – *lots* of numbers. *Really* fast.

But why is this? Why do computers only understand numbers? To understand that we need to go deep into the heart of a computer; break it down to its most basic functionality. When you strip away all the layers of fancy software and hardware what you will find is nothing but a collection of switches. You know the kind, you have them all over your house – light switches. They only have two positions: *On* or *Off*. It’s the same for computers, only they have millions and millions of the little buggers. Everything a computer does comes down to keeping track of and flipping these millions of switches back and forth between *on* and *off*. Everything you type, download, save, listen to or read eventually gets converted to a series of switches in a particular on/off pattern that represents your data.
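As a quick illustration of those on/off patterns (a Python sketch of my own, not from the original tutorial), here is the switch pattern a typical computer stores for the single letter 'A':

```python
# The letter 'A', as the on/off pattern of eight switches.
# ord() gives the number a computer stores for a character;
# format(..., '08b') prints that number as eight on/off digits.
pattern = format(ord('A'), '08b')
print(pattern)   # 01000001 -- two switches on, six off
```

Run this for any character you like; every one of them is just a different arrangement of the same kind of switches.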

### What does this have to do with Binary and Hexadecimal numbers?

Let’s back up for a minute and look at how human beings deal with numbers first. Most people today use the Arabic numbering system – more commonly known as the *decimal*, or *Base-10*, numbering system (*dec* means ten). What this means is that we have ten digits in our numbering system:

0 1 2 3 4 5 6 7 8 9

We use these ten digits in various combinations to represent any number that we might need. How we combine these numbers follows a very specific set of rules. If you think back to grade school, you can probably remember learning about the *ones*, *tens*, *hundreds* and *thousands* places. The number 3,482, for example, means 3 thousands, 4 hundreds, 8 tens and 2 ones.

When counting, you increase the digit in the right-most place-column until you reach 9, then you return it to zero and increment the next column to the left.
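The place-column idea can be sketched in a few lines of Python (the example number is my own):

```python
# Break a decimal number into its place-value columns:
# thousands, hundreds, tens and ones.
number = 3482
digits = str(number)
for i, d in enumerate(digits):
    power = len(digits) - 1 - i          # right-most column is 10**0
    print(f"{d} x {10 ** power}")        # 3 x 1000, 4 x 100, 8 x 10, 2 x 1

# The columns always sum back to the original number.
total = sum(int(d) * 10 ** (len(digits) - 1 - i) for i, d in enumerate(digits))
print(total)   # 3482
```

Every positional numbering system in this article works exactly this way; only the 10 changes.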

I know this all probably seems very remedial and unimportant, but going back to these basic, simplistic rules is very important when learning to deal with other number formats. Would it surprise you to learn that there are other numbering systems that have a different base? Somebody, somewhere, a long time ago decided that having ten digits would work best for us. But there really is no reason why our numbering scheme couldn’t have had seven, or eight, or even twelve digits. The number of digits really makes no difference (except for our familiarity with them). The same basic rules apply.

As it turns out, computers have a numbering system with only *two* digits. Remember all those switches, each of which can only be on or off? Such an arrangement lends itself very nicely to a *Base-2* numbering system. Each switch can represent a place-column with two possible digits:

0 1

0 = off, 1 = on. We call such numbers *binary* numbers (*bi* means two), and they follow the same basic rules that decimal numbers do: Start with 0, increment to 1, then go back to 0 and increment the next column to the left:

| binary | decimal equivalent |
| ------ | ------------------ |
| 0      | 0                  |
| 1      | 1                  |
| 10     | 2                  |
| 11     | 3                  |
| 100    | 4                  |
| 101    | 5                  |
| 110    | 6                  |
| 111    | 7                  |
| 1000   | 8                  |
| 1001   | 9                  |
| …      | …                  |
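You can check this counting table with Python’s built-in base conversions (a sketch of my own, not part of the original tutorial):

```python
# Count from 0 to 9 in decimal and binary side by side.
for n in range(10):
    print(n, format(n, 'b'))   # e.g. 5 -> 101

# And convert back from binary to decimal.
print(int('101', 2))    # 5
print(int('1000', 2))   # 8
```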

### Hexadecimal

Binary numbers are well and good for computers but having only two digits to work with means that your place-columns get very large very fast. As it turns out, there is another numbering scheme that is very common when dealing with computers: Hexadecimal. *Hexa* means six and, recall, *dec* means ten (six plus ten is sixteen), so *hexadecimal* numbers are part of a *Base-16* numbering scheme.

Years ago, when computers were still a pretty new-fangled contraption, the people designing them realized that they needed to create a standard for storing information. Since computers can only think in binary numbers, letters, text and other symbols have to be stored as numbers. Not only that, but they had to make sure that the number that represented ‘A’ was the same number on every computer. To facilitate this the ASCII standard was born. The ASCII Chart listed 128 characters: letters (both upper- and lower-case), digits, punctuation and symbols that could be used and recognized by any computer that conformed to the ASCII standard. It also included non-printable values that aren’t displayed but perform some other function, such as a tab placeholder (09), an audible bell (07) or an end-of-line marker (13). The various combinations of just seven binary digits, or *bits*, could be used to represent any character on the ASCII Chart (2^{7} = 128). (There were also other competing standards at the time, some of which used a different number of bits and defined different charts, but in the end ASCII became the dominant standard.)^{1}
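Python exposes these very code numbers through `ord()` and `chr()` (a small sketch of my own to try for yourself):

```python
# Look up ASCII code numbers with ord(), and go back with chr().
print(ord('A'))     # 65 -- the same number on every ASCII machine
print(chr(65))      # A
print(ord('\t'))    # 9  -- the non-printable tab placeholder
print(ord('\a'))    # 7  -- the audible bell
```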

128 characters may have seemed like a lot but it didn’t take long to notice that the ASCII Chart lacked many of the accented letters and ligatures used by Latin-based languages other than English, such as ä, é, û and Æ. Also lacking were common mathematical symbols (e.g. ±, µ, °, ¼) and monetary symbols other than the dollar sign ($) for United States currency (e.g. £, ¥, ¢). To make up for this oversight these symbols and a series of simple graphical shapes, mostly for drawing borders, were assembled as an extension to the original ASCII Chart. These additional 128 characters brought the new total to 256 (2^{8}), with the pair of charts being referred to collectively as the *Extended ASCII Chart*.

Did you notice that the value 256 can also be represented as 16 (the base of the hexadecimal numbering system) to the 2^{nd} power? This brings us back to hexadecimal (Base-16) numbers. It turns out, through the magic of mathematical relationships, that every character on the Extended ASCII Chart can be represented by a two-digit hexadecimal number: 00 – FF (0 – 255 decimal).
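That relationship is easy to verify (a quick Python check of my own):

```python
# Two hex digits span exactly the 256 Extended ASCII values:
# sixteen choices for each digit, and 16 * 16 = 256 = 2**8.
print(16 ** 2 == 256 == 2 ** 8)   # True
print(int('FF', 16))              # 255 -- the largest two-digit hex number
```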

### Whoa! What’s up with this *FF* stuff?

Hexadecimal is a Base-16 numbering system, which means that each place-column cycles through sixteen individual digits. The decimal system that we humans are familiar with only has a total of ten unique digits, however, so we needed to come up with something to represent each of the remaining six digits. We do this by using the first six letters of the alphabet.^{2} This means the digits for the hexadecimal numbering system are:

0 1 2 3 4 5 6 7 8 9 A B C D E F

And, of course, hexadecimal numbers follow the same basic rules that decimal and binary numbers do. Count up to the last digit, then return to zero and increment the next column to the left:

| hexadecimal | decimal equivalent |
| ----------- | ------------------ |
| 0           | 0                  |
| 1           | 1                  |
| 2           | 2                  |
| …           | …                  |
| 9           | 9                  |
| A           | 10                 |
| B           | 11                 |
| …           | …                  |
| E           | 14                 |
| F           | 15                 |
| 10          | 16                 |
| 11          | 17                 |
| …           | …                  |
| 19          | 25                 |
| 1A          | 26                 |
| …           | …                  |
| 1F          | 31                 |
| 20          | 32                 |
| …           | …                  |
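Here too, Python’s built-ins will spot-check the table for you (my own sketch):

```python
# Hex to decimal and back, matching rows of the counting table.
print(int('A', 16))        # 10
print(int('1A', 16))       # 26
print(int('1F', 16))       # 31
print(format(32, 'X'))     # 20 -- 32 decimal is 20 hexadecimal
```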

As you can see, the hexadecimal numbering system doesn’t advance through the place-columns as quickly as decimal numbers do – and certainly not at the rate of growth experienced by binary numbers! This, coupled with its relationship to the Extended ASCII Chart and subsequent relationship to various other computer concepts, has made the hexadecimal numbering system, or *hex*, a standard for computer programmers and engineers the world over. It is common when viewing a raw data dump to use a *Hex Viewer* – software that displays the hex values of each character. This allows one to see every character in the Extended ASCII Chart, even the ones that are not normally printed or visible.

If you are a programmer, or aspiring to be one, it is also worth noting that the variable type *Byte* is, depending on the programming language, 8 bits in size. Since each hexadecimal digit covers exactly 4 bits (a *nibble*), a byte can be represented by a two-digit hexadecimal number (00-FF). If you are programming for the Windows platform in C or C++ you have probably noticed the commonly used variable type DWORD (*D*ouble-*WORD*). A WORD is 16 bits (0000-FFFF) in size, which makes a DWORD 32 bits (00000000-FFFFFFFF). If you are an HTML programmer you have probably seen color values that are composed of hex numbers. Colors are represented as a mixture of Red, Green and Blue values (RGB). Each of these three primary colors can have a value from 0-255 (decimal), which translates into three two-digit hexadecimal numbers, such as 00 1A FF.
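For instance, an HTML color value can be split back into its three byte values (the color 1A2BFF is my own example, and the code is a sketch rather than anything from the original article):

```python
# Split a web color into its red, green and blue bytes.
color = '1A2BFF'                  # hypothetical example color
r, g, b = (int(color[i:i + 2], 16) for i in range(0, 6, 2))
print(r, g, b)                    # 26 43 255

# Each hex digit is one nibble (4 bits), so an 8-bit byte
# is always exactly two hex digits: 00 through FF.
```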

This tutorial just touches on the basics of the hexadecimal and binary numbering systems and their importance when working with computers, but I hope that it has provided a good base of understanding from which to start. If you found this article useful please consider supporting my work and/or letting others know.

### Footnotes

1. While ASCII was the standard of its time, it doesn’t even come close to representing the international needs for sharing data. There are many competing standards today which provide support for the various letters and characters of other cultures and countries, but Unicode is by far the most common.
2. Whether the letters are upper- or lower-case makes no difference. It is common to see them represented either way.
