Bits, Bytes, and What “Bits Per Pixel” Actually Means
by Jonathan Grossman; Updated October 17, 2017
In a society where bigger is better, more is better and better is, well, better, there is no place where this holds truer than in the world of computers. Most of us may not know what terms like bits and bytes mean, exactly, but everyone kind of knows that more of them make their computer better.
But, really, what does it all mean? Is a bit more than a byte? Is there a buffet that offers more than just bits and bytes? What if we’re really hungry for more? I mean like MEGA hungry. Do they offer… megabytes? In fact, they do. Let’s explore.
When we talk about bits and bytes, we are referring to units of memory that hold computer information. Think of a bit as a tiny box of information that lives on your computer’s hard drive or RAM.
Eight of these little boxes, or bits, make up one byte. Approximately 1,000 bytes make up one kilobyte, and about 1,000 kilobytes make up one megabyte. (I say approximately and about because technically, there’s more than one way to count these units. Thanks to binary math, a “kilo” in computing can also mean 1,024, and a “mega” 1,024 × 1,024.)
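If you like seeing the arithmetic spelled out, here’s a quick Python sketch of those two counting conventions (the function names are my own, just for illustration):

```python
BITS_PER_BYTE = 8

def bytes_in_megabyte_decimal():
    """One megabyte using the decimal convention: powers of 1,000."""
    return 1000 * 1000

def bytes_in_megabyte_binary():
    """One megabyte using the binary convention: powers of 1,024."""
    return 1024 * 1024

print(bytes_in_megabyte_decimal())  # 1000000
print(bytes_in_megabyte_binary())   # 1048576
```

Same word, two slightly different numbers, which is why hard drive boxes and operating systems sometimes disagree about how big your drive is.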
It’s important to remember that eight bits make a byte, because groups of eight bits, or multiples of eight like 16, 24 and 32, are the building blocks that make up most computer languages. Another important idea is that the more bits you have, the more information you can store.
Okay, so we know bits are building blocks for computer language. We know that eight bits make up one byte (and this group of eight is important, so remember!). But why does this matter?
A bit is a box of information on, let’s say, our hard drive–but what’s in that box? If you read my article, How Zeros and Ones Create Music, Photos and Everything Digital, it will be no surprise to hear that the box contains either a zero or a one. Sorry, not quite as exciting as a birthday present. There are zillions of different computer languages, but they all use zeros and ones.
So how can bits make all this stuff like colors and letters? Well the computer language and the arrangement of the bits determines what it ultimately turns into.
Let’s compare this to English. Our “bits” are the letters of the alphabet. Our language is English.
The letters H, T, R, E, A can be assembled into at least three different English words: heart, earth or hater. Same letters, but different order, different words and different meanings.
This same idea applies to computers and bits. Depending on the grouping of the bits and language used, they can become colors on a TV, audio from a Compact Disc or letters, like the ones you’re reading now. Yep, each one of the letters you’re reading can be represented by a group of eight bits.
You can try it yourself: plenty of websites let you type in some text and see it converted to zeros and ones.
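You can even do the conversion yourself in a few lines of Python. This little sketch turns each character into its 8-bit ASCII code:

```python
def text_to_bits(text):
    """Convert each character to its 8-bit ASCII code, as a string of 0s and 1s."""
    return " ".join(format(ord(ch), "08b") for ch in text)

print(text_to_bits("Hi"))  # 01001000 01101001
```

That’s the letter H (code 72) followed by the letter i (code 105), each written as a group of eight bits.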
In the English language, our 26 letters create over one million words. What about computer language? Well, remember when I said that the grouping of eight bits is important? Many computer languages are based on groups of eight, using strings of eight bits, 16 bits, 24 bits and so on.
In an 8-bit language, eight zeros and ones lined up in a row form a string of numbers. Let’s call this string an 8-bit code. There are a maximum of 256 unique 8-bit codes. ASCII, for example, assigns each letter of the alphabet its own 8-bit code.
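Why 256? Each of the eight positions holds either a zero or a one, so the number of distinct codes is 2 multiplied by itself eight times. A quick check in Python:

```python
# Each of the 8 positions has 2 choices, so the count of unique codes is 2**8.
print(2 ** 8)  # 256

# A few 8-bit ASCII codes for letters:
for letter in "ABC":
    print(letter, format(ord(letter), "08b"))
```

Notice how A, B and C get consecutive codes: 01000001, 01000010, 01000011.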
Now let’s apply this to the language of color.
Let’s say you have a digital image of Darth Vader on your computer. The image is created using 8-bit color. Since 8-bit language has 256 different combinations and each combination makes a unique color, we have 256 colors to use. Sounds pretty good, right? Remember that huge box of Crayola crayons with the sharpener in the back? It had a whopping 64 different colored crayons. You could make some amazingly colorful drawings. Your 8-bit image blows it away with 256 colors available to create the picture on the screen!
But what if your friend starts telling you about his 16-bit color image of Darth Vader? With 16-bit color you now have strings of 16 zeros and ones in a row to create 16-bit codes that each represent a unique color. Without boring you with math, I’ll tell you we now have 65,536 unique color variations. This box of crayons is way bigger! And yes, bigger is better.
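Since I promised not to bore you with math, here’s Python doing it instead. The number of colors is just 2 raised to the number of bits:

```python
# Number of unique colors at each color depth: 2**bits.
for bits in (8, 16, 24):
    print(f"{bits}-bit color: {2 ** bits:,} colors")
```

That prints 256 for 8-bit, 65,536 for 16-bit and 16,777,216 for 24-bit, which is why 24-bit color is often marketed as “millions of colors.”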
You can guess that as we go up to 24-bit, 32-bit and so on, we have more colors which create an image with more detail and clarity.
This idea of “bits per something” applies to everything digital. The more “bits per,” the more detail available. In the case of your computer, it makes perfect sense. If your hard drive has 500 gigabytes of space that means you have… wait for it… hang on, getting out the calculator… and… you have 4,000,000,000,000 bits. That’s a lot of zeros and ones.
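You can skip the calculator and let Python do that multiplication (using the decimal convention, where one gigabyte is a billion bytes):

```python
# 500 gigabytes expressed in bits: gigabytes * bytes-per-gigabyte * bits-per-byte.
gigabytes = 500
bits = gigabytes * 10 ** 9 * 8
print(f"{bits:,}")  # 4,000,000,000,000
```

Four trillion little boxes, each holding a zero or a one.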
As you can see, the digital buffet has a lot to offer. Even with just zeros and ones, we have bits, bytes, kilobytes, megabytes, gigabytes, terabytes and so on. I don’t mean to be childish, but I just got a new computer with 64-bit color and it’s possible that my Darth Vader is not only more detailed and colorful, but it’s probably bigger than yours as well.
Image credits: Jonathan Grossman