Some Internet users have asked me why their hard drive's storage capacity shows as "931GB" instead of the advertised 1TB. The confusion comes from the fact that a "terabyte" can be counted in two different ways: as 1,000GB or as 1,024GB, depending on who is doing the counting. My answer: nothing is missing; the same drive is simply being measured with two different yardsticks.

To understand how this happens, you need to know that most computers use the binary system to measure storage space. That means Windows arrives at an entirely different number from the one the manufacturer uses when stating the capacity of a hard drive.

To understand that, you first need to learn the difference between the binary and decimal number systems.



# Binary vs. Decimal Number Systems

The decimal system is the most widely used number system. It's the one you probably think of first whenever you think about numbers. That's because it's the one we use in everyday life, whether you realize it or not. The decimal system is so easy for us to grasp because we have ten fingers, and counting on them is how most of us learn it in the first place.

The binary number system, by contrast, is a lot less familiar to most people. It’s commonly encountered in computing and other technical fields where exact quantities must be processed and stored efficiently.

Even so, almost everyone has heard of, if not actually seen, the famous "bits" of computing, which are essentially the individual digits of binary values as opposed to decimal ones.

The binary system is based on just two distinct numerals, usually written as "0" and "1". These two numerals are the building blocks of every value in a binary system.

In the decimal system, each position to the left of the decimal point is worth ten times the value of the position to its immediate right. In the binary system, each position to the left of the binary point is worth two times the value of the position to its immediate right.
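To see those positional values in action, here is a minimal Python sketch (the variable names are my own, purely for illustration) that expands the binary string "1011" position by position into its decimal value:

```python
# Expand a binary string into its decimal value, one position at a time.
binary_string = "1011"

value = 0
for digit in binary_string:
    # Each new digit shifts the running total one position to the left,
    # which multiplies it by 2 (in decimal, the same step would multiply by 10).
    value = value * 2 + int(digit)

print(value)                    # 11
print(int(binary_string, 2))    # 11 -- Python's built-in conversion agrees
```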

There are a few key differences between the binary and decimal number systems that you should understand. The most important is that they are two distinct positional systems, each with its own base.

In other words, the same quantity is written with entirely different strings of digits in each system, because each system follows its own rules for what a digit position is worth. One obvious difference is that decimal uses the digits 0 through 9, whereas binary uses only 0 and 1. This means a single decimal digit carries more information than a single binary digit, so decimal numbers are shorter to write, but that alone does not make one system better than the other.

This is an important distinction to understand: the same value simply needs more digits in binary than it does in decimal. Another major difference is that computers store binary values in fixed-width groups of bits (short for "binary digits"), such as the 8-bit byte, so every stored value occupies a fixed number of binary digits.
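As a quick illustration of those fixed-width groups, the sketch below (plain Python, not tied to Windows or any particular drive) shows how 8 bits make up one byte and what range of values such a byte can hold:

```python
# One byte is a fixed-width group of 8 bits.
bits_per_byte = 8

# With 8 binary digits you can form 2**8 distinct patterns...
distinct_values = 2 ** bits_per_byte
print(distinct_values)        # 256

# ...which are conventionally read as the whole numbers 0 through 255.
print(format(0, "08b"))       # 00000000
print(format(255, "08b"))     # 11111111
```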

You might wonder why it is worth your time to learn about binary and decimal number systems. After all, who really uses binary numbers in daily life? For the most part, decimal numbers are the only ones we really use. Even so, binary numbers are worth learning: they are a good introduction to how computers handle numbers, and operating systems like Windows still use them to measure storage capacity.

As you learn about binary and decimal numbers, you might wonder why these systems carry those names. The answer lies in their bases: "binary" comes from the Latin root for two, reflecting its two digits, while "decimal" comes from the Latin word for ten, reflecting its ten digits.

Decimal is important because it gives you a very good understanding of what numbers are and how they work. Binary applies the same positional idea with a different base and gives you an understanding of how computers work at a very fundamental level. In fact, you can think of computers as giant calculators.

Some people might be tempted to say that binary is better than decimal because it is so simple: every digit has only two possible states, which is exactly what computer hardware can represent reliably. However, decimal is more efficient at some things than binary is.

The main advantage of decimal over binary is that it is more compact and expressive for humans: the same number needs far fewer decimal digits than binary digits, and it is easier to read and write. This is not to say that binary numbers are bad or that computers should switch to decimal. Rather, it just shows that binary and decimal are different systems with their own advantages and disadvantages.

# So, Why Binary & Decimal Matter

In a nutshell, the Windows operating system uses the binary number system to measure hard drive capacity, while hard drive manufacturers use the decimal number system to state it. This means that, measured in binary, one gigabyte (GB) is equal to 1,024 MB, but it is 1,000 MB when measured in decimal.

In the decimal system, one kilobyte is equal to 1,000 bytes. In the binary system—used by computers for processing information—1K equals 1024 bytes. The contrast between the two is even more extreme when you look at gigabytes: 1,073,741,824 bytes in binary compared to only 1,000,000,000 bytes (1 billion) for decimal.

So, it means that your 1TB HDD, which comes with 1,000,000,000,000 bytes of storage, will only show about 931GB when you calculate it according to binary: 1,000,000,000,000 / (1,024 × 1,024 × 1,024) ≈ 931.32, or roughly 931.3GB.
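If you want to verify the arithmetic yourself, here is a small Python sketch that converts the advertised decimal capacity into the binary units Windows reports (the variable names are my own, used only for illustration):

```python
# Capacity as advertised by the manufacturer: 1 TB counted in decimal units.
advertised_bytes = 1_000_000_000_000

# One gigabyte as Windows counts it versus how the manufacturer counts it.
binary_gigabyte = 1024 ** 3      # 1,073,741,824 bytes
decimal_gigabyte = 1000 ** 3     # 1,000,000,000 bytes

print(advertised_bytes / decimal_gigabyte)   # 1000.0 -> the "1 TB" on the box
print(advertised_bytes / binary_gigabyte)    # 931.3225746154785 -> what Windows shows
```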

So, what does this mean for you? If you're buying a new computer, it's important to know that manufacturers use a different system to measure hard drive capacity than your operating system does. As a general rule of thumb, Windows reports capacity in binary units while hardware manufacturers advertise it in decimal units. In addition, be aware that when a company advertises its product's storage capacity, it often rounds the figure down rather than up.


