Casting Bit Magic

Pt. 1 Binary Representation and Endianness

Introduction

This article began as an explanation of the “fast inverse square root” algorithm found in the source code of Quake 3 by id Software. While working on the article and receiving feedback, I realized it would be better to split it into a small series about binary representation and the floating point type. We will begin with the basics of bit representation and end the series with some nice bit trickery with the floating point format. The goal of the series is that beginner-level programmers can understand the lower-level workings of their machine, but I sincerely hope I have included enough material that the more experienced programmer will enjoy the content too. So here is the first entry!

Binary Notation

First we need to understand what numbers look like in memory. In these examples I will consider 32-bit variables only; conversion to 64 bits is left to the reader as an exercise. Let’s start with an example. Say we have the integer 42, which would be declared in C++ code as:

int32_t answer = 42; 	// To Life, the Universe and Everything…? :) 

As you probably know, computers work with a binary base system; a single bit is either 1 or 0. A group of 8 bits is called a byte. The int32_t type in C++ defines an integer of exactly 32 bits, or 4 bytes. The variable “answer”, which contains the value 42, is encoded in binary as follows:

00000000 00000000 00000000 00101010 =
2^1 + 2^3 + 2^5 = 2 + 8 + 32 = 42 

The encoding is positional: the base 2 is raised to the power of each bit’s index in the bit string. The index is found by counting from right to left, starting at zero. In the example above this means:

bit index 0 = 0 (we do nothing)
bit index 1 = 1 → we calculate 2 ^ 1 = 2
bit index 2 = 0 (we do nothing)
bit index 3 = 1 → we calculate 2 ^ 3 = 8
bit index 4 = 0 (we do nothing)
bit index 5 = 1 → we calculate 2 ^ 5 = 32
no other bits are set
Adding up the intermediate results gives 2^1 + 2^3 + 2^5 = 42.

What about the sign? We could declare negative integers, like answer = -42. The simplest solution is to use a sign bit, meaning that the leftmost bit defines the sign of the integer:

00000000 00000000 00000000 00101010 = 42
10000000 00000000 00000000 00101010 = -42

(Please note that processors actually use a smarter way to encode negative integers, called two’s complement – this might be an interesting topic for another article.)

Binary visualization

Before we continue, you might want to experiment with binary visualization of variables yourself. Feel free to use the C++ template below. The code is based on a snippet I found on StackOverflow quite some years ago; I made some minor stylistic changes to it.

//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// Print binary value representation
// Function splits the binary number with a space at each CHAR_BIT interval
// Idea based on a StackOverflow post:
// https://stackoverflow.com/questions/7349689/how-to-print-using-cout-a-number-in-binary-form
//
// Usage:
// float pi = 3.14f;
// PrintBinaryRepr( pi );
//
//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
#include <iostream>
#include <cstdint>
#include <bitset>
#include <climits>

template<typename T>
void PrintBinaryRepr( const T& value ) {
    // View the object as raw bytes; unsigned char avoids any
    // surprises with signed byte values.
    const unsigned char* begin = reinterpret_cast<const unsigned char*>( &value );
    const unsigned char* end = begin + sizeof( value );

    while( begin != end ) {
        std::cout << std::bitset<CHAR_BIT>( *begin++ ) << ' ';
    }
    std::cout << std::endl;
}

Endianness

When you start experimenting with the template you might notice something. When I run the following code snippet on my machine:

int main( int argc, char* argv[] ) {
	uint32_t answer = 42;
	PrintBinaryRepr( answer );
	return 0;
}

I get the following result:

00101010 00000000 00000000 00000000

The byte order seems to be reversed compared to our manual encoding. This reversal is caused by the processor hardware of the machine on which you run the code. The byte order is called endianness; your machine is either little endian or big endian. For historic reasons both types of endianness still exist today.

The difference between them is best explained in terms of the Least and Most Significant Byte, LSB and MSB. The concept is easiest to see in an ordinary decimal number: take 1447. Here the digit 7 is the least significant and the 1 (thousand) is the most significant. In our manual example the rightmost byte is the LSB and the leftmost byte is the MSB.

With this definition in mind we can state the difference between little and big endian. In the memory layout of a little endian processor the least significant byte (LSB) comes first, followed by the bytes up to the most significant byte (MSB). A big endian processor’s memory layout is structured the opposite way, from MSB to LSB. Looking again at our manual example, you can see that the notation we used was actually big endian. Note that the endianness of a machine only says something about the memory layout, not about the actual value: regardless of the architecture, the number represented in memory remains 42.

B1 (MSB) B2       B3       B4 (LSB)     
00000000 00000000 00000000 00101010 = 42 (big endian)

In little endian layout:

B4 (LSB) B3       B2       B1 (MSB)     
00101010 00000000 00000000 00000000 = 42 (little endian)

Which result did you obtain when running the above sample? Most likely the same result as me, because Intel (x86) processors are little endian, and ARM processors, although configurable, practically always run in little endian mode as well. These processors are used in most common laptops and desktops. Network protocols, however, and the older PowerPC-based game consoles (such as the Xbox 360 and PlayStation 3) are big endian. To conclude: endianness indicates the ordering of bytes in a multi-byte number.

Is this really important to know? Apparently the compiler and target processor take care of the endianness, so why pay attention to it? Imagine the following scenario: you have written a beautiful piece of software on your laptop (little endian processor) which calculates a number and writes it to a file that is later read on a game console (big endian processor). Your program runs its highly efficient code, comes up with the result 42 in a 32-bit integer, and stores it on the disk of the game console. Since your machine is little endian it will write the following 4 bytes:

00101010 00000000 00000000 00000000 = 42 (little endian)

How will the big endian game console decode these bytes found on its SSD? Well, let’s do the math:

00101010 00000000 00000000 00000000 = 
2 ^ 25 + 2 ^ 27 + 2 ^ 29 = 704643072

This could possibly end up in some hard to find bugs and long debugging sessions…

The solution to prevent this is straightforward: when you know you are interfacing with a machine of a different endianness, you need to “endian swap” multi-byte values. Endian swap routines are easy to program. You only need to know two bit operators: shift left “<<” and shift right “>>”. For those who are not familiar with binary bit operators, here is a quick summary of what these bit-wise operators do. Let’s take a single byte for the sake of simplicity:

00101010 = 42 
The operator “<< 1” shifts all bits left one position:
00101010 << 1 = 01010100 = 84 
The operator “>> 1” shifts all bits right one position:
00101010 >> 1 = 00010101 = 21

You might notice that a single bit shift to the left is equal to multiplying the value by 2, and a single bit shift to the right is equal to an integer division by 2. When I started to program this was an optimization trick to avoid “expensive” divisions and multiplications. This is also one of the reasons why tile sizes in old games tend to be powers of two (e.g. 16×16 sprites); all kinds of calculation optimizations were possible when bit shifts and byte alignment were exploited.

Using these two operators it is quite easy to design an endian swap routine:

B1 (MSB) B2       B3       B4 (LSB)     
00000000 00000000 00000000 00101010 = 42 (big-endian)

B4 (LSB) B3       B2       B1 (MSB)     
00101010 00000000 00000000 00000000 = 42 (little-endian)

We want to move the first byte to the last position and the last byte to the first position (interchange B1 and B4), and likewise swap the middle bytes (interchange B2 and B3). To move the first byte B1 to the last position we shift the value 24 bits to the right; correspondingly, the last byte B4 must be shifted 24 bits to the left. B2 and B3 are interchanged by shifting them 8 bits right and left respectively.

The challenge is how to “select” only one particular byte in a 4 byte number. If you have the following binary value:

B1       B2       B3       B4
01010101 00000000 10101010 00000000 

Applying the algorithm above, moving B3 one byte to the left with “<< 8” would also move B2 into B1’s place and B4 into B3’s place, corrupting the result once the shifted values are combined. Therefore we “mask” the byte we want to move before we do the actual bit shift. (For the 24-bit shifts a mask is not strictly required, since the unwanted bytes simply fall off the end, but masking every byte keeps the code uniform.) Masking is a well-known technique and is based on two other binary operators, & (AND) and | (OR). A quick recap of both operators:

& “AND” sets a bit if both corresponding bits of the expression are 1:
0110 1010 & 1001 1111 = 0000 1010 
| “OR” sets a bit if one or both corresponding bits of the expression are 1:
0110 1010 | 1001 1111 = 1111 1111

If we want to move only the rightmost byte (B4) in our example, we need a mask that “selects” that byte. If we “&” the value with the mask and shift the result 24 bits to the left, we get the desired result. Since typing out multi-byte binary values is quite tedious, hexadecimal notation is used instead; I assume the reader is familiar with hexadecimal numbers. The masking looks like this:

value & 00000000 00000000 00000000 11111111 =
value & 0x000000FF

We repeat this method for each byte and combine the result with an “OR”.

Finally the code:

uint32_t EndianSwap( uint32_t value ) {
    return (( value & 0x000000FF ) << 24 )
         | (( value & 0x0000FF00 ) << 8 )
         | (( value & 0x00FF0000 ) >> 8 )
         | (( value & 0xFF000000 ) >> 24 );
}

Please note that the above endian swap method works for simple integers, and a similar method can be written for floats. In the case of more complex data structures, however, careful consideration is required of which values need to be endian swapped.

What is next?

This concludes the first entry, in which binary encoding and endianness were presented. I would like to thank Slavka for his great feedback on this first entry. In the next article the floating point format encoding is discussed, which brings us another step closer to understanding the “Fast Inverse Square Root” algorithm in the Quake 3 source code.
