If binary code is something only computers can understand, why should you learn about it?
As developers, we have user-friendly programming languages for giving instructions to computers, but those instructions are still translated into binary code so that computers can interpret and run our programs.
Binary code is the most fundamental concept underlying programming and computer science. But how does binary work? How can a complex computer program consist of only 1's and 0's?
How does the binary number system work?
To understand the binary number system, we first need to take a closer look at the decimal number system, which we learn in elementary school and still use every day. Many numeral systems of ancient civilisations used ten and its powers to represent numbers, probably because people have ten fingers on two hands and started counting with them.1
In the decimal system, each digit of a number represents the 1's, the 10's, the 100's, and so on, starting from the right-hand side.
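As a quick illustration of this positional idea, here is a small Python sketch (not from the article; the function name is made up for this example) that breaks a decimal number into the value contributed by each digit:

```python
def decimal_place_values(number: int) -> list[int]:
    """Return the value each digit contributes, rightmost digit first."""
    values = []
    power = 0
    while number > 0:
        digit = number % 10            # peel off the rightmost digit
        values.append(digit * 10 ** power)
        number //= 10                  # drop that digit
        power += 1
    return values

# 243 is 3 ones + 4 tens + 2 hundreds:
print(decimal_place_values(243))  # → [3, 40, 200]
```

Summing the pieces gives the original number back, which is exactly what "each digit represents the 1's, the 10's, the 100's" means. The same logic with `2` in place of `10` yields the binary system.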