2.3 Representing numbers: fractions
In the denary system, a decimal point can be used to represent fractions, as in 6.5 or 24.29. One way of encoding fractions uses an exactly analogous method in binary numbers: a ‘binary point’ is inserted.
Some examples of 8-bit binary fractions are 0.100 0000 (denary 0.5), 0.010 0000 (denary 0.25) and 0.110 1000 (denary 0.8125).
The weightings that are applied to the bits after the binary point are, reading from left to right, 1/2, 1/4, 1/8, etc. (in just the same way as in denary fractions they are 1/10, 1/100, etc.).
Now, 1/4 is the same as 1/2², and 1/8 is the same as 1/2³, and there is a convenient notation for fractions like this, which is to write 1/2² as 2⁻², 1/2³ as 2⁻³, and so on. In other words, a negative sign in the exponent indicates a fraction.
Using this notation, the bits after the binary point in a binary fraction are weighted as follows:
| 2⁻¹ (1/2 = 0.5) | 2⁻² (1/4 = 0.25) | 2⁻³ (1/8 = 0.125) | 2⁻⁴ (1/16 = 0.0625) | … |
So, for instance, 0.010 1000 in binary is equal to 1/4 + 1/16 = 5/16 (or 0.25 + 0.0625 = 0.3125) in denary.
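This arithmetic can be checked with a short Python sketch (illustrative only; the function name is mine, not the text's). It sums the weights 2⁻¹, 2⁻², … for each 1 bit after the binary point:

```python
from fractions import Fraction

def binary_fraction_to_denary(bits):
    """Evaluate the bits after the binary point: the bit at index i
    (counting from 0) carries the weight 2**-(i + 1)."""
    total = Fraction(0)
    for i, bit in enumerate(bits.replace(" ", "")):
        if bit == "1":
            total += Fraction(1, 2 ** (i + 1))
    return total

# The worked example from the text: binary 0.010 1000
value = binary_fraction_to_denary("010 1000")
print(value, float(value))  # 5/16 0.3125
```

Using exact `Fraction` arithmetic keeps the result as 5/16 rather than an approximate float, matching the way the text mixes vulgar and decimal fractions.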
One problem with encoding fractions like this is that there is no obvious way of representing the binary point itself within a computer word. Given the 8-bit number 0101 1001, where should the point lie? The way of solving this problem is to adopt a convention that, throughout a particular application, the binary point will be taken to lie between two specified bits. This is called the fixed-point representation. Once a convention has been adopted – for example, that the binary point lies between bits 7 and 6 – it should be adhered to throughout the application.
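As a sketch of this convention (the decoding function and the choice of Python are mine, not the text's), take the binary point to lie between bits 7 and 6 of an 8-bit word, so that bit 7 carries weight 2⁰ and bits 6 down to 0 carry weights 2⁻¹ down to 2⁻⁷:

```python
def fixed_point_value(word):
    """Decode an 8-character string of 0s and 1s, e.g. '01011001',
    under the convention that the binary point lies between
    bits 7 and 6 (so the leftmost bit has weight 1)."""
    value = 0.0
    for position, bit in enumerate(word):  # position 0 is bit 7
        if bit == "1":
            value += 2.0 ** -position      # weights 1, 1/2, 1/4, ...
    return value

print(fixed_point_value("01011001"))  # 0.6953125
```

With a different convention, say the point between bits 3 and 2, the same bit pattern would decode to a different value, which is why the convention must be fixed throughout an application.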
Another problem is that it may not be possible to represent a denary fraction exactly with a word of given length. For example, it is not possible to represent denary 0.1 exactly with an 8-bit word. The nearest to it is binary 0.000 1101, with a denary difference of 0.0015625. (Try it for yourself!) So it is simply not possible to represent all fractions exactly with a binary word of fixed length. This second problem can be reduced, but not eliminated, if a multiple-length representation is used. For example, with 12 bits denary 0.1 still cannot be represented exactly, but the nearest to it is now binary 0.000 1100 1101, with a much smaller denary difference of 0.00009765625.
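A sketch of how those nearest approximations can be found (assuming, as the text's examples suggest, 7 and 11 bits after the binary point; the helper function is mine):

```python
from fractions import Fraction

def nearest_binary_fraction(x, n_bits):
    """Round x to the nearest fraction expressible with
    n_bits bits after the binary point."""
    scaled = round(x * 2 ** n_bits)          # nearest multiple of 2**-n_bits
    bits = format(scaled, "0{}b".format(n_bits))
    return bits, Fraction(scaled, 2 ** n_bits)

target = Fraction(1, 10)                     # denary 0.1, held exactly
for n in (7, 11):
    bits, approx = nearest_binary_fraction(target, n)
    print(n, bits, float(approx), float(abs(target - approx)))
```

Running this reproduces the bit patterns 0.000 1101 and 0.000 1100 1101 from the text, with the differences 0.0015625 and 0.00009765625.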
Fortunately this problem of exact representation does not occur in the kitchen-scales example. In recipes using imperial weights it is traditional to use 1/2 oz, 1/4 oz, etc., and these are fractions which can be exactly represented by the fixed-point representation just described.
Box 3: Floating-point representation
Another way of representing binary fractions is the floating-point representation. This is the preferred method in many applications because it is very flexible, though rather more complex. It uses the same basic idea as a fixed-point fraction, but with a variable scale factor. Two groups of bits are used to represent a single quantity. The first group, called the mantissa, contains the value of a fixed-point binary fraction with the binary point in some predefined position – say, after the most-significant bit. The second group, called the exponent, contains an integer that is the scale factor.
An example is a mantissa of 0.111 1000, which evaluates to the fixed-point fraction 1/2 + 1/4 + 1/8 + 1/16 = 15/16, together with an exponent of 0000 0011, which evaluates to the denary integer 3. But what fraction does this mantissa-exponent pair represent?
The answer is that it represents (mantissa × 2^exponent), which is 15/16 × 2³. This works out to 15/16 × 8, which is 7.5.
Another example is a mantissa of 0.011 0000 and an exponent of 0000 0001. Here the mantissa evaluates to 1/4 + 1/8, which is 3/8. The exponent evaluates to 1. So the fraction represented here is 3/8 × 2¹. This works out to 3/8 × 2, which is 3/4.
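Both worked examples can be reproduced with a short Python sketch (the function name `evaluate` is my own; the mantissa is given as its bits after the binary point and the exponent as a plain integer):

```python
from fractions import Fraction

def evaluate(mantissa_bits, exponent):
    """Return the value (mantissa * 2**exponent), where mantissa_bits
    are the bits after the binary point of the mantissa."""
    mantissa = Fraction(0)
    for i, bit in enumerate(mantissa_bits.replace(" ", "")):
        if bit == "1":
            mantissa += Fraction(1, 2 ** (i + 1))
    return mantissa * Fraction(2) ** exponent

print(evaluate("111 1000", 3))  # 15/2, i.e. 7.5 (the first example)
print(evaluate("011 0000", 1))  # 3/4 (the second example)
```

A negative exponent scales the mantissa down instead of up, which is how the same scheme also represents very small fractions.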
Notice that the fraction being represented in the first example is a ‘mixed fraction’ – that is, a mixture of an integer and a fraction which is less than 1 – while in the second example the fraction being represented is simply less than 1. This ability to represent both types of fractions with identical encoding methods makes the floating-point representation extremely versatile.