# Why ADCs Use Integer Math

In everyday life, we encounter apparently continuous variables such as voltage, current, charge, light intensity, flow rate, speed, and so on. All of these quantities are usually expressed as real numbers with units: 2.17 volts, 3.15 milliamperes, 0.05 coulombs, 1.34 nanowatts cm^{-2}, milliliters s^{-1}, 20.2 meters s^{-1}, etc. So why wouldn't one want to digitize measurements using numbers with decimal points instead of plain integers?

### The Problem With Fractions, Decimals, and Hexadecimals

Just as numbers have place value to the left of the decimal point, so they have place value to the right. 3.14159 has 1×10^{-1} in the first post-decimal place, 4×10^{-2} in the second, and so on. In binary, one can work with "binary decimals" (to mix number systems), where the place values after the "decimal point" are 2^{-1}, 2^{-2}, and so on. Interestingly, some finite decimals in base 10 are repeating "decimals" in base 2, though not the reverse: because 2 is a factor of 10, any exact binary fraction can be expressed as an exact base 10 decimal. For example, 1.2 (base 10) = 1 + 0/2 + 0/4 + 1/8 + 1/16 + 0/32 + 0/64 + 1/128 + ..., with a remainder of 0.0046875 (base 10) after those terms, giving a binary representation that starts 1.0011001... and repeats forever. This means that any attempt to represent a decimal quantity in binary risks round-off error. Hex has place values of 1/16, 1/256, etc., but the essential issue doesn't change: 1.2 base 10 = 1.333... in hex. Exact binary representation of arbitrary decimal fractions is a chimera.
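The expansion above can be checked with a short Python sketch (not part of the original text) that uses exact rational arithmetic to generate the binary digits of 1.2 one at a time:

```python
from fractions import Fraction

# Expand the fractional part of 1.2 (base 10) one binary digit at a time.
x = Fraction(12, 10) - 1          # exactly 1/5, no floating-point rounding
bits = []
for _ in range(8):
    x *= 2
    bit = int(x >= 1)             # did the doubled value reach 1?
    bits.append(bit)
    if bit:
        x -= 1

print("1." + "".join(map(str, bits)))   # -> 1.00110011 (the 0011 pattern repeats)

# Remainder after truncating at the 1/128 place, as in the text:
remainder = Fraction(12, 10) - Fraction(0b10011001, 128)
print(float(remainder))                  # -> 0.0046875
```

Because the loop state returns to 1/5 every four digits, the pattern 0011 repeats forever, so no finite number of bits represents 1.2 exactly.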

### Every Measurement Has Finite Precision

Anyone who makes measurements recognizes that all measurements are contaminated with noise, so all measurements have finite precision. Suppose one is measuring electrical potential. The potential may be 1.073 V, or (with sufficient care in the measurement) 1.073182 V. But the measurement is not 1.073182131415161788 V, because humanity hasn't figured out how to measure potential to 19 significant figures, nor could measurement to that precision be significant, given that the noise in many systems is approximately the uncertainty in the thermal energy of an electron. At room temperature, the thermal energy kT is 4.1×10^{-21} Joules, or 25.7 mV per electron charge. Averaging over the ~6×10^{18} electrons in 1 Coulomb decreases the uncertainty by a factor of (6×10^{18})^{1/2} ≈ 2.5×10^{9}, to about 1×10^{-11} V, ~10 pV. Measuring smaller amounts of charge (or small currents over short periods of time) increases the noise. If one measures 1 microampere for 1 millisecond, that is 10^{-9} Coulomb, so the noise grows by at least a factor of (10^{9})^{1/2} ≈ 3×10^{4}, and the smallest useful voltage increment rises to about 10^{-11} V × 3×10^{4} ≈ 0.3 µV. It makes no sense to digitize data with resolution significantly smaller than the noise amplitude -- the least significant bits will exhibit only noise, not useful information.

It's just like a bathroom scale. For a 150 lb (70 kilogram) person, resolution of 0.25 lb (100 g) may be useful. But would 1 mg resolution make sense? We gain and lose about 0.5 g (1/1000 lb) each time we breathe. Such fine resolution obscures useful information.
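These order-of-magnitude estimates can be reproduced with a few lines of Python (a sketch; the constants are standard physical values, and the variable names are mine):

```python
import math

# Order-of-magnitude check of the thermal-noise estimates above.
K_B = 1.381e-23   # Boltzmann constant, J/K
Q_E = 1.602e-19   # elementary charge, C
T = 298.0         # room temperature, K

kT_volts = K_B * T / Q_E                       # thermal energy per electron charge
noise_1C = kT_volts / math.sqrt(1.0 / Q_E)     # averaged over 1 C of electrons
noise_1nC = kT_volts / math.sqrt(1e-9 / Q_E)   # 1 uA for 1 ms = 1e-9 C

print(f"kT/e  = {kT_volts * 1e3:.1f} mV")   # -> kT/e  = 25.7 mV
print(f"1 C   : {noise_1C * 1e12:.0f} pV")  # -> 1 C   : 10 pV
print(f"1 nC  : {noise_1nC * 1e9:.0f} nV")  # -> 1 nC  : 325 nV
```

The averaged noise scales as the inverse square root of the total charge measured, which is why a billion-fold smaller charge raises the noise floor by a factor of about 3×10^4.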

### Scaling Can Convert Resolution Elements into Counts

Once we agree that only finite resolution is required for a measurement, we can scale the measurement arbitrarily so that integers represent the measured quantity. Suppose, for example, that 20 nV is the smallest voltage increment we care about. We can then use "appropriate electronics" (to be described later) to scale the raw measurement so that 1 bit = the measurement quantum, in this case 20 nV. Then 2 (base 10) is 40 nV, 10 (base 10) is 200 nV, and so on. On this scale, 1 V = 5×10^{7} and 10 V = 5×10^{8}. If negative quantities are useful (temperature on the Celsius scale, electrical potential, and speed, for example, can all be negative), then we can use negative integers as well as positive ones to express the measured quantity. It may take 30 bits to express the roughly 1 billion codes needed for ±5×10^{8} values, but that's still less than the 32 bits common in modern desktop computers (and a lot less than the upcoming 64 bit standard). As we'll see, most measurements use considerably fewer codes and fewer bits. We can thus go back and forth between analog (continuous) and digital (integer, perhaps signed) quantities using:

Digital = Round(Scale factor * analog quantity)

Analog = Digital/(Scale factor)

In the example above, the scale factor is 1 count/20 nV.
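The two conversion formulas can be sketched in Python for the 20 nV example (the function names and the `COUNTS_PER_VOLT` constant are illustrative, not from the original):

```python
# Scale factor from the text's example: 1 count = 20 nV, so 5e7 counts per volt.
COUNTS_PER_VOLT = 5e7

def to_counts(volts: float) -> int:
    """Digitize: round the scaled analog value to the nearest integer count."""
    return round(volts * COUNTS_PER_VOLT)

def to_volts(counts: int) -> float:
    """Reconstruct the analog value from an integer count."""
    return counts / COUNTS_PER_VOLT

print(to_counts(1.0))     # -> 50000000 (1 V = 5 x 10^7 counts)
print(to_counts(40e-9))   # -> 2        (40 nV = 2 counts)
print(to_volts(10))       # -> 2e-07    (10 counts = 200 nV)
```

Note that the round trip analog → digital → analog loses at most half a quantum (here 10 nV), which by construction is below the noise floor we chose to care about.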

Example: A digitization system is set up so that 1 bit = 1 mV. If a signal is reported as hexadecimal AB, what voltage is being observed?

First, convert AB to base 10: 10*16 + 11 = 171. Since each count is 1 mV, the observed potential is 171 mV = 0.171 V.
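The same conversion can be done in two lines of Python (a sketch, assuming the 1 mV-per-count scale of the example):

```python
# Worked example from the text: 1 count = 1 mV, reading is hexadecimal AB.
count = int("AB", 16)    # hexadecimal AB = 10*16 + 11
volts = count * 1e-3     # 1 mV per count

print(count)   # -> 171
print(volts)   # -> 0.171
```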

### Straight Binary Coding of Analog Quantities

If we are sure that an observed quantity always has the same sign (light intensity is always ≥ 0, temperature in kelvin is always ≥ 0), there is no need to waste encoding space on a sign bit. Data can be encoded directly in straight binary, indicating magnitude only, since the sign is known to the user (and, presumably, the user's software) in advance. What scale factor should be used? There is a tiny but significant ambiguity that must be dealt with. Given N bits, there are 2^{N} possible encodings, whose values run from 0 to 2^{N} - 1. If we define the scale factor as some pre-set increment per count, there is no problem: zero means zero, and 2^{N} - 1 means the increment times 2^{N} - 1. A 12 bit analog-to-digital converter with a 1 mV least significant bit would thus top out at (2^{12} - 1) * 1 mV = 4095 mV = 4.095 V. Notice this is 1 mV less than 2^{12} mV.

However, the real-world situation is typically approached from the other direction. Given a full scale range of 5 V or 10 V, what is the signal increment for each bit? Do we set, e.g., 5 V = 2^{N} (and thus unobservable, as it is 1 count higher than the highest number representable in N bits), or do we set 5 V = 2^{N} - 1 (the highest number we can represent with N bits)? The latter choice makes the calibration vary with resolution. The standard, therefore, is to set the increment to 1/2^{N} of full scale, so that the highest digitized value falls 1/2^{N} × full scale short of the nominal digitization range. For an 8 bit example with a 5 V range, 1000 0000 binary digitizes exactly 1/2 of the nominal full scale signal (2.5000 V), 0100 0000 is exactly 1/4 of the nominal full scale (1.2500 V), and each bit corresponds to an easily defined value. The highest straight binary encoding, 1111 1111, corresponds to 255/256 of full scale, or 4.980 V (to 4 significant figures).
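A minimal sketch of this convention, using the 8 bit, 5 V example (the function names are mine, not from the text):

```python
def lsb_volts(full_scale: float, n_bits: int) -> float:
    """Voltage step per count: full scale / 2^N, per the standard convention."""
    return full_scale / 2**n_bits

def code_to_volts(code: int, full_scale: float, n_bits: int) -> float:
    """Voltage represented by a straight-binary code."""
    return code * lsb_volts(full_scale, n_bits)

FS, N = 5.0, 8
print(code_to_volts(0b10000000, FS, N))  # -> 2.5        (half of full scale)
print(code_to_volts(0b01000000, FS, N))  # -> 1.25       (a quarter of full scale)
print(code_to_volts(0b11111111, FS, N))  # -> 4.98046875 (255/256 of 5 V)
```

With this convention each bit, taken alone, contributes an exact binary fraction of full scale, and the top code always sits one LSB below the nominal range.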

### Two's Complement Coding of Bipolar Quantities

We face the same scaling issues with bipolar (plus or minus) data that we do with unipolar (unsigned) data, but we have an anchor that removes much of the ambiguity: zero as a number and zero as an encoding must represent the same idea. For consistency, we also want the positive encodings of a bipolar converter and the encodings of a unipolar converter to give the same numbers for the same input. Thus an M bit bipolar converter (with one bit used to indicate sign) should give the same result for nonnegative inputs as an M-1 bit unipolar converter.
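Two's complement encoding and decoding can be sketched as a pair of hypothetical helpers (the `encode`/`decode` names are mine, not from the text):

```python
def encode(value: int, m_bits: int) -> int:
    """m-bit two's-complement bit pattern for a signed integer count."""
    assert -(1 << (m_bits - 1)) <= value < (1 << (m_bits - 1)), "out of range"
    return value & ((1 << m_bits) - 1)    # negative values wrap around 2^m

def decode(pattern: int, m_bits: int) -> int:
    """Signed integer count recovered from an m-bit two's-complement pattern."""
    if pattern >= (1 << (m_bits - 1)):    # top (sign) bit set -> negative
        pattern -= (1 << m_bits)
    return pattern

# 9 bit bipolar example: nonnegative counts keep their unsigned 8 bit form.
print(f"{encode(171, 9):09b}")    # -> 010101011
print(f"{encode(-171, 9):09b}")   # -> 101010101
print(decode(0b101010101, 9))     # -> -171
```

Note how a nonnegative count in 9 bit two's complement is simply the 8 bit unsigned pattern with a leading 0, which is exactly the consistency property described above.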

Exercise \(\PageIndex{1}\)

If the 8 bit converter in the previous section gives 1000 0000 for a 2.5 V input signal, what digitization would we want from a 9 bit bipolar converter observing the same 2.5 V input?

Exercise \(\PageIndex{2}\)

What encoding would we want for the 9 bit converter to give for -2.5 V input?

Exercise \(\PageIndex{3}\)

An old data file is found with 8 bit numbers and a label "binary digitized data, 8 bits, 5 V full scale." The first 3 values in the file are 0110 0011, 1110 0100, 1010 1001. There is no indication whether the data is unipolar or bipolar, nor whether the encoding is straight binary, signed-magnitude binary, or two's complement binary. What are the possible voltages represented by these three numbers?