1. Light Transduction: the CMOS Camera
Computers store all data as binary numbers (zeros and ones). How does the CMOS camera in a cellular telephone or digital camera convert light into numbers? There are several steps.
a) Light impinges on the camera's detector surface. Each absorbed photon kicks an electron away from a silicon atom in the detector, so that an electron/hole pair is formed (a hole is the absence of an electron -- just as a hole in the ground is the absence of rock or soil).
b) Different regions on the detector are sensitive to red, green, and blue light, so the amount of light seen by each region helps establish the color of each portion of the image. In the drawing, a single pixel corresponds to the area covered by one square of each color (red, green, blue, unfiltered). Some cameras have color sensing stacked vertically, avoiding the dead zone for one color where another is being sensed. Typical pixels range from 1.5 μm to 25 μm in width.
c) The more light there is, the greater the number of electron/hole pairs.
d) The released electrons and holes are stored in separate parts of each observation region or pixel. The result is that the charge, Q, in each pixel is proportional to light intensity. Any device that stores charge is called a capacitor (with capacitance, C, in Farads). The voltage, V, (electrical potential energy) on a capacitor is proportional to the stored charge. For more detail up to this point, see Silicon Imaging's website.
Q = ∫ i dt = CV
e) An analog-to-digital converter (ADC) generates a number proportional to the amount of charge. "How does the ADC work?" A module on this topic, eventually to be submitted to ASDLib, is under development (November, 2009). In the meantime, here's a link to follow if you're curious.
f) The digitized values for each pixel are read into the computer's memory. At first, the values are stored as three arrays, one each for the red, green, and blue values. This "stack" of colors is then copied into a single array with the three values stuck adjacent to each other RGB(1), RGB(2), ... , RGB(last pixel). This array of pixels is a bitmap or .BMP file. These files require 3 bytes of storage for each pixel, so big images take a lot of memory.
[Diagram: three adjacent pixels, each stored as the sum Σ Red + Green + Blue]
g) To save space (and speed up transmission of data over the internet), the file is converted to a compressed format, typically a JPG file. Some information is lost during compression, making this format ill-suited to high precision work. Some cameras allow access to pixel-by-pixel data; in addition to the BMP format, example formats include TIFF and RAW.
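The interleaving described in step (f) can be sketched in a few lines of Python. This is an illustrative example, not the camera's actual firmware; the function name and sample values are hypothetical.

```python
# Hypothetical sketch of step (f): merging separate Red, Green, and Blue
# arrays into one flat bitmap-style array, RGB(1), RGB(2), ..., with
# 3 bytes of storage per pixel.
def interleave_rgb(red, green, blue):
    """Merge per-color lists into one flat [R, G, B, R, G, B, ...] array."""
    rgb = []
    for r, g, b in zip(red, green, blue):
        rgb.extend((r, g, b))
    return rgb

# Three example pixels (values are made up for illustration).
red = [255, 0, 10]
green = [0, 255, 20]
blue = [0, 0, 30]

pixels = interleave_rgb(red, green, blue)
# pixels -> [255, 0, 0, 0, 255, 0, 10, 20, 30]

# 3 bytes per pixel: a 12-megapixel image needs about 36 MB uncompressed,
# which is why big images take a lot of memory.
storage_bytes = 3 * len(red)
```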
2. Software: Converting JPG Files to Intensity Plots
The JPG format is reasonably well described in Wikipedia. The software loads ANY JPG file -- it has no knowledge of what is in the image.
After the user identifies the range of pixels that is to be considered a spectrum, the software reads out the Red, Green, and Blue bytes. If only a single row of pixels is selected, one simply reads out R, G, and B, then adds the 3 values to get a sum, construed as total intensity (since the values of the bytes are integers 0 to 255, the highest total intensity possible is 3*255 = 765). If multiple rows are selected, all pixels at the same offset from the blue end of the spectrum are added together. Thus, if one has a spectrum height of 9 pixels, each color channel runs from 0 to 255 in each pixel, so each summed channel ranges from 0 to 9*255 = 2295, and total intensity can be as high as 3*2295 = 6885.
The software provides options to plot the intensity from the individual color sensors or for all summed together. The abscissa may be either pixel number (no wavelength attributed to a particular offset from the blue end of the spectrum) or wavelength (calibration as given by selecting reference points in the image and presuming linear dispersion between the two reference points).
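The summation and the two-point wavelength calibration described above can be sketched as follows. This is a minimal illustration under the assumptions stated in the text (summed RGB intensity, linear dispersion between two reference points); the function names are hypothetical.

```python
# Hypothetical sketch of the intensity readout: sum R, G, and B at each
# column (offset from the blue end), adding rows together when more than
# one row of pixels is selected.
def total_intensity(rows):
    """rows: list of pixel rows; each row is a list of (R, G, B) tuples.
    Returns one total-intensity value per column."""
    n_cols = len(rows[0])
    totals = [0] * n_cols
    for row in rows:
        for col, (r, g, b) in enumerate(row):
            totals[col] += r + g + b
    return totals

# Hypothetical two-point calibration assuming linear dispersion between
# reference pixels p1 and p2 at known wavelengths wl1 and wl2.
def pixel_to_wavelength(pixel, p1, wl1, p2, wl2):
    return wl1 + (pixel - p1) * (wl2 - wl1) / (p2 - p1)

# One selected row of two pixels: the first is saturated (3*255 = 765).
one_row = [[(255, 255, 255), (10, 20, 30)]]
intensities = total_intensity(one_row)
# intensities -> [765, 60]
```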
3. Software: Converting Intensity to Transmittance and Absorbance
The software has NO capability for stray light or background subtraction! It simply computes, at each wavelength, T = I/I0, and A = -log10 T. If the dispersion (range of wavelengths per pixel) is different for sample and reference images, the software performs linear interpolation to find an approximate intensity between measurements. In all cases "intensity" (the I or I0) is just the number computed for total intensity in step 2 above.
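The computation at each wavelength can be sketched directly from the formulas above. This is an illustrative version, not the program's source; the interpolation helper shows one way linear interpolation between measurements could be done.

```python
import math

# Hypothetical sketch: at each wavelength, T = I/I0 and A = -log10(T),
# with no stray-light or background correction, exactly as stated above.
def absorbance(sample, reference):
    """sample, reference: total intensities (I and I0) at matching
    wavelengths. Returns one absorbance value per wavelength."""
    result = []
    for i, i0 in zip(sample, reference):
        t = i / i0          # transmittance
        result.append(-math.log10(t))
    return result

# Linear interpolation between two measured points, for the case where
# sample and reference images have different dispersions.
def interp(x, x0, y0, x1, y1):
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

a = absorbance([100, 50], [100, 100])
# I = I0 gives T = 1, A = 0; half the light gives A = log10(2) ≈ 0.301
```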
4. Software: Why Moving Data to a Spreadsheet May Be Useful
What if you want to subtract background? What if you don't want to say Itotal = Ired + Igreen + Iblue? What if you have some other manipulation you want to try? The software outputs a .CSV (comma-separated values) file that can be read by most spreadsheet programs. To avoid making massive, uninterpretable files, it puts out only the data selected from within the JPG (a line of pixels or pixels averaged over some height), listing both pixel number and imputed wavelength.
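A .CSV export of the kind described might look like the sketch below. The column names and layout here are illustrative assumptions, not the program's actual output format.

```python
import csv
import io

# Hypothetical sketch of the .CSV export: one line per selected pixel,
# listing pixel number, imputed wavelength, and total intensity.
def write_csv(rows):
    """rows: list of (pixel, wavelength_nm, intensity) tuples.
    Returns the CSV text a spreadsheet program would read."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["pixel", "wavelength_nm", "intensity"])
    writer.writerows(rows)
    return buf.getvalue()

text = write_csv([(0, 400.0, 120), (1, 400.5, 135)])
```

Once in a spreadsheet, each row is one pixel, so background subtraction or an alternative weighting of the R, G, and B channels is a matter of ordinary column arithmetic.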