2 Specifications for image capture
This section sets out the performance specification for a captured image. Cameras do not attempt to copy the way in which light signals are detected and processed by nerve cells in the eye: the eye works on quite different principles from those we have seen here, using changes in molecular shape within proteins to detect light and initiate a response.
Photography began as a means of capturing images that the human eye would have seen. That task involves producing a full-colour imitation of a scene. Because of the way the eye perceives colour, the techniques that have been developed reduce the scene to a combination of three primary colours, red, green and blue, from which the eye reconstructs the sensation of full colour. Furthermore, photographic images have always used dots of information that blend into smooth pictures when viewed from the appropriate distance. In film photography the dots are somewhat irregular grains of chemical photoreceptor; in digital photography the dots are arranged in a regular array of pixels. In practice, therefore, photography is all about deception. The trickery begins with a means to capture the pattern of light intensity in a focused image.
SAQ 3
Look back over the story so far and suggest three ways in which patterns of light intensity can be translated by arrays of pixels into electronic information. In each case, identify the basic device and the nature of the primary electronic signal that it registers.
Answer
Three approaches have been identified to translate patterns of light intensity into electronic information using arrays of pixels:
- photoconductors: photocurrent
- photodiodes: photocurrent
- photocapacitors: stored charge.
To specify the scale at which information is to be encoded we need to consider four basic quantities: resolution, sensitivity, contrast and dynamic range.
Resolution relates to the size of dot from which the picture is to be reconstructed. At normal close viewing distance, the eye can discern detail down to a little under two-tenths of a millimetre. So a conventional print photograph (100 mm × 150 mm) should be made up of around 500 pixels × 750 pixels (or dots). Table 2 records some typical digital image formats: you can see that the smallest in the list is close to this. Such a format is not suitable for making larger images, unless the viewing distance is increased. Therefore denser formats are specified to build in some room for enlargement (this amounts to a ‘digital zoom’).
Table 2: Typical digital image formats

| Image height/pixels | Image width/pixels | Total pixels | Colours | Grey scale/bits | Image size/megabytes |
|---|---|---|---|---|---|
| 480 | 600 | 288 000 | 3 | 8 | 0.864 |
| 600 | 800 | 480 000 | 3 | 8 | 1.44 |
| 768 | 1024 | 786 432 | 3 | 8 | 2.359296 |
| 1200 | 1600 | 1 920 000 | 3 | 8 | 5.76 |
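The numbers in Table 2 follow from a single piece of arithmetic: uncompressed image size is pixel count × colour channels × bytes per channel. Here is a minimal Python sketch of that arithmetic, together with the 500 × 750 pixel figure for a print, assuming (as the table appears to) that one megabyte means 10⁶ bytes:

```python
# Uncompressed image size = height x width x colour channels x bytes per channel.
# Assumes 1 megabyte = 10**6 bytes, which matches the figures in Table 2.

def image_size_megabytes(height_px, width_px, channels=3, bits_per_channel=8):
    bytes_per_channel = bits_per_channel // 8
    return height_px * width_px * channels * bytes_per_channel / 1e6

# Pixels needed for a 100 mm x 150 mm print at a 0.2 mm dot size:
print(100 / 0.2, "x", 150 / 0.2, "pixels")   # 500.0 x 750.0

# Reproduce the 'Total pixels' and 'Image size' columns of Table 2:
for h, w in [(480, 600), (600, 800), (768, 1024), (1200, 1600)]:
    print(f"{h} x {w}: {h * w:>9} pixels, {image_size_megabytes(h, w):.3f} MB")
```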
Let's work out the pixel size that is required to encode an image at the highest resolution in Table 2 (1600 × 1200). Electronic image sensors are a little smaller than the 36 mm × 24 mm frame of 35 mm film, measuring about 24 mm × 16 mm. The aspect ratios do not match exactly, and the harder task is to fit 1200 pixels into 16 mm: that amounts to about 13 μm per pixel. Such a scale presents no real challenge to the semiconductor industry, even when you take into account that each pixel really needs three separate sensors sitting behind separate colour filters, plus a matrix of connections. Of more concern is whether enough photons strike such a small area to register an image.
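A short check of that pitch arithmetic: compute the pitch demanded along each sensor dimension and note that the smaller figure is the binding one. The 24 mm × 16 mm sensor and 1600 × 1200 format are the figures from the text above.

```python
# Pixel pitch demanded along each dimension of a 24 mm x 16 mm sensor
# holding a 1600 x 1200 pixel image; the smaller pitch is the binding one.

sensor_mm = {"width": 24.0, "height": 16.0}
pixels = {"width": 1600, "height": 1200}

for axis in ("width", "height"):
    pitch_um = sensor_mm[axis] / pixels[axis] * 1000  # mm -> micrometres
    print(f"{axis}: {pitch_um:.1f} um per pixel")
# width: 15.0 um, height: 13.3 um -> the pixels must be ~13 um across
```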
Sensitivity concerns the amount of light required to register as a change in intensity. We have already carried out a few calculations of this kind in this section. There is some interaction here with the optical design team, because larger-diameter lenses capture more light.
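A back-of-envelope photon count gives a feel for the scales involved. Everything numerical below apart from the pixel size is an illustrative assumption (an irradiance of 0.1 W m⁻² at the sensor, a 1/100 s exposure, green light at 550 nm); the point is only the order of magnitude.

```python
# Rough photon budget for one pixel. The irradiance, exposure time and
# wavelength below are illustrative assumptions, not figures from the text.

h = 6.626e-34        # Planck constant, J s
c = 3.0e8            # speed of light, m/s
wavelength = 550e-9  # m (green light, assumed)
irradiance = 0.1     # W/m^2 at the sensor (assumed)
exposure = 1 / 100   # s (assumed shutter time)
pixel_side = 13e-6   # m, from the pitch worked out above

photon_energy = h * c / wavelength               # ~3.6e-19 J per photon
energy_on_pixel = irradiance * exposure * pixel_side**2
print(f"~{energy_on_pixel / photon_energy:.0f} photons per pixel per exposure")
# ~5e5 photons: plenty under these conditions, but the budget shrinks
# rapidly in dim light or with short exposures
```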
Contrast is about the tonal range between the brightest and the darkest features. A camera must have some means of adjusting the rate at which light enters the system so that image detection is effective: too much and the image bleaches into saturation, with no discernible contrast; too little and the image is swamped by the speckle of background noise. Two controls are available. First, there is the size of the aperture, which need not always be as large as the lens system; second, there is the shutter, which sets the length of time for which the image sensor is active.
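One standard way to see how the two controls trade off is the photographic exposure value, EV = log₂(N²/t), where N is the f-number of the aperture and t the shutter time in seconds. This relation is standard photographic practice rather than something stated in the text; combinations with equal EV admit the same total light.

```python
import math

# Exposure value EV = log2(N^2 / t): a standard photographic measure,
# used here only to illustrate the aperture/shutter trade-off.
def exposure_value(f_number, shutter_s):
    return math.log2(f_number**2 / shutter_s)

print(exposure_value(8, 1 / 125))    # f/8 at 1/125 s          -> EV ~ 13.0
print(exposure_value(5.6, 1 / 250))  # one stop wider + faster -> EV ~ 13.0
```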
Dynamic range defines the ratio between the extremes of brightness that are encoded. In digital data it is conventional to base this on powers of two; 256 levels corresponds to eight ‘bits’, or one ‘byte’, of information.
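The power-of-two convention is simply levels = 2ⁿ for n bits; a one-line check:

```python
# Number of distinct intensity levels encoded by n bits: 2**n.
for bits in (1, 4, 8, 12):
    print(f"{bits:>2} bits -> {2**bits} levels")
# 8 bits (one byte) gives the 256 levels mentioned above
```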
Figure 6 shows an array of pixels in schematic form. We now have an idea of the physical size of the array and the manner in which a pixel might capture its part of an image. This figure also includes some indication of how the various elements might be individually connected to the next stage in the process of recording a digital image, which is the storage of the data.
