In the late 1980s Ansel Adams remarked to an interviewer, regarding his oversight of the printing of one of his books of photographs reproduced from scans of his images, that he was impressed that digital editing could accomplish adjustments he could not make in his own darkroom. For me that was the handwriting on the wall: the future of photography was in digital imaging. In 1989 I began my shift from analogue film photography to digital. It went slowly and haltingly; there weren't many products that supported digital imaging with computers. But little by little more scanners became available, along with software for editing images on a computer. So I learned mostly from personal experience using scanners and software, and from talking with a few colleagues on internet forums about how a scanner worked and the beginnings of image editing on a computer.
How does a scanner work? It has a set of CCD cells arranged in rows, with an adjacent light source to illuminate the print or film being scanned. This bar of sensors is moved very precisely along the length of the area to be scanned by a finely threaded screw. The user controls the size of the area to be scanned and the number of pixels high and wide in the resulting digital image file. What that setup accomplishes is a virtual matrix, or grid, projected onto the scan surface, breaking it into small, square segments. Once the scan begins, each of these segments is read for brightness by a CCD sensor, and that reading is translated into an R, G, or B value from 0 to 255 for that pixel's X,Y location.
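The digitizing step can be pictured with a short sketch. This is a toy model, not any real scanner's firmware: a virtual grid is swept row by row, each cell's analog brightness (taken here as 0.0–1.0) is read, and the reading is quantized to one of 256 levels for that pixel's X,Y location.

```python
def quantize(brightness: float) -> int:
    """Map an analog brightness reading (0.0-1.0) to one of 256 levels."""
    level = round(brightness * 255)
    return max(0, min(255, level))  # clamp to the 8-bit range

def scan(read_cell, width: int, height: int):
    """Sweep the virtual grid row by row, like the moving sensor bar."""
    return [[quantize(read_cell(x, y)) for x in range(width)]
            for y in range(height)]

# Toy "original": a horizontal brightness ramp, 4 pixels wide.
image = scan(lambda x, y: x / 3, width=4, height=2)
```

Each output row is identical here because the toy original varies only horizontally; a real original would give every cell its own reading.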
Once a file is made it can be opened and reproduced on a computer display as a picture. If you select a small section of that picture and zoom in until it fills the screen, the individual pixels become big enough to see clearly. What you see is a grid of pixels, each a different color and brightness. Notice, too, that each pixel is uniformly filled with a single color and brightness value. One pixel is all the information an image sensor records; each sensor is just a kind of light meter, and for color imaging there are three kinds, one each for red, green, and blue. In the final output image, however, each pixel carries all three values: red, green, and blue. This is accomplished by interpolating the color values of laterally adjacent pixels, a job done by the A/D firmware processor, a small, limited-function computer chip built into the scanner's hardware.
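That lateral interpolation can be sketched with a toy example. The repeating R, G, B single-channel layout below is an assumption for illustration; real firmware uses more elaborate demosaicing, but the principle is the same: a channel a sensor did not record is filled in by averaging nearby sensors that did record it.

```python
PATTERN = "RGB"  # hypothetical repeating single-channel sensor layout

def interpolate_row(samples):
    """samples: single-channel readings along one row -> full RGB per pixel."""
    n = len(samples)

    def channel_at(i, ch):
        if PATTERN[i % 3] == ch:      # this sensor recorded ch directly
            return samples[i]
        # otherwise average laterally adjacent sensors (within two
        # positions) that did record this channel
        vals = [samples[j] for j in (i - 2, i - 1, i + 1, i + 2)
                if 0 <= j < n and PATTERN[j % 3] == ch]
        return sum(vals) // len(vals) if vals else 0

    return [(channel_at(i, "R"), channel_at(i, "G"), channel_at(i, "B"))
            for i in range(n)]
```

So a row of single-channel readings like `[100, 50, 200, 120]` comes out as four full RGB pixels, with the green and blue values of the red-sited pixels borrowed from their neighbors.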
With the best scanner driver software, such as LaserSoft Imaging's SilverFast, the user can obtain a low-resolution raw preview image from the scanner. That preview can then be used to adjust the image perceptually: first optimize the raw tonal range to fit the 256-level output gamut; use the histogram to adjust overall brightness; use the gradation adjustment to balance highlight and shadow levels; apply a global color balance, plus a selective color adjustment for individual colors; and finally choose among sharpening options with USM, which provides side-by-side magnified windows for judging sharpness perceptually. All of these adjustments are made serially and are additive; together they give the scanner driver a model of how the output should be adjusted as part of the scan process. By this method you obtain a finished, ready-to-use image file from the scan that requires little or no post-scan editing.
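The idea of serial, additive adjustments composed into one model can be sketched in a few lines. The functions here are illustrative stand-ins, not SilverFast's actual algorithms: a tonal-range stretch maps the raw black and white points onto the full 0–255 output, a gamma curve stands in for the brightness/gradation step, and the steps are chained in the order the user set them.

```python
def stretch(black: int, white: int):
    """Map the raw tonal range [black, white] onto the full 0-255 output."""
    def f(v):
        v = min(max(v, black), white)          # clip outside the raw range
        return round((v - black) * 255 / (white - black))
    return f

def gamma(g: float):
    """Simple brightness/gradation curve: v' = 255 * (v/255) ** (1/g)."""
    return lambda v: round(255 * (v / 255) ** (1 / g))

def compose(*steps):
    """Chain adjustments serially, in the order the user set them."""
    def f(v):
        for step in steps:
            v = step(v)
        return v
    return f

# One combined model, applied to every pixel value during the scan.
adjust = compose(stretch(black=16, white=240), gamma(1.0))
```

With a black point of 16 and a white point of 240, a raw value of 16 becomes 0 and 240 becomes 255, so the scan's useful range fills the whole output gamut.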
How does a digital camera differ from a scanner? A digital camera uses sensors just like those in a scanner, but they are arranged in an area array rather than a linear array that moves across the subject. With a camera, the lens focuses and frames the subject and responds to the light reflected from it. Otherwise scanners and cameras work alike: both project a virtual grid, or matrix, onto the subject to be sensed. So let's take a modern 12-megapixel camera framed and focused on a subject 30 x 40 feet in size. Each pixel then covers a virtual square of the subject about 0.12 x 0.12 inches, and the sensor makes an averaged light measurement of everything within that square, so any detail inside that roughly 1/8-inch square is lost in the averaged reading. In other words, a 12-megapixel digital camera is really a light meter with 12 million sensors, each making an individual averaged light reading of one of 12 million segments of the scene when the shutter is released. These readings are then sent to an A/D microcomputer chip, where they are laterally interpolated so that each pixel has all three RGB color values.
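The arithmetic behind that 0.12-inch figure is easy to check. A 4000 x 3000 sensor layout is assumed here as a common 4:3 arrangement of 12 million pixels:

```python
# Worked check: a 12-megapixel frame covering a 40 x 30 foot subject
# gives each pixel a footprint of about 0.12 inch square.
width_px, height_px = 4000, 3000       # assumed 4:3 layout, 12 MP total

subject_w_in = 40 * 12                 # 40 feet wide, in inches
subject_h_in = 30 * 12                 # 30 feet high, in inches

footprint_w = subject_w_in / width_px  # inches of subject per pixel
footprint_h = subject_h_in / height_px
```

Both dimensions come out to 0.12 inch per pixel, the roughly 1/8-inch square within which all detail is averaged away.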
Unlike scanners, most digital cameras do not provide a raw preview that can be adjusted with software so that the finished file comes out pre-edited and ready for use. But there are exceptions: high-end dSLR models from Canon and Nikon, as well as most medium-format digital cameras, can be controlled through a tethered connection to a computer, with software much like a scanner driver's that captures and displays an editable raw preview. The software can then fire the camera to expose the image to the pre-edited requirements, and you get a finished image file just as I described for a scanner. In other words, a high-end dSLR can be used just like a scanner.
However, most photographers use digital cameras much as they used film cameras, setting camera controls that essentially edit the exposure. For instance, with my last dSLR, a Canon 5D, I could select one of several Picture Styles before shooting that would direct the camera to edit the exposures to suit a selected type of subject. This kind of in-camera pre-editing applies directly to what the camera's micro-computer outputs in JPEG format; if Raw is selected, the Raw image data is instead accompanied by metadata describing the editing the Raw data should receive during conversion to a standard image file format such as TIFF. Among the Picture Styles a user can also select Neutral, which applies no sharpening, contrast, saturation, or color balance adjustments to the Raw data, so the user gets just what the sensor records and the A/D converter outputs. Many times I have suggested that dSLR users try shooting with a neutral Raw output and see how that Raw image actually appears when displayed by an application like Photoshop. None of these correspondents have replied that they did, and I know of only two colleagues who set up and shoot to get unadjusted, neutral Raw image files. In fact, most photographers apparently use third-party software to convert their Raw files, even though only the camera manufacturer's software can actually read the proprietary metadata. So most photographers obtain only a simulation of the image-attribute adjustments the metadata contains.