


David Miller



Chapter 4 – Visible Light


• Visible light represents a small portion of the electromagnetic spectrum; wavelengths range between 400 and 700 nm.

• The main source of visible light is the Sun.
• The Earth’s atmosphere absorbs most of the light below 400 nm.
• Visible light sensing by the eye depends upon:

– The parameters of the light receptors:
  – Unique size
  – Unique shape
  – Spectrum of sensitivity
  – Orientation as light guides

– The characteristics of the dioptric media

• Understanding the transformation of an optical image composed of visible light into an electronic image

Processing of a 2D optical image into an electronic image

Processing of a 3D optical image into an electronic image

Clinical optics concerns the focusing or processing of visible light. Of course, visible light comes primarily from suns (stars); children are taught that this visible light also generates the energy necessary for life. The wavelengths of visible light (4 × 10⁻⁷–7 × 10⁻⁷ m) represent a minute fraction, about 1%, of the electromagnetic spectrum, which ranges from the shortest ionizing radiation (1 × 10⁻¹⁶ m) to the longest radio waves (1 × 10⁶ m; Fig. 4-1).[1] Interestingly, visible light does not start out as such in the core of the Sun.
The Sun’s core may be considered a furnace in which thermonuclear fusion takes place. Here, because of the crush of gravity, temperatures close to 16 × 10⁶ K are generated. In such a hot environment the elemental hydrogen protons fuse to produce helium nuclei and energy in the form of gamma rays. (The Sun converts 4 × 10⁶ tons of matter into energy every second.) This resultant short-wavelength energy passes through about half a million miles (8 × 10⁵ km) of dense solar matter before reaching the Sun’s surface.
During this long and slow journey, the photons lose energy and hence increase in wavelength. The radiation that leaves the Sun’s surface primarily represents a spectrum between ultraviolet and infrared, with a small fraction of ionizing radiation in the form of x-rays with wavelengths of 10⁻¹⁰ m and γ-rays with wavelengths of 10⁻¹⁴ m. This ionizing radiation (part of the entire cosmic radiation) can destroy life. However, the Sun also ejects huge amounts of matter (one million tons of hot electrons and protons every second), called the solar wind, which produces a vast shell around the Sun and prevents ionizing radiation from reaching the Earth. The fast-moving ions of the hot plasma of the solar wind are repulsed by the Earth’s magnetic field.
Effect of Earth’s Atmosphere
The Earth’s atmosphere is held in position by the gravitational pull of the mass of the Earth. The potentially harmful ultraviolet and infrared radiation released from the Sun’s surface is absorbed by ozone, carbon dioxide, and water vapor in the Earth’s atmosphere (Fig. 4-2).[1] The Earth’s temperature, which is a result of the temperature of the Sun’s surface (6000 K) and its distance from Earth (almost 100 × 10⁶ miles [160 × 10⁶ km]), is responsible for the volume of atmospheric water vaporized from the oceans. Ozone and carbon dioxide result from photosynthesis and respiration. Thus early forms of life had to exist and produce these atmospheric gases before ultraviolet and infrared radiation could be absorbed and higher forms of life could evolve.
It may seem incredible that only life-supporting visible light reaches the Earth from its origin in the solar core, although it may be argued that the process of evolution would have adapted life on Earth to any wavelength that reached Earth. The core-produced x-rays are filtered first by the outer layers of the Sun’s matter. The Earth is 1/100th the diameter of the Sun and almost 100 × 10⁶ miles (160 × 10⁶ km) away, and it receives only a tiny fraction of the radiation (about a billionth of the total).[2] The radiation that travels toward Earth is further filtered by the particles of the solar wind. In turn, this deadly solar wind is repelled by the Earth’s magnetic field. Finally, the size and temperature of the Earth, as well as life on Earth, combine to produce an atmosphere that allows little more than visible light to pass through.
We have traced the origins of visible light from the Sun to the Earth’s surface. Equally instructive are the mechanisms by which the biological molecule absorbs visible light and then informs the animal of that event. In a sense this represents the equivalent of Einstein’s photoelectric effect. Rhodopsin is the biological molecule typically used for this purpose. Perhaps the earliest form of sensory rhodopsin, bacteriorhodopsin, is found in a primitive purple-colored bacterium, Halobacterium halobium.[3] It is not known how long this organism has inhabited the Earth. However, its preference for anaerobic conditions and a very salty environment may mean it developed at a time when little or no oxygen existed in the atmosphere and the sea contained high salt concentrations.
Bacteriorhodopsin is a complicated molecule that contains 248 amino acids in the opsin portion, which is linked to one retinal chromophore. Time-resolved spectroscopic measurements have determined that a cis/trans isomerization in the retinal portion of the molecule begins about 10⁻¹² seconds after light stimulation. This is followed by deprotonation in the opsin portion at 10⁻⁵ seconds after stimulation.[4] This early rhodopsin absorbed light maximally at 495 nm but responded to almost all


Figure 4-1 The electromagnetic spectrum. The pictures of mountains, people, buttons, viruses, etc., are used to produce a real (i.e., visceral) feeling of the size of some of the wavelengths. (Adapted from Zeilik M. Astronomy: the evolving universe, ed 3. New York: Harper & Row; 1982.)

Figure 4-2 Absorption of the Sun’s radiation by the Earth’s atmosphere. The white areas show the actual measured spectrum at sea level. Note the white areas of absorption are produced by ozone, water, and carbon dioxide. (Adapted from Zeilik M. Astronomy: the evolving universe, ed 3. New York: Harper & Row; 1982.)
visible light. Estimates suggest that the ancestor of human color pigment genes diverged from the rhodopsin gene about 800 million years ago and eventually resulted in a series of pigments with maximal absorption peaks in the blue, green, and red areas of the spectrum.[5] These specially adapted molecules are needed for accurate color vision.
Thus early animals used something akin to the original rhodopsin and a very simple optical system to see. For example, early worms and shellfish had light-sensing cells that lined a small cup-like structure. Such a system gives a sense of directionality, because each cell is shielded from light that approaches the cup from the nonseeing side. If the cup is made deeper and the sides are turned over, a lensless pinhole system is produced. Such a system is used by a very primitive swimming mollusk called Nautilus.[6]
Thus with visible light falling on the Earth, and rhodopsin already present, the stage was set for the development from simple light-sensing to natural or living optics.
Life has existed on Earth for about 4 billion years. Primitive fish that had eyes resembling human eyes first appeared about 400 million years ago, so it might be said that ophthalmic optics originated at this time. [7] [8] [9] The living form of optics operates under the same rules and regulations as mechanical glass optics. Obviously, the various aspects of natural optics are linked closely to the dimensions of the wavelengths of visible light. Some of the basic elements of optics, using living optics examples, are introduced below.
The essential job of an optical system is to convert information about an object into an image. In natural optics, the image is formed on the retina and, therefore, it usually is much smaller than the object. Classically, the object has been considered as made up of a series of luminous points. For example, an object such as a tree does not contain points of light but can be thought of as reflecting points of light. The optical system converts the object points of light into image points. Because the image is smaller, the image points may be considered more densely packed.
Thus an image of high quality—also called an image of high resolution—demonstrates much detail. The finer and more tightly packed the receptors, the more detail is registered. The retinal receptor size and shape is influenced by a number of factors.
Because smaller receptors are better for resolution than larger receptors, what factor actually limits the smallness of a photoreceptor such as a retinal cone? The answer is diffraction. The smallest point focus of light is surrounded by a diffraction pattern. Thus very narrow receptors that receive a large diffraction pattern are wasteful. The size of the diffraction pattern, on the retina or on a screen, is known as an Airy disc. The diameter of this disc determines the distance between two resolvable points.
That is to say, the diameter of the Airy disc, D_A, or the width of the central maximum, also is equal to the just resolvable distance between two intensity peaks when the minima of the interference patterns overlap (equation 4-1[10]; Fig. 4-3).[11] [12]

D_A = (1.22 × λ × f) / p

where 1.22 = constant for a round pupil, λ = 550 nm (average for visible light), f = focal length of the system, and p = pupillary diameter.

For example, the size of the Airy disc image of a point of light for the human eye under photopic conditions may be determined as follows. If f = 17 mm (focal length of the eye), p = 4 mm (average photopic pupil), and λ = 0.00055 mm (median wavelength in the visible spectrum of 0.0004–0.0007 mm), then the diameter of the Airy disc, D_A, is given by equation 4-2.

D_A = (1.22 × 0.00055 mm × 17 mm) / 4 mm ≈ 0.0028 mm = 2.8 µm
Note that the size of the Airy disc can vary with the focal length of the eye, the wavelength of light, and the pupil size. Also note that 2.8 µm is close to the size of the average foveal cone (1.5–2.0 µm). In comparison, the eagle has a large photopic pupil (about 6 mm); its foveal cones are thinner than those of the human, and the eagle eye’s resolution is finer.
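The arithmetic of equation 4-2 is easy to verify numerically. The sketch below (Python, using only the values given in the text) is a minimal check, not part of the original chapter:

```python
# Airy disc diameter, D_A = 1.22 * lambda * f / p (equation 4-1).
# Values from the text: f = 17 mm (focal length of the eye),
# p = 4 mm (average photopic pupil), wavelength = 0.00055 mm (550 nm).

def airy_disc_diameter_mm(wavelength_mm, focal_length_mm, pupil_mm):
    """Diameter of the Airy disc on the retina, in mm."""
    return 1.22 * wavelength_mm * focal_length_mm / pupil_mm

d = airy_disc_diameter_mm(0.00055, 17.0, 4.0)
print(f"Airy disc diameter: {d * 1000:.2f} um")  # prints 2.85 um
```

The result, about 2.8 µm, matches the foveal cone comparison made in the text.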
Two other important optical concepts are buried in equation 4-1. First, note that f/p may be a key factor in determining the size of the Airy disc. The f/p ratio is called the f-number of the system. As p, the pupil diameter, decreases, the diameter of the diffraction pattern increases, and the resolving power lessens. The same occurs if the focal length increases, because this tends to widen the projection of the diffraction pattern. Thus a larger f-number suggests a degradation in resolution.
The second concept, the angle of resolution, is related closely to the Airy disc. The Airy disc is the physical distance, on the retina or a screen, between two points that are just resolvable. The angle of resolution, AS, is another way to describe just resolvable points in physical space (equation 4-3; see Fig. 4-3).

AS = (1.22 × λ) / p

where 1.22 = constant for a round pupil, p = pupil diameter, and λ = 0.00055 mm. The focal length of the system, f, is not used in equation 4-3.
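Equation 4-3 yields an angle in radians; converting to minutes of arc makes it easier to compare with visual acuity figures. A small sketch (the conversion step is added here, not taken from the text):

```python
import math

# Angle of resolution, AS = 1.22 * lambda / p (equation 4-3), in radians.
# Note that the focal length f drops out: AS is an angle in object space.

def angle_of_resolution_arcmin(wavelength_mm, pupil_mm):
    radians = 1.22 * wavelength_mm / pupil_mm
    return radians * (180.0 / math.pi) * 60.0  # radians -> minutes of arc

# The text's 8 mm pupil:
print(f"{angle_of_resolution_arcmin(0.00055, 8.0):.2f} arcmin")  # prints 0.29
# A 2 mm pupil coarsens the diffraction limit:
print(f"{angle_of_resolution_arcmin(0.00055, 2.0):.2f} arcmin")  # prints 1.15
```

The pupil diameter alone sets this diffraction-limited angle, which is why the eagle's wide pupil helps its acuity.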
The angle of resolution, AS, for two distant stars viewed by a healthy, average human eye with a pupil of 8 mm in diameter is given by equation 4-4.

AS = (1.22 × 0.00055 mm) / 8 mm ≈ 8.4 × 10⁻⁵ radians ≈ 0.3 minute of arc

However, it is known that the human eye can resolve two separate points in 1 minute or even less.[11] This discrepancy is explained as follows. The Rayleigh criterion for resolution demands that the maxima of one point source must intersect the minima of the second point source (see Fig. 4-3),[13] which allows a patch of no light (a high-contrast image) between the two maxima. However, in the case of the healthy young human eye, contrast determinations can be made for targets of

Figure 4-3 Two object sources of light (S1 and S2 ) cannot be resolved if their diffraction patterns (Airy discs) overlap substantially. Two refraction patterns are produced by a circular aperture placed between two lenses, and resultant patterns of the light intensity distribution and appearance are shown: the central maxima of one diffraction pattern falls on the second minima of the diffraction pattern from the second source; the central maxima of one diffraction pattern falls on the first minima of the diffraction pattern from the second source, and the two images can just be resolved (Rayleigh’s criterion); the two images merge as one. Mosaic of retinal cones with the diffraction pattern superimposed.
lower contrast. Thus many human eyes are able to distinguish two point sources or two black bars when the diffraction patterns overlap (see Fig. 4-3 ).

For example, if it is assumed that the human separation criterion is one half the width of the Airy disc, then the angle of resolution is close to 1 minute of arc. If the contrast enhancement known to be built into the neural processing of the human visual system is considered, it becomes apparent how some subjects have a resolution angle of less than 1 minute of arc.[10] [14]
In conclusion, the resolution limit of natural optics is related to the size of the wavelengths within the spectrum of visible light.
When a firefly is seen in the distance, the number of photons collected by the eye from the firefly (per unit time) is distributed over the retinal image. Each image point is an Airy pattern. Thus the smaller the pattern, the more concentrated the photons and the brighter the image. It may be wondered whether animals that have small eyes, with a small focal length, or insects that have even smaller eye facets can collect light as well as the human eye does. From equation 4-1, if the


Table 4-1 Pupil width (mm) and f-number for some animal eyes: net-casting spider, flour moth, tawny owl. (Modified from Lythgoe JN. The ecology of vision. Oxford: Clarendon Press; 1979.)

light-catching ability of an optical system depends primarily on the f-number (f/p), the small eyes of spiders and each facet of the housefly eye, theoretically, are even more sensitive than the human eye. Table 4-1 gives the f-number for some animal species[12]; the tiny eye of the net-casting spider sees dim objects better than the eyes of the other animals. In conclusion, we can make the following observations.
• Small eyes may have low f-numbers and consequently have very sensitive light-catching abilities.
• The Airy disc or diffraction pattern from any point on the object is important in determining the density of photons that fall on a retinal area.
Thus we can appreciate that the level of sensitivity of the receptor is tied ultimately to the wavelength within the spectrum of visible light.
The shape of the photoreceptor plays an important role in resolution of light and sensitivity. For example, the tighter the packing of receptors, the closer the focused points on the retina may be placed (actually, these are Airy patterns). Theoretical analysis shows that hexagonal cross-sections of close elements allow the tightest packing and, in fact, photoreceptors have such hexagonal cross-sections. [13] [14] Of course, the tightness of the packing is related to the angle of resolution.
A light guide (fiberoptic element) receives light at its entrance. Because the core of the guide has a higher index of refraction than the outer coating, or cladding, light that strikes the wall of the guide at more than the critical angle is not refracted but is forced to reflect continually off the walls of the guide until it reaches the other end. (The critical angle is the angle of incidence at which light meeting a medium of lower refractive index is totally reflected rather than refracted.) As might be expected, at angles of entry close to the critical angle, a small amount of light may leak between closely packed light guides. The retinal cone acts as a light guide (Fig. 4-4).[15] The body of the cone has one index of refraction and the surrounding interstitium, although narrow, has a lower index of refraction. Recall that the index of refraction varies with wavelength. A second point to note is that as the diameter of the guide gets smaller, the wave nature of light plays a more important role in the functioning of the guide. For example, as the diameter of the guide approaches the light’s wavelength, the waves of light that enter interfere more destructively with each other, which reduces the amount of light that reaches the other end. The interference pattern is known as a modal pattern.
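The critical-angle condition follows from Snell's law, sin(θc) = n_cladding / n_core. A brief illustration; the two indices below are assumed values chosen for the sketch, not measured properties of retinal cones:

```python
import math

# Total internal reflection in a light guide: light striking the
# core/cladding interface at more than the critical angle is reflected,
# not refracted.  Snell's law gives sin(theta_c) = n_cladding / n_core.
# Illustrative (assumed) indices, core higher than cladding:
n_core = 1.40      # body of the guide
n_cladding = 1.34  # surrounding medium

theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle: {theta_c:.1f} degrees")  # prints 73.2
```

Rays hitting the wall at angles steeper than this (measured from the normal) stay trapped in the core, which is what makes the cone behave as a light guide.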
Because diffraction is ultimately dependent on the wavelength, the limiting diameters of a light-guiding cone are related to the wavelength. [16] [17] [18] The second limiting factor is light crossover between receptors, which is related to the indices of refraction of the receptor and its surround, as well as to the closeness between receptors.[19] [20] Both of these properties may be thought of as related to the wavelength of light.
In summary, the dimensions of receptors of about 2 µm in diameter and the separation between receptors of about 0.33 µm are related to the wavelength of visible light.

Figure 4-4 Scanning electron micrograph of photoreceptors that can be considered a light guide. C, Cone; R, rod. (From Prause JU, Jensen OA. Scanning electron micrograph of frozen-crack, dry cracked and enzyme digested retinal tissue of a monkey and man. Graefes Arch Klin Exp Ophthalmol. 1980;212:261–70.)
Dioptric Media
It seems obvious that dioptric media, or the optical elements of the eye, must be transparent. A perfectly transparent medium does not absorb or scatter light. Classically, pigments are described as absorbing visible light. The characteristic feature of a pigment molecule is a series of single and double bonds formed by the carbon atoms. The pi electrons of the double bond may be thought of as “free to wander” across the carbon backbone structure of the molecule, which increases their combined probability distribution over the entire molecule. This condition makes it easier to excite the pi electrons with the less-energetic visible wavelengths; ultraviolet, x-ray, and ionizing radiation have more energy than visible light. Transparent media have few or no pigment molecules. A good example of a medium transparent to visible light is the human ocular media,[21] which consists primarily of water.
When a beam of visible light passes through pure water, the water appears transparent because it contains no pigments and because the light waves scattered from each of the water molecules interfere destructively with one another in all directions except the forward direction. No light appears to have been scattered, because the scattered waves mutually cancel to give zero net scatter to the side. Water and glass interact with light in this way because their components are all of the same index of refraction and uniformly distributed. The transparent cornea may be thought of as made up of collagen fibers of one index of refraction embedded in a mucopolysaccharide (high water content) of a second index of refraction. However, because the distribution of the elements is in a uniform pattern, and because the collagen fibers are never more than the distance of one half a wavelength of visible light apart, the number of scattered waves is small. In reality the cornea is only 90% transparent (10% of the incident light is scattered). It is functionally transparent,[22] although not perfectly transparent. Once again, an important optical property (transparency) may be thought of as dependent on the wavelength of the incident light.
Light is visible because it can be detected in the retina. It produces changes in receptor cells in our eye. These changes stimulate nervous activity, which is processed by retinal nerve cells and conveyed to our brain. Electrical sensors can “see” light, too. There are two major classes of electrical light sensors, photovoltaic and photoconductive. The photovoltaic class generates electrical power, which is related to the power of the light incident to the sensor. Photoconductive devices conduct more electricity


Figure 4-5 Example of an early mechanical version of a scanning television system, patented in 1884 by Paul Nipkow. (Courtesy Cinemedia Corporation.)
with increasing light. A solar cell is a photovoltaic device. The sensor which turns on the streetlights at night is usually a photoconductive device. Either type can produce an electrical signal which can be conveyed to a distant receptor.
The association between light and electricity has been known for a long time. Lightning is a spectacular example. Electricity can produce light in proportion to the amount of electricity: the brightness of an electric arc or of the incandescent filament of an electric lamp is related to the current flow producing the light. Television, fax, and electronic cameras all depend on the ability to electrically and proportionally sense and create light.
Paul Nipkow invented mechanically scanned television and patented it in 1884 ( Fig. 4-5 ). This system “scanned” a real image using a rotating disk pierced with a spiral of holes that presented a small portion of the image to a selenium photoconductive sensor. The sensor was connected to a light source that was observed through a second, synchronized, rotating disk which placed the received light in the right place in the image plane.
An electronic image differs from a real image in several ways. (1) The electronic image is sampled. It is made up of a finite number of little light spots, or picture elements called pixels, which are seen together as a continuous image but individually simply represent the light at a point in a real image. (2) The light provided by these pixels is made up of three different primary colors (red, blue, and green), which can be mixed to produce nearly any perceived color. (3) Finally, the information is not there all of the time but is presented repeatedly at a rate sufficiently fast that the image is seen as continuous. Electronic images are thus neither spatially nor temporally continuous. Although it is not essential to an electronic image, the information describing the individual pixels is conveyed serially, or one at a time. By agreeing on a correspondence between the location of a pixel and the order in which it is transmitted, it is unnecessary to transmit the location with the color and brightness information. This orderly sequence of analysis and synthesis describes a pattern known as a raster.
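The raster convention described above, in which a pixel's location is implied by its position in the serial stream, can be sketched in a few lines (the 4 × 3 image size is an arbitrary example):

```python
# Raster order: pixels are sent one at a time, row by row.  The receiver
# recovers (row, column) from the serial position alone, so location
# never has to be transmitted alongside the brightness/color data.
WIDTH, HEIGHT = 4, 3  # a tiny hypothetical 4x3 image

def to_serial(row, col, width=WIDTH):
    return row * width + col

def from_serial(index, width=WIDTH):
    return divmod(index, width)  # (row, col)

assert to_serial(2, 3) == 11
assert from_serial(11) == (2, 3)
# Every pixel location round-trips through the serial order:
assert all(from_serial(to_serial(r, c)) == (r, c)
           for r in range(HEIGHT) for c in range(WIDTH))
print("raster order round-trips for all", WIDTH * HEIGHT, "pixels")
```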
Solid-state electronic image sensors have replaced mechanically scanned image sensors and electronically scanned sensors such as Vidicon tubes with arrays of electrical sensors. Electronic image sensors have a photosensor for each pixel, whereas mechanically and electronically scanned sensors examine a portion of a photosensitive region large enough to accommodate the entire image. The first solid-state image sensor was the charge-coupled device (CCD). CCDs represent the amount of light at a pixel by stored electrical charge. The pixels are arranged in rows and columns; the charge collected by the individual pixel elements in each row is shifted, row by row, to an output register and read out serially. A newer type of solid-state sensor is the CMOS imager, so called because it incorporates CMOS transistors; it includes light sensors with individual transistor amplifiers and electronic switches. The switches can connect the selected sensor to an output amplifier and usually are operated in an orderly sequence. Color filters can be placed in front of individual sensors so that they provide color information as well as a measure of the amount of light.
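The row-by-row CCD readout can be mimicked with a toy simulation; real CCDs shift analog charge packets, while here the "charge" is just numbers and transfer is assumed lossless:

```python
# Toy CCD readout: each pixel holds a charge value.  Rows are shifted
# into a readout register in order, and the register is then clocked
# out one pixel at a time, producing a serial stream.

def ccd_readout(pixel_array):
    """Return the serial stream produced by row-by-row readout."""
    stream = []
    rows = [row[:] for row in pixel_array]  # copy; rows shift out in order
    while rows:
        readout_register = rows.pop(0)      # shift next row into register
        while readout_register:
            stream.append(readout_register.pop(0))  # clock out one pixel
    return stream

image = [[10, 20], [30, 40]]
print(ccd_readout(image))  # prints [10, 20, 30, 40]
```

Note how the serial stream is exactly the raster order discussed earlier, so the receiver can rebuild the image without any location data.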
The information associated with an electronic image can produce a visible image in several ways. Probably the most common image presentation uses a cathode ray tube. In this tube, a beam of electrons excites a phosphor that produces light. Modern color cathode ray tubes incorporate a complex image plane with regions of three colored phosphors. The electron beam traces a raster, which corresponds to the raster used to scan the original image. When this original image is stored in an electronic memory, the data are read out in the order needed to display it on a standard raster. This is called a bit-mapped image. Liquid crystal displays (LCDs) are an important current technology. Individual pixels are implemented with tiny “light valves” that control the amount of light coming from that pixel. The light valves work by electrically shifting the polarization of light passing through a liquid crystal material. Polarized light passes through the liquid crystal and encounters a second polarizer, which transmits only light aligned with it. The contrast ratio (brightest-to-dimmest light) is limited with a light valve; contrast ratios between 200 and 500 are now available. Transmissive LCD displays can provide around 200 nits of luminance. Color LCDs are implemented by placing color filters in the path of the light valve.
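The light-valve principle can be quantified with Malus's law for crossed polarizers; the angles below are illustrative assumptions, not measurements of any particular LCD:

```python
import math

# LCD light valve: intensity transmitted through two polarizers follows
# Malus's law, I = I0 * cos^2(theta), where theta is the angle between
# the (electrically rotated) polarization and the second polarizer.

def transmitted(i0, theta_deg):
    return i0 * math.cos(math.radians(theta_deg)) ** 2

bright = transmitted(100.0, 0.0)   # valve fully open
dark = transmitted(100.0, 87.0)    # valve nearly closed (assumed 3-degree leakage)
print(f"contrast ratio: {bright / dark:.0f}")  # prints 365
```

With this assumed residual misalignment the contrast ratio lands in the few-hundred range the text cites; perfect 90-degree extinction would give infinite contrast, which real valves never achieve.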
LCDs work by controlling transmitted or reflected light. Plasma displays and light-emitting diode (LED) displays, like cathode ray tubes, provide light directly and are consequently quite bright. It is difficult to fabricate small pixels (0.3 mm in a 15-inch cathode ray tube display) with these technologies. Plasma technology creates light from a glowing plasma which excites colored phosphors. Pixel dimensions can be made sufficiently small to realize large-format television (50-inch) displays. Conventional semiconductor LEDs are relatively large and are suitable only for very large displays (greater than 10 feet) but can be very bright (5000 nits). Organic LEDs (OLEDs) are evolving rapidly. Very small OLEDs can be fabricated. High-resolution (852 × 3 × 600 pixels) “microdisplays” (0.62-inch diagonal) are currently available.
An electronic image can be manipulated as data and offers tremendous opportunities to present or receive visual information which is beyond the power of physical optics. Moreover the current generation of computer technology is fast enough to process an image as we view it. This makes it possible to see subtle differences in light intensity by mapping shades to differences in color. Edges can be enhanced. Reference points can be marked, distances measured, and templates superposed. There is much promise in the computer-enhanced electronic image.
Stereoscopic Vision
We see the world through two eyes. Stereoscopic vision is of immense value to a surgeon, and most surgical microscopes provide a stereoscopic view. Stereoscopy requires acquiring and transmitting images for the left and right eyes in the same amount of time required for transmitting a single image, which doubles the required bandwidth. Reducing the bandwidth with slower transmission produces unacceptable flicker and interrupted motion. The current generation of computer technology offers greatly increased bandwidth, so we can anticipate that economical, high-quality stereo imaging will become feasible. This suggests practical possibilities for stereoscopically recording or viewing an image from an operating microscope.
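The bandwidth-doubling argument can be made concrete with back-of-envelope numbers; the frame size, rate, and color depth below are assumptions chosen for illustration, not figures from the text:

```python
# Back-of-envelope: stereo doubles the pixel traffic because the left
# and right images must arrive in the time one image took before.
# Assumed illustrative parameters (uncompressed video):
width, height = 1920, 1080   # pixels per frame
fps = 30                     # frames per second
bits_per_pixel = 24          # 8 bits each for R, G, B

mono_bps = width * height * fps * bits_per_pixel
stereo_bps = 2 * mono_bps    # one stream per eye, same frame period

print(f"mono:   {mono_bps / 1e9:.2f} Gbit/s")    # prints 1.49 Gbit/s
print(f"stereo: {stereo_bps / 1e9:.2f} Gbit/s")  # prints 2.99 Gbit/s
```

Gigabit-class links, or compression, are therefore a precondition for flicker-free stereo transmission.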

In conclusion, this chapter has given a perspective on optics as well as a focus on an important common denominator: the properties of the tiny portion of the electromagnetic spectrum known as visible light. The wavelength of visible light is critical in understanding the structural dimensions of the optical systems of animal and human eyes. Electronic images are increasingly common and have unique characteristics and possibilities, which should be included whenever vision and visible light are considered.


1. Zeilik M. Astronomy: the evolving universe, ed 3. New York: Harper & Row; 1982.

2. Kippenhahn R. Light from the depths of time. New York: Springer-Verlag; 1986.

3. Oesterhelt D, Stoekenius W. Rhodopsin-like protein from the membrane of Halobacterium halobium. Nature New Biol. 1971;233:149–52.

4. Atkinson GH, Blanchard D, Lemaire H, et al. Picosecond time resolved fluorescence spectroscopy of K-590 in the bacteriorhodopsin photocycle. Biophys J. 1989;55:263–74.

5. Yokoyama S, Yokoyama R. Molecular evolution of human visual pigment genes. Mol Biol Evol. 1989;6:186–97.

6. Dawkins R. The blind watchmaker. New York: WW Norton; 1986:85–6.

7. Calder N. The life game. New York: Viking Press; 1974.

8. Burton VL. Life story. Boston: Houghton Mifflin; 1962.

9. Marshall K. The story of life. New York: Holt, Rinehart, and Winston; 1980.

10. Jenkins FA, White HE. Fundamentals of optics. New York: McGraw Hill; 1950:290–3.

11. Emsley HH. Visual optics. London: Hatton Press; 1950:47.

12. Blatt FJ. Principles of physics. Boston: Allyn and Bacon; 1987.

13. Lythgoe JN. The ecology of vision. Oxford: Clarendon Press; 1979.

14. Snyder AW, Bossomaier TR, Hughes A. Optical image quality and the cone mosaic. Science. 1986;231:499–501.

15. Prause JU, Jensen OA. Scanning electron micrograph of frozen-crack, dry cracked and enzyme digested retinal tissue of a monkey and man. Graefes Arch Klin Exp Ophthalmol. 1980;212:261–70.

16. Enoch JM. Retinal receptor orientation and the role of fiber optics in vision. Am J Optom Arch Am Acad Optom. 1972;49:455–70.

17. Snyder AW, Menzel R. Photoreceptor optics. Berlin: Springer-Verlag; 1975.

18. Snyder AW, Miller WH. Photoreceptor diameter and spacing for highest resolving power. J Opt Soc Am. 1977;67:696–8.

19. Snyder AW. Coupled mode theory for optical fibers. J Opt Soc Am. 1972;62:1267–77.

20. Barlow HB. Critical limiting factors in the design of the eye and visual cortex: The Ferrier Lecture 1980. Proc R Soc Lond B Biol Sci. 1981;212:1–34.

21. Boettner EA, Wolter JR. Transmission of the ocular media. Invest Ophthalmol. 1962;1:776–83.

22. Miller D, Benedek G. Intraocular light scattering. Springfield: CC Thomas; 1973.
