
The Path of Light

The Straight and Narrow

What do you first notice about the image above? I see beautiful beams of sunlight streaming into Grand
Central Station from windows high overhead. Then I notice that these sunbeams are streaming in with
absolutely straight paths. Why is the light traveling in straight paths? Shouldn't it be bent due to the
"refraction" of the windows?

The second question is an easy one to answer. Yes, the light is bent at the window, but only at the
interface (boundary) between the window and the surrounding air. We deal with this topic of light
bending (refraction) in a later module. Here, we just need to realize that the light path is absolutely
straight when the light is not encountering any interface. So whether it's traveling through air, water,
glass, diamond, or any substance (or none at all), light travels in a straight path, until it encounters a
different medium. In fact, it travels straight at a constant speed, c, called the speed of light, where in
vacuum

c = 300,000 km/sec or about 186,000 miles/sec.

The velocity is slower in transparent materials such as water and glass.

The Path of Light is Reversible

The path of light is straight and its velocity constant within a material of uniform composition, such as
glass. Both the path and the velocity of light change when light enters another material, such as air or
water, and the change occurs at the boundary, or interface, between the two materials.

The path of light can undergo many changes as light goes from its source to your eye. To analyze the
path it is useful to know that the path is reversible. If you trace the path from water to air to glass, you
can reverse the direction and follow the same path from glass to air to water. The use of this
reversibility concept helps test your ideas of the light path.

The World Through a Hole

When we made our pinhole viewer with the cardboard tubes and tape, we punched a small hole (or
several) in the aluminum foil with a pin. We looked into the viewer and could see an image of a bright
light source on the tape. How did this happen? How could the light from the source get through the tiny
pinhole to create a complete image of the source?

All visible objects either emit light or reflect light. Some do both; most just reflect. The way that we
"see" things is by the light that is reflected off of them into our eyes. When an object either emits light
or reflects it, the light travels in all directions from the object. You can confirm this by just walking
around your computer and looking at it: you can see it from every angle. Only a tiny fraction of the
light from the object gets into our eyes at one time, however. Even then, we are usually focusing on
one part of the object and not the whole thing at once (unless it's far away).

Imagine a tree off in the distance. We may focus our eyes on the top branches of the tree but light from
the trunk and the branches facing us is still traveling into our eyes. The same is happening with the
pinhole and the light source. But why is the image upside down on the screen? Here is where we have
to think about the path of light.

We aim our pinhole viewer at the tree and notice that the tree's image on the tape screen is upside
down. Imagine a ray of light from the top branch of the tree traveling toward the pinhole. Since the
pinhole is so small, only this ray and a few others from the top branch can pass through the hole. The
light rays haven't encountered any different media, so they travel in a straight path from the tree
branch to the screen. Notice in the image above that the rays from the top branch and from the trunk
have to cross at the pinhole in order to pass through. This is what causes the image of the tree (or light
bulb, or any other object) to be upside down. Our pinhole is acting as a converging lens!
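The similar-triangles geometry of rays crossing at the pinhole also sets the size of the inverted image. Here is a minimal sketch; the scaling formula follows from the ray geometry described above, and the specific distances are hypothetical:

```python
def pinhole_image_height(object_height, object_distance, screen_distance):
    """Rays cross at the pinhole, so similar triangles scale the
    inverted image by (screen distance / object distance)."""
    return object_height * screen_distance / object_distance

# A 10 m tree viewed from 50 m, with the tape screen 0.25 m behind the pinhole:
h = pinhole_image_height(10.0, 50.0, 0.25)  # 0.05 m: a 5 cm inverted image
```

The same ratio explains why aiming the viewer at closer objects produces larger (but dimmer) images on the screen.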

What do you think would happen if we enlarge our pinhole? Does the image become more or less
clear? Does it get brighter or dimmer? Try this by punching a new, larger hole in your pinhole viewer
and observing the light source through your viewer.

What might be a practical (and fun) way of utilizing this simple device? How do cameras work? Do
you necessarily need a fancy lens to make photographs?
To learn how to make a pinhole camera and develop the film, go to the Reading "Photography with a
Pinhole Camera."
Photography with a Pinhole Camera
Robert Alber

Back to Basics

Taking pictures with a pinhole camera is one of the simplest forms of the photographic process. But
wonderfully, making and using a pinhole camera provides the student with an understanding and appreciation
for not only photography, but human physiology, chemistry, light physics, mathematics, art and possibly, a little
magic.

All cameras, from the most sophisticated to the pinhole, rely on the same elementary principles, performing in
similar fashion to the human eye. Like your eyes, the camera needs light to operate. Light moves into the eye
through the pupil, a hole that is made smaller or larger by the iris.

Light gets into the camera through a hole called an aperture that is made larger or smaller by a diaphragm. The
camera can also shut out all light with a shutter, similar to closing your eyelids, or opening them to let the light
pass.

Recall what happens when you enter a movie theater on a sunny afternoon. It takes some time for your eyes to
adjust to the low light. At first you cannot see anything, but soon you begin to make out objects and within a
short time you can see pretty well, even in that darkened room.

This is much like a camera making a long exposure in low light. The diaphragm opens as wide as it can to allow
maximum illumination. For your eyes, the iris opens wide. The eye's retina, like the camera's film, is sensitive
to changes in light and sends messages to the brain about the images you see. As you leave the theater and
return to the sunlight, the opposite situation occurs. The eye's iris closes down as it's flooded with light. In
bright light the camera's diaphragm closes down, or stops down.

Unlike the camera, the eye is constantly and automatically reacting to varying light, focusing and refocusing.
The camera has to be adjusted for each situation. The camera, however, can bring into focus objects both near
and far and record them on film at the same time. Your eye cannot.

While modern cameras and the human eye use sophisticated systems to focus images, including color correction
and lenses to improve clarity and magnification, much simpler techniques can and do work.
Consequently, the pinhole camera can produce surprising results using just a light-tight box to capture the
image transmitted to film through a simple hole (aperture) made with a pin or, for a better level of performance,
a sewing needle.

Constructing the camera

The camera body can be made from any container that is practical to handle and can be fixed so that it will not
allow stray light (all but light entering through the aperture) to enter. Common pinhole camera bodies range
from small rectangular jewelry boxes and the old metal band aid tins to shoe boxes and one-pound coffee cans.
The curved shapes of the coffee can or oatmeal carton will produce a more surrealistic or panoramic image.

Keep in mind that a sturdy container usually has better light-blocking properties, is easier to work with and will
last longer. The best overall size is in the neighborhood of six inches square. The shape of the box can present
some real creative possibilities and the box depth, from removable cover to back, relates to the angle of view of
the transmitted image. In other words: a container that is very shallow will yield a wide angle view; and a
container with depth will yield a telephoto image.
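The shallow-wide versus deep-telephoto relationship can be made quantitative with a little trigonometry. A hedged sketch follows; the half-angle formula is standard pinhole geometry, not given in the text:

```python
import math

def angle_of_view_deg(film_width, box_depth):
    """Full horizontal angle of view for a pinhole sitting box_depth
    in front of a film plane of the given width (same units)."""
    return 2 * math.degrees(math.atan(film_width / (2 * box_depth)))

# A 6-inch-wide film plane in a 4-inch-deep box vs. an 8-inch-deep box:
wide = angle_of_view_deg(6.0, 4.0)  # about 74 degrees: a wide-angle view
tele = angle_of_view_deg(6.0, 8.0)  # about 41 degrees: a more "telephoto" view
```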

For purpose of explanation, let's assume that a sturdy, cardboard box about six inches square and four inches
deep is used for the camera body. The bottom of the box will be used to hold the film (more on that later) and
the box top or lid will become the light-transport system (aperture, shutter). Some would choose to line the
box with black paper or paint it black to cut down on the possibility of stray light, but if the lid is properly
prepared, or if the lid is equally as large as the box itself, that won't be necessary.

To convert the lid to an aperture you will need to first cut out a hole in the center of the lid about one-inch
square. Cover the hole with as heavy a grade of aluminum foil as you can find and tape it down with black
plastic electrician's tape. Then using a small sewing needle, carefully punch a hole in the center of the foil
with the needle, taking care not to move the needle from side-to-side. An easy, but deliberate, straight-in-
straight-out, motion will work nicely.

The foil will provide a much more accurate aperture than one made by punching the needle directly through
the lid. Also, it gives you the opportunity to repeat the process if need be with the minimum of trouble. It may
also be easier to punch the hole in the foil prior to taping it on the camera. Greater accuracy can be obtained by
using a cushion underneath, like a phone book or piece of cardboard.

A more sophisticated aperture can be made by using thin, brass shims which can be found at a good automotive
supply or hardware store. The difference of making an aperture in cardboard, aluminum foil or brass shims can
be noticed in the clarity and sharpness of the final photograph. Modern cameras use computer designed lenses,
ranging from hand ground glass to machine ground plastic. But the aperture itself can function as a lens.

The pinhole (in this case, needle hole) transmits rays of light so that they strike the film in tight clusters. The
result is an acceptably clear photo. Results are improved with better materials, but also with smaller apertures.
Size makes a difference because the smaller aperture transmits only a few rays from each point reflected from
the scene. The finer the rays of light, the tighter the cluster hitting the film and the better the representation of
the image viewed. In other words: pin-point accuracy. Larger apertures will transmit a much softer and less
focused image. Experimenting with the foil apertures of various sizes will provide a dramatic illustration.
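One caveat worth knowing, which goes beyond this reading: the aperture cannot be made arbitrarily small, because diffraction eventually blurs the image again. A commonly quoted rule of thumb attributed to Lord Rayleigh puts the optimum diameter near 1.9 times the square root of (pinhole-to-film distance x wavelength):

```python
import math

def optimal_pinhole_mm(focal_length_mm, wavelength_nm=550):
    """Rayleigh's rule of thumb: d = 1.9 * sqrt(f * lambda), with f
    the pinhole-to-film distance and lambda the light's wavelength."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.9 * math.sqrt(focal_length_mm * wavelength_mm)

# For the roughly 4-inch (about 100 mm) deep box described above:
d = optimal_pinhole_mm(100)  # roughly 0.45 mm, close to a fine sewing needle
```

That the answer lands near a sewing-needle diameter is one reason the needle works so well in practice.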

Shadow catching

With camera in hand it is time to load the film and get to making photographs. While real photographic film can
be used, it is easier and a more visible learning experience to use photo paper as film. Let's review the basic
structure and differences with a little, very little, chemistry.

Photographic film is made in layers. The base is clear plastic coated with light sensitive material held in place
by an emulsion layer. The light sensitive material is actually tiny particles of silver that when exposed to light,
chemically react and etch a reflection of the image viewed. Not all film reacts the same. Some, with larger silver
particles, is more light sensitive, and color film differs from black and white, either having more layers to
capture various hues or using dyes. All film, and photographic papers, have an emulsion layer. This is actually a
gelatin that holds the silver in place. The emulsion is fastened with adhesive to the plastic film base and then
coated with a scratch resistant material.

Two additional pieces of photo trivia will aid in the understanding of the photographic process related to the
transmission of light and film:

1. The image viewed by the camera is recorded upside down on the film. Our eye functions identically, but
the brain reverses the image. Modern cameras have a system of mirrors that reverse the image as you
look directly through the lens.
2. The scene is recorded on the film as a negative image. In other words, whites are black, blacks are
white, etc. That means then, that the white parts of the negative image are actually unexposed silver
particles and the black parts are fully exposed.

Film, unless packaged in a light-tight container, must be handled in total darkness both when loading the
camera and processing. For this reason, it is easier to use photographic paper, which can be used under low
level photo lights, in the pinhole camera. Photo paper is also a lot less light sensitive and therefore less likely to
react to stray light. Later, you can choose to experiment with real film as your experience increases. Photo paper
can be purchased in a wide variety of sizes with the most common being 5 x 7 and 8 x 10. Paper is coated with a
plastic resin (RC) and will process quickly and dry flat. The paper will probably need to be cut to fit the inside
of your pinhole camera. The cutting and loading will have to be done under safelight, a dark red, or yellow
photo light. Once cut to size it can be placed in the camera so that the emulsion, slick side, faces the pinhole
(aperture). Before you turn on the light and leave the darkroom you will need to place your finger gently over
the pinhole to prevent stray light from exposing the film.

Incidentally, your finger now becomes the pinhole camera's shutter.

Because photo paper reacts slower to light than film and because the sewing needle creates a tiny aperture,
exposure times for the photograph will be lengthy. This will call for a bit of experimentation: not all
cameras, apertures and lighting conditions are the same.

Suggested exposure times
outside, bright sun: one minute
outside, cloudy: five minutes
inside, sunny window: eight minutes
inside, sunny room: 14 minutes
inside, dim light: 30 to 40 minutes

Keep in mind that the longer the exposure, the darker the resulting negative, which will make for a light, or
washed-out, print. Think of baking. The longer something is in the oven the darker it will become. Just a little
experience with the pinhole camera will greatly improve your pictures. Remember that if a negative is
overexposed (too dark) and you had exposed it for six minutes, a three minute exposure will reduce the
exposure by 50 percent. This is all too logical, but often beginning photographers will only reduce the exposure
by a few seconds, producing a negative almost identical to the problem they are trying to solve.
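The point about halving rather than nibbling can be expressed as scaling the previous exposure by a factor instead of shaving off seconds; a trivial sketch:

```python
def adjusted_exposure(previous_seconds, factor):
    """Scale the previous exposure time: use 0.5 to halve an
    overexposed (too dark) negative, 2.0 to double an
    underexposed (too light) one."""
    return previous_seconds * factor

# The six-minute negative that came out too dark:
retry = adjusted_exposure(6 * 60, 0.5)  # 180 seconds, i.e. three minutes
```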
Returning to the darkroom with your exposed negative you will unload the camera -- again under safelight
conditions -- and process the print in a series of chemical baths. Each chemical should be pre-mixed and placed
in plastic trays commonly used for the purpose. All of the following materials are readily available at any
decent camera supply store.

The first processing solution is the paper developer. Dektol, a Kodak product is common. Developer can be
mixed 1:2 (one cup Dektol, two cups water) and it will turn brown when exhausted. Under the safelight, gently
slide the photo paper negative into the solution and rock the tray carefully. It is a good idea to use photo tongs
(not metal tongs) to handle the print. Some people's skin may react to the developer, and tongs also reduce
the chance of chemical contamination from tray to tray. Each tray, other than the water baths, should have
separate tongs. The negative image will begin to appear after a few seconds in the developer. After about two
minutes, development has been achieved and the negative, if properly exposed, will not change. If the negative
has been severely overexposed it will turn completely black.

From the developer, the negative is placed in a water bath that removes the Dektol and arrests development.
The negative is then placed in the fixing agent. Kodak fix, or fixer, is a chemical that removes all the
unexposed silver particles and hardens the paper surface. After a few seconds in the fixer, room lights may be
turned on. Total time in the fixer need not exceed four minutes. The negative is then placed in the last water
bath for about four minutes.

(Once the room lights are on you will notice, for sure, if you have exposure problems and need to try again. You
will also notice how sharp the image is and exactly what a negative image looks like. It will take some
experience to judge the quality of a negative. If you have had success so far, then the most difficult parts are
now behind. All that's left is reversing the negative to a positive print.)

If this is your first darkroom experience then you will want to see a "real" print as soon as possible. Fortunately,
with pinhole photography you won't have to wait long. You can even process a print before the negative dries.
In fact, often it is easier that way. What you need is another piece of photo paper the same size as your negative
or slightly larger.

Remember to turn off the white light and handle the paper only under a photo light! If your darkroom has an
enlarger -- a machine for making prints from negatives -- you can use its light source for the printing process.
Turn on the enlarger --keep photo paper packaged-- and adjust the size and shape of the light it projects so that
it is several times larger than your intended print. Place a photo easel under the light, or mark the top edge of the
light beam with tape, and turn the enlarger off.

Now, with all the lights off, except the photo light, remove a sheet of fresh photo paper and place it emulsion
side up in the easel, or where the light will strike, marked by the tape. Place your negative, wet or dry, upside
down and directly on top of the photo paper. If the negative is wet, roll out any bubbles with your hand or a
roller made for that purpose. If the negative is dry you will probably have to sandwich it and the photo paper
under a clean and rather heavy piece of glass. You are now ready to make a second exposure, this time
exposing the paper to the enlarger's light, which will pass right through the negative and reverse the image on
the photo paper by creating another silver particle reaction.

Now, run the newly exposed piece of photo paper through the same chemical process and, there you have it, a
wet, but finished black and white positive print. Blot off the excess water, or use a photo squeegee and set it out
to dry. The resin coated paper will dry soon and dry flat. If you are in a big hurry for a final dry print, use a hair
dryer, taking care not to crackle the resin coating with heat too high. Some photographers have been known to
use a microwave to dry resin coated prints.

If you do not have an enlarger, or access to one, you can still make a positive print. Just turn on the room light.
This will take some experimentation as the light won't be as concentrated as under the enlarger's lamp, but will
work.

The Reflection of Light

All Things Reflected

What is it about objects that let us see them? Why do we see the road, or a pen, or a best friend? If an object
does not emit its own light (which accounts for most objects in the world), it must reflect light in order to be
seen. The walls in the room that you are in do not emit their own light; they reflect the light from the ceiling
"lights" overhead. Polished metal surfaces reflect light much like the silver layer on the back side of glass
mirrors. A beam of light incident on the metal surface is reflected.

Reflection involves two rays - an incoming or incident ray and an outgoing or reflected ray. In Figure 1 we use
a single line to illustrate a light ray reflected from the surface. The law of reflection requires that the two rays
lie at identical angles but on opposite sides of the normal, an imaginary line (dashed in Fig. 1) at right angles
to the mirror located at the point where the rays meet. We show in Fig. 1 that the angles of incidence i and
reflection i' are equal by joining the two angles with an equal sign.

Fig. 1 Light reflected from a metal surface with angle of incidence i equal to the angle of reflection i'. The dashed line (normal) is
perpendicular to the surface.

All Things Equal

All reflected light obeys the relationship that the angle of incidence equals the angle of reflection. Just as
images are reflected from the surface of a mirror, light reflected from a smooth water surface also produces a
clear image. We call the reflection from a smooth, mirror-like surface specular (as shown in Figure 2a). When
the surface of water is wind-blown and irregular, the rays of light are reflected in many directions. The law of
reflection is still obeyed, but the incident rays (Fig. 2b) strike different regions which are inclined at different
angles to each other. Consequently, the outgoing rays are reflected at many different angles and the image is
disrupted. Reflection from such a rough surface is called diffuse reflection and appears matte.

Fig. 2 Light reflection from a) smooth surface (specular reflection ) and b) rough surface (diffuse reflection). In both cases the angle of
incidence equals the angle of reflection at the point that the light ray strikes the surface.

Light is also reflected when it is incident on a surface or interface between two different materials such as the
surface between air and water, or glass and water. Each time a ray of light strikes a boundary between two
materials - air/glass or glass/water - some of the light is reflected. The laws of reflection are obeyed at all
interfaces. The amount of reflected light at the interface depends on the difference in refractive index between
the two adjoining materials.

The Refraction of Light

Changing the Speed of Light

Ever notice how your leg looks bent as you dangle it in the water from the edge of a pool? Why do fish seem to
radically change position as we look at them from different viewpoints in an aquarium? What makes diamonds
sparkle so much?

These are all questions that can be addressed with the important concept of refraction, the bending of light as it
encounters a medium different from the medium through which it has been traveling. This meeting place of two
different media is called the interface between the media. All refraction of light (and reflection) occurs at the
interface.

What happens at the interface to make light refract or reflect? When light is incident at a transparent surface, the
transmitted component of the light (that which goes through the interface) changes direction at the interface.
Another component of the light is reflected at the surface. As shown in Figure 1, the refracted beam changes
direction at the interface and deviates from a straight continuation of the incident light ray.

Figure 1. Light in air incident on glass surface where it is partly reflected at the interface and partly transmitted into the glass. The
direction of the transmitted ray is changed at the air/glass surface. The angle of refraction r is less than the angle of incidence i.

The change of direction of light as it passes from one medium to another is associated with a change in velocity
and wavelength. The energy of the light is unchanged as it passes from one medium to another. When visible light
in air enters a medium such as glass, the velocity of light decreases to 75% of its velocity in air and in other
materials the decrease can be even more substantial. For example, in linseed oil, the velocity decreases to 66%
of its velocity in air. Figure 2 displays in bar chart format the velocity of light in different media. The 100%
value is the velocity of light in vacuum. For air, the velocity is 99.97% of the speed in vacuum. For some
pigments such as titanium (Ti) white, the velocity decreases to 40%.

Figure 2. Bar chart of the velocity of visible light in different media. The value of 100% refers to the velocity of light in vacuum.

Waves

Refraction is an effect that occurs when a light wave, incident at an angle away from the normal, passes a
boundary from one medium into another in which there is a change in velocity of the light. Light is refracted
when it crosses the interface from air into glass in which it moves more slowly. Since the light speed changes at
the interface, the wavelength of the light must change, too. The wavelength decreases as the light enters the
medium and the light wave changes direction. We illustrate this concept in Figure 3 by representing incident
light as parallel waves with a uniform wavelength λ. As the light enters the glass the wavelength changes to a
smaller value λ'. Wave "a" passes the air/glass interface and slows down before b, c, or d arrive at the interface.
The break in the wave-front intersecting the interface occurs when waves "a" and "b" have entered the glass,
slowed down and changed direction. At the next wave-front in the glass, all four waves are again traveling with
the same velocity and wavelength λ'.

Figure 3. Light waves of wavelength λ incident on glass change direction and wavelength when transmitted into the glass.
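Because the frequency is unchanged at the interface while the speed drops, the wavelength inside the medium follows the standard wave relation λ' = λ/n. A short sketch; the index value is an assumed, typical one for glass:

```python
def wavelength_in_medium(vacuum_wavelength_nm, n):
    """Frequency is fixed at an interface, so wavelength scales
    with speed: lambda' = lambda / n, where n = c / v."""
    return vacuum_wavelength_nm / n

# Green light of 550 nm entering glass (assuming n = 1.5):
lam = wavelength_in_medium(550, 1.5)  # about 367 nm inside the glass
```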

The waves are continuous and remain connected as they pass from one medium to another. We can think of it
like a long line of people running into the ocean. As the first few people run into the water, they're slowed down
because it's harder to run in water. Thus, they bunch up and stay bunched up as they run through the water.
When everyone in the line has entered the water, we would see a line of people all running in the same
direction, but the line would be shorter and the people would be bunched close together. If they run back to the
beach, the first few people would clear the water and run faster. Eventually everyone will have cleared the water
and would be running at the original pace with the original spacing between persons.

In this analogy we can think of the whole line of people as the "light wave" and the people themselves as the
"crests" of the wave. The distance from one person to her neighbor would be the wavelength of the wave and
the water would be the medium into which the light wave is traveling. Why, then, does the light wave change
direction when it enters the new medium?

We can extend our analogy and imagine two lines of people running into the ocean from the beach. The lines
are close together and each person in a line is matched up with another person in the other line. This is
analogous to waves a, b, c, and d in Fig. 3 above. When line "a" hits the water first, the line slows down. In
order to maintain the one-to-one relationship with the other line, both lines must turn when they hit the water.
Which way do they turn? Towards the normal - the imaginary line that runs perpendicular to the interface
between the two media (the water and the beach); a pier is a good example of something normal to the
water/beach interface.

So the two lines must turn towards the normal when they hit the water. The greater the change in velocity and
wavelength, the greater the change in direction. Figure 4 shows the change in direction for light in air incident
at 45° on water with refracted angle of 32° and on titanium white (a paint pigment) with a refracted angle of
16°. These angles correspond to the differences in velocity shown in Fig. 2.

Figure 4. Light incident at 45° on water and Ti white. The angles of refraction (32° for water, 16° for Ti white) depend on the optical
properties. The reflected components are not shown.
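These angles follow from Snell's law, n1 · sin i = n2 · sin r, which is assumed here (the reading itself introduces the refractive index only in the next part):

```python
import math

def refraction_angle_deg(incident_deg, n1, n2):
    """Snell's law: n1 * sin(i) = n2 * sin(r), angles in degrees."""
    sin_r = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(sin_r))

# Light in air (n about 1.0) incident at 45 degrees:
water = refraction_angle_deg(45, 1.0, 1.333)   # about 32 degrees
ti_white = refraction_angle_deg(45, 1.0, 2.5)  # about 16 degrees
```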

We can characterize the change in velocity by a number called the refractive index of the material.

Continue the Refraction Reading.


The Refraction of Light, Part II
The Refractive Index

The ratio of the velocity of light in vacuum to the velocity of light in a medium is referred to as the medium's
refractive index, denoted by the letter n. The velocity of light in a vacuum is 3.0 × 10^8 m/s or about 186,000
miles/s. If we go back to our beach/ocean analogy we can think of the refractive index of the water as
something like its density. Air is not very dense at all (its refractive index is 1.0003), so the people run through
quite easily; but when they run into water, which is denser than air (and has a refractive index of 1.333), the
people slow down and their line bends at the ocean/beach interface. For light, the index of refraction n equals
the ratio of the velocities of light in vacuum (c) to that in the medium (v), that is n = c/v.
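The definition n = c/v can be inverted to estimate the speed of light in any medium; a minimal sketch using the speed quoted earlier in this reading:

```python
C_KM_PER_S = 300_000  # speed of light in vacuum, as quoted earlier

def speed_in_medium(n):
    """n = c / v, so v = c / n (here in km/s)."""
    return C_KM_PER_S / n

v_water = speed_in_medium(1.333)  # about 225,000 km/s, i.e. 75% of c
v_air = speed_in_medium(1.0003)   # about 299,910 km/s, essentially c
```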

Refractive indices are most easily determined from the measured values of the incident angle and the angle of
refraction and their geometric relationship. Values of the refractive indices for the media shown in Fig. 2 are
given below.

Values of Refractive Index


Medium        Refractive Index
Air           1.0003
Water         1.33
Linseed Oil   1.48
Co Green      2.00
Diamond       2.42
Ti White      2.5

The path of light in air incident on and transmitted through a glass plate is shown in Figure 5. The angle of the
incident ray to the normal is 45° and equals that of the reflected ray. The transmitted ray is refracted at an angle
of 28° to the normal and exits the glass at an angle of 45° to the normal, an angle equal to that of the incident
ray. This explains why, for example, the image we see through a flat-glass window pane is unchanged from that
seen through an open window.

Figure 5. Light incident on a glass plate. The reflected part of the ray is shown along with the light path for the refracted component.
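As a sanity check, the angles quoted for the glass plate are consistent with applying Snell's law at each face, assuming n ≈ 1.5 for the glass:

```python
import math

def snell_deg(incident_deg, n1, n2):
    """Snell's law, n1 sin(i) = n2 sin(r), with angles in degrees."""
    sin_r = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(sin_r))

inside = snell_deg(45, 1.0, 1.5)          # about 28 degrees inside the glass
exit_angle = snell_deg(inside, 1.5, 1.0)  # back to 45 degrees on exit
```

Because the two faces are parallel, the exiting ray recovers the original 45° direction, which is why a flat window pane leaves the image unchanged.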

Light incident normal to a glass plate does not change direction as the transmitted light continues normal to the
surface (air/glass interface). The light is not refracted (that is, no change in angle) but the wavelength and
velocity do change. Light does reflect as it encounters the air/glass interface (about 4% in this case).

The paths of light traversing different media are reversible. The same relations are obeyed in Fig. 5, for
example, if the light were incident on the bottom of the glass plate. Similarly, in Fig. 4, if the light started in the
water, it would be refracted at the water/air interface and would retrace the same reversible path as for light
incident from air.

Total Internal Reflection

If light is inside a material such as glass, whose refractive index n2 is larger than the index n1 of the material
outside, such as air, there is an angle, the critical angle of incidence, beyond which the light is reflected back into the material
and does not escape. This is total internal reflection. The critical angle ic is given by
sin ic = n1 / n2

For light exiting glass into air, with n2 = 1.5 and n1 = 1.0, the relation becomes

sin ic = 1.0 / 1.5

and the value of the critical angle of incidence is about 42°. Light in glass incident on the glass-air interface at
an angle of 42° or more to the normal will be reflected back into the glass. The blue area in Fig. 6 is the angular
region where light is reflected back into the glass. The greater the difference between the two indices of
refraction, the smaller the critical angle and the less light can escape.

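The critical-angle relation sin ic = n1/n2 is easy to evaluate directly; a short sketch (the diamond index is taken from the table above):

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """Total internal reflection sets in beyond asin(n1/n2), where
    n_inside is the denser medium the light is trying to leave."""
    return math.degrees(math.asin(n_outside / n_inside))

glass = critical_angle_deg(1.5)     # about 41.8 degrees
diamond = critical_angle_deg(2.42)  # about 24.4 degrees
```

The much smaller critical angle of diamond means most internal rays are trapped and bounced around before escaping, which is part of why diamonds sparkle.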
Figure 6. The internal reflectance at an air/glass interface for light rays from a point source in glass. Light rays incident at angles to the
normal greater than the critical angle (here, about 42° for glass to air) do not leave the material and are reflected at the glass/air interface.

Optical fibers and diamonds are both materials with a high refractive index and are used for their properties of
"retaining" light.

Chromatic Aberration

Fortunately, the index of refraction of most materials is not the same for each wavelength passing through it;
otherwise, we wouldn't see rainbows or the sun's green flash at sunset. When white light passes through a lens
or a prism, blue light is bent more than red light. When seen through a lens, this change of index of refraction
with wavelength is called chromatic aberration, the change in focal length for different wavelengths of light.

Figure 7. Chromatic aberration in a lens. Blue light is bent more than red light so its focal length is shorter than that of red light, and
the "blue" image is located in front of the "red" image.

In precision lens systems, chromatic aberration is undesirable, so achromatic lenses are used to reduce or eliminate it. The simplest achromatic lens system is just two lenses made of glasses with different dispersions put together: one convex, one concave. Quality telescopes, microscopes, and camera lenses all have achromatic lens elements.
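As a rough illustration of why the blue focal length is shorter, the thin-lens lensmaker's equation, 1/f = (n − 1)(1/R1 − 1/R2), shows how a small change of index shifts the focal length. The indices and radii below are assumed round-number values for a crown-type glass, not data for any particular material:

```python
def lens_focal_length(n, r1, r2):
    """Lensmaker's equation for a thin lens: 1/f = (n - 1) * (1/r1 - 1/r2).
    Radii in meters; r2 is negative for a double convex lens."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# Illustrative (assumed) indices: the index is higher for shorter wavelengths
n_blue, n_red = 1.53, 1.51
f_blue = lens_focal_length(n_blue, 0.10, -0.10)
f_red = lens_focal_length(n_red, 0.10, -0.10)
print(f"f(blue) = {f_blue*100:.2f} cm, f(red) = {f_red*100:.2f} cm")
# The blue focal length comes out shorter, as drawn in Figure 7.
```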
Lenses and Geometrical Optics

The Focal Length, or Frying an Ant on the Sidewalk

We use lenses and mirrors every day but sometimes don't really understand how and why they do what they do.
In this section of the Readings, we'll learn a little more about how light interacts with such things as lenses and
how this interaction can help us.

Most optical elements (mirrors, lenses) can be characterized by basically one quantity: their focal length. To
understand focal length, we need to know something about the focal point.

In the image above, note how the light rays, after they pass through the lens, converge on a single point. Below
is a schematic drawing of the above situation.

The single point to which the light rays are converging is called the focal point. The distance, f, from the lens to
the focal point is the focal length. Each lens and mirror has its own focal length, which is its defining
characteristic.

Note, however, that in showing the focal length of the lens above, we used light rays that were all parallel as
they came to the lens. This isn't always the case: we can have light rays incident at different angles.

In the case above, the focal point is farther from the lens, because the incident rays were diverging as they came
to the lens.
What would happen if the rays were already converging as they hit the lens? Where would we find the focal
length in that case?

So when we discuss the "focal length" of a certain lens or mirror, we are usually referring to the focal length
you get when a distant object's light rays (parallel light) are incident on the lens or mirror.
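A small numerical sketch of this idea uses the thin-lens equation, 1/f = 1/do + 1/di (the 10 cm focal length is an arbitrary example):

```python
def image_distance(f, d_object):
    """Thin-lens equation: 1/f = 1/d_object + 1/d_image (distances in meters)."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

f = 0.10  # a 10 cm focal length lens
for d in (0.5, 1.0, 10.0, 1000.0):
    print(f"object at {d:7.1f} m -> image at {image_distance(f, d)*100:.2f} cm")
# As the object recedes and its rays become parallel, the image
# distance approaches the focal length (10 cm).
```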

To find out the difference between Real and Virtual Images, go on to the Second Part of these Readings.
Lenses and Geometrical Optics, Part II
Real vs. Virtual, or What Does the Ant See?

The lenses and mirrors that we are using in this module come in two basic flavors: convex and concave. The
lenses A and B in your Optics Kit are called double convex lenses and have positive focal lengths. They have
sides that bulge outwards and, as we saw above, converge light to a focal point. Lens C in your Kit is called a
double concave lens and has a negative focal length. The concave lens has sides that curve inward and diverge
light, as shown below for parallel beams incident on the lens.

What does it mean for a lens to have a negative or positive focal length? A positive focal length means that the
focal point of the lens is on the other side of the lens from where the object is placed. On the other hand, a
negative focal length means that the focal point is on the same side of the lens as where the object is placed.
This terminology is just a useful convention that allows physicists and engineers to characterize lenses and mirrors; it has no deeper physical meaning.

But how can we have an image on the same side of the lens as the object?

In an earlier activity, you tried to image (bring to a focus) the room lights with your various lenses. Did you get
an image of the lights with your concave lens (Lens C)? Most likely not, since the concave lens, as we have
seen, diverges light and can't bring light beams to a focus. The image, if there was one, would have been on the
same side of the lens as the source (the light).

This kind of image on the same side as the source is called a virtual image. It's an image that isn't really there;
the focal point in the drawing above is just the point where the rays would converge if they could. We can't
focus the beams onto a piece of paper, for instance like you tried earlier. The strange thing about a virtual
image, however, is that we can see it!

So what kind of image does a convex lens make? The drawing below shows us that a convex (positive) lens forms images on the side of the lens opposite the source.
Since we can form an image on a piece of paper, we call this kind of image a real image. As you've probably
guessed, we can also see real images.

The mirror tells all

Real and virtual images are not the exclusive domain of lenses -- if you want to see another virtual image, look in your bathroom mirror, or in the curved mirror on the passenger side of your car. Virtual images in a mirror form behind the mirror, unlike those of lenses, where the virtual image forms on the object side of the lens. Again, however, the virtual image is not a true image, in that it cannot be projected onto a screen. This is because the light rays reflecting off a plane mirror never converge, and convergence is the major criterion for creating a real image. Likewise, light bouncing off a convex mirror never converges, so these mirrors also create virtual images.

The dotted lines represent the extension of the reflected light rays if they could go through the mirror.

Only concave mirrors can create real images. When light hits the surface of a concave mirror, the reflected rays converge at a point on the same side of the mirror as the object.

Most real images are seen inverted (upside-down) while virtual images are seen upright. Look in both faces of
a well-polished spoon. One surface is a convex mirror, the other is a concave mirror: on which side is your
image inverted? upright?
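These sign conventions can be checked with the thin-lens equation: a positive image distance means a real image on the far side of the lens, a negative one a virtual image on the object side, and a negative magnification means the image is inverted. The focal lengths and object distance below are arbitrary examples:

```python
def image(f, d_o):
    """Thin-lens image distance and magnification (distances in meters).
    Positive d_i: real image (far side, can be projected on a screen);
    negative d_i: virtual image (object side).
    Negative magnification m: the image is inverted."""
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    m = -d_i / d_o
    return d_i, m

# Convex lens (f = +10 cm), object at 30 cm: real, inverted image
print(image(0.10, 0.30))
# Concave lens (f = -10 cm), same object: virtual, upright, reduced image
print(image(-0.10, 0.30))
```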
To learn how to find the position and size of an image, go to the Third Installment of our Lenses Readings.
Lenses and Geometrical Optics, Part III
Ray Tracing, or Where Should the Ant Move?

We've seen how we can find the focal length of various types of lenses, and what the difference is between a
real and a virtual image. Now we need to bring these concepts together so we can use them to our advantage.

Let's pretend for a moment that we live in a world in which optometrists or ophthalmologists won't make our eyeglasses or contact lenses. If you had the raw materials and knew the prescription of your eyes (the eye
doctors will at least do that service), how could you find the right lenses to make? We could make some lenses
and test them until we found the right ones, but this might make our eyes pretty tired and would take a long
time. If we knew the focal length of various lenses, we would be able to use a much more powerful technique to
make our glasses.

Ray tracing allows us to "see" how light is going to behave when we have lenses and mirrors in the path of the
light. If we know where the focal point of each lens and mirror is, we can figure out where the image of a
certain object will be formed. This will tell us where the image will fall on our retina so we can see it.

The idea is to draw light rays emanating from the object through the optical system (lenses & mirrors). The
diagram below shows how much of the light from the light source (a candle) passes through the lens and is
imaged on the other side (notice that the ray that started at the top of the candle ends up below the ray that
started at the bottom of the flame: the image is upside down).

Since there are infinitely many light rays (shaded area above) passing through the lens(es), it would be impractical to draw them all. Usually we need draw only two or three rays from a point on the light source to represent all the rays coming from that point.

If we know the focal length of each of the lenses and/or mirrors, our 3 rays will show us where the image of a
point from the source (or object) will be located.
In the diagram above, I chose "special" rays to locate the image: the ray (1) passing through the center of the
lens will always travel in a straight line without refracting; the ray (2) coming straight in from the candle top
(parallel to the optical axis) is bent so it passes through the focal point on the other side of the lens; and the ray
(3) passing through the focal point on the object (the left) side of the lens will be refracted so the light will exit
the lens travelling parallel to the optical axis. A very similar diagram could be drawn for the light rays coming
from the bottom of the candle, or from any other point on the candle.
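The same construction can be done numerically with 2×2 ray-transfer matrices, a standard geometrical-optics technique (not introduced in these Readings); a ray is a (height, angle) pair, and free space and a thin lens are each a small matrix. The candle height, focal length, and distances below are assumed examples:

```python
def space(d):     return ((1.0, d), (0.0, 1.0))        # free travel over distance d
def thin_lens(f): return ((1.0, 0.0), (-1.0 / f, 1.0))  # thin lens of focal length f

def apply(m, ray):
    """Apply a 2x2 ray-transfer matrix to a (height, angle) ray."""
    y, a = ray
    return (m[0][0] * y + m[0][1] * a, m[1][0] * y + m[1][1] * a)

f, d_o = 0.10, 0.30                  # 10 cm lens, candle 30 cm away
d_i = 1.0 / (1.0 / f - 1.0 / d_o)    # thin-lens equation: image at 15 cm

h = 0.02  # candle-top height: 2 cm above the optical axis
for angle in (-0.1, 0.0, 0.1):       # three rays leaving the same point
    ray = apply(space(d_o), (h, angle))  # travel to the lens
    ray = apply(thin_lens(f), ray)       # refract at the lens
    ray = apply(space(d_i), ray)         # travel on to the image plane
    print(f"start angle {angle:+.1f} rad -> height {ray[0] * 100:.2f} cm")
# All three rays arrive at -1.00 cm: a single inverted image point
# (magnification -1/2), just as the ray diagram predicts.
```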

With this technique of tracing various rays from a light source, we can tell where an image will form and how it will be oriented: inverted or upright. Now, go make those glasses!
Fresnel Lenses
During the height of the Shipping Age in the 18th century, France was looking for a way to make new
lighthouses along the coast of Normandy and Brittany. The lenses that were used in the lighthouses were huge
pieces of glass that were both bulky and expensive. In 1748, Georges de Buffon realized that only one side of a
lens is needed to bend light. In fact, only the outer surface of the lens is needed. Light is bent at the lens/air
boundary.

De Buffon cut away the inside of the lens and left rings with edges on the outside. Later, Augustin Fresnel
modified this idea and the modern Fresnel lens was created. His lenses were first used on the French coast as a
lightweight and less-expensive alternative to the old, bulky lighthouse lenses.

Below is a schematic cut-away diagram showing how a Fresnel lens is made.

Fresnel lenses are lighter than conventional convex lenses. A Fresnel lens can have the same focal length as a conventional lens, so both can have the same magnification. The image quality, however, is not as good as that of a conventional lens, and when plastic is used, as is the case for the Fresnel lens in the Optics Kit, the image quality is further degraded compared with that of a quartz lens.
The Life Cycle of the Photon
The early weeks of the "Patterns in Nature" course deal with the properties of light that we can see and experience. Light is reflected, refracted, or absorbed. Later the course covers color and goes on to x-rays. It is at this point that the description of x-rays requires a more general term than light. The term photon is used as a general term to describe the electromagnetic spectrum from radio waves to infrared (IR) radiation to visible light (the Greek word "photos" means light) to x-rays and gamma rays.

The behavior of all these photons is basically the same: the photons travel at the same velocity in vacuum, with energies that vary over more than ten orders of magnitude, from the very low energies of radio waves to the very high energies of gamma rays emitted from the nuclei of radioactive atoms.

If light is imagined as a flow of particles, the particles are called photons, with each photon carrying a discrete packet of energy. For a beam of fixed-energy photons, the intensity of the beam depends on the number of photons per second. Light can also be described as waves, with the distance between wave crests, the wavelength, inversely proportional to energy. Low-energy radio waves have wavelengths of meters, and high-energy x-rays can have wavelengths of a ten-billionth of a meter (10⁻¹⁰ m) or less.

Light can be described as a particle (photon) or a wave (electromagnetic wave). The electromagnetic wave can
be pictured as oscillating electric and magnetic fields that move in a straight line at a constant velocity (the
speed of light).

Sound waves are not part of the electromagnetic spectrum and are not identified with photons. Light travels in vacuum - from the sun to the earth, for example - whereas sound, which is a vibration of air molecules, cannot exist in vacuum. Both have wave behavior, but the mechanisms are different.

Light described as photons allows a visualization of the absorption - disappearance - of light. In the
photoelectric effect shown in Fig 1 below, a photon incident on a metal surface transfers all its energy to an
electron and disappears while the electron - now containing the energy of the photon - leaves the surface of the
metal.

Figure 1. In the photoelectric effect a photon incident on a surface (here, a metal surface in vacuum) transfers its energy to an
electron, which leaves the surface and is detected. The photoelectric effect demonstrates the particle nature of light.

This photoelectric effect was described by Albert Einstein in 1905 as the interaction of a photon with an
electron in a solid.
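Einstein's photoelectric relation, KE = E_photon − W, where W is the energy binding the electron to the metal (the work function), can be sketched in a few lines. The work function value below is an assumed, sodium-like number used only for illustration:

```python
def photoelectron_energy(photon_ev, work_function_ev):
    """Einstein's photoelectric relation: KE = E_photon - W (all in eV).
    Returns None when the photon hasn't enough energy to free an electron."""
    ke = photon_ev - work_function_ev
    return ke if ke > 0 else None

W = 2.3  # assumed work function in eV (roughly that of sodium)
ke = photoelectron_energy(3.1, W)   # violet photon frees an electron
print(f"{ke:.1f} eV")               # 0.8 eV
print(photoelectron_energy(1.8, W)) # red photon: no electron at all
```

The second line illustrates the key quantum point: a photon below the work function ejects no electron at all, no matter how intense the beam.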
The photon has only energy and no mass. When the photon gives up all its energy to an electron, the photon disappears. The electron, on the other hand, has mass and charge - electrons carry the electrical current in wires.

Photons at extreme energies (millions of electron volts) can create electrons. Here, the high-energy photon, or gamma ray, produces an electron and its antiparticle, the positron (positively charged, with the same mass as the electron). The gamma ray disappears (Fig. 2) in the formation of the electron and positron, a process called pair production.
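Because the gamma ray must supply the rest energy (mc², about 0.511 MeV) of both created particles, there is a minimum gamma-ray energy for pair production:

```python
ELECTRON_REST_ENERGY_MEV = 0.511  # mc^2 for the electron (and the positron)

# Pair production: the gamma ray must supply at least two rest energies
threshold = 2 * ELECTRON_REST_ENERGY_MEV
print(f"minimum gamma-ray energy: {threshold:.3f} MeV")  # 1.022 MeV
```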

Figure 2. The formation of an electron-positron pair by the disappearance of a gamma ray (pair production).

The two oppositely charged particles (electron and positron) can recombine with each other and disappear with
the creation of a photon.

In the realm of visible light, photons are absorbed and disappear by giving their energies to outermost atomic electrons, which are held in place by energies of only a few electron volts. Figure 3 shows electrons occupying energy levels (vertical scale) and an incident photon losing its energy and disappearing by moving an electron to a higher energy level.

Figure 3. A photon absorbed in a solid delivers all its energy to an electron, which moves to a higher energy level.

In the discussion of reflection and refraction it was pointed out that the path of light is reversible. A photon in air incident on glass is refracted at the air/glass boundary; the same path is followed in reverse by a photon in glass incident on air, refracted at the glass/air boundary. A similar reversibility is found in the interaction of photons and electrons. In Fig. 3, if the energetic electron returns to its original position, the energy given up in the transition from the higher to the lower energy state appears in the form of a photon. The same reversibility is true of pair production (Fig. 2): if an electron and a positron combine and disappear, their energy appears as a gamma ray.
This, then, is the life cycle of a photon, as shown in Fig. 4. Life and death (creation and disappearance) are contained in the interaction of electrons and photons. The electrons are charged particles with mass: they change energy but do not disappear. The photons have neither mass nor charge: they appear and disappear.

Figure 4. The life cycle of a photon: a) an electron loses energy and creates a photon, which b) travels through space until it encounters c) an electron and disappears.

Vision is also connected with the life cycle of the photon. We detect objects by the reflection of light from them. The reflected light enters the eye and triggers the photoreceptors, which send a signal to the brain. The photon disappears in giving its energy to the photoreceptor.

Quite the opposite view was held by Plato and Euclid. They believed that the eye emitted rays of light (Fig. 5) which sense the objects in front of it. This incorrect model of vision is still on display today in the x-ray vision of Superman: he projects x-rays and somehow senses the objects in front of him.

Figure 5. The eye sends out visual rays to sense what is in front of it. This is a misconception of the role of the eye.
Color and Light

Here we ask how color is sensed by the viewer. To answer the question we need to specify how color is
described and how color information is received by the eye. The starting point of an understanding of color is a
description of light.

Light: Photons and Waves

Isaac Newton discovered in 1672 that light could be split into many colors by a prism, and used this
experimental concept to analyze light. The colors produced by light passing through a prism are arranged in a
precise array or spectrum from red through orange, yellow, green, blue, indigo and into violet. The students'
memory trick is to recall the name "Roy G. Biv" where each letter represents a color. The order of colors is
constant, and each color has a unique signature identifying its location in the spectrum. The signature of color is
the wavelength of light.

Fig. 1. The electromagnetic spectrum, which encompasses the visible region of light, extends from gamma rays with wavelengths of one hundredth of a nanometer to radio waves with wavelengths of one meter or greater.

Nearly two hundred years after Newton's discoveries, James Clerk Maxwell showed that light is a form of electromagnetic radiation. This radiation includes radio waves, visible light, and x-rays. Figure 1 shows
electromagnetic radiation as a spectrum of radiation extending beyond the visible radiation to include at one end
radio waves and at the other end gamma rays. The visible light region occupies a very small portion of the
electromagnetic spectrum. The light emitted by the sun falls within the visible region and extends beyond the
red (into the infrared, IR) and the ultraviolet (UV) with a maximum intensity in the yellow.

When we consider light as an electromagnetic wave, a color's spectral signature may be identified by noting its
wavelength. We sense the waves as color, violet being the shortest wavelength and red the longest. Visible light
is the range of wavelengths within the electromagnetic spectrum to which the eye responds. Although radiation of longer and shorter wavelengths is present, the human eye is not capable of responding to it.

Figure 2. A wave representation of three different light hues: red, yellow-green and violet, each with a different wavelength , which
represents the distance between wave crests.

Three typical waves of visible light are shown in Fig. 2. The wavelength is the distance from one wave crest to
the next, and is represented by the Greek letter lambda, λ. Violet light is electromagnetic radiation with
a wavelength of about 410 nanometers, and red light has a wavelength of about 680 nanometers.
The nanometer is a unit of distance in the metric scale and is abbreviated as nm. One nanometer (nm) equals one billionth of a meter (m), or 1 nm = 10⁻⁹ m. One nanometer is a distance too small to be resolved in an optical microscope, but one micron (µm), or one thousand nanometers, can be resolved (1 micron = 1000 nm). The wavelengths of visible light are smaller than common objects such as the thickness of a sheet of paper or the diameter of a human hair. Both of these are about one hundred microns thick, which translates to distances greater than one hundred wavelengths of visible light.

As we move through the visible spectrum of violet, blue, green, yellow, orange and red, the wavelengths
become longer. The range of wavelengths (400 - 700 nm) of visible light is centrally located in the
electromagnetic spectrum (Fig. 1). Infrared and radio waves are at the long wavelength side while ultraviolet
(UV), x-rays and gamma rays lie at the short wavelength side of the electromagnetic spectrum. Radiation with
wavelengths shorter than 400 nm cannot be sensed by the eye. Light with wavelength longer than 700
nanometers is also invisible.

We can describe light as electromagnetic waves with color identified by its wavelength. We can also consider
light as a stream of minute packets of energy - photons - which create a pulsating electromagnetic disturbance. A
single photon of one color differs from a photon of another color only by its energy.
In the description of light, the most convenient unit of energy to use is the electron volt, abbreviated eV. The
electron volt is the energy gained by an electron that moves across a positive voltage of one volt (V). For
example 1.5 electron volts is the energy gained by an electron moving from a negative metal plate to a positive
plate which are connected to the terminals of a common 1.5 volt "C" battery.

Visible light is composed of photons in the energy range of around 2 to 3 eV (Fig. 3). As the energy of the light
increases, the wavelength decreases. Orange light with a wavelength of 620 nanometers is composed of photons
with energy of 2 eV. It is the energy range of 1.8 to 3.1 eV which triggers the photo receptors in the eye. Lower
energies (longer wavelengths) are not detected by the human eye but can be detected by special infrared
sensors. Higher energies (shorter wavelengths) such as x-rays are detected by x-ray sensitive photographic film
or again by special devices.
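The connection between wavelength and photon energy follows from E = hc/λ, which in the convenient units used here is E(eV) ≈ 1240 / λ(nm). The figures quoted in the text check out:

```python
def photon_energy_ev(wavelength_nm):
    """Photon energy in eV: E ~= 1240 / wavelength(nm), from E = hc / lambda."""
    return 1240.0 / wavelength_nm

for nm in (400, 620, 700):
    print(f"{nm} nm -> {photon_energy_ev(nm):.1f} eV")
# 400 nm -> 3.1 eV, 620 nm -> 2.0 eV, 700 nm -> 1.8 eV, matching Fig. 3.
```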
Figure 3. Diagram showing the visible region of the electromagnetic spectrum in terms of wavelength and corresponding energies.
The visible region extends from 400 nm to 700 nm (wavelength) with corresponding energies of 3.1 to 1.8 electron volts (eV).

Light rays are composed of photons whose energy specifies a color from red to violet. The intensity or
brightness of the light is defined by the flux, or number of photons passing through a unit area in a unit time;
i.e., number of photons per cm2 per sec.

If we specify a wavelength in the visible range on the electromagnetic scale, we can attribute a color to it. That
is, laser light with a single wavelength of 650 nanometers looks red. We show the major spectral colors in Fig.
4a as a linear sequence from red (at 700 nm) to violet (at 400 nm). A circular sequence of these same spectral
colors, the color wheel first attributed to Isaac Newton, is shown in Fig. 4b. The progression of colors from red
through violet is identical to that on the linear scale. The circular wavelength scale outside the color wheel
shows the wavelength connection between the linear and circular sequences. The purple region in the color
wheel is a notable difference between the two sequences. Colors in this purple portion of the color wheel are
composed of mixtures of wavelengths and cannot be represented by a single wavelength.
Figure 4. The region of visible light in wavelengths shown as a linear arrangement (a) and as a circle (b) as conceived by Sir Isaac
Newton. The color purple shown in the color wheel (b) is composed of a mixture of light in the red and violet regions of the spectrum.
Purple cannot be represented by a single wavelength of light.
No single wavelength exists for the color brown just as none exists for purple. Purple can be created with a mixture of wavelengths in
both the red and the violet. Brown requires a more complex mixture of wavelengths from at least three regions of the sequence.

The Color of Objects

Here we consider the color of an object illuminated by white light. Color is produced by the absorption of
selected wavelengths of light by an object. Objects can be thought of as absorbing all colors except the colors of
their appearance which are reflected as illustrated in Fig. 5. A blue object illuminated by white light absorbs
most of the wavelengths except those corresponding to blue light. These blue wavelengths are reflected by the
object.

Figure 5. White light composed of all wavelengths of visible light incident on a pure blue object. Only blue light is reflected from the
surface.

The Eye and Color Sensation

Our perception of color arises from the composition of light - the energy spectrum of photons - which enter the
eye. The retina on the inner surface of the back of the eye (Fig. 6) contains photosensitive cells. These cells
contain pigments which absorb visible light. Of the two classes of photosensitive cells, rods and cones, it is the
cones that allow us to distinguish between different colors. The rods are effective in dim light and sense
differences in light intensity - the flux of incident photons - not photon energy. So in dim light we perceive
colored objects as shades of grey, not shades of color.
Figure 6. A cross-sectional representation of the eye showing light entering through the pupil. The photosensitive cells, cones and
rods, are located in the retina: cones respond to color and rods respond to light intensity.

Color is perceived in the retina by three sets of cones which are photoreceptors with sensitivity to photons
whose energy broadly overlaps the blue, green and red portions of the spectrum. Color vision is possible
because the sets of cones differ from each other in their sensitivity to photon energy. The sensitivity of the
cones to light of the same intensity (the same photon flux) but different wavelengths (energy) is shown in Fig.
7. The maximum sensitivity is to yellow light, but cone R has a maximum in the red-orange, G in the green-
yellow, and B in the blue. The sensitivities of the three cones overlap. For every color signal or flux of photons
reaching the eye, some ratio of response within the three types of cones is triggered. It is this ratio that permits
the perception of a particular color.
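The ratio idea can be illustrated with a toy model. The Gaussian curves and peak wavelengths below are assumed stand-ins, not measured cone data (real cone sensitivities are broader and asymmetric), but the principle is the same: color is read from the relative responses, not from any single cone.

```python
import math

# Illustrative (assumed) Gaussian cone sensitivities; peak wavelengths in nm.
CONES = {"B": 445.0, "G": 535.0, "R": 565.0}
WIDTH = 50.0

def cone_responses(wavelength_nm):
    """Relative response of each cone type to light of a single wavelength."""
    return {name: math.exp(-((wavelength_nm - peak) / WIDTH) ** 2)
            for name, peak in CONES.items()}

# A 580 nm (yellow) light excites R strongly, G moderately, and B hardly
# at all; it is this ratio that the brain reads as "yellow".
for name, r in cone_responses(580.0).items():
    print(f"cone {name}: {r:.2f}")
```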

Figure 7. The response of the three cones to incident light: cone R (pigment R) has a maximum sensitivity in the orange-red, cone G
(pigment G) in the green-yellow, and cone B (pigment B) in the blue portions of the visible spectrum. The sensitivities of the three
cones overlap and the perceived color is due to the relative response of the three cones.
Sources of Light: The Sun and Lamps

The visible spectrum of light is just a small part of the electromagnetic spectrum, which extends from radio waves (long waves, kilometers in extent) to gamma rays (about 10⁻¹¹ meters down to 10⁻¹⁵ meters, the size of the nucleus). The more familiar x-rays have wavelengths around 10⁻¹⁰ meter. The visible spectrum lies in the middle range, with wavelengths between 400 nanometers (nm) for violet and 700 nm for red. The receptors of the eye
respond to light in the visible spectrum and, as shown in Figure 1, the sun's spectrum encompasses the visible
spectrum. Light from the sun is partially absorbed by the earth's atmosphere in the ultraviolet (UV) and infrared (IR), but some reaches the earth. The UV portion that causes sunburn can be blocked by glass, and much of the light is blocked by water except for the blue (B) and violet portions.

Figure 1: The region of the sun's spectrum that spans the range from ultraviolet (UV) to infrared (IR). Different portions of the sun's
energy are absorbed in the atmosphere, water, and glass.

Human daylight vision is most effective in the green-yellow (around 550 nm), where the sun's energy spectrum is in the region of its maximum. Figure 2 shows the spectrum of the sun along with those of a tungsten lamp and a
candle flame, both incandescent light sources where incandescence refers to light produced by the temperature
of an object.

In a candle, the wax is melted by the heat of the flame, flows up the wick and is vaporized. Combustion raises
the temperature of the carbon particles to incandescence and causes the emission of the yellow color. As shown
in Fig. 2 the maximum in the visible spectrum is at the long wavelength, red, end.
Figure 2: Energy spectrum in the visible region for the sun, tungsten lamp, and candle flame

In the tungsten lamp, an electric current runs through the filament, which becomes hot and radiates. As with the candle, some of the radiation is in the visible but most is in the infrared (IR). To keep the tungsten filament from burning or melting, the glass bulb is filled with a mixture of argon and nitrogen gases that do not react with tungsten.

Fluorescent lamps operate on a different principle from incandescence: a gas discharge. A glass tube is filled with mercury vapor (sometimes with other elements, such as sodium) and the electrodes are connected to an alternating current (AC) source. The electric source ionizes the atoms in the tube, which emit light primarily in the UV. The inside of the tube is coated with phosphors, which absorb the UV light and produce visible light. Consequently, fluorescent tubes have a broad energy spectrum, similar to that of the tungsten lamp, plus a series of sharp, intense peaks in the red, blue, and green. The intense green line of mercury vapor at 546 nm is used to calibrate spectrometers.
The Composition of Color

The sensation of color depends primarily on the composition of light, which is a mixture of white light and colored light (which can itself be a mixture of wavelengths, as in the case of purple). The colored light may have a dominant wavelength, or hue, and the extent to which the hue dominates is known as saturation (or chroma). The saturation decreases as the hue is diluted with white light.

There are three types of receptors in the eye that respond to different wavelengths. This has led to attempts to chart colors by a mixture of three primary lights. Figure 1 shows James Clerk Maxwell's color triangle with the three apexes
representing three primary colored lights: blue-violet, orange-red, and green. A great number, but not all colors
can be produced by mixing lights of the three primary colors. A specific color, for example an unsaturated
greenish blue, can be represented by a point on the triangular grid.

Figure 1. The color triangle attributed to James Clerk Maxwell. At the apices are the additive primary colors and at the edges, the
subtractive colors. Many, but not all colors can be represented as a mixture of the three color lights. The nearer a point is to an apex,
the higher is the proportion of light of the color represented by that apex (adapted from H. Rossotti (Princeton, 1983)).

In order to represent all colors, 3 imaginary or "ideal" primaries are used. The Commission Internationale de
l'Eclairage (CIE) defined in 1931 (modified in 1967) the chromaticity curve with standard observer and 3 ideal
standard sources. The chromaticity diagram is constructed (Fig. 2) by drawing a color triangle with 3 ideal (but
non-existent) primary colors at each corner. The x-axis is the amount of ideal green that would be mixed with
blue. The y-axis is the amount of ideal red that would be mixed with blue. A given color is represented by
values along the two axes.

Superimposed on the triangle is the CIE chromaticity curve which places the band of pure spectral colors as a
solid curved line from violet up to green and down to red. The dashed line connecting 380 nm and 700 nm represents the nonspectral colors of purple obtained by mixing violet and red light beams. All the colors that we can see are
contained within the area bounded by the solid and dashed lines. The central point W of the diagram is the
white produced by an equal mixture of the three primaries.

Figure 2. The CIE chromaticity diagram showing wavelengths in nanometers (nm) and energies in electron volts (eV). The area enclosed by the curved line and the dashed segment includes all visible colors. The pure spectral colors lie along the curved edge. (Adapted from Nassau, The Physics and Chemistry of Color (Wiley, New York, 1983)).

We can represent a mixture of two spectral lights as a point on the line joining their two points on the spectral curve. The dotted line in Fig. 2 joins the blue light at 480 nm with the yellow light at 580 nm. Following the
dotted line we would proceed from spectral (or saturated) blue to pale blue to white to pale yellow to saturated
yellow. Thus, a mixture of the correct amounts of 480 nm blue light and 580 nm yellow light gives any of the
colors located in between. Similarly the purple colors can be formed by a mixture of red light with violet light
as specified by the dashed line. A pair of colors which can produce white (the line joining the two colors passes
through the white point, W) are called a complementary pair. Thus blue light and yellow light form a
complementary pair, as do orange (600 nm) and blue-green (488 nm), also called "cyan". We can now use the point W as the origin and describe a color as a mixture, in a certain proportion, of white light and light of a given wavelength. This wavelength is referred to as the dominant wavelength, and the color associated with it is called the hue. We thus describe the sensation of color in terms of hue. The amount of hue that makes up the composition of light is known as saturation (also designated "chroma"). The dominant-wavelength points on the spectral curve (solid line in Fig. 2) are fully saturated; as the dominant wavelength, or hue, is diluted with white light, the saturation decreases. For example, a beam of pink-appearing light (point D in Fig. 2) can be described as an unsaturated orange hue of 620 nm.
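The straight-line mixing rule can be sketched numerically. The chromaticity coordinates below are assumed, round-number stand-ins for the 480 nm and 580 nm points, not measured CIE values; the white point is taken as (1/3, 1/3):

```python
# Assumed (x, y) chromaticity coordinates, for illustration only
BLUE_480 = (0.09, 0.13)    # stand-in for 480 nm spectral blue
YELLOW_580 = (0.51, 0.48)  # stand-in for 580 nm spectral yellow
WHITE = (1 / 3, 1 / 3)

def mix(p, q, t):
    """Point a fraction t of the way from p to q: mixing two lights moves
    the result along the straight line joining their chromaticity points."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

# Somewhere along the blue-yellow line the mixture passes very close to
# white, which is what makes this pair a complementary pair.
best = min((mix(BLUE_480, YELLOW_580, t / 100) for t in range(101)),
           key=lambda c: (c[0] - WHITE[0]) ** 2 + (c[1] - WHITE[1]) ** 2)
print(f"closest approach to white: ({best[0]:.2f}, {best[1]:.2f})")
```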
Figure 3. Different ways of obtaining metameric beams of pink light: A) by mixing orange light with white, B) by mixing red with cyan, or C) by mixing red, green, and violet. To the eye, these metameric beams would all appear the same. (Adapted from Nassau, The Physics and Chemistry of Color (Wiley, New York, 1983)).

There are many ways to produce the light at point D (or any other point): one hue plus white, two spectral
colors or three. These light mixtures are illustrated in Fig. 3 where A) shows orange and white, B) blue-green
(cyan) and red, and C) violet, green, and red. These three mixtures would appear the same to the standard
observer.
