Vision requires reflected light. There is no information in direct light from the sun, a lamp, or any other light source. In a science experiment called Project Eureka, author and scientist Arthur Zajonc constructed a box into which a powerful projector shone. The experimenter took special care that no light was reflected from objects or surfaces within the box. Although the box was filled with pure light, when viewers looked inside from the front (so the projector itself could not be seen), they saw nothing; there was absolute darkness. If light is not shone directly into an eye, or is not reflected from a surface, it is invisible.
Light is like no other entity in the universe. It has no mass and no charge. Nothing travels faster than the speed of light. Einstein's theory of relativity says that at the speed of light, time and space are warped; they lose the meaning so dear to human experience. Our reality of substance, space, and time is not the reality, not the universe, that light inhabits. Efforts to understand light through quantum theory offer the best current avenue, yet these efforts are incomplete and obscure (especially for the mathematically challenged, i.e., most of us).
Using quantum theory we arrive at several bizarre conclusions. Light does not occupy a position; it has no place. Light is not here or there. It seems, when scientists go about measuring it, that light is everywhere at once. Sometimes light seems to be a particle, called a photon. Other times, light acts like a transverse wave. Light refuses to stand still and be measured.
Even harder to grasp is the notion that light may exist in another dimension. For decades scientists have tried to understand how light could be a wave and yet travel through a vacuum. Waves in a pond travel through water. Sound waves travel through air. But what is light waving in a vacuum? For years scientists tried to find the mythical "ether" that was said to fill the vacuum of space. After decades of searching, the current conclusion is that there is no ether.
In the popular book "Hyperspace", theoretical physicist Michio Kaku states that many scientists believe light to be a vibration of the fifth dimension (i.e., the vacuum of space is vibrating in a fifth dimension). This conclusion is derived from an old set of equations called the Kaluza-Klein theory, and is supported by the newer theory of superstrings. We are all familiar with the three spatial dimensions of width, height, and depth. We are also familiar with the notion that Einstein's equations established time as a fourth dimension. Our brains did not evolve to perceive a fifth dimension (we are all blind to it), but well-respected scientists take this theory seriously, since it explains the otherwise otherworldly behavior of light waves.
As strange as this thing called light is, it is possible to assign it attributes. This is important for humans because we live in a universe of attributes. We expect our world to be filled with shapes, sizes, colors, and textural patterns. If we are confronted with a universe that does not have attributes, or that does not follow the laws of attributes, we are pretty sure we are facing some kind of magic or religious experience (or another dimension?). We are rather dubious about unsubstantial worlds. So, we say that light has a direction, an intensity, a polarization, and that it comes in wavelengths (colors). There it is, it's real, it has attributes, end of story.
Except, when scientists attempt to study these attributes, they turn out not to be distinct, but to be ambiguous. "Light possesses a nature unique to itself. Every natural assumption we make about it, assumptions common to us from daily life, leads to errors." (Catching the Light, Zajonc)
Direct light coming from the sun or from any other light-emitting source has properties. White light is a combination of all the other "colors" of light. Note that this is different from (the opposite of) mixing pigments (of paint), where a mixture of all colors results in black. When white light strikes a blue shirt, all the colors (wavelengths) of light are absorbed except the blue wavelength, which is reflected back into the eye. The reason a black shirt is hot in the summer is that all the wavelengths of colored light are changed into heat; none of the colors are reflected into the eye. Therefore, within every light source is a combination of wavelengths of light. These wavelengths correspond to what we perceive to be colors. The short wavelengths are at the blue, indigo, and violet end of the spectrum. The long wavelengths are at the red, orange, and yellow end. The human eye (central vision) is most sensitive to the wavelength corresponding to yellow. That is why we see black letters best against a yellow background. Yellow is the most "seeable" wavelength.
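The correspondence between wavelength and perceived color can be pictured as a simple lookup table. This is only an illustrative sketch, not part of the text above; the band boundaries are conventional approximations, and real color perception is far more complicated.

```python
# Illustrative sketch: rough mapping from visible wavelengths (in
# nanometers) to the color bands described above. The boundary values
# are conventional approximations, chosen here for illustration.
VISIBLE_BANDS = [
    (380, 450, "violet"),
    (450, 485, "blue"),
    (485, 500, "cyan"),
    (500, 565, "green"),
    (565, 590, "yellow"),
    (590, 625, "orange"),
    (625, 750, "red"),
]

def color_name(wavelength_nm: float) -> str:
    """Return the rough color band for a wavelength given in nm."""
    for low, high, name in VISIBLE_BANDS:
        if low <= wavelength_nm < high:
            return name
    return "invisible"  # outside the visible spectrum

print(color_name(555))  # prints "green"; the eye's peak daylight
                        # sensitivity lies near 555 nm
print(color_name(700))  # prints "red"; a long wavelength
```

Notice that most of the electromagnetic spectrum falls outside the table entirely: like the unreflected light in Zajonc's box, it carries no color for us at all.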
Goethe said that the eye owed its existence to the light. Light created the eye. "It sculpted an organ suited to itself." (Zajonc). So light is the mysterious beginning of the story that leads to vision. It is fitting that we begin in the ethereal realm. We pass from the mystery that created the eye, to the mystery that is the human brain. Somehow, we gather the light, swirl it around within the brain, and end with the ability to get to the bathroom and back.
In the morning, when we open our eyes, light triggers a flood of hormonal activity via the retinohypothalamic tract to the suprachiasmatic nucleus (SCN) within the hypothalamus. The SCN is a master body clock that alerts the other "worker" clocks to reset and begin their daily rhythms.
From the February 2000 issue of Science News: "How the clock detects illumination has long puzzled scientists. They know that human eyes perceive illumination and convey a signal to (the SCN). Yet this link does not appear to involve rods and cones... In fact, scientists have created mice lacking rods and cones, and the animals still shift their biological clocks in response to light." In other words, there is a separate, distinct pathway (functional system) in the eye for setting the biological clocks using light, one that is not related to the visual image analysis system mediated by the rods and cones. Cryptochromes and melanopsin are candidates as the photoreceptors for the human biological clock.
This control of the human being by the entity we call light is remarkably powerful. These many body clocks regulate our temperature, mood, energy, mental acuteness, muscular speed and accuracy, endurance, sexual appetite, hunger, and thirst; indeed, our well-being and health are linked to light. We truly are solar-powered creatures.
Which brings us to this question: if, in blindness, these hormones are not turned on and off by light, what happens to the body's internal clocks? Is the body continually swamped with melatonin (the hormone released at night by the pineal gland)? Excess melatonin is implicated in depression (seasonal affective disorder); it induces sluggishness and sleep states. At the moment, there is no answer to this question (i.e., I don't know the answer; do you?).
The human vision system decodes reflected patterns, dynamic patterns. Scientists have shown that if an image is perfectly stabilized on the retina, the image disappears. Human perception requires that images be dynamic. We can only see change. We can only see when something is moving (the object, the eye, or both).
The flow of images across the peripheral retina, for example, establishes patterns that allow for visual navigation. The peripheral vision system analyzes background "optical flow" patterns. Central vision analyzes figures (patterns) that stand out from the ground; the faces, words on a page, the landmarks that help us conceptualize layouts and routes. A constant, micro-ophthalmic oscillation of the eyeballs ensures that the retinal image never stabilizes while central vision is "locking on" to objects.
When direct light strikes an object, three things can happen (or any combination of the three): light is reflected, refracted, or absorbed. "Reflection" means light is bounced backward, opposite the direction it was traveling (directly back, or at some angle). Refracted light passes through an object (i.e., an object that transmits light, like glass). To "refract" means to bend light; light continues in the direction it was traveling, but is bent off course. "Absorption" means light is transformed into another kind of energy: heat, chemical, or electrical.
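The three fates of light described above always account for all of the incident energy. A minimal sketch (the function name and the sample fractions are hypothetical, for illustration only):

```python
# Illustrative sketch: incident light splits into reflected, refracted
# (transmitted), and absorbed portions that together account for all of
# the incident energy. The fractions below are hypothetical values.
def split_light(incident: float, reflect: float, refract: float) -> dict:
    """Split incident light energy into the three fates described above.
    `reflect` and `refract` are fractions; the remainder is absorbed."""
    assert 0.0 <= reflect + refract <= 1.0, "fractions cannot exceed 1"
    absorbed = 1.0 - reflect - refract
    return {
        "reflected": incident * reflect,
        "refracted": incident * refract,
        "absorbed": incident * absorbed,
    }

# A pane of glass might transmit most light and reflect a little:
parts = split_light(100.0, reflect=0.08, refract=0.90)
print(parts)  # the three parts always sum to the incident 100.0
```

A black shirt in summer is simply the case where the "absorbed" share dominates and becomes heat.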
Objects affect light in two other ways. Light that is reflected off a surface (what's left after some is absorbed and some is refracted) has spatial frequency and temporal frequency. An object's spatial frequency (in layman's terms) is its size relative to other objects in the visual field. Temporal frequency is simply a measure of whether the reflected light is constant or intermittent relative to other objects in the field. Spatial frequency varies as we move or as objects move (for example, the image on the retina gets smaller or larger as we or the object approach or recede). Temporal frequency is present in flickering light, as objects are blocked, and as shadows and changes in illumination fluctuate.
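The way an object's "size" in the visual field shrinks as it recedes can be made concrete with the standard visual-angle formula. This sketch is illustrative (the person's height and the viewing distances are hypothetical numbers):

```python
import math

# Illustrative sketch: the "size" of an object in the visual field is
# its visual angle, which shrinks as the object recedes.
def visual_angle_deg(object_size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object, in degrees."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# A 1.5 m tall person viewed at increasing distances:
for d in (2, 4, 8):
    print(f"at {d} m: {visual_angle_deg(1.5, d):.1f} degrees")
# Each doubling of distance roughly halves the angle; the retinal image
# shrinks, yet size constancy keeps our percept of the person stable.
```

This is the changing spatial frequency the text describes: the same person, yet a steadily smaller retinal image as the distance grows.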
Both temporal and spatial frequency systems contribute to visual navigation. Optical flow analysis within the peripheral retina uses temporal frequency assessments to allow navigation through openings and around objects. The change in size of objects with movement, changes in spatial frequency, contributes to (works in conjunction with) the analysis of temporal flow patterns.
A visual pattern that is well established cortically positions the visual analysis system to do two things: make error assessments and predict trends. If a face has a certain set of features (mouth, nose, eyes), then any change in the usual facial pattern is immediately detected. The vision system can also predict what features the next face should have. On a more neural level, the peripheral retina can predict the path of a moving object, for example, from an awareness (established pattern) of the usual speed of the object. Any behavior that does not fit the known laws of the visual perceptual universe is quickly detected. The vision system thus has the ability to quickly detect perceptual errors and to predict future events. This has particular value in our dynamic universe.
Physical space is in constant flux. Light intensities rise and fall, shadows come and go, contrast shifts between vivid and obscure, and color, size, and shape perception all change with distance and angle of view. A finite information processing system cannot respond to a dynamic world without overloading. There must be (i.e., there are) ranges within which the system responds or fails to respond.
If colors changed every time we altered the angle or distance of view, we would not be able to maintain color constancy ("color" wouldn't make sense). If our awareness of an object (a car, for example) changed every time the size variable changed, we would need a separate name for each gradation of size. Our vision system adjusts to the dynamic physical world by using what are called constancies; dependable perceptual responses (or awarenesses) to ranges of activity. Therefore, we have color constancy, size constancy, shape constancy, brightness constancy, and movement constancy. The constancies are created because of the error recognition and trend analysis capabilities of vision (an information processing system).
Objects in the world have defining borders or edges: a square has four sides, a face is oval, etc. When we speak about form, we are mostly talking about the outer edge of the object, the border that gives the object a specific, standard shape. The human vision system is particularly attuned to the analysis of edges and borders (i.e., lines, their orientation, and their movement). Another way to think about this is to understand a border as the extreme of a range, in this case a spatial, size/shape-related range.
In summary, reflected light that enters an eye is a pattern that can contain colored light in a wide variety of combinations (part of the field can be yellow, another blue, etc.). The pattern also has a spatial frequency (its size), and a temporal frequency (movement information). Patterns are also in continual flux. It is probably safe to say that no two patterns are ever exactly the same. Vision pattern recognition is an information processing system that is capable of ignoring irrelevant differences, while finding similarities that allow for the perceptual constancies for color, size, shape, brightness, and motion. Patterns have ranges within which a system responds, and outside of which the system ignores the input. Borders that define the form of real world objects are spatial range constancies. These borders are particularly important for object perception.
Vision is a high brain-level activity, intimately linked to cognition. Vision impairment (or blindness) is almost always caused by damage to the eyeballs, the visual tracts, or the occipital lobe. As complex as visual processing is at the level of the retina, along the tracts, and in the visual cortex, the most the brain does at these early levels of visual processing is establish that a visual pattern exists. Higher-level intelligence is generally unaffected by what is called "vision" impairment and blindness. This is really "sight" or "sensory" impairment. Visual cognition occurs beyond the occipital lobe.
Much of what we call vision is a subconscious process. If visual processing did not occur on a subconscious level then the act of seeing would become a labored, arduous, inefficient burden. The peripheral visual processing system is particularly subconscious, and is linked to postural and navigational brain centers. This division between the central, conscious role of vision, and the unconscious peripheral role is especially important for the field of orientation and mobility. For visually impaired children to become good travelers, they need to make good use of both systems, or know how to compensate if impairments have adversely affected either or both processing streams.
Vision is the premier distance receptor, capable of exploring near and far, left and right, scanning over a wide frontal area. Hearing, the only other distance receptor, is a poor second as a spatial monitor. Loss of the ability to perceive spatial relationships using vision is arguably the most serious problem facing blind individuals.
Vision is not an isolated activity occurring in a specific area of the brain. Vision is a whole-brain process. Centers for vision are in every lobe of the brain. As many as 32 different visual processing centers have been located in the brain so far. These centers have efferent and afferent neurons that connect to each other and to virtually every major area of the human brain. The other senses (hearing, touch, smell, proprioception, vestibular) all have neural links with these vision centers. Subcortical connections run to lower brain stem areas as well. This is an important understanding, because many children in special education have brain damage, whether traumatic or genetic, focal or extensive. Damage anywhere in the brain has implications for the normal operation of vision.
Vision is also a dynamic process; it is a verb, not a noun. It constantly adapts as we move and as illumination levels change. This dynamic character of vision is necessary for perception. Vision is a steady-state system. This means that the job of the various vision systems is to adjust to changes in patterns of light being reflected into the eye. For example, as the brightness of light rises or falls, the lens/retina system adjusts to keep a constant, moderate level of light intensity on the retina. Too much light whites out the retinal image; too little light makes it impossible to see color or detailed patterns.
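The steady-state idea can be sketched as a simple feedback loop that tries to hold "retinal" intensity near a set point. This is only a toy model under loud assumptions: the target, gain, and ambient values are hypothetical, and real retinal and pupillary adaptation is vastly more complex.

```python
# Illustrative sketch: a steady-state system modeled as a feedback loop.
# Target, gain, and ambient values are hypothetical.
def adapt(ambient_levels, target=50.0, gain=0.5):
    """Adjust an internal attenuation factor so the 'retinal' intensity
    stays near the target as ambient brightness changes."""
    attenuation = 1.0
    history = []
    for ambient in ambient_levels:
        for _ in range(50):  # let the loop settle at each ambient level
            retinal = ambient * attenuation
            error = retinal - target
            attenuation -= gain * error / max(ambient, 1e-9)
            # The system has limits: it can only attenuate, not amplify.
            attenuation = max(0.0, min(1.0, attenuation))
        history.append(ambient * attenuation)
    return history

# Bright noon light, dim dusk, moderate indoor light:
print(adapt([200.0, 20.0, 80.0]))
```

Note what happens in the dim (20.0) condition: the loop hits its limit and cannot reach the target, just as too little light defeats color and detail vision.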
Vision only occurs when there is movement. The body must be moved through space, and/or the eyes must move to explore space, for vision to occur. Memories of movement patterns are the basis for perception. Things which change when there is movement provide the underlying patterns for perceptual constancies. Neurons are laid down and synaptic connections develop; i.e., the brain grows because we move. This is important to know because it implies that children with vision disorders who do not try to use their impaired vision will not develop the brain-level neurons to see (to improve their visual perceptual skills using their impaired vision). It also suggests that children with physical impairments that affect their ability to move will not develop sophisticated perceptual skills unless we find ways to help them navigate through space on their own (using power wheelchairs, for example).
In other words, vision is a learned skill. You have to use the vision system to get good at sensing, perceiving, and cognitively processing visual information. Neurons and synapses are built up as vision is used. Brain cells die or fail to develop when vision goes unused. Early enriching visual experiences result in sophisticated visual processing systems. Impaired, untreated, understimulated vision cells (as in amblyopia) result in poorly developed visual processing abilities.
The primary function of eyes as they evolved through the eons was as motion detectors. Only the most advanced brains are able to see in the absence of object movement. Evolution is important to this discussion because the modern human brain is built upon the base of earlier brains. That is why scientists speak of the lower, middle, and upper parts of the human brain. The lowest sections are concerned with automatic, life-sustaining functions like breathing. The middle brain controls many subconscious activities like navigation, emotional response, and sexual arousal. The most advanced part of the human brain is concerned with consciousness, voluntary activity, and higher-order processing.
The peripheral retinal system is sometimes called the "where" retina. It is involved with the subconscious control of human navigation. It is an old visual system, having evolved long before central visual processing. The evolution of the retina is played out as you go from the extreme edge of the retina (the oldest system in evolution) to the retinal "center", the fovea, where central processing occurs. The extreme far edges of the retina are purely reflexive. When an object moves on the far retinal edge an immediate reflex swings the eyes in a direction which aligns the moving object with the fovea. Closer in, the peripheral retinal tissue can "see" movement, but there is no object recognition. When movement stops, the object becomes invisible. Closer in still, the medial peripheral retina monitors optical flow, the velocity of objects moving across the retinal surface. It is this optical flow that is the basis for the subconscious human navigation system, the "where is it" system.
There are two separate movement tracts: one controlled by the higher, conscious brain (primarily the frontal lobe), the other part of the subconscious functions of lower brain areas (i.e., the cerebellum, pons, thalamus, basal ganglia, limbic system). The lower brain controls the movement tract used for subconscious navigation (we do not have to will ourselves to walk, move around objects, go in and out of doorways, etc.). This tract monitors the velocity of the optical flow across the retinal surface. This system cannot detect (cannot see) stationary objects. The higher (forebrain) system controls the eye/head movement tract. This system works when we voluntarily will our eyes to move, explore, and concentrate. As the eyes are voluntarily shifted, movement is ignored; i.e., optical flow is suppressed so that we do not see a blur. This explains why people with nystagmus do not see an oscillating world.
The two movement systems work in harmony; they do not interfere with each other's roles. But either or both of these neurological tracts can be damaged, causing characteristic visual disabilities. Damage to the higher system would affect the ability to concentrate visually and to use the visual system for gathering knowledge and learning. Damage to the lower brain system would affect the ability to navigate in space. Many optical illusions can be explained by the interactions of these two movement systems (see Gregory's book "Eye and Brain", latest edition).
A whole bunch of incredible stuff happens (signals coming and going all over the brain, mostly to the subconscious centers) before the signals from the retina reach the occipital lobe of the brain, the visual cortex. The visual cortex detects borders, brightness contrast, and the movement of edges.
From the visual cortex signals go to the association areas where the borders are combined into shapes. At this level, we recognize faces and objects. This is the level at which the figure is distinguished from the ground.
Signals then pass to the angular gyrus where the meaning of the visual image is interpreted.
The message is then sent to Wernicke's area in the temporal lobe, where language is controlled. The face or object is assigned a name.
From Wernicke's area signals are transmitted to the orbitofrontal cortex (and to the limbic system), where feelings are associated. Do we like this policeman's face? Are we afraid of the chocolate cake?
From the orbitofrontal cortex signals go to the prefrontal cortex where we put our thoughts into sequences, and where we decide what to do (Run screaming from the chocolate cake; tell the police officer to have a #@#!% good day himself).
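The serial chain just described can be sketched as a toy pipeline. This is purely illustrative: the stage names follow the text, but the data passed between stages (and every string below) is a hypothetical placeholder, not a claim about how the brain represents anything.

```python
# Illustrative sketch of the serial chain described above; every value
# passed between stages is a hypothetical placeholder.
def visual_cortex(image):
    return {"edges": f"edges of {image}"}       # borders, contrast, movement

def association_areas(signal):
    return {**signal, "shape": "face"}          # borders combined into shapes

def angular_gyrus(signal):
    return {**signal, "meaning": "a person"}    # image interpreted

def wernickes_area(signal):
    return {**signal, "name": "police officer"} # object assigned a name

def orbitofrontal_cortex(signal):
    return {**signal, "feeling": "wary"}        # feelings associated

def prefrontal_cortex(signal):
    return {**signal, "decision": "say hello"}  # sequence thoughts, decide

def see(image):
    signal = visual_cortex(image)
    for stage in (association_areas, angular_gyrus, wernickes_area,
                  orbitofrontal_cortex, prefrontal_cortex):
        signal = stage(signal)  # in reality much of this runs in parallel
    return signal

print(see("street scene")["decision"])  # prints "say hello"
```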
Alas, if it were only so simple. It isn't, but that's the general idea. The complex truth is that tracts go to 32 different brain centers for vision (at last count); all the centers are interconnected with each other and (directly or indirectly) with all the rest of the brain. Some processing goes on serially; most happens in parallel.
Early in evolution, human ancestors had only peripheral vision. The role of this early vision system (still preserved in our eyes) was to detect motion and set off protective reflexes. When cortical vision evolved, so did the ability to suppress these movement reflexes. "Visualization" could only evolve when this became possible. When we visualize a series of events (movement patterns) the neural pathways play out their old patterns (they fire to ready the organism to move), but the final reaction (the order to move) is suppressed. As we visualize picking up an object from the table, our brain patterns follow the same sequence they do when we actually reach toward the table. The final command to act is overruled during visualization.
At the thalamus (lateral geniculate body), there is a direct, one-synapse connection to the amygdala in the limbic region of the brain. This means that when the eyes witness something emotionally charged, signals go directly to the limbic region for analysis and quick (automatic, subconscious) response (a similar connection goes from the ears to the amygdala, so that eye and ear coordination for emotional response is possible). This short pathway was unknown for many years, when it was thought that visual tracts went first to the neocortex for processing and then to the limbic region for emotional analysis. This helps explain why emotion and emotional response sometimes overpower reason, logic, or self-preservation. The limbic system is the older system in evolution.