The HawkEye Project
(A Vision Prosthetic System: Grant Template)

Project Description


In November 2003, a partnership was created between World Access for the Blind (WAFTB) in California and the Institute for Innovative Blind Navigation (IIBN) in Michigan. The partnership centers on a joint interest both organizations have in wearable computing technologies for blind wayfinding. The long range plan put forward by WAFTB is the creation of a vision prosthetic system. IIBN monitors advances in wayfinding technologies and assists inventors with promising avenues of research. In February, 2004, the Humanistic Intelligence Lab at the University of Toronto joined the collaboration. This internationally respected research lab has the expertise to create the wearable systems needed to accomplish the high ideals set down by WAFTB and IIBN.

Wearable computing has been a promising dream for many years. There has not, however, been a centralized, all-out effort to take wearable systems off the science fiction shelf. The HawkEye Project will move the idea of the cyborg from the strange and alien to the commonplace. The partnership among World Access for the Blind, the Humanistic Intelligence Lab, and IIBN is about moving speculation and opportunity off the science fiction shelf and into reality. All three agencies agree that it is now time to assemble the teams that will create the future of blind navigation. The HawkEye Project is the future of blind navigation.

There is a quote that says, in effect, "We can put men on the moon and get them back, but we cannot build a technology that can get a blind person across a street safely." As a culture, we made the decision to "go for the moon." The nation was united around a large and uncertain future, and that resolve was backed up with planning, money, and successful follow-through. The HawkEye Project is such a large undertaking that it will require a "go for the moon" resolve. The first step, taken through the partnership of WAFTB, the University of Toronto's HI Lab, and IIBN, is to declare that the "journey of a thousand miles" has begun, and that the resolve to see the effort through to completion is in place. It is a matter of faith, not belief. This is what we mean when we say that team members must "step over the line": once over the line, you are in the land of the faithful. It is a done deal. There is no discussion (can we really do this?), no debate, no internal second-guessing, no worry, no angst. There is only a non-verbal faith. Power comes from the harmony and energy generated when humans gather around a common passion, follow fate, and ride the waves of serendipity.

World Access for the Blind is building partnerships that will result in a visual prosthetic system that will enable the blind to "see without sight." The HawkEye Project is the foundation upon which generations of evolving wearable technologies will stand. It is the bedrock upon which to build modules that effectively substitute, replace, augment, and mediate for an absent (or ineffective) vision system.

CyberEye is a technology that will allow human beings to explore new avenues of perception. The system will be designed not only to give blind and visually impaired individuals equal access to the visual world, but also to augment the senses and mediate reality so that the user perceives in ways never before possible. CyberEye will provide the user with bionic abilities that go well beyond natural human capability. We call this new form of perception CyberVision.


Wearable computers (head mounted systems) do two things: they mediate reality, and they augment or alter it. This is accomplished by putting a screen (usually built into a pair of glasses) in front of the eyes. The user of a wearable computer no longer looks at the world directly. The screen (the inside of the eyeglass) displays a live video of the world. It is the same world the eyes would see without the glasses, except that there is a brief, imperceptible delay before the image reaches the eyes. A small digital video camera captures the scene in front of the user. What allows the system to change reality is a tiny computer that takes the video image of the world and alters it before sending it to the eyes. The computer can now do all kinds of things to reality. For example, it can select parts of the visual field to be enlarged, or parts of the field (like roadside advertisements) to be eliminated from the scene. The computer can enhance contrast, allow only black and white images, or see in the ultraviolet or infrared ranges. What the computer can do to reality is open to creative experimentation. The resulting computer-altered images give us "CyberVision," a digital view of the world.
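The mediation loop described above (capture a frame, alter it in the computer, send it to the eyes) can be sketched in a few lines of Python. The frame data and the particular alteration chosen here, a contrast boost applied to a grayscale conversion, are illustrative assumptions for the sketch, not the project's actual pipeline:

```python
def mediate_frame(frame, contrast=1.5):
    """Alter one captured video frame before it reaches the eyes:
    reduce each RGB pixel to luminance, then boost contrast.
    (Illustrative sketch; any real system would offer many such filters.)"""
    mediated = []
    for row in frame:
        out_row = []
        for (r, g, b) in row:
            # Luminance using the standard ITU-R BT.601 weights.
            gray = 0.299 * r + 0.587 * g + 0.114 * b
            # Stretch contrast around mid-gray, clamp to the 0-255 range.
            value = (gray - 128.0) * contrast + 128.0
            out_row.append(max(0, min(255, round(value))))
        mediated.append(out_row)
    return mediated

# A stand-in for one 2x2 camera frame of uniform gray pixels.
frame = [[(100, 100, 100), (200, 200, 200)],
         [(100, 100, 100), (200, 200, 200)]]
print(mediate_frame(frame))  # [[86, 236], [86, 236]]
```

The same loop structure accommodates any of the mediations mentioned above; enlarging a region or deleting an advertisement is simply a different transform applied per frame.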

We have taken this understanding of CyberVision and modified it to accommodate the needs of blind individuals by adding auditory mediation and augmentation to the mix. We will build into the wearable system modules that filter sound, sometimes enhancing it and sometimes eliminating noise. More importantly, we will convert visual input into auditory waveforms. We are also placing a network of smart auditory chips in the environment (or placing virtual sound anywhere we want in space). These audified spatial areas are intimately linked to CyberEye so that CyberVision for the blind is enhanced, with sharpened acuity, depth perception, and pattern recognition.
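One way to convert visual input into auditory waveforms, as described above, is to map a pixel's vertical position to pitch and its brightness to loudness. The sketch below illustrates that general idea; the function name, frequency range, and sample rate are our own assumptions, not the CyberEye design:

```python
import math

def sonify_column(column, f_low=200.0, f_high=3200.0,
                  duration=0.05, rate=8000):
    """Turn one vertical slice of an image into a short sound:
    a pixel's height picks its pitch, its brightness picks its loudness.
    (A sketch of one common image-to-sound mapping, for illustration.)"""
    n = len(column)
    samples = []
    for i in range(int(duration * rate)):
        t = i / rate
        s = 0.0
        for row, brightness in enumerate(column):
            # Pixels nearer the top of the slice map to higher
            # frequencies, spaced exponentially like musical pitch.
            freq = f_low * (f_high / f_low) ** (1 - row / max(1, n - 1))
            s += (brightness / 255.0) * math.sin(2 * math.pi * freq * t)
        samples.append(s / n)  # normalize so output stays within [-1, 1]
    return samples

# A column where only the top pixel is bright: the result is a
# near-pure high tone.
samples = sonify_column([255, 0, 0, 0])
print(len(samples))  # 400
```

Sweeping such a mapping across the image, column by column, yields a time-varying soundscape from which a trained listener can recover spatial structure.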

When we discuss "artificial vision systems," we use the term "vision" in a broad and perhaps unique way. When we offer the brain unique forms and combinations of sensory input (different from the innate and usual input of everyday experience), and further when we place computational chips inside the human body and throughout the environment, we are creating new, pioneering ways of perceiving. We are creating new ways of seeing, new kinds of "visual" (digital) perception. Vision has traditionally been thought of as accomplished exclusively by tracts and centers in the brain: by the so-called visual cortex and the visual association and processing areas. This perspective, however, does not ring true with human experience. For example, there is an internalized "seeing," wherein reside dreams, visualization, and a non-physical ability to have "visions" and to be "visionary." We often refer to particularly vivid ideas as "visions," and the term "visionary" is applied to one who imagines and conceives concepts and ideas of substance. In these cases, the term "vision" refers to one's perception of one's inner, mental world. Blind people commonly refer to their perception of the world in "visual" terms, as in "oh, I see," or "I've seen this before." Helen Keller has been referred to as one of the greatest "visionaries" of our time. Thus, when we operate according to this broader definition of vision, we acknowledge the contributions of the whole self to seeing, as well as the ability to see by means other than, or beyond, the eye.

This latter point deserves particular attention, as it is the crux of our concern and endeavor. It is documented that sailors as far back as the 1700s would use gunfire and hammering on blanks to determine the proximity of land through fog or in darkness, where their eyes could not see. Such sounds could carry for miles, the echoes alerting the sailors to where the land lay. In the 1940s came the active development of technology to allow humans to see where the eye could not. The use of radio waves (radar) allowed the perception and tracking of oncoming planes too far away to be seen with the eye. Sonar allowed humans to travel the depths of the sea, where light does not penetrate. Later, infrared systems allowed people to see under cover of total darkness.

As we enter the new millennium, we find the development of sensors that can see through walls and around corners, that can detect a flock of birds a mile away, and that can measure the deep workings of the body and brain. Sonar, once used to guide great submarines through the deep, is now used by the blind to find their way through a complex world. One blind boy could hit a softball pitched to him from 14 feet. Others have been seen on television worldwide, bicycling through city streets and along mountain trails, all without the benefit of eyes. Cameras, once used simply to take photos, now provide thirty times the range and sharpness of the eye. Indeed, sensors are becoming so powerful and far reaching that the unassisted human brain can no longer be expected to gather and process the full wealth and breadth of information now available - realms that have hitherto not been explored.

Fortunately, with the development of ultrasensors came the development of computers to help the human brain gather and process this wealth of information. Computers now help pilots guide light aircraft at a thousand miles an hour through the treetops, and jumbo passenger jets practically land themselves. Radio telescopes throughout the world are coordinated by computers to produce composite images that the human brain can understand. Cameras hooked to computers are now allowing the blind to read.

With the increased power of sensing technology and computers, concurrent with their decreasing size, cost, and power requirements, we have finally entered an age where the unknown can become known, and the unseen can be made visible. Technology can now take the place of the human eye where it fails to see.

Computer mediated vision can improve the clarity of the environment when compromised by darkness, physical barriers such as walls or clutter, or obscurity such as thick smoke, dust, or fog. Computer vision systems can mediate the perception of the environment by making walls transparent or making walls visible to the blind.

Creating Global Teamwork

Think like the astronomers. When they set out to map the entire universe of stars, astronomers realized that great challenges require extraordinary cooperation. They wrote software that linked all the great telescopes of the planet. The project went on day and night, different teams coming online as the earth revolved. They communicated over the internet. They shared the work and they shared the data. They mapped the universe of stars for the benefit of mankind, not for personal fortune or fame. Because of this level of sharing and organizing, they accomplished a task larger than the sum of the agencies working together. We must think and plan for the HawkEye Project at this level of sophistication and cooperation.

Think like the scientists who mapped the human genome. When they conceived of the project, they did not have the computer processing power to accomplish their goals. Within a few years, as Moore's Law marched on, the processing power arrived, and the project was completed ahead of its projected deadline. Many of the ideas within the HawkEye Project sound like science fiction. They are not. Processing power will arrive on time, as if fate were dealing answers as fast as the scientists reached the questions.

The first step in this long journey is the creation of the international research lab that will design and develop the vision prosthesis and the supporting acoustic environment. The location of the lab, the funding base, staffing, and the organizational and philosophical structure, will be decided by the project design team. This team will review the overall project (this document) and will decide how and when the project will move forward.

This document will be ready for review by prospective project team members by March, 2004. The initial team meeting will occur in May, 2004. A follow up meeting will take place in October, 2004.

The second focus for the project team, after plans for the research lab are solidified, is to outline the strategy for the creation of CyberEye. This process begins with defining the nature and design of the wearable computing substrate, the network that will house the plug-in modules. This study will include a review and final suggestions for standards for the substrate. When the substrate is established, the focus will turn to the creation of navigation modules for the vision prosthesis. Other modules can be discussed and proposed, but in the beginning primary energy will go toward the creation of wayfinding modules. Two wayfinding prototype versions of CyberEye will be outlined by the project team. These will be called the Sonocular and the SonicEye. The "final" version of CyberEye, when the system is interfaced with chip implants, will be called HawkEye. Concurrent with this development plan will be an outline for creating the enhanced smart environment and the plans for training and curriculum development.

Speaking about his own non-profit agency World Access for the Blind (WAFTB), CEO Dan Kish said, "Ultimately, our purpose is to provide tools that allow the blind to access the world at large in a manner of their own choosing. We were very careful in the wording of our mission statement. We will not 'help the blind to succeed' or 'empower the blind' or 'create opportunities for success,' because we feel the blind should be autonomous enough to make their own choices about success. Options are the key. If we give a blind man a chance to see, it is still his choice whether to open his eyes, and how to use them. That choice is sacrosanct to the sighted, and can be no less so for the blind. The best we can do is provide tools and openings to equalize opportunities; the rest is up to the blind themselves. And, indeed, it will ultimately be the blind themselves who mobilize the effort to that point. The foundation of our approach at WAFTB is a 'NO LIMITS' philosophy: the idea that, although we all face limits, we must not suffer limits to be imposed upon us by others. We all have the right to enjoy the freedom and strength of character to seek and discover our own limits and strengths. This is how we learn to embrace the world on common ground."


Target Populations

The population that will benefit from CyberEye varies with the modules attached to the wearable computer. The system will be custom designed to address individual impairments or disabilities; its range will therefore be broad and flexible, with utility across all disabilities. Initially, the primary target population will be sophisticated, technologically savvy blind individuals with the courage to use innovative tools. With modifications to modules and training, the technology will be extended to less experienced travelers (for example, younger children, the multiply handicapped, and the elderly), to people with vision impairments of various complexities, and to individuals with information processing disabilities.

While our primary interest is providing vision to the blind throughout the world, optimized environmental access will improve and enhance the functional capacity and quality of life of a diverse population. Through our work on a vision prosthesis for blind individuals, we expect spinoffs that bring us closer to the promise of wearable computers for the general population and for the entire range of people with disabilities. In other words, CyberEye will help move the wearable computer revolution forward.

Wearable computing is rapidly emerging as a mainstream technology and experts in the field are working on a wide variety of applications. The HawkEye Project may have spinoff value to these many areas peripheral to our primary focus. Our work (as a furthering of the discipline of wearable computing) may contribute to technology solutions for diverse populations, including individuals with mental health issues such as attentional disorders or pathological stress factors; people with physical health impairments such as diabetes or cardio-vascular illness; professionals in surveillance, crime prevention, terrorism prevention, civil defense, search and rescue, and emergency response; and generally for any individual who would otherwise benefit from or appreciate expanded access to the world through computer mediated/ augmented sensory experience.

The Needs and The Problem to be Solved

Blind individuals need access to the same things that vision provides. If vision allows a person to see obstacles in the line of travel, or perceive an opening, or follow a pathway into the distance, then there is a need for blind individuals to access the same functional information. If vision allows the sighted to read signs and interpret symbols, to see and read faces and body language, to track a ball and catch it, to see cars as they cross a street (and so forth), then the blind have a need to access the same information. Before the advent of digital technologies it would have been irrelevant to claim that the blind should have a sensory system equal to that of the sighted. Advances on all technology fronts now make this assertion of need valid. Furthermore, as these technologies evolve, the issue moves from one of need to one of human rights; the blind have a right to create and use digital tools to provide them with equal access to information.

People who are blind need more options for accessing information. The digital age holds the potential for creating new tools that solve problems heretofore considered beyond the capability of humanity. This potential needs to be turned into practical tools. We need to see how far we can push these powerful new technologies so that we can increase the options available to blind citizens.

To put this in more scientific terms, the quality of access to the environment depends upon the strength of correspondence between what exists and what one perceives. That correspondence, in turn, is governed by two factors: the nature of the stimuli (e.g., intensity, clarity) and the perceptivity of the human sensory system. If the clarity of the environment is compromised by darkness, physical barriers such as walls, or obscurity such as thick smoke, dust, fog, or clutter, the unaided perceptual system may be thwarted in processing the environment well enough to allow purposeful access. Purposeful access to the environment can also be compromised by impairment or dysfunction in the perceptual system, as in sensory neural processing or sensory reception anomalies (e.g., blindness). There is a need for technology that enhances and controls the stimuli reaching the blind, and for technology that alters and improves perception. The blind have a need and a right to greater safety, greater efficiency, and greater accuracy, and these attributes are within reach as the digital revolution continues to unfold.

The functional implications of compromised environmental access range from the inconvenient to the life-threatening. The human condition is punctuated by the devastating impact of compromised access to the full environment. From the sinking of the Titanic, brought about by the lack of timely perception of an iceberg, to the frantic efforts of search and rescue teams to recover survivors from rubble and smoke-filled buildings, to the struggles of the blind to achieve an acceptable quality of life, mankind strives to save and better lives by making the unknown knowable. The blind have a need and a right to make the unknown knowable.

Many research teams are using digital tools to address the near point needs of blind individuals. We now have computers that read, digital Braille printers, refreshable Braille displays, and so on. The technology to address far point needs, especially blind navigation, has until recently been beyond the scope of our knowledge. We have turned a corner, however, and now have the computational power to create wayfinding, far point, orientation technologies. There exists, therefore, a compelling need to get these emerging far point technologies off the theoretical table and into practical formats.

The visual prosthetic system will need to function within a smart environment, that is, within spaces that are filled with embedded computer chips that sense, emit audio signals, network with other chips or other networks, and that have speech capability. There is a need therefore to modify the environment to make it more "visible" to blind individuals (especially to those using the vision prosthesis). Just like the blind have a right to academic literacy (to the knowledge embedded in print based documents, video, signage, etc.), they also have a right to environmental literacy (to wayfinding knowledge; layouts, routes, orientation and location-based information).

If we study any of the potential modules that could be incorporated into CyberEye, like face recognition, or computer vision, or sonification, we find hundreds of university teams and/or corporate groups working intensely on the systems. Decentralized, competitive efforts are widespread. There is very little effort however at consolidation; few people are pulling the puzzle pieces together toward a common end. There is a need therefore to present a holistic picture, a need for global cooperation, a need for centralized planning and action, a need to blend internal vision chips and bio-molecular "cures" with wearable computing expertise, and a need for standards for incorporating modular designs into wearable systems. All of these needs/solutions are addressed within the HawkEye Project.

Goals, Objectives, Activities and Time Lines

Goal One: Create a major international research lab

Goal Two: Build CyberEye systems

Goal Three: Build acoustically enhanced environments

Goal Four: Create training strategies and curricular materials

The time line for developing versions of CyberEye (primarily the Echolocation-based CyberEye) will be as follows:

January, 2005: Detailed plans in place for the creation and testing of the Sonocular (outlines for the next levels also in place). Initial plans for environmental chip design ready. Plans for alternative/additional CyberEye designs ready. Initial development of training strategies and curricular materials outlined. CyberEye 1.0 is the Sonocular, a Soundflash (sonic echolocation) technology combined with an ultrasonic sonar environmental sensing technology such as KASPA. This will be the first wearable unit to combine diverse emitter/sensor technologies, resulting in user perception of a larger, more complete whole.

January, 2006: Sonocular prototype ready for testing. Detailed plans ready for environmental chip design. Parallel development of alternative/additional CyberEye systems. From this date onward the alternative CyberEye systems will follow a sequence of design, testing, prototype, testing, final product, and continual upgrading similar to that for the echolocation CyberEye described below.

January, 2007: Detailed plans in place for the creation and testing of the SonicEye (updates on next generations reviewed). Prototype environmental chips ready for testing. CyberEye 2.0 is the SonicEye, a Sonocular with the acoustic output externalized using virtual reality display technology so that vertical cuing and spatial mapping are improved. This will be the first wearable device to provide real-time, externalized acoustic feedback, resulting in true perceptual correspondence (spatial elements appear to be where they actually are).

September, 2007: Sonocular product available. Environmental chips that work with the Sonocular available. Training/curriculum materials ready for Sonocular.

January, 2008: SonicEye prototype ready. Environmental chips that work with the SonicEye detailed. Training/curriculum materials in development for SonicEye.

January, 2009: Detailed plans ready for the creation and testing of the CyberEye (updates on next generations reviewed). Initial plans for environmental chip development in place, as well as associated training and curricular materials. CyberEye 3.0 is the first version to be called simply the CyberEye: a computer driven device that integrates a full range of sensor technologies (sonar, radar, optical) into a single array and outputs a composite image through auditory, tactile, and visual displays. It will connect to modules for color perception and complex pattern recognition, modules for interfacing with smart audified environments and location information networks, and specialized sensor modules for seeing at greater distances, through solid surfaces, and into the human body. CyberEye is a fully developed vision substitution system with augmented and mediated reality components that provide perceptual capacity beyond the limits of the human eye.

September, 2009: SonicEye product ready. Environmental chips that work with the SonicEye complete. Training/curriculum materials for SonicEye complete.

January, 2010: CyberEye prototype ready

January, 2012: CyberEye product ready. Environmental chips that work with CyberEye complete. Training/curriculum materials for CyberEye complete.

January, 2014: CyberEye generation II product ready (additional modules, breakthroughs in ergonomics, sensor and display refinements, platform updates in anticipation of the latest in implant technology). Upgrades complete for environmental chips and training materials.

May, 2016: Detailed plans for the creation and testing of the HawkEye ready (updates on next generations reviewed). Upgrade plans for environmental chips and training materials ready. Selected artificial vision systems that work with HawkEye ready to be interfaced. CyberEye 4.0 is HawkEye, a wearable system that combines all sensor and display technologies with neural implant technologies. The result will be a multimodal vision system whose sensory feedback is highly natural to the nervous system, adding true luminescence to the composite image.

September, 2018: HawkEye prototype ready

May, 2020: HawkEye product ready


1. We expect to build CyberEye so that the blind will attain perceptual information that is different from, but functionally equal to, that of sighted individuals. The blind will perceive their spatial, symbolic, computational, and aesthetic surroundings in a manner similar to the sighted. Specifically, the blind will be aware of terrain changes and ground level objects; recognize and read signs, printed material, and computer displays; move with ease and efficiency through complex, dynamic environments; participate competitively in conventional sports and pastimes such as ball play and video games; access mid-speed transportation; and expand awareness of the beauties of art and nature.

2. We expect to create a standardized personal area network (PAN), a substrate that will accommodate any modular technology that can be conceived for incorporation into a wearable computer. These modular units will be the new digital technologies that will increase functional options for the blind.

3. We expect blind individuals to be safer using CyberEye.

4. We expect blind individuals to be (measurably) more efficient using CyberEye.

5. We expect blind individuals to be (measurably) more accurate using CyberEye.

6. We expect that a new kind of smart chip will be developed that will audify environments to make the world more "seeable" for the blind through their vision prosthetic system (and at times even using unaided hearing).

7. We expect that CyberEye will be networked to embedded computer processors that unlock location information (information networks such as global positioning systems, inertial guidance such as dead reckoning, and magnetic north reference). This will allow blind people to find their way from any point to any other point, or back to an original point, by accessing GPS and using inertial and magnetic sensors. Blind people, just like the sighted, need never be lost again. Databases of streets, addresses, and points of interest will guide individuals wherever they wish to go.

8. We expect that CyberEye will be modified to provide improved perceptual abilities for the partially sighted - allowing individuals with poor vision to see better by presenting clearer images to the existing eye. Reading, driving, and improved mobility may become possible with mediated and augmented vision enhancement.

9. We expect that CyberEye will be interfaced and blended with strategies and technologies being developed for internal ocular and brain chip implants and for bio-molecular systems.

10. We expect that CyberEye will be modified to address the needs of other populations of people with disabilities, including those groups with sensory processing dysfunctions - allowing individuals with sensory neural dysfunctions such as dyslexia, cortical visual impairment, or sensory modulation disorders to access the environment by mediating perception in a manner that is friendly to individual processing requirements.

11. We expect that CyberEye will enable spinoff solutions for other wearable computer systems.
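The inertial guidance named in expectation 7, dead reckoning between GPS fixes, can be sketched as a simple accumulation of heading-and-distance steps. The coordinate convention and the step data below are illustrative assumptions, not the planned CyberEye implementation:

```python
import math

def dead_reckon(start, steps):
    """Estimate position by accumulating heading/distance steps, as an
    inertial guidance module would between GPS fixes. Headings are in
    degrees clockwise from north; distances in meters.
    (Illustrative sketch only; a real module would also fuse GPS and
    magnetic-sensor corrections to bound the accumulated drift.)"""
    x, y = start  # east and north offsets from the start, in meters
    for heading_deg, distance in steps:
        h = math.radians(heading_deg)
        x += distance * math.sin(h)  # eastward component of the step
        y += distance * math.cos(h)  # northward component of the step
    return (x, y)

# Walk 10 m due north, then 10 m due east:
pos = dead_reckon((0.0, 0.0), [(0, 10.0), (90, 10.0)])
print(pos)
```

Each new GPS fix would reset `start`, so dead-reckoning error accumulates only over the short stretches where satellite data is unavailable.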

The makeup of the team could include members from the following categories. A single team member can have one or more of the roles listed below. We will need to elaborate a set of roles for each professional on the team.

B. Technical Expertise Needed in These Areas:

1. Blindness: needs and issues, sophisticated informant
2. Blind movement and navigation
3. Environmental literacy/location information
4. Sensory/perceptual processing
5. Man-Machine interface - displays, human perception, humanistic intelligence
6. Neuroscientist
7. Sensing technologies: laser optics, ultrasonar, radio
8. Engineer
9. Software Developer
10. Instructional/therapeutic: teaching strategies, curriculum/program development, assessment
11. Directive

C. Public Awareness

1. Press connections
2. Writers: science writers, scholarly journalists
3. Web site development
4. Presenters and conference coordinators

D. Mobilization of Resources

1. Marketing
2. High Level Fund developer (corporate connections, government)
3. Public Funding (medical, rehab, education, defense)


Blind Navigation Specialists; Roles:

01. To help the group understand the demographics of blindness
02. To help the group understand the history of blind rehabilitation; to keep developments in historical perspective
03. To help the group understand current and evolving strategies for teaching blind individuals to navigate
04. To help the group understand how human development affects teaching strategy and technology use
05. To help the group understand the scope of technologies being created to address blind navigation; to see where our work fits into a larger world
06. To keep the team grounded in the practical and relevant, focused on solving real world functional problems
07. To keep the team focused on accessibility

Experienced Experts in Sensory Adaptation, Including Echolocation; Roles:

01. To help the group understand the physics of echolocation
02. To help the group understand how blind individuals successfully use echolocation
03. To help the team design relevant and practical echolocation tools

Experts in Wearable Computing/Cyborgs; Roles:

01. To help design the CyberEye systems and the universal substrate
02. To help design the environmental audification systems
03. To offer guidance about modular development; what and how
04. To help design networks and interconnections
05. To help the team understand the cultural, political, and ethical implications of wearable computing technology and to advise the team
06. To help the team understand alternative modules that could be used on a wearable system, including body sensing and non-native senses (360 degrees, seeing through surfaces, etc.)

Experts in Environmental Literacy, Signage, and Location Based Technologies; Roles:

01. To help the team design audification chips
02. To help the team understand the various networks that CyberEye might link with (personal and interior area nets, spatial and object nets, etc.)
03. To help the team understand the cultural, political, and ethical implications of extreme networking and to advise the team

Assessment, Research Experts; Roles:

01. To design three kinds of assessments for consumers: impairment (body parts), disability (tasks, functional assessments), and handicap (quality of life)
02. To design assessments for the modules and substrate

Experts in Sensory Integration; Roles:


Experts in Artificial Vision; Roles:


Neuroscientists; Roles:

01. To help the team understand the anatomy and function of the human brain; to guide the understanding of the brain's information processing system

A Secretary/Administrative specialist; Roles:

01. To set up a communications network
02. To set meeting dates
03. To arrange lodging and meals
04. To do accounting

Fund raising Experts; Roles:

01. To understand federal funding avenues and to bring these suggestions to the team
02. To understand corporate funding avenues and bring these suggestions to the team
03. To understand venture capital funding avenues and bring these suggestions to the team
04. To understand funding strategies and bring these suggestions to the team
05. To organize and write proposals that generate the funding needed for all aspects of the project

A Marketing/Public Awareness Team; Roles:


A Document/Training team; Roles:

01. To turn the technology into a system to be taught
02. To make materials available to consumers and professionals

A Media/Literary Figure; Documentary Film Crew; A Science Writer; Roles:


Lawyers with Patent and Intellectual Property Expertise; Roles:

01. To help the group avoid patent infringement


Provide information that measures whether the goals are met

Two kinds:

One, summative: measures how well the population achieved the goals; administered at the end of the project (nationally validated assessments; authentic assessments that include solutions to real community problems; attainment of benchmarks; portfolio assessments that demonstrate completed products).

Two, formative: provides data during the course of the project (attitude surveys; daily and weekly journals with web tracking; real-time feedback; audio and video recordings of activities).


Use a budget narrative; one or two paragraphs that describe how expenses support the project

Show matching funds and funds that the organization will absorb.

Show in-kind contributions.


Use the white paper or portions of it.

APPENDIX A: CyberEye Requirements

I. Introductory Considerations

A. Initial definitions - though some of these concepts are psychophysical, the terminology used here has often been simplified to make the concepts more intuitive and to preclude the need for esoteric jargon.

1. Spatial element: any item, feature, or component, static or dynamic, occurring in space, including one's own body parts, colors and symbols, illumination, sound, and objects and their characteristics.
2. Temporal element: any element that occurs as a function of time, including self-regulatory functions.
3. Spatial event: generally refers to any spatial element that is novel or just coming into awareness.
4. Relative distance: broken down according to four indices that generally represent the degree of action and cognitive processing required to reach the objective - the extent of movement (crawling, walking, running, use of vehicular technology), motor planning, orientation processing, and the likelihood and extent to which elements are interposed between subject and objective. Classifications are based on the functioning of a sighted adult human.

a. Reachable (within body reach): the spatial element is within the body's reach (arms, legs, head) and can be obtained with reflexive movement (i.e., a simple reaching response). Little movement, very low motor planning, little orientation processing, little likelihood of interposition.
b. Short (beyond reach to 2 meters): the element is proximate to the body, within easy grasp with brief, almost reflexive movement. Little movement, minimal motor planning, little orientation processing, low likelihood of interposition with few elements. One could easily roll, crawl, or scoot to the element.
c. Medium (2 to 15 meters): the element requires locomotion and engagement of a simple plan to obtain, with minimal orientation processing and moderate likelihood of interposition of multiple elements.
d. Long (15 to 200 meters): one must engage more concerted effort and more complex motor planning, with moderate orientation processing and high likelihood of interposition of multiple elements.
e. Remote (200 meters and beyond): technology is preferred or required to reach objectives. All indices fall into the upper range.

5. Reception: the registration of information by the sensory organs or sensor elements.
6. Perception: the processed awareness of sensory information by the subject.
7. Mediation: Computerized processing of sensor received information intended to facilitate human perception.
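The five relative-distance classes defined in item 4 lend themselves to a simple lookup that a device could use to tag detected elements. The sketch below uses the class names and thresholds from the list above; the function name and the 0.7-meter "body reach" boundary are our own illustrative assumptions, since the list gives no numeric value for reach.

```python
def distance_class(meters: float) -> str:
    """Classify a spatial element's relative distance into the five
    classes above: reachable, short, medium, long, remote."""
    if meters <= 0.7:    # assumed arm/leg reach; not specified above
        return "reachable"
    if meters <= 2:      # beyond reach to 2 meters
        return "short"
    if meters <= 15:     # 2 to 15 meters
        return "medium"
    if meters <= 200:    # 15 to 200 meters
        return "long"
    return "remote"      # 200 meters and beyond
```

Such a tag could drive how much mediation the system applies: little for reachable elements, route-level processing for remote ones.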

B. General Parameters: Vision is simply a sophisticated piece of biotechnology developed by nature over millions of years to perform specific functions. Humans, using brains that have likewise developed over millions of years, can engineer technology and strategies to more or less emulate, or even surpass, the functions performed by vision.

1. Primary functions of vision

a. Registration (spatial event detection): detects when a new event has taken place, which would include the presence of a new element coming into the field of detection.
b. Discrimination of spatial elements (including elements of one's own body): being able to discern one element from another.
c. Classification of spatial elements: the ability to recognize elements once they've been registered. This is more a function of perception than reception, because it requires the subject to draw upon recall. However, the reception component is important, in this case vision, insofar as it provides sufficient information for pattern resolution to make classification possible.
d. Identification: the attribution of meaning to what is perceived either through the recognition of elements or features (a truck, an entrance to a building), or the comprehension of elements or events taking place (a ball thrown toward the subject, an approaching drop-off, a familiar face).
e. Spatial mapping: establishes spatial element relationships (relationship of self to self, self to space, and between spatial elements). For example: there are several of those; this is above that, below that, or to the left or right of that.
f. Dynamic interaction: guides or refines dynamic interaction with spatial elements (self to self and self to environment). For example, the subject catches the ball and tosses it into a basket.

2. Justifications for vision emulation: While we recognize that the human brain may eventually be capable of learning to process just about anything it is exposed to, we feel it prudent to design the perceptual determinants of cybervision to emulate visual reception fairly closely, for the reasons explained below.

a. Neural precedent: the neural system has largely evolved and developed around vision as the primary distal/spatial sense. The eye has certain characteristics which provide information in specific ways that the neural system is designed to process optimally. In essence, the nervous system expects to receive certain information in a certain manner. Presenting information that is alien to the neural system, or presenting information in ways that are not native to it, could cause a poor fit between information and neural processing, resulting in an undue struggle for the subject to use the information provided.
b. Societal precedent: human society has developed to make optimal use of vision as a primary perception medium. The environment has typically been designed with the visual system foremost in mind. Everything from street signs, traffic lights, and lanes on the road, to graphic displays, to popular sports and art, to the operation of a simple cell phone or microwave connects the sighted tightly within a complex network of the exchange of goods, services, information, and companionship. Providing information to the brain that is consistent with information available to the general public will bring the subject into this community network by allowing the subject to operate on common ground.

II. Perceptual Factors: This refers primarily to sensor parameters and performance, and the environmental information received by the sensors.

A. Range: Ideal range will vary according to tasks. The visual system uses eye convergence, focusing, and fixation mechanisms to vary its range of reception. Generally speaking, when the eyes have concentrated on a particular spatial element, all other elements recede to the background of both reception and perception. The range parameters should be such that the full gamut of daily and developmentally relevant tasks can be performed. In general, short ranges are sufficient to facilitate near-point tasks and basic movement, while longer ranges are required for navigation through the larger environment, access to environmental literature, and high-speed tasks. Range would ideally be adjustable up to no less than 15 meters (greater than 200 meters preferable); still greater ranges would be very useful for navigating through the larger environment or simply enjoying a broad view of it.
B. Lateral/Vertical field of view: The lateral field of view of the visual system is about 180 degrees, with acuity falling off very sharply from center according to existing formulas. This allows major events to be registered peripherally, while central events take precedence receptively. The vertical field is similar, though the angle is narrower (about 60 degrees). The visual field is oriented anteriorly, with no posterior perception without head scanning. In general, field of view is relevant to tasks involving interaction with dynamic elements or multiple elements. Work with previous devices indicates that field acuities should approximate vision in that the central field should take precedence over the peripheral. The vertical field and/or processing algorithms should be designed to reduce ground clutter.
C. Focal Point: Human vision has a maximum-acuity focal point of about one degree. This allows very minute details to be registered through scanning with relatively high priority. The effect is that fine discriminations and relationships can be registered. The focal point is useful for discrimination of small elements or features. A device should allow easy scanning of the environment to provide detailed discriminations.
D. Element discrimination: (see C above.) The four primary factors of element discrimination are density, texture, color/contrast, and pattern recognition. The wavelengths of visible light carry information that allows for color and contrast discrimination. This level of discrimination enables highly refined distinctions to be made among spatial elements. Moreover, registration of minute elements lends itself to the reception of texture and, to a lesser extent, density. Extensive work with people who have very poor vision demonstrates that color, or at least contrast, is perhaps the most critical factor in discriminating spatial elements. Even a very little vision goes a long way in detecting drop-offs, grass lines, doors, empty chairs, and more.
E. Classification: The device or system must provide enough information at high enough resolution to allow for classification and identification of objects.

III. Perceptual Interface (Man-Machine): This refers to the manner in which sensor reception is conveyed to the subject.

A. Auditory/Tactile/Visual

1. Audition: One of the biggest issues in auditory displays has been the externalization of sound. Hitherto, sound has been conveyed to the ears by direct coupling, causing auditory stimuli to be perceived in two dimensions (left/right) within the head, bounded by the distance between the subject's ears, with generally no vertical cuing. Externalizing the sound would allow information to be presented so as to approximate the actual placement of elements in space with minimal, non-native coding; e.g., an event taking place 13 feet forward, 8 feet to the left, and near the ground could be presented by a sound burst from that externalized location. Not all dimensions need be coded this way, but without externalization, lateral cuing is limited and vertical cuing is difficult to nonexistent.
2. Taction (tactile display): A tactile component could greatly expand the breadth and scope of information being presented through sophisticated, modern sensors - information that may be too rich to squeeze into the eye or ear alone. Deaf-blind users, who are arguably in greatest need of access to distal information, would have little or no access to auditory displays. Further, tactile displays could serve to provide additional information that might not be incorporated into other displays. In general, tactile displays can be bulky and may be limited in the amount of information they can provide; nonetheless, their use may be crucial.
3. Visual: Visual displays can take 3 forms - external display, retinal imaging, and neural interface. The advantage to visual interface is that the visual system is a dedicated spatial processor most conducive to the presentation of spatial information. Individuals with partial vision would benefit from visual displays.
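To ground the externalization discussion in item 1: binaural and externalized displays work largely by manipulating the interaural cues the brain already uses to localize sound. A minimal sketch of one such cue, the interaural time difference, is below; the head-width constant and the simple sine approximation are our simplifying assumptions, not a specification for the device.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_WIDTH = 0.18        # assumed distance between the ears, meters

def interaural_delay(azimuth_deg: float) -> float:
    """Approximate interaural time difference in seconds for a source
    at the given azimuth (0 = straight ahead, +90 = hard right),
    using the simple sine formula. Positive means the right ear leads."""
    return (HEAD_WIDTH / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))
```

A source straight ahead yields zero delay; a source at 90 degrees yields roughly half a millisecond, which is the order of magnitude an externalized display must reproduce for lateral cuing.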

B. Real-time neural mapping potential: The eyes update the brain with a new set of information approximately 25 times per second. This update speed may be considered continuous as far as the neural system is concerned. In general, continual updating allows for smooth interaction between subject and environment. Changing perspective between subject and environment is processed dynamically, and new events do not escape detection. However, since the auditory system is able to update the brain much more quickly, the brain may reap unknown benefits from higher update speeds provided by artificial sensors.
C. Reality correspondence and cuing (colinearity): The closer the correspondence between the location of a spatial element and the displayed presentation of that location, the more quickly and easily the neural system will learn to interact with spatial elements. Greater disparity between actual position and displayed position necessitates more learning and processing, and is to be minimized, since one of the biggest arguments against distal sensors has historically involved concerns about cognitive or perceptual overload.
D. Integration: The output display needs to be of a nature that does not mask or otherwise interfere with normal sensory processes. It should also alter sensory processing as little as possible, to maximize the speed and potential of learning. Extended consideration should be given to complex output displays that would provide a great deal of useful information but might take some time to learn; the benefits might be worth the effort. An example would be restored vision or a visual cortex interface providing 3D visual imaging. Such imaging would be expected to baffle a novice to visual perception, such as a congenitally blind person, and could take months or years of adaptation. Permanently altered brain chemistry would complicate the matter still further.

IV. Ergonomics

A. Size, weight, compactness, impact on user comfort: The less comfortable the device, the less likely it is to be used, no matter how useful it is. Connection between the head-mounted sensor array and other components is inevitable, at least in the beginning stages of development, so connectivity options should be explored thoroughly. It should be noted that the SonicGuide's head piece and processor pack were approximately one-third the size and weight of KASPA's, and some people have preferred these user-friendly features over KASPA's higher spatial resolution and discrimination. Targeting the preferences of specific users is important in differentiating various "models" of essentially the same technology.
B. Aesthetics.
C. Friendly to use.

V. Flexibility to individual needs

A. Perceptual needs (hearing, tactile, visual): (see III-A.)
B. Body size: the smaller the person (e.g., infant or toddler), the smaller and lighter the device must be. Also, vertical field sensor performance and ground clutter algorithms may vary slightly according to height.
C. Cognitive capacity: individuals with limited cognitive capacity may require simpler user interfaces, and/or simpler display presentations.
D. User preferences

VI. Issues of Affordability and Reliability

A. Cost to produce is affected by quantities; ease of manufacture is of no consequence in the end except as it affects costs.
B. Availability of components is essential.
C. High Reliability, Ruggedness, and Maintainability are Essential

APPENDIX B: CyberEye Design Concept


I. Sensor Technologies

A. Acoustic (compression/density)

1. Sonic
2. Ultrasonic

B. ElectroMagnetic

1. Radio
2. Optical


II. Display Options

A. Visual (enlargement, heads-up, laser)
B. Auditory (2d or 3d, acoustic coupling)
C. Tactile/vibratory (wearable on back, chest, and/or head; handheld modules)
D. Neural implant (direct stimulation)

III. AREAS OF CONVERGENCE: The following areas are available and converging to power this project.

A. Technology

1. Miniaturization
2. Power requirements
3. Processing speeds
4. Sensors
5. Wearability: Ergonomics, operation, enclosures

B. Technical Expertise

1. Blindness: needs and issues, sophisticated informant
2. Blind movement and navigation
3. Environmental literacy/location information
4. Sensory/perceptual processing
5. Man-Machine interface - displays, human perception, humanistic intelligence
6. Neuroscientist
7. Sensing technologies: laser optics, ultrasonar, radio
8. Engineer
9. Software Developer
10. Instructional/therapeutic: teaching strategies, curriculum/program development, assessment
11. Directive

C. Public Awareness

1. Press connections
2. Writers: science writers, scholarly journalists
3. Web site development
4. Presenters and conference coordinators

D. Mobilization of Resources

1. Marketing
2. High Level Fund developer (corporate connections, government)
3. Public Funding (medical, rehab, education, defense)


IV. Stages of Development

A. Parallel Development: Various specific technologies developed, tested, and perhaps used (SoundFlash, KASPA, VOICE) - pieces of the puzzle without interconnections or integration.
B. First stage technical integration: Technologies begin to integrate (from two sensory areas) with emphasis on single sensory feedback display.
C. Second stage: At least 3 sensor technologies integrate with emphasis on intermodal sensory feedback displays.
D. Bio-Machine: Technology becomes neurologically integrated.


V. Proposed Design

A. Sensory Performance:

1. Reception: In building such devices, we often refer to the "terrible trio" - range, resolution, and field of view. It is difficult to balance these three without compromising something. It would seem prudent to explore optics or short-wave electromagnetics, in addition to sound and ultrasound, as optimal for balancing these three and for providing the potential for color detection. A lens or other receiving transducer or transducer array could be configured with a narrow, high-acuity central field, with acuity decreasing peripherally over about 180 degrees. A 360-degree field could also be accomplished, but this should be reserved for special applications, since the issue of information overload is already paramount. Other sensor designs and processing options could be made available for special applications, such as accentuated perception of movement in the peripheral view to enhance "corner of the eye" perception.
2. Signal Processing: With processing, color discrimination might also be reduced peripherally, as with the eye. Edge detection algorithms could also be used to assist discrimination. It may also prove necessary to use a system of active emission and reception so that the received signal is uniform and properly coded; ambient conditions of light, etc. can vary enough to make a purely passive device very difficult to use - especially for someone not accustomed to the ever-changing nuances of illumination, such as a congenitally blind person. An emitted signal with uniform characteristics would probably be easier to process as well. For example, the signal could be ramped in frequency or pulsed, and a modulation technique could be applied to measure distance without the need for stereo cameras and complex focusing algorithms.
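As a rough illustration of the frequency-ramped emission idea in item 2: in an FMCW-style scheme, the echo returns delayed, so the instantaneous frequency difference (beat) between the emitted and received signals is proportional to distance. The sketch below computes distance from that beat; all parameter values are illustrative assumptions, not design figures.

```python
SPEED = 343.0  # propagation speed, m/s (sound in air; use ~3e8 for radio/optical)

def distance_from_beat(beat_hz: float, ramp_hz_per_s: float) -> float:
    """FMCW-style ranging: round-trip delay equals beat frequency
    divided by the ramp slope; distance is half the delay times the
    propagation speed."""
    delay_s = beat_hz / ramp_hz_per_s
    return SPEED * delay_s / 2.0

# Example: a 10 kHz/s ramp and a measured 100 Hz beat imply a 10 ms
# round trip, i.e. about 1.7 m to the element.
```

The appeal for this application is that range falls out of a single receiver channel, without stereo cameras or focusing algorithms.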
B. Spatial Updating: In order to optimize neural mapping to movement (which is crucial to the learning process, especially for developing children), the device should be head mounted. This allows hands-free scanning of the environment in a way native to the nervous system. Also, the output to the user should be continuous or nearly so.
C. Display:

1. Acoustic Coupling: Information could be conveyed to the user through ear tubes or small speakers behind each ear. Bone conduction should be explored.
2. Information would be displayed in continuous, 3D sound with limited voice prompts or other indicators, possibly combined with tactile stimulation over the head and torso. Feedback would consist of complex signals whose characteristics correspond in real time to movement and to the dynamic nature of spatial elements. The properties of sound that could be used for encoding include pitch, volume, directionality, phase, pulse-wave modulation, frequency modulation, frequency filtration (high, low, band pass), time delays, and reverberation. For example:

a. Distance might be coded by changing pitch, volume, and resolution - with greater distances corresponding to higher pitch, lower volume, and poorer resolution. Matching pitch to distance was found very successful by Dr. Leslie Kay with his KASPA technology. Pitch could be coded logarithmically as a function of distance, so that fine discriminations are possible at close distances but less necessary at further ones.
b. Color could be coded according to a band-pass algorithm applied to each element's pitch - high wavelengths passing a high band, low wavelengths a low band. Brightness could be represented by increased resonance in the filter cue or by changing pulse modulation in a square wave.
c. Lateral and vertical position would be denoted by externalized sound position.
d. Texture would be indicated by the clarity of the signal and complex timbres as with KASPA or VOICE.

3. Tactile displays would run concurrently with the auditory display. This could be handled in several ways. A chest array could be used that focuses on ground-level and terrain information, which may be of sufficient complexity to baffle an auditory display. A chest array could also be used for reading or graphic perception tasks. A pair of virtual hand mounts might also be used to allow tactile reading of any printed, computer-displayed, or graphic material (information could be conveyed directly, or OCRed and output in voice or a tactile Braille simulation), or the hand mounts could allow one to explore the distal environment through a kind of electro-stimulated virtual touch.
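The logarithmic distance-to-pitch mapping proposed in item 2(a) above might be sketched as follows. The logarithmic form gives fine pitch discrimination at close range and coarser discrimination far away; the range and frequency endpoints here are illustrative assumptions, not KASPA's actual figures.

```python
import math

def distance_to_pitch(d_m: float,
                      d_min: float = 0.2, d_max: float = 15.0,
                      f_low: float = 200.0, f_high: float = 4000.0) -> float:
    """Map distance (meters) to pitch (Hz) logarithmically, as in 2(a):
    near elements get low pitch with fine resolution per meter, far
    elements get high pitch with coarse resolution per meter."""
    d = min(max(d_m, d_min), d_max)  # clamp to the coded range
    frac = math.log(d / d_min) / math.log(d_max / d_min)  # 0 at d_min, 1 at d_max
    return f_low * (f_high / f_low) ** frac
```

With these endpoints, an element at arm's length sounds near 200 Hz and one at the edge of range near 4 kHz; a one-meter change close by shifts the pitch far more than the same change at the limit of range.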

D. Ergonomics: With modern electronics, the device might be integrated into a spelunker-type head piece with a streamlined chest mount. Short-range wireless technology should be explored for connecting discrete components, and "wearable computer" designs should be explored. Full body covering may be used for special applications to present complex tactile information; however, most athletes will want to leave their arms and legs uncovered for freedom of movement.
E. Extended features: Each of the following features addresses critical access needs. We also understand that each of these additions inflates cost and size, and complicates R&D and manufacture. However, other parties have already given attention to some of these features in other, stand-alone devices. A modular device with "plug-in" options is warranted: we could start with the basic sensor array and perceptual processor and expand from there as resources allow.

1. Device could shift to reception of ambient lighting at user preference. This would allow for clear perception with source lighting. This might even be preferred by users experienced in the use of vision.
2. Device might implement a zoom feature which would allow extended range with narrowed field.
3. A close-up mode could be implemented to allow scrutiny of near objects or near-point material without concern for coding distance. Reading might be accomplished this way if the field could be adjusted. A display mode could be implemented to eliminate distance processing for easier reading. Also, for reading, vertical position could be highlighted by pitch cues as well as position coding, which would make letters easier to read. One reading system using this approach, the Stereo-toner, met with moderate success in the mid-1970s. Another option might be to implement an optical character recognition function to assist reading; this could provide information in speech or to hand-mounted virtual tactile displays.
4. Close-up ambient light detection might allow reading of simple LED, LCD, and CRT displays.
5. Could implement color/contrast filters so that elements known to be of certain colors would be emphasized or de-emphasized.
6. Could receive location information such as GPS, talking signs, and magnetic north for route planning and tracing in the larger environment. This would provide the user with vocal or tone feedback about object location, allowing the user to "home in" on an objective. For example, an object of interest could be "tagged" in the virtual environment with a signal that one could approach using distance and directional feedback.
7. Could allow user to retrace routes using inertial and magnetic (compass) sensing.
8. A tactile display output option could be made available. Excellent, high speed tactile displays already exist. One could imagine outputting the image to a tactile display for non-movement tasks such as viewing graphical displays. This could also be applicable to the deaf-blind.
9. A direct interface could be established to PCs to facilitate graphical display navigation.
10. The device could receive transmissions from multiple remote emitters. These emitters could be applied as "tags" to elements of the environment that need clarification (a moving ball, for example, a target, or key points along a race course or route), effectively becoming beacons that highlight specific elements for the user. These emitters could also be tied to specialized sensors designed to measure other qualities for medical, security, and other applications.
11. The unit would have visual display options and processing flexibility for the partially sighted.
12. Short-wave radar would allow special-task individuals to see through walls and other obstructions. Infrared would allow seeing in the dark.
13. Wireless Internet connectivity could facilitate access to on-line information such as books, periodicals, news media, and mail.
14. A personal data assistant could facilitate cognitive functioning through word-finding and spelling options, reminders, etc.
15. Body sensors would allow individuals with health issues to monitor and manage their health conditions in real-time.

VI. Precedents:

A. Environmental sensing (far point perception)

1. Kay's technology (KASPA): This is a real time, 3 channel, ultrasonic sonar device which gives feedback to the user according to complex tone timbres, pitch, and direction. These tones are derived from the environment. Distance is coded by pitch, and azimuth by stereo positioning. A center transducer allows for finer discriminations. The range is 5 meters, and the field of view about 60 degrees. Updating is virtually instantaneous and continuous; scanning is accomplished by head movement. Users can learn to move around complex environments, reach for objects, and discriminate texture and density. A few can slalom poles on bicycles.
2. Dr. Peter Meijer's VOICE: This is an optical camera run through a computer, with the image coded into audible signals. It relies on ambient light. The scene is coded for color by timbre, verticality by pitch, and laterality by stereo positioning. There is no direct coding for distance other than the amplitude of the sound and the amount of the viewing space the element occupies. There is no central acuity distinction, so specific elements are difficult to distinguish. Updating takes place about once or twice per second. Subjects have learned to identify elements in scenes and describe pictures. Some have indicated increased comprehension of visual concepts. Two individuals are known to use the VOICE while moving; because distance is not coded and the information is not provided continually, movement is generally not facilitated.
3. Dr. Steve Mann's Blind Vision project: used a combination of radar and optics to produce a wearable system that allowed detailed environmental perception at ranges of over a mile.
4. Dr. Steve Mann's EyeTap: laser eyeglasses hooked to a wearable computer that can mediate the image to account for a broad range of visual impairments or dysfunctions.
5. Sendero's GPS: This is a GPS device with output to voice or Braille. Users can follow routes to new or preselected destinations with an accuracy of 10 meters. With mapping technology, it can tell you what street you're on, and how to get to a given street, address, or place of interest.
6. Sonic Echolocation and SoundFlash by World Access for the Blind: By using natural echoes enhanced by an optimized echo signal, users can detect and identify objects the size of a curb or pole at 5 feet, a parked car or tree at 15 feet, a bus at 30 feet, and a building from over 600 feet. The ears are used without added processing of any kind. Users can run or ride bikes through unknown areas. Discrimination of small elements is very limited, especially at greater distances.

B. Reading (near point)

1. Stereo-toner: an auditory reader (manufacturer unknown). This device came out in the 1970s. It translated writing, through a camera, into tones that corresponded to the shape of what the camera saw. Some users learned to read over 100 words per minute by listening to the sounds.
2. Optacon (Telesensory Systems): This device operated much like the Stereo-toner in principle, but used a tactile display instead of sound to present the output. A one-finger display presented, in vibrating pins, the shape of what the camera saw. A special camera allowed reading of CRTs. Users were known to achieve reading speeds of up to 80 words per minute.
3. The virTouch Mouse: This device enabled real-time access to graphical computer screens. It contained three finger-pad displays which presented real-time, Braille-like information as the device was moved. The displays gave the shape of whatever the mouse encountered on the screen; shading was represented by pin depth. The display updated continuously as the mouse was moved, providing a convincing sense of dynamically "touching" the screen. An option converted text to Braille. People were able to learn to use the graphical functions of off-the-shelf word processors and drawing programs, and to read maps presented on the PC.
4. Color Test: This device, distributed by several companies, gives speech output to identify colors of objects touched to a quarter-inch square window. It gives the name of the color, its brightness, hue, and saturation. Individuals can learn to select and match colors.

APPENDIX C: Functional Criteria

Functional navigation means "how to find your way in the standard spaces of the world." Define the spaces and organize them by the frequency with which they are encountered. Create standard test spaces so that assessment can be controlled. Define standardized training spaces. In each case, consider layouts (mental images) and routes.

Indoor spaces:

01. Four-walled room with one door
02. Hallway
03. Specialty rooms (classrooms, gym, pool area, kitchens, bathrooms, etc.)

Outdoor spaces:

01. Quiet residential
02. Small business areas, bus stops and shelters
03. Small urban areas
04. Intersections
05. Gas stations
06. Parking lots
07. School campuses
08. Apartment Complexes and condominiums
09. Large urban areas
10. Parks
11. Rural pathways
12. Through the woods
13. Mountain trails; hiking trails
14. Panoramas
15. The stars

Commercial spaces:

01. Restaurants and eateries
02. Shopping malls
03. Stores (department, grocery, hardware, clothing, toy, pet)
04. Outdoor plazas
05. Outdoor bazaars (swapmeets, flea markets)
06. Office buildings
07. Transit Stations (bus depots, train stations, airports)
08. Hotels

Culinary spaces:

01. Kitchen (food preparation)
02. Buffets
03. Food displays (dessert trays, bakeries)

Recreational spaces:

01. Playgrounds
02. Stadiums
03. Amusement parks
04. Theatres
05. Gyms
06. Country Clubs

Social spaces:

01. Parties and informal gatherings
02. Lines and formations
03. Seating arrangements
04. Conventions and conferences

Mechanical/Technical spaces:

01. Construction sites
02. Furniture and gadget assembly
03. Automotive care
04. Puzzles
05. Home improvement

Symbolic spaces:

01. The printed word
02. Graphics (pictures, graphs, photos, art)
03. Dynamic text (marquee, subtitles, digital text)
04. Dynamic graphics (computer graphics, video)
05. Commercial literacy (vendor signage, menus, courtesy signage, package labeling)
06. Location literacy (public signage, directional markers, warning markers)

Water spaces:

01. Rivers
02. Lakes
03. Oceans
04. Snow and ice
05. Waterfalls

Dynamic spaces:

01. Observing from vehicles (car, bus, train)
02. Sports
03. Running
04. Operating short-range vehicles (skates, scooters, bicycle)
05. Operating long-range vehicles (car, truck, bus, train)
06. Operating watercraft (sailboat, houseboat, speedboat, kayak/raft, surfboard, jet skis)
07. Operating aircraft (sailplane, glider, balloon)
08. Operating snowcraft (snowmobile, skis)

Vehicle spaces:

01. Cars
02. Buses
03. Trains/subways
04. Planes

Since we are "copying" vision, we will also look for:

01. Awareness of the presence of spatial elements (registration of elements to be avoided or approached)
02. Localization of elements' position - (Where is it?)
03. Discrimination of elements (perception of edges, boundaries, figure-ground); attributes like color, density, texture, form/shape
04. Pattern recognition; "reading" the gestalt (recognizing a familiar face, person, vehicle, or thing)
05. Object identification
06. Interaction with elements (catching a ball, reaching for a glass, entering a doorway)
07. Monitoring dynamic relationships among spatial elements
08. Interpretation of the social environment and social cues (eye contact, gestures, facial expressions, body language)
09. Landscape interpretation
10. High-speed navigation (running, short-range vehicles, long-range vehicles)

APPENDIX D: Information Networks Under Consideration

The meshwork of meshworks (things that could potentially network) includes the following:

01. WAN (wide area networks, like the internet)
02. LAN (Local area networks, like an agency or a government)
03. PAN (Personal area network, wearable computers)
04. IAN (Internal area network, the chips inside of living things)
05. SAN (Spatial area network; smart spaces that communicate, e.g., intersections, roadways, the insides of cars); a sub-network here would be the AAN (Acoustic area network: the linking of audified nodes)
06. OAN (Object area network; things that communicate with other things, car parts that communicate, for example)
07. VAN (Virtual area network; GPS locations that hang in space anywhere and communicate, i.e., information in places)
08. MAN (Molecular area network, the meeting and networking of biology and machines)
09. NAN (Nano area network, chemistry meets machine)
10. QAN (Quantum area network, machines and light at the level of God)