The HawkEye Project: Seeing Without Sight

The HawkEye project has four comprehensive goals:

1. To build CyberEye vision prosthetic systems

2. To build acoustically enhanced environments that interface with CyberEye units

3. To create an international research lab to design, test, and develop CyberEye systems and associated environmental computer chips

4. To design and develop training strategies and curricular materials to promote the acceptance of the new digital technologies

CyberEye

The HawkEye project will investigate, apply, and assemble state-of-the-art sensory, processing, and man-machine interface technology to design and build CyberVision systems (see APPENDIX A). These systems will consist of a common processing platform of universal design with plug-in options for various sensors and displays (see APPENDIX B). Sensor options may include sonic echo signaling, ultrasonic sonar, radar, infrared, optics, laser, inertial, magnetic, and location information transceivers. Display options may include auditory, visual, tactile, and direct neural stimulation. These artificial ocular units will provide information to neural processors by channeling information primarily through undamaged neural-perceptual pathways - thus circumventing or augmenting damaged pathways and allowing the brain to make compensatory use of adequate (visual) information.
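
To make the plug-in idea concrete, here is a minimal sketch, in Python, of a common processing platform that accepts interchangeable sensor and display modules. Every class and method name here is a hypothetical illustration, not a committed design:

```python
# A minimal sketch of the "common processing platform with plug-in
# options" described above. All names are hypothetical illustrations.
from abc import ABC, abstractmethod

class Sensor(ABC):
    """A plug-in sensor module (sonar, radar, infrared, GPS, ...)."""
    @abstractmethod
    def sample(self) -> dict:
        """Return one reading as a dictionary of measurements."""

class Display(ABC):
    """A plug-in display module (auditory, tactile, visual, ...)."""
    @abstractmethod
    def render(self, percept: dict) -> None:
        """Present processed information to the user."""

class UltrasonicSonar(Sensor):
    def sample(self) -> dict:
        # A real module would drive hardware; this returns a stub reading.
        return {"kind": "sonar", "range_m": 2.4, "bearing_deg": -10.0}

class AuditoryDisplay(Display):
    def render(self, percept: dict) -> None:
        # A real module would synthesize spatialized audio.
        print(f"audio cue: {percept}")

class ProcessingPlatform:
    """The common substrate: routes every sensor reading to every display."""
    def __init__(self):
        self.sensors: list[Sensor] = []
        self.displays: list[Display] = []

    def plug_in(self, module) -> None:
        # Sensors and displays share one socket; type decides the slot.
        (self.sensors if isinstance(module, Sensor) else self.displays).append(module)

    def step(self) -> None:
        for sensor in self.sensors:
            reading = sensor.sample()
            for display in self.displays:
                display.render(reading)

hub = ProcessingPlatform()
hub.plug_in(UltrasonicSonar())
hub.plug_in(AuditoryDisplay())
hub.step()
```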

The HawkEye project will conceive, design, and build high definition "artificial vision systems," generically called "CyberEye." To understand how CyberEye systems will allow seamless wayfinding for blind individuals, it is necessary to think more in terms of spatial perception and less about "vision." Spatial perception is a complex whole-brain manipulation of dynamic, multi-sensory patterns. In this broad sense, the term "vision" can be misleading, since what we view as visual reality is really a multi-sensory experience. The brain gathers spatial information from a composite of sensory perceptions which are not (need not be) subsumed under "vision." See the discussion of Cybervision below.

CyberEye is a sophisticated wearable computer capable of housing and networking modular plug-in units. These modules are designed to provide sensory input that is optimally compatible with the brain's information processing system.

CyberEye is a generic term that can refer to various designs for artificial sensing. One CyberEye might be radar intensive while another might be dedicated to enhancing echolocation. Since the wearable computing substrate can hold various modules, CyberEye systems can be custom-designed to address the specific needs of individual consumers.

CyberEye is a method for creating CyberVision (defined below), a new kind of perception that mediates and augments sensory input, and that redefines what it means "to see" in the digital age. The human eye is a sophisticated piece of bio-technology developed by nature over millions of years to perform specific functions - to make specific information available to the brain. Using the human brain as the primary computer for receiving computer mediated patterns from CyberEye will enable scientists not only to emulate the functions of the eye, but to go beyond them.

CyberEye is a comprehensive strategy for addressing disabilities associated with blindness. CyberEye modules will eventually allow access to print, signs, and symbols. CyberEye systems will allow the consumer to sense, identify, and interact with objects. CyberEye units will allow blind individuals to navigate fluidly through the environment. Modules will allow users to identify people and read body language, and CyberEye systems will probe the world of visual aesthetics.

The father of wearable computing, Dr. Steven Mann (Director of the University of Toronto's Humanistic Intelligence Lab), makes an important distinction between AI (artificial intelligence) and HI (humanistic intelligence). AI is a set of smart machines that think and report. HI is about human beings using the greatest computer ever conceived, the human brain, to interface with and use AI tools. The concept of humanistic intelligence, putting the human brain in control, not surrendering the spiritual core of the human being, is critically important. It ties into the entire moral quagmire that lies in the shadows as we create digital tools. As we create perceptual prosthetic systems, we will keep the focus on simplicity of use, retaining natural control using human brain power, preserving the beauty and character of the human being. Put another way, we will use technology to bring a greater glory to the human brain, rather than decrease or circumvent its awesome power.

The genius of the CyberEye vision prosthetic system is that it will be built upon the natural way the brain computes. The human brain will be fed patterns that it is hungry for, that it best perceives and acts upon. Blind individuals will use their native intelligence and natural auditory skills to control components of the wearable computer. They will be in command of the "auditory chips" embedded in the environment, turning them on and off and altering their output as needed. To put it in a more comprehensive/abstract way, CyberEye will optimize purposeful access to all aspects of the inner and outer world including the physical, symbolic, interpersonal, intrapersonal, aesthetic, and computational environment.
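
As a small illustration of that user control, a consumer-facing command layer might look like the sketch below. The message format, field names, and chip addressing are invented for illustration; no such protocol has been specified:

```python
# A hypothetical command layer for the "auditory chips" described above:
# the traveler switches embedded chips on and off and adjusts their output.
# The JSON message format and the chip IDs are invented for illustration.
import json

def make_command(chip_id: str, power: bool, volume: int = 5, mode: str = "echo") -> str:
    """Build one control message for an embedded chip (0-10 volume scale assumed)."""
    return json.dumps({
        "chip": chip_id,                     # which embedded chip to address
        "power": power,                      # on / off
        "volume": max(0, min(10, volume)),   # clamp to the assumed scale
        "mode": mode,                        # e.g. "echo", "beacon", "silent"
    })

# A blind traveler approaching an intersection might issue:
print(make_command("intersection-5th-and-main", power=True, volume=8, mode="beacon"))
```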

CyberEye will eventually interface with artificial chip implants. Research teams are at work in major universities all over the world on vision-enabling chips that are designed to be placed in the eye or in the brain. Pioneering systems have already been implanted in animal models and in humans. Early surgeries are turning totally blind individuals into severely visually impaired individuals. Vision-enabling chips are crude in these pioneering stages. They are not yet useful for pattern recognition or for sophisticated visual perception. However, these early chips will reawaken the powerful navigation functions of the vision system. The first eye chips will be wayfinding tools. CyberEye will provide "artificial" perceptual systems with information that is coded in the most optimal way for natural processing by the human brain; in other words, there will emerge a need for a vision prosthesis system (CyberEye) as a "front end" to implanted artificial vision systems.

CyberEye will eventually be part of medical solutions and will be used for rehabilitation after molecular surgery/repair. Biotechnology, tissue engineering, and genetic surgeries will eventually enable doctors to repair the body at the molecular level. It could be twenty years or more before these advances begin to reduce the population of blind individuals. What is certain, however, is that if the brain does not receive visual input (as in the case of individuals blind for many years), cells that are needed for normal perception slowly die in various regions of the cortex. Vision "restored" in older individuals will therefore contain unusual perceptual anomalies that require rehabilitation training. In other words, early developments in biotechnology will not cure blindness. Like brain implantation of computer chips, pioneering ocular tissue repair will result in individuals having various kinds of novel vision impairments. This will be the case until we learn to repair entire regions of the brain (not just the primary vision tracts and centers). Wayfinding will be the primary vision capability to benefit from early developments in biotechnology, and the novel perceptual systems created by biotechnology will need modified input, i.e., CyberEye.

Acoustically Enhanced Environments

The HawkEye project has an environmental component that is as challenging and ambitious as CyberEye. CyberVision has two interrelated components: the computer mediated input of CyberEye plus computer enhanced output from embedded environmental processing chips. The development of these embedded units will occur simultaneously with the creation of CyberEye systems. These chips will be embedded as needed in the environment and will have the ability to emit various wave forms (ultrasound, radio, etc.) that "audify" the environment; i.e., fill the atmosphere with wave patterns that can be translated by CyberEye into processing codes that are optimally "understood" by the human brain. This design will eventually allow 3-D "seeing," enhanced pattern recognition, and other visual functions common to the central (foveal) vision system.
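
One way to picture the chip-to-CyberEye handshake: each embedded chip broadcasts a small self-describing message, and CyberEye translates it into parameters an auditory display can render. The broadcast fields and the cue mapping in this sketch are assumptions, since no format has yet been specified:

```python
# An illustrative translation step from an embedded chip's broadcast to an
# audio cue. The broadcast fields are assumptions, not a specification.
from dataclasses import dataclass

@dataclass
class ChipBroadcast:
    chip_id: str        # identity of the embedded chip
    waveform: str       # e.g. "ultrasound", "radio"
    meaning: str        # what the chip marks: "doorway", "crosswalk", ...
    bearing_deg: float  # direction relative to the traveler
    range_m: float      # distance to the chip

def to_audio_cue(b: ChipBroadcast) -> dict:
    """Map a broadcast to parameters an auditory display could render."""
    return {
        "pan": max(-1.0, min(1.0, b.bearing_deg / 90.0)),  # left/right placement
        "loudness": 1.0 / (1.0 + b.range_m),               # nearer sounds louder
        "label": b.meaning,                                # spoken word or earcon
    }

print(to_audio_cue(ChipBroadcast("door-12", "ultrasound", "doorway", 30.0, 4.0)))
```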

To put this another way, the world was designed for the eye. The architecture, the pathways, the signs and symbols, the vehicles, everything was designed around the vision system. For all of history we had little choice but to create the world in this manner. Now, however, in the digital age, we can overlay the visual environment with computer chips that sense, communicate, network, and think. In short, we can make the world "visible" for the blind traveler by adding to and adapting the visual design. We can do this using CyberEye systems that interact with a world that has been "modified" to filter, enhance, and produce signals that make it easier for blind people to "see without sight."

This will not be a crude, tentative perception. Blind individuals will be trained to "see" in a way that the sighted cannot comprehend; a fluid, graceful navigation will result that looks and is "natural." In the same manner that Braille (a near point vision substitution system) has allowed the blind to read at incredible speeds (an act that seems extremely improbable to the sighted), so the use of "environmental audification" combined with CyberEye vision prosthetic systems, will allow far point vision substitution that is as amazing to behold (and improbable seeming) as Braille perception. This is a revolution in the way blind individuals will access the world.

International Research Lab

The comprehensive and revolutionary nature of the HawkEye Project requires the creation of an advanced, high-profile research lab for the study, assembly, testing, and on-going development of modules that attach to a personal area network. The only international research center comparable to this concept was the Blind Mobility Research Unit in the Department of Psychology at the University of Nottingham in England. When the Nottingham program closed several years ago, it left the world with no research lab dedicated to blind navigation. This project will replace the Nottingham Lab with a more sophisticated, state-of-the-art facility. Dr. Steven Mann, whom some call the "Father of Wearable Computing," proposed in 2002 the creation of "LoVE," the Lab of Vision Empairment (combining "empowerment" with "impairment"), to be built at the University of Toronto. The HawkEye research facility could be located at a university using Dr. Mann's model, or at a consumer technology center, or it could be created as a separate corporation or non-profit agency. However the research center is set up, it will be the necessary first step toward the creation of a vision prosthetic system.

Training Strategies and Curricular Materials

The HawkEye Project will be part of a broad initiative in blind rehabilitation that will deliver instructional and therapeutic strategies to ensure full user benefit from the emerging vision technologies. Experience and history suggest that sophisticated technology will not be accepted or used unless there is an infrastructure for training both teachers and consumers. To support the common use of CyberEye, technology support networks will be established, government funding priorities will be changed, university education will be reorganized, professional and consumer organizations will be redefined, and training strategies and curricular materials will be established and disseminated on a broad scale. The move to digital tools will force a painful but necessary evolution of long-standing, industrial-based institutions. Something as comprehensive as the HawkEye Project cannot and will not ignore the disruptive effects of introducing digital technologies into an industrial infrastructure. Through a comprehensive and knowledgeable strategy of education and sound, user-friendly technical designs, the HawkEye Project will embrace the inherent responsibilities that come with changes of this magnitude.

Roles of Agencies and Individuals

The roles of World Access for the Blind will be the following:

01. To articulate and guide the vision/plan behind the development of the vision prosthetic system; staying the course
02. To assemble the teams that will carry out the mission
03. To establish assessment strategies for the technologies as they develop
04. To design training programs, curriculums, guidelines, and instructional strategies that respectfully integrate the technology into a comprehensive approach encompassing all suitable techniques and skills.
05. To set up training schools and camps; provide for follow up, upgrades, etc.
06. To assist in mobilizing resources; fund raising, writing grants, etc.
07. To maintain the attention of the public eye on the project; to oversee documentation and information dissemination
08. To assist with bookkeeping, marketing, and product development; bringing finished products to market.

The roles of the Institute for Innovative Blind Navigation will be the following:

01. To be a catalyst for positive change
02. To build bridges (networks) between agencies, governments, professionals, consumers, and interested supporters
03. To be the diplomatic, gentle center, the glue that holds the coalition/teams together
04. To articulate and keep current a global vision; an overview of technology and its social implications
05. To keep a finger on the moral issues; to look for and stand upon the higher moral ground; to set this as an issue for teams to consider
06. To create (by example and leadership) an open-minded, non-judgmental environment for the teams to work within
07. To help with the overall organization plan
08. To provide expert advice as blind wayfinding professionals
09. To record and communicate; to manage the knowledge that flows from the project
10. To help raise funds

The roles of the University of Toronto Humanistic Intelligence Lab will be the following:

01. To create and maintain the international research laboratory
02. To design and test wearable computing solutions to the problems addressed by the collaborating agencies
03. To design and test environmental audifying chips that work with wearable computer systems
04. To train blind consumers (test pilots, Xybernauts) to use evolving systems
05. To maintain high moral standards in keeping with the concept of humanistic intelligence
06. To create and share an open source code (philosophy) so that others can contribute to development

The project team will have the following shared goals and commitments:

1. Commitment to a non-profit structure, to high ideals, to doing as much good as possible for as many people as possible
2. Commitment to collaboration; Commitments to expanding options to improve freedom of choice, and to teamwork
3. Commitment to thinking positive and acting with confidence in the future outcome of the goals
4. Commitment and faith in individuals to make a difference
5. Commitment to consumer involvement and leadership
6. Commitment to humanistic intelligence and simplicity of design
7. Commitment to enjoying the journey
8. Commitment to quality products

To set the stage for planning, the team will be instructed to do the following:

Self-select. Individuals need to want to contribute to this project
Assume that this breakthrough will not happen unless individuals working in teams make it happen
Assume unlimited funding (somebody will get the funding because what we are doing is important)
Assume unlimited time and resources (A logistics team will be in place to help and we won't quit until we are satisfied with the product)
Assume success
Assume that Moore's Law/Kurzweil's Law will continue and that the impossible/science fiction will be doable sooner rather than later
Assume open source support
Assume a world wide team of blind beta testers

When the team designs and develops technologies/products assume the following:

1. New kinds of networks will be available for linking. We will especially need to connect the PAN (the personal area network of the wearable system) with the internal area network and the object/spatial networks, as well as the familiar WANs and LANs. The internal vision/sensory chips that are evolving will connect to each other; assume the PAN can eventually link with these.
2. Modular design: provide an upgrade path/plan and always have a rear guard team working on the next generation. In functional terms, a "module" is an "option": either one that a blind user chooses to include in the cyborg system, or one that comes with all systems, where the choice is whether to use it or not.
3. The modules will be customized; made available to solve individual problems
4. Universal design; because this project for the blind traveler is a "back door" to products for the sighted
5. Products are being designed for a sophisticated blind traveler (there will be spinoffs and adaptations to address the needs of less sophisticated travelers)
6. Modules will solve a specific functional problem (getting a blind person across a busy street in a major city safely 100% of the time; or face/pet/sign recognition; or pathway detection, etc).

Building a vision prosthesis will require thinking on a grand and bold scale:

1. The task will be global in geographic scope; because of the internet, the whole world of agencies, consumers, professionals, scientists, and supporters will have the opportunity to play a role at various levels of the project.
2. The task (because of the internet) carries on day and night; it is always on and never off
3. The undertaking is understood to be gigantic, but worth the effort, and worth the cost
4. Planning will be detailed, comprehensive, and fluid
5. Resources will be in place to ensure success
6. The hype/marketing around the project will be positive and supportive, but kept to a minimum until a finished product is ready for use
7. A talented team of experts and consumers will be brought together to ensure success
8. The goal of the overall effort will be clear, and tasks leading up to the goal will be spelled out in detail
9. The goal will be reached even if key people are removed (by fate, illness, etc.)
10. The effort will have strong leadership from beginning to end

From my perspective (IIBN), I would say that this approach would be unique if it had the following attributes:

01. It was viewed not just as an acoustic echolocation based model of a vision prosthesis, but rather as a set of modules incorporated in a wearable substrate. These modules could be any system that offered an option to address visual disability (i.e., something that addressed a function, a task-relevant problem). So, if face recognition was a desired option, the substrate could be modified to allow for this module (to be "plugged in").
02. A "blind commons" was created on the internet to allow sophisticated technology users to contribute to the evolution of the system; i.e. a global group interested in, supportive of, and actively involved with the evolving technologies.
03. The vision prosthesis was more sophisticated (by far) than anything so far invented
04. It was head mounted (as the main "system")
05. It was conceived from the beginning as a system that could eventually network with internal area networks, wide and local area networks, spatial networks, acoustic networks, all evolving networks
06. It was never thought of as a finished product, but was always in process, always in a stage of development. There can be products, but no sense of completion
07. Tech support is a given; free and open all the time
08. There was an understanding that consumers would not use this system, just as they never used earlier inventions, if there were no infrastructure for training consumers and professionals. Built into the hardware development would be a plan for and implementation of a new infrastructure (a school, a series of ongoing camps, curriculums available on the net, an internet training system, etc.)

There are a number of ways to ensure that the HawkEye Project will be international in scope:

1. Use the internet to establish a global commons of blind individuals to test the system, a place consumers can go to talk, compare, share, and report. Enlist worldwide consumer agencies in this task.
2. Use the internet to establish a global commons of software engineers/inventors who can use open source code to further the development of modules, the substrate, the audified environment, and CyberEye systems.
3. Define separate tasks that need to be accomplished and "assign" global agencies to their completion
4. Create the project team and associated teams from international agencies
5. Articulate a common theme: the modules of the wearable fit on a common substrate. This allows many "players" to work on their options/modules so that they fit with the common substrate and so that consumers can select (or decline) each option as part of their technology "space suit."
6. Get the backing/blessing of internationally focused blindness organizations, in particular the World Blind Union
7. Create partnerships with global corporations, like Xybernaut, Nokia, H.P. Cool Town, IBM World board, for example

New literacies being generated by technological change:

1. Environmental literacy: learning the language of smart environments

2. Sensory literacy: learning the language of the new digital sensors (digital vision)

3. Literacy of cooperation: learning the language of digital communication; new forms and degrees of communication

In November 2003, a partnership was created between World Access for the Blind (WAFTB) in California and the Institute for Innovative Blind Navigation (IIBN) in Michigan. This partnership centers around a joint interest both organizations have in wearable computing technologies for blind wayfinding. The long range plan put forward by WAFTB is for the creation of a vision prosthetic system. IIBN monitors advances in wayfinding technologies and assists inventors with promising avenues of research. In February, 2004, the Humanistic Intelligence Lab at the University of Toronto joined the collaboration. This internationally respected research lab has the expertise to create the wearable systems needed to accomplish the high ideals set down by WAFTB and IIBN.

Wearable computing has been a promising dream for many years. There has not been, however, a centralized, all-out effort to take wearable systems off the science fiction shelf. The HawkEye Project will move the idea of the cyborg from the strange/alien to the commonplace. The partnership that includes World Access for the Blind, the Humanistic Intelligence Lab, and IIBN is about moving speculation and opportunity into reality. All three agencies agree that it is now time to assemble the teams that will create the future of blind navigation. The HawkEye Project is the future of blind navigation.

There is a quote that says, in effect, "We can put men on the moon and get them back, but we cannot build a technology that can get a blind person across a street safely." As a culture, we made the decision to "go for the moon." The nation was united around a large and uncertain future. The resolve was backed up with planning, money, and successful follow-through. The HawkEye Project is such a large undertaking that it will require a "go for the moon" resolve. The first step, taken through the partnership of WAFTB, the University of Toronto's HI Lab, and IIBN, is to declare that the "journey of a thousand miles" has begun, and that the resolve to see the effort to completion is in place. It is a matter of faith and not belief. This is what we mean when we say that the team members must "step over the line." Once over the line, you are in the land of the faithful. It is a done deal. There is no discussion (can we really do this?). There are no words, no debate, no internal second guessing, no worry, no angst. There is only a non-verbal faith. Power comes from the harmony and energy generated when humans gather together around a common passion; follow fate and ride the waves of serendipity.

World Access for the Blind is building partnerships that will result in a visual prosthetic system that will enable the blind to "see without sight." The HawkEye Project is the foundation upon which generations of evolving wearable technologies will stand. It is the bedrock upon which to build modules that effectively substitute, replace, augment, and mediate for an absent (or ineffective) vision system.

CyberEye is a technology that will allow human beings to explore new avenues for perception. The system will be designed not only to allow blind and visually impaired individuals to have equal access to the visual world, but it will augment the senses and mediate reality so the user perceives in ways never before possible. CyberEye will provide the user with bionic abilities that go well beyond natural human capability. This new form of perception we call Cybervision.

Cybervision

Wearable computers (head mounted systems) do two things: they mediate reality, and they augment/alter reality. This is accomplished by putting a screen (glasses are usually used) in front of the eyes. The user of a wearable computer no longer looks at the world directly. The screen (the inside of the eyeglass) contains a video of the world. It is the same world the eyes would see without the glasses, except there is a brief, unnoticed delay in the arrival of the image to the eyes. A small digital video camera is used to capture the scene in front of the user. What allows the system to change reality is a tiny computer that takes the video image of the world and alters it before sending it to the eyes. The computer can now do all kinds of things to reality. For example, it can select parts of the visual field to be enlarged, or parts of the field (like roadside advertisements) to be eliminated from the scene. The computer can enhance contrast or allow only black and white images, or see in the ultraviolet or infrared ranges. What the computer can do to reality is open to creative experimentation. The resulting computer-altered images give us "CyberVision," a digital view of the world.
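
Here is a minimal sketch of that capture-mediate-display loop, assuming a standard webcam and the OpenCV library; the two transformations are illustrative stand-ins for whatever a CyberEye module would actually apply:

```python
# A minimal sketch of the mediate-then-display loop described above, using
# the OpenCV library (pip install opencv-python). The transformations are
# illustrative stand-ins for a real CyberEye module.
import cv2

def mediate(frame):
    """Alter the captured scene before it reaches the eyes."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    enhanced = cv2.equalizeHist(gray)          # enhance contrast ...
    return cv2.Canny(enhanced, 50, 150)        # ... then keep only strong edges

camera = cv2.VideoCapture(0)                   # the head-mounted video camera
try:
    while True:
        ok, frame = camera.read()              # capture the scene before the user
        if not ok:
            break
        cv2.imshow("CyberVision", mediate(frame))  # the eyeglass display
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
finally:
    camera.release()
    cv2.destroyAllWindows()
```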

We have taken this understanding of CyberVision and modified it to accommodate the needs of blind individuals. We now add auditory mediation and augmentation to the mix. We will build into the wearable system modules that filter sound, sometimes enhancing it and sometimes eliminating noises. More importantly, we will change visual input into auditory wave forms. We are also placing a network of smart auditory chips inside the environment (or placing virtual sound anywhere we want in space). These audified spatial areas are intimately linked to CyberEye in such a way that CyberVision for the blind is enhanced (acuity, depth perception, and pattern recognition sharpened).
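
A toy version of changing visual input into auditory wave forms might scan an image left to right, mapping row position to pitch and pixel brightness to loudness. The frequency range and timing below are arbitrary assumptions for illustration, not the project's chosen encoding:

```python
# A toy sonification: each image column becomes a moment in time, each row a
# sine frequency weighted by pixel brightness. All parameters are arbitrary
# illustrative assumptions (200-4000 Hz, 50 ms per column).
import numpy as np
import wave

def sonify(image, seconds_per_column=0.05, sample_rate=22050, path="scene.wav"):
    rows, cols = image.shape
    freqs = np.linspace(200.0, 4000.0, rows)[::-1]   # top of image = high pitch
    n = int(seconds_per_column * sample_rate)
    t = np.arange(n) / sample_rate
    columns = []
    for c in range(cols):
        weights = image[:, c].astype(float) / 255.0  # brightness = loudness
        tone = (weights[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        columns.append(tone / rows)                  # keep amplitude bounded
    pcm = (np.clip(np.concatenate(columns), -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# Demo: a synthetic 64x64 gradient "scene" sweeps from silence to a full chord.
demo = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)
sonify(demo)
```

Run on a grayscale frame from the camera loop above, this writes a short left-to-right audio sweep of the scene.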

When we discuss "artificial vision systems," we use the term "vision" in a broad and perhaps unique way. When we offer the brain unique forms and combinations of sensory input (different from the innate and usual input of everyday experience), and further when we place computational chips inside the human body and throughout the environment, we are creating new (pioneering) ways of perceiving. We are creating new ways of seeing, new kinds of "visual" (digital) perception. Vision has traditionally been thought of as accomplished exclusively by the tracts and centers in the brain; by the so-called visual cortex and visual association and processing areas. However, this perspective does not ring true with human experience. For example, there is an internalized "seeing," wherein reside dreams, visualization, and a non-physical ability to have "visions," and to be "visionary." We often refer to particularly vivid ideas as "visions," and the term "visionary" is applied to one who imagines and conceives concepts and ideas of substance. In these cases, the term "vision" refers to one's perception of one's inner, mental world. Blind people commonly refer to their perception of the world in "visual" terms, as in "oh, I see," or "I've seen this before." Helen Keller has been referred to as one of the greatest "visionaries" of our time. Thus, when we operate according to this broader definition of vision, we acknowledge contributions of the whole self to seeing, as well as the ability to see by means other than the eye or by means that go beyond the eye.

This latter point is worth particular attention, as it is the main crux of our concern and endeavor. It is documented that sailors as far back as the 1700s would use gunfire and hammering to determine the proximity of land in fog or in darkness where their eyes could not see. Such a sound could carry for miles, alerting the sailors to where the land was by its echoes. In the 1940s we saw the active development of technology to allow humans to see where the eye could not. The use of radio waves (radar) allowed the perception and tracking of oncoming planes too far away to be seen with the eye. Sonar was applied to allow man to travel the depths of the sea where light does not penetrate. Later, infrared systems were applied to allow people to see under cover of total darkness.

As we enter the new millennium, we find the development of sensors that can see through walls and around corners, that can detect a flock of birds a mile away, and that can measure the deep workings of the body and brain. Sonar, once used to guide great submarines through the deep, is now used by the blind to find their way through the complex world. One blind boy could hit a softball pitched to him from 14 feet. Others have been seen on world TV bicycling through city streets and mountain trails, all without the benefit of eyes. Cameras, once used only to take photos, now offer 30 times the range and sharpness the eye can provide. Indeed, sensors are becoming so powerful and far-reaching that the unassisted human brain can no longer be expected to gather and process the full wealth and breadth of information now available - realms that have hitherto not been explored.

Fortunately, with the development of ultrasensors came the development of computers to assist the human brain in gathering and processing this wealth of information. Computers now help human pilots guide light aircraft at a thousand miles an hour through the treetops, and jumbo passenger jets practically land themselves. Radio telescopes throughout the world are coordinated by computers to provide composite images that the human brain can understand. Cameras hooked to computers are now allowing the blind to read.

With the increased power of sensing technology and computers, concurrent with their decreasing size, cost, and power requirements, we have finally entered an age where the unknown can become known, and the unseen can be made visible. Technology can now take the place of the human eye where it fails to see.

Computer mediated vision can improve the clarity of the environment when compromised by darkness, physical barriers such as walls or clutter, or obscurity such as thick smoke, dust, or fog. Computer vision systems can mediate the perception of the environment by making walls transparent or making walls visible to the blind.

Creating Global Teamwork

Think like the astronomers. When they were mapping the entire universe of stars, astronomers realized that great challenges required extraordinary cooperation. They wrote software that linked all the great telescopes of the planet. The project went on day and night as different teams came on line as the earth revolved. They communicated using the internet. They shared the work and they shared the data. They mapped the universe of stars for the benefit of mankind, not for personal fortune or fame. Because of this level of sharing and organizing, they accomplished a task larger than the sum of the agencies who were working together. We must think and plan for the HawkEye Project at this level of sophistication and cooperation.

Think like the scientists who mapped the human genome. When they conceived of the project, they did not have the computer processing power (the technology) to accomplish their goals. A few years went by, Moore's Law marched on, and the processing power arrived. The project was completed ahead of its projected deadline. Many of the ideas within the HawkEye Project sound like science fiction. They are not. Processing power will arrive on time, as if fate were dealing answers as fast as the scientists reached the questions.

The first step in this long journey is the creation of the international research lab that will design and develop the vision prosthesis and the supporting acoustic environment. The location of the lab, the funding base, staffing, and the organizational and philosophical structure, will be decided by the project design team. This team will review the overall project (this document) and will decide how and when the project will move forward.

This document will be ready for review by prospective project team members by March, 2004. The initial team meeting will occur in May, 2004. A follow up meeting will take place in October, 2004.

The second focus for the project team, after plans for the research lab are solidified, is to outline the strategy for the creation of CyberEye. This process begins with defining the nature and design of the wearable computing substrate, the network that will house the plug-in modules. This study will include a review and final suggestions for standards for the substrate. When the substrate is established, the focus will turn to the creation of navigation modules for the vision prosthesis. Other modules can be discussed and proposed, but in the beginning primary energy will go toward the creation of wayfinding modules. Two wayfinding prototype versions of CyberEye will be outlined by the project team. These will be called the Sonocular and the SonicEye. The "final" version of CyberEye, when the system is interfaced with chip implants, will be called HawkEye. Concurrent with this development plan will be an outline for creating the enhanced smart environment and the plans for training and curriculum development.

APPENDIX C: Functional Criteria

Functional navigation means "how to find your way in the standard spaces of the world." Define the spaces and organize them by the frequency they are encountered. Create standard test spaces so that we can control assessment. Define standardized training spaces. In each case consider layouts (mental images), and routes.

Indoor spaces:

01. Four-walled room with one door
02. Hallway
03. Specialty rooms (classrooms, gym, pool area, kitchens, bathrooms, etc.)

Outdoor spaces:

01. Quiet residential
02. Small business areas, bus stops and shelters
03. Small urban areas
04. Intersections
05. Gas stations
06. Parking lots
07. School campuses
08. Apartment Complexes and condominiums
09. Large urban areas
10. Parks
11. Rural pathways
12. Through the woods
13. Mountain trails; hiking trails
14. Panoramas
15. The stars

Commercial spaces:

01. Restaurants and eateries
02. Shopping malls
03. Stores (department, grocery, hardware, clothing, toy, pet)
04. Outdoor plazas
05. Outdoor bazaars (swapmeets, flea markets)
06. Office buildings
07. Transit Stations (bus depots, train stations, airports)
08. Hotels

Culinary spaces:

01. Kitchen (food preparation)
02. Buffets
03. Food displays (dessert trays, bakeries)

Recreational spaces:

01. Playgrounds
02. Stadiums
03. Amusement parks
04. Theatres
05. Gyms
06. Country Clubs

Social spaces:

01. Parties and informal gatherings
02. Lines and formations
03. Seating arrangements
04. Conventions and conferences

Mechanical/Technical spaces:

01. Construction sites
02. Furniture and gadget assembly
03. Automotive care
04. Puzzles
05. Home improvement

Symbolic spaces:

01. The printed word
02. Graphics (pictures, graphs, photos, art)
03. Dynamic text (marquees, subtitles, digital text)
04. Dynamic graphics (computer graphics, video)
05. Commercial literacy (vendor signage, menus, courtesy signage, package labeling)
06. Location literacy (public signage, directional markers, warning markers)

Water spaces:

01. Rivers
02. Lakes
03. Oceans
04. Snow and ice
05. Waterfalls

Dynamic spaces:

01. Observing from vehicles (car, bus, train)
02. Sports
03. Running
04. Operating short range vehicles (skates, scooters, bicycle)
05. Operating long range vehicles (car, truck, bus, train)
06. Operating water craft (sailboat or houseboat, speedboat, kayak/raft, surfboard, jet skis)
07. Operating aircraft (sailplane, glider, balloon)
08. Operating snow craft (snowmobile, skis)

Vehicle spaces:

01. Cars
02. Buses
03. Trains/subways
04. Planes

Since we are "copying" vision, we will also look for:

01. Awareness of the presence of spatial elements - registration (to be avoided or approached)
02. Localization of elements' position - (Where is it?)
03. Discrimination of elements (perception of edges, boundaries, figure-ground); attributes like color, density, texture, form/shape
04. Pattern recognition; "reading" the gestalt (recognizing familiar face, person, vehicle, thing)
05. Object identification
06. Interaction with elements (catching a ball, reaching for a glass, entering a doorway)
07. Monitoring dynamic relationships among spatial elements
08. Interpretation of the social environment and social cues (eye contact, gestures, facial expressions, body language)
09. Landscape interpretation
10. High speed navigation (running, short range vehicles, long range vehicles)

APPENDIX D: Information Networks Under Consideration

The meshwork of meshworks (the things that potentially network) includes the following:

01. WAN (Wide area network, like the internet)
02. LAN (Local area network, like an agency or a government)
03. PAN (Personal area network, wearable computers)
04. IAN (Internal area network, the chips inside of living things)
05. SAN (Spatial area network; smart spaces that communicate, i.e., intersections, roadways, the inside of cars); a sub-network here would be the AAN (Acoustic area network; the linking of audified nodes)
06. OAN (Object area network; things that communicate with other things, car parts that communicate, for example)
07. VAN (Virtual area network; GPS locations that hang in space anywhere and communicate, i.e., information in places)
08. MAN (Molecular area network; the meeting and networking of biology and machines)
09. NAN (Nano area network; chemistry meets machine)
10. QAN (Quantum area network; machines and light at the level of God)

(I just made up the last three, but hey, poetic license. Somebody was going to make them up anyway and probably already has.)

Ethics

The concept of humanistic intelligence, putting the human brain in control, not surrendering the spiritual core of the human being, is critically important. It ties into the entire moral quagmire that lies in the shadows as we create digital tools. In the book "The Inmates Are Running the Asylum," the author makes the case that when you add information processing to the world, you turn that world into a computer. The author asks, "What do you get when you combine an airplane with a computer?" The answer: a computer that flies. It is no longer an airplane under the control of a pilot's brain and skilled hands. It is a flying computer that communicates with itself and makes decisions without discussion. The argument is not whether this is good or bad; the point is simply that computers have redefined the stuff of the world and have taken over processing decisions from the human brain. Consider this question: What do we get when we cross a computer with a human being? For example, what becomes of the human being as we develop wearable computers (PANs), and especially as we embed chips inside the human body (IANs)? Will the answer be "a computer"? It looks like the answer comes down to values and software design.

Currently, we are making software systems that do not interface well with the human being. We make things so complicated and feature-heavy that nobody can comprehend them totally (something to consider when we talk about increasing options for the consumer), or we set up situations where human error can be life threatening. As we create the vision prosthetic system we will keep the focus on simplicity of use, retaining natural control using human brain power, preserving the beauty and character of the human being, and we will give particular care to designs that might endanger (by the digital nature of the technology) the blind consumer (crossing a street, for example). Put another way, we will use the technology to bring a greater glory to the human brain, rather than decrease or circumvent its awesome power.

Opportunities, Options, and the Underlying Philosophical Foundation for the HawkEye Project

The HawkEye project is about creating digital tools; presenting consumers with computer-age options for addressing problems inherent in navigating blind. The pace of technological change has created unprecedented opportunities for the blindness community. The HawkEye project is about increasing options, and about equalizing access to information (far point as well as near point information). Speaking about his own non-profit agency World Access for the Blind (WAFTB), CEO Dan Kish said: "Ultimately, our purpose is to provide tools to allow the blind to access the world at large in a manner of their own choosing. We were very careful in the wording of our mission statement. We will not 'help the blind to succeed' or 'empower the blind' or 'create opportunities for success,' because we feel the blind should be autonomous enough to make their own choices about success. Options are the key. If we give a blind man a chance to see, it's still his choice whether to open his eyes, and how to use them. That choice is sacrosanct to the sighted, and can be no less so for the blind. The best we can do is provide tools and openings to equalize opportunities; the rest is up to the blind themselves. And, indeed, it will ultimately be the blind themselves who will mobilize the effort to that point. The foundation of our approach at WAFTB is a 'NO LIMITS' philosophy - the idea that, although we all face limits, we must not suffer limits to be imposed upon us by others. We all have the right to enjoy the freedom and strength of character to seek and discover our own limits and strengths. This is how we learn to embrace the world on common ground.

"Blindness cannot be conceptualized as a deficiency; rather, it is a style of living which benefits from the same things that the sighted benefit from: access to the world. And access, in this sighted world, is mostly about the choices that a sighted society makes about how to present information. In a world of the blind, it would all be different, but not necessarily any less functional. And, I dare say, a sighted person wouldn't necessarily fare very well in such a world. Where would they be in a world where artificial light was never invented? Or in a world filled only with low contrast, 24 point interpoint Braille? Or a world where everyone else had developed their multi-modal abilities to optimize perceptual functioning? A sighted person would be forced by precedent and majority to function as a blind person, with perhaps an added edge that would give rise to idiosyncratic behaviors and perspectives that would likely be considered bizarre. Many sighted people under such circumstances might even hide their vision out of shame or confusion, and a society might even be imagined where the eyes were removed at birth out of misconception. Does any of this sound familiar? How many blind kids had their KASPAs taken away because the technology was 'too much trouble,' or were admonished not to engage in behaviors that generated echo signals because such noises were considered 'bizarre'? It is these points of philosophy and perspective that are the foundation of the HawkEye Project."

With the increased power of sensing technology and computers, concurrent with their decreasing size, cost, and power requirements (see the discussion of Cybervision above), the unknown can become known and the unseen can be made visible. Cybervision is our name for the many bionic possibilities that arise when we use CyberEye: from seeing through walls, fog, and darkness, to looking out the back of our heads; from seeing objects with radar images to identifying openings with echolocation sonar. Cybervision will also penetrate the inner world of dreams, visualization, and imagination, and create new kinds of human "being."

Here is how WAFTB CEO Dan Kish describes the patterns that will be sent to the human brain for processing:

"I think it is prudent to consider how the brain is organized to perceive and process information. Whatever the input, it is critical to understand how the brain makes sense of the mass of information impinging on it, because there are certain expectations that the nervous system has about the nature of information; a kind of expected coding. For example, the brain uses scanning movements of the head and the eyes to gather information. The nervous system is also expecting some kind of central focal point of attention to align the body in space. The nervous system is expecting to discriminate spatial information based on the input; to use scanning movement in the areas near the focal point to look for redundant patterns. Since the brain uses visual representation so heavily, it follows that a good bet would be to present non-visual information in such a way that the brain more or less treats it as visual. Put another way, the visual cortex is expecting information, through whatever modality, to be presented in a certain way. If the information arrives in a format that the brain can decipher optimally, the information can be processed in the most efficient manner."

A comprehensive sonar approach to enhance purposeful self-movement for the blind (Dan's description)

Asking street signs for directions! Conversing with objects or using them as portals; the transformation of all objects into robots (here is the blend, the combination and convergence of technologies).

AOL messenger will migrate to internal chips and we will have hive brain (brain to brain, always on communication). All of this technology is converging on the hive brain; the connection of the Internal Area Network with all the other nets.

Could virtual space "cubes" hold sound, so that the audified environment would just hang in space anywhere we wanted it? Could the object net house audified signaling systems so that any object in a space could not only identify itself, but also emit audified signals?

Steve Mann's Seeing Eye People (SEP) idea is really a smart mobs phenomenon. The question for us is "What kind of blind rehabilitation community could evolve if they were linked by SEP?" Could SEP be part of an "On-Star" type of service, or part of some other form of location based service system? Could people who are stuck in cloistered situations (shut-ins) find useful "employment" by monitoring/describing what the blind see? What we are creating are virtual sighted guides, volunteer or hired. Probably the most significant contribution of SEP, however, is the monitoring capability it allows. For example, a dog guide instructor in New Jersey could use SEP to monitor the activity of a new dog guide user in South Dakota. This monitoring would be in real time and at any time. It would save the expense of travel and it would provide a low cost, anytime educational system (support network).

Remember the phrase "Visually Dependent" (Dennis Shulman quoted in The New Yorker article by Oliver Sacks); referring to the dominance of vision and how it overpowers other senses and deprives people of other ways to experience and feel.

If the blind create it, the sighted will follow

Functional approach: street crossings; vehicle identification, location, mobility; face recognition; sign recognition, identification and probing through internet; animal recognition; landmark/object recognition; complex space analysis (malls, airports, stadiums, parks, parking lots, hotels, inside cars), pathway analysis (sidewalk, hallway, rural paths); opening analysis

Turning the environment on and off and the "acuity" and "volume" up and down as needed; a universal "remote"

There has to be a global balance between centralized control and decentralized innovation. On a personal level, there has to be a balance between relaxing and letting things unfold, and the responsibility for organizing and collaboration.

The network of networks will be under constant assault from viruses, spybots, popups, junk mail, subliminal advertising, propaganda, and interference. Any one of these, or a combination, might be life threatening to the person with a wearable computer.

The "cell phone" already has all the ingredients for wearable computing, and they are shrinking fast in size and price. Phones are location aware and location sensitive. They are small computers and calculators. They are digital cameras. They connect to the internet and get mail and web pages. They are remotes that can activate embedded chips. They are wireless communication devices.

Using echolocation to help the blind navigate is common sense, and many inventors over the years have developed sonar-based tools. Leslie Kay of New Zealand was a pioneer in echolocation technologies, giving the world a series of ever more sophisticated tools over a fifty-year career. He is still a leader in the field and works with Dan Kish on the most advanced sonar technology, called Kaspa (Kay's Advanced Spatial Perception Aid). In 2003, there were many tools in development that used echolocation as the basis for navigation. Sound Foresight, a company based in Great Britain, has the Spatial Imager and the Bat Cane. Research and development for these tools originates from Leeds University in England. As we develop the vision prosthetic system, we will need to differentiate our approach from current and past echolocation technologies.
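
Every sonar tool above rests on the same time-of-flight arithmetic: an emitted pulse travels out, reflects, and returns, so distance is half the round trip. A minimal sketch (the 343 m/s figure assumes dry air at roughly 20 °C; the function name is ours):

    SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

    def sonar_distance_m(echo_delay_s: float) -> float:
        """Distance to a reflecting object from the round-trip echo delay.
        The pulse travels out and back, hence the division by two."""
        return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

    # An echo returning 17.5 ms after the pulse left:
    print(f"{sonar_distance_m(0.0175):.2f} m")  # -> 3.00 m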

It's the spaces, stupid! Functional navigation means "how to find your way in the standard spaces of the world." Define the spaces and organize them by the frequency with which they are encountered. Create standard test spaces so that we can control assessment. Define standardized training spaces. In each case, consider layouts (mental images) and routes. (A data-structure sketch follows the lists below.)

Indoor spaces:

01. Four-walled room with one door
02. Hallway
03. Specialty rooms (classrooms, gym, pool area, kitchens, bathrooms, etc.)

Outdoor spaces:

01. Quiet residential spaces
02. Small business areas, bus stops and shelters
03. Small urban areas
04. Intersections
05. Parks
06. Gas stations
07. Parking lots
08. Rural pathways
09. Through the woods
10. Mountain trails; hiking trails
11. Large urban areas

Vehicle spaces:

01. Cars
02. Buses
03. Trains/subways
04. Planes
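
To make "organize them by frequency" concrete, here is a sketch of how the taxonomy above might be encoded for assessment planning. The class and field names (StandardSpace, frequency, layouts, routes) are illustrative assumptions, not part of any existing system:

    from dataclasses import dataclass, field

    @dataclass
    class StandardSpace:
        """One entry in the space taxonomy; field names are assumptions."""
        name: str
        category: str          # "indoor", "outdoor", or "vehicle"
        frequency: int         # how often encountered, 1 (rare) to 5 (daily)
        layouts: list = field(default_factory=list)   # mental-image templates
        routes: list = field(default_factory=list)    # standard routes to test

    SPACES = [
        StandardSpace("four-walled room with one door", "indoor", 5),
        StandardSpace("hallway", "indoor", 5),
        StandardSpace("intersection", "outdoor", 4),
        StandardSpace("parking lot", "outdoor", 3),
        StandardSpace("bus", "vehicle", 3),
        StandardSpace("mountain trail", "outdoor", 1),
    ]

    # Train and assess the most frequently encountered spaces first.
    for space in sorted(SPACES, key=lambda s: s.frequency, reverse=True):
        print(space.frequency, space.category, space.name)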

It's not just the spaces, stupid! Since we are "copying" vision (perception), we will also look for the following (a pipeline sketch follows this list):

01. Awareness of the presence of objects (to be avoided or approached)
02. Localization of the object's position
03. Discrimination of the object (face); attributes like color, distance, movement
04. Pattern recognition; "reading" the gestalt
05. Object identification
06. Object interaction
07. Landscape interpretation
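
Read as a processing chain, the list above suggests an ordered pipeline from coarse awareness to full landscape interpretation. A sketch under that assumption; the stage names come from the list, while everything else (the confidence inputs, the threshold) is invented for illustration:

    from enum import IntEnum, auto

    class PerceptStage(IntEnum):
        """Stages from the list above, ordered from coarse to fine."""
        AWARENESS = auto()        # something is there
        LOCALIZATION = auto()     # where it is
        DISCRIMINATION = auto()   # attributes: color, distance, movement
        PATTERN = auto()          # reading the gestalt
        IDENTIFICATION = auto()   # what it is
        INTERACTION = auto()      # acting on it
        LANDSCAPE = auto()        # interpreting the whole scene

    def highest_stage_reached(confidences, threshold=0.6):
        """Walk the stages in order; stop at the first one whose sensor
        confidence falls below the threshold. Inputs are assumptions."""
        reached = None
        for stage in PerceptStage:
            if confidences.get(stage, 0.0) < threshold:
                break
            reached = stage
        return reached

    conf = {PerceptStage.AWARENESS: 0.9, PerceptStage.LOCALIZATION: 0.8,
            PerceptStage.DISCRIMINATION: 0.4}
    print(highest_stage_reached(conf))  # -> PerceptStage.LOCALIZATION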

We can "correct the shortcomings" of vision as we develop this system. The field of view can be altered easily. The biggest problem (I think) with human processing is attentional blindness; blind spots; blind moments. Humanistic intelligence would seek to create a vision prosthetic system that did not increase attentional (situational) blindness (for example, attending to email coming in through the ears rather than listening for traffic while crossing a street). HI would "manage attention", so that the degree of attentional blindness would be less than for a sighted individual.

Audification

How does the virtual area net relate to the spatial area net? If you can leave messages at GPS addresses, hovering in space or attached to the object net, what are the implications for audification? This is a big idea: what if you could program virtual audified cubes along a GPS-mapped route before leaving? As you walked along, you would pass by these "floating in space" cubes and have feedback that the route is being followed correctly. What if the whole planet had virtual cube "stations" (space stations)? This relates to Mike May's electronic "breadcrumb" concept.
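
A sketch of the "breadcrumb" idea under the same flat-earth assumptions as the cube sketch above: pre-program a list of GPS crumbs, then confirm each one as position fixes arrive in order. The function names and the five-meter tolerance are assumptions:

    import math

    M_PER_DEG = 111_320.0  # rough meters per degree of latitude

    def flat_distance_m(lat1, lon1, lat2, lon2):
        """Flat-earth distance in meters, fine at breadcrumb scale."""
        dy = (lat2 - lat1) * M_PER_DEG
        dx = (lon2 - lon1) * M_PER_DEG * math.cos(math.radians(lat1))
        return math.hypot(dx, dy)

    def follow_breadcrumbs(crumbs, fixes, tolerance_m=5.0):
        """Yield a confirmation each time the next crumb is reached in order.
        crumbs and fixes are (lat, lon) tuples; names are assumptions."""
        index = 0
        for lat, lon in fixes:
            if index < len(crumbs) and \
               flat_distance_m(lat, lon, *crumbs[index]) <= tolerance_m:
                yield f"crumb {index + 1} of {len(crumbs)} reached"
                index += 1

    crumbs = [(40.00000, -75.00000), (40.00010, -75.00000)]
    walk = [(40.00000, -75.00001), (40.00005, -75.00000), (40.00010, -75.00001)]
    for cue in follow_breadcrumbs(crumbs, walk):
        print(cue)  # the wearable would audify each confirmation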

Dr. Chris Dede suggests that we consider devices as species evolving in an ecosystem. This is what I mean by thinking about these tools as processes rather than products. Audification is part of the Planet Earth communication and information enhancement system (a species).

Personal space is your humanistic property, as is your mental space. They should be legally protected and considered sacred. To invade mental space is a theft of "visual" attention. Attention is our humanistic property.

Digital cities are on the horizon. The city will tell you where you are and how to get places as part of its understood service. The blind will not have to create this, only make adaptations to use it.

Project Oxygen

Organizational notes:

The three questions we will be asking groups to address:

01. Is there a need for greater collaboration? Should we leave the potential and pitfalls to the marketplace, or should we band together for the common good?
02. How would you bring the groups together? What would be your priorities: vision substitution, orientation technology, environmental literacy?
03. What perceptual substitution strategy would you emphasize? (Make your case.)

The white paper should contain:

01. Clearly thought-out definitions of terms, including HawkEye, CyberEye, Cybervision, seamless wayfinding, Xybernaut, audification (landscape molding, landscape intelligence), visually dependent, the blind brain, attentional blindness, artificial intelligence, and humanistic intelligence
02. A debate about why these technologies should be developed
03. A discussion of demographics
04. A discussion of human development (molecular, genetic, embedded computers)
05. A "common substrate/standards" philosophy: creating new standards or using existing ones
06. A discussion of centralized versus decentralized issues; the marketplace in balance with the non-profits
07. Roles and players
08. Issues about training
09. A discussion of global cooperation: is it really necessary? How will the groups organize and communicate? What steps will be undertaken to get from concept to Xybernaut?
10. Assessment strategies
11. The purpose in creating HawkEye

Quotes that we may or may not use

In the electronic age, we wear all mankind as our skin

The computer is the most extraordinary of man's technological clothing

Marshall McLuhan

Web to weave and corn to grind
Things are in the saddle and ride mankind

Ralph Waldo Emerson

Emerson realized long ago the drama that was unfolding for mankind. Technology is in the saddle, and mankind is racing to figure out the ethical implications, racing to discover applications, braking against potential negativities, and praying that the common good prevails. This accelerating pace of technological change has created unprecedented opportunities for the blindness community. The HawkEye project is about turning these opportunities into practical digital tools, presenting consumers with computer-age options for addressing problems inherent in navigating blind. The HawkEye project is about equalizing access to information: "far point" as well as "near point" information.

"There is great work to be done, and we are the men to do it."

"How easily men could make things better than they are; if only they tried together."

Winston Churchill

Why Churchill, and why this quote? Because Churchill was talking to Western civilization about working together and working on great ideas. This quote is a message to all who would collaborate on a project of great magnitude; it is a call to the "big man" inside all human beings, the part of us that desires, and is capable of, doing great work. The "small man" inside is frightened by the challenge and not up to the task. Churchill is also a symbol of the spirit that does not give up, even and especially when times are hard. Churchill was also eloquent. This articulate presence was not just a gift; it was the result of research, preparation, and practice.