Title: A Modular Synthetic Vision and Navigation System for the Totally Blind
Contributed by: Peter B.L. Meijer ("The vOICe")
Categories: Artificial vision, synthetic vision, blind navigation, augmented cognition
What: A modular wearable system with open interfaces should be developed, where interoperable modules from different independent vendors can be easily combined in a convenient, affordable and user-friendly way. The individual hardware and software modules may entail GPS navigation functionality, sonar or laser feedback on nearby obstacles, reading of RFIDs, barcodes and other tags and link codes (e.g., linking to locally relevant Internet pages as needed at bus stations, railway stations and airports to know of arrivals and departures), camera-based automatic recognition of text, faces and objects in the visual environment, as well as synthetic vision functionality for perceiving raw visual input from a head-mounted camera through an auditory or tactile display. Any future invasive approaches such as retinal implants or brain implants should be compatible with this general framework. The blind user can start out with just a few modules, and optionally add other modules at a later stage depending on increasing user expertise, individual needs, interests and capabilities, and expense considerations.
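For illustration only, the following minimal Python sketch shows one way such an open module interface might be expressed in software. The NavModule protocol, the module names, and the Percept message fields are hypothetical assumptions for the sketch, not an existing standard.

```python
# Hypothetical sketch of an open module interface for a modular wearable
# system. Module names and message fields are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Protocol
import time


@dataclass
class Percept:
    """One unit of information a module reports to the shared hub."""
    source: str          # which module produced it, e.g. "gps", "sonar"
    kind: str            # e.g. "position", "obstacle", "text"
    payload: dict        # module-specific data
    timestamp: float = field(default_factory=time.time)


class NavModule(Protocol):
    """Interface every vendor module implements, regardless of hardware."""
    name: str
    def poll(self) -> list[Percept]: ...


class SonarModule:
    name = "sonar"
    def poll(self) -> list[Percept]:
        # A real module would read its sensor here; we return a stub.
        return [Percept(self.name, "obstacle",
                        {"range_m": 1.4, "bearing_deg": -10})]


class GpsModule:
    name = "gps"
    def poll(self) -> list[Percept]:
        return [Percept(self.name, "position", {"lat": 51.44, "lon": 5.48})]


class WearableHub:
    """Combines any set of interoperable modules from different vendors."""
    def __init__(self, modules: list[NavModule]):
        self.modules = modules

    def step(self) -> list[Percept]:
        percepts = []
        for m in self.modules:
            percepts.extend(m.poll())
        return percepts


hub = WearableHub([SonarModule(), GpsModule()])
for p in hub.step():
    print(f"{p.source}: {p.kind} {p.payload}")
```

In such a design a user could start with two modules and later plug in OCR, RFID, or synthetic vision modules without changing the hub.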
This proposal not only involves technical development and investment in the assembly of prototype series, but also the involvement of rehabilitation institutes, blindness organizations and universities for the development and independent evaluation of training programs, monitoring both functional improvements and user satisfaction (quality of life, well-being), and giving feedback to the system and module developers for further refinement of the underlying technologies. The load balancing of (remaining) senses is also a key topic in augmented cognition.
Why: Blind people may in the future greatly benefit from talking GPS navigation systems that are gradually becoming commodity items for sighted and blind people alike, from sonar devices that can help with obstacle avoidance, and from infrastructural changes with for instance RF and infrared based tagging wherever it is economically feasible to implement these. However, GPS navigation maps cannot keep up with dynamic changes in the environment nor deal effectively with fine spatial detail, even when assuming that positioning signals can be received everywhere. Also, living in a largely sighted world implies that much environmental information is presented only visually, and no system for the blind can be complete without full access to arbitrary and purely visual information, information that GPS systems, sonar devices, cane and guide dog could never provide. Therefore, dual-mode camera-based systems must additionally be developed that can present the raw visual information through another sense (synthetic vision through sound or touch, with the blind user interpreting the cross-modal input), as well as automatically extract relevant visual information through OCR, face and object recognition while presenting the results in an alternate form that may include synthesized speech or Braille. The vOICe is the first wearable system that implements this dual-mode concept with synthetic vision through an auditory display and with an open interface for third party recognition engines, but the automatic recognition options in particular require much further progress in the development of various types of reliable recognition engines, while the synthetic vision part needs further study from a neuroscience, psychology and education perspective. An optional sonar module for The vOICe is in the prototype stage, and provisions for future GPS extensions are included in The vOICe Learning Edition software.
Much more information about The vOICe synthetic vision project for the totally blind can be found at Seeing with Sound. The AugCog International website can be found at augmented cognition.
Title: Invention of Electric Eyeglasses as a Seeing Aid and Telecommunications Device
By: Steve Mann and Chris Aimone
Categories: Artificial vision, augmented cognition, digital vision, wearable computing
Population: Consumers with low vision or cortical vision impairment; mobility specialists (to observe what their students are looking at); and sighted consumers. Adaptations can be designed for the blind consumer.
What: EyeTap devices are useful as electric eyeglasses serving as visual seeing aids. EyeTap can be used across the entire spectrum of visual impairments, from mild visual disorders to blindness. We propose initially to address the disabilities associated with the legally blind population, especially those on the "near blind" end of the continuum. Our proposal is for a "proof of concept" pilot study to demonstrate the power of digital vision to assist with blind navigation. We propose to recruit a small group of "near blind" individuals (those relying on a mix of blindness and visual skills for navigation) to receive custom-designed digital vision systems, and to use these systems to demonstrate improved wayfinding skills.
Traditional (analog, optical) eyeglasses modify light by refraction, whereas the next generation of eyeglasses, called "EyeTap" devices, modifies light computationally. In the future, instead of having to get new lenses ground, our eyeglass prescriptions will be downloaded over the Internet. These "digital" eyeglasses will provide users with many enhancements over traditional eye wear.
The EyeTap is a device that "allows the eye itself to function as both a display and a camera". EyeTap devices measure the quantity of light in each of a large number of rays of light that converge into at least one eye of the wearer, and then re-synthesize these same rays of light. Ideally, each ray of incoming light generates a collinear ray of synthetic light. Light impinging on the EyeTap sensor is measured and then used to drive the EyeTap effector, known as an aremac ("camera", reversed). Typically, a processor sits between the sensor and the effector, so that the light is modified, under computer program control, before it reaches the eye. Light from the surrounding scene can be altered, supplemented, or occluded by the EyeTap before it is transmitted to the eye. This is sometimes referred to as "computer mediated reality".
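To illustrate the sensor-processor-effector chain in software terms, the sketch below passes a single frame through a program-controlled mediation step. The contrast-stretching function is only an example of the kind of computational light modification described above; it is not EyeTap's actual processing, which operates on rays of light rather than on an image array.

```python
# Illustrative sketch of the EyeTap sensor -> processor -> effector chain.
# The mediation step (contrast stretching) is an example only.
import numpy as np


def sense(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the EyeTap sensor measuring incoming rays of light."""
    return frame.astype(np.float64)


def mediate(measured: np.ndarray, lo_pct=5, hi_pct=95) -> np.ndarray:
    """Program-controlled modification: stretch contrast between percentiles."""
    lo, hi = np.percentile(measured, [lo_pct, hi_pct])
    stretched = (measured - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched, 0.0, 1.0)


def effect(mediated: np.ndarray) -> np.ndarray:
    """Stand-in for the aremac re-synthesizing collinear rays of light."""
    return (mediated * 255).astype(np.uint8)


# One frame through the chain: light is measured, modified, re-displayed.
scene = np.random.default_rng(0).integers(80, 140, size=(240, 320))
to_eye = effect(mediate(sense(scene)))
print(to_eye.min(), to_eye.max())  # contrast now spans (nearly) 0..255
```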
Why: Wearable computing systems using Eyetap technology will result in a new set of diagnostic procedures for assessing eye impairments and refractive anomalies. From these assessments will come an array of new prescriptive strategies for correcting (mediating and augmenting) vision, including: digital magnification, telescopic enhancement, contrast sharpening, use of OCR algorithms for the blind consumer, and pulling the figure from the ground or removing the ground; EyeTap devices take an active role in helping us to filter our visual world for salient information.
Traditional lens technology has allowed various optical disorders to be corrected and has allowed some of our visual abilities to be extended. EyeTap can correct for the same optical disorders, but it extends our visual abilities well beyond current optical technology. For example, by controlling the quality and quantity of light, EyeTap can employ various computational methods and be adjusted in real time so that the unit becomes sunglasses or a night-vision aid as conditions require.
Since EyeTap devices can function as a computer display, they allow users to merge cyberspace with the real world. This feature is tremendously important, since we are becoming increasingly bound to our computers due to our reliance on the internet as both an information and a social resource. As long as we need to focus our attention on a single physical object (such as a computer terminal, PDA or cellular phone), we will only feel more encumbered by computer and telecommunication technology as time passes.
For further reference see: The History of Eye Tap Development.
Title: An Ultrasonic Based Spatial Imaging Sonar as an Alternative to The vOICe Light Based Spatial Imaging Camera for use by Blind Persons
Presented by: Leslie Kay OBE. and Larissa Chesnokova of Bay Advanced Technologies Ltd.
WHAT: The existing KASPA technology being used today is in the form of the Sonicguide, the Trisensor and the recent 'K' Sonar. KASPASCAN is the latest innovation. It is an array of ultrasonic radiators constructed to produce a beam scanning system that insonifies object space with a narrow radiation beam. This beam sequentially scans an arc of 70 degrees about the midline of the radiation field. As each object is insonified, it reflects some of the ultrasonic energy back to two receivers close to the radiator. This echo carries information about the distance to the object and about the shape of the object. The two receivers convert the echo into audible sound and present this to the two ears. The receiving elements are angled left and right so that they produce a binaural sound with a level difference between the two ears that indicates the direction of the object. This difference is perceived as a lateral shift in the position of the sound heard between the ears, and represents the direction left or right.
A user of this system hears a sequence of sounds from objects as the radiation beam scans past them. The frequency of the sound indicates the distance to the object - a low frequency indicates a close object. In other words, the frequency is proportional to the distance. The sound of the echo is laterally shifted according to the direction of the object, as in a stereo system. The unique character of the tone complex representing the object provides information that can be used to identify the object and discriminate between objects. The narrowness of the beam determines the angular resolution between two close objects.
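For illustration, the distance-to-frequency and direction-to-interaural-level mapping just described can be sketched as follows; the scaling constant, scan-arc limits, and panning law are assumed values for the sketch, not actual KASPA parameters.

```python
# Sketch of the KASPASCAN-style echo display: pitch encodes distance
# (frequency proportional to distance, so low pitch = close), and an
# interaural level difference encodes direction. All constants are
# illustrative assumptions, not the actual KASPA parameters.
import numpy as np

SAMPLE_RATE = 44_100
HZ_PER_METER = 200.0  # assumed scaling: 1 m -> 200 Hz


def echo_tone(distance_m: float, bearing_deg: float, dur_s=0.08) -> np.ndarray:
    """Return a stereo (N, 2) tone for one insonified object."""
    freq = HZ_PER_METER * distance_m
    t = np.arange(int(SAMPLE_RATE * dur_s)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq * t)
    # Constant-power pan across the +/-35 degree scan arc.
    pan = np.clip(bearing_deg / 35.0, -1.0, 1.0)   # -1 = far left
    theta = (pan + 1) * np.pi / 4                  # 0..pi/2
    left, right = np.cos(theta), np.sin(theta)
    return np.column_stack([tone * left, tone * right])


# A near object to the left sounds low-pitched and louder in the left ear.
s = echo_tone(distance_m=1.5, bearing_deg=-20)
print(s.shape, s[:, 0].max(), s[:, 1].max())
```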
Why: It has been established that, through recognition of features of the echo sounds from the KASPASCAN system, a computer can map the object space and predict the object type. Research by Dr. Phillip McKerrow at the University of Wollongong, Australia has shown that KASPASCAN provides sufficient information for a computer loaded with specially developed software to identify at least 100 leafy plants. See http://www.uow.edu.au/%7ephillip/rolab/sonar.html
With this proven information about object space fed binaurally to a person's ears, the sound image is in essence very similar to that of the camera-based vOICe imaging system, which also uses binaural sounds. There is, however, a significant difference in the way the information is perceived via KASPASCAN that enables the recognition of objects, and the cue to distance is unambiguous in the frequency domain.
Because of the similarity in concept, it would seem desirable for the best features of KASPASCAN and vOICe to be combined, so as to develop an optimal form of imaging sensor that enables a blind person to more fully reach his or her potential.
RESEARCH PROPOSAL: Using both the KASPASCAN and vOICe systems, determine what further technology development is needed to optimize the use of either or both systems. Design a series of tasks that will show the extent to which the object space is perceived and comprehended, and carry out psychophysical experiments using trained subjects. We need to know the relative merits of the two systems.
By drawing conclusions about the useful properties of the two systems, guidance can be given to the blindness field about their value to users.
This research should be an international collaboration managed by the NFB Jernigan Institute.
Title: Independent and "free-hands" Navigation in 3D Space for Blind and Visually Impaired Individuals
Presented by: N. Bourbakis, ITRI-Wright State University
Categories: Wearable Systems, Artificial Vision, Independent Navigation for the Blind, Learning with Assistive Technology
What: During the last decades, several research efforts have been directed toward providing better accessibility and navigation to blind individuals in their living environment by developing new devices and IT methodologies. However, there is still a need to overcome navigation barriers encountered by individuals who are blind. Until these barriers are eliminated, blind and visually impaired individuals will continue to be underrepresented. The analytical abilities of people with visual disabilities should not be disregarded, since there is no evidence that this population does not possess the same range of abilities as the rest of the population. On the other hand, the lack of opportunities to develop and use those abilities will certainly limit their employment advancement.
A range of adaptive technologies and devices has evolved since the 1960s to assist people who are blind in dealing with a variety of situations. The primary drawbacks of early devices included inconsistencies in feedback depending on conditions (such as weather), possible disorientation caused by overuse of the sound space, and the fact that the information such devices provided was redundant to what the individual could discern on their own more efficiently using a cane or guide dog. The main drawbacks of existing assistive devices are the cumbersome hardware, the level of technical expertise required to operate them, and the lack of portability. These technological advances do not facilitate unobtrusive navigation and learning from the environment, which limits employment and social opportunities for blind and visually impaired individuals. In summary, they target specific functional deficits, but largely neglect social aspects, and do not provide an integrated, multi-functional, transparent, and extensible solution that addresses the variety of challenges (such as independence) encountered in blind people's everyday lives.
Tyflos is a research project aimed at significantly enhancing blind individuals' independence in their living and working environment; its goal is to develop a complete wearable system for this purpose. It was initiated by Dr. Bourbakis in the early 1990s and includes four main components (the reader, the navigator, the writer, and the constructor). The portable reader is capable of reading books, newspapers, and other types of reading material; it is in the final stage of completion and will be available at the beginning of 2006. The development of the navigator is in progress. The writer and the constructor are at an early stage of development.
This proposal presents the development of a wearable navigation prototype based mainly on a 2D vibration array that detects dynamic changes in 3D space during navigation and conveys these changes in near real time (real time in the future) to visually impaired users in the form of vibration, so that they can develop a 3D sense of the space around them and navigate their working and living environment. This vibration array is part of the Tyflos prototype wearable device (consisting of two tiny cameras, a microphone and an ear-speaker mounted in a pair of dark glasses, plus range sensors, all connected to a portable PC) for blind individuals. The overall idea is to fuse range data with image data captured by the cameras to create a 3D representation of the surrounding space. This 3D representation and its changes are mapped onto a 2D vibration array placed on the chest of the blind user: each cell corresponds to a pixel (or a small predefined region) of the perceived depth image and represents the distance to the surrounding borders of the environment in that direction. Each cell is selected to change status quickly (no vibration for no obstacles nearby, slow vibration for obstacles at close range, fast vibration for obstacles very close to the user) and thereby guide and inform the user about the surrounding 3D space. The range of the vibrating sensation is controllable and predetermined so as to cause no harm to the user. From the degree of vibration and its location on the 2D array, the user gets a general idea of the 3D space in front of him or her: whether there are obstacles, and at what location and in what direction. For example, a continuous vibrating sensation at the right and left edges of the array with no vibration in the middle means there is free, open space ahead for navigation.
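A minimal sketch of the depth-to-vibration mapping described above follows; the array dimensions, distance thresholds, and block-minimum reduction are illustrative assumptions rather than Tyflos design values.

```python
# Sketch of the depth-image to 2D vibration array mapping. Array size,
# distance thresholds, and the block-minimum reduction are assumptions.
import numpy as np

ROWS, COLS = 4, 8           # assumed size of the chest-worn vibration array
NEAR_M, CLOSE_M = 0.8, 2.0  # assumed distance thresholds

OFF, SLOW, FAST = 0, 1, 2   # vibration states per cell


def depth_to_vibration(depth: np.ndarray) -> np.ndarray:
    """Map a (H, W) depth image in meters onto the vibration array.

    Each cell takes the *nearest* distance in its image region, so a small
    close obstacle is never averaged away; thresholds then pick the rate:
    no vibration (free), slow (obstacle at close range), fast (very close).
    """
    h, w = depth.shape
    cells = np.empty((ROWS, COLS))
    for r in range(ROWS):
        for c in range(COLS):
            block = depth[r * h // ROWS:(r + 1) * h // ROWS,
                          c * w // COLS:(c + 1) * w // COLS]
            cells[r, c] = block.min()
    out = np.full((ROWS, COLS), OFF, dtype=int)
    out[cells < CLOSE_M] = SLOW
    out[cells < NEAR_M] = FAST
    return out


# Open corridor ahead, walls near the left and right edges.
scene = np.full((120, 160), 5.0)
scene[:, :20] = 1.5   # wall at close range on the left
scene[:, -20:] = 0.5  # wall very close on the right
print(depth_to_vibration(scene))  # edges vibrate, the middle stays off
```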
In addition, GPS and RFID features will be added to the Tyflos wearable system in order to enhance its capabilities and give the blind user more ways to verify the accuracy of the information received from the sensors.
Why: The proposed idea is unique to a great extent: a 2D vibration array, with RFID and GPS features as complementary components of the Tyflos wearable system. All these components, together with the existing ones (cameras and range sensors attached to a pair of dark glasses), will provide the blind user with "free-hands" capabilities and a unique way of sensing the structure of the 3D space and its dynamic changes in near real time. In other words, the device proposed here will offer the user the ability to learn, through assistive technology and training, the feel and sense of the 3D environment in order to navigate it safely. To offer these 3D sensing capabilities, a variety of unique in-house artificial vision (segmentation, graphs, recognition, etc.), navigation and fusion methodologies have to be integrated. We have also mentioned "training" above, and for that we have a large group of visually impaired volunteers for the Tyflos wearable prototype. Beyond several institutes and organizations for blind individuals, a middle/high school for blind students near our university is also on our agenda for testing the Tyflos prototype. It is also worth mentioning that the wearable Tyflos reader, funded by NSF, will be available at the beginning of 2006.
More about the Tyflos prototype wearable system is available from nikolaos.bourbakis@wright.edu.
Title: User-centered design of assistive technology
Drafted by: Jack Loomis, assisted by Roberta Klatzky, Reginald Golledge, and Jim Marston. To be presented at the Congress by Jim Marston
Echoing the proposals of others, especially Leslie Kay and Bill Crandall, we propose that future R&D of sensory substitution devices for visually impaired and blind people give more attention to sensory, perceptual, and cognitive processing in humans than has been the case in the past. Too many times we have seen devices being proposed by technology-oriented researchers and developers who show little awareness of the needs of the intended population or of the information processing characteristics of the remaining senses and associated perceptual and cognitive processes. User-centered design is the opposite, starting with the needs of the population to be served and the capacities of the remaining sensory systems.
As argued by Loomis (2003), we disagree with the notion of general-purpose sensory substitution (a single device substituting for all functions of the impaired sensory modality) and argue instead for a more analytical approach to sensory substitution, specifically in connection with wayfinding. In place of the general-purpose approach, we argue for a targeted approach in which devices are designed to meet specific functional needs of the visually impaired person. For each function (e.g., obstacle avoidance, sensing of the immediate environment, navigation through larger-scale space), there first needs to be an analysis of the informational requirements for carrying out the function. This is along the lines of ideal observer analysis, in which one considers various forms of information that might be acquired about the environment and then, given this type of information, how much of it is required to perform the function of interest.
The second and equally important facet of achieving effective sensory substitution is designing devices with interfaces matched to the sensory, perceptual, and cognitive capacities of the remaining senses. There are two aspects to this so-called "impedance matching": sensory bandwidth and the specificity of higher-level representation. As research determines the information needed to perform a task, it must be determined whether the sensory bandwidth of the remaining sense (or senses) is adequate to receive this information. Consider using the tactile sense to substitute for vision in driving (which no one would advocate doing). Physiological and psychophysical research reveals that the sensory bandwidth of vision is many times greater than the bandwidth of the tactile sense for any circumscribed region of the skin. Thus, regardless of how environmental information is transformed for display onto the skin, it seems unlikely that the bandwidth of tactile processing is adequate to allow touch to substitute for this particular function. In contrast, other simpler functions, such as localizing a flashing alarm signal, can be feasibly accomplished using tactile sensing as part of a haptic device.
Even if the intact sensory modality has adequate sensory bandwidth to accommodate the environmental information, this is no guarantee that sensory substitution will be successful, because the higher-level processes of vision, hearing, and touch are highly specialized for the information that typically comes through those modalities. A nice example of this is the difficulty of using vision to substitute for hearing in deaf people. Even though vision has greater sensory bandwidth than hearing, there is yet no successful way of using vision to substitute for hearing in the reception of the raw acoustic signal (in contrast to lip reading, which involves the direct perception of articulatory features, and sign language, which involves the production of visual symbols by the speaker). Evidence of this is the enormous challenge in deciphering an utterance represented by a speech spectrogram.
While brain plasticity and the capacity for rote learning, especially among the young, play an important role in sensory substitution, we believe that assistive technology is most likely to be successful by taking into account the characteristics of the sensory, perceptual, and cognitive processes associated with the remaining senses. The most successful examples of assistive technology for visually impaired and blind people bear out this view. We believe that future research on assistive technology and, in particular, that dealing with wayfinding, needs to include basic and applied research on human capability alongside technology development and post hoc evaluation.
Loomis, J. M. (2003). Sensory replacement and sensory substitution: Overview and prospects for the future. In M. C. Roco & W. S. Bainbridge (Eds.), Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science. Boston: Kluwer Academic Publishers. http://www.psych.ucsb.edu/~loomis/loomis_substitution.pdf
Title: Consortium for Alternative Vision
Contributed by: World Access for the Blind, and the Institute for Innovative Blind Navigation
To investigate the establishment of a global consortium for the study, design, and field application/distribution of technology, techniques, and training processes for alternative vision systems. Emphasis here is given to techniques as well as technologies to implement a successful alternative vision process. The consortium may exist as a virtual entity within an established framework of priorities and initiatives. It would involve a worldwide network of top experts in perceptual processing and blindness, sensing technologies, data processing, man-machine interfaces, funding and marketing, training and therapeutic process, and whatever else is deemed necessary. The consortium would formally establish and sustain a user-centered, systematic approach to research, development, and distribution of quality, effective systems for alternative vision, based in a sound understanding of human factors and perception and driven by consumer priorities. The consortium's aim would be to improve perceptual access to all aspects of the world for all people who can benefit from the use of alternative vision systems, with initial focus on people with blindness and partial vision.
The primary higher function of a healthy mammalian brain is to comprehend its condition - to register stimuli, to extract patterns from the stimuli to form a first order of comprehension, and to use this comprehension to govern response to the stimuli - usually to investigate and access more stimuli to reach ever higher levels of comprehension. Generally, we may call this process "adaptation." Very simply, we notice it, we think or "feel" about it, then we respond to it in one way or another. This is done through a healthy perceptual process. Perception refers to the registration of stimuli and the mental conversion of stimuli to information to reach for comprehension. Stimuli from all modalities are processed, and cross referenced with our experience, to form a dynamic composite mental image which strives to be functional and promote better comprehension. The perceptual process ideally refines itself by what we may call self directed discovery, which is the process of using one's own perceptual system to direct interaction with the environment, ideally resulting in more refined stimulus access, comprehension, and discovery capacity, or better adaptation.
For purposes of intuitive understanding, I use the term "vision" more or less synonymously with "perception" in a context of common expression. In common usage, to "see" or to have "vision" is to know, understand, or be aware as in "do you see?" or "we share a common vision" or "look here." Blind people also make use of this term in common expression to refer to the use of other senses, such as touch as in "let me see?" or hearing as in "I saw that movie." The term "alternative vision" refers to the implementation of adaptive strategies or technologies to develop and use one's full perceptual system to perceive and interact with one's environment more completely and accurately.
A conceptual project under long consideration is proposed here for potential investigation by such a consortium, code-named "HawkEye" for easy reference. HawkEye is a fully functioning, high-definition artificial vision system consisting of technological and natural processing elements and techniques. In concept, this system would consist of a common processing platform of universal design with plug-in options for various sensors and displays depending on user need. These modules would be designed to provide sensory input that is optimally compatible with the brain's natural information processing system. Sensor options may include sonic echo signaling, ultrasonic sonar, radar/lidar, infrared, optics/laser, inertial, magnetic, and location information transceivers. Display options may include auditory, visual (for partial vision), tactile, and direct neural stimulus when available and appropriate. HawkEye would be built upon the natural way the brain computes - how the brain is organized to perceive and process information. It would be designed according to a sound knowledge of how the brain makes sense of the mass of information impinging on it. The human brain would be fed patterns that it is hungry for, that it best perceives and acts upon. Blind individuals would use their native intelligence and natural perception to extract meaning from the signals. This fully functional alternative vision system would provide information to functioning processors in the brain by channeling information primarily through undamaged or restored neural-perceptual pathways - thus circumventing or augmenting damaged pathways and allowing the brain to make integral use of optimized spatial information. This would not be a crude, tentative perception. Blind individuals would learn to "see" in a fluid, graceful way that looks and is "natural." In the same manner that Braille (a near-point vision substitution system) has allowed the blind to read at incredible speeds (an act that may seem improbable to the sighted), the use of HawkEye would allow far-point vision substitution that is as functional, pragmatic, satisfying, and esthetically pleasurable as Braille.
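As a concrete illustration of the plug-in idea, the following minimal sketch shows how sensor and display modules might register with a common platform. The registry mechanism, module names, and the shared percept format are hypothetical illustrations of the concept, not a specification.

```python
# Sketch of the HawkEye idea of a common platform with plug-in sensors and
# displays. Registry, plug-in names, and percept format are assumptions.
SENSORS, DISPLAYS = {}, {}


def sensor(name):
    def register(fn):
        SENSORS[name] = fn
        return fn
    return register


def display(name):
    def register(fn):
        DISPLAYS[name] = fn
        return fn
    return register


@sensor("sonar")
def sonar_read():
    # A real plug-in would drive hardware; this stub reports one obstacle.
    return [{"bearing_deg": -15, "range_m": 2.0, "label": "obstacle"}]


@display("auditory")
def auditory_render(percepts):
    for p in percepts:
        side = "left" if p["bearing_deg"] < 0 else "right"
        print(f"tone: {p['label']} {p['range_m']} m to the {side}")


def run(sensor_names, display_name):
    """The common platform: any registered sensors feed any display."""
    percepts = [p for n in sensor_names for p in SENSORS[n]()]
    DISPLAYS[display_name](percepts)


run(["sonar"], "auditory")  # a user picks modules to match need and budget
```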
1. It should be person centered and under user direction. It should rely as little as possible on specialized environmental accommodation, maximizing the self-sufficiency of the user. Historical precedent and current societal agendas, as expressed by where the dollars aren't going, suggest that public interest at large holds little concern for the needs of a tiny population. The shifting of industrial priorities and fiscal designations toward infrastructural accommodation is extremely ponderous at best. Endless tomes could be written on the real bureaucracies and fiscal allocation issues involved. An effective device will stand as much as possible on its own, and be able to access mainstream elements of infrastructure, such as GPS and wireless networks. Of course, individual applications may allow for restricted modification of infrastructure where appropriate to accommodate specific circumstances, such as specialized institutional settings.
2. More than casual, day-to-day functioning should be our focus. We might consider setting up demonstration projects to challenge developers - high-impact activities such as mountain biking, wilderness orientation, light craft flying, boating, competitive projectile sports, graphic design, browsing the public library, window shopping. A crack team of test pilots would be recruited to challenge the limits of design and application.
3. The device should allow as much access as possible to its raw information feedback, with minimal processing. Continuous, self-directed feedback based on body movement is the way the brain naturally learns.
4. It would be helpful to design the device with mainstream application potential, such as use in search and rescue, surveillance, crime prevention, and even medical applications where internal bodily states, for instance, could be monitored and displayed by appropriate modules. Mainstream application will make this happen.
5. The system should allow options for direct as well as indirect perception of the environment. The quality of access to the environment depends upon the degree of correspondence between what exists and what one perceives. If there are lags in the system, or the user is forced to wait for prompts or highly processed feedback, the brain may find itself starving for the fluid, interactive flow between self and environment that typically characterizes how we learn.
6. If we are to convey information about the environment to the brain, the information is most easily processed if it is presented in a way that is "native" to the nervous system. There are particular kinds of information that the nervous system is "expecting" and "primed" to process. If information is presented that is alien to this "wiring" or primary tendency, it will increase the load on the system. This may result in impeded adaptation or compromised functioning. Perhaps the question is: what is this wiring; what information does the nervous system expect? For example, for discriminating information or perceiving relationships among items or events in space, the nervous system seems geared to make use of a focal point, which is generally thought to be supplied by the visual system. This helps to sort details out of the sea of information. The auditory system can apply a focal process, but this is little studied. Dr. Leslie Kay's work with a focal point in ultrasonic sonar showed that this could be supplied to the auditory system, and that it greatly improved discrimination of physical details. The nervous system also seems to expect information input to be more or less continuous for the optimization of spatial updating and comprehension of changing perspective. Thirdly, the nervous system, at least as concerns perception of relationships beyond the body, seems to be tied to and directed by head movement.
7. Information overload. Strong concerns have been raised about providing additional information to the auditory and other systems for image enhancement, because sensory channels can be "overloaded". While this is certainly true, little attention is typically given to the fact that the brain is an information-seeking and information-processing mechanism, clearly capable of receiving and processing untold megaloads of information. The question may not be "how much information can the brain process?" but "how much information can effectively be bused in?" And this leads us back to the concern about presenting information in a way that is native to the nervous system.
Prof. Steve Mann, Director of the Cybernetics Research Center at the University of Toronto and internationally recognized as the "Father of Wearable Computing," stated in an interview: "Modern technology has reached a stage where blind people may throw away their canes and take up tennis as a hobby. All we need are the funds to make it happen." Blindness need not be conceptualized as a deficiency, but rather as a style of life, with specific challenges, which benefits from the same things that benefit sighted people - self-directed access to the world's resources, knowledge, esthetics, and companionship. The purpose of an alternative vision approach is to provide tools that allow blind people to access the world at large in a manner of their choosing.
Society has designed the world for the eye. It didn't need to be this way; it is only that sighted people got here first, and there are a lot more of them. The architecture, the roadways, the signs and symbols, the vehicles, literature, entertainment - everything is designed around the vision system. Now, however, in the digital age, we can make the world "visible" to the blind by adapting the blind user to the visual design.
Many research teams scattered throughout the world are using digital tools to try to address the perceived needs of blind individuals. We now have computers that read, digital Braille printers, refreshable Braille displays, and so on. The technologies to address navigation and access to signs and symbols have until recently been largely beyond our reach. We have turned a corner, however, and now have the computational power and expertise to create effective wayfinding and orientation approaches.
Hundreds of university teams and corporate groups are working intensely on alternative vision systems. However, these worthy efforts are decentralized, and may be competitive, too narrowly focused, disconnected, or ill-informed of the real needs and strengths of blind users, and of the perceptual system. We need an organized consortium to "pull the puzzle pieces together" toward a common end with a consolidated focus - to coordinate global cooperation through an approach of centralized, user driven planning and action, and to establish standards for incorporating modular designs into functional wearable systems centered around a wearcomp platform. This will take intensive, organized, global cooperation.
When the astronomers were mapping the stars, they realized that great challenges required extraordinary cooperation. They wrote software that linked all the great telescopes of the planet. The project went on day and night as different teams came on line as the earth revolved. They communicated using the internet. They shared the work and they shared the data. They mapped the universe of stars for the benefit of mankind, not for personal fortune or fame. Because of this level of sharing and organizing, they accomplished a task larger than the sum of the agencies who were working together. We must think and plan at this level of sophistication and cooperation.
We anticipate that this congress will shape the way technology and services are developed to improve wayfinding for blind people. For this to be effective, it is paramount that we proceed with a sound knowledge of and respect for how the neural system gathers and processes information. There must be a union between development/implementation and an understanding of this process. Hitherto, this has often been lacking, and developments have often gone unused. It would be the intention of the consortium to infuse further developments with collaborative knowledge across development efforts (as developments these days are happening largely in isolation), and with a sound understanding of human perception and blindness. It is imperative to understand that the perceptual system is an integrative process, and that knowledge about the environment and governance of interaction are an integrative process. Propagation of a consolidated approach based in perception and self direction can serve to address the following areas in blind rehab:
1. A view of blindness that emphasizes deficit rather than gain - the idea that we tend to develop and implement approaches with the intention of supplementing deficit rather than from a perspective of facilitating the "gain" or adaptive process. The controversy is termed the deficit model vs. the gain model. This probably has its basis in the initial formation of restorative movement treatments in the medical system, combined with treatment approaches that have traditionally been developed and implemented from a visual perspective. It harkens back to the idea of looking at disability in terms of impairment rather than functional ability.
2. The idea that building or rebuilding the movement process can be done most effectively by teaching a sequence of discrete skills, rather than facilitating/stimulating the natural unfolding of the integrative perceptual process. Movement for the blind is often viewed as a collection of skills that one learns and applies discriminatively from situation to situation, rather than a fluid process of grace and freedom governed by self-directed perceptual awareness.
3. Overuse of human guiding and external direction. There seems to be a lack of awareness that the perceptual system in humans (probably in all mammals) is "geared" to develop by availing itself of opportunities for increasing, self-directed interaction with the meaningful environment. I strongly suspect that the overuse of guiding or the imposition of external direction (and this is chronic) short-circuits that whole process. The external agent comes to serve as a kind of proxy perceiver, leaving the brain starved for meaningful information and isolated from meaningful self-directed interaction. It comes back to "blind kids must DO what they cannot SEE".
4. The concept of "Seeing without Sight". This is our jargon, but it just refers to the idea that the brain can gather and organize nonvisual information to create a dynamic, interactive image of the world if information is presented/received in a way friendly to the nervous system. Perhaps central to an understanding of this concept is the emerging knowledge that what we call the "visual" cortex might better be called the "spatial" cortex, as it really appears to have the primary function of organizing multimodal information for spatial awareness. While it may "prefer" visual information, it seems willing to work with any information it gets that is presented in a way that makes sense. Given this, it stands to reason that the visual cortex is primed to process very large amounts of information, with little regard to the modality. So perhaps the question then becomes, "how do we get the information in there in a way that makes sense to it?"
5. The concept of multimodal processing. The perceptual system tends to be seen as very disconnected or dis-integrated when the visual system is off-line, with little attention given to how to prime the perceptual system to establish a working image of space without the visual system in place. Again, this is often done with primary emphasis on discrete skills development. The attempt is to restore the movement process by a step wise program of skills training rather than perceptual training.
Title: Sensory Integration and Spatial Perception
Contributed by: Susanne Smith Roley M.S., OTR/L, FAOTA
Category: Perception, Sensory Integration, Occupational Therapy
What: Learning and doing require information processing, most of which is acquired through sensation. The internal and external environment is accessed through the senses and the complex interplay among them. In the context of time and space, individuals process multiple sensations simultaneously and translate this information to support engagement in complex activities and social interactions. It is through sensory information and related neurological connections that the individual can make sense of and use that information to survive and grow. Although in mature, sighted individuals the visual system predominates for spatial orientation and organization, a variety of sensations contribute to the perception of space. Temporal and spatial perception is in fact an outgrowth of multisensory processing started early in development. This presentation will explore the relationship of vision to other sensations and how this affects foundation abilities essential to function.
Why: When the visual system is not functioning well, spatial perception must be acquired using other sensations. Through the understanding of the interplay between sensory systems and their functions, professionals can design and implement effective interventions to promote engagement and reduce disability.
Recommended Reading:
1. Ayres, A.J. (2005). Sensory Integration and the Child: Understanding hidden sensory deficits. Los Angeles, CA: Western Psychological Services.
2. Blauert, J. (1994). Spatial hearing: The psychophysics of human sound localization. Cambridge, MA: MIT Press.
3. Calvert, G., Spence, C., & Stein, B. (2004). The handbook of multisensory processes. Cambridge, MA: MIT Press.
4. Eide, F.F. (2003). Sensory integration: Current concepts and practical implications. Sensory Integration Special Interest Section Quarterly, 26(3), 1-3.
5. Lewkowicz, D.J., & Lickliter, R. (Eds.) (1994). The development of intersensory perception: Comparative perspectives. Hillsdale, NJ: Lawrence Erlbaum Associates.
6. Stein, B., & Meredith, M.A. (1993). The merging of the senses. Cambridge, MA: MIT Press.
Title: A Wireless Localization System
Contributed by: Vladimir Kulyukin, Utah State University
Category: Wearable computers; smart spaces
What: A wearable system is proposed for localizing the navigator using wireless signals from IEEE 802.11 (Wi-Fi) wireless routers. The core system consists of a portable single-board computer (6 cm by 10 cm) with a wireless card. A set of landmarks is selected in a target environment. The wireless signature of each landmark consists of the signal strengths from the wireless access points detected in the environment. At run time, observed signal strengths are classified to the landmark with the closest signature. The objective is a ubiquitous localization technology that relies on wireless signals already present in many indoor and outdoor environments; no modification of the target environments is required. Explicit provision is made for sensor fusion with GPS and a digital compass. Results of localization experiments will be presented. Fundamental tradeoffs between calibration and precision will be discussed. Video footage will be available at the Congress.
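One straightforward way to realize the signature matching described above is nearest-neighbor classification in signal-strength space, sketched below; the sample signatures and the convention for undetected access points are illustrative assumptions, not the system's actual calibration data.

```python
# Sketch of landmark classification from Wi-Fi signal strengths using a
# nearest-neighbor rule. Signatures and constants are illustrative.
import math

# Calibration: each landmark's signature maps access-point IDs to RSSI (dBm).
SIGNATURES = {
    "lobby":    {"ap1": -40, "ap2": -70, "ap3": -80},
    "corridor": {"ap1": -60, "ap2": -50, "ap3": -75},
    "office":   {"ap1": -80, "ap2": -55, "ap3": -45},
}
MISSING = -100  # assumed RSSI for an access point that was not detected


def distance(sig_a: dict, sig_b: dict) -> float:
    """Euclidean distance in signal-strength space over the union of APs."""
    aps = set(sig_a) | set(sig_b)
    return math.sqrt(sum((sig_a.get(ap, MISSING) - sig_b.get(ap, MISSING)) ** 2
                         for ap in aps))


def classify(scan: dict) -> str:
    """Assign a live scan to the landmark with the closest signature."""
    return min(SIGNATURES, key=lambda lm: distance(scan, SIGNATURES[lm]))


print(classify({"ap1": -62, "ap2": -52, "ap3": -77}))  # -> "corridor"
```

Denser calibration improves precision at the cost of more setup work, which is the calibration-precision tradeoff the proposal mentions.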
Why: Many localization solutions rely on the Global Positioning System (GPS). However, GPS does not work indoors, because GPS signals cannot penetrate the concrete and steel layers in many modern buildings. Several solutions have been developed for indoor localization. One prominent solution is the Talking Signs technology developed at the Smith-Kettlewell Eye Research Institute. Talking Signs is based on infrared sensors and operates like the infrared remote control device for television channel selection. The Atlanta VA R&D Center has proposed the concept of Talking Braille infrastructure. Talking Braille is a method for providing access to Braille/Raised Letter (BRL) signage at a distance. Talking Braille is an adaptation of electronic infrared badge technology developed by Charmed Technologies, Inc. The Talking Braille infrastructure consists of small digital circuits embedded in standard BRL signs. Small badges worn by users remotely trigger signs in the user's vicinity.
In many indoor and outdoor environments, localization can be achieved by using the wireless signals already available due to the ubiquitous use of Wi-Fi networks. One advantage of the proposed approach over the existing solutions is that it does not require any modification of the environment, e.g., deployment of extra sensors or chips, a cost that could discourage many organizations from making their environments more accessible to the visually impaired. Another advantage is that the proposed technology does not require line of sight.
Title: Information To Go
Presented by: Bill Crandall, Ph.D.; Smith-Kettlewell
Categories: Smart spaces; signage
What: The Semantic Web and Scalable Vector Graphics offer a revolutionary solution to the problem of intelligently getting precisely relevant information at the appropriate time, presented in a usable form.
Why: "In the future" by using my infrared receiver, a wireless connection to the Semantic Web will allow me to walk down the street and identify individuals I pass by way of the information they make available through their infrared transmission to the public from their infrared name tag. Or, I use my infrared receiver to scan the conference participants and potentially "run into" fellow members of my graduate program, identified by alma mater information contained in their infrared name tag s. Of course, you may not mind if some person or group of people know your university affiliation, but you might think this information is definitely not the business of some others. Therefore, your Semantic Web system would look to see "who wants to know" and based upon the level of permission you have set for release of your personal information, that information will or will not be provided to that particular person - all this done automatically.
Also, who and what to trust? We are deluged with information, but how "good" is it? How reliable is the source? How much do you value the opinion of the source? At present, we can only view the Web address of the site providing the information. If it is from Harvard's web site, then "No Problem" (unless we apply a certain discount factor because we have a problem with effete intellectual snobs…). The Semantic Web's signature verification system moves us in the direction of having some certainty as to the reliability of information. Automatic tracking of information is accomplished by providing digital signatures all the way down the chain, ensuring that you can discern the ultimate source of that information.
"John" is walking down the street on a tour with his infrared system (or any other device suitably configured with a web-enabled, wireless radio connection). He sets his receiver preferences - one of them, a request for information through speech. He deselects the visual display. In this case, relevant map-like graphical information would then be mapped into some auditory form for presentation. Alternatively, Linda may set the receiver's display preference consistent with her requirements as a person with low vision - possibly a combination of speech, large font and re-scaled or re-rendered graphic mapping. These transformations would not be carried out solely in the receiver, but through negotiation between the devices, themselves -- through a Semantic Web conversation we don't have to listen to or even care about. As far as we are concerned, it just happens -- Like much in the current web experience, it's all magic!
The Semantic Web is a shift away from something being provided to something being requested. Each time a request is made, a unique assembly of connections will be specially created and served up. In the new, new information systems, all diners don't get fed the same meal. Rather, each immediately chooses what, how much, and when - rather like a cafeteria serving line… right down to how many pats of butter for each biscuit.
Title: Establishing research and development priorities for the teaching, automated production, and use of high-quality, customizable tactile street maps for everyday wayfinding.
Presented by: Joshua A. Miele, Ph.D., Post Doctoral Fellow; The Smith-Kettlewell Eye Research Institute
Categories: Smart spaces; signage, smart maps
What: Tactile street maps can be powerful tools for blind and visually impaired (BVI) travelers, teachers of BVI children, and orientation and mobility (O&M) instructors. We propose to improve the quality and availability of tactile street maps, as well as related research, curricula and training materials, by taking an interdisciplinary approach to the establishment of research priorities for the use of tactile maps as wayfinding tools. Until recently, the difficulty and expense of producing relevant tactile street maps has significantly inhibited their use as tools for effective wayfinding.
The Tactile Maps Automated Production (TMAP) project at Smith-Kettlewell addresses this barrier by automatically generating tactile street maps on demand from existing digital map data.
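As a toy illustration of the automated-production idea (and not TMAP's actual pipeline), the sketch below renders street centerlines from simple coordinate data as thick SVG strokes of the kind that could be embossed; the data format and styling are assumptions.

```python
# Toy sketch of automated tactile-map production: render street centerlines
# from simple coordinate data as thick SVG strokes suitable for embossing.
STREETS = [
    ("Main St", [(10, 50), (190, 50)]),
    ("Oak Ave", [(100, 10), (100, 90)]),
]


def tactile_svg(streets, width=200, height=100, stroke=3) -> str:
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    for name, pts in streets:
        d = " ".join(f"{x},{y}" for x, y in pts)
        # Thick raised lines; a Braille label would replace the title text.
        parts.append(f'<polyline points="{d}" fill="none" '
                     f'stroke="black" stroke-width="{stroke}">'
                     f'<title>{name}</title></polyline>')
    parts.append("</svg>")
    return "\n".join(parts)


print(tactile_svg(STREETS))  # ready to send to an embosser workflow
```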
Why: Sighted people take ready access to maps, signs and other visual aids to navigation for granted. In planning a trip to an unfamiliar town, or finding one's way around upon arrival, using maps of one form or another is routine, and most sighted people could not imagine a world without them. For blind people, on the other hand, access to geographical information is quite limited, and the availability of tactile street maps detailed enough to use for wayfinding within any particular area is an extreme rarity. Nevertheless, evidence indicates that blind and visually impaired travelers from a wide variety of demographic groups can be trained to make effective use of tactile maps in wayfinding.
Although the availability of remote infrared audible signage (RIAS) and accessible global positioning systems (GPS) is improving, the availability of high-quality tactile street maps remains practically nonexistent. These technologies are all complementary: GPS and RIAS can report location, while tactile maps have a clear advantage in facilitating the development of cognitive maps by providing a global perspective on the surrounding geography. However, the availability of tactile street maps has not kept pace with the developments in GPS and RIAS. This dearth of tactile maps has serious implications for wayfinding and, by extension, for the education, employment and quality of life of all blind and visually impaired independent travelers. The lack of accessible cartographic materials results in an unfortunate positive feedback loop: with so few maps available, few travelers learn to use them, and the apparent lack of demand further discourages map production.
We propose that this vicious cycle be broken by unequivocally establishing the importance of tactile street maps as a technology for blind wayfinding.
Title: Incorporating virtual sound and other spatial displays into the user interface of future blind navigation systems
Drafted by: UCSB'S Personal Guidance System group (Jack Loomis, Reginald Golledge, Roberta Klatzky and Jim Marston) and to be presented at the Congress by Jim Marston
What: We propose that future R&D on navigation systems continue to explore the use of virtual sound and other types of "spatial displays" to help those with low vision and blindness to navigate more effectively and to better comprehend and learn environments.
Most navigation systems use Braille or speech displays to provide textual descriptions of routes or environments using cardinal directions, bearings, or clock-face information; distances are also provided. Though these displays have proven very effective, more direct perceptual displays ("spatial displays") of spatial layout are likely to be more effective in some situations, in part because textual information can overload the user's cognitive processing resources and can be difficult for some to comprehend.
Virtual sound displays allow the user to be immersed in the environment, as earphones provide an auditory space where identifying sounds or labels appear to come from the environmental features that are represented in the spatial database. (Air tube earphones can be used that do not block real sounds from the environment.) A person can follow "beacons" as if there were loudspeakers at each waypoint of a route, and buildings and other environmental features identify themselves as one walks through the area. This type of interface provides an intuitive and time-efficient way of sensing and learning the spatial layout of important environmental features. Our group has been doing research on the virtual sound interface for over a decade, and Bruce Walker and his colleagues at Georgia Tech have been pursuing an active research program in recent years.
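For illustration, a virtual sound beacon reduces to computing the waypoint's bearing relative to the user's heading and rendering it spatially. The sketch below uses a flat-earth bearing and simple constant-power level panning as simplifying assumptions; real virtual sound displays use full HRTF spatialization.

```python
# Sketch of a virtual sound beacon: compute the waypoint's bearing relative
# to the user's heading and render it with interaural level panning.
import math


def bearing_deg(user, waypoint) -> float:
    """Compass bearing from user (x, y) to waypoint, 0 = north (+y)."""
    dx, dy = waypoint[0] - user[0], waypoint[1] - user[1]
    return math.degrees(math.atan2(dx, dy)) % 360


def beacon_gains(user_xy, heading_deg, waypoint_xy):
    """Left/right gains so the beacon seems to come from the waypoint."""
    rel = (bearing_deg(user_xy, waypoint_xy) - heading_deg + 180) % 360 - 180
    pan = max(-1.0, min(1.0, rel / 90.0))  # -1 hard left .. +1 hard right
    theta = (pan + 1) * math.pi / 4        # constant-power pan
    return math.cos(theta), math.sin(theta)


# Waypoint ahead-right of a north-facing walker: right ear slightly louder.
print(beacon_gains((0, 0), heading_deg=0, waypoint_xy=(5, 10)))
```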
Major findings from the UCSB Personal Guidance System research on the virtual sound display and other types of "spatial displays" include:
Other types of output devices - including hand-held and body-mounted compasses, with output signals that used the compass to provide directional information through beeps, tones, vibrotactile stimulation, and simple "left-right-straight" commands - all proved successful (a sketch of such a quantization appears after this list).
With five to ten minutes of training, these types of output devices enabled all participants to successfully complete the navigation tasks.
Differential GPS and spatialized information enabled users to find small locations, such as path intersections, mailboxes, and bus stop poles.
A hand-held Haptic Pointer Interface was successfully tested that combined access to GPS and database information with access to intelligent-environment location-based infrared information systems (Talking Signs), making indoor and outdoor navigation possible with one device.
Surveys, post-experiment interviews and test results showed that blind participants desired a range of information output options.
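The "left-right-straight" style of compass-based output mentioned in the findings above can be sketched as a simple quantization of the relative bearing; the 15-degree dead zone is an assumed value, not one taken from the studies.

```python
# Sketch of quantizing the bearing-to-target into "left-right-straight"
# guidance, as with the compass-based output devices described above.
def turn_cue(target_bearing_deg: float, heading_deg: float,
             dead_zone_deg: float = 15.0) -> str:
    """Return "straight", "left", or "right" from compass readings."""
    rel = (target_bearing_deg - heading_deg + 180) % 360 - 180
    if abs(rel) <= dead_zone_deg:
        return "straight"
    return "left" if rel < 0 else "right"


print(turn_cue(350, 10))  # target 20 degrees to the left -> "left"
print(turn_cue(15, 10))   # within the dead zone          -> "straight"
```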
While we believe that there are many situations where textual information is perfectly adequate, the intuitive nature of spatial displays, their effectiveness in route guidance, and the enthusiastic acceptance by visually impaired participants in our research make us believe that they will prove even more useful than our research so far has demonstrated. For example, the capability of using virtual sound to rapidly display many surrounding landmarks with frequent refreshing (compared to the slower refresh rate of textual information) ought to allow the user to maintain orientation within the environment more effectively and to build up better mental representations of it. For all of the above reasons, we propose that spatial displays, and virtual sound in particular, continue to be considered in the interface design of navigation systems, be offered as display options for commercial navigation systems, and be considered more widely for other uses by visually impaired people that require comprehension of space.
Title: Shared GIS Data with Pedestrian Level Detail and Accessibility Features
Presented by: Bruce Walker & John Peifer, Georgia Institute of Technology
Category: Shared infrastructure; GIS systems; accessibility hurdles
What: A distributed and shared set of GIS databases that have pedestrian level of detail, and support the inclusion of accessibility features.
Nearly every wayfinding system needs to (1) estimate where the user is located; (2) determine what is around the user; and then (3) use these pieces of information to help get the person to the destination, while noting obstacles and items of interest along the way. Locating the user with sufficient precision and accuracy is a technical challenge that many groups are working on (including our group at Georgia Tech), but it is outside the scope of this proposal. However, assume for the present purposes that the user's location and heading can be determined with excellent accuracy, both indoors and outdoors, with a minimum of hardware to carry around.
Now, the user's location can be compared to the items in a database, to determine routes, the location of hazards, accessibility issues, and features of interest. Data about the environment are typically stored in a database designed in Geographic Information Systems (GIS) format, but many current GIS databases have insufficient detail for advanced wayfinding tasks. In research at Georgia Tech we are developing a System for Wearable Audio Navigation (SWAN) based upon a GIS database with pedestrian-scale details such as sidewalks, curbs, trees, signs, stairs, as well as buildings with interior floor plans for all floors.
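A pedestrian-scale record might look roughly like the following; the field names are hypothetical and are not the actual SWAN database schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PedestrianFeature:
        """One pedestrian-scale GIS entry; indoor features additionally
        carry a building identifier and a floor number."""
        feature_id: str
        kind: str                          # e.g. "sidewalk", "curb", "stairs"
        lat: float
        lon: float
        building_id: Optional[str] = None  # set only for indoor features
        floor: Optional[int] = None

    curb = PedestrianFeature("f-001", "curb", 33.7756, -84.3963)
    stairs = PedestrianFeature("f-002", "stairs", 33.7757, -84.3961,
                               building_id="bldg-12", floor=2)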
In addition, this GIS database can include annotations supplied by a community of users, providing dynamic, location-based comments about items of interest as well as advice about accessibility problems and solutions. There are numerous examples of how the Internet connects communities of interest to exchange ideas, give advice, make recommendations, and send warnings. For example, Amazon.com's customers regularly read consumer reviews before making purchases and submit their own reviews after trying products. Accessibility information is also being shared among Internet communities of people with disabilities, and the support from others with similar disabilities is often invaluable. Emerging mobile wireless technologies are now making it possible to send and receive information from cell phones and portable devices that people carry throughout the day. Thus, everyone can become a mobile reporter of news and events that are relevant, important, or entertaining to their community of interest: warning other wheelchair users that an elevator is broken, for example, or advising other blind bus riders that the bus stop has been moved fifty feet down the block during sidewalk construction. The Wireless RERC at Georgia Tech has created an experimental Mobile Accessibility Guide that combines location tracking with wireless networking capabilities to send and receive context-aware accessibility information. Methods have been developed so that consumers can contribute reports from mobile devices (including some cell phones) equipped with voice recorders, cameras, text messaging, wireless data networking, and limited location tracking capabilities. A major challenge is to create standard database structures and information filters so that a national or global database of consumer annotations can grow naturally and be accessed efficiently by a variety of new and emerging wayfinding applications.
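A shared annotation record would need at least a location, a category for filtering, and some notion of freshness so that stale reports (a repaired elevator, a reopened sidewalk) can age out; the fields below are assumptions about what such a schema might include, not a proposed standard.

    from dataclasses import dataclass
    import time

    @dataclass
    class Annotation:
        """One consumer-contributed report; fields are illustrative."""
        lat: float
        lon: float
        category: str        # e.g. "hazard", "accessibility", "advice"
        text: str
        author: str
        created: float       # POSIX timestamps
        expires: float

    def active(reports, now=None):
        """Filter out reports that have expired."""
        now = time.time() if now is None else now
        return [r for r in reports if r.expires > now]

    report = Annotation(33.7760, -84.3970, "accessibility",
                        "Bus stop moved fifty feet down the block during "
                        "sidewalk construction", "rider42",
                        created=time.time(),
                        expires=time.time() + 14 * 86400)  # two weeks
    print(len(active([report])))   # 1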
This level of detail in maps and consumer annotations will greatly enhance independent navigation, but creating and maintaining such detailed information is time-consuming and therefore expensive (mostly in terms of building up the database at the beginning). However, many cities and universities already have much of the necessary information, which will help bootstrap the process. We propose that other organizations build up similar data, and that these databases be shared using a common format. That way, any system built to use the common format will work wherever there is database coverage. Such a standard GIS format will enable many new and emerging wayfinding applications to function at the same level of sophistication in communities everywhere. There are also thorny issues surrounding privacy, security, and data maintenance that need to be addressed if this is to be done properly, and there needs to be substantial buy-in from many organizations, as well as considerable funding both for startup and for ongoing maintenance and updates.
Why: Existing GIS data are nonstandard and insufficient for pedestrian systems. Maps are not only the result of exploration; they are a tool for future exploration. We hope to create a service that is thorough and detailed enough to be really useful in helping people explore their world, but we also need to make sure this is done properly and carefully.
Title: Increasing the Availability of Geographic Information Systems (GIS) Information in Wayfinding Devices for Individuals Who are Blind
Presented by: Jim Marston (UC Santa Barbara), Richard Long (Western Michigan University), Janet Barlow (Accessible Design for the Blind), Reg Golledge (UC Santa Barbara), Mike May (Sendero Group), David Guth (Western Michigan University)
What: We propose that the Congress develop a research agenda leading to the development of:
A. A common protocol for storage and retrieval of GIS information for use in wayfinding devices for individuals who are blind
B. A protocol for determining what GIS information is to be stored
C. Optimal strategies for delivering GIS information to individuals who are blind
Why: As GIS applications and databases expand, and as wayfinding technologies become cheaper and more ubiquitous, opportunities are developing to provide detailed, accurate, real-time wayfinding information to pedestrians and transit users. To date, most wayfinding devices have focused on the needs of drivers, and those that are marketed to individuals who are blind contain predetermined or user-entered Points of Interest, but they do not contain information about many of the environmental features located along a route. For example, whether a route has sidewalks, the pedestrian and traffic control features of intersections along the route, and the location of curb ramps and crosswalks typically is not accessible to pedestrians. In addition, information about transit stops and related scheduling often is not readily available.
Many states and local jurisdictions are using GIS to record information about their transportation facilities, including the presence or absence of sidewalks and curb ramps, and the locations of signal poles, street lighting fixtures, drainage systems, and other street features that may require maintenance. A variety of database and graphical methods are used for recording these types of information, which are typically available only to jurisdiction employees.
Title: Study on Navigating, Alarming and Positioning
Presented by: Swedish National Post & Telecom Agency (PTS), the Department of Speech, Music and Hearing at the Royal Institute of Technology (KTH)
What: We propose that this study, originating and ongoing in Sweden, be included in any coalition of wayfinding technology projects, and that our work be considered for inclusion as smart-space collaborations develop. The Swedish National Post & Telecom Agency (PTS) and the Department of Speech, Music and Hearing at the Royal Institute of Technology (KTH) have a study on navigating, alarming, and positioning, as follows:
Background:
Being able to move around independently in an unknown environment, to send an alarm if necessary and, finally, to be found if one has gotten lost is a matter of growing importance. Partial solutions as well as interesting system components exist, but no holistic system has yet been presented, still less implemented.
The aim of the study is to present existing problems as well as possible solutions and actors. The study is to result in two or more trials.
Description:
Being able to navigate in an unknown environment (navigating), to send an alarm in case of danger or of losing one's way (alarming), and ultimately to be searched for and found (positioning) might occasionally be needed by anyone. There are, however, groups of citizens who have more pronounced needs for this. The methods vary, depending on local, cultural and economic factors.
In Sweden and many other countries there is a demand for the development of supporting systems for people with disabilities, including elderly people with age-related cognitive impairments. The common denominator is that the solutions should offer the greatest possible independence as well as adaptation to each individual's needs.
Interest groups:
Most likely, there are about ten associations of people with disabilities interested in the issue. Potential groups are visually impaired, deaf-blind, hard of hearing and deaf people and people with mental or cognitive disabilities as well as those with speech, voice or language disabilities (including dyslexia) and persons with impaired mobility.
Activities within certain governmental bodies, e.g. the Road Agency (VV), the Road and Traffic Institute (VTI), SOS Alarm, the Swedish Rescue Services Agency (SRV) and the Board of Health and Welfare (SoS), will be presented, together with activities under the auspices of the County Councils and local municipalities.
The Handicap Institute (HI), owned by the government and the Federation of County Councils and Local Communities (SKL), will be offered an advisory role.
Private actors, such as enterprises in the field of technical devices and service facilities, systems developers and house owners, will be contacted during the study. Some research institutions have been active in the field.
On-going activities:
Almost all the interest groups mentioned above have shown interest in the issue. Some organizations of people with disabilities have expressed concern about their members' safety and have participated in related projects. SRV and VV are already active in the field, as is SOS Alarm. Many emergency alarms are currently installed in the homes of elderly people, and many companies already sell various kinds of alarm equipment. Some research institutions, e.g. CERTEC, have run and are running projects in the field. Related projects have been carried through within the EU (e.g. MORE) and are ongoing (e.g. ASK-IT).
Problem:
The overall need is to get an overview of the situation, i.e. what kinds of problems there are when it comes to navigation, alarming and positioning, which actors there are and what their roles are, the foreseeable economic consequences of various solutions, legal conditions, ethical considerations, and the temporal aspects of implementation.
Technology:
Possibilities offered by 3G phones with GPS or A-GPS will be studied. Other existing technical solutions will also be evaluated, after approval by PTS, and future possibilities will be presented.
Why: The study shall illuminate the above-mentioned questions, concentrating mainly on outdoor environments. KTH will contact relevant actors to get a picture of the state of the art and will maintain close contact with the interest groups, e.g. in the form of a reference group. The study will be structured in terms of active and passive solutions, i.e.:
a) support for persons who are aware of their situation and can navigate independently and send an alarm signal;
b) support for persons who are - or in a critical situation may be - unable to navigate and position themselves and send an alarm.
Result:
The study shall present user benefit in the form of:
- possibility of active and passive navigation/orientation and positioning, including possibilities to be found.
- possibility to send an alarm and draw attention.
- possibilities to get a street address or other relevant information read aloud or presented in other suitable manners.
- methods to "get on track" again after a temporary deviation, e.g. due to radio shadow.
- possibilities to get information about road work or other unforeseeable obstacles.
- possibilities to get information about facilities like post and bank offices, shops, restaurants etc., and information about degree of accessibility.
- conditions for the engagement of service centers run by society as well as privately, and manned by relatives.
- conditions for the inclusion of databases with information about personal conditions.
The study shall suggest solutions based on existing technologies and existing standards, and also point to solutions which may become available within a few years. The study shall primarily focus upon Swedish conditions, but to some extent also present international activities in the field. The study should not suggest proprietary solutions.
The study shall result in a report containing two or more suggestions for practical trials describing:
- the target group aimed at and the needs to be satisfied by the solution, including the benefits of the solution.
- ethical and legal aspects.
- time schedule.
- size (how many test persons?).
- suitable organization (Who will lead the trial? Which user organizations are to be involved? Which interest groups are to be involved?).
- the technical solution bases of the project.
- costs of the trial.
- a short risk analysis of the suggestions.
The trial shall build on solutions which can be realized with today's available products and services.
The study shall be presented to PTS and during the Information and Demonstration Days of the Handicap Institute in October 2005.
Size and performance:
The study will run during 2005 and be reported in October 2005. It will be carried through by Jan-Ingvar Lindström.
The Royal Institute of Technology
Dept. of Speech, Music and Hearing
Stockholm, Sweden
Jan-Ingvar Lindström
+46 8 560 32017
+46 70 217 4822
jan.i.lindstrom@telia.com
Title: A Robotic Shopping Cart for the Blind
Contributed by: Vladimir Kulyukin, Utah State University
Category: Intelligent "vehicle" technology; smart spaces
What: A robotic shopping cart is proposed that enables the visually impaired to shop independently. The device consists of a mobile robotic base with a shopping basket mounted on it. The device fuses data from four sensors: a laser range finder, a ring of sonars, an RFID reader, and a digital camera. The user selects products by browsing a voice directory of products with a 10-key numeric keypad attached to the cart's handle. The cart issues synthetic speech messages to the user and takes the user to shelf sections with the selected products. The user finds individual products with a wireless portable speech-enabled barcode reader coupled with the on-board computer. The device is not meant for individual ownership. Supermarkets are expected to purchase and maintain such devices in the future, just as they now maintain mobility devices. A proof-of-concept prototype has been deployed and tested in a real supermarket. Video footage will be available at the Congress.
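As a purely illustrative sketch (this is not the actual cart software, and speak() and drive_to() are hypothetical stand-ins for its speech and navigation subsystems), the keypad-driven selection flow might be organized like this:

    # Toy voice directory keyed by the 10-key numeric keypad.
    PRODUCTS = {
        "1": ("peanut butter", "aisle 4, shelf B"),
        "2": ("oatmeal",       "aisle 2, shelf A"),
        "3": ("coffee",        "aisle 5, shelf C"),
    }

    def speak(message):
        print("CART:", message)            # stand-in for synthetic speech

    def drive_to(shelf):
        speak("Driving to %s." % shelf)    # stand-in for the mobile base

    def on_keypress(key):
        """Handle one press on the cart's numeric keypad."""
        if key in PRODUCTS:
            name, shelf = PRODUCTS[key]
            speak("Selected %s." % name)
            drive_to(shelf)
            speak("Use the barcode reader to locate the individual product.")
        else:
            speak("No product under that key.")

    on_keypress("3")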
Why: Grocery shopping is an activity that many visually impaired people cannot do on their own: they either do not go shopping or rely on sighted guides. In 2000, U.S. residents aged 65 and older constituted 12 percent of the population; it is projected that by 2030 people aged 65 and older will make up 22 percent. Vision is a sensory modality that deteriorates with age. Surveys show that a significant number of U.S. residents would like to maintain their independent status in their homes and communities as long as possible. The proposed technology will enable the elderly to shop independently despite sensory deprivation. The technology is replicable, because only commercial off-the-shelf components are used in the assembly. It is suitable for other dynamic and complex indoor environments, such as airports and convention centers, and can be integrated with any self-propelled mobile base, including a power wheelchair. In the future, the technology can be adapted for the wayfinding needs of people with cognitive disabilities.
Title: The Blind Driver Challenge
Contributed by: Jeff Witt, NFB
Category: Intelligent vehicle technology
What
The Blind Driver Challenge is a proposed series of engineering contests, in which competing teams will build street-ready vehicles that are drivable by blind people. The contest will be staged over several years to focus on enabling specific driving scenarios. In this way we will methodically work toward a general-purpose, versatile vehicle that could be driven alongside conventional vehicles driven by sighted people.
The contest will build on research in intelligent semi- and fully-autonomous vehicles. Most of the research in those areas is currently being driven by military applications. Given the strong demand in that area, it will be most efficient to make use of underlying smart vehicle technologies (vehicle positioning, collision avoidance, mapping, etc.) as they become available and increasingly versatile, and focus most of our resources on the area that is not getting sufficient attention - non-visual Driver/Vehicle Interfaces.
Why
The contest format puts us directly in touch with leaders in the smart vehicle world (academia, industry, and government researchers) and extends their current efforts in a symbiotic way. They benefit by having more reasons to justify their research; we benefit from their accelerated progress and increased attention to our areas of particular concern, particularly driver/vehicle interfaces.
In terms of cost-effectiveness, a contest produces far more innovation as well as publicity and education for the money - bang for the buck - than direct funding of individual technology development projects.
By working with a variety of related industry partners, we may also help to develop other uses for non-visual driving interfaces, such as systems for navigating rovers on the moon, impaired combat driving, etc.
Title: From outdoor to indoor navigation: a need for a standard data format to enhance the blind user wayfinding experience
Contributed by: François Boutrouille and Pierre Hamel (HumanWare)
Category: Intelligent vehicle technology
What:
With GPS-based orientation tools, blind users have access to various data provided by commercially available digital maps and databases of points of interest. With promising new positioning technologies that will overcome GPS limitations in urban areas and inside buildings, seamless outdoor and indoor navigation is conceivable. But even with the best localization system, navigation is of little use without good data. What we propose is a four-step approach highlighting the importance of data enrichment to enhance the blind user's wayfinding experience.
Firstly, we would like to create a community of organisations and actors involved, or wanting to be involved, in wayfinding content creation. This community would exchange ideas on user needs, on what content is missing and has to be created, and on how the content can be formalized.
Secondly, community members could categorize the various content types and then propose a data format that would support them. The data format would enable the creation and exchange of information dedicated to blind navigation activities. This approach is similar to the creation of the DAISY standard, which promoted the development of DAISY-compliant audio book players.
The standard structure should take into account the different levels of hierarchy in the information that is used to achieve navigation tasks. In the outdoor world, data can be as simple as a basic waypoint based on a GPS position (allowing, for example, an orientation system to automatically notify users when they arrive home). But data can also be a more complex piece of information, such as a route between two locations including all waypoints required for guidance instructions (e.g. turn left, turn right). In the indoor world, waypoints can take either the form of an absolute lat/long location or a location relative to a building map. Ultimately, to achieve indoor navigation, the standard should support the definition of graphs of interconnected waypoints, similar to the outdoor road network.
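The sketch below illustrates (as a sketch only, not a proposed standard) how such a hierarchy could be represented: a waypoint is either an absolute lat/long fix or a building-relative position, and a graph connects waypoints into walkable segments.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Waypoint:
        """Either an absolute lat/long fix or a building-relative position."""
        wp_id: str
        lat: Optional[float] = None
        lon: Optional[float] = None
        building_id: Optional[str] = None   # set for indoor waypoints...
        x_m: Optional[float] = None         # ...with map-relative coordinates
        y_m: Optional[float] = None
        floor: Optional[int] = None

    @dataclass
    class WaypointGraph:
        """Interconnected waypoints, analogous to the outdoor road network."""
        nodes: dict = field(default_factory=dict)
        edges: dict = field(default_factory=dict)   # wp_id -> list of wp_ids

        def add(self, wp):
            self.nodes[wp.wp_id] = wp
            self.edges.setdefault(wp.wp_id, [])

        def connect(self, a, b):
            # Walkable in both directions, like a corridor segment.
            self.edges[a].append(b)
            self.edges[b].append(a)

    g = WaypointGraph()
    g.add(Waypoint("home", lat=45.50, lon=-73.57))
    g.add(Waypoint("lobby", building_id="office-1", x_m=3.0, y_m=1.5, floor=1))
    g.connect("home", "lobby")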
Thirdly, community members could start defining requirements for software content creation tools. These tools would enable blind organisations, university administrations and others to actually build data complying with the proposed standard format. For example, the tools could allow the creation of databases of points of interest collected on a campus or from data available on the internet (e.g. bus station locations). These tools could also be designed to facilitate the interchange of data between content producers and users. As an example, the upcoming version of the Trekker orientation system will enable users to import external sources of points of interest. More advanced tool features may include ways to graphically describe the topology of building floors using information such as corridor intersections, door locations, etc. Such indoor map generator tools already exist to create data that is used to guide sighted users in museums, fairs and symposiums.
Finally, community members could start thinking of using web technologies to leverage content distribution to users. How content will be indexed and how users will query the content databases are examples of topics that need to be investigated.
Why:
Before talking about a universal positioning system, we agree that indoor positioning technologies must still evolve and reach an adequate level of accuracy and reliability with no or minimal infrastructure.
But what about content, without which wayfinding is a poor experience for blind users? Even with commercially available databases, the availability of structured geographical information is still limited in open areas with no streets and inside public buildings.
We believe that there is a real need to standardize data. Standardization could be achieved using one or several formats inspired by existing geographical formats such as the Geography Markup Language. With such formats in place, blind organisations or O&M instructors could create new data and transfer it directly to their members.
Finally, content creation cannot be the responsibility of a single organisation or a few navigation system manufacturers. Content creation has to be endorsed by a community of members that share the vision of continually improving the blind user wayfinding experience.
Title: Design and Launch a Global Coalition of Cooperating Agencies, and Create a Wayfinding Knowledge Management Portal in Cyberspace to "house" the coalition
Presented by: World Access for the Blind and the Institute for Innovative Blind Navigation
What: We propose that a global coalition of agencies be created to address the growing opportunities and challenges arising from the technology revolution. The coalition would focus on blind navigation in general, as well as blind wayfinding technologies. This multi-agency, multi-professional, and consumer-driven coalition would primarily be centered in cyberspace. We propose that a web portal be created on the internet to house a comprehensive, sophisticated, and creative/dynamic system for managing knowledge about wayfinding.
Why the Coalition: This coalition would result in a common voice for grants. National, federal, and international funding agencies have been asking for this level of cooperation for years. Power at the legislative level would be greatly enhanced by this common front; laws need to be changed or written, and this new organization would become a strong voice. The educational community (K-12 primarily) would also be faced with a united voice calling for better and more comprehensive mobility training for the world's blind youth. Inventors would have a place to go to get attention and serious consideration for their innovations. The coalition would make possible a number of global initiatives that would be too large and complex for smaller organizations; we could dream and act on a larger scale than ever before.
Why the Knowledge Management system in cyberspace: Because the old system of managing knowledge that was established during the industrial age is no longer adequate, and because new opportunities have arisen in the age of the internet. This is also a way to bring together professionals, consumers, non-profit organizations, businesses, governments, and nations. It is a new kind of social glue; a powerful new way to communicate, to learn, and to access knowledge.
The centering of the organization in cyberspace is very significant. We are not proposing to create a huge industrial-age structure with buildings and staff. We are proposing an infrastructure that serves the needs of a level four civilization, a computer-literate modern culture (using terminology developed by Douglas Robertson in his publications "The New Renaissance" and "Phase Change"). A cyberspace coalition would have these advantages:
"How do we improve communication between experts?
"How do we involve consumers in the decision making process; how do consumers communicate with experts?"
Title: Removing Barriers to Technology Application Especially Designed for Blind Persons
Presented by: Leslie Kay, OBE, and Larissa Chesnokova of Bay Advanced Technologies Ltd.
WHAT: According to the published literature (see http://www.batforblind.co.nz/reference.htm), between 1966 and 1986 the KASPA technology developed by Leslie Kay was fully researched worldwide by scientists, engineers, psychologists, and orientation and mobility teachers working with blind clients in the field.
This technology was found to work as designed, enabling blind persons to perceive their environment: they could detect, locate, and recognize multiple objects in their pathway over a 70-degree angle ahead, up to 5 meters away, and map the objects' positions relative to themselves while in motion.
This extensive research over 20 years proved that research alone does not lead to accepted, sustained marketing and use of a technology. The KASPA technology was made for users in an attractive form that gained many design awards for the manufacturer. The facts are that it commenced in the form of the Sonicguide with sales of 100 units in the first year, declined to 25 a year, and was then taken off the market as not a commercially viable product. Even so, users of the technology were enthusiastic about the use they made of it. Many factors were involved, but all could have been addressed except one particularly difficult barrier to progress in developing improved rehabilitation of the blind.
The channels to consumer use are these:
A blind person's need is to be able to choose from the variety of technological tools available. More importantly, the professionals who are ethically charged with the rehabilitation of blind persons should be strong, knowledgeable advisors on technology use, and should not themselves be reluctant to teach its use. New technologies emerge "daily".
How can blind consumers be helped in making this choice?
Twenty years of experience reveal that a barrier exists: the reluctance of purchasers, agencies and teachers to become involved with the assistive technology that would enable significant progress to be made in the development of improved blind rehabilitation.
WHY: This needs to be researched in preference to more technology being developed. This is our greatest research need.
The following is a proposal for this research.
It is considered that technology can greatly help a blind user to perceive his/her environment through an alternative sense to sight. The most effective alternative is hearing, whose capacity is very much greater than that of the tactile sense.
Research should be focused on improving the channels from the blind user with a perceived need to the actual end use of the developed assistive technology. Knowledgeable purchasers, agencies and teachers are the ones most qualified to be effective in this chain by providing quality education and training in the new technologies. Indeed, the teachers should span a range of disciplines - psychology, audiology, physical therapy, occupational therapy, as well as orientation and mobility. An understanding of all the tools now available for aiding blind persons to achieve their potential should be a prerequisite for a teacher of the blind.
It is most appropriate for NFB Jernigan Institute to undertake initiation and management of a study of this problem through their proposed Global Coalition of Agencies and Institutes.
Title: Interface, Interface, Interface
Presented by: Bill Crandall, Ph.D.; Smith-Kettlewell
Categories: Smart spaces; signage
What: I would like the group to take a sober look at implementation models that have been demonstrated to be usable and practical at present, as distinct from those which show promise, or perhaps will show promise, at some future time. That is, what can we learn from existing good system or device design that might transfer to our thinking about systems still on the drawing board, or even earlier, in the conceptual phase?
Why: I tell engineers bringing their 'next new thing' to Smith-Kettlewell that a useful interface is 95% of the challenge. Systems that are quick to learn and are, overall, a "no brainer" (low cognitive load) are best and may actually be used by people. There are whiz-bang devices out there (I'm not just talking about devices for blind people) that just don't get picked up because they're a pain in the butt. The Optacon is a fantastic device: it really works and is very versatile. However, the training time is years, the device is very bulky and, although it does the job incredibly well, it is slow and requires 100% attention. It is also very expensive. But it can read script handwriting or follow lines in a drawing - a really fantastic and serious invention! Low cognitive load is a must. Every time I see someone talking on a cell phone while driving I freak out and try to get some distance from them; I personally know how dangerous this is because I sometimes use a cell phone when I'm driving! In fact, we recommend that people NOT use Talking Signs while crossing streets or walking up/down stairs... and Talking Signs, at this point at least, rates very high along the "no brainer" dimension in wayfinding technology.
What are other user-based design characteristics we need to be aware of as we start planning the future?
There is a great distance between an idea and the development of a technical approach. Then there is a great distance between having a technical approach and having a device that is usable. And there is a further great distance between having a device which is usable and one which, from a human factors standpoint, will actually be used. Therefore, the expertise of the group may best be used to provide a reality check and guidance on the best way to proceed, so that some of the more tentative ideas can be improved at the early stages of development.
Resources:
Smith-Kettlewell page for Talking Signs research: www.ski.org/Rehab/WCrandall
Talking Signs Company web site: www.talkingsigns.com
Position paper of Talking Signs and links to Dept. of Geography (UC Santa Barbara) Talking Signs research of Jim Marston/Reg Golledge: http://ubats.org/marston.htm
Title: Out with the old… In with the new!
Presented by: Bill Crandall, Ph.D.; Smith-Kettlewell
Categories: Smart spaces; signage
What: We should consider the impact that rushing to promote new R&D has on adequately supporting existing wayfinding systems.
Why: Systems that have been through the whole R&D process and have been demonstrated to work could be used right now by people who need enhanced independence on a day-to-day basis. The public is constantly promised that, with respect to everything technical, "tomorrow" or "in the future" it (whatever it is) will be too cheap to meter; and so, while we are busy inventing and advancing the next new thing, blind people who could use existing technology at this moment continue to be left in the dark. I have seen this happen time and again on the various listservs. These "promises for the future" confuse consumers, regulators and manufacturers. Strong acknowledgment of what works now could go a long way toward keeping those who feel either slighted or threatened from pouting or feeling they have territory to defend - that the drawbridge should be raised once they have crossed it.
Title: Preparing Personnel Preparation Curriculum
Presented By: Kathleen Huebner, PhD, COMS & Laurel Leigh, MS, COMS
Department of Graduate Studies in Vision Impairment
Pennsylvania College of Optometry
Category: Infrastructure Change - Curriculum
What: As we look to the future of blind navigation, it is prudent to consider the curricular changes personnel preparation programs will have to make to remain up to date with the growing knowledge base. In keeping with the title of this first World Congress, "Inventing the Future of Blind Navigation", the time is right to consider the role personnel preparation programs will play in these efforts.
Why we offer this Proposal: To address these questions/issues:
1. What is the responsibility of Personnel Preparation Programs?
Who should have the primary training responsibility for future professionals?
What collaborative strategies can be developed between the vendors and the personnel preparation programs?
Which disciplines (TVI, O&M, etc.) will have the primary responsibility for furthering the knowledge base on blind navigation technology?
2. Should training be delivered through Continuing Education programs or as In-Service?
How can training be funded?
Are vendors able to provide equipment on loan to personnel preparation programs for training purposes? Such collaboration may be a way for personnel preparation programs to remain up to date.
What is the best method for training the trainers? [WMU program for BNGPS, Vendor training, etc.]
3. What are the overarching challenges?
- Curricular
- Funding
- Philosophical
Title: Create the "Land Astronaut Program"
Presented by: Institute for Innovative Blind Navigation
What: The detailed explanation of the Land Astronaut Program is available on the IIBN website. It is a large idea that takes analogies from NASA's space program and applies them to blind rehabilitation. Included in the idea is a curriculum/training program for blind children: a scouting model, with increasingly complex, developmentally appropriate "badges" to be earned. Technologies would be applied to wayfinding challenges in ever more challenging spaces, and children would earn "merits" as they attained proficiency. We also propose that the Land Astronaut Program include an advanced group of blind travelers, "real" astronauts who can traverse the most challenging of spaces. These astronauts would be the test pilots for the new vehicles and emerging technologies. In addition to the children's program and the advanced team, we propose that an internet-based group of land astronauts be created using the model that Peter Meijer pioneered: making everything available through the internet so that independents can experiment on their own time and on their own turf.
Why: A problem with research and pilot programs is that too few blind individuals are available to test the technologies or theories; researchers end up blindfolding sighted college students or using a very small sample size. The Land Astronaut Program would provide a pool of willing individuals, particularly if we used it as a means to pay astronauts for their work (especially the young ones). We envision the program being run through a collaboration between NASA and the coalition that comes out of the Congress. The coalition will be heavily represented by consumers, and the astronauts will all be blind, so the eternal problem of "failure to include consumers at all levels" would be circumvented. This is also a great public relations opportunity: it would showcase talented blind individuals, showcase cooperation among the coalition members, showcase NASA, and showcase the technologies. Finally, and not insignificantly, many inventions begin as tools to help people circumvent disabilities; later, it is discovered that these technologies have a broad application for the population at large. We believe this to be true of many of the high technologies we are discussing at the Congress. The Land Astronaut Program would put these projects before the sighted public and demonstrate their effectiveness. This could lead to mainstream employment of the technologies.
Title: Can designers be sure that they will produce devices that visually impaired people need and want?
Presented By: Alan Brooks
What: My proposal is to develop an international database of expertise, drawn from visually impaired people and professional mobility officers, that can provide a source of advice to researchers and designers of mobility devices.
Placement on the database should be restricted to individuals who meet yet-to-be-agreed criteria. The challenge is to develop criteria that will ensure those on the database represent a wide range of visually impaired people practising mobility in various environments across every continent.
The mobility professionals should also be representative of the industry and possess the ability and interest to develop innovative training programmes for clients using technological devices.
Why this is important: Researchers and designers have often worked in the commercial sector, where a primary objective is to ensure that their product is considered better than their rivals'. Frequently the technology they are working with is exploited to its maximum potential to provide an ever-increasing range of features. This often leads to the over-development of products with complex operating systems that are under-utilised by most of the customers who eventually possess them.
Designers can become so involved with the potential of the technology they work with that they sometimes continue to develop the product without sufficient thought about how the customer will learn to use the device. They often expect the technology to solve all the problems, when in reality the user will adopt a variety of strategies to overcome mobility challenges, some of which will be unrelated to the technology but still prove quite effective.
In the field of visual impairment, even when the customer is properly considered, researchers and designers regularly have difficulty identifying, in sufficient numbers, those who can offer constructive responses to product features and design. Sadly, I have witnessed research programmes that used blindfolded volunteer university students, expecting them to be representative of blind people.
The features of future mobility devices must take into consideration the abilities, needs, views and wishes of visually impaired people.
Title: Should a Design Process and an Evaluation of a Mobility Aid for the Blind Precede the Manufacture and Marketing Processes?
Contributed by: Dr Leslie Kay of Bay Advanced Technologies Ltd.
Category: Medical instruments
What: The American Foundation for the Blind published a monograph in 1974 with the title "Towards Objective Mobility Evaluation: Some Thoughts on a Theory" (by L. Kay). This paper sets out to explain the issues to be addressed by any designer of a mobility aid for a blind person. It was written as part of the design process of the first binaural ultrasonic mobility aid, the Sonicguide, which displays a stereo image of environmental objects up to 7 meters away over an arc of 70 degrees.
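As a toy model of the binaural display principle (the mapping direction and constants below are illustrative assumptions, not the Sonicguide's actual parameters), object distance can be rendered as pitch and bearing as interaural balance:

    def sonar_tone(distance_m, bearing_deg, max_range_m=7.0, arc_deg=70.0):
        """Toy binaural sonar display: distance -> pitch, bearing ->
        interaural balance. Constants are illustrative only."""
        if distance_m > max_range_m or abs(bearing_deg) > arc_deg / 2.0:
            return None                    # outside the 70-degree, 7 m field
        freq_hz = 100.0 + 4000.0 * (distance_m / max_range_m)
        pan = bearing_deg / (arc_deg / 2.0)   # -1 = far left, +1 = far right
        left, right = (1.0 - pan) / 2.0, (1.0 + pan) / 2.0
        return freq_hz, left, right

    print(sonar_tone(3.5, -10.0))   # mid-range object slightly to the left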
As part of the novel design process, four doctoral research theses and two masters research theses were published, along with five papers, all on the novel concepts being developed (see http://www.wayfinding.net/iibnNECLesliesbib.htm). These established the design criteria for the "binaural sonar glasses" later manufactured by the late Russell Smith, an outstanding engineer who established the company Pulse Data.
Two evaluations were carried out using the experimental/prototype models worn by blind persons.
It was found that the spatial information provided by the sonar system satisfied the primary issues addressed in the published paper on evaluation. Blind mobility was seen to resemble sighted mobility for many of the sensory aid users. The Sonicguide was widely acclaimed by blind users over a period of 27 years, and this resulted in many awards for the manufacturer and Dr Kay.
Because of the very strong theoretical scientific background to the design and the interesting psychological outcomes, many independent international research projects were undertaken and papers published (see http://www.wayfinding.net/iibnNECLesliesbib.htm).
This seems to justify the need for a strong scientific basis for sensory aid development and design.
Why: There have been many subsequent designs of spatial sensors, mainly based on intuition and not supported by robust scientific study. These devices get sold and commented on by a few users who do not in any way represent the user population, but the comments become interpreted as valid. No useful comparisons are made between the different features embodied in the device designs, and serving agencies are confused about their validity.
Evaluations do not exist to guide an agency, and such evaluation processes are beyond the agencies' capacity to carry out themselves.
The World Congress on Wayfinding is an appropriate body to discuss the question posed in the title of this proposal.
Many different views can be expressed about the efficacy of an invention, and normally the inventor, who is clearly committed to its use, makes the introduction to the market. As always, there is a financial issue: does the invention justify the cost of objective evaluation and the statistical process? At what time should this be carried out to be valuable?
The evidence suggests that user market forces, not science, determine the future of a spatial sensor. This does not augur well for the future: medical instruments are professionally evaluated, and spatial sensors for the blind come under the category of medical instruments.
Title: Build the roads and they will come.
Contributed by: Mike May, CEO Sendero Group
My guidelines are these:
1. It is no longer technology itself that limits access; it is lack of funding for the technology that is the greatest barrier. With at least 70% unemployment among the blind, and with at least that percentage of products having to be funded by departments of rehabilitation, no amount of product innovation is going to be commercially viable unless the product costs less than a couple hundred dollars or is funded by state rehabilitation. Furthermore, the cost of research and development may never be recovered, so R&D grant funding must be available to get the product started in the first place.
2. The more one can use commercial components within a product design, the more likely it is that the product will be affordable and stay current.
3. A product will have the greatest potential for being affordable and universally adopted if it requires little or no infrastructure modification.
I propose a two-pronged approach to putting wayfinding technology into the hands of more blind folks. First, increase the number of rehab departments that will fund wayfinding technology, through a combination of education and, if possible, mandate. Second, fund research into which location sensors will help to augment indoor/outdoor wayfinding. As location content is augmented per the proposal by Jim Marston, we need to consider which sensors have the best viability, per my guidelines above, to electronically label more and more locations.
GPS is a good example of what can work to provide accessible wayfinding. It is a commercially successful technology which has reached the mass market, driving hardware prices down and creating demand for smaller and more accurate units. Access to the position information is free. At least three accessible GPS products are on the market. Years of presentations at conferences have begun to educate consumers, teachers and counselors, and yet awareness and price must still be improved.
Early frontiersmen discovered rich lands they could enjoy, but it wasn't until roads and railways were built that the masses had access to these opportunities. In "designing the future", as Doug calls it, we must discriminate between those access technologies which have the possibility of funding and commercial viability and those which do not: those which have the possibility of roads, and those which are only for researchers and pioneers.
* ... blind travelers have not been trained to use tactile street maps
* therefore demand for tactile maps as wayfinding tools is low
* therefore accessible cartographic materials are rare and expensive
* therefore blind travelers have not been trained to use tactile street maps...