(There is a short version of this article that was submitted for publication)
Professor Steven Mann is shopping in a grocery store. He is wearing his EyeTap sunglasses. As he looks at merchandise on the shelves, he is also talking with his wife using the communications module of his wearable computer system. She is explaining what she wants him to select from the shelves. At home in the kitchen she is cooking the evening meal. She is wearing her EyeTap spectacles. Mrs. Mann can see what her husband is looking at, as if she were peering out of his eyes.
Dr. Steven Mann, the Director of the University of Toronto's Personal Imaging Lab, is keenly interested in the potential for high technology tools to assist individuals with disabilities. In the summer of 2002, he took the first steps in the creation of a new lab at the University of Toronto called LoVE, the "Lab of Vision Empairment." He coined the term "empairment" as a blend of the concepts of impairment and empowerment, to emphasize the creative power that the emerging technologies brought to consumers.
Professor Mann coined another term: "Seeing Eye People." Wearing the EyeTap technology developed by Dr. Mann, a parent could look out of their blind child's eyes as the child was about to cross a street a block away. A mobility specialist could monitor where a student "looked" as they traveled. A blind student could peer at the shelves in a grocery store and be assisted by a sighted friend (in another part of town) to select items.
Dr. Mann teaches a course at the University of Toronto on how to be a Cyborg. Students in the class can look out of each other's eyes. They can get their e-mail on the inside surface of their EyeTap lenses. They can project what they see directly to the internet so that others can look out through their eyes. "Seeing Eye People" is a potential wayfinding tool for the blind traveler; it could become reality if we had the determination to make it so.
We are knee deep in the river that flows between science fiction and reality. There are many new inventions floating to the surface in these waters; some will sink into the depths of bad science fiction, but others will rise like ocean liners. It is a disorienting time. We are trying to make sense of wave after wave of innovation. What are we to do, for example, with emerging innovations like the following:
Somewhere between science fiction and reality are newly evolving robotic animals that see and navigate. Honda's humanoid robot Asimo is walking around auto shows selling cars. Sony has an artificial creatures lab where they developed a robotic dog called Aibo. Every generation, Aibo gets smarter and more "real." One of Sony's goals is to use their artificial animals to assist disabled individuals. Robotic assistance to circumvent disabilities is a field that is rapidly evolving. Many of these robotic tools could be programmed to assist with wayfinding, or be custom designed for individual children as training toys. Should we be looking seriously at these robotic technologies?
It is not science fiction to suggest that there will be a "talking angel" that sits near the ear and tells the blind traveler where they are in space. Global Positioning System (GPS) technology is evolving at a blistering pace. GPS is a mass market phenomenon, and the spinoff to blind consumers is rapidly unfolding. Prices are dropping. The size of GPS units is shrinking. Accuracy gets better and better as weeks pass. Companies like the Sendero Group and Visuaide are adapting GPS units for the blind consumer. This is nothing less than a revolution for blind navigation. What are the implications of these changes and how shall we position ourselves to make maximum use of these innovations?
It is not science fiction, nor is it crazy to suggest that driving of modified vehicles will be possible for the blind consumer. Power wheelchairs and powered bike systems that are designed for blind travelers may be early pioneering vehicles that lead to more sophisticated travel options. Computer networks that link together vehicles, traffic control systems, smart "blind" suits worn by consumers, and smart roadways could decrease risks and enable seamless motorized wayfinding. Advanced radar and sonar systems are already being incorporated in ever smarter automobiles. GPS will be embedded everywhere as will objects with speech capability. The convergence of all these technologies is increasing the potential for blind use of powered vehicles. Should we be building prototype "Segways for the blind?"
It is not science fiction to suggest that blind children could be taught to use artificial vision substitution technologies that enable them to "see with sound." Just as an individual is trained to use Braille as a tactual substitution for visual reading, so might the developing brain be taught to perceive in ways completely foreign to the sighted. The tools are available right now. We insist that Braille be taught so that blind children will grow up to have a "near point vision substitution system" to access the culture's written media. Should we not also insist that these same blind children have a "far point vision substitution system" to access the spatial knowledge of the culture? Is "wayfinding vision" as important as "Braille vision?" If it is, shouldn't we be working to develop the infrastructure necessary to train young blind children to navigate with vision substitution systems?
It is not science fiction to declare that human beings will be walking around soon with computer chips embedded in their bodies, including in their brains. These chips will communicate with each other, with wearable computers embedded in the clothing, and with external networks of all kinds (like the internet, for example). Vision-enabling chips will be crude in the pioneering stages. They won't be useful at first for pattern recognition and for sophisticated visual perception. However, the early chips will reawaken the powerful navigation functions of the vision system. The first eye chips will be wayfinding tools.
Research teams are at work in major universities all over the world on these vision-enabling chips. Pioneering systems have already been implanted in animal models and in humans. Early surgeries are turning totally blind individuals into severely visually impaired individuals, but with capabilities and limitations unique to the rehabilitation field. What shall we do with these "new cyborgs," these new-style rehabilitation patients with perceptual systems unlike anything that ever existed before in the history of mankind?
Biotechnology, tissue engineering, and genetic surgeries will eventually enable doctors to repair the body at the molecular level. This is no longer wishful thinking, nor the stuff of science fiction. It could be ten years or more before these advances begin to reduce the population of blind individuals. What is certain, however, is that these medical innovations must occur at the level of the infant or young child (or in utero). If the brain does not receive visual input, cells slowly die in various regions of the cortex that are needed for normal perception. Vision "restored" in older individuals will therefore contain unusual perceptual anomalies that require rehabilitation training. In other words, early developments in biotechnology will not cure blindness. Like brain implantation of computer chips, pioneering ocular tissue repair will result in individuals having various kinds of novel vision impairments. This will be the case until we learn to repair entire regions of the brain (not just primary vision tracts and centers). Wayfinding will be the primary vision capability that will benefit from early developments in biotechnology.
In the summer of 2001, the Institute for Innovative Blind Navigation (IIBN) received funding from NEC Foundation of America to bring the issue of "Advances in Wayfinding Technology" to the blindness community. Our goal was to accelerate and intensify the debate about these new tools. Our hope was also to consolidate the forces within the field who might bring their expertise and leadership to address the issues. This article is a summary from our experiences, and contains our current thinking on the subject.
New technologies do not necessarily do away with old technologies. The "old" technologies slip into the "lower tech" category. Radio did not disappear when television arrived. The internet did not do away with television. Braille did not disappear with the development of talking books or screen readers. The cane and dog guide became "lower tech solutions" and part of a collection of wayfinding tools when the higher tech inventions appeared.
We can now embed intelligence wherever we care to place these tiny computer chips. We can put the chips in toys and make them smarter; they can now talk, move, and "remember." Embedded processing chips are causing a robotics revolution. We can put the chips in signage, enabling signs to talk and store information. We can place the chips on surfaces, on walls, floors, ceilings, desk tops, poles, sidewalks, and roads. These embedded chips are creating smart spaces, actuating intersections, making intelligent highways, and creating smart ped-heads at street corners. We can embed the chips inside materials, creating smart fabrics and smart accessories. This has resulted in a wearable computing revolution. We can even place the tiny chips inside living beings, making bodies the carriers of embedded intelligence. The chips can be made so small they are essentially invisible or hidden from view. It follows also that the smaller the chips, the cheaper they are to manufacture, and the more likely that they will be embedded in stuff just because it is so easy and inexpensive to do. Processing computer chips are driving the technology revolution.
We are also embedding the computer chips into sensors; this is causing a sensory revolution. Sensing technologies, things that see, hear, feel, and smell, are evolving at an alarming speed. Every one to two years brings cheaper, smaller, smarter cameras. Listening devices approach the invisible, get more accurate, and improve their range and filtering capabilities. Surfaces are rapidly becoming more like human skin. Not only will toys (robots) get smart, talk, move, and remember, they will also see, hear, and feel with ever greater accuracy. Not only will smart spaces talk, store knowledge, and help with decision making, but the spaces will monitor the world with vision, hearing, and a sense of touch (and smell). Not only will materials gain intelligence, making for smart shoes, toilets, sheets, carpets, guns, bombs, anything of substance, but the materials will have senses that enable self-healing and adjustment to stress. Not only will tiny intelligent machines be embedded in the body, but these machines will sense what is around them and protect themselves, attack foreign bodies, and self-repair (or call for help).
Computer processing chips are also the reason for the communications revolution, everything from digital recorders, cell phones, PDAs, to the internet. The impact of networking all these processors together is staggering. Smart things that have senses will communicate with other smart things. For example, chips embedded in the body could (if we want) communicate with smart clothing, or with smart appliances, or with smart robots. Any chip, any place, could be designed to communicate with any other intelligent chip. It stuns the mind trying to figure out the implications of networking smart processors together.
These computer chips are creating a world of objects, spaces, and materials that have intelligence, that have senses, and that communicate. This would be awesome enough to declare a technology revolution, but it is not the essential ingredient of this revolution. What is happening can be simply stated, but it is hard for human beings to understand on a gut level; it is foreign to our nature. It is simply this:
The pace of technological change is exponential. The essential kernel is not that technology is arriving fast. It is that the pace of change is getting faster and faster and faster. Our potential to create wonderful tools, and our potential for creating destructive tools (and everything in between) is increasing exponentially, outstripping our ability to cope, both on a personal level and across the board institutionally. This change was discovered and elaborated by Gordon Moore and Raymond Kurzweil. Both men are the authors of "technology laws."
"Moore's Law" refers to the predictions made by Intel Corporation founder Gordon Moore. Dr. Moore studied the development of computer chip technology over several years and discovered a pattern. He found that computer chips were able to process information at an ever accelerating rate. The pattern showed that every 12 to 18 months scientists produced chips that were twice as capable as the last generation. Each new generation of computer processor was twice as fast, stored twice as much information, and cost the same or less than previous chips. Futurist and inventor Raymond Kurzweil studied Moore's findings and discovered that exponential doubling of processing power had been going on for many generations before computer chips were invented, and he predicted that after we had reached the physical limits for making computer chips we would discover new technologies that would continue the doubling of processing power (called the Law of Accelerating Returns).
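The arithmetic behind this doubling is simple to state but hard to feel on a gut level. A minimal sketch in Python (the 18-month period and the function names here are illustrative assumptions for the sake of the arithmetic, not figures drawn from Moore's or Kurzweil's data):

```python
def doublings(years, period_months=18):
    """Number of complete doubling periods in a span of years."""
    return int(years * 12 // period_months)

def capability_multiplier(years, period_months=18):
    """Relative processing power after `years`, assuming one doubling per period."""
    return 2 ** doublings(years, period_months)

# A decade at an 18-month doubling period gives six doublings: a 64-fold gain.
print(capability_multiplier(10))  # -> 64
```

The counterintuitive part is the compounding: each new cycle adds more absolute capability than all of the previous cycles combined.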
The revolutionary idea within Moore's and Kurzweil's observations is that the creative power of individuals or small groups (for good or evil) is increasing at an alarming rate. So much "fire power" is being delivered to each of us that we can now (and ever more easily every few years) create our own futures. We have arrived at a time in history when it is possible to create the futures that we envision. So, the question (for example) is not "Can we create a vision substitution system for wayfinding?" The questions are: "What kind of vision substitution system should we build?" "Who should build it?" "How will it be paid for?" "How high a priority is this?" Etc.
Moore's Law predicts that wayfinding technologies should get progressively smaller, cheaper, and more powerful in one to two year cycles. Each new tool on the market could upgrade its processing chips yearly to become twice as powerful as last year's model. Every cycle could bring the consumer more personal creative power.
We can do what we dream if we have the willpower and if there is cooperation among the consumer groups, the government agencies, the universities, and the inventors. The pieces of the puzzle are sitting on the table right this moment. How shall we make a coherent picture out of all these strange puzzle pieces? How do we assemble useful tools out of all these powerful technologies? What we presently lack, at all levels of innovation, is a collective vision of the future and a common resolve to create and fund the necessary training and support systems. There are two fundamental questions before all of us:
"What kind of future shall we create?"
"Where will the leadership come from to map out the futures that we envision; what individuals and what institutions will cooperate to make change happen?"
One: The rapid development of smart spaces has created the potential for environments that could communicate with and assist the blind traveler. The spaces around us, the roadways, the hallways, the sidewalks, the insides of rooms and vehicles, and our work surfaces are being embedded (or could be) with tiny computer systems that can access databases. In other words, spaces are being embedded with knowledge that can be accessed on demand.
Signs and symbols can also contain embedded computer chips. Signs can become talking portals (windows) that link to the internet and to other objects. To become a good blind traveler in this brave new world of smart spaces means that the blind consumer must develop an "environmental literacy" that is as sophisticated and important as is academic literacy.
Location-based technologies (GPS, dead reckoning, etc.) are converging with and helping to create smart spaces. Every step we take places us on a labeled location in space; GPS has given every spot a name. Institutions and individuals are filling these tagged GPS locations with information that can be accessed by environmentally literate persons; i.e., those people who know how to use the technologies to gather knowledge about spatial location.
Two: Not only are environments getting smarter, but so too are the tools that consumers use to probe the world around them. We are in the age of the cyborg. A cyborg is an individual who has tiny computers embedded on or in their bodies; call them "smart consumers." We are creating the bionic people who were last century's science fiction; we are enhancing the capability of the individual.
In this category are:
A. Vision substitution tools like KASPA, the Sound Flash, and The vOICe;
B. Intelligent obstacle detectors like the Sonic Pathfinder and Miniguide;
C. Wearable computers and the networks that connect them all together;
D. Vision-enabling computer chips that are placed inside the brain;
E. Smarter canes.
Three: Standing inside smart environments, alongside the smart consumer, are a strange collection of robotic creatures, call them "smart helpers." In this category are vehicles loaded with computer processing power and very small and smart sensors. These vehicles have potential as tools for enabling blind motorized travel. Also in this grouping is the entire planet of robotic creatures, some humanoid, some animal-like, some immobile and embedded, and others strange and unusual to the human eye. These creatures are poised to become ever more intelligent and ever more sensory-enabled as the acceleration in technology continues. Computer vision is also rapidly evolving. Machines that see and report what they see are on the horizon, no longer science fiction.
Four: Biotechnology. Computer chips are embedded in evolving research tools. We are building smart machines that can organize, sort, and analyze data. This is fueling a medical revolution; we are at the beginning of a new age in medicine. There is now real hope for tissue repair or replacement. Restoration of visual function will constantly evolve. During the pioneering years ahead expect to see advances that primarily improve wayfinding vision.
All four of the above arenas are overlapping and converging. This is a dynamic time, with surges of development and continual surprises. It is a lot about potential and the need for cooperative action.
Let us now take a more detailed look at each of these four areas. Then we will look at the problem of need (do we need rocket powered canes?). Finally, we will conclude with recommendations for dealing with this onslaught of new technologies.
The phrase "environmental literacy" was carefully selected to define this category because it draws attention to accessibility rights. It is well accepted in the culture of North America that people who are blind have a right to be literate. There is an institutional framework, backed up by the legal system, that ensures the right to physical access to architectural spaces, and a right to media based (and academic) information. There is a mandate, legal and accepted, to modify all types of media (video, the internet, television, print) so that individuals who are blind can access the culture's knowledge base. Technologies that assist with these mandates are often paid for by government programs. Research and development is encouraged, supported, and financially supplemented.
This is not currently the case for strategies and tools that assist with blind navigation. There is no cultural mandate, legal or accepted, that ensures access to orientation knowledge, to the geographical data of the culture. There are no funding avenues, little research and development support, and not much demand from consumers or professionals. This is not surprising since these orientation technologies are only a few years old; we have only begun embedding knowledge into spaces. By referring to this dilemma as a literacy issue, we draw attention to the needs (the rights) of blind individuals to access this new wealth of spatial knowledge.
The task before us then is to analyze the built environment and determine how we might make smart spaces that are friendlier to people navigating without vision. Some spaces will need little management; others are more problematic and will require more. Familiar areas, like the inside of a person's house or a familiar work space, don't necessarily need intervention because navigation there is usually simple and safe. Other areas are harder to comprehend or negotiate: open spaces (parks, playgrounds, parking lots, gas stations, meadows, wooded areas, etc.), street intersections, and the insides of moving vehicles (where there is no feedback about what is being passed or about the vehicle's position in the geography). Of particular interest to those of us who work with blind children are "smart learning spaces," like elementary school classrooms. Building portable (temporary or alterable) smart classrooms is an option worth pursuing.
Location-Based Tracking Technologies
Mike May, CEO of Sendero Group and product development manager for Humanware, pioneered the first GPS system for use by his blind colleagues. Mike continues to be the leader in the application of location-based tools for blind consumers, and he is undoubtedly the most knowledgeable user of the technologies. The Sendero website, created and managed by Mike May, is the most comprehensive source for information about location-based technology on the web. Sendero Group, located in Davis, California, is leading the way, but expect a flood of players in this arena since it is such an obvious adaptation. The European Union for many years funded a GPS for the Blind project called Mobic. A prototype GPS system was developed at Nottingham University in England. And in Quebec, Canada, Visuaide has developed a GPS system for the blind called the VictorTrekker, due out in 2003.
GPS location-based knowledge has two components: a coordinate system for labeling longitude and latitude, and a managed geographical database (there are many) filled with map and landmark details. The result of this technology is that every open area on earth has gotten smarter; every square inch of dirt on the planet now contains knowledge about "itself." This knowledge is getting richer and richer as institutions add details to the geographical databases, and as computer technology gets ever more sophisticated. The institutions that are creating databases on the GPS grid include owners of vehicle fleets (taxis, trucks, trains, planes, boats), government agencies like the military, the post office, and Amtrak, the makers of passenger vehicles (OnStar), farmers (they "tag" individual plants), disability groups, engineers, architects, travel consultants, builders, and many other professionals who use the technology for their own ends.
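The pairing of a coordinate system with a managed database can be sketched in a few lines. This is a toy illustration of the idea, not any real product's data model; the coordinates and landmark labels below are invented:

```python
import math

# A toy location-based database: coordinate pairs tagged with landmark labels.
# (Latitude/longitude values and names are illustrative, not real map data.)
landmarks = {
    (43.6629, -79.3957): "University of Toronto, main gate",
    (43.6677, -79.3948): "Bus stop, northbound",
    (43.6600, -79.3900): "Grocery store entrance",
}

def nearest_landmark(lat, lon):
    """Return the label of the closest tagged point (flat-earth approximation)."""
    return min(landmarks.items(),
               key=lambda kv: math.dist(kv[0], (lat, lon)))[1]

print(nearest_landmark(43.6630, -79.3955))  # -> University of Toronto, main gate
```

A real system would hold millions of such tagged points and answer the same question, "what is near me?", as the traveler moves; the speech layer for a blind consumer is then a matter of reading the answer aloud.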
The electronic tags attached to each location in space can become portals. They can become part of a communications network. Not only can they link to stored databases, but they can link to the internet. We can ask the system at any time during a journey to tell us about points of interest along or near our travel route. The wireless internet has the potential to allow access to information about any area from any location. This means that the standard cell phone (or hand held PDA) will become an all purpose tool. A module on the unit will make it blindness friendly and enable verbal (instructional) commands. GPS will be a standard feature on these evolving "phones." If environmental literacy is made a political issue, the communications companies will have to comply with governmental regulations that require accessibility. This means that the phone companies will eventually have to provide GPS-enabled modules with all their products, or at least become players in the modification of the systems.
Three Global Positioning System satellites must be in view overhead, in any given region of the earth, before triangulation strategies can be used to determine position. Three satellites generally provide only poor information, giving location data with a large margin of error. Instead of indicating exact position, three satellites give a position that may be off by many feet. The more satellites active and accessible, the greater the accuracy. GPS does not work inside buildings, and signals are blocked by tall structures. Organizations like the Veterans Administration in Atlanta and Western Michigan University (working with a consortium of universities and Sendero Group) are developing strategies for maintaining GPS contact through all environments. This is called seamless wayfinding; it employs a combination of technologies including dead reckoning units and indoor grid computing (sensory systems that track people inside buildings; called "sentient computing").
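The geometry behind this position fixing (strictly, trilateration from measured ranges) can be illustrated with a toy two-dimensional version. Real GPS receivers solve the three-dimensional problem and must also estimate their own clock error, which is why a fourth satellite is needed in practice. This sketch, with invented beacon coordinates and function names, shows only the core idea of locating a point from known ranges:

```python
import math

def trilaterate_2d(p1, p2, p3, r1, r2, r3):
    """Solve for (x, y) given three beacon positions and ranges to each.
    Subtracting the first circle equation from the other two cancels the
    squared unknowns, leaving a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Beacons at known coordinates; ranges measured to an unknown receiver.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(b, true_pos) for b in beacons]
print(trilaterate_2d(*beacons, *ranges))  # -> approximately (3.0, 4.0)
```

With only the minimum number of beacons, any error in a measured range shifts the solution directly; extra beacons allow averaging that error down, which is the intuition behind "more satellites, better accuracy."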
Dead Reckoning (DR) is a wayfinding technology that can be used to identify position indoors. When GPS signals are blocked, location based information processing can be passed on to Dead Reckoning systems. DR can access geographic databases in the same manner used by GPS units, and accuracy of DR is often better than GPS systems. Engineering breakthroughs hold the promise that DR systems can be made as small as one inch cubes, making them perfect for wearable computing applications.
DR uses a sophisticated electronic compass, with mechanisms for calculating speed, position, heading, and distance to destinations (just like GPS). Movement detection systems are also built in (accelerometers, gyros, and pedometers). Unlike GPS, Dead Reckoning can also provide vertical coordinates so that a blind traveler could gather information about any floor of a building. Use of DR technology is spawning the development of indoor maps and indoor location-based knowledge banks. DR also has the potential for use with building blueprints, indoor numbering systems, and landmark positions; all this knowledge could be embedded in smart indoor spaces. An example of dead reckoning technology is the OmniMate under development at the University of Michigan.
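At its core, dead reckoning is a simple position update: from a known starting point, each measured heading and distance advances the estimate. A minimal sketch of that update step (the compass heading convention and the sample walk are illustrative assumptions, not a description of any particular DR unit):

```python
import math

def dead_reckon(position, heading_deg, distance):
    """Advance an (x, y) position by `distance` along a compass heading.
    Heading is degrees clockwise from north (0 = north, 90 = east)."""
    rad = math.radians(heading_deg)
    x, y = position
    return (x + distance * math.sin(rad), y + distance * math.cos(rad))

# A walk logged as (heading, distance) legs: east 10, north 5, west 10.
pos = (0.0, 0.0)
for heading, dist in [(90, 10), (0, 5), (270, 10)]:
    pos = dead_reckon(pos, heading, dist)
print(pos)  # back over the starting x, five units north
```

The weakness of pure dead reckoning is also visible here: every heading and distance measurement carries a small error, and those errors accumulate leg after leg, which is why DR is paired with GPS or indoor beacons for periodic correction.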
Sentient computing refers to indoor sensory systems that track the location of individuals inside buildings. This is accomplished in various ways. One system on the market uses fluorescent bulbs as data transmitters. These are sold by a Boston-based company called Talking Lights. Invented by MIT professor Dr. Steven Leeb, this system holds promise as an indoor navigation aid for blind individuals, especially when combined in a seamless wayfinding network.
It is an obvious step to apply location-based tracking tools to the problem of blind navigation, especially because the technologies are rapidly becoming an integral part of the entire landscape of the sighted built environment. It is not a matter of creating new technologies to improve blind navigation; it is only necessary to take off-the-shelf systems and add voice feedback and database filters. These technology developments have created an entirely new set of orientation tools for the blind consumer.
Signage that Talks
When Dan Kish, founder and CEO of World Access for the Blind, talks about the environment, he makes sure that his sighted colleagues understand an important first premise: It is not so much that blind individuals cannot navigate well, it is more the case that sighted people have constructed their world to conform to the needs of the eyes. The buildings and pathways on this planet were created to accommodate the sense of vision and are therefore not blindness friendly. We mark up the environment with lines and symbols, and with non-talking signage to tell sighted people what to do. We do not mark up the environment to tell blind people what to do. The environment tells the sighted about their options, but not the blind traveler. Because evolving technologies are enabling smart environments, we can now construct spaces that are as friendly to the blind traveler as they are to the sighted traveler.
Dan Kish also reasons that blind individuals should have the option to be flooded with information the way that visual input floods the vision system. The sighted brain filters and selects from torrents of visual images. The blind traveler should also have access to a flood of information. At present, the blind traveler walks through a relative "silent void," passing mountains of visual input containing information that could be used to inform, protect, and guide them through space. Technology exists now that can filter and select salient, relevant information from the sighted environment and translate it into verbal feedback. In other words, we have the tools now to overlay a blindness friendly architecture onto the sighted built environment; we can unlock the flood gates.
A large proportion of the useful visual information embedded in our environment is in the form of signs with print and image messages painted on them. Unlocking this information would suddenly flood the blind consumer with important wayfinding data. Signage for the sighted is everywhere and is taken for granted. These signs and symbols can now be modified to make them friendlier for blind navigation; things like store signs, sales posters, train and bus schedules, traffic signs, advertising, crosswalk lines, street lane lines, traffic lights, arrows, company logos, walk/don't walk images, scrolling message boards, all are candidates for modification.
It is also important to note that the sighted world is drowning in irrelevant, obnoxious, intrusive signage. A benefit of being blind is not having to perceive and block out this outrageous invasion of private space that is so taken for granted in "technologically modern" cultures. As we make the built environment more blindness friendly we should keep in mind not to embrace the destructive elements of technology. There are moral and ethical issues that should not be ignored in this entire discussion.
Creating signage that talks and provides meaningful information to the blind traveler has been for many years the passion of engineers at the Smith-Kettlewell Eye Research Institute in San Francisco. This team of mostly blind inventors pioneered the technology that created talking signs. The commercialization of this important technology was accomplished by another pioneer in the field, Ward Bond, who founded Talkingsigns Corporation. Talkingsigns are interwoven into the fabric of San Francisco, at subway stations, at city street intersections, in public buildings, at taxi stands and bus stops, even linked with GPS units to track trains and buses in real time over a wireless network. From San Francisco, the signs spread to other cities: to New York; to Glasgow, Scotland; to Venice, Italy; to Austin, Texas; to Wright State University in Ohio; to Washington, DC; and to more and more locations as awareness of the technology spread.
Talkingsigns is an infrared technology that works indoors and outdoors. It can be used in combination with other location-based technologies like GPS, and it can be embedded with knowledge using built-in memory chips. The signs can provide the direction of travel, the names of streets, and information about landmarks. Simply put, Talkingsigns can do what any sight-based signage can do, inform, protect, explain, and guide the traveler.
Signs, symbols, and pictures have the potential to become magic windows, portals into vast virtual worlds with unlimited, always evolving "shelves" of knowledge. Talkingsigns has found a way to connect with the internet through a system called PointLink Communications. This is the beginning of the creation of "magic windows," the portals that connect the internet to signage. Digital signs can be layered with information, from the basic reading of the words or description of images to ever more complex discussions. Everything that the sign can conjure up could be verbally communicated to appropriate receivers.
It is important to realize that our focus in this discussion is primarily on North America. The European and Asian communities in particular are creating technologies just as rapidly as the North Americans. Parallel efforts are underway in all four of the wayfinding areas highlighted in this discussion. Talking signage systems (and other location-based technologies), for example, are being created independently by blindness organizations in Great Britain, continental Europe, Japan, Australia, New Zealand, and throughout the modern digital world.
The age of the cyborg impacts wayfinding technology for the blind in three areas: vision substitution, wearable computers, and vision implant technology.
If the internet is any indication of determination and leadership in the field of vision substitution, then Dr. Peter Meijer of Philips Electronics in the Netherlands wins the prize. Dr. Meijer invented a video-to-sound system called The vOICe. Rarely does a month go by without Dr. Meijer making the software for The vOICe more sophisticated. He maintains an elaborate website (one of the best on the internet addressing vision substitution) and gives away the software that is the heart of his invention. Users purchase their own miniature cameras and head mounts, hook these up to the smallest computers they can find, and learn how to "see with sound" at whatever pace they can muster. Dr. Meijer provides free tech support on The vOICe listserv.
This generous and tireless dedication is only dampened by the lack of enthusiastic response that he and other inventors see coming from the blindness community. Why, they wonder, even when the inventor charges nothing and provides free tech support, do the leaders in the field fail to embrace the idea of vision substitution? This is a common frustration felt by inventors when they bring their passionate ideas to our field. Dr. Leslie Kay, the inventor of the Sonic Guide glasses and, more recently, KASPA, has a long history of challenging consumers and professionals to debate this issue. Dr. Kay is a pioneer in the arena of vision substitution, an early crusader for electronic wayfinding, and an original member of the famous Nottingham University Blind Mobility Research Unit in England. Dr. Kay's life-long dedication and Dr. Meijer's stubborn and intelligent focus on vision substitution must be appreciated and addressed.
The "debate" about the value of vision substitution (or any of the new technologies) is full of emotional misunderstanding. The truth (if we can be so bold as to suggest that we have a clue!) is that we are all caught up in this revolution. We are all trying to sort out what all these new technologies hold for us as consumers, professionals, and as individuals; what are the implications?
From our observations at IIBN, we believe that the problem is institutional; we are all trapped between the institutions and practices of the industrial age, and the fast moving changes characteristic of the digital age. The analogy below illustrates this dilemma.
If Louis Braille showed up at our door today with his new invention called Braille, we would politely redirect his enthusiasm. We would explain to Louis "why not."
"Listen Louis, there are many well meaning inventors out there, but I am afraid they haven't done their homework. They don't know the demographics of blindness, they don't understand the politics and infrastructure. They have no awareness of the history and the traditions. And the cost, wow! You have no idea how expensive. Take my word for this Louis, the price is too high. Here's what I mean:"
"If we do this Braille thing with the little bumps, we will have to change the entire infrastructure of the blindness community. The universities will have to design new training programs and curricular materials. The professional groups will have to create new chapters, new standards, and new certifications. Things are working pretty well right now without the little bumps. Change can't happen overnight."
"The consumer groups aren't going to like this either. They are doing just fine having friends and family read to them. They will lose the human element if they end up reading in isolation. Technology is so impersonal, you know. They are very powerful, the consumer groups. Have you talked to people who are blind about this "bumps on paper" idea? I mean, people who are blind were involved in the planning of this Braille thing from the beginning, right?"
"Laws will have to change too. Because there are no funding avenues, Louis. You will have to write grants and do bake sales. How much are you going to charge for this Braille stuff? If it costs more than two hundred dollars, no blind person will buy it."
"Finally, you should know that there are few people who have the time to learn about the bumps. It looks like a long and steep learning curve is involved here. You will have to start with very young children. The schools would have to hire Braille teachers. School systems cannot afford the expense. And who is going to design, manufacture, and market the Braille tools? You know, given the demographics, very few blind people would ever use such an idea. It's a fine idea though, the bumps, don't get me wrong. I'm one hundred percent on your side, but the practical problems are insurmountable."
"You know Louis, you seem like a nice guy. Have you thought about a career in social work? Forget the Braille thing. It can't happen in the modern world."
Braille is a vision substitution system. Touch replaces vision for near point information processing. This requires tremendous brain power. Turning the image of little bumps into a literate language is an amazing and very improbable feat. But Braille does exist, and it works very fluidly and efficiently. We built the institutional infrastructure needed to enable the culture to deliver Braille to consumers. We located funding. We trained teachers. We manufactured Braille paper and Braille writers. We wrote curriculums. We had the patience to withstand a long and difficult learning curve. We wrote the laws that guarantee the right to Braille education and access to the written media of the culture.
The real reason vision substitution systems for wayfinding have not been embraced is not because consumers and professionals have a negative attitude. It is because the task of rebuilding the infrastructure is greater than any one individual can deliver. Nothing less than an all out extended effort from a coalition of blindness organizations can pull this off.
The blindness community is understandably cautious because of the history of the early pioneering electronic travel aids. They were expensive, they were hard to repair, and they were bulky and heavy. But that history is behind us. We are on a new journey, with a new road map. The issue of vision substitution cannot be ignored. If we can teach blind individuals to navigate using alternative sensory input, then we have a moral and professional obligation to do so. It seems to follow that if a blind individual can read fluently using one vision substitution system (Braille), they should be able to use other vision substitution systems to improve wayfinding perception.
Each generation is comfortable with the cyborgs of its culture. People who wear contact lenses on their eyeballs, for example, are not considered weird. At the same time, every generation is also cautious about or rejecting of the new cyborgs of emerging generations. The horse-and-buggy generation did not feel at home with the motorized world of the emerging generations. The generation that is completely at home jogging around in Nike shoes is a little uncomfortable with the idea of electronic clothing. The generation that is at home popping aspirin is uneasy when it comes to implanting computer chips in the head. The initial reaction of our (industrial-based) culture to wearing tiny computers embedded in shoes, belts, jewelry, underwear, and jeans ranges from rejection (who needs it?) to caution and concern (where is this cyborg thing going?).
What our generation needs to accept is that wearable computing is the future. The next generation of cyborg development will not go away nor will the fast pace of change dissolve. The question that we face is "What kind of cyborg will enable seamless blind navigation?" What kind of modules will we "attach" to the body network that will enhance blind wayfinding? We already know some parts of the answer to these questions.
We know that vision substitution systems, like KASPA and The vOICe, are really varieties of wearable computers. They require head mounts for the cameras, ultrasonics, and computer units. These can be shrunk in size, networked, made modular, and steadily made smarter. We know that the electronics that make up hand-held units like the Miniguide, Sound Flash, and Sonic Pathfinder can be made into wearable modular elements of a larger wearable "suit." We know that hands-free, invisible tools are preferable to bulky technologies that "tie up" our hands. We know that location-based technologies require computer processors, GPS receivers, and a communications module, that these have to be carried or worn, and that they are therefore a sub-classification of wearable computing. We know that talking signs have to be activated and speech access provided; again, talking signs require wearable computing, and the hand-held units can be shrunk to wearable modules. We know that internal chips can be networked with wearable systems and that the two systems can be designed to communicate in real time. We know that a "blind wayfinding suit" could network with the smart environment, with smart traffic lights, smart ped-heads, smart signs, smart pathways, smart classrooms, and so on.
The study of cyborgs and the associated research and development with wearable computers continues at a blistering pace. MIT's Media Lab has been a leader in this field from the beginning. Research students from that MIT program have gone on to make their own contributions to the field. In particular, the leadership of Dr. Steven Mann (who some refer to as the "Father of Wearable Computing") at the University of Toronto is important, primarily because he has the interest and drive to address wayfinding disabilities; he created the idea for the Lab of Vision Empairment.
Placing computer chips into the human head is an extremely complex undertaking. Media coverage of developments in the field of artificial vision is commonly over-hyped and misleading. "Sound bite" reporting promotes false hopes and over-simplification.
On the other hand, for the first time in history we have hope that the machine/body interface will be solved (to various degrees), and that we will eventually have sophisticated medical procedures that address blindness and severe vision impairment. At the moment, we are at the pioneering stage. Important and exciting developments are in the early phases of creation.
The important idea to keep in mind is that this is a complex problem requiring very careful and sophisticated approaches. New vision chips (cortical and retinal) are often compared to the cochlear implant for the deaf. This is misleading for several reasons. Most importantly, the challenges facing vision chip designers are far more complex than the challenges faced by cochlear researchers. The retina is a multi-cellular, multi-stage neuro-processing system. It is actually part of the brain. The processing that takes place at the level of the retina is massively complicated and not entirely understood. Disease processes target regions, or cell types, of the retina, and have complicated evolutions. On a cortical level, there are over 35 centers for vision processing spread over the entire human brain. These centers have complex efferent and afferent networks: meshworks of crisscrossing and redundant neural "pathways." Foveal signals (those responsible for very fine pattern recognition), mapped in the primary cortical vision center (V1), are located in a fold inside a fold of the occipital cortex. This means that to attain higher resolution it will be necessary to "invade" the brain rather than place chips on the surface of the cortex.
Two research teams are on the verge of marketing their implants (probably as you read this, they will have appeared on the market). The Dobelle Institute has a cortical implant, and Optobionics has a retinal chip implant.
There is debate among the various research teams about the best approach to creating artificial vision. Dr. Dobelle belongs to the sector that believes cortical (brain level) implants provide the best answer. At the moment, Dr. Dobelle's team is placing the chips on the surface of the visual cortex. The implants are placed so that they stimulate both visual hemispheres of the brain.
The retinal implant teams fall into two camps. One group believes the best approach is to place the chip on the surface of the back of the eye, on the ganglion cell layer. These teams use chips that are called epi-retinal implants. The second group believes that it is best to place the chips directly in the photocell (rod and cone cell) layer of the retina. These chips are called sub-retinal implants. Teams working on epi-retinal strategies include the Harvard/MIT group, the Doheny team at the University of Southern California, and Professor Rolf Eckmiller's team at the University of Bonn. Sub-retinal teams include Optobionics, the University of Tubingen team in Germany, and the Wayne State University School of Medicine (Ligon Center) in Michigan. Cortical implant teams include the Dobelle Institute, Terry Hambrecht's group at NIH, and Professor Richard Normann's team at the University of Utah. The Ligon group, under the leadership of Dr. Patrick McAllister, is also experimenting with cortical vision chips in animal models.
The design and success of an implant depends on the situation in the eye or cortex at the time of surgery. Most of the retinal implants are (currently) targeted at RP (retinitis pigmentosa) patients. There are many kinds of RP, depending on the location and extent of genetic damage; some appear sooner in life, some later. Some forms are severe and acute; other forms are more chronic and play out over a longer time frame. All these variables affect the potential success of a chip implant. The success of a retinal implant depends on how much retina is left (that is, how much is "healthy" enough to receive a chip).
The progression of many retinal, visual tract, and visual cortex diseases cannot currently be addressed using these pioneering (retinal) implants. These ("untreatable by implant") disorders include: retinopathy of prematurity (ROP), glaucoma, diabetic eye disease, optic nerve damage, and impairments caused by vascular damage. The potential for implant success is greatest where the vision loss is discovered early and where specific cell layers or regions are involved (RP, macular degeneration from age, and Leber's Amaurosis). Cortical implants by-pass the retina and could conceivably address vision loss caused by damage to systems anterior to the occipital lobe.
In these pioneering days of chip implantation, researchers are attaining very low acuity. From a rehabilitation perspective, researchers are taking people who are blind or nearly blind and making them severely visually impaired. That is why the researchers claim that their goal is to improve the mobility of these patients. In other words, these early chip implants are wayfinding technologies. The claim is that the recipients of the (early) implants will not read or see faces well (if at all), but will be able to use shadows and gross form perception to avoid objects in their path. From a disability perspective, the rehabilitation strategies used with these implanted patients are no different from those used with blind individuals. They will still need to learn blind travel skills, and they will still need assistive technologies.
As we know in the field of orientation and mobility, blind individuals who have no secondary impairments, like most RP patients, usually have excellent mobility using non-visual travel strategies. Indoor mobility, for example, is easily mastered by blind individuals. With training and practice, blind adults can travel all over the world with minimal assistance (many do). Quite often, a totally blind individual travels better than a person with low vision. This is because there is a powerful impulse to use the vision system for navigation. A visually impaired individual will struggle to see using a visual perceptual system with poor resolving power when they could much more readily use the other senses to navigate fluidly. So, giving blind individuals severe low vision (at this pioneering stage) is a dubious accomplishment (i.e., we should be careful about proclaiming the benefits). What the implant researchers and developers can proudly claim is that they are pioneering a new source of hope. Implants are another set of developing tools to offer consumers; at the moment, these are wayfinding tools.
The same set of modular applications can be attached to any substrate. For example, we can put GPS modules on vehicles, include them as part of wearable computing suits, as hand held or backpack stand-alone systems, or as add-ons to robots. We can add entertainment modules that are any combination of radio, television, CD/DVD player, internet connection, etc. We can add communications modules, cell phones, email systems, video conferencing. We can add modules for sensory enhancement and filtering, navigation modules with obstacle detection and avoidance, memory modules that record images and sounds, speech recognition modules, expressive speech modules, face and pattern recognition modules, and so on. Each of these modular areas is following Moore's Law, so for example, communications modules are getting smaller, cheaper, and more powerful (smarter) on a one to two year cycle. That's why we have modules, so we can unplug the old ones from the substrate (vehicle, wearable suit, robot) and replace them with the latest and greatest.
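The Moore's Law pattern described above can be made concrete with a toy calculation. The sketch below is purely illustrative: the dollar figure and the 18-month halving cycle are assumptions chosen for the example, not measured data about any particular module.

```python
# Illustrative only: project how a module's cost shrinks if price/performance
# doubles (i.e., cost halves) once per cycle. The 1.5-year cycle length is an
# assumption in the spirit of Moore's Law, not a measured trend.

def projected_cost(initial_cost: float, years: float, cycle_years: float = 1.5) -> float:
    """Cost after `years`, assuming cost halves once every `cycle_years`."""
    cycles = years / cycle_years
    return initial_cost / (2 ** cycles)

if __name__ == "__main__":
    # A hypothetical $2,500 module after 3 years (two full halving cycles):
    print(projected_cost(2500, 3.0))  # prints 625.0
```

On these assumptions, a module that costs $2,500 today would cost roughly $625 three years from now, which is why a modular substrate that lets old units be unplugged and replaced pays off so quickly.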
If you apply this understanding to digital toys, you get ever smarter "robots for kids." The digital revolution is enabling dreams of ever more "real" machines. Giving dolls and toy animals life-like qualities is the future.
For this discussion, Sony's robotic dog Aibo seems the most "compatible" with the needs of blind children. Given the significance of the dog guide in the history of blind wayfinding, it "just seems right" to make the first high tech toy a small robotic guide dog. Aibo can be programmed and equipped with any of the specialized processing modules mentioned above. So, on paper anyway (at the moment), we can create numerous smart wayfinding-enabled toy Aibo guide dogs to address the developmental needs of blind children.
"Aibo" means "companion" in Japanese. It is also an acronym for "Artificial Intellience RoBot." Aibo was born in Sony's artificial creatures lab in 1999. As an experiment to see if anyone would be interested in purchasing such a creature, Sony put the dog up for sale on the internet with a price tag of $2,500 per dog. The entire first generation, 145,000 dogs were sold out in less than a year entirely through internet sales. The newest Aibos come with various prices tags depending on the number and sophistication of the modules. The high end dogs now sell for $1,500 (the price has dropped following Moore's Law) and the low end dogs sell for under $800.
Aibo can walk, run, chase a ball, wag its tail, respond to 75 voice commands, read internet messages, express emotions (happiness, sadness, fear, dislike, surprise, anger), display instincts (play, search, hunger, sleep), "see" in real time (in color, with face recognition under development), take pictures through a hidden nose camera and store them, recognize its own name (and its owner's name), plug itself in when it needs recharging, and respond to other Aibos. The dog has built-in infrared distance receptors and sensors for temperature, vibration, and acceleration.
The Japanese government is supportive of the use of these artificial creatures as helpers for people with disabilities, particularly their rapidly expanding population of elderly citizens. Sony is also aware of the potential use of Aibo as a tool for the blind, but cautions that the technology is not powerful enough to be used at the moment as a fully developed guide system for blind navigation. As a toy for blind children however, the dog has capabilities that can be exploited and evolved now.
The important ingredient for any child is play. Blind children don't have the visual stimulus that encourages "normal" play activities. Aibo could be programmed (modules developed) to encourage play activities specifically designed to help further developmental progress. For example, Aibo could easily play "hide and seek," encouraging the development of sound localization as well as movement through space. Aibo could sing kids' songs, be a talking, walking watch, teach travel routes through a house, warn about steps or stoves, "read" kids' books, use wayfinding language (landmark, masking sound, left, right, etc.), and so on.
At IIBN, we would encourage guide dog schools to take the lead in developing Aibo (or something similar). The toy dogs could be the introduction to real guide dogs when the children reach appropriate ages.
Robots are not ready to be prime-time wayfinding assistants for sophisticated blind travelers. They could, however, be ready for internships with cognitively impaired or multiply impaired individuals. Companion dogs are trained now to assist people in wheelchairs. Honda's Asimo humanoid robot (an acronym for "Advanced Step in Innovative Mobility") can already turn lights off and on, walk up and down steps, and navigate through indoor spaces. In the winter of 2003, Asimo began his television career by appearing in Honda commercials. It is not a far stretch to envision a robot helper bringing in the newspaper, pouring coffee, pushing a wheelchair, and vacuuming the rug. Asimo costs about as much as a sophisticated wheelchair.
It will be a while before automobiles will drive themselves around following voice commands, or before they can be programmed to follow routes. So it's a major stretch (bordering on irresponsible speculation) to suggest that blind individuals will drive on the nation's highways anytime soon. There are two avenues, however, that should enable advances in motorized travel for blind consumers: wheelchair travel and small scooter (Segway, three-wheeled bike) mobility. Also, small, indoor motorized toy vehicles for blind kids could be easily created (with plenty of safety issues to resolve).
Wheelchair sophistication has been improving following Moore's Law. Modern power carts are stable, have long-lasting batteries, and can be equipped with any number of control switches to allow navigation using head movements, blow switches, even eye movements. There is no reason that the standard modules listed above can't be incorporated into these robotic systems. We can build luxury cars, and we can build luxury wheelchairs. We can build GPS OnStar-equipped cars, and we can build GPS OnStar-style systems on wheelchairs. We can equip the chairs with signage activators, obstacle detectors, radar warning systems, and so on, in effect creating "blindness-specific" power vehicles. We can use the same modules and strategies to manufacture scooters, bikes, and "toy" cars.
Because movement and navigation are so basic to life, the circuits in the brain that enable wayfinding are ancient and tough. Higher brain level processing systems however are more recently evolved and are fragile. Visual pattern recognition and contrast and acuity resolution, for example, are delicate, easily damaged and impossible (at the moment) to repair. Pioneering work in biotechnology, like the early advances in chip implantation, will likely improve wayfinding capability, but not the higher visual functions. That is why we include this category in the list of technologies that have potential for improving wayfinding.
The discovery of stem cells and the subsequent application of stem cells to biological impairments is one of the greatest achievements in the history of modern medicine. Stem cells are immortal as long as they get nutrients. They are "rare and primitive" cells located in every kind of tissue. They give rise to most (maybe all) of the other cells in the body.
An example of biotechnology applied to the problem of blindness using stem cells made news in 1999. Badly scarred eyes cannot accept corneal implants, because there is no viable tissue to which the new cornea can attach. When stem cells were implanted in scar tissue, they created a living substrate for new corneas. People who had been blind for most of their lives suddenly regained usable vision. For a dramatic and thorough documentation of such a situation, see the personal notes of Mike May, the president of the Sendero Group.
Progress will steadily occur in biotechnology because acceleration in computer power will result in medical machines that are increasingly accurate and sophisticated. These machines are the bedrock of research. An example is the rapid evolution of machines that locate and cull stem cells from blood. These new tools give us pure concentrations of stem cells with which to do research and to heal. Another example is the set of machines that analyzed the human genome. We would never have solved that puzzle without the help of ever more powerful computer processors.
Dr. Turkle goes on to say that these technologies "change the sense of self" and are "extensions of the self." When we bring these sophisticated technologies to our friends and colleagues who are blind, we are asking them to address the emotional issue of "changing the sense of self." The more sophisticated and complex the technology, the more we are asking them to "add extensions to their self," to change their behaviors and their habits. This is not at all a simple issue of engineering. It is a serious psychological challenge.
The same is true for teachers; they are not emotionally neutral about the technologies. Their identities are tied to their knowledge and skill base. Waves of new technologies leave everyone weary and with a feeling of being "outside their comfort zone." Under these conditions, emotions tend to swing between rejection and anxiety. Either that, or a great deal of energy goes into ignoring the problems that these tools bring to the table. Historical circumstances are forcing teachers, like the consumers, to "change the sense of self," and "modify behaviors and habits." Nobody likes the situation of being pressured to change, even when the forces seem abstract and impersonal. So, strong emotions will accompany these technologies; it is a given within the equation. We are a step ahead if we acknowledge our mutual anxiety and move on with the task of figuring out what best to do with this sudden surge in creative power.
On a personal level, it is up to the individual to decide what they want. Technology is about tools, and tools are a matter of choice. One man's "ugly, noisy, expensive, complicated contraption" is another man's solution to a complex wayfinding challenge. All the arguments about cost, appearance, weight, size, learning curve, masking sounds, information overload, and so on, are irrelevant if an individual decides that the complexities and circumstances are worth the effort.
Need is situational. There are times when low tech or no tech solve navigation problems. In other situations, low tech is not sufficient to ensure efficiency and safety. In general, the more familiar an area, the less need for wayfinding tools. In a person's private home, or in an office at work there is often no need even for a cane. At an airport or walking through unfamiliar urban areas, more sophisticated technologies are needed if independent travel is to be maintained. The more adventurous the traveler, the more likely it is that they will seek out sophisticated technology. The more content a person is to stay at home and enjoy the familiar comforts, the less need for complex tools.
Technologies can be important for short periods of an individual's life. Sophisticated tools can be aimed at specific age groups or developmentally appropriate populations. Sometimes the same technology can be used for different purposes; perhaps as a training tool with one person, and as a primary aid for another. One wayfinding tool might be used, for example, to teach object permanence, or to hone sensory skills, or to develop echo location skills. Once these skills are learned or refined, the tool might have other functions or we might put it in a drawer until the next student arrives at the appropriate stage. For example, GPS technology might be appropriate for middle school students and older, but not for the younger kids who are developmentally immature.
For every wayfinding technology it is useful to ask the question "Within what spaces is this tool to be used?" Will it be useful in a classroom? At an intersection of two busy streets? On a rural pathway? On a residential sidewalk? In a mall? Is the technology useful in familiar settings, or is this a tool for unfamiliar spaces? The usefulness of wayfinding technology is spatially determined. It could be that it is a very important tool for getting around a mall, but totally useless (or worse) for getting across a street.
When we take into consideration developmental age, the kinds of spaces a person wants to travel through, and the tasks that the technology is addressing, we find support for the idea that we must diagnose and prescribe for one individual at a time, and that we must continually re-evaluate and adjust as that person grows older and as technologies evolve. This is increasingly the trend in modern medicine as biotechnology moves forward. Blanket medications and generalized intervention strategies will increasingly give way to specific treatments based on an individual's genetic and molecular makeup. It is also the trend in special education and in rehabilitation to prepare individual educational plans based on the unique circumstances of individual students. It makes sense to follow these leads and address the need for wayfinding technology one person at a time.
So, do we need rocket powered canes? Well, yes, Joe does, but Mike and Sue do not. Joe is a little strange, but he has the fastest cane in the west, so don't argue.
The invention of the computer chip created thousands of new professions. With each new profession, a body of knowledge was created that had to be taught to every succeeding generation. New technical languages developed. Students had to learn to understand and speak these languages. No individual could gain competence in all the new job categories. No single individual could grasp all the new knowledge and speak all the new technical languages; there was not enough time in a day or in a lifetime.
Technology generates specialists. The general practitioner cannot cope. This is the problem we face today. The mobility specialist trained to teach cane travel and orientation skills cannot be eighty different kinds of specialist. It is not possible to be an expert in environmental literacy, and a specialist in vision substitution, and a specialist in robotics, and a specialist in rehabilitating bionic cyborgs. There is not enough time in the day to understand and be an expert in the totality of what this profession is becoming. We have reached a crossroads where we have to train new kinds of professionals.
Recommendation Two: Create new institutions or restructure traditional institutions.
Revolutionary technologies undermine "old" institutional structures and demand that new institutions be created. The computer ushered in the digital age and diminished the traditions and institutions of the industrial age. That is where we stand today. We have to build new kinds of institutions, and/or restructure the ones we have. This is an opportunity for new programs to develop in the universities, an opportunity for new research and development labs to be created, an opportunity for inventors to create and market new tools, and an opportunity for established leaders in the field to reinvent themselves.
IIBN monitors the activities of agencies that have an interest in wayfinding technology. The discussion below highlights a few key organizations. It is not meant to exclude other important organizations, particularly those outside the United States.
The National Federation of the Blind has established a National Technology Center in Baltimore. NFB introduced the ideas of Raymond Kurzweil to the blindness community, they understand the complexity of accelerating technology, and they have the vision and determination to create positive change. They are also in a prime position to reach out to the rest of the blindness organizations and lead an international effort to create the future of wayfinding. It's our opinion at IIBN that consumers (NFB, ACB, WBU, and veterans groups) should steer this revolution. Consumer organizations need to take the reins of leadership, to coordinate cooperative ventures, and to articulate their vision. Powerful organizations like NFB and ACB have an opportunity to lead the entire blindness community into the new century.
The American Printing House for the Blind has quietly been undergoing an internal revolution of its own. It is an excellent example of a "traditional institution" with a long-standing reputation of leadership in the field of blindness that is redefining its role in the digital age; it is reaching out to the mobility community and addressing the flood of new wayfinding technologies. APH plays a critical role in the United States because of the quota system that allows teachers (of visually impaired children) in all the nation's schools to order technologies free (based on the number of visually impaired students). This means, for example, that if APH backs the development of GPS navigation, then mobility specialists will have a no-risk avenue for exploring and teaching with this sophisticated technology. If APH backs vision substitution, or digital obstacle detectors, or robotic dogs, then these tools will be ordered by mobility specialists and taught to generations of blind children. Look for APH to offer practitioners solutions to many of their technology problems.
Western Michigan University's Department of Blindness and Low Vision Studies is one of the world's premier centers for the study of orientation and mobility. Western is aggressively addressing the issue of wayfinding technology. They are researching smart spaces through one federal grant and are leading a coalition of universities and specialists to address the environmental literacy issue with a second federal grant. If Western Michigan's staff decides that certain technologies or institutional changes need to be supported, look to their leadership to write the grants that move the field forward. Look for Western Michigan to continue to build partnerships, to write the big grants, and to lead the way.
Steve Mann's Lab of Vision Empairment (LoVE) at the University of Toronto is just a blueprint on the drawing board at the moment. It could, however, become the center of an international effort to custom design wayfinding technologies. The blindness community lacks a high-powered, widely focused wayfinding technology institution. At IIBN, we think that LoVE, with the leadership and cooperation of Canadian (and global) consumer organizations (WBU), Western Michigan University's Department of Blindness and Low Vision Studies, the University of Toronto, CNIB, guide dog schools, and the blindness agencies of the United Kingdom and Commonwealth (whoever steps forward), could become the replacement for the famous Nottingham University Blind Mobility Research Center (England), the pioneering team that launched the entire wayfinding technology revolution in the 1960s. An important aside here is that digital "institutions" are often virtual, with web-based teams coming together long enough to solve a problem, and then dissolving. Multiple agencies could work together under the LoVE umbrella.
The United States Veterans Administration has long been a powerful force in bringing services and technologies to address blindness. They are champions of technology. They brought the workhorse of orientation and mobility, the long cane, to the world of blind navigation. The Atlanta VA, in particular, has continued to provide pioneering wayfinding technologies (with training) to veterans. Smith-Kettlewell Eye Research Institute is another VA-affiliated agency; they are well known for pioneering talking signage technology. Look for the VA to continue to be a strong leader in all areas of technology research and development.
The logical place for teaching the use of advanced wayfinding tools is at the dog guide schools. They have facilities for housing students in training, they have adequate funds to support the programs, and they have vast experience with wayfinding education. If technology continues to accelerate as Moore and Kurzweil predict, then we must have a model for lifelong re-education. We must have training centers where students can periodically return to receive upgrades, repairs, or replacements for their outdated tools, and where they can be retrained. The dog guide schools provide initial concentrated training with the guide dogs, but they never lose connection with their clients after graduation. If the dog needs attention or replacement, the schools provide the solution. The dog guide schools therefore already have the model that can address the problems we face with technology. Leader Dogs in Rochester, Michigan, for example, is taking a serious look at new wayfinding technologies. They are in the process of building a technology center and they are exploring avenues for service delivery. Look for Leader Dogs to be an example for teaching wayfinding technology using the well-established dog training model.
Recommendation Three: Form partnerships.
Institutions, like individuals, cannot cope with the flood of new technologies. Our institutional infrastructure was designed to operate in the industrial age. The digital age is much faster paced, much more fluid, much quicker to change. Our current institutions cannot cope as stand-alone entities.
The only way for complex, constantly evolving wayfinding technologies to be embraced is through massive cooperation. This has not been the way of the world in the blindness field (nor in most other fields). Huge changes in technology require huge changes in thought. Huge changes in thought and understanding require huge changes in responsibility and focus.
We can build multi-million dollar buildings, send spacecraft to the planets, spend billions of dollars on military aircraft, and pay sports heroes millions of dollars a year to throw balls around. We can do all this and more at the same time; it is more a matter of resolve than a matter of money. It is not about funding priorities. It is about collective organizing, about having it all, about working together. It is not about bickering over details. We can send a man to the moon and get him home safely. Doesn't it seem like we should be able to "solve" problems like blind navigation and "vision substitution"? At the least, there should be an all-out, focused effort to do a better job adapting and applying the technologies that sit at our feet. Cooperative planning and agency-wide teamwork are the key to "solving" wayfinding issues.
Recommendation Four: Do it all.
Let's stop debating whether this or that invention merits our blessings. There is enough of everything we need to explore all the avenues, try all the prototypes, and create reams of whatever technology we can imagine. It is an issue of leadership and willpower. Think about this analogy:
High technology wheelchairs, costing between twenty and thirty thousand dollars each, are being custom made for physically impaired individuals. These sophisticated power vehicles are designed to improve the mobility of physically impaired people. Insurance companies and Medicaid pay for these mobility tools. It is understood by clients, families, professionals, and government officials that physically impaired individuals have a right to fluid mobility in their communities. The society at large is understood to have an obligation to its disabled population to provide for access to the environment, for efficient mobility.
For wayfinding technology for the blind, there is no comparable understanding. Try to find three thousand dollars to buy a wayfinding tool. Medicaid won't pay. Insurance companies rarely pay. The blindness agencies do not have the budgets. The consumer groups and professional agencies spend more time bickering about priorities than working toward collective action and the expansion of choice. Essentially, there are zero funds allocated for wayfinding technologies to improve the mobility of blind children: thirty thousand dollars to increase the mobility of a physically impaired child compared to zero dollars to improve the mobility of a blind child. Something is ridiculously wrong here.
It is not productive to "admire the problems." The focus and energy of the blindness community should be forward thinking and all encompassing. We can do it all. We should do it all.
Recommendation Five: Custom design technologies to the individual.
We have reached a time in history where we can begin to analyze and prescribe technology that is specific to individual needs. It is the steady rise in computing power that enables this, as well as the drop in prices for the component parts of any tool. Many of the components are available at the local Radio Shack. The bottleneck in this plan is that it does not fit the standard model of product development and supply in an industrial age culture. Our business models are based on selling as many identical, mass-produced systems as possible. The educational and training models are designed to teach the use of the mass-produced tools. We have to change our institutions and habits.
There are four ingredients needed to make this change:
01. Businesses need to change their model from a product-intensive focus to a service model. They need to sell not the thing, but the services needed to upgrade the tool, training and retraining to stay competent with the technology, and tech support. They need to stop thinking of clients as one-time users and start thinking of them as life-long partners.
02. Universities need to teach their graduate students to diagnose and prescribe. They also need to think of their students as members of a life-long guild, rather than one-time "products" certified by a diploma. Education needs to be a life-long partnership. Students should pay membership fees that provide them with retraining and access to university databases.
03. School systems currently cannot cope with the onslaught of any of the modern technologies, but they would pay to send their young blind students to training centers. The dog guide schools have the facilities, the money, and the philosophy to address this issue. They already have a service model that retrains and provides for life-long membership.
04. The world wide web is a brain for the planet. Over time, the feeling that the earth has a collective brain that is greater than the sum of its parts will evolve with ever more clarity. Every field of interest is building helter-skelter data-pods scattered among the loosely affiliated institutions of their professions. The web, however, is a waiting opportunity for collective knowledge management. Whenever we are ready, we can pull together the collective wisdom and willpower of the global blindness community. The new institutions of the digital age are global.
Recommendation Six: Address the moral responsibilities.
Tools transform human cultures, but human life is about individual day-by-day experiences. Caregivers, like mobility specialists, know that there is a poetry in every moment. It may be a joy to speculate about the future, and surely it is important to look ahead and make good judgments, but the here and now cannot be left out of the equation in our rush to enthusiasm. We ignore the moral implications, the potential downside of the technology, at our peril. Along with the enthusiasm, hope, and manic development comes a responsibility to slow down and consider potentially negative personal and social implications.
The important things in life stand outside technology, like our relationships with others, our interests, our curiosity, our humor. It is the moment that must not be lost when we drift off into future speculation. Real flesh and blood human beings struggle constantly with emotional and existential realities. We are spiritual seekers. We are more than tool creators and tool users. We are larger, more complicated, and more interesting than our technologies. There must be a perspective, a balance. There must be a moral and ethical focus that balances our thirst and quest for technology.
You can't help thinking that there are important elements missing in the "Moore's Law equation." It seems like we need more symbols in the equation to stand for things like emotional IQ, the sluggishness of powerfully entrenched institutions, unexpected disasters like terrorist attacks, and the reasoned, emotionally balanced caution of those who are ethically concerned that our intellectual creations are outracing kindness and community. Moore's Law looks good on paper, but the yearly reality is full of setbacks and full of high-tech solutions to problems that don't even exist.
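To give a concrete sense of the raw arithmetic that the cultural and ethical factors above are missing from, here is a minimal sketch of the doubling curve behind Moore's Law, assuming the conventional two-year doubling period (the function name and parameters are illustrative, not from any standard library):

```python
def moores_law_growth(years, doubling_period=2.0):
    """Growth factor after `years` of steady doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# One decade of doubling every two years is 2^5 = 32x;
# two decades is 2^10 = 1024x.
print(moores_law_growth(10))  # 32.0
print(moores_law_growth(20))  # 1024.0
```

The curve itself is smooth and relentless on paper; the point of the surrounding paragraphs is that nothing in this formula accounts for institutions, setbacks, or ethics.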
The gap between the rich and poor is widening at an exponential rate. Two thirds of the world's people do not have access to a telephone. Forty million people should never have become blind; twenty million of these could still have their vision restored with cataract surgery. Meanwhile, Moore's Law marches on at an exponential rate, our machines get smarter and smarter, and a new class of individuals in the developed nations is evolving and growing, richer and healthier than any other group in history. The gap is growing at an exponentially cruel rate.
We are still at a pioneering stage with wayfinding tools. Computer processing power has just arrived with the potential to "solve" individual wayfinding issues. We have yet to make a cheap, invisible, wayfinding-smart wearable computer system. We have yet to perfect implantable chips. We have yet to create smart environments. We have yet to create wayfinding robotic tools. Our community has a moral, ethical, and professional responsibility to use the sophisticated technologies that only recently became available. "Wayfinding technology" is the answer to the question: "What shall we do now with our latest and greatest computers?"