Adaptive and Passive Non-Visual Driver Assistance Technologies for the Blind Driver Challenge®

Date
2012-04-30
Publisher
Virginia Tech
Abstract

This work proposes a series of driver assistance technologies that enable blind persons to safely and independently operate an automobile on standard public roads. Such technology could additionally benefit sighted drivers by augmenting vision with suggestive cues during normal and low-visibility driving conditions. This work presents a non-visual human-computer interface system with passive and adaptive control software to realize this type of driver assistance technology. The research and development behind this work was made possible through the Blind Driver Challenge® initiative taken by the National Federation of the Blind.

The instructional technologies proposed in this work enable blind drivers to operate an automobile by providing steering wheel angle and speed cues to the driver through non-visual channels. This paradigm imposes four principal functionality requirements: Perception, Motion Planning, Reference Transformations, and Communication. This work focuses on the Reference Transformation and Communication requirements, which convert motion-planning trajectories into a series of non-visual stimuli that can be communicated to the human driver.
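As a rough illustration of the Reference Transformation step, the sketch below maps one planned-trajectory sample (path curvature and planned speed) to the two quantities the interfaces communicate: a steering-wheel angle cue and a speed cue. The kinematic bicycle-model geometry, the parameter values, and all names here are illustrative assumptions, not the design described in the thesis.

```python
import math

WHEELBASE_M = 2.7    # assumed vehicle wheelbase
STEER_RATIO = 16.0   # assumed steering-wheel-to-road-wheel ratio

def trajectory_to_cues(curvature_1pm, planned_speed_mps):
    """Map a trajectory sample (path curvature in 1/m, planned speed in m/s)
    to (steering-wheel angle in degrees, speed command in m/s)."""
    # Kinematic bicycle model: road-wheel angle = atan(wheelbase * curvature).
    road_wheel_rad = math.atan(WHEELBASE_M * curvature_1pm)
    steering_wheel_deg = math.degrees(road_wheel_rad) * STEER_RATIO
    return steering_wheel_deg, planned_speed_mps

# A gentle left-to-right curve sampled from a planned trajectory.
angle, speed = trajectory_to_cues(curvature_1pm=0.02, planned_speed_mps=8.0)
```

Whatever the actual transformation, the essential point is the same: downstream of motion planning, the reference signals must be expressed in driver-facing units (steering-wheel angle, target speed) before they can be encoded as stimuli.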

This work proposes two separate algorithms to perform the necessary reference transformations described above. The first algorithm, called the Passive Non-Visual Interface Driver, converts the planned trajectory data into a form that can be understood and reliably interacted with by the blind driver. This passive algorithm performs the transformations through a method that is independent of the driver. The second algorithm, called the Adaptive Non-Visual Interface Driver, performs similar trajectory data conversions through methods that adapt to each particular driver. This algorithm uses Model Predictive Control supplemented with Artificial Neural Network driver models to generate non-visual stimuli that are predicted to induce optimal performance from the driver. The driver models are trained online and in real-time with a rapid training approach to continually adapt to changes in the driver's dynamics over time.
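As a rough illustration of this adaptive loop, the sketch below pairs a tiny neural-network driver model, trained online one sample at a time, with a one-step predictive search over candidate cue intensities: at each step the controller picks the cue whose predicted driver response best matches the desired steering change, then updates the model from the driver's actual response. The network architecture, learning rule, candidate set, and the simulated linear driver are all illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyDriverModel:
    """One-hidden-layer network: cue intensity -> predicted steering response."""
    def __init__(self, hidden=8, lr=0.05):
        self.w1 = rng.normal(0, 0.5, (hidden, 1))
        self.b1 = np.zeros((hidden, 1))
        self.w2 = rng.normal(0, 0.5, (1, hidden))
        self.b2 = np.zeros((1, 1))
        self.lr = lr

    def predict(self, cue):
        h = np.tanh(self.w1 * cue + self.b1)           # hidden activations
        return (self.w2 @ h + self.b2).item()

    def train_step(self, cue, observed_response):
        # One stochastic-gradient step on squared prediction error,
        # standing in for the rapid online training described above.
        h = np.tanh(self.w1 * cue + self.b1)
        err = (self.w2 @ h + self.b2) - observed_response
        dh = (self.w2.T * err) * (1.0 - h ** 2)        # backprop through tanh
        self.w2 -= self.lr * err * h.T
        self.b2 -= self.lr * err
        self.w1 -= self.lr * dh * cue
        self.b1 -= self.lr * dh
        return (err ** 2).item()

def choose_cue(model, target_response, candidates):
    """One-step predictive control by exhaustive search: pick the cue whose
    predicted driver response is closest to the target steering change."""
    return min(candidates, key=lambda c: (model.predict(c) - target_response) ** 2)

# Simulated driver who (unknown to the model) responds roughly linearly.
true_gain = 0.8
model = TinyDriverModel()
candidates = np.linspace(-1.0, 1.0, 21)

mses = []
for step in range(300):
    target = rng.uniform(-0.5, 0.5)                    # desired steering change
    cue = choose_cue(model, target, candidates)
    response = true_gain * cue + rng.normal(0, 0.01)   # driver's actual response
    mses.append(model.train_step(cue, response))
```

The thesis names Quasi-Newton optimization among its methods; exhaustive search over a coarse candidate grid stands in for the stimulus optimization here only to keep the sketch short, and a single-step horizon stands in for a full MPC horizon.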

The communication of the calculated non-visual stimuli is subsequently performed through a Non-Visual Interface System proposed by this work. This system comprises two non-visual human-computer interfaces that communicate driving information through haptic stimuli. The DriveGrip interface is a pair of vibro-tactile gloves that communicate steering information through the driver's hands and fingers. The SpeedStrip interface is a vibro-tactile cushion fitted to the driver's seat that communicates speed information through the driver's legs and back. The two interfaces work simultaneously to provide a continuous stream of directions to the driver as he or she navigates the vehicle.

Keywords
Real-Time Neural Network Driver Modeling, Driver Assistive Technologies, Model Predictive Control, Quasi-Newton Optimization, Non-Visual Human Computer Interfaces