Using an NI 1762 Smart Camera and LabVIEW to Develop a Wearable Navigation System for the Visually Impaired

"The NI vision software included important image processing and analysis functions for our system including edge detection, image segmentation, object classification, and pattern matching. These optimized functions were accurate and very useful in developing this system."

- M Manjunatha, IIT Kharagpur

The Challenge:
Developing a vision-based electronic navigation system that helps the visually impaired better navigate their environment by classifying and recognizing living, nonliving, moving, and other material objects within a range of 5 m; calculating the distance to detected objects and planning navigation paths accordingly; and interacting with the user through speech messages.

The Solution:
Using an NI 1762 Smart Camera and NI LabVIEW software to implement object identification, classification, and understanding, along with an ultrasonic sensor array to measure object distance. The Smart Camera and ultrasonic sensor array outputs are integrated and mapped to relevant speech messages stored in APR6016 flash memory.

Author(s):
M Manjunatha - IIT Kharagpur
A Kumar - IIT Kharagpur
B Sripad - IIT Kharagpur
J Mukhopadhyay - IIT Kharagpur
A K Majumdar - IIT Kharagpur

Project Description

For decades, the visually impaired have used navigational aids such as canes and guide dogs to deal with mobility challenges in their daily lives. With the development of modern technology, many different types of navigational aids, commonly known as electronic travel aids (ETAs), are now available to assist the visually impaired. A user-friendly ETA must be small, lightweight, portable, and reliable, and have simple controls. Because the visually impaired cannot see a display panel or control buttons, simple control is extremely important. Figure 1 shows the block diagram of this wearable navigation system.

Living and Nonliving Object Detection

The resolution of the NI 1762 Smart Camera is 640 x 480 pixels. To address processing speed and memory constraints, we resize the images to 320 x 240 pixels. In total, we classified 30 objects with three different views of each. The system can successfully detect human presence, chairs, tables, beds, television sets, refrigerators, doors, open doors, cupboards, and telephones. Even when a human face is not in the camera's field of view, the system can still detect a human's presence.
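The NI Vision functions themselves are configured graphically in Vision Builder AI, so they have no textual form to show here. The following Python sketch, using OpenCV as a stand-in for the NI Vision pattern-matching functions, illustrates the resize-then-match flow described above; the template dictionary and match threshold are illustrative assumptions, not values from the actual system.

```python
# Illustrative analogue of the classification pipeline described above.
# The real system runs NI Vision / Vision Builder AI on the Smart Camera;
# OpenCV stands in for those functions here.
import cv2

def classify_frame(frame_640x480, templates, match_threshold=0.8):
    """Resize a 640x480 frame to 320x240 and run pattern matching
    against stored object templates (30 objects x 3 views)."""
    small = cv2.resize(frame_640x480, (320, 240), interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

    best_label, best_score = None, 0.0
    # templates: e.g. {"chair": [view1, view2, view3], ...} as grayscale images
    for label, views in templates.items():
        for tmpl in views:
            result = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED)
            _, score, _, _ = cv2.minMaxLoc(result)
            if score > best_score:
                best_label, best_score = label, score

    return best_label if best_score >= match_threshold else None
```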

We developed the entire application with LabVIEW and the NI 1762 Smart Camera, using NI Vision Builder for Automated Inspection (Vision Builder AI) to develop the application algorithm, which was a very interesting and enjoyable experience. LabVIEW and the NI Smart Camera offer very consistent performance, and development and testing were very convenient on the NI platform. The high-speed PowerPC processor, 128 MB of onboard memory, controllable acquisition frame rates, and output options were key features in developing this wearable embedded navigation system. After an object is successfully detected, it is classified and processed, and a particular sequence is generated at the output pins by the Smart Camera's embedded processor. This sequence is integrated with further circuitry to convey the detected object to the user.
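The article does not specify how object classes are encoded on the output pins, so the following is a hedged sketch: each of the 30 object classes is assigned a small binary code that is driven onto a few digital output lines. The pin count and code assignments below are assumptions for illustration only.

```python
# Hedged sketch: how a detected object class might be encoded as a digital
# sequence on the Smart Camera's output pins. The 5-pin width and the
# object-to-code mapping are assumptions, not the system's actual encoding.
OBJECT_CODES = {
    "human": 0b00001,
    "chair": 0b00010,
    "table": 0b00011,
    "door":  0b00100,
    # ... one code per classified object (30 total)
}

def object_to_pin_states(label, num_pins=5):
    """Return per-pin digital levels (LSB first) for a detected object."""
    code = OBJECT_CODES.get(label, 0)            # 0 = nothing detected
    return [(code >> pin) & 1 for pin in range(num_pins)]

print(object_to_pin_states("chair"))             # [0, 1, 0, 0, 0]
```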

Ultrasonic-Sensor-Based Distance Measurements

Ultrasonic sensors detect obstacles in the path of the visually impaired person. The system has four ultrasonic sensors mounted on a customized waist belt: two on the front, one on the left side, and one on the right side. Using the ultrasonic sensors and an AT89S52 microcontroller, the system calculates the dynamic distance between the user and an object and can detect objects within 5 m in any direction. It announces the calculated real-time distance in meters or centimeters through prerecorded verbal messages, which can be in any language, as shown in Table 1.
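As a concrete illustration of the distance measurement: an ultrasonic ranger times the echo's round trip, and the distance follows from the speed of sound. The sketch below shows the arithmetic only; the actual timing routine on the AT89S52 is not described in the article, so the constants and example timing are assumptions.

```python
# Minimal sketch of the ultrasonic distance calculation. Distance is half
# the echo round-trip time multiplied by the speed of sound
# (~343 m/s in air at 20 degrees C).
SPEED_OF_SOUND_CM_PER_US = 0.0343   # centimeters per microsecond

def echo_to_distance_cm(round_trip_us):
    """Convert a measured echo round-trip time (microseconds) to distance (cm)."""
    return round_trip_us * SPEED_OF_SOUND_CM_PER_US / 2.0

# Example: a 5.8 ms round trip corresponds to roughly 1 m.
print(f"{echo_to_distance_cm(5800):.1f} cm")    # ~99.5 cm
```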

Table 1. Formal Distance Scaling with Speech Messages

Condition    Distance in Centimeters    Speech Message
1            Less than 70 cm            Object is very close
2            70 cm to 99 cm             Object is at 1 meter distance
3            100 cm to 199 cm           Object is at 2 meter distance
4            200 cm to 299 cm           Object is at 3 meter distance
5            300 cm to 399 cm           Object is at 4 meter distance
6            400 cm to 499 cm           Object is at 5 meter distance
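The scaling in Table 1 amounts to quantizing the measured range into bands and picking the matching message. A minimal sketch of that mapping, with the band edges taken directly from the table:

```python
# Distance-to-message scaling from Table 1: quantize the measured range
# into the bands above and return the corresponding speech message.
def distance_to_message(distance_cm):
    if distance_cm < 70:
        return "Object is very close"
    if distance_cm < 500:
        meters = distance_cm // 100 + 1          # 70-99 -> 1 m, 100-199 -> 2 m, ...
        return f"Object is at {meters} meter distance"
    return None                                  # beyond the 5 m detection range

print(distance_to_message(85))    # Object is at 1 meter distance
print(distance_to_message(250))   # Object is at 3 meter distance
```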

Speech Warning Messages for Conveying Detected Conditions to the Subject

Many other systems use vibration arrays, buzzer-based audio cues, or text-to-speech conversion to announce detected conditions to the subject. This system instead uses prerecorded speech messages stored in APR6016 audio record and playback flash memory, which holds up to 64 messages of up to 7.5 seconds each; the number of messages can be increased by shortening each message. The AT89S52 microcontroller processes the real-time data collected by the ultrasonic sensor array; based on the combined NI 1762 Smart Camera and ultrasonic sensor outputs, the system makes the correct decision, invokes the relevant message from flash memory, and conveys it to the subject through headphones.
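The article does not spell out the decision rule that combines the camera and sensor outputs, so the following sketch assumes a simple priority scheme: a very close obstacle preempts everything, a recognized object comes next, and a plain distance announcement is the fallback. The message slot numbers and the object ID table are hypothetical.

```python
# Hedged sketch of the decision logic on the AT89S52. Slot numbers, the
# priority rule, and the object ID table are assumptions for illustration.
OBJECT_IDS = {"human": 0, "chair": 1, "table": 2, "door": 3}  # hypothetical subset

def select_message_slot(object_label, sensor_distances_cm):
    """Pick an APR6016 message slot (0-63) from camera + ultrasonic data."""
    nearest = min(sensor_distances_cm)
    if nearest < 70:
        return 0                                 # slot 0: "Object is very close"
    if object_label in OBJECT_IDS:
        return 10 + OBJECT_IDS[object_label]     # slots 10+: per-object messages
    return 1 + min(int(nearest // 100), 4)       # slots 1-5: distance announcements

print(select_message_slot("chair", [320, 450, 210, 480]))    # 11
```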

The Flexibility to Use Any Language for Speech Warning Messages

In prior speech-assisted navigation systems, many researchers used text-to-speech conversion, which rendered the text only in English. Because this system stores prerecorded speech messages in APR6016 flash memory, the speech warning messages may be recorded in any language. The system also offers a simple mechanism for recording and storing these messages.

Performance Analysis

We developed a prototype and tested it on blindfolded people. To evaluate the performance of this vision-based ETA, we tested the system on subjects who had been trained to use the device as well as on novice users. We carried out a total of eight tests, both inside and outside the laboratory environment, on three blindfolded subjects: two trained and one novice. Each blindfolded subject was asked to walk through a corridor in which different obstacles were placed within a 10 m range. During each experiment, we recorded the user's walking motion, measured the time taken to successfully walk through the obstacles, and calculated the travel speed, as depicted in Table 2.
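Travel speed here is simply course distance divided by elapsed time. The sketch below assumes a 10 m course, matching the obstacle range mentioned above; the article reports only the resulting speeds, so the example timing is an assumption.

```python
# Small sketch of the travel-speed calculation behind Table 2.
def travel_speed(course_length_m, elapsed_s):
    """Average walking speed over the obstacle course (m/s)."""
    return course_length_m / elapsed_s

# e.g. covering an assumed 10 m course in about 23 s gives ~0.43 m/s (cf. Test 1)
print(f"{travel_speed(10.0, 23.3):.2f} m/s")    # 0.43
```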

Table 2. Travel Speed for Each Test

Test      Subject    Obstacles    Human Presence    Cleared Obstacles    Travel Speed (m/s)
Test 1    Novice     7            Yes               7                    0.43
Test 2    Novice     7            No                6                    0.34
Test 3    Novice     9            Yes               9                    0.49
Test 4    Novice     9            No                9                    0.54
Test 5    Trained    7            Yes               7                    0.71
Test 6    Trained    7            No                7                    0.79
Test 7    Trained    9            Yes               9                    0.75
Test 8    Trained    9            No                9                    0.82

Benefits of Using NI Products

The NI vision software included important image processing and analysis functions for our system including edge detection, image segmentation, object classification, and pattern matching. These optimized functions were accurate and very useful in developing this system.

Author Information:
M Manjunatha
IIT Kharagpur
School of Medical Science & Technology, Department of Computer Science & Engineering, Indian Institute of Technology Kharagpur
Kharagpur, India
mmaha2@smst.iitkgp.ernet.in
