Author: Prof. C. Sivaprakash
Date of Publication: 7th June 2016
Abstract: Context-awareness is a critical aspect of safe navigation, especially for blind and visually impaired users in unfamiliar environments. Existing mobile devices for context-aware navigation fall short in many cases because they depend on specific infrastructure and have limited access to resources that could provide a wealth of contextual clues. In this work, we propose a mobile-cloud collaborative approach to context-aware navigation that exploits both the computational power of resources made available by cloud computing providers and the wealth of location-specific resources available on the Internet to provide maximal context-awareness. The proposed system architecture also has the advantages of being extensible and having minimal reliance on infrastructure, thus allowing for wide usability. A traffic-light detector was developed as an initial application component of the proposed system, and experiments were performed to test its suitability for the real-time nature of the problem.