We are living in an age of technological disruption, and nearly every business and individual is taking advantage of it. Yet there are millions of people among us who are unable to benefit from these changes: blind people. What has technology done for them so far? Not much. Why? Perhaps it is the difficulty of creating and finding technologies for them, perhaps the market is too small, or perhaps there is simply a lack of inspiration to make a difference in these people's lives; we don't know the answer. Here we present a concept that can help blind people see (hear) things and navigate their way.
Project “Drishti” is currently a concept, presented here to raise awareness and encourage future development. Drishti is a Sanskrit word meaning “sight”.
Google launched its Glass project a few years ago, and there has been much buzz about what the platform can do. But most Glass applications simply add an augmented-reality layer of visual data meant to be seen, and Google Glass still seems far from real-world usage, perhaps because we have not yet found its truly compelling benefits or applications.
One big potential application would be helping the visually impaired navigate their daily lives. Thanks to advances in computer vision, an application can now describe the contents of a picture in natural language. Research at Stanford on Deep Visual-Semantic Alignments for Generating Image Descriptions by Andrej Karpathy and Li Fei-Fei laid the groundwork in this area.
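To make the idea concrete, here is a minimal sketch of automatic image description using a pretrained captioning model. The `image-to-text` pipeline and the model name `nlpconnect/vit-gpt2-image-captioning` are assumptions chosen for illustration; any modern captioning model in the spirit of the Karpathy and Fei-Fei work could stand in for it.

```python
# Sketch: describe a single image with a pretrained captioning model.
# The model name below is an assumption for illustration, not part of Drishti.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

def describe(image_path: str) -> str:
    """Return a one-sentence natural-language description of an image."""
    result = captioner(image_path)          # list of {"generated_text": ...}
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(describe("street_scene.jpg"))     # e.g. "a man riding a bike down a street"
```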
Now, if we combine this technology with Google Glass and extend it to describe frames from a live video stream in real time, we get a real-time assistant for blind people.
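The loop below sketches how such an assistant could work under stated assumptions: the webcam opened with `cv2.VideoCapture(0)` stands in for the Glass camera stream, and `pyttsx3` stands in for the Glass bone-conduction speaker. This is an illustrative sketch of the concept, not an actual Glass implementation.

```python
# Sketch: real-time assistant loop. Grab frames from a camera, caption every
# Nth frame, and speak the description aloud. Camera and speaker here are
# stand-ins for the Google Glass hardware described in the article.
import cv2
import pyttsx3
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
speaker = pyttsx3.init()

CAPTION_EVERY_N_FRAMES = 60  # roughly one description every two seconds at 30 fps

def run_assistant():
    camera = cv2.VideoCapture(0)  # stand-in for the Glass video stream
    frame_count = 0
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            frame_count += 1
            if frame_count % CAPTION_EVERY_N_FRAMES != 0:
                continue
            # OpenCV delivers BGR arrays; the captioner expects an RGB image.
            image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            description = captioner(image)[0]["generated_text"]
            speaker.say(description)
            speaker.runAndWait()  # read the scene description aloud
    finally:
        camera.release()

if __name__ == "__main__":
    run_assistant()
```

On real hardware the captioning model would most likely run on a paired phone or in the cloud rather than on the Glass itself, since the device has limited battery and compute.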
There are earlier solutions in this field, such as services that connect you to a human assistant who can see through your Google Glass camera, but they are quite expensive and not feasible at large scale.
The advantage of the concept described above is that the only cost is the hardware; the software can be open source so that anyone can enhance it. Eventually, hardware prices will also drop, which will make Google Glass affordable in underdeveloped and developing countries.
Google Glass would then be a tool that enables the blind to see through audible means. There are existing technologies and applications that use the same principle, although they have limitations. For instance, the vOICe app for Android describes things around you through speech, trying to identify objects and their proximity through the Android device's camera.
However, the limitation is that the camera actually needs to “see” the environment, and you need to wear earphones. That’s not exactly a practical solution. Firstly, it’s not hands-free, unless you can mount your smartphone on your body or clothing. Secondly, earphones can be quite cumbersome. Here’s where Google Glass comes into play.
We know that this new development will not exactly restore vision, although it’s a good substitute until that time in the distant future when we can directly interface our devices with our brains.