Hacking Kinect

If you are really good at tech, creative, and want to solve a real problem for people, you can find that building a product doesn't have to cost you a fortune.

In 2011, several different groups of people were prototyping navigation assistants for visually impaired people using a hacked Xbox Kinect.

The Kinect uses infrared sensors to detect motion and measure the distance to objects.

You can read more about how it works here: http://www.jameco.com/jameco/workshop/howitworks/xboxkinect.html.

And this tech turned out to be really handy for detecting obstacles for blind people: all you need is a computer to process the depth data and a signalling system to warn the user.

Audio is not the best channel for guiding blind people during navigation. It ties up the whole sense for a single task, so people lose the ability to pick up other information about their surroundings, which creates more problems than it solves.

So one talented team of enthusiasts took the Kinect concept even further and created a belly-mounted tactile matrix to warn the user about approaching obstacles.
“Each tactile pixel is disassembled electromagnetic relay much like this with a curved paper-clip attached. Voltage is applied periodically so that pixel prods person several times a second. The more frequency is, the closer is the object under that pixel. Depth perception made that way is not very accurate, but even non-trained person could easily distinguish ‘very close’ from ‘about one meter’ from ‘about three metres’.”
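A minimal sketch of that frequency encoding, assuming a linear mapping from depth to prod rate (the function name, thresholds and rates below are my own illustration, not the team's actual code):

```python
def pulse_frequency_hz(depth_m, min_depth=0.4, max_depth=3.0,
                       min_hz=1.0, max_hz=10.0):
    """Map a depth reading (metres) to a prod frequency: closer -> faster.

    All limits are illustrative guesses, not values from the project.
    """
    if depth_m <= min_depth:
        return max_hz          # very close: prod as fast as possible
    if depth_m >= max_depth:
        return 0.0             # nothing close enough to signal
    # Linear interpolation between the near and far limits.
    t = (depth_m - min_depth) / (max_depth - min_depth)
    return max_hz - t * (max_hz - min_hz)

# The coarse three-level perception the quote describes:
for d in (0.5, 1.0, 3.5):
    print(f"{d} m -> {pulse_frequency_hz(d):.1f} Hz")
```

Even a rough mapping like this is enough for the "very close / about one metre / about three metres" distinctions mentioned above; the user's skin does the rest.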

The project won second place at the Russian Imagine Cup in 2011.

More info on these projects:

Teaching machines to see and recognize

Tech giants Facebook and Google are working hard on improving machine-learning capabilities. They want machines to understand pictures the way humans do.


Their research serves different purposes: Google uses this technology to power its Photos app, while Facebook is improving the accessibility of its platform.

Both technologies can come in handy for visually impaired people.

Here is the latest example from Facebook:

Both companies use neural networks. And what's interesting is that as the technology evolves, blind people will be able to get a real-time audio description of their surroundings, using a head-mounted camera for example. Of course, pushing everything that's near you through audio doesn't make sense, but using an assistant and asking it questions can do the job.

Imagine this concept, but using the camera on your head instead of a static image from your Facebook feed.

Using a dash cam and image analysis, your smartphone will be able to predict obstacles while you move: whether there is a puddle or a pole ahead, so you can turn to avoid it.
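As a toy illustration of that idea (entirely my own sketch, not any real app's logic): suppose the image analysis has already estimated the nearest-obstacle distance for the left, centre and right sectors of the frame, and we just need to pick which way to steer.

```python
def steer(left_m, centre_m, right_m, safe_m=2.0):
    """Suggest a direction given nearest-obstacle distances (metres)
    for three sectors of the camera frame.

    Purely illustrative: the sectoring and the 2 m safety margin
    are assumptions, not from any real navigation app.
    """
    if centre_m >= safe_m:
        return "straight"      # path ahead is clear
    if max(left_m, right_m) < safe_m:
        return "stop"          # boxed in on all sides
    # Turn toward whichever side has more room.
    return "turn left" if left_m > right_m else "turn right"

print(steer(3.0, 2.5, 3.0))   # clear path ahead
print(steer(3.0, 0.8, 1.2))   # pole ahead, more room on the left
```

The hard part, of course, is the image analysis that produces those three distances; the decision layer on top of it can stay this simple.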

You can read more about what Facebook is doing on image recognition here:

and Google here:

Hello Folks

My name is Sergiy Voronov. I am a UX/UI guy working for the UK startup www.clinpal.com. In this blog I will be writing about the topic that excites me most right now: user experience for blind people.

Nowadays, an estimated 285 million people worldwide are visually impaired: 39 million are blind and 246 million have low vision.

Thanks to the tech boom (smartphones with GPS and audio, Bluetooth beacons, Braille smartwatches, intelligent personal assistants and other cool gadgets), the lives of blind people can become a little easier.