Big news from the tech giants this week.
Photos play a great role in social networks' growth and engagement, so no wonder Facebook and Twitter are trying to improve the experience for millions of blind and visually impaired people.
Both Facebook and Twitter announced technology that will describe photos to blind users, though they took completely different approaches.
Facebook uses artificial intelligence to understand what's in a photo and describe it in words. AI will cover most of the content, and the automated process adds no friction for real users, so Facebook is not asking people to describe their photos for blind users, as Twitter does.
With user-generated descriptions, the quality and relevance of the text will be better than AI's for now, but given the huge amount of photo content Facebook can feed to its AI, the situation could change massively.
Microsoft also uses AI to help visually impaired and blind people, but with more everyday problems. People can use its new app as a seeing eye: to get an idea of what's going on around them, read a restaurant menu, have nearby people described, and so on.
If you are an Android developer eager to help visually impaired people, and you want your app to be easy to use for everyone, including people with visual or other limitations, Google has made a step forward to help you.
Google has built a free app that can go through every screen of your application and detect elements, like text or buttons, that can be adjusted to be easier to interact with.
A small reminder: the Android accessibility guidelines can be found here – http://developer.android.com/intl/ru/guide/topics/ui/accessibility/index.html
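As a minimal illustration of what those guidelines ask for, here is a sketch of an Android layout fragment that labels an icon-only button with `android:contentDescription`, so screen readers like TalkBack can announce it (the id, drawable, and string names here are hypothetical, not from any real project):

```xml
<!-- Hypothetical layout fragment: an image-only button with no visible text.
     contentDescription gives screen readers something to announce. -->
<ImageButton
    android:id="@+id/share_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/ic_share"
    android:contentDescription="@string/share_photo_description" />
```

Purely decorative images should instead set `android:contentDescription="@null"`, so screen readers skip them rather than reading out filler text.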