
At SignAR, our goal is to break down barriers for non-verbal communicators.
We believe everyone has something to say, and that opportunities should be open to all.
Built with
Augmented Reality / Unity / Oculus / Leap Motion / C# / Adobe Illustrator / Interviews
Video
Inspiration
We were inspired to use augmented reality to create an experience that lets people who are
deaf or speech-impaired communicate through sound and text.
SignAR is a next-generation sign-language application,
giving non-verbal communicators new tools to communicate.
*This project was built at the 2020 MIT XR hackathon, “MIT Reality Hack”.
Functions
How it works



1. Translate the user’s sign language
Gesture → Sound
2. Translate the conversation partner’s voice
Voice → Text
3. Hotkeys
Customize vocabulary and phrases
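
The three functions above can be sketched as a single pipeline. This is a minimal, hypothetical illustration in Python; the actual project runs in Unity/C# with Leap Motion hand tracking, and names such as `GESTURE_VOCABULARY`, `translate_gesture`, and `bind_hotkey` are ours, not the project’s.

```python
# Illustrative sketch of SignAR's three functions (not the project's real code).

# 1. Gesture -> Sound: map a recognized gesture label to a phrase
#    that would then be spoken aloud via text-to-speech.
GESTURE_VOCABULARY = {
    "wave_open_palm": "Hello!",
    "thumbs_up": "Yes, I agree.",
}

def translate_gesture(gesture_label: str) -> str:
    """Return the phrase to speak for a recognized gesture label."""
    return GESTURE_VOCABULARY.get(gesture_label, "[unknown gesture]")

# 2. Voice -> Text: in the real app, the partner's speech would be sent
#    to a speech-to-text service and the transcript shown in the headset.
def display_transcript(transcript: str) -> str:
    """Format a speech-to-text result for on-screen display."""
    return f"Partner: {transcript}"

# 3. Hotkeys: let the user bind custom phrases to quick-access slots.
def bind_hotkey(hotkeys: dict, key: str, phrase: str) -> None:
    hotkeys[key] = phrase

hotkeys = {}
bind_hotkey(hotkeys, "1", "Nice to meet you, I'm using SignAR.")
print(translate_gesture("wave_open_palm"))
print(display_transcript("How are you today?"))
print(hotkeys["1"])
```

In the Unity version, the gesture label would come from a Leap Motion recognizer and the spoken phrase from an audio source, but the lookup-and-speak structure is the same.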

This tool could be a significant product for non-verbal communicators, both locally and globally.
Exploring IBM Watson integration would let machine learning expand the vocabulary, scaling the potential for meaningful communication around the world.
Future Features
Integrate IBM Watson and leverage machine learning to rapidly build out a robust vocabulary database.
Integrate an existing open-source voice-to-gesture project from a past MIT hackathon, effectively closing the loop on meaningful communication for non-verbal communicators.
Develop directional audio functionality, allowing for a presentation mode or for more private conversations to take place.