Signs of the Times: AI Makes Deaf Communication Seamless
Welcome to this week’s Deep-fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts an in-depth analysis of a cutting-edge AI development. Today, our dive is about SLAIT. We hope you enjoy!

Have you heard of talk-to-text? Well, SLAIT has introduced sign-to-text, a remarkable step forward for the deaf community.
There is a massive communication barrier in the world that is not often talked about. However, artificial intelligence (AI) might just be here to break it down. Over 466 million people in the world struggle with hearing loss. In the United States alone, that number is 48 million. However, only 2 million people in the U.S. can speak American Sign Language (ASL). This creates a major issue for those who struggle with hearing loss, limiting the number of people they are able to communicate with.
A large part of communication is expressed in our ability to read body language, inflection, and facial expressions. In fact, these factors can massively impact the meaning of words and allow us to communicate with others on a deeper, more impactful and intimate level. Unfortunately, individuals who struggle with hearing loss often miss out on this deeper level of communication. They are left communicating via written words or only with people who can speak sign language, which alienates them from society and can cause them to miss out on valuable communications and connections in life.
WHAT IS SLAIT?
SLAIT is a real-time video service that can be used to aid conversations between a deaf and a hearing individual. Using this platform, both parties can see each other on screen and communicate in their natural way, whether vocally or by signing, making for seamless conversation that preserves the deeper elements of communication.
The platform works in a straightforward way. When the hearing person speaks, what they say is translated into text for the deaf person to read. When the deaf person signs, what they are signing appears in text form for the hearing person to read, all without losing the facial expressions or body language that are part of communication. This allows individuals to connect with the enthusiasm and emotions involved in normal conversation.

WHO ARE THE MASTERMINDS BEHIND THE PROJECT?
Evgeny Fomin is the CEO and co-founder of SLAIT. He has more than 10 years of experience in marketing, digital product development, and design. Fomin is joined by the other co-founder, Antonio Domenech, an Automatic Control and Industrial Electronic Engineer. The company’s deaf advocate for sign language is William G. Vicars, also known as Dr. Bill, a Professor of Sign Language and Deaf Studies at California State University.
Fomin met Domenech in Barcelona three years ago, where Domenech was showcasing a glove prototype that could translate the Catalan Sign Language alphabet. When the event concluded, Fomin approached Domenech and arranged to meet the following day, where the two decided to join forces to create an AI-powered sign-to-speech translator, which would become SLAIT. After a few months of hard work on SLAIT, Fomin and Domenech agreed they needed the expertise of a sign language specialist to fine-tune the project. Given his expertise in the field, they reached out to Dr. Bill, and the rest is history.
HOW DOES IT WORK?
SLAIT’s core is a trained neural network model. The transcription technology uses AI-driven video detection via MediaPipe to identify key landmarks, tracking arm, elbow, wrist, and facial movements to interpret which sign is being performed. This landmark data is fed into a model trained on ASL, which transcribes the signs into text.
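To make the landmarks-to-text idea concrete, here is a minimal sketch of that kind of pipeline. It is not SLAIT’s actual code: the landmark frames, the `normalize` step, and the tiny nearest-centroid classifier (standing in for SLAIT’s trained neural network) are all hypothetical, and a real system would extract the landmark coordinates from video with a tool such as MediaPipe.

```python
import numpy as np

# Hypothetical sign-to-text sketch: each video frame becomes an array of
# (x, y) landmark coordinates; a sequence of frames is classified as a sign.
# A nearest-centroid matcher stands in for a trained neural network here.

def normalize(frames: np.ndarray) -> np.ndarray:
    """Center each frame of landmarks on its mean and scale to unit size,
    so the classifier ignores where the signer stands and how large they
    appear in the camera view."""
    centered = frames - frames.mean(axis=1, keepdims=True)
    scale = np.linalg.norm(centered, axis=(1, 2), keepdims=True)
    return centered / np.where(scale == 0, 1, scale)

def classify(frames: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Return the label of the stored template closest to the observed
    (time-averaged) landmark pose."""
    pose = normalize(frames).mean(axis=0)  # average pose over the clip

    def dist(label: str) -> float:
        template_pose = normalize(templates[label]).mean(axis=0)
        return float(np.linalg.norm(pose - template_pose))

    return min(templates, key=dist)

# Toy "signs": 5 frames x 4 landmarks x 2 coordinates each.
rng = np.random.default_rng(0)
hello = rng.normal(size=(5, 4, 2))
thanks = rng.normal(size=(5, 4, 2))
templates = {"HELLO": hello, "THANKS": thanks}

# A slightly noisy re-performance of HELLO should still match HELLO.
observed = hello + rng.normal(scale=0.01, size=hello.shape)
print(classify(observed, templates))  # prints "HELLO"
```

A production system replaces the nearest-centroid matcher with a network trained on thousands of labeled signing clips, but the overall shape (extract landmarks, normalize, classify, emit text) is the same.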
SLAIT doesn’t require the user to know any coding or have any experience with machine learning algorithms. The user interface is easy to navigate for any individual, regardless of technical background. All that is required is a phone or laptop less than four years old and a working webcam.
The current AI model recognizes 200 gestures with 92% accuracy. Future development is aimed at recognizing close to 1,000 gestures, allowing for the identification of a wider signing vocabulary.

WHAT ARE THE LIMITATIONS?
SLAIT has loads of potential, but like all new technology, it also has some limitations that need to be understood and might possibly be addressed in the near future.
One of these limitations is that different sign languages use different signs for the same concepts, leading to complications in interpretation and comprehension across sign language origins. This issue could be resolved if the user could indicate which sign language is being performed, allowing the algorithm to adjust its interpretations accordingly.
Another obstacle for SLAIT is that vocabulary is constantly changing and growing, which could lead to continued translation mistakes. Resolving this requires continual updating. However, this limitation could be addressed if the SLAIT team finds a way to let the neural network access live datasets, or creates its own datasets that are kept current with the latest sign language.
Lastly, SLAIT is designed for video calls rather than in-person interactions, which still limits face-to-face conversations. An individual could potentially use SLAIT to record someone live, much as some people use speech translators in person, but this seems like an obstacle that would take some creative innovation to overcome. Nevertheless, breaking down communication barriers over video calls is a great place to start.

WHAT DOES THE FUTURE LOOK LIKE FOR SLAIT?
The current options for a hearing-impaired or deaf individual to communicate in person are minimal and limited. These often include texting the other person while standing in front of them, writing on paper, or hiring an ASL professional to accompany them wherever they go. SLAIT opens up possibilities for those who have been limited by hearing impairment or deafness in education, work, health care, and personal relationships.
As revolutionary as it is, the SLAIT team is not limiting the technology to an AI-powered platform that enhances video communication. They are also leveraging AI to create a more customizable and interactive way to teach people ASL. The SLAIT team recently unveiled SLAIT School, which offers AI-based, personalized training and video evaluations to help people learn sign language. Learners can start from whatever level of signing they know, take lessons, practice, and complete interactive evaluations in which the AI algorithm gives feedback based on video input.
SLAIT is still in beta, but it can already be accessed and used by individuals who want to explore the technology. SLAIT School is available now, with personalized plans and pricing, including a free version where people can get started with their sign language learning immediately.
The growth potential of SLAIT is truly remarkable. An application accessible to anyone on their smartphones for face-to-face interactions, featuring multiple language and dialect options, has the potential to significantly enhance the experiences of individuals with hearing impairments or deafness. We might imagine this type of application being implemented into standard computer setups, making enhanced video conversations easily accessible for those with hearing impairments.
Many individuals heavily rely on video calls in their respective professions and personal lives, including business meetings, medical consultations, educational sessions, and family and friend interactions. SLAIT’s platform serves as a real-time bridge, seamlessly connecting individuals on both sides of the screen. Embracing this technology could enhance and even restore broken relationships and allow for the creation of new ones. Not to mention, this type of technology might unlock a plethora of new job opportunities for individuals with hearing impairments or deafness. On the other side, companies that embrace this platform could not only expand their customer base but also tap into the potential of talented individuals who were previously constrained by communication barriers, transforming business functions and society like never before.