Google has announced a new artificial intelligence (AI) model, SignGemma, that can translate sign language into text. The model, which will be part of the Gemma family of models, is currently being tested by the Mountain View-based tech giant and will be launched later this year. Like other Gemma models, SignGemma will be an open-source AI model, available to both individuals and businesses. It was first showcased during the Google I/O 2025 keynote, and it is designed to help people with speech and hearing impairments communicate effectively with those who do not understand sign language.
SignGemma can track hand movements and facial expressions
In a post on X (formerly known as Twitter), the official handle of Google DeepMind shared some details about the AI model and its release date. However, this is not the first time we have seen SignGemma. It was also showcased at the Google I/O 2025 event by DeepMind's Gus Martins.
We are happy to announce SignGemma, our most capable model for translating sign language into text. 🧏
This open model is coming to the Gemma model family later this year, opening up new possibilities for inclusive tech.
Share your feedback and interest in early testing… pic.twitter.com/nhl9g5y8ta
— Google DeepMind (@GoogleDeepMind) May 27, 2025
During the showcase, Martins highlighted that the AI model is capable of providing real-time text translation from sign language, making in-person communication smoother. The model was trained on datasets spanning various sign language styles; however, it performs best when translating American Sign Language (ASL) into English.
According to MultiLingual, since it is an open-source model, SignGemma can work without the need to connect to the internet. This makes it suitable for use in areas with limited connectivity. It is said to be built on the Gemini Nano framework and to use a vision transformer to track and analyse hand movements, hand shapes, and facial expressions. Besides making it available to developers, Google could also integrate the model into its existing AI tools, such as Gemini Live.
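SignGemma's weights and tooling have not yet been released, so its actual API is unknown. The following is a minimal conceptual sketch in Python of the pipeline described above: video frames pass through a vision-transformer-style encoder that tracks hands and faces, and a text decoder turns those features into English, entirely offline. Every name here (Frame, SignEncoder, TextDecoder, translate_signs) is a hypothetical illustration, not Google's API.

```python
# Hypothetical sketch of a sign-language-to-text pipeline of the kind the
# article describes (vision-transformer features -> text decoder).
# None of these names come from Google; SignGemma's real API is unreleased.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """One video frame; a real system would hold pixel data here."""
    pixels: bytes


class SignEncoder:
    """Stand-in for the vision-transformer encoder said to track hand
    movements, hand shapes, and facial expressions across frames."""

    def encode(self, frames: List[Frame]) -> List[float]:
        # A real encoder would emit patch embeddings per frame;
        # this stub just produces one dummy feature per frame.
        return [float(len(frame.pixels)) for frame in frames]


class TextDecoder:
    """Stand-in for the language-model decoder that turns visual
    features into English text (ASL -> English, per the article)."""

    def decode(self, features: List[float]) -> str:
        return "<translated English text>"


def translate_signs(frames: List[Frame]) -> str:
    """End-to-end pipeline; runs fully locally, matching the article's
    point that the open model needs no internet connection."""
    encoder, decoder = SignEncoder(), TextDecoder()
    return decoder.decode(encoder.encode(frames))


if __name__ == "__main__":
    clip = [Frame(pixels=b"\x00" * 1024) for _ in range(30)]  # ~1s of video
    print(translate_signs(clip))
```

The split into a visual encoder and a text decoder mirrors how vision-language models are commonly structured; the real model's internals may differ.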
DeepMind highlighted that the model will be released later this year. The large language model is currently in its early testing phase, and the tech giant has published an interest form inviting people to try it out and provide feedback.