On Thursday, Google released the full version of Gemma 3n, the latest open-source model in its Gemma 3 family of artificial intelligence (AI) models. First announced in May, the new on-device model is designed and optimised for local use cases and brings several architecture-based improvements. Notably, the large language model (LLM) can run locally on just 2GB of RAM. This means the model can be deployed and operated on a smartphone, provided the device comes with AI-capable processing power.
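The 2GB figure is plausible from a back-of-envelope calculation: a model with roughly two billion active parameters, quantised to 4 bits per weight, needs about 1GB for the weights alone. The sketch below uses the publicly stated parameter count; the 4-bit quantisation figure is our assumption, not a stated spec:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate memory needed to hold model weights alone."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal gigabytes

# Gemma 3n E2B: ~2 billion effective parameters.
# Assuming 4-bit quantisation (an assumption, not a stated spec):
print(round(weight_memory_gb(2, 4), 2))   # ~1.0 GB for weights
print(round(weight_memory_gb(2, 16), 2))  # ~4.0 GB at full 16-bit precision
```

This leaves headroom within 2GB for activations and the runtime itself, which is consistent with the on-device claim.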
Gemma 3n Is a Multimodal AI Model
In a blog post, the Mountain View-based tech giant announced the release of the full version of Gemma 3n. The model joins the existing releases in the Gemma 3 family. Since it is an open-source model, the company has shared the model's weights with the community. The model itself is available under the permissive Gemma licence, which allows both academic and commercial usage.
Gemma 3n is a multimodal AI model. It natively supports image, audio, video, and text inputs. However, it can only generate text output. It is also a multilingual model, supporting 140 languages for text, and 35 languages when the input is multimodal.
Google says Gemma 3n features a "mobile-first architecture", built on the Matryoshka Transformer, or MatFormer, architecture. It is a nested transformer, named after the Russian matryoshka dolls, where each doll fits inside a larger one. This architecture offers a unique way of training AI models with different parameter sizes.
Gemma 3n comes in two sizes, E2B and E4B, where "E" is short for effective parameters. This means that despite being five billion and eight billion parameters in raw size, the models have only two billion and four billion active parameters, respectively.
This is achieved using a technique called Per-Layer Embeddings (PLE), where only the most essential parameters need to be loaded into fast memory (VRAM). The rest, the per-layer embeddings, can be handled by the CPU.
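The idea behind PLE can be pictured as a simple partitioning of the raw parameter count: only the core transformer weights must sit in accelerator memory, while the per-layer embedding tables are served from ordinary CPU memory. The split below is an illustrative toy consistent with the stated E2B figures (two billion effective out of roughly five billion raw), not Gemma 3n's actual breakdown:

```python
# Toy illustration of Per-Layer Embeddings (PLE): partition a model's
# parameters so only the "core" weights occupy fast memory (VRAM),
# while per-layer embedding tables are offloaded to the CPU.
# The numbers are illustrative, matching the stated E2B totals.
model_params = {
    "core_transformer": 2.0e9,       # must live in fast memory (VRAM)
    "per_layer_embeddings": 3.0e9,   # can be handled by the CPU
}

vram_params = model_params["core_transformer"]
cpu_params = model_params["per_layer_embeddings"]

print(f"raw parameters:      {vram_params + cpu_params:.1e}")
print(f"effective (in VRAM): {vram_params:.1e}")
```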
Therefore, with the MatFormer system, the E4B model contains a nested E2B model, and when the bigger model is trained, the smaller model is trained simultaneously. This gives users the option of using E4B for more advanced operations, or E2B for faster outputs, without any significant difference in processing or output quality.
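The nesting idea can be sketched with a toy feed-forward layer: the smaller sub-model's weights are literally a leading slice of the larger one's, so both share parameters, can be trained together, and either can be run on its own. This is an illustrative analogy in NumPy, not Gemma 3n's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_big, d_small = 8, 32, 16  # the small hidden size nests in the big one

# One shared weight matrix; the smaller sub-model is the leading slice.
w_in = rng.standard_normal((d_model, d_big))
w_out = rng.standard_normal((d_big, d_model))

def ffn(x, hidden):
    """Run the feed-forward block using only the first `hidden` units."""
    h = np.maximum(x @ w_in[:, :hidden], 0.0)  # ReLU activation
    return h @ w_out[:hidden, :]

x = rng.standard_normal((1, d_model))
y_big = ffn(x, d_big)      # full model
y_small = ffn(x, d_small)  # nested sub-model: same weights, fewer units
print(y_big.shape, y_small.shape)  # both produce a (1, 8) output
```

Because the sub-model is a slice rather than a separate network, a gradient step on either size updates the shared weights, which is the sense in which the small model is "trained simultaneously".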
Google is also letting users create custom-sized models by tweaking certain internal parts. For this, the company is releasing the MatFormer Lab tool, which will let developers test different combinations to help them find custom model sizes.
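This kind of customisation can be pictured as choosing a hidden width per layer and summing the resulting parameter counts; a tool would then benchmark each combination. The function below is a hypothetical illustration of that bookkeeping, not the actual MatFormer Lab tool:

```python
def ffn_param_count(d_model: int, hidden_sizes: list[int]) -> int:
    """Parameters of a stack of FFN blocks with per-layer hidden widths.

    Each block has an in-projection (d_model x hidden) and an
    out-projection (hidden x d_model); biases are ignored for brevity.
    """
    return sum(2 * d_model * h for h in hidden_sizes)

# Hypothetical custom model: mix large and small widths per layer.
custom = ffn_param_count(2048, [8192, 8192, 4096, 4096])
full = ffn_param_count(2048, [8192] * 4)
print(custom, full)  # the custom mix lands between the smallest and full sizes
```

A developer would sweep such combinations and pick the one that fits their device's memory and latency budget.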
Currently, Gemma 3n is available to download via Google's Hugging Face and Kaggle listings. Users can also visit Google AI Studio to try out Gemma 3n. Notably, Gemma models can also be deployed directly from AI Studio to Cloud Run.


