OpenAI released two open-source artificial intelligence (AI) models on Tuesday. This marks the San Francisco-based AI firm's first contribution to the open community since 2019, when it open-sourced GPT-2. The two new models, dubbed gpt-oss-120b and gpt-oss-20b, are said to be comparable to the o3 and o3-mini models. Built on a mixture-of-experts (MoE) architecture, these AI models have undergone strict safety training and evaluation, the company says. The open weights of both models are available to download via Hugging Face.
OpenAI's open-source AI models support local reasoning
In a post on X (formerly Twitter), OpenAI CEO Sam Altman announced the release of these models, highlighting that gpt-oss-120b "performs about as well as o3 on challenging health questions." Notably, both models are currently hosted on OpenAI's Hugging Face listing, and interested users can download and run the open weights locally.
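For readers who want to try this, a minimal download sketch might look like the following. It assumes the huggingface_hub Python package is installed and that the repository ID follows the "openai/gpt-oss-20b" naming used at launch; the exact repository name and disk requirements should be verified on Hugging Face.

```python
# A minimal sketch of fetching the open weights from Hugging Face.
# Assumes: pip install huggingface_hub, and that the repo ID below
# matches OpenAI's listing (check huggingface.co to confirm).
from huggingface_hub import snapshot_download

# Download the smaller 21-billion-parameter model. The larger variant
# ("openai/gpt-oss-120b") needs substantially more disk space.
local_dir = snapshot_download(repo_id="openai/gpt-oss-20b")
print(f"Model weights downloaded to: {local_dir}")
```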
On its website, OpenAI explained that these models are compatible with the company's Responses application programming interface (API) and can work within agentic workflows. The models also support tool use, such as running a web search or executing code. Alongside local reasoning, they expose a transparent chain-of-thought (CoT), and the reasoning effort can be adjusted to favour either a higher-quality response or lower-latency output.
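Because the models are Responses API-compatible, a locally hosted copy can in principle be queried with the standard OpenAI Python client. The sketch below is illustrative only: it assumes the model is already being served at localhost:8000 by some OpenAI-compatible inference server (the server choice, port, and model name "gpt-oss-20b" are assumptions, not documented specifics).

```python
# A hedged sketch of calling a locally hosted gpt-oss model through an
# OpenAI-compatible Responses API endpoint. Assumes the openai package
# (v1+) is installed and a local server is running at the URL below.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.responses.create(
    model="gpt-oss-20b",  # assumed model name on the local server
    input="Summarise the trade-offs of mixture-of-experts models.",
    reasoning={"effort": "low"},  # favour low latency over answer quality
)
print(response.output_text)
```

Raising the reasoning effort (for example to "high") would trade latency for a more thorough chain-of-thought, matching the adjustable behaviour OpenAI describes.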
When it comes to architecture, the models are built on an MoE design that reduces the number of active parameters per token for more efficient processing. gpt-oss-120b activates 5.1 billion parameters per token, while gpt-oss-20b activates 3.6 billion parameters per token. The former has 117 billion total parameters and the latter 21 billion. Both models support a context length of 128,000 tokens.
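To see why total and active parameter counts differ, consider the toy MoE layer below. This is a simplified illustration, not OpenAI's implementation: a small router scores a set of expert networks and only the top-k experts run for each token, so most of the layer's parameters stay idle on any given forward pass.

```python
# Toy illustration of MoE routing (not OpenAI's implementation):
# a router picks the top-k experts per token, and only those experts run,
# which is why "active" parameters are far fewer than total parameters.
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        scores = self.router(x)                          # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only top-k experts
        weights = weights.softmax(dim=-1)                # normalise their weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(4, 64)
print(layer(tokens).shape)  # torch.Size([4, 64]); only 2 of 8 experts ran per token
```

In this toy layer, 2 of 8 experts fire per token, so roughly a quarter of the expert parameters are active at a time; the same principle, at far larger scale, is how a 117-billion-parameter model can activate only 5.1 billion parameters per token.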
These open-source AI models were mostly trained on an English-language text dataset, with a focus on science, technology, engineering, and mathematics (STEM), coding, and general knowledge. In the post-training phase, OpenAI used reinforcement learning (RL)-based fine-tuning.
Benchmark performance of OpenAI's open-source models
Photo Credit: OpenAI
Based on the company's internal testing, gpt-oss-120b outperforms o3-mini on coding (Codeforces), general problem solving (MMLU and Humanity's Last Exam), and tool calling (TauBench). In general, however, the models fall slightly behind o3 and o3-mini on other benchmarks such as GPQA Diamond.
OpenAI highlights that these models underwent deep safety training. In the pre-training phase, the company filtered out harmful data related to chemical, biological, radiological, and nuclear (CBRN) risks. The AI firm also said it used specific techniques to ensure the models refuse unsafe prompts and are protected from prompt injection.
Despite being open source, OpenAI claims the models have been trained in such a way that they cannot be modified by a bad actor to produce harmful output.