For the first time since GPT-2 in 2019, OpenAI is releasing new open-weight language models. It’s a significant moment for a company that has been accused of straying from its original stated mission of “ensuring that artificial general intelligence benefits all of humanity.” Now, following multiple delays for additional safety testing and refinement, gpt-oss-120b and gpt-oss-20b are available to download from Hugging Face.
Before diving in, it’s worth taking a moment to clarify what OpenAI is doing here. The company is not releasing new open-source models, which would include the underlying code and data the company used to train them. Instead, it’s sharing the weights (that is, the numerical values the models learned to assign to inputs during their training) that inform the new systems. According to Benjamin C. Lee, a professor of engineering and computer science at the University of Pennsylvania, open-weight and open-source models serve two very different purposes.
“An open-weight model provides the values that were learned during the training of a large language model, and those essentially allow you to use the model and build on top of it,” he said. If a proprietary model is a complete black box and an open-source system allows for full customization and modification, open-weight models sit somewhere in between.
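To make that distinction concrete, here’s a minimal Python sketch of what working directly with released weights can look like. It assumes the safetensors format Hugging Face uses for model downloads; the checkpoint filename is hypothetical.

    # A minimal sketch: open weights are just named blocks of learned numbers.
    # Assumes a hypothetical local checkpoint file "model.safetensors".
    from safetensors.torch import load_file

    weights = load_file("model.safetensors")  # maps tensor names to learned values
    for name, tensor in list(weights.items())[:5]:
        # each entry is an array of numbers the model learned during training
        print(name, tuple(tensor.shape))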
OpenAI has not released an open-source model, likely because a rival could use the training data and code to reverse engineer its tech. “An open-source model is more than just the weights. It would potentially also include the code used to run the training process,” Lee said. And in practice, the average person won’t get much use out of an open-source model unless they have a farm of high-end NVIDIA GPUs. (Open-source models are useful to researchers who want to learn more about the data a company used to train its systems, though, and a handful of them exist, such as Mistral NeMo and Mistral Small.)
Moving on, the main difference between gpt-oss-120b and gpt-oss-20b is how many parameters each one offers. If you’re not familiar with the term, parameters are the settings a large language model can tweak to provide you with an answer. The naming is slightly confusing here: gpt-oss-120b is actually a 117-billion-parameter model, while its smaller sibling weighs in at 21 billion.
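If the term still feels abstract, here’s a toy PyTorch example (illustrative only, not part of gpt-oss) showing that even a tiny model component is just a bundle of tunable numbers; gpt-oss-120b simply has 117 billion of them.

    # A toy illustration of "parameters": the tunable numbers inside a model.
    import torch.nn as nn

    layer = nn.Linear(4, 2)  # a tiny component: 4 inputs, 2 outputs
    n_params = sum(p.numel() for p in layer.parameters())
    print(n_params)  # 10: a 4x2 weight matrix plus 2 bias values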
In practice, this means gpt-oss-120b requires more powerful hardware to run, with OpenAI recommending a single 80GB GPU for efficient use. The good news is the company says any modern computer with 16GB of RAM can run gpt-oss-20b. As a result, you can use the smaller model to do something like vibe code on your own computer without a connection to the internet. What’s more, OpenAI is making the models available through the Apache 2.0 license, giving people a great deal of flexibility to modify the systems to their needs.
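For those who want to try that, the sketch below shows one plausible way to run the smaller model locally with the Hugging Face transformers library. Treat the details as assumptions rather than official setup instructions; only the model id “openai/gpt-oss-20b” comes from the release itself.

    # A minimal sketch of running gpt-oss-20b locally via Hugging Face.
    # After the one-time weight download, generation runs on local hardware.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="openai/gpt-oss-20b",
        device_map="auto",  # use a GPU if available, otherwise the CPU
    )

    messages = [
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ]

    result = generator(messages, max_new_tokens=256)
    print(result[0]["generated_text"])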
That said, this isn’t a new commercial release; OpenAI says the new models are comparable to its proprietary systems in many ways. One limitation of the oss models is that they don’t offer multi-modal input, meaning they can’t process images, video or voice. For those capabilities, you’ll still need to turn to the cloud and OpenAI’s commercial models, something you can configure the new open-weight systems to do. Beyond that, however, they offer many of the same capabilities, including chain-of-thought reasoning and tool use. That means the models can tackle more complex problems by breaking them down into smaller steps, and if they need extra help, they know how to use the web and coding languages like Python.
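As a rough illustration of what tool use means in practice, here’s a simplified, hypothetical Python sketch; run_model and web_search are stand-ins for a real inference stack and a real search API, and only the control flow is the point.

    # A hypothetical sketch of a tool-use loop: the model requests a tool,
    # the host runs it and feeds the result back for the next reasoning step.
    def web_search(query: str) -> str:
        return f"(search results for: {query})"  # placeholder for a real API call

    TOOLS = {"web_search": web_search}

    def run_model(messages: list) -> dict:
        # placeholder: a real call to the model would return either a final
        # answer or a structured tool request like this one
        return {"tool": "web_search", "arguments": {"query": "weather today"}}

    messages = [{"role": "user", "content": "What's the weather today?"}]
    step = run_model(messages)
    if "tool" in step:
        tool_output = TOOLS[step["tool"]](**step["arguments"])
        messages.append({"role": "tool", "content": tool_output})  # next turn sees this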
In addition, OpenAI says it trained the models using techniques the company previously employed in the development of o3 and its other recent frontier systems. In the competitive coding benchmark Codeforces, gpt-oss-120b earned a score that is only a shade behind o3, OpenAI’s current state-of-the-art reasoning model, while gpt-oss-20b landed in the territory of o3-mini and o4-mini. Of course, we’ll have to wait for more real-world testing to see how the two new models compare to OpenAI’s commercial offerings and those of its rivals.
Notably, the release of gpt-oss-120b and gpt-oss-20b comes after Mark Zuckerberg signaled Meta would release fewer open-weight models going forward. Open source was previously central to Zuckerberg’s messaging about his company’s AI efforts, with the CEO once remarking “fuck that” in reference to closed-source systems. Among tech enthusiasts eager to tinker with LLMs, at least, the timing, coincidental or not, is somewhat embarrassing for Meta.
“One could argue that open-weight models democratize access to the largest, most capable models for people who don’t have massive, hyperscale data centers with lots of GPUs,” said Professor Lee. “It allows people to use the outputs or products of a months-long training process on a massive data center without having to invest in that infrastructure on their own. From the perspective of someone who just wants a really capable model to begin with, and then wants to build for some application, I think open-weight models can be really useful.”
OpenAI is already working with a few different organizations to deploy their own versions of these models, including AI Sweden, the country’s national center for applied AI. At a press briefing held before today’s announcement, the team that worked on gpt-oss-120b and gpt-oss-20b said they view the two models as an experiment; the more people use them, the more likely it is that OpenAI will release additional open-weight models in the future.