
California released the long-awaited AI Safety Report

By mobile specs | June 17, 2025

Last September, all eyes were on Senate Bill 1047 as it made its way to California Governor Gavin Newsom’s desk, where it died when he vetoed the high-profile piece of legislation.

SB 1047 would have required the makers of all large AI models, particularly those costing $100 million or more to train, to test them for specific dangers. AI industry whistleblowers were unhappy about the veto, while most big tech companies welcomed it. But the story did not end there. Newsom, who felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers with proposing an alternative plan, one that would support the development and governance of generative AI in California along with guardrails against its risks.

On Tuesday, that report was published.

The authors of the 52-page “California Report on Frontier AI Policy” said that AI capabilities, including models’ chain-of-thought “reasoning” abilities, have “rapidly improved” since the decision to veto SB 1047.

The report, led by Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society, concluded that frontier AI breakthroughs in California could heavily impact agriculture, biotechnology, clean tech, education, finance, medicine, and transportation. Its authors agreed that it is important not to stifle innovation and to “ensure that regulatory burdens are such that organizations have the resources to comply.”

“Without proper safeguards … powerful AI could induce severe and, in some cases, potentially irreversible harms.”

But reducing those risks remains paramount, they wrote: “Without proper safeguards … powerful AI could induce severe and, in some cases, potentially irreversible harms.”

The group published a draft version of its report in March for public comment. But even since then, they wrote in the final version, evidence that these models contribute to “chemical, biological, radiological, and nuclear (CBRN) weapons risks” has grown. Leading companies, they added, have self-reported concerning increases in their models’ capabilities in those areas.

The authors made several changes to the draft. They now note that California’s new AI policy will need to navigate rapidly shifting “geopolitical realities.” They added more context about the risks posed by large AI models, and they took a harder line on how companies are classified for regulation, saying that a focus purely on how much compute their training required is not the best approach.

“AI’s training needs are constantly changing,” the authors wrote, “and a compute-based definition ignores how these models are adopted in real-world use cases.” A compute threshold can still serve as “an initial filter to cheaply screen for entities that may warrant greater scrutiny,” but factors such as initial risk evaluations and downstream impact assessments are key.
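
To make that two-stage idea concrete, here is a minimal, purely illustrative Python sketch (not from the report; the ModelProfile type, its field names, and both thresholds are hypothetical) of using a training-cost figure only as a cheap first-pass filter, with risk evaluations and downstream reach deciding the outcome:

    from dataclasses import dataclass

    # Hypothetical illustration of the report's argument: a cost/compute
    # threshold is only a cheap first-pass screen; actual scrutiny hinges
    # on risk evaluations and real-world (downstream) impact.

    TRAINING_COST_THRESHOLD_USD = 100_000_000  # SB 1047's figure, used as a filter

    @dataclass
    class ModelProfile:
        training_cost_usd: float
        initial_risk_eval_flagged: bool  # e.g. concerning capability findings
        downstream_reach: int            # rough count of real-world deployments

    def warrants_scrutiny(m: ModelProfile) -> bool:
        # Stage 1: cheap screen. Small, unflagged models skip review entirely.
        if m.training_cost_usd < TRAINING_COST_THRESHOLD_USD and not m.initial_risk_eval_flagged:
            return False
        # Stage 2: the factors the report calls key, not compute alone.
        return m.initial_risk_eval_flagged or m.downstream_reach > 1_000

    print(warrants_scrutiny(ModelProfile(2e8, False, 50)))     # costly to train, low impact -> False
    print(warrants_scrutiny(ModelProfile(5e7, True, 10_000)))  # cheap to train, flagged -> True

The point of the sketch is the report’s own: a model can clear the cost bar and still pose little risk, or sit below it and still warrant scrutiny.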

This is especially important, the authors wrote, because when it comes to transparency the AI industry is still a Wild West, with little agreement on best practices and “systemic opacity in key areas” such as how data is acquired, how models are tested before release, and what their downstream impacts may be.

The report calls for whistleblower protections, third-party evaluations with safe harbor for the researchers performing them, and sharing information directly with the public, to enable a level of transparency beyond what today’s AI companies choose to disclose.

Scott Singer, one of the report’s lead writers, told The Verge that AI policy conversations have “completely shifted” at the federal level since the draft report. He argued that California, however, could help lead a “harmonization effort” among states toward commonsense policies that many people across the country support, in contrast to the patchwork of state laws that supporters of an AI moratorium claim will emerge.

In an op-ed earlier this month, Anthropic CEO Dario Amodei called for a federal transparency standard, requiring leading AI companies to “publicly disclose on their company websites … how they plan to test and mitigate national security and other catastrophic risks.”

“Developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms”

But even measures like that are not enough, the report’s authors wrote, because “for a nascent and complex technology being developed and adopted at a remarkably fast pace, developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms.”

That is why one of the key tenets of Tuesday’s report is the need for third-party risk assessment.

The authors concluded that risk assessments would push companies like OpenAI, Anthropic, Google, Microsoft, and others to strengthen model safety, while helping to paint a clearer picture of their models’ risks. Currently, leading AI companies typically evaluate their own models or hire second-party contractors to do so. Third-party evaluation, the authors say, is vital.

Not only are “thousands of individuals … willing to engage in risk evaluation, dwarfing the scale of internal or contracted teams,” but groups of third-party evaluators also bring “unmatched diversity, especially when developers primarily reflect certain demographics and geographies.”

But if you allow third-party evaluators to test the risks and blind spots of your powerful AI models, you have to give them access, and for meaningful assessments, a lot of access. That is something companies are hesitant to do.

It is not easy even for second-party evaluators to get that level of access. METR, a company OpenAI partners with for safety testing of its models, wrote in a blog post that it was not given as much time to test OpenAI’s o3 model as it had been with past models, and that OpenAI did not give it sufficient access to data or to the model’s internal reasoning. Those limitations, METR wrote, prevented it from making robust capability assessments. OpenAI later said it was exploring ways to share more data with firms like METR.

The report states that API access or disclosure of a model’s weights alone may not allow third-party evaluators to run effective tests, and that companies can use suppressive terms of service to ban or threaten legal action against independent researchers who uncover safety flaws.

Last March, more than 350 AI industry researchers and others signed an open letter calling for a “safe harbor” for independent AI safety testing, similar to existing protections for third-party cybersecurity testers in other fields. Tuesday’s report cites that letter and calls for major changes, as well as reporting options for people harmed by AI systems.

“Even perfectly designed safety policies cannot prevent 100 percent of substantial, adverse outcomes,” the authors wrote. “As foundation models are widely adopted, understanding harms that arise in practice is increasingly important.”
