Cyber security news for all


    AI Companies Required to Report Safety Tests to US Government

    The Biden administration is set to enforce a new mandate compelling developers of major artificial intelligence systems to disclose their safety test results to the government. Under the executive order President Joe Biden signed three months ago, AI companies must share crucial information, including safety test results, with the Commerce Department under the Defense Production Act. The White House AI Council is convening to assess progress on the directive, with a focus on ensuring that AI systems are safe before public release.

    While software companies have committed to specific categories of safety testing, no common standard yet exists. The National Institute of Standards and Technology will develop a uniform framework for safety assessments, as outlined in Biden's October order. AI's growing role in economic and national security has prompted the government to address both the uncertainties around and the investments in emerging AI tools such as ChatGPT.

    The administration is also exploring legislative measures and is collaborating with other nations and the European Union to formulate rules for managing AI technology. The Commerce Department has drafted a rule concerning U.S. cloud companies that provide servers to foreign AI developers. Nine federal agencies, including Defense, Transportation, Treasury, and Health and Human Services, have conducted risk assessments of AI's use in critical national infrastructure. Additionally, the government is stepping up recruitment of AI experts and data scientists across federal agencies to prepare for the transformative effects of AI technology while maintaining regulatory oversight.