Ex-OpenAI Employees Issue Open Letter Over Lack Of Safety Oversight, Call Whistleblower Protections 'Insufficient'
The open letter addressed the risks associated with AI systems, such as manipulation, misinformation, and the loss of control of autonomous AI systems.
What Does The Letter Say?
The letter stated that since there is currently no effective government oversight of what these AI companies do, they should at least be open to criticism from their current as well as former employees. It added that these companies should also be accountable to the public. It further said that these AI companies currently have 'strong financial incentives' to ignore safety, and that existing 'corporate governance' structures are not enough to keep them in check.

The letter also mentioned that AI companies have not yet publicly shared the capabilities and limitations of these systems, or what risk levels they pose, and cautioned about the different kinds of harm such systems can do. As per these former employees of OpenAI, normal "whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."
In the past few months, numerous AI companies, including the juggernaut OpenAI, have been repeatedly criticized over safety oversight. Last month, OpenAI's chief scientist, Ilya Sutskever, parted ways with the company, after which the head of its Superalignment team, Jan Leike, also resigned. Leike claimed that safety has "taken a backseat to shiny products."
According to reports, after these resignations, OpenAI shut down the Superalignment team and formed a new Safety and Security Committee, led by CEO Sam Altman himself.