OpenAI has recently formed a new committee to oversee critical safety and security decisions related to its projects and operations. The move comes amid growing concern about the ethical implications of AI technology and the risks it poses. Notably, OpenAI has chosen to staff the committee primarily with company insiders, including CEO Sam Altman, rather than outside observers. That decision has sparked debate over whether the committee can operate independently and effectively, and it raises questions about bias and conflicts of interest.

Altman, along with the other members of the Safety and Security Committee – Bret Taylor, Adam D’Angelo, Nicole Seligman, chief scientist Jakub Pachocki, Aleksander Madry (who leads OpenAI’s preparedness team), Lilian Weng (head of safety systems), Matt Knight (head of security), and John Schulman (head of alignment science) – will be responsible for evaluating OpenAI’s safety processes and safeguards over the next 90 days. According to a post on the company’s corporate blog, the committee will then share its findings and recommendations with the full OpenAI board of directors for review. OpenAI has pledged to publish an update on any adopted suggestions “in a manner that is consistent with safety and security.”

OpenAI has recently begun training its next frontier model, which the company anticipates will bring it closer to achieving artificial general intelligence. While OpenAI says it is proud to build and release models that are industry-leading in both capabilities and safety, it acknowledges the importance of robust debate at this critical moment. The statement is intended to project openness to external scrutiny and a willingness to engage with the broader community on issues of AI safety.

Over the past few months, several high-profile departures from OpenAI’s safety team have raised concerns. Some of these former employees have voiced concerns about what they see as an intentional de-prioritization of AI safety. Daniel Kokotajlo, who worked on OpenAI’s governance team, quit in April after losing confidence that OpenAI would “behave responsibly” around the release of increasingly capable AI, concerns he described in a post on his personal blog. Similarly, Ilya Sutskever, an OpenAI co-founder and formerly the company’s chief scientist, left in May after a prolonged conflict with Altman and his allies, reportedly over Altman’s rush to launch AI-powered products at the expense of thorough safety work.

More recently, Jan Leike, a former DeepMind researcher who was involved with the development of ChatGPT and its predecessor, InstructGPT, resigned from his safety research role. In a series of posts on X, he expressed his belief that OpenAI “wasn’t on the trajectory” to adequately address issues related to AI security and safety. AI policy researcher Gretchen Krueger, who left OpenAI last week, echoed Leike’s statements. She called for the company to improve its accountability and transparency and to take greater care in the use of its own technology.

Quartz notes that, besides Sutskever, Kokotajlo, Leike, and Krueger, at least five of OpenAI’s most safety-conscious employees have either quit or been pushed out since late last year, including former OpenAI board members Helen Toner and Tasha McCauley. In an op-ed for The Economist published Sunday, Toner and McCauley argued that, with Altman at the helm, they do not believe OpenAI can be trusted to hold itself accountable. They wrote that “self-governance cannot reliably withstand the pressure of profit incentives,” highlighting the inherent conflict between commercial success and ethical AI governance.

TechCrunch reported earlier this month that OpenAI’s Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s compute resources but rarely received a fraction of that. The Superalignment team has since been dissolved, with much of its work placed under the purview of Schulman and a safety advisory group formed in December.
