OpenAI has made significant strides in advancing artificial intelligence, most recently with GPT-4o, the model that powers its popular ChatGPT chatbot. Today, OpenAI announced the establishment of a new safety committee, the OpenAI Safety Council, and revealed that it has begun training a new AI model.
Who is on OpenAI’s Safety Council?
The newly formed OpenAI Safety Council aims to provide guidance and oversight on critical safety and security decisions related to the company’s projects and operations. The council’s primary objective is to ensure that OpenAI’s AI development practices prioritize safety and align with ethical principles. The safety committee comprises a diverse group of individuals, including OpenAI executives, board members, and technical and policy experts.
Notable members of the OpenAI Safety Council include:
- Sam Altman, CEO of OpenAI
- Bret Taylor, Chairman of OpenAI
- Adam D’Angelo, CEO of Quora and OpenAI board member
- Nicole Seligman, former Sony general counsel and OpenAI board member
In its initial phase, the council will focus on evaluating and strengthening OpenAI’s existing safety processes and safeguards. It has set a 90-day timeline to deliver recommendations to the board on how to enhance the company’s AI development practices and safety systems. Once the recommendations are adopted, OpenAI plans to release them publicly in a manner consistent with safety and security considerations.
Training of the New AI Model
In parallel with the establishment of the OpenAI Safety Council, OpenAI announced that it has begun training its next frontier model. This new model is expected to surpass the capabilities of the GPT-4 family, including the GPT-4o system that currently underpins ChatGPT. While details remain scarce, OpenAI has said it expects the model to lead the industry in both capability and safety.
The development of this new model underscores the rapid pace of innovation in artificial intelligence and the industry’s pursuit of artificial general intelligence (AGI). As AI systems become more advanced and powerful, it is crucial to prioritize safety and ensure that these technologies are developed responsibly.
OpenAI’s Recent Controversies and Departures
OpenAI’s renewed focus on safety comes amid a period of internal turmoil and public scrutiny. In recent weeks, the company has faced criticism from within its own ranks: researcher Jan Leike resigned, expressing concerns that safety had taken a backseat to the development of “shiny products.” Leike’s resignation came shortly after the departure of Ilya Sutskever, OpenAI’s co-founder and chief scientist.
The departures of Leike and Sutskever have raised questions about the company’s priorities and its approach to AI safety. The two researchers jointly led OpenAI’s “superalignment” team, which was dedicated to addressing long-term AI risks. Following their resignations, the superalignment team was disbanded, further fueling concerns about the company’s commitment to safety.
In addition to the internal upheaval, OpenAI has faced allegations of voice impersonation in ChatGPT: many listeners, including actress Scarlett Johansson herself, said the chatbot’s “Sky” voice bears a striking resemblance to hers. While OpenAI has denied intentionally imitating Johansson, the incident has sparked a broader conversation about the ethical implications of AI-generated content and the potential for misuse.
A Broader Conversation on AI Ethics
As the field of artificial intelligence continues to evolve rapidly, it is crucial for companies like OpenAI to engage in ongoing dialogue and collaboration with researchers, policymakers, and the public to ensure that AI technologies are developed responsibly and with robust safeguards in place. The OpenAI Safety Council’s forthcoming recommendations, along with the company’s stated commitment to transparency, will feed into the broader conversation on AI governance; only time will tell how much they change the way OpenAI builds its most powerful systems.