OpenAI has announced its participation in the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member. Other steering committee members include Adobe, the BBC, Intel, Microsoft, Google, Publicis Groupe, Sony, and Truepic.
What is C2PA?
C2PA is an open, technical standards body that is developing standards for certifying the source and history of digital content to address the prevalence of misleading information online. This includes verifying whether content has been altered from its original state.
C2PA combines the efforts of the Content Authenticity Initiative, led by Adobe, which focuses on providing context and history for digital media, and Project Origin, a joint initiative by Microsoft and BBC aimed at combating disinformation in the digital news ecosystem.
“C2PA is playing an essential role in bringing the industry together to advance shared standards around digital provenance,” said Anna Makanju, OpenAI’s VP of Global Affairs. “We look forward to contributing to this effort and see it as an important part of building trust in the authenticity of what people see and hear online.”
“OpenAI’s existing adoption, advocacy, and ongoing commitment to Content Credentials will bring an important voice to our membership’s working efforts to guide the development of the C2PA standard,” said Andrew Jenks, C2PA Chair.
OpenAI recently integrated C2PA metadata standards into its DALL·E 3 image model outputs to enhance transparency and traceability. The company plans to extend these practices to Sora, its upcoming video generation model.
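In JPEG files, C2PA manifests are carried in APP11 marker segments as JUMBF boxes. As a rough illustration of how a verifier might locate that embedded provenance data, the sketch below scans a JPEG byte stream for APP11 segments containing a JUMBF signature. This is a deliberately simplified check, not an official tool: a real implementation would reassemble multi-segment manifests, parse the full JUMBF structure, and cryptographically verify the signatures. The `find_c2pa_segments` name and the synthetic test bytes are illustrative assumptions.

```python
import struct

def find_c2pa_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Return payloads of APP11 (0xFFEB) segments whose data contains a
    JUMBF box signature, the container format C2PA uses for manifests.
    Simplified: real files may split one manifest across many segments."""
    segments = []
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: stop scanning headers
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in payload:
            segments.append(payload)
        i += 2 + length
    return segments

# Synthetic example: a minimal "JPEG" with one APP11 segment whose
# payload contains the JUMBF box type. Purely illustrative bytes.
payload = b"\x00\x00\x00\x14jumb<manifest-data>"
fake_jpeg = (b"\xFF\xD8"  # SOI
             + b"\xFF\xEB" + struct.pack(">H", len(payload) + 2) + payload
             + b"\xFF\xD9")  # EOI
print(len(find_c2pa_segments(fake_jpeg)))  # prints 1
```

For real-world files, Adobe's open-source `c2patool` CLI and the C2PA SDKs perform the full manifest parsing and signature validation that this sketch omits.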
OpenAI’s image detection classifier
OpenAI also announced in a blog post new initiatives to tackle deceptive digital content, including advanced watermarking and AI-driven detection tools. The company is launching a new image detection classifier, now open for testing by select research groups and non-profit journalism organizations through its Researcher Access Program.
The classifier has demonstrated high accuracy in identifying AI-generated images, a significant step toward distinguishing authentic content from manipulated content. OpenAI emphasized the importance of such technologies in maintaining the credibility of digital media.
In addition, OpenAI is working on adding audio watermarking to Voice Engine, its custom voice model, which is currently in a limited research preview. The company says it is continuing research in audio to ensure that its advances in the space remain transparent and secure.
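OpenAI has not published the details of its audio watermarking scheme, so any concrete code here is necessarily a stand-in. The toy sketch below illustrates only the general idea behind inaudible watermarks: hiding a bit pattern in the least-significant bits of 16-bit PCM samples. This is a deliberately naive scheme for illustration; production watermarks are designed to survive compression, resampling, and editing, which plain LSB embedding does not.

```python
def embed_watermark(samples: list[int], bits: list[int]) -> list[int]:
    """Hide one bit per sample in the least-significant bit of 16-bit
    PCM audio. Toy illustration only: fragile and trivially removable."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | (bit & 1)
    return out

def extract_watermark(samples: list[int], n_bits: int) -> list[int]:
    """Recover the first n_bits hidden by embed_watermark."""
    return [s & 1 for s in samples[:n_bits]]

pcm = [1203, -882, 455, 31000, -15000, 7]  # arbitrary 16-bit samples
mark = [1, 0, 1, 1, 0, 0]
stamped = embed_watermark(pcm, mark)
print(extract_watermark(stamped, len(mark)))  # prints [1, 0, 1, 1, 0, 0]
```

Because flipping the lowest bit changes each sample by at most one quantization step, the embedded pattern is inaudible, which is the property any practical audio watermark must preserve while also resisting removal.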
OpenAI and Microsoft to promote AI education
In conjunction with these technological advancements, OpenAI and Microsoft are introducing a societal resilience fund to promote AI education and foster broader understanding of AI technologies and their implications. The fund will support various organizations, including Older Adults Technology Services from AARP, International IDEA, and the Partnership on AI.
This multi-faceted approach underscores the critical role of collaboration and transparency in addressing the challenges posed by the rapid advancement of generative AI technologies in content creation.