With the unveiling of Sora, OpenAI's latest creation, will generative artificial intelligence reach a new milestone? Though still in a testing phase with restricted access, this AI model has dazzled with results that portend a radical transformation in the audiovisual sector. However, this technological revolution is not without controversies, especially concerning author’s rights and legal protection of creations generated by AI.
The unstoppable rise of generative artificial intelligence is a phenomenon that cannot be ignored. A study conducted by the University of Oxford and Google projects exponential growth for this market, from $40 billion generated in 2022 to an estimated $1.3 trillion in 2032. This rapid growth positions AI as an indispensable player in the global economy, though it raises questions about the equitable distribution of the wealth it will generate.
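Taken at face value, those two endpoints imply a very steep growth curve. The back-of-the-envelope calculation below is only a sketch based on the study's figures ($40 billion in 2022, $1.3 trillion in 2032); it shows the compound annual growth rate that the projection implies.

```python
# Implied compound annual growth rate (CAGR) from the figures cited above.
# Assumes only the two endpoints: $40 billion in 2022 and $1.3 trillion in 2032.
start_value = 40e9          # market size in 2022, in US dollars
end_value = 1.3e12          # projected market size in 2032
years = 2032 - 2022         # ten-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 42% per year
```

In other words, the projection assumes the market grows by roughly 42 percent every year for a decade.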
How does Sora work?
One of the most outstanding aspects of Sora is its ability to produce realistic-looking videos from simple instructions, known as "prompts." This capability, though remarkable, poses significant technical and ethical challenges. While Sora's potential in the entertainment industry is undeniable, concerns about deepfakes, misinformation, and content manipulation have led OpenAI to restrict access to the model and to put safeguards in place, such as watermarking Sora-generated content and prohibiting certain types of content.
To ensure that Sora cannot be used for illicit purposes, OpenAI has implemented a robust security protocol. This includes applying watermarks to Sora-generated content, thus revealing its artificial origin. Additionally, an integrated text classifier verifies and rejects prompts requesting extreme violence, sexual content, or hate-inciting imagery. Finally, reliable image classifiers review each frame of a generated video before it is shown to the user.
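To make the order of these checks concrete, the sketch below outlines how such a pipeline might be wired together. It is an illustration under stated assumptions, not OpenAI's actual implementation: the prompt check uses OpenAI's publicly documented Moderation API, while generate_video, frame_is_safe, and apply_watermark are hypothetical placeholders standing in for Sora's internal text-to-video, frame-review, and watermarking steps.

```python
# Illustrative sketch of the safety flow described above, not OpenAI's
# actual implementation. Only the Moderation API call is a real endpoint;
# the three helpers below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_video(prompt: str) -> list:
    """Hypothetical stand-in for a text-to-video model call."""
    return [f"frame rendered from: {prompt}"]


def frame_is_safe(frame) -> bool:
    """Hypothetical per-frame image classifier."""
    return True


def apply_watermark(frames: list) -> list:
    """Hypothetical watermarking step marking content as AI-generated."""
    return [f"{frame} [AI-generated]" for frame in frames]


def generate_safe_video(prompt: str) -> list:
    # 1. Text classifier: reject prompts asking for disallowed content.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        raise ValueError("Prompt rejected by the text classifier")

    # 2. Generate the video from the prompt.
    frames = generate_video(prompt)

    # 3. Review every frame before it is shown to the user.
    if not all(frame_is_safe(frame) for frame in frames):
        raise ValueError("A generated frame was rejected by the image classifier")

    # 4. Watermark the result to reveal its artificial origin.
    return apply_watermark(frames)
```

The point of the sketch is simply the ordering: the prompt is screened before anything is generated, every frame is screened before anything is shown, and the watermark is applied last so that whatever leaves the system carries a marker of its artificial origin.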
However, even with these security measures, concerns persist about the misuse of the technology. The case of a user who managed to obtain instructions to build a bomb through ChatGPT is a stark reminder of the inherent risks of generative AI. The need for effective regulations and rigorous controls becomes increasingly evident to prevent abuses and protect the integrity of information.
What about author’s rights?
The creation of works by artificial intelligences poses unprecedented legal challenges. Although there are laws protecting intellectual creations, the lack of specific regulations for works generated by AI creates uncertainty about the ownership and authorship of such works.
In Europe, efforts are underway to adapt intellectual property regulations to technological advancements, with proposals aimed at addressing liability for damages caused by AI systems.
For its part, OpenAI has implemented additional measures to protect the rights of creators. According to its commercial terms updated as of November 14, 2023, Sora users retain all intellectual property rights over their creations.
This includes both the prompts and the results generated by the AI, providing users with the freedom to use, share, and commercialize their creations at their own discretion.
In conclusion, the advent of Sora and other generative artificial intelligences represents a potentially revolutionary advancement in the audiovisual sector, but it also poses significant challenges in terms of security, ethics, and author’s rights.
To fully harness the potential of this technology and mitigate its risks, a collaborative approach among developers, regulators, and the creative community is necessary. Only then can we ensure a future where innovation coexists harmoniously with the protection of individual rights and the integrity of audiovisual content.