In May 2024, OpenAI made headlines by announcing a new tool in development called Media Manager.
The tool was meant to let creators specify whether and how their work could be included in the company’s AI training data. But seven months down the line, there’s no trace of the much-anticipated tool.
The question now is what happened to the tool that was supposed to let copyright holders control how their pictures, videos, audio, and even ordinary text are used. At the time, OpenAI seemed keen to show that creators’ rights would be safeguarded; now there’s no sign of where the project stands.
Some even saw the announcement as a ploy to quiet fierce critics of the company’s long list of legal disputes over training AI on content that didn’t belong to it.
Sources familiar with the matter say the tool was hardly ever treated as important inside the company and was never a priority, which is worrisome. One employee went so far as to say they couldn’t recall anyone working on it.
A non-employee who coordinated work with the firm told media outlet TechCrunch that they had discussed the matter with OpenAI in the past but had received no update since. The individuals asked not to be named publicly.
A member of the company’s legal team said much the same: there’s no update to share and no sign of progress. The company set itself an internal deadline for the release, and that deadline has since slipped, which says a lot about how low a priority the launch has become.
2025 has begun, but from what we’re seeing right now, it may not be the year the tool launches either.
Today, most of the company’s AI models learn from patterns in large collections of data, which is what allows them to make predictions. ChatGPT can produce very convincing written material, while tools such as Sora, the organization’s video generator, can create some of the most realistic footage out there today.
The ability to draw on examples such as films, written content, and more to generate new material is what makes this AI technology so powerful. It also makes it regurgitative: prompted in certain ways, some models produce near-identical copies of their training material.
Understandably, this upsets many creators, whose hard work is being used without consent or compensation, and a number of them have filed lawsuits over it.
The company is fighting a string of class action cases brought by media outlets, writers, YouTubers, and artists, all accusing the startup of training on their material unlawfully.
Well-known plaintiffs include The New York Times and Radio-Canada, among others. The tech giant has struck a host of licensing deals with select partners, but not every rights holder sees those arrangements as working in their favor.
The company does give creators some ways to opt out of having their material used for AI training. These include a submission form, rolled out a few months earlier, through which artists can flag work they want removed from training, as well as mechanisms that let site owners block the company’s web crawlers from scraping their domains.
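For site owners, that crawler-blocking route comes down to standard robots.txt directives. As a minimal sketch (the exact user-agent names depend on which of OpenAI’s crawlers a site wants to exclude, and the paths on how much of the site to block), a domain’s robots.txt might include:

    User-agent: GPTBot
    Disallow: /

OpenAI’s published crawler documentation says GPTBot honors directives like these, though they only affect future crawling, not material that has already been collected.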
Many creators have criticized these methods as haphazard and insufficient. There are no dedicated opt-out routes for written material, audio, or video, and the image opt-out form requires submitting a copy of every picture to be removed along with a description, a laborious process. That is why Media Manager was pitched as a revamp and a wholesale expansion of the whole system.
It was marketed as using state-of-the-art technology and research to ensure creators and content owners could protect their material. At the time, the company said it was also in discussions with regulators during the tool’s development. It was pitched, too, as setting a standard for the whole AI industry.
Now, there’s no sign of it and no talk about it. It’s almost as if it never existed, with no indication of when, or whether, a launch might be on the cards. For now, we’ll just have to wait for further updates from OpenAI’s side.
Image: DIW-Aigen
Read next:
• How Is Bitcoin’s Recent Growth Impacting Meme Coins?