Tech giant OpenAI has just made several announcements tied to the wave of new tools built on generative AI.
For starters, the company has previewed a new Media Manager tool. This is designed to let content creators tell the firm which material they own so their copyrights stay protected.
In the same vein, it helps ensure their content isn't used for training purposes without consent, and that's a big deal, because many creators have seen their work used without permission since the generative AI boom took off.
The firm said in a statement that building a first-of-its-kind tool like this would require state-of-the-art machine learning research to identify copyrighted text, audio, images, and even videos across different sources. It would also, it seems, ensure creators' voices are heard rather than ignored.
For now, the tool is still in a trial phase, and a launch isn't expected before early next year. But seeing the company even speak about such a product has fans excited, as OpenAI has been at the center of plenty of controversy and legal woes, including those arising from major media outlets like The New York Times.
In case you didn't know, the latter accused the tech giant of using its content for AI training purposes without permission.
In other news, the company has shed light on another offering it's working on: a tool built to detect images generated by its DALL-E art creator. The goal appears to be enabling independent research that examines how effective the classifier really is, what considerations apply, where it could be used in the real world, and what characteristics AI-generated content tends to carry.
For a while now, deepfake images have been circulating with few checks and balances, so seeing OpenAI make an effort to separate real from fake is a step in the right direction. Notably, during trials the tool was able to correctly identify DALL-E-generated images around 98% of the time.
For now, it's in an experimental phase, and no timeline for public availability has been shared.
The tech giant added in the same blog post that it hopes to join new industry groups to develop standards for certifying online content. Additionally, the tech giant and its top partner are teaming up on a whopping $2 million fund dubbed the Societal Resilience Fund.
The money will go toward AI education and broader public understanding of the technology. So as you can see, OpenAI is moving full throttle to answer the many questions surrounding its technology.
Image: DIW-Aigen
Read next: New Chrome Feature, Video Chapters Mimics YouTube's Functionality, Aiding Navigation on Websites