The unexpected legal action taken by The New York Times against ChatGPT maker OpenAI and software giant Microsoft caused a stir in the tech world recently. But if you thought the two leading firms would simply sit back and let the high-profile media outlet level striking allegations of copyright infringement against them unanswered, you were mistaken.
OpenAI has pushed back against The New York Times, asserting that the lawsuit lacks merit. That is a big deal, considering the NYT's standing as a leader in the world of journalism.
The company began its statement by praising its accuser, calling the NYT a reputable organization and underscoring its support for quality journalism. It then argued that the lawsuit directed at it is without merit, and that such sweeping allegations cannot be made without a basis.
The post went on to say that OpenAI's goal is to collaborate with media outlets and create the best possible opportunities across the industry. It argued that its training practices constitute fair use, noted that it nevertheless offers an opt-out because it believes that is the right thing to do, and acknowledged that regurgitation is a known problem that has existed from the start, one the company says it is working hard to drive down to zero.
Each claim was addressed in detail and captured headlines, and it was notable how many of the allegations centered on scraping public news sources to train AI models such as the GPT-3.5 and GPT-4 variants. The company has also pointed out that, since the end of last year, it has been offering legal protections with its AI offerings.
The suit was filed at the end of last year, when the NYT alleged that the AI giant trained its models on the paper's copyrighted articles without obtaining permission and without providing any compensation, producing text closely resembling the content of NYT articles.
The complaint characterizes this as serious infringement, giving rise to unauthorized reproductions of, and derivatives from, NYT articles. The suit came after months of negotiations in which OpenAI and NYT representatives failed to reach an agreement that both parties found acceptable.
OpenAI maintains that it genuinely tried to accommodate its clients and users, only for others to take advantage of that openness; the negotiations were ultimately aimed at a deal governing whether OpenAI could train on the Times' material.
Now, OpenAI says it is being manipulated by the NYT, arguing that the prompts used to produce the cited examples are clear violations of its Terms of Service. It accused the media giant of crafting manipulative prompts that force its models to regurgitate hand-picked examples.
OpenAI contends that such usage is neither typical nor permitted, and says it is working to harden its systems against adversarial attacks that force models to regurgitate training data, an effort in which it claims considerable success so far.
Photo: Digital Information World - AIgen