The growing number of generative AI models makes it harder than ever to choose which one to use for trending content.
But thanks to expert Greg Jarboe and his recent experiment comparing the output of Gemini, Claude, and ChatGPT, we now have more insight than ever.
The goal, as always, is to produce engaging content that your audience will interact with. To that end, the test gave students assignments in which they explored how each chatbot responded to queries relevant to them.
It’s a test-and-learn approach that any firm, team, or agency could apply: run the same prompt through each model and compare the output.
You can use Google Trends to surface search terms or topics that are currently trending. Many journalists already rely on this free tool to find story ideas and kick off their brainstorming.
In the test, ChatGPT, Claude, and Gemini were each asked to write engaging material, and the results were compared.
ChatGPT went first, asked to produce engaging, thought-provoking ads for Euro 2024, but its output required plenty of double-checking to catch hallucinations. Gemini was next: its content was engaging, but it was not clear how the chatbot selected its material.
It most likely drew on the most recent articles on the topic, though confirming that would take further digging. Claude was tested last. Its output was refreshingly honest about its limits, which we admire, but its knowledge cutoff made it a poor fit for recent trends.
Similar tests were run on another topic, and the pattern was much the same.
To conclude, comparing content produced by different models is a useful way to gauge each one’s capabilities. Whichever you use, all three are strong brainstorming partners, making it easy to produce a solid first draft.
Follow that up by applying your own experience and expertise: add attractive, educational images and videos, citations from reliable sources, strong quotes, and statistics relevant to the content.
We’ve learned quite a lot, including that every model has its share of pros and cons, and that relying blindly on these AI tools is something smart creators would avoid. Do you agree?
Image: DIW-Aigen
Read next: Study Finds AI Phenomenon Blurs Reality, Complicating Scam Identification for Nearly 5 in 10 of Americans