The rise of LLMs means that massive quantities of text can now be generated almost instantly, yet it is not always clear how people think about authorship in this new environment.
LLMs are trained on human-generated text and can be tuned to mimic a particular writing style, which makes the question of authorship a pertinent one. Researchers at the Institute for Informatics at LMU set out to answer it.
Participants were divided into two groups, both of which were asked to write postcards. One group wrote the postcards on their own, while the other could use an LLM. Once the postcards were written, participants uploaded them and provided some context on authorship.
People felt a stronger sense of authorship when they were more heavily involved in the creative process. Even so, many of those who used LLMs still credited themselves as authors, an arrangement that resembles ghostwriting in several ways.
This shows that perceived ownership and authorship do not necessarily go hand in hand: if the writing style was close enough to their own, participants had no qualms about claiming the text as theirs.
The authorship question matters because it could determine whether people trust the content they read online. Participants' willingness to put their names on text generated almost entirely by AI suggests that much online content may end up being produced by LLMs without readers knowing. Transparency is key, though it remains to be seen whether people using LLMs will be willing to declare it.