A new report from plagiarism detector Copyleaks found that 60% of OpenAI’s GPT-3.5 outputs contained some form of plagiarism.
Why it matters: Content creators from authors and songwriters to The New York Times are arguing in court that generative AI trained on copyrighted material ends up spitting out exact copies.
Eh, kinda. A science paper isn't just going to be an equation and nothing else; an author's synthesis of the results is always going to have unique language. And that's even more true for a social science paper.
Are those “best matches” paper-sized, or snippet-sized?
The article mentioned 400-word chunks, so snippet-sized, much smaller than a full paper.