silence7@slrpnk.net to Technology@lemmy.world · English · 4 months ago
When A.I.’s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results. (www.nytimes.com)
silence7@slrpnk.net (OP) · 4 months ago
Yes. Also, non-native speakers of a language tend to follow word-choice patterns similar to those of LLMs, which creates a whole set of false positives in detection.