• floofloof@lemmy.ca · 4 months ago

    Maybe this will become a major driver for the improvement of AI watermarking and detection techniques. If AI companies want to continue sucking up the whole internet to train their models on, they’ll have to be able to filter out the AI-generated content.

    • silence7@slrpnk.net (OP) · 4 months ago

      “filter out” is an arms race, and watermarking has very real limitations when it comes to textual content.

      • floofloof@lemmy.ca · 4 months ago

        I’m interested in this but not very familiar. Are the limitations to do with brittleness (not surviving minor edits) and the need for text to be long enough for statistical effects to become visible?

        • silence7@slrpnk.net (OP) · 4 months ago

          Yes. Also, non-native speakers of a language tend to follow word choice patterns similar to those of LLMs, which creates a whole set of false positives in detection.
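
The length limitation discussed in the thread can be made concrete with a toy sketch of statistical watermark detection, in the style of published "green-list" LLM watermarking schemes. All names and parameters below are illustrative assumptions, not any vendor's actual method:

```python
import math

# Toy "green-list" watermark detector. The (hypothetical) watermarker
# biases generation toward a pseudorandom "green" subset covering a
# fraction `gamma` of the vocabulary; the detector checks whether a text
# contains more green tokens than chance alone would predict.

def watermark_z_score(green_count: int, total_tokens: int, gamma: float = 0.25) -> float:
    """z-score of the observed green-token count against the null
    hypothesis of unwatermarked text (expected green rate = gamma)."""
    expected = gamma * total_tokens
    std_dev = math.sqrt(total_tokens * gamma * (1.0 - gamma))
    return (green_count - expected) / std_dev

# The same strong green-token rate (50% observed vs. 25% expected) gives
# a weak signal on a short text but an unambiguous one on a long text:
print(watermark_z_score(10, 20))    # ~2.6: below a cautious threshold like z > 4
print(watermark_z_score(250, 500))  # ~12.9: far beyond any plausible threshold
```

This is why short texts like forum comments are hard to flag reliably, and why light editing (swapping a few "green" tokens for synonyms) can push a borderline score back under the detection threshold.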