There’s another round of CSAM attacks, and it’s really disturbing to see those images. What bothers me even more is that they weren’t taken down immediately. There was even some disgusting shithead in the comments who thought it was funny?? the fuck

It’s gone now, but it was up for like an hour?? This really ruined my day, and now I’m figuring out how to download Tetris. It’s really sickening.

  • Kalcifer@lemm.ee · 1 year ago

    How was it handled on Reddit? Did the moderators have to handle it there as well, or did Reddit filter it out beforehand?

      • Kalcifer@lemm.ee · 1 year ago

        Are any of the examples that you provided libre/free and open-source? I wasn’t able to find any info for Google’s, and Cloudflare seems to only offer theirs for free if you are already using Cloudflare’s services. If not the examples that you provided, are there any tools that are libre/free and open-source?

        • shagie@programming.dev · 1 year ago

          No.

          The nature of the checksums and perceptual hashing is kept in confidence between the National Center for Missing and Exploited Children (NCMEC) and the provider. If the “is this classified as CSAM?” service were available as an open-source project, those attempting to circumvent the tool could test modified images against it until the modifications were sufficient to produce a false negative.
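          To illustrate that circumvention risk (a toy sketch only; every name here is hypothetical, not any real tool's API): with oracle access to an open detector, an attacker can loop small perturbations until the detector stops flagging the file.

```python
def evade(image_bytes, is_flagged, perturb, max_tries=1000):
    """Illustrative sketch: given oracle access to a detector
    (is_flagged) and a perturbation function, keep modifying the
    file until the detector returns a false negative."""
    data = bytes(image_bytes)
    for _ in range(max_tries):
        if not is_flagged(data):
            return data  # detector no longer flags the file
        data = perturb(data)
    return None  # gave up within the try budget
```

          This is exactly why the hash databases and matching details stay confidential: without the oracle, the loop has nothing to test against.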

          There are attempts to do “scan and delete”, but this may put server admins in even more legal jeopardy than not scanning at all, since server admins are required by law to report CSAM and preserve the images and associated log files.

          I’d strongly suggest that anyone hosting a Lemmy instance read https://www.eff.org/deeplinks/2022/12/user-generated-content-and-fediverse-legal-primer

          The requirements for hosting providers are laid out in 18 U.S.C. § 2258A: https://www.law.cornell.edu/uscode/text/18/2258A

          (a) Duty To Report.—
          (1) In general.—
          (A) Duty.—In order to reduce the proliferation of online child sexual exploitation and to prevent the online sexual exploitation of children, a provider—
          (i) shall, as soon as reasonably possible after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(A), take the actions described in subparagraph (B); and
          (ii) may, after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(B), take the actions described in subparagraph (B).
          (B) Actions described.—The actions described in this subparagraph are—
          (i) providing to the CyberTipline of NCMEC, or any successor to the CyberTipline operated by NCMEC, the mailing address, telephone number, facsimile number, electronic mailing address of, and individual point of contact for, such provider; and
          (ii) making a report of such facts or circumstances to the CyberTipline, or any successor to the CyberTipline operated by NCMEC.

          (e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined—
          (1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and
          (2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.

        • PM_Your_Nudes_Please@lemmy.world · 1 year ago (edited)

          It will also make it a battle of attrition, because now it’s not only defenders using AI to block CSAM; trolls are using AI to generate it.

          The issue is that these tools typically work by hashing the image (or a specific section of the image) and checking it against a database of known CSAM. That way you never actually need to view the file to compare it to the list. But with AI image generation, that list of known CSAM is essentially useless because trolls can just generate new images.
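          As a rough illustration of that hash-and-compare approach (a toy stand-in, not how real systems like PhotoDNA work; their algorithms are confidential): a perceptual “average hash” reduces an image to a small bit pattern, and a match is any known hash within a few bits.

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (e.g. an 8x8 downscale).
    Returns an int whose bits mark pixels brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(candidate_hash, known_hashes, max_distance=5):
    """True if the candidate is within max_distance bits of any
    hash in the known database."""
    return any(hamming(candidate_hash, k) <= max_distance
               for k in known_hashes)
```

          Exact checksums break on a one-pixel change; perceptual hashes tolerate small edits, which makes them useful against re-encoded copies — but a freshly generated image has no known hash to match in the first place.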

          • fubo@lemmy.world · 1 year ago

            Even without the issue of new AI-generated images, those hash-based scanning tools aren’t available to hobbyist projects like the typical Lemmy instance. If they were given to hobbyist projects, it would be really easy for an abuser to just tweak their image collection until it didn’t set off the filter.

            • snowe@programming.dev · 1 year ago

              You can use CloudFlare’s CSAM scanning tool completely for free. You can’t get access to the hashes, which would allow what you are talking about.

              • fubo@lemmy.world · 1 year ago

                Sure, for Lemmy instances that are Cloudflare customers. But I don’t think it can be integrated into the Lemmy code by default.

                • snowe@programming.dev · 1 year ago

                  No, it can’t, and it shouldn’t be. It’s better to stop the CSAM before it ever reaches any server you control rather than wait and then have to deal with it.

            • fubo@lemmy.world · 1 year ago

              On the other hand, if the people who want those images can satisfy their urges using AI fakes, that could mean less spreading of images of actual abuse. It might even mean less abuse happening.

              However, because they’re terrible people, I have to suspect that’s not the case.