Other samples:

Android: https://github.com/nipunru/nsfw-detector-android

Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw

Keras (MIT): https://github.com/bhky/opennsfw2

I feel it’s a good idea for those building native clients for Lemmy to integrate projects like these and run offline inference on feed content for the time being, to cover content that isn’t marked NSFW but should be.
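
For example, with opennsfw2 a client could score an image before rendering it. A minimal sketch, assuming the package is installed (`pip install opennsfw2`); the threshold and file name are placeholders, not recommendations:

```python
# Minimal sketch using opennsfw2 (https://github.com/bhky/opennsfw2).
# The threshold and image path below are arbitrary placeholders;
# tune them per client/instance policy.
import opennsfw2 as n2

NSFW_THRESHOLD = 0.8  # hypothetical cutoff, adjust as needed

def should_blur(image_path: str) -> bool:
    """Return True if the image should be hidden/blurred client-side."""
    probability = n2.predict_image(image_path)  # float in [0, 1]
    return probability >= NSFW_THRESHOLD

if __name__ == "__main__":
    print(should_blur("feed_image.jpg"))
```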

What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?

Edit:

There’s also this, which takes a bit more effort to implement properly but provides a hash that can be used for reporting needs: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX
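
As a rough idea of what that repo enables: the ONNX model and seed matrix have to be converted/extracted yourself, so the file names, 360×360 input size, and preprocessing below are assumptions based on the repo's sample script, not a definitive implementation. A hash like this could be attached to a report instead of the image itself.

```python
# Hedged sketch of computing a NeuralHash with an ONNX model produced by
# AppleNeuralHash2ONNX. "model.onnx" and "neuralhash_128x96_seed1.dat" are
# placeholder names; preprocessing details follow the repo's example and
# may need adjusting.
import numpy as np
import onnxruntime
from PIL import Image

def neural_hash(image_path: str, model_path: str, seed_path: str) -> str:
    session = onnxruntime.InferenceSession(model_path)

    # Seed matrix that projects the 128-dim embedding down to 96 bits.
    seed = np.frombuffer(open(seed_path, "rb").read()[128:], dtype=np.float32)
    seed = seed.reshape([96, 128])

    # Preprocess: RGB, resize, scale to [-1, 1], NCHW layout.
    img = Image.open(image_path).convert("RGB").resize((360, 360))
    arr = np.asarray(img, dtype=np.float32) / 255.0
    arr = (arr * 2.0 - 1.0).transpose(2, 0, 1)[None, ...]

    inputs = {session.get_inputs()[0].name: arr}
    embedding = session.run(None, inputs)[0].flatten()

    # Binarize the projection to get a 96-bit hash, hex-encoded for reporting.
    bits = "".join("1" if v >= 0 else "0" for v in seed.dot(embedding))
    return "{:0{}x}".format(int(bits, 2), len(bits) // 4)
```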

Python package (MIT): https://pypi.org/project/opennsfw-standalone/

  • Gamey
10 months ago

    Yea, just leave the CSAM till someone reports it, great solution!

    • @toothbrush@lemmy.blahaj.zone
10 months ago

Well, that's how it generally worked as far as I know. I'm not saying that you can host illegal stuff as long as no one reports it. I'm saying it's impossible to know instantly if someone's posting something illegal to your server; you'd have to see it first. Or else pretty much the entire internet would be illegal, because any user can upload something illegal at any time, and you'd instantly be guilty of hosting illegal content? I doubt it.