• thedeadwalking4242@lemmy.world
    24 hours ago

    Do these companies actually have a group of people who read through and target specific concepts like this? Seems insane. If I were an intern somewhere punching these filters in, I’d just throw a fit.

    • merthyr1831@lemmy.ml
      23 hours ago

      They usually outsource the menial tasks to people in the global south. I work with someone who had a dating-app startup where they used “AI” to match couples, but it was actually just a university student in Indonesia whom they paid to do 8-hour stints sorting people’s profiles manually.

    • geneva_convenience@lemmy.mlOP
      22 hours ago

      ChatGPT undoubtedly gives very different answers when asked about Palestine than when asked about other human rights violations and/or genocides. Normally ChatGPT loves quoting human rights organisations as expert opinions, but when it comes to Israel, those organisations’ opinions are less convenient than its narrative allows.

      What I think happens is that ChatGPT does not have interns judging it, but an additional oversight AI that looks at the final response and determines whether the emotional tone falls within allowed bounds. If the generator’s response is exceedingly negative about a subject, it will either crash the prompt or keep generating new responses for the user until one passes the emotion check.
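
      For what it’s worth, here is a minimal sketch of the generate-then-screen loop I’m describing, in Python. Everything in it is hypothetical: generate_response, negativity_score, and the threshold are placeholders I made up, not anything known about OpenAI’s actual pipeline.

      ```python
      import random

      # Hypothetical stand-ins -- the real generator and checker are unknown.
      def generate_response(prompt: str) -> str:
          """Placeholder for the main text generator."""
          return random.choice([
              "Human rights organisations have documented serious violations.",
              "The situation is complex and perspectives differ.",
          ])

      def negativity_score(text: str) -> float:
          """Placeholder emotion check; 0.0 = neutral, 1.0 = very negative."""
          return 0.9 if "violations" in text else 0.2

      MAX_NEGATIVITY = 0.7   # arbitrary threshold, purely illustrative
      MAX_RETRIES = 3

      def answer(prompt: str) -> str:
          # Generate, screen with the oversight check, and regenerate if the
          # response falls outside the allowed emotional bounds.
          for _ in range(MAX_RETRIES):
              draft = generate_response(prompt)
              if negativity_score(draft) <= MAX_NEGATIVITY:
                  return draft
          # Give up rather than return a "too negative" answer.
          return "I'm sorry, I can't help with that."

      print(answer("What do human rights organisations say about this?"))
      ```

      The point is just the shape of it: the checker never touches the prompt, it only vetoes or recycles the generator’s output until something within bounds comes out.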