• Luke@lemmy.ml
    2 months ago

    This seems like a valuable utility for concealing writing style, though I feel the provided example fails to illustrate the project’s other stated goals: to “prevent biases, ensuring that the content is judged solely on its merits rather than on preconceived notions about the writer” and to “enhance objectivity, allowing ideas to be received more universally”.

    The example given is:

    You: This is a demo of TextCloak!!!

    Model: “Hey, I just wanted to share something cool with you guys. Check out this thing called TextCloak - it’s pretty neat!”

    The model here is injecting bias that wasn’t present in the input (claiming it is cool and neat), adding pointlessly gendered language (“you guys”), and drastically changing the tone (from a fairly technical register to a playful social-media style). These kinds of changes and additions actually increase the likelihood that a reader will form preconceived notions about the writer. (In this case, the writer ends up sounding socially frivolous and oblivious compared to the already neutral input text.)

    This tool would be significantly more useful if it detected and preserved the tone and informational intent of input text.
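
    For example, a prompt along these lines (my own wording, not anything from the repo) would at least ask the model to keep the original register and content intact:

    ```python
    # Hypothetical prompt wording, not TextCloak's actual prompt: the idea is to
    # obscure style while explicitly forbidding tone changes or added opinions.
    PRESERVING_PROMPT = (
        "Paraphrase the following text so that the author's individual writing "
        "style is obscured, but keep the original tone, register, and factual "
        "content. Do not add opinions, enthusiasm, or filler that is not in "
        "the original:\n\n{text}"
    )
    ```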

    • fantasty@programming.dev
      2 months ago

      You could argue that the point is to conceal your identity, and if you suddenly sound like someone else, or it’s just really obvious that you ran your text through an LLM, then it kinda does the job, no? As long as it’s not introducing biases that are connected to your own, it shouldn’t be an issue.

      • Luke@lemmy.ml
        2 months ago

        Right! It’s definitely fulfilling the purpose OP stated in this post, as long as that’s what you’re using it for. I’m just pointing out that it doesn’t do the other things the repo’s readme claims, so that’s something to be aware of.

        • fantasty@programming.dev
          2 months ago

          That’s probably my bad then, because somehow I couldn’t find the part about biases anywhere in the readme. Anyway, I get your point, and reading your comment again I think it’s a fair one.

    • MoonMelon@lemmy.ml
      2 months ago

      Yeah, this is an extremely thin wrapper around GPT4All, literally just feeding in the above prompt (see the sketch after the example below). I’m not knocking the author for trying/sharing this, but you’d probably be better off just installing GPT4All. That said, do you think this would have kept Ted out of prison?

      You: The Industrial Revolution and its consequences have been a disaster for the human race.

      Model: Honestly, I think the whole Industrial Revolution thing has just been a total mess for humanity as a whole. Like, what was supposed to be some kind of progress or advancement ended up causing way more problems than it solved. And don’t even get me started on all the environmental and social issues that came with it… ugh, it’s like we took two steps forward but then tripped over our own feet and fell back a few paces.
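
      For reference, a wrapper like that boils down to just a few lines with the gpt4all Python package (a sketch only; the model name and prompt here are placeholders, not TextCloak’s actual ones):

      ```python
      from gpt4all import GPT4All

      # Minimal sketch of a "thin wrapper": load a local model, wrap the input
      # text in a rewriting instruction, and return whatever the model generates.
      MODEL_NAME = "Meta-Llama-3-8B-Instruct.Q4_0.gguf"  # any model from the GPT4All catalog
      REWRITE_PROMPT = (
          "Rewrite the following text so the author's writing style cannot be "
          "recognized, while keeping the meaning the same:\n\n{text}"
      )

      def cloak(text: str) -> str:
          model = GPT4All(MODEL_NAME)   # downloads the model on first use
          with model.chat_session():    # fresh session, no prior context
              return model.generate(REWRITE_PROMPT.format(text=text), max_tokens=400)

      if __name__ == "__main__":
          print(cloak("This is a demo of TextCloak!!!"))
      ```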