• FooBarrington@lemmy.world · 3 days ago

    > It doesn’t think, meaning it can’t reason.

    • How do you know thinking is required for reasoning?
    • How do you define “thinking” on a mechanical level? How can I look at a machine and know whether it “thinks” or doesn’t?
    • Why do you think it just picks stuff from the training data, when the DeepSeek paper shows that this is false?

    Don’t get me wrong, I’m not an AI proponent or defender. But you’re repeating the same unsubstantiated criticisms that have circulated for the past year, even though we now have data showing you’re wrong on these points.

    • L3ft_F13ld!@lemmy.dbzer0.com · 3 days ago

      Until I can have a human-level conversation where this thing doesn’t simply hallucinate answers, start talking about completely irrelevant stuff, or talk as if it’s still 2023, I won’t see it as a thinking, reasoning being. These things work like autocorrect and fool people into thinking they’re more than that.

      If this DeepSeek thing is anything more than just hype, I’d love to see it. But I am (and will remain) HIGHLY SKEPTICAL until it’s proven beyond a shadow of a doubt, because this whole “AI” thing has been nothing but hype from day one.

      • FooBarrington@lemmy.world · edited · 3 days ago

        > Until I can have a human-level conversation, where this thing doesn’t simply hallucinate answers or start talking about completely irrelevant stuff, or talk as if it’s still 2023, I do not see it as a thinking, reasoning being.

        You can go and do that right now. Not every conversation will rise to that standard, but that’s not true of humans either, so it can’t be a necessary requirement. I don’t know whether current models already reach it more often than the average human does - would reaching that point change your mind?

        > These things work like autocorrect and fool people into thinking they’re more than that.

        No, these things don’t work like autocorrect. Yes, they’re autoregressive (each generated token is fed back in as input), but that’s not the same thing - and mathematical analysis of the model shows it’s not a simple Markov process: the next-token distribution depends on the entire context, not just the last few words. So no, it doesn’t work like autocorrect in any meaningful way.
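
        To make that concrete, here’s a toy sketch (purely illustrative - the words and hand-made embeddings are hypothetical, not from any real model): a first-order Markov “autocorrect” can only condition on the previous word, while even a bare-bones attention score conditions on the entire context.

        ```python
        # Toy illustration: contrast a first-order Markov "autocorrect" with a
        # dot-product attention score. The Markov prediction depends only on the
        # previous word; the attention weights depend on every token in context.
        from collections import Counter, defaultdict
        import math

        corpus = "the cat sat on the mat the dog sat on the rug".split()

        # 1) Markov "autocorrect": P(next word | previous word) from bigram counts.
        bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev][nxt] += 1

        def markov_next(prev_word):
            counts = bigrams[prev_word]
            total = sum(counts.values())
            return {w: c / total for w, c in counts.items()}

        # Same last word -> same prediction, no matter what came before it.
        print(markov_next("the"))

        # 2) Dot-product attention over the whole context window.
        def softmax(xs):
            m = max(xs)
            exps = [math.exp(x - m) for x in xs]
            s = sum(exps)
            return [e / s for e in exps]

        # Hand-made 2-d embeddings, purely for illustration.
        embed = {"the": [1.0, 0.0], "cat": [0.9, 0.4], "dog": [0.2, 1.0],
                 "sat": [0.5, 0.5], "on": [0.1, 0.2]}

        def attention_weights(query, context):
            scores = [sum(q * k for q, k in zip(embed[query], embed[tok]))
                      for tok in context]
            return softmax(scores)

        # Both prefixes end in "the", but the weights differ because the
        # full context differs - unlike the Markov model above.
        print(attention_weights("the", ["the", "cat", "sat", "on", "the"]))
        print(attention_weights("the", ["the", "dog", "sat", "on", "the"]))
        ```

        Real transformers do vastly more than this, of course, but the point stands: the whole context shapes the prediction, which a Markov-style autocorrect can’t do.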

        > If this DeepSeek thing is anything more than just hype, I’d love to see it.

        Great, the papers and results are open and available right now!

        • L3ft_F13ld!@lemmy.dbzer0.com · 3 days ago

          I’ll admit I don’t have the knowledge or understanding for the research paper to mean anything to me. I’ll see where this new model is in a few months’ time, after it’s actually been used and properly tested by people. Then we’ll see if it has meaningfully changed anything or just becomes another forgotten model once the hype dies down.