• TachyonTele@lemm.ee
    64 up · 4 down · 2 months ago

    You can’t turn a spicy autocorrect into anything even remotely close to Jarvis.

    • Aatube@kbin.melroy.org
      13 up · 8 down · 2 months ago

      It’s not autocorrect, it’s a text predictor. So I’d say you could definitely get close to JARVIS, especially when we don’t even know why it works yet.

      • Zangoose@lemmy.world
        19 up · 4 down · 2 months ago

        You’re just being pedantic. Most autocorrects/keyboard autocompletes rely on text predictors to function. Look at the 3 suggestions on your phone keyboard whenever you type; that’s also a text predictor (granted, a much simpler one).
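
        For instance, a toy version of that simpler kind of predictor is just counting which word most often follows the one you typed last. A rough Python sketch (the corpus and names are made up for illustration; this isn't how any particular keyboard actually implements it):

        ```python
        # Toy "keyboard suggestion" predictor: count which word most often
        # follows the previous word, then offer the top few as suggestions.
        from collections import Counter, defaultdict

        corpus = "i am on my way . i am at home . on my way home now"

        # Count how often each word follows each other word.
        following = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1

        def suggest(prev_word, k=3):
            """Return up to k likely next words, like the 3 keyboard suggestions."""
            return [w for w, _ in following[prev_word].most_common(k)]

        print(suggest("my"))  # ['way']
        print(suggest("i"))   # ['am']
        ```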

        Text predictors (obviously) predict text, and as such don’t have any actual understanding of the text they are outputting. An AI that doesn’t understand its own outputs isn’t going to achieve anything close to a sci-fi depiction of an AI assistant.

        It’s also not like the devs are confused about why LLMs work. If you had every publicly uploaded sentence since the creation of the Internet as a training reference, I would hope the resulting model would be a pretty good autocomplete, even to the point of being able to answer some questions.

        • Aatube@kbin.melroy.org
          4 up · 11 down · 2 months ago

          Yes, autocorrect may use text predictors. No, that does not make text predictors “spicy autocorrect”. The denotation may be correct, but the connotation isn’t.

          Text predictors (obviously) predict text, and as such don’t have any actual understanding of the text they are outputting. An AI that doesn’t understand its own outputs isn’t going to achieve anything close to a sci-fi depiction of an AI assistant.

          There’s a large philosophical debate about whether we actually know what we’re thinking, but I’m not going to get into that. All I’ll point to is the Chinese room thought experiment, which posits that an AI may not need to understand things in order to show enough apparent intelligence for most purposes.

          It’s also not like the devs are confused about why LLMs work.

          Yes, they are. All they know is that if you keep training a text predictor, it plateaus at a level of usability well below the target, and then at some point it suddenly surpasses that plateau for no apparent reason.