Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

  • Spzi

    I guess you’re right, but I find this a very interesting point nevertheless.

    How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?

    For the sake of the comparison, we should talk about the presumed intelligence of other people, not our (“my”) own.

    • Utsob Roy

      In the case of current LLMs, we can tell. These LLMs are not black boxes to us. It is hard to follow the threads of their decisions because those decisions are just a hodgepodge of statistics and randomness, not because they are intricate thoughts. (A rough sketch of what such a “decision” amounts to is at the end of this comment.)

      We probably can’t compare the outputs, but we can compare the learning. Imagine a human who had consumed all the literature, ethics, history, and every other kind of text that these LLMs have: no amount of trick questions would get them to endorse racial cleansing or any similarly disconcerting idea. LLMs read so much, and learned so little.
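
      To make the “statistics and randomness” point concrete, here is a minimal, hypothetical sketch (in Python, not any real model’s code) of what picking the next token amounts to: a softmax turns the model’s scores into a probability distribution, and a weighted random draw picks one token from it.

          import numpy as np

          def sample_next_token(logits, temperature=1.0, rng=None):
              # The "decision": convert scores into a probability
              # distribution (statistics), then draw from it (randomness).
              rng = rng or np.random.default_rng()
              scaled = np.asarray(logits, dtype=float) / temperature
              scaled -= scaled.max()                         # numerical stability
              probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
              return rng.choice(len(probs), p=probs)

          # Toy example: made-up logits over a 5-token vocabulary.
          logits = [2.0, 1.0, 0.5, -1.0, -3.0]
          print(sample_next_token(logits, temperature=0.7))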