Also known as snooggums on midwest.social and kbin.social.

  • 0 Posts
  • 1.91K Comments
Joined 3 years ago
Cake day: July 2nd, 2023




  • By design, LLMs can get faster but cannot get more accurate without a massive, intentional effort to verify their training data, which isn’t feasible because LLMs don’t understand context and verification would run up against anything that isn’t fact-based. Basically, the training approach means they get filled with whatever the builders can get their hands on, and then they fall back on web searches, which return all kinds of unreliable stuff because LLMs have no way of judging a source’s reliability.

    Even if they were perfect, they would not be able to keep up with the flood of new information published every minute when used as general-purpose answer-anything tools.

    What AI actually excels at is pattern matching in controlled settings.