• grue@lemmy.world · 7 months ago

    I think it’s important to note that Linux can be a way to avoid AI, but it doesn’t have to be. Flipping the headline around almost implies that people who do want AI would be missing out by using Linux, but that’s not true at all. In reality, Linux is still better for them too: you can install all the same kind of functionality if you want, but it’s wholly under your control, not Microsoft’s.

    • Lem453@lemmy.ca · 7 months ago

      Self-hosted AI seems like an intriguing option for those capable of running it. Naturally this will always be more complex than paying someone else to host it for you, but it seems like that’s the only way if you care about privacy.

      https://github.com/mudler/LocalAI
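
      For a sense of what that looks like in practice: LocalAI exposes an OpenAI-compatible API, so standard client code can simply be pointed at your own machine instead of a cloud service. A minimal Python sketch, assuming a LocalAI instance is already running on its default port (8080) and treating the model name below as a placeholder for whatever model you have actually pulled:

      from openai import OpenAI

      # Point the standard OpenAI client at the local LocalAI server instead of the cloud.
      client = OpenAI(
          base_url="http://localhost:8080/v1",  # LocalAI's default address (assumed)
          api_key="not-needed",                 # a local instance doesn't check the key
      )

      response = client.chat.completions.create(
          model="llama-3.2-1b-instruct",  # placeholder: use a model you have installed
          messages=[{"role": "user", "content": "Why self-host an LLM?"}],
      )
      print(response.choices[0].message.content)

      Everything in that exchange stays on your own hardware, which is the whole point.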

        • Churbleyimyam@lemm.ee · 7 months ago

        Check out Jan AI. It’s open source and extremely easy to install and run. I run it locally on a 2017 laptop without a dedicated GPU and it works; it just takes longer to generate responses than something like ChatGPT.

    • werefreeatlast@lemmy.world · 7 months ago

      Beautifully stated. Owning the AI personally, just as I own my personal computer (if not more so), is the key.

    • SOB_Van_Owen@lemm.ee · 7 months ago

      That sounds very cool. I’m totally ignorant of the hardware requirements. What sort of minimum setup would such an install take?

      • Avatar_of_Self@lemmy.world · 7 months ago

        It really depends on what model you want to run and how large it is. You can run pretty much any model if you have enough disk space, but a GPU with plenty of VRAM is preferred for ChatGPT-like fast responses. Otherwise, running on an older CPU and system RAM is going to be noticeably slower, especially with larger models that have more parameters to churn through.

        There are some pretty lightweight models out there, but the responses will be more bare-bones and will probably seem ‘less informed’.

        Give GPT4All a try for your first time. It makes installation, configuration, and usage point-and-click and fairly straightforward. For the featured models it shows a short summary and the recommended VRAM, though there are many, many other models available from inside the UI.
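
        If you’d rather script it than click through the GUI, GPT4All also ships Python bindings (pip install gpt4all). A rough sketch, with the caveat that the model filename below is only an example and gets downloaded (several GB) on first run:

        from gpt4all import GPT4All

        # Example model file; GPT4All downloads it on first use if it isn't present.
        # The exact filename is an assumption; pick any model offered in the UI or model list.
        model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

        # chat_session() keeps conversational context between prompts.
        with model.chat_session():
            reply = model.generate("What can I do with a local LLM?", max_tokens=200)
            print(reply)

        On CPU-only hardware this will be slow, as noted above, but it does work.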