• hendrik@palaver.p3x.de · 2 days ago

    Why does OpenAI “have” everything and just sit on it, instead of publishing a paper or something? They supposedly have a watermarking solution that could help make the world a better place and get rid of some of the slop out there… They have a definition of AGI… Yet they release none of it…

    Some people even claim they already have a secret AGI, or that ChatGPT 5 will surely be it. I can see how that boosts the company’s valuation, and why you’d better not tell the truth. But with all the other things, it’s just silly not to share anything.

    Either they’re even more greedy than the Metas and Googles out there, or all the articles and “leaks” are just unsubstantiated hype.

    • Tattorack@lemmy.world · 2 days ago

      Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.

    • mint_tamas@lemmy.world · 2 days ago

      Because they don’t have all the things they claim to have, or only with significant caveats. These claims are publicised to fuel the hype that attracts investor money, which is pretty much the only way they can raise money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.

    • Phoenixz@lemmy.ca · 2 days ago

      They don’t have AGI. AGI also won’t happen for many years to come.

      What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That’s it.
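      To make the “predict the next word” claim concrete, here is a minimal toy sketch of next-token prediction. A real LLM learns probabilities over a huge vocabulary from training data; the hard-coded table and function names below are purely illustrative assumptions, not anyone’s actual implementation.

```python
import random

# Toy "model": probabilities of the next token given a two-word context.
# A real LLM computes these from billions of learned parameters.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
}

def predict_next(context):
    """Greedy decoding: return the single most probable next token."""
    probs = next_token_probs[context]
    return max(probs, key=probs.get)

def sample_next(context, rng=random):
    """Sampling: draw a next token in proportion to its probability."""
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

print(predict_next(("the", "cat")))  # prints: sat
```

      Generation is just this step repeated: append the chosen token to the context and predict again. That loop is the entirety of the mechanism being described.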

      AGI is a completely different beast from the current crop of LLMs.

      • hendrik@palaver.p3x.de · 2 days ago

        You’re right: the current LLM approach has some severe limitations. If we ever achieve AGI, it’ll probably be built on something that hasn’t been invented yet, and most experts predict it will take years rather than happen overnight. I don’t really agree with the “statistical” part, though. That alone doesn’t rule anything out; I haven’t seen any mathematical proof that a statistical predictor can’t be AGI. That’s just something non-experts often say. The current LLMs have other, more concrete limitations as well.

        Plus, I don’t have much use for something that does homework assignments for me. If we’re dreaming about the future anyway: I’m waiting for an android that can load the dishwasher, dust the shelves, and do the laundry for me. That would be massively useful.