• Flying Squid@lemmy.world · 10 months ago

      You mean OpenAI didn’t just create a superintelligent artificial brain that will surpass all human ability and knowledge and make our species obsolete?

      • kescusay@lemmy.world · 10 months ago

        The funny thing is, last year when ChatGPT was released, people freaked out about the same thing.

        Some of it was downright gleeful. Buncha people told me my job (I’m a software developer) was on the chopping block, because ChatGPT could do it all.

        Turns out, not so much.

        I swear, I think some people really want to see software developers lose their jobs, because they hate what they don’t understand, and they don’t understand what we do.

        • enkers@sh.itjust.works · 10 months ago

          As a software developer, I do want to see software developers lose their jobs to AI. This shouldn’t be surprising, as the purpose of a lot of software development is to put other people out of a job via automation, and that’s fundamentally a good thing. The alternative is like wanting a return to preindustrial society. Automation generally raises quality of life.

          The real problem is that we still haven’t figured out how to distribute the benefits of society’s automation efforts equitably so that they raise quality of life for everyone.

          • nicetriangle@kbin.social · 10 months ago

            Yeah, that would be all fine and well if it meant we were on track for some post-work egalitarian utopia, but you and I both know that’s not at all where this is heading.

            • FaceDeer@kbin.social · 10 months ago

              Unfortunately, based on what I know of history, it seems likely that humanity won’t ever be on track to build a post-work egalitarian utopia until we’ve got no other option left. So I support going ahead with this tech, because it seems like a good way to force the issue. The transition period will be rough, but better than stagnation IMO.

    • kromem@lemmy.world · 10 months ago

      Because that’s the version that gets posted and gets clicked on.

      A dry technical writeup looking at the project’s name, how it indicates a different approach more in line with DeepMind’s work, and what that means in the context of doing high-school-level math is going to be interesting to only a handful of people.

      But an article that’s contentious and gets hundreds of comments, ranging from “AI is BS” to “AI is dangerous”, all arguing with each other, drives engagement.

  • Korne127@lemmy.world · 10 months ago

    Connecting superintelligence to the board’s recent actions, which Sutskever initially supported, might be a stretch.

    Why do you do that in your headline then?

  • toothbrush@lemmy.blahaj.zone · 10 months ago

    Just BS. They are trying to come up with an explanation for why Altman was fired that is not “we caught him doing lots of illegal stuff.”

    • GONADS125@lemmy.world · 10 months ago

      I think it’s a hype move at this point. Like the guy who claimed he believed Google’s chatbot was sentient.

      I read another article stating they had a computational breakthrough, in which their program can now carry out basic grade-school math. No other model can actually carry out math calculations, not even basic arithmetic.

      This is a significant development, but it’s not like they’re on the cusp of developing superintelligence now. I bet they are taking this small inch towards superintelligence and hyping it like they’ve just hurtled miles forward.

      • dustyData@lemmy.world · 10 months ago

        The thing is, this could actually be a several-mile jump. But where they want to go is not the grocery store down the road; they are trying to fly to another galaxy. This is more like hyping up that you are going to land on the moon next year, at a time when you’ve just figured out that rubbing two sticks together makes a fire. Technically it’s truly a leap, but we are so far away still.

        • GONADS125@lemmy.world · 10 months ago

          Technically it’s truly a leap, but we are so far away still.

          I completely agree and was trying to convey that. Not trying to downplay the significance of the development, but they are far from superintelligence and they’re going to hype it up as much as they can.

      • Siegfried@lemmy.world · 10 months ago

        Is that the chatbot they had to shut down because it wandered a little too much into 4chan?

      • Korne127@lemmy.world · 10 months ago

        The worst part about it is that there have already been two winters in AI development, in the early 2000s and sometime in the 70s/80s, I think, because of exactly this: they always hyped up AI and said it would solve all the world’s problems in a short time, and when that obviously didn’t happen, people got disappointed in it and pulled funding…

        • R0cket_M00se@lemmy.world · 10 months ago

          Well, the models we have now are already useful for things, so it’s unlikely they’ll just disappear now.

          We didn’t have the computing technology to make it happen back then; they just didn’t know it at the time.

          • Korne127@lemmy.world · 10 months ago

            That’s not my point. We already had good AIs and a lot of development in that area of research 50 years ago. Chess computers started beating the best humans around the turn of the millennium. It’s not a particularly new field. But the development and research of artificial intelligence has already completely stopped twice, and each time it took over a decade for research in the field to really pick up again.
            The reason this happened is overpromising: even when they succeeded at some things, they had promised way too much. If they keep promising way too much in the current AI hype as well, I can see the exact same thing happening again: people getting disappointed and the field getting isolated for another decade.
            I’m not saying the current successes will disappear, but future development might, for a good while, just as it happened back then.

            • R0cket_M00se@lemmy.world · 10 months ago

              None of the previous stabs at AI were more than a parlour trick. Modern AI is capable not only of full and natural conversations, but has the unique ability to turn that into completing tasks, based on how well the human operator can describe the problem and explain the proposed solution.

              It’s not always perfect, but it gets close enough for a professional to make use of it by cutting out the research phase of any given project, or by getting the bulk of the work done without the hours it would have taken to do it manually. Refining the solution might take ten to fifteen minutes, but you don’t have to be a math genius to see the benefits. Plus, the models we have now are exploding in niche use cases. We have image generation, voice generation, code generation, all at near-human standards. I’ve had it walk me through deploying Python scripts via VS Code, then walk me through setting up a Git repository, then I asked it to take me through a DnD/choose-your-own-adventure scenario with specific choices having consequences down the line. It was a little basic, but I gave it a pre-established universe and the general premise; it researched the rest on its own and used what it found of the universe to fill in the gaps in ways I hadn’t even suggested.

              That last one isn’t a productive use case, sure. The point is that what we have now isn’t just some one-off computer like a chess bot or a Smash Bros CPU set to its highest level; it’s a seed for every future machine-learning algorithm that will be used to design models for special scenarios. It’s become ingrained in our society now, and it’s unlikely to just disappear like the earlier efforts you’re describing.

    • Otter@lemmy.ca · 10 months ago

      CEO ousting shenanigans = 📉

      Release rumor = 📈

      They’re not publicly traded, but I assume public sentiment still has an effect on things (e.g. partnerships, users buying memberships, etc.)

    • boatswain@infosec.pub · 10 months ago

      self replicating the propaganda?

      You can’t self-replicate anything other than yourself. You replicate things; we use “self-replicating” because it’s shorthand for “thing that replicates itself.”

  • Melt@lemm.ee · 10 months ago

    Hope it replaces the most expensive job position: CEO

  • ZILtoid1991@kbin.social · 10 months ago

    The “superintelligence” in question: the same old tech, but with a larger context window, which will make it hallucinate a bit less often.

      • mrnotoriousman@kbin.social · 10 months ago

        Not really. The headline is garbage but that statement is not even close to accurate unless you know nothing about the actual topic.

        • NumbersCanBeFun@kbin.social · 10 months ago

          Anyone who has spent any time with AI knows that it can easily lose track of context and essentially “hallucinate” a back story to fill in the missing context it lost.

          I have experienced this discussing specific parts of a story I was writing. When I asked the AI to remember a certain detail, it made the entire thing up but swore it remembered it correctly.

    • kromem@lemmy.world · 10 months ago

      Not really.

      If its name is Q*, then it seems likely that it’s a combination of Q-learning and A* search, which indicates that this is an approach similar to DeepMind’s AlphaZero, as opposed to a transformer-based LLM.

      In that context, getting it to solve high-school-level math questions is pretty nuts.

      Though the details matter and right now all the articles discussing it are missing those, so we’ll have to wait and see.
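
      For anyone unfamiliar with the terms, here’s a minimal, purely illustrative sketch of tabular Q-learning, the “Q” half of that speculated name. To be clear, this is not OpenAI’s actual method, and the `env` interface (`reset`, `step`, `actions`) is a hypothetical stand-in for any environment:

      ```python
      # Illustrative only: tabular Q-learning. The env interface below
      # (reset/step/actions) is a hypothetical stand-in, not a real API.
      import random
      from collections import defaultdict

      def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
          """Learn Q(state, action) values from interaction with env."""
          q = defaultdict(float)  # (state, action) -> estimated return

          for _ in range(episodes):
              state = env.reset()
              done = False
              while not done:
                  # Epsilon-greedy: mostly exploit the best-known action,
                  # occasionally explore a random one.
                  if random.random() < epsilon:
                      action = random.choice(env.actions(state))
                  else:
                      action = max(env.actions(state), key=lambda a: q[(state, a)])

                  next_state, reward, done = env.step(action)

                  # Core update: nudge Q(s, a) toward the bootstrapped target
                  # reward + gamma * max_a' Q(s', a').
                  best_next = 0.0 if done else max(
                      q[(next_state, a)] for a in env.actions(next_state)
                  )
                  q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                  state = next_state
          return q
      ```

      A* would be the other half of the speculation: a best-first graph search guided by a heuristic, which is roughly the role that tree search plays in AlphaZero.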

  • Tattorack@lemmy.world · 10 months ago

    Alright, so the article really doesn’t prove anything; it just says OpenAI claims something and then fills the rest with words.

    Let’s be clear here: we don’t even have AGI. That is to say, artificial general intelligence, a man-made intelligence that is at least as capable and general-purpose as human intelligence.

    That would be an intelligence that is self-aware and can actually think and understand. Data from Star Trek would be an AGI.

    THESE motherfuckers are now claiming they made a breakthrough on potentially creating an SI, a superintelligence: an artificial, man-made intelligence that not only has the self-awareness and understanding of an AGI, but is vastly more intelligent than a human, and likely has awareness that surpasses human awareness.

    I think not.

  • RiikkaTheIcePrincess@kbin.social · 10 months ago

    Why do I keep looking at these threads? The way people talk about this stuff on all sides is so asinine. Nearly every good point is accompanied by missing a big one or just ricocheting off the good one, flying off into space and hitting a fully automated luxury gay space communist. Hopes, dreams, assumptions, and ignorance all just headbutting each other and getting nowhere.

    Oh yeah, I wanted to know what “superintelligence” was and whether I should care. Welp.

    • Dadifer@lemmy.worldOP · 10 months ago

      I think the takeaway is that they’re trying to create an LLM that can answer questions it wasn’t trained on.

  • reflex@kbin.social · 10 months ago

    Yawn.

    Let me know when we get a real Terminator or Matrix or Space Odyssey situation.

  • Amir@lemmy.ml · 10 months ago

    The whole organizational structure and how it functions is just not so smart after all. Has the management team considered the Lean methodology for their business objectives?

    • FaceDeer@kbin.social · 10 months ago

      The problem that precipitated all this is that they don’t have business objectives. They have a “mission.” The board of directors of OpenAI aren’t beholden to shareholders, and though the staff mocked the board’s statement that allowing the company to be destroyed “would be consistent with the mission”, it’s actually true.