• mitchell@lemmy.ca · 1 year ago

    Adam Something uploaded a video that starts with the definition of intelligence itself and then explains why the fact that something “acts” intelligent doesn’t mean it “is” intelligent.

    • Veraticus@lib.lgbt · 1 year ago

      I think even “intelligence” here is a stretch. In a very narrow sense, it is intelligent: it creates text, simulates conversations, answers questions. But that is not what intelligence is (and that is all LLMs can do).

      • BitSound@lemmy.world · 1 year ago

        “Simulating conversations” to a good enough degree requires intelligence. Why are you drawing a distinction here?

        • Veraticus@lib.lgbt · 1 year ago

          What a silly assertion. Eliza was simulating conversations back in the 60s; it was no more intelligent than the current crop of chatbots.

          • BitSound@lemmy.world · 1 year ago

            This is an unfortunate and all-too-common misunderstanding. I’ve also seen comments like “It’s no more intelligent than a dictionary”. Try asking Eliza to summarize a PDF for you, and then ask follow-up questions based on that summary. Then ask it to list a few flaws in the reasoning in the PDF. LLMs are so completely different from Eliza that I think you fundamentally misunderstand how they work. You should really read up on them.

            • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

              Give Eliza equivalent compute time and functionality to interpret the data type and it probably could get something approaching a result. Modern LLMs really benefit from massive amounts of available compute and from being able to “pre-compile” via training.

              They’re not, in and of themselves, intelligent. That’s not something that is seriously debated academically, though the dangers of humans misperceiving them as such very much are. They may be a component of actual artificial intelligence in the future, and they are amazing tools that I’m getting some hands-on time with, but the widespread labeling of them as “AI” is pure marketing.

              • BitSound@lemmy.world · 1 year ago

                Give Eliza equivalent compute time and functionality to interpret the data type and it probably could get something approaching a result.

                Sorry, but this is simply incorrect. Do you know what Eliza is and how it works? It is categorically different from LLMs.
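
                For reference, ELIZA’s entire mechanism is a short list of keyword rules with canned reassembly templates. Here’s a rough Python sketch of the idea (not Weizenbaum’s actual code, just its shape):

                ```python
                import random
                import re

                # ELIZA-style rules: a regex "decomposition" pattern plus canned
                # "reassembly" templates. There is no model and no learning anywhere;
                # extra compute just runs the same string matching faster.
                RULES = [
                    (re.compile(r"\bI need (.*)", re.I),
                     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
                    (re.compile(r"\bI am (.*)", re.I),
                     ["How long have you been {0}?", "Why do you think you are {0}?"]),
                ]
                DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

                def eliza_respond(text: str) -> str:
                    for pattern, templates in RULES:
                        match = pattern.search(text)
                        if match:
                            return random.choice(templates).format(*match.groups())
                    return random.choice(DEFAULTS)

                print(eliza_respond("I am sure this chatbot is intelligent"))
                # e.g. -> "Why do you think you are sure this chatbot is intelligent?"
                ```

                An LLM, by contrast, is a learned next-token predictor over billions of parameters; nothing in a rule table like the one above scales into that.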

                That’s not something that is seriously debated academically

                This is also incorrect. I think the issue that many people have is that they hear “AI” and think “superintelligence”. What we have right now is indeed AI. It’s a primitive AI and certainly no superintelligence, but it’s AI nonetheless.

                There is no known reason to think that the approach we’re taking now won’t eventually lead to superintelligence with better hardware. Maybe we will hit some limit that makes the hype die down, but there’s no reason to think that limit exists right now. Keep in mind that although this is apples vs oranges, GPT-4 is a fraction of the size of a human brain. Let’s see what happens when hardware advances give us a few more orders of magnitude. There’s already a huge, noticeable difference between GPT-3.5 and GPT-4.
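
                As a back-of-the-envelope sketch of that gap (both numbers below are loose assumptions: the synapse count is a common rough estimate, and GPT-4’s parameter count is unpublished, so 1e12 is only a placeholder):

                ```python
                import math

                # Back-of-the-envelope scale comparison; apples vs oranges, as noted.
                HUMAN_SYNAPSES = 1e14    # ~100 trillion synapses (rough estimate)
                GPT4_PARAMETERS = 1e12   # assumed for illustration; not an official figure

                gap = math.log10(HUMAN_SYNAPSES / GPT4_PARAMETERS)
                print(f"~{gap:.0f} orders of magnitude apart")  # -> ~2
                ```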

                • webghost0101@sopuli.xyz · 1 year ago

                  To add something: as you mentioned, GPT-4’s “neurons” are only a fraction of a human brain’s.

                  The entire human brain runs on 10-20 watts; that’s about a single light bulb’s worth of power to do all the computing needed for conscious intelligence.

                  It’s crazy how optimized natural life is, and we have a lot left to learn.
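
                  For a rough sense of scale (the GPU figure below is an assumption for illustration; ~700 W is the published peak power of a single modern datacenter GPU, and big models run on thousands of them at once):

                  ```python
                  # Rough power comparison between a brain and one datacenter GPU.
                  BRAIN_WATTS = 20    # upper end of the 10-20 W estimate above
                  GPU_WATTS = 700     # one high-end datacenter GPU (assumed figure)

                  print(f"One GPU draws ~{GPU_WATTS / BRAIN_WATTS:.0f}x a human brain")
                  # -> ~35x, before counting the thousands of GPUs used in training
                  ```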

                  • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

                    It’s crazy how optimized natural life is, and we have a lot left to learn.

                    It’s a fun balance of both excellent and terrible optimization. The higher level of noise is a feature and may be a significant part of what shapes our personalities and our ability to create novel things. We can do things with our meat-computers that are really hard to approximate in machines, despite having much slower and lossier interconnects (not to mention much less reliable memory and sensory systems).

                • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

                  Sorry, but this is simply incorrect. Do you know what Eliza is and how it works? It is categorically different from LLMs.

                  I did not mean to come across as stating that they were the same, nor that the results produced would be as good. Merely that a PDF could be run through OCR and processed into a script for ELIZA, which could then produce some response to requests for a summary (e.g., providing the abstract).
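
                  Something like this hypothetical sketch is the kind of glue I have in mind (the regex and helper are invented purely for illustration; this is not any real ELIZA implementation’s API):

                  ```python
                  import re

                  # Preprocess OCR'd PDF text into a canned "script" entry that an
                  # ELIZA-style keyword matcher can parrot back on request.
                  def build_summary_rule(ocr_text: str):
                      # Naively grab the abstract section from the OCR output.
                      match = re.search(r"Abstract\s*(.+?)(?:\n\s*\n|Introduction)",
                                        ocr_text, re.I | re.S)
                      abstract = match.group(1).strip() if match else "No abstract found."
                      # The "rule": answer any summary request with the abstract, verbatim.
                      return re.compile(r"\bsummar(y|ize|ise)\b", re.I), abstract

                  pattern, canned_reply = build_summary_rule(
                      "Abstract\nWe study widgets.\n\nIntroduction\n..."
                  )
                  if pattern.search("Please summarize the PDF"):
                      print(canned_reply)  # -> "We study widgets."
                  ```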

                  My point is that these technologies, though fundamentally different and at very different levels of technological sophistication, can both accomplish the task at a high level. Both the quality of the result and the capabilities beyond the surface level are very different. However, both would be able to produce one, working within their architectural constraints.

                  Looking at it this way also gives a good basis for comparing LLMs to intelligence. Both, at a high level, can accomplish many of the same tasks, but context matters in more than a syntactic sense, and LLMs lack the capability for understanding and comprehension of the data that they are processing.

                  This is also incorrect.

                  That paper is both solely phenomenological and states that it is not using an accepted definition of intelligence. On the former point, there’s a significant risk of fallacy in such observation, as it is based on subjective observation of behavior, not empirical analysis of why the behavior is occurring. For example, leatherette may approximate the appearance and texture of leather but, when examined, it differs fundamentally at both the macroscopic and microscopic levels, making it objectively incorrect to call it “leather”.

                  I think the issue that many people have is that they hear “AI” and think “superintelligence”. What we have right now is indeed AI. It’s a primitive AI and certainly no superintelligence, but it’s AI nonetheless.

                  Here, we’re really getting into semantics. As the authors of that paper noted, they are not using a definition that is widely accepted academically. Though they definitely have a good point about some of the definitions being far too anthropocentric (e.g., “being able to do anything that a human can do” - really, that’s a shit definition). I would certainly agree with the term “primitive AI” if it’s used akin to programming primitives (int, char, float, etc.), as it is clear that LLMs may be useful components in building actual general intelligence.

                  • BitSound@lemmy.world · 1 year ago

                    processed into a script for ELIZA

                    That wouldn’t accomplish anything. I don’t know why the OP brought it up, and that subject should just get dropped. Also, yes: you can use your intelligence to string together multiple tools to accomplish a particular task. Or you can use the intelligence of GPT-4 to accomplish the same task, without any other tools.
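
                    The contrast looks something like this in practice (a sketch using the openai Python client; the model name and prompt are illustrative, and pdf_text is assumed to come from elsewhere):

                    ```python
                    from openai import OpenAI

                    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
                    pdf_text = "..."   # placeholder: text extracted from the PDF elsewhere

                    # One request handles summary, follow-ups, and critique; no rule
                    # tables, no task-specific glue code.
                    response = client.chat.completions.create(
                        model="gpt-4",
                        messages=[{
                            "role": "user",
                            "content": f"Summarize this, then list flaws in its reasoning:\n{pdf_text}",
                        }],
                    )
                    print(response.choices[0].message.content)
                    ```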

                    LLMs lack the capability of understanding and comprehension

                    Also not true.

                    states that it is not using an accepted definition of intelligence.

                    Nowhere does it state that. It says “There is no generally agreed upon definition of intelligence”. I’m not sure why you’re bringing up a physical good such as leather here. Two things: a) grab a microscope and inspect GPT-4; the comparison doesn’t make sense. b) “Is” should be banned; it encourages lazy thought and pointless discussion (yes, I’m guilty of it in this comment, but it helps when you really start asking what “is” means in context). You’re wandering into p-zombie territory, and my answer is that “is” means nothing. GPT-4 displays behaviors that are useful because of their intelligence, and nothing else matters from a practical standpoint.

                    it is clear that LLMs may be useful components in building actual general intelligence.

                    You’re already staring the actual general intelligence in the face; there’s no need to speculate about LLMs perhaps being components. There’s no reason right now to think that we need anything more than better compute. The actual general intelligence is still a baby, and it has experienced the world only through the tiny funnel of human text, but that will change with hardware advances. Let’s see what happens with a few orders of magnitude more computing power.


    • BitSound@lemmy.world · 1 year ago

      That’s kind of silly semantics to quibble over. Would you tell a robot hunting you down “you’re only acting intelligent, you’re not actually intelligent!”?

      People need to get over themselves as a species. Meat isn’t anything special; it turns out silicon can think too. Not in quite the same way, but it still thinks in ways that are useful to us.