A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • HugeNerd@lemmy.ca
    1 day ago

    So the article thinks the “fancy autocomplete” meme is a disingenuous take:

    “LLMs don’t have human understanding or metacognition”

    Then what’s the (auto-completing) fucking problem? It’s just a series of steps on data. You could feed it white noise and it would vomit up more noise. And keep doing it as long as there’s power.

    Intelligent?

    • SuspciousCarrot78@lemmy.world
      20 hours ago

      If it were just autocomplete in the dismissive sense, white noise should make it derail into more white noise. Instead it tries to make sense of the input. Why? Because it learned strong language priors from us, and it leans on those priors when the prompt is meaningless.
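      The “leans on priors” point can be illustrated with a toy model. This is nothing like a real LLM, just an analogy I made up for the sketch: a bigram model trained on a scrap of text falls back on the word frequencies it learned whenever the context is meaningless, instead of emitting more noise.

      ```python
      import random
      from collections import Counter, defaultdict

      # Toy corpus; in a real LLM the "priors" come from terabytes of text.
      corpus = "the cat sat on the mat the dog sat on the rug".split()

      # Learn bigram (pair) statistics and unigram (overall) frequencies.
      bigram = defaultdict(Counter)
      unigram = Counter(corpus)
      for a, b in zip(corpus, corpus[1:]):
          bigram[a][b] += 1

      def next_word(prev, rng):
          # Known context: sample from the learned bigram statistics.
          if prev in bigram:
              words, counts = zip(*bigram[prev].items())
              return rng.choices(words, counts)[0]
          # Meaningless context ("white noise"): fall back on the
          # unigram prior rather than producing noise.
          words, counts = zip(*unigram.items())
          return rng.choices(words, counts)[0]

      rng = random.Random(0)
      print(next_word("xq7#", rng))  # a noise prompt still yields a real word
      ```

      Even this trivial model never “derails into noise” for a garbage prompt; it can only emit words from the distribution it learned.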

      “Not human understanding” ≠ “no reasoning-like computation.”

      Those aren’t the same thing.

      People doing the “fancy autocomplete” thing are making the laziest possible move: not human, therefore nothing interesting is happening. I disagree with that.

      It doesn’t “understand” like we do, and it’s not infallible, but calling it “fancy autocomplete” is like calling a jet engine a “fancy candle.”

      Same category of thing, wildly different behavior.

      • HugeNerd@lemmy.ca
        11 hours ago

        Instead it tries to make sense of the input. Why? Because it learned strong language priors from us, and it leans on those priors when the prompt is meaningless.

        No, it doesn’t. You’re in sci-fi land. There is no “it” “trying to make sense”. That cogitation is happening in YOU, not the motherboard.

        • SuspciousCarrot78@lemmy.world
          11 hours ago

          “The cogitation is happening in YOU” is just the philosophical zombie argument dressed up as a gotcha. Sure, there’s no ghost in the machine - but that’s true of your neurons too. Your brain is also “just” electrochemical signals on wet hardware. Does that mean your understanding is happening somewhere else?

          The point isn’t whether there’s a homunculus sitting inside the GPU having feelings. The point is that the functional operations happening - maintaining context, resolving ambiguity, applying something structurally similar to inference across novel inputs - are more than pattern-matching in the dismissive sense people mean when they say “autocomplete.”

          • Iconoclast@feddit.uk
            10 hours ago

            Sure, there’s no ghost in the machine - but that’s true of your neurons too.

            Touché.

            Intelligence doesn’t require a “self,” and we’re living proof of that. The ways LLMs and humans operate have far more in common than people like to admit. We’re just holding AI to higher standards.