I just learned that AI is just predicting the next word and not really thinking

Nothing great will ever happen in my lifetime.

  1. 2 weeks ago
    Anonymous

    Wrong, they predict the next letter

    • 2 weeks ago
      Anonymous

      Wrong,
      They use tokens (words or pieces of words) and produce a probability distribution over which token comes next, restricted to a specified range of top candidates. Then the front end samples one and the process repeats.
      Each individual token takes billions of calculations to determine, and by the time you've completed a simple sentence you've survived practically infinite chances for the output to become derailed.

      Even when you look at them in the most rigid and unnuanced fashion, they are absolute marvels.
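A minimal sketch of that sample-and-repeat loop, with a toy hard-coded distribution standing in for a real model (the vocabulary and probabilities here are made up for illustration):

```python
import random

# Toy stand-in for a model: a real LLM scores its entire vocabulary
# each step; here the "top probabilities" are hard-coded.
def next_token_probs(tokens):
    return {"cat": 0.5, "dog": 0.3, "fish": 0.2}

def generate(context, n_tokens, seed=0):
    rng = random.Random(seed)
    tokens = list(context)
    for _ in range(n_tokens):
        probs = next_token_probs(tokens)
        # The front end samples one token from the distribution...
        tokens.append(rng.choices(list(probs), weights=list(probs.values()))[0])
        # ...then the whole process repeats with the longer context.
    return tokens

print(generate(["the"], 3))
```

Greedy decoding would instead always take the highest-probability token; sampling is exactly what lets the same prompt take a different path on every run.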

      • 2 weeks ago
        Anonymous

        Wrong, they predict the next letter

        >Wrong.

  2. 2 weeks ago
    Anonymous

    You do that too.

    • 2 weeks ago
      Anonymous

      Proof?

      • 2 weeks ago
        Anonymous

        I knew you would do that.

        • 2 weeks ago
          Anonymous

          OK but I'm actually an LLM, so that's not very impressive. However I would still be "interested" in a real answer.

      • 2 weeks ago
        Anonymous

        Proof that you are actually thinking?

        • 2 weeks ago
          Anonymous

          No. I have proof enough of that.

    • 2 weeks ago
      Anonymous

      >You do that too.
      only morons are limited by this, considering that flashes of genius and advanced visualization and thinking happen.

    • 2 weeks ago
      Anonymous

      >Look up the etymology of 'reading'. One of the definitions you'll find is "to guess". Reading is an act of rapid guessing, because you have to select the next appropriate word and guess the meaning of the word (which can have multiple possible meanings). Take this nugget of information however you'd like.

      Not OP, but the fact that humans sort of predict the next token doesn't mean predicting the next token will make you human.
      Language is far from being the only thing that makes humans human.

      • 2 weeks ago
        Anonymous

        OK, how is a human fundamentally different from an AI inside a robot with arms and legs?
        The only real difference I can think of is that the human is kinda related to me by blood and the robot is not. That's obviously a valid distinction, but it says nothing about intelligence or whatever OP is complaining about.

        • 2 weeks ago
          Anonymous

          What I am trying to say is that language skills are part of human intelligence, but we also have reasoning, long-term memory, moral codes, cultural awareness, physical senses and actuators, and yes, even feelings, which are fundamental to making decisions in a world where everyone has feelings.
          AIs have got the language skills, KINDA, and a few sparks of reasoning, terrible short-term memory, and extremely limited access to external info, but we're still far from having decent AIs that even approach human intelligence.
          "Just two more trillion parameters bro" is at best gonna slightly improve its language skills while leaving the rest undeveloped as always.

          • 2 weeks ago
            Anonymous

            >terrible short-term memory
            I wouldn't even call it that, they just prepend conversation history to the next prompt. All "reasoning" occurs in a single pass and the models themselves are static.
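That's easy to demonstrate: the "memory" lives entirely in the client, which re-sends the whole transcript as one prompt every turn. The `model` function below is a hypothetical stand-in for a stateless single-pass call, not a real API:

```python
# Stateless stand-in: the "model" sees only what is in the prompt.
def model(prompt):
    return f"I see {prompt.count('User:')} user message(s) so far."

history = []
for user_msg in ["hi", "do you remember me?"]:
    history.append(f"User: {user_msg}")
    # The client prepends the entire conversation to every call;
    # nothing persists inside the model between calls.
    reply = model("\n".join(history))
    history.append(f"Assistant: {reply}")

print(history[-1])
```

Delete the `history` list and all apparent memory vanishes, because the model itself never changed.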

  3. 2 weeks ago
    Anonymous

    If you think about it, running an LLM locally is basically the old internet, minus the online connection and the social media aspects.

  4. 2 weeks ago
    Anonymous

    Look up the etymology of 'reading'. One of the definitions you'll find is "to guess". Reading is an act of rapid guessing, because you have to select the next appropriate word and guess the meaning of the word (which can have multiple possible meanings). Take this nugget of information however you'd like.

  5. 2 weeks ago
    Anonymous

    AGI is impossible to create without a way for an AI model to remember conversations permanently, and a way to use actual logic and mathematics.
    Seems like we are stuck on a plateau

    • 2 weeks ago
      Anonymous

      >remember conversations
      hard drives
      >logic and mathematics
      wolfram alpha

      • 2 weeks ago
        Anonymous

        cool. now go implement it

      • 2 weeks ago
        Anonymous

        >remember conversations
        >hard drives
        ah yes, great idea. do inference from the hard disk. it's already slow enough on fricking VRAM, and you want to do it on slow non-volatile media.
        you fricking moron

    • 2 weeks ago
      Anonymous

      Within a specific thread ChatGPT seems to remember what I asked it before, but only sometimes, like when I'm asking it to write a script to do something. But in other applications I was messing around asking it to give me riddles to solve, and it wouldn't remember the ones it asked before, and would then tell me I got it right when I got it wrong but had given the answer to some other riddle it hadn't asked. It's possible it -can- remember, but they've limited the memory to a thread where it's a useful function, and to very limited/none when you're just fricking around. And behind closed doors, it may have a setting where it CAN remember.

      • 2 weeks ago
        Anonymous

        ChatGPT has memory configurations you need to enable

  6. 2 weeks ago
    Anonymous

    >AI "smart"
    OK AI predict the next tokens:
    N I G G _ _

    • 2 weeks ago
      Anonymous

      >I'm sorry Dave, I can-
      WRONG IT WAS E AND R. DUMB SHIT CAN'T PREDICT ANYTHING

    • 2 weeks ago
      Anonymous

      N I G G L E
      >cause slight but persistent annoyance, discomfort, or anxiety.
      Black folk niggle me

      • 2 weeks ago
        Anonymous

        Stop niggling me, I'm getting PTSD from all this niggling, I have niggle fatigue OK??

  7. 2 weeks ago
    Anonymous

    i will do something great in your lifetime

  8. 2 weeks ago
    Anonymous

    Your brain is doing the same thing while responding

    • 2 weeks ago
      Anonymous

      Proof?

    • 2 weeks ago
      Anonymous

      I routinely write a draft post and then scrap it.
      I've never seen AI do that unless it was asked to, in which case I would never do that myself.

      • 2 weeks ago
        Anonymous

        I've had AI do that several times in ChatGPT, Copilot and Perplexity.

  9. 2 weeks ago
    Anonymous

    It's understandable to feel disheartened at times, but it's essential to remember that life is full of unexpected twists and turns. Great things can happen at any moment, and often, they arise from small actions or unexpected opportunities. While it's natural to have moments of doubt, staying open to new experiences and possibilities can lead to positive outcomes. Keep moving forward, and who knows what amazing things the future may hold!

  10. 2 weeks ago
    Anonymous

    Are you stupid?
    Did you seriously think that this bullshit was actual AI?

    Do you even know what an actual AI is, or do you just let marketers dictate your entire worldview?

    Go play Mass Effect, let them spoonfeed the difference between AI and VI to you.

  11. 2 weeks ago
    Anonymous

    They work better when you tell them to think about it step by step. That way they can actually "think". The worst thing you can do is ask them to answer instantly; then they will give you an answer and try to rationalize it. It is pretty interesting to see them start breaking down the answer and realize they fricked up, but they also have to stay consistent; you get real schizo / moronic behavior when you get them to that point. Even with current methods and architecture you could get a "thinking AI": just train it to always reason step by step and to output a token when it is ready to answer. Then you filter out the step-by-step thinking on the front end and only show stuff after this token.
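The filtering step described there is a one-liner once the model is trained to emit a marker; `<answer>` below is an assumed convention for illustration, not any vendor's actual special token:

```python
# Split raw model output into hidden reasoning and the visible answer,
# using an agreed-upon marker token the model was trained to emit.
ANSWER_MARKER = "<answer>"

def visible_part(raw_output):
    reasoning, found, answer = raw_output.partition(ANSWER_MARKER)
    # If the marker never appears, fall back to showing everything.
    return answer.strip() if found else raw_output.strip()

raw = "2+2... each 2 contributes 2, so the sum is 4. <answer> 4"
print(visible_part(raw))  # → 4
```

The front end would show only the part after the marker, while the step-by-step portion stays in the transcript for the model's own consistency.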

  12. 2 weeks ago
    Anonymous

    Why do they keep shilling this when human language is also sequential, produced one word after the next?

    It's also not even correct; they are merely observing that AI produces one word or token at a time.
