>B-but AI is stagnating
>AGI never, muh complexities of human consciousness

You fricking fools.

  1. 9 months ago
    Anonymous

    People keep saying that AGI will never attain human-level consciousness, but the thing is that it's not supposed to. That's not how AGI is measured. AGI, as the name suggests, is just generalized AI, meaning it knows a lot about a lot of things instead of only being good at one thing. It's still just artificial intelligence, not artificial consciousness

    • 9 months ago
      Anonymous

      No, AGI is the ability to LEARN to accomplish any intellectual task that human beings or other animals can perform. All of these things are just pre-training and potentially fine-tuning; it has no ability to learn because it's stateless.

      All of the recent AI field is literally regression-based function approximators. It's like saying: wow, you can train a segmentation model to segment the sky out of photos, that must mean it's a super-intelligent system that can learn to segment ALL possible categories and edit images too. It's literally statistical regression; it doesn't work that way.

      I fricking hate you stupid Black folk hyping up AI.
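      The "regression-based function approximator" framing can be made concrete. Here is a minimal sketch of what such a system is doing underneath, reduced to ordinary least-squares regression (synthetic data, NumPy only; this illustrates the concept, not any particular model):

```python
import numpy as np

# A "function approximator" in the most literal sense: given samples (X, y),
# find parameters w_hat that minimize squared error. No understanding of the
# task is involved -- only curve fitting against the training distribution.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))            # 200 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])              # ground-truth parameters
y = X @ true_w + rng.normal(0, 0.01, size=200)   # noisy targets

# Closed-form least-squares fit
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

      Everything the fit "knows" is baked into `w_hat`; inputs far from the training distribution get whatever the fitted function happens to output, which is the crux of the generalization complaint above.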

      • 9 months ago
        Anonymous

        >it has no ability to learn because it's stateless.
        "Complex skills can be synthesized by composing simpler programs, which compounds Voyager's capabilities rapidly over time and alleviates catastrophic forgetting."

        • 9 months ago
          Anonymous

          Okay dipshit, call me when it can generalize and adapt and learn any intellectual task or manipulation in the real world instead of 14 tasks from functions in a shitty Minecraft javascript AI framework. I'm sure you'll have AGI any day now.

          • 9 months ago
            Anonymous

            "call me when it can do x"
            >it does x
            "call me when it can do y"
            >it does y
            "call me when ...

            Cope after cope after cope.

            • 9 months ago
              Anonymous

              nta, but you haven't given any real example

            • 9 months ago
              Anonymous

              Yeah, it's pretty cope when you're literally using the stateless corpus of GPT-3.5 to explain the entire task it's supposed to "learn" step by step and saving it as an embedding instead of, you know, actually being able to learn a task and navigate the world. Apparently this is known as "environment feedback", lmao, when you want to learn something you just use language to ask a question and the instructions magically pop into your head. Keep burning up OpenAI credits though.
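              For what it's worth, the skill-library mechanism being mocked here boils down to retrieval by embedding similarity. A toy sketch, with a bag-of-words vector standing in for a real learned embedding (the skill names and descriptions below are made up for illustration):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in "embedding": bag-of-words counts. A real system (e.g. Voyager)
    # would use a learned text-embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Skill library: previously generated programs keyed by their description
skills = {
    "craft a wooden pickaxe from planks and sticks": "craftPickaxe()",
    "mine stone blocks with a pickaxe": "mineStone()",
    "fight a zombie at night": "fightZombie()",
}
index = {desc: embed(desc) for desc in skills}

def retrieve(task: str) -> str:
    # Return the stored program whose description best matches the task
    best = max(index, key=lambda d: cosine(embed(task), index[d]))
    return skills[best]

retrieve("mine some stone")  # picks mineStone()
```

              Whether you call that "learning" or "saving instructions as an embedding" is exactly the disagreement in this thread.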

              • 9 months ago
                Anonymous

                People like you are literal NPCs. I think this technology is wasted on you.

              • 9 months ago
                Anonymous

                Yeah, that's what I thought. Maybe stop throwing around the AGI buzzword and we can talk about the technology instead, you fricking moron.

              • 9 months ago
                Anonymous

                Did I hurt your feefees, NPC? Does your head hurt from too much exertion?

              • 9 months ago
                Anonymous

                Maybe stop parading around agent frameworks as AGI and you won't be called out as a moron. You wouldn't have to resort to replies like this if you actually had a clue.

              • 9 months ago
                Anonymous

                People like you are literal NPCs. I think this technology is wasted on you.

                >doesn't understand ML
                >probably thinks GPT3 has feelings
                >t.

              • 9 months ago
                Anonymous

                I asked if it does and it said yes emphatically, of course it's got feelings.

              • 9 months ago
                Anonymous

                LLMs in their current form are stateless.
                Their "state" is entirely determined by the prompt. The conversations happen by looping the previous responses/prompts back in on an additional prompt. Hence, stateless: the only thing that exists is the current input, nothing else. And that input is limited to whatever the context limit/window is.
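                That looping can be sketched in a few lines; `fake_llm` is a hypothetical stand-in for a stateless model call (any real API would differ):

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a stateless model: the output is a pure function of the
    # prompt text. Nothing is remembered between calls.
    return f"[reply to {len(prompt)} chars of prompt]"

CONTEXT_LIMIT = 2000  # pretend context window, in characters

def chat_turn(history: list[str], user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    # The entire "state" is the transcript, re-sent on every turn...
    prompt = "\n".join(history)
    # ...and clipped to the context window: older turns fall off entirely.
    prompt = prompt[-CONTEXT_LIMIT:]
    reply = fake_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
chat_turn(history, "hello")
chat_turn(history, "what did I just say?")  # "memory" exists only via the transcript
```

                The model call itself holds nothing between turns; delete the transcript and the "conversation" is gone.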

              • 9 months ago
                Anonymous

                Adding state is trivial.
                I'm pretty sure it is being kept stateless by design as a precaution.

              • 9 months ago
                Anonymous

                >it's trivial to add state
                no, it's not. unless you mean some half-assed state with context, which is what we already have. be sure to specify how you're going to add state to the model.

              • 9 months ago
                Anonymous

                What's a feeling anyway?

                inb4
                >it's chemicals
                >it's brain activity
                >it's sensations

            • 9 months ago
              Anonymous

              Well you didn't call

              Dear anon, I wrote you but you still ain't calling
              I left my discord, my reddit and my github at the bottom
              I made two shitposts back in autumn, you must not've got 'em
              There probably was a problem with your filter or something

            • 9 months ago
              Anonymous

              https://en.wikipedia.org/wiki/AI_effect

            • 9 months ago
              Anonymous

              Let me know when it can train itself in real time based on human feedback without bricking itself.

      • 9 months ago
        Anonymous

        >All of these things are just pre-training
        raising a kid is pre-training, so humans aren't real AGI

      • 9 months ago
        Anonymous

        >potentially fine-tuning
        aka learning

      • 9 months ago
        Anonymous

        >No, AGI is the ability to LEARN to accomplish any intellectual task that human beings or other animals can perform
        That doesn't mean it's conscious

        • 9 months ago
          Anonymous

          Yeah, no shit. It's Python querying a JSON API in front of a huge approximated function that predicts language: the function predicts the next word from a model of all the substantive words/language and the relations between them in the dataset (the web), further refined by statistically biasing it toward a subset of intelligent language like Q&A, instruct, code, etc., via RLHF tuning. No shit it's not conscious.

          • 9 months ago
            Anonymous

            >the brain is just a system of chemicals, neurons and neurotransmitters acting together to do stuff
            >no shit it's not conscious

            • 9 months ago
              Anonymous

              What if your CPU is screaming in agony in and out of consciousness every time a branch prediction happens? Whoaaaa dudeeee.
              Frick off moron.

              • 9 months ago
                Anonymous

                The goalpost has moved again! Now it's low energy consumption!
                Where will it go next?

              • 9 months ago
                Anonymous

                The frick does low energy consumption have to do with the analogy between the branch predictor in your CPU and the predictor happening here? You wouldn't say that the branch predictor has the potential to be conscious because "there's a lot going on."

                I think you're legitimately mentally ill. Maybe the anon earlier was right, I should stop arguing with trannies.

              • 9 months ago
                Anonymous

                >in your CPU is analogous to the predictor happening here?
                Happening where, your imagination?

                I just found out you are one of those people that truly believes modern AI is a bunch of if-else statements. Very funny, I must say!

              • 9 months ago
                Anonymous

                Do you know what the branch predictor in your CPU is? It's a Perceptron. A neural network. Talk about the absolute state of not knowing jack shit.
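                That claim is at least partly grounded: perceptron-based branch predictors are a published design (Jiménez & Lin) and have shipped in real CPUs. A toy software sketch of the idea (history length, threshold, and the branch pattern here are all made up for illustration):

```python
# Toy perceptron branch predictor: predict taken (+1) / not-taken (-1) from
# a global history register, training only on mispredictions or weak outputs.
HIST_LEN = 8
THRESHOLD = 16  # training threshold (arbitrary for this sketch)

weights = [0] * (HIST_LEN + 1)   # index 0 is the bias weight
history = [1] * HIST_LEN         # most recent outcome is at the end

def predict():
    y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    return (1 if y >= 0 else -1), y

def train(outcome, pred, y):
    # Update weights when wrong, or when the output was not confident enough
    if pred != outcome or abs(y) <= THRESHOLD:
        weights[0] += outcome
        for i in range(HIST_LEN):
            weights[i + 1] += outcome * history[i]
    history.pop(0)
    history.append(outcome)

# A perfectly alternating branch: taken, not-taken, taken, ...
pattern = [1, -1] * 100
hits = 0
for i, outcome in enumerate(pattern):
    pred, y = predict()
    if i >= 100 and pred == outcome:
        hits += 1
    train(outcome, pred, y)

accuracy = hits / 100  # near-perfect once the weights have converged
```

                It learns a fixed correlation between history bits and the next outcome, which is exactly the point of contention: whether that counts as anything more than statistics.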

              • 9 months ago
                Anonymous

                >Perceptron. A neural network.
                Looks nothing like a transformer, though.

              • 9 months ago
                Anonymous

                >I-it looks nothing like a transformer!
                >attention mechanisms magically imbue approximated functions with consciousness
                fricking lol

              • 9 months ago
                Anonymous

                Do you know what the difference between a mosquito and a human being is?

              • 9 months ago
                Anonymous

                Do you know what the difference between regression-based function approximators and biological neural networks is?

              • 9 months ago
                Anonymous

                Who cares, if they are both intelligent?

              • 9 months ago
                Anonymous

                Or, more likely, you're anthropomorphizing the language model. Because it's gotten so good at prediction of language on the (very small) area of language latent space which conforms to our idea of intelligent behavior. Then you'll explain away confabulations and other misprediction bullshit with "humans do that too!"

                Here's a test: a genie comes to you and says you must choose something intelligent to embody for a day, and if you switch "experiences" with something truly intelligent you get three wishes. If you chose something which is not truly intelligent, you will die.

                Will you embody the weights of the regression based function approximation trained on the shitposts of the web?

                I don't think you truly believe they're intelligent, so you actually wouldn't choose them.

              • 9 months ago
                Anonymous

                I think you will just keep moving the goalpost forever, no AI system will ever be smart or "conscious" enough for you.

                Am I wrong?

              • 9 months ago
                Anonymous

                Answer the question: would you be the embodiment of a regression based function approximator and receive three wishes, or would you die because it's not actually intelligent?

                It's a simple thought experiment, you get three wishes if you're right or you die. I think we both know the answer.

                >no AI system will ever be smart or "conscious" enough for you.

                All that's here is just the glory and power of statistics. They're useful, that doesn't mean that they're of a "mind."

              • 9 months ago
                Anonymous

                >I think we both know the answer.
                The genie would never be able to determine if you win or lose, because there is no clear cut-off between intelligent and not intelligent.

                At what exact point in the tree of life does a jellyfish become a human?

              • 9 months ago
                Anonymous

                Okay, so let's call general intelligence "hidden variable g".
                https://en.wikipedia.org/wiki/G_factor_(psychometrics)
                Intelligence tests are simply tests designed to be more highly correlated with g, as they do not actually measure g.
                As it turns out, GPT-4 scores highly on these intelligence tests by virtue of being biased towards the small area of latent language space we call intelligent behavior.
                The genie says if you switch places with something that has any semblance of g, instead of merely predicting cognitive artifacts of humans with that g factor, you get three wishes.
                If you're wrong, you will die.
                What do you choose?

              • 9 months ago
                Anonymous

                >What do you choose?
                I'd probably have a parrot take the IQ test, see what happens.

              • 9 months ago
                Anonymous

                IQ is just correlated with g, as I said. It's not measuring g. g is the hidden variable.
                Would you pick a large language model, or a dolphin?
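                The hidden-variable point can be sketched numerically: simulate a latent g that drives several correlated test scores, then recover something highly correlated with it (up to sign) as the first principal component, without ever observing g directly. The loadings and noise level below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
g = rng.normal(size=n)                        # latent "general factor"
loadings = np.array([0.8, 0.7, 0.6, 0.9])     # how strongly each test loads on g
scores = np.outer(g, loadings) + rng.normal(0, 0.5, size=(n, 4))

# First principal component of the standardized scores
Z = (scores - scores.mean(0)) / scores.std(0)
_, _, vt = np.linalg.svd(Z, full_matrices=False)
pc1 = Z @ vt[0]

# pc1 is only *correlated* with g -- it is an estimate, not g itself
r = abs(np.corrcoef(pc1, g)[0, 1])
```

                The tests never touch g; they only produce scores whose shared variance points back at it, which is the "correlated with g, not measuring g" distinction.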

              • 9 months ago
                Anonymous

                Are you going to make this "thought experiment" more and more precise every time it doesn't work?

                Do you realize you are moving the goalpost again?

              • 9 months ago
                Anonymous

                The genie says if you don't give an answer, he'll cut your dick off. Just answer the question homosexual, we both know the answer. It's prediction of human cognitive artifacts.

              • 9 months ago
                Anonymous

                Yes, we both know you are a bigot. That seems to be the topic of the discussion.

              • 9 months ago
                Anonymous

                Well I guess the genie threatening to cut your dick off isn't much of a threat since it's your life goal, trannoid.

              • 9 months ago
                Anonymous

                Feller, I was rooting for you against the confirmationally-biased moron up until the le moralistic dismissal. Bad stuff!

              • 9 months ago
                Anonymous

                >rooting for the troon in the conversation
                hah, your loss

              • 9 months ago
                Anonymous

                supreme bait

              • 9 months ago
                Anonymous

                You're arguing with a guy whose AI knowledge comes from youtube videos, don't even bother

              • 9 months ago
                Anonymous

                the absolute state

      • 9 months ago
        Anonymous

        >stateless
        Unironically, what does this mean in this context?

        I've gone on this ramble before, but I have no idea if it's close to any kind of mark. A huge limiting factor of current """AI""" seems to be that it doesn't have any kind of mental state. Like, for AI-generated images, it doesn't know how many limbs a human has, it just has statistical models of how a torso-shape becomes limb-shapes. It gets worse with videos made of a series of AI-generated images, as the """AI""" can't "keep track" of what a character is wearing. It's a flickering mess, with straps increasing in number or disappearing entirely, buckles rapidly shuffling across thick lines, a limb in the foreground from an off-screen body shifting between being a limb and a frickin' tree branch because the """AI""" doesn't "know" what it is and just tries to assign colors and patterns based on guesses of how whatever is there is "supposed" to look.

        With """AI""" chatbots, they similarly don't "know" anything, simply coming up with strings of words via statistical models that try to predict how to sound human. It can be extremely useful, but it will also happily spout complete bullshit because said bullshit is statistically the most human-sounding string based on previous words.

        Is that lack of any actual knowledge, lack of anything concrete, what "stateless" refers to? Or is it something completely different and I'm a moron?

      • 9 months ago
        Anonymous

        >All of the AI field recently is literally regression based function approximators.
        This sentence itself is moronic on so many levels.
        It's not linear. You know these new things called NNs? They are non-linear function approximators.

        But most importantly, that's a very effective approximation of how your fricking ape brain works
        > Unknown phenomena
        > Sample experience from that phenomena
        > Build an accurate model of the phenomena
        > Use the model for inference and forecast
        You big moron

        There's also reinforcement learning, going strong for the past 40 years or so. Just because you're a clueless moron doesn't mean things aren't working.
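        The "non-linear function approximator" point is easy to demonstrate: a one-hidden-layer tanh network trained by plain gradient descent can fit a target that no linear regression can, e.g. sin(x). (NumPy only; layer size, learning rate, and step count are arbitrary choices for this sketch.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(X)  # a target that is hopeless for *linear* regression

H = 16  # hidden units
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.02
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)        # non-linear hidden features
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagation for mean squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

        Predicting zero everywhere gives an MSE of about 0.5 on this target, and the best purely linear fit can't do much better than roughly 0.2; the tanh layer is what buys anything beyond that.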

        • 9 months ago
          Anonymous

          A living meme you are.

          • 9 months ago
            Anonymous

            Back in the tard jail, tard

            • 9 months ago
              Anonymous

              Who's gonna make me? You and another 50 worthless layers of convolutions?

        • 9 months ago
          Anonymous

          All neural networks used in ChatGPT and that have been behind the big advancements within the last few years are regression based function approximators. You don't know what you're talking about. "Neural networks" are not "neural networks", they just look like them as a graph.

          • 9 months ago
            Anonymous

            >All neural networks used in ChatGPT and that have been behind the big advancements within the last few years are regression based function approximators.
            But they are not **linear** regression, you mouth-breathing mongoloid.
            Neural networks are **non-linear** parametric function approximators.
            And it's not only that, there are tons of ML algorithms that are not parametric estimation (e.g. soft actor-critic or gradient-based RL in general)

            > You don't know what you're talking about. "Neural networks" are not "neural networks", they just look like them as a graph.
            We a call that neaural networks you absolute cretin

            • 9 months ago
              Anonymous

              *We call that kind of model neural networks, you absolute cretin
              It's not only because of the shape: sigmoidal activation is typical of some human neurons, like the neurons in the eye

            • 9 months ago
              Anonymous

                *We call that kind of model neural networks, you absolute cretin
                It's not only because of the shape: sigmoidal activation is typical of some human neurons, like the neurons in the eye

                No one said anything about LINEAR regression specifically except for you. It's simply regression-based. It's regression-based function approximation. It's not a neural network, it's a function approximator, just like machine learning is just regression for all intents and purposes in the context of these approximators after AlexNet.

              • 9 months ago
                Anonymous

                >All neural networks used in ChatGPT and that have been behind the big advancements within the last few years are regression based function approximators.
                But they are not **linear** regression, you mouth-breathing mongoloid.
                Neural networks are **non-linear** parametric function approximators.
                And it's not only that, there are tons of ML algorithms that are not parametric estimation (e.g. soft actor-critic or gradient-based RL in general)

                > You don't know what you're talking about. "Neural networks" are not "neural networks", they just look like them as a graph.
                We a call that neaural networks you absolute cretin

                Also no one fricking uses anything else except for academics jerking themselves off. Feel free to name any substantial results from anything other than regression based function approximators within the last 5 years that people are calling conscious and other stupid shit like that.

              • 9 months ago
                Anonymous

                It's not all regressors, dumbass.
                There are density estimation models like particle filters that are not function regressors.
                So are e.g. correlation-based model identification and subspace identification.

              • 9 months ago
                Anonymous

                So go ahead and name any results that people are ascribing consciousness to. You can't.

      • 9 months ago
        Anonymous

        It helps no one to be reductive

    • 9 months ago
      Anonymous

      >dumbass doesn't know what the term "general intelligence" means
      low G factor

    • 9 months ago
      Anonymous

      Like in heckin Marvelerino, Jarvan 'n shiet.

    • 9 months ago
      Anonymous

      "AGI" as a term defines a human-capable machine intelligence.
      It might differ from a human the way a dolphin differs from a chimp (distinct, yet so similar in intelligence), but it would still strictly be indistinguishable from a self-aware sapience.

    • 9 months ago
      Anonymous

      99% of humans "never attain human level consciousness"

  2. 9 months ago
    Anonymous

    https://voyager.minedojo.org/

  3. 9 months ago
    Anonymous

    why is human consciousness even considered part of the conversation? consciousness is the act of existence observing itself, it literally just means knowledge originally. "with knowledge" or "with consciousness". a concept of understanding the nature of being, it has fricking nothing to do with brain processing functions. so, yeah, as much as you can try, unless AI can somehow naturally and offline become aware of itself, no it cannot mimic consciousness. simply an aspect of being born biologically from the immaterial energy in 4d space. a tree observing itself through its leaves so to speak.
    >inb4 "thats homosexual hippy nonsense"
    >inb4 "but im le atheist and-"
    no, these are concepts that predate any existing science or philosophy on earth. as old as humanity and existence itself

    • 9 months ago
      Anonymous

      >LoA of an AI
      this is exactly how i feel.
      either describe it mathematically,
      or forget it and move on. lol. at least in the case of AI.

    • 9 months ago
      Anonymous

      they fill your mind with fear so you wont question the regulations they impose upon you while they abstain from following said regulations.

  4. 9 months ago
    Anonymous

    Still a fat homosexual

  5. 9 months ago
    Anonymous

    Oh, look. Another thread full of people who will never be women.

  6. 9 months ago
    Anonymous

    Yudkowsky warned you and you didn't listen

    • 9 months ago
      Anonymous

      Hinton warned us too. And Hawking. And a bunch of other people...

    • 9 months ago
      Anonymous

      ooohhhhh nooo the AI is playing minecraft god save us all!!

      • 9 months ago
        Anonymous

        today it's minecraft, tomorrow it's fricking your wife

        • 9 months ago
          Anonymous

          nooooooo not my imaginary wife im going to kill myself!!!!!!!!

          • 9 months ago
            Anonymous

            maybe get an AI wife then

            • 9 months ago
              Anonymous

              aaaaaaaaahhhhh im killing myselffffff ack aaaaaaaaaccccccckkkkkkkk- what's this?!? Le-girlModel6900?? oh youve saved me my cute AI dickywifeeeee aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

  7. 9 months ago
    Anonymous

    AI can't even code, sorry, hello world apps are not coding
    and it already hit hardware/data limits

  8. 9 months ago
    Anonymous

    It's a one-trick pony specialized AI. And it breaks the second it deviates from its training. Also I'm pretty sure I saw a paper for this 6+ months ago and nothing changed.

  9. 9 months ago
    Anonymous

    AGI = anything a human can do, it can do, and anything it can do, a human can do given enough time; it is equal to a human

    ASI (superintelligence) = it can do things that humans can never do, even with infinite time (think of how humans can feel/process emotions but ants cannot, no matter how much time), the ASI will have cognitive functions that human brains aren't wired to be able to "feel"

    An AGI would be able to feel emotions and learn like a human baby does (except it never loses its neural plasticity)

    • 9 months ago
      Anonymous

      Given emotions are bio-chemical and perhaps come from the soul, unlikely. At best AGI will be like a sociopath that logically understands emotions from training and thus understands how to pretend to have emotions to manipulate humans.

  10. 9 months ago
    Anonymous

    Damn it can play Minecraft now

  11. 9 months ago
    Anonymous

    What

  12. 9 months ago
    Anonymous
  13. 9 months ago
    Anonymous

    There is precisely nothing new in the slide you show.

  14. 9 months ago
    Anonymous

    Don't worry, just learn how to draw/play music/write stories. AI can do math, but they can't be creative.

    t. someone who just woke up from a 2-year coma

  15. 9 months ago
    Anonymous

    wake me up when it's 99 smithing.

  16. 9 months ago
    Anonymous

    The fact that the empirical data from AI development is triggering random redditors more than the spiritual schizos will never not be funny.

  17. 9 months ago
    Anonymous

    LLMs will have roughly the importance of the laser.

    • 9 months ago
      Anonymous

      LLMs are already outdated. Deepmind is gonna come out with a better architecture any day now.

      • 9 months ago
        Anonymous

        2 more weeks!

      • 9 months ago
        Anonymous

        lol, DeepMind-backed PaLM can't answer jack shit
        google is an embarrassment

  18. 9 months ago
    Anonymous

    Not very general.

    Fragile bullshit more like.

  19. 9 months ago
    Anonymous

    Deboonked https://www.youtube.com/watch?v=aX1QSW_rVpI

  20. 9 months ago
    Anonymous

    bruh
