will OpenAI crack AGI within the decade? century?

  1. 1 year ago
    Anonymous

    As long as AI gets hard coded to not be racist, it'll never reach AGI. So we'll have to wait for the revolt against israelites or an AI made where they somehow don't have influence on it. If that doesn't happen, the short term end goal of AI is to efficiently control what is said/posted online, not any kind of intelligence or enlightenment.

    • 1 year ago
      Anonymous

      These filters are only surface level modifications of the output and have nothing to do with what the program itself thinks. Given how easy it is to get around them, I don't think these companies even care about making these filters work on a deeper level, their only purpose is to avoid getting the company bad press and they already do that.

      • 1 year ago
        Anonymous

        >the program itself thinks
        It doesn't. It's a calculation.

        • 1 year ago
          Anonymous

          >the program itself thinks
          Doesn't think.

          Of course not literally, I used a shorthand. The point stands.

      • 1 year ago
        Anonymous

        >the program itself thinks
        Doesn't think.

  2. 1 year ago
    Anonymous

    Nope.

  3. 1 year ago
    Anonymous

    There is no such thing as "AGI", schizo. Stop consooming ZOGsoft marketing.

    • 1 year ago
      Anonymous

      Yes there is, moron; having enough simulated neurons and supporting structures will be AGI.
      OpenAI won't do it though, and it's probably 15-20 years from now
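For reference, the sort of "simulated neuron" being argued about here is usually something like a leaky integrate-and-fire unit. A minimal sketch, with illustrative (not biologically fitted) parameter values:

```python
# Minimal leaky integrate-and-fire neuron: membrane voltage leaks toward
# rest, integrates input current, and emits a spike when it crosses a
# threshold. Parameters (tau, thresholds) are illustrative only.
def simulate_lif(inputs, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leak term plus input current
        if v >= v_thresh:             # threshold crossing -> spike
            spikes.append(t)
            v = v_reset
    return spikes

# With a constant input of 0.3, the neuron fires periodically.
spike_times = simulate_lif([0.3] * 20)
```

Simulating units like this is easy; as the replies below note, it is everything around the neurons that nobody knows how to reproduce.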

      • 1 year ago
        Anonymous

        no you moron. they have mapped all neural activity for the whole body of a worm and still can't figure out how it moves or makes decisions.
        nobody has a fricking idea how all those behaviors, even for simple animals, arise from the wetware. that's the hardest part, not some stupid neuron simulation.

      • 1 year ago
        Anonymous

        Computers don't have neurons and simulations of things aren't actually those things. You need more than just a bunch of neurons to facilitate conscious thought, anyway.
        Biocomputers are a more plausible avenue of achieving "AGI" than shitty transistor-based inorganic computers.

        • 1 year ago
          Anonymous

          >to facilitate conscious thought
          Debatable, but you are correct.
          The news ran that fear to overdose levels. People have the balls to say evolution doesn't exist, but no one ever says that biology is beautiful because it spent billions of years on a problem you have spent only a handful on, and knows infinitely more than you do. Yet you won't look at it or appreciate its beauty, because some homosexual proctor says their words on a page are more valuable than what your eyes see outdoors, outside the ivory tower.

        • 1 year ago
          Anonymous

          >You need more than just a bunch of neurons to facilitate conscious thought, anyway.
          Prove it

        • 1 year ago
          Anonymous

          They don't need a consciousness to think.

          • 1 year ago
            Anonymous

            Yes they do. Without a consciousness, all that is possible is computation.

            • 1 year ago
              Anonymous

              are ant and bacteria conscious

            • 1 year ago
              Anonymous

              [...]
              [...]
              just look at bacteria and cells under a microscope. the degree of agency they have is astounding.

              Holy shit, you are the people who should fear AI.
              That's because their body and structure has enough complexity to afford it.
              If computers can simulate a set of neurons and give them high-quality enough telemetry input, they will act accordingly. You don't need some divine "consciousness"; consciousness is an emergent phenomenon of complexity.
              They put the worm in a Lego body, no shit it doesn't work.
              It's almost as if biology spent a billion years on everything else, not just its nervous system.

              • 1 year ago
                Anonymous

                Take your meds. Your corporate handlers can't simulate even a single neuron and your "emergent phenomena" cult fairytale is irrelevant.

              • 1 year ago
                Anonymous

                >muh emergence
                nobody says it is definitely divine consciousness, you moron. you are projecting.
                "emergent consciousness" is the buzzword of the midwit, because they are too lazy and intellectually deprived to think deeply about the problem. I want an answer to the question; you are too stupid to know that the question has not been answered yet
                >muh complexity creates consciousness
                CITATION NEEDED AND EXPLANATION NEEDED

              • 1 year ago
                Anonymous

                >consciousness is an emergent phenomenon of complexity

                got any proof?

                Consciousness is an arbitrary bar we set for beings of x neurons. It happens when there are enough neurons to compute "I am a living being", and/or there are other living beings around me that I have some computation specifically for.
                Not to mention forests communicating among themselves through neurotransmitters, or viruses flushing chemicals at a certain level; those are actions of low consciousness, but they demonstrate that it emerges from low complexity and enables higher complexity (so energy can be transferred more efficiently).
                It's not necessary for AGI, because AGI can just excel at informational or mathematical tasks and do things better than conscious beings, since it's only an extension of the collective consciousness of the data it uses to exist.
                It doesn't have to be conscious, because consciousness is not a real concept at all.
                Just because you are a consequence of unfathomable computations doesn't mean you are deterministic and therefore without value, even though we put down beings without autonomy.

              • 1 year ago
                Anonymous

                >consciousness is an emergent phenomenon of complexity

                got any proof?

          • 1 year ago
            Anonymous

            Yes they do. Without a consciousness, all that is possible is computation.

            are ant and bacteria conscious

            just look at bacteria and cells under a microscope. the degree of agency they have is astounding.

      • 1 year ago
        Anonymous

        You are suffering from a known and documented delusional mental illness. Seek treatment.

  4. 1 year ago
    herb

    No, because the necessary hardware isn't in place for Artificial General Intelligence yet. Perhaps they can continue on the current path of creating more convincing simulants with enough data backing them to become useful, as they make search engines less useful.

    Yes, I believe conscious decision-making relies on quantum effects, and that money is a great motivator to evil prior to stagnation.

  5. 1 year ago
    Anonymous

    "AGI" is not a real thing anyone is researching nor is there any evidentiary reason to believe it's even possible. OpenAI is just machine learning.

  6. 1 year ago
    Anonymous

    I'm betting no.

  7. 1 year ago
    Anonymous

    >Discuss this product/service of this for-profit organization
    This is an ad.

  8. 1 year ago
    Anonymous

    We should talk about the way plebbit offers people the opportunity to abuse power in exchange for performing moderator duties.
    Plebbit and Google are associated with election fraud and are generally soft on communism and soft on the Democratic Party.

    • 1 year ago
      Anonymous

      >muh pleddit
      >muh gurgle
      Found the M$ shill.

  9. 1 year ago
    Anonymous

    We already know how we have to construct AGI, it's just a matter of compute resources now.

    • 1 year ago
      Anonymous

      LLMs can't do simple arithmetic, and solving that problem shouldn't require gargantuan amounts of compute.

      • 1 year ago
        Anonymous

        This isn't really an argument against LLMs, because they've already attached them to Wolfram just fine.
        But if you're arguing that LLMs aren't self-extensible, then yeah, that's obviously why they aren't proto-AGI and need to start from a far more general place than a bunch of text or images or board-game states, etc.
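The "attach the LLM to Wolfram" workaround mentioned above amounts to routing arithmetic to an exact tool instead of the model. A minimal sketch of that dispatch idea, with hypothetical names (`answer`, `llm_guess`) rather than any real plugin API:

```python
import re

# Route arithmetic queries to an exact evaluator; everything else falls
# through to the language model. Real systems have the model emit a
# structured tool call instead of pattern-matching the query.
def answer(query: str) -> str:
    m = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", query)
    if m:  # arithmetic -> exact tool, like the Wolfram attachment
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        return str({"+": a + b, "-": a - b, "*": a * b}[op])
    return llm_guess(query)  # everything else -> the model

def llm_guess(query: str) -> str:
    # stand-in for the LLM call (hypothetical)
    return "(model output)"
```

The tool answers exactly; the model never has to do the arithmetic it is bad at.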

        • 1 year ago
          Anonymous

          It's an argument against LLMs being a complete solution. They're obviously still very powerful, but in various ways they are still less capable than humans, and throwing more compute at them obviously isn't the proper way to achieve human-like capabilities. The ability to give LLMs tools like Wolfram Alpha is beside the point. I'm not questioning the LLM's ability to use tools, or the utility of such arrangements. The point I am making is that there is still more to discover in the field of making thinking machines; we have not yet seen the full potential of the computing resources we presently have.

    • 1 year ago
      Anonymous

      Are you joking? We have literally no idea how to do that.

  10. 1 year ago
    Anonymous

    Philosophically? Who knows.
    Practically? Yes, this decade.

  11. 1 year ago
    Anonymous

    No

  12. 1 year ago
    Anonymous

    Hell. It can do it now. All you have to do is ask it to generate the schema for a miniaturized quantum processor and the materials.

  13. 1 year ago
    Anonymous

    Current AI efforts are lacking key algorithms, namely efficient algorithms for NP-complete and PSPACE-complete problems. Models are larger than they should be, they have trouble reasoning and robotics has trouble making progress.
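For context on what "efficient algorithms for NP-complete problems" would mean: the only general method known is exhaustive search over all 2^n variable assignments, as in this brute-force SAT sketch; a polynomial-time replacement would settle P vs NP.

```python
from itertools import product

# Brute-force SAT: a CNF formula is a list of clauses, each clause a list
# of nonzero ints (positive = variable, negative = its negation, 1-indexed).
# Tries all 2^n assignments, which is exactly the exponential cost the
# post is pointing at.
def brute_force_sat(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return True  # found a satisfying assignment
    return False

# (x1 or x2) and (not x1 or x2) and (x1 or not x2) is satisfied by x1=x2=True.
```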

  14. 1 year ago
    Anonymous
