What would be the implications of AI getting capable enough to code an alternative to every program?

Talking about AI-generated operating systems and various software, I'd say we are about 5 years away from that.


  1. 2 months ago
    Anonymous

    What passes for "A.I." today is unable to understand what it's processing, so it will never be capable enough. All it can do is mash things together: a glorified search engine that generates its own results instead of just indexing them.

    • 2 months ago
      Anonymous

      Pretty much this.
      But to answer the OP, when things get to that point, we're pretty much at their mercy.

    • 2 months ago
      Anonymous

      I don't think AGI is possible; however, the better "AI" learns to mash data, the more it will look like a convincing illusion of sentience, and there's no reason why such a thing can't become practical enough to assemble working code, given enough time to develop and train.

      • 2 months ago
        Anonymous

        >I don't think AGI is possible
        Clearest sign of a brainlet right there

        • 2 months ago
          Anonymous

          If by AGI we mean artificial true sentience, then it seems nigh impossible: we don't even understand sentience or the ways a brain "computes" to produce it. It's a totally alien field compared to standard computing, and we've barely scratched its surface.

          • 2 months ago
            Anonymous

            GI (without the A) happened at least once already and literally by accident. Unless you're religious there is no reason to believe it can't happen again.

            • 2 months ago
              Anonymous

              Even if you aren't religious, it's moronic to think we can just randomly stumble onto a process that took billions of years even to get started and clearly isn't finished (if we go by the morons who think we do it at all), especially considering the state of modern technology and of AI research as a whole. The vast majority of AI research is just statistics and optimization, not AGI bullshit, because even they know that's basically impossible. AI cultists are the dumbest motherfrickers alive.

              • 2 months ago
                Anonymous

                >it's moronic to think we can just randomly stumble onto a process that took billions of years even to get started and clearly isn't finished
                What process? When did it start? What are the finishing conditions that were not yet reached?

              • 2 months ago
                Anonymous

                >What process?
                General Intelligence
                >When did it start?
                When the animal kingdom started at the earliest. Definitely by the time hominids came onto the scene.

                >What are the finishing conditions that were not yet reached?
                Making an intelligence that is so robust that not only can it control its own evolution, but it can do so in a way that ensures the survival of its kind essentially in perpetuity.

        • 2 months ago
          I enjoy VAGINA

          Not in our grandchildren's lifetime, anyway.

        • 2 months ago
          Anonymous

          If you don't think AGI will only be achievable with a quantum computer processing at least 100 billion qubits then you're either a total moron or a poo-zombie.

          • 2 months ago
            Anonymous

            Buzzword overload detected. Dial down your Star Trek watching.

            • 2 months ago
              Anonymous

              Yikes. I feel sorry for you.

      • 2 months ago
        Anonymous

        >I don't think AGI is possible
        Why not?

        If by AGI we mean artificial true sentience, then it seems nigh impossible: we don't even understand sentience or the ways a brain "computes" to produce it. It's a totally alien field compared to standard computing, and we've barely scratched its surface.

        >we do not even understand sentience or the ways a brain "computes" to result in it
        Our understanding of the world has a well-observed tendency to advance. Also, saying that we've "barely scratched its surface" is plainly untrue.

        But there is a lot to learn about our brains, that much is true.

      • 2 months ago
        Anonymous

        I think AGI is possible, but it's a very, very long way off being implemented. If it ever is, I imagine it will just be used by bankers to embezzle money and by zog to murder civilians in drone strikes. reddit homies fr out here thinking that it's going to be like their video games and goyslop movies.

        • 2 months ago
          Anonymous

          >I think AGI is possible, but it's a very very long way off being implemented.
          I think we're at least one major breakthrough away. The trouble with breakthroughs is that it's very hard to predict when they're gonna happen.
          Up until the advances in AI made over the past year or two, I thought we were at least two breakthroughs away. But I think we've got to find a better, more efficient way that doesn't require backprop. Backprop just simulates some of the ways in which our brain works; it isn't how the brain actually works.
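
          For reference, the backprop being argued about here boils down to gradient descent: compute how the error changes with each weight, then nudge the weight the other way. A minimal one-weight sketch (all values are toy numbers, not from any real model):

```python
# Minimal sketch of the gradient-descent update at the heart of backprop:
# nudge a weight against the derivative of the squared error. Toy values only.

def train_step(w, x, target, lr=0.1):
    pred = w * x                      # forward pass: prediction from current weight
    grad = 2 * (pred - target) * x    # d/dw of (pred - target)**2
    return w - lr * grad              # gradient-descent update

w = 0.0
for _ in range(20):                   # repeated nudges converge w toward 3.0
    w = train_step(w, x=2.0, target=6.0)
print(round(w, 3))  # -> 3.0
```

          Mechanically, "training" a large network is millions of these updates applied across billions of weights at once; the open question in the post above is whether that mechanism is the right one.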

          • 2 months ago
            Anonymous

            >I think we're at least one major breakthrough away.
            Much better to be an optimist than a pessimist, but I think you are a bit overeager. AI software has come a long way, but it is nowhere near where it needs to be to be at an AGI level. Not only that, but hardware also needs to evolve a whole lot to start accommodating AI. The next-gen CPUs and GPUs will have dedicated AI handling in them, but that is quite literally the first iteration for consumer hardware. Neuromorphic chips are a potential solution to the hardware problem, but those are so experimental it's basically just pure research at this point rather than anything practical.
            AGI will not come in a "breakthrough" but in multiple iterative steps over a period of decades. Three years ago AI art could have been considered "abstract" if you squinted; now it can be photorealistic and can mimic other styles as well. There was no breakthrough here, just years of iterative improvements, each step making it better and better until we have what we have now.
            The first AGIs will barely function, have poor performance, and might not even be classified as AGIs, but each improvement gets closer, and before we know it, it will be here.

    • 2 months ago
      Anonymous

      Pretty bold of you to imply that *you* do understand what you are doing, meatbag.

    • 2 months ago
      Anonymous

      >All it can do is mash things together
      this

      "AI" is just a Large Language Model - a simple neural network with access to big data

      it's literally just a calculator that solves a set of equations and spits out the result
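
      Taken literally, the "calculator" description does match the mechanics: a single neuron is a weighted sum plus a bias pushed through a nonlinearity, and an LLM is that same arithmetic repeated across billions of weights. A toy sketch (the inputs, weights, and bias below are arbitrary illustration values):

```python
import math

# One neural-network "neuron" really is just arithmetic: a weighted sum
# of the inputs, plus a bias, squashed by a nonlinearity (sigmoid here).
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(round(out, 4))  # -> 0.5987
```

      Whether stacking enough of these sums ever amounts to "understanding" is exactly what the rest of this thread is arguing about.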

      • 2 months ago
        Anonymous

        That is what 99.9% of humans do too.

        • 2 months ago
          Anonymous

          Unless you are moronic, that is literally not the case for actual human (or even a lot of animal) thought processes.

        • 2 months ago
          Anonymous

          Unless you are moronic, that is literally not the case for actual human (or even a lot of animal) thought processes.

          and unless you are a liberal NPC

      • 2 months ago
        Anonymous

        >"AI" is just a Large Language Model
        There are more kinds of AI than just neural networks. And there are more kinds of neural networks than just large language models.
        But you wouldn't know any of that because you don't know shit about AI. You just parrot the headlines.

        • 2 months ago
          Anonymous

          you know shit about neural networks
          everything "AI" does now is related and can be modeled as a neural network

          you are repeating headlines about "AI" which doesn't exist and never will

          calculators do not think, do not create, do not imagine and are not conscious. period.

          • 2 months ago
            Anonymous

            >confusing AI with GAI
            AI isn't meant to think, create, imagine or be conscious.

            • 2 months ago
              Anonymous

              The I in AI stands for Intelligence, and intelligence is something that can at least understand and create, not just imitate by mixing random words it has encountered in a certain order before, with absolutely no clue about what it is doing, nor even a mechanism for having a clue.

              • 2 months ago
                Anonymous

                >still confusing AI with AGI
                AI isn't about creating something intelligent. It's about using a computer to solve problems that ordinarily require intelligence. That's why the A is there.

    • 2 months ago
      Anonymous

      In that case humans would instead work on design and specs, as in what the software should do. However, as long as the above is correct, this will be a pipe dream.

    • 2 months ago
      Anonymous

      >All it can do is mash things together
      this

      "AI" is just a Large Language Model - a simple neural network with access to big data

      it's literally just a calculator that solves a set of equations and spits out the result

      down-players always be like:
      >Did you know the "A" in "AI" stands for ... ARTIFICIAL?????????

      Yes, a synonym of "artificial" is literally "fake". You're saying the sky is blue. Even when we have AI from the movie Terminator, the sky will still be blue.

  2. 2 months ago
    Anonymous

    >set up AI to scan Github for open source projects with cuck licenses (e.g. MIT)
    >it clones the repository
    >changes license to GPLv3 +Black person
    >add comments to actually explain what the code is doing, because nobody ever does this
    >uses lots of slurs
    >fixes all the bugs and makes it more efficient
    >releases the work

  3. 2 months ago
    Anonymous

    >AI generated operating systems and various software, I'd say we are about 5 years away from that
    Ahahaha, okay, sure thing, buddy. I'll still be here in 5 years, you can show me your AI generated OS then.

  4. 2 months ago
    Anonymous

    Linus Torvalds already supports this, so I would say expect it not in five years but today.

  5. 2 months ago
    Anonymous

    Asked ChatGPT to cook me up an Express example to upload files, and I ended up throwing away most of the code and reading the docs.

  6. 2 months ago
    Anonymous

    >I'd say we are about 5 years away from that.
    That's because you are ignoring the glaring problems of current "AI" and are ignorant of how complicated an OS is.

  7. 2 months ago
    Anonymous

    bottom right actually looks really cute, can't wait to be emotionally manipulated by AI

  8. 2 months ago
    Anonymous

    Most people wouldn't bother, but I think the biggest consequence is that walled gardens wouldn't exist anymore for those who care. A program or game you like is locked to a platform; now it's not, or something identical is available, better optimized. DRM? A thing of the past.

  9. 2 months ago
    Anonymous

    >What would be the implications of AI getting capable enough to code an alternative to every program? Talking about AI generated operating systems and various software, I'd say we are about 5 years away from that.

    Have you ever tried to get AI to write code? It does not work for anything outside of completely self-contained, cherrypicked examples used for marketing.

    • 2 months ago
      Anonymous

      I tried to get Brave's Leo AI to help with CouchDB for a uni assignment. At first, I was amazed at how easy it made things for me, but when I actually tried to use its code, I quickly realized that it was pulling shit out of its digital ass with extreme confidence. I assume this wouldn't be as bad for something more commonplace, like Python, but I still wouldn't trust an LLM to write code for me.

    • 2 months ago
      Anonymous

      Yes, it's especially shit at code, because unlike text and image generation, that stuff is still in its infancy. Look at the first AI images and at GPT-1 (Cleverbot); the code-writing capabilities of AI are a bit above that.

      • 2 months ago
        Anonymous

        >code-writing capabilities of AI are a bit above that
        it didn't have any before and it still doesn't have any
        it's not capable of finding solutions to unseen problems, it's not capable of performing the kind of reasoning that programming requires, it's always the same garbage with more data squeezed into it to copy shit from

  10. 2 months ago
    Anonymous

    Your prompt would be longer than just writing the code.

  11. 2 months ago
    Anonymous

    AI is about as smart as the average redditor right now
    I give it 25 years or more until it's actually at a human level

  12. 2 months ago
    Anonymous

    any truly intelligent entity operating faster than and above human level will create ad hoc, unique languages from scratch as each purpose requires
