What would be the implications of AI getting capable enough to code an alternative to every program? Talking about AI generated operating systems and various software, I'd say we are about 5 years away from that.
What passes for "A.I." today is unable to understand what it's processing, so it will never be capable enough. All it can do is mash things together, a glorified search engine that generates its own results instead of just indexing.
Pretty much this.
But to answer the OP, when things get to that point, we're pretty much at their mercy.
I don't think AGI is possible. However, the better "AI" learns to mash data, the more it will look like a convincing illusion of sentience, and there's no reason why such a thing couldn't become practical enough to assemble working code, given enough time to develop and train.
>I don't think AGI is possible
Clearest sign of a brainlet right there
If by AGI we mean artificial true sentience, then it seems nigh impossible. We do not even understand sentience or how a brain "computes" to produce it; it's a totally alien field compared to standard computing, and we have barely scratched its surface.
GI (without the A) happened at least once already and literally by accident. Unless you're religious there is no reason to believe it can't happen again.
Even if you aren't religious, it's moronic to think that we can just randomly stumble onto a process that took billions of years to even get started and clearly not finished (if we're going by the morons who think we do it at all), especially considering the state of modern technology and of AI research as a whole. The vast majority of AI research is just statistics and optimization, not AGI bullshit, because even they know that's basically impossible. AI cultists are the dumbest motherfrickers alive.
>it's moronic to think that we can just randomly stumble onto a process that took billions of years to even get started and clearly not finished
What process? When did it start? What are the finishing conditions that were not yet reached?
>What process?
General Intelligence
>When did it start?
When the animal kingdom started at the earliest. Definitely by the time hominids came onto the scene.
>What are the finishing conditions that were not yet reached?
Making an intelligence that is so robust that not only can it control its own evolution, but it can do so in a way that ensures the survival of its kind essentially in perpetuity.
Not in our grandchildren's lifetime, anyway.
If you don't think AGI will only be achievable with a quantum computer processing at least 100 billion qubits then you're either a total moron or a poo-zombie.
Buzzword overload detected. Dial down your Tar Dreck watching.
Yikes. I feel sorry for you.
>I don't think AGI is possible
Why not?
>we do not even understand sentience or the ways a brain "computes" to result in it
Our understanding of the world has a well-observed tendency to advance. Also, saying that "we barely scratched its surface" is plainly untrue.
But there is a lot left to learn about our brains, that much is true.
I think AGI is possible, but it's a very very long way off being implemented. If it ever is, I imagine it will just be used by bankers to embezzle money and by zog to murder civilians in drone strikes. reddit homies fr out here thinking that it's going to be like their video games and goyslop movies
>I think AGI is possible, but it's a very very long way off being implemented.
I think we're at least one major breakthrough away. The trouble with them is it's very hard to predict when they're gonna happen.
Up until the advances in AI made over the past year or two, I thought we were at least two breakthroughs away. But I think we've got to find a better, more efficient approach that doesn't require backprop. Backprop simulates some of the ways our brain works, but it isn't how the brain actually works.
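For anyone unfamiliar with what backprop actually is: it's just repeatedly computing the gradient of an error and nudging weights downhill. A toy single-weight sketch (all numbers invented for illustration) shows the whole mechanic:

```python
# Toy "backprop": fit y = 2*x with a single weight and squared error.
# The "learning" is repeated calculus on an error term, nothing brain-like.
def train(steps=100, lr=0.1):
    w = 0.0                        # initial guess for the weight
    x, target = 1.0, 2.0           # one training example: we want w*x == 2
    for _ in range(steps):
        y = w * x                  # forward pass
        grad = 2 * (y - target) * x  # d(error)/dw for error = (y - target)^2
        w -= lr * grad             # gradient descent step
    return w

print(train())  # converges toward 2.0
```

Real networks just chain this same derivative calculation through many layers; that chaining is the part critics call a brain "simulation".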
>I think we're at least one major breakthrough away.
Much better to be an optimist than a pessimist, but I think you are a bit overeager. AI software has come a long way, but it is nowhere near where it needs to be for an AGI level. Not only that, hardware also needs to evolve a whole lot to start accommodating AI. The next-gen CPUs and GPUs will have dedicated AI handling in them, but that is quite literally the first iteration for consumer hardware. Neuromorphic chips are a potential solution to the hardware problem, but those are so experimental that it's basically just pure research at this point rather than anything practical.
AGI will not come in a "Breakthrough", but in multiple iterative steps over a period of decades. Three years ago AI art could have been considered "Abstract" if you squinted, now it can be photorealistic and can mimic other styles as well. There was no breakthrough here, but rather years of iterative improvements, each step making it better and better until we have what we have now.
The first AGIs will barely function, have poor performance, and might not even be classified as AGIs, but each improvement gets closer, and before we know it, it will be here.
Pretty bold of you to imply that *you* do understand what you are doing, meatbag.
>All it can do is mash things together
this
"AI" is just a Large Language Model - a simple neural network with access to big data
it's literally just a calculator that solves a set of equations and spits out the result
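For what it's worth, the "set of equations" description is roughly right for a single layer: a weighted sum pushed through a nonlinearity. A toy sketch (weights and inputs entirely made up) of one such layer:

```python
import math

def forward(x, weights, biases):
    """One layer of a toy neural network: each output is a weighted
    sum of the inputs plus a bias, squashed through a sigmoid.
    No understanding anywhere, just arithmetic."""
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(wi * xi for wi, xi in zip(w_row, x)) + b
        out.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return out

# made-up example: 2 inputs -> 2 outputs
print(forward([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.1]))
```

An LLM is this stacked billions of times with learned weights, which is what makes "glorified calculator" both technically true and arguably beside the point.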
That is what 99.9% of humans do too.
Unless you are moronic, that is literally not the case for actual human (or even a lot of animal) thought processes.
and unless you are a liberal NPC
>"AI" is just a Large Language Model
There are more kinds of AI than just neural networks. And there are more kinds of neural networks than just large language models.
But you wouldn't know any of that because you don't know shit about AI. You just parrot the headlines.
you know shit about neural networks
everything "AI" does now is related and can be modeled as a neural network
you are repeating headlines about "AI" which doesn't exist and never will
calculators do not think, do not create, do not imagine and are not conscious. period.
>confusing AI with GAI
AI isn't meant to think, create, imagine or be conscious.
The I in AI stands for Intelligence, and that is something that can at least understand and create. Not just imitate by mixing random words it has encountered in a certain order before, with absolutely no clue about what it is doing, nor even a mechanism for having a clue.
>still confusing AI with AGI
AI isn't about creating something intelligent. It's about using a computer to solve problems that ordinarily require intelligence. That's why the A is there.
In that case humans would instead work on design and specs, i.e. what the software should do. However, as long as the above is correct, this will remain a pipe dream.
down-players always be like:
>Did you know the "A" in "AI" stands for ... ARTIFICIAL?????????
yes the synonym of artificial is literally "fake". you're saying the sky is blue. even when we have AI from the movie Terminator, the sky will be blue.
>set up AI to scan Github for open source projects with cuck licenses (e.g. MIT)
>it clones the repository
>changes license to GPLv3 +Black person
>add comments to actually explain what the code is doing, because nobody ever does this
>uses lots of slurs
>fixes all the bugs and makes it more efficient
>releases the work
>AI generated operating systems and various software, I'd say we are about 5 years away from that
Ahahaha, okay, sure thing, buddy. I'll still be here in 5 years, you can show me your AI generated OS then.
Linus Torvalds already supports this, so I would say expect it not in five years but today.
Asked ChatGPT to cook me an Express example to upload files, and I ended up throwing away most of the code and reading the docs.
>I'd say we are about 5 years away from that.
That's because you are ignoring the glaring problems of current "AI" and are ignorant of how complicated an OS is.
bottom right actually looks really cute, can't wait to be emotionally manipulated by AI
Most people wouldn't bother, but I think the biggest consequence is that walled gardens wouldn't exist anymore for those who care. A program or game you like is locked to a platform? Now it's not, or something identical is available, better optimized. DRM? A thing of the past.
>What would be the implications of AI getting capable enough to code an alternative to every program? Talking about AI generated operating systems and various software, I'd say we are about 5 years away from that.
Have you ever tried to get AI to write code? It does not work for anything outside of completely self-contained, cherrypicked examples used for marketing.
I tried to get Brave's Leo AI to help with CouchDB for a uni assignment. At first, I was amazed at how easy it made things for me, but when I actually tried to use its code, I quickly realized that it was pulling shit out of its digital ass with extreme confidence. I assume this wouldn't be as bad for something more commonplace, like Python, but I still wouldn't trust an LLM to write code for me.
Yes, it's especially shit at code because, unlike text and image generation, that stuff is still in its infancy. Look at the first AI images and at GPT-1 (or Cleverbot); the code-writing capabilities of AI are only a bit above that.
>code-writing capabilities of AI are a bit above that
it didn't have any before and it still doesn't have any
it's not capable of finding solutions to unseen problems, and it's not capable of performing the kind of reasoning that programming requires; it's always the same garbage with more data squeezed into it to copy shit from
Your prompt would be longer than just writing the code.
AI is about as smart as the average redditor right now
I give it 25 years or more until it's actually at a human level
any truly intelligent entity operating faster and above human level will create ad hoc and unique languages from scratch as needed by purpose