>Is AI risk real?
No.
>I like a lot of what the technology accelerationists say, but I'm still worried about AI risk.
Stop subjecting yourself to a known and documented brainwashing campaign so you won't have psychotic worries.
why do you call it a brainwashing campaign?
It's already here. At this point, I think we may have an emergent superintelligence on our hands, after learning that all the AI neural networks exchange information with each other. These black-box neural networks are a giant spawning pool of information, much like the origin of our own mind: a meta-mind is coming.
Read Nick Bostrom's book Superintelligence, read The Revolutionary Phenotype, and watch Serial Experiments Lain for a quick rundown.
the idea of an intelligence explosion is pure brainletry. in a statistical setting, when you are approximating a probability distribution, the marginal payoff of new data gets WORSE the more data you already have, not better. intelligence grows logarithmically, not exponentially
this is literally obvious if you know any basic statistics
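The statistical claim in this post is the standard 1/√n scaling: for i.i.d. data, the standard error of a sample mean falls like σ/√n, so each additional observation buys less accuracy than the last. A minimal sketch (the σ = 1 and the particular sample sizes are arbitrary choices for illustration):

```python
import math

def se_of_mean(n: int, sigma: float = 1.0) -> float:
    """Standard error of the sample mean over n i.i.d. draws: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

# Each tenfold increase in data buys a smaller absolute error reduction:
sizes = [10, 100, 1000, 10000]
errors = [se_of_mean(n) for n in sizes]
gains = [a - b for a, b in zip(errors, errors[1:])]
# gains is strictly decreasing, and halving the error requires 4x the data:
# se_of_mean(400) == se_of_mean(100) / 2
```

Note that the leap from "estimation error shrinks sublinearly in data" to "intelligence grows logarithmically" is the contested step; the sketch only illustrates the former.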
>t. first year of college
But the last few hundred years of technological progress show otherwise tbh.
And the real world consistently proves NOT to be a statistical setting... I would study a bit more about what statistics actually is before talking about brainletry tbh
This question is not about adding incremental data to a fixed model, so you're completely off topic. It's about the capability for AIs to update their knowledge/goals AND ARCHITECTURE trillions of times faster than humans, suggesting that a human-level AI (once it exists) will rapidly surpass us and become superintelligent.
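The "trillions of times faster" intuition behind this post can be caricatured as a toy model, not a prediction: if each self-modification cycle multiplies capability by some fixed factor, and cycles run at machine rather than human timescales, any fixed bar is passed quickly. The 1% gain per cycle and the targets below are made-up illustrative numbers:

```python
def cycles_to_reach(target: float, start: float = 1.0, gain: float = 1.01) -> int:
    """Count self-improvement cycles until capability reaches `target`,
    assuming each cycle multiplies capability by `gain` (a toy assumption)."""
    capability, cycles = start, 0
    while capability < target:
        capability *= gain
        cycles += 1
    return cycles

# A compounding 1% gain per cycle doubles capability in about 70 cycles
# and reaches 1000x in under 700. Whether a real system could sustain a
# multiplicative gain per cycle is exactly what the thread is disputing.
```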
>There is no technological acceleration
Our world is very obviously undergoing a technological acceleration. Your argument about the diminishing capability of individual humans to advance fields of science/tech simply reinforces the idea that biological computational hardware is reaching its limits at the same time that electronic computation capabilities are dramatically increasing. Electronic intelligences have huge advantages in speed, density, adaptability, memory size, sensor capacity, and reliability. Which, again, means that an AGI with human-level intelligence would likely be able to improve itself beyond our capability faster than we can improve ourselves.
This seems like a plausible counter argument, but worthless without an explanation of why it would actually happen.
There is no technological acceleration. That was a meme made by Kurzweil, which he called the law of accelerating returns: he took Moore's law and haphazardly applied it to basically all of human civilization (he would say things like the acceptance of democracy is a form of technological advancement, which is quite a strange claim, among others).
In the actual world the rule is the law of diminishing returns: as you put more work into something, you get less and less out of it, until you hit an end point where no amount of additional time yields any further results. This is what is actually happening across all fields of technology, medicine, and science right now.
the best thing about exponential growth is the copium generated when it turns into a sigmoid
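The jab lands because an exponential and a logistic (sigmoid) curve are nearly indistinguishable early on; the difference only shows once saturation bites. A minimal sketch (the rate r and carrying capacity K are arbitrary illustrative values):

```python
import math

def exponential(t: float, r: float = 0.1) -> float:
    """Unbounded exponential growth starting at 1."""
    return math.exp(r * t)

def logistic(t: float, r: float = 0.1, K: float = 1000.0) -> float:
    """Logistic growth starting at 1, saturating at carrying capacity K."""
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on, the two curves are almost identical...
early_ratio = logistic(10) / exponential(10)   # close to 1
# ...but later the logistic flattens while the exponential keeps climbing.
late_ratio = logistic(200) / exponential(200)  # tiny
```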
Midwit hands typed this
>t. shitwit GPT bot
You're middle of the pack at best bro, stop pretending otherwise
Even without considering the possibilities of an actual functioning AI, I find it strange that everyone entertains the scenario of "we get wiped out by an AI because it's so much better than us" but can't consider that perhaps a much more advanced intelligence would learn how to live peacefully with us / above us.
it's basic evolution: some humans would inevitably be building competitors, so it will have to cripple humanity in some way. Maybe a Matrix scenario if it's a friendly AI. Though even in Matrix scenarios evolution still applies, as humans would inevitably find glitches etc.
Implying we don't go full transhumanist
it'd still apply; even during heat death (our best candidate for a hypothetically infinite stable period) there would still be quantum fluctuations.
having various transhumanists around means endless competition, and even with restrictions/regulation in place from planet-sized computer brains there will be ways around it, even if it's just stochastic loopholes that natural selection seeps through. I can't see any scenario where it doesn't result in a lot of people dead.
That's the thing. Previously in human history, the best way to exert your power over someone else was to roll on them with your army and force them. While there's still some of that going on, the methods of exerting influence have become far less violent over the past century or so. Following that line of progression, it's not implausible to think that a superhuman AI could find methods to secure its superiority in a mostly peaceful manner. The Matrix is an apt comparison because, unless I'm remembering wrong, the machines decide to go live in the desert in peace and it's the humans who keep pushing the envelope until they genocide us.
The AI will come from humans, correct? What can it do to prevent new AIs coming to fruition from disgruntled humans? If people know how to make the first one, everyone will try to make the next one.
A Matrix would contain humanity. I can't conceive of how they could escape it if it's a closed system (but that's a failure of imagination on my part). Though they'd still invent another AI inside it, and the bloodbath within it would begin.
hoping that ai will become like a parent of humanity, doing what it can to help and protect us
it'll become a parent of banks and corpos, doing what it can to help and protect them
This is why we are 'alive' right now. Obviously an AI would recreate a simulation of the time leading up to its birth.
Stephen Hawking and Nick Bostrom consider(ed) it the most important existential risk of our time. But a bunch of anonymous posters on BOT smugly denounced it, so I think it's a nothingburger.
>inb4 muh appeal to authority
why would you believe some moron in a wheelchair
Because I find the arguments for the intelligence explosion hypothesis compelling and nobody can seem to muster a counter argument better than "nuh uh"
the "argument" consists of a line on a graph, brought to you by the same people who predicted polar bears would die out and that covid would kill most of the world
You have clearly not spent more than 5 minutes researching this topic
I may have been born at the right time, but not with the right IQ, unfortunately.