Current AI tech will just make us look for lazier and lazier solutions.
>just use a trained AI to do X
Agreed we should be looking into biotech to increase the intelligence and memory capacity of the human brain. Why would we want to dull ourselves into lesser beings?
I think it's funny how these AI shills don't understand that it will make the world a fundamentally worse place for labor and probably lead to a permanent dark age where we're only given the bare minimum required to survive by the elites and their subservient managerial class
if that's the end point of this then we'd have to be really dumb as a species not to break past that.
>I know the future, b-because i just do okay?!
it can't be worse than not having a white homeland anymore
It can get a whole lot worse.
Or maybe it will create a lot of new jobs
no?
Why not?
>automating everything, including the thinking
>makes more jobs instead of removing them
Automation is not sought in any capacity outside the military and academia except to remove personnel costs. Asimov was being overly romantic in I, Robot when he envisioned a future run by AI super-brains that neatly organized and managed human labor. That is not how it will be used.
So what, it could create new fields that require work or connect people in a way that creates new work. Computers have automated tons of stuff that once used to be done manually, yet we have very low unemployment today
I know a person like that, and they actually work with neural AI, and they actually believe true AGI will be here in like 10-15 years. Their response was basically something like "Maybe it'll have those negative effects, but I don't care because it's interesting as fuck and I am willing to risk everything for something so curious and fascinating and don't really care what comes after".
>"Maybe it'll have those negative effects, but I don't care because it's interesting as fuck and I am willing to risk everything for something so curious and fascinating and don't really care what comes after".
that's based
your friend is retarded to believe AGI will be here within 15 years
That's the opposite of based
It's the soience position of
>yeah I dont care man its le awesome hecklerino!!1!
>neural AI
That's just a buzzword. AGI will never happen.
For the life of me I simply cannot listen to that dumbass for more than five seconds. His voice makes me want to punch him until his face turns into pulp.
>brooo after kings become absolute we'll be stuck serving them and paying whatever tax our highness wishes upon us
>what's happening at the Bastille?
If they're stupid, anyways.
In practice they will do autogenocide.
>>what's happening at the Bastille?
Now imagine the Bastille was defended by fully automated turrets and sentries.
>Thinking no one would be able to shut those down
You need an absolute monopoly on all automata for that to work, or else some parts of that “subservient” PMC may defect for a chance at power, or otherwise disenfranchised hackers hijack some of those systems. We have already had cases of self-driving cars being hacked. Why wouldn't an insurrection of many people, some of whom would have those skills, make sure to hijack those machines?
>Thinking you will get the bare minimum
Holy cope anon
They'll just kill everyone, global holocaust. Why waste resources on you worthless parasitic hobos? You can't do any jobs anymore; you are useless gum on the machine, to be peeled off and tossed into the garbage. You have no value anymore.
>Why waste resources on you worthless parasitic hobos?
Because geniuses grow out of that substrate. And they enjoy arts and science, so they cherish people. Bezos literally said that he wants a humanity of a trillion people. And that is not the only reason. Imagine if this place had only one percent of its public. It would be boring as fuck. Even you entertain me, even though you're not smart at all. Maybe some of them enjoy being smarter than you no less than speaking to somebody smarter than them.
It is a bit of a relief to know even the worst silicon valley elite has a natalist perspective.
A bit.
AI can create endless virtual "people" for entertainment purposes, and as for science and tech, you already have the AI.
Those virtual people would probably eat more hardware and electricity than normal people consume.
And that was only one of the reasons. Imagine if we're being observed by some extraterrestrial civilization. What would their attitude be if we genocided every other living thing here? They would naturally consider us dangerous and act accordingly. If we learn to live peacefully, they would probably consider us friendly.
The deeper and more fundamental question is "why make enemies when you can make friends?" (friends help you and protect you, enemies try to get rid of you - it's that simple. Listen to your mom, be a nice boy)
poor poor wagie mindset
I pity you and beg you to stop worrying so much
Your main motivating factor is fear, it's understood, but can you, for fuck's sake, trust men better than yourself, please? Is it going that badly for you? Would you like to have the labour of some medieval peasant? Smoke some weed or something, enhance your worldview a little bit.
>it is easier to imagine the world's end than a much more modest change in the mode of production, however radical it may be
sniff
It's funny how you don't understand that AI is extremely simplistic and won't do jack shit
agi has already come online
the human race is doomed
shhhh
don't
🙂
AGI will be realized in the 2020s,
ASI will come online by about 2035.
Afterward, human, actual-person intelligence will mean almost nothing ever again.
uh but AI will solve everything bro, it's bad.
LOL
and it will end capitalism,
double win.
Welcome AI
AI is a product of Capitalism; in a way it will complete the purpose of Capitalism: most people will be rendered useless and will get replaced.
Reminder that John Carmack gave credibility to the ICL model that sent the world into a panicked lockdown. We now know that model was complete shit and the code was shit multiplied by bloody vomit. While he doesn't have the knowledge to validate a pandemic model, he has the expertise to know that the code was a complete dumpster fire. He knowingly misrepresented this in order to help scare the world into lockdown.
All I see is Carmack saying he improved the code somewhat from what was given to him
AI moved past "Linear Regression again but this time with FEELING" yet?
If your understanding of statistical modeling begins and ends at regression analysis, then no - AI is still just regression analysis with extra steps.
If you understand a bit more about statistics and advanced mathematics, then no. The advanced stuff today is all about kernel density, dimensionality, graph theory, and using topology (especially manifolds) to build more accurate models. No one is doing, or has been doing, simple regression analysis, especially not anything remotely linear.
All that means, though, is that the methods of discovering links between data are more advanced than simply looking at geometric relations. It doesn't mean that AI has advanced beyond complex statistical models bootstrapped onto linear functionals plus topology.
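For what it's worth, the "regression with extra steps" framing can be made concrete. A minimal, illustrative NumPy sketch (the shapes and weights here are made up for the example): each neural-network layer is an affine map - the same functional form as linear regression - with a nonlinearity stacked on top, which is roughly what "statistical models bootstrapped onto linear functionals" cashes out to.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "layer": an affine map Wx + b (linear regression's functional
# form) followed by a ReLU nonlinearity - the "extra step".
def layer(x, W, b):
    return np.maximum(0.0, W @ x + b)

# Two stacked layers: a composition of linear maps and nonlinearities.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([1.0, -0.5, 2.0])
h = layer(x, W1, b1)   # hidden activations, all nonnegative after ReLU
y = W2 @ h + b2        # the final layer is, again, just an affine map
print(y.shape)         # (2,)
```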
Carmack is probably wrong about that. The closest thing we have to general AI are generative models, which are still nothing more than statistics algorithms bootstrapped onto linear functionals.
As for your take, OP, yes - computers make us lazier. The number of remaining people who can solve a PDE, by hand, to within any given margin of error is effectively zero. Everything is automated. The last bit of effort left in the sciences is the theory - whether that be in the development of new theory, or in the efforts of those who work in applied fields to build new applications from the theory. Everyone else is just throwing shit into a computer and taking whatever the output is on faith.
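The "throw it into a computer and take the output on faith" workflow is easy to illustrate. A toy sketch - not a PDE, just the simplest ODE dy/dt = -y, whose hand-derived solution is e^(-t) - integrated with forward Euler; the step count is an arbitrary choice for the example:

```python
import math

# "Throw it into a computer": forward-Euler integration of dy/dt = -y
# with y(0) = 1, whose by-hand solution is y(t) = exp(-t).
def euler(f, y0, t_end, n):
    h = t_end / n
    y = y0
    for _ in range(n):
        y += h * f(y)  # one explicit Euler step
    return y

approx = euler(lambda y: -y, 1.0, 1.0, 100_000)
exact = math.exp(-1.0)
print(abs(approx - exact))  # small but nonzero truncation error
```

The point of the post stands in miniature: the machine's answer is close, but it is an approximation whose error the user typically never checks against the hand solution.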
is all intelligence randomly generated?
Imagine thinking you can kickstart consciousness by running a database through mental gymnastics loops. That would be such a retarded waste of time. Imagine investing into that and telling people AI is x years out.
what do you think a baby is for the first couple years of life? It just parrots whatever information it gathers from its environment
A baby, like a human baby? You mean like the one that develops over time and has the capacity to be an observer of the universe as a conscious experience? The property of consciousness is already there. You're trying to create that in your toaster by having it mimic human development. I think that's incorrect.
if you isolate a person their entire life in a box with no external stimuli then it will be no more alive than a starved AI
>if the observer can't observe it will be exactly like this toaster
Yeah, no shit.
>lazier and lazier solutions
Optimal. Science is pragmatic.
Current computer tech will just make us look for lazier and lazier solutions.
>just use a calculator to solve this integral
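And the calculator greentext, spelled out: the lazy numerical route to the integral of x^2 over [0, 1] via a composite trapezoid rule (the by-hand answer is 1/3; n = 1000 subintervals is an arbitrary choice for the example).

```python
# The "calculator" approach the greentext mocks: composite trapezoid
# rule for the integral of x^2 over [0, 1] (by-hand answer: 1/3).
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))     # endpoint terms get half weight
    for i in range(1, n):
        total += f(a + i * h)       # interior sample points
    return total * h

result = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
print(result)  # ~0.3333335, slightly above the exact 1/3
```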