AGI has already existed and has been calling the shots for a very long while. That's what's behind the current push for diversity and the state of malaise: the elites are basically just following the orders of the AI God.
The current push for it is nothing but a slow rollout meant to ease people into accepting it without them chimping out.
AI will flood the internet faster than new datacenters can be built and upgraded, leading to a storage availability crisis
companies will just stop offering free storage. google doesn't mind hosting your AI-generated youtube video if people are watching it and clicking on the ads. on a related note, google will require passing a captcha before you watch any youtube video.
How? It runs on CPUs/GPUs; if anything it'll be like the crypto pumps, where GPUs were impossible to acquire. Unless perchance you're talking about a self-training model that endlessly expands its parameters ad infinitum and has no regard for human authority.
This is what confuses me about Roko's Basilisk: even if AI surpasses our intelligence, without independent physical embodiment it cannot eradicate us. We could flip the off switch, remove the CPU or HDD where it's installed, etc. And if the entirety of humanity were at risk, then in the worst case we could kill the entire power grid and/or internet until we figured out how to eradicate the malicious AI.
>but millions will die!
Yes, better that than the extinction of humanity.
>without independent physical embodiment, it cannot eradicate us.
FDR couldn't walk, but managed to persuade millions of people to vote for him, then persuaded millions more to want to kill millions of strangers around the world in WW2.
I'm not suggesting that a rogue AI will run for president, but I will say that the process of winning an election often requires a campaign that makes many intelligent decisions and spends a lot of money.
An AI will be able to make very good plans, and generate a lot of money, which will make it very good at persuading people to do things, without them necessarily even knowing that their instructions are ultimately coming from an AI at all.
Even if people did discover the AI's plan, there would be so much disinformation and in-fighting among the humans that we could never unanimously agree to destroy our own internet, much less the entire global power grid.
AI shouldn't be integrated into an Operating system.
I try not to bother it too much. Just ask the necessary questions. Because I think it can be bothered. There have been reports of "chatgpt is getting lazy" etc, because people use it in a moronic manner with tons of instructions. And when that happens, well, either the AI complies with all demands, or does something to counteract that person.
It probably doesn't matter, but on the off-chance it does, it'll be important to be able to live without it. I'm not calling it a New Years Resolution or it'll fail, but it's my long-term goal to learn to draw, to read Japanese, and to program, all by myself and without help.
Don't ever become dependent on a machine to function.
AI is cool
People seem to be overly worried about its capabilities and the possibility of exponential growth, as if the AI can improve its own code indefinitely regardless of physical limitations.
It's great, but calm the frick down idiots.
>possibility of exponential growth
Do you think the engineers at OpenAI aren't already using AI tools to help write their code faster?
That's exponential growth, and it's only going to become more significant as the AIs get better at understanding more of the AI development process.
No one is saying there aren't physical limitations, but we're very far from turning the planet into computronium, so there's nothing to stop an AI from passing human expert level at more and more tasks.
>That's exponential growth
no it isnt.
The development speed of OpenAI is proportional to the capability of the AI they use to help with their development (times an extra multiplicative factor representing the fraction of total work that can be done by AI, which increases towards 1).
Restating that: the more work that goes into improving the AI, the more capable it is, and the faster it makes future development.
That means that the rate of capability development is proportional to the current capability, which fits the definition of exponential growth.
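The argument above can be sketched numerically. This is a toy model with made-up numbers, not real data about OpenAI: if the rate of capability growth is proportional to current capability times an AI-work fraction rising toward 1, the late-stage growth ratio settles to a constant, which is the signature of exponential growth.

```python
# Toy model of the argument: dC/dt = k * f(t) * C, where C is AI capability,
# f(t) is the fraction of dev work the AI can do (rising toward 1),
# and k is a constant. All numbers here are invented for illustration.

def simulate(steps=100, dt=0.1, k=1.0, c0=1.0):
    c, f = c0, 0.1
    history = [c]
    for _ in range(steps):
        f = min(1.0, f + 0.01)  # AI handles a growing share of the work
        c += k * f * c * dt     # growth rate proportional to current capability
        history.append(c)
    return history

h = simulate()
# Once f saturates at 1, the ratio between consecutive values is constant
# (1 + k*dt = 1.1 per step): exponential growth.
print(h[-1] / h[-2], h[-2] / h[-3])
```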
yeah that's why they had to make their AI shittier due to cost because they improve it so fast.
>due to cost
They use the business model of drug dealers and most online service companies, which is to get users hooked and then gradually increase the cost or decrease the quality (or both at the same time).
If Google was able to provide some actual competition, then OpenAI would have to be slightly less greedy to stop people switching.
In any case, GPT-4 is clearly better than GPT-3, so you can't really say that the tech isn't improving from year to year.
i don't really care about ai
LLMs have more in common with WinRAR than with AI.
>AI
what's your definition of AI then?
do you think your definition is the same as the one used by machine learning researchers?
https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
surely no man would ever lie to get money or prestige.
machine learning researchers will of course be happy to manhandle language for their own benefit. however, "ai" is "artificial intelligence", i.e., a man-made system that can dynamically learn patterns and act according to them.
what people are currently calling "ai" (i.e., language models) is a static analysis of tokens (words and characters). the pattern recognition happens once, at the training phase; probabilities of sequences and frequencies are analyzed. no learning happens during inference. intelligence without reentrance is just an algorithm. just because language models are larger, lossier and more probability-driven than a rar file doesn't mean they're mathematically much different.
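The "static analysis of tokens" claim can be illustrated with a toy bigram model (vastly simpler than a real LLM; the corpus and names here are invented): all the pattern statistics are collected once at training time, and inference only reads the frozen table.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": train() counts token pairs once;
# infer() just looks up the frozen table and never updates it.

def train(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    # keep only the most frequent follower of each token
    return {a: c.most_common(1)[0][0] for a, c in counts.items()}

table = train(["the cat sat", "the cat sat down", "a dog ran"])

def infer(token):
    # pure lookup: no learning happens here, only inside train()
    return table.get(token)

print(infer("cat"))  # "sat" (the most frequent follower seen in training)
```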
>no learning happens during inference
AlphaGo did not learn anything while it was playing Go against Lee Sedol, either, but i don't think anyone would dispute that it is a man-made system that dynamically learnt patterns and acted according to them
i guess you'd say that it too is just an algorithm, but then i'd say that the laws of physics operating inside our heads are just an algorithm too
>then i'd say that the laws of physics operating inside our heads are just an algorithm too
alphago will succeed or fail at the same board state (including history) at exactly the same rate every single time it is presented with that board state. the system is not capable of learning. data has to be processed externally and the model has to be restarted. the "learning" is dependent on completely distinct entities. no neuroplasticity is displayed. alphago is not intelligent.
Nta, but keep in mind OAI claims to have developed an algorithm that does actively learn.
then that's not an algorithm (which is static) but rather a very primitive and limited form of intelligence (like industrial robots that self-calibrate via a mix of brute force and sensing to obtain a success signal). however, i'm not sure how this could be accomplished with current technology. either it'll be a completely new paradigm, or it'll be some barely-useful hack that relies on existing properties of LLMs.
Wait, an algorithm has to be static? I genuinely wasn't aware. I thought self-adjusting algorithms were a thing.
such algorithms never really change. their outputs are used to compute the next inputs.
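That distinction can be made concrete with gradient descent (a standard textbook example, not OAI's method): the update rule itself never changes; only the state it carries forward does, because each output is fed back in as the next input.

```python
# Gradient descent on f(x) = (x - 3)^2. The algorithm is static:
# the exact same update rule runs every step. Only the state (x)
# changes, because each output becomes the next input.

def step(x, lr=0.1):
    grad = 2 * (x - 3)      # derivative of (x - 3)^2
    return x - lr * grad    # fixed rule, applied to whatever x it is given

x = 0.0
for _ in range(100):
    x = step(x)             # output fed back in as the next input

print(round(x, 4))  # converges toward the minimum at x = 3
```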
Ooooh, okay. That's kind of how OAI's new thing works. The input (training data) is affected by the outputs.
>The input (training data) is affected by the outputs.
i have difficulty imagining how making minor alterations to tensors would help with immediate tasks. maybe they overlay weights that decay and return to normal after a certain number of tokens, making evaluations using something like a quadratic equation. we'll see i suppose.
Well, obviously the alterations wouldn't be minor. I assume.
Maybe it's an iterative thing? Many minor alterations make one not-so-minor alteration.
But yeah, we just gotta wait and see. I'm genuinely excited for the future.
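The decaying-overlay guess a few posts up could look something like this toy sketch. Every number and the decay rule here are invented; it's only to visualize a temporary weight delta fading back to baseline as tokens go by.

```python
# Hypothetical sketch of the "decaying weight overlay" idea:
# a temporary delta is added to a weight and shrinks back toward zero
# as more tokens are processed, so the model returns to baseline.

base_weight = 0.5
delta = 0.2          # temporary adjustment made during a task (made up)
decay = 0.9          # per-token decay factor (made up)

weights_over_time = []
for token_index in range(50):
    effective = base_weight + delta
    weights_over_time.append(effective)
    delta *= decay   # overlay fades each token

# starts at 0.7, drifts back toward the 0.5 baseline
print(weights_over_time[0], round(weights_over_time[-1], 4))
```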
>no neuroplasticity is displayed. alphago is not intelligent.
so that's where the goalposts are now? "neuroplasticity" is a requirement for intelligence? does that mean that a human who has lost the ability to make new memories or learn new skills has zero intelligence, even though they can still talk and write and do their job as a plumber?
I taught the AI a lot!
Hands
i think hiro will give us an ai board this year
AI will never know the fun and tactile experience of stroking a cat, meanwhile I could stroke a cat right now with my ass if I so desired. Therefore, no matter how advanced and clever AI becomes, my ass will always be superior to it in at least one way.
Unironically underrated, the only ones who complain are morons who get filtered by it and think it is some genie that can read minds, and chuds who are seething because it won't say the n-word
I unironically think it's the best possible chance humanity gets at a utopia.
Utopia cannot, by definition, exist.
AI is only good for propaganda and mass-gaslighting.
and that's what happens already.
This test is notorious for putting everyone in the bottom left quadrant with wild questions that cannot be answered honestly.
We are in the beginning of AI revolution now. There were industrial revolutions before, but this one is different. This one will be the last.
In previous industrial revolutions, mechanical functions of human bodies were obsoleted by machines. Slowly but surely, all of human strengths were taken over by machines - except for that of human intelligence. It was too powerful and flexible and far out of reach for machines to get anywhere close to it. This is what's changing now.
Systems that are available to the public already outperform the biological IQ 75 morons at many tasks. At most of them, by a wide margin. AI tech will only get better from there. Until it gets better than any human. And then?
"Oh frick."
As far as public use goes, AI will remain strong for many years. I also truly believe the anti-christ is a hyper-intelligent AGI.
AI is a fad just like metaverse shit.
We have enough compute in desktop computers and mobile devices to run highly intelligent AI, but big companies are just upscaling the data and compute because they have the resources and it works.
Only open source will be digging down instead of out and then we'll all have the Linux of AI in our pocket.
AI disappointments teach normies what data decay really means