Do you believe AI has diminishing returns of scale?
Gwern doesn't seem to think so. "More powerful NNs are ‘just’ scaled-up weak NNs, in much the same way that human brains look much like scaled-up primate brains" https://www.gwern.net/Scaling-hypothesis
Remember to support your answer!
neural models are graded in nature and organize gradually over time - which rules out the kind of sudden exponential improvement you can get with symbolic models.
that said, i don't see why there should be an upper limit on what neural networks can process, though i suspect our simplistic MLP models are a dead end.
"AI" is a corporate buzzword
Isn't the whole point that after a certain threshold they are powerful enough to incrementally self-optimize? It reminds me of bootstrapping a compiler in that it sounds both completely logical and totally nonsensical.
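A toy version of that bootstrap loop, just to make the analogy concrete (everything here - Model, propose_improvement, benchmark - is invented for illustration, not any real system):

    #include <cstdio>

    struct Model { double quality = 0.0; };  // hypothetical "model" reduced to one score

    // Hypothetical improvement step whose gains shrink as quality rises.
    Model propose_improvement(const Model& m) {
        Model next = m;
        next.quality += 1.0 / (1.0 + m.quality);  // each generation buys less than the last
        return next;
    }

    double benchmark(const Model& m) { return m.quality; }

    int main() {
        // Like bootstrapping a compiler: each generation is produced by the previous one,
        // and the loop stops at a fixed point where improvement stalls.
        Model m;
        for (int gen = 1; gen <= 10; ++gen) {
            Model next = propose_improvement(m);
            if (benchmark(next) - benchmark(m) < 0.05) break;  // diminishing returns bite
            m = next;
            std::printf("generation %d: quality %.3f\n", gen, m.quality);
        }
    }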
>It reminds me of bootstrapping a compiler in that it sounds both completely logical and totally nonsensical.
Only for like one minute. Then you think it through and realize there is nothing magical and it's pretty basic.
AI auto-evolving is a different story, though, and I don't expect that to happen for at least a century.
>a century.
Way too long; we've already got some models training on code.
Of course, these beasts can't do logic to save their life, but it's just a matter of time before someone puts 1+1 together in their head and figures out how to do it.
Once the transistor/heat problem and the RAM problem are solved, we'll be on our way to better machines.
50 years tops, as long as climate change, solar storms, war, or whatever don't get in the way, I say.
That's the idea, but it has never been demonstrated practically or theoretically. It's science fiction at this point.
>Do you believe AI has diminishing returns of scale?
AIs are fundamentally non-linear, so anyone expecting only linear behavior is a fool.
The big problem with them is that the current iteration of the tech takes far too much data, energy and money to train.
Logarithms aren't linear
Aren't there special neural network training chips being developed right now that should cut the training cost to 1/100th or even less? Neuromorphic AI chips is what they're called.
Do you think they will be built in the style of FPGAs (LUTs and multiplexers) or will they be more analog in nature?
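(For the LUT half of that question - my sketch, not anything the anon wrote: a k-input LUT just stores 2^k output bits, and the inputs index into them, so it can implement any k-input boolean function:)

    #include <cstdint>
    #include <cstdio>

    // A 4-input LUT: 16 stored bits; the 4 input bits select which stored bit comes out.
    bool lut4(uint16_t table, unsigned inputs) {
        return (table >> (inputs & 0xFu)) & 1u;
    }

    int main() {
        const uint16_t and4 = 0x8000;  // program the table as a 4-input AND: only 0b1111 -> 1
        std::printf("%d %d\n", lut4(and4, 0b1111), lut4(and4, 0b0111));  // prints "1 0"
    }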
Analogue training, digital inference is the future.
>analogue training
What do you call analogue?
Wdym analog? Are you talking opamps, capacitors, and ADCs?
If your goal is to replace a human being, then AI will do a good job, but if your job is to solve a problem quickly and accurately, then you'll have to use a different tool (or maybe hire a real engineer).
I don't understand that graph.
What the hell is the Y axis? How much computation was required to train it?
How many smart points the AI has
How is it measured? Performance on the same benchmarks? If so, how can it be unbounded?
It's a logarithm and all evidence indicates this
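For context (my gloss, not something anyone in the thread wrote): the empirical scaling-law fits behind graphs like the OP's are power laws in compute, e.g.

L(C) \approx \left( \frac{C_c}{C} \right)^{\alpha_C}

with a small fitted exponent, so loss falls roughly linearly in \log C - every constant improvement costs a constant multiple of compute.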
Yes, exponential growth patterns always cease once physical limitations rear their ugly head.
The "AI meme" is no exception. It will limited by the power consumption and logistics to operate all of the hardware.
Depends on what kind of AI.
Current "AI", aka deep learning/reinforcement learning, is just brute forcing. This type of AI is has logarithmic growth against processing power - but you also have to consider that processing power grows exponentially, so that gives something of a linear curve that slowly diminishes.
Real AI? Nobody knows since nothing close to it even exists.
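That "logarithmic against exponential" claim cashes out as one line of algebra (my arithmetic, not the poster's):

P(c) \propto \log c, \quad c(t) \propto 2^{t/T} \implies P(t) \propto \frac{t}{T} \log 2

so logarithmic returns on exponentially growing compute look linear in calendar time - and flatten the moment compute growth does.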
Didn't a Google engineer make the news claiming Google has a sapient, real AI, not very long ago?
Do you think ChatGPT is sapient? These AIs lack the logic layer, which is arguably the most important part of sapience. They're essentially just extraordinarily good databases and retrieval systems, much in the same way a human can talk for a few sentences while zoned out and then snap to reality and be like - "what the fuck did I just say", except much better since technology is outpacing the human brain.
>They're essentially just extraordinarily good databases and retrieval systems
Not even close
no, that was mostly just an overdramatic gay
No, that's the news manipulating you.
Dude said something like "Google should not have a monopoly on this technology" and briefly mentioned something about the machine being as smart as a 4 year old or some shit like that, sort of unrelated.
What do you think the news published?
no, that isn't what happened
for fuck's sake, his heavily edited '''transcript,,, was titled "Is LaMDA Sentient? - an Interview"
the media didn't take some random thing he said out of context and blow it out of proportion
after everyone with a brain called him retarded he retreated from his bailey to a motte
The current stuff is a hackjob just to help us find a real AGI program.
>logistic
>More powerful NNs are ‘just’ scaled-up weak NNs, in much the same way that human brains look much like scaled-up primate brains" https://www.gwern.net/Scaling-hypothesis
lol these gays are retarded. the human brain is a DYNAMICAL system. every neuron interacts with every other neuron a la a chaotic differential equation. classical computers do not do this. neural networks may try to imitate this chaotic interaction, but fundamentally they exist on classical computers, which cannot simulate truly complex chaotic systems
Turing-complete machines can perform any computation (apart from undecidable problems like the halting problem). It may be less efficient, but they can certainly imitate the way the brain works.
it's not about the theoretical ability, it's about size. to simulate an actual human brain with a classical computer you'd need a computer the size of the galaxy because the small-scale interactions are incredibly complex
modern supercomputers can simulate a few molecules interacting accurately for a few fractions of a second, and the size of the problem grows combinatorially as you add more molecules
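The sensitivity claim behind this is easy to demo (my sketch; the logistic map is a stand-in for "chaotic differential equation"):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Two runs of the chaotic logistic map x -> 4x(1-x), started 1e-12 apart.
        // The gap roughly doubles every step, so a finite-precision simulation
        // loses the true trajectory within a few dozen iterations.
        double x = 0.3, y = 0.3 + 1e-12;
        for (int step = 1; step <= 40; ++step) {
            x = 4.0 * x * (1.0 - x);
            y = 4.0 * y * (1.0 - y);
            if (step % 10 == 0)
                std::printf("step %2d: |x - y| = %.3e\n", step, std::fabs(x - y));
        }
    }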
You think we can't reduce neurons down to abstracted data structures and that every molecule matters? Odd take.
>You think we can't reduce neurons down to abstracted data structures
No you need the molecular information
Why not? A neuron, at heart, is just a data structure holding references to other neuron data structures, performing behaviors based on the signals it receives from other neurons - like passing the signal on down to its own references.
Hell, it's probably something like this in C++:
#include <memory>
#include <unordered_set>

class Neuron;  // forward declaration so LinkData can mention it
struct LinkData {  // stand-in for per-link state (weights, thresholds); stubbed so this compiles
    bool ShouldContinueSignal(const std::shared_ptr<Neuron>&) const { return true; }
};

class Neuron {
public:
    std::unordered_set<std::shared_ptr<Neuron>> links;
    LinkData linkData;

    void ReceivedSignal(float intensity) {
        for (const auto& link : links) {
            if (linkData.ShouldContinueSignal(link)) {
                link->ReceivedSignal(intensity);  // pass the signal downstream
            }
        }
    }
};
Obviously it's far more complicated, but that's really all neurons are. Look up neural networks - they don't behave exactly like neurons, but the idea of a hidden layer came from examining how the brain works.
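(If you want to poke at that sketch, a minimal wiring example - mine, not the anon's:)

    int main() {
        auto a = std::make_shared<Neuron>();
        auto b = std::make_shared<Neuron>();
        a->links.insert(b);        // a feeds b
        a->ReceivedSignal(1.0f);   // cascades from a into b (purely structural; no output)
    }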
>here I know everything about how the brain works
You can educate yourself just by reading some articles on it. It doesn't take an expert to know how neurons work on a basic level. The claim that you would need to simulate the molecules of the brain is pretty obviously stupid - the molecules in the brain are just implementing this same data structure, so we can imitate it directly with modern programming techniques.
You overestimate academics. They have answers to everything and they write thousands of papers a day, but in reality nobody can answer even simple questions about the brain like "why do we have to sleep"
It's all post hoc explanations and rationalizations
A computer literally can't simulate a cell but you think it can simulate a neuron?
But that's not true, computers can simulate cells and they are often used to do just that in biomedicine research
They can't simulate a cell. The models used in research are toys. They're good for some purposes, but completely inadequate for many others.
Think of it this way
All simulations are wrong
Some are useful
yes.
when you look at neural network behavior vs human behavior, humans are capable of feats that computers are utterly incapable of (writing long mathematical proofs, for example), whereas computers are capable of feats that humans are utterly incapable of (multiplying a trillion numbers)
this suggests that they work on a fundamentally different level. dynamical vs non-dynamical computation
https://en.wikipedia.org/wiki/Level_of_analysis#Marr's_tri-level_hypothesis
you can in theory simulate the brain to arbitrary precision, as was said above.
that said, popular models ignore what the brain does... the last model i read about that implemented feedforward/feedback (ff/fb) inhibition was Leabra.
mathgays really ruined the field.
mathgays literally built the field, filthy code monkey
lmao.
im not interested in mathgay values squashed through a saturating nonlinearity, im interested in the biological neuron firing-rate limits it attempts to reproduce.
mathgays built the field, and now the field is full of hollow models.
And cavemen built the wheel, your point?
By now you should know code logic and math are as similar as math and physics.
The amount of people in this thread who are just spouting words they clearly don't understand is amazing
Algorithmic efficiency doubles every 16 months. Recently there was a paper where a 350M parameter model outperformed InstructGPT 175B on a variety of zero-shot instruction induction tasks. Blindly scaling up existing models clearly has diminishing returns because it's not worth spending an order of magnitude more money to get a small linear increase in performance. Eventually we will reach physical limits of computation, both in hardware and in information entropy, but we're not even close to that yet, especially with advances in neuromorphic computing.
>Recently there was a paper where a 350M parameter model outperformed InstructGPT 175B on a variety of zero-shot instruction induction tasks
Wtfff how
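To put a number on the 16-month figure (back-of-the-envelope, mine): if algorithmic efficiency doubles every 16 months, the compute needed to reach a fixed result after t months is

C(t) = C_0 \cdot 2^{-t/16}

so five years of algorithmic progress alone buys a 2^{60/16} \approx 13x reduction before counting hardware. The 350M-vs-175B result is a much bigger jump than that, so it presumably reflects a genuine technique change rather than steady efficiency drift.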
>needs orders of magnitude more compute and data to improve
>prompt ai to access all hardware connected to the internet
>have giga ai at your disposal
>prompt ai to design gpu/cpu's
>create the hardware and make it run on it
>repeat
>prompt ai to rewrite its code in the most efficient way in the next language and run on it
https://en.m.wikipedia.org/wiki/E._coli_long-term_evolution_experiment
There are diminishing returns, but it is unbounded.
Think logarithmic or square-root growth.
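In symbols (my gloss): both candidate curves are unbounded yet have vanishing marginal returns,

\lim_{x \to \infty} \log x = \lim_{x \to \infty} \sqrt{x} = \infty, \qquad \frac{d}{dx}\log x = \frac{1}{x} \to 0, \quad \frac{d}{dx}\sqrt{x} = \frac{1}{2\sqrt{x}} \to 0

which is exactly "diminishing returns, but unbounded".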
I don't see how your link relates.
Obviously evolution will "slow down" once the organism is fit enough for its environment.
Most biologists assume hyperbolic or logistic growth (e.g. with a carrying capacity), which has a finite limit.
Evolution appears to not have an upper limit, but it does still have diminishing returns.
AI models are pretty much evolutionary in nature, so it'd be reasonable to use evolution as a benchmark.
Are you a 5th grader? You realize you can fit infinitely many exponential and logistic curves over that set of points.
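Concretely (my own toy numbers): an exponential e^{rt} and a logistic curve with the same initial value and growth rate are numerically indistinguishable until the logistic nears its ceiling, so early data points cannot pick between them:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double r = 0.5;  // shared growth rate (arbitrary)
        const double K = 1e6;  // logistic carrying capacity (arbitrary)
        for (double t = 0.0; t <= 10.0; t += 2.0) {
            double expo = std::exp(r * t);
            double logi = K / (1.0 + (K - 1.0) * std::exp(-r * t));  // logistic with N(0)=1
            std::printf("t=%4.1f  exp=%10.3f  logistic=%10.3f\n", t, expo, logi);
        }
        // Up to t=10 the two agree to within ~0.02%; they only diverge near K.
    }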