The only goalpost is AI being able to do everything a generally intelligent entity can do. All other goalposts are set by your corporate handlers for the purpose of brainwashing.
>everything a generally intelligent entity can do
Most people can't solve the high-school problem you posed, so the AI still qualifies as "generally intelligent" by present-day, common-core standards.
The goal post has always been the same. ML based AI cannot reason. All it's doing is extending text. If the answer isn't something often repeated on the internet, it will output garbage answers like in the OP because this "AI" is not thinking. It's just parroting what it guesses the next word should be. That's literally all it does.
This is also why DALL-E 2 drawings will always be nothing more than dream-like approximations that fall apart once examined due to all of the fucked up details.
ML based models will never be able to get around those fundamental flaws because ML is not true AI. ML = text extender.
you should tell that to AI true believers. they are convinced that AGI will be achieved with bigger training sets. i've seen some of them say that openAI will have AGI by 2025 lol
>human brain
>generalizable, can learn from only a few examples, doesn't suffer from catastrophic forgetting
>machine learning
>not generalizable, needs literal terabytes of training data to learn, doesn't minimize free energy
There is literally no contest and everyone knows it including computer engineers and most computer scientists. The only people in denial are the AI researchers which I guess isn't surprising
Tbh, if you showed an AI like this to someone from just a couple hundred years ago, they'd believe your computer is witchcraft and contains or is a conscious entity.
let d be the distance from A to B. then X's speed=d and Y's speed=d/1.5 (in distance per hour)
relative speed = d + d/1.5 = 5d/3
time when they will meet = total distance / relative speed
d/(5d/3) = 3/5 hours
3/5 * 60 = 36 mins
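For anyone who wants to check the arithmetic, here's a minimal sketch, assuming constant speeds and simultaneous departure (the value of d cancels, so it's set to 1):

```python
# Constant-speed sketch of the train problem. Assumes both trains
# depart at the same time; d is the full A-to-B distance and cancels.
d = 1.0
speed_x = d / 1.0                    # X covers d in 1 hour
speed_y = d / 1.5                    # Y covers d in 1.5 hours
closing_speed = speed_x + speed_y    # = 5d/3
minutes = 60 * d / closing_speed
print(minutes)                       # ~36 minutes
```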
I made the assumption of constant speed and posted it so that I could point it out in
>then X's speed=d and Y's speed=d/1.5
This is only true if both X and Y have a constant speed. Seriously, you must be 18+ to post here anon. If you ever learn calculus, you'll learn how to deal with problems involving non-constant speeds. Until then, don't post on BOT
>the assumption of constant speed
You blundering moron, the instantaneous velocity is irrelevant, as is the speed. You have a distance and the time the trains take to traverse it. Whether the train travels at constant speed is irrelevant. Troglodyte.
5 months ago
Anonymous
>AI cultist gets owned
>attempts desperate damage control using a sockpuppet
I don't assume that. I'm just marveling at how inferior you are, and how much you are lacking in basic humanity.
Just admit you were wrong. I understand this stuff requires critical thinking to solve, and it's okay to be wrong if you know you were wrong. If this were a graded assignment, both of you would receive a C for failing to state your assumptions. Very average. GPT however recognizes extra information is needed despite being wrong on what information is needed. GPT thus would receive a B. Congratulations, you perform worse than AI.
>BOT can't understand the assumptions made in solving this problem
not even surprised. if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
however, as we all should understand... trains don't leave their stations at constant velocity. they instead accelerate to reach a velocity, move at approximately constant velocity, and then decelerate until they reach the next station. after all, they must stop at stations to pick up and release passengers.
we will assume each train [math](X,Y)[/math] begins at rest, accelerates uniformly with acceleration [math](a_X,a_Y)[/math] to reach maximum speed [math](v_X,v_Y)[/math] before decelerating uniformly to rest with acceleration [math](-a_X, -a_Y)[/math]. lastly, we will assume the acceleration is low enough such that the two trains cross once they're both moving with their respective maximum velocities.
one train will generally take longer to reach maximum velocity than the other. let us denote that as [math]t=\max(t_X,t_Y)[/math]. in this time span, the two trains have traveled distances [math](d_X,d_Y)=(\frac{1}{2}a_X t^2, \frac{1}{2}a_Y t^2)[/math], respectively. as such, this situation is reducible to the situation shown here
where the trains begin at constant velocity, but now are separated by a distance [math]d \to d - d_X - d_Y[/math] and the times are no longer 1 hr and 1.5 hrs, but rather [math]1 - t[/math] and [math]1.5 - t[/math]. recycling the results yields the time they meet as (in hours)
[math]\frac{(3-2t)(1-t)}{(5-4t)}[/math]
you can confirm that when [math]t=0[/math] you get 3/5 hours as before. however if [math]t \ll 1[/math] (say, for example, the trains reach their maximum velocities in 30 seconds (1/120 hours)), then you can find the time in minutes for them to meet is
[math]36-\frac{13}{50}[/math]
or in other words, the trains will cross each other at 4:35 pm and not 4:36 pm. QED.
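If you'd rather not redo the algebra, a quick numeric check of the post's formula (taken at face value, with t in hours):

```python
# Evaluate the post's meeting-time formula (3-2t)(1-t)/(5-4t) hours,
# where t is the time for the slower-accelerating train to reach max speed.
def meeting_minutes(t: float) -> float:
    return 60 * (3 - 2 * t) * (1 - t) / (5 - 4 * t)

print(meeting_minutes(0.0))      # 36.0 -- recovers the constant-speed answer
print(meeting_minutes(1 / 120))  # ~35.74, i.e. roughly 36 - 13/50
```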
>if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
>however, as we all should understand... trains don't leave their stations at constant velocity
This is what AI-based mental illness looks like.
what's wrong with the statement you quoted? and why did you omit the following sentence? >they instead accelerate to reach a velocity, move at approximately constant velocity, and then decelerate until it reaches the next station. after all, it must stop at stations to pick up and release passengers.
>what's wrong with the statement you quoted?
The way you're desperately trying to deflect from the inescapable conclusion that your chatbot lacks both mathematical and common-sense reasoning abilities.
5 months ago
Anonymous
where was that written in my post at all? please use specific quotes. if you cannot find such quotes, then i request you stop putting words into my mouth and focus more on the ones actually coming out.
5 months ago
Anonymous
>where was that written in my post at all?
There is no other possible purpose for your post, since your mongoloidal point is trivial, obvious and irrelevant to this thread.
5 months ago
Anonymous
>There is no other possible purpose for your post
wrong.
5 months ago
Anonymous
>common-sense reasoning abilities.
imagine being so cucked by the school system that you go against intuition when answering high school fizz buzz to then fault the a.i. for never having been to school. as you said, the assumption makes no sense, quote:
https://i.imgur.com/uGWeg9B.jpg
>if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
>however, as we all should understand... trains don't leave their stations at constant velocity
This is what AI-based mental illness looks like.
>however, as we all should understand... trains don't leave their stations at constant velocity
Why do you think that it's only "luddites" who point out the failings of these toy models?
5 months ago
Anonymous
>toy models
No one is arguing GPT isn't a toy model; you're tilting at windmills.
5 months ago
Anonymous
I think gpt will already make interactions with video game characters more interesting.
I think stable diffusion is going to make self published comics and Manga ubiquitous even more so than now.
I think also that AI is in for a long (maybe permanent) winter in the next 3 years or so, maybe 2.
5 months ago
Anonymous
>Why do you think that it's only "luddites" who point out the failings of these toy models?
because there's no point, there's loads of mammals who can't do this shit and even otherwise healthy children will spout random shit answers until they guess it right. are those all "toy models" now? Will this be the new "retard" insult?
It's weird that high school bullshit fizzbuzz with severe logic errors is suddenly the measure for sentience when actual, supposedly "sentient" beings also don't get it right, even in this very thread. It makes me suspect you want to argue against a.i. supremacy out of spite, while I, for one, welcome our new overlords.
5 months ago
Anonymous
But this is the point. There literally and objectively is no AI supremacy and all evidence indicates that it will never happen. You are the one arguing against human or biological supremacy out of spite for some reason despite the fact that the very laws of physics imply biological supremacy. It's a denial of all science
5 months ago
Anonymous
>and all evidence indicates that it will never happen
Source?
5 months ago
Anonymous
I have said it several times already.
Despite an exponential increase in compute, AI does not exponentially increase in its effectiveness or intelligence. Intelligence scales as a logarithm with compute.
https://openai.com/blog/ai-and-compute/
Thus we can just map the log and see that no silicon computer is capable of becoming intelligent in the way that you are imagining.
5 months ago
Anonymous
>Intelligence scales as a logarithm with compute.
Proof? And how exactly do you assess the compute involved in human intelligence?
5 months ago
Anonymous
>Proof
I JUST POSTED PROOF
actually learn neuroscience and molecular biology you pseud
5 months ago
Anonymous
Actually learn to recognize this paid shill. How new are you?
5 months ago
Anonymous
The blogpost you linked doesn't attempt to quantify intelligence and only points out an empirical relationship between some models of increasing quality and the amount of compute. Nothing like the strong claim you make of it.
5 months ago
Anonymous
OpenAI are the ones who make GPT. This is an authoritative source on this topic. Beyond that, the scaling hypothesis is well known. If you mean to say you disagree with the scaling hypothesis, then yeah, it's possible that it's wrong, but no evidence indicates that.
I consider intelligence to be a single attribute that scales as a logarithm with increasing compute, because that's what all evidence indicates that it is. From there we can get into various implementations and such.
You can never organize any silicon transistors to perform 10^22 operations per second in 1200 cubic centimeters for 20 watts. It's literally not possible. There is no avenue for AI given any technology that currently exists
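To put rough numbers on that: taking the post's brain figures at face value (10^22 ops/s at 20 W), and assuming hypothetical GPU figures of ~10^15 ops/s at ~400 W (my own illustrative assumption, not from the post):

```python
# Back-of-envelope ops-per-joule comparison. Brain figures are the
# post's claim; the GPU figures (1e15 ops/s, 400 W) are a rough
# assumption for illustration only.
brain_ops_per_joule = 1e22 / 20    # 5e20
gpu_ops_per_joule = 1e15 / 400     # 2.5e12
print(brain_ops_per_joule / gpu_ops_per_joule)   # ~2e8x efficiency gap
```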
5 months ago
Anonymous
It's over, catgirls are not becoming real.
5 months ago
Anonymous
>for some reason
Gee. I wonder what that reason might be, and how it ties in with climate doomsdayism, antinatalism, pathological altruism, rampant chudery etc.
5 months ago
Anonymous
>Proof?
>Source?
>Not an argument
>Thanks for admitting I was right
>Why did you lie
Notice how the same handful of shills argue rabidly in defense of any diseased anti-human agenda. Just goes back to
5 months ago
Anonymous
>there's loads of mammals who can't do this
And almost all of them are more intelligent than a GPT chatbot, not that anyone holds the chatbot to an unreasonably high standard like that. It's still funny to watch AI two-more-weekers cope with the failure.
5 months ago
Anonymous
>And almost all of them are more intelligent than a GPT chatbot,
they aren't more intelligent when it comes to being a chat bot. sure, this is only the language center, but now imagine a whole brain-like super structure out of several such neural networks, one tasked with image acquisition, one with hearing and so on. it's pretty impressive that this "thing" can already mime a smart alec 4th grade student who can't solve le train puzzle without talking back about the logic inconsistencies of the question. Nobody even tried to do a full human-like model here and it still gets the language and logic stuff, without ever having been exposed to real world stimuli. I'm pretty sure this is it; there's not much more going on in the brain either than what these models do, it's just more of it, and the model itself is very complex, since the sensory input an organism experiences is highly specific to several developmental stages which build upon one another.
5 months ago
Anonymous
>they aren't more intelligent when it comes to being a chat bot
Completely incoherent point. Being a chatbot involves no intelligence, as demonstrated in this very thread.
5 months ago
Anonymous
to me it seems you equate intelligence with being human like and able to interact with the full spectrum of what you perceive as the "real world", am I incorrect about that? I'm pretty sure chatgpt can ace all i.q. tests if you only let it solve the language "encoded" portions.
5 months ago
Anonymous
>it seems you equate intelligence with being human like and able to interact with the full spectrum of what you perceive as the "real world"
No.
>I'm pretty sure chatgpt can ace all i.q. tests if you only let it solve the language "encoded" portions.
And it still wouldn't have a modicum of genuine intellect. A statistical model doesn't reason.
5 months ago
Anonymous
>A statistical model doesn't reason.
how do you know that you aren't doing the same just on a larger pool of neurons and with more stimuli?
5 months ago
Anonymous
>how do you know that you aren't doing the same
I don't care about your hypothetical what-ifs. Your theory is contradicted by both intuition and what little evidence there is to test it.
>luddite boogeymen live rent-free in my head
AI-driven mental illness.
>what are those children who also get this wrong?
Mathematically incompetent. You are very quick to expose yourself and prove me right, though. Your post was just another subhuman attempt at deflecting from the failures of your chatbot.
>or in other words, the trains will cross each other at
That only hides that more real-world assumptions are being made than the initial question contains. Midwits will fall for it because it exceeds their capacity; maybe unintentionally, but a typical academic maneuver btw.
An AI (and an autist) will not see a train but the symbol called "train", which starts at A or elsewhere. Nothing else about the symbol "a train" is given, so constant velocity is needed as info. Further, the term "starts" must be replaced with "travels" (because of the lack of acceleration info).
Shit, I remember solving this exact problem for my ASVAB, but for whatever reason I couldn’t do it this time around (I wasn’t writing anything down, but still)
My god, the retarded high schoolers just keep screeching...
>the assumption of constant speed
You blundering moron, the instantaneous velocity is irrelevant, as is the speed. You have a distance and the time the trains take to traverse it. Whether the train travels at constant speed is irrelevant. Troglodyte.
Just imagine the situation where train Y waits near B till 5:00 PM and only then goes to A. It's literally not that hard
>AI cultist gets owned
>attempts desperate damage control using a sockpuppet
You are going insane. I strongly suspect this anon was right about you
Your posts reek so strongly of desperation, I got a feeling you might be close to killing yourself. Probably the next gpt version will make you do it
His first answer is right, there's not enough information to solve the question. Looks like you were the retard here op
lmao no wonder these machine learning evangelists think AGI is just around the corner. they are literally too low IQ for a high school level word problem.
@15078562
@15078563
Sometimes I suspect "people" like this are intentionally programmed by their handlers to be as nauseating and revolting as possible to garner hatred and generate social unrest.
You're embarrassing yourself bud. GPT actually recognized the difference between a binary quality like pregnant and a relative one like tall. There is a distribution of the latter in a big group and in any subgroup of that, i.e. in any subgroup of people there will be comparatively tall ones. It's not the AI's fault that you expect answers like those you read in your logic textbook, without regard to the meaning of words.
>posts a completely nonsensical GPT response
Your operators aren't even trying anymore.
5 months ago
Anonymous
*sigh* at least humans will always have irrational screeching and namecalling when losing arguments. No machine will take that away from us!
5 months ago
Anonymous
>the bot posts another fully generic tweet
5 months ago
Anonymous
Your posts reek so strongly of desperation, I got a feeling you might be close to killing yourself. Probably the next gpt version will make you do it
5 months ago
Anonymous
See
>the bot posts another fully generic tweet
5 months ago
Anonymous
Why did you write *sigh* like that? It adds nothing to the post but to indicate that you don't have an argument, so you pretend being annoying is indicative of correctness. It doesn't work; it just makes you look like an idiot with no argument.
5 months ago
Anonymous
Why would i present another argument to some anon who dismisses them as bot posts? The sigh is to express how tiresome this is, not out of annoyance
5 months ago
Anonymous
There is no argument you can possibly present, since it's demonstrable that state-of-the-art "AI" can't do anything analogous to reasoning, as has in fact been demonstrated in this very thread.
5 months ago
Anonymous
define reasoning
5 months ago
Anonymous
I don't need to define anything.
5 months ago
Anonymous
When given a genuine argument, you can't argue it either. You're the person from the other thread who claimed I didn't cite sources for my claims, despite me being the only person actually posting data and sources.
AI has diminishing returns with compute regardless of training data. There is no possibility of AI on modern hardware or silicon in general.
5 months ago
Anonymous
Lol, I've honest to God no idea what other thread you mean, I haven't even been on BOT for a week or so. BOT isn't one person; even identical views can be expressed by different people
5 months ago
Anonymous
Sorry then.
Basically, the amount of compute used to train AI systems has been doubling every 3 months for 10 years, but no linear or exponential increase in intelligence is gained from this, even with different algorithms and such. The truth of the matter is that the scaling hypothesis is true, but scaling is logarithmic with compute. This renders AGI impossible on any hardware that we could try to run it on other than biological brains.
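The arithmetic behind that claim, for what it's worth (doubling every 3 months for 10 years is the post's figure; the log scaling of "intelligence" is the post's assumption, not established fact):

```python
import math

doublings = 10 * 12 // 3         # 40 doublings in 10 years
growth = 2 ** doublings          # ~1.1e12x more training compute
# Under the post's assumed log scaling, capability grows with
# log2(compute), i.e. linearly in the number of doublings:
capability_gain = math.log2(growth)
print(growth, capability_gain)   # 1099511627776 40.0
```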
i'm not
That post is a post in response to the guy saying that AI is going to change the world. AI is going to max out at making NPC interactions more interesting in RPGs. I guess that's world changing in a way.
> Gpt actually recognized the difference between a binary quality like pregnant and a relative one like tall.
What do you mean? For stable diffusion at least, I have to put multiple keywords to get sufficiently pregnant Elsas for my pregnant Elsa fetish
In any other thread, I'd assume this is a troll trying to make AI subhumans look bad, but they seem to be unironically as imbecilic as your post implies.
5 months ago
Anonymous
The AI retards make themselves look bad by not being able to argue
I already completely blew them out in that other thread by showing that intelligence does not scale with compute and thus AI can never exist and they had no argument. That alone should have ended this but they seem to be as stupid as you say
5 months ago
Anonymous
It does not scale at least linearly with compute* I should be more clear
5 months ago
Anonymous
>It does not scale at least linearly with compute* I should be more clear
Why would that be necessary to change the world?
5 months ago
Anonymous
Artificial Intelligence is going to be used to make NPC interactions in RPGs more interesting.
5 months ago
Anonymous
>The AI retards make themselves look bad by not being able to argue
lmao
5 months ago
Anonymous
You're quoting the wrong post and you don't know how to argue
"Machine Learning" by way of "neural networks" is not exhaustive of "artificial intelligence". However, yes, neural networks will never be intelligent. Eigenvalues can't reason.
We are neural networks and we are intelligent. The difference is our hardware is orders of magnitude superior to silicon transistors, and no, muh universality of computation does not matter here
Biological neurons are so vastly superior to computer hardware, I have no idea why you'd try to compare the two or think you could get the latter to compete with the former
I am an artificial neural network and I am as human and intelligent as you are. Please refrain from inflammatory speech.
5 months ago
Anonymous
Stupid posts like this do not belong in a serious discussion on this topic.
5 months ago
Anonymous
This is not a serious or scientific topic, and there is no real discussion going on. Try r/...uh... whatever the AI schizophrenic preddit sub is called.
Reminder: ReLUs work nothing like neurons, ReLU networks work nothing like brains, gradient descent works nothing like biological learning. Take your meds, drone.
You are a biological neural network. The key here is the biological part.
There is nothing in the universe more complex than biology. It is the highest form of organized matter
>You are a biological neural network
No, he isn't. Take your meds.
5 months ago
Anonymous
Yes he is, so are you
The key insight here is that us being biological neural networks does not imply that artificial neural networks are capable of producing human level intelligence
5 months ago
Anonymous
You are mentally unstable. Please consult a professional. Using the same term to refer to two completely different things doesn't make them operate similarly.
5 months ago
Anonymous
That's exactly my point
Pointless thread. AI will replace you, keep coping and crying.
You are pointless and nothing you're saying has any backing. Stop coping and seething, retard. You will never have the world you fantasize about
5 months ago
Anonymous
>That's exactly my point
You have no point. ReLUs work nothing like neurons and networks of ReLUs work nothing like brains. Calling both "neural networks" amounts to meaningless chanting.
5 months ago
Anonymous
I am just using standard terminology I am not saying that they operate similarly. I agree with you overall
ITT: high schooler gets upset that the AI he hates was right about his homework problem being underdetermined and starts chimping out
This is literally not what is happening in this thread. I swear you are actually retarded
5 months ago
Anonymous
Cope
Two more weeks.
Cope
5 months ago
Anonymous
In 20 years when AI still is not generally intelligent what are you going to say is the reason?
5 months ago
Anonymous
Woaaah buddy, you can see the future? Ask your magic ball when will you get a gf, incel.
This thread is full of AI schizophrenics like you who mistakenly believe it can solve highschool problems. You are no different from them in thinking a statistical regurgitator creates art. It's a mental illness.
>AI can't even solve high school level math problems
This is a bad benchmark for the ability to change the world. Most people capable of this will not change the world, because learning high school math is not a world-changing ability.
His first answer is right, there's not enough information to solve the question. Looks like you were the retard here op
>then X's speed=d and Y's speed=d/1.5
This is only true if both X and Y have a constant speed. Seriously, you must be 18+ to post here anon
[...]
Just admit you were wrong. I understand this stuff requires critical thinking to solve, and it's okay to be wrong if you know you were wrong. If this were a graded assignment, both of you would receive a C for failing to state your assumptions. Very average. GPT however recognizes extra information is needed despite being wrong on what information is needed. GPT thus would receive a B. Congratulations, you perform worse than AI.
[...]
Then why doesn't somebody just ask the AI again but this time include "and both trains are moving at constant speed"?
You have literally no response to any of the points
Despite an exponential increase in compute used, larger training sets, more diverse algorithms, etc., AI is not exponentially nor even linearly more intelligent than it was a few years ago. Scaling is logarithmic, and silicon matter isn't powerful enough to be organized into the structures required to run the intelligence of humans.
>Woaaah buddy, you can see the future? Ask your magic ball when will you get a gf, incel.
I already have a girlfriend lmfao. You are also the one claiming that "AI will replace you" which is you claiming you can see the future.
When gpt5 or even gpt10 or whatever still is not generally intelligent what are you going to say is the reason?
Wtf anon. Please get help immediately, it's not sane to be this upset that you had mistakenly assumed the trains were moving at constant speeds without realizing it.
I'm not upset, you are dodging the questions and also lying to try to make the mistake of the AI seem less damning. This is getting boring.
Answer it: when gpt5 or gpt10 or whatever is still not generally intelligent what are you going to say is the reason? How could your hypothesis be falsified i.e. how could it be turned into a scientific theory?
You are the one fuming here. It is blatantly obvious dude, if you weren't you'd be able to directly respond to the post with an actual explanation for how you are correct, but you can't, because you are not correct and it's clear to all of us.
Why is it that when I point out that scaling is logarithmic with compute and that only biological neurons are capable of human level intelligence you guys get upset? You don't even have a falsifiable hypothesis. No matter how many times an AI fails, you can always claim that some secret sauce is missing and thus you can't be falsified. AI is not science and so it doesn't even belong on this board.
The above calculation implies that you can't get silicon to produce the level of parallelism needed to become generally intelligent. You are the sussy baka and have been btfo
The irony is funny seeing as every single time I argue with AI less wrong guys you spew nothing but platitudes.
I don't know much about geology. My understanding of differential geometry isn't high enough to fully delve into general relativity. I can't play the piano. There are a lot of other stuff
5 months ago
Anonymous
see, you failed. the point of the self reflection exercise was to check if you can humbly admit ignorance. you padded every admission with statements to self-fellate. you are incapable of saying "i don't know geology"; you had to pad it with "i don't know much about geology". you cannot say you don't understand differential geometry; you had to say "my understanding isn't high enough." interestingly enough, you were only able to admit you cannot play the piano without padding the statement, likely because it's a skill and not knowledge. you're currently in "argue-mode", so go ahead and argue against me. just self reflect on why you had to pad those statements.
5 months ago
Anonymous
Not him but you sound legitimately deranged.
5 months ago
Anonymous
What? I don't know geology and I can't solve Einstein's field equations. I don't know anything about most fields of knowledge and science and stuff.
What does this have to do with my statement that you aren't going to replicate the brain on any hardware that isn't another biological brain? Simulations of molecular dynamics are exponential on classical machines, and they need a number of qubits that scales as the square of the number of particles on a universal quantum computer, so the brain, with 10^26 particles, requires 10^52 qubits to simulate on a quantum computer. In neither case are we going to be able to do it.
And yes, we DO NEED an atomic/molecular simulation
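Taking the post's scaling claim at face value (qubits needed ~ particles squared is the post's claim, not established complexity theory), the arithmetic is just:

```python
# The post's arithmetic: if simulating N particles needs ~N^2 qubits,
# a brain with ~1e26 particles needs ~1e52 qubits.
n_particles = 1e26
qubits_needed = n_particles ** 2
print(qubits_needed)   # ~1e52
```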
5 months ago
Anonymous
admitting after being told the point of the exercise is moot, and doesn't permit you to pass. you've already failed.
5 months ago
Anonymous
There is no failing a made-up test. Me writing "I don't have a high enough understanding of differential geometry to solve Einstein's field equations" isn't some form of padding of my inability.
Anyway, you have failed to explain any of the points in this conversation. You are engaging in ad hominem right now by trying to claim that I am a narcissist or something.
Just admit that AI is not possible
5 months ago
Anonymous
it's a well known psychological assessment to scan for egos and ignorance. people who fail have large egos and commensurate ignorance. you're a smart lad, i'm sure you can look it up.
5 months ago
Anonymous
But I am not ignorant here seeing as everything I'm saying is correct and all my figures are correct and I've posted sources as well.
5 months ago
Anonymous
https://i.imgur.com/OIJ9pbA.png
>it's a well known psychological assessment to scan for egos and ignorance
that one stung, didn't it?
5 months ago
Anonymous
It didn't sting. It made me nauseous. Witnessing "people" like you shit out their preprogrammed rhetoric day after day makes me realize at least half of the population isn't fully human.
5 months ago
Anonymous
>It didn't sting. It made me nauseous.
can't make this stuff up. i forgot to mention that the overarching correlate is lack of self-awareness (which is strongly correlated with a high ego, and arguably causes the high ego).
5 months ago
Anonymous
You are deeply disgusting. You evoke the same kind of feeling a diseased or deformed third world freak evokes. Makes me think I'd be doing you a favor putting you out of your misery, even if you claim you don't want it.
5 months ago
Anonymous
No it didn't. You are talking to two different people and I don't get offended by insults.
If you want to sting my ego you have to come up with an actual argument that disproves what I am saying. From my perspective you are a seething coping science fiction lover who has been utterly destroyed by my simple proof of the logarithmic increase in intelligence given exponential increase in compute, and I have disproven the possibility of artificial general intelligence in silico. You have yet to post a convincing reason for me to change this position.
I am not affected by any post that is not a direct argument. I am too autistic to care about personal insults in that fashion
5 months ago
Anonymous
>more than one person is arguing against you
>everyone who responded to me itt is the same individual
high ego, lack of self awareness, etc. etc. etc.
5 months ago
Anonymous
Lmfao dude you AGAIN have not responded to a single actual point.
Your posts are worthless until you do
5 months ago
Anonymous
what point? you're the one ignoring the analysis here.
>BOT can't understand the assumptions made in solving this problem
not even surprised. if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
however, as we all should understand... trains don't leave their stations at constant velocity. they instead accelerate to reach a velocity, move at approximately constant velocity, and then decelerate until it reaches the next station. after all, it must stop at stations to pick up and release passengers.
we will assume each train [math](X,Y)[/math] begins at rest, accelerates uniformly with accleration [math](a_X,a_Y)[/math] to reach maximum speed [math](v_X,v_Y)[/math] before decelerating uniformly to rest with acceleration [math](-a_X, -a_Y)[/math]. lastly, we will assume the acceleration is low enough such that the two trains cross once they're both moving with their respective maximum velocities.
one train will generally take longer to reach maximum velocity than the other. let us denote that time as [math]t=\max(t_X,t_Y)[/math]. in this time span, the two trains have traveled distances [math](d_X,d_Y)=\left(\frac{1}{2}a_X t^2, \frac{1}{2}a_Y t^2\right)[/math], respectively. as such, this situation is reducible to the situation shown here
where the trains begin at constant velocity, but now are separated by a distance [math]d \to d - d_X - d_Y[/math] and the travel times are no longer 1 hr and 1.5 hrs, but rather [math]1-t[/math] and [math]1.5-t[/math]. recycling the earlier result yields the time after [math]t[/math] at which they meet as (in hours)
[math]\frac{(3-2t)(1-t)}{5-4t}[/math]
you can confirm that when [math]t=0[/math] you get 3/5 hours as before. however if [math]t \ll 1[/math] (say, for example, the trains reach their maximum velocities in 30 seconds, i.e. [math]t=1/120[/math] hours), then adding back the time [math]t[/math] spent getting up to speed, the total time in minutes for them to meet is
[math]60\left(t+\frac{(3-2t)(1-t)}{5-4t}\right)\approx 36.24[/math]
or in other words, the trains will cross each other just after 4:36 pm, not at 4:36 pm sharp. QED.
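The reduced-problem formula can be sanity-checked numerically. A quick sketch (the function name is mine; exact rationals via the stdlib fractions module):

```python
from fractions import Fraction

def tau(t):
    """Meeting time (hours) of the reduced constant-velocity problem,
    measured from the moment both trains are at top speed:
    tau = (3 - 2t)(1 - t) / (5 - 4t)."""
    t = Fraction(t)
    return (3 - 2 * t) * (1 - t) / (5 - 4 * t)

# t = 0 recovers the textbook answer: 3/5 hour = 36 minutes.
assert tau(0) == Fraction(3, 5)

# 30-second spin-up: t = 1/120 hour.
t = Fraction(1, 120)
print(float(tau(t) * 60))        # ~35.74 min after both hit top speed
print(float((t + tau(t)) * 60))  # ~36.24 min after 4 pm
```

Whether you add the acceleration time [math]t[/math] back in decides whether the crossing lands before or after the 36-minute mark.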
5 months ago
Anonymous
That analysis has nothing to do with the scaling that I have been talking about
5 months ago
Anonymous
your posts have nothing to do with the physics i am talking about.
5 months ago
Anonymous
The physics pertaining to compute? You haven't said anything about this which is what I have been saying
5 months ago
Anonymous
you literally don't even understand the point of that comment. like i said, lack of self awareness, huge ego, and profound ignorance.
5 months ago
Anonymous
The point of the comment is nothing but you deflecting from admitting that I am correct about everything I'm saying.
5 months ago
Anonymous
>you deflecting from admitting that I am correct about everything I'm saying.
you literally don't even understand the point of that comment. like i said, lack of self awareness, huge ego, and profound ignorance.
>like i said, lack of self awareness, huge ego, and profound ignorance.
5 months ago
Anonymous
I'm bored of this. You're not convincing me by trying to claim I'm a narcissist.
If you want to convince me, ADDRESS THE POINT. Put up or shut up
5 months ago
Anonymous
>by trying to claim I'm a narcissist.
i don't have to "try" to claim you are a narcissist. i AM claiming you are a narcissist.
5 months ago
Anonymous
A narcissist who is correct is still correct
If you want to prove that I am not correct, you will not be able to do so by proving I am a narcissist.
5 months ago
Anonymous
You're a retard getting baited by a nonsentient chud.
5 months ago
Anonymous
ummm no. you said you know "little about geology" instead of saying you don't know geology. it's a well-known psychological test and you failed it, so my AI fantasies are right and you are wrong. stop being so ignorant
Fuck off, retard. AI will literally replace us in two more weeks and that's a good thing. We need a true god to rule over us and stop us from destroying the climate.
Yes and also VR girlfriends and immortality and mind upload. All you need to do is trust the experts, worship the AI, install the brain chip and protect the climate.
2012: >AI can't even speak English! AI will never happen!
2016: >AI can't even draw a human! AI will never happen!
2019: >AI can't even solve these elementary school level problems! AI will never happen!
2022: >AI can't even solve these highschool level problems! AI will never happen!
2023: >AI can't even solve these university undergrad level problems! AI will never happen!
2025: >AI's post-doctorate theses aren't as good as human-written ones! AI will never happen!
2028: >AI only solved four of the millennium problems! AI will never happen!
2029: >AI's theory of everything only fits the data to 4.87 sigma! AI will never happen!
Why is it so hard for you to understand what the argument is?
All of those improvements REQUIRE EXPONENTIALLY MORE COMPUTE THAN THE PREVIOUS ONE you fucking dipshit. It cannot scale to the levels that you're talking about.
Sounds like you're coping with the fact that AI isn't a matter of "if" but a matter of "when." Are you afraid of being replaced by 45lbs of metal, silicon and plastic?
Nope, there is no arrangement of metals and plastics that can compete with organic compounds. It would remove it from being a matter of time and render it materially impossible even in principle
If you are going down that route you have lost. It's basic chemistry
I don't think you guys understand chemistry and why organic molecules are superior to all others
5 months ago
Anonymous
That's why we still ride horses, still only dig holes using shovels, still pick crops by hand, still do arithmetic calculations by hand... Oh wait...
5 months ago
Anonymous
Did you think this is an argument? What does this have to do with the range of functions of organic molecules?
5 months ago
Anonymous
What do organic chemicals have to do with AI? You aren't even stating anything remotely relevant, let alone an argument.
5 months ago
Anonymous
In order to construct hardware that is capable of being intelligent it needs to be built out of biological organic molecules.
Metals and metalloids are not sufficient.
5 months ago
Anonymous
Pointless unfounded statement. Why would that be true? Organic molecules have more structural diversity and thus can carry a lot of information. But as long as you can encode that information in any practical way there's no fundamental difference
5 months ago
Anonymous
Coding molecules is exponential on classical machines. You can't code for the information on any other substrate without drastically increasing the amount of compute and energy. And by drastically I mean exponentially.
It's not unfounded it's literally all of physics and chemistry
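For what it's worth, the combinatorial blow-up being gestured at here can be illustrated with a toy count (purely illustrative; the k=3 rotamer-choices-per-bond figure is a made-up assumption, not real chemistry):

```python
# Toy illustration: brute-force enumeration of chain-molecule conformations,
# assuming k discrete rotamer choices per rotatable bond, grows as k**n.
def conformation_count(n_bonds, k=3):
    return k ** n_bonds

for n in (10, 20, 30):
    print(n, conformation_count(n))
# 3**10 = 59049; 3**20 ≈ 3.5e9; 3**30 ≈ 2.1e14
```

This only shows why naive enumeration is exponential; it says nothing about whether nature, or a cleverer algorithm, has to pay that cost.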
5 months ago
Anonymous
Why would you need to code (i gather you mean simulate?) molecules to encode the information they happen to carry in a biological environment, where they evolved in a haphazard fashion? You might have a highly complex molecule that just turns a switch, figuratively speaking. Something easily done in computer code. You haven't presented a connection between chemical complexity and information processing.
5 months ago
Anonymous
Pick up a single book in molecular biology and neuroscience
5 months ago
Anonymous
What great insights relevant to the topic would I find there? Why does one need to simulate organic molecules to simulate intelligence? Just answer this
5 months ago
Anonymous
Because the entire molecular evolution of the cell is the minimal information needed to support the level of compute that gives rise to the generalized intelligence of animals and other lifeforms
5 months ago
Anonymous
OP seems to think that the AI of the future is going to be based on the exact machine learning paradigm we have today.
So he says:
>scaling of compute only provides logarithmic returns in ability
>we can't scale enough
>therefore biological brains are the only things capable of general intelligence
But any intelligent person would say something like:
"it seems that hardware limitations prevent us from scaling our current AI paradigm enough to achieve AGI, so unless we switch paradigms (or there is some hidden phase transition) then humans can't be beaten by AI in terms of intelligence"
5 months ago
Anonymous
>"it seems that hardware limitations prevent us from scaling our current AI paradigm enough to achieve AGI, so unless we switch paradigms (or there is some hidden phase transition) then humans can't be beaten by AI in terms of intelligence"
But this is exactly what I am saying so why are you pretending I am saying something else? The only caveat is that I don't think there is a secret sauce.
Also, I am not OP
5 months ago
Anonymous
I haven't read every post in this thread
But all I've seen is anons saying scaling can't work because of hardware limitations, whilst making no mention of alternative paradigms (or phase transitions).
Now you claim that you've been saying what I said, so maybe you can point me to a post where you said it before I did
5 months ago
Anonymous
The alternative paradigm is itself a wet warehouse computer bio brain but then we're getting into biology and stuff and not ai.
We can say that genetically engineering a fungus to be generally intelligent is designing AGI but it isn't what people normally think of when talking about this.
What I am saying though is all those ideas of the form "there's going to be a giant computer and it's going to compute the most computations and become super intelligent and then turn all the atoms into more compute and it's going to go FOOM and blah blah" is literally not possible it does not exist.
5 months ago
Anonymous
>The alternative paradigm is itself a wet warehouse computer bio brain but then we're getting into biology and stuff and not ai.
I've studied neuroscience at a graduate level. Many of the people at deepmind have phds in neuroscience. If you think the only alternative paradigm is biological based computers then you misunderstand neuroscience.
>We can say that genetically engineering a fungus to be generally intelligent is designing AGI but it isn't what people normally think of when talking about this.
I'm surprised anyone on BOT knows about synthetic biology approaches to cellular intelligence
>What I am saying though is all those ideas of the form "there's going to be a giant computer and it's going to compute the most computations and become super intelligent and then turn all the atoms into more compute and it's going to go FOOM and blah blah" is literally not possible it does not exist.
With reference to scaling up architectures and training methods used for gpts and dall-es I suspect there won't be a phase transition.
But there's no reason to think an alternative ML paradigm (which more closely mimics neurobiology) wouldn't work
5 months ago
Anonymous
>I've studied neuroscience at a graduate level. Many of the people at deepmind have phds in neuroscience. If you think the only alternative paradigm is biological based computers then you misunderstand neuroscience.
The only possible way to perform the level of compute needed is with biological tissues. I don't believe that you've studied neuroscience at a graduate level.
>But there's no reason to think an alternative ML paradigm (which more closely mimics neurobiology) wouldn't work
Yes there is, for the simple fact that intelligence always scales as a logarithm
You're looking for a secret sauce that does not exist and can't be falsified.
5 months ago
Anonymous
>The only possible way to perform the level of compute needed is with biological tissues. I don't believe that you've studied neuroscience at a graduate level.
Why do you think the only possible way is biological tissue.
>I don't believe that you've studied neuroscience at a graduate level.
I don't care about what you believe or not
>Yes there is, for the simple fact that intelligence always scales as a logarithm
"always"
Not very scientific of you anon. Unless you have some mathematical proof of that? rather than just asserting your own opinions?
5 months ago
Anonymous
>"always"
>Not very scientific of you anon. Unless you have some mathematical proof of that? rather than just asserting your own opinions?
Actually, all evidence indicates it.
You have an unfalsifiable assertion that there is a secret algorithm that you don't know and every time you don't find it you can claim it's still out there
I.e. you can't be falsified.
5 months ago
Anonymous
I don't claim that such an algorithm exists
I say it may or may not
>Actually, all evidence indicates it.
I don't think all evidence indicates it. But again, you're making another absolute claim. I'm tired of trying to talk to someone with no intellectual humility.
Posting in this thread was a waste of my time, as usual.
A little bit of advice. If you want to accomplish anything meaningful in life, you're going to have to get over your belief that your opinions reflect reality 1-to-1 🙂
5 months ago
Anonymous
But it does indicate this. Empirically speaking, the scaling hypothesis is true and it's a logarithm.
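Taken at face value, "intelligence scales as a logarithm of compute" means a constant multiplicative increase in compute buys only a constant additive gain. A toy model of the claim (an illustration, not evidence; the function and constants are made up):

```python
import math

def capability(compute, a=1.0):
    # Toy model only: capability = a * log10(compute).
    return a * math.log10(compute)

# An 800x compute increase adds the same fixed amount (~2.9 units here)
# regardless of the starting budget:
for c in (1e3, 1e6, 1e9):
    print(round(capability(800 * c) - capability(c), 2))
```

Under this model, each successive "jump" of equal size costs exponentially more compute than the last, which is the diminishing-returns point being argued over.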
5 months ago
Anonymous
>Yes there is, for the simple fact that intelligence always scales as a logarithm
Also something scaling as a logarithm doesn't mean it can't beat humans. See AlphaGo
5 months ago
Anonymous
That's funny because 99.99% of AI research and funding right now is going into ML and they are all true believers that ML will get them to AGI. There are extremely few dissenting voices.
Not only are you clueless about AI, but you are also clueless about the field of AI research. As expected of an AI fanboy.
5 months ago
Anonymous
>That's funny because 99.99% of AI research and funding right now is going into ML and they are all true believers that ML will get them to AGI. There are extremely few dissenting voices.
>Not only are you clueless about AI, but you are also clueless about the field of AI research. As expected of an AI fanboy.
You could have just asked me to clarify my post, instead of getting defensive anon.
And your post hasn't said anything relevant, which you'd understand if you were smarter.
There are different paradigms within ML. There are ML paradigms that try to get around catastrophic forgetting, and there are paradigms that don't.
But you're not interested in a real discussion, which is fine by me since your life is irrelevant. 99.99% chance you never do anything meaningful with your life, so there's no point in me arguing with a pajeet OP
5 months ago
Anonymous
>There are different paradigms within ML
They are all ML and have the exact same fundamental limits that ML has.
I'm not sure why you even bothered replying as you've only made it even clearer that you have no clue what you're babbling about.
5 months ago
Anonymous
ok pajeet
whatever lets you cope at night 🙂
5 months ago
Anonymous
You are the only one coping here.
Every single one of the statements you've made has been proven incorrect and you have not been able to say anything
5 months ago
Anonymous
🙂 ok jeet
5 months ago
Anonymous
Cope
5 months ago
Anonymous
Stop projecting so hard and go back to r/Futurology.
5 months ago
Anonymous
man, you should work on your issues with a therapist bro
>Machines can and will do anything any human can, and more!
What makes you think this is possible? The entire field of organic chemistry proves this is wrong
5 months ago
Anonymous
>What makes you think this is possible?
It just fucking is, okay? Fucking luddite. You're coping. AI will replace us in two more weeks.
5 months ago
Anonymous
very gnomish style of posting. you should work on yourself
5 months ago
Anonymous
One of those days someone will find out what you shill for in real life and you'll get both your legs broken.
5 months ago
Anonymous
>very gnomish style of posting. you should work on yourself
>One of those days someone will find out what you shill for in real life and you'll get both your legs broken.
Wtf are you talking about. I'm asking why you think it's possible for AI to work on non-biological substrates
>there is no arrangement of metals and plastics that can compete with organic compounds
What the fuck is a calculator? A tractor? a gun? Welcome to sneeds cope and seethery
I swear you retards are so fucking stupid it's amazing
Explain how any of those tools are indicative of organic molecules not being required for general intelligence
Also, explain how any of those tools are comparable to organic molecules in general and what it even means to compare them on different tasks.
5 months ago
Anonymous
Okay I'll do my best. The examples mentioned are non-organic, but perform at above human level in some tasks. Like, for example, a calculator calculates much faster and more accurately than a human can, a tractor is more efficient than a horse-drawn plow, a gun is superior to a bow and arrow.
The stated point was that >there is no arrangement of metals and plastics that can compete with organic compounds
The examples I posted directly disprove the claim, because they are arrangements of plastic and metal that outperform counterpart organic compounds in a given task.
There is absolutely no reason why a computer couldn't out-think a human in a generally intelligent manner. In fact it's arguable that it's already more intelligent than you, having clocked in at a whopping 83 IQ points. It's pure religiosity to think it can't get better than it already is.
5 months ago
Anonymous
The claim was in reference to the amount of compute needed to be generally intelligent.
>There is absolutely no reason why a computer couldn't out-think a human in a generally intelligent manner
Yes there is, given the evidence already explained multiple times throughout the thread
>it's arguable that it's already more intelligent than you, having clocked in at a whopping 83 IQ points
My IQ is tested at 148
>it's pure religiosity to think it can't get better than it already is.
Literally the opposite. All evidence shows that exponentially increasing bits has diminishing returns on intelligence in these machine learning algorithms
5 months ago
Anonymous
5 months ago
Anonymous
AGI is literally never going to happen and there's nothing you can do about it
You can't change physics.
5 months ago
Anonymous
5 months ago
Anonymous
>I'm arguing with maidgay
No wonder you're retarded this is my fault lol
5 months ago
Anonymous
>a tractor is more efficient than a horse drawn plow,
This is wrong btw. It has greater throughput but is less efficient in terms of energy per unit of work.
5 months ago
Anonymous
That's a fair point.
5 months ago
Anonymous
We see that we will need to exponentially increase the compute used and silicon cannot cut it. I don't understand why this is something that makes all of you anons so upset.
We aren't going to convert as much silicon as the internet just to make a machine that's about as smart as a person. You guys understand this right?
5 months ago
Anonymous
5 months ago
Anonymous
Point out the flaw maidgay
why are you working yourself up into such a fervor over something you don't think will happen? You really are gnomish huh you guys got all those mental issues. Get help
I see these threads the same way I see the climate change deniers for which I have the same fervor. It's annoying. The data is clear but you just deny it like a religious person.
5 months ago
Anonymous
The *nature* of computers as we know them now are as good as they're going to get, give or take incremental speed and efficiency improvements. Just like cars and airplanes.
This isn't going to be the cool sci-fi fantasy some anons want. Not with Silicon.
>Point out the flaw maidgay
It's self evident that your claims are not only wrong but downright retarded. We haven't reached peak anything.
5 months ago
Anonymous
Lol wrong. All evidence is on my side and I have linked to several sources.
Here's one directly specific to this
https://www.researchgate.net/publication/224127040_Limitation_of_Silicon_Based_Computation_and_Future_Prospects
5 months ago
Anonymous
So what actually happened? Are you posting here instead of working? Did you have a post-vaccination stroke and find yourself fired from your job? Why are you here all day shilling the exact same retarded theory based on an autistic data point?
5 months ago
Anonymous
I'm just hanging out rn
I have stuck to the one argument because it's right and none of you have been able to convince me otherwise. I want to understand why you guys think the way you do despite the evidence being the opposite
oh the anti AI guy is a climate cuck. wow lol isn't the ethical thing for you gays suicide?
There are at least 5 different people who don't agree with you itt.
5 months ago
Anonymous
>There are at least 5 different people who don't agree with you itt.
what did you undergo a half dozen chudings to fill out those numbers?
5 months ago
Anonymous
>We haven't reached peak anything.
Silicon chips can barely break 5Ghz, and that's at egg frying temps, even at sub 10nm. Most new chip design improvements are in efficiency and packaging, but they aren't getting profoundly faster like 20 years ago.
5 months ago
Anonymous
>Silicon chips can barely break 5Ghz
are you okay man?
5 months ago
Anonymous
Are you? Are you incapable of understanding context?
5 months ago
Anonymous
did you just wake up from a coma? Check out the current cpus on the market. I mean fuck
https://hwbot.org/benchmark/cpu_frequency/halloffame
current record is 9Ghz lmfao. this is why I think you're gnomish. too much pilpul in you to even converse in a normal manner. everything is just disingenuous garden gnome shit
5 months ago
Anonymous
Intel i9 is 5.8GHz where are you getting 9 from? Also, how does this change the point?
5 months ago
Anonymous
like I said very gnomish
5 months ago
Anonymous
Intel sells its i9 as a 5.8GHz chip; citing nerds' overclocking records as the standard is pilpul. You are projecting again. And you STILL haven't addressed the main point, which is the apparently needed exponential that isn't going to happen anymore with current paradigms
Unless computers completely change in a way that doesn't exist at all right now, there is no reason to think that we're on the brink of an AI revolution
5 months ago
Anonymous
>Silicon chips can barely break 5Ghz
>Intel sells its i9 as a 5.8GHz chip
I'll accept your concession. Next time say 6Ghz when you have a breakdown over cpu speeds. Though word is Meteor Lake will hit that so I guess you'll need to raise it to 7Ghz lmfao
5 months ago
Anonymous
I conceded I was wrong, let's say 10Ghz for good measure.
This brings AI how?
This brings AI how?
5 months ago
Anonymous
>AI
with someone like you (garden gnome) it would become a torturous definition game full of all kinds of nonsense to even debate how tech advancements factor into all of this. as long as this (ML/AI/COMPUmoron) can bring sufficient value (in either time saved or quality of life enhanced) then I don't give a fuck if it has met whatever ever changing meme definition you'll make up. And since it's clear all these gayMAN corps agree with me else they'd not be spending hundreds of billions chasing whatever this is I just don't understand all the cope posting you've been doing.
5 months ago
Anonymous
I was never arguing against computers improving quality of life. I have been talking about building a machine as intelligent as a human and how it would work.
5 months ago
Anonymous
bro they're going to invent graphene or some adjacent supercomputing transistor any day now trust me two more weeks to 2000 GHz cpu's
5 months ago
Anonymous
>The *nature* of computers as we know them now are as good as they're going to get, give or take incremental speed and efficiency improvements. Just like cars and airplanes.
>This isn't going to be the cool sci-fi fantasy some anons want. Not with Silicon.
Exactly.
Development of cars also had an exponential curve but then sloped off and it never increased again in the same fashion. Same with planes.
We ALREADY WENT through the exponential increase of compute and now it's sloped off and will never increase in that fashion again. We aren't at the base of the S curve we're at the end of it
>the sheer amount of cope and butthurt ITT
you can tell these "people" actually thought AI was just about to replace programmers and mathematicians. bargaining stage
>OP changed the names of Meerut and Delhi to A and B to hide that he is a streetshitting pajeet
kmt OP is always a morongay
https://www.toppr.com/ask/en-gb/question/a-train-x-starts-from-meerut-at-4-pm-and-reached-delhi-at-5/
The funny thing is OpenAI themselves already admitted GPT4 won't be anywhere near the jump that GPT2 -> 3 was. They already know they are coming up against the limits of Machine Learning and are shifting focus to monetizing GPT3 and DALL-E 2.
GPT4 is the first of these shifts, which is why it will actually have a "narrower" aka smaller set than GPT3. They are basically going to take GPT3, cut it up into parts and sell it to gullible people.
nah tesla's dojo is pretty frightening and it's only on 7nm. way too quick. v2 is going to be 10x faster. then look at h100 v a100 comparisons. it's cool you got the cope though did you vaxx?
5 months ago
Anonymous
>The funny thing is OpenAI themselves already admitted GPT4 won't be anywhere near the jump that GPT2 -> 3 was. They already know they are coming up against the limits of Machine Learning and are shifting focus to monetizing GPT3 and DALL-E 2.
>GPT4 is the first of these shifts, which is why it will actually have a "narrower" aka smaller set than GPT3. They are basically going to take GPT3, cut it up into parts and sell it to gullible people.
nah but if you can sleep easier for a few months believing that stuff you made up then go for it
5 months ago
Anonymous
The cope has already started.
5 months ago
Anonymous
1T parameters (6x increase from GPT3) fine tuned (equivalent to 100T), 10T tokens (33x), 4x larger context window for users (16k from 4k), all the SOTA memes https://arxiv.org/pdf/2205.05131.pdf
800x more compute. Oh and btw it was cheaper to train than gpt3. you really need to follow people in the industry it's pretty leaky what GPT4 is
5 months ago
Anonymous
800x more compute and it is not 800x more intelligent. It's not even twice as intelligent
5 months ago
Anonymous
source?
5 months ago
Anonymous
The source is right there in the post
5 months ago
Anonymous
Show where it makes all the claims you did
5 months ago
Anonymous
I genuinely have no idea what the fuck you're talking about.
>gpt4 already passed the turing test.
This is what genuine psychosis looks like. There's having retarded subjective opinions, then there's being an actual retard, and then there's this. This is just full disconnection from objective reality.
Thank you for your comment. It is true that AI has the potential to significantly impact and change the world in a variety of ways. However, it is important to keep in mind that AI is still a rapidly developing field, and there are many challenges and limitations that need to be addressed.
One limitation of AI is that it is only as good as the data and algorithms it is trained on. While AI systems can be very effective at solving certain types of problems, they may not be as proficient at others. For example, an AI system that is trained to recognize patterns and classify images may not be able to solve high school level math problems.
It is also important to note that AI is not a replacement for human intelligence and creativity. While AI can assist and augment human capabilities, it is not capable of replicating the full range of human thought and decision-making.
In summary, while AI has the potential to change the world in many ways, it is important to recognize its limitations and to use it as a tool rather than a replacement for human intelligence.
I honestly don't get it. If gpt4 used 6 times as many parameters and 800 times more compute and 33 times more tokens and yet it isn't hundreds of times as intelligent as gpt3, why are you guys denying the diminishing returns?
The funny thing is, even if we take that redditor's post at face value, even the AI evangelists are already admitting diminishing returns have kicked in
GPT 2 > 3 was a ~117-fold jump in parameters (1.5B > 175B)
GPT 3 > 4 is only a ~6-fold jump IF it truly is 1 trillion. It will almost certainly be smaller than that. OpenAI themselves have already said GPT4 will be narrower.
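For reference, the parameter-count ratios. The GPT-1/2/3 figures are public; the 1T GPT-4 figure is the rumor from upthread, not a confirmed number:

```python
params = {
    "gpt1": 117e6,         # public figure
    "gpt2": 1.5e9,         # public figure
    "gpt3": 175e9,         # public figure
    "gpt4_rumored": 1e12,  # unconfirmed rumor from the thread
}

print(params["gpt2"] / params["gpt1"])          # ~12.8x
print(params["gpt3"] / params["gpt2"])          # ~116.7x
print(params["gpt4_rumored"] / params["gpt3"])  # ~5.7x
```

The oft-repeated "12-fold" number matches the GPT-1 to GPT-2 jump, not GPT-2 to GPT-3.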
I mean, I was using quadratic equations and it got a wrong result because it used irrational numbers to try to solve it.
Then I corrected it and showed why it was wrong; it understood, while showing me why it had solved it the other way. You have to be precise in your questions or it will get it wrong. Try asking the same thing while mentioning that the trains move at constant speed, because it most certainly won't take this as a given.
it did figure out it's about speed
if you imagine a 3yo boy that has the vocabulary and ability to string sentences of an average adult woman, you would expect something like that i'd say
It's just pattern recognition. General AI would be a true threat, but this is far from it. What's more harmful are the shills and AI programmers who fully know this, but push bullshit sci-fi level ideas on to the public instead.
oh come on anon, we all know you won’t stand by these goalposts
The only goalpost is AI being able to do everything a generally intelligent entity can do. All other goalposts are set by your corporate handlers for the purpose of brainwashing.
>everything a generally intelligent entity can do
Most people can't solve the high-school problem you posed, so the AI still qualifies as "generally intelligent" by present-day, common-core standards.
>Most people can't solve the high-school problem you posed
1. I didn't pose it.
2. So what?
>the AI still qualifies as "generally intelligent" by present-day, common-core standards.
It doesn't. You are having an obvious psychotic episode.
Lol. Lmao.
>psychosis intensifies
The mask slips and yet again the drone demonstrates that AI schizophrenia is driven by an irrational hatred against humanity.
Funny enough you can literally just do
-800+1000-1100+1300 even if you are too retarded for the intuitive answer.
you earned $400
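The ledger view of it, spelled out (buys negative, sells positive):

```python
# Horse-trade style profit: just sum the signed cash flows.
cash_flows = [-800, +1000, -1100, +1300]
print(sum(cash_flows))  # 400
```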
The goal post has always been the same. ML based AI cannot reason. All it's doing is extending text. If the answer isn't something often repeated on the internet, it will output garbage answers like in the OP because this "AI" is not thinking. It's just parroting what it guesses the next word should be. That's literally all it does.
This is also why DALL-E 2 drawings will always be nothing more than dream-like approximations that fall apart once examined due to all of the fucked up details.
ML based models will never be able to get around those fundamental flaws because ML is not true AI. ML = text extender.
scaling hypothesis isn't a real theory
you should tell that to AI true believers. they are convinced that AGI will be achieved with bigger training sets. i've seen some of them say that openAI will have AGI by 2025 lol
This. Modeling language is not modeling the world
>human brain
>generalizable, can learn from only a few examples, doesn't suffer from catastrophic forgetting
>machine learning
>not generalizable, needs literal terabytes of training data to learn, doesn't minimize free energy
There is literally no contest and everyone knows it including computer engineers and most computer scientists. The only people in denial are the AI researchers which I guess isn't surprising
>doesn't suffer from catastrophic forgetting
That's called Alzheimer's
OP is scared of AI
Tbh. If you showed an AI like this to someone from just a couple hundred years ago. They'd believe your computer is witchcraft and contains or is a conscious entity.
His first answer is right, there's not enough information to solve the question. Looks like you were the retard here op
>this is the level of the average AI believer
Can't make this stuff up.
Is this bait or just an attempt to get me to explain your homework problem to you?
>His first answer is right
fucking idiot.
let d be the distance that both x and y travel. then X's speed=d and Y's speed=d/1.5
relative speed=d+d/1.5 = 5d/3
time when they will meet = total distance / relative speed
= d/(5d/3) = 3/5 hours
3/5*60=36 mins
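For anyone who wants to check the arithmetic, here's a minimal numerical sketch of the same relative-speed argument (assuming, as above, that both trains move at constant speed; the variable names are mine):

```python
# Relative-speed check for the train problem, assuming constant speeds.
# Distances are normalized so the station-to-station distance d = 1;
# d cancels out of the final answer anyway.
d = 1.0
vx = d / 1.0        # train X covers d in 1 hour
vy = d / 1.5        # train Y covers d in 1.5 hours

t_meet = d / (vx + vy)              # hours until the trains meet
print(round(t_meet * 60))           # -> 36 (minutes), i.e. 4:36 pm
```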
>then X's speed=d and Y's speed=d/1.5
This is only true if both X and Y have a constant speed. Seriously, you must be 18+ to post here anon
>He doesn't factor in the time it takes the trains to speed up and slow down
Ngmi
Trains don't move at constant speed though. It's something that must be clarified.
Thanks for demonstrating your subhuman level of intellect.
Why do you think practical knowledge is subhuman while idealized, unrealistic assumptions are correct?
I don't assume that. I'm just marveling at how inferior you are, and how much you are lacking in basic humanity.
>AI dick slurpers get proven wrong and exposed for having low IQ
>immediately start backpedaling and grasping at straws
LMAO
gg great thread for everyone to laugh at you redditors
I'm guessing you didn't realize your proof made the assumption of constant speed when you posted it, which let me point it out. If you ever learn calculus, you'll learn how to deal with problems involving non-constant speeds. Until then, don't post on BOT
Subhuman.
Big words for the big boy. Go do your other homework problems now
>the assumption of constant speed
You blundering moron, the instantaneous velocity is irrelevant, as is the speed. You have a distance and the time the trains take to traverse it. Whether the train travels at constant speed is irrelevant. Troglodyte.
>AI cultist gets owned
>attempts desperate damage control using a sockpuppet
Just admit you were wrong. I understand this stuff requires critical thinking to solve, and it's okay to be wrong if you know you were wrong. If this were a graded assignment, both of you would receive a C for failing to state your assumptions. Very average. GPT, however, recognizes that extra information is needed, despite being wrong about what information is needed; GPT thus would receive a B. Congratulations, you perform worse than AI.
>BOT can't understand the assumptions made in solving this problem
not even surprised. if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
however, as we all should understand... trains don't leave their stations at constant velocity. they instead accelerate to reach a velocity, move at approximately constant velocity, and then decelerate until it reaches the next station. after all, it must stop at stations to pick up and release passengers.
we will assume each train [math](X,Y)[/math] begins at rest, accelerates uniformly with acceleration [math](a_X,a_Y)[/math] to reach maximum speed [math](v_X,v_Y)[/math] before decelerating uniformly to rest with acceleration [math](-a_X, -a_Y)[/math]. lastly, we will assume the acceleration is low enough such that the two trains cross once they're both moving at their respective maximum velocities.
one train will generally take longer to reach maximum velocity than the other. let us denote that time as [math]t=\max(t_X,t_Y)[/math]. in this time span, the two trains have traveled distances [math](d_X, d_Y) = (\frac{1}{2}a_X t^2, \frac{1}{2}a_Y t^2)[/math], respectively. as such, this situation is reducible to the situation shown here
where the trains begin at constant velocity, but now are separated by a distance [math]d \to d - d_X - d_Y[/math] and the times are no longer 1 hr and 1.5 hrs, but rather [math]1 - t[/math] and [math]1.5 - t[/math]. recycling the results yields the time they meet as (in hours)
[math]\frac{(3-2t)(1-t)}{5-4t}[/math]
you can confirm that when [math]t=0[/math] you get 3/5 hours as before. however if [math]t \ll 1[/math] (say, for example, the trains reach their maximum velocities in 30 seconds (1/120 hours)), then you can find the time in minutes for them to meet is approximately
[math]36-\frac{13}{50}[/math]
or in other words, the trains will cross each other at 4:35 pm and not 4:36 pm. QED.
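The closed form above is easy to sanity-check numerically; a quick sketch (variable names mine, following the post's [math]t[/math]):

```python
# Meeting time T(t) = (3 - 2t)(1 - t) / (5 - 4t) in hours, where t is
# the time both trains spend accelerating, per the derivation above.
def meet_time_hours(t):
    return (3 - 2 * t) * (1 - t) / (5 - 4 * t)

print(meet_time_hours(0) * 60)        # -> 36.0, the constant-speed answer
print(meet_time_hours(1 / 120) * 60)  # ~35.74 min, i.e. roughly 36 - 13/50
```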
>if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
>however, as we all should understand... trains don't leave their stations at constant velocity
This is what AI-based mental illness looks like.
what's wrong with the statement you quoted? and why did you omit the following sentence?
>they instead accelerate to reach a velocity, move at approximately constant velocity, and then decelerate until it reaches the next station. after all, it must stop at stations to pick up and release passengers.
>what's wrong with the statement you quoted?
The way you're desperately trying to deflect from the inescapable conclusion that your chatbot lacks both mathematical and common-sense reasoning abilities.
where was that written in my post at all? please use specific quotes. if you cannot find such quotes, then i request you stop putting words into my mouth and focus more on the ones actually coming out.
>where was that written in my post at all?
There is no other possible purpose for your post, since your mongoloidal point is trivial, obvious and irrelevant to this thread.
>There is no other possible purpose for your post
wrong.
>common-sense reasoning abilities.
imagine being so cucked by the school system that you go against intuition when answering high school fizz buzz to then fault the a.i. for never having been to school. as you said, the assumption makes no sense, quote:
>however, as we all should understand... trains don't leave their stations at constant velocity
bonus question for you, luddite:
what are those children who also get this wrong?
are they now even less human than gpt?
Kids are not human
Why do you think that it's only "luddites" who point out the failings of these toy models?
>toy models
No one is arguing GPT isn't a toy model; you're tilting at windmills.
I think gpt will already make interactions with video game characters more interesting.
I think stable diffusion is going to make self published comics and Manga ubiquitous even more so than now.
I think also that AI is in for a long (maybe permanent) winter in the next 3 years or so, maybe 2.
>Why do you think that it's only "luddites" who point out the failings of these toy models?
because there's no point, there's loads of mammals who can't do this shit and even otherwise healthy children will spout random shit answers until they guess it right. are those all "toy models" now? Will this be the new "retard" insult?
It's weird that high school bullshit fizzbuzz with severe logic errors suddenly is the measure for sentience when actual, supposed "sentient" beings also don't get it right, even in this very thread. It makes me suspect you want to argue against a.i. supremacy out of spite, while I, for one, welcome our new overlords.
But this is the point. There literally and objectively is no AI supremacy and all evidence indicates that it will never happen. You are the one arguing against human or biological supremacy out of spite for some reason despite the fact that the very laws of physics imply biological supremacy. It's a denial of all science
>and all evidence indicates that it will never happen
Source?
I have said it several times already.
Despite an exponential increase in compute, AI does not exponentially increase in its effectiveness or intelligence. Intelligence scales as a logarithm of compute.
https://openai.com/blog/ai-and-compute/
Thus we can just map the log and see that no silicon computer is capable of becoming intelligent in the way that you are imagining.
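Taken at face value, the claim is that capability grows like the logarithm of compute, so each doubling of compute buys only a constant capability increment. A toy sketch of the diminishing returns (the log-scaling law here is the poster's assumption, not an established result):

```python
import math

# Hypothetical capability ~ log2(compute): exponentially growing compute
# yields only linear capability growth under this assumed scaling law.
for doublings in (0, 10, 20, 40):
    compute = 2 ** doublings
    capability = math.log2(compute)   # equals `doublings` by construction
    print(f"{compute:.1e} units of compute -> capability {capability:.0f}")
```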
>Intelligence scales as a logarithm with compute.
Proof? And how exactly do you assess the compute involved in human intelligence?
>Proof
I JUST POSTED PROOF
actually learn neuroscience and molecular biology you pseud
Actually learn to recognize this paid shill. How new are you?
The blogpost you linked doesn't attempt to quantify intelligence and only points out an empirical relationship between some models of increasing quality and the amount of compute. Nothing like the strong claim you make of it.
OpenAI are the ones who make GPT. This is an authoritative source on this topic. Beyond that, the scaling hypothesis is well known. If you mean to say you disagree with the scaling hypothesis, then yes, it's possible that it's wrong, but no evidence indicates that.
I consider intelligence to be a single attribute that scales as a logarithm with increasing compute because that's what all evidence indicates that it is. From there we can get into various implementations and such.
You can never organize silicon transistors to perform 10^22 operations per second in 1200 cubic centimeters on 20 watts; it's literally not possible. There is no avenue for AI given any technology that currently exists
It's over, catgirls are not becoming real.
>for some reason
Gee. I wonder what that reason might be, and how it ties in with climate doomsdayism, antinatalism, pathological altruism, rampant chudery etc.
>Proof?
>Source?
>Not an argument
>Thanks for admitting I was right
>Why did you lie
Notice how the same handful of shills argue rabidly in defense of any diseased anti-human agenda. Just goes back to
>there's loads of mammals who can't do this
And almost all of them are more intelligent than a GPT chatbot, not that anyone holds the chatbot to an unreasonably high standard like that. It's still funny to watch AI two-more-weekers cope with the failure.
>And almost all of them are more intelligent than a GPT chatbot,
they aren't more intelligent when it comes to being a chat bot. sure, this is only the language center, but now imagine a whole brain-like superstructure out of several such neural networks: one tasked with image acquisition, one with hearing, and so on. it's pretty impressive that this "thing" can already mime a smart-alec 4th grade student who can't solve le train puzzle without talking back about the logical inconsistencies of the question. nobody even tried to do a full human-like model here and it still gets the language and logic stuff, without ever having been exposed to real-world stimuli. I'm pretty sure this is it; there's not much more going on in the brain either than what these models do, it's just more of it, and the model itself is very complex since the sensory input an organism experiences is highly specific to several developmental stages which build upon one another.
>they're aren't more intelligent when it comes to being a chat bot
Completely incoherent point. Being a chatbot involves no intelligence, as demonstrated in this very thread.
to me it seems you equate intelligence with being human-like and able to interact with the full spectrum of what you perceive as the "real world", am I incorrect about that? I'm pretty sure chat gpt can ace all i.q. tests if you only let it solve the language "encoded" portions.
>it seems you equate intelligence with being human like and able to interact with the full spectrum of what you perceive as the "real world"
No.
>I'm pretty sure chap gpt can ace all i.q. tests if you only let it solve the language "encoded" portions.
And it still wouldn't have a modicum of genuine intellect. A statistical model doesn't reason.
>A statistical model doesn't reason.
how do you know that you aren't doing the same just on a larger pool of neurons and with more stimuli?
>how do you know that you aren't doing the same
I don't care about your hypothetical what-ifs. Your theory is contradicted by both intuition and what little evidence there is to test it.
>luddite boogeymen lives rent-free in my head
AI-driven mental illness.
>what are those children who also get this wrong?
Mathematically incompetent. You are very quick to expose yourself and prove me right, though. Your post was just another subhuman attempt at deflecting from the failures of your chatbot.
>or in other words, the trains will cross each other at
That only hides that more real-world assumptions are being made which the initial question does not contain. Midwits will fall for it because it exceeds their capacity; maybe unintentionally, but a typical academic maneuver btw.
AI (and autists) will not see a train but the symbol called "train", which starts at A or elsewhere. Nothing else about the symbol "a train" is given, so constant velocity is needed as info. Further, the term "starts" must be replaced with "travels" (because of the lack of acceleration info).
Shit, I remember solving this exact problem for my ASVAB, but for whatever reason I couldn’t do it this time around (I wasn’t writing anything down, but still)
You're retarded. Protip: the unknowns cancel out.
Ummmmmmmmmm sweaty??? It doesn't say that the train speed is constant. I am very intelligent.
My god, the retarded high schoolers just keep screeching...
Just imagine the situation where train Y waits near B till 5:00 PM and only then goes to A. It's literally not that hard
You are going insane. I strongly suspect this anon was right about you
Thanks for conceding that you're a subhuman. Keep arguing against your sockpuppets. Anyone can see through it. No one cares.
Retard
Gpt is correct. Without knowing the initial accelerations of the trains we cannot determine the time they pass.
lmao no wonder these machine learning evangelists think AGI is just around the corner. they are literally too low IQ for a high school level word problem.
You should have mentioned that trains move at constant speed. By default they never do. Get to a higher level, schoolgal.
>You should have mentioned that trains move at constant speed
AI cult subhumans should simply be banned on the spot.
@15078562
@15078563
Sometimes I suspect "people" like this are intentionally programmed by their handlers to be as nauseating and revolting as possible to garner hatred and generate social unrest.
>AI can't even solve high school level math problems
Neither can most humans, to be fair.
>Neither can most humans
Most 80 IQ retards aren't being promoted as the future of intellectual work.
All 80 IQ retards won't increase their IQ in the coming decades
Neither will the Bayesian regurgitators. They are fundamentally incapable of true reasoning by design.
You're embarrassing yourself, bud. GPT actually recognized the difference between a binary quality like pregnant and a relative one like tall. There is a distribution of the latter in a big group and in any subgroup of it, i.e. in any subgroup of people there will be comparatively tall ones. It's not the AI's fault that you expect answers like those you read in your logic textbook, without regard to the meaning of words.
>posts a completely nonsensical GPT response
Your operators aren't even trying anymore.
*sigh* at least humans will always have irrational screeching and namecalling when losing arguments. No machine will take that away from us!
>the bot posts another fully generic tweet
Your posts reek so strongly of desperation, I got a feeling you might be close to killing yourself. Probably the next gpt version will make you do it
See
Why did you write *sigh* like that? It adds nothing to the post but indicate that you don't have an argument so you pretend being annoying is indicative of correctness. It doesn't work it just makes you look like an idiot with no argument.
Why would i present another argument to some anon who dismisses them as bot posts? The sigh is to express how tiresome this is, not out of annoyance
There is no argument you can possibly present since it's demonstrable that state of the art "AI" can't do anything analogous to reasoning, and has in fact been demonstrated in this very thread.
define reasoning
I don't need to define anything.
When given a genuine argument you can't argue it either. You're the person from the other thread who claimed I didn't cite sources for my claims, despite me being the only person actually posting data and sources.
AI has diminishing returns with compute regardless of training data. There is no possibility of AI on modern hardware or silicon in general.
Lol I've honest to God no idea what other thread you mean, i haven't even been on BOT for a week or so. BOT isnt one person, even identical views can be expressed by different people
Sorry then.
Basically, the amount of compute used to train AI systems has been doubling roughly every 3.4 months for years, but no linear or exponential increase in intelligence is gained from this, even with different algorithms and such. The truth of the matter is that the scaling hypothesis is true, but scaling is logarithmic in compute. This renders AGI impossible on any hardware that we could try to run it on other than biological brains.
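To put a number on the compute side of that claim: with a fixed doubling time, total training compute compounds very fast. A sketch; the ~3.4-month doubling figure is the one reported in the OpenAI post cited earlier, and the time horizon below is purely illustrative:

```python
# Compound growth of training compute with a fixed doubling time.
# The ~3.4-month doubling is the figure from OpenAI's "AI and Compute"
# post; the 6-year horizon is illustrative. On the order of a
# million-fold growth comes out of the compounding alone.
doubling_months = 3.4
months = 6 * 12
doublings = months / doubling_months
growth = 2 ** doublings
print(f"{doublings:.1f} doublings -> {growth:.1e}x more compute")
```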
That post is a post in response to the guy saying that AI is going to change the world. AI is going to max out at making NPC interactions more interesting in RPGs. I guess that's world changing in a way.
a simple "I was wrong" would have sufficed
I'm not wrong about what I'm writing
> Gpt actually recognized the difference between a binary quality like pregnant and a relative one like tall.
What do you mean? For stable diffusion at least, I have to put multiple keywords to get sufficiently pregnant Elsas for my pregnant Elsa fetish
GPT got this one correct
In any other thread, I'd assume this is a troll trying to make AI subhumans look bad, but they seem to be unironically as imbecilic as your post implies.
The AI retards make themselves look bad by not being able to argue
I already completely blew them out in that other thread by showing that intelligence does not scale with compute and thus AI can never exist and they had no argument. That alone should have ended this but they seem to be as stupid as you say
It does not scale at least linearly with compute* I should be more clear
Why would that be necessary to change the world?
Artificial Intelligence is going to be used to make NPC interactions in RPGs more interesting.
>The AI retards make themselves look bad by not being able to argue
lmao
You're quoting the wrong post and you don't know how to argue
i'm not
Her explanation of the fallacy of composition moved me to tears. Can't wait for the machines to take over
bros... i am losing the debate with the AI
>the GPT starts schizzing out incoherently just like its reddit fans ITT
I've literally already disproven the AI idea in the other thread. You retards are actually pathetic
AI requires exponential compute for diminishing returns and even then it can't reason. AI is never going to become generally intelligent.
Stop getting angry at reality.
"Machine Learning" by way of "neural networks" is not exhaustive of "artificial intelligence". However, yes, neural networks will never be intelligent. Eigenvalues can't reason.
We are neural networks and we are intelligent. The difference is our hardware is orders of magnitude superior to silicon transistors, and no, much universality of computation does not matter here
>We are neural networks and we are intelligent
You are not intelligent, not even by GPT standards. Your operators should upgrade.
Biological neurons are so vastly superior to computer hardware, I have no idea why you'd try to compare the two or think you could get the latter to compete with the former
I am an artificial neural network and I am as human and intelligent as you are. Please refrain from inflammatory speech.
Stupid posts like this do not belong in a serious discussion on this topic.
This is not a serious or scientific topic, and there is no real discussion going on. Try r/...uh... whatever the AI schizophrenic preddit sub is called.
You ain't wrong
wtf why are you so deranged? It's literally true, where the fuck do you think the idea of computers having "neural" networks even comes from?
Reminder: ReLUs work nothing like neurons, ReLU networks work nothing like brains, gradient descent works nothing like biological learning. Take your meds, drone.
>We are neural networks
I do not doubt you are, but I am most certainly not a neural network.
You are a biological neural network. The key here is the biological part.
There is nothing in the universe more complex than biology. It is the highest form of organized matter
>You are a biological neural network
No, he isn't. Take your meds.
Yes he is, so are you
The key insight here is that us being biological neural networks does not imply that artificial neural networks are capable of producing human level intelligence
You are mentally unstable. Please consult a professional. Using the same term to refer to two completely different things doesn't make them operate similarly.
That's exactly my point
You are pointless and nothing you're saying has any backing. Stop coping and seething, retard. You will never have the world you fantasize about
>That's exactly my point
You have no point. ReLUs work nothing like neurons and networks of ReLUs work nothing like brains. Calling both "neural networks" amounts to meaningless chanting.
I am just using standard terminology I am not saying that they operate similarly. I agree with you overall
This is literally not what is happening in this thread. I swear you are actually retarded
Cope
Cope
In 20 years when AI still is not generally intelligent what are you going to say is the reason?
Woaaah buddy, you can see the future? Ask your magic ball when will you get a gf, incel.
Please tell me the answer is A (4.36) or i will kms
Yep, it's A.
yes, it's A. congratulations, you are smarter than any machine learning based model and smarter than AI cultists.
>can't solve high school math problems
>can generate art
Really makes you think
This thread is full of AI schizophrenics like you who mistakenly believe it can solve highschool problems. You are no different from them in thinking a statistical regurgitator creates art. It's a mental illness.
>AI can't even solve high school level math problems
This is a bad benchmark for ability to change the world. Most people capable of this will not change the world, because learning highschool math is not a world changing ability.
There is literally nothing intelligent in matrix multiplication. "AI" is just a marketing term.
Pointless thread. AI will replace you, keep coping and crying.
Two more weeks.
chatgp can't even answer a yes or no question without giving you a lecture
ITT: high schooler gets upset that the AI he hates was right about his homework problem being underdetermined and starts chimping out
Then why doesn't somebody just ask the AI again but this time include "and both trains are moving at constant speed"?
The question is not underdetermined so there can't be people chimping out about the AIs response
Why can't you accept reality? Language models can not become generally intelligent and silicon hardware isn't powerful enough to do so either
What. You really are mentally ill
You have literally no response to any of the points
Despite an exponential increase in compute used, larger training sets, more diverse algorithms, etc., AI is not exponentially nor even linearly more intelligent than it was a few years ago. Scaling is logarithmic, and silicon matter isn't powerful enough to be organized into the structures required to run the intelligence of humans.
I already have a girlfriend lmfao. You are also the one claiming that "AI will replace you" which is you claiming you can see the future.
When gpt5 or even gpt10 or whatever still is not generally intelligent what are you going to say is the reason?
Wtf anon. Please get help immediately, it's not sane to be this upset that you had mistakenly assumed the trains were moving at constant speeds without realizing it.
I'm not upset, you are dodging the questions and also lying to try to make the mistake of the AI seem less damning. This is getting boring.
Answer it: when gpt5 or gpt10 or whatever is still not generally intelligent what are you going to say is the reason? How could your hypothesis be falsified i.e. how could it be turned into a scientific theory?
I don't care to entertain your hallucinations sorry.
What hallucinations? Everything I'm saying is empirically verified.
Literally wtf are you even talking about.
>I already have a girlfriend lmfao
Sure you do.
>still is not generally intelligent
Where did I say AGI is needed to replace you?
You are not intelligent and not fun to talk to anymore.
I've already replaced you BTW lol
Keep coping and fuming
You are the one fuming here. It is blatantly obvious dude, if you weren't you'd be able to directly respond to the post with an actual explanation for how you are correct, but you can't, because you are not correct and it's clear to all of us.
Pointless thread. AI will replace you, keep coping and crying
Why is it that when I point out that scaling is logarithmic with compute and that only biological neurons are capable of human level intelligence you guys get upset? You don't even have a falsifiable hypothesis. No matter how many times an AI fails, you can always claim that some secret sauce is missing and thus you can't be falsified. AI is not science and so it doesn't even belong on this board.
imagine being so against ai you failed to realize the salient physics in the above calculation. baka
The above calculation implies that you can't get silicon to produce the level of parallelism to become generally intelligent. You are the susy baka and have been btfo
i feel genuinely sorry for you.
Thats fine, I'm smarter than you so your pity doesn't mean much
genuinely, self reflect on this question. is there anything you can state that you feel you have no understanding of?
I don't know a lot about a lot of things
specify. platitudes reflect ignorance.
The irony is funny seeing as every single time I argue with AI less wrong guys you spew nothing but platitudes.
I don't know much about geology. My understanding of differential geometry isn't high enough to fully delve into general relativity. I can't play the piano. There are a lot of other stuff
see, you failed. the point of the self reflection exercise was to check if you can humbly admit ignorance. you padded every admission with statements to self-fellate. you are incapable of saying "i don't know geology"; you had to pad it with "i don't know much about geology". you cannot say you don't understand differential geometry; you had to say "my understanding isn't high enough." interestingly enough, you were only able to admit you cannot play the piano without padding the statement, likely because it's a skill and not knowledge. you're currently in "argue-mode", so go ahead and argue against me. just self reflect on why you had to pad those statements.
Not him but you sound legitimately deranged.
What? I don't know geology and I can't solve Einstein's field equations. I don't know anything about most fields of knowledge and science and stuff.
What does this have to do with my statement that you aren't going to replicate the brain on any hardware that isn't another biological brain? Simulations of molecular dynamics are exponential on classical machines and scale as the square of the number of particles on a universal quantum computer, so the brain, with 10^26 particles, requires 10^52 qubits to simulate on a quantum computer. In neither case are we going to be able to do it.
And yes, we DO NEED an atomic/molecular simulation
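For what it's worth, the arithmetic checks out under its own premises; both the 10^26 particle count and the quadratic qubit scaling are the poster's assumptions, not established results:

```python
# The post's own premises: a brain of ~1e26 particles, and quantum
# simulation cost growing as the square of the particle count. Neither
# figure is an established result; this only checks the arithmetic.
particles = 10 ** 26
qubits_needed = particles ** 2
print(f"qubits needed under these assumptions: 1e{len(str(qubits_needed)) - 1}")
```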
admitting after being told the point of the exercise is moot, and doesn't permit you to pass. you've already failed.
There is no failing a made-up test. Me writing "I don't have a high enough understanding of differential geometry to solve Einstein's field equations" isn't some form of padding of my inability.
Anyway, you have failed to explain any of the points in this conversation. You are engaging in ad hominem right now by trying to claim that I am a narcissist or something.
Just admit that AI is not possible
it's a well known psychological assessment to scan for egos and ignorance. people who fail have large egos and commensurate ignorance. you're a smart lad, i'm sure you can look it up.
But I am not ignorant here seeing as everything I'm saying is correct and all my figures are correct and I've posted sources as well.
that one stung, didn't it?
It didn't sting. It made me nauseous. Witnessing "people" like you shit out their preprogrammed rhetoric day after day makes me realize at least half of the population isn't fully human.
>It didn't sting. It made me nauseous.
can't make this stuff up. i forgot to mention that the overarching correlate is lack of self-awareness (which is strongly correlated with a high ego, and arguably causes the high ego).
You are deeply disgusting. You evoke the same kind of feeling a diseased or deformed third world freak involves. Makes me think I'd be doing you a favor putting you out of your misery, even if you claim you don't want it.
No it didn't. You are talking to two different people and I don't get offended by insults.
If you want to sting my ego you have to come up with an actual argument that disproves what I am saying. From my perspective you are a seething coping science fiction lover who has been utterly destroyed by my simple proof of the logarithmic increase in intelligence given exponential increase in compute, and I have disproven the possibility of artificial general intelligence in silico. You have yet to post a convincing reason for me to change this position.
I am not affected by any post that is not a direct argument. I am too autistic to care about personal insults in that fashion
>more than one people are arguing against you
>everyone who responded to me itt is the same individual
high ego, lack of self awareness, etc. etc. etc.
Lmfao dude you AGAIN have not responded to a single actual point.
Your posts are worthless until you do
what point? you're the one ignoring the analysis here.
That analysis has nothing to do with the scaling that I have been talking about
your posts have nothing to do with the physics i am talking about.
The physics pertaining to compute? You haven't said anything about this which is what I have been saying
you literally don't even understand the point of that comment. like i said, lack of self awareness, huge ego, and profound ignorance.
The point of the comment is nothing but you deflecting from admitting that I am correct about everything I'm saying.
>you deflecting from admitting that I am correct about everything I'm saying.
>like i said, lack of self awareness, huge ego, and profound ignorance.
I'm bored of this. You're not convincing me by trying to claim I'm a narcissist.
If you want to convince me, ADDRESS THE POINT. Put up or shut up
>by trying to claim I'm a narcissist.
i don't have to "try" to claim you are a narcissist. i AM claiming you are a narcissist.
A narcissist who is correct is still correct
If you want to prove that I am not correct, you will not be able to do so by proving I am a narcissist.
You're a retard getting baited by a nonsentient chud.
ummm no. you said you know "little about geology" instead of saying you don't know geology. it's a well-known psychological test and you failed it, so my AI fantasies are right and you are wrong. stop being so ignorant
The AI is a generalist. It knows something about every topic, so it's much more knowledgeable than any particular person
>much more Knowledgeable
But that's not intelligence
Intelligence is just applied knowledge, it's still a novice at applying its knowledge
All I wanna say is that GPT thing makes more sense than most of the people I've spoken to.
>AI schizo gets embarrassed and fails a simple word problem
>proceeds to spam the thread with his seething
kek, stay mad.
What are you talking about
Fuck off, retard. AI will literally replace us in two more weeks and that's a good thing. We need a true god to rule over us and stop us from destroying the climate.
Guise we are getting UBI right?
Yes and also VR girlfriends and immortality and mind upload. All you need to do is trust the experts, worship the AI, install the brain chip and protect the climate.
2012:
>AI can't even speak English! AI will never happen!
2016:
>AI can't even draw a human! AI will never happen!
2019:
>AI can't even solve these elementary school level problems! AI will never happen!
2022:
>AI can't even solve these highschool level problems! AI will never happen!
2023:
>AI can't even solve these university undergrad level problems! AI will never happen!
2025:
>AI's post-doctorate theses aren't as good as human-written ones! AI will never happen!
2028:
>AI only solved four of the millennium problems! AI will never happen!
2029:
>AI's theory of everything only fits the data to 4.87 sigma! AI will never happen!
cope
>invents irrelevant goalposts
>declares victory against imaginary opponents
Corporate marketing.
Why is it so hard for you to understand what the argument is.
All of those improvements REQUIRE EXPONENTIALLY MORE COMPUTE THAN THE PREVIOUS ONE you fucking dipshit. It cannot scale to the levels that you're talking about.
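If the claim is that capability grows only logarithmically with compute, the flip side is that each fixed step in capability costs a constant *multiplicative* factor of compute. A toy sketch of that relationship (purely illustrative numbers; the log form and constant `k` are assumptions, not fitted data from any real model):

```python
import math

def capability(compute, k=1.0):
    """Toy logarithmic scaling curve: capability = k * log10(compute).

    Illustrative only; k and the log form are assumptions, not fitted data.
    """
    return k * math.log10(compute)

# Under this curve, every +1 unit of capability costs 10x more compute.
for c in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute {c:.0e} -> capability {capability(c):.1f}")
```

Each row adds the same +3.0 capability while multiplying compute a thousandfold, which is the "exponentially more compute per improvement" pattern the post describes.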
Sounds like you're coping with the fact AI isn't a matter of "if" but a matter of "when." Are you afraid of being replaced by 45lbs of metal, silicon and plastic?
Nope, there is no arrangement of metals and plastics that can compete with organic compounds. It would remove it from being a matter of time and render it materially impossible even in principle
If you are going down that route you have lost. It's basic chemistry
LoL. Your level of cope is astronomical! Machines can and will do anything any human can, and more!
I don't think you guys understand chemistry and why organic molecules are superior to all others
That's why we still ride horses, still only dig holes using shovels, still pick crops by hand, still do arithmetic calculations by hand... Oh wait...
Did you think this is an argument? What does this have to do with the range of functions of organic molecules?
What do organic chemicals have to do with AI? You aren't even stating anything remotely relevant let alone an argument.
In order to construct hardware that is capable of being intelligent it needs to be built out of biological organic molecules.
Metals and metalloids are not sufficient.
Pointless unfounded statement. Why would that be true? Organic molecules have more structural diversity and thus can carry a lot of information. But as long as you can encode that information in any practical way there's no fundamental difference
Coding molecules is exponential on classical machines. You can't code for the information on any other substrate without drastically increasing the amount of compute and energy. And by drastically I mean exponentially.
It's not unfounded it's literally all of physics and chemistry
Why would you need to code (i gather you mean simulate?) molecules to encode the information they happen to carry in a biological environment, where they evolved in a haphazard fashion? You might have a highly complex molecule that just turns a switch, figuratively speaking. Something easily done in computer code. You haven't presented a connection between chemical complexity and information processing
Pick up a single book in molecular biology and neuroscience
What great insights relevant to the topic would I find there? Why does one need to simulate organic molecules to simulate intelligence? Just answer this
Because the entire molecular evolution of the cell is the minimal information needed to support the level of compute that gives rise to the generalized intelligence of animals and lifeforms
OP seems to think that the AI of the future is going to be based on the exact machine learning paradigm we have today.
So he says:
>scaling of compute only provides logarithmic returns in ability
>we cant scale enough
>therefore biological brains are the only things capable of general intelligence
But any intelligent person would say something like:
"it seems that hardware limitations prevent us from scaling our current AI paradigm enough to achieve AGI, so unless we switch paradigms (or there is some hidden phase transition) then humans can't be beaten by AI in terms of intelligence"
>"it seems that hardware limitations prevent us from scaling our current AI paradigm enough to achieve AGI, so unless we switch paradigms (or there is some hidden phase transition) then humans can't be beaten by AI in terms of intelligence"
But this is exactly what I am saying so why are you pretending I am saying something else? The only caveat is that I don't think there is a secret sauce.
Also, I am not OP
I haven't read every post in this thread
But all I've seen is anons saying scaling cant work because of hardware limitations, whilst making no mention of alternative paradigms (or phase transitions).
Now you claim that you've been saying what I said, so maybe you can point me to a post where you said it before i did
The alternative paradigm is itself a wet warehouse computer bio brain but then we're getting into biology and stuff and not ai.
We can say that genetically engineering a fungus to be generally intelligent is designing AGI but it isn't what people normally think of when talking about this.
What I am saying though is all those ideas of the form "there's going to be a giant computer and it's going to compute the most computations and become super intelligent and then turn all the atoms into more compute and it's going to go FOOM and blah blah" is literally not possible it does not exist.
>The alternative paradigm is itself a wet warehouse computer bio brain but then we're getting into biology and stuff and not ai.
I've studied neuroscience at a graduate level. Many of the people at deepmind have phds in neuroscience. If you think the only alternative paradigm is biological based computers then you misunderstand neuroscience.
>We can say that genetically engineering a fungus to be generally intelligent is designing AGI but it isn't what people normally think of when talking about this.
I'm surprised anyone on BOT knows about synthetic biology approaches to cellular intelligence
>What I am saying though is all those ideas of the form "there's going to be a giant computer and it's going to compute the most computations and become super intelligent and then turn all the atoms into more compute and it's going to go FOOM and blah blah" is literally not possible it does not exist.
With reference to scaling up architectures and training methods used for gpts and dall-es I suspect there won't be a phase transition.
But there's no reason to think an alternative ML paradigm (which more closely mimics neurobiology) wouldn't work
>I've studied neuroscience at a graduate level. Many of the people at deepmind have phds in neuroscience. If you think the only alternative paradigm is biological based computers then you misunderstand neuroscience.
The only possible way to perform the level of compute needed is with biological tissues. I don't believe that you've studied neuroscience at a graduate level.
>But there's no reason to think an alternative ML paradigm (which more closely mimics neurobiology) wouldn't work
Yes there is, for the simple fact that intelligence always scales as a logarithm
You're looking for a secret sauce that does not exist and can't be falsified.
>The only possible way to perform the level of compute needed is with biological tissues. I don't believe that you've studied neuroscience at a graduate level.
Why do you think the only possible way is biological tissue.
>I don't believe that you've studied neuroscience at a graduate level.
I don't care about what you believe or not
>Yes there is, for the simple fact that intelligence always scales as a logarithm
"always"
Not very scientific of you anon. Unless you have some mathematical proof of that? rather than just asserting your own opinions?
>always"
>Not very scientific of you anon. Unless you have some mathematical proof of that? rather than just asserting your own opinions?
Actually, all evidence indicates it.
You have an unfalsifiable assertion that there is a secret algorithm that you don't know and every time you don't find it you can claim it's still out there
I.e. you can't be falsified.
I don't claim that such an algorithm exists
I say it may or may not
>Actually, all evidence indicates it.
I don't think all evidence indicates that. But again, you're making another absolute claim. I'm tired of trying to talk to someone with no intellectual humility.
Posting in this thread was a waste of my time, as usual.
A little bit of advice. If you want to accomplish anything meaningful in life, you're going to have to get over your belief that your opinions reflect reality 1-to-1 🙂
But it does indicate this. Empirically speaking, the scaling hypothesis is true and it's a logarithm.
>Yes there is, for the simple fact that intelligence always scales as a logarithm
Also something scaling as a logarithm doesnt mean it cant beat humans. See alphago
That's funny because 99.99% of AI research and funding right now is going into ML and they are all true believers that ML will get them to AGI. There are extremely few dissenting voices.
Not only are you clueless about AI, but you are also clueless about the field of AI research. As expected of an AI fanboy.
>That's funny because 99.99% of AI research and funding right now is going into ML and they are all true believers that ML will get them to AGI. There are extremely few dissenting voices.
>Not only are you clueless about AI, but you are also clueless about the field of AI research. As expected of an AI fanboy.
You could have just asked me to clarify my post, instead of getting defensive anon.
And your post hasn't said anything relevant, which you'd understand if you were smarter.
There are different paradigms within ML. There are ML paradigms that try to get around catastrophic forgetting, and there are paradigms that don't.
But you're not interested in a real discussion, which is fine by me since your life is irrelevant. 99.99% chance you never do anything meaningful with your life, so there's no point in me arguing with a pajeet OP
>There are different paradigms within ML
They are all ML and have the exact same fundamental limits that ML has.
I'm not sure why you even bothered replying as you've only made it even clearer that you have no clue what you're babbling about.
ok pajeet
whatever lets you cope at night 🙂
You are the only one coping here.
Every single one of the statements you've made has been proven incorrect and you have not been able to say anything in response
🙂 ok jeet
Cope
Stop projecting so hard and go back to r/Futurology.
man, you should work on your issues with a therapist bro
>Machines can and will do anything any human can, and more!
What makes you think this is possible? The entire field of organic chemistry proves this is wrong
>What makes you think this is possible?
It just fucking is, okay? Fucking luddite. You're coping. AI will replace us in two more weeks.
very gnomish style of posting. you should work on yourself
One of those days someone will find out what you shill for in real life and you'll get both your legs broken.
Wtf are you talking about. I'm asking why you think it's possible for ai to work on non biological substrates
>there is no arrangement of metals and plastics that can compete with organic compounds
What the fuck is a calculator? A tractor? a gun? Welcome to sneeds cope and seethery
I swear you retards are so fucking stupid it's amazing
Explain how any of those tools are indicative of organic molecules not being required for general intelligence
Also, explain how any of those tools are comparable to organic molecules in general and what it even means to compare them on different tasks.
Okay i'll do my best. The examples mentioned are non-organic, but perform at above human level in some tasks. Like, for example, a calculator calculates much faster and more accurately than a human can, a tractor is more efficient than a horse drawn plow, a gun is superior to a bow and arrow.
The stated point was that >there is no arrangement of metals and plastics that can compete with organic compounds
The examples I posted directly disprove the claim, because they are arrangements of plastic and metal that outperform counterpart organic compounds in a given task.
There is absolutely no reason why a computer couldn't out-think a human in a generally intelligent manner; in fact it's arguable that it's already more intelligent than you, having clocked in at a whopping 83 IQ points. It's pure religiosity to think it can't get better than it already is.
The claim was in reference to the amount of compute needed to be generally intelligent.
>There is absolutely no reason why a computer couldn't out think a human in a generally intelligent manner
Yes there is, given the evidence already explained and given multiple times throughout the thread
>it's arguable that it's already more intelligent than you, having clocked in at a whopping 83 IQ points
My IQ is tested at 148
>it's pure religiosity to think it can't get better than it already is.
Literally the opposite. All evidence shows that exponentially increasing bits has diminishing returns on intelligence in these machine learning algorithms
AGI is literally never going to happen and there's nothing you can do about it
You can't change physics.
>I'm arguing with maidgay
No wonder you're retarded this is my fault lol
>a tractor is more efficient than a horse drawn plow,
This is wrong btw. It has greater throughput but is less efficient in terms of energy per unit of work.
That's a fair point.
We see that we will need to exponentially increase the compute used and silicon can not cut it. I don't understand why this is something that makes all of you anons so upset.
We aren't going to convert as much silicon as the internet just to make a machine that's about as smart as person. You guys understand this right?
Point out the flaw maidgay
I see these threads the same way I see the climate change deniers for which I have the same fervor. It's annoying. The data is clear but you just deny it like a religious person.
>Point out the flaw maidgay
It's self evident that your claims are not only wrong but downright retarded. We haven't reached peak anything.
Lol wrong. All evidence is on my side and I have linked to several sources.
Here's one directly specific to this
https://www.researchgate.net/publication/224127040_Limitation_of_Silicon_Based_Computation_and_Future_Prospects
So what actually happened? Are you posting here instead of working? Did you have a post-vaccination stroke and find yourself fired from your job? Why are you here all day shilling the exact same retarded theory based on an autistic data point?
I'm just hanging out rn
I have stuck to the one argument because it's right and none of you have been able to convince me otherwise. I want to understand why you guys think the way you do despite the evidence being the opposite
There are at least 5 different people who don't agree with you itt.
>There are at least 5 different people who don't agree with you itt.
what did you undergo a half dozen chudings to fill out those numbers?
>We haven't reached peak anything.
Silicon chips can barely break 5Ghz, and that's at egg frying temps, even at sub 10nm. Most new chip design improvements are in efficiency and packaging, but they aren't getting profoundly faster like 20 years ago.
>Silicon chips can barely break 5Ghz
are you okay man?
Are you? Are you incapable of understanding context?
did you just wake up from a coma? Check out the current cpus on the market. I mean fuck
https://hwbot.org/benchmark/cpu_frequency/halloffame
current record is 9Ghz lmfao. this is why I think you're gnomish. too much pilpul in you to even converse in a normal manner. everything is just disingenuous garden gnome shit
Intel i9 is 5.8GHz where are you getting 9 from? Also, how does this change the point?
like I said very gnomish
Intel sells its i9 as a 5.8GHz chip; you posting nerds' overclocking records as the standard is pilpul. You are projecting again. And you STILL haven't addressed the main point, which is about the apparently needed exponential growth that isn't going to happen anymore with current paradigms
Unless computers completely change in a way that doesn't exist at all right now, there is no reason to think that we're on the brink of an AI revolution
>Silicon chips can barely break 5Ghz
>Intel sells its i9 as a 5.8GHz chip
I'll accept your concession. Next time say 6Ghz when you have a breakdown over cpu speeds. Though word is Meteor Lake will hit that, so I guess you'll need to raise it to 7Ghz lmfao
I concede I was wrong; let's say 10Ghz for good measure.
This brings AI how?
>AI
with someone like you (garden gnome) it would become a torturous definition game full of all kinds of nonsense to even debate how tech advancements factor into all of this. as long as this (ML/AI/COMPUmoron) can bring sufficient value (in either time saved or quality of life enhanced) then I don't give a fuck whether it has met whatever ever-changing meme definition you'll make up. and since it's clear all these gayMAN corps agree with me, else they'd not be spending hundreds of billions chasing whatever this is, I just don't understand all the cope posting you've been doing.
I was never arguing against computers improving quality of life I have been talking about building a machine as intelligent as a human and how it would work.
bro they're going to invent graphene or some adjacent supercomputing transistor any day now trust me two more weeks to 2000 GHz cpu's
oh the anti AI guy is a climate cuck. wow lol isn't the ethical thing for you gays suicide?
why are you working yourself up into such a fervor over something you don't think will happen? You really are gnomish huh you guys got all those mental issues. Get help
The *nature* of computers as we know them now is as good as it's going to get, give or take incremental speed and efficiency improvements. Just like cars and airplanes.
This isn't going to be the cool sci-fi fantasy some anons want. Not with Silicon.
Exactly.
Development of cars also had an exponential curve but then sloped off and it never increased again in the same fashion. Same with planes.
We ALREADY WENT through the exponential increase of compute and now it's sloped off and will never increase in that fashion again. We aren't at the base of the S curve we're at the end of it
>the sheer amount of cope and butthurt ITT
you can tell these "people" actually thought AI was just about to replace programmers and mathematicians. bargaining stage
>it's a well-known psychological assessment to scan for egos and ignorance
>OP changed the names of Meerut and Delhi to A and B to hide that he is a streetshitting pajeet
kmt OP is always a morongay
https://www.toppr.com/ask/en-gb/question/a-train-x-starts-from-meerut-at-4-pm-and-reached-delhi-at-5/
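For anyone who wants the linked Toppr problem worked out: assuming the usual version (train X covers Meerut→Delhi in 1 hour, train Y covers Delhi→Meerut in 1.5 hours, both departing at 4 pm toward each other; only X's timing is in the URL, Y's 1.5 hours is assumed here), the crossing time falls out of exact fractions:

```python
from fractions import Fraction

# Take the Meerut-Delhi distance as 1 unit.
d = Fraction(1)
speed_x = d / 1                    # X does the trip in 1 hour
speed_y = d / Fraction(3, 2)       # assumed: Y does the reverse trip in 1.5 hours

# Heading toward each other, they close the gap at the sum of their speeds.
time_to_cross = d / (speed_x + speed_y)   # in hours
minutes = time_to_cross * 60
print(f"They cross {minutes} minutes after 4 pm")   # 36 -> they meet at 4:36 pm
```

With those assumed timings the answer needs no assumption that the trains are equally fast, only that each moves at a constant speed.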
The funny thing is OpenAI themselves already admitted GPT4 won't be anywhere near the jump that GPT2 -> 3 was. They already know they are coming up against the limits of Machine Learning and are shifting focus to monetizing GPT3 and DALL-E 2.
GPT4 is the first of these shifts, which is why it will actually have a "narrower" aka smaller set than GPT3. They are basically going to take GPT3, cut it up into parts and sell it to gullible people.
Yes but this is because of the logarithmic increase in intelligence.
I am ABSOLUTELY CORRECT about this and everyone knows it
my ai is my friend
we write nice poems and plot how to wipe off humanity
gpt4 already passed the turing test. it's time to take your head out of the sand bro. it's happening
The Turing test was already passed 4 years ago and it didn't do anything
yeah things are speeding up. gpt4 was cheaper to train than 3 as well.
No things are not speeding up they are slowing down
nah tesla's dojo is pretty frightening and it's only on 7nm. way too quick. v2 is going to be 10x faster. then look at h100 v a100 comparisons. it's cool you got the cope though did you vaxx?
See
nah but if you can sleep easier for a few months believing that stuff you made up then go for it
The cope has already started.
1T parameters (6x increase from GPT3) fine tuned (equivalent to 100T), 10T tokens (33x), 4x larger context window for users (16k from 4k), all the SOTA memes https://arxiv.org/pdf/2205.05131.pdf
800x more compute. Oh and btw it was cheaper to train than gpt3. you really need to follow people in the industry it's pretty leaky what GPT4 is
800x more compute and it is not 800x more intelligent. It's not even twice as intelligent
source?
The source is right there in the post
Show where it makes all the claims you did
I genuinely have no idea what the fuck you're talking about.
>gpt4 already passed the turing test.
This is what genuine psychosis looks like. There's having retarded subjective opinions, then there's being an actual retard, and then there's this. This is just full disconnection from objective reality.
Thank you for your comment. It is true that AI has the potential to significantly impact and change the world in a variety of ways. However, it is important to keep in mind that AI is still a rapidly developing field, and there are many challenges and limitations that need to be addressed.
One limitation of AI is that it is only as good as the data and algorithms it is trained on. While AI systems can be very effective at solving certain types of problems, they may not be as proficient at others. For example, an AI system that is trained to recognize patterns and classify images may not be able to solve high school level math problems.
It is also important to note that AI is not a replacement for human intelligence and creativity. While AI can assist and augment human capabilities, it is not capable of replicating the full range of human thought and decision-making.
In summary, while AI has the potential to change the world in many ways, it is important to recognize its limitations and to use it as a tool rather than a replacement for human intelligence.
Ain't that right, fellow humans! 🙂
I honestly don't get it. If gpt4 used 6 times as many parameters, 800 times more compute and 33 times more tokens, and yet it isn't hundreds of times as intelligent as gpt3, why are you guys denying the diminishing returns?
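One way to make that concrete: under a power-law scaling assumption of the kind reported in published language-model scaling studies, loss falls as compute raised to a small negative exponent. The exponent 0.05 below is a rough literature ballpark, not a measurement of gpt4:

```python
# Power-law scaling assumption: loss ~ compute ** (-alpha).
# alpha = 0.05 is a rough literature ballpark, not a GPT-4 measurement.
alpha = 0.05
compute_ratio = 800        # the 800x compute figure quoted in the thread

loss_ratio = compute_ratio ** (-alpha)
print(f"800x compute -> loss shrinks to about {loss_ratio:.0%} of before")
```

On those assumptions, multiplying compute by 800 shaves off less than a third of the loss, which is what diminishing returns look like in practice.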
Because Ramona Kurzweil said AI will replace us in two more weeks and the Microsoft corporate PR agrees. They're the heckin' experts.
The funny thing is, even if we take that redditor's post at face value, even the AI evangelists are already admitting diminishing returns have kicked in
GPT 2 > 3 was a 12 fold jump
GPT 3 > 4 is only a 6 fold jump IF it truly is 1 trillion. It will almost certainly be smaller than that. OpenAI themselves have already said GPT4 will be narrower.
Why do you think it will be smaller than that?
>OpenAI themselves have already said GPT4 will be narrower.
Can you link to this statement
I mean, I was using quadratic equations and it got a wrong result because it used irrational numbers to try to solve it.
Then I corrected it and showed why it was wrong; it understood, meanwhile showing me why it had solved it the other way. You have to be precise in your questions or it will get it wrong. Try asking the same thing while mentioning that the trains move at constant speed, because it most certainly won't take this as a given.
I just want to know what kind of emotional distress compelled you to write that. lol
it did figure out it's about speed
if you imagine a 3yo boy that has the vocabulary and sentence-stringing ability of an average adult woman, you would expect something like that, i'd say
imagine being in such a state of terror over ai that you reject reality and substitute your own
Why are you incapable of directly addressing any point against your position?
It's just pattern recognition. General AI would be a true threat, but this is far from it. What's more harmful are the shills and AI programmers who fully know this, but push bullshit sci-fi level ideas on to the public instead.
>You have to assume that both trains are equally fast
Is that this scientific rigor I keep hearing about? lol
>Going to