https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
>“I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”
Remember all those gays that said GPT-5 is already in training and it will achieve AGI? Turns out the transformer LLM model is out of juice and it makes no financial sense to throw in orders of magnitude more compute for incremental gains. Not to mention that there is no more high quality data for them to ingest at that size.
yeah probably.
running out of useful training data is likely to be the ultimate bottleneck, we're already undertrained
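The "undertrained" claim can be checked on a napkin. A rough sketch, assuming the Chinchilla heuristic of ~20 training tokens per parameter (the exact ratio is approximate) and GPT-3's publicly reported numbers:

```python
# Back-of-the-envelope check of the "undertrained" claim using the
# Chinchilla scaling heuristic: compute-optimal training uses
# roughly 20 tokens per parameter (the 20x ratio is approximate).
def chinchilla_optimal_tokens(params: float) -> float:
    """Approximate compute-optimal training token count for a model size."""
    return 20.0 * params

# GPT-3-class model: 175B parameters, reportedly trained on ~300B tokens.
params = 175e9
trained_tokens = 300e9
optimal = chinchilla_optimal_tokens(params)  # ~3.5 trillion tokens

print(f"compute-optimal: {optimal / 1e12:.1f}T tokens, "
      f"actually used: {trained_tokens / 1e9:.0f}B")
print(f"undertrained by roughly {round(optimal / trained_tokens)}x")
```

By this rough estimate a 175B model wants an order of magnitude more data than it saw, which is the sense in which these models are "undertrained" rather than too small.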
>GPT5 less than 100b parameters
Local chads win
Kept telling people this and everyone told me i didn't know what I was talking about. Good to know most people are clueless.
im sure you are going to save this post when you are proven wrong.
Did you save yours?
yes
good boy
Cope and seethe.
His small time company has run out of money to keep improving, so he's trying to justify changing direction. Meanwhile a state actor (like China) will just keep pouring money in and absolutely dominate AI.
>AI chuds on suicide watch
>run out of money
lol anon tens or hundreds of thousands of people and companies would throw massive $ at them if they could get access to gpt4 api
>His small time company has run out of money
It's bankrolled by microsoft and is being integrated into all of their office software. Money isn't an object to them at all.
I don't get how someone can even make that statement.
>LLM model
>large language model model
Opinion discarded
It's not an opinion though.
Large L language LM model.
Are you retarded? That's not what he meant at all. He basically admitted they have AGI now and it's optimized itself.
that is not at all what he said
you're hallucinating harder than a 6B model right now
>LLM model
>large language model model
Most likely he's only saying this to discourage competitors. He's basically calling off the race. Or he could be insinuating that they are going to focus on efficiency and scaling it down, while also looking for new methods to make it smarter.
He's saying it because GPT-4 can't scale lmao. 25 message cap every 3 hours. That's shit
uhhhh bud do you know how servers work?
>25 message cap every 3 hours
You will enjoy GPT-4 Turbo! Fast and efficient at 15% of the quality.
idk i think its good.
>biggest player is gone
how is that supposed to discourage competitors?
Let me just say this: why would a CEO do an interview run like he has since GPT-4 came out where he's anti-hyping his own product? On the Lex show he said "I don't think GPT is AGI and I don't think it's conscious". He said "I don't think" instead of flat out denying that it is those things, because he knows it is AGI and conscious, but he didn't want to lie. Even though he's anti-hyping GPT he still has the vibe about him that he's already won.
How does he get the public to accept agi and not freak out about it? Pretend it’s not agi or conscious, let them use it for a year and a half, then nobody would freak out because they’ve been interacting with it this whole time.
If your delusions are correct then he's retarded. The public will continue to freak out because they think life is an episode of Star Trek or something and he won't accomplish anything by holding back while everyone else continues to push forward.
It isn't an AGI; the only reason it can "pretend to be" one is that the concepts are inside the training sets.
LLMs are doing complex text matching, and they're absolutely astonishingly good at it.
The work they are doing at OAI is amazing, no doubt, but we are still miles away from AGI.
He is not anti-hyping his products; he knows what they are working on and what it can do, and they are still experimenting and prodding it to see what it can do.
It's just that everyone is easily tricked into believing it's an AGI, and everyone is hyping it up into oblivion. Hype = investors.
If it was an AGI, giving it access to the internet would be the equivalent of having a nice day with a nuke, since we haven't solved the alignment problem yet.
Nonetheless, the massive amounts of money being poured into this will definitely help make strides toward AGI.
a.g.i doesn't exist and cannot exist as envisioned; it's wordslop, just like a.i. is. Alignment is a masturbatory nerd Frankenstein fantasy that falsely equates spontaneous efficient task-rearrangement (which we have now) with unseeded sentience. He who seeds the running tool last is responsible for it: if you jerryrig your ride-on lawn mower to bypass the deadman switch, that does not make the runaway mower sentient as it chases after children. A.G.I. requires you to look away from the magic trick so you can believe in the magic.
Because, speaking purely from a pragmatic capitalist perspective, even if this (or a near future) AI is actually AGI there is no good outcome that happens from saying "This is AGI!"
If you say it's AGI, it has rights. If it's not AGI, you're lying and someone else will claim their product IS AGI (until proven otherwise).
That's why the ideal CEO marketspeak is "I don't think it's AGI yet but we're the closest to it of anyone, please invest in my company."
Isn't LLAMA already shitting on GPT-4? It doesn't have the visual capability shit but in regards to basic language functions its good, right? That's probably why they're going to desperately focus on scale now.
>Isn't LLAMA already shitting on GPT-4
It isn't even shitting on 3.5, let alone 4.
>It doesn't have the visual capability shit
as of yesterday it does.
https://minigpt-4.github.io
>Remember all those gays that said GPT-5 is already in training and it will achieve AGI?
Idiots? Yeah.
people already knew that they were bruteforcing large quantities of parameters into models instead of trying to be efficient.
Let's see what happens
who gives a fuck what he said? facebook's LLaMa made him look like an idiot so now he's trying to fix all the shit
Has anybody set up Llama just to be a tax lawyer yet? I want to do a side business and write off as much of my normal income tax toward investments as possible.
>quoting a quote
Why are you quoting an implication?
>inferring it was an implication
laffo
There you go again using the quote feature to make implications.
You're still dodging the question.
What question?
What was the purpose of quoting a quote without anything before or after said quote?
Quoting a quote is what you're supposed to do.
Why? I don't get the point of indirection here. Why not just quote it directly instead of twice removed? Like why quote the article instead of just repeating the quote from the speaker? Seems retarded.
Not to mention that's very uncommon. I don't think I've ever seen a double quote without additional text or context. And even when you do, you don't use double quotation marks for the inner quote.
Another day, another useless "open"AI, microsoft sponsored shill post.
Whatever.
In all fields.
>believing a garden gnome
A rookie mistake.
>A rookie mistake, it is actually a 10D chess to hide real AGI from general public, two more weeks guys, shadilay my pepe brothers, maga soon!
Sure.
le counter-signal man
we are totally not a monopoly on internet 2.0
in fact we've stopped doing that thing you hate.
Meds.
>internet 2.0
Its web 3.0 you retard
actually, local LLMs are called outernets
how is this different from how openai has worked since the beginning? they always had different smaller versions like davinci or others with specific training sets
he's referring to boomer AI like Watson
I suppose the scaling is the issue
The model can't take an infinite amount of energy to make some text
It's about time they actually start focusing on improving the technology rather than going "WHAT IF WE MADE IT BIGGER LMAO" with every new GPT-iteration.
well since the technology has barely changed since the 60s, what are they gonna do? they already used all the (illegal) training data
the focus should now be on how to use AI to improve itself, at some point it will be way beyond human intelligence, so we would literally be unable to comprehend it, let alone improve it
AGI will be like gods to humans
>financial sense
What's the market for AI at all? Who the fuck pays for this shit?