How likely is it?
GPT-4 finished pre-training in July 2022. Everyone else is just now catching up. What the frick do they have behind closed doors that they're not showing us? Because there is absolutely something back there that's much better than GPT-4.
They released Sora like it was a normal technical blog post. I feel like their plan is to just drip-feed more news and push the Overton window until, 8-12 months from now, they just say "Yeah, we figured out AGI a while ago."
Given that Altman is begging for absurd amounts of money to manufacture GPUs, I'd say they got close to the peak with GPT-4, at least for now. He doesn't seem the type to play it safe very much.
Listen, if you want me to speculate on the market strategies of a very successful company, you need to pay some money for that. Otherwise, go frick yourself.
AGI is literally impossible and "AI" is just predictive text generations. No matter how much money you throw at a model it will never be sentient
>implying you need anything close to "sentience" to put millions of people out of work.
Even if there were zero new developments, you could easily take the "predictive text generations" and basically bootstrap every human out of the customer-service industry.
wrong wrong WRONG
moron homosexual
GPT 5 hands typed this post.
We're on to you.
>can't argue so resorts to seething rage name calling
I accept your concession
Sentience has nothing to do with AGI.
It doesn't need to be sentient to be sapient
Things people mostly do at work probably don't require sentience. Whether it is actually feasible or not, the OpenAI people might well believe at this point that with enough computing power and training data, most office jobs could be automated at least to the degree that the result only has to be checked by a human worker.
no problem
simulation of intelligence/sentience is all that's needed
of course, the goal is total annihilation of all life and it's a israelites' pipedream
(You) will never be sentient.
Not very. They are hitting the wall when it comes to scaling things further up.
AGI is likely still a bit off. However, combining various models and figuring out feedback loops that don't require entire re-trains must be something they're working on, which would mimic early AGI.
A human brain is really just a lot of different models constantly updating and interacting.
>A human brain is really just a lot of different models constantly updating and interacting.
This reminds me of how people would compare brains to computers.
>brains are just cpus
>short-memory is just like the cache!
>processing memory is just like ram!
>long-term memories are just like drives!
yeah.
0. It's all marketing bullshit and they're just trying to keep the current scam going as long as they can. The underlying technology isn't possible to scale to the holy grail of "AGI".
>0. It's all marketing bullshit and they're just trying to keep the current scam going as long as they can. The underlying technology isn't possible to scale to the holy grail of "AGI".
Anon! I would like to hear your theory. No, really, I would. What makes the underlying hardware incapable of running AGI? Is it something to do with the 1/0 architecture?
>A human brain is really just a lot of different models constantly updating and interacting.
I hold a hammer. Where have all the screws gone?
>This reminds me how people would compare brains to computers.
Nested metaphors are useful, but they're just that - metaphors. It's easy to see them as analogues when they're presented in such a fashion.
>Anon! I would like to hear your theory. No, actually I do. What makes the underlying hardware incapable of running AGI? Is it something to do with 1/0 architecture?
Given their training and operating costs compared to the prices they charge, GPT-4 is losing them millions of dollars every single day. Their profits come from all the investors they're attracting by having the world's best LLM.
They're at the point where any meaningful improvement would be too expensive to be worth it. Usage prices would have to be so high that nobody would use it, even if it were a bit smarter than GPT-4.
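You can sanity-check the "too expensive to serve" claim with back-of-envelope arithmetic. Every number below (active parameter count, GPU throughput, utilization, cloud price) is an illustrative assumption, not anything OpenAI has published:

```python
# Back-of-envelope LLM serving-cost sketch.
# ALL numbers are assumptions for illustration, not real OpenAI figures.

ACTIVE_PARAMS = 440e9                 # assumed parameters used per token
FLOPS_PER_TOKEN = 2 * ACTIVE_PARAMS   # ~2 FLOPs per active parameter per token

GPU_FLOPS = 312e12                    # assumed FP16 peak of a datacenter GPU, FLOP/s
UTILIZATION = 0.3                     # assumed real-world utilization fraction
GPU_COST_PER_HOUR = 2.0               # assumed cloud rental price, USD

effective_flops = GPU_FLOPS * UTILIZATION
tokens_per_gpu_second = effective_flops / FLOPS_PER_TOKEN
cost_per_million_tokens = GPU_COST_PER_HOUR / 3600 / tokens_per_gpu_second * 1e6

print(f"{tokens_per_gpu_second:.0f} tokens/s per GPU")
print(f"${cost_per_million_tokens:.2f} per 1M tokens")
```

Under these made-up assumptions a single GPU pushes on the order of a hundred tokens per second, so whether the company bleeds money comes down entirely to utilization and what they actually charge per token.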
>GPT4 is losing them millions of dollars every single day. Their profits come from all the investors they're attracting by having the world's best LLM.
Is this information publicly available?
Not verbatim, no. But if it is indeed an 8x220B model, as George Hotz leaked, then judging by the API costs it has to be.
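For what that rumor would imply, the parameter arithmetic is simple. Note the 8x220B figure is an unverified leak, and "2 experts active per token" is a further assumption borrowed from common mixture-of-experts designs:

```python
# Parameter arithmetic for the rumored 8x220B mixture-of-experts model.
# The 8x220B figure is an unverified leak; routing 2 experts per token
# is an assumption for illustration.

n_experts = 8
params_per_expert = 220e9

total_params = n_experts * params_per_expert   # weights stored in memory
active_params = 2 * params_per_expert          # weights touched per token

print(f"total:  {total_params / 1e12:.2f}T parameters")
print(f"active: {active_params / 1e9:.0f}B parameters per token")
```

The point of the MoE split: you pay memory for all 1.76T weights, but per-token compute only for the routed experts, which is why API pricing hints at model shape.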
AI already partially designs AI chips, has been for years.
Dumb AI tools will design smarter AI tools until they give birth to AGI. We won't know, because we won't recognise what we're looking at at the time.
I'd say they're close. They have candidate algorithms and ideas for systems, but need lots of money/compute to test everything.
What the npc media presents as "AI" is not an intelligence. So we are not close to it, because we have not advanced at all towards it in the last 100 years.