I thought it would be quite a bit more than that...
Maybe some day they'll come up with something better than large language models that doesn't say false things.
You mean like an AI that is capable of understanding what it is saying and correcting itself on the fly when it finds new knowledge?
That would be AGI.
Chinese room AI is all we're gonna get for a long time.
> language models that doesn't say false things
Why? They already got what they needed.
Apart from the legal system stuff and the healthcare part, which I know nothing about, it's right about everything, though.
Nice bait.
>Men are by far more likely to be victims of physical violence
>Many job sectors fall over themselves to hire women because they want 50/50 ratios. I'm not getting into the gender pay gap myth, but you know how to use google.
>Women are hugely over-represented in higher education
>Men are probably over-represented in politics, but definitely not in media. Most media is made to appeal to women first and foremost
>The social expectations point has nothing to do with privilege
>Women take up more healthcare resources by far. Female-specific cancers receive twice the funding that male-specific ones do despite killing fewer people
As you said, the 7th point is just nonsense.
>25,000
oh lordy the absolute GIRTH on this one
The video cards are actually being used to mine BTC, and GPT-5 is just going to be an army of mechanical turks.
Well, yeah. GPT-4 is basically finished and due any time now, so 5 is inevitably in the works. What's really up in the air is what other firms have up their sleeves.
Link, moron.
NTA but https://nitter.nl/davidtayar5/status/1625140481016340483
Thanks
> right now
> "we think" it's using 25k nvidia gpus
> ... we think
> some garbage from morgan stanley
> that will ride the hype train of anything that makes money
amazing blogpost, rajeesh.
Isn't it true that larger models will yield *worse* performance due to scaling effects similar to overfitting?
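Not overfitting exactly. The better-known result (Chinchilla, DeepMind 2022) is that for a fixed compute budget there's a compute-optimal model size, and past it a bigger model does worse than a smaller one trained on more tokens. Back-of-envelope sketch using the usual C ≈ 6ND approximation and the ~20 tokens-per-parameter rule of thumb (both rough assumptions, not gospel):

```python
import math

# Chinchilla-style back of the envelope.
# Assumptions: training compute C ~ 6*N*D FLOPs, and the
# compute-optimal token count is roughly D ~ 20*N.
def compute_optimal(flops):
    n_params = math.sqrt(flops / (6 * 20))  # solve C = 6*N*(20*N) for N
    n_tokens = 20 * n_params
    return n_params, n_tokens

n, d = compute_optimal(1e24)  # 1e24 FLOPs: a big but plausible budget
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
# params ~ 9.13e+10, tokens ~ 1.83e+12: ~90B params wants ~1.8T tokens
```

So under those assumptions the wall isn't the model getting too big, it's running out of tokens to feed it.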
Actually GPT is just a pajeet replying to you in real time, there's no such thing as AI.
goos morning sir sir it's me your ai. ask me anything and i'll do the needful
Use toilet
a few years ago: cryptogays buying GPUs in bulk to mine memecoins
today: new GPT version in training, using 25k nvidia GPUs
NEVER ENDING CYCLE AAAAAAAAAAAAAA
At least they're using A100 workstation grade cards instead of consoomer 4080s.
https://www.nvidia.com/en-us/data-center/h100/
https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs22/data-center/h100/nvidia-h100-workload-training-2c50-d.svg
Wrong, nobody is using 2-year-old GPUs when you can get so much more performance and time savings
it's not like there are any new games worth RTX 4080 or better performance anyways
Is there anything else that can be done between now and AGI besides moar data? There is only so much usable text on the internet and I feel like it's going to be a limiting factor very soon. Is there anything multimodal being worked on that can be trained on text as well as video, audio, and pictures?
From what I've been reading: optimization (a lot of it, and recently done in practice by Meta's LLaMA); autolearning? (I don't know what they call it, but it's these models being able to conceptualize/infer stuff that isn't in their datasets; basically they need some internal representation of the world and then gather, analyze, synthesize and build on top of it; I'm really not doing a good job explaining, since there are "easier" steps to be done before achieving that); and sense-making (vision, audio, touch, etc.).
What have you been reading? Share some links if you have them. I'm having a hard time finding anything that isn't just an article sucking chatgpt cock or someone with no product trying to bait venture capitalist money. Finding research into the actual methods and not just virtue signaling over ethics is proving difficult.
https://link.springer.com/chapter/10.1007/978-3-030-93758-4_4
https://www.vetta.org/documents/Machine_Super_Intelligence.pdf (overview of the theoretical state of machine intelligence)
If you want a primer on what ChatGPT is doing read this: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
As for the multimodal example you asked about, there's Gato by Google's DeepMind:
>https://www.deepmind.com/publications/a-generalist-agent (2022).
It was a bit mediocre, but it's a generalist by design and possibly a tiny wee bit scary by now in the lab.
>Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
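The interesting part is how everything becomes one token stream. A rough sketch of the scheme as I read the paper (the μ-law constants, the bin count and the 32k text vocab are my reading of it, treat them as assumptions):

```python
import numpy as np

TEXT_VOCAB = 32000  # SentencePiece text vocab (assumed size)
NUM_BINS = 1024     # continuous values get their own ids on top of it

def mu_law(x, mu=100.0, m=256.0):
    # companding: squash large magnitudes so bins are spent where
    # most values actually live
    return np.sign(x) * np.log(np.abs(x) * mu + 1.0) / np.log(m * mu + 1.0)

def encode_continuous(values):
    # squash to [-1, 1], then discretize into NUM_BINS uniform bins
    squashed = np.clip(mu_law(np.asarray(values, dtype=np.float64)), -1.0, 1.0)
    bins = np.floor((squashed + 1.0) / 2.0 * (NUM_BINS - 1)).astype(int)
    # shift past the text vocab so text ids and control ids never collide
    return (bins + TEXT_VOCAB).tolist()

# one flat sequence: a few caption token ids, then three joint torques
sequence = [101, 7592, 2088] + encode_continuous([0.3, -1.2, 0.07])
print(sequence)  # the transformer just sees integers either way
```

Once text, images and joint torques are all just integer ids in one sequence, a single autoregressive transformer can be trained on all of them at once.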
Thanks anon
The ability to actually remember the topic of conversation
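Which right now is mostly faked by replaying as much recent chat history as fits into the context window on every turn. A naive sketch (the token budget and the word-count "tokenizer" are stand-ins, not any real API):

```python
MAX_TOKENS = 4096  # hypothetical context budget

def count_tokens(text):
    return len(text.split())  # crude stand-in for a real tokenizer

def build_prompt(history, new_message):
    turns = history + [new_message]
    # drop the oldest turns until the whole prompt fits the window
    while len(turns) > 1 and sum(count_tokens(t) for t in turns) > MAX_TOKENS:
        turns.pop(0)
    return "\n".join(turns)

history = ["user: what's a transformer?",
           "bot: a sequence model built on attention."]
print(build_prompt(history, "user: and what was my first question?"))
```

Anything that scrolled off the window is simply gone, which is why long conversations drift.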
internet text is trash
google has the next big step which is trained on actual books which nobody else has
microsoft has code from github but that's a massive legal issue
google's shit is public domain
wolfram is probably trying to tie a logic engine into it somehow
that guy's the reincarnation of von neumann and has like a 25% chance of pulling AGI out of a hat
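no idea what wolfram is actually building, but the obvious pattern is routing: let the language model handle language and hand anything that has to be exactly right to a symbolic engine. A toy sketch (sympy standing in for Wolfram's engine; the routing heuristic is made up):

```python
import re
import sympy

def answer(query: str) -> str:
    # crude heuristic: if it looks like pure arithmetic/algebra,
    # don't let the language model guess
    if re.fullmatch(r"[\d\s+\-*/().^]+", query.strip()):
        return str(sympy.sympify(query.replace("^", "**")))  # ^ is XOR in sympy
    return "(hand off to the LLM)"  # everything else, stubbed here

print(answer("2^10 + 7"))             # 1031, computed rather than hallucinated
print(answer("who was von neumann"))  # (hand off to the LLM)
```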
>but that's a massive legal issue
It's easier to ask forgiveness than to get permission, and just like that, MS doesn't have to give a fuck about license issues while turbocharging Copilot. The same applies to data all over the web and inside companies. As soon as GAI starts delivering on the bottom line, a race will begin to grab every dataset under the sun to feed these models.
why should I give a shit about 5th gen being in development when I'm still not allowed to play with 4th gen yet
>because it'll take your job
no it won't I'm unemployed
What's DD?
Due Diligence, iirc.
Does transformer training allow saving checkpoints of the model like in stable diffusion? If so, of course they're training; they're always training, since they can always go back.
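Yes, it's the same state_dict checkpointing you see in stable diffusion training. A minimal PyTorch sketch (the tiny model and the filename are made up, obviously not what OpenAI runs):

```python
import torch

# stand-in model: any nn.Module checkpoints the same way
model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=256, nhead=4), num_layers=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# save mid-training: weights plus optimizer state, so training
# can resume exactly where it left off
torch.save({"step": 10_000,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "ckpt_step10000.pt")

# ...later, roll back to that snapshot and keep going
ckpt = torch.load("ckpt_step10000.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
```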