>What are you guys still excited about
for local models to be as good as gpt4 or at least for gpt4 to be publicly available and cheap like turbo
I want to talk to my waifu in peace
Misinformation
Very strong "old man yells at clouds" vibe, here. Why don't you go out and get a real job? Desk job jockeys like you were never worth what you got paid. If AI can dramatically reduce your numbers, that will be sufficient benefit to humanity. Pick up a wrench and get to fucking work, you lazy, old, entitled asshole.
what the fuck are you doing on BOT boomer?
No, he said that they're improving GPT-4 further and that they're not training GPT-5 at the moment. I know you WANT AI to be a fad and for all this shit to just go away, but that's not an option any more.
You know, a recent paper demonstrated a method for giving transformer models up to 2 million tokens of context? Right now GPT-4 has 8,000, and they have a version with 32,000. That's 2 million tokens of context, possibly scaling up to a billion or more.
>up to 2 million tokens of context
*shivers*
anon you're gonna make me cum
>DID YOU READ THE PAPERS?
>8000 TOKENS
MORE TOKENS
>2 MILLION CONTEXT
>180 TRILLION PARAMETERS
this looks very promising picrel anon, but do you understand what these units mean?
>You know, a recent paper discovered a method for giving transformer models up to 2 million tokens of context?
Please link it. I'd be so incredibly happy if this was true. This would be such a massive game changer.
> https://arxiv.org/abs/2304.11062
"Scaling Transformer to 1M tokens and beyond with RMT"
This technical report presents the application of a recurrent memory to extend the context length of BERT, one of the most effective Transformer-based models in natural language processing. By leveraging the Recurrent Memory Transformer architecture, we have successfully increased the model's effective context length to an unprecedented two million tokens, while maintaining high memory retrieval accuracy. Our method allows for the storage and processing of both local and global information and enables information flow between segments of the input sequence through the use of recurrence. Our experiments demonstrate the effectiveness of our approach, which holds significant potential to enhance long-term dependency handling in natural language understanding and generation tasks as well as enable large-scale context processing for memory-intensive applications.
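Roughly, the trick (as I read the paper) is: split the long input into segments the base model can actually fit, bolt a handful of learned memory tokens onto each segment, and carry whatever those memory tokens end up encoding over to the next segment, so information can flow along the whole sequence. A bare-bones sketch of that loop, heavily simplified on my end (the real RMT reads and writes memory at both ends of each segment and backprops through the recurrence; this is not the authors' code):

import torch
import torch.nn as nn

class RMTSketch(nn.Module):
    # Toy recurrent-memory wrapper around a vanilla Transformer encoder
    # (simplified sketch, not the paper's implementation).
    def __init__(self, vocab=30522, d_model=256, n_heads=4, n_layers=2,
                 seg_len=512, n_mem=16):
        super().__init__()
        self.seg_len, self.n_mem = seg_len, n_mem
        self.embed = nn.Embedding(vocab, d_model)
        self.mem_init = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):  # token_ids: (batch, very_long_seq)
        mem = self.mem_init.unsqueeze(0).expand(token_ids.size(0), -1, -1)
        outs = []
        for seg in token_ids.split(self.seg_len, dim=1):   # one segment at a time
            x = torch.cat([mem, self.embed(seg)], dim=1)   # [memory tokens | segment tokens]
            y = self.encoder(x)
            mem = y[:, :self.n_mem]        # updated memory flows into the next segment
            outs.append(y[:, self.n_mem:]) # per-token states for this segment
        return torch.cat(outs, dim=1), mem

Per-step compute stays bounded by the segment length, which is why the effective context can stretch so far; whether a handful of memory slots really retains what you need across two million tokens is the open question.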
Do you? Anyone who's been playing with these things for more than an hour will easily understand the need for more context. You could feed entire book series or textbooks, entire codebases as a prompt. This is what those vector databases like Pinecone were for, but they aren't necessary if you can feed a billion tokens worth of memory in. It means it remembers what you say to it for longer, it means you can feed in more data, it means more use in general.
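For anyone who hasn't touched the vector-DB workaround that post is talking about: the whole dance is basically chunk the corpus, embed the chunks, and at query time rank them by cosine similarity and paste only the top hits into the prompt. A rough sketch, with a made-up embed() stub standing in for a real embedding model (nothing Pinecone-specific here):

import numpy as np

def embed(text: str) -> np.ndarray:
    # placeholder embedding for illustration only; a real pipeline would
    # call an actual embedding model here
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def top_k_chunks(query: str, chunks: list[str], k: int = 5) -> list[str]:
    # rank stored chunks by cosine similarity to the query
    # (embeddings are unit-normalized, so a dot product is the cosine)
    q = embed(query)
    return sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)[:k]

# stuff only the most relevant chunks into the limited context window
prompt_context = "\n\n".join(
    top_k_chunks("example question", ["chunk 1 ...", "chunk 2 ...", "chunk 3 ..."], k=2))

With a multi-million-token window you can mostly skip this and paste the corpus in directly, though retrieval stays cheaper than attending over everything.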
He did not say that. You're coping.
https://www.foxnews.com/tech/openai-ceo-era-giant-ai-models-over
I'm sure they'll train Hyena on the same training data and release that
it's a marketing trick to surprise you when GPT-6 comes out
>believing the hype-tempering
Wasn't there some bullshit about how gpt 4 is gonna come out in 10 years?
>There will never be a windows 11
Kind reminder to everyone that OpenAI had completed GPT-4 months before releasing ChatGPT.
Safety RLHF for GPT-4 started early September, and multiple AI researchers outside of OpenAI had access to it by then.
It's safe to assume that whatever tech they have is way better than what you currently have access to.
I gotta rant cuz this shit's been bugging me. You ever think about how AIs are gonna do some wild shit but they ain't there yet? It's fucked up, like I got tons of ideas for work problems that ChatGPT or some shit with browsing could fix, but I ain't got access to that now so it's weird af. Why waste 3 hours when you know in a few months you could do it in like 1/4 of the time? Same shit's happening to peeps with plugins but no image input. Sounds dumb as fuck, but it's actually deep shit about how these tools mess with our brains, y'know?
>Why waste 3 hours when you know in a few months you could do it in like 1/4 of the time?
we need a name for this phenomenon pronto! i have not touched a single side project since GPT4 dropped knowing i get a free holiday's worth of time if I don't do the work right away.
>a.i. induced procrastination
>delayed easification
>postponer boner
You know exactly what I mean.
"Technological Acceleration Paralysis" - Techcelleration Paralysis - Techcel
"Progress-Induced Procrastination"
"Innovation-Impeded Inactivity"
"Advancement-Delay Syndrome"
"Tech-Triggered Time-Out"
"Progress-Prevented Paralysis"
"Technological Timeout"
"Future-Focused Frostbite
"Trailing Tech-Trepidation"
"Innovation-Inhibition"
GPT-4 isn't capable of writing anything more complex than 100-line scripts. Move on with your lives.
This is an overstatement, but there is a grain of truth to it and the limit is not technological capability.
I'm fairly sure that OpenAI and associates will reserve any advanced capability for themselves. To put it another way, if they create a genie that can grant unlimited wishes, they will use it to get a competitive advantage over the rest of the world (rather than sharing it with the rest of the world).
You're waiting for non-transformer models like Hyena. It's a long way off.
A few years ago, lots of smart people were saying that what we have today is a long way off.
i was thinking this, why would open a.i. release an api if they have the power to start a managed service provider that can do every contractor's job in the world at the same time... because you can't wrangle shit with the current state of things and maybe you need the proles to tell you how to do their job before firing them and making the tooling perfect for the next gen gpt.
tl;dr gpt-5 is taking your exact job, you will report to your manager how many hours you spent wrangling a.i. and as that number grows you get fired.
>because you can't wrangle shit with the current state of things and maybe you need the proles to tell you how to do their job before firing them
This is what happened with Kodak. The Chinese bought it, fired most of the workers that weren't important, then had the important workers train their personnel, then fired the important workers.
I think OpenAI will have an enterprise subscription btw.
It's useful for SEO and places where search engines are involved, I'm sure.
>No.93
You don't know what the fuck you're talking about, your job just doesn't involve using these tools to their full capability, you absolute fucking basement-dwelling dweeb who thinks they're superior to other people, KYS
finally i can tell normies to shut the fuck up
Is it true they will kill the free version when 4 comes out? And you'll have to pay for it?
It's a black budget program by now.
ChatGPT is not the only application of AI
No; you can use it to design tailored proteins:
> https://singularityhub.com/2023/04/24/this-ai-can-design-complex-proteins-perfectly-tailored-to-our-needs/
Which I am led to believe is at least as significant as when AI solved protein folding like two years ago. Apparently that's how they were able to make a COVID vaccine within about a week of the virus being discovered, though it took several months to show it was safe enough to deploy. Please do not reply with schizo antivaxx nonsense because I really don't give a shit
nGreedia shills will keep trying to sell it.