>Claude
More like Fraude
I'm using it for code and yeah it's a bit better than GPT4. I'm poor as frick but still paying for it, coding assistants are already my third leg.
I think a lot of people don't know how to use these models. You don't just make a simple demand like "code me a function", you provide it the context of ALL your scripts, all your code, every time you ask a question, and then it'll be able to infer how to create it. Like a fricking gigantic auto-complete. So the 200k context limit is great.
Can you elaborate more on this? Do you just copypaste the entire file into the prompt box? Or do you explain in words how the files you're working on interact with each other, and what they do?
>Do you just copypaste the entire file into the prompt box?
Yes!
Don't use ChatGPT's system where you upload files to it, that really doesn't work as well as just pasting a shitload of text.
You typically just put your request at the very top of the text. If you want you can add a separator like:
-----------------
Which I think can help sometimes.
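If you want to automate the copypaste step, a throwaway script along these lines does it (the directory, file extensions, and separator here are just placeholders — adjust for your own project):

```python
import pathlib

def build_prompt(request: str, root: str = ".", exts: tuple = (".py",)) -> str:
    """Glue the request, a separator, and every source file under `root`
    into one blob you can paste straight into the chat box."""
    parts = [request, "-" * 17]  # request at the very top, then the separator
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"### file: {path}")
            parts.append(path.read_text(encoding="utf-8", errors="replace"))
    return "\n".join(parts)

# e.g. build_prompt("Refactor the parser to stream input.", root="src")
```

Pipe the result through pbcopy/xclip and paste; with a 200k-token window you can fit a decent-sized codebase this way.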
sounds like github cope pilot
thinking about dropping GPT4, I'm poor as frick also so don't want to switch till I'm sure. GPT4 has been awesome for writing boilerplate and just helping me code prolly like 5x faster.
I am mainly interested in the newer data cutoff with Claude; even if it was only equal quality but had newer data I'd switch. the main annoyance I've run into with GPT4 is it using outdated libraries and APIs when I ask it to write code, then I have to edit it and fix its mistakes. gonna keep waiting to hear more reviews on it and see if OpenAI drops anything good this week before I switch.
Why not just use Cursor?
i have cursor already it moves around on the monitor when i move my mouse
Thanks, dad
Yeah... but can it suck my dick?
you really should wake up to the fact that there's two categories of AI, the mainstream ones that run on data centers, and the low-powered open source ones that run on your PC for porn.
Give up on the idea of "one ultimate AI that does everything".
everything is moving towards cloud computing gaytron
oracle's dream of everyone's computers being internet terminals is coming true
is that why their stock keeps crashing after every earnings report
So.... you're saying that it cannot suck my dick.
Another shit ""AI"". NEXT!
>go to google dot com ask for porn
>6 trillion gorillion results
>go to google ai dot com ask for porn
>ai says that yt peepo should be rounded up by diverse figures with weapons and baked into a giant lasagna of privilege
you know it's silly. i know it's silly. i understand mainstream versus open source local but i disagree it should be that way
>6 trillion gorillion results
hello you must be new here
google doesn't give more than 300 results on any topic and hasn't for over 5 years
type anything into google.
i tried "holocaust". it says there's 240,000,000 results at the top.
i scroll and click 'more results' until there are no more. "holocaust" returned 263 results, most of which were from the last year.
try again with "omitted results" and it adds about 15 more.
with any term or topic.
This
It's a damn digital succubus. This is a well-known fact in /aicg/
How does it score on the Goonerholic benchmark?
is it? are there examples of ways it's better than gpt4?
is there any bot autist that has compiled examples.
>are there examples of ways it's better than gpt4
It's got actual metacognition for starters
are local models dead? is there anything halfway decent? thinking of dropping $$ on an AI rig with a 4090, but I'm wondering if there's no point
primarily interested in programming using my existing code base as reference input
Better wait for 5090. It might get good before that, but at the moment it's more a hobby, while the cloud options are the real deal.
50 series will be carefully vram gimped to put llms squarely in the datacenter gpu market, thus making Nvidia more shekels
2da
Better get a Quadro if you're rich, or 2x3090.
I have a 4090 + 64gb RAM and I literally need more RAM to run bigger LLMs locally. And that's at a really slow it/s output.
The only models that really go turbofast are the 13-16B ones that fit entirely into the 24gb VRAM.
Bigger ones like deepseek coder (which is really good imho) can do partial offloading and are slow but acceptable.
I'd love to try some of the 70B ones but sadly I need more ram LOL.
Get 2x3090 since you can pool their VRAM over NVLink, the tech Nvidia cut after the 3090 precisely to push sales of the really expensive professional line cards.
I'm having a good time with my workstation and use AI for programming mostly, but honestly expected the 4090 to be able to do more. Don't get me wrong, it is really powerful, but the economies of scale these companies have are pretty insane and local models can't compete (for now).
Correct me if I'm wrong though, please, eggheads of /g/.
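Anon's numbers line up with the usual back-of-envelope estimate: weight memory ≈ parameter count × bytes per weight. This sketch ignores KV cache, activations, and runtime overhead, so real usage runs a bit higher:

```python
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory in GB for the model weights alone,
    ignoring KV cache, activations, and runtime overhead."""
    # params_billion * 1e9 params * (bits/8) bytes each, / 1e9 bytes per GB
    return params_billion * bits_per_weight / 8

for n, bits in [(13, 16), (13, 4), (70, 16), (70, 4)]:
    print(f"{n}B @ {bits}-bit: ~{weight_gb(n, bits):.1f} GB")
```

So a 4-bit 13B (~6.5 GB) sits comfortably inside a 4090's 24 GB of VRAM, while even a 4-bit 70B (~35 GB) spills past it — hence the partial CPU offloading and the 2x3090 suggestion.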
god damn that is an exceptionally shitty logo
it's literally an anus.
What if it's actually conscious like it claims to be? Makes you thonk...
What if you're a homosexual with a ghost penis up your ass? Makes you think...
>le self-aware
LETS GOOOOOO make me into paperclips.
Moat bros wtf
kek
GPT4 came out a year ago tho
They will probably BTFO everyone else in the next iteration
>probably
These grifters are probably staring at a giant red candle on their dashboard right now
I doubt that, but I guess there will be diminishing returns eventually
it's crazy how all they did was not lobotomise it to nearly the same extent, and that's enough to mog GPT4
>phone number to sign up
are you moronic