Is GPT-whatever able to work offline, or does this shit just google stuff when asked to generate code? Posted on March 15, 2023 by Anonymous
Please, stop creating GPT related threads.
It's over. GPT has done to BOT what crypto did to BOT. Another board down.
Meh. It was like this a month or so ago. Whoever was making the threads gave up, or was banned.
there's maybe 3 good threads active on biz at any one time, and they're all slow as shit.
where did everyone with a brain migrate to? I flit between g and biz, sometimes diy, ck, tv if I've seen a recent film
>tv if I've seen a recent film
you are not welcome there
stay out, secondary
you deserve it for ruining /ic/
>what crypto did to BOT
What did he mean by this
>STOP TALKING ABOUT TECH!
How about you stop flooding threads with your retarded posts.
Need a general already.
I can't tell if this is a parody of a moron asking about how language models work or an actual moron asking how language models work.
Do you have proof it's a language model and not pajeets googling? It's not open source.
I need answers anon
I think that llama thing can run locally. But yeah stop it with these GPT related threads already.
Maybe this? https://github.com/AmericanPresidentJimmyCarter/yal-discord-bot
where is the moronpt4?
give me it nao
Bing bings things and appends the top links to the prompt. GPT-* has no access to data after its training dataset was compiled.
Counter question. Why does it bother you if you don't know how either of those things work?
I need to know how to kill it just in case
Anon, you aren't smart enough to kill a mouse
Funny you mention that, I did kill one, but that was years ago, so better not make me do it again.
It's as open as the Federal Reserve is federal (you just get APIs to interact with their servers/supercomputers).
It doesn't have access to the internet.
Sydney's GPT-4 literally looks through your search results and even queries websites in real time to respond to you. ChatGPT's GPT-3.5 has a context window of 4,096 tokens, which makes synthesizing websites hard, but GPT-4 can handle a maximum of 32k tokens, making this approach possible. Connecting it to the internet is trivial; it's just that back then it was inconvenient to do so.
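The "append search results to the prompt" trick anons are describing is just string assembly under a token budget. A minimal sketch of that pattern, where `count_tokens` is a crude whitespace approximation (a real system would use the model's actual tokenizer) and all names and limits are illustrative assumptions, not anything from OpenAI or Microsoft:

```python
def count_tokens(text: str) -> int:
    # Rough stand-in for a real BPE tokenizer: ~1 token per word.
    return len(text.split())

def build_prompt(question: str, search_results: list[str],
                 context_window: int = 4096,
                 reserve_for_answer: int = 512) -> str:
    # Leave room for the model's answer, then spend the rest on snippets.
    budget = context_window - reserve_for_answer - count_tokens(question)
    kept = []
    for snippet in search_results:  # assume results are ranked best-first
        cost = count_tokens(snippet)
        if cost > budget:
            break  # a 32k window would let far more snippets in here
        kept.append(snippet)
        budget -= cost
    sources = "\n".join(kept)
    return f"Search results:\n{sources}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt(
    "who won the 2022 world cup?",
    ["Argentina won the 2022 FIFA World Cup.",
     "The final was played in Qatar on Dec 18, 2022."])
print(prompt)
```

The model itself never touches the network in this setup; the search layer fetches text and the prompt is just bigger. That's why the window size matters.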
I'm addressing the OP.
No, it doesn't connect to the internet.
Basically only LLaMA can work offline, and that's available via llama.cpp.
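For anyone who wants to try the offline route, a hedged sketch of the llama.cpp workflow — the model path and quantized filename are assumptions, since you have to supply your own LLaMA weights and convert/quantize them per the repo's README first:

```shell
# Build the CPU inference binary (no GPU or network needed at runtime).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run a prompt against locally stored, quantized 7B weights.
# -m: model file, -p: prompt, -n: max tokens to generate.
./main -m ./models/7B/ggml-model-q4_0.bin \
       -p "Write a C function that reverses a string." \
       -n 256
```

Everything happens on your own box; pull the ethernet cable and it still runs, which is the whole point of the thread.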