free 3.5 is enough for me
you just need to get some brain cells to ask the right question
NO REFUNDS!
Really, is this a thing?
LMFAO
>rename GPT-3 to GPT-4
>retards buy it in droves
>AI is just hype
>chooses to pay for it
Or
>AI IS HECKIN REVOLUTION!!! CYBER KILL HOLOCAUST!!!! DEATH STARVATION END END END HAPPENING GNG?!!!!!!
>chooses to pay for it.
How do you justify either of these?
Just use the API and pay the costs entirely yourself; you get unlimited messages that way
1. How does the API sidestep message limits?
>pay the costs entirely yourself
2. Wut? Or can you explain how this is different than paying $20/month entirely yourself?
1. The API bills per token instead of a flat rate, so you don't get a message cap, since you're paying as you go instead of paying for a subscription.
2. See 1. Because you're paying as you go, you don't pay a flat $20; you pay per token. It's $0.03 per 1000 input tokens and $0.06 per 1000 output tokens. Anything you send to it is input tokens, anything it spits out is output tokens. It's limited to 8k tokens per conversation history, and every time you exchange an input and output with it, that exchange goes into your next input as a primer so it can "remember" the conversation history.
Roughly, 1 word is about 1 token, give or take (more like 750 words per 1k tokens once messages get long).
An example:
You send a message that has 10 words, so that's 10 tokens, and you're billed $0.0003 (0.03 * 10 / 1000). It replies and its reply is also 10 words, or 10 tokens, so that's $0.0006 (0.06 * 10 / 1000). So far, that's $0.0009.
You toggled the API option to "remember message history" and send another message that's also 10 tokens. Because of how it works, it primes the input with the entire message history so far: it resends your initial input + its initial output, then your new input. So you only wrote 10 tokens for this second message, but the bot got 30 tokens worth of input, which is another $0.0009 (0.03 * 30 / 1000) on top of the $0.0009 you've already spent, for $0.0018 total.
Then it replies again, but this time with 20 tokens or so. That's another $0.0012 (0.06 * 20 / 1000), so you're now at $0.0030. And so on: the input gets bigger every turn because the whole history is resent, so costs climb faster and faster as the conversation gets longer (there's a rough sketch of this arithmetic at the end of this post).
If you don't want it to remember the whole conversation, or anything past the input you're giving it at the time, then naturally your costs go down.
To expand on this further, remember that there's an 8k total token limit, so if you get to a large conversation you're going to get stopped from sending that particular instance any more messages unless you trim down what you want it to remember. And a full 8k-token input is $0.24 (0.03 * 8000 / 1000).
The tradeoff of the 25 messages every 3 hours is that you aren't capped on conversation length (as far as I can tell; it likely uses the 32k context model, which isn't available via the API yet, so after 32k tokens it probably just selectively stops remembering parts of the conversation to be able to continue), so there's a bit of a benefit there. But if you aren't using it for super long exchanges anyway, you might get more mileage out of the API even with its 8k token context limit.
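Here's a rough Python sketch of the billing arithmetic above, just to make it concrete. The prices are the GPT-4 8k rates quoted in this post ($0.03 per 1k input tokens, $0.06 per 1k output tokens); the token counts are made up for the example, and estimate_tokens is only the rough words-to-tokens heuristic, not a real tokenizer.

# Rough sketch of the pay-as-you-go billing described above.
# Prices are the GPT-4 8k rates from this post; token counts are invented.

INPUT_PRICE = 0.03 / 1000   # dollars per input token
OUTPUT_PRICE = 0.06 / 1000  # dollars per output token

def estimate_tokens(word_count):
    # rough heuristic only: ~0.75 words per token, not a real tokenizer
    return round(word_count / 0.75)

def conversation_cost(turns):
    # turns = list of (your_tokens, reply_tokens) pairs.
    # With "remember message history" on, the whole history is resent
    # as input on every turn, so the input bill grows each time.
    history = 0
    total = 0.0
    for your_tokens, reply_tokens in turns:
        prompt = history + your_tokens        # full history + new message
        total += prompt * INPUT_PRICE
        total += reply_tokens * OUTPUT_PRICE
        history = prompt + reply_tokens       # carried into the next turn
    return total

# The example above: 10-token message / 10-token reply,
# then another 10-token message / 20-token reply.
print(round(conversation_cost([(10, 10), (10, 20)]), 4))  # 0.003 dollars so far
print(round(8000 * INPUT_PRICE, 2))                       # 0.24 for a maxed-out 8k input
print(estimate_tokens(1000))                              # ~1333 tokens for a 1,000-word message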
Is a message a conversation?
A message is a single message; a conversation is the entire collection of messages so far. If you want a comparison, each of our back-and-forths is one message, and our entire exchange (assuming you're the OP) is 5-6 messages (depending on whether you condense my two posts, where the second just expands on the first), which is currently 1 conversation.
>and $0.06 per 1000 output tokens.
HAHAHAHAHAHAHAHAHHAAHAHAHAHAHAHHAHAHAHAHAHAHHAHAHAHAHAHAHAHHAHAHAHAHAHAHAHAHHAHAHAH no wonder it gives you a 15 page essay when you ask it why black people commit so much crime
yeah fucking exactly, the more verbose it is the more shekels they get
especially considering that 75% of those "tokens" are not even AI generated, they are just boilerplate disclaimers they put on every output.
>10 words, so that's 10 tokens
>limited to 8k tokens per conversation
So instead of 25 messages, you're limited to 8,000 words?
This is interesting as far as it explains how GPT remembers a conversation. You're saying it essentially resends the entire conversation every new request?
>You're saying it essentially resends the entire conversation every new request?
Yep, and the API version is limited to 8k tokens where the chat-based one is 32k tokens. That explains why, if you have a long conversation with it, it starts to go off the rails and repeat itself or forget what you discussed with it earlier.
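For anyone curious what "resends the entire conversation" looks like in practice, here's a minimal sketch assuming the openai Python package as it worked around GPT-4's API launch (the ChatCompletion.create call); the model name, key, and system prompt are just placeholders.

import openai  # assumes the openai Python package circa GPT-4's API launch

openai.api_key = "sk-..."  # your own key; you're billed per token on every call

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text):
    # The model itself is stateless: its only "memory" is whatever we put
    # in `messages`, so the entire history goes out on every request
    # and every old token gets billed again as input.
    messages.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    reply = response["choices"][0]["message"]["content"]
    # The reply is appended too, so it becomes part of the next request's
    # input (and the next request's input-token bill).
    messages.append({"role": "assistant", "content": reply})
    return reply

# Each call sends a longer `messages` list than the last; once the total
# passes the model's context window (8k tokens for the API model discussed
# here), you have to trim or summarize old turns yourself.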
Yeah, I've noticed that too. Conversation history is really only useful for saving some answers, since deep development of a topic or creative work cuts off if you try working on something for more than a few hours. I know now it's not time that matters but conversation length. I've basically stopped saving conversations for anything other than not having to write down an answer I liked.
I was one of the first to get GPT-4 through the API, but the costs are absurd. It is much cheaper to go with ChatGPT Plus. I use two accounts, so I barely ever hit the limit.
>scrapes millions of documents ignoring every terms of service and copyright law in the process
>charges you $20 per month and rate limits you
Use ChatGPT on Poe. It's not as good, but it's free, you don't need a jailbreak, and if your prompt fu is strong enough, it can still provide a standard fap, just not a life changing one.
Just get API access and use the Playground in chat mode.