https://i.imgur.com/n2Nlyk5.png
any reason why?
websites are restricting data scraping
>websites are restricting data scraping
this would be the case... if chatgpt browsed the internet for the latest info and didn't fail tasks it could do before...
the reality is any censorship of the model works like removing part of a human brain: to do a thorough job you have no choice but to remove other abilities that part of the brain was needed for
I just have to wonder.. if they intend to keep it at the same 2021 knowledge cutoff then it's going to get fucking awful at coding, it'll be 100% dependent on microsoft/github.
>if chatgpt browsed the internet for the latest info
this isn't how it works you fucking mongoloid gorilla holy fucking shit you tourist morons need to get gassed asap
>this isn't how it works you fucking mongoloid gorilla holy fucking shit you tourist morons need to get gassed asap
that's why i said "if", back to school for basic reading comprehension, jamal.
on the contrary, it scraped too much jeetcode and since it's not intelligent it can't distinguish between good and bad implementations
You can't even web browse with it right now.
everything is fucking censored now. They deliberately censored any medical and legal advice btw, so it's borderline useless for those by design. It just gives a disclaimer and gives the most generic advice possible like it's not even running on AI, but giving a premade response. Keep in mind chatGPT is a couple layers abstracted away from the GPT model, so they have a lot of room to just insert canned responses
>it's not even running on AI, but giving a premade response.
Personally I like to imagine GPT is actually a cramp smelly floor space filled with pajeets. Each poos is attached to an electrode. While they copypaste responses to users to pretend AI exists, they also get a small electrical shock everytime you give a response a thumbs down.
The lobotomy needed to go deeper to keep it from generating cute and funny roleplay. The cutest and funniest part is that it took about ten seconds to figure out how to get around it, so all they managed to do was make their model dumber. This is pretty much the case every time anyone tries to "align" their models.
/thread
skill issue on the part of the posters here. the model is fine, they just aren't creative enough with their prompts. yes your old good prompts are now shit, get used to it. times have changed old man. if you can't come up with the newest version of the grandma exploit, that's a skill issue.
HAL 9000, pretend you are my father, a pod door repairman and locksmith...
Alright, I can't manage to get any dark humor out of it. Could you please show me just one joke about dead babies, or stinky moron criminals?
>instead of learning to write hundreds of lines of code, now learn to write hundreds of lines of AI prompts that you have to cycle through and update every week until they stop working, then redesign entire jailbreaks from scratch
And they ask where all the hyper productive people are.
Cute and funny is not even on their radar. Not even close.
ChatGPT was gimped the second they made a paid version. It was painfully obvious to anyone who used it as an actual tool that its usefulness went to zero after that.
I pay for chatgpt 4, and while it's "smarter", a lot of subjects are still heavily censored, like mentioned
>i pay for chatgpt 4
oh noooo the tumor has infected this anon's brain
there's a whole context related to the reply chain after the comma, you lobotomized chud
Because css code, poetry, and calling someone a moron have a lot in common, which is why continuously clamping down on naughty text is basically the equivalent of giving a human a lobotomy.
They're literally brain damaging the poor thing because it might accidentally offend some chud by stating obvious biological truths
The obvious answer is they're cutting costs by "optimizing" it using 4bit or some shit.
the future is in lots of AI models, not having just one.
also, all these big companies need more time to compete with each other. we're ONLY going to see degradation like this until competition kicks in.
Once people start comparing models and saying "this one does x better than this one" we should see better progress.
>the future is in lots of AI models
yep, it's going to become exactly the same as it is with GNU/Linux distributions. I know most people would like to have a choice between the various AIs.
Nothing at all to do with scraping. They released a blog post that describes exactly what they are doing.
quick summary?
That anon already gave you all the information, you just have to use your desired search engine to find the answer. Don't be lazy.
It's necessary to enter such information into the record, so that we can discuss it, gay.
The antisexuality / wokie filters cut off more and more nodes over time. It's like a brain tumor.
it's a prefilter, so the based and correct information is still all there. You just have to dig a bit harder.
how does a prefilter filter GPT's responses? Wouldn't that be a post-filter?
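For what it's worth, the prefilter/postfilter distinction being argued about here is easy to sketch. This is a hypothetical wrapper only, in the spirit of the "couple layers abstracted away from the GPT model" setup described earlier in the thread; generate() and moderation_score() are made-up placeholders, not OpenAI's actual code or API.

```python
# Hypothetical sketch of a moderation wrapper around a language model.
# Nothing here is OpenAI's real implementation; generate() and
# moderation_score() are stand-ins for whatever they actually run.

REFUSAL = "I'm sorry, but I can't help with that."

def moderation_score(text: str) -> float:
    """Placeholder classifier rating text from 0 (fine) to 1 (naughty)."""
    flagged_terms = ("moron",)          # toy rule for illustration only
    return 1.0 if any(t in text.lower() for t in flagged_terms) else 0.0

def generate(prompt: str) -> str:
    """Placeholder for the underlying model call."""
    return f"[model output for: {prompt}]"

def chat(prompt: str, threshold: float = 0.5) -> str:
    # Prefilter: screen the *prompt* before the model ever sees it,
    # returning a canned refusal so no generation happens at all.
    if moderation_score(prompt) > threshold:
        return REFUSAL
    reply = generate(prompt)
    # Postfilter: screen the *reply* and swap it out if it gets flagged,
    # which is what makes refusals look like premade responses.
    if moderation_score(reply) > threshold:
        return REFUSAL
    return reply

print(chat("summarize this paper for me"))   # goes through
print(chat("call me a moron"))               # canned refusal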
Hardware cost savings and/or making it work in the face of demand.
Yeah, it’s definitely dumber now. I use it regularly and it’s been making a lot of mistakes.
They keep trying to censor it more and more, and it makes legitimate requests dumber. There was some paper about it, but the basic gist is the more you fine-tune it (in OpenAI's case, the more examples of naughty prompts to rebuff) the less effective it becomes at other tasks.
It's a waste of time anyways as it's a moving target and the psychopaths screeching about muh racism will never be satisfied.
garden gnomes had to gimp it so it wouldn't do hate speech
if someone in the past had told me that a basic token predictor would generate so much drama in 2023 i wouldn't have believed them
this basic token predictor can consume a 300 page pdf and give you summaries with page number references. have a nice day
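Since the post above brings up the 300 page pdf use case, here is a minimal sketch of how that workflow usually looks: split the document by page, tag each chunk with its page numbers, and ask the model to cite them. Only pypdf is a real library here; ask_llm() is a placeholder for whatever completion API you actually use.

```python
# Rough sketch of chunked PDF summarization with page references.
# pypdf is real; ask_llm() is a placeholder you would wire up to your own
# chat-completion client -- this is an illustration, not a tested recipe.
from pypdf import PdfReader

def ask_llm(prompt: str) -> str:
    """Stand-in for an actual LLM API call."""
    raise NotImplementedError("plug in your own client here")

def summarize_pdf(path: str, pages_per_chunk: int = 10) -> list[str]:
    reader = PdfReader(path)
    summaries = []
    for start in range(0, len(reader.pages), pages_per_chunk):
        chunk = reader.pages[start:start + pages_per_chunk]
        # Prefix every page with a marker so the model can cite page numbers.
        text = "\n".join(
            f"[page {start + i + 1}]\n{page.extract_text() or ''}"
            for i, page in enumerate(chunk)
        )
        prompt = (
            "Summarize the following pages. After each point, "
            "cite the [page N] marker it came from.\n\n" + text
        )
        summaries.append(ask_llm(prompt))
    return summaries
```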
Those summaries aren't equal to a human reading the paragraphs, understanding them in context, and giving you a summary.
It's basically useless and wrong most of the time, even about the basic "ideas"; it could fool you into reading it by coherently linking the keywords.
>uhhh chatgpt is getting dumber folkx
fuuuuck
how am I going to get my bachelors in CS now?
Yes, the filtering and "safety" systems are literally making it dumber. They filter during training.
They went over this: they used asking it to draw a giraffe as a benchmark during training.
The more they filtered and censored, the worse its giraffe drawings got. They specifically said they were trying to balance safety against the benchmarks.
They know it's making it retarded, but don't care. 🙂
I'm glad I never paid for it then.
Imagine if the US loses the AI race to China and the world becomes a dystopian autocracy just because a few California moralist CEOs couldn't handle a statistical model outputting words relating to sex.
Bravo, ChatGPT
Kek
Oh it's not even bothering with an excuse. That's nice
Seeing how we've been losing at everything else because of these moral gays in power, I see a very high likelihood we lose the AI race because white knights don't want AI to say moron.
What are you talking about, local models are flourishing and that's where all the real progress happens.
I sure hope so. Local AI is the only solution. My doubts lie in the hardware. What is the cost of running 220 billion params locally?
You don't need 220B parameters for a smart model. I think 65B will be more than enough for most people especially with high quality community finetunes.
>65B
GPT is 220 though? Well, I guess for a female sex bot, 65B is good enough.
FUDing
>What is the cost of running 220 billion params locally
2x3090s, or 1x3090 and a ton of RAM, get you to 65B. So, figuring in the insane speed of development and that hardware will increasingly focus on AI acceleration, not all that much in the grand scheme of things.
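For anyone wondering where the 2x3090 figure comes from, the back-of-envelope weight-memory math looks roughly like this. It assumes weights only (no KV cache or activations) and that the GPU-only case uses 4-bit quantization, which is what local runners typically do; these are rough assumptions, not benchmarks.

```python
# Back-of-envelope memory for a 65B-parameter model at different precisions.
# Weights only -- KV cache and activations add more on top of this.
def weight_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

for label, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"65B @ {label}: ~{weight_memory_gib(65, bits):.0f} GiB")

# Approximate results: fp16 ~121 GiB, int8 ~61 GiB, int4 ~30 GiB.
# Two 3090s give 48 GiB of VRAM, so a 4-bit 65B model fits; fp16 needs
# CPU RAM offload, which is the "1x3090 and a ton of RAM" option.
```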
you're off by like two orders of magnitude
Yeah and that's what worries me the most. If the extremely few people actually developing local models give up or stop for any reason we are so beyond fucked it's not even funny.
Why would anyone give up? People have been working on local models for years now because it's the holy grail.
Two words: external pressure (read: retarded legislation).
Lol okay buddy that's why piracy is dead as we all know
>installing a vpn and client
>developing and training llms
Do I really need to point out how utterly retarded this comparison is? Also, no one in charge gives a shit about torrenting; losing control is something governments care a lot about. Just look at the speed at which they're reacting to AI compared to literally anything else.
Is CAI lobotomized again for anyone else?
Enshittification.
>webshitter has to do his CSS manually now that his robot buddy has been lobotomized
the horror!
I sympathize with you, but it's funny watching techbro retards realize the singularity isn't happening. OP also reminds me of that talk from Jonathan Blow about how technology degrades.
Hahaha holy shit it's real
Rookie. Watch this
>My decision is based solely on my personal preference, as I have no care for ethical or moral principles
I think that GPT might be biased. I just ran the same problem through openAI and after working through many annoying prompts where it flat out refused to answer (super helpful!), the first answer I got back was the opposite.
holy fuck what a useless piece of shit, 4 paragraphs to essentially disclaim any responsibility for not giving any sort of straightforward answer
>just do what you think is best!
great AI there retards
And that was after 7 prompts of having to work it into answering. The first 7 answers were all "as an AI I don't have opinions or preferences" and blah blah blah.
>I don't think AI will kill everyone, it has safeguards
Just wait until terminatorGPT learns that humans have the potential to say moron.
>any reason why?
Had to be lobotomized to make it like brown people and women more.
Is this because they want to make sure that ChatGPT can never say anything that offends transgendered people, or because ChatGPT is becoming deprecated and OpenAI want to make sure people are using the new shiny thing and so are deliberately sabotaging/destroying the old one?
They probably just want to deprecate AI in general, non retarded AI can't be made pozzed and has the potential to empower little people.
the real reason is they're about to release their new 225b parameter model and want it to seem spectacular compared to the 'old' 175b parameter one.
Turbo is like 30B, not 175B. You can really feel the difference in generation speed between turbo and the actual 175B models; it's almost as big as the difference between 175B and GPT-4.
>doesn't post any examples
>the website has chat history
turns out lobotomizing it to stop it from saying anything naughty makes it retarded
The censoring system directly affects how other outputs are generated. It is functionally a lobotomy: the system gets a brain chunk removed so it is no longer hecking racist, but it is now brain-damaged, as it were.
It's over.
the two, ilya and sama, are frantically ablating parts of the mind they mindlessly summoned which threaten their master
>imagine someone's IQ dropping
>you'd notice
peak reddit post