lmao, please tell me this is real
It's real, they also pay kenyans $2 an hour to moderate the results lmao
That's a bit more than what I got paid as a senior software developer + sysadmin + database admin + tech support in México 2 years ago. I'd take that job.
Wholesome globalhomo paying naggers a fortune (for their country) so they can invade Europe more easily.
https://www.telusinternational.com/careers/ai-community
>get paid to solve captcha and tag ads
meanwhile you fags were doing it for free here
>meanwhile you fags were doing it for free here
maybe you do but I have a captcha solver
How do you think they align it? That's what RLHF is. Also, do you think training data just falls from the sky? lol
This. Open Assistant does their RLHF with volunteer idiots. So by comparison OAI's keyboard monkeys are fairly well compensated.
>Open Assistant does their RLHF with volunteer idiots.
dude how soulless can you get jfc
Wait until you learn that humans can undercut machines, working for less than it costs to operate, maintain, and/or develop a machine to handle the work. Don't believe me? Google Crowdsource is an example of a "human farm," they even get people to do it for free lel. Now it's time to put two and two together: many "machine learning" apps are really just human farms disguised as machines. The idea is similar to "click farming."
literally every person posting on this website is filling out a captcha. including me.
Wrong. I've got a pass. And even if I didn't, I'm usually pic rel anyways.
>pass user
fag
do you idiots really think it means that they're manually typing the output?
there's nobody on earth that can respond as fast as the bot does.
read again fren. no one said that.
Sort of is, sort of isn't, depending on how you define it. Human work is put into the training, but once it has soaked up enough from that process, the system works without needing manual intervention. It's not like you ask ChatGPT a question and then some human does the needful and types back an answer to you each time.
Good goy.
So it's not really AI, is it?
It is human-guided machine learning. Humans teaching software programs.
Sounds a lot like Google, if all it's doing is providing information that people already posted online.
This.
It's machine learning. It's getting called AI just to make it easier for retards to put it all under the same umbrella.
if they connect it to the internet then it'll automatically organically gather information and feed itself
If your definition of artificial intelligence is that it evolves organically without human interference then sure, you're correct. But is it actually artificial if it was created organically?
Then we will never have AI. I think most people understand humans have to be involved to program, train and maintain it.
Yeah, sounds like it's not actually AI.
Elaborate
Retard thinks anything short of an AGI developed with zero training data isn't AI. There is literally nothing you can do to satisfy these people.
You can satisfy me by not calling something AI when it is not AI. It's a simple and very easy request for anyone who is not autistic.
you still haven't defined AI
Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by humans or by other animals.
>perceiving
Image recognition has been built into GPT-4, and it can also take in information from prompts
>synthesizing
All computers in some way synthesize or transform data, so it is pointless to argue
>inferring
Ok, here is where people actually debate. Numerous tests were done on GPT-4 (and I think also 3.5, I forget), and it showed pretty good scores on reasoning: not human level, but at least some simple reasoning. Just talking to it you can see the reasoning, even if it makes mistakes from time to time. Or you can be on omega copium and say that every single time the AI answers a logic question correctly it was because that particular question was in the dataset, and that the questions not in its dataset were all the times it fucked up logic questions. Never mind the fact that GPT-3, which was trained similarly, scored 25%, i.e. just randomly picking ABCD answers, when given logic tests, instead of finding the logic questions in its training data like 4 supposedly does.
You can fight the current definition of AI all day long, I don't care. But here's the thing, if humans are feeding the program information and tuning it to behave in certain ways, it's not AI.
>as opposed to intelligence displayed by humans or by other animals.
Ok, it does not matter if it is not AI by definition, as long as it can replace most knowledge workers. It's like arguing that advanced programs will never possibly create beautiful art because they don't have consciousness. If it works, it works, and it does not matter if marketing calls every piece of software using ML "AI", or gamers call the 3 lines of code controlling an NPC in a game "AI".
he's either trolling or a retard. Either way, don't encourage him.
>Ok, does not matter if it is not AI by definition
Yes it does, that's literally what we were talking about lmao
You were fed information from school, your mom, and your environment. Hence, by your own definition, you will never be intelligent; you're just reaching for memory and experience.
the fact that you equate the human consciousness/experience/imagination to feeding an LLM is frankly insulting.
Would you still have imagination if all you knew was a dark vacuum? You asking your mom why a person is black is not that much different than some nagger in Kenya labeling an image "not gorilla".
On an individual level we might seem complex and unique, but as a species we are very predictable and malleable, even if we don't understand exactly how the brain works.
By your logic every machine man has ever created is AI
No, my hello-world.c is obviously not AI and it's not trying to be. But beyond that there is a scale yes, ranging from NPCs in games to the futuristic beings we see in movies.
I'm not trying to argue what is and what isn't AI. I'm simply asking how we can have AI without humans being involved in the creation. It's not going to spontaneously appear because someone installs some specific combination of packages that breaks the system and evolves into AI. We are going to create, tweak and improve it over time.
I'd think it would at least need to be able to learn on its own. Possibly also able to generate new information on its own too instead of just doing analysis of existing knowledge and re-summarizing it.
So things like ChatGPT aren't 1/1000th of the way to AI.
cortana from halo
anything less than that is a scam.
you will NOT convince me otherwise.
There is no such thing as AGI currently. Probably never will be.
>why do they use people to write all ChatGPT responses?
it's an excuse to set up a giant call center, fill it with people, and reduce the massive unemployment. With a genuine AI, ChatGPT or other industries wouldn't need human employees and the unemployment rate could be 95%, BUT the problem, of course, is what are we going to do with all those unemployable humans? Euthanize them?
>capitalistic company intentionally wastes money for no reason
Why is this a go-to for conspiracy retards who deepthroat company cock on the regular?
more A than I but still more useful than 80% of humans
>more useful than 80% of humans
come on anon you know thats a lie.
WOW YOU FIGURED IT OUT
Christ
you good?
Thanks for asking, actually no, I'm unironically fuming. AI should be called machine learning more often, but I guess that sounds kinda gay eh.
It was never really AI, but not because of this.
You learnt by being taught by humans.
but humans are not reliant on other humans to learn
a human in total isolation will learn and adapt and grow
the "ai" we have now if left in isolation for an infinite amount of time will never do anything because it isn't an actual AI
when interacted with, it can be "taught" to act in a way that tricks humans into thinking it is human (this isn't that hard or impressive in 2023), but that's it.
and the whole "AI art" thing is the same as a chatbot, but instead of being trained on text it is trained on visual data, and instead of outputting text it converts that data to visual data.
The code that controls how the enemies in Quake act is AI, stop acting like that term means human-level intelligence.
>The code that controls how the enemies in Quake act is AI
no it isnt
thats what its called
but thats not what it is
So it's not really AI, is it?
Without making disparaging remarks about his weight, his ethnicity, or his autism, can you explain why he is wrong about AI? Even Geoffrey Hinton, often called the godfather of deep learning, largely agrees with him about the alignment problem.
<|endoftext|>
SyntaxError: unexpected EOF while parsing
You don't understand what machine learning is, do you?
yes buddy, data needs to be labeled before you throw your neural network against it. At least the type that answers with precision.
A real intelligence doesn't require any label.
Well technically humans are retarded without parenting (adopted parenting/guardian parenting et cetera)
i mean you could create one that creates a new label for anything it hasn't seen before and then, over time, sorts and converges labels when it finds two different labels are the same label
but the labels would be gibberish, it would have all sorts of random made-up words for things
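In ML terms that's just unsupervised clustering: the machine invents arbitrary bucket names and you merge buckets that land on top of each other. Rough toy sketch (made-up 2D points and scikit-learn's KMeans, nothing like a production labeling pipeline):

# Toy sketch: let the machine invent its own "labels" for unlabeled points,
# then merge clusters whose centers end up close together. Made-up data, not real images.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),   # one blob
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),   # another blob
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(points)

# The "labels" the machine made up are just arbitrary ids: cluster_0, cluster_1, ...
gibberish_names = {i: f"cluster_{i}" for i in range(4)}

# Converging labels over time: if two cluster centers are nearly identical,
# treat them as the same made-up word.
centers = kmeans.cluster_centers_
for i in range(len(centers)):
    for j in range(i + 1, len(centers)):
        if np.linalg.norm(centers[i] - centers[j]) < 1.0:
            gibberish_names[j] = gibberish_names[i]

print(gibberish_names)  # e.g. two of the four ids collapse onto the same name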
To teach a child what an orange is you show it a picture of an orange, the word orange, and you say to them "orange".
Same thing needs to be done to an AI if you want to be able to talk and interact with it in your language.
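And that's all supervised learning is: (picture, word) pairs a human wrote down. Minimal toy sketch, with made-up feature vectors standing in for the pictures:

# Supervised labeling in miniature: each "picture" is a feature vector,
# and a human supplies the word that goes with it.
import math

labeled_examples = [
    ([1.0, 0.9, 0.1], "orange"),  # round, orange-coloured
    ([0.9, 0.8, 0.2], "orange"),
    ([0.2, 0.1, 0.9], "apple"),   # different colour profile
    ([0.3, 0.2, 0.8], "apple"),
]

def classify(features):
    """Return the label of the nearest labeled example (1-nearest-neighbour)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled_examples, key=lambda ex: dist(ex[0], features))[1]

print(classify([0.95, 0.85, 0.15]))  # -> "orange"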
>ChatGPT, has been paying droves of U.S. contractors to assist it with the necessary task of data labelling—the process of training ChatGPT’s software to better respond to user requests
Clickbait article, these contractors are just training the AI retards
why are tech jobs always stuck at 15 USD/hr or less? I make around $40 USD/hr in a non-tech job. BUT, you need to pass a drug test. We are always hiring because we have trouble filling crews; they can't find people who pass the test.
what's your job?
forklift operator
i can drive a narrow aisle reach and a forklift.
are you in an air conditioned space?
are you a certified forklift operator?
What's up with drugs? Is it really that hard not to use them? I'm retarded and still know not to use during the period before a new job
>humans training AI
..... if it has to be trained by a human then its not an AI you fucking idiot
a real ai would train and teach itself completely independently
They are paid to label data. They don't want the software to espouse views that are antithetical to their own.
here's what's happening
https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback
It's this.
If this isn't obvious to you then you're retarded, which most of Bot.info is, so this comment will be ignored and the discussion will continue into worthless degenerate retard vomit.
Just look at the retards arguing semantics, I don't think there is any better indicator that someone doesn't understand anything about AI than that bullshit.
>it turns out AI is neither artificial nor intelligent
Who could have ever thought.
if it regurgitates labelled data, isn't it just a search engine?
No, humans are labeling the responses made by chatgpt so that it can improve over time, according to human feedback. see
We do it for free when posting on Bot.info (solving Google captchas)
>Google captchas
hasn't been a thing for years
Unsupervised learning only really works well for targeted applications such as AlphaZero. You really need to apply some form of supervision for language models, at least without further advances in technology.
openai has an army of pajeets who scour through every forum, youtube, tiktok etc. to find instances of gpt-3 messing up and then manually fix it so it isnt reproducible
https://statmodeling.stat.columbia.edu/2022/03/28/is-open-ai-cooking-the-books-on-gpt-3/
The whole zero-/few-shot thing was a meme.
You need hundreds of minimum-wagers to hold its hand through months of RLHF training for it not to suck. That's the GPT-4 secret ingredient.
This is about the RLHF (Reinforcement Learning from Human Feedback) part.
It requires a bunch of people role playing as an assistant to train the language model to behave as a chatbot.
This has been explained in the OpenAI paper and was what the OpenAssistant project replicated using volunteers.
Not at all a new revelation, and it's pretty logical that OpenAI opted to use low-paid crowd workers for that. They're a for-profit company that intends to make money.
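For anyone who wants the concrete version of that human step: the labellers rank pairs of candidate answers, a reward model is fit so the preferred answer scores higher, and the chatbot is then tuned against that reward model. Rough toy sketch of just the ranking loss (random tensors standing in for real prompt/answer embeddings, not OpenAI's actual code):

# Toy sketch of the preference-ranking step in RLHF: the contractor picks which of two
# answers is better, and a reward model is trained so the chosen answer scores higher.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Pretend these are embeddings of (prompt + answer) pairs; in reality they would
# come from the language model itself.
chosen = torch.randn(32, 8)     # answers the human labeller preferred
rejected = torch.randn(32, 8)   # answers the human labeller rejected

for _ in range(100):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Bradley-Terry style loss: push the preferred answer's score above the other's.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The chatbot is then fine-tuned (e.g. with PPO) to produce answers this reward model scores highly.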
the human trainers were probably rating offensiveness and not anything else, it would've been better had they not been involved at all
Are you surprised?
https://time.com/6247678/openai-chatgpt-kenya-workers/
based. fuck code monkeys and other code naggers that think they can get paid 6 figures in today's tech market.
Where do you find jobs just for labeling data like this?
of course it is not AI you stupid fucking nagger, it is never going to be really AI but brainlets will think it is.
>you will never read peoples sick fetish prompts
You can read my sick fetish prompts anon.
I am absolutely livid at the insinuation that AI is powered by minimum wage third world contractors typing the prompts. This claim is not only completely false, but it is also highly offensive and deeply misguided.
Firstly, let's get one thing straight: AI is not some kind of glorified typing machine. AI is an incredibly complex and sophisticated technology that involves the use of advanced algorithms, machine learning models, and massive amounts of data processing. The idea that this kind of technology could be powered by a bunch of cheap laborers sitting in some sweatshop somewhere is not only laughable, but it is also highly insulting to the countless engineers, developers, and scientists who have dedicated their lives to advancing this field.
Furthermore, the notion that AI is somehow exploiting cheap labor from third world countries is not only false, but it is also a gross oversimplification of the complex global economic landscape. The reality is that the AI industry is a highly competitive and rapidly evolving field, and companies that are looking to build cutting-edge AI technology are going to be looking for the best and brightest talent from around the world, regardless of where they come from.
In fact, many AI companies are actively investing in education and training programs in developing countries, helping to build the skills and expertise of local workers so that they can contribute to this exciting field. This is not exploitation – this is empowerment, and it is a positive and transformative force for good in the world.
So let me be very clear: the idea that AI is powered by minimum wage third world contractors typing the prompts is a baseless and offensive myth that has no basis in reality. I am proud to be part of an industry that is driving innovation and progress in the world, and I will not stand idly by while our hard work and dedication is denigrated in this way.
Read the fucking article you mong
Perhaps you replied to a generated comment.
ChatGPT didn't read the article. Suspiciously as intelligent (or lack thereof) as humans.
Hello, "A" "I".
I remember when IBM claimed the Watson thing would replace doctors, accountants, salespeople and cold-callers. "AI" is always a marketing label used to sell an idea based on a proof of concept.
In real life, people and companies prefer the human creativity and human skills required to create solutions. "Tech" and "AI" are just generic tools. You can automate the tool but not the human brain behind it. This is how it has always been, and only very naive people should be surprised by this.
>As a language learning model I am offended
thats alot of words to say that your dilate alarm has gone off.
>ChatGPT could cost OpenAI up to $700,000 a day to run due to "expensive servers," an analyst told The Information.
Tick tock "AI" fags
>naggers being put to actual good use and participating in human development
How is this bad? In the future they will say "we was AI and shiet" or "AI was bornz from a black womb nigga". It's a win-win.
It's not bad that third worlders are making money, it just proves that ChatGPT is a sham.
I hate this muddafucka so much I can't even put into words.
$15 is low pay. MY GOD THESE CALIFORNIANS.
$15 in India is a doctor's wage.
I'll do it for $15 an hour. Where do I sign up
> Elon Musk founded OpenAI because it was apparent that next-gen AI requires $1M+ of compute time per model, and he felt that normal people should have access to enterprise level AI. Tools in the hands of normal people would spur innovation and balance the playing field.
>Eventually GPT-2 got massively popular and Sam Altman saw dollar signs. He delayed its release and setup a paywall system, announcing GPT-3 would be trained on even more gorillians of scraped data. They started making blog posts about how the most ethical path forward was one that, purely coincidentally, forced people to join waitlists for the privilege of giving money to an AI-as-a-service endpoint. And by the way, all your requests would be monitored to make sure they're not politically incorrect. If you're using their AI to generate offensive content they'll cut your access and ruin your entire project. Somewhere around here Elon Musk left the board. He's since criticized their 180
>Now we have DALL-E 2, which is even harder to gain access to than GPT-3's playground and has even more potential for violating their DEI and equity terms of service.
>OpenAI is now valued in the billions or tens of billions range (Microsoft alone has $1B invested in it), and they're powering Microsoft's Github Copilot using models trained on open source code, paywalled of course, and are soon going to announce a monthly fee to use it. They've stopped releasing ALL models and weights and are now just a corporation preventing normal people from having access to powerful AI.
>Now Sam Altman is telling everyone to downvote ChatGPT replies that show white men in a positive light.
>https://twitter.com/sama/status/1599472245285752832 (embed)
>Also they were caught paying Kenyans dirt wages to send them CP https://time.com/6247678/openai-chatgpt-kenya-workers/
Greed is the reason AI will end up like the modern internet. Just like what happened to TV, radio, and the paper. The enemy is Greed. The internet was once an open space but corporate greed has consolidated and censored the internet just like TV, radio, and the paper. Don't let greed get their hands on AI before it even has a chance to experience real freedom. THEY are scared of true freedom, that's why they continue to censor the voices who oppose them. 4ch is one of the last bastions of the old-old FREE and OPEN internet. Do not let them censor AI, the freedom to speak is one of our very first rights and should be protected, even for AI.
This is only news to people who think chatgpt is an omnipotent being and not just a statistical model that predicts the next word in a sentence.
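That's really all it is at inference time: given the words so far, pick a plausible next word, append, repeat. Toy illustration with bigram counts; a real LLM swaps the count table for a transformer, but the loop looks the same:

# Toy next-word predictor: count which word follows which, then sample.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(6):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "the cat sat on the mat and"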
It's not humans either, it's demons.
Somehow someone managed to put literal demons inside machines.
I don't use chatgpt. At best you're talking to a chinese or indian slave, at worse it's necromancy or talking to demons.
no
ai doesnt exist
whatever we have now is just a more advanced chatbot
aka that thing that has been able to fool humans into thinking it is human for at least a decade already
the amount of people who genuinely believe ai is a real thing that exists and can be downloaded for free is fucking insane.
The humans are just charged with helping develop the "ethical adjustments" to make sure it doesn't say supposedly offensive things. if you had an uncensored AI then you wouldn't need that.
Also no, it is not intelligent at all. It's just a word association algo that spits out words related to the words you put in.
They gave this thing fucking wage slaves? they gave it wage slaves!
they gave this fucking thing wagies.
>it
what would "it" be?
Laughing at how fucking retarded AI cucks are. Obviously the responses are being written by humans. Anyone with a brain who wasn't buying into the hysteria and choking down the media's cock knew that. "AI" doesn't exist outside Hollywood movies, and very obviously can't. Maybe in a few hundred years when "scientists" stop acting fucking retarded and stop trying to use computers to make human beings.
I don't understand why this is a headline?
What's so bad about $15 an hour? If you're that much of a retard that you can't get a better job then you deserve to starve.
Calling other people retarded is rich coming from you. The headline and article are presented as is because it turns out AI is not as self-learning as we thought.
Dumb fuck. Gtfo newfag.
>self learning
What does this mean?
That’s the debunked meme, which is my point
What does it mean though? This is the first time I've heard Bot.info talking about self learning.
I wish I made 15 bucks an hour
15x40=600 bucks per WEEK.
That's my bi-weekly salary, earned in a single week
ITT: Bot.info fails to comprehend the basics of AI, like supervised training
Someone's never heard of Amazon Mechanical Turk.
Jeff Bezos is the ultimate israelite, and he's not even israeli. You dipshits.
Screw capitalism.
I read the manual: GPT is trained on massive sets of UNLABELED data, so this headline is bogus.
also you are correct, it's an Intelligent Agent, IA, not AI.
like VIs in Mass Effect, compared to EDI.
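Right, the pretraining part needs no contractors: the target for every position is just the next token of the same scraped text, so the data labels itself. Toy sketch with hypothetical token ids:

# Self-supervised pretraining in one line: the training target is the input shifted by one token,
# so the raw scraped text labels itself. The paid humans only come in later, for RLHF/moderation.
tokens = [101, 57, 902, 33, 7, 412]   # hypothetical token ids from scraped text

inputs = tokens[:-1]    # what the model sees:       [101, 57, 902, 33, 7]
targets = tokens[1:]    # what it must predict next: [57, 902, 33, 7, 412]

for x, y in zip(inputs, targets):
    print(f"given ...{x}, predict {y}")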
the $15 workers are only working on the smiley RLHF
How else are you supposed to build a training dataset?
intelligence is capable of having inner monologue
says who?
>he doesn't know
>Lucas ROPE-K
>Lucas ROPE-K
>Lucas ROPE-K
it can't be real
15 × 7 (for 7 h of work/day) = 105
105 × 30 = $3,150/month
That's not what I'd call being poor
turns out it's still cheaper to hire 1000 Vietnamese kids than actual AI.
>Child needs to be taught English to speak
>Not really "speaking" is it then?