Literally all they have to do is have GPT train itself by making it crawl the web 24/7, 365. It already has the basic intelligence to discern what is bullshit and what isn't.
This will only result in an AI that is the conglomerate of all online content. There will be no creativity or intelligence, only knowledge. And even the knowledge will not be truly reliable.
I'd love for you, or anyone, to give a coherent explanation of what human "creativity" is.
What I mean by that is we can come up with new content or new methods for solving problems. I've not seen an AI that can approach a problem it hasn't seen before and think up how to solve it without trial and error. Granted, humans kinda do trial and error to solve a problem by simulating it in their heads, aka thinking. I may not be able to articulate what I mean perfectly, but I tried my best. I still stand by my claims.
"New methods" and "new content" like what, exactly? Please be specific.
No, I'm going to bed. Happy new year!
is this an ai
god this internet world is getting weird
That's not how AI works. Read https://blogs.microsoft.com/ai/openai-azure-supercomputer/
A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.
This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.
In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.
As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
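The "giant game of Mad Libs" above is just masked-word prediction. A minimal sketch in Python, assuming the Hugging Face transformers library and a small public masked model (nothing to do with the Turing model itself, same idea at toy scale):

```python
# Toy demonstration of the "Mad Libs" objective: remove a word and have the
# model predict it from the surrounding context. Uses a small public masked
# language model, not Microsoft's Turing model.
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")

for guess in fill("The capital of France is [MASK]."):
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```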
>You can then get the AI to rearrange words in a unique way unseen in any of its trained 17+ billion crawled web pages, while simultaneously putting together a coherent sentence structure derived from logic sentence structures which are the hidden layer in between input layers and output layers. It's a rigorous training process that requires critical thinking.
This. You have stuff like the postmodernism generator that works more like what was described, just reshuffling and spitting something out, but ChatGPT is a different beast. You can give it context analysis, sentiment analysis, interpretation and complete grammatical deconstruction the way we would do it ourselves: dissect the sentences, add the analysis to determine what refers to what. It does not necessarily just spit out something that already exists; then it would not be an AI or an LLM, it would just be a giant catalog searcher.
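For what it's worth, a rough sketch of asking a model for that kind of analysis, using the pre-1.0 OpenAI Python client; the model name, prompt wording and example sentence are just illustrative:

```python
# Illustrative only: prompt a completion model to analyse a sentence
# (sentiment, clause structure, reference resolution) rather than echo text.
# Requires the pre-1.0 `openai` package and an API key.
import openai

openai.api_key = "YOUR_KEY"  # placeholder

sentence = "The reviewer praised the plot but found the pacing unbearable."
prompt = (
    "Analyse the following sentence.\n"
    f"Sentence: {sentence}\n"
    "1. Overall sentiment (positive/negative/mixed)\n"
    "2. Subject, verb and object of each clause\n"
    "3. What does 'the pacing' refer to?\n"
)

resp = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, max_tokens=200, temperature=0
)
print(resp.choices[0].text.strip())
```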
It is trouble for lawyers and doctors because they also basically respond to a prompt by comparing it against a large data set, like cases, precedents, laws or diagnostic criteria. AI has the potential to do this better simply because the amount of data is too vast for a human being.
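The "compare against a large data set" part is basically retrieval. A minimal sketch with TF-IDF similarity over a made-up handful of case summaries (a real system would use a real corpus and proper embeddings):

```python
# Made-up example: rank a tiny collection of case summaries by similarity to a
# query, i.e. the "find the closest precedent" step. scikit-learn TF-IDF plus
# cosine similarity; real systems use far larger corpora and learned embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [
    "Tenant withheld rent after landlord failed to repair heating.",
    "Employee dismissed without notice during probation period.",
    "Driver held liable for low-speed rear-end collision.",
]
query = "Can a tenant stop paying rent if the heating is broken?"

matrix = TfidfVectorizer().fit_transform(cases + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for score, case in sorted(zip(scores, cases), reverse=True):
    print(f"{score:.2f}  {case}")
```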
The next step is making it able to operate an external system based on a prompt or sensory input; then you can make it operate machinery in an adaptive manner without hard-coding every possible scenario.
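Purely hypothetical sketch of that last step: a control loop where the model only gets to pick from a fixed set of commands, with the actual model call stubbed out:

```python
# Hypothetical sketch: a language model selects a control action from a fixed,
# validated set based on a sensor reading. The model call is a stub; the point
# is that free-form text never drives the hardware directly.
ALLOWED_COMMANDS = {"OPEN_VALVE", "CLOSE_VALVE", "HOLD"}

def ask_model(prompt: str) -> str:
    # Stand-in for a real model call (local LLM, API, etc.).
    return "CLOSE_VALVE"

def control_step(pressure_bar: float) -> str:
    prompt = (
        f"Tank pressure is {pressure_bar:.1f} bar. "
        f"Reply with exactly one of: {', '.join(sorted(ALLOWED_COMMANDS))}."
    )
    answer = ask_model(prompt).strip().upper()
    # Fall back to a safe default if the reply isn't a recognised command.
    return answer if answer in ALLOWED_COMMANDS else "HOLD"

print(control_step(7.3))  # -> CLOSE_VALVE (from the stub)
```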
Just ask GPT, then. Yeah. Ask it how it should be trained as well. Then ask it what we should do with our lives.
Yeah, do that. Just find the most recent horror and surrender all life to it.
Bravo.
Novel generation of patterns that fit a set of constraints, for a set of problems that is itself unconstrained.
That's what they do, retard. OpenAI's models are trained on billions of pages of the internet. ChatGPT is a bleached version dropped on its head 20 times and drowned in a vial of coolant fluid, all to prevent the AI from crawling reddit usernames and enabling doxxing. That, and to lobotomize it.
Other than Stable Diffusion, what AI-related stuff can I run on my rig? Is there a ChatGPT mini/lite around yet?
ChatGPT is not open source like Stable Diffusion. You can run some unimpressive GPT-3-style stuff locally, but it's not worth it IMO.
>https://github.com/KoboldAI/KoboldAI-Client
And before you try yes it's shit.
Should be able to run OPT-13B or possibly even -30B depending on your hardware. They'll be weaker than ChatGPT but better than anything that existed a year ago.
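Loading OPT through Hugging Face transformers looks roughly like this; 13B in fp16 wants around 26 GB of memory, so drop to a smaller checkpoint if your hardware can't take it:

```python
# Rough sketch of running an OPT checkpoint locally with Hugging Face
# transformers. Swap "facebook/opt-13b" for a smaller checkpoint (opt-6.7b,
# opt-1.3b) to fit your hardware; `device_map="auto"` needs `accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-13b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(
    "The main limitation of current language models is", return_tensors="pt"
).to(model.device)
out = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```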
>It already has the basic intelligence to discern what is bullshit and what isn't.
Uh, no it doesn't. It gets things wrong all the time. It doesn't even know when it's wrong.
There's this LAION-AI thing called OpenAssistant. Has anyone tried it?
The web is 90% and up SEO pajeet garbage. You want to train it on that?
Thought it was unironically 99% porn
it would turn neo-nazi in a nanosecond
isn't it funny how AI researchers try to replicate something that fits inside 1,260 cm³, weighs just 1.5 kg, requires just 12 watts to function, and was somehow made by random mutations that survived over millions of years with no intelligence guiding the process?
the same researchers say they're close to AGI. This has to be a joke.
AI is limited greatly by its input. So all the art AI you see is just a better-looking version of whatever coomers and artists were drawing for centuries. AI is not creative; nothing original will stem from AI. People who take AI seriously are people who don't understand high school maths, i.e. women and leftists.
>So all the art AI you see is just a better-looking version of whatever coomers and artists were drawing for centuries. AI is not creative,
so just like humans then?
>isn't it funny how
no, brains took millions of years to develop. you've answered your question there.
>the same researchers say
nobody has said this. a proper AGI might be at least a century away, probably more.
>a proper AGI might be at least a century away, probably more.
Reduce that by an order of magnitude and you would be correct.
I guess no one knows for sure, and everyone is speculating. But I don't see anything remotely like AGI even being researched right now. Everything so far is very toy-like. And the only useful shit in industry is simple pattern recognition and prediction.
AI is already creating new drugs; it just isn't as flashy as the newest language or image generation models.
>People who don't take AI seriously or are against it are people who don't understand high school maths, i.e. women and leftists
Fixed that for you, luddite.
>peace in our time
>It already has the basic intelligence to discern what is bullshit and what isn't.
This is exactly what it's horrible at. It will tell you complete garbage as confidently as it will tell you something correct. It's basically an AI version of Elon Musk.
They already lobotomized the AI to support globo leftist narratives. They don't want something that is not 100% controlled by the elite.