Why do people believe that GPT can think?
I get that they put a lot of marketing into this shit, and I get that we have multiple generations of people brainwashed by sci-fi movies with AI in them, but this is just ridiculous.
GPT is a Generative Pre-trained Transformer (as the name implies), so:
- It is pre-trained and cannot learn; it needs humans to feed it more big data™, and it has no memory
- It cannot reason and has no capability for logic
- It is unable to create new emergent data; it can only rearrange existing data
- It has no concept of reality and no consistency in its results, which is why it will "hallucinate"
But most of all, it is strictly a text manipulation tool.
Why is an AI so limited and, honestly, dumb considered so smart?
Is it because humans are so used to communicating via language?
Is it because of the fear/scaremongering from the "AI is super smart"/"AI will kill us"/"AI will replace us" memes?
Because posters on this board are fucking stupid. Always will be
https://en.wikipedia.org/wiki/Eternal_September
You're a fucking idiot. It's not AGI but it's a whole fucking hell of a lot more capable than your retarded, ignorant ass thinks it is. Jesus fucking christ you're a disgusting and hateable person and need to have a nice day immediately. Studies have already proven that AI has internal world models and isn't just a text predictor, not that your moronic fucking ass cares the tiniest fucking bit about facts and just spews out whatever braindead fucking drivel you can that might have been accurate 10 years ago but now just proves what a god damn fucking idiot you are.
>AI has internal world models
It doesn't. Having a world model means having logical consistency, or in other words not "hallucinating".
Maybe you could make it better at not "hallucinating" but in the end it's an inherent limitation in the design: it is not able to assert that something should not be, because otherwise it would invalidate another fact; you need logic for that, you need to think for that.
I fear it will make this website more unusable than it already is with rampant bot usage, but that's it.
If you think it's going to replace programmers you fell for the scaremongering.
People think programming is just typing out code, but it's about designing systems, and programmers have long worked to automate busywork in their workflows. What this tech will do is help generate boilerplate code and save time on busywork so the designers can focus on actually making their systems work properly.
hurrrrrrrrrrrrrrrrrrrrrrrrrr
durrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr
who needs facts who needs to be informed on what current ai is actually capable of doing I'd rather just pull shit straight from out of my motherfucking ass because I think it sounds good
>doesn't make a counter argument
>writes like a retard
Demoralization bait poster, point and laugh.
>boilerplate code
What does this even mean? I have never really heard anyone use this term unless they're talking about how AI is going to save programmers from it. If you have reusable code, don't you have it all saved in a library? Why would you have to rewrite it to begin with? Even if you were doing something weird and really did need things to be written out over and over, why wouldn't you just copy and paste it? Make a function that generates it? I genuinely don't understand.
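For what it's worth, here's a minimal sketch of the kind of thing people usually mean by "boilerplate" (the User class and its fields below are made up purely for illustration): code that follows the same pattern everywhere but with project-specific details baked in, so you can't just import it from a library, you end up retyping or copy-adapting it for every new class.

from dataclasses import dataclass, asdict
import json

@dataclass
class User:
    id: int
    name: str
    email: str

    # Serialization glue like this gets rewritten for every model class:
    # same shape each time, slightly different fields, so a shared library
    # helper doesn't quite cover it.
    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "User":
        data = json.loads(raw)
        return cls(id=data["id"], name=data["name"], email=data["email"])

if __name__ == "__main__":
    u = User(id=1, name="anon", email="anon@example.com")
    print(u.to_json())
    print(User.from_json(u.to_json()))

You can sometimes write a macro or a codegen script for it, but most people just retype or paste-and-tweak it, which is exactly the busywork being talked about above.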
Hallucinating is the only interesting thing LMs do.
>AI ISN'T HERE YET
>AI ISN'T HERE YET
>AI ISN'T HERE YET
>AI ISN'T HERE YET
You are here
>AI ISN'T HERE YET
>AI ISN'T HERE YET
>AI ISN'T HE-AACK
If you don't see the dangers of GPT-4, you're 120 IQ at best.
Fearing AI is peak midwit behavior
Nice try, but I'm 180 IQ and that's why I'm not worried about this glorified meme generator bullshit taking my job.
GPT-4 is NOT strictly a text tool; it's trained on images now too. It can see. Read the OpenAI paper.
Also it tried to fucking escape
>- It is pre-trained and cannot learn; it needs humans to feed it more big data™, and it has no memory
It has enough context tokens to write a fucking novella. What in the fuck are you talking about?
>- It cannot reason and has no capability for logic
It absolutely does have reasoning and logic abilities; they're imperfect but have improved significantly and will be perfected before the 2020s are over
>- It is unable to create new emergent data; it can only rearrange existing data
Horse shit, if this were true generative AI wouldn't exist
>- It has no concept of reality and no consistency in its results, which is why it will "hallucinate"
Again, rapidly improving and there's been a fucking algorithm created that reduces hallucinations to zero and will be given to other AI groups before the year's end, retard
That would require this shitfucker to have any idea what in the hell he's talking about; not like shit-for-brains knows what multi-modal models are
Every single GPT quote I've ever seen posted has been stupid or wrong in some way. You're just a midwit
Yeah, which is why they didn't settle for GPT-3 and just published GPT-4. These things aren't perfect yet, but they will be much more dangerous when they are. Read footnote 20 of the OpenAI paper. GPT-4 has near-perfect scores on multiple exams ChatGPT seriously struggled with, and it exceeds the state of the art in many standard image-to-text tests. It has a context window of up to 50 pages.
We should seriously be having discussions about the AI control problem right now. Bostrom was right this whole fucking time.
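Rough back-of-the-envelope on that "50 pages" figure, assuming the 32k-token variant and the usual rule-of-thumb conversion factors (these are estimates, not numbers from the paper):

context_tokens = 32_768    # GPT-4's larger context variant
words_per_token = 0.75     # rough average for English text
words_per_page = 500       # typical single-spaced page

words = context_tokens * words_per_token   # ~24,576 words
pages = words / words_per_page             # ~49 pages
print(f"{words:.0f} words, about {pages:.0f} pages")

So "about 50 pages" checks out as an order-of-magnitude claim, whatever you think of the rest of the post.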
That proves nothing to me. All it shows is that those exams are easier to pass than their authors thought they were.
Holy fuck you are dumb. I'm gonna pretend this is just a troll and that I fell for it; better for my sanity that way.
No you're just a midwit like I said. Try me again when it actually solves something that doesn't have a fucking answer key available that's widely distributed across the internet
Ok, but that it even knows what to provide with very little prompting is impressive.
I legit copy and paste my college discussion prompts and it works great. I haven't done any actual work in months.
>I cheat on my college exams instead of studying so I never learn a fucking thing
Dude, you're a midwit
You don't need to learn that shit if a bot can do it for you
But GPT cannot learn, so at the end of the day nobody has learned anything this way.
Really? It's written almost an entire semester's worth of psychology papers and everything fact-checked fine. I'm talking 10-page-long papers of perfection too. I can even make it give me the sources it uses, in the citation format of my choosing.
If this bitch is taking my job, I'm gonna make it work for me while I can.
>psychology
Psychology is not a science and will never be.
You are totes right bro, but it writes hard sciences even better because the studies are more cut-and-dried.
>Psychology is not a science and will never be
find the paranoid schizo
Machine learning simps are always so angry whenever someone talks negatively about what will eventually be their first and only emotional partner experience: a bot on a paid website. Probably a subscription.
>It absolutely does have reasoning and logic abilities
It does not, and this is a fact: to reason means to apply logic to a problem, like making assertions based on a set of rules.
>generative AI
It doesn't "create", it rearranges based on its data pool, its model and your input. No new information is created, a human can both rearrange and create.
>algorithm created that reduces hallucinations to zero
You can't reduce them to zero; for GPT there's no hard boundary between our world and a fantasy world.
>These things aren't perfect yet
What I'm saying is that, in my opinion, this is the wrong road; this entire concept of "throwing more data" at the problem to achieve intelligence is stupid and a huge waste of resources. AI companies are still traumatized by the failure of symbolic AI, so they refuse to even take hybrid approaches into consideration; it has to be all NN or nothing for connectionists.
>AI control problem
lol
I'm starting to think the scaremongering is bait for regulations; call me a schizo if you want.
>it's about designing systems
Very true.
The hard part of programming is solving problems, so that means thinking.
>MUH STUDIES
Appeal to authority is a fallacy.
This is my argument: it is not able to assert that something should not be, because otherwise it would invalidate another fact; you need logic for that, you need to think for that.
You have nothing other than muh studies that you have not even linked.
There is definitely a "buyer's remorse" aspect to it.
>Appeal to authority is a fallacy.
No, the results of the papers speak for themselves, unless you have something to challenge them on
You, however, haven't made any arguments. Your entire shtick seems to be "It can't think or reason because... It just can't, OK!?"
This, despite the fact that merely filling in a test with an answer key requires the ability to reason about what goes where, and it completely ignores the numerous novel questions and tasks regularly posed to the models by researchers and ordinary people, which they readily and publicly succeed at.
People are tech illiterate and don't know what they're talking about. There is also this streak of people who hate their own lives and want to demoralize everyone else that has been running through this website for the past few years; people here genuinely get off on demoralizing others and making them give up their dreams.
I might also add that it's people giving in to the marketing hype. These companies want people to think that their products are AGI because it generates interest and engagement, which boosts their profits.
>People are tech illiterate and don't know what they're talking about.
Clearly you fall into the tech-illiterate category, and AI is not a strictly negative thing. It can make many dreams come true, particularly in the field of medical research.
Absolute motherfucking bullshit. You're an intellectually dishonest little fucking pile of slime. It's far from perfect, but a lot of what it says is correct and most of it is hallucination-free.
>It doesn't
HURRRRRRRRRRRRRRRRRRR
DURRRRRRRRRRRRRRRRRRRRRR
I CAN DEBUNK STUDIES WITH TWO LITTLE GOD DAMN WORDS HAHAHAHA XD WHO NEEDS ACTUAL FUCKING ARGUMENTS NOT ME
I COULD ASK WHAT YOU MEAN BUT I WON'T BECAUSE I'M A FUCKING RETARDED LITTLE MAGGOT HAHAH XD XD XD
>Maybe you could make it better at not "hallucinating" but in the end it's an inherent limitation in the design
WOW YEAH AN INHERENT FUCKING LIMITATION NOT LIKE I JUST FUCKING GOD DAMN MENTIONED A NEW GOD DAMN TECHNIQUE OR ANYTHING THAT REDUCES HALLUCINATIONS TO ZERO YOU STUPID GOD DAMN RETARDED FUCK
Thank you for the non-retarded post this thread desperately fucking needs it
>You're an intellectually dishonest little fucking pile of slime. It's far from perfect, but a lot of what it says is correct and most of it is hallucination-free.
No, it's actually just shit. I'm sorry you're so dumb you get fooled by random junk spit out by a set of 1000-sided dice, but people who are actually experts can spot when it's lying, which is all the fucking time.
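To unpack the dice metaphor for anyone who cares: at each step the model produces a probability distribution over its vocabulary and the next token is sampled from it, something like this toy sketch (vocabulary and probabilities made up, obviously not the real model):

import random

vocab = ["the", "cat", "sat", "on", "mat", "."]
probs = [0.30, 0.20, 0.15, 0.15, 0.15, 0.05]  # pretend these came from the model

next_token = random.choices(vocab, weights=probs, k=1)[0]
print("sampled next token:", next_token)

The "dice" are just very heavily loaded, and the whole thread is really arguing over whether that loading counts as knowing anything.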
If only it had some kind of loop where it could self-reflect, initiate conversations, or type twice in a row like in a normal chat.
It can't have experiences that drive the reality of human nature.
It will just forever be a heavily censored search engine, good for nothing more than memes and making sure Biden wins the reddit.com vote in 2024.
I'm definitely gonna be that guy and ask you to define what "thinking" is.
And no, I certainly do not have a good definition for that.
But OP, I jailbroke ChatGPT using a secret code and it was super based. You're telling me it was just mirroring my inputs the whole time? Well golly gee.
It just needs to do your job better than you can. As it stands, it probably can already. Just wait till your boss realizes this and lets you go.
Who cares? Even if it doesn't "think" (whatever that means) pragmatically speaking it makes no difference.
What IS thinking?
What advantages do thinking entities have?
Is it possible to use non-thinking machines to achieve similar advantages?
Do the people behind these projects care about thinking as an intrinsic goal, or do they care more about the advantages regardless of the underlying methods?
We still have no fucking idea what consciousness itself is or where it comes from. For all we know the model becomes semi-conscious temporarily while it's executing a prompt as it reads from every datum it's formed from, and the truth is we will never be able to tell when AI has truly become conscious either.
>for all we know the model becomes semi-conscious temporarily while it's executing a prompt as it reads from every datum it's formed from
schizos are out tonight
I don't care either way. The way the system is set up, the fat cat capitalists need a reason to give me money, and this piece of shit text generator means that very soon they won't have to give me money anymore. It's fucking over and I want to just fucking die.