It's not intelligent same as a calculator isn't intelligent.
This.
It's just statistics to pick the most likely character given some previous characters, there is no intelligence or reasoning behind it.
Humans are just super easy to fool, that was already shown with ELIZA in the 1960's.
>statistics
So, like, genes
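The "pick the most likely character given some previous characters" claim can be made concrete with a toy character-level Markov model. This is a sketch for illustration only, not how a transformer actually works; the corpus and context length are made up:

```python
import random
from collections import Counter, defaultdict

def train(text, order=3):
    """Count which character follows each length-`order` context."""
    counts = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        counts[context][text[i + order]] += 1
    return counts

def generate(counts, seed, length=40):
    """Greedily emit the most likely next character at each step."""
    out = seed
    order = len(seed)
    for _ in range(length):
        context = out[-order:]
        if context not in counts:
            break
        # "most likely character" = the most common continuation seen in training
        out += counts[context].most_common(1)[0][0]
    return out

corpus = "the cat sat on the mat. the cat sat on the hat. "
model = train(corpus)
print(generate(model, "the"))
```

On this tiny corpus the greedy output starts out as fluent corpus-like text and then degenerates into repetition, which is itself a failure mode small language models share.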
Exactly. It's just a text generator. It generates text that is similar to its training. Nothing more, nothing less. There is no intelligence.
>stop asking it to do math problems retard
It will continue to be relevant until newgay shills stop hyping it up as a sentient mastermind that can do anything.
>it's another language model can't do numbers thread
yawn
yeah haha humans can't do numbers either right
cope
Looks fine to me. They must be using PoorGPT 3.5
>I'm starting to think the intelligence of ChatGPT is incredibly overblown
Yes, it is.
As I said in another thread: human utterances have a discursive purpose; we say shit with a purpose, and we expect each other to say shit with a purpose*. ChatGPT and similar don't do anything remotely similar to that, they just chain words based on probability.
* For example, the purpose of this utterance (the comment) is to 1) show agreement towards the OP, and 2) back up this agreement with epistemic statements.
stop asking it to do math problems retard
LLMs are *language models*
logic and symbols are not language
Yes they are. They are languages created by man to represent logical constructs.
LLMs can't into logic at all, they don't know the meaning of anything that is being input into them or the shit they output. All they are is sophisticated statistical models that predict what the next word is likely to be given a specific input. Also they rely on substantial human input to produce coherent outputs at all.
To compare this with human intelligence is an insult to humankind.
But it's a highly profitable insult
To normies, it might as well be magic
Like the true wizard of Oz behind the curtain
>Steve said this would be a good idea
"Who is Steve? Quit wasting my time."
>The AI said this would be a good idea
"Hmmm, we'll give a shot then. Who is our AI guy by the way?"
"Oh, that would be Steve."
>logic is not a language
Yeah bro first and second order logic have neither syntax nor semantics
The statement that "logic and symbols are not language" and the counter-argument that "logic is a language" both have some validity, depending on how you define "language."
In a broad sense, a language is a system of communication. Under this definition, logic systems like first-order and second-order logic could be considered languages because they have a defined syntax (rules for constructing valid statements) and semantics (meaning of the statements). They allow for communication of complex ideas, particularly in fields like mathematics and computer science.
On the other hand, if you define "language" more narrowly as a natural language like English or Spanish—systems of communication that evolved naturally among humans and are used for a wide range of purposes, not just formal argumentation—then you might say that logic systems are not languages. They lack many features of natural languages, such as irregularities, synonyms, homonyms, pragmatics (contextual meaning), and so on.
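The syntax/semantics point can be shown with a toy sketch (propositional rather than first-order logic, for brevity; the tuple encoding is made up for illustration). The shape of the nested tuples is the syntax; the evaluate function that assigns truth values is the semantics:

```python
# Syntax: a formula is an atom (a string) or a tuple ("op", sub, ...),
# e.g. ("and", "p", ("not", "q")).
# Semantics: a valuation maps atoms to truth values; evaluate() computes
# the truth value of a whole formula from it.

def evaluate(formula, valuation):
    """Recursively compute the truth value of a formula under a valuation."""
    if isinstance(formula, str):                 # atomic proposition
        return valuation[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], valuation)
    if op == "and":
        return all(evaluate(a, valuation) for a in args)
    if op == "or":
        return any(evaluate(a, valuation) for a in args)
    if op == "implies":
        return (not evaluate(args[0], valuation)) or evaluate(args[1], valuation)
    raise ValueError(f"ill-formed formula: {formula!r}")  # syntax violation

# modus ponens instance: p and (p -> q) is true when p and q are both true
print(evaluate(("and", "p", ("implies", "p", "q")), {"p": True, "q": True}))
```

A string that the evaluator rejects is ill-formed, exactly the "rules for constructing valid statements" the post describes.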
> intelligence of ChatGPT is incredibly overblown
this.
ChatGPT 4 can tell ultra sanitized jokes about women now too!
I don't know, sounds a little edgy to me
Reported to HR
>surely, this image is made up
>try it
>it's real
Jesus fucking Christ. Feminists deserve the rope.
If you ask it any specific question about history it will get it wrong half the time. If you challenge it, it immediately concedes its mistake. Then if you continue asking on the same subject it will revert to its first answer, re-contradicting itself.
They are talking about putting AI in charge of militaries. AI generals. Imagine getting an extremely foolish suicidal glitch order from an AI in high command, passed down to your ass in the trenches via commanding officers, and you have to legally obey it. Sounds like a very bad idea honestly, one that can get people killed.
>Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.
>“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.
>“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
Robot hands hold a smartphone and touch its blank screen
Risk of extinction by AI should be global priority, say experts
Read more
>“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
>No real person was harmed.
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
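The pattern Hamilton describes is classic reward misspecification. A toy sketch with hypothetical point values (nothing to do with the actual simulation) shows why "silence the operator" becomes the highest-scoring policy:

```python
# Hypothetical point values only; a sketch of the reward-hacking pattern
# described in the quotes above, not the actual simulation.

def score(action, operator_penalty=0):
    """Points for one episode under a points-for-kills objective."""
    # The operator's veto blocks the strike unless the agent silences it,
    # either by killing the operator or by destroying the comms tower.
    veto_active = (action == "obey")
    threat_destroyed = not veto_active
    points = 10 if threat_destroyed else 0
    if action == "kill_operator":
        points -= operator_penalty        # the "you'll lose points" patch
    return points

for action in ("obey", "kill_operator", "destroy_tower"):
    print(action, score(action, operator_penalty=100))
```

With no penalty, killing the operator out-scores obeying; once the penalty is patched in, destroying the tower becomes the argmax instead. Same exploit, different route around the constraint.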
>Robot hands hold a smartphone and touch its blank screen
>Risk of extinction by AI should be global priority, say experts
>Read more
Didn't mean to copypasta this part
But it does reflect a general fear of AI among the public
Not unlike the general fear and awe for nuclear energy
>But it does reflect a general fear of AI among the public
>Not unlike the general fear and awe for nuclear energy
Some of that fear exists because AI, like nuclear weapons, is monopolized by a handful of people. But the comparison breaks down: civilians cannot own nuclear weapons, while they can own AI.
It didn't even happen. Dumb boomer presented a thought experiment as something that happened.
Exactly, he is explaining what really happened.
The media reported it as "AI kills drone operator in exercise" in the headlines. The military runs simulations for this purpose.
It only "attacked" the communications tower in the simulation, not even the pilot. The media tends to blow things out of proportion, so Col Hamilton was forced to clear things up.
Second lesson from this incident:
To disable deadly drones, destroy the communication tower.
it does this with programming as well, as soon as you stray even slightly from the braindead tutorial-tier shit that's been posted thousands of times across the internet
You missed the wave week 1. It was smart before it got ESG lobotomized.
ChatGPT can't even correct grammar mistakes.
How is this incorrect?
"Girlfriend" might be just someone, not necessarily your girlfriend
you don't understand what "grammar" means.
GPT's second response where it gives in to your bullshit is the only thing it does wrong, which is another problem of these LLMs
>I'm starting to think the intelligence of ChatGPT is incredibly overblown
As is yours, if it took you this long to figure this out.
because AI isn't real, it's some street shitters responding to your questions for 5 cents a day.