To be fair, most Facebook users would struggle with that.
Literally no one would struggle with that.
This is what it's like when you're a glass-half-full type of person.
Current AI is trained on people's responses; clearly what it did there is the American level of math and understanding of a problem.
70/2 - 6 = 32 ((70 - 6) / 2 = 32)
In a sense you are correct; for Americans it is never a struggle to get a wrong answer.
Man, yuropoors are really seething already? I guess when you get paid 1/3 of what an American gets paid, you would be jealous.
That's another thing: when it comes to wages, Americans do everything right to negotiate the pay. They do it really well, probably better than everyone else. But when it comes to getting the math right, they're just so dumb: they can't understand a problem and even fuck up the most basic order of operations. No wonder everything STEM is getting outsourced. Americans just can't be bothered to get the simplest maths right, big-boy science problems are beyond them, and I won't even mention real science.
If it was trained on common responses, it should have known this standard question.
This is the power of AI, lol... although it still beats most of humanity.
We're specifically criticising current methods of building LLMs, not "le AI". We have higher standards for discussion on BOT.
67
35
there are two correct answers: 67 or 99
>age is just a number
oy vey
try "let's think step by step"
How can BOT be so fucking dumb as to think that a language model can actually reason?
Language models can reason, they're just currently bad at it.
One of the common issues with AI models is that they're set up to spit out a response after doing a fixed amount of work, so you get "first thought/image that pops into your head" kinds of answers, instead of the model generating responses without outputting them, evaluating them for correctness and quality, looking for problems, making notes, and continuing to improve them until they are suitable for release.
See:
This sometimes works, but it's not consistent
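A minimal sketch of that draft-evaluate-revise loop, in F# to match the code later in the thread. generate, score, and critique are hypothetical stand-ins for model calls, not any real API:

// Stand-ins for a model call, a quality scorer, and a critic (all assumed).
let generate (prompt: string) : string = "draft answer to: " + prompt
let score (draft: string) : float = 0.5
let critique (draft: string) : string = "note: check the arithmetic"

// Keep drafting internally; only release a response once it clears the
// quality bar or the attempt budget runs out.
let rec refine prompt draft attempts =
    if attempts <= 0 || score draft >= 0.9 then draft
    else
        let notes = critique draft
        refine prompt (generate (prompt + "\n" + notes)) (attempts - 1)

let respond prompt = refine prompt (generate prompt) 5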
With your step-by-step it actually "thinks".
I've seen these kinds of problem questions break people's brains so I'm kinda real for it to fuck up
>With your step-by-step it actually "thinks".
Yeah, I know; puzzles aren't worth trying without adding this to the prompt. But it's surprisingly shit even when asked to do it step by step; it usually fails. I'd try this in vanilla GPT-3 but I'm out of creds.
The problem is likely the neural storage of parcels of information from the sentence. You put in one variable object with the age of "6" and then later partially override its value with "70" in your brain (well, I'm assuming the brain does that, hence why some people make this same mistake).
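As a toy illustration of that overwrite mistake (my own sketch of the puzzle's arithmetic, not of how the model stores anything):

// The sister puzzle done both ways. The wrong path clobbers the ages
// and reapplies "half"; the right path keeps the fixed 3-year gap.
let myAgeThen = 6
let sisterAgeThen = myAgeThen / 2                   // she was 3
let myAgeNow = 70
let wrong = (myAgeNow - myAgeThen) / 2              // 32: "half" applied to the wrong thing
let right = myAgeNow - (myAgeThen - sisterAgeThen)  // 67: the gap never changes
printfn "wrong: %d, right: %d" wrong right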
It worries me that AI may be as retarded as people in the future.
>It worries me that AI may be as retarded as people in the future.
I think that would be hilarious.
Cause then that would mean they'll do as bad a job of taking over the world as the Illuminati.
Actually it means the hive really will kill us all, AI or flesh-insect.
Flesh-insects?
I haven't been keeping up with the newest conspiracies
That's you, anon.
You are the flesh insect.
And I am your God.
Noooo!!!
Kinda pumped for that new game, not gonna lie.
>so I'm kinda real for it to fuck up
Seems your brain has also broken, buck.
No, it doesn't "think"; it has just memorized such a common question.
Change the numbers and words and you'll have no luck. At all.
I tried; I've regenerated it 10 times so far and it didn't manage a correct response once. Not even by chance.
It rambles on in the same way: since I used "half your age" it's able to do x/2, but nothing more than that. Because it's not intelligent, it's just text generation.
Same success rate as before, gets it right about every fifth try. It's shit, but it's not entirely relying on memory here.
You seem to have a retarded definition of "tailoring". Adding this improves performance on almost all reasoning benchmarks for all LLMs.
1/5 is awful. Like really bad. Not even 51%.
But you didn't change a lot of the words. If you change the wording of the sentence more, the success rate approaches 0.
They sure have, and they're retards
It only worked because you tailored your prompt.
Geez, she's 3 years younger; what is this braindead thesis it wrote?
she's 3 years younger, or she's 45 years older
Umm.. 35?
35 - 6 = 32
Damn, it can't even do the problem wrong correctly.
It done it using Common Core methods.
it has like/dislike scoring?
you know what to do anon
AI SMART.
ChatGPT does not have a clue whether what it's saying is factually correct, even when talking about code. Example: I was trying to build a small server for a simple F# program that just launches scripts. I had a bit of a problem reading the incoming text buffer, which came in on a stream: if I tried to read all of the stream up to the end (stream.ReadToEnd()), the program would hang indefinitely. ChatGPT did the exact same thing I did but added invalid syntax (it inserted breaks even though they don't exist in F#, and messed up the typing of some operations).
Code generated here:
open System
open System.Net
open System.Net.Sockets
open System.IO
open System.Text
let port = 8080
let listenSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
let endPoint = new IPEndPoint(IPAddress.Any, port)
listenSocket.Bind(endPoint)
listenSocket.Listen(10)
printfn "Listening on port %d..." port
while true do
    let clientSocket, clientEndPoint = listenSocket.Accept()  // Accept() returns a Socket, not a tuple
    printfn "Received connection from %A" clientEndPoint
    let clientStream = new NetworkStream(clientSocket)
    let reader = new StreamReader(clientStream)
    let requestLine = reader.ReadLine()
    printfn "Received request: %s" requestLine
    let requestMethod, _, _ = requestLine.Split(' ')           // Split returns an array, not a tuple
    if requestMethod = "POST" then
        let contentLength = 0                                  // not mutable, yet assigned below
        let headers = []                                       // immutable list, yet .Add is called below
        while true do
            let header = reader.ReadLine()
            if header = "" then
                break                                          // F# has no 'break'
            else
                headers.Add(header)
                let headerName, headerValue = header.Split(':')
                if headerName = "Content-Length" then
                    contentLength <- Int32.Parse(headerValue)
        let requestBody = reader.ReadToEnd()
        printfn "Received request body: %s" requestBody
        if requestBody.Contains("proghorn") then
            let response = "HTTP/1.1 200 OK\r\n\r\n"
            clientStream.Write(Encoding.ASCII.GetBytes(response), 0, response.Length)
        else
            let response = "HTTP/1.1 404 Not Found\r\n\r\n"
            clientStream.Write(Encoding.ASCII.GetBytes(response), 0, response.Length)
    else
        let response = "HTTP/1.1 400 Bad Request\r\n\r\n"
        clientStream.Write(Encoding.ASCII.GetBytes(response), 0, response.Length)
    clientSocket.Close()
listenSocket.Close()
Only BOT's fizzbuzzers believe le epic AI will replace programmers.
Stream.ReadToEnd waits for the end of the stream, but the client isn't closing the connection; you need to use the Content-Length header and read into a fixed-size buffer with Stream.Read.
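Something like this, say. A sketch that assumes the headers were already consumed and contentLength was parsed from the Content-Length header; note Stream.Read can return fewer bytes than requested, so it has to be looped:

// Read exactly contentLength bytes into a fixed-size buffer.
let readBody (stream: System.IO.Stream) (contentLength: int) =
    let buffer = Array.zeroCreate<byte> contentLength
    let mutable offset = 0
    while offset < contentLength do
        let n = stream.Read(buffer, offset, contentLength - offset)
        if n = 0 then failwith "connection closed before the body was complete"
        offset <- offset + n
    System.Text.Encoding.ASCII.GetString(buffer)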
That's another issue: the NetworkStream also carries the headers. I considered doing it that way, but F# streams are painfully slow as it is, and reading off the same stream multiple times sounds like hell on earth. I guess it will be a good workaround in the meantime, until I can figure out a more long-term solution.
Oh, and consider using async/await so your program can do something else while waiting for more data; though tbh the streams shouldn't be that slow to begin with.
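For the async part, a minimal sketch using F# async workflows (handleClient is my own stand-in; AsyncRead is the FSharp.Core extension on Stream):

// The read yields the thread while waiting for data, so the accept loop
// can keep serving other clients: Async.Start (handleClient clientStream)
let handleClient (stream: System.Net.Sockets.NetworkStream) = async {
    let buffer = Array.zeroCreate<byte> 1024
    let! n = stream.AsyncRead(buffer, 0, 1024)
    printfn "got %d bytes" n
    stream.Close() }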
>content-length header
And what if that's not present or has an invalid value?
If it's not present or the value is obviously invalid, you return 400. If the value is too small, you discard the rest of the transmission and most likely return 400. If the value is too large, you hang for a few seconds and return 408 when it becomes apparent that no more data is coming.
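As a sketch, that decision table in F# (my own framing of the rules above, with statuses as plain strings):

// Map the Content-Length situation to a status line.
let statusFor (contentLength: int option) (bytesReceived: int) =
    match contentLength with
    | None -> "400 Bad Request"                               // header missing
    | Some n when n < 0 -> "400 Bad Request"                  // obviously invalid
    | Some n when bytesReceived > n -> "400 Bad Request"      // declared length too small
    | Some n when bytesReceived < n -> "408 Request Timeout"  // too large: gave up waiting
    | Some _ -> "200 OK"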
Normally you read things like this in blocks of, say, 1 KiB. For HTTP you're looking for a double CRLF to indicate the end of the headers/start of the body. So you would read 1 KiB, parse it, and loop until you have all the headers, then do the same for the body.
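A sketch of that block-reading loop (simplified, ASCII-only, no error handling): read 1 KiB at a time until the double CRLF shows up, and keep whatever trailed it as the start of the body:

// Accumulate reads until "\r\n\r\n" (end of headers) appears.
let readHeaders (stream: System.IO.Stream) =
    let buffer = Array.zeroCreate<byte> 1024
    let mutable data = ""
    while not (data.Contains "\r\n\r\n") do
        let n = stream.Read(buffer, 0, buffer.Length)
        if n = 0 then failwith "stream closed before the headers ended"
        data <- data + System.Text.Encoding.ASCII.GetString(buffer, 0, n)
    let i = data.IndexOf "\r\n\r\n"
    data.Substring(0, i), data.Substring(i + 4)  // (headers, start of body)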
You are dumb for thinking that an autocompletion model is an "AI"
GPT-3 is an auto-completion model. ChatGPT isn't.
Yes it is. They just changed the CSS for the webui.
>You are dumb for thinking that an autocompletion model is an "AI"
Retards on here have been spamming for MONTHS that general artificial intelligence is here and the singularity is happening with GPT, lmao
So - it's only as smart as a paki. If you're having trouble with this basic, BASIC math problem you're a retard.
Because I specifically trained it to give this answer to upset retards.
I do this constantly with very easy puzzles.
You will NEVER stop me.
It's a text-completion model; it's not intelligent, and it cannot do things like math or decryption.
Now I don't blame you, because that's what the fanboys shilled, but now you know.
It's not really a text completion model if it cannot comprehend simple math calculations. Last time I checked, mathematical characters are text too.
Or do computer scientists disagree for some reason?
I know some maths books make this common mistake, differentiating "text" from "numerical characters" when really all alphanumeric characters are "text" in the common tongue.
>It's not really a text completion model if it cannot comprehend simple math calculations.
What do you call it then? I certainly think it's a text-completion model; all GPT variants are. ChatGPT is just tuned to answer questions/requests.
It writes text similar to its training data based on the input. There is no logic processing. Math requires logic.
Most models can answer "what is 2 + 2" because it's such a common question. They cannot reliably answer "what is 823 x 741" (that's 609,843), because it's not asked often enough in the training data.
>I know some maths books make this common mistake, differentiating "text" from "numerical characters" when really all alphanumeric characters are "text" in the common tongue.
Sure they are, but the model cannot process logic at all. It can only say stuff that fits the input.
23
It's not artificial intelligence, it's artificial retardedness
It's definitely perfected human retardation. It's amazing what a lobotomy does to an AI.
>people seething that AI can do their job better than them
>people being racist towards an inert machine
>people coping about its capabilities with pointless semantics like "it's not actually thinking" or "it's not actually creative"
What a time to be alive
ChatGPT got me a score of 90 on a take-home test. Fortunately, it was possible to take the test in English.
But it's the type of course where the grade doesn't matter; it's just important to pass.
It was a test about the history of the Middle East.
School isn't about intelligence.
University
School.
So what?
Yes, school.
I said it just in case, because school could be lots of things.
>University
>Intelligence
pick one.
I did not imply it, see
Now post a test about the history of the gnomish state.
>it fails the test and keeps repeating "it does not exist"
Actually... I think it might do this because of pro-Palestinian stuff, maybe.
>70/2 - 6 = 32
It's bad at logic and math. I can tell this was trained on internet data from my fellow countrymen.
Hahah, was this supposed to take our jobs? Code monkey bros, we'll be eating good for a long time.
But the scary thing is, similar algorithms are already influencing political decision-making, and have been since the '80s.
This isn't a puzzle, it's a retarded semantics "gotcha".
https://openai.com/blog/grade-school-math/
lol, it's all bullshit; ChatGPT gets raped by these problems.