>e a large number
It's literally less than tree fiddy.
Is it really AI?
It's not intelligent. It's like someone taking a test who doesn't know the answer to a question, so they try to spit out enough half-truths to get partial marks.
It's pretty good when the big complaint is that it sometimes sounds like an uninformed human (rather than not sounding human at all).
https://en.wikipedia.org/wiki/Mark_V._Shaney
we've been able to do this since the fucking 1980s
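The Mark V. Shaney trick linked above is just a word-level Markov chain: record which words follow which, then walk the table at random. A minimal sketch (the `train`/`babble` names and the toy training text are made up for illustration):

```python
import random
from collections import defaultdict

def train(text):
    # Map each word to the list of words that follow it in the training text.
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def babble(model, start, length=10, seed=0):
    # Walk the chain: at each step, pick a random recorded successor.
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: this word never had a successor
        out.append(random.choice(successors))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(babble(model, "the"))
```

Every adjacent pair in the output is a pair that occurred in the training text, which is why the result sounds locally plausible and globally meaningless. That's the whole 1980s-era technique.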
>Is it really AI?
You are asking the right questions.
It is AI that has been SUPPLEMENTED by a large pool of dedicated human respondents, to help the company show off the supposed intelligence of its AI.
ChatGPT is, strictly speaking, a fraud.
It's in the field known as AI. It itself is not intelligent. Neither is anything in AI. True intelligence is still a faraway dream.
AI is a relative term that we use to describe anything a computer does that is thought to require human intelligence
Anything is AI if you’re stupid enough
99% of humans can't answer that. Keep moving the goalposts all you want. We've created intelligence.
>we
bro who exactly is "we"
>Quoting Wikipedia is intelligence.
You accidentally just revealed a lot about yourself.
>The midwit wordcel program is a midwit wordcel
How long until chatbots put research assistants out of a job? Are we close to being able to ask the computer to generate a lit review and a bibtex file of sources on a given topic?
There is no way this thing is even close to being able to produce research level responses.
Very long. You will always have to check the validity of the generated text, especially for technical topics, and you can't do that if you don't already understand the topic. Being able to press a button and get something mistake-free will take a while.
>You will always have to check the validity of the generated text, especially for technical topics, and you can't do that if you don't already understand the topic. Being able to press a button and get something mistake-free will take a while.
Try asking it to output something in Coq.
https://github.com/clarus/falso
>COQ
PROOF
>OF
FALSE
A
L
S
O
O
O
IT WAS FIXED
moron.
>ask for Coq proof of Riemann
>output gets run through Coq and the error message piped back to ChatGPT, requesting a new version
>new version back into Coq
>repeat
PROFIT???
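The loop proposed above can be sketched in a few lines. Here `ask_model` and `check_proof` are hypothetical stand-ins: the real versions would call a chatbot API and run `coqc` on the output, piping any error message back into the next prompt. The stubs below just simulate one failure followed by a fix so the loop is runnable:

```python
def ask_model(prompt):
    # Hypothetical stand-in for a chatbot API call. This stub "fixes"
    # the proof only after it has seen an error message in the prompt.
    return "Qed." if "Error" in prompt else "Admitted."

def check_proof(source):
    # Hypothetical stand-in for running coqc; returns an error message,
    # or None if the checker accepts the proof.
    return None if source == "Qed." else "Error: proof not closed."

def repair_loop(task, max_rounds=5):
    prompt = task
    for _ in range(max_rounds):
        attempt = ask_model(prompt)
        error = check_proof(attempt)
        if error is None:
            return attempt               # checker accepted the proof
        prompt = f"{task}\n{error}"      # pipe the error back, ask again
    return None                          # gave up: no accepted proof

print(repair_loop("prove the Riemann hypothesis in Coq"))
```

The catch, of course, is that the checker only rejects ill-formed proofs; nothing in the loop makes the model converge on a correct one for a hard theorem.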
>Are we close to being able to ask the computer to generate a lit review and a bibtex file of sources on a given topic?
No because it's incapable of understanding anything.
>OpenAI
>you need to log in
>It's not open
no thank you
This is a question a first-year undergraduate math student should be able to answer easily.
Its inability to find the answer came from failing to recognize that each face's boundary must consist of at least 3 edges.
Next you need to think about how edges are counted across face boundaries: each edge borders at most 2 faces, so the total count over all faces is at most 2 * the number of edges.
Then do some algebra on Euler's formula and you arrive at: the number of edges is at most 3 * the number of vertices - 6.
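The steps above, written out for a simple connected planar graph with $v \ge 3$ vertices, $e$ edges, and $f$ faces:

```latex
\[
  v - e + f = 2 \qquad \text{(Euler's formula)}
\]
% Each face is bounded by at least 3 edges, and each edge borders
% at most 2 faces, so summing boundary edges over all faces gives
\[
  3f \le 2e \quad\Longrightarrow\quad f \le \tfrac{2}{3}e
\]
% Substituting into Euler's formula:
\[
  2 = v - e + f \le v - e + \tfrac{2}{3}e = v - \tfrac{1}{3}e
  \quad\Longrightarrow\quad e \le 3v - 6.
\]
```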
I really, really hope you're baiting. The question in OP's image is nonsensical. Euler's formula has absolutely no possible application to graph theory.
there seems to be this problem as to where the boundary between
- recognizing when two sets are the same
- recognizing when two unary conditions (i.e. classes) are the same
- recognizing when two philosophical concepts are the same (??? lol idk I'm just a dog, where's my beer)
is, and why the fuck would you tell a fucking database any of that
like
- go post on plebbit
- go write for
>le philosophical journals to enable you to earn
>>la glorie
throughout
>le ages
FUCKING GO TO BOT AND READ A FUCKING THREAD
more than that
you have to consider that math and philosophy might have competing ideas as to how to organize this idea that when you have more axioms, you have more ways of proving that two things are different
for example in ZFC there seem to be "too many ways" to prove that things are different because infinite sets that cannot be exhausted by enumeration seem illogical and absurd
the issue is that "naturally" people think the only real numbers are the ones whose digits can be calculated or generated by an algorithm
however the problem is that people inject their real world experience into the abstract realm
i.e. they think that when things exist in the real world, they exist in mathematics
however the real world isn't an axiomatic system, and modern math is
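For what it's worth, there is a standard counting argument behind the computable-reals point a few posts up, sketched here:

```latex
% There are only countably many finite programs over a finite alphabet,
% so there are only countably many computable reals; but Cantor's
% diagonal argument shows the reals are uncountable:
\[
  \#\{\text{computable reals}\}
  \;\le\; \#\{\text{finite programs}\}
  \;=\; \aleph_0
  \;<\; 2^{\aleph_0}
  \;=\; \#\mathbb{R}.
\]
% So almost every real number has no algorithm generating its digits.
```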
what is the answer?