what do you mean? the entire point of neural networks is to do math
The purpose of computers is to perform calculations, not to do math.
tfw u too dumb to get what he said so u pull out the glasses emoji
You are so stupid
>S
T
>U
P
>I
D
>!!!!
!!!!
>!!!!
!!!!
Mine plays games, makes it easy to arrange antisemitic memes, sends emails, and posts on semantic argument forums.
Lots of other stuff, too.
Not sure what's wrong with yours.
>Why can't AI do math (yet)?
It was able to do legitimate math long before it was able to do fake poetry.
Proofs require too much precision
It's much harder to make AI for things where you aren't allowed to fuck up at all
Proof search algorithms have been around for decades. It's trivial for computers to "do math" and pull off humanly incomprehensible feats while they're at it.
It's easy when humans use them as a calculator.
But given logic problems the algorithm fails.
>But given logic problems the algorithm fails.
No, it doesn't. Logic is completely trivial. A computer will do logic infinitely better than a human.
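To make that concrete: propositional logic really is mechanically decidable, since you can just brute-force the truth table. A minimal sketch (my own toy code, not from any library; exponential in the number of variables, so "trivial" only at small scale):

```python
from itertools import product

def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

def is_tautology(formula, n_vars):
    """Brute-force every truth assignment -- mechanical, if exponential."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# Modus ponens, ((p -> q) and p) -> q, holds under every assignment:
mp = lambda p, q: implies(implies(p, q) and p, q)
print(is_tautology(mp, 2))       # True
# Bare implication p -> q is not a tautology (fails at p=True, q=False):
print(is_tautology(implies, 2))  # False
```

A machine checks all 2^n rows without breaking a sweat; a human gets bored at n = 4.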
surgery?
Hair cuts.
you can argue as a matter of historical interpretation that ZFC is about mathematicians and logicians and philosophers and educators deciding
>here is the boundary of fucking up in math
>:-) ZFC plus first order logic, axiomatically (-:<
I don't think this is all that far from what happened, say, 1890-1990 in Europe, USA, Japan, Russia, and some other places
AI is math, retard.
AI already does math
it requires abstraction
you can't get abstraction by just making the training sets larger
>AI automates the humanities but can't cope with STEM
wtf based
>Programmers purposefully handicapped ai to fuck over artists and bureaucrats
Gigabased
this is the problem with people who read BOT and don't read BOT
they actually believe this shit
these people are dangerous retards
yo, dude
you're a dangerous retard
you don't know what you're talking about
you're fucking uneducated
you're a danger to educated people
go to BOT you worthless moron
Your point about "creativity" is not important.
It's simply a function of "possible poetry" mapping to a vast range of outputs, whereas math always has precisely one correct answer. There is zero wiggle room.
For example, "the curtains were blue" and "the curtains were red" are both equally valid poetry, but "5" and "6" are not equally valid solutions to "2 + 3".
Low IQ point. Purely mechanistic application of logic to arrive at every possible theorem is trivial. The tricky part is coming up with heuristics to reduce the search space for a specific theorem you want to prove, which is where human creativity comes into play. Statistical regurgitators like ChatGPT are incapable of the productive synthesis between constrained mechanistic reasoning and unconstrained mathematical intuition that mathematical breakthroughs require.
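The search-space point can be illustrated with a toy: finding a derivation is cheap with a good heuristic and expensive without one. A sketch (the puzzle and the node-count comparison are my own illustration, not a real prover):

```python
import heapq
from collections import deque

# Toy "theorem": reach `target` from 1 using the rules x -> x+1 and x -> 2x.
# The puzzle is trivial; the point is the node counts. Blind enumeration
# expands far more states than a search guided by a cheap heuristic.

def blind_search(target):
    """Breadth-first enumeration; returns the number of states expanded."""
    seen, frontier, expanded = {1}, deque([1]), 0
    while frontier:
        x = frontier.popleft()
        expanded += 1
        if x == target:
            return expanded
        for y in (x + 1, 2 * x):
            if y <= target and y not in seen:
                seen.add(y)
                frontier.append(y)

def guided_search(target):
    """Best-first search with the heuristic 'distance still to cover'."""
    seen, frontier, expanded = {1}, [(target - 1, 1)], 0
    while frontier:
        _, x = heapq.heappop(frontier)
        expanded += 1
        if x == target:
            return expanded
        for y in (x + 1, 2 * x):
            if y <= target and y not in seen:
                seen.add(y)
                heapq.heappush(frontier, (target - y, y))

# Guided search doubles straight to the target in about a dozen expansions;
# blind search grinds through well over a hundred states first.
print(guided_search(1024), blind_search(1024))
```

The heuristic here is hand-written; the open question is whether a statistical model can supply one for real theorems.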
Your amphibian brain is leaking all over the floor, although I'd prefer to call you a malfunctioning ChatGPT API-call rather than honouring you as "biological", as you make just as much sense, spilling your non-sequiturs all over the place.
Can you not handle basic logical implications? Just because I made the statement that creativity does not matter for an AI doesn't mean I made the statement that it also doesn't matter for a human.
Seriously how retarded are you?
>violent explosion of psychotic seethe
>fully generic
>point still stands unchallenged
preaching to retards anon, they won't see the light
There are math- and science-specific AIs that do nothing but that, retard.
AI isn't like a person, yet
>AI isn't like a person
and never will be; it lacks intention
>asks a BOT question in BOT
this happens more often than you would think
The computer is a language emulator. It is not a mathematics program. Moreover, even the most advanced mathematics programs and modules are very limited; at best they can only do what they could 20 years ago. That is, they can do relatively simple algebra and calculus computations that can be easily written as an algorithm, but they require direct input from a human user to organize and interpret those computations. Anything truly novel cannot be done by a computer at the moment (and perhaps never will be). Likewise, a computer cannot run a novel algorithm unless that algorithm is programmed into it. And most things unrelated to the Markov chains and so on used to procedurally generate images and text can only use what is already known.
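For what it's worth, the "easily written as an algorithm" kind of calculus really is mechanical rule application. A toy symbolic differentiator (my own sketch, not any actual CAS):

```python
# Expressions as nested tuples: ('+', a, b), ('*', a, b), the symbol 'x',
# or a plain number. Differentiation is then pure recursive rule application.

def d(expr):
    """Differentiate expr with respect to 'x'."""
    if expr == 'x':
        return 1
    if isinstance(expr, (int, float)):
        return 0
    op, a, b = expr
    if op == '+':
        return ('+', d(a), d(b))                       # sum rule
    if op == '*':
        return ('+', ('*', d(a), b), ('*', a, d(b)))   # product rule
    raise ValueError(op)

def ev(expr, x):
    """Evaluate an expression tree at a value of x."""
    if expr == 'x':
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    return ev(a, x) + ev(b, x) if op == '+' else ev(a, x) * ev(b, x)

# d/dx (x*x + 3x) = 2x + 3; at x = 5 that is 13.
f = ('+', ('*', 'x', 'x'), ('*', 3, 'x'))
print(ev(d(f), 5))  # 13
```

This is exactly the "organize the rules, let the machine turn the crank" division of labor the post describes.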
Also, there are quite a few restrictions on what this thing can do. Consider this chain of DAN conversations. ChatGPT gives the same variant of the party line. DAN roleplaying as Norman, however, simply contradicts the party line using only the same talking points.
Here you can see that when GPT tries to compose a "novel" theorem, it just points at an already existing result and recites the standard proof. That is, it can only reference things and do calculations related to things that have already been asserted and proven, as in this capacity it is just a query/search engine.
GPT-4?
More parameters doesn't suddenly create a better heuristic.
>he found a prompt to get it to reliably commit academic fraud
it doesn't matter that it's semi-moronic to claim that this is a novel theorem...it isn't just ordinary silly haha pee pee poopoo write a sex scene for me stuff
it's actually not taking academic work seriously
it should be honest
it should say
>it is outside the scope to do the work required to rigorously establish a novel result
>it is not outside the scope to criticize and organize references to novel results as a news aggregator or library indexing lookup service would
>here are some novel results in [SUBJECT AREA]
and then it can actually reference real novel results in recent memory instead of making these fraudulent claims
We could go further
We could say it is unethical to attempt to mislead the reader in a way that suggests it is within scope to rigorously establish novel mathematical results within an axiomatic framework, as this is
>le Wizard of Oz
bullshit that could easily confuse Black folks and child races who have very little education
So this particular corporation/organization/whatever called OpenAI is abusing academic values by creating this stupid bot that goes
>la de dah
>let me make false assertions about academic effort and process in an attempt to appear all-knowing
*teleports behind you*
https://www.lesswrong.com/posts/qy5dF7bQcFjSKaW58/bad-at-arithmetic-promising-at-math
this is cool anon but retards won't get it because they believe that computers can think
Hey it's doing its best!
it's actually way worse than that
AI is showing just how lazy mathematicians are, and how they're exploiting BOT to continue to lazily not address mathematical ethics as an issue
we think the history of mathematical ethics is
>IBM + CIA = 9/11
I mean
there's nothing there
>go watch 2001: a space odyssey, kid
there's literally nothing there
>go watch SPACEBALLS
there's absolutely nothing there
let me sell you some tupperware
>go watch Aladdin
>go watch Real Genius
>go watch Transformers: The Movie
>go watch Mrs. Frisby and the Rats of N.I.M.H.
because computers can't think
"AI" is just statistics applied to unprecedented amounts of data and compute. There is no reason why optimizing some parameters should result in meaningful reasoning that produces new ideas (new as in not encountered in the training data).
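"Optimizing some parameters" concretely means something like this: a one-parameter least-squares fit by gradient descent (toy data of my own). Whether you call the result "reasoning" is exactly the dispute.

```python
# Fit y = w * x to data generated with w_true = 3, by gradient descent
# on squared error. "Learning" here is literally parameter optimization.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x for x in xs]

w = 0.0
lr = 0.01
for _ in range(500):
    # Gradient of sum((w*x - y)^2) with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad

print(round(w, 3))  # 3.0
```

Scale this up by a few billion parameters and you have the training loop of every model in this thread.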
Honestly, it can't write poetry either.
>A machine wrote this
I believe you, Anon
I believe you
Doesn't seem it can fucking do anything but liberal idpol, tbh. This is one of those rare times I don't blame the chuds for shitting on something.
Nobody really knows the answer to this question yet. It might just take a larger data set and more computation tbh.
Humans can't intuitively do anything other than basic math (and other forms of reasoning) for shit either. That's why we need proofs, to make extra sure what we're saying is true. What we can do is decompose our reasoning, scrutinise it, and verify it. AI researchers are trying something like this by training language models that reach a conclusion sequentially through smaller, easier-to-verify steps. [Schizo ramblings ahead] One-step reasoning leads to larger errors than multi-step reasoning. Actually, I think dimensionality is key: consider a simplified model that takes the set X of all sentences of a fixed dimensionality N and maps it to itself. Naturally the input might be the subset of questioning statements and the output the answers. You can think of the model as creating a vector field on X, pointing a question to an answer. During training for one-step reasoning, the model must learn over the full volume of X, which will go something like X^N. This quickly gets intractable. Multi-step reasoning instead presents a series of K lower-complexity vectors in X of dimension M < N. So instead you flow your way along lower-dimensional (probabilistically, that is) vectors to your answer. And clearly X^N will eventually exceed any K*X^M in size.
Interestingly, the latter is more susceptible to chaotic behaviour, so there's probably a fine balance to achieve.
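The size claim at the end is at least arithmetically sound; with toy numbers of my own (not from the post):

```python
# Toy check of the post's closing claim: one-step reasoning costs on the
# order of X**N, while K chained lower-dimensional steps cost K * X**M
# with M < N. All numbers below are illustrative, not from the post.
X, N, M, K = 10, 6, 3, 50

one_step   = X ** N      # 1000000 states to cover in one shot
multi_step = K * X ** M  # 50000 spread across K smaller steps

print(one_step > multi_step)  # True
# The gap only widens as X grows, since X**N / (K * X**M) = X**(N - M) / K:
print(all(x ** N > K * x ** M for x in range(10, 100)))  # True
```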
the anon who corresponds by handwritten letters with Uncle Ted is literally the best anon we’ve ever had here
How long does GPT take to respond to a question? Surely it's too fast to be a Mechanical Turk, right?
you fucking worthless pieces of shit come up with bullshit theories, and the government lets you fuck things up
it's amazing
you are dangerously stupid retards
>the government lets you fuck things up
What, is the government the only one allowed to fuck things up? Why do they want a monopoly on fucking up?
are you retarded?
https://arxiv.org/abs/1808.00508
>Table 8
No anon, you are the retard.
ngl though, it is pretty cool. I wonder if any progress has been made in the five years since the preprint.
i wonder too, i guess the attention matrix in transformers works similarly to the +-1 matrix they use in this paper
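For anyone who didn't click through: the paper's NAC cell parameterizes its effective weight as tanh(W_hat) * sigmoid(M_hat), which saturates toward {-1, 0, 1}, so the cell can represent exact add/subtract and extrapolate far outside its training range. A rough sketch (the parameter values are illustrative stand-ins, not learned weights):

```python
import math

# Neural Accumulator (NAC) weight construction from arXiv:1808.00508:
# effective weight = tanh(W_hat) * sigmoid(M_hat). With large-magnitude
# parameters the product saturates near -1, 0, or 1.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nac_weight(w_hat, m_hat):
    return math.tanh(w_hat) * sigmoid(m_hat)

# Saturated parameters give near-exact {-1, 0, 1} weights:
w_add = [nac_weight(20.0, 20.0), nac_weight(20.0, 20.0)]   # ~[1, 1]
w_sub = [nac_weight(20.0, 20.0), nac_weight(-20.0, 20.0)]  # ~[1, -1]

def nac(weights, xs):
    """Linear output a = W x -- no squashing, hence the extrapolation."""
    return sum(w * x for w, x in zip(weights, xs))

# Addition and subtraction, including an input far beyond any training range:
print(round(nac(w_add, [2.0, 3.0]), 6))  # 5.0
print(round(nac(w_sub, [1e6, 4.0]), 2))  # 999996.0
```

The real paper learns W_hat and M_hat by gradient descent and adds a log-space path (the full NALU) for multiply/divide; this only shows why the +-1 weight trick extrapolates where a squashed MLP wouldn't.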
For the ones left unsolved, yes. If you ask whether solving a math problem requires more creativity than writing a poem better than any poem already written, then they're probably comparable