You stupid
You moron
I view this AI shit as stupid until it can help me write my first literotica hit piece. Fricking leftoids gatekeeping shit.
you can self host an uncensored model idiot
look up gpt4all
>relying on corporate AI
major skill issue detected.
>Why doesn't my French textbook tell me the solutions for my math homework!
Science.
funny
it is intentionally answering wrong so you would think it's inferior
hello altman
>I was only pretending to be moronic
Was this AI trained on us?
> not using ChatGPT 4
ngmi
>você ("you" in Portuguese)
Bad seed.
>pajeet tool
>uses pajeet math
don't see an issue.
works on my machine
ask it how it got to this answer
ask it if it can prove its answer with raw math laws
https://chat.openai.com/share/58de0234-3f36-4ba1-806e-14537b47d193
it's using existing proofs, how disappointing
>this AI only solved the problem in an eloquent and coherent way instead of inventing a new way to solve the problem
cope grows ever larger; these models were never intended to do anything but replicate anywho
The utopia that could have been
>no american car brands exist
the best reality
But also no mass-produced vehicles in general, because the Ford Model T was not a thing
AI is a better shitposter than most losers on here.
The Internet was invented in France actually. It was also invented in America, and the US version won out for public adoption, but if ARPANET didn't exist, the French version probably would've become globally used.
I can't be bothered to fact-check any others but off the top of my head at least that is false.
Oh wait, duh, computers are also a British invention. It would be more accurate to say that the transistor is an American invention, I suppose, so then we'd just be using vacuum tubes or something.
What I really don't get is how it can write syntactically perfect code, or even explain it to you, but fails utterly at basic maths problems.
>can write syntactically perfect code, or even explain it to you, but fails utterly at basic maths problems
literally me
Because it's an algorithm that spits out statistically likely responses based on its training data with no actual understanding of the content.
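The "statistically likely responses" point can be sketched in miniature. This is a toy bigram model over a made-up corpus (nothing like a real transformer, which works over subword tokens with learned weights), but it shows how "pick the most frequent next token" produces fluent-looking text with zero arithmetic happening anywhere:

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts over a tiny invented corpus.
corpus = "two plus two is four . two plus three is four .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    # Greedily pick the most frequent follower seen in "training".
    return bigrams[prev].most_common(1)[0][0]

def generate(start, n=4):
    out = [start]
    for _ in range(n):
        out.append(next_token(out[-1]))
    return " ".join(out)

# "is" was always followed by "four" in the corpus, so the model
# confidently continues with "four" regardless of the actual sum.
print(generate("three"))  # -> "three is four . two"
```

Fluent, grammatical, and wrong: the model reproduces the statistics of its data, it never computes anything.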
Ok let me rephrase. It's weird how the algorithm has a statistically perfect representation of programming language syntaxes, such that it can encode correct behaviour and even reverse-engineer code into a natural language explanation, and yet what it determines to be the statistically most likely response to trivial maths questions is often so egregiously wrong.
It's good at things it has a lot of high-quality training data for, and GitHub/Stack Overflow exist.
Plus the set of problems that exist in code is small compared to the effectively infinite number of potential math problems. It's unlikely that any non-trivial math problem you have exists in its training data, whereas 99.5% of what you do in code probably exists in a million different Stack Overflow threads.
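The coverage argument is easy to make concrete with a back-of-the-envelope count (illustrative numbers only, not a claim about any model's actual training set): even restricted to multiplying two 6-digit numbers, there are about 4×10^11 distinct problems, so almost none of them can literally appear in training text.

```python
# How many distinct "a * b" problems exist for n-digit operands?
# Counting unordered pairs {a, b} where both operands have n digits.
def n_digit_products(n):
    k = 9 * 10 ** (n - 1)      # how many n-digit numbers there are
    return k * (k + 1) // 2    # unordered pairs, a*b == b*a

for n in (2, 4, 6):
    print(n, n_digit_products(n))
# 2 -> 4,095 problems; 4 -> ~4.05e7; 6 -> ~4.05e11
```

Memorizing the first set is trivial; the last one is far larger than any plausible corpus, which is why pattern-matching alone runs out of road on arithmetic.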
>doesn't even say what model is being used
you could be using some 2.7B model, homosexual