No amount of insults will convince me. You have to bring arguments. But you can't, obviously. Your best argument is 'it can't think because it's a machine'.
2 months ago
Anonymous
it's limited to its dataset, how the fuck can it produce something outside of it?
it can only output or permute what it already has, this is not fucking voodoo, this is a human-made program, it's not going to evolve into anything
2 months ago
Anonymous
>it's limited to its dataset, how the fuck can it produce something outside of it?
First, it's easy to check that it can, even if you don't know how. Any programmer who has used chatgpt seriously has been able to make it create original code; it's also capable of modifying and improving code from private repositories.
Basically it learns high-level abstract patterns that are present in its dataset and applies these patterns to new data. It's the same process humans go through when learning.
But following your reasoning, how are painters able to create new paintings from a limited number of existing paintings?
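To make that concrete, here's a toy sketch in plain Python (the corpus and every name in it are made up for illustration). Even a dumb character-level bigram model trained on four strings will emit strings that appear nowhere in its training data, because it recombines the patterns it learned. A transformer does something vastly more abstract, but the "it can only output what it already has" argument already fails at this level.

    import random

    # Toy character-level bigram model. The training corpus is invented
    # purely for illustration.
    corpus = ["the cat sat", "the dog sat", "a cat ran", "a dog ran"]

    # Count which character can follow which.
    transitions = {}
    for text in corpus:
        for a, b in zip(text, text[1:]):
            transitions.setdefault(a, []).append(b)

    def generate(start="t", steps=11):
        out = [start]
        for _ in range(steps):
            followers = transitions.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return "".join(out)

    sample = generate()
    print(sample)            # e.g. "the dog ran" -- recombined learned patterns
    print(sample in corpus)  # often False: a string the model never saw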
2 months ago
Anonymous
>original code
nope.
it can produce code, but nothing close to what you describe; everything it makes you can find somewhere on the internet, that's literally how it works
2 months ago
Anonymous
Ok, I'm not sure who I'm talking to, but you're fighting a fact that is widely accepted. I've been using it professionally for 6 months, like many other professional developers at my job. I guarantee you that most of the code it produces exists nowhere else. The screenshot you're showing proves absolutely nothing other than that it can occasionally copy code from its training data. Do your research, but I won't go on with this conversation because right now it's a waste of time.
2 months ago
Anonymous
>widely accepted
by who?
there is a reason no one in the industry plugs chatgpt programs into their systems.
everything needs to be verified and tested, and all i saw from it was broken code templates.
no wonder you are trying so desperately to escape this discussion, you have nothing left to say. enjoy smelling your own farts i guess
2 months ago
Anonymous
Oh boy, you're in for a surprise I guess. Lately I don't meet many people who are still this out of touch with the current capabilities of state-of-the-art LLMs.
The "broken code template" you posted has nothing to do with chatgpt; it was posted on twitter 6 months before GPT-4 was even released.
2 months ago
Anonymous
you are blowing it out of proportion, i played with chatgpt4 and it's not that impressive, and it's not a big advantage in a professional environment. i doubt you work in software development as you claim
because it doesn't really help to solve problems that are involved in complex systems, any failure in synchronization can destroy something, there is a limit to the specification that can be entered into a machine learning so that it understands how to do it correctly.
with that time and effort it is better to do it yourself, you are a fucking larper
2 months ago
Anonymous
>there is a limit to the specification that can be entered into a machine learning
what does that even mean
2 months ago
Anonymous
>what does it mean plugging in code without context on the system/network/services/protocols etc...?
maybe you are right, this discussion is over. you have exposed yourself as a charlatan and i have no interest in wasting my time on you
2 months ago
Anonymous
Just don't use words that you don't understand, because you end up forming sentences that are nonsensical. You don't enter anything into a 'machine learning'.
2 months ago
Anonymous
so what do you want me to say instead, how would you formulate this?
'insert into the input prompt that the machine learning uses', is that better?
fucking have a nice day lmao, larper sack of shit
2 months ago
Anonymous
LMAO, it's no better than copy/pasting code from the internet and gluing it to the rest of the codebase with shit and goo. You will get what you deserve.
2 months ago
Anonymous
Also, github copilot is not chatgpt; it launched well over a year before chatgpt, and it's not even close to comparable.
chatgpt is a search engine, it can't make new thoughts
It can make new thoughts, but you have to ask for them. By default they cucked it into being your safe, unimaginative friend.
it's just a program that attaches percentage values to words.
it can't even think, it just operates schematically like any computer system
Yes, and if you ask it for new thoughts it will attach high percentage values to words that form a new thought.
It's probably how your brain works too.
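For what it's worth, "attaching percentage values to words" is roughly the right picture: the model scores every candidate next token, a softmax turns the scores into probabilities, and one token gets sampled. A minimal sketch, with an invented four-word vocabulary and made-up scores (real models use tens of thousands of tokens and learned scores):

    import math, random

    # Softmax over made-up next-token scores, then sample one token.
    vocab  = ["cat", "dog", "idea", "theorem"]
    logits = [2.0, 1.5, 0.3, -1.0]       # invented scores, not model output

    exps  = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]    # the 'percentage values'

    for word, p in zip(vocab, probs):
        print(f"{word:8s} {p:.1%}")

    print("sampled:", random.choices(vocab, weights=probs, k=1)[0])

Whether sampling from a learned distribution counts as "thinking" is the actual disagreement here; the mechanism itself isn't in dispute.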
Brain works by association
god you are stupid
You are a completely stupid moron.
no, you!
You're just a bunch of neurons firing electrochemical signals.
You are a fucking idiot.
It's not a search engine. Being a search engine would be an improvement. It's just a sophisticated autofiller.
It's not a search engine. wtf are you even doing on this board?
Probably, remember right now is the worst it'll ever be.
How do I make it evil
Tooker already disproved it.
It ain't gonna prove no nothing
Its logic is sound. Its conclusion is wrong, but the logic is sound.
The last line is the real kicker. If a real person said some shit like this you would know they were trolling, but coming from a chatbot it is just retarded.
If it's just a bigger GPT-4 then no, it won't.
LLMs as they exist now cannot do this kind of thing.
I do think AIs will eventually be able to do this kind of stuff, but nothing we have now can.
ChatGPT is a lot smarter than I imagined AI would be in the year 2023. However, it's also a lot dumber than people think it is. It is excellent at understanding what you as a user want. It is poor at thinking or coming up with original information.
I read every single post ITT and I'm ashamed. Not a single Anon here understands even a little bit about GPT. This board has become the absolute bottom of the barrel of Bot.info.
didn't read a single reply but i am interested in your input.
I read your post ITT and I'm ashamed.
no
/thread
Let's just say... all AI-made systems will be one step behind humans because they will always be dependent on updates.
>humans can only think about things that they know about
>of course, that's logical. It's impossible to create something from nothing. Humans are smart.
>AI can only think about things that they know about
>lmao, AI is so dumb it can't even know about things that it doesn't know about
Yeah Bot.info is so moronic about AI. I think they feel threatened
>can only think about things that they know about
This is wrong though, humans can construct new things.
Proof: new inventions, new scientific theories, and works of art are created all the time
I'd be surprised if that piece of shit can even play tic-tac-toe.
What is it with you anons' obsession with the Riemann hypothesis? I've seen so many fucking retards parroting that name. I just know none of you can even tell me what it means. It just registers in your brains as "Complex-Sounding Smart Thing", like a dog reacting to the tone of how a word is used. Fucking midwits.
LLMs SUCK at real innovation.
They are great at repeating what is known and at small extrapolations, but that is about it.
No self-directed intelligence.
Raw models are better at that.
When will it be released? I'm sure they have it already.
Also, that google gemini bullshit: so much PR and it's still not out?
While the GPT series, including possible future iterations like GPT-5 or GPT-6, consists of extremely powerful models capable of understanding and generating human-like text on a vast array of topics, these models are not specifically designed to solve unsolved mathematical problems. They don't "create" new mathematics or "discover" new mathematical proofs. They don't perform symbolic reasoning or formulate new conjectures or proofs the way a human mathematician does.
Typically, solving a problem like the Riemann Hypothesis involves creating new mathematics, developing deep insights, and producing rigorous proofs. This process often requires a deep and novel understanding of mathematics, intuition, creativity, and the ability to see connections between seemingly unrelated areas of mathematics.
While GPT models can assist in exploring mathematical concepts, providing explanations, and potentially aiding in computations or simulations, the discovery or proof of significant new mathematical theorems is likely to be beyond their capabilities, at least as they are currently conceived and designed.
Of course, the development of artificial intelligence is ongoing, and it's conceivable that future AI models may be developed with enhanced capabilities in mathematical reasoning and proof discovery. However, the creation of an AI capable of solving a problem like the Riemann Hypothesis would represent a significant leap forward in the field of AI and mathematics.
That said, AI can and does play a role in advancing mathematical research by helping human researchers analyze data, test hypotheses, perform computations, and explore the mathematical landscape. It is an invaluable tool in the mathematician's toolkit, even if it is not (yet) capable of independently making groundbreaking mathematical discoveries.
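On that last point, a small illustration of "aiding in computations": numerically locating nontrivial zeros of the Riemann zeta function is routine machine work today. The sketch below uses the real mpmath library (its zetazero function returns the nth zero on the critical line); the precision and loop bound are arbitrary choices for the demo. Verifying a handful of zeros is, of course, nothing like proving the hypothesis.

    from mpmath import mp, zetazero

    # Locate the first few nontrivial zeros of the Riemann zeta function.
    # Every zero computed so far has real part 1/2 (the critical line); the
    # Riemann Hypothesis asserts this holds for all of them.
    mp.dps = 20              # 20 decimal places of working precision

    for n in range(1, 6):    # first five zeros; the bound is arbitrary
        rho = zetazero(n)
        print("zero", n, ":", rho)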