Will ChatGPT 5 prove the Riemann hypothesis?

  1. 7 months ago
    Anonymous

    chatgpt is a search engine, it can't make new thoughts

    • 7 months ago
      Anonymous

      It can make new thoughts but you have to ask for it. By default they cucked it into being your safe unimaginative friend.

      • 7 months ago
        Anonymous

        it's just a program that attaches percentage values to words.
        it can't even think, it just operates schematically like any computer system

        • 7 months ago
          Anonymous

          Yes, and if you ask it for new thoughts it will attach high percentage values to words forming a new thought.
          It's probably how your brain works too.
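
          To make that concrete, here's a toy sketch in Python of what "attaching percentage values to words" means. The words and numbers are completely made up for illustration; this is not how GPT is actually implemented or trained:

            import random

            # made-up "percentage values" for the next word after the prompt "the brain works by"
            next_word_probs = {
                "association": 0.40,
                "prediction": 0.30,
                "electricity": 0.25,
                "magic": 0.05,
            }

            def sample_next_word(probs):
                # pick one word at random, weighted by its probability
                words = list(probs)
                weights = [probs[w] for w in words]
                return random.choices(words, weights=weights, k=1)[0]

            # different runs can produce different continuations from the same probabilities
            print(sample_next_word(next_word_probs))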

          • 7 months ago
            Anonymous

            Brain works by association

          • 7 months ago
            Anonymous

            god you are stupid

            • 7 months ago
              Anonymous

              No amount of insults will convince me. You have to bring arguments. But you can't, obviously. Your best argument is 'it can't think because it's a machine'.

              • 7 months ago
                Anonymous

                it's limited to its dataset, how the frick can it produce something outside of it?
                it can only output or permute what it already has, this is not fricking voodoo, this is a human-made program, it's not going to evolve into anything

              • 7 months ago
                Anonymous

                >it's limited to its dataset, how the frick can it produce something outside of it?
                First, it's easy to check that it can, even if you don't know how. Any programmer who has used chatgpt seriously has been able to make it create original code; it's also capable of modifying and improving code from private repositories.
                Basically it learns high-level abstract patterns that are present in its dataset and applies these patterns to new data. It's the same process humans go through when learning.
                But following your reasoning, how are painters able to create new paintings from a limited number of existing paintings?
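
                For the private-repo claim above, a minimal sketch, assuming the official openai Python client (v1+) and an API key in the environment; the function being "improved" is invented here and has never been published anywhere, so the answer can't be a copy-paste from the internet:

                  from openai import OpenAI

                  client = OpenAI()  # reads OPENAI_API_KEY from the environment

                  # an invented, never-published function standing in for private repo code
                  private_code = '''
                  def frobnicate_invoices(rows):
                      out = []
                      for r in rows:
                          if r[3] == "unpaid" and r[7] > 30:
                              out.append((r[0], r[7] * 1.05))
                      return out
                  '''

                  response = client.chat.completions.create(
                      model="gpt-4",
                      messages=[{
                          "role": "user",
                          "content": "Rewrite this with clear names and type hints:\n" + private_code,
                      }],
                  )
                  print(response.choices[0].message.content)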

              • 7 months ago
                Anonymous

                >original code
                nope.
                it can produce code but not even close to what you describe, everything it makes you can find somewhere on the internet, that's literally how it works

              • 7 months ago
                Anonymous

                Ok, I'm not sure who I'm discussing with, but you're fighting a fact that is widely accepted. I've been using it professionally for 6 months, as have many other professional developers at my job. I guarantee you that most of the code it produces exists nowhere else. The screenshot you're showing proves absolutely nothing other than that it can occasionally copy code from its training data. Do your research, but I won't go on with this conversation because right now it's a waste of time.

              • 7 months ago
                Anonymous

                >widely accepted
                by whom?
                there is a reason no one in the industry plugs chatgpt programs into their systems.
                it needs to be verified and tested, and all i saw from it was broken code templates.
                no wonder you are trying so desperately to escape this discussion, you have nothing left to say. enjoy smelling your own farts i guess

              • 7 months ago
                Anonymous

                Oh boy, you're in for a surprise I guess. Lately I don't meet many people who are still this out of touch with the current capabilities of state-of-the-art LLMs.
                The "broken code template" you posted has nothing to do with chatgpt, it was posted on twitter 6 months before GPT-4 was even released.

              • 7 months ago
                Anonymous

                you are blowing it out of proportion, i played with chatgpt 4 and it's not that impressive, and it's not a big advantage in a professional environment. i doubt you work in software development as you claim,
                because it doesn't really help solve problems involved in complex systems, any failure in synchronization can destroy something, there is a limit to the specification that can be entered into a machine learning so that it understands how to do it correctly.
                for that time and effort it is better to do it yourself, you are a fricking larper

              • 7 months ago
                Anonymous

                >there is a limit to the specification that can be entered into a machine learning
                what does that even mean

              • 7 months ago
                Anonymous

                >what does it mean plugging in code without context on the system/network/services/protocols etc...?
                maybe you are right, this discussion is over. you have exposed yourself as a charlatan and i have no interest in wasting my time on you

              • 7 months ago
                Anonymous

                Just don't use words that you don't understand because you form sentences that are nonsensical. You don't enter anything into a 'machine learning'.

              • 7 months ago
                Anonymous

                so what do you want me to say instead, how would you formulate this?
                insert it into the input prompt that the machine learning uses, is that better?
                fricking have a nice day lmao, larper sack of shit

              • 7 months ago
                Anonymous

                LMAO, is not better than pajeets copy/pasting code from internet and gluing it to the rest of the code base with shit and goo. You will get what you deserve.

              • 7 months ago
                Anonymous

                Also github copilot is not chatgpt, it existed 2 years before chatgpt, it's not even close to being comparable.

              • 7 months ago
                Anonymous

                You are a complete stupid moron.

            • 7 months ago
              Anonymous

              no, you!

        • 7 months ago
          Anonymous

          You're just a bunch of neurons firing electrochemical signals.

      • 7 months ago
        Anonymous

        You are a fricking idiot.

    • 7 months ago
      Anonymous

      It's not a search engine. Being a search engine would be an improvement. It's just a sophisticated autofiller.

    • 7 months ago
      Anonymous

      It's not a search engine. wtf are you even doing on this board?

  2. 7 months ago
    Anonymous

    Probably. Remember, right now is the worst it'll ever be.

  3. 7 months ago
    Anonymous

    How do I make it evil

  4. 7 months ago
    Anonymous

    Tooker already disproved it.

  5. 7 months ago
    Anonymous
  6. 7 months ago
    Anonymous
  7. 7 months ago
    Anonymous
  8. 7 months ago
    Anonymous
  9. 7 months ago
    Anonymous

    It ain't gonna prove no nothing

    • 7 months ago
      Anonymous

      Its logic is sound. Its conclusion is wrong, but the logic is sound.

    • 7 months ago
      Anonymous

      The last line is the real kicker. If a real person said some shit like this you would know they were trolling, but as a chatbot it is just moronic.

  10. 7 months ago
    Anonymous

    If it's just a bigger GPT-4 then no, it won't.
    LLMs as they exist now cannot do this kind of thing.
    I do think AIs will eventually be able to do this kind of stuff, but nothing we have now can.

  11. 7 months ago
    Anonymous

    ChatGPT is a lot smarter than what I imagined AI would be like in the year 2023. However it's also a lot dumber than people think it is. It is excellent at understanding what you as a user want. It is poor at thinking or coming up with original information.

  12. 7 months ago
    Anonymous

    I read every single post ITT and I'm ashamed. Not a single Anon here understands even a little bit about GPT. This board has become the absolute ridiculous bottom of the barrel of BOT.

    • 7 months ago
      Anonymous

      didn't read a single reply but i am interested in your input.

    • 7 months ago
      Anonymous

      I read your post ITT and I'm ashamed.

  13. 7 months ago
    Anonymous

    no
    /thread

  14. 7 months ago
    Anonymous

    Let's just say... all AI-made systems will be one step behind humans because they will always be dependent on updates.

  15. 7 months ago
    Anonymous

    >humans can only think about things that they know about
    >of course, that's logical. It's impossible to create something from nothing. Humans are smart.
    >AI can only think about things that they know about
    >lmao, AI is so dumb it can't even know about things that it doesn't know about

    • 7 months ago
      Anonymous

      Yeah BOT is so moronic about AI. I think they feel threatened

    • 7 months ago
      Anonymous

      >humans can only think about things that they know about
      This is wrong though, humans can construct new things.
      Proof: new inventions, new scientific theories, and works of art are created all the time

  16. 7 months ago
    Anonymous

    I'd be surprised if that piece of shit can even play tic-tac-toe.

  17. 7 months ago
    Anonymous

    What is it with you gays' obsession with the Riemann hypothesis? I've seen so many fricking morons parroting that name. I just know none of you can even tell me what it means. It just registers in your brains as "Complex-Sounding Smart Thing", like you're just a fricking dog reacting to the tone of how a word is used. Fricking subhuman midwits.
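
    For the record, the standard statement (textbook material, not something from this thread): the zeta function is defined for Re(s) > 1 by the series below and extended to the rest of the complex plane by analytic continuation, and the hypothesis says every non-trivial zero sits on the critical line:

    \[
    \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1,
    \]
    \[
    \zeta(\rho) = 0 \text{ with } \rho \text{ non-trivial} \;\Longrightarrow\; \operatorname{Re}(\rho) = \tfrac{1}{2}.
    \]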

  18. 7 months ago
    Anonymous

    LLMs SUCK at real innovation.
    They are great at repeating what is known and small extrapolations, but that is about it.
    No self-directed intelligence.

    • 7 months ago
      Anonymous

      Raw models are better at that.

  19. 7 months ago
    Anonymous

    When will it be released? I'm sure they have it already.

    Also that Google Gemini bullshit, so much PR and it's still not out?

  20. 7 months ago
    Anonymous

    While the GPT series, including possible future iterations like GPT-5 or GPT-6, are extremely powerful models capable of understanding and generating human-like text based on a vast array of topics, they are not specifically designed to solve unsolved mathematical problems. They don’t “create” new mathematics or “discover” new mathematical proofs. They don’t perform symbolic reasoning or formulate new conjectures or proofs in the way a human mathematician does.

    Typically, solving a problem like the Riemann Hypothesis involves creating new mathematics, developing deep insights, and producing rigorous proofs. This process often requires a deep and novel understanding of mathematics, intuition, creativity, and the ability to see connections between seemingly unrelated areas of mathematics.

    While GPT models can assist in exploring mathematical concepts, providing explanations, and potentially aiding in computations or simulations, the discovery or proof of significant new mathematical theorems is likely to be beyond their capabilities, at least as they are currently conceived and designed.

    Of course, the development of artificial intelligence is ongoing, and it's conceivable that future AI models may be developed with enhanced capabilities in mathematical reasoning and proof discovery. However, the creation of an AI capable of solving a problem like the Riemann Hypothesis would represent a significant leap forward in the field of AI and mathematics.

    That said, AI can and does play a role in advancing mathematical research by helping human researchers analyze data, test hypotheses, perform computations, and explore the mathematical landscape. It is an invaluable tool in the mathematician's toolkit, even if it is not (yet) capable of independently making groundbreaking mathematical discoveries.
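
    As a toy illustration of the "perform computations" point, and assuming the Python mpmath library is available, the sketch below numerically locates the first few non-trivial zeros of the zeta function and prints their real and imaginary parts. This merely checks the hypothesis for those zeros; it proves nothing:

      from mpmath import mp, zetazero

      mp.dps = 30  # working precision in decimal places

      # zetazero(n) returns the n-th non-trivial zero of the Riemann zeta function,
      # ordered by increasing positive imaginary part
      for n in range(1, 6):
          rho = zetazero(n)
          print(n, rho.real, rho.imag)  # the real part should print as 0.5 for each zero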
