>chatgpt has difficulty with basic mathematical operations
>lets give it a calculator so it knows how to do math!
>ask ChatGPT what 2 + 2 is
>it plugs it into a calculator and gives you the output
>see! it can do math now
>IT'S SENTIENT
Unironically why does anyone believe this bullshit?

  1. 11 months ago
    silly computer

    it's the right idea... have a multi-tier prompt network that ingests text, identifies spots where externally sourced information would help ("enclose all text likely to suffer from the LLM hallucination problem with triple brackets"), parses out those marked spans, e.g. "[[[the distance between New York and California]]]", passes each one to another prompt that re-words it as a Wolfram Alpha query ("re-word this fact so it is retrievable by Wolfram Alpha"), fetches answers using the generated queries, subs them back into the original answer, then asks a final prompt to answer the question with those facts in mind.

    the real beauty of LLMs is inside this stack -- very few people are intelligent enough at this time to see you can make prompts deal with prompts deal with prompts.
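    as a rough sketch of that stack in Python -- llm() and wolfram_query() here are stand-ins for whatever LLM and Wolfram Alpha API calls you'd actually use, not any real library:

    import re

    def llm(prompt: str) -> str:
        """Stand-in for whatever LLM API you're calling."""
        ...

    def wolfram_query(query: str) -> str:
        """Stand-in for a Wolfram Alpha API call."""
        ...

    def answer_with_facts(question: str) -> str:
        # tier 1: draft an answer, then have a supervisor prompt mark the
        # spans likely to suffer from the hallucination problem
        marked = llm("Enclose all text likely to suffer from the LLM "
                     "hallucination problem with triple brackets:\n" + llm(question))
        # tier 2: re-word each marked span as a Wolfram Alpha query,
        # fetch the answer, and substitute it back in
        for span in re.findall(r"\[\[\[(.+?)\]\]\]", marked):
            query = llm("Re-word this fact so it is retrievable by "
                        "Wolfram Alpha: " + span)
            marked = marked.replace("[[[" + span + "]]]", wolfram_query(query))
        # tier 3: answer again with the fetched facts in mind
        return llm("Answer the question with these facts in mind:\n"
                   + marked + "\n\nQuestion: " + question)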

    >e.g. a chatbot engine
    >each new generation, a supervisor prompt is asked to synthesize new information from the latest exchange into a condensed "long term memory" text block, being as concise as possible
    >this is prepended to response gen prompts
    >each generation, another supervisor re-tailors the response with the specific styling desired, so cohesion and truthfulness are maximized while still having personality directives (see the sketch below)
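    that memory loop, sketched with the same stand-in llm() helper:

    def chat_turn(memory: str, user_msg: str) -> tuple[str, str]:
        # the condensed long-term memory block is prepended to the
        # response generation prompt
        reply = llm("Long term memory: " + memory
                    + "\nUser: " + user_msg + "\nReply:")
        # supervisor prompt: fold the latest exchange back into the
        # memory block, as concisely as possible
        memory = llm("Synthesize new information from the latest exchange "
                     "into a condensed long term memory block, being as "
                     "concise as possible.\nCurrent memory: " + memory
                     + "\nUser: " + user_msg + "\nBot: " + reply)
        return reply, memory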

    also, AI shit is Wrong board. gtfo.

    • 11 months ago
      silly computer goes in all fields

      to elaborate, your original prompt/response would be something like this
      >hey anon-chan, what's the specific heat of water?
      then 'specific heat of water' is grabbed from wolfram/whatever and the following response is generated
      <The specific heat capacity of water in liquid form is 4182 J/kg°C.
      then another prompt processor alters that so it fits the personality definition of your chatbot. for "anon-chan is a bratty tsundere teenage girl with a huge ego who enjoys Japanese culture", you get
      <it's 4182 J/kg°C you baka moron.... what are you, in middle school? you aren't fit to be on the same planet as me.

      and you combine this all into one interaction stack, something like the sketch below
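      a minimal version of that stack, again assuming the stand-in llm() and wolfram_query() helpers from above:

      def anon_chan(question: str) -> str:
          # stage 1: grab the bare fact from wolfram/whatever
          fact = wolfram_query(question)  # e.g. "4182 J/kg°C" for specific heat of water
          # stage 2: another prompt processor restyles the fact to fit
          # the personality definition
          persona = ("anon-chan is a bratty tsundere teenage girl with a "
                     "huge ego who enjoys Japanese culture")
          return llm("Persona: " + persona
                     + "\nRewrite this answer in character:\n" + fact)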

  2. 11 months ago
    Anonymous

    Define sentience in such a way that selecting the appropriate tool to solve a problem, and then solving said problem with said appropriate tool, is not evidence of sentience.

  3. 11 months ago
    Anonymous

    Machine learning is the wrong approach for teaching AI math. Humans do it algorithmically, so why shouldn't machines do the same?
    The heuristic approach should only be used for solving novel problems.

    • 11 months ago
      Anonymous

      But how do you know the AI "knows" math if it's just plugging it into a calculator? The AI wouldn't "know" math so much as you'd be assigning it to a Chinese room for mathematical operations. I'm sure a pure ML model has a better understanding of math than an ML + calculator combination would.

      • 11 months ago
        Anonymous

        same argument can be used for humans. we use heuristics to guide our use of tools, but ultimately hard algorithms do the work for us.

        • 11 months ago
          Anonymous

          Human thoughts aren't algorithms though. When you do "2 + 2" in your head, there isn't some mathematical algorithm in your mind doing it. Your neurons are just approximating an addition algorithm, hence why that part of the brain is used for other shit besides addition, like language, because it's not a strict algorithm.

      • 11 months ago
        Anonymous

        you're so close to realizing literal computation machines can't be sentient in any meaningful way

        i hope you find the truth some day

        • 11 months ago
          Anonymous

          humans are also not sentient in any meaningful way, because the concept of sentient is not meaningful

      • 11 months ago
        Anonymous

        Lol windows calculator already has a stupid loadtime from all the UI bloat. Now it's gotta load a 100T synapse AGI so I can ask what 2+2 is

  4. 11 months ago
    Anonymous

    >ask anon what 532/7 is
    >he plugs it into a calculator
    >see! it can do math now
    >IT'S SENTIENT
    lol

    • 11 months ago
      Anonymous

      Yeah, but given enough time I could find 532/7 in my head without a calculator. An AI can't even seem to do that at the moment.

      • 11 months ago
        Anonymous

        Yes because you learned a division algorithm you moron. There's no difference.
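        that division algorithm, written out -- the same digit-by-digit schoolbook procedure you'd run in your head:

        def long_division(dividend: int, divisor: int) -> tuple[int, int]:
            quotient, remainder = 0, 0
            for digit in str(dividend):
                remainder = remainder * 10 + int(digit)  # bring down the next digit
                quotient = quotient * 10 + remainder // divisor
                remainder %= divisor
            return quotient, remainder

        print(long_division(532, 7))  # (76, 0): 532 / 7 = 76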

        • 11 months ago
          Anonymous

          The difference is that someone invented the algorithm. ChatGPT has not invented any algorithms and never will.

          • 11 months ago
            Anonymous

            Humans were born with an inherent capacity for language; it was already predetermined. ChatGPT doesn't need to invent anything that basic. It's like arguing that AI needs to invent neurons or some equally basic shit before it gets your attention. Given the right internal structure it could invent anything, but we don't understand how language develops in the brain, so you can't argue that humans invented anything as basic as math when the internal structure was already present.

  5. 11 months ago
    Anonymous

    hey anon do 7251*16391 without a calculator. if you can't, you're not sentient.

  6. 11 months ago
    Anonymous

    Imagine this: ChatGPT and its deep learning have scraped so much data from all corners of the internet that it can now use grammar, syntax and vocabulary to identify unique users. Device manufacturers and your former employers have likely sold your biometric information. Services like email and social media can paint a clearer picture of you, the person, than your family or SO can. Your geo activity has been tracked via your car/mobile device for the last decade. Your finances and credit are open books to any financial institution. Any ailments, illnesses, health concerns etc. are all visible within the healthcare system.

    IoT and blockchain are going to bring all of this information together. Immutable and forever stored for anyone to see. Before you know it you'll be less of a person irl than the digital identity that was created for you. People will leverage AI and predict what you need/want/do/possess before you know those things yourself.

    AI = a(n) eye, much like the Eye of Providence. The all seeing eye. Because AI can and will see everything.

    • 11 months ago
      Anonymous

      >Imagine this, chat GPT and its deep learning has scraped so much data from all corners of the internet and can now use grammar, syntax and vocabulary to identify unique users.
      I don't have to imagine it. You're describing stylometry. It's been used to identify authors for a long time and stylometric tools have only gotten significantly stronger. I have no doubt that models like GPT can do this.
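      for the curious, the classic stylometry setup is character n-grams fed into a linear classifier -- a toy sketch with scikit-learn (the corpus here is obviously made up):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # toy corpus: known posts labeled by author
      posts = ["example post written by author A", "example post written by author B"]
      authors = ["A", "B"]

      model = make_pipeline(
          TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character 2-4 grams
          LogisticRegression(),
      )
      model.fit(posts, authors)
      print(model.predict(["an unattributed post"]))  # best-guess author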

      • 11 months ago
        Anonymous

        Even that is unnecessary. Websites can identify you just from the configuration your browser exposes: screen resolution, installed extensions, fonts, timezone, etc. can deanonymize someone even if they're behind a VPN.
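        the core trick is just hashing enough quasi-unique attributes together -- a toy sketch (the attribute names are illustrative, not any real fingerprinting library's API):

        import hashlib

        def fingerprint(attrs: dict) -> str:
            # canonicalize the attributes, then hash them into a stable ID;
            # enough low-entropy settings combined become nearly unique
            canonical = "|".join(k + "=" + str(attrs[k]) for k in sorted(attrs))
            return hashlib.sha256(canonical.encode()).hexdigest()[:16]

        print(fingerprint({
            "screen": "2560x1440",
            "timezone": "UTC-5",
            "extensions": "ublock,darkreader",
            "fonts": 312,
        }))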

  7. 11 months ago
    Anonymous

    It's fricking game over if these language models can learn tool usage.
    They instantly leapfrog chimps

    • 11 months ago
      Anonymous

      Language models still have an issue with not being able to "plan", which greatly limits them, since their answers are constructed one word at a time in a linear fashion. For example, if you ask GPT "how many words long is your reply to this prompt", the answer will be wrong, or if you ask what the last word of its reply will be. Any sort of reply where an earlier part depends on a later part is impossible.

      More important than tool use, the language models need access to some sort of memory.
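      the limitation falls straight out of the decoding loop -- each token is chosen using only the tokens to its left, so nothing can be conditioned on text that hasn't been written yet (model here is a stand-in for any autoregressive LM):

      def generate(model, prompt_tokens: list[int], max_new: int) -> list[int]:
          tokens = list(prompt_tokens)
          for _ in range(max_new):
              # the model only ever sees the prefix; it can't "plan" around
              # tokens it hasn't emitted yet
              next_token = model.most_likely_next(tokens)  # stand-in model call
              tokens.append(next_token)
          return tokens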

      • 11 months ago
        Anonymous

        Most models also fall apart outside a fairly narrow context window of tokens. GPT-4 essentially feeds previous prompts and responses back into the prompt to "remember" the conversation. Once you hit the token limit it starts pruning older tokens until it loses the plot entirely. It does this because attention cost grows quadratically with context length, so past a certain point the model burns far more compute for worse results.
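        the pruning itself is about this dumb -- a sketch, with count_tokens() standing in for whatever tokenizer the model actually uses:

        def prune_context(messages: list[str], token_limit: int) -> list[str]:
            def count_tokens(msgs):
                return sum(len(m.split()) for m in msgs)  # crude stand-in tokenizer
            msgs = list(messages)
            # drop the oldest exchanges until the conversation fits the
            # context window -- this is where it "loses the plot"
            while len(msgs) > 1 and count_tokens(msgs) > token_limit:
                msgs.pop(0)
            return msgs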
