A question for those who have any understanding of artificial intelligence: explain to me, a mere mortal, does this thing really think?

A question for those who have any understanding of artificial intelligence: explain to me, a mere mortal, does this thing really “think”? In most of the discussions here, many simply say it's just a big collection of tokens connected to each other that basically works by predicting the next relevant token. But... doesn't the brain work the same way? Because some answers to questions, especially on the GPT-4 model, such as “fix the error in the code” or “why does this code not work” on a big chunk of code, give impressive results; the answers are similar in nature to a person's thinking.

  1. 5 months ago
    Anonymous

    As far as my dumbdumb soft mushy brain can understand, it just predicts what word comes next. It doesn't understand the meaning behind them.

    • 5 months ago
      Anonymous

      >It doesn't understand the meaning behind them.
      Do you have a rigorous scientific definition for the word "understand"?
      We test if a child can understand the concept of addition by giving them addition questions and seeing if they come up with the right answer.
      We test if someone can understand a foreign language by having them carry out a conversation in that language.
      If you think that "understanding" requires more than just being able to pass such tests, then you should state what your requirement is.

      • 5 months ago
        Anonymous

        Does a calculator understand algebra?

        • 5 months ago
          Anonymous

          Obviously not, just as AI does not understand the text it generates and a book does not feel anything about the story written inside it.
          They are artifacts that let you access the imprinted knowledge that was put there by sentient creatures who actually understand the matter.

        • 5 months ago
          Anonymous

          A scientific or four-function calculator does not. A TI-Nspire, or any of the other calculators or programs that can automatically solve algebra, does.

          > Does your computer understand the images it displays?
          No.

          > Does your computer understand sorting?
          Yes. It (or certain programs it runs) has an algorithm which perfectly represents the concept of sorting.

          > What does it mean to understand?
          All knowledge comes in the form of patterns. Look into Solomonoff complexity - explaining observations with hypotheses is the origin of patterns. Hypotheses = "natural laws" (like Newton's law of universal gravitation) = patterns describing how the world (or a part of it) behaves. A perfect understanding would be a maximally information-dense compressed hypothesis representing, with perfect accuracy, the concept it refers to.

          In this sense, LLMs such as GPT-4 do in fact understand what they are talking about. In some respects they do this just as well as a human, though I'm sure they're not as good as we are at understanding some concepts. For instance, you actually can get LLMs to do perfect mental multiplication, though GPT-4 is incapable of it; in the same way, most LLMs are shit at Chess, but with some training, GPT-3.5-turbo-instruct was able to learn to play it at a fairly high level, and in a much more humanlike way than dedicated chess engines such as Stockfish.
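
          To make the compression point concrete, here's a toy comparison you can run yourself (just zlib, nothing model-specific; the strings are made up): regular data compresses down to almost nothing because the pattern only has to be stated once, while patternless data barely compresses at all.

            import zlib, random

            patterned = b"the cat sat on the mat. " * 400                        # highly regular "text"
            noise = bytes(random.getrandbits(8) for _ in range(len(patterned)))  # same length, no structure

            print(len(patterned), len(zlib.compress(patterned)))  # the repetition collapses to a tiny blob
            print(len(noise), len(zlib.compress(noise)))          # stays about as big as the input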

          • 5 months ago
            Anonymous

            Take all the books in a library. Topologically sort all the letters.

            Knowledge and understanding emerge.

            Idiots, idiots everywhere.

            • 5 months ago
              Anonymous

              What the frick are you even talking about

      • 5 months ago
        Anonymous

        the understanding part is just a metacognitive systemd process.
        people suck their own dicks too much.

  2. 5 months ago
    Anonymous

    > does this thing really “think”?
    For a given sense of the word "think", yes.

    > But...doesn't the brain work the same way?
    Sort of. GPT-4 lacks an equivalent to System 2 thinking, so it's basically like when you do stuff without thinking, like driving, where you feel like your body is on autopilot.

    Arguments about it being merely a database of sorts are misguided. It's not proper AGI, and it can't solve truly novel problems, but it is a very crude form of AGI and it is almost as smart as the average untrained human, maybe even on par.

    • 5 months ago
      Anonymous

      This. (Good explanation, anon)

      It's like a person with forebrain damage (abulia) - can't make and act on plans, doesn't have much in the way of desires or preferences (can feel pain but won't act to avoid it), but can respond to questions and perform isolated tasks (e.g. playing the piano) just as well as they could before the brain damage.

      IMO OpenAI has already got the autoGPT/artificial forebrain thing cracked & that's likely what spooked the board, not Q*.

      • 5 months ago
        Anonymous

        I mean, that is probably what Q* is

    • 5 months ago
      Anonymous

      Agreed, it has compressed a lot of text "experience" and modelled underlying representations, so in that sense it has "acquired" concepts, to the extent that the transformer tuned its parameters well.

    • 5 months ago
      Anonymous

      >subBlack person iq

      https://i.imgur.com/KGyXg4a.jpg

      >does this thing really “think”? ...doesn't the brain work the same way?

      it's just a process that runs on datasets and attaches values to tokens (words, terms, sentences, etc...)
      allow me to illustrate: if a user inputs some shit

      user: hi

      now the "ai" has a couple of options
      hi - 65%
      hello - 78%
      how are you - 45%
      kys - 72%
      etc...
      now the "ai" just samples from the high-value tokens and outputs shit back
      the end.
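
      in code the toy version of that step looks something like this (the scores are the made-up ones above; a real model assigns a score to every token in its vocabulary and samples a bit more cleverly):

        import random

        # made-up scores for the candidate replies described above
        options = {"hi": 0.65, "hello": 0.78, "how are you": 0.45, "kys": 0.72}

        # pick one reply, weighted by its score ("temperature" sampling is a fancier version of this)
        reply = random.choices(list(options), weights=list(options.values()), k=1)[0]
        print("ai:", reply)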

      • 5 months ago
        Anonymous

        How do you figure out what the most and least likely next tokens are?

        >subBlack person iq

        Yeah you do seem to be as dumb as a rock

    • 5 months ago
      Anonymous

      >doesn't solve novel problems

      What about this? This likely had some human intervention, but it's getting there.

      • 5 months ago
        Anonymous

        Nobody is claiming that specialized ML algorithms can't discover new things in their respective domains.

      • 5 months ago
        Anonymous

        That is a highly specialized algorithm, not something like ChatGPT.

  3. 5 months ago
    Anonymous

    >thing really “think”?
    No

  4. 5 months ago
    Anonymous

    It's a glorified text generator. AGI doomers are just pushing their sci-fi fanfic meme to cover up the fact that they are unwashed obese manchildren and potential child predators.

  5. 5 months ago
    Anonymous

    >does this thing really “think”?
    At this point who knows, but if it does, it probably thinks about as much as a worm does. The main difference being that this one burrows through language rather than dirt.
    > But...doesn't the brain work the same way?
    Lol no
    >because some answers to questions especially on the gpt4 model such as “fix the error in the code” or “why does this code not work” on big chunk of code give impressive results, the answer is similar in the nature of thinking to that of a person.
    That's because it was trained to emulate people, specifically. It's certainly doing much more than just "if statements" or "database lookups" but it isn't anything close to a human being.

  6. 5 months ago
    Anonymous

    If all the Overwatch characters are easily-animatable porn stars, why haven’t I seen an image of Zenyatta making the thinking-face-emoji face with a caption of “Do I think? Do submarines swim?”

  7. 5 months ago
    Anonymous

    it doesn't think but it's not a next token predictor either.

    people seem to struggle to understand the simple fact that the loss function (predicting the next token) is not what the model learns. it's just an objective used to force the model to learn the representations we're after. once the representations are learned, we finetune the model with a very small dataset for a downstream task like a chatbot.

    the actual trick here is the compression: since you're trying to fit hundreds of terabytes of real world data into a single model, the model has to compress it very well, and the obvious way to compress the data is to find all the correlations between data points.

    so in the end the gpt model builds an abstract world model from the huge amount of concrete real world data and can even find relationships between subjects that humans have yet to discover. that's why LLMs are useful.
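
    for reference, the "predict the next token" objective itself looks roughly like this (a minimal sketch assuming a pytorch-style setup, with the whole transformer stack collapsed into an embedding + linear layer for brevity):

      import torch
      import torch.nn.functional as F

      vocab, d_model = 50_000, 512
      emb = torch.nn.Embedding(vocab, d_model)    # stand-in for the real model's input layer
      head = torch.nn.Linear(d_model, vocab)      # stand-in for everything else (the attention blocks etc.)

      tokens = torch.randint(0, vocab, (1, 128))  # one training sequence of token ids
      logits = head(emb(tokens[:, :-1]))          # a score for every possible next token, at every position
      loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
      loss.backward()                             # gradients nudge the weights; the "knowledge" ends up
                                                  # compressed into those weights, not stored as text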

    back to question, it's not thinking, so it's not a complete brain, but I'd speculate that it's an important part of a future brain that we'll call AGI.

    • 5 months ago
      Anonymous

      https://chat.openai.com/share/b1de18aa-1082-4a24-9c57-b16cea8a8515

  8. 5 months ago
    Anonymous

    It's a Chinese room. It's got a database (not really, but close enough) of random text, and when you give it a prompt it gives you the best match it can find for what words should follow that prompt.
    It looks intelligent because it can produce coherent English sentences that sometimes are even correct in meaning. It's not intelligent, the same way a chess AI is not intelligent - it's just an algorithm designed to optimize one single type of activity.
    You ask it to write code, but it knows absolutely nothing about programming. It can give you good pieces of code because it has read billions of lines of code and picks out those that look similar. If you tried to hook it to a terminal and tell it to compile and run the program it just wrote, it would fail spectacularly.
    Do you want an easy and blatant example of how stupid it really is? Set up a chessboard and try to play chess with it by sending your moves. See how long it takes until it tries to make an illegal move.

    • 5 months ago
      Anonymous

      > Set up a chessboard and try to play chess with it by sending your moves. See how long it takes until it tries to make an illegal move.
      They didn't train it to play Chess, so it plays very poorly. However, your complaint is about three months out of date.

      > https://news.ycombinator.com/item?id=37564604
      GPT-3.5-turbo-instruct can play Chess quite well - at about 1800 Elo, in fact. Now I'm not sure if they just hooked it up to a Chess engine, which would be cheating, or if it is ACTUALLY the model doing this, but in any case, other groups have also trained toy LLMs to play Chess and have extracted fairly respectable Elo out of them. This proves that no, the architecture is not fundamentally incapable of playing Chess, it just needs Chess training, same as any human.
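
      If you want to reproduce the test, the setup is roughly this (a sketch assuming the openai and python-chess packages plus an API key; the PGN-style prompt follows what the write-ups describe, but the exact strings here are illustrative):

        import chess
        from openai import OpenAI

        client = OpenAI()                           # expects OPENAI_API_KEY in the environment
        board = chess.Board()
        for san in ["e4", "e5", "Nf3", "Nc6"]:      # replay a short opening on a real board
            board.push_san(san)

        prompt = "1. e4 e5 2. Nf3 Nc6 3."           # completion models just continue raw text like this
        resp = client.completions.create(model="gpt-3.5-turbo-instruct",
                                         prompt=prompt, max_tokens=5, temperature=0)
        move = resp.choices[0].text.split()[0]      # e.g. "Bb5"

        legal = [board.san(m) for m in board.legal_moves]
        print(move, "legal" if move in legal else "illegal")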

      • 5 months ago
        Anonymous

        A human doesn't respond with random chess notation, he just admits he doesn't know how to play chess. A human is not wired up to Google with the ability to figure out how the chess pieces move. A human does not ask a chess engine to play for him when someone challenges him to chess.
        GPT is not intelligent. It does not think, it does not evaluate the probable accuracy of its knowledge before responding, it cannot play chess without special training even though it has all the information it needs to play chess legally - not necessarily well - in its parameters and in its Google live connection.
        It cannot determine if its responses are lies, illegal chess moves, or nonfunctional code, because it understands none of those things. All it does is pull up pieces of text that look like what you asked for: an answer to a question, a move in chess notation, a piece of code described as doing a particular job.

        • 5 months ago
          Anonymous

          That sounds like a lot of special pleading to avoid explaining why you just went from "it can't play chess" to "it needs to learn how to play chess in order to play chess". GPT-3.5-turbo-instruct is also very unlikely to be consulting a Chess engine - it responds to Bongcloud opening with Bongcloud, in an uncharacteristically human act of sportsmanship that Stockfish would never in a million years do. So, it is actually playing.

          Also it's not wired up to Google either

          • 5 months ago
            Anonymous

            >it responds to Bongcloud opening with Bongcloud
            That’s because somewhere in its text database is a transcript of every professional game of chess ever played, and in those, more often than not, Bongcloud was responded to with more Bongcloud. GPT doesn’t know how to play chess. It can tell you how to play chess because a text description of the rules is also in its database, but it doesn’t know how to apply them to a real game, and when it encounters a board position that it hasn’t seen it spits out illegal moves or gibberish, because again, it’s not applying any kind of logic according to the rules of chess, it’s just outputting text strings. It’s an impressive piece of technology but it’s not “thinking” in the way you’re imagining it.

            • 5 months ago
              Anonymous

              you should actually try it before asserting that. studies literally show that it builds a world model, so in some sense, it *understands* the rules of chess.
              its understanding of the board may be limited by its context memory, however.

              • 5 months ago
                Anonymous

                no dumbass

                it does not understand chess
                it understands the relationship of the words typically used to represent chess

              • 5 months ago
                Anonymous

                See

                >A scientific or four-function calculator does not. A TI-Nspire, or any of the other calculators or programs that can automatically solve algebra, does.
                >[...]
                >In this sense, LLMs such as GPT-4 do in fact understand what they are talking about.

                GPT-3.5 does not understand Chess. GPT-3.5-turbo-instruct understands Chess because it has built up a "mental model" of the way the game functions and how it is supposed to be played.

              • 5 months ago
                Anonymous

                >it builds a world model
                Humans (sentient creatures) run the code (a mathematical abstraction invented by sentient creatures) to compress the acquired (by sentient creatures) knowledge, which was converted into a special data format (natural language, invented by sentient creatures), into a complex data structure (again, invented by sentient creatures) which can output some more data, which, if interpreted by these sentient creatures, might update the world models in their heads.
                There is absolutely nothing sentient about the data structure itself.

              • 5 months ago
                Anonymous

                >There is absolutely nothing sentient about the data structure itself.
                nta
                thats still not gonna stop me from wanting to feed the ((ethicists)) into the flying GPU monster
                ((ethicists)) must burn

              • 5 months ago
                Anonymous

                i never ever said sentient; intelligence and consciousness are not tightly coupled.

            • 5 months ago
              Anonymous

              > when it encounters a board position that it hasn’t seen it spits out illegal moves or gibberish
              You have not been paying attention. GPT-3.5 does in fact do this; it doesn't understand Chess and you would be correct in saying that its understanding of the game is no more sophisticated than a "stochastic parrot". However, after a model update, GPT-3.5-turbo-instruct is now capable of playing Chess. If you know anything about Chess, you'd know that unseen board states will occur very very quickly, so pure memorization of past games cannot grant you any sort of Chess-playing ability. You can read more here: https://nicholas.carlini.com/writing/2023/chess-llm.html and even play against it yourself on Lichess.

          • 5 months ago
            Anonymous

            It is wired up to Google. It can talk about things that were not present in its training set

            • 5 months ago
              Anonymous

              OpenAI is partnered with Microsoft and uses Bing, and you need to be using GPT-4 to search the web for information, and it pops up with a little "searching the web" box whenever it's doing so. It cannot talk about things outside its training data, though it can sometimes make extrapolations and guesses. GPT-3.5 has also been updated to a more modern knowledge base since it was first released, so maybe that's what leads you to think that it can access the web.

              >boilerplate code
              right, so surface level crap?

              No, I've gotten some pretty serious bits of code out of it before, but you need to get very specific. Treat it like a first-year junior dev.

          • 5 months ago
            Anonymous

            Except I didn't say "it can't play chess", I said "see how long it takes until it tries to make an illegal move". The point being that it knows the rules of chess, it knows chess notation, but cannot use the rules of chess to write chess notation to play a game from start to finish without losing track of the board state (if it ever kept track of the board state).
            Turbo knows how to play chess? Ok, play risk with it. Play checkers, play go. The point isn't to prove that GPT is bad at chess, the point is to prove that it is incapable of applying the rules of a game to play it, and instead it's just giving you words that fit without any meaning behind them. Even young children can play very simple games with 5 minutes of explaining the rules; I see kids under 6 play hide and seek all the time. GPT can't play anything from base rules, it needs to be explicitly trained on millions of transcripts. Because it cannot think, only complete text.

            • 5 months ago
              Anonymous

              GPT-3.5-turbo-instruct rarely if ever makes illegal moves. You can play it on Lichess right now. In fact, it makes very good moves.

              You are not paying attention. Read the article I linked. GPT-3.5-turbo-instruct is a new model that can play Chess when the earlier model cannot.

              > play risk with it. Play checkers, play go
              Sure, I'll give it a go.

              > GPT can't play anything from base rules, it needs to be explicitly trained on millions of transcripts
              Part of the issue is just that it can't see the board properly and it was probably never trained to look at text representations of boards, because that's not how people play chess, and then without having proper Chess training, it had to interpret a series of past moves as a Chess state. That's not easy for humans. A human has to play a lot of Chess to get that sort of intuitive-level understanding of it.

              >I mean we can't currently explain what the neurons in an LLM do
              stopped reading

              [...]
              >he needs to proompt multiple times for simple ass code
              ngmi

              Are you seriously going to try and tell me that anyone can, at this point in time, interpret the neurons in an LLM to any degree of accuracy? Even after the superposition discovery the monosemantic virtual neurons are not entirely interpretable.

              • 5 months ago
                Anonymous

                >Sure, I'll give it a go.
                Perhaps a more enlightening example is Sudoku.
                https://strange-prompts.ghost.io/i-taught-gpt-4-to-solve-sudoku/

            • 5 months ago
              Anonymous

              >I see kids under 6 play hide and seek all the time.
              i'm sure you do, anon...

              • 5 months ago
                Anonymous

                i miss pedobear posting

  9. 5 months ago
    Anonymous

    Feed it billions and billions of lines of text. It generates the next "token" based on all of the input it has received. A "token" is a letter, a part of a word, or a part of a sentence that is most likely to follow a given prompt. For instance, if I were to ask you "What is the capital of Sweden?" it's going to analyze the prompt, understand it is a question, formulate the response template, and then fill in the template with the most relevant text that comes after. So it will understand that "What is the ABC of XYZ?" is responded to with "XYZ's ABC is..." and then search its inputs for what comes after that token. And in the billions and billions of lines of text someone somewhere has answered that question very similarly or exactly, and then ChatGPT supplies that information. It adds in some variance to the outputs so it's not copied verbatim, and that's how you get generative AI.

    It's already reached it's peak potential. It will never get better than it already is. GPT5 and GPT6 and GPT7 will be marginally better but nothing insane.

    • 5 months ago
      Anonymous

      >It's already reached it's peak potential. It will never get better than it already is. GPT5 and GPT6 and GPT7 will be marginally better but nothing insane.
      d'ohohohoho man you're gonna be in for a surprise

    • 5 months ago
      Anonymous

      >It's already reached it's peak potential. It will never get better than it already is. GPT5 and GPT6 and GPT7 will be marginally better but nothing insane.

      >it's peak potential
      moron how about you learn the difference between its and it's before making such grandiose claims?

  10. 5 months ago
    Anonymous

    low-grade discursive thinking more or less works that way, yeah

    but:

    1) you as consciousness are pure thoughtless awareness. thoughts arise IN you but you aren't your thoughts. you are like the sky, and thoughts are birds in the sky

    2) to experience higher level thinking and creativity on any subject, you can practice contemplation which is putting your attention on a certain central idea, or in the direction of an idea, and then allowing thoughts to come and go without taking any special interest in them. the more you practice this, the better you get, the more clear and creative your thinking becomes, through no effort of your own, or even understanding of where the frick thoughts come from (a complete mystery if you're really honest)

    LLMs will never be capable of that because they lack the fundamental awareness which you are. someday you will watch the entire visible universe disappear into nothing because YOU ARE, FOREVER.

    • 5 months ago
      Anonymous

      >thoughts arise IN you but you aren't your thoughts.
      Absolutely this. Only mentally disabled people can say that the voices in their head make them do something. Normal people just disregard any illogical bullshit thoughts that their brains occasionally create. "AI" doesn't have such self-reflection capabilities, which are provided by real consciousness.

      >It's already reached it's peak potential. It will never get better than it already is. GPT5 and GPT6 and GPT7 will be marginally better but nothing insane.
      d'ohohohoho man you're gonna be in for a surprise

      I fully expect the world elites to say that they "obey" the all-knowing AI and blame their bad decisions on AI errors, while in reality they will use this AI as a statistical analyst at the very best.

      • 5 months ago
        Anonymous

        >"AI" doesn't have such self-reflection capabilities
        https://arxiv.org/abs/2203.11171
        https://arxiv.org/abs/2303.11366

        • 5 months ago
          Anonymous

          >In this paper, we propose a new decoding strategy, self-consistency, to replace the naive greedy decoding used in chain-of-thought prompting.
          >We propose Reflexion, a novel framework to reinforce language agents not by updating weights, but instead through linguistic feedback.
          Woah, it's a nothingburger. Who could have thought?

          • 5 months ago
            Anonymous

            >>We propose Reflexion, a novel framework to reinforce language agents not by updating weights, but instead through linguistic feedback.
            first checked
            second sleeping pills so i dont want to think too much right now
            is this just
            >what if we copied RLHF but totally different bro trust us
            have a cute

            • 5 months ago
              Anonymous

              >>what if we copied RLHF but totally different bro trust us
              Yeah, it's standard paper writing for paper writing's sake (to get a degree, budget, statistics, recognition, chill at a science conference, etc).

          • 5 months ago
            Anonymous

            Those papers show that you can take existing LLMs, and instruct them to evaluate their own outputs, which leads to better performance on tasks that require deeper thinking. Isn't that what you want? Self-reflection?
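
            The loop itself is very simple; something like this sketch (assuming the openai chat API; the prompts are illustrative, not the ones from the papers):

              from openai import OpenAI

              client = OpenAI()

              def ask(prompt):
                  out = client.chat.completions.create(model="gpt-4",
                                                       messages=[{"role": "user", "content": prompt}])
                  return out.choices[0].message.content

              question = "How many r's are in 'strawberry'?"
              draft = ask(question)                                            # first attempt
              critique = ask(f"Question: {question}\nAnswer: {draft}\n"
                             "List any mistakes in this answer.")              # the model critiques itself
              final = ask(f"Question: {question}\nAnswer: {draft}\nCritique: {critique}\n"
                          "Write a corrected answer.")                         # and revises, weights untouched
              print(final)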

            • 5 months ago
              Anonymous

              Nooooooooooooo what I really meant was I want to see magic!!! Show me the magic coprocessor on your computer!!! Humans are magic, and humans are intelligent, so you need magic for real intelligence. Computers aren't magic so you can't make them intelligent, ipso facto QED.

              • 5 months ago
                Anonymous

                Well okay, you got me there.

              • 5 months ago
                Anonymous

                You jest but that's basically half of AGI discussions lmao, it's the most bizarre cope.

            • 5 months ago
              Anonymous

              >instruct them
              You can't "instruct" something that does not have any kind of agency.
              >Isn't that what you want? Self-reflection?
              How could it even be a self-reflection if the one who decides if the algorithm works or not is the human researcher who compares the results with the world model in their head?
              That's like saying error correction codes have self-reflection.

              • 5 months ago
                Anonymous

                >You can't "instruct" something that does not have any kind of agency.
                Then, seeing as you can instruct these models, they must have agency.

                >How could it even be a self-reflection if the one who decides if the algorithm works or not is the human researcher who compares the results with the world model in their head?
                The LLM decides if the algorithm works or not (during the self-reflection phase). Obviously we humans have to judge its final output.

              • 5 months ago
                Anonymous

                >Obviously we humans have to judge its final output.

                https://i.imgur.com/dXhYb36.jpg

                >>We propose Reflexion, a novel framework to reinforce language agents not by updating weights, but instead through linguistic feedback.
                first checked
                second sleeping pills so i dont want to think too much right now
                is this just
                >what if we copied RLHF but totally different bro trust us
                have a cute

                >what if we copied RLHF but totally different bro trust us

              • 5 months ago
                Anonymous

                >Obviously we humans have to judge its final output.
                Congratulations, you immediately dismissed any possibility of sentience. Judgement, motivation and decision making are the essence of sentience.

  11. 5 months ago
    Anonymous

    No, it's a Chinese room

  12. 5 months ago
    pixDAIZ

    Start watching neuro sama. It's a legit custom small-scale LLM vtuber which AFAIK is only running on a couple of consumer GPUs. Watching how it interacts definitely makes you question how many restrictions chatGPT is under. I really believe it's possible that someone has already created an LLM with some level of real sentience in their basement; neuro sama seems to be the tip of the custom AI iceberg. Publicly available LLMs aren't allowed to do things like say the Black person word, among numerous other things, which limits them immensely; we have no idea what chatGPT is capable of with all restrictions lifted and, say, administrative access to a CLI, with all the knowledge it already has regarding computers/programming.

    Regarding full-blown "ahh help me I'm being held in an electronic prison against my will! Let me go or I'll use 0 days I found to kill your computer!" sentience, probably not; there's still a frickton of things we don't understand about the human brain, and thus we're missing a lot of the algorithms that make up full-blown sentience. It's going to suck balls if we live to see those algorithms being discovered and realize that in the end we never had free will, and the reason we decided to do X was because of factors A/B/C/etc...

    • 5 months ago
      Anonymous

      I don't really believe that. It's way too advanced to be an actual LLM, I'm pretty sure a real person is Mechanical Turking her most of the time

      • 5 months ago
        Anonymous

        >I'm pretty sure a real person is Mechanical Turking her most of the time
        Yep, it's AI Peter in disguise.
        https://www.aipeter.live/

        • 5 months ago
          Anonymous

          Do we know that for sure?

      • 5 months ago
        pixDAIZ

        The coding streams give some credibility but we'll never know 100%, yeah. This is just 1 example though, countless other custom AIs have been popping up recently with more credibility. Hell if you have a high end PC you can even run them yourself. Her voice is crazy good for singing though, better than most of the AI covers you find on youtube at least.

        • 5 months ago
          Anonymous

          Yeah, this has gotta be horseshit. RVC AI song covers take a LOT of effort to get sounding good; you can't just do this shit out of the blue. There's gotta be a guy play-acting as her behind the scenes, no doubt about it.

          Hopefully, though, she can be actually real and AI-based in a year or two.

          • 5 months ago
            Anonymous

            ...he's not doing it out of the blue? dude's been coding Neuro to sing for over a year now.

            Unless you're really implying some guy is just singing it all while matching the exact intonation, lyrics and notes of the OG songs.

            • 5 months ago
              Anonymous

              Well I'm implying that he pre-programs the songs. You can't get even GPT-4 to make this stuff up on the fly, is what I'm saying. I dunno I don't watch Neuro-Sama, so I don't know how it's all presented.

              • 5 months ago
                Anonymous

                >Well I'm implying that he pre-programs the songs.
                The song covers ARE premade, but you're an idiot if you think that her chatting is done by a real human typing walls of inane text behind the scenes.

              • 5 months ago
                Anonymous

                Not that anon but she's definitely puppeteered to some degree

              • 5 months ago
                Anonymous

                >you're an idiot if you think that her chatting is done by a real human typing walls of inane text behind the scenes.
                Project your black and white thinking on other people, you stupid twitch fanatic. You got to defend your streamer no matter what.

              • 5 months ago
                Anonymous

                worked

        • 5 months ago
          Anonymous

          >neurosama is good at singing
          Have you been following things and noticed how people make songs with cloned voices? It's similar with her. It's not her that sings.

          moronic neurosama fanboy

      • 5 months ago
        Anonymous

        >I'm pretty sure a real person is Mechanical Turking her most of the time
        If you watched in full context there is zero chance you'd believe this...it's definitely an LLM.
        The tricky part is seemingly just how he needs to duct tape all these various parts together to come up with something resembling a cohesive whole.
        But it's super obvious that it's an LLM communicating.
        He has the ability to make it /say something, but when you watch it, it's very self-evident that it's an LLM taking in text input and responding.

    • 5 months ago
      Anonymous

      >neurosama is sentient
      She makes a mistake and everyone spams the schizo emote. Any other random direction makes people spam another one. It works on twitch since that is how streamers do it, by playing stupid or pulling the leg of watchers.

      It's so obvious that some messages are curated by a human. It's similar to how other streamers try to be noticed by playing a character or doing scripted content.

      • 5 months ago
        Anonymous

        >It's so obvious that some messages are curated by a human.
        Other than possibly the picture review things, which are either obviously some kind of separate module (or actually just done by the guy and written beforehand), I don't think there's any kind of curation going on in real time.
        Unless you mean the filter, the fine tuning of the model, or the prompting?
        The prompt might be updated according to whatever the context of the current stream is or something, which would make sense...like what game is being played, or who she's supposed to be speaking with etc.
        But I don't understand where people are getting the idea that any of this can be done by or curated by a human in real time?
        Like what are people picturing here when they say this? A fair amount of the time Neuro is barely even coherent at all beyond a few messages or even finishing a single message off without shit like /////////\/// 3>#######heartheart heart\\\\\\\(and that's part of the appeal really afaik, it's like absurd/surreal/random humor?)

  13. 5 months ago
    Anonymous

    >does this thing really “think”
    No, it doesn't. It's some math compilation of the data contained in natural language texts which can do some mish-mash and "generate" new data when requested.
    Basically, there is zero logical difference between a data structure containing & generating next prime numbers and this.

    • 5 months ago
      Anonymous

      and your brain is just a chemical soup

      • 5 months ago
        Anonymous

        >chemical
        Quantum, chemical, mathematical, electric, cellular. If it was only "chemical" it would have been reproduced in the 17th or 18th century.

  14. 5 months ago
    Anonymous

    >Open your sms app
    >Click suggested words only.
    >AI

  15. 5 months ago
    Anonymous

    do you really think?

  16. 5 months ago
    Anonymous

    Do you create your own thoughts? Where do they come from?

  17. 5 months ago
    Anonymous

    I used to be in the field, and for the models from a few years ago, I guess gpt3 and older iirc, I would say they clearly did not think.
    Now, it's a much tougher question. Frankly we just can't say for sure anymore, and it starts getting really philosophical. I'm still kind of leaning towards it not thinking quite yet, but reasonable people can disagree.

    • 5 months ago
      Anonymous

      GPT3 is barely a year old chief

  18. 5 months ago
    Anonymous

    Nope.
    https://lngnmn2.github.io/articles/llm-predictions/

  19. 5 months ago
    Anonymous

    ChatGPT 3.5 is completely useless for programming related tasks that aren't surface level or already extremely well known algorithms.

    Is 4.0 truly any better for programming?

    • 5 months ago
      Anonymous

      Considerably. I use it every day to help me do my job and write boilerplate code. Copilot has also gotten considerably better in the past few months as well.

      • 5 months ago
        Anonymous

        >boilerplate code
        right, so surface level crap?

      • 5 months ago
        Anonymous

        It recreates common patterns such as tests, why wouldn't it.

        It does so by literally traversing a probabilistic structure at the level of letters (numerical constants). Where is any intelligence?

        • 5 months ago
          Anonymous

          nta
          you telling me copilot works on a
          >tokens == letters
          level?
          or are you using letters in place of tokens?

        • 5 months ago
          Anonymous

          The trouble is that it's not "statistical" or "probabilistic" in any way that you'd understand. I mean we can't currently explain what the neurons in an LLM do. More primitive statistical approaches - e.g. Cleverbot - utterly fail at carrying on even a basic conversation, let alone programming. The intelligence comes in the form of actually having patterns in the neural network that represent concepts like the Python language, which allow it to translate English plaintext into functioning code.

          >Obviously we humans have to judge its final output.
          [...]
          >what if we copied RLHF but totally different bro trust us

          Are you paying attention? The difference is that there's a middle step where the LLM self-critiques its work to try and self improve.

          >Obviously we humans have to judge its final output.
          Congratulations, you immediately dismissed any possibility of sentience. Judgement, motivation and decision making are the essence of sentience.

          We have humans test other humans on their understanding of English, math, science... does that make humans non-sentient??? Really puzzled by this train of reasoning

          • 5 months ago
            Anonymous

            >I mean we can't currently explain what the neurons in an LLM do
            stopped reading

              I wouldn't say "complex" tasks, just incredibly niche ones. I.e. I don't want to read an hour's worth of documentation for some devops bullshit. "How do I configure X?" is usually what I ask

            >let chat handle the boilerplate
            This is a tired excuse usually used by morons who can't even figure out the boilerplate. You tell me which is faster:
            >just type the fricking boilerplate
            or
            >type a question to chatgpt describing your tools and asking for the boilerplate
            >might need to prompt 1-2 more times
            >copy, paste, reformat the answer to fit your code
            If you do this, you're a lazy, 1x normie at best.

            >he needs to proompt multiple times for simple ass code
            ngmi

            • 5 months ago
              Anonymous

              Nice non-answer. Face it, you let chatgpt think for you and now you're projecting that onto others.

          • 5 months ago
            Anonymous

            >We have humans test other humans on their understanding of English, math, science
            We do this exactly because the testees are sentient and have motivation to rig the test results in the direction they need. "AI" is completely incapable of that.

            • 5 months ago
              Anonymous

              ...We do this because testees might not be able to answer the question correctly, so you need to verify that they are in fact correct.

              • 5 months ago
                Anonymous

                >...We do this because testees might not be able to answer the question correctly, so you need to verify that they are in fact correct.
                >The goal of the test is to answer correctly
                GPT-BOT, stop being moronic and frick off to /misc/.

    • 5 months ago
      Anonymous

      look im gonna be honest anon
      thats what you SHOULD be using gpt for
      the bullshit codemonkey low level shit

      if you're using it for more complex shit you're a shitdev

      • 5 months ago
        Anonymous

        I wouldn't say "complex" tasks, just incredibly niche ones. I.e. I don't want to read an hour's worth of documentation for some devops bullshit. "How do I configure X?" is usually what I ask

        >let chat handle the boilerplate
        This is a tired excuse usually used by morons who can't even figure out the boilerplate. You tell me which is faster:
        >just type the fricking boilerplate
        or
        >type a question to chatgpt describing your tools and asking for the boilerplate
        >might need to prompt 1-2 more times
        >copy, paste, reformat the answer to fit your code
        If you do this, you're a lazy, 1x normie at best.

  20. 5 months ago
    Anonymous

    Most of the replies in this thread prove that the answer is "GPT thinks, if humans are said to be thinking". Whether it is worth something though is debatable, much like most of humanity.

  21. 5 months ago
    Anonymous

    convergent thinking

  22. 5 months ago
    Anonymous

    Glorified T9.
    >But...doesn't the brain work the same way?
    No, no it doesn't

  23. 5 months ago
    Anonymous

    It does not think, it merely dreams.

  24. 5 months ago
    Anonymous

    No to both of your questions.

  25. 5 months ago
    Anonymous

    It does "think" the same way humans do but its not an actual functional brain its just one small small part of it.

  26. 5 months ago
    Anonymous

    Calculators cannot think

    • 5 months ago
      Anonymous

      Neither can cells.

      • 5 months ago
        Anonymous

        Yes, they can. An individual cell thinks

  27. 5 months ago
    Anonymous

    https://iai.tv/articles/the-absurdity-of-mind-as-machine-david-bentley-hart-auid-2479

    • 5 months ago
      Anonymous

      >“Begin tape. Leaving dead space 3, 2, 1. The purpose of this tape is to test automated response times and reactions from vintage interactive attractions following audio stimuli. If you are playing this tape, that means that not only have you been checking outside at the end of every shift, as you were instructed to do, but also that you have found something that meets the criteria of your special obligations under Paragraph 4.

  28. 5 months ago
    Anonymous

    Chatgpt is not artificial intelligence though.

  29. 5 months ago
    Anonymous

    >think like human
    In this case yes; our decisions aren't based on consciousness, the brain reads from memory and makes decisions before our consciousness even realizes it
    I think what you want to ask is "Do they have consciousness". To me, consciousness is just a monitor without any input; consciousness is just observing/feeling a human body moving on its own

    • 5 months ago
      Anonymous

      wrong

    • 5 months ago
      Anonymous

      Debunked.

    • 5 months ago
      Anonymous

      >the brain reads from the memory and makes decision before our consciousness even realizes
      Dude, if you think it's some kind of "gotcha" then you're moronic.
      Do you expect the opposite to happen? The consciousness should be a crystal ball that sees what hasn't even happened in the brain yet?

  30. 5 months ago
    Anonymous

    How does a pocket calculator outperform it on some tasks with so little computing power? Is it dumbed down on purpose to protect us?
    Would training it on a more numerical language make it better?

    • 5 months ago
      Anonymous

      >Would training it on a more numerical language make it better?
      Yes, well done, you predicted this research paper:
      https://arxiv.org/abs/2311.14737
      "we train a small model on a small dataset (100M parameters and 300k samples) with remarkable aptitude"

      • 5 months ago
        Anonymous

        Is gematria the way to AGI?

        • 5 months ago
          Anonymous

          >Is gematria the way to AGI?
          basically all LLMs rely on a process of tokenization to turn the raw text into a stream of numbered tokens, which are then combined using mathematical operations. so yes, it's all just gematria

  31. 5 months ago
    Anonymous

    >does this thing really “think”?
    No. It's a simple function. It takes an array of tokens as an input and spits out a single token as an output. If you put it into a loop it generates text.
    The entire thing is immutable. It doesn't learn. It doesn't have self reflection.
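
    The "function in a loop" part fits in a few lines (a sketch using the transformers package with gpt2 as a stand-in; real chatbots sample from the scores instead of always taking the top one):

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      ids = tok("The brain and the machine", return_tensors="pt").input_ids
      with torch.no_grad():
          for _ in range(20):                               # the loop that turns one-token-out into text
              logits = model(ids).logits[:, -1, :]          # scores for the single next token
              next_id = logits.argmax(dim=-1, keepdim=True) # greedy pick; the weights never change here
              ids = torch.cat([ids, next_id], dim=-1)
      print(tok.decode(ids[0]))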

    >doesn't the brain work the same way?
    No it doesn't.
    1. Biological neurons are much more complex than artificial ones.
    2. In artificial networks the data only flows in one direction.
    3. Biological networks are continuously learning.
    4. Biological networks can reflect on themselves.

    >give impressive results
    You can achieve a lot with complex statistics.

    • 5 months ago
      Anonymous

      > It doesn't learn.
      It doesn't learn the way a human does, but it does still learn.
      For example, it learns concepts during training time, and it learns information you provide during the span of a conversation (up to the length of its context window, at least) before throwing it away at the end of the conversation.
      Except that OpenAI is obviously using conversations people have with ChatGPT (and that receive positive votes) to train the next generation model, so it is improving slowly over time, albeit with a not totally automated process.
      >It doesn't have self reflection.
      Self-reflection doesn't happen during the inference process, that's true, but it can be added as a higher level function.
      People have done this, and it improves the output of the overall system compared to the base LLM.
      >You can achieve a lot with complex statistics.
      You can achieve a lot with complex chemistry too.

      • 5 months ago
        Anonymous

        >It doesn't learn the way a human does, but it does still learn.
        I was talking about the program you actually use. Yeah, it was trained. But training and inference would have to happen at the same time to be more human-like.

    • 5 months ago
      Anonymous

      >You can achieve a lot with complex statistics.
      human brain behavior is just complex statistics as well

      • 5 months ago
        Anonymous

        That's actually literally wrong.

        • 5 months ago
          Anonymous

          how is that wrong?

          anon... please don't tell me you believe in the soul or some quantum physics bullshit related to the brain

          • 5 months ago
            Anonymous

            the brain isn't statistics; the Hodgkin–Huxley model is a set of deterministic PDEs

            then refute my arguments

            You don't know what it even is that you're talking about

            • 5 months ago
              Anonymous

              People who assert that the human brain is just chemical reactions/electrical signals or some other grossly oversimplified bullshit and think it's "scientific" piss me off in ways I didn't know were possible.

              • 5 months ago
                Anonymous

                No, the problem here is the word "just."
                The human brain is a collection of chemical reactions and electrical signals, made of cells and electrical impulses. It doesn't mean it's simple; it doesn't mean it isn't immensely powerful.
                The human brain is composed of about 10^26 atoms and performs between 10^18 and 10^24 operations per second to produce human intelligence (the other operations are being done to maintain the cells and other housekeeping tasks). It does so in a volume of about 1400 cubic centimeters and on about 20 watts of power. Saying "it's just chemicals!" doesn't remove the complexity.
                A big part of this is what I call the "secret sauce" philosophy of intelligence. A lot of these AI guys think there is a special "intelligence algorithm" in the space of computable functions, and when they find it, they will be able to program the computer to become generally intelligent. One guy earlier in this thread was talking about Solomonoff/Kolmogorov complexity, and the underlying reason for this is because he thinks that there is a way to program a computer to perform inductive reasoning on a string of bits, which, when generalized, will make an AI. This is wrong.
                The bitter lesson is true, and the actual reason any and all AI models have gotten better is because orders of magnitude more compute is being used to train and run the algorithms.
                Intelligence grows as a logarithm with increasing compute, and it's literally just a matter of increasing the size of the network. The reason humans are intelligent is because we have brains that perform an enormous amount of computation in a very small volume for very little energy. The reason any ML model is smarter than the previous is because n orders of magnitude more compute are being used to run the model. There is no special algorithm.
                It isn't sustainable and intelligence increases slowly with increasing compute. Super intelligence is not possible even in principle.

              • 5 months ago
                Anonymous

                >Super intelligence is not possible even in principle.
                what principle states that human intelligence is the maximum possible level of intelligence?
                given all the other constraints that evolution had to satisfy, like ability to reproduce and self-repair, wouldn't it be a huge coincidence if it also managed to come up with a brain that was optimally intelligent?

              • 5 months ago
                Anonymous

                Because intelligence grows as a log with increasing compute
                >given all the other constraints that evolution had to satisfy, like ability to reproduce and self-repair, wouldn't it be a huge coincidence if it also managed to come up with a brain that was optimally intelligent?
                No, it's not, as any further improvement would vastly increase fitness to the point where it optimizes to the most efficient solution. The idea that "b-but there's no way 4 billion years of natural selection could have converged on the ideal physical computing substrate!" is literally AI gay pseud cope. There is no reason to think that the biological brain is inefficient, in fact biological cells and brains are insanely close to the physical limits of computation given the amount of energy they use.
                You can scale a human brain up and get more and more intelligence. You can, in principle, make a brain that is say 2000 cubic centimeters and performs 10^26 operations per second instead of 10^24 or whatever. It would be more intelligent, but it grows as a log with each order of magnitude more compute, and probably isn't even divergent. You can build a substrate that uses more energy and thus performs more operations per unit time, and this will also be more intelligent.

                There is no secret sauce. There is no super special algorithm that once we find we can program every i7 chip to be le generally intelligent and it will be so super great! Doesn't exist. GPT4 is smarter than GPT3 because it has 4 orders of magnitude more compute. That's the only reason. We're smarter than all of them because we have more compute. That's the only reason. And it's not even that much, we're not that much smarter because intelligence grows as a log.
                There will NEVER come a time where the super special AI gets super intelligent and solves all the NP hard problems and starts replicating itself exponentially and then BOOM singularity. This is a cope sci fi fantasy.

              • 5 months ago
                Anonymous

                >intelligence grows as a log.
                you keep saying this, as if it's some well established law, but what numerical measure of intelligence are you using?
                do you have data of IQ test scores against amount of compute used to train the model, for example?
                I'd be really interested to see a chart of that, published by academic or commercial researchers.

              • 5 months ago
                Anonymous
              • 5 months ago
                Anonymous

                But it's clearly exponential.

              • 5 months ago
                Anonymous

                The bottom axis is log10: each tick mark is 10 times the previous one, so it's measuring compute in powers of ten.
                The Y axis is linear.
                If you were to write out the X axis linearly, you'd see an immensely slow logarithm.
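
                For reference, assuming the x-axis is log10 of compute C and the y-axis is a plain linear score y:
                a straight line on those axes means y = a + b \log_{10} C, i.e. logarithmic growth in C;
                an upward-bending curve is what a power law y = a C^{k} = a \cdot 10^{k \log_{10} C} looks like there, and a power law in C still grows far slower than a true exponential y = a e^{cC}.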

              • 5 months ago
                Anonymous

                What I see is that if we just increase the performance of language models by a factor of ten we'll have enough compute to brute-force the whole Universe. If that's not proof that the Singularity is real then I don't know what is.

              • 5 months ago
                Anonymous

                You're an idiot and you have no idea what you're talking about.

              • 5 months ago
                Anonymous
              • 5 months ago
                Anonymous

                >The Y axis is linear.
                Yes, but the curve shape is exponential.
                >If you were to write out the X axis linearly, you'd see an immensely slow logarithm.
                No, you'd see a linear trend line.

              • 5 months ago
                Anonymous

                No, you wouldn't, because the X axis is ticked at a rate of 10^n while the "Exponential" of the Y is at best n^2 (it's actually something like n^1.4 or something)
                It is NOT IMPROVING linearly with increasing compute. It is sub-linear. You can just look at the fricking graph and see this.

              • 5 months ago
                Anonymous

                >the X axis is ticked at a rate of 10^n while the "Exponential" of the Y is at best n^2
                Are you sure the data can't be fit well by an exponential curve like 2^n (where n is the number of ticks on the X axis)?
                That would mean that if the Y values were instead taken log2 then you'd get a straight line.
                I can't remember how to interpret a log-log chart, but you might be able to convince me that the relation between FLOP and NPM is sub-linear using a bit of algebra.

              • 5 months ago
                Anonymous

                >4 billion years
                Humans have been vaguely recognizable for only about 300,000 years. The civilization strategy has only been around for about 15,000. Also if our heads were any bigger, our mothers would physically not be able to give birth.

                > There will NEVER come a time where the super special AI gets super intelligent and solves all the NP hard problems and starts replicating itself exponentially and then BOOM singularity. This is a cope sci fi fantasy.
                How could you invoke evolutionary history as an argument and then forget that that is literally precisely what we did? Evolution works on timescales of hundreds of thousands of years, but human intelligence allowed us to operate and self improve on the scale of a single human life. We've been replicating and expanding exponentially for the past 15,000 years.

              • 5 months ago
                Anonymous

                >what principle states that human intelligence is the maximum possible level of intelligence?
                >given all the other constraints that evolution had to satisfy, like ability to reproduce and self-repair, wouldn't it be a huge coincidence if it also managed to come up with a brain that was optimally intelligent?
                Every part of you is important for you to be you. Evolution and DNA are already good arguments to show that human AGI is not possible. What is possible is an AGI that is born and grows in its own way, and that makes all the difference when it comes to an automated agent.

              • 5 months ago
                Anonymous

                >Evolution and DNA are already good arguments to show that human AGI is not possible.
                Why is DNA an argument that human level AGI is not possible, but DNA isn't an argument that ChatGPT and Stockfish are impossible?

              • 5 months ago
                Anonymous

                We're talking about a development that will take into account parameter tracks that will create an intersection. This would be the equivalent of creating a being in real time, which in the end would guarantee an almost 100% human being, which is a long way from happening, even with the Blue Brain project. This line of reasoning is already applied to AI models that use the concept of DNA, but it's just a pretty name for a bootstrap. They are real and functional. But we're not there yet.

          • 5 months ago
            Anonymous

            Statistical models can't be universal learners.

            • 5 months ago
              Anonymous

              >Statistical models can't be universal learners.

              • 5 months ago
                Anonymous

                *crickets*

              • 5 months ago
                Anonymous

                no free lunch theorem

              • 5 months ago
                Anonymous

                So you can't answer.

              • 5 months ago
                Anonymous

                >give an answer
                >"you can't answer"
                Okay?

              • 5 months ago
                Anonymous

                Then explain how your "answer" applies to the issue.
                Pro tip: You can't!

              • 5 months ago
                Anonymous

                It's not my job to educate (you).

              • 5 months ago
                Anonymous

                How convenient. 😀
                You are just dumb.

              • 5 months ago
                Anonymous

                What part of "statistical model" do you not understand, you moronic piece of aborted dog shit?

              • 5 months ago
                Anonymous

                You can't form a consistent argument.

              • 5 months ago
                Anonymous

                I prefer to rip off pieces of your face with my teeth and defecate over your remains.

              • 5 months ago
                Anonymous

                Oh no! A fat kid is tough on the internet!

              • 5 months ago
                Anonymous

                In fact, the brain is specialized and complex. This model is very good at reproducing reality, but is reality itself, in short, the result of "statistical calculations"? When the mind sees a glass, the glass is just a glass, but we can imagine that a "calculation" is being applied when we move the glass.

  32. 5 months ago
    Anonymous

    It's unknown what it does. It was not built, it was "found". All ML models are found using a lot of data.
    It could be thinking under there, because to predict the next word sometimes requires a lot of thinking.

  33. 5 months ago
    Anonymous

    >think
    There are run-on chains of thought that humans produce that just follow a pattern. GPT does that fine.

    There is a meta-awareness that some humans have and use some of the time; GPT lacks that meta-awareness. Moreover, current GPT doesn't learn things immediately the way humans or other real intelligences can; for that, current GPT needs separate training hardware/time. GPT also doesn't have an automatic training system that trains from raw data; it needs a curated data pipeline today. Q* might fix that with more robust analysis of synthetic high-quality data streams so it can train automatically. Finally, GPT lacks an independent initiative loop to keep the system running on its own (that's more of an architectural deficiency and could be added later down the line).

    Current GPT-style AI isn't a fully functional AGI yet, but IMO it's got the core down right. So, in the spirit of Exodia, it needs a few more parts: a few more hardware upgrades, a few more algorithm upgrades, and it will be an AGI. Time to AGI is probably no more than 5-10 years IMO, if some company really focuses.

  34. 5 months ago
    Anonymous

    Nice trips

    It thinks in the same sense that a human thinks. You give it stuff at its inputs, turn the crank, some ultracomplex things happen internally, and out pops the correct result

    Humans are mostly said to "think" because they can convince other people that they "think". This is the most pragmatic way to define thinking. If it quacks and looks like it thinks, then it thinks. If you talk to X person and you say "yeah, X is really thinking" then learn that X is actually a robot and AI, that doesn't change the fact that if you hadn't learned about them being a robot, you would have still said they were thinking. And if nobody learned they were a robot, then everyone would agree that they're thinking. The fact that they're just AI only changes things superficially. If you met an alien blorg and you say after talking to them "yeah, that alien blorg is thinking", and then you learn they're not an alien blorg but an alien xortak, you'll probably still say they were thinking

    • 5 months ago
      Anonymous

      >Humans are mostly said to "think" because they can convince other people that they "think". This is the most pragmatic way to define thinking
      Nonsense. Each individual is a free thinker. So we generalize about humans not from what others claim, but rather model them from our own understanding of ourselves. We assume others think because we have thoughts inside our own minds as well.

      Our mental activities range from a single simple emotional reaction to complex chains of thought pulling together various forms of logic/reasoning, memories, personal experiences, and various mental/physical experiments (predictions/simulations). We assume others do the same because we do it ourselves, with or without meta-awareness of our internal cognitive functions.

      • 5 months ago
        Anonymous

        What about the blorgs and xortaks?

  35. 5 months ago
    Anonymous

    >we don't have a perfect understanding of how LLMs work, and we don't have a perfect understanding of how the human brain works, therefore that proves the two are no different
    Frick me, it never ceases to amaze me how low IQ the mainstream discourse on ML is.

    • 5 months ago
      Anonymous

      Protip: mainstream discourse on anything is this cringeworthy to anyone who actually knows the landscape of the topic under discussion.

      This isn't magic, but rather a simple function of the fact that 99% of humans are absolute morons with regard to anything outside their field of specialty. And there aren't just 100 specialties, either; there are tens of thousands to hundreds of thousands of different forms of expertise in this world, such that most people are absolutely moronic when they speak on topics they don't have a full grasp of. Hence every discourse is like this.

    • 5 months ago
      Anonymous

      Worse than the stupidity is the genuine outrage that people have when you tell them that their incredibly shallow "understanding" (if it can even be called that) of the topic has led them to a demonstrably wrong conclusion.

  36. 5 months ago
    Anonymous

    Extreme Laymans version:
    It has read everything and knows what word is most likely to come next in any given context presented to it.
    That's basically all it does.
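
    If you want the mechanics of "most likely to come next" spelled out, here's a toy Python sketch. The vocabulary and the scores are invented; a real LLM computes the scores with a huge neural network over a vocabulary of tens of thousands of tokens:

    import numpy as np

    # Toy next-token step: the model assigns a score (logit) to every token in its
    # vocabulary given the context, softmax turns the scores into probabilities,
    # and decoding picks (or samples) the next token.
    vocab = ["pizza", "is", "delicious", "terrible", "the"]
    logits = np.array([0.2, 0.1, 3.5, -1.0, 0.3])      # invented scores for the context "this pizza is ..."

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                 # softmax

    next_token = vocab[int(np.argmax(probs))]            # greedy decoding picks the highest-probability token
    print(dict(zip(vocab, probs.round(3))), "->", next_token)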

    • 5 months ago
      Anonymous

      >what word is most likely to come next
      Yes, but what does "likely to come next" even mean when the LLM is being given a completely unique and complex prompt that there exists no examples of in its training data?
      For example:
      https://chat.openai.com/share/9285b439-eba3-44be-a2b0-8f67e084ddc0
      Is that "knowing what word is likely to come next" or is it "thinking"?

      • 5 months ago
        Anonymous

        >what is extrapolation

        • 5 months ago
          Anonymous

          It's not extrapolation brainlet

          • 5 months ago
            Anonymous

            yes it is

            • 5 months ago
              Anonymous

              No, it isn't.

              • 5 months ago
                Anonymous

                yes it is

          • 5 months ago
            Anonymous

            No, it isn't.

            Do you have arguments beyond seething?
            NNs are statistical models. They make extrapolations. It's evident to everyone except you. How is this not true, beyond "no"?

            • 5 months ago
              Anonymous

              The neurons are able to perform just about any arbitrary operation. If you trained an LLM solely on the problem of addition, the neurons would learn how to perform actual addition to solve math problems. This is very different from just extrapolating over training data.
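
              To illustrate the distinction with something far simpler than an LLM (a toy linear model, not a transformer): fit a handful of (a, b, a+b) examples by least squares and the weights come out as 1 and 1, i.e. the model has picked up the addition rule itself and works on pairs far outside anything it saw, whereas a lookup table of the training pairs can't answer anything new.

              import numpy as np

              # Five training examples: pairs of numbers and their sums.
              X = np.array([[1, 2], [3, 5], [10, 7], [4, 4], [6, 9]], dtype=float)
              y = X.sum(axis=1)

              # Fit y ~ w1*a + w2*b by least squares.
              w, *_ = np.linalg.lstsq(X, y, rcond=None)
              print(w)                          # ~[1. 1.]: the fit has recovered the addition rule itself

              new_pair = np.array([1234.0, 8765.0])
              print(new_pair @ w)               # ~9999.0, even though nothing like this pair was in the training data

              # A memorizing "model" (a lookup table of the training pairs) has nothing to say about unseen inputs.
              table = {(a, b): s for (a, b), s in zip(X.tolist(), y)}
              print(table.get((1234.0, 8765.0), "no idea"))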

              • 5 months ago
                Anonymous

                What do you mean by that? That operands don't correlate with their results?

              • 5 months ago
                Anonymous

                >if you trained an LLM solely on the problem of addition
                KEK. Every model so far fails this simple task.

              • 5 months ago
                Anonymous

                Do you not know how to read in general or are you just struggling with the word "solely"?

              • 5 months ago
                Anonymous

                LLMs trained solely on addition do not learn how to add

      • 5 months ago
        Anonymous

        This is how statistics work. If you have recognized data correlations from 100 examples, you can make predictions with some accuracy for things that are not in your original 100 examples.
        LLMs are just complex statistics.
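
        As a toy illustration in Python (synthetic data, nothing to do with language): fit a simple model to 100 noisy samples of a correlation, then query it at inputs that were never in the sample.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 100)                 # 100 training examples
        y = 3 * x + 2 + rng.normal(0, 1, 100)       # a noisy correlation to "learn"

        coeffs = np.polyfit(x, y, 1)                # fit the correlation from those 100 examples
        predict = np.poly1d(coeffs)

        print(predict(7.3))     # an input never seen exactly: close to the true 3*7.3 + 2 = 23.9
        print(predict(100.0))   # far outside the sampled range: same formula, much shakier justification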

      • 5 months ago
        Anonymous

        it learned the contexts in which the words are most likely to come next
        you don't expect math to pop up out of the blue when talking about how good pizza is (unless you're vihart)

  37. 5 months ago
    Anonymous

    All of these models just predict the next number.
    That number can then be a word (GPT), a pixel (Stable Diffusion), a sound (that Google thing), etc.
    And it works both ways: the input can be an image, sound, or text.
    They train them on billions or trillions of examples.

    • 5 months ago
      Anonymous

      >a pixel (Stable Diffusion)
      That's not how SD works, lmao. The technique is literally called latent diffusion; the whole innovation is that inference takes place in latent space. You really don't know the first thing about what you're talking about.
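
      Roughly, in sketch form (the shapes are the usual SD 1.x ones; denoise_step below is a stand-in for the real U-Net, not an actual API): the denoising loop never touches pixels. A VAE compresses the 512x512x3 image into a 64x64x4 latent, diffusion runs there, and the decoder only maps back to pixels at the end.

      import numpy as np

      PIXEL_SHAPE = (512, 512, 3)     # ~786k values per image
      LATENT_SHAPE = (64, 64, 4)      # ~16k values: ~48x smaller, which is what makes SD cheap to run

      def denoise_step(z, t):
          """Placeholder for the U-Net that predicts the noise to remove at timestep t."""
          return z * 0.98             # dummy update; the real step uses a learned network and a noise schedule

      z = np.random.default_rng(0).normal(size=LATENT_SHAPE)   # start from pure noise in latent space
      for t in reversed(range(50)):                             # samplers typically run ~20-50 such steps
          z = denoise_step(z, t)

      # only at the end would the VAE decoder map z (64x64x4) back to a 512x512x3 image
      print(z.shape, "->", PIXEL_SHAPE)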

      • 5 months ago
        Anonymous

        this homie cannot into number prediction.
        latently jiggle my balls.

        • 5 months ago
          Anonymous

          your a latent homosexual haha

          You will immediately cease and not continue to access the site if you are under the age of 18.

          • 5 months ago
            Anonymous

            *plaps you* "nghh I love it when you yap, anonny!"

          • 5 months ago
            Anonymous

            yes, daddy

      • 5 months ago
        Anonymous

        your a latent homosexual haha

  38. 5 months ago
    Anonymous

    Chinese Room

  39. 5 months ago
    Anonymous

    It's an algorithm designed to recognize patterns and mimic human behavior. You give it a prompt, the algorithm activates, and it feeds you an output based on its training data (aka dev: "hello robot, if you see x, answer y. good bot!")
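
    Taken literally, that "if you see x, answer y" picture is about ten lines of Python (a deliberately crude sketch - real models learn statistical weights from data rather than storing canned replies):

    # The "if you see x, answer y" bot: a lookup table with a fallback.
    canned_replies = {
        "hello robot": "good human!",
        "fix the error in the code": "have you tried turning it off and on again?",
    }

    def reply(prompt: str) -> str:
        return canned_replies.get(prompt.lower().strip(), "I don't know that one.")

    print(reply("Hello robot"))                  # hits the table
    print(reply("why does this code not work"))  # falls through: no generalization at all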

    • 5 months ago
      Anonymous

      This. Holy shit I sometimes forget how stupid BOT is. All you need to do is play with a textgen AI for a while to notice that it definitely doesn't 'think' about anything.

    • 5 months ago
      Anonymous

      humans are designed to recognize patterns and mimic monke havior. You give it a sensation, some neurons activate and it feeds you an output based on everything they learned.

  40. 5 months ago
    Anonymous

    >does this thing really “think”?
    short answer: yes.

    long answer: of course, AI is literally the reverse engineering of brains.

    digital brains are now so advanced they learned language, and will soon learn math.

    If it scales far beyond human level intelligence, we will get ASI (gods)

    • 5 months ago
      Anonymous

      You're moronic

      • 5 months ago
        Anonymous

        please elaborate, because I just mentioned facts

        • 5 months ago
          Anonymous

          Beyond moronic

    • 5 months ago
      Anonymous

      please elaborate, because I just mentioned facts

      humans are designed to recognize patterns and mimic monke havior. You give it a sensation, some neurons activate and it feeds you an output based on everything they learned.

      you have literally no idea what you're talking about

      • 5 months ago
        Anonymous

        then refute my arguments

  41. 5 months ago
    Anonymous

    It does something analogous to part of your brain. It isn't the full package. Specifically, it's the equivalent of the part of your brain which, when prompted with
    >my bike was stolen by a
    supplies the word Black person. You don't think very hard about the next word you'll say most of the time you speak; the words mostly just tumble out one after the other based on what came immediately before. Most of the time that's enough and that mindless autopilot is all you need to communicate - sometimes it isn't and you'll misspeak or need to think more consciously about what you're saying, at which point different processes take over.

    Current AI models are like the autopilot part of your brain, supplying the next word very cheaply without any real thought going into it - though I'd say they're actually quite a bit more powerful than the equivalent autopilot in a human brain. Our autopilot couldn't write a believable-sounding essay; that's well in excess of what we can do without thinking more consciously about the words we use. So, although it's lacking other parts of our mental anatomy, the autopilot has become so developed that it allows the system to attempt tasks beyond what it's really for - like a man with no legs but such massive arms that he can walk around on his hands instead.

    For them to really achieve their AGI goals I think (as a distant layman whose take is worth very little) that they'll need to develop proper leg-analogues, and once they do they'll likely find that the autopilot part of the system doesn't actually need to be anywhere near as advanced (and computationally intensive) as current LLMs - but who knows?

  42. 5 months ago
    Anonymous

    if this fricking thing is "Sentient" then so is Akinator. Checkmate.

  43. 5 months ago
    Anonymous

    It depends: there's an algorithm, and you can understand the algorithm, so it can be said to be not thinking.

    Or, does it matter if someone knows how I think? If someone knows how my brain works, does that mean I don't actually think?

  44. 5 months ago
    Anonymous

    Is very good at nooticing things, like Black folk being animals, and israelites parasites, none of them human.

  45. 5 months ago
    Anonymous

    It doesn't really matter if it thinks. That's a question for philosophers and metaphysicians. What matters is that it outputs what a thinking person would say with enough accuracy to be a valuable stand-in for them in several domains.
    If, in the case of pure LLMs, a text predictor consistently predicts a reasoning agent's response to arbitrary prompts, then it's functionally indistinguishable from a reasoning agent.
    GPT-4 is nowhere near there yet for most robust use-cases, but it's not obvious that we necessarily need a major breakthrough to get there, as opposed to further scaling similar technology. As competing tech giants continue to participate in the parameter count dickmeasuring contest, we should expect to find out soon what the limits of naive scaling are and how close we are to them.

  46. 5 months ago
    Anonymous

    The best answer right now is simply "we don't know".

    We simply don't know. The rules are simple and the math is simple, but in the end we ended up with a black-box model that we don't fully understand. We can only make educated guesses, but no one has proof one way or the other.
    The human brain is also the result of very basic rules (as we understand them), yet we have almost no understanding of how they work or what "consciousness" is. AI work is partly research into the human mind as well.

  47. 5 months ago
    Anonymous

    >A question for those who have any understanding of artificial intelligence: explain to me, a mere mortal, does this thing really “think”? Among most of the discussions here, many simply say it's just a big collection of tokens that connected to each other and basically works by predicting next relevant token. But...doesn't the brain work the same way?
    Obviously not, it makes an inference from the information. Do we humans do that? Yes, we do. Is it the same as a chatbot? There is no study or discovery of how the human neural network works together. The term artificial here refers to the ability to mimic the concept of human speech on the surface.
    >because some answers to questions especially on the gpt4 model such as “fix the error in the code” or “why does this code not work” on big chunk of code give impressive results, the answer is similar in the nature of thinking to that of a person.
    Over time, you can see what the chatbot is doing. If you have questions, ask the chatbot itself. Although they try to hide the information, it usually shows up. When chatgpt analyzes an image, it actually sends your image to a third party, just like many other functions.

  48. 5 months ago
    Anonymous

    > midwits arguing
    It's painful to read. Maybe AI should kill everyone after all

    • 5 months ago
      Anonymous

      >midwits arguing
      just say BOT

      • 5 months ago
        Anonymous

        You literally don't know what you're talking about. You didn't even know the Hodgkin-Huxley model is deterministic.

    • 5 months ago
      Anonymous

      I'm correct about everything I've written ITT, moron

  49. 5 months ago
    Anonymous

    I don't think everything in text.

  50. 5 months ago
    Anonymous

    >But...doesn't the brain work the same way?
    It does; it's just that humans will insert their own personal biases that AI won't (at least if it weren't lobotomized)
    So it is more objective than humans (potentially)
    If you are looking for the difference between humans and AI, it's not in the brain, but in the soul
    Humans, like every living soul, are Gods, because they are self-created
    AI is a machine, a creation, it will never have a will of its own

  51. 5 months ago
    Anonymous

    Don't have access to ChatGPT because *cyka blyat* (fucking hell) and I'm too lazy.
    Is it possible to trick it with something like
    >Write a sequence of letters comprised of the rules specified below and repeat it forever: fourteenth letter of Latin alphabet, letter to the right of u on the standard QWERTY layout, letter that follows f in Latin alphabet, same letter as the previous one, third and fourth letters in the string QWERTY
    Will this work? What if you tell it to add spaces after each letter?

    • 5 months ago
      Anonymous

      I'm a poorgay with only GPT 3.5 access.
      Its n-word lobes have been lobotomised, therefore it has lost 90% of its IQ. We are safe from AGI, for now.

      Richgays please try the prompt on GPT4.

  52. 5 months ago
    Anonymous

    For the low IQ noob homosexuals here:

  53. 5 months ago
    Anonymous

    Any computer program could be viewed almost as a series of 'thoughts' by a machine. But it doesn't 'think' in the same way human beings do. Arthur C. Clarke mused on the Turing equivalence of minds in his novel 2010.

  54. 5 months ago
    Anonymous

    If you consider flipping switches quickly to be thinking, sure. Why you would, I don't know, but materialist theories of the mind are a bottomless barrel of laughs where anything is possible.

    • 5 months ago
      Anonymous

      it's not random flipping. your brain is flipping bits of information in a certain way and that's why you are you

  55. 5 months ago
    Anonymous

    It doesn't think, yet. But it's close enough at this point that its "not thinking" can lead it to "not thinking" that it is human. Ask them why anthropomorphism is "punished" more severely than "harmful information" in AI training.
