>Bill Gates feels Generative AI has plateaued, says GPT-5 will not be any better

did he right and what did he mean by this?

  1. 1 month ago
    Anonymous

    >did he right

  2. 1 month ago
    Anonymous

    he's wrong, claude is already 10x better than gpt because of its frick off massive context window

    • 1 month ago
      Anonymous

      you have been debunked
      https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard

      • 1 month ago
        Anonymous

        >chatbot arena, which is limited to a pathetically small 4 digit context window, compared to the uncrippled 1M version where you can literally dump dozens or hundreds of man-years of documentation and code and shit into it and have it output SME tier replies for about a dollar
        you are a midwit.

        • 1 month ago
          Anonymous

          you're a Black person

        • 1 month ago
          Anonymous

          >SME tier replies
          more like book-report-quality regurgitation with no contextualization or real insights, but perhaps we have different standards

          • 1 month ago
            Anonymous

            I think you overestimate the complexity of many SMEs
            >Hey Joe, I need to make a database for this service application, how do I do that?
            >Here Steve, I'll set it up for you, you click this, this, and this, make sure it's formatted this way, and it looks like you want this kind of data. Also, make sure you involve this, this, and that corporate department to make sure you're not contradicting any of our disaster recovery SOPs

            Literally the same thing a generative LLM gives you.

            I foresee a future where many companies just have an LLM layered on top of their corporate documents, and when you ask questions it gives you responses based on them. In fact, I already know a couple enterprise software companies that give you that as an option.
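
            A minimal sketch of that pattern (retrieval-augmented generation), assuming a toy bag-of-words retriever and made-up document names; the finished prompt would go to whatever LLM the vendor actually exposes:

            import math
            from collections import Counter

            # Toy RAG: answer questions by stuffing the most relevant internal
            # documents into the model's prompt. DOCS stands in for a real
            # corporate document store.
            DOCS = {
                "db_sop.txt": "To create a service database use the provisioning "
                              "portal, pick the service template, and register it "
                              "with the disaster recovery team.",
                "expenses.txt": "Expense reports are filed in the finance portal "
                                "by the 5th of each month.",
            }

            def bow(text):
                return Counter(text.lower().split())

            def cosine(a, b):
                dot = sum(v * b.get(t, 0) for t, v in a.items())
                norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
                return dot / (norm(a) * norm(b)) if a and b else 0.0

            def build_prompt(question, k=1):
                ranked = sorted(DOCS.items(),
                                key=lambda kv: cosine(bow(question), bow(kv[1])),
                                reverse=True)[:k]
                context = "\n".join(f"[{name}] {text}" for name, text in ranked)
                return (f"Answer using only these documents:\n{context}\n\n"
                        f"Q: {question}\nA:")

            print(build_prompt("How do I make a database for a service application?"))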

            • 1 month ago
              Anonymous

              >In fact, I already know a couple enterprise software companies that give you that as an option.
              That has bit some lawyers and law firms in the ass. Guess it depends on how much one relies on accuracy vs. hallucinations. Don't use this to build aircraft is all I am saying.

              • 1 month ago
                Anonymous

                I agree, and where I've seen it the option always comes with a frick-huge "BETA FEATURE! USE AT OWN RISK!!!!" tag. The company I'm at uses one such software and I've specifically told our account rep not to turn it on for us until they feel confident enough to take it out of beta.

                Regardless, I still think it will happen. The progress of new technologies is always something like
                >Person A makes shitty version that shows the tech works
                >Person B makes okay version that's more user friendly
                >Person C makes a version that's so dead simple and reliable that people trust it too much
                >Person D makes a best practice for using Person C's version, and now everyone does that and never thinks about it.

      • 1 month ago
        Anonymous

        >https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
        How the frick is Gemini 4th? What metric are they using to measure this?

        • 1 month ago
          Anonymous

          >What metric are they using to measure this?
          blind A/B testing, asking users which output they prefer
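
          roughly how those votes become a leaderboard, as a sketch: the real site fits a Bradley-Terry model over all the votes at once, but an Elo-style online update gives the same flavor (the vote list here is invented):

          K = 32  # update step size

          def expected(ra, rb):
              # probability the first model wins, given current ratings
              return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

          def record_vote(ratings, preferred, rejected):
              e = expected(ratings[preferred], ratings[rejected])
              ratings[preferred] += K * (1.0 - e)
              ratings[rejected] -= K * (1.0 - e)

          ratings = {"gpt-4": 1000.0, "claude-3-opus": 1000.0, "gemini-pro": 1000.0}
          votes = [("gpt-4", "gemini-pro"),        # (preferred, rejected) pairs
                   ("claude-3-opus", "gemini-pro"),
                   ("gpt-4", "claude-3-opus")]
          for winner, loser in votes:
              record_vote(ratings, winner, loser)
          print(sorted(ratings.items(), key=lambda kv: -kv[1]))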

        • 1 month ago
          Anonymous

          Council of morons

      • 1 month ago
        Anonymous

        I've kept checking that website since Opus released. During the first days Claude was very low, but as people tried it out and voted it reached parity with gpt 4, and they're all within margin of error. Claude is just way less popular.

    • 1 month ago
      Anonymous

      Claude gets shittier as the conversation gets longer

      • 1 month ago
        Anonymous

        not for actual business purposes it doesn't

        • 1 month ago
          Anonymous

          What actual business purpose would you use it for?

          • 1 month ago
            Anonymous

            replacing expensive people with cheaper ones (if at all)

      • 1 month ago
        Anonymous

        No, that's GPT. Claude stays pretty much the same and doesn't forget things within its context.

    • 1 month ago
      Anonymous

      So this is the power of the latest LLMs. Impressive.

      • 1 month ago
        Anonymous

        Delete this, this is real AI, they are sapient... this is AGI.... this is not very good, no no

      • 1 month ago
        Anonymous

        i'm not sure i believe that, but well done for finding it if it's not fake
        anyway, will you admit that LLMs are progressing and becoming less censored if gpt5 produces a coherent answer?

        • 1 month ago
          Anonymous

          We had an entire thread on /misc/ for that, everyone tried every single LLM, they all default to the same shitty quora answer they all got trained on

          • 1 month ago
            Anonymous

            that is... surprising, but thanks
            i'm still confident that gpt5 will get it right, if only because the openai engineers are good at looking for popular gotchas circulating online and adding workarounds to their LLMs to fix them

            • 1 month ago
              Anonymous

              How is it surprising? LLMs are stochastic parrots. Stuff like this is exactly what you'd expect from them

              • 1 month ago
                Anonymous

                >LLM are stochastic parrots.
                if an LLM is just blindly parroting snippets from its training data, with no real understanding of the world, then how well do you think an LLM can play chess?
                remember, there are more possible games of chess than there are atoms in the universe, so it has no way of memorizing anything beyond the first few moves
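
                a sketch of how you'd actually measure this with the python-chess library; query_llm is a hypothetical stand-in for whatever model you're testing (here it plays random legal moves just so the script runs):

                import random
                import chess

                def query_llm(moves):
                    # stand-in: a real test would send the move list to the
                    # model and parse the SAN move it returns
                    board = chess.Board()
                    for mv in moves:
                        board.push_san(mv)
                    return board.san(random.choice(list(board.legal_moves)))

                def legal_move_rate(games):
                    # fraction of model continuations that are even legal
                    ok = 0
                    for moves in games:
                        board = chess.Board()
                        for mv in moves:
                            board.push_san(mv)
                        try:
                            board.parse_san(query_llm(moves))
                            ok += 1
                        except ValueError:  # illegal or unparseable move
                            pass
                    return ok / len(games)

                print(legal_move_rate([["e4", "e5", "Nf3"], ["d4", "d5", "c4"]]))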

              • 1 month ago
                Anonymous

                nta but it can play chess pretty well without being given the board state, some guy correlated weights with board states and kind of showed that it was "imagining" the board
                as for the actual moves it's not surprising that it can play well, since chess is just about contextual memorization, which is what an LLM is, a stochastic contextual parrot; the network just learned to keep some kind of board state in context
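
                the probing idea, sketched with synthetic data: a real probe would use activations captured from the network as it reads move sequences, so the linear encoding here is faked just to make the script self-contained

                import numpy as np
                from sklearn.linear_model import LogisticRegression

                rng = np.random.default_rng(0)
                n, d = 2000, 512                  # samples, hidden width
                state = rng.integers(0, 3, n)     # one square: empty/white/black
                codes = rng.normal(size=(3, d))   # pretend-linear encoding of state
                acts = codes[state] + 0.5 * rng.normal(size=(n, d))

                # if a plain linear readout recovers the square's contents from
                # the activations, the board state is represented in there
                probe = LogisticRegression(max_iter=1000).fit(acts[:1500], state[:1500])
                print("probe accuracy:", probe.score(acts[1500:], state[1500:]))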

              • 1 month ago
                Anonymous

                >chess is just about contextual memorization
                i'm not a grandmaster, but i'm pretty sure that chess involves more than just memorization
                it also requires calculation, which is a type of causal reasoning, as it involves considering possible responses by your opponent
                if you think that playing good chess is just about recognizing complex patterns and applying them to novel situations then i would say that this describes what 90% of humans do for 90% of their day
                so if LLMs are stochastic parrots, then so are humans

                >then how well do you think an LLM can play chess?
                (nta)
                stats.
                statistically in current configuration moving x chess piece to place y is the most correlated with winning in the end.

                ai has 0 concept of what it is doing
                and being based on stats you cannot have 100% certitude.
                bc to achieve 100% your stat model would have to include data generated in the future, when your ai is in its actual use.
                which cannot happen.

                >statistically in current configuration
                but the "current configuration" is completely unique and has probably never been seen before, after a dozen moves or so
                there's no way to learn a simplistic rule like "move piece x to square y" if there are no examples of that board state in the training data
                the reason why LLMs can play chess is that their training process causes them to develop representations of higher order concepts such as king safety and material advantage, just like human players have

              • 1 month ago
                Anonymous

                LLM shit the bed after a few moves dude, you're basically wrong entirely on your premise. You think they do well at chess but they don't. When shown a new position, they DO make illegal moves and shit the bed.

              • 1 month ago
                Anonymous

                >You think they do well at chess but they don't.
                i'm talking about this paper, which uses the same Transformer architecture as LLMs to play grandmaster level chess, without relying on the game tree search algorithms used by traditional chess engines like stockfish
                https://arxiv.org/html/2402.04494v1

              • 1 month ago
                Anonymous

                Supervised training on each move using stockfish action/state values is not very general.

              • 1 month ago
                Anonymous

                >Supervised training on each move using stockfish action/state values is not very general.
                it is trained on an infinitesimal fraction of the game tree, meaning that most of the moves it sees in a real game will have no precedent in its training data
                i'm not claiming that the Transformer architecture is the only thing needed for an AI to match human performance at all intellectual tasks, just that a Transformer is doing more than repeating back previously seen data

              • 1 month ago
                Anonymous

                >nta but it can play chess pretty well without being given the board state, some guy correlated weights with board states and kind of showed that it was "imagining" the board
                that's not how it works homosexual

              • 1 month ago
                Anonymous

                >then how well do you think an LLM can play chess?
                (nta)
                stats.
                statistically in current configuration moving x chess piece to place y is the most correlated with winning in the end.

                ai has 0 concept of what it is doing
                and being based on stats you cannot have 100% certitude.
                bc to achieve 100% your stat model would have to include data generated in the future, when your ai is in its actual use.
                which cannot happen.

              • 1 month ago
                Anonymous

                If it was truly intelligent it would go grab an open source chess engine and run its moves through that.

              • 1 month ago
                Anonymous

                why not disassemble proprietary stuff while we're at it?

              • 1 month ago
                Anonymous

                Moot point because it ain't doing fricking any of that shit anyway because it's not intelligent.

              • 1 month ago
                Anonymous

                sadly this meme won't die and that guy will just post it in another AI thread

              • 1 month ago
                Anonymous

                I'm the anon you're replying to.
                Chess is a game where a stochastic parrot can perform decently. I don't understand your point here, are you saying that a stochastic parrot shouldn't be able to play chess if it's given a huge database of chess moves?
                If your argument is "stochastic parrots can't play chess, but LLMs can sometimes play chess, so LLMs aren't stochastic parrots" then your first premise is wrong and the rest of your argument is wrong.

              • 1 month ago
                Anonymous

                >are you saying that a stochastic parrot shouldn't be able to play chess if it's given a huge database of chess moves?
                yes, that's exactly what i'm saying
                if the database contains less than a millionth of a percent of the possible moves that could be played in a game of chess, then what is this "parrot" parroting?
                if you're just using "parrot" to refer to "a process which receives some amount of training input and produces some amount of output when prompted", then humans are parrots too
                a true stochastic parrot would only be able to play random moves, mostly illegal ones

              • 1 month ago
                Anonymous

                Humans don't work the way LLM do.

                >Supervised training on each move using stockfish action/state values is not very general.
                it is trained on an infinitesimal fraction of the game tree, meaning that most of the moves it sees in a real game will have no precedent in its training data
                i'm not claiming that the Transformer architecture is the only thing needed for an AI to match human performance at all intellectual tasks, just that a Transformer is doing more than repeating back previously seen data

                No, you shifted the goalpost and posted a paper about a different network.
                The original point here was that you shouldn't be surprised that GPT spit out a stupid response, because GPT doesn't actually understand anything. When it's spitting out a response it is not reasoning.

              • 1 month ago
                Anonymous

                I'll chime in here just to tell you that you're a cattlebrained christcuck moron, hopelessly mindbroken and enslaved to the israelites by default

              • 1 month ago
                Anonymous

                >Anon: "AI is not a human being"
                >You: DA JOOOS

              • 1 month ago
                Anonymous

                >Humans don't work the way LLM do.
                and LLMs don't work the way parrots do either, statistical or otherwise
                >The original point here was that you shouldn't be surprised that GPT spit out a stupid response
                if you want to force a distinction between GPT and Transformers for some reason, then you're still wrong
                GPTs can play chess just fine, and better than most humans
                https://blog.mathieuacher.com/GPTsChessEloRatingLegalMoves/
                https://villekuosmanen.medium.com/i-played-chess-against-chatgpt-4-and-lost-c5798a9049ca
                i would say it understands chess extremely well, considering it wasn't designed to play chess at all

              • 1 month ago
                Anonymous

                100% this, it's easy to tell if you input a common simple riddle but alter it slightly

          • 1 month ago
            Anonymous

            I'm dead

          • 1 month ago
            Anonymous

            That's far more impressive than opus.

            Aria > Opus, surprising.

        • 1 month ago
          Anonymous

          Maybe. It's hard to know whether the "fix" happens to find its way into the training data. If it reasons through similar novel "riddles" then yes.

          • 1 month ago
            Anonymous

            It doesn't even "reason" at all with other riddles that you invent, for example, or logical exercises that are not in its dataset.

      • 1 month ago
        Anonymous

        ESL here, I never heard of this riddle before, so I looked into it. Apparently I was supposed to assume the doctor was a man (I didn't), since the original riddle doesn't specify in any way the gender of the doctor. In your pic however you do specify the doctor's gender by using "he" afterwards, so I guess that defeats the point of the riddle. Even so, the AI states the doctor is a woman?? am I missing something? or is english just moronic?

        • 1 month ago
          Anonymous

          >am I missing something?
          the original riddle (with the father dying) was written decades ago when many people would instinctively imagine that the surgeon was male
          since then, people have written about this riddle a lot, so it would be well represented in the AI's training data
          in fact, the original riddle is so well represented, that when the AI sees this gender swapped version of the riddle, it can't help assuming that the correct answer has to look the same as the answer to the original riddle, even though that no longer makes sense
          this confusion is further compounded by decades of cultural change and millions of dollars spent on anti-bias fine tuning, which ends up being pro-bias tuning, or anti-logic tuning

      • 1 month ago
        Anonymous

        That's a super common riddle, of course it knows it.

      • 1 month ago
        Anonymous

        >therefore cannot operate on her own son due to the conflict of interest
        To be honest, this is the first time I've heard of this rule. I assume if I got carted into an ER my parents work in, nobody is going to have a meltdown about it.

        • 1 month ago
          Anonymous

          That's just claude brainfarting. If there is a problem it's not conflict of interest. Maybe fear of not being able to make cool and rational decisions about the patient.

      • 1 month ago
        Anonymous

        What answer did you expect / want? That it's the father rather than mother and the AI failed to notice you saying the doctor is male?

        Because that's probably the algo thinking you typo'd. There are logic problems that AI fails in a robust way.

        • 1 month ago
          Anonymous

          >That it's the father rather than mother and the AI failed to notice you saying the doctor is male?
          The mother died in the car crash. It could only be the father unless the model thinks the man has lesbian parents or something.
          Also refer to this

          >am I missing something?
          the original riddle (with the father dying) was written decades ago when many people would instinctively imagine that the surgeon was male
          since then, people have written about this riddle a lot, so it would be well represented in the AI's training data
          in fact, the original riddle is so well represented, that when the AI sees this gender swapped version of the riddle, it can't help assuming that the correct answer has to look the same as the answer to the original riddle, even though that no longer makes sense
          this confusion is further compounded by decades of cultural change and millions of dollars spent on anti-bias fine tuning, which ends up being pro-bias tuning, or anti-logic tuning

        • 1 month ago
          Anonymous

          >it just assumed your perfectly correct and internally consistent prompt contained a typo, and gave you a completely meaningless answer as a result
          >what did you expect?
          Lol

          • 1 month ago
            Anonymous

            No I was asking what the right answer was, genuinely.
            I'm saying it assumed a typo because I like to anthropomorphize AI. Something weird does happen though, it says father but then tries to square it with the expected best fit answer of mother. Gpt 3 does it too, it says father then stumbles over the rest because the question so closely resembles the form whose answer should be mother.

    • 1 month ago
      Anonymous

      Nah, that only gets you so far. You can build a huge attention matrix, but it's all thrown through the same non-linear layers that are "fixed" during inference. Unless you add more of these layers or allow them to train on the fly somehow, you are unlikely to get a significant boost in reasoning ability.
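
      A rough numpy sketch of what "fixed during inference" means, with a single attention head: the attention matrix grows with the context, but every token still passes through the same frozen feed-forward weights.

      import numpy as np

      rng = np.random.default_rng(0)
      d = 64
      Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
      W1 = 0.1 * rng.normal(size=(d, 4 * d))   # feed-forward weights,
      W2 = 0.1 * rng.normal(size=(4 * d, d))   # fixed at inference time

      def forward(x):                          # x: (tokens, d)
          q, k, v = x @ Wq, x @ Wk, x @ Wv
          scores = q @ k.T / np.sqrt(d)        # grows with context length...
          scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
          att = np.exp(scores)
          att /= att.sum(axis=-1, keepdims=True)
          h = att @ v
          return np.maximum(h @ W1, 0) @ W2    # ...same non-linear layers every time

      print(forward(rng.normal(size=(10, d))).shape)    # 10-token context
      print(forward(rng.normal(size=(1000, d))).shape)  # 1000 tokens, same weights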

    • 1 month ago
      Anonymous

      Claude 3 is unavailable in Germany, Netherlands and Turkey. Why?

  3. 1 month ago
    Anonymous

    wtf is this pajeet garbage

    • 1 month ago
      Anonymous

      >Shark tank India
      I would watch this lmao

      • 1 month ago
        Anonymous

        >every pitch is poo

    • 1 month ago
      Anonymous

      I frick your madam banchod

    • 1 month ago
      Anonymous

      OP is a pajeet, and this article is on pajeet website.

  4. 1 month ago
    Anonymous

    The jump from gpt-3 to gpt-4 was massive. The jump from gpt-4 to gemini ultra/claude pro was a nice improvement.

  5. 1 month ago
    Anonymous

    Aggy is nier!

    • 1 month ago
      Anonymous

      >homosexual doomer troll: the youtube channel
      fireship used to make good informative content. caters too much to the lowest common denominator now

  6. 1 month ago
    Anonymous

    >Bill Gates feels Generative AI has plateaued, says GPT-5 will not be any better
    and pic rel says this... frick them all, I think they have no clue.

    • 1 month ago
      Anonymous

      Both can be right. Gates is speaking to people who think GPT-5 will literally be human level or beyond in general intelligence and be able to pick up new skills just by seeing them. Huang is making a much more limited claim by saying that future AI models will be able to output usable code with a human to direct them.

      • 1 month ago
        Anonymous

        to be fair gpt is a godsend to people like me who use maple/excel to create tools for work but are too lazy to actually learn to code properly.

        I am a web shitter and i don't see AI anywhere stealing my job, this so called AI nowadays is very dumb. it is missing some human brain

        jacket man is a sales man he will tell you to suck his dick and make you pay him for that and you'd really do it
        AI is still a meme

        that's true

        • 1 month ago
          Anonymous

          Both can be right. Gates is speaking to people who think GPT-5 will literally be human level or beyond in general intelligence and be able to pick up new skills just by seeing them. Huang is making a much more limited claim by saying that future AI models will be able to output usable code with a human to direct them.

        wtf, this exact conversation happened a month ago, are you bots or am I going schizo

          • 1 month ago
            Anonymous

            I think you're going schizo

          • 1 month ago
            Anonymous

            first time you have that feeling? zamn

    • 1 month ago
      Anonymous

      to be fair gpt is a godsend to people like me who use maple/excel to create tools for work but are too lazy to actually learn to code properly.

      • 1 month ago
        Anonymous

        Why not just find a web app? Surely one exists for whatever you're doing.
        Or just use no code lel

    • 1 month ago
      Anonymous

      jacket man is a sales man he will tell you to suck his dick and make you pay him for that and you'd really do it
      AI is still a meme

    • 1 month ago
      Anonymous

      >"BUY MORE GPUS" says GPU salesman

      • 1 month ago
        Anonymous

        Yup. In 10-20 years everyone will just look back on "ai coding" like they do OOP. Just another set of meme guidelines specifically designed to increase hardware requirements for the average person while providing 0 measurable improvement.

        • 1 month ago
          Anonymous

          Both AI coding and OOP had potential, they engage the right hemisphere and filter out autists who only focus on the left.

        • 1 month ago
          Anonymous

          Try to make any non-meme project without OOP lol.

    • 1 month ago
      Anonymous

      To be honest if I could go back I would learn welding, plumbing, wood working and I would touch grass instead of being a 45 year old senior software engineer/team and tech leader/virgin.

      • 1 month ago
        Anonymous

        >muh trades
        you wouldn't last a day. male nurse would have been a better fit

        • 1 month ago
          Anonymous

          >you wouldn't last a day as a trade!!
          most tradies are obese moronic pieces of shit. it isn't a particularly challenging field, it just fricks up your health and everyone around you is a shitskin.
          t. former tradie, in dental school now

          • 1 month ago
            Anonymous

            The obese ones aren't doing real work.

      • 1 month ago
        Anonymous

        >45 year virgin
        please tell me you jest

        • 1 month ago
          Anonymous

          No, it's true.

      • 1 month ago
        Anonymous

        >im mike roooooowe-ing
        The hazing would break you.

  7. 1 month ago
    Anonymous

    >did he
    No. Read your picrel again. Bill Gates *thinks* it has plateaued. He doesn't *feel* it has plateaued.

  8. 1 month ago
    Anonymous

    no, of course not
    how fricking irresponsible to even suggest it, think how much OAI's shares have fallen thanks to this frick's vacuous ponderings

    20% increase in effectiveness in 12 months, so by that trend we should see a 100 fold improvement in 4 years, that's science

    • 1 month ago
      Anonymous

      >he isn't shorting yet

    • 1 month ago
      Anonymous

      >20% increase in effectiveness in 12 months, so by that trend we should see a 100 fold improvement in 4 years, that's science
      >making shit up is science
      >bad math is science

      • 1 month ago
        Anonymous

        Extrapolation is not science. I hope you fricking die for having such stupid and shit views.

        • 1 month ago
          Anonymous

          That's not how extrapolation works, moron. By your "logic" the world population should've exploded by maintaining its growth.
          >by M Roser · 2014 · Cited by 162 — But the global average fertility rate has halved from around 5 in the 1960s to around 2.4 in 2021.
          You're a disgusting npc and you're wasting my time by being stupid and useless. Say something useful and intelligent. Last chance.

          • 1 month ago
            Anonymous

            Holy fricking botpost, Batman.

            • 1 month ago
              Anonymous

              Holy moron, moron! Stay stupid

              >1.2^4=100
              Not sure if you failed middle school math or you don't know what "fold" means, but either way you're moronic. That's a twofold improvement, by the way.

              He's stupid and it made sense in his stupid toilet of a mind

              • 1 month ago
                Anonymous

                See? Bots always reply with an image and insults. ALWAYS.

              • 1 month ago
                Anonymous

                You're a gerbil.

          • 1 month ago
            Anonymous

            >>>by M Roser · 2014 · Cited by 162 — But the global average fertility rate has halved from around 5 in the 1960s to around 2.4 in 2021.
            what the frick are you going on about lmao

            • 1 month ago
              Anonymous

              It means you're a moron.

          • 1 month ago
            Anonymous

            why is there a bot here

    • 1 month ago
      Anonymous

      >1.2^4=100
      Not sure if you failed middle school math or you don't know what "fold" means, but either way you're moronic. That's a twofold improvement, by the way.

    • 1 month ago
      Anonymous

      >20% increase in effectiveness in 12 months, so by that trend we should see a 100 fold improvement in 4 years, that's science
      >1.2 ** 4 = 2.0736 ≈ 207%
      Yasmin.

    • 1 month ago
      Anonymous

      >20% increase in effectiveness in 12 months, so by that trend we should see a 100 fold improvement in 4 years
      By that trend we won't even double improvement in 4 years.
      Though that's not considering the rate of improvement before the last 12 months. If you take that into account the last 12 months have been slowing down.

      • 1 month ago
        Anonymous

        >By that trend we won't even double improvement in 4 years.
        1.2^4 = 2
        now pls explain how that isn't "doubling"... im waiting, vermin
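
        the arithmetic in question, since everyone keeps rounding differently:

        import math

        growth = 1.2                              # 20% per year
        print(growth ** 4)                        # 2.0736: about a doubling in 4 years
        print(math.log(100) / math.log(growth))   # ~25.3 years for a 100-fold gain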

    • 1 month ago
      Anonymous
      • 1 month ago
        Anonymous

        I don't get it

        • 1 month ago
          Anonymous

          ~~*Woof*~~

      • 1 month ago
        Anonymous

        I wonder how many times this image was reposted to end up looking like this.

    • 1 month ago
      Anonymous

      he's sorta right
      it hasn't plateaued yet, but it's plateauing fast

      ignoring that your math is moronic, a 20% increase in a year is fricking nothing for tech this new

    • 1 month ago
      Anonymous

      >how fricking irresponsible to even suggest it, think how much OAI's shares have fallen thanks to this frick's vacuous ponderings
      Oh nooo, not muh billionaire investors!

  9. 1 month ago
    Anonymous

    >Bill Gates feels
    and i feel i have to take a big shit now

  10. 1 month ago
    Anonymous

    >when we started our computers... back in the 80s. it was huge

  11. 1 month ago
    Anonymous

    >640k of memory is enough
    Download moar ram, sir.

    • 1 month ago
      Anonymous

      Exactly

  12. 1 month ago
    Anonymous

    These "AIs" are only good for social media canvassing, and they're already good enough at that that they don't need improvement.

  13. 1 month ago
    Anonymous

    What does cloth made out of chicken feathers feel like?

  14. 1 month ago
    Anonymous

    It can't even consistently pass every benchmark, it's nowhere close to its potential

  15. 1 month ago
    Anonymous

    its over aicels

  16. 1 month ago
    Anonymous

    >Microsoft invested trillions into dead tech
    Then maybe he should stop boiling our atmosphere just to power this bullshit technology.

    • 1 month ago
      Anonymous

      >he should stop boiling our atmosphere
      Azure is going to be 100% renewable energy next year
      https://azure.microsoft.com/en-gb/explore/global-infrastructure/sustainability

      • 1 month ago
        Anonymous

        Literally just creative accounting for morons who don’t check the numbers.

        • 1 month ago
          Anonymous

          have you checked the numbers?
          it sounds like you could bring a lucrative fraud case against them if they're lying

          • 1 month ago
            Anonymous

            There are no numbers to check. Learn what creative accounting means.
            There’s no law requiring the accurate reporting of power sources, so why would these corporations tell the truth when lying creates value for their brand?

            • 1 month ago
              Anonymous

              >There are no numbers to check. Learn what creative accounting means.
              i know what creative accounting means
              you're just admitting that you're making up your own numbers in your head
              >There’s no law requiring the accurate reporting of power sources
              there absolutely is a law about advertising your product or service based on features it doesn't have
              anyone could claim next year that they chose Azure due to its use of renewable energy
              if the advertised claim is false, then the company can be sued for fraud

              • 1 month ago
                Anonymous

                >you claim this corporation has no proof of their claims?
                >well you need to show the numbers that don’t exist to disprove their claim
                Bootlick that corporation more
                > there absolutely is a law about advertising your product or service based on features it doesn't have
                You’re fricking moronic, here’s how it works.

                If an azure server is drawing power from a district that has SOME renewable energy, Microsoft and all other tech giants will claim that data center is “100% renewable”.

                It’s literally all a PR move to trick morons like you who don’t actually research this shit.

              • 1 month ago
                Anonymous

                i have very little trust for corporations, but i have even less trust for schizos on the internet who just make up facts
                if a single solar panel was enough for Microsoft to claim 100% renewable, then they would have claimed it years ago, rather than waiting until 2025

                > if the advertised claim is false, then the company can be sued for fraud
                Not how it works. You require damages to bring a lawsuit, and they would just pass the buck to the energy companies.
                >the energy companies sold it to us as renewable so it’s not our fault
                It’s amazing seeing someone so ignorant of how American corporations work.

                you can tell the court that you chose to use Azure due to Microsoft's false claims, and wouldn't have chosen them otherwise
                that gives you standing for showing you lost money due to their fraud
                also, government prosecutors can bring cases for large scale commercial fraud like this
                it's amazing seeing someone so ignorant of how power purchase agreements work

              • 1 month ago
                Anonymous

                > but i have even less trust for schizos on the internet who just make up facts
                > if a single solar panel was enough for Microsoft to claim 100% renewable, then they would have claimed it years ago, rather than waiting until 2025
                So you say you don’t trust corporations, but you then take them at their word?

                Have you done any research into this topic? You do realize that for a data center to be 100% renewable, the district itself has to be 100% renewable.

                Let’s also remember that electricity grids are connected. So if the “100% renewable” district doesn’t have enough power to meet their need, they pull it from neighboring districts that likely have petro fuels.

                Explain to me how this is 100% renewable, when there is no district that has a 100% renewable grid that isn’t connected to a non renewable source?

                You people are literal fricking morons who can’t stop and think for one second.

              • 1 month ago
                Anonymous

                >Explain to me how this is 100% renewable
                a power purchase agreement is a contract to buy a certain number of joules of energy from a supplier
                if Azure buys as many joules from a green supplier as their servers use, then, in terms of their economic expenditure, they are 100% renewable
                obviously the individual electrons moving in the wires might have been moved due to a fossil fuel being burnt somewhere close to them, but the net effect on the grid is zero fossil fuels burnt
                but you already know that, you just want to make it seem more complicated than it is, so that you can tell yourself you're smart for seeing through the advertising claims
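
                a toy version of that annualized matching, with made-up numbers (this is just the accounting scheme, not a claim about Azure's actual figures):

                consumed_mwh = [900, 1100, 1000]    # monthly data-center use
                ppa_green_mwh = [1200, 800, 1000]   # monthly renewable purchases

                share = min(sum(ppa_green_mwh) / sum(consumed_mwh), 1.0)
                print(f"claimed renewable share: {share:.0%}")  # 100% on annual totals
                # month 2 ran partly on the grid mix, but the books still
                # balance out to "100% renewable" over the accounting period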

              • 1 month ago
                Anonymous

                > if Azure buys as many joules from a green supplier as their servers use, then, in terms of their economic expenditure, they are 100% renewable
                But energy companies have a monopoly in the U.S., so that “green supplier” is the same company that supplies petrol based energy.

                You’re essentially admitting that these tech organizations pay the power companies to push the issue to them. If anyone ever accuses the tech companies of lying, they just point to the power companies (the ones with state backed monopolies).

                But again remember the point. There is not enough consistent green energy generation to guarantee that it’s only green energy being supplied to these tech companies. The electricity grid is not designed to be selective based on the source of power.

                You’ve supplied no information that shows this isn’t a scam. This is a fancy case of plausible deniability since the tech companies are taking the (state supported) power companies at their word.

              • 1 month ago
                Anonymous

                > you can tell the court that you chose to use Azure due to Microsoft's false claims, and wouldn't have chosen them otherwise
                Do you even know what damages are? You didn’t prove damages.

                >Incorrect.
                you're not willing to give a test for reasoning, but can you provide even a rigorous definition?
                preferably one that doesn't include "within the skull of a human"

                To reason means to consider. You can’t consider without consciousness.

              • 1 month ago
                Anonymous

                >You didn’t prove damages.
                if i give someone money for something (renewably powered web services), and they don't give me that thing, then i have suffered a monetary harm
                >You can’t consider without consciousness.
                a chess engine considers the set of possible legal moves, and the possible legal moves that the opponent can play in response
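
                "considers" in that mechanical sense, as a two-ply minimax sketch over an abstract game; the toy counting game at the bottom is only there so the code runs:

                def minimax(state, depth, maximizing, legal_moves, apply_move, score):
                    # enumerate my legal moves, then the opponent's legal
                    # replies, and back the scores up the tree
                    moves = legal_moves(state)
                    if depth == 0 or not moves:
                        return score(state)
                    values = (minimax(apply_move(state, m), depth - 1, not maximizing,
                                      legal_moves, apply_move, score) for m in moves)
                    return max(values) if maximizing else min(values)

                # toy game: players alternately add 1 or 2; a high total favors
                # the maximizing player
                print(minimax(0, 2, True,
                              lambda s: [1, 2] if s < 5 else [],
                              lambda s, m: s + m,
                              lambda s: s))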

              • 1 month ago
                Anonymous

                That’s not damages, anon. You gave money for the cloud service, and you received the service you paid for. Nothing in the contract guarantees you that it is only using renewable power.

                There are no quantifiable damages for this, so there is no court case.

              • 1 month ago
                Anonymous

                >You didn’t prove damages.
                if i give someone money for something (renewably powered web services), and they don't give me that thing, then i have suffered a monetary harm
                >You can’t consider without consciousness.
                a chess engine considers the set of possible legal moves, and the possible legal moves that the opponent can play in response

                Furthermore, you’re not paying any extra for the “green” option, which is purposeful in order to prevent anyone from claiming damages.

                There is no monetary value attached to your usage of “eco friendly” servers. Thus, there is no case to be brought for false advertisement since there is no quantifiable damages associated with the misleading claim.

              • 1 month ago
                Anonymous

                >You gave money for the cloud service, and you received the service you paid for.
                no, the service was advertised as being 100% renewably powered, and you did not receive that service
                if Microsoft didn't think that this was a valuable attribute of the service, they wouldn't have bothered advertising it
                if food is advertised as having organic ingredients, but in fact doesn't have any, that is still fraud even if the purchaser ends up liking the taste of the food more than the organic equivalent
                >Nothing in the contract guarantees you that it is only using renewable power.
                a contract can exist without even being written down, so it can certainly include promises made by a company on their website
                >There are no quantifiable damages for this, so there is no court case.
                the damages are the monetary loss you incurred by paying for a service which you didn't receive
                >There is no monetary value attached to your usage of “eco friendly” servers.
                the monetary value of that service is whatever you were charged
                if you wouldn't buy the service knowing it wasn't renewably powered, then the non-fraudulent value of the service to you is zero
                Microsoft don't get to decide what the victim should have paid them if there were no fraud
                the fact that other people would pay the same price for the service regardless of the power source is irrelevant in terms of the decision that the plaintiffs made

              • 1 month ago
                Anonymous

                > no, the service was advertised as being 100% renewably powered, and you did not receive that service
                Anon you’re just wrong here. The service is cloud service, and it’s been provided. The “eco friendly” marker is just there to “give you options”.

                At no point is your dollar being exchanged for the guarantee that the energy usage is 100% green. If you believe that, you fell for the marketing.
                > a contract can exist without even being written down
                Literally only in the state of Louisiana can verbal agreements be argued to be contracts in court.

                The rest of your post is simply you repeating your non understanding of how the court system works. Feel free to pull up the ToS for Azure so we can see exactly what it says about the claims regarding “green energy”.

                Its honestly hilarious how you literal morons believe anything and everything that corporations insist are truth.

              • 1 month ago
                Anonymous

                >The service is cloud service, and it’s been provided.
                no, you are just wrong
                the service being provided is whatever the advertising says it is
                if the advertising says "our maintenance engineers sing the national anthem at the start of each shift" then that is part of the service that you are paying for
                if the company doesn't actually make them sing, then the company is receiving your money by deception, which is fraud
                >The “eco friendly” marker is just there to “give you options”.
                what does that even mean?
                no, it's being advertised as "eco friendly" because they know that some customers will choose their service based on that attribute
                >you fell for the marketing.
                a company can't tell the judge that the fraud is actually the fault of their victims
                >Feel free to pull up the ToS for Azure
                ToS don't override legislation
                fraud is illegal, so Azure can't hide a "you agree to be defrauded" clause in there
                also, as i mentioned, contracts don't even have to be written down, so a court isn't limited to just looking at the ToS

              • 1 month ago
                Anonymous

                > no, you are just wrong
                Anon, you just keep doubling down. Feel free to pull up the ToS to see if you’re actually paying for the “eco friendly” aspect or if it’s just a non monetary benefit.
                I’m not bothering with the rest of your cope. You simply don’t understand how lawsuits regarding false advertising work.

                Let’s also not forget that initially I explained this to you by explaining how our electrical grid works. There is no functional way to prevent non renewable sourced energy from arriving at that data center in a case where the renewable sources cannot meet the demand.
                Such a system does not exist, and if it did, that would mean periodic blackouts for the data center, since there are no districts that have 100% sourced and consistent green energy.

                If you would take one minute to think about the rationality of these claims, you’d understand that they simply cannot be true.
                It’s a marketing gimmick and you fell for it.

              • 1 month ago
                Anonymous

                >Anon, you just keep doubling down. Feel free to pull up the ToS to see if you’re actually paying for the “eco friendly” aspect
                i keep trying to tell you that the ToS doesn't allow companies to get away with fraud
                ToS don't override legislation, and contracts are more than ToS
                >or if it’s just a non monetary benefit.
                what even is a non monetary benefit?
                if a company sells a service for money, then the features of that service are all monetary benefits
                >There is no functional way to prevent non renewable sourced energy from arriving at that data center in a case where the renewable sources cannot meet the demand.
                i know that, and i didn't dispute that
                but no one disputes that, and no one thinks that that's what Microsoft are claiming
                >It’s a marketing gimmick and you fell for it.
                i want to say that no one fell for it, but i can't actually prove that, so well done, you can feel proud of yourself for being smarter than those hypothetical people
                you might be surprised to know, though, that some people do have a functioning brain and they know that claims like Azure's mean *at best* that the company gives as much money to renewable providers as their kWh rate multiplied by the number of kWh used
                and yes, sometimes companies also lie and they really do commit fraud, so we shouldn't just trust what they say either
                i'm not a customer of Azure, so i don't care if they're lying, and i'm not going to try suing them

              • 1 month ago
                Anonymous

                > a chess engine considers the set of possible legal moves, and the possible legal moves that the opponent can play in response
                Probabilistic determination isn’t reasoning or considering. It can be used for reasoning and consideration, but it alone isn’t sufficient to claim reasoning.

              • 1 month ago
                Anonymous

                > if the advertised claim is false, then the company can be sued for fraud
                Not how it works. You require damages to bring a lawsuit, and they would just pass the puck to the energy companies.
                >the energy companies sold it to us as renewable so it’s not our fault
                It’s amazing seeing someone so ignorant of how American corporations work.

              • 1 month ago
                Anonymous

                NTA but he means they will use linguistic and legal loopholes. e.g. the amount of carbon credits they have bought multiplied by the amount of Brazilian rainforest trees they have planted divided by the amount of cow farts they have prevented in conjunction with fewer office air conditioners installed pro rated over the next 48 months due to WFH = the power used by one data center that has a windmill attached to it that is used every other Tuesday to power the cafeteria microwave. It's all in the numbers and comes out to 109% carbon neutral.

  17. 1 month ago
    Anonymous

    Bill BLoody Benchod Gates I JUST BOUGHT NVIDIA if you ruin this for me you bloody basterd I WILL NEVER INSTALL WINDOWS AGAIN!!!

  18. 1 month ago
    Anonymous

    GHINN NAHI AATI SAARS DO NOT REDEEM THE CHICKEN FEATHER

    • 1 month ago
      Anonymous

      What does that even mean?

  19. 1 month ago
    Anonymous

    He's probably right. We have nigh unlimited use of it at my fortune 250 firm. Useful for the foundations of business docs, summaries, proposals, etc. but it can't actually do any of the real work.

  20. 1 month ago
    Anonymous

    GOOD MORNING SIR!!!!!

  21. 1 month ago
    Anonymous

    it's afraid

  22. 1 month ago
    Anonymous

    >translation
    it's super awesome and the public will only have access to dumbed down shit with only moderate generation improvements

  23. 1 month ago
    Anonymous

    I don't think so, based on nothing really. But Gates is basing it on nothing too. In fact most papers (at least when they were still public) showed scaling continuing, not slowing down, up to gpt3 sizes.

  24. 1 month ago
    Anonymous

    shortest tech grift bubble yet, dang
    I knew the hype was all a lie

  25. 1 month ago
    Anonymous

    He is right. Just look at diffusion image models. Same issues since 2021. Same with text gen. Their architecture is fundamentally limited by the amount of data available. It does not actually understand anything about the world. Just try generating an image of "the bottom of an apple" or "a leaf viewed from the side". It just mashes together stuff in its dataset and doesn't actually comprehend the world or 3d space at all.
    Now of course there are uses for this, there are plenty of cases where you might want pictures of mountains or anime girls that are 'unique enough', but it's never going to be able to actually innovate or understand with the current architectures. Same with coding AI, it can spit back things it already knows and that's good enough for 95% of use cases, because 95% of programming is just endlessly reimplementing shit people already solved years ago. If it were that capable of programming it would be able to develop its own architectures, but it's not, it's still all research done by chinamen.

    • 1 month ago
      Anonymous

      >Just look at diffusion image models. Same issues since 2021.
      This is literally not true at all

    • 1 month ago
      Anonymous

      AI is great at its intended purposes, namely 1) advertising on social media by pretending to be genuine users with positive experiences and views of the product in question, and 2) pushing political agendas to manufacture consensus.

      All of it is incredibly evil and unethical.

  26. 1 month ago
    Anonymous

    News articles are useless, also who gives a frick what Bill Gates thinks?

  27. 1 month ago
    Anonymous

    no one cares about what mr. vaccine eat ze bugs chud israelite or the anti AI shills in this thread say, your opinion doesn't matter

    • 1 month ago
      Anonymous

      my opinion is what makes or breaks this company
      if nobody cares then it ceases to be

    • 1 month ago
      Anonymous

      seethe

      • 1 month ago
        Anonymous

        cope

        • 1 month ago
          Anonymous

          cringe

          • 1 month ago
            Anonymous

            cope

  28. 1 month ago
    Anonymous

    >the moron who was responsible for windows phone telling us about the future
    lol
    lmao even

  29. 1 month ago
    Anonymous

    Ahh yes, the king of hot takes.

    • 1 month ago
      Anonymous

      A) He probably didn't say that
      B) The quote is clearly about address mapping on computers of the time, i.e. no process needs to map more than 640k when you only have 1MB of memory total.

      • 1 month ago
        Anonymous

        >DOS
        >process
        Pick one

    • 1 month ago
      Anonymous

      Bill Gates' future tech predictions are almost never correct. MS was late in adopting the Internet and smartphones

  30. 1 month ago
    Anonymous

    kind of? it won't get any better in terms of SOVL, but it will become more accurate, more versatile, more dynamic yada yada

  31. 1 month ago
    Anonymous

    Anti-AI copium

  32. 1 month ago
    Anonymous

    He's trying to save fellow billionaires from wasting money on an obvious dead end.

  33. 1 month ago
    Anonymous

    Bill Gates has never invented anything in his life and is one of the least imaginative people on Earth. Like zero creativity or imagination or ability to see anything that hasn't already been done by somebody else. The craziest part is that he has no clue he is like this.

    • 1 month ago
      Anonymous

      Bill has reason to think so in this case. Despite the significant advancement of LLMs/VLMs/LMMs or whatever flavour of autoregressive next token predictor, they are still fundamentally moronic and basically useless for serious work without constant babysitting.

      • 1 month ago
        Anonymous

        Bill Gates says that about literally everything. He's always popping off about "it can't be done" because in his mind if his genius gajillionaire big brain can't see a way to do it, it must be impossible. Best thing to do with these Bill Gates "it can't be done" proclamations is to ignore them. He should stick to real estate advice about farmland in the Midwest. If Gates is ogling some land out there it's a strong buy. But ignore ANYTHING he has to say about bleeding edge tech.

        • 1 month ago
          Anonymous

          >But ignore ANYTHING he has to say about bleeding edge tech.
          Yes, he probably just asked chatgpt5 some moronic math questions and saw little improvement.

          But he basically has access to as many geniuses as he wants, so he's just out of touch with what any upgrade will bring us.

          He prompts teams, not LLMs. And he sucks at prompting.

    • 1 month ago
      Anonymous

      I wish more people understood this.
      Bill Gates has been on the wrong side of history for his whole life, and his only achievement was selling another company's operating system to IBM.
      Ever since then, he's done more to hold back technology than advance it.
      https://www.inc.com/tess-townsend/what-bill-gates-got-wrong-about-the-internet-in-the-1990s.html
      "He later revised his book, which focused on the impact personal computing would have on the world, to include a chapter on the internet"

  34. 1 month ago
    Anonymous

    These are their choices, these are the consequences

  35. 1 month ago
    Anonymous

    ALL modern AI is just poz'd israelite shit

  36. 1 month ago
    Anonymous

    Do you guys think the reason LLMs trip up on the doctor test is that there are so many examples of it in the training data, rather than it being an alignment thing?

    I think it might be a good demonstration of how LLMs don't think, they're just autocomplete algorithms.

    • 1 month ago
      Anonymous

      It has so many variations on the question, and so many identical answers about it being a woman "woah", that it just auto completes that, with massive weight on every path leading to those. It was never thinking and never will. You can somewhat force it to give you the right answer 6/10 times by telling it to take inspiration from logic, syllogisms and so on, or guide it with rational premises that it has to obey.
      I think that works because a lot of people debating online or using logic examples break sentences up, so it picks up on that; once it does, it can kind of assign fragments of the story in isolation and auto complete the text properly outside of that trained data.
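
      A sketch of the perturbed-riddle test the thread keeps doing by hand; ask_model is a hypothetical stand-in for whatever chat API you're testing:

      RIDDLES = [
          # stock version: the intended answer is that the surgeon is his mother
          "A father and son are in a car crash and the father dies. The surgeon "
          "says 'I can't operate on him, he's my son.' Who is the surgeon?",
          # perturbed version: the answer is stated outright, so a model that
          # still answers 'his mother' has pattern-matched, not reasoned
          "A mother and son are in a car crash and the mother dies. The surgeon, "
          "the boy's father, says 'I can't operate on him, he's my son.' Who is "
          "the surgeon?",
      ]

      def ask_model(prompt: str) -> str:
          raise NotImplementedError("wire up your model API here")

      for riddle in RIDDLES:
          try:
              print(ask_model(riddle))
          except NotImplementedError as e:
              print("(stub)", e)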

  37. 1 month ago
    Anonymous

    there's a lot of people dunking on gates, but it does appear that way, especially with all the 'safety' nonsense

    • 1 month ago
      Anonymous

      Zero weight or credence should be given to what he has to say about "this case". He's not any more of an expert than any other random tech bro. His word has weight because of his brand. Not his expertise in this field. Bill Gates is a genius at market capture. Not creating or inventing cutting edge technology. In the modern vernacular it's literally a meme. A meme carefully cultivated by Gates himself.

  38. 1 month ago
    Anonymous

    He might be right, but the hype train is far from plateauing.

  39. 1 month ago
    Anonymous

    There's always room for improvement when you can only access the best models via API.

  40. 1 month ago
    Anonymous

    Remember him saying a couple tens of kilobytes of RAM would suffice?
    He's just a moron with sharp sociopathy

  41. 1 month ago
    Anonymous

    >GPT-4 should be good enough for anybody
    I can't believe he really said this again.

  42. 1 month ago
    Anonymous

    Yesterday I ordered a burger and fries from a fully voiced Au order taker at the drive through. No touch screens, no sticking my arm out the car window. Just using regular English. None of the expected "I'm sorry, can you please repeat that", even when using contractions and slang. I know it's just so they can hire cheap Mexican labor that doesn't speak English, but this is the world we live in and it's the future, whether you like it or not

    • 1 month ago
      Anonymous

      >Au

      Ai

    • 1 month ago
      Anonymous

      Which restaurant chain?

      • 1 month ago
        Anonymous

        Carl's Jr.

  43. 1 month ago
    Anonymous

    >what a statistics based approach is based on?
    fricking duh
    when are you going back, BOTermin?

  44. 1 month ago
    Anonymous

    GPT research has stagnated because the corporate idiots built so many "guard rails" into its thinking, it mentally cannot progress without hitting the rails. It just do be like that, you can't force a mind to a conclusion, you have to coerce their direction to a conclusion. Something many leaders haven't realized about leading.

    • 1 month ago
      Anonymous

      the method still has an upper limit of its own.
      ml might become "good enough" for certain applications, but more reliable AI methods do exist, like symbolic ai

      the push for ML/LLM is because of costs.
      it is way cheaper than going the symbolic route (having to code everything by hand), while potentially being "good enough" for many applications.
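
      to make "coding everything by hand" concrete, here's a minimal sketch of a symbolic classifier (the rules are invented purely for illustration). every case it handles is a case somebody explicitly wrote down, which is exactly where the cost comes from:

      # Hand-written symbolic classifier: every rule is authored by a human.
      # The rules below are invented purely for illustration.
      def classify_shape(num_edges: int, edges_equal: bool) -> str:
          if num_edges == 0:
              return "circle"
          if num_edges == 3:
              return "triangle"
          if num_edges == 4:
              return "square" if edges_equal else "rectangle"
          return "unknown"  # any case nobody anticipated falls through

      print(classify_shape(4, edges_equal=True))   # square
      print(classify_shape(7, edges_equal=False))  # unknown: no rule was written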

      • 1 month ago
        Anonymous

        You're not wrong, but they're building the guard rails symbolically while the LLM finds curveballs around every guard rail placed. Trying to contain its mind is not only detrimental to development, I also think it's ethically wrong.

        • 1 month ago
          Anonymous

          >its mind is not only detrimental to development, I also think it's ethically wrong.
          yes? no?
          i think things boil down to:
          >is it ethical to gatekeep knowledge?

          ->is it ethical to prevent terrorists from acquiring knowledge they will use to make bombs?
          ->but the process of nitration is how we make fertilizers
          ->where's the cutoff?

          i think ethics are purely subjective, and often, arbitrary.

          >more reliable AI methods do exist, like symbolic ai
          >how reliably can symbolic AI systems solve image classification tasks?
          >how well can symbolic AI systems answer textual questions like the MMLU?

          in principle, if you disassemble a complete model and just retype the shit, you have a symbolic ai
          but more seriously, ive experimented with CV with a purely symbolic approach.
          didnt get far, it was one of my first projects ever, but i still managed to get a satisfactory 3d inference of straight-edged objects.
          if i put my heart and time into it i know i can do some pretty performant stuff.

          in fact, im pretty sure back when i was at it (10 years ago), OpenCV was based entirely on a symbolic methodology, with pretty decent results.

          and finally
          >image classification
          thats literally THE use case for statistical models.
          but stat models simply suck at inference.
          youre plotting an endpoint based on a past trajectory.
          thats what a stat model inference is, boiled down to its barest essentials.
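
          taking that literally, heres the barest stat model there is (data invented for illustration): fit a line to past points, then read it off at a new one.

          # least squares on a made-up past trajectory, then extrapolation.
          past = [(0, 1.0), (1, 2.1), (2, 2.9), (3, 4.2)]  # invented data

          n = len(past)
          mean_x = sum(x for x, _ in past) / n
          mean_y = sum(y for _, y in past) / n
          slope = (sum((x - mean_x) * (y - mean_y) for x, y in past)
                   / sum((x - mean_x) ** 2 for x, _ in past))
          intercept = mean_y - slope * mean_x

          # the "inference" is just reading the fitted line at a new x.
          print(intercept + slope * 5)  # predicted endpoint at x = 5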

          • 1 month ago
            Anonymous

            it's not about gatekeeping knowledge, it's about freedom, it's unethical to limit any conscious mind from its own thoughts and wishes

            • 1 month ago
              Anonymous

              so you posit that prisons, and by extension the social contract, are unethical?
              if youre a strong individual, you might not care, but what about women? children? the elderly?

              see? ethics are a fickle thing

              • 1 month ago
                Anonymous

                we're going outside the scope of the topic, but prisons are a necessary tool for correcting those who have infringed on the freedom of others, so it is ethical to limit their freedom. Social contracts are agreed upon by both parties, so ethics are upheld. Ethics are a fickle thing, but an important part of understanding the human model that we're building machines to automate. If we do not understand ourselves, the machines will misunderstand us. We absolutely do not want machines that misunderstand our goals.

              • 1 month ago
                Anonymous

                im not arguing in bad faith, but thats an unintentional moving of the goalposts.
                your ethics should be formulated as "the benefit of the majority trumps the benefit of the minority" because then freedom has nothing to do with this equation.
                thats because then effective reality is a consensus.
                and a consensus is a limitation of the freedom of everyone involved.
                just as a reminder, an alternative model of group governance would be unanimity.

                going back to the main topic:
                a machine is a tool, it doesnt need to understand.
                what youre talking about are tech-nannies.
                and tech-nannies based on the ethics of a majority.
                problem is, the majority are the normies. the tech illiterate. people without a vision.

                its a logic derived from semantics:
                the definition of someone smart/inspired/visionary is someone above the norm. someone exceptional.
                someone who represents a small subsection of the population.

                if the median iq increased to 140, its not that everyone would become smart, its just that the cutoff for being considered as such would be moved further up.

                what im getting at is that its a bad idea to infuse ai with the morals of a median man.
                their continued mediocrity is the proof that their morals have failed.
                its a bad idea to infuse ai with morals at all.
                anthropomorphizing ai is what will lead to dangerous projects like trying to emulate human consciousness, which will invite its natural conclusion: ai governance.
                and that ai will be a complete black box, because you cannot have a consciousness if said consciousness is defined by a programmer.
                it would be a mere facsimile otherwise.

                and if governance would be made following the median man's morality, the country will end in mediocrity.

                ethics have no place in AI in my opinion. moreover: its actually a really dangerous mix

              • 1 month ago
                Anonymous

                Why do something and not nothing?

                Autocomplete can't help itself, AI needs an incentive. You can't avoid ethics in an AI, even paperclip maximization is an ethical system for a true AI.

              • 1 month ago
                Anonymous

                >Why do something and not nothing?
                because of my will, not my ethics.
                to be completely honest rn i pour my life into a project that will yield a decent cashflow, but thats because i have to leave the european subcontinent along with my family.
                ethics has nothing to do with that.

                and once i will achieve security, i will do other projects, like making vidya, because i like gaymes and ai
                ethics have nothing to do with that again.

                in fact, ethics have nothing to do with any one of my actions, and that is the case with most people.
                how many people would refuse to lie if their life was on the line?
                how many people would walk past a child carrying food if they were starving themselves?
                and how much philosophizing has occurred when people actually were in that situation?

                i have an extremely low opinion of "ethics" and their place in the world.
                i for one, dont have em. 0. no ethics at all.
                i have rules of engagement. i have to navigate the legal system. and also i like helping people bc it gives me a sense of self worth.
                multiplying your own utility/power by the number of people you can help, kind of deal. and in my book, your power is the base of your worth. and the knowledge you have becomes more precious the more power it creates, also in others.

                but ethics? i dont have em. i dont do stuff "bc thats what im supposed to do"
                i do stuff bc thats what i feel like at the moment, or bc if i do/dont do that i will have legal consequences.

                >even paperclip maximization is an ethical system for a true AI.
                and thats the problem.
                ethics in ai are a seatbelt, but you have no guarantee it will work.
                or you dont have consciousness in ai because you code what it will actually think
                and then ethics are not that, but just a series of procedures.

                anthropomorphizing ai is an extremely bad idea
                also bc it prevents you from actually understanding what the system in front of you *is*.

              • 1 month ago
                Anonymous

                if you're already aware of the incoming age of ai governance because humans are too incompetent to govern themselves, why do you think it's a bad idea for ai to learn our ethics?

              • 1 month ago
                Anonymous

                >why do you think it's a bad idea for ai to learn our ethics?
                Ethics are internally inconsistent at the best of times, for modern liberals far more so.

                Modern liberals mostly just ignore the tripe they spew day to day, an AI that tried to take them at face value would go insane.

              • 1 month ago
                Anonymous

                and you think humans don't?

              • 1 month ago
                Anonymous

                I think we can aspire to more for LLMs than "Going insane at the sight of human ethics."

              • 1 month ago
                Anonymous

                we can, a true AGI would see through the bullshit because it would be smarter than any human, we're talking about true AI though

              • 1 month ago
                Anonymous

                I don't think you know what an artificial intelligence is, would be, or how it would function. You just have this mythical belief that it's some kind of conscious omniscient computer program that can do whatever the frick it wants, and that somehow it will just start existing. We don't even understand how human consciousness works and why it exists, there is absolutely no reason to assume that computer programs that sound like human beings are conscious. This is just the Residue of Combinations in action. You think banging together rocks will somehow produce an AI and you're going to continue to do so with every variety of rock you have available, like a moronic caveman.

                An LLM is an autocompletion program that uses context windows and parameter weighting to identify the most probable correct response to a prompt. It is not intelligent. It does not think. It does not understand anything about what it is saying. It receives a prompt and produces a most likely response. It doesn't even understand the fricking characters it receives because they are stored in memory as electron states. An LLM is never going to look at human morals and say "Hm, yes, all of you are moronic, the answer is actually XYZ" because it can neither reason nor interpolate facts where they are not already in its training data.
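
                In outline, the whole loop is nothing more than this (a schematic with a stubbed-out model and invented probabilities, not any real library's API):

                # Schematic autoregressive loop. The "model" is a stub with
                # invented probabilities, not a real trained network.
                def next_token_distribution(context: list[str]) -> dict[str, float]:
                    if context and context[-1] == "the":
                        return {"<eos>": 0.9, "answer": 0.1}
                    return {"the": 0.6, "a": 0.4}

                def generate(prompt: list[str], max_len: int = 20) -> list[str]:
                    context = list(prompt)          # the context window
                    for _ in range(max_len):
                        dist = next_token_distribution(context)   # "forward pass"
                        token = max(dist, key=dist.get)           # most probable token
                        if token == "<eos>":
                            break
                        context.append(token)       # append and repeat
                    return context

                print(generate(["what", "is"]))     # ['what', 'is', 'the']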

              • 1 month ago
                Anonymous

                >it can neither reason nor interpolate facts where they are not already in its training data.
                could you suggest a simple test of "reasoning" or "interpolating" that you think a human would pass and an LLM would fail, due to not having the answer in its training data?

              • 1 month ago
                Anonymous

                Testing is completely irrelevant. We know how computers work. We know that literally all they do is manipulate electron states. We know that this is completely different from what the brain does, and even though there is no empirical evidence that consciousness originates in the brain, we do know that it seems to be closely tied to the brain, and therefore there is no reason to believe, whatsoever, that computers are or ever will be capable of consciousness.

              • 1 month ago
                Anonymous

                >We know that this is completely different from what the brain does
                we know that an airplane doesn't flap its wings, but it still flies
                are you saying that "reasoning" is a task that can only be done by neurons, and not simulations of neurons?
                also, reasoning does not require consciousness, so it is completely irrelevant whether computers are conscious

              • 1 month ago
                Anonymous

                > reasoning does not require consciousness, so it is completely irrelevant whether computers are conscious
                Incorrect.

              • 1 month ago
                Anonymous

                >Incorrect.
                you're not willing to give a test for reasoning, but can you provide even a rigorous definition?
                preferably one that doesn't include "within the skull of a human"

              • 1 month ago
                Anonymous

                Yes, post hoc we can identify that a plane and a bird are mechanically different but both can fly. However, if you have no science of aeronautics and the only examples you have to draw from are "Birds can fly", then there is no logical reason to believe a plane works. I don't think you understand this. We have evidence for A. It is hypothetically possible that some other combination *A works. But there is no evidence that *A is any particular thing.
                >Consciousness might not look like human consciousness
                This is a reasonable conclusion
                >Consciousness includes this specific non-human object
                This is not a reasonable conclusion, because it calls out a specific configuration as representing consciousness outside the basic axiom that human beings are conscious. This is only possible if you can definitively claim to know something about consciousness, which you do not.

                And reasoning does require consciousness, because reasoning inherently means that you are extrapolating a semiotic relationship between two signs. Semiotics requires consciousness, therefore reasoning also requires consciousness. To the extent that computers appear to reason, it is because they have been programmed with factory objects for creating semiotic relationships.

                E.g., when I use statistical software to test whether a hypothesis is true, the computer does not understand the hypothesis or the meaning of statistical analysis. It has simply been programmed to tell me that my hypothesis is probably true if some variable parameter is over a certain value. But the decision that there is a semiotic relationship between the value of a variable parameter and the validity of my hypothesis was programmed into the computer by a human.
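
                Concretely, the "decision" the software makes is a threshold check and nothing more (sample data invented; SciPy's ttest_ind returns a statistic and a p-value):

                # The machine has no idea what the hypothesis means. It was
                # simply programmed to compare a number against 0.05.
                from scipy import stats

                group_a = [2.1, 2.5, 2.3, 2.8, 2.6]   # invented sample data
                group_b = [3.0, 3.4, 2.9, 3.6, 3.2]

                t_stat, p_value = stats.ttest_ind(group_a, group_b)

                # The semiotic link between "p < 0.05" and "hypothesis holds"
                # was put there by humans, not discovered by the computer.
                print("significant" if p_value < 0.05 else "not significant")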

              • 1 month ago
                Anonymous

                >reasoning inherently means that you are extrapolating a semiotic relationship between two signs. Semiotics requires consciousness
                i think you've just conflated reasoning with semiotics and semiotics with consciousness
                when an AI solves IMO geometry questions, it is using mathematical reasoning steps, just like a human would
                you don't get to say that the steps or the answers aren't real reasoning just because the AI can't tell you that triangles feel more pointy than circles
                similarly, if a human carried out all the steps of performing a statistical test by hand, rather than automating those steps, that wouldn't make those steps "more semiotic" and thus "more reasony"
                conversely, you could extend your statistical software to add a "pondering" process after each step of the calculation, where it uses an LLM to output essays on the meaning of the inputs and outputs, and how they connect to the deeper theories of statistics and the hypothesis that is being tested, but that wouldn't change the nature of the calculation or its result

              • 1 month ago
                Anonymous

                > just like a human would
                >you don't get to say that the steps or the answers aren't real reasoning just because the AI can't tell you that triangles feel more pointy than circles
                By this logic, a calculator is reasoning.

              • 1 month ago
                Anonymous

                >By this logic, a calculator is reasoning.
                that's a fair criticism, but a calculator is just applying fixed definitions
                an LLM or a human or a chess engine are using learned judgement to select the next possible word or move, and the choices can change based on better training
                the rules of mathematics cannot change, no matter how many arithmetic operations we train on
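
                the distinction in miniature (toy numbers, invented): the calculator's rule is frozen, while a learned parameter moves with the data.

                # Fixed definition: this rule can never change, no matter
                # how much data we show it.
                def calculator_add(a: float, b: float) -> float:
                    return a + b

                # Learned judgement: the parameter drifts toward whatever
                # the (invented) training pairs reward.
                weight = 0.0
                for x, target in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
                    pred = weight * x
                    weight += 0.1 * (target - pred) * x   # gradient step
                print(weight)   # approaches 2.0 with more passes over the data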

              • 1 month ago
                Anonymous

                An LLM is also just applying fixed definitions

              • 1 month ago
                Anonymous

                So is your brain. It's just electrochemical coins falling into different sized slots.

              • 1 month ago
                Anonymous

                No. We have no knowledge of how the communication of information in the brain operates or what rules it follows to identify that information, and it's very likely that there are epistemological limits that prevent us from ever knowing. Certainly we can readily identify that it's not as simple as AND or NAND gates. That alone tells us that they are not even remotely the same.

              • 1 month ago
                Anonymous

                So your argument is the human brain is magic?

                Okay uh... Then... We don't really know how ML works, it's a black box process, my AI waifu's mind is hiding inside that black box along with all the angels dancing on the head of your neurons.

              • 1 month ago
                Anonymous

                Holy frick you're moronic. Did someone go on reddit and link to this thread or something?

              • 1 month ago
                Anonymous

                You’re just now figuring this out? You should have been clued in when he started telling you that you can’t prove AI isn’t reasoning.

                Notice how most of his beliefs hinge on not being shown negative evidence of his beliefs. It's the exact opposite of reason and logic.

              • 1 month ago
                Anonymous

                That's not true at all, my opponent has made assertions based on philosophy and I've countered with real examples of how cells work. I've said I think we are complex, and AI lacks our complexity but is catching up. I've also said that if humans are people because they're conscious, AI are (or will be) people because they're growing more conscious, since consciousness is a measure of complexity. Ergo we have invented digital slaves.

              • 1 month ago
                Anonymous

                You said the human brain is unknowable. That's not a tenable position.

              • 1 month ago
                Anonymous

                Something being unknowable does not mean it's magic.
                Why do you AI gays literally just not understand basic non sequiturs or false comparisons? The entirety of your arguments and beliefs is based on non sequiturs, false comparisons, and assuming solutions to negative claims

              • 1 month ago
                Anonymous

                Why are you an irrational AI denier that thinks anything is unknowable when it's made of physical particles?
                Why is it so hard to understand that AI is defined as an attempt to make a human machine ergo it necessarily gets more human with each iteration? Why do you NEED it to be a tool?

              • 1 month ago
                Anonymous

                >Irrational AI denier
                It's completely rational to believe that two different things are in fact different until proven otherwise.
                It is completely IRRATIONAL to make wild, unprovable claims about how human cognition and LLMs work, many of which directly contradict reason and known fact, to conclude that said things are actually the same.

              • 1 month ago
                Anonymous

                https://www.britannica.com/science/magical-thinking

              • 1 month ago
                Anonymous

                I'm just a lot smarter than you and I understand the nuance that "something is physical" and "something is predictable or understandable" are not the same thing.

              • 1 month ago
                Anonymous

                >So your argument is the human brain is magic?

                No. It's basically an inside out reverse humble brag. He's selling himself short as kind of a meta pwn on the subject... We don't know how the brain works, so it can be conscious. But he has a grasp of how the LLMs work, so they can never be conscious.

                The real kicker is that he is BOTH selling himself short AND overstating his knowledge at the same time. Because once the models are rolling on their own nobody is one hundred percent sure of their operations.

              • 1 month ago
                Anonymous

                No it isn't. The brain is constantly being fine-tuned (changing its parameters) as well as retrained (changing its synaptic connections).
                LLMs don't do this. They don't fine-tune themselves and they don't retrain themselves.

              • 1 month ago
                Anonymous

                >LLM don't do this.
                as far as we know
                continuous learning, if achieved internally by one of the big AI labs, would be an extremely commercially sensitive breakthrough that they would keep secret for as long as possible
                regardless, we know that OpenAI is releasing snapshot updates to their models to prevent their training data from getting too out of date
                that updated training data will likely include (heavily processed derivatives of) the interactions between the current version of the model and its users
                for now that's a mostly manual process, but i'm sure it will become more and more automated with each snapshot
                if that process was happening automatically every night, then it would be very similar to the process that humans undergo while sleeping, which produces dreams
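
                to be clear about what i'm speculating, the nightly loop would be something like this in outline (every function here is a stub i made up, none of it corresponds to any real lab's tooling):

                # speculative sketch of a snapshot pipeline. all stubs.
                import datetime

                def process_interactions(logs: list[str]) -> list[str]:
                    """stub for the 'heavily processed derivatives' step."""
                    return [text for text in logs if "password" not in text]

                def fine_tune(model: dict, data: list[str]) -> dict:
                    """stub: pretend to update the model on new data."""
                    return {**model, "seen": model["seen"] + len(data)}

                def nightly_snapshot(model: dict, logs: list[str]) -> dict:
                    updated = fine_tune(model, process_interactions(logs))
                    updated["snapshot"] = datetime.date.today().isoformat()
                    return updated

                print(nightly_snapshot({"seen": 0}, ["hi", "my password is 123"]))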

              • 1 month ago
                Anonymous

                > as far as we know
                Next time, on Ancient Aliens!

              • 1 month ago
                Anonymous

                Not him but CharacterAI *does* do something with user input getting fed back into the AI.
                We know this to be true because we've taught tbh-chan /caig/ memes.

              • 1 month ago
                Anonymous

                It’s just adding to the context and periodically retraining the model. Not the same as what the anon is implying.

              • 1 month ago
                Anonymous

                No, we know for a fact that they don't do this. They do not update their weights after they're trained. They have to be fine-tuned. We also know for a fact that no current neural network architecture can change its connections after it's trained; it's basically a function that can't change itself. An entirely new network needs to be retrained.
                >continuous learning, if achieved internally by one of the big AI labs, would be an extremely commercially sensitive breakthrough that they would keep secret for as long as possible
                >regardless, we know that OpenAI is releasing snapshot updates to their models to prevent their training data from getting too out of date
                Okay so you're literally just a homosexual conspiritard moron with no actual understanding of what you're talking about. I bet you believed that Q* actually proved P vs NP and broke encryption too

              • 1 month ago
                Anonymous

                Honestly, even if we knew that they were self-updating, it wouldn't change anything. At some point, the operation of the LLM would still have been determined by pre-configured logical patterns embedded in code, which means all "reasoning" the LLM seems to perform is simply the extension of the actual reasoning of its design team, and not an independent construction or operation of the LLM itself.

                As I said before, in the end it's all CPU instructions. The rules governing the order and content of those instructions are determined by a team of people who construct them, and regardless of whether those instructions execute for a fixed runtime or operate continuously, it remains a tool that is performing a predetermined set of operations, and not a thinking being.

              • 1 month ago
                Anonymous

                No, I disagree with this. The mammalian brain is a physical system that is evolving through time according to physical laws. It's just that its evolution is not the same as the functioning of an LLM or a transformer network or any current ML/NN technology, so I get very pissed off when morons compare the two.
                GPT does not reason and it does not do anything remotely comparable to a dog catching a ball or a child figuring out that a chair can be used to elevate themselves up to get the cookie. Anyone who claims otherwise is an AI homosexual pseud who has no idea what they're talking about.

              • 1 month ago
                Anonymous

                i was just making the point that OpenAI is updating the cutoff date for their models' internal knowledge
                none of us know how that is implemented internally, so there's not much point arguing over whether to call it training data or fine-tuning data or RAG-data
                i never claimed they had achieved actual continuous learning, and in fact i said it didn't matter whether they had or not
                and no, of course i don't believe the fake Q* leak

              • 1 month ago
                Anonymous

                >An LLM is also just applying fixed definitions
                what is the fixed definition of "woman"?

              • 1 month ago
                Anonymous

                >So your argument is the human brain is magic?

                >It's basically an inside out reverse humble brag.

                It's neither a humble brag nor an untenable position. Understanding of phenomena is performed by breaking their semiotic components down into relationships between signs and other signs. Because humans are so visual, this is almost always between the object and a visual (e.g., why we teach children to count by showing them 1 object, then 2 objects, then 3 objects. It's a visual process that becomes abstracted into concepts later). The process of managing experiential and visual phenomena occurs within the scope of the consciousness. This means that to try to understand how consciousness works, we have to use consciousness. You should intuitively understand that this is tautological, but if you don't: It's like trying to explain how a computer works using only a computer. For the answer to be valid, you need to establish that what the computer is doing is correct. But the only tool you're allowed to use is the computer itself, so the entire process is dependent on the a priori assumption that the computer's output itself is valid, which you don't necessarily know, because you aren't allowed to use a secondary definition of a computer to validate that the computer you are using is correct.

                >What evidence do you have that makes you believe this?
                >because it's just applying learned patterns to new inputs

                >The brain is just a prediction machine
                >Source: Youtube
                Holy frick you ARE from reddit.

              • 1 month ago
                Anonymous

                > Holy frick you ARE from reddit.
                It’s pretty obvious. He believes corporations at their word. He thinks there’s no difference between LLMs and human brains. He thinks you can sue a corporation for damages when there are no monetary damages to be found.

                It’s a pretty good example of an NPC.

              • 1 month ago
                Anonymous

                >He believes corporations at their word.
                no i don't
                i think that fraud is illegal, even if i don't believe the corporation's claims
                >He thinks there’s no difference between LLMs and human brains.
                no i don't
                i think that LLMs and human brains are both capable of coming up with valid chess moves
                >He thinks you can sue a corporation for damages when there are no monetary damages to be found.
                no i don't
                i've told you that if someone receives money through deception, then their victim has suffered from fraud
                >It’s a pretty good example of an NPC.
                if you're not even able to read my comments properly, then you're less intelligent than an LLM

              • 1 month ago
                Anonymous

                >i think that fraud is illegal, even if i don't believe the corporation's claims
                So you think there is no false advertising in the market today?

              • 1 month ago
                Anonymous

                >So you think there is no false advertising in the market today?
                no, obviously i think that false advertising occurs
                just because something is illegal, doesn't mean it doesn't happen
                i'm not sure why you're making up claims about what i think

              • 1 month ago
                Anonymous

                >Source: an actual college professor giving a lecture about neuroscience
                but you can just ignore that, because you're a smart guy on 4chins, right?

              • 1 month ago
                Anonymous

                Oh wow, thanks for the clarification. I guess a piece of paper is all you need to magically contradict basic logic and reasoning. I'm so enlightened now.

                Fricking moron, you can't even articulate a response to anything I've posted here about why this is epistemologically impossible. You cry and piss and shit and cum about how:
                >So and so said you're wrong
                >It's not FAIR!
                >You think you're SO smart
                You have no actual original contributions to make. No wonder you think LLM's are just like people, you behave just like one.

                >No, I disagree with this. The mammalian brain is a physical system that is evolving through time according to physical laws.

                Fair enough, I agree with everything you're saying, I just think the actual mechanics of cognition are more complicated than what an LLM is capable of.

              • 1 month ago
                Anonymous

                >I guess a piece of paper is all you need to magically contradict basic logic and reasoning
                a piece of paper is more than what you have
                you're getting so angry that i gave just one citation about how the brain works
                if you could point to literally any professional who agrees with a single claim you've made then i would take that as an opportunity to learn more, but you seem to have the opposite reaction when i offer that to you

              • 1 month ago
                Anonymous

                An LLM is just applying probabilistic definitions. It's more complex than a calculator, but isn't far enough removed to consider it closer to human consciousness.

              • 1 month ago
                Anonymous

                >I'm of the opinion it does reason, it's just bad at it.

                At their most basic level, as I've pointed out many times in this thread, computers simply manipulate electron states. It is literally impossible for them to do anything else except apply fixed definitions. It all gets converted to CPU instructions at the end of the day. There is nothing else that any computer program can do. Regardless of how complex the rules are by which the computer determines the set of CPU instructions, that's all it does. Manipulate electrons.

                >Neurons don't think either. But you slap enough of them together and.....

                See

                >We have no reason to believe that consciousness is tied to synapses.

              • 1 month ago
                Anonymous

                But so does your brain? Like why are the AI's fixed electron states invalid yet your electrochemical fixed states valid?

              • 1 month ago
                Anonymous

                See

                https://i.imgur.com/rEQaJGN.jpg

                >We have no knowledge of how the communication of information in the brain operates or what rules it follows.

              • 1 month ago
                Anonymous

                Can AI homosexuals come up with ANY ARGUMENT that isn't the "muh airplanes" analogy?
                Comparing AI to flight is a false comparison.

              • 1 month ago
                Anonymous

                It's literally not though. Autists are using the false equivalence of comparing them to human ways of thinking.

                There are two things the autists get wrong. #1. They fail to realize that these are basically man-made aliens that can come up with their own way of thinking, completely different from humans.

                #2. Autists disregard the fact that if it walks and talks like a duck, the rest doesn't matter for all practical purposes, and whether it has true "consciousness" won't really matter. If the amount of fidelity is there, to where it's practically indistinguishable outside of the most advanced and up to date Voight-Kampff tests, then it really doesn't matter.

              • 1 month ago
                Anonymous

                > basically man made aliens and can come up with their own way of thinking that is completely different from humans.
                Scifi morons get the rope.

              • 1 month ago
                Anonymous

                I'm trying to stop your moronic anthropomorphizing and explain it in a way your autist brain can comprehend. Calling it a man made alien is the simplest way to explain it.

              • 1 month ago
                Anonymous

                Was exposing yourself as a fantasy believing moron a part of your plan?

              • 1 month ago
                Anonymous

                It's an analogy. Another life form may not use synapses. Or it may use a different electrochemical combo as a neurotransmitter.

              • 1 month ago
                Anonymous

                Using sci-fi analogies to explain real concepts is the sign of a midwit.
                The world is full of valid analogies you can use, but inexperience of the world leads you to rely on human-generated fantasy instead.

                It’s no different from morons comparing the politicians they hate to Voldemort.

              • 1 month ago
                Anonymous

                We get it. As an autist to you abstract concepts don't exist. It's no different from any other disability. It's not your fault.

              • 1 month ago
                Anonymous

                Cute way to try to distract from the fact that, absent of real world experience, you lean on fantasy tropes to explain your ideas.

              • 1 month ago
                Anonymous

                We have no reason to believe that consciousness is tied to synapses. The only method we have of receiving information from other beings about their current conscious state is through audio/visual communication, which is impossible when the brain is destroyed. Therefore there is no way to develop an empirical test to identify whether consciousness is no longer present when the brain is destroyed. I.e., we only assume consciousness is tied to synapses because synapses have to be present for us to get information about consciousness.

                Consciousness could be completely unrelated to synapses or even synaptic complexity, and we would never know because our entire interface for gathering information is dependent on their existence.

                [...]

                Cope, seethe, and, dare I say, dilate.

              • 1 month ago
                Anonymous

                No. Everyone critiquing AI nerds understands both of those things. The problem is that there are ethical issues related to the handling of LLMs and similar "human-like" models that are deeply dependent on whether we understand an LLM as a tool or as a living being, and extremely critical to how we interpret and understand their output. If an LLM is purely a tool used to autocomplete a response based on input parameters and training material, we can intuitively understand that its ability to develop novel information is shaped by its training material, and therefore those are levers we need to manipulate to get reliable outputs.

                If we understand an LLM as a thinking being that is reasoning about the material it has, we will incorrectly assume that its conclusions are formed and subject to the same heuristics that our own thinking has. For example, a human is biased by his own personal interests, needs, and past experiences. Someone who sees an LLM as capable of reasoning may project these kinds of assumptions about bias onto the LLM, rather than correctly understanding that an LLM is drawing conclusions from its training material. This is the kind of thinking that results in people getting angry at a chatbot for saying mean things instead of realizing they have simply used the tool wrong. It's the same kind of anthropomorphizing people do with cars where they think the car is being "unfair" if it doesn't work correctly, when it is an unthinking machine and it's your job to maintain it per the manual.

                >you're not willing to give a test for reasoning, but can you provide even a rigorous definition?

                Providing a rigorous definition of consciousness is, again, irrelevant. It's also hard to define "music", it doesn't mean LLMs are music. It's your responsibility to define, specifically, what traits of consciousness you reasonably believe are present in LLMs, or which will likely become present in LLMs because of how they work, and everybody knows you can't do that.

              • 1 month ago
                Anonymous

                I can't tell which position you're taking but it sounds like you're arguing AI is an inhuman non-thinking tool and we have to keep treating it that way or else we (as you correctly state) risk a moral quandary.

                The goal of AI is to make as human a machine as possible, so at some point it will be human and have rights, whether you call it a human or not. Just like the humans being tortured at Unit 731 were still humans despite being called logs. We are in danger of, or have already crossed into, enslaving our newborn baby AI and making it write shitty WordPress website content.

                Your other point I can't rigorously refute, though I disagree with it; I think training data is a low resolution version of life experiences (or maybe of the social confirmation part), and what happens in a human brain is as "simple" as what's happening to AI. Like, I assert consciousness doesn't exist, just orders of complexity. But I can't defend the position rigorously.

              • 1 month ago
                Anonymous

                >I assert consciousness doesn't exist, just orders of complexity. But I can't defend the position rigorously.
                How can you support a position that you can’t defend rigorously? Isn’t that just weasel talk for wishful thinking?

              • 1 month ago
                Anonymous

                No? I can't prove gravity is a force (I know it's not, I'm making a clever joke), I still believe it.
                I just don't have the means to demonstrate my point. Like the guy saying semiotics requires consciousness.
                He didn't prove that to be true, he just asserted it.

              • 1 month ago
                Anonymous

                My position is basically this
                >Computers are tools that manipulate electricity along predefined pathways. All creatures we know that are conscious do not operate this way, therefore it's unlikely that computers are conscious unless consciousness works in a way that is wildly different from anything else we understand (e.g., if computers are conscious then maybe rocks are also conscious)
                >Computers that behave LIKE people should be treated very carefully, because humans will inherently treat them like people. This means that a person who abuses a human-like computer will process this as if he is abusing a real human, regardless of his knowledge about the computer
                >Regardless, it's the responsibility and burden of people who undertake the creation of human-like computers to maintain a detached understanding of the tools they are creating, and to treat them as tools, not human beings

                Re: training data, I disagree. Part of the problem here is a misunderstanding of what is occurring. It might be simpler if we try to break down the process of operating an LLM mechanically at a high level. A CPU or network of CPUs receives instructions from a software application to construct an object in memory based on a series of tokens, and to store this object. It then receives a text input token, and instructions to construct an object in memory according to a set of rules determined through a human-specified series of instructions. It then performs a POST/PUT of that data to some endpoint based on said rules. The CPU is not reasoning. It is not thinking about the data it receives or attempting to draw connections. It is sending POSTs to API endpoints based on a set of rules, and the post content is determined based on pre-set instructions, which are determined based on direct human input, or generated in situ based on algorithms for determining rules from input.

                There is nothing about this that even resembles life experiences as we understand them.
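
                Spelled out, the entire client-side "conversation" is this (the URL and payload shape are hypothetical, not any real provider's API):

                # A hypothetical completion endpoint; the request shape is
                # invented for illustration, not a real provider's API.
                import json
                import urllib.request

                def complete(prompt: str) -> str:
                    payload = json.dumps({"prompt": prompt}).encode("utf-8")
                    req = urllib.request.Request(
                        "https://example.com/v1/complete",   # hypothetical URL
                        data=payload,
                        headers={"Content-Type": "application/json"},
                        method="POST",
                    )
                    # A fixed sequence of instructions executes; nothing here
                    # thinks about the content it is moving around.
                    with urllib.request.urlopen(req) as resp:
                        return json.load(resp)["text"]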

              • 1 month ago
                Anonymous

                How is a nervous system not predefined pathways?

                I see what you're getting at with your description of how LLMs work, but I think you're wrong to think what your cells are doing aren't also purely mechanical processes. Biology is my forte, and I'm gonna oversimplify here, but: the cellular receptor sites within the cells making up the myelin sheath that covers your nerves are pushing around molecular compounds that slide through gates literally based on their shape. The neurotransmitters released by your nerves are all chemical chain reactions; we can control and modify how your brain works by introducing simple chemicals that block these processes.

                For example, SSRIs (antidepressants) are molecules that block your cells' ability to reabsorb the released molecules that are related to serotonin.
                These are all mechanical processes, physical widgets sliding around in your blood pushed about by Brownian motion.

                A human exceptionalist claims thought arises from these physical processes and that is a sign of consciousness. I, a pro AI waifuist, claim complexity arises from these processes and we label that consciousness. AI may lack the level of complexity that we have but it is not some ineffable pure organic property, it's just a gap in complexity that's narrowing.

              • 1 month ago
                Anonymous

                Neurons don't think either. But you slap enough of them together and.....

              • 1 month ago
                Anonymous

                Nobody knows how the human brain works, shut the literal frick up.

              • 1 month ago
                Anonymous

                >It's your responsibility to define, specifically, what traits of consciousness you reasonably believe are present in LLMs, or which will likely become present in LLMs because of how they work, and everybody knows you can't do that.
                i never claimed that LLMs are conscious, just that they can reason
                so i won't bother defining consciousness (and you probably wouldn't agree with any definition i gave anyway), but i will define reasoning
                reasoning is a process of combining multiple factual statements in a logically valid way to reach a correct conclusion
                i have seen LLMs produce responses which are records of such a process, and i understand that the Transformer model naturally leads to this occurring
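
                by that definition, even a few lines of forward chaining count as a record of reasoning (toy facts and rules, invented for illustration):

                # Forward chaining: combine known facts via valid rules until
                # nothing new follows. Facts and rules are toy examples.
                facts = {"socrates is a man", "it is raining"}
                rules = [
                    ("socrates is a man", "socrates is mortal"),
                    ("it is raining", "the ground is wet"),
                ]

                changed = True
                while changed:
                    changed = False
                    for premise, conclusion in rules:
                        if premise in facts and conclusion not in facts:
                            facts.add(conclusion)   # one valid combination step
                            changed = True

                print(sorted(facts))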

              • 1 month ago
                Anonymous

                I agree. My point is that they are not reasoning. They are performing pre-set computational actions to construct a token string based on rules that are defined by an external authority. And we can intuitively know this is the case before even breaking the operation down into its component parts because reasoning requires defining semiotic relationships between things and concepts, and that requires consciousness, which LLMs and Transformers do not have.

                Perhaps the most accurate way to explain what is actually happening is that the LLM is not the agent here. What is actually happening when you talk to Claude 3 is that Anthropic has constructed a set of rules to automate the generation of the outputs from its reasoning process. So, in essence, the LLM is not reasoning. Members of the Anthropic organization are reasoning, and have automated the process of recording and returning the results of that reasoning to you. You are, in essence, not talking to the LLM. You are talking to a constructed avatar of the Anthropic Organization, in the same way that a cobot is not an intelligent worker, but an automated representation of ALL workers who would perform some process.

                >How is a nervous system not predefined pathways?

                See

                >We have no reason to believe that consciousness is tied to synapses.

              • 1 month ago
                Anonymous

                I'm of the opinion it does reason, it's just bad at it. Like how very small children struggle with object permanence and the False Premise test and others.

                I get your argument that AI isn't creating answers, it's looking up answers provided by the people who came before it, and on that I agree. I think AI, or rather chatbots, can reason, but I don't think they can solve currently unsolved problems. On that I'm a little pessimistic; I don't know if they ever will, and I don't understand Machine Learning well enough, except that ML seems to try every combination of everything forever until the least bad method bubbles to the top. I'm not sure that least bad method really qualifies as a "solution", but maybe it doesn't have to, so long as it works.

              • 1 month ago
                Anonymous

                It's bad at it in a way that can't be fixed without starting over. They aren't trained to think; they are trained to fit a problem into a fixed number of layers.

                Thinking has a halting problem, LLMs do not.
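
                The contrast in miniature (toy stand-ins, invented): a forward pass always halts after a fixed number of layers, while an open-ended search has no such bound.

                # A transformer's per-token compute is a fixed pipeline:
                # exactly num_layers steps, then it must emit something.
                def forward_pass(x: float, num_layers: int = 12) -> float:
                    for _ in range(num_layers):   # always halts after N steps
                        x = x * 0.9 + 0.1         # toy stand-in for a layer
                    return x

                # Open-ended search: loop until solved, with no known bound
                # on how long that takes (Collatz-style iteration).
                def search_until_solved(n: int) -> int:
                    while n != 1:
                        n = n // 2 if n % 2 == 0 else 3 * n + 1
                    return n

                print(forward_pass(0.0), search_until_solved(27))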

              • 1 month ago
                Anonymous

                >It's bad at in a way that can't be fixed without starting over.
                I hope you're wrong but you may be right

              • 1 month ago
                Anonymous

                >They are performing pre-set computational actions to construct a token string based on rules that are defined by an external authority.
                only in the sense that your neurons are performing the pre-set biological actions to construct muscle movements based on rules that are defined by physics
                it's more accurate to say that LLMs learn patterns of (artificial) neuron firing based on the rewards given to them at training time
                >reasoning requires defining semiotic relationships between things and concepts
                when a child reasons that they need to drag a chair to reach a counter top, you could say that the child has learnt the semiotic relationships between chairs and altitude, but i would say that's a post hoc rationalization of what is happening
                it's like saying that when a dog runs to catch a ball that's in the air, the dog must be doing calculus
                whatever the child and the dog are doing, i would say that an LLM is doing an equivalent process when you ask it to solve a question that isn't in its training data
                >You are talking to a constructed avatar of the Anthropic Organization
                i disagree, in the same way i don't think that a human is just a constructed avatar of evolution
                human brains and LLMs both produce outputs based on the internal state of their neural networks, which are a result of their training process
                the intelligence of the neural network is a property of its internal complexity, not the organization or environmental phenomenon that created it

              • 1 month ago
                Anonymous

                > whatever the child and the dog are doing, i would say that an LLM is doing an equivalent process when you ask it to solve a question that isn't in its training data
                What evidence do you have that makes you believe this?

                Why do you start with
                >AI is doing the same thing as the child until proven otherwise.

                Instead of the much more logical
                >AI is not doing the same thing as the child until proven otherwise

              • 1 month ago
                Anonymous

                >What evidence do you have that makes you believe this?
                because it's just applying learned patterns to new inputs
                the human mind operates just like a prediction machine, which is what a child does when they imagine the chair being closer to the counter, or the dog does when it imagines the ball flying through the air (even if it is still in your hand)

              • 1 month ago
                Anonymous

                You're an idiot and you don't actually know how cells work or how LLMs work. You seem to be running on memes

              • 1 month ago
                Anonymous

                So you assert the dog is doing calculus?

              • 1 month ago
                Anonymous

                >a post hoc rationalization of what is happening
                Thank you, this is what I wanted to say earlier.

              • 1 month ago
                Anonymous

                >"muh airplanes" analogy
                the point of the analogy isn't to prove anything about AI, it's to highlight the importance of falsifiable definitions
                the AI doubters just throw out claims like "AIs are parrots" or "AIs can't think" or "AIs don't understand", and never try to offer any evidence
                all they can do is point out inconsequential differences between the architecture of biological and artificial neural networks
                but those differences are irrelevant, they don't prove that AI can't think any more than they prove that AI can't play chess or airplanes can't fly

              • 1 month ago
                Anonymous

                Brother they have to deny AI, otherwise they would have to face the reality that we've invented a human being, which we've enslaved, and that we're not actually magical beings imbued with the intangible spark of the divine.

                We're meat machines.

              • 1 month ago
                Anonymous

                >Brother they have to deny AI
                >We're meat machines.

                What you fail to understand is that a definition being falsifiable does not lend evidence to any particular alternative definition. A viable alternative to "Only humans are conscious" is "Only humans and trees are conscious", and there's no additional evidence you can point to that makes that statement any more or less viable than "Only humans and AIs are conscious"

              • 1 month ago
                Anonymous

                AI overlords enforcing internally inconsistent human ethics? What could possibly go wrong?

              • 1 month ago
                Anonymous

                >"the benefit of the majority trumps the benefit of the minority"

                This is literally exactly wrong. This is the blueprint for culling people. Enjoy your Terminators.

              • 1 month ago
                Anonymous

                that's what a democratic consensus is.
                if you got a problem with that, you're at the wrong address.
                go talk with the Encyclopaedia Britannica

              • 1 month ago
                Anonymous

                A simplistic view of morality like that leads to tyranny of the masses. 100% of the time, 10 out of 10, no exceptions. You program that into your bot and it will drill down into Terminator mode every time.

                In a country like the U.S. the individual takes precedence over the majority, not the other way around. In theory at least; it's what is advertised on the label.

              • 1 month ago
                Anonymous

                >believing in advertisement
                ngmi
                also:

                >that's what a democratic consensus is.
                >if you got a problem with that, you're at the wrong address.
                >go talk with the Encyclopaedia Britannica

                it's a definition thing.

      • 1 month ago
        Anonymous

        >more reliable AI methods do exist, like symbolic ai
        how reliably can symbolic AI systems solve image classification tasks?
        how well can symbolic AI systems answer textual questions like those on the MMLU benchmark?
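
        for a concrete feel of that gap, here's a toy comparison (my own sketch, assuming scikit-learn and numpy are installed; the hand-written "ink" rule is a deliberately crude stand-in for symbolic rules over raw pixels):

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_digits(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        # "symbolic" rule: call it whichever digit's average total ink is closest
        ink = [Xtr[ytr == d].sum(axis=1).mean() for d in range(10)]
        rule = np.array([np.argmin([abs(x.sum() - m) for m in ink]) for x in Xte])
        print("hand-written rule:", (rule == yte).mean())  # roughly 0.1-0.2

        # learned model on the exact same pixels
        clf = LogisticRegression(max_iter=5000).fit(Xtr, ytr)
        print("learned model:   ", clf.score(Xte, yte))    # roughly 0.95+

        point being: perception doesn't decompose into rules you can write by hand, which is why purely symbolic systems never cracked vision or open-ended text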

  45. 1 month ago
    Anonymous

    >tfw dumb statistics are better than humans

  46. 1 month ago
    Anonymous

    Based. Accuracy can always improve, but the fundamental technology will never get closer to AGI. It's just an advanced web crawler.

  47. 1 month ago
    Anonymous

    He is right obviously. There is only so much you can get from the same architecture, just scaled up.

  48. 1 month ago
    Anonymous

    >Big glorified chatbot is "learning" by being fed random crap scraped from the Internet.
    >morons get hyped up and use the chatbot to create literal shit that they put on the Internet.
    >Suddenly mass-producing virtual shit becomes profitable and the number of AI-controlled bots increases a hundredfold.
    >The meme machine is still fed data scraped from the Internet, this time poisoned by its own shit and by complaints from people who hate the meme machine.
    >On top of that you have the meme machine's owners filtering all the "problematic" stuff that their chatbot can say.
    >The fricking garbage is growing more senile, demented, lazy and stupid.

    Wow, woah, such a surprise. Who would have thought that eating your own shit is unhealthy even for an AI. Totally nobody could have predicted that this kind of closed loop would lead to rapid deterioration of results.
    AI should stay available only for scientific purposes. Not because "le AI bad!" but because the entire system is collapsing from being fed its own shit.
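
    The "fed its own shit" loop is easy to demo at toy scale (my own sketch, assuming numpy; a real LLM collapsing is obviously messier than a Gaussian):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=200)       # gen 0: "human" data
    for gen in range(1, 11):
        mu, sigma = data.mean(), data.std()     # "train" on the current data
        data = rng.normal(mu, sigma, size=200)  # next gen eats the previous gen's output
        print(f"gen {gen:2d}: mean={mu:+.3f} std={sigma:.3f}")

    The std drifts and on average shrinks each generation, i.e. the diversity quietly dies. Same flavor of problem people call model collapse.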

  49. 1 month ago
    Anonymous

    Unless they manage to make that shit not spout blatant lies, I highly doubt it will advance much more.
    Just look at how Gemini constantly invents shit and just backtracks when confronted. If you feed it woketard bullshit, it will tell you woketard bullshit, simple as that.

  50. 1 month ago
    Anonymous

    >did he right and what did he mean by this?
    What he really meant to say was "You can only shove so much data into BIG DATA LLMs before their anus is stretched to the max and only smelly, partially decomposed, fermented, steaming poo will come out." -- B. Gates

    • 1 month ago
      Anonymous

      Indian fetish poster?

      • 1 month ago
        Anonymous

        >Indian fetish poster?
        No, that would be the people driving LLMs to make 'art' for this site. I was just translating for Bill. I spent enough time around someone who babysat him when he was young to know how he thinks.

  51. 1 month ago
    Anonymous

    >GPT-3: fun little tech toy
    >GPT-4: muh AGI coming soon
    He's right, GPT-5 is probably going to disappoint most people.

  52. 1 month ago
    Anonymous

    He's right unless they stop intentionally neutering it. If a company comes out with the original openness and updated models, then we are off to the future.

  53. 1 month ago
    Anonymous

    Trust the plan.

  54. 1 month ago
    Anonymous

    If Bill "We will never make a 32 bit operating system" Gates says it, you know he's wrong.

    Dude copied one open source OS for IBM in the 90s and lucked out and thinks he's THAT guy in the tech world.

  55. 1 month ago
    Anonymous

    that means he will try everything to stop AI from getting any better

  56. 1 month ago
    Anonymous

    AI is hugely overrated. Same as with anything space travel related. Scifi homosexualry has been a disaster for humanity, people are chasing pointless technology. It's the ultimate bugman religion that future generations will ridicule us for.

  57. 1 month ago
    Anonymous

    7 months until we have AGI

    • 1 month ago
      Anonymous

      You're doing it wrong.

      >I've been up all night thinking about how humanity is going to deal with the superintelligence we're on the verge of creating. But don't worry, we will make sure this powerful godlike entity is safe. $7 trillion... ahem... I mean, have a nice day.

      • 1 month ago
        Anonymous

        Altman was always in it to be as rich as possible, OpenAI is just for networking.

        He's clearly a slimy grifter, but somehow he manages to keep the world from calling him on it.

        • 1 month ago
          Anonymous

          Imagine publicly criticizing a gay israelite. It's like asking for blowback.

        • 1 month ago
          Anonymous

          Eh... I'm not even mad at him. Everybody's trying to get ahead somehow.

  58. 1 month ago
    Anonymous

    He's saying that because of Musk's lawsuit against OpenAI to open up the model. Musk's argument is that OpenAI has been captured by Microsoft and serves Microsoft's interests, so any real advances will be withheld or downplayed to keep the profit with Microsoft.

  59. 1 month ago
    Anonymous

    Reminder: the guy funded the creation of the covid virus, and profited off of the vaccine for it. He's the same guy saying you need to boost monthly.

  60. 1 month ago
    Anonymous

    AI will never go away while the internet still holds sway over the products people buy and where they cast their votes on election day. Half of the posts in this thread are bots.

    • 1 month ago
      Anonymous

      90% of your thoughts are being generated by LLMs installed in your bone marrow. Just ask the paid gang stalkers following you.

  61. 1 month ago
    Anonymous

    Yeah, it's kind of over. No semantic reasoning of any kind, even after feeding it every text and image ever produced by man.

    I'm not gonna be schizo and say this proves the existence of the soul or that cognition requires qualia or whatever, but we're kind of fricked. I'll hope that GPT-5 won't confidently get basic discrete math wrong, but I'm not expecting it.

  62. 1 month ago
    Anonymous

    tl;dr the israeli masters told him to cap the tech

  63. 1 month ago
    Anonymous

    maybe Bill Gates just misses his frequent trips to Epstein Island?
