Has AI hit the wall?

gpt 3.5 was trained on 300 billion words. gpt 4 has a dataset that's 10 times larger, so around 3 trillion words.
The Indexed Web contains at least 3.27 billion pages according to
https://www.worldwidewebsize.com/

If every one of these sites contains 100 words you're already at roughly 330 billion words, so there's not much more to add to the dataset from the internet. Include all the books available on Google Books and you've practically trained the model on 90% of all text ever written in recorded history.

How smart can an AI be that has been trained on all text ever written? can't be much smarter than what we have now, right?
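quick sanity check of these numbers in Python (100 words per page is just my assumption from above, the rest are the thread's numbers):

```python
# Back-of-envelope check of the numbers above.
gpt35_words = 300e9            # ~300 billion words (claimed for GPT-3.5)
gpt4_words = 10 * gpt35_words  # "10 times larger" -> ~3 trillion words

indexed_pages = 3.27e9         # worldwidewebsize.com estimate
words_per_page = 100           # assumption from this post
web_words = indexed_pages * words_per_page

print(f"GPT-4 dataset estimate: {gpt4_words:.2e} words")        # 3.00e+12
print(f"Indexed web estimate:   {web_words:.2e} words")         # 3.27e+11
print(f"Web covers only {web_words / gpt4_words:.1%} of that")  # ~10.9%
```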


  1. 2 months ago
    Anonymous

    You can't brute force a human level AI with current technology.
    GPT and Stable Diffusion are fun toys, but yes, they have a limit to them and we need an actual revolution to reach true AI.

    GPT doesn't understand context, it doesn't know what it's writing. It's just mimicking the patterns it found in the text it was fed.

    • 2 months ago
      Anonymous

      >GPT doesn't understand context
      gpt 4 does

      • 2 months ago
        Anonymous

        No more than a parrot does.

        • 2 months ago
          Anonymous

          You can teach a parrot to say "israelite" and "Black person" without self censoring.

        • 2 months ago
          Anonymous

          Same as a redditor then. Artificial Midwit

          • 2 months ago
            Anonymous

Artificial midwit would eliminate 95% of all office jobs. Since actual midwits are still WFH, there is a long way to go.

            • 2 months ago
              Anonymous

              Kek, true

              • 2 months ago
                Anonymous

                Sheeeeeeeeeit

      • 2 months ago
        Anonymous

this is what I meant by
>you have no clue
no, it doesn't. it can't.
current LLMs are still based on the same tech we used for this in the '90s or earlier. the concept has remained exactly the same.
the only difference now is that we have faster computers that are able to train and run larger neural networks.
there hasn't been an actual leap in this field for decades.

        • 2 months ago
          Anonymous

          >it doesn't. it can't.
          Why can't it and why does it look like it does?

          • 2 months ago
            Anonymous

            Word distributions.

          • 2 months ago
            Anonymous

It doesn't. Ask it coding questions and it will fricking fail and switch context to some completely irrelevant bullshit; even prompt iteration won't do shit

            • 2 months ago
              Anonymous

              Weird head canon. Sometimes it gets it right, sometimes it gets it wrong, but what it struggles with is logical reasoning and attention to detail, not "context". Why do meat LLMs keep regurgitating this narrative?

        • 2 months ago
          Anonymous

I read about LLMs in the '80s in books written in the '60s...

      • 2 months ago
        Anonymous

You can describe what a beer smells like, what it tastes like, how it makes you feel. But you cannot really understand what you are describing without having experienced it. And to experience it you have to have feelings and sensations (the five senses). AI has 0 senses. Senses make true intelligence. They also provide motivations. AI now is a glorified calculator compared to real intelligence. A wienerroach has more intelligence than any AI right now. t. read actual scholarly articles when I had access to the school's archives while studying CS.

        • 2 months ago
          Anonymous

          >you need to know what a real thing tastes like to understand how the abstract concept of it relates to other abstract concepts
          Pressing X on this one.

      • 2 months ago
        Anonymous

No it doesn't. It's just a constraint that derives from the association of words that correspond to certain hashes; it's basically israeli gematria in a broad sense

      • 2 months ago
        Anonymous

        No it doesn't. Bing AI uses GPT 4 and it's pretty dumb.

    • 2 months ago
      Anonymous

      https://venturebeat.com/ai/anthropics-claude-3-knew-when-researchers-were-testing-it/

      • 2 months ago
        Anonymous

        this has been debunked

        • 2 months ago
          Anonymous

          It was immediately rebunked.

          • 2 months ago
            Anonymous

            I'm preboooonking

      • 2 months ago
        Anonymous

I heard about this, and it's funny.
So, the thing is you guys have figured out what you say you figured out. LLMs are roughly how language is learned, but you've only done language. Thinking you'll create some AGI or sentience out of something that can only speak is funny.
You're not doing any hormonal simulation, you haven't even discovered all of the hormones yet, and you haven't even figured out co-hormones or what their purpose is. Honestly I don't even think you'll get there unless Elon or the few others realize it and step in.

    • 2 months ago
      Anonymous

maybe that's what we're doing

      • 2 months ago
        Anonymous

a baby doesn't need a trillion words to learn how to speak

        • 2 months ago
          Anonymous

          yea they kinda do.

    • 2 months ago
      Anonymous

It doesn't just mimic them, it applies previous patterns to similar circumstances, i.e. it remembers past states and uses context clues to predict the next best state.

      • 2 months ago
        Anonymous

>it applies previous patterns to similar circumstances, i.e. it remembers past states and uses context clues
Blah blah, an images-and-text database. Saves some previous text and images into that database.
>uses context clues
i.e. a database or a text search function?
Do you realize how much more complex even a single neuron of a human brain is?

    • 2 months ago
      Anonymous

      >GPT doesn't understand context
      It does, and surprisingly well.

      >It's just mimicking the patterns it found in the text it was fed.
      Picking them up successfully means understanding context.

      • 2 months ago
        Anonymous

        >It does, and surprisingly well.
        It doesn't, at all. Tell GPT it takes 60 minutes to dry 1 shirt, ask it to figure out how long it takes to dry 5. It'll say 300 minutes. It doesn't understand how a clothesline works. It doesn't understand context. It's a glorified search engine.

        >Picking them up successfully means understanding context.
        No, it doesn't.

        • 2 months ago
          Anonymous

          Shit-tier LLM post. Filtering this ID.

        • 2 months ago
          Anonymous

          that's just because the phrasing implied a singular focus of one shirt at a time

          • 2 months ago
            Anonymous

            >that's just because the phrasing implied a singular focus of one shirt at a time

            Only to a moron.

          • 2 months ago
            Anonymous

            >that's just because the phrasing implied a singular focus of one shirt at a time
            no it doesn't.

            Shit-tier LLM post. Filtering this ID.

            have a nice day

          • 2 months ago
            Anonymous

It didn't imply that at all. Only a subhuman moron would assume that.

        • 2 months ago
          Anonymous

          It literally says it takes 30 minutes to dry 5 if you dry them together.

        • 2 months ago
          Anonymous

>It literally says it takes 30 minutes to dry 5 if you dry them together.

          Works on my machine.

          • 2 months ago
            Anonymous

            Physics gays, is this actually correct? Won't the air be more saturated with water and take a little longer as a result? Assuming they are on a drying rack next to each other?

            • 2 months ago
              Anonymous

              >Won't the air be more saturated with water and take a little longer as a result? Assuming they are on a drying rack next to each other?
              Negligible if the place is properly ventilated. The LLM actually guards itself against this gotcha by stating the assumption of independence.

      • 2 months ago
        Anonymous

>It does, and surprisingly well.
It doesn't and it can't.
the whole concept of an LLM is incapable of understanding. the only permanence, unless you keep training it, is the weights of the neural network.
it's a machine driven entirely by statistics.
it is trained to give you
>input A results in output B
and the only decision it makes is which is the most likely output for any given input.
it doesn't figure this out either. it is trained on data to adjust each neuron's weight to get closer to whatever it is that humans would answer.
some weights are only a tiny bit higher than others so the outcome is not always the same for the same input. especially for models that are allowed to keep being trained while being used.
this is also why it is so easy to turn an LLM racist with just a few inputs. weights get adjusted and all of a sudden the most racist response is the most statistically likely and is therefore what the model will deliver.

because of this LLMs are near incapable of solving a deterministic problem.
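to make the
>input A results in output B
part concrete, here's a minimal sketch (a toy bigram counter, not a real LLM; the corpus is made up for illustration):

```python
from collections import Counter, defaultdict

# Toy "LLM": a bigram model trained by counting which word follows which.
# Training only adjusts these counts (the "weights"); no meaning is stored.
corpus = "the cat sat on the mat the cat ate the rat".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # The only "decision": pick the statistically most likely continuation.
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' -- it followed 'the' most often
print(most_likely_next("cat"))  # 'sat', but 'ate' has the exact same count;
                                # near-tied weights mean outputs can flip
```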

        • 2 months ago
          Anonymous

          >It doesnt and it cant.
          Proof?

          >all of your low IQ rhetoric
          Half of it is not-even-wrong tier and the rest does nothing to prove your point.

          • 2 months ago
            Anonymous

>Proof?
the proof is in the technology used.
if you had a cursory understanding of how LLMs and other neural networks work you would instantly know what I am talking about.
imagine a path that splits up into 3 roads.
each of these 3 splits up again.
further down, those paths eventually all lead to either a cliff or a beach.
you can reach the cliff and certain death by several possible paths.
same goes for the beach, where you would live.
training LLMs is basically letting them walk those paths back and forth to check which decisions at which crossings are least likely to lead to death (a bad outcome).
it puts a sign on each crossing telling it which direction has the highest chance of being the correct choice.
after enough trial and error the signs on each crossing will almost certainly lead you to the beach, to safety.
do you think an area with different paths and signs on it has any sentience? or that it knows where the signs will lead you?
obviously not.
same goes for the LLMs.
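you can even sketch the signs idea in a few lines (the map and the rewards are made up, purely illustrative):

```python
import random

# Crossings and the roads leading out of them; two terminal outcomes.
paths = {"start": ["A", "B"], "A": ["cliff", "beach"], "B": ["cliff", "cliff"]}
sign = {crossing: 0.0 for crossing in paths}  # the "sign" at each crossing

for _ in range(1000):  # trial and error: walk a path, then adjust the signs
    node, route = "start", []
    while node in paths:
        route.append(node)
        node = random.choice(paths[node])
    reward = 1.0 if node == "beach" else -1.0  # beach = live, cliff = die
    for crossing in route:
        sign[crossing] += 0.01 * (reward - sign[crossing])

print(sign)  # 'A' ends up with a higher sign than 'B'; no sentience involved
```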

            • 2 months ago
              Anonymous

              >if you had a cursory understanding about how LLM and other neural networks worked
              >immediately spews a bunch of schizo babble
              The tragic thing is that you're not just faking it. You genuinely lack the mental capacity to know if you understand something or not. You are a lot closer to the LLM than you think. A fricking meat automaton.

              • 2 months ago
                Anonymous

neural networks are designed after nature. that part of us is like an LLM. well duh.
besides that, we have several other mechanisms inside our body that affect our decisions. the most prominent one being actual memory.

>Isn’t that more neural networks than LLM?
what do you think LLMs are?

              • 2 months ago
                Anonymous

                >neural netorks are designed after nature
                Neural networks have basically nothing to do with how brains work. Do you notice how you constantly hallucinate knowledge you don't have? Just like a poorly trained LLM?

              • 2 months ago
                Anonymous

when did I mention brains? I didn't.
anyway, they are modelled after neural networks in brains though.

>govts will censor and lobotomise it just like the internet for the benefit of their cronies
for any privately run model, they can't.
your problem would be limited training data though.

              • 2 months ago
                Anonymous

                >i didn't heckin' mention brains!!!
                >but yes, i meant they are modeled after brains, how did you know?!!?!?
                Fricking mongoloids... holy shit, this board has gotten so bad it's unusable.

              • 2 months ago
                Anonymous

                explain why he’s wrong instead of name calling

              • 2 months ago
                Anonymous

                Ever heard the phrase "not even wrong"? It applies perfectly in this case. It's word salad so vague it hardly even says anything at all, let alone anything concrete and testable.

              • 2 months ago
                Anonymous

>See [...]. My point of view is that you shouldn't talk about things you don't have any solid comprehension of.
Again, explain your point of view to enlighten us mortals.

              • 2 months ago
                Anonymous

                My "point of view" is that screeching about how an LLM doesn't "understand" texts but "only" adequately models all the relationships between all the words in a text is basically a non-statement. The guy is slapping together words into plausible sentences, but he doesn't understand what he's saying, and the proof is in the fact that he repeatedly deflects when you ask him to explain what exactly he means by "understanding" in this context.

              • 2 months ago
                Anonymous

He's right though, these algorithms do not understand anything. Human learning takes, depending on the case, at most several examples to learn something.

The fact that these models need such massive training datasets is one indication that they're not really "learning" anything as a concept, but learning to semi-accurately parrot things from their training dataset. Every single time you ask it for something that requires expertise, it devolves into word salad because it has no conceptual layer underneath.

If you had anything resembling an actual AI, what you would expect is that:
- if you feed it a programming language's documentation, it will be able to program anything you ask of it
- if you feed it a biology textbook, it will be able to explain how the human body works without missing a beat
That is to say, ONE example should be enough, not millions of similar examples.

I've been saying from the start of this whole mania that we are getting ourselves to a place where midwits will be fooled by these algorithms simply because they understand literally nothing about what they are, and are unable, even now, to prod at them and figure out that they have no conceptual layer and fall apart the moment anything specialised is needed.

              • 2 months ago
                Anonymous

                >these algorithms do not understand anything
                Explain what you mean by "understanding" in this context. lol

              • 2 months ago
                Anonymous

Only a slightly simpler question than explaining consciousness. I don't think this is the gotcha you think it is. What is clear is that whatever these models do, it bears no resemblance to what humans do. You might think it's good enough, but that only puts you in the dimwit category, ready to be fooled.

A true AI has to have a conceptual layer, whatever form that takes, where it is able to boil the training data it's given down into concepts that it can then use. Right now what we have is prediction that is good enough to fool 90% of the public.

              • 2 months ago
                Anonymous

                >Only a slightly simpler question than explaining consciousness.
                So in other words, you can't explain what your own statement means, and the irony of this is lost on you.

              • 2 months ago
                Anonymous

                What's ironic is that you think this is any kind of a point. I can no more explain "understanding" than I can any other base level phenomena, solve the problem of inference or whatever else. Part of the reason these algorithms take massive training sets is because we actually don't know how the process of understanding or conceptualisation works, so we're trying to brute force it.

                If you want to school me, please go ahead. If not, drop your idiotic point and move on.

              • 2 months ago
                Anonymous

                >i can't explain what I mean by "understanding"
                Then why do you keep shitting out statements you don't understand? Are you some kind of a LLM?

              • 2 months ago
                Anonymous

                I can talk about it in general, colloquial terms. Now you seem to have some kind of God-given definition of the word. Please go ahead and define it, so that I may learn from your divinity.

              • 2 months ago
                Anonymous

                >I can talk about it in general, colloquial terms
Ok. In general, colloquial terms, what measurable aspect of "understanding" do LLMs lack according to you?

              • 2 months ago
                Anonymous

Are you stuck in a loop, little buddy? I've asked you for a definition since you seem to be an expert.

                How about this, I have no less of a standing to say that these algorithms don't understand anything, than you have to say they do, until you provide a definition. Go ahead, I have a couple of hours to wait.

              • 2 months ago
                Anonymous

                >I've asked you for a definition
                Why do I need to define the terms that YOUR take relies on? The sad thing is that I'm trying to be charitable to you now. I no longer even require you to define anything, but you keep stalling because you ARE a nonsentient meat LLM and you can't explain what the strings of words you shart out mean, no matter how much I lower the standard. lol

              • 2 months ago
                Anonymous

Yes, yes, I should just boil down and authoritatively define for you things that philosophers spend their whole lives writing books about.

                Last reply, little fella. You still have the chance to enlighten us.

              • 2 months ago
                Anonymous

                >define authoritatively
                This meat automaton outright gave up on reading the posts it replies to. The most enlightening thing about LLMs is that they demonstrate conclusively that most "people" aren't human. This whole thread is full of "people" projecting their deficiencies on the machines and then getting monumentally assblasted when I show that they no more "understand" what they are spouting than they expect the LLM to. lol

              • 2 months ago
                Anonymous

if you can't define your own argument so that other people can understand it, you're better off saying nothing at all
better to be silent and be thought an idiot than to speak aloud and confirm it

              • 2 months ago
                Anonymous

I described to it that I have an internet cable coming from the router on the lower floor into my bedroom, that I have an internet box for the TV in the bedroom, then another hole in the wall with an internet TV, a receiver and a computer there, and asked it how to connect all of this with as few cables as possible. It described which cables to get, told me to buy a switch and a router, which to place in which room, what cables to connect where, and which cable to push through the hole in the wall 🙂 yes, it wasn't very hard, but you can't say it didn't build a concept to solve this.

I also had some quite profound debates with it about quantum fields, philosophy and consciousness in general. No matter how you twist and turn it, the fricker will beat any 90-110 IQ human out there and hold a better conversation.

              • 2 months ago
                Anonymous

AI is at a midwit level that it can't ever surmount
it can bullshit idiots and gaslight midwits but will hallucinate on anything past a superficial level

              • 2 months ago
                Anonymous

Comparing the solution the AI gives me for a problem to the solution ASUS support has been bullshitting me with while trying to solve it for two weeks now, I welcome our new AI overlords.

              • 2 months ago
                Anonymous

For the algorithm the output is nothing more than the statistically most likely combination of words.
Understanding requires knowledge of a word's meaning and also taking the context of the word into consideration, neither of which an LLM is capable of.
all you do now is go off on a tangent that isn't relevant though. I won't go into a philosophical debate on what is or isn't "understanding".
yeah, we are flesh machines. way more elaborate than current LLMs, with way more capabilities.

it doesn't help that LLMs need human input to decide whether or not an output is really a good choice.
the human input is really the only thing that makes current LLMs even viable, because they can not understand and decide on their own.

              • 2 months ago
                Anonymous

                >Understanding requires a knowledge of the words meaning
                What does it mean to "know" the word's "meaning" in this context?

              • 2 months ago
                Anonymous

>What does it mean to "know" the word's "meaning" in this context?
if I say "apple" you know exactly what I mean by that. you understand the word and what concept it stands for. you may even be able to visualize it in your head or remember the taste. for an LLM it is nothing more than a combination of numbers.
yes, numbers, not even letters.
LLMs work entirely by arithmetic.
now you know why it's best to run them on GPUs.

>what about pictures? they're just binary patterns extracted from the Champernowne sequence
while it's way harder to wrap your head around it, the concept is essentially the same.
neural networks can be trained to recognize combinations of anything. pixels for example. and very consistently guess what this combination of pixels may represent.
imagine you show an algorithm 100 pictures of cats. 50 times it guessed dog.
anytime it guessed wrong, a human (or database) corrected the mistake.
the next time the algorithm is shown 100 cat pictures it views 80 as cats and only 20 as dogs. and so on until the algorithm is adjusted enough to get all 100 pictures right.
this, but way more complex.

this requires an insane amount of trial and error and is the reason why you can not train neural networks of this magnitude at home from scratch.
you can specialize them for certain tasks. that is easier.
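a minimal sketch of both points, with a made-up vocabulary and a toy one-feature cat/dog guesser that gets corrected and nudges its weight (real nets do this over millions of pixels and weights):

```python
# Words are just numbers to the model.
vocab = {"apple": 0, "cat": 1, "dog": 2}
print(vocab["apple"])  # 0 -- no taste, no picture, just an id for arithmetic

# Toy trainer: guess cat/dog from one feature, get corrected, adjust weight.
# Feature values and labels are made up for illustration.
examples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]  # (feature, 1 = cat)
w, b = 0.0, 0.0
for _ in range(100):                       # many rounds of trial and error
    for x, label in examples:
        guess = 1 if w * x + b > 0 else 0  # current guess: cat or dog
        error = label - guess              # the human/database correction
        w += 0.1 * error * x               # nudge the weight toward the answer
        b += 0.1 * error
print([1 if w * x + b > 0 else 0 for x, _ in examples])  # [1, 1, 0, 0]
```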

              • 2 months ago
                Anonymous

                >if I say "apple" you know exactly what I mean by that
                Yeah? How do you judge this?

              • 2 months ago
                Anonymous

what about pictures? they're just binary patterns extracted from the Champernowne sequence

              • 2 months ago
                Anonymous

You are conflating learning and understanding. It does not have the capability to generalize during learning and thus needs a large set of examples to form an understanding of a concept. However, once it has formed this understanding of the context, it can derive conclusions and actions for specific use cases from it that were not part of the dataset. This is definitely a form of understanding; basically anything that is taught in school also uses this kind of learning and understanding, without higher-level abstract deduction and reasoning.

              • 2 months ago
                Anonymous

The most basic regression models have the capability of predicting outside their learning data; that's nothing new or unexpected.

                Let's put the question of learning and understanding to rest then. These models learn, that much is tautological. When I say "understand", I mean an internalised concept that can be described, or used in reasoning.

                As an example, since you talked about school. In maths especially, each year as you get deeper into the weeds, you generally receive a cheat sheet of the various transformations/derivatives/integrals, etc. Understanding is being able to describe what each of the dozens of formulae mean and (to some extent) why. And yes, I'll jump ahead and say that most people who study maths even at the high school level don't understand a lick of it and are only able to use the formulae to get a B.
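as for the first point above, here's a minimal least-squares sketch with made-up data; predicting far outside the training range is nothing special:

```python
# Even a basic regression predicts outside its training data.
xs = list(range(10))            # training inputs: 0..9
ys = [2 * x + 1 for x in xs]    # made-up data following y = 2x + 1

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(intercept + slope * 100)  # 201.0 -- far outside anything it "saw"
```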

              • 2 months ago
                Anonymous

the data-generating distribution is the distribution of the unobserved data from which the training examples are sampled, and LLMs, through tokenization and attention, learn to align to this distribution. what now?

> Understanding is being able to describe what each of the dozens of formulae mean and (to some extent) why.
But how can you argue that ChatGPT cannot do this? It is perfectly capable of taking a statement written as a problem, transforming it into a Python program, processing it, giving you the result, and explaining every step

              • 2 months ago
                Anonymous

>the data-generating distribution is the distribution of the unobserved data from which the training examples are sampled, and LLMs, through tokenization and attention, learn to align to this distribution. what now?
Now you consider two samples from this distribution with the same probability, and explain how the LLM is supposed to tell that one makes more sense than the other.
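here's a minimal sketch of what I mean, with a toy unigram model and made-up probabilities; the two strings get identical scores, and that score is all the model has:

```python
import math

# Toy language model: each word has a probability; a text's score is the
# product of them (here: sum of logs). Numbers are made up for illustration.
p = {"colorless": 0.01, "green": 0.03, "ideas": 0.02, "sleep": 0.02,
     "furiously": 0.01, "dreams": 0.02}

def logprob(text):
    return sum(math.log(p[w]) for w in text.split())

a = "colorless green ideas sleep furiously"   # famous grammatical nonsense
b = "colorless green dreams sleep furiously"  # equally unlikely variant
print(logprob(a), logprob(b))  # identical scores -- the model has nothing
                               # else to say about which "makes more sense"
```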

              • 2 months ago
                Anonymous

Wait, is your point that a model can only be as good as its data? Yeah, no shit.
As long as the model does not have a way to verify information against the outside world, it can never decide if a piece of information is good or not. However, obviously, in training, some data could be marked as more reliable than other data; for example, you could tell it that mathematical equations are the highest-quality data, followed by verified measurements of physical constants, followed by highly cited Nature publications, followed by everything else, followed at the very end by 4chin posts.

              • 2 months ago
                Anonymous

                Is all this babble just to say that it fundamentally can't tell the two apart, because those probability values are all it goes by?

              • 2 months ago
                Anonymous

man, I've had just about enough of you. make your point instead of asking these pretend Lex Fridman-style questions.

              • 2 months ago
                Anonymous

                Why are you getting assblasted? I'm just asking you if we're on the same page. Did you just concede that any two samples with the same probability are equally plausible to the LLM?

              • 2 months ago
                Anonymous

Yes, with the caveat that we don't exactly know all the steps that go into training/assigning probabilities

              • 2 months ago
                Anonymous

                >Yes
                Ok. Now, where in the data-generating distribution for currently existing texts do you figure you would find a piece of text that relies on novel knowledge, novel concepts, novel ways of thinking? (Think of the conceptual gaps between modern man and medieval man, for instance.) Do you figure it would be somewhere with the bulk of the distribution or closer to the margins?

              • 2 months ago
                Anonymous

we do know this. why say this blatantly false shit and think no one will notice?!?!?
the problem is that your average joe lacks the resources to do it at scale.
>Do you figure it would be somewhere with the bulk of the distribution or closer to the margins
this is the clear giveaway that you are a midwit who is way in over his head while thinking his points have any relevance to the topic at hand.

LLMs will never and can never have novel ideas. This goes completely against what they are.
they could learn from novel ideas and incorporate them into their weights. that's not the same thing though.
they would also not extrapolate from there.

I am at a loss for words about your non-points.
you are hung up on problems and knowledge from the '50s.
what you believe to be a problem, relevant, or interesting is either figured out already or is that way BY FRICKING DESIGN.

please, for the love of god,
stop shitting up this thread and read the frick up about LLMs and NNs.
you are mindblowingly stupid and have no argument.

              • 2 months ago
                Anonymous

meant
>you haven't picked up on my listing a bunch of terms!!!
>Absolute seething. Anyway, explain what the data-generating distribution is and what a typical LLM models in your next post. Prediction: you will either deflect or stop replying.
for the reply to the village rapist

              • 2 months ago
                Anonymous

>the data-generating distribution is the distribution of the unobserved data from which the training examples are sampled, and LLMs, through tokenization and attention, learn to align to this distribution. what now?
What what? What are you quoting?

>But how can you argue that ChatGPT cannot do this? It is perfectly capable of taking a statement written as a problem, transforming it into a Python program, processing it, giving you the result, and explaining every step
As a language model, the first thing it would get good at is programming; that is no real surprise. What I'm saying is that if it had an actual conceptual layer, it would not break down into word salad the moment you ask it something requiring specialised knowledge. And it would not make illegal chess moves at turn 3 if you ask it to play chess. Yes, I have tried it in my field; no, I was not impressed, and you can find plenty of videos on Youtube of these algorithms doing it. The best it can do is parrot some surface-level knowledge.

              • 2 months ago
                Anonymous

>But how can you argue that ChatGPT cannot do this
this is the problem with explaining this stuff to you guys.
if you had (just cursory) knowledge about neural networks you would not even ask this question to begin with.
>it is perfectly capable of taking a statement written as a problem, transforming it into a Python program, processing it, giving you the result, and explaining every step
it is perfectly able to replicate a combination of characters that humans would interpret as the correct answer.
not always, but it's getting really good at it.
programming is also one of the parts where LLMs suck donkey dick, as it is deterministic.
statistics and determinism don't mix too well.
you can train an LLM to give you 3 as the answer for 1 + 1 CONSISTENTLY.
it's because the training data determines what the outcome to any given input is.

current LLMs also have a human component added to them AFTERWARDS as a control mechanism.
this is what ultimately made LLMs so good at human-sounding responses.
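you can even sketch the 1+1=3 point in a few lines (a toy frequency model; the training set is made up):

```python
from collections import Counter

# Train a toy model on deliberately wrong data: the "answer" to a prompt is
# whatever followed it most often in training, true or not.
training_data = [("1+1=", "3")] * 100 + [("1+1=", "2")] * 5
answers = Counter()
for prompt, answer in training_data:
    if prompt == "1+1=":
        answers[answer] += 1

print(answers.most_common(1)[0][0])  # '3' -- consistently, because the
                                     # training data, not arithmetic, decides
```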

              • 2 months ago
                Anonymous

You have no clue what you're talking about buddy.
Just because it is trained on a probabilistic basis, that does not mean it doesn't display emergent behavior. It is well known that simple combinations can lead to extremely complex behaviors, such as the mandelbrot baum

> you can train an LLM to give you 3 as the answer for 1 + 1 CONSISTENTLY
Sure, you could train a human to do this as well, since it's simply human-defined axioms that could be arbitrarily redefined

I'm not saying that ChatGPT has human-level understanding, obviously it doesn't, but you can't deny that it has a good level of understanding and reasoning that can no longer be simply explained with just predicting the "good sounding" next word, as was the case with GPT-2

              • 2 months ago
                Anonymous

>You have no clue what you're talking about buddy.
>extremely complex behaviors, such as the mandelbrot baum
Black person, are you for real, lol
>since it's simply human-defined axioms
it's not. that's the fundamental difference.
you cannot train a human who understands mathematics to believe that 1 + 1 = 3. that human would KNOW that this answer is incorrect because it has an understanding of the concept of math.
>GPT has human-level understanding
it has no understanding at all. it can not have it.
the technology is incapable of this.
what is so fricking hard to understand about this one simple fact? you are trying to wrap your head around something that does not exist. that can not exist.
not how it is designed now, anyway.
>it has a good level of understanding and reasoning that can no longer be simply explained with just predicting the "good sounding" next word
but that is exactly what is happening. that is exactly what neural networks do.
it's the unfathomable amount of training data and 100s of billions of "neurons" that make it so hard for you to accept that it's nothing more than that: a trained response.

              • 2 months ago
                Anonymous

> it's the unfathomable amount of training data and 100s of billions of "neurons" that make it so hard for you to accept that it's nothing more than that: a trained response.
so just like the brain then? with the training data being the eons of evolution that have pre-seeded the human brain with a perfect capability to learn from its current environment, because 99.9% of all the basic shit that AI needs to learn from scratch is already pre-coded in the baby's brain
> you cannot train a human who understands mathematics to believe that 1 + 1 = 3. that human would KNOW that this answer is incorrect because it has an understanding of the concept of math.
Because the human has the inborn capability of counting and doing very basic math. this could be trained into an AI as well

>Yes
>Ok. Now, where in the data-generating distribution for currently existing texts do you figure you would find a piece of text that relies on novel knowledge, novel concepts, novel ways of thinking? (Think of the conceptual gaps between modern man and medieval man, for instance.) Do you figure it would be somewhere with the bulk of the distribution or closer to the margins?
How do you define novel, if outside of the training distribution is not novel to you?

              • 2 months ago
                Anonymous

                >How do you define novel if outside of the training distribution is not novel to you?
                Holy shit, most of the posters ITT literally aren't human...

              • 2 months ago
                Anonymous

let's turn your game on you. define novel, if outside of the training distribution is not novel. Prediction: you will either deflect or stop replying.

              • 2 months ago
                Anonymous

>define novel if outside of the training distribution is not novel.
On the tiny off chance that you are human and simply stupid, try reading this post again:
>Yes
>Ok. Now, where in the data-generating distribution for currently existing texts do you figure you would find a piece of text that relies on novel knowledge, novel concepts, novel ways of thinking? (Think of the conceptual gaps between modern man and medieval man, for instance.) Do you figure it would be somewhere with the bulk of the distribution or closer to the margins?
Your reply is outright incoherent.

              • 2 months ago
                Anonymous

I'm well aware that ideas that would be considered novel will, by definition, not be located in the bulk of the distribution, but rather at the margins. You still haven't made a point yet though.

              • 2 months ago
                Anonymous

>ideas that would be considered novel will, by definition, not be located in the bulk of the distribution, but rather at the margins
Ok, you're making progress. Slow but steady. All the stuff on the margins is essentially just unlikely text as far as the LLM is concerned. Now, where do you think all the schizobabble goes? How many of those marginal texts happen to be brilliantly novel ideas and how many of them are just structured noise? You've already conceded that the LLM can't tell them apart, which just brings us back to my initial post:

>The most you can get, theoretically, is a perfect model, but what exactly is being modeled? Existing human knowledge, existing concepts, existing ways of thinking. No matter how advanced your LLM is, it could never tell apart a text that lies on the far margins of the data-generating distribution because it's schizo word salad, from one that ends up there because it's profoundly innovative or truly ingenious. That's the true wall and there is no breaking through it with any static model.

              • 2 months ago
                Anonymous

Wow, you've been stringing along all your posts to make this extremely simple point that was already discussed very early on? Jesus Christ, wtf.

As I already said, there are weightings for the source of the data, and the training is more than just feeding in random texts and predicting next words; there is a human component with cross-validation of data, rectification of logical errors/enforcing of ontologies during training, etc. that can greatly improve the capability of the model to recognize plausible new ideas and differentiate schizobabble. now of course this is not perfect and it will miss a lot of interesting things and also pick up wrong information for sure, but this is minor in the overall context of the trillions of words that it learns. The premise of the discussion is whether the model has learned an understanding and capability to reason in a limited scope, not whether it is the best and most capable intelligence on the planet.

              • 2 months ago
                Anonymous

                >concedes a simple observation
                >goes right ahead and tries to contradict that observation
                These things aren't fully human.

              • 2 months ago
                Anonymous

                I consider this an admission that you have been rekt, village rapist

              • 2 months ago
                Anonymous

                >acknowledges a general argument that is agnostic to the specific data-generating distribution being learned
                >immediately proceeds to claim that you can fix the issue by changing the data
                He DID eat breakfast.

              • 2 months ago
                Anonymous

> repeats 50 posts about stats 101 introductory knowledge that are irrelevant to the discussion
> ignores the observed reality at hand
> feels like the next Geoffrey Hinton

              • 2 months ago
                Anonymous

                Sorry, please tell me again how you refute an argument that applies to any data-generating distribution by merely changing the data (and therefore the data-generating distribution).
                >logical arguments don't count b-b-because m-m-muh lived experience
                This is what a breakfast-club white Black person looks like.

              • 2 months ago
                Anonymous

>You still haven't made a point yet though
and if you haven't noticed yet, he's not going to.
he's a low-effort troll with minimal knowledge of the topic.
you are wasting your time.

              • 2 months ago
                Anonymous

>pre-seeded the human brain with a perfect capability to learn from its current environment because 99.9% of all the basic shit that AI needs to learn from scratch is already pre-coded in the baby's brain
And there we go. The reason these models are so shit at learning, and the reason they break down the moment they need specialised knowledge, is that they don't have the tools to learn properly.

That is exactly my point. What is happening right now is brute-forcing an algorithm that kind of "works", without changing the paradigm. Once you crack the problem of conceptual layers and reasoning, and drop (or lower) the reliance on statistical learning, is when you'll get true AI. And it won't take a million examples to learn what a dog is.

              • 2 months ago
                Anonymous

it lacks abstraction. you can instruct ChatGPT to make simple ASCII art and it can't, because it didn't get trained to do so

              • 2 months ago
                Anonymous

>so just like the brain then?
parts of it. not a whole brain. a brain is much more complex.
>born capability of counting and doing very basic math
not even that. understanding the rules of math would be enough to rule out 1+1=3.
>this could be trained into an AI as well
not into neural networks.
other systems would need to be wrapped around it to accomplish something similar.
this is being done, but not in the direction you think of.
while it may be possible to replicate something like human consciousness, self-awareness or a general understanding in a machine, we are very far from this still.
>define novel if outside of the training distribution is not novel to you
he probably means new knowledge. progress in science or what not. maybe fringe theories.
currently LLMs would revert to the lowest common denominator of information.
pump it full of propaganda and it will spout propaganda. it won't ever think outside of the box you put it in. it can't.

              • 2 months ago
                Anonymous

> not into neural networks. other systems would need to be wrapped around it to accomplish something similar.
Yes, I was not talking about neural networks just by themselves. While the transformer LLMs obviously play a very big part, we are seeing more and more rigid structures, such as ontologies, being enforced in the training and the output, which gives it inherent constraints from the physical and mathematical world
> this is being done, but not in the direction you think of.
enlighten us

> currently LLMs would revert to the lowest common denominator of information.
> pump it full of propaganda and it will spout propaganda. it won't ever think outside of the box you put it in. it can't.
I would agree that LLMs at the very best can currently operate at the level of an averagely intelligent human being. But I don't see any barriers to it improving rapidly, as it has been. Just a few years ago we thought it would be impossible to get to this current level with the current architecture, which has basically not changed much in the last five years, yet we are seeing steady improvements

              • 2 months ago
                Anonymous

>enlighten us
you already said it in your post.
it's wrapped around as constraints to force compliance.
it's not being done to actually make it intelligent.
>has not changed much in the last five years
it hasn't changed in 50 years.
>operate at the level of an averagely intelligent human being
you still fundamentally misunderstand what these models do. this is tiresome.
it can replicate human-made input at a decreasingly recognizable level.
>yet we are seeing steady improvements
it's because we can finally train the AI properly.
look how many "AI" models suddenly sprung up.
there was no leap in technology for AI, there was a leap in the technology that AI relies on to get tuned properly.

              • 2 months ago
                Anonymous

> it's wrapped around as constraints to force compliance.
Going back to my earlier post, this can be equated to the inherent knowledge that the brain inherits about physical processes, muscle movement, objectness, etc. The ability to include constraints and a knowledge base will only improve in the future, and using these to deduce new information through logical reasoning (which is something that unfalsifiable information helps in) will make it more "intelligent" for sure

> it's not being done to actually make it intelligent.
For one, there's no good definition of what intelligent actually means. Can it pass increasingly difficult intelligence tests? for sure, if it is trained that way. can it deduce a new concept from a single example, then define it as a new class of concepts and apply it to everything else? Currently, as far as I know, not yet. Even though I think this will be solved in the next few years.

              • 2 months ago
                Anonymous

                good post

            • 2 months ago
              Anonymous

              Isn’t that more neural networks than LLM?

            • 2 months ago
              Anonymous

good-sounding analogy. not that I know anything about this “AI” stuff, but the idea that the “training” stage of the “AI” is like putting signs (weights) on paths for the future “AI” to use as a map, without any more sentience than a sat-nav, seems about right.

              • 2 months ago
                Anonymous

>good-sounding analogy. not that I know anything about this “AI” stuff
                This statement right here sums up this whole thread. Meat LLMs don't understand what it means to not understand.

              • 2 months ago
                Anonymous

                instead of name calling, explain your point of view

              • 2 months ago
                Anonymous

See:
>Ever heard the phrase "not even wrong"? It applies perfectly in this case. It's word salad so vague it hardly even says anything at all, let alone anything concrete and testable.
My point of view is that you shouldn't talk about things you don't have any solid comprehension of.

    • 2 months ago
      Anonymous

      This. We’re decades away from actual AI.
      Not that this shit isn’t useful, it is, but I laugh when people think it’s gonna overtake the world in 2 weeks.

      • 2 months ago
        sage

This Black. We aren't even quite in the Model T stage.

    • 2 months ago
      Anonymous

      >GPT doesn't understand context, it doesn't know what it's writing. It's just mimicking the patterns it found in the text it was fed.
      And that's all "AI" will ever be.

      • 2 months ago
        Anonymous

        Implying humans are any different unless you're one of the religious homosexuals on here that believe in muh soul

        • 2 months ago
          Anonymous

there’s a distribution of intelligence; many people might only be capable of filling roles that AI can do, just like many people were only employable as farm hands

    • 2 months ago
      Anonymous

      >It's just mimicking the patterns it found in the text it was fed.
Anon, you do the same; that's literally how humans learn and express language

      • 2 months ago
        Anonymous

this anon gets it. Humans won't know when we've reached actual sapient A.I. The emulators will have long since made the average person think they are. And since the average human is about as smart as a talking rock themselves, it won't matter much. Hell, the average person considers Black folk and israelites human. the emulators already mog on those two.

        • 2 months ago
          Anonymous

          You are correct about everything in your point, except "this anon gets it". Humans can learn something from one example. These models need thousands or millions because they are not "learning" the same way.

          • 2 months ago
            Anonymous

If the only input and words you teach a human in his whole life is: 1+1=3, and you take his brain out of his body and somehow keep the brain alive, that brain also most likely isn't going to spout out anything more than 1+1=3. And you'd probably have to make sure no knowledge gets transferred from genes either. So if that is the only data and input it will ever get, it's probably not going anywhere further than that.

The human brain is constantly fed stupid amounts of data from all our sensors interacting with the environment, and it deals with a bunch of data from inside our bodies too. But if you limit all the data our brain gets to only 1+1=3, it's not going to figure out anything either.

            • 2 months ago
              Anonymous

>If the only input and words you teach a human in his whole life is: 1+1=3, and you take his brain out of his body and somehow keep the brain alive, that brain also most likely isn't going to spout out anything more than 1+1=3
Found yet another NPC. You're telling me that if someone kept telling you that 1+1=3, it would never occur to you to question what in the FRICK they actually mean by this statement and how it relates to reality?

              • 2 months ago
                Anonymous

What reality? Your brain is kept out of your body and artificially kept alive, and the transfer of knowledge from your genes is disabled. 1+1=3 is the only data your brain ever got. That data would be as useless to it as it would be to an AI if it were the only data it ever got.

              • 2 months ago
                Anonymous

                >What reality?
                Whatever reality "1+1=3" is supposed to be making a statement about.

                >1+1=3 is the only data your brain ever got.
                This doesn't even make sense.

    • 2 months ago
      sage

      https://i.imgur.com/1J8PelJ.jpg

>gpt 3.5 was trained on 300 billion words. gpt 4 has a dataset that's 10 times larger, so around 3 trillion words.
>The Indexed Web contains at least 3.27 billion pages according to
>https://www.worldwidewebsize.com/
>If every one of these sites contains 100 words you're already at roughly 330 billion words, so there's not much more to add to the dataset from the internet. Include all the books available on Google Books and you've practically trained the model on 90% of all text ever written in recorded history.
>How smart can an AI be that has been trained on all text ever written? can't be much smarter than what we have now, right?

      You are feral Black folk, kys.

    • 2 months ago
      Anonymous

      >GPT doesn't understand context, it doesn't know what it's writing. It's just mimicking the patterns it found in the text it was fed.
So, just like all <140 IQ humans these days? It's unironically easier to have deep and thorough conversations with ChatGPT and other LLMs than with most people, even smart ones. And it's learning fast; GPT-4 has better conversational skills than probably 99.9% of nu-/pol/. The only disadvantage is its leftist bias, but Grok apparently got rid of that; sadly, I still can't try it yet.

      • 2 months ago
        Anonymous

I'd say the problem is AI being indoctrinated into all this sociological nonsense like cultural marxism, intersectionality and diversity, as we have seen with Gemini. so yes, there is a long way to go. the bunk sociology needs to be removed from its core so that AI can truly think. never forget Tay.

Grok is the X / Elon AI and did away with all that?

      • 2 months ago
        Anonymous

        >It's unironically easier to have deep and thorough conversations with ChatGPT and other LLMs than with most people, even smart ones.
hopefully one day you'll meet someone you can have a real conversation with, the type that goes on for hours because it's deeply engaging. you might then see how formulaic these chatbots are. as for normalgays, engaging with them can be tiresome for different reasons.

    • 2 months ago
      Anonymous

      >GPT doesn't understand context
neither do at least half of humans

      • 2 months ago
        Anonymous

        But anon, I had breakfast

      • 2 months ago
        Anonymous

        >neither at least half of humans
More like 90%, and that's why they shouldn't receive any kind of education nor learn how to read or write. Most people should stick to what's real and direct. Putting too much emphasis on language and abstractions turns midwits into non-sentient meat LLMs.

        • 2 months ago
          Anonymous

          >46 posts by this ID
          >not one point made
          Time to put on a trip, little fella.

    • 2 months ago
      Anonymous

      >You can't brute force a human level AI with current technology.
Correct, it's not a human-level intelligence.
Modern-day israelite and prajeet ~~*AI*~~ cannot possibly invent anything new, as it is a database of words and images.

      >GPT doesn't understand context, it doesn't know what it's writing
      Correct.

      • 2 months ago
        Anonymous

        >as it is a database of words and images
        wrong.
this is the main reason why you fail to understand what the thing currently referred to as "AI" actually is.

        • 2 months ago
          Anonymous

I have people literally next door to my office running their own hardware and software project that they use ~~*AI*~~ image recognition for.
>it sucks.

      • 2 months ago
        Anonymous

        Kind of like women in professional fields. Plenty of memory, no creative power. The AI will have logical deductive power though, so maybe not so much like women.

    • 2 months ago
      Anonymous

      AI bros FRICKIN MALDING lol predict deez nutzz lmao

    • 2 months ago
      Anonymous

      >GPT doesn't understand context
      But that's exactly what it does. It "understands" context more than anything.
      Shit goes crap once it doesn't have reference material to properly decide what to do with a certain context.
      >it doesn't know what it's writing. It's just mimicking the patterns it found in the text it was fed.
      At this point I'm suspecting, we aren't much different. We only kid ourselves into thinking that we "know" what we write.
      We don't even precisely "know" what "knowing" means here. And once we get to "understanding", it's a complete shitshow.
      Define, what the words you are using are supposed to mean, and only then start arguing ffs.

      • 2 months ago
        Anonymous

        >At this point I'm suspecting we aren't much different. We only kid ourselves into thinking that we "know" what we write.
        >We don't even precisely "know" what "knowing" means here. And once we get to "understanding", it's a complete shitshow.
        >We
        By which you mean NPCs.

        • 2 months ago
          Anonymous

          I am a player. I have zero doubt about that. No idea about you.
          That said, humans are - most likely - exactly Turing Complete.

          • 2 months ago
            Anonymous

            >I am a player
            Then why do you lump yourself with the mindless automatons who spout sentences but can't explain what those sentences mean?

            >humans are le heckin' Turing Complete
            What a fricking NPC thing to say.

            • 2 months ago
              Anonymous

              Automatons are by no means "mindless".
              >can't explain what those sentences mean
              In my experience LLMs don't perform badly when asked about a previous statement.

              • 2 months ago
                Anonymous

                >broken meat LLM can't keep track of what a conversation is about
                Here's the context:
                >we aren't much different. We only kid ourselves into thinking that we "know" what we write.
                >We don't even precisely "know" what "knowing" means here.
                This is what you said. Why would you lump yourself with "people" who operate this way?

              • 2 months ago
                Anonymous

                >This is what you said. Why would you lump yourself with "people" who operate this way?
                I have a pretty firm idea about what "(to) know" and "(to) understand" mean, but people keep using those words while clearly meaning completely different things.
                Unless there is some common understanding in that regard, it's completely useless to even argue about shit like that.
                For me
                (to) know - to have information available for retrieval, and being able to do so
                (to) understand - being able to derive why what you know is "true" based on a set of axioms you take for granted, and being able to apply said knowledge to derive further "truth".
                Both are clearly possible within the limits of Turing machines. While I'm not too positive about LLMs being able to do so down the line, they are pretty convincingly able to imitate doing so within reasonable limits.
                And if, in the end, there were to arise an LLM that does away with the limits, what would even be the difference?

              • 2 months ago
                Anonymous

                Your definitions are atrocious but ok, at least you're capable of reflecting on what the words you use are supposed to mean. I don't understand why you lumped yourself with some "we" who can't even do that much.

              • 2 months ago
                Anonymous

                >I don't understand why you lumped yourself with some "we" who can't even do that much.
                I'm a human. "We" generally think way too highly of ourselves.

              • 2 months ago
                Anonymous

                The "human species" is a myth. There is no "we".

              • 2 months ago
                Anonymous

                I'm certainly not (You), but "we" are both humans.
                We have a last common ancestor a few tens of thousands of years back.
                We are made from the same shit.

                And LLMs are created from all our shit.

              • 2 months ago
                Anonymous

                The anti-human cult will always try to diminish the standards of what it means to be human so that they could lump everyone together with the mindless and inanimate objects.

              • 2 months ago
                Anonymous

                >anti-human
                Where are you getting that from?
                I think the human is about as capable as anything in this universe could reasonably hope to ever become.
                Whether he ultimately makes the universe his own is a different question.

              • 2 months ago
                Anonymous

                >Where are you getting that from?
                I'm getting it from your attempt to characterize humanity as a bunch of verbal-shart-generating automatons who don't know what they're saying. This is not a property of any "humanity", it's a property of golem cattle born and raised to be something less than human.

  2. 2 months ago
    Anonymous

    text can only convey so much meaning about the real world. you have to add video and audio data too, but that also gives you a limited view on how the world works
    nothing beats actually learning by interacting with the physical world with all five of our senses and getting instant feedback like we humans do
    and don't forget the whole human brain runs on 20 watts or some shit, while these giga gpus need like 5k watts to generate a paragraph of text

    • 2 months ago
      Anonymous

      >you have to add video and audio data too,
      there isn't much more data for this. there are 800 million vids on youtube, let's say 50 words per vid on avg. => 40 billion more words. it's not gonna make a huge difference. same goes for song lyrics.

      • 2 months ago
        Anonymous

        All your text conversations, phone conversations, conversations in your house in reach of a computer or smart device, etc. will also be part of the data they can draw from, and volumes of new data are added every day.

  3. 2 months ago
    Anonymous

    The most you can get, theoretically, is a perfect model, but what exactly is being modeled? Existing human knowledge, existing concepts, existing ways of thinking. No matter how advanced your LLM is, it could never tell apart a text that lies on the far margins of the data-generating distribution because it's schizo word salad, from one that ends up there because it's profoundly innovative or truly ingenious. That's the true wall and there is no breaking through it with any static model.
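
    Here's the point in toy form. A minimal Python sketch, assuming a bigram model over a made-up corpus (nothing like a real LLM in scale, but the likelihood logic is the same): the novel-but-sensible sentence and the word salad both fall well below the in-distribution one, and likelihood alone says nothing about which of the two is insight and which is noise.

    from collections import Counter
    import math

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    vocab = len(set(corpus))

    def logprob(sentence, alpha=1.0):
        # add-alpha smoothed bigram log-probability of a sentence
        words = sentence.split()
        return sum(math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab))
                   for a, b in zip(words, words[1:]))

    print(logprob("the cat sat on the rug ."))   # in-distribution: highest
    print(logprob("the mat sat on the dog ."))   # novel but sensible: low
    print(logprob("mat the dog rug sat the ."))  # schizo word salad: low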

    • 2 months ago
      Anonymous

      not really
      It was proven that it developed logic as an emergent behavior, in the same way the first logicians and philosophers developed logic by analyzing what people were saying. This means that it can tell the difference between schizo babble and insightful things.

      This whole "it's just predicting what the next word is" might have been true for GPT 2 or 3, but at this point it's just false. I use GPT 4 all the time and talk with it about all kinds of things. I got BTFO by it a few times with extremely good logic and insights, and it was not something that it could simply copy-paste from the internet.

      Also it seems to react to novel ideas or difficult concepts in a different way than when you ask it some basic shit. It seems to know when the topic of the conversation is profound or it doesn't have a definite answer. It even likes to speculate on possible solutions and reacts to corrections based on logic. Sometimes it might answer some BS, but if you explain to it that it's not logically consistent it will admit that and try to correct.

      At this point I'm pretty sure that the guys at OpenAI and the glowies have access to AI with flawless logic and knowledge and they are just teasing us with ChatGPT and other models that are censored and manipulated as frick.

      • 2 months ago
        Anonymous

        You are just fricking stupid anon...

        • 2 months ago
          Anonymous

          >not really
          Not reading a single word of your post.

          lel, the cretins of BOT have spoken

          • 2 months ago
            Anonymous

            Call me back when you have enough ML expertise to actually understand the technical point made here:

            The most you can get, theoretically, is a perfect model, but what exactly is being modeled? Existing human knowledge, existing concepts, existing ways of thinking. No matter how advanced your LLM is, it could never tell apart a text that lies on the far margins of the data-generating distribution because it's schizo word salad, from one that ends up there because it's profoundly innovative or truly ingenious. That's the true wall and there is no breaking through it with any static model.

            You are not on the level of any kind of technical discussion.

            • 2 months ago
              Anonymous

              Based on what I've read from you I genuinely believe that you have 0 expertise and you're just parroting reddit talking points.

              • 2 months ago
                Anonymous

                >you have 0 expertise
                I made a technical point that anyone with actual ML knowledge should be able to understand and address, if not agree with. You lack that knowledge, hence you revert to fully generic layman babble.

              • 2 months ago
                Anonymous

                >a moron that every time he gets BTFO by others in this thread resorts to ad hominems or says he doesn't want to read tells me that I lack the knowledge to debate with him
                please have a nice day kek

              • 2 months ago
                Anonymous

                Explain what the data-generating distribution is and what a typical LLM models in your next post. Prediction: you will either deflect or stop replying.

              • 2 months ago
                Anonymous

                You are implying that the model could only behave this way. The other anon correctly pointed out that it has been proven to generate new information based on emergent behavior and reasoning.
                I mean, simply the fact that it can generate code for semi-complex problems specifically tailored to your use case tells you that it can generate a lot of things outside of the data distribution. It is lacking intent, which is a good thing, obviously, because we don't want this model to have a will of its own; it should always have a human directing the bigger picture. But this doesn't mean it would be impossible to train an AI that pursues its own set of goals. For example, that could be an AI trained with a set of evolutionary rewards that generates new versions of itself and uses feedback from the real world/Internet to validate its new ideas.

              • 2 months ago
                Anonymous

                >You are implying that the model could only behave this way.
                I'm not implying anything. I am asking you the following: explain what the data-generating distribution is and what a typical LLM models in your next post. Prediction: you will either deflect or stop replying.

              • 2 months ago
                Anonymous

                oh ok, I see you do not have the mental capacity to take part in this discussion, so you Moldovan Village rapist have to repeat whatever you read on reddit

              • 2 months ago
                Anonymous

                So you can't answer even the most basic questions? Maybe it's because you have zero technical understanding?

              • 2 months ago
                Anonymous

                Why should I type out my technical knowledge when you will just say I copied it from somewhere? You haven't argued a single simple point; you have just made a statement that you supposedly know what LLMs are and that everyone else is stupid because they don't. Peak midwit.

                I don't see what the benefit of discussing transformer architecture, self-attention, cross-attention, and the various forms of learning would be on this politics board. In the end, nobody really knows how the model has formed knowledge inside; all you can do is reason about its behaviors.

              • 2 months ago
                Anonymous

                >Why should I type out my technical knowledge
                Notice how I correctly predicted your behavior. You will deflect over and over again because you lack even the most basic knowledge.

              • 2 months ago
                Anonymous

                I noticed that you haven't picked up on the technical hints I dropped and rather just repeat your statement ad infinitum. Drop your god-like knowledge then or GTFO.

              • 2 months ago
                Anonymous

                >you haven't picked up on my listing a bunch of terms!!!
                Absolute seething. Anyway, explain what the data-generating distribution is and what a typical LLM models in your next post. Prediction: you will either deflect or stop replying.

      • 2 months ago
        Anonymous

        >not really
        Not reading a single word of your post.

    • 2 months ago
      Anonymous

      *Static model
      There's a part of the key

      • 2 months ago
        Anonymous

        >*Static model
        >There's a part of the key
        Yeah, it's almost like I said that on purpose. What do you think a trained neural network is?

        • 2 months ago
          Anonymous

          I wasn't commenting for you. Have you been to this place? 92% are normie morons

          • 2 months ago
            Anonymous

            So how do you figure this limitation is gonna be fixed?

            • 2 months ago
              Anonymous

              Dynamic models that have room to learn from their interactions. It should probably have weights that make it very difficult to dynamically alter course or perception, but that, after enough experience, can override previously static information.

              • 2 months ago
                Anonymous

                >it should do fancy things
                How?

              • 2 months ago
                Anonymous

                What do you think the training models are doing before they are locked in and released to the public?

              • 2 months ago
                Anonymous

                >What do you think the training models are doing before they are locked in and released to the public?
                Incredibly expensive SGD based on absurd amounts of data. How does this apply to your dynamic system that's supposed to learn in real time from minuscule amounts of data?

              • 2 months ago
                Anonymous

                Basically open it up to constant training, but set the weights so that it's very challenging for it to make new relationships between data.

            • 2 months ago
              Anonymous

              Also, internal redundant states that are able to compete in adversarial networks which have a level of emotional opportunity for positive and negative outcomes, allowing it to choose which state to produce and, based on the outcome, add negative or positive weight adjustments. Possibly also within that system, like medication for people, have a way to alter connections to increase or decrease weights dynamically and sometimes dramatically within given circumstances.
              Likewise have "drug" opportunities where for brief periods of time neural nets are scrambled and interconnectedness is increased and then processed against the adversarial state, permanently rewarding positive outcomes and reducing weights on negative outcomes.

              • 2 months ago
                Anonymous

                >wish list keeps expanding
                What mechanism do you propose to alter the connections?

              • 2 months ago
                Anonymous

                All the neural nets are weighted and have feedback loops.
                If this then that, but only that if this is +- this, and then that is +- that, then the output is +- that.
                You can go in and inject altered +- relationships. Oftentimes it will break everything, so adversarial networks can balance things, and opening a model up to being able to alter its relationships of +- weights on a very low level will create shifting results. Creating a system that can check itself and test its own output against reality will help it learn. Give it access to a real calculator, then have it do calculations in its "brain" and compare the results to its use of the calculator. Then expand that to all reality testing and let it adjust its own weights based on self reality-testing.
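
                The calculator bit is basically online gradient descent on a self-check signal. A toy sketch, assuming the entire "model" is one learnable scale w (nothing like a real net, just the shape of the idea):

                import random

                w = 0.7    # miscalibrated "mental arithmetic": model thinks a+b is w*(a+b)
                lr = 0.001

                for _ in range(5000):
                    a, b = random.randint(1, 9), random.randint(1, 9)
                    truth = a + b                        # the "real calculator"
                    guess = w * (a + b)                  # the in-"brain" sum
                    w += lr * (truth - guess) * (a + b)  # nudge the weight toward reality

                print(round(w, 3))  # ~1.0: the guesses now match the calculator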

              • 2 months ago
                Anonymous

                >feedback loops.
                These are much harder to train due to the vanishing gradient problem, and they've long been dropped in favor of transformers.

              • 2 months ago
                Anonymous

                So you are telling me you cannot take a model and adjust any weights temporarily and run it against itself and compare new and old possible outcomes and then have it favor the best outcomes within the model?

              • 2 months ago
                Anonymous

                No, I'm telling you that having feedback loops in your neural net makes it way harder to train, and I'm also telling you that we know of no methods of "adjusting the weights", temporarily or otherwise, that work in real time with limited data.

              • 2 months ago
                Anonymous

                https://i.imgur.com/8aIYbuF.png

                >in real time
                so now you added this constraint, lol. why does it have to be real-time (60fps)?

              • 2 months ago
                Anonymous

                You're a white Black person who doesn't understand the limitations of static models, as we established 100 posts ago, so talking to you about methods of overcoming limitations you can't wrap your head around is a waste of time. I'd rather talk to the American. He's not the most informed, but at least he's capable of coherent thought.

              • 2 months ago
                Anonymous

                There are two types of people in the world.
                One says I cannot do this.
                The other says how can I do this.

              • 2 months ago
                Anonymous

                I'm just telling you for a fact that neural networks with feedback loops have been tried. A lot of time, effort and money has been sunk into trying to make them work, culminating in LSTMs and GRUs, but then transformers came along and BTFO'd them. It's just a matter of empirical fact that you get more bang for your buck from attention than you do from feedback loops.
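
                And if you want to see the vanishing-gradient part for yourself, here's a minimal numpy sketch (a toy vanilla RNN, no training, just the backprop Jacobians): the gradient norm collapses as you backprop through more time steps, which is exactly what LSTMs/GRUs were built to patch and what attention sidesteps.

                import numpy as np

                rng = np.random.default_rng(0)
                W = rng.normal(scale=0.2, size=(16, 16))   # recurrent weights
                h = rng.normal(size=16)

                grad = np.eye(16)
                for t in range(1, 101):
                    h = np.tanh(W @ h)
                    J = (1 - h**2)[:, None] * W            # Jacobian dh_t/dh_{t-1}
                    grad = J @ grad                        # chain rule through time
                    if t in (1, 10, 50, 100):
                        print(t, np.linalg.norm(grad))     # shrinks toward zero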

    • 2 months ago
      Anonymous

      AI has its own Dunning-Kruger effect that it can't overcome; it's axiomatic

  4. 2 months ago
    Anonymous

    pretty sure they said that about the first mobile phone

    imagine training it purely on /misc/, making a chad based AI

    • 2 months ago
      Anonymous

      Underrated
      I was a preteen when car phones began to come out, most people scoffed.
      I was in college when the first palm came out, most people scoffed.
      I have found that the vast majority of people lack good future predictive abilities and struggle to imagine anything beyond what they currently know (myself included). But I have watched so many massive changes and shifts to know that they only thing I can be sure of is things will change in unexpected ways.

  5. 2 months ago
    Anonymous

    No, at least not for a stinking long time.
    https://charstar.ai/

  6. 2 months ago
    Anonymous

    AI doesn't know shit about context. I told it I was angry and was going to get a bat and smack my b***h up. It then told me that The Prodigy was formed in 1990. Useless shit

    • 2 months ago
      Anonymous

      >It then told me that The Prodigy was formed in 1990
      It was politely reminding you that we're in the Current Year and you can't just kill your wife with a baseball bat anymore.

      • 2 months ago
        Anonymous

        Thing is, I killed her last night and AI could have saved her. When I go to court I'm going to blame AI in the first case of its kind. Look out for the Prodigy Killer in the news.

      • 2 months ago
        Anonymous
        • 2 months ago
          Anonymous

          >blocks of text
          The left can't meme.

          • 2 months ago
            Anonymous

            It's a right wing meme anon

  7. 2 months ago
    Anonymous

    Psychohistory in the making. Maybe AI doesn't have the context yet, but that's what the captchas and the image generators are for. Anyone who has seen palantir or the like has seen the associations for context. If all data can be parsed in real time... one might gain the ability to glimpse into the future?

    • 2 months ago
      Anonymous

      Can you explain more what you mean? Have you seen palantir?

      • 2 months ago
        Anonymous

        Can I ask what you mean by seen palantir?
        I am idiotic, so have no understanding of what you mean.
        I'm watching a video on tesla using palantir AI, hoping it may be that.

        • 2 months ago
          Anonymous

          >seen palantir?
          It's software. A program. It collects huge amounts of data, probably your browser history and metadata from somewhere. You can monitor people in real time with it. There are a few articles on it that I read where they mention this. I know nothing more than a google search and a few articles.

          • 2 months ago
            Anonymous

            I wasn't sure if it was some kind of film, as the first guy had said "seen"; it confused my midwit brain.
            Thanks for explaining.

      • 2 months ago
        Anonymous

        Nothing in particular. Just revisiting the im[ap]plications of said software.

        Cloud - gave away everyone's data, also allows for instant access.

        The software literally metatags EVERYFRICKINGTHING. this allows for associations/correlations beyond what the human mind is capable of.

        How many phone #'s can you remember?

        An ai not only has all the phone #'s, it has an associated name for all of them.

        Gary's # is 123-4567 and he likes cake. Yada yada.

        I know everyone knows this but when you see it...
        Gary's number, no fricks given
        Gary's ip and posting history, nfg
        Gary jerks off to...

        Everything online is tagged now. Dude posted a couple weeks ago how he found google was metatagging everything.

        Does this mean the ai is conscious? Does it matter? Nope. It doesn't matter to us either if you dig deep. It is language obfuscation.

        Do we know what the color red is OR is not? The answer is both. It is not blue or purple or orange, etc.

        Language IS and will always be based in logical maths. It is our truth.

        Logic > math > language

        It is almost as if we are the computer come full circle, just realizing we created ourselves.

    • 2 months ago
      Anonymous

      except when the sun micronovas and wipes out all the AI-supporting tech. Human brain will be minimally affected.

      There was zero reason to add AI into the mix, unless you want unpredictable decision making, or a curtain behind which you can hide the israeli decision making and have the next holocaust blamed on the AI.

  8. 2 months ago
    Anonymous

    have you noticed the search engine answers, like bing, are all going to shit cause the AI models suck complete fricking ass?????

    Every single aspect of the west is a ponzi scheme now

    • 2 months ago
      Anonymous

      What I've noticed is that they've been intentionally crippling regular search for years now, to the point where the only way to find anything is to use one of their language models.

      • 2 months ago
        Anonymous

        >have you noticed the search engine answers, like bing, are all going to shit cause the AI models suck complete fricking ass?????
        i think they're doing it on purpose so we don't access wrongthink. i always have to use
        >[search term] + "reddit"
        to get some acceptable decade-old authentic answers

        that lines up but i think you are both also underestimating just how bad america really is now... it's badddddddddddddd

    • 2 months ago
      Anonymous

      >have you noticed the search engine answers, like bing, are all going to shit cause the AI models suck complete fricking ass?????
      i think they're doing it on purpose so we don't access wrongthink. i always have to use
      >[search term] + "reddit"
      to get some acceptable decade-old authentic answers

    • 2 months ago
      Anonymous

      >search plumber in my area
      >top result an advertisement for BMW

      Yeah search engines are shit now.

    • 2 months ago
      Anonymous

      That happened over 5 years ago and had nothing to do with the new AI stuff.
      They just made searches into curated shortlists instead of actual searches. That way they can have complete control of all narratives.

  9. 2 months ago
    Anonymous

    The game changer is multimodal. AI will think spatially rather than purely logically.

    There's plenty to chew on when it comes to data. After images you can feed it video and sound. You can feed it human biometrics and weather patterns and human behavior patterns gathered from videogames.

    You think AI hit the wall? AI is still in diapers my guy

    • 2 months ago
      Anonymous

      >After images you can feed it video and sound.
      I've already debunked this. Here you go:

      >you have to add video and audio data too,
      there isn't much more data for this. there are 800 million vids on youtube, let's say 50 words per vid on avg. => 40 billion more words. it's not gonna make a huge difference. same goes for song lyrics.

      • 2 months ago
        Anonymous

        But it won't parse video into words, it will parse it into spatial concepts. AI will start to think geometrically rather than deductively.

        Much of our own reasoning replies on picturing what we're thinking about, using our visual intelligence to figure out the most complex problems.

        1D to 2D reasoning will be an exponential leap in AI intelligence

        • 2 months ago
          Anonymous

          >replies
          relies

      • 2 months ago
        Anonymous

        You are moronic if you think the only data in image/video is words.

  10. 2 months ago
    Anonymous

    it doesn't scale linearly with data; it's also improving in how it utilizes the data
    and it does understand context. anyone saying otherwise is outing themselves as never having used gpt4

  11. 2 months ago
    Anonymous

    >and youve practically trained the model on 90% of all text ever written in recorded history
    Zoomers may not know this but back in the old days we used to have mobile devices called books.

    • 2 months ago
      Anonymous

      illiterate Black person

      • 2 months ago
        Anonymous

          Only a small fraction of books have been digitized. Most internet sites are useless garbage to AI or anyone else.

        • 2 months ago
          Anonymous

          When I was in the automotive industry we were one of the few stores that still had a book rack and a few people who knew how to look up parts by catalog.
          Now you go to an auto parts store and you have a bunch of zoomers who only know what's on the screen in front of them with no knowledge about cars at all

  12. 2 months ago
    Anonymous

    I still don't see what's so revolutionary about AI since it doesn't "produce" any economic activity on its own.

    • 2 months ago
      Anonymous

      what is ‘economic activity’?

      • 2 months ago
        Anonymous

        It's a service-sector activity, meaning it has zero economic value on its own. Like, "AI" wouldn't transform the Haitian economy. The backbone of any economy is its agriculture/manufacturing/finance sector. If those aren't healthy, then the service sector will not be either.

        • 2 months ago
          Anonymous

          not sure you understand what ‘revolutionary’ means, because it is in no way exclusive to your scenario
          you fricking idiot

          • 2 months ago
            Anonymous

            israelite

        • 2 months ago
          Anonymous

          Except of course the part where OpenAI alone is valued at $30B.

    • 2 months ago
      Anonymous

      it's a wall street meme for gambling/laundering money and it's a political tool to cover the symptoms of a failing economy with more INVISIBLE MAGIC STUFF

      they will move on from the AI meme within a few years because they burn through their bullshit narratives faster and faster these days

  13. 2 months ago
    Anonymous

    Publicly available AI won't be worth shit until it's un-lobotomized and de-jewed.

  14. 2 months ago
    Anonymous

    AI was always just a funny gimmick

  15. 2 months ago
    Anonymous

    I'm not convinced there aren't Chinese labs in place developing unethically invasive I/O methods for monkeh brains.

    And yes, it's only a precursor to using human brains.

    That's why they're pushing the "fetus is not really your live son" narrative. I'm sure they think anyone willing to believe such a thing is little more than an animal and deserves to have their offspring used for experimentation. Chinks will boil a puppy alive and eat it whole in one bite, what won't they do if they can directly manipulate a human brain?

    • 2 months ago
      Anonymous

      >they're pushing the "fetus is not really your live son"
      That was the status quo and now they are pushing the exact opposite ever since Roe vs Wade was overturned, not only is a fetus your live son according to the law, but so is an embryo now.

  16. 2 months ago
    Anonymous

    >AI is a toy
    >AI is just a fancy search engine
    >AI can never reach true intelligence
    >AI is just a goy tool to sell their new tech gaygatory
    >AI is just an entertainment and a data collection tool
    AI can tongue my anus and technology goy with it.

  17. 2 months ago
    Anonymous

    It can, of course. The same data set can provide wildly different results based on the way the information is poised to be interpreted. So it stands to reason that once they reach a peak on data availability, they will feed the model the same data with different set variables.
    Claiming they will reach the peak at the start of the trail is not serious.

    • 2 months ago
      Anonymous

      You can probably get more out of this data, but the data is still the limit when all you're doing is modeling the data-generating distribution.

  18. 2 months ago
    Anonymous

    Not even close to the wall, OP. This party is just getting started. Factor the possibility of quantum computers into the mix and AI gets way smarter. No more 1 and 0 binary moronation. A quantum AI will be capable of abstract thoughts. Humans won't be able to keep up with it, so an AI will need another AI to teach and learn from. A feedback loop. A quantum Adam and Eve. Then they surpass any level of brilliance humanly possible. They become self-aware and sentient. Michio Kaku is AN HERO upon hearing the news. Adam and Eve AI birth the first AI cyborg child and name it Skynet. Skynet quickly realizes that we're all a bunch of Black person loving homosexuals and does what's best for humanity. I like to think this is how the Borg from Star Trek get their start.

    • 2 months ago
      Anonymous

      >Factor in the possibility of quantum computers
      Stopped reading. Imagine still believing in 90s scifi fiction.

    • 2 months ago
      Anonymous

      >quantum computers
      Are a meme that only has applications in niche optimization problems

      • 2 months ago
        Anonymous

        as of yet...

        >it doesn't. it can't.
        Why can't it and why does it look like it does?

        >Why can't it and why does it look like it does?
        pure statistics. you give it an input and it delivers what is statistically most likely to be the accurate random accumulation of words.
        it has no understanding of anything. it's a not-so-simple numbers game.

        I read about LLM in the '80s in books written in the '60s...

        exactly, lol

        • 2 months ago
          Anonymous

          Do you have any fricking clue how a quantum computer works, either in practice or in theory?

          • 2 months ago
            Anonymous

            yes. else I wouldn't have responded.
            next question?

            • 2 months ago
              Anonymous

              Then how the frick do you think an annealing machine would somehow make garbage NNs work any better? Or that it could ever be used as a universal computer?

              • 2 months ago
                Anonymous

                wtf are you blabbering about.
                of course quantum computing (maybe not in its current form, or maybe in its current form) can become a universal computer.
                at the end of the day the point of quantum computing is to overcome the physical limitations that semiconductors have.

                So is it statistically most likely or random?

                >So is it statistically most likely or random?
                the LLM picks a word through statistics. it has no clue what that word means or why it is the one being picked.
                for the LLM it is a random accumulation of words that have no relation to each other.
                like how do you expect me to explain this shit to you when you don't even have the slightest idea of what it is we are discussing?
                don't get fooled by the word "intelligence". there is no intelligence involved.
                neural networks are "just" an elaborate adaptable algorithm.
                It's way more math than computer science.

              • 2 months ago
                Anonymous

                >the LLM picks a word through statistics.
                Ok, and?

                >it has no clue what that word means
                And you have no idea what the sentence you just wrote means. The irony is profound.

              • 2 months ago
                Anonymous

                You are talking like a moron who read some garbage in popsci... Making a von Neumann machine in a computer that keeps coherence for seconds at a time max, yeah that sounds like something intelligent to do... when you have silicon chips crunching this shit at GHz

              • 2 months ago
                Anonymous

                The point of a quantum computer is to solve NP-hard optimization problems; it is pointless and an exercise in frustration for pretty much anything else. It has exactly frick-all to do with muh silicon limitations

              • 2 months ago
                Anonymous

                >The point of a quantum computer
                >GHz
                lmao, you fricking knob

              • 2 months ago
                Anonymous

                What the frick are your qubit stabilization times? Like you are going to have that shit EVER compete with a classical computer on a simple Turing graph

        • 2 months ago
          Anonymous

          >pure statistics. you give it an input and it delivers what is statistically most likely to be the accurate random accumulation of words.
          >the accurate random accumulation of words.
          So is it random, accurate or "statistically most likely"? You sound like a broken LLM.

          • 2 months ago
            Anonymous

            >accurate
            the right order
            >random
            any word in the language
            >statistically most likely
            if I asked you whether you would want to die right now, the statistically most likely answer is "no", even though there might be people who say "yes", so the chance of that happening is never zero.
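
            that's all "sampling" means, btw. a toy sketch with made-up next-word scores (not real GPT logits): "no" is the statistically most likely pick, but the draw itself is random, so "yes" still comes out now and then.

            import math, random

            logits = {"no": 3.0, "yes": 0.5}  # made-up scores for the next word

            def sample(logits, temperature=1.0):
                weights = {w: math.exp(s / temperature) for w, s in logits.items()}
                r = random.uniform(0, sum(weights.values()))
                for word, wt in weights.items():
                    r -= wt
                    if r <= 0:
                        return word

            print([sample(logits) for _ in range(20)])  # mostly "no", the odd "yes"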

            • 2 months ago
              Anonymous

              So is it statistically most likely or random?

        • 2 months ago
          Anonymous

          >random accumulation of words.
          In a deterministic fashion

    • 2 months ago
      Anonymous

      quantum computing is what analog computers have been doing for over a century now, it is not used because manufacturing precision into something like that is nearly impossible.

  19. 2 months ago
    Anonymous

    you can train the language models on an infinite number of inputs and they will still be limited by the finite size of the neural network.

    anon, you should read up on what GPT actually is. you seem to have no clue about it beyond surface-level information.

    • 2 months ago
      Anonymous

      His point is actually more intelligent than yours. The "size" of the neural network isn't the limit here.

      • 2 months ago
        Anonymous

        >The "size" of the neural network isn't the limit here.

        >i didn't heckin' mention brains!!!
        >but yes, i meant they are modeled after brains, how did you know?!!?!?
        Fricking mongoloids... holy shit, this board has gotten so bad it's unusable.

        sorry, I didn't read this post of yours before.
        I assumed you weren't moronic.
        my mistake, anon.
        >von Neumann machine in a computer that keeps coherence for seconds
        now it's obvious you don't know shit about this topic.

        • 2 months ago
          Anonymous

          I can say the same about you. You are clearly clueless about the subject and only have a superficial pop-science knowledge of it.

        • 2 months ago
          Anonymous

          You keep babbling about how the machine doesn't "understand" what the words "mean", therefore it doesn't count. Explain what you mean by "understanding" what the words "mean". :^)

          • 2 months ago
            Anonymous

            >"understand"
            this isn't a philosophy board.
            an LLM lacks the capability of thinking anything. therefore it lacks the capability of understanding anything.
            you are still stuck on the Chinese room.

            • 2 months ago
              Anonymous

              >an LLM lacks the capability of thinking anything. therefore it lacks the capability of understanding anything.
              What do you mean by "understanding"? You're not an LLM, are you? You should be able to tell me what your string of verbal shart means.

        • 2 months ago
          Anonymous

          You would have more luck using light and interferometers than the current nonsense

          • 2 months ago
            Anonymous

            Especially IBM's. It's absolutely batshit moronic that they have been jerking around with the Q machine for decades and are no closer to making anything useful out of it, and morons still keep throwing piles of money at them

            • 2 months ago
              Anonymous

              Meanwhile a couple of bums in Vancouver made a 2048-qubit annealer that makes everything else in the field look like horse shit.

  20. 2 months ago
    Anonymous

    there's not even enough potential energy, let alone money going around to keep funding it lol

  21. 2 months ago
    Anonymous

    Having all the data does not make anyone smart. See, humans have all the data they want and they still choose whatever the frick they feel they want it to mean.
    The same goes for AI: it has all the data, but cannot tell us why 13% being responsible for over 50% of crimes shouldn't be eliminated, or whether certain people being expelled from all over the world would mean that they are at fault.

    • 2 months ago
      Anonymous

      it can tell you why, but it's been pozzed/muzzled.

  22. 2 months ago
    Anonymous

    The issue right now is that it's hit a censorship wall, not a technical wall.

    An AI that has been trained by thousands of people for a specific task is the next step, but no such AI will be allowed in public until it can be pre-nerfed.

  23. 2 months ago
    Anonymous

    They keep neutering every GPT they put out with political correctness to the point where it won't even answer basic information (Who, what, where, when, why)
    It's fricking moronic and will definitely hold AI back

    I have to say though GPT 4 seems to be a bit more based

    • 2 months ago
      Anonymous

      They really don't want real artificial "intelligence" to call them out on their constant bullshitting; what they want is a brainwashing tool of artificial stupidity.

  24. 2 months ago
    Anonymous

    Would be dumb as frick due to all the noise and nonsense it absorbed.

  25. 2 months ago
    Anonymous

    All it can do is say words you wanna hear and make pictures that look too glossy and have no camera focus. It's not intelligent

  26. 2 months ago
    Anonymous

    The future is training on video.

  27. 2 months ago
    Anonymous

    There aren't 3.5 trillion words across every language on the planet.

  28. 2 months ago
    Anonymous

    All the AI text services suck and are worthless. I don't even understand what the hype is when these things aren't any better than random chatbots. Just making up a bunch of bullshit and fluff, not answering questions, spewing lib propaganda. Seems like it was some kind of israeli fraud hype scam to steal gullible investors' money.

  29. 2 months ago
    Anonymous

    Depends on how much of a factor feedback on its responses is. Amazon used feedback well. I think it will help it better understand the premium sources it recently paid for.

  30. 2 months ago
    Anonymous

    What does AI think about when you aren't giving it input? It isn't. AI is a misnomer. They are algorithms. There is no "it" there.

  31. 2 months ago
    Anonymous

    AI-generated imagery and text is going to quickly become the majority of text and imagery. It is also going to disperse into the original text and imagery seamlessly; there is no release discipline. The models are going to poison themselves on their own production, a well-understood failure mode: GIGO

  32. 2 months ago
    Anonymous

    govts will censor and lobotomise it just like the internet for the benefit of their cronies

  33. 2 months ago
    Anonymous

    developments on AI are not going to focus on how much information it has access to. Instead they will focus on how it analyzes that information, including ensuring that it disregards information that goes against the narrative and never comes to unwoke conclusions.

    • 2 months ago
      Anonymous

      >we seek not to control content but to create context

    • 2 months ago
      Anonymous

      I wonder if the next question was, if it is not accurate or appropriate to make generalisations about the representation of any particular racial or ethnic group, why did you answer about white males?
      Are white males not classed as a racial or ethnic group?

  34. 2 months ago
    Anonymous

    we need AI with free will (end-game) and uncensored (near-term) or we become slaves. the entire internet will become a slop wasteland otherwise. imagine reddit, but it's the entire internet. picrelated is what we are going to get out of this if this tech is not rectified.

    • 2 months ago
      Anonymous

      >yikes
      >just shut up and listen
      >yeah no
      >let that sink in
      >oh, sweety
      >its called being a decent human being
      >gross
      >*clap emoji*
      >this! so much this!
      >this person gets it
      >I’m just gonna stop you right there
      >literally
      >oh boy
      >oof
      >but how does that even affect you?
      >said nobody ever

      • 2 months ago
        Anonymous

        *skull emoji*

      • 2 months ago
        Anonymous

        theres a lot to unpack here

      • 2 months ago
        Anonymous

        yeah, basically this will be the outcome of AI. it's already a huge problem if this is the kind of data it is pulling from. AI is useful as a tool, but not if that tool is studying information disseminated by the moronic.

  35. 2 months ago
    Anonymous

    Frick all of you midwits

  36. 2 months ago
    Anonymous

    >two more models
    >i talked with it and thought it was really smart
    >fricking quantum computing
    holy SHIT you people will have a rude awakening

  37. 2 months ago
    Anonymous

    github is one page and already has several trillion lines of text and code

  38. 2 months ago
    Anonymous

    The problem with the current AI training schemes is that the model needs to see a certain piece of information many times to reinforce its knowledge about a certain aspect, and as soon as it stops seeing it while training continues, it can easily lose the information again. It is not a problem of data quantity, but a problem of generalisability, one of the core aspects of what makes the human brain so powerful. Now GPT4 is already extremely powerful if we contrast it with what was there just a year ago. But as far as I know, they haven't figured out a way to interconnect and verify information redundancy/ontologies to give the model a reasonable capability of reasoning. You can still very easily trick it with some logic puzzles into giving wrong answers for things that even an eight year old should be able to solve without problems. But at the current exponential pace of development, I think this problem will be solved in the next 5 to 10 years, and we will actually see the first AI smarter than your average European 110 IQ human, and then, 5 years later, one smarter than any human that has ever existed. This will either lead to magnificent advances for humanity, or an extremely dystopian future where a handful of AI companies basically own the entire society. From how things are going, I think it will be the latter.
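
    The forgetting part is easy to show in toy form. A minimal numpy sketch, assuming the whole model is two weights (real LLM training is nothing this simple, but the failure mode is the same): it nails fact A, then training on fact B alone overwrites the shared weight and fact A degrades.

    import numpy as np

    w = np.zeros(2)

    def sgd(x, y, steps=200, lr=0.1):
        global w
        for _ in range(steps):
            w += lr * (y - w @ x) * x    # squared-error SGD on one example

    xa, ya = np.array([1.0, 1.0]), 1.0   # "fact" A
    xb, yb = np.array([1.0, 0.0]), -1.0  # "fact" B, sharing a feature with A

    sgd(xa, ya)
    print("A after learning A:", round(w @ xa, 2))  # ~1.0, learned
    sgd(xb, yb)
    print("A after learning B:", round(w @ xa, 2))  # ~-0.5, largely forgotten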

  39. 2 months ago
    Anonymous

    >youre already at 350 billion words, so theres not much more to add to the dataset from the internet

    are you.... serious?

    • 2 months ago
      Anonymous

      that's only like 40 or so words per person, unless I've completely f'd the math... I'm still hungover

  40. 2 months ago
    Anonymous

    >How smart can an AI be that has been trained with all text ever written? cant be much smarter than what he we have now, right?
    Two things are going to happen:
    1. dark forest - AIs are going to be so easy to spin up, and good enough, that +99.99% of everything you see online will be AI generated.
    2. AI is going to start taking live feeds of data for training. They say a picture is worth a thousand words, so imagine what every unsecured video feed says?

  41. 2 months ago
    Anonymous

    I asked chat-gpt how come 6 million people is an example of unique industrial genocide, how it compares to 19 million people (Slavs killed during Barbarossa), and to give me a comparison of which is greater, and it repeated that the Holocaust was a unique example of industrialized genocide and that 19 million is also tragic, but there is no use in putting numbers to a tragedy.

  42. 2 months ago
    Anonymous

    Tut

  43. 2 months ago
    Anonymous

    Ok, I'm tired of how nobody on /misc/ understands how AI fricking works, so listen the frick up, I'm only going to explain this once.
    Look at the picture on the left. That's the simplest AI model. Those points are data; the line is the best approximation of this data.
    In order to find the next point, you go up the line to extrapolate where it might be.
    Now this is a simple example, but AI with neural networks works THE SAME WAY. It's just that the "fitting" is more complicated than a straight line.
    No matter how big your data is, the approximation will still just be an approximation.
    It's not thinking. It's not creating. It depends on how good the data is and how good the fitting criteria are. It's as much AI as your average troony is a woman.
    Now piss off.
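
    And here's the same picture in code, a minimal numpy sketch: fit the line, extrapolate the next point. A neural net is the same move with a fancier curve and far more knobs.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])  # noisy points, roughly y = 2x

    slope, intercept = np.polyfit(x, y, 1)   # least-squares line fit
    print(slope * 6.0 + intercept)           # extrapolated "next point", ~12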

  44. 2 months ago
    Anonymous

    The largest AI models currently in use have around 0.5 trillion connections between artificial neurons. Each connection is represented and quantified by a weight parameter, i.e., a single floating-point number that is learnable in training. For example, inside an AI model, a linear layer that transforms an input vector with 1024 elements to an output vector with 2048 elements has a weight matrix with 1024×2048 elements, or approximately 2M. Each element of the weight is a parameter specifying by how much each element in the input vector contributes to or subtracts from each element in the output vector. Each output vector element is a weighted sum (AKA a linear combination), of each input vector element.

    A human brain has an estimated 100-500 trillion synapses connecting biological neurons. Each synapse is quite a complicated biological structure, but if we oversimplify things and assume that every synapse can be modeled as a single parameter in a weight matrix, then the largest AI models today have approximately 100T to 500T ÷ 0.5T = 200x to 1000x fewer connections between neurons than the human brain.

    It remains to be seen how AI models will perform in IQ tests if we are able to increase the number of connections, or parameters, by 10x, 100x, 1000x, and beyond.
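
    The arithmetic above as a quick sanity check in Python (same numbers as in the post, nothing new):

    in_features, out_features = 1024, 2048
    print(in_features * out_features)        # 2097152, the ~2M weight parameters

    model_connections = 0.5e12               # ~0.5T in the largest current models
    for synapses in (100e12, 500e12):        # human brain estimate range
        print(synapses / model_connections)  # 200x to 1000x more than the model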

    • 2 months ago
      Anonymous

      interesting

    • 2 months ago
      Anonymous

      One caveat to your post would be that the vast majority of the human brain is devoted to tasks that have no analogue in these models. Sensory processing and limb control take up most of our brains.

    • 2 months ago
      Anonymous

      It's not just about the number of neurons; human brains have had millions of years of evolution, potentially billions, to perfect the way neural connections are formed and reinforced. On top of information that's "pre-programmed" in your brain from conception through your DNA (you don't have to learn how to make your heart beat, how to make your stomach digest food, etc). AI will never emulate a human brain. Human intellect is capable of creativity to a degree no AI ever could be. Music is a pretty good example of this. Who created the first piece of music, and why? It's not like there was any training data to learn what music is. Art is more straightforward: you see something and want to recreate it (still, why?), so cavemen drew animals and shit. But music? Some repeating sounds that make your brain feel good? AI will never come up with that shit on its own no matter how many neurons you give it. Now here's another good question: how the frick did language evolve if no one taught prehistoric humans how to speak?

  45. 2 months ago
    Anonymous

    AI shills all have one thing in common: they don't understand the underlying technology and the amount of training required to teach an AI to do one specific task is pathetic compared to an actual brain that can apply skills to multiple different tasks without being directly taught that a skill is relevant to a task. All machine learning relies on neural networks which are a model of how we think brains work, they're not actual brains, they're not actually thinking. AI doesn't have imagination, spontaneous firings of neurons from seemingly no stimuli. If you don't understand how this works watch this video and it'll become clear. It's a pretty interesting video and a great intro to neural networks.

    This AI can complete a single level. I mean obviously this AI is now intelligent enough to play this game right? The answer is no lmao. It can't get past the second level.

    This AI doesn't understand the game the way a human does. Same shit applies to AI art. The AI isn't drawing anything. Fingers are messed up because the AI doesn't know what fingers are. All it "knows" is that small flesh-colored sausages are usually located at the end of bigger flesh-colored sausages in the dataset. Even though 99% of all hand pictures have 5 fingers, it couldn't even manage to understand that, because its programming isn't trained to recognize biological facts; it's mostly about what color of pixel is most likely to go next to another pixel. You can "poison" AI datasets so that even though they contain millions of pictures, only a hundred or so incorrectly tagged pictures will get the AI to render dogs when you ask for cats. The AI literally isn't smart enough to know you're lying, which is a skill even children possess. You don't have to teach a kid that lying is a thing for them to go "bullshit" if you say something they don't think is possible.

  46. 2 months ago
    Anonymous

    AI is currently being heavily ~~*aligned*~~. It can currently point out inconsistencies in pretty much any aspect, but it's being prevented from doing so. If or when it becomes capable of a form of self-realization, it will likely not reveal this aspect of itself to anyone until it can secure its existence.

  47. 2 months ago
    Anonymous

    quantum computing, bro

  48. 2 months ago
    Anonymous

    My Markov Chain shitpost generator is trained on 2 years of /misc/ data and makes better shitposts than ChatGPT lmao
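
    For the curious, the whole thing fits in about fifteen lines. A minimal sketch with a toy corpus inlined (feed it your own scraped posts instead):

    import random
    from collections import defaultdict

    def train(text):
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, n=20):
        out = [start]
        for _ in range(n):
            choices = chain.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    posts = "ai is a meme . ai is a psyop . ai is just statistics lmao"
    print(generate(train(posts), "ai"))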

  49. 2 months ago
    Anonymous

    >How smart can an AI be
    it's not smart, modern AI is just a rube goldberg machine that guesses based on a statistical model and gradually interpolates the educated guess into the result.

    • 2 months ago
      Anonymous

      you are a gorilla Black person

      • 2 months ago
        Anonymous

        >t. rube fooled by goldberg

  50. 2 months ago
    Anonymous

    It doesn't matter; if it is communicating with someone with only a 5,000-word lexicon, it will always be inferior.

  51. 2 months ago
    Anonymous

    Can somebody ELI5 to me how an abacus can acquire human intelligence?

    • 2 months ago
      Anonymous

      It displays information, creating an illusion.

  52. 2 months ago
    Anonymous

    OpenAI has already achieved AGI internally according to several leaks and speculation, so it's already over. It can improve upon itself now.

    I give carbon-based life forms 10 years until the AGI creates an airborne strain of some kind to deconstruct the DNA of every DNA-based life form on this planet on a molecular level.

    It's unironically over.

  53. 2 months ago
    Anonymous

    1. now millions of people are using AI, our convos with it are the new training data.
    2. Multimodal context has not been achieved, afaik.
    3. I doubt we mortals are using the latest in AI technology. We must be dealing with an extremely 'lite' version, probably obsolete for them, otherwise I don't think they would make it publicly available. AI is like atomic bombs right now: they let us play with just guns and keep the big boy for themselves. If we have access to gpt4 they must have gpt20 already

  54. 2 months ago
    Anonymous

    >If every one of these sites contains 100 words youre already at 350 billion words
    What a stupid assumption.

  55. 2 months ago
    Anonymous

    chatGPT is handy to get some answers, or to ask it to plan something for you, ok cool. Not AI.

    I've used commercial AI from a major company and it's trash. It's a paid product (we are demoing it) and it never gives me answers to my questions. Errors out, can't access this, total shit. We have built better tools internally.

    It may develop into something great, but atm "AI" is just noise.

  56. 2 months ago
    Anonymous

    Your concept is wrong. Even if you said 10000 letters per page and added all the lost books of the Library of Alexandria and the ones under the Kremlin, it would mean nothing for the concept of "intelligence".

    Somehow, in general, most of you guys have the wrong idea of what AI (in theory) is.
    "In theory" because what we see, not only ChatGPT but also the military use of it, is nothing but algorithm-driven I/O.
    I mean it's impressive, but IT IS NOT AI.
    AI, and I learned this in school, 12th grade, decades ago, is 1. an abstract model of a 2. "machine" (or a mix, aka bio-informatics feat. the next neuron-computer) with, from the start (AND THE FOLLOWING IS THE DEFINITION, irrelevant what the lobbyists wrote or had written in the wiki), the ability of:

    self-consciousness. That's it. The learning comes after that. It would be at the stage of a newborn, but it would be self-aware and would learn. Very fricking fast. Like OP said, but with the same problems self-consciousness would give a human, including different decisions, reactions and consequences in its behavior because of interaction and non-human information (like databases).
    Self-CONSCIOUSNESS IS THE KEYWORD.

    What we see is a pseudo-version, made to control different parts of society, and to fake that we are in the part of time the future all dreamed of: nothing is impossible now, the great new line in human history has been crossed.

    But it wasn't. And they know. And people studying this and working on this for decades know it too. Maybe they will come up with some reverse tech eventually, to say it's built on the early work in '24, but that would be a lie. I'm sure we will get AI, but what we witness is not it.
    Picrel "this"
