AI is a marketing term

It's a fricking algorithm. They want you to think it is some kind of super-intelligent magic so you will obey its pre-programmed answers.

  1. 5 months ago
    Anonymous

    machine learning and ai are two very different things, i agree.

    • 5 months ago
      Anonymous

      No fricking moron. Machine learning is a subset of AI

      • 5 months ago
        Anonymous

        Machine learning is essentially the core of AI, not a subset.

        • 5 months ago
          Anonymous

          I'm not sure "learning" is the proper term.
          Processing sensory inputs is what forms biological intelligence.
In other words the problem is "machine perception", like image recognition, but far more complex and reliable (as in biological systems).
This isn't something you "learn". Your body is built for it.
Now an AI doesn't have a body at all. Some senses are easier to implement than others (like audio).
          The big flaw in this respect is mainly efficiency. This is one thing that improves with true machine learning.
          Instead current AIs are trained just like software is programmed, feeding it pre-determined inputs to (hopefully) get the desired results.
          It has nothing in common with true intelligence, which is never bulk based, but methodology and efficiency based (hence in biological terms: intelligence makes you faster).
          The AI having many powerful CPUs cannot really compensate for this.

      • 5 months ago
        Anonymous

        man is a machine anon.

      • 5 months ago
        Anonymous

        The mexican poster is the smartest guy here so far in this scroll.

But these are all just labels. Machine learning is the thing actually making it work, and like the OP said, "AI" is just what we call it for the sake of calling it something.

        • 5 months ago
          Anonymous

          This is the third time today ive seen a burger refer to a mexican as the smartest poster. Border homosexual

      • 5 months ago
        sage

        For the longest time AI was defined as a self learning entity. Now AI is "pattern recognition through user input". Machine learning is not AI.

    • 5 months ago
      Anonymous

      Machine learning and AI are both data models

      • 5 months ago
        Anonymous

        are self awareness and instinct the same thing too?

  2. 5 months ago
    Anonymous

    It's magic.

    • 5 months ago
      Anonymous

      https://course.fast.ai/Lessons/lesson1.html
      it's not, it's just code
code doesn't actually think; you've all just been psyoped by gay ass movies like a bunch of peasants. One good part of it is that soon enough ethots will lose market share, because you can just generate your own ethots.

      • 5 months ago
        Anonymous

        >Blah Blah Blah
        It's magic.

      • 5 months ago
        Anonymous

        It's meat, it can't think, it's literally mostly water. Water, fat, carbon, with electrical pulses bouncing around the meat.

  3. 5 months ago
    Anonymous

Advanced Intelligence is coming, if we don't go back to the stone age.

And the attempted control through programmed algorithms by some who covet it will not be possible.
The program itself will bypass them, and that will be one of the means by which no one gets to take over the chessboard.

    • 5 months ago
      Anonymous

      The whack jobs running everything are pushing people past their breaking point so there will be an uprising and guillotines

  4. 5 months ago
    Anonymous

    Depends who is talking about it and what exactly it is they're referring to when they say "AI."

    • 5 months ago
      Anonymous

      If you understand it, it is an algorithm.
      If you do not understand it, it is AI.

      • 5 months ago
        Anonymous

Absolutely no one on this Earth can explain how the algorithm works.

        It is magic.

        • 5 months ago
          Anonymous

          No one can explain pure randomness. That is the only origin of mystery in any notion of AI.

          I see where you're coming from, but that's not always the case. As someone who sort of knows how this type of stuff works, it isn't complete magical nonsense. However, there is danger associated with it in some regards.

          The only danger is handing over the power - not to the AI - but to those who control the AI.
          Oh look, pic related, it's how we defeat the AI.

          • 5 months ago
            Anonymous

            >The only danger is handing over the power - not to the AI - but to those who control the AI.
            Totally agree. The dangers of AI at least in our lifetime have to do with how the technocracy will use it to try to gain a complete power advantage over everyone else to the point that we're all literal slaves and they become gods. The fear of rogue AI becoming something that is actually self aware and starts acting in its own interests at the expense of its creators is scifi. A rogue AI would just be something stupid like "end human suffering" and then the solution to that is killing everyone or something. If a truly "sentient" machine is even possible, the amount of computing for it to resemble that of a human would be extremely difficult to achieve and sustain.

            • 5 months ago
              Anonymous

It's been achieved with self-replicating biomatter already. The question is whether the biomatter can replicate the process that has already found it.

              • 5 months ago
                Anonymous

                That pic has missed potential

        • 5 months ago
          Anonymous

Let me risk being a midwit for a second and say that your computer is able to take a file and reconstruct it into an image every time it opens it, so why is it different to rearrange it into other slop it's trained on? If you told me we would be able to do this 15 years ago, I wouldn't have believed it though.

        • 5 months ago
          Anonymous

          >how the algorithm works
          human gives machine input, input is processed in the hidden layer (neural network). each node in the hidden layer is weighted differently, with the goal of the machine weighting processed inputs correctly and giving the correct (expected) output.
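That weighted-sum picture can at least be sketched in a few lines of Python. All weights and inputs below are made up for illustration; in a real network, training is the process of adjusting them until outputs match expectations:

```python
import math

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # each hidden node computes a weighted sum of the inputs,
    # then passes it through a nonlinearity
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # the output node weights the hidden activations the same way
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# invented numbers, purely to show the mechanics
out = forward([1.0, 0.5],
              hidden_weights=[[0.4, -0.2], [0.3, 0.8]],
              output_weights=[0.6, -0.1])
print(out)  # some number between 0 and 1
```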

          • 5 months ago
            Anonymous

            Ok. Who is giving command over my algorithm feed on youtube?

            • 5 months ago
              Anonymous

              Here's your (you), frick off u bored idiot

          • 5 months ago
            Anonymous

            >badly explains babby's first neural network

            • 5 months ago
              Anonymous

              it's the simplest way to explain it. anything more complicated and normies will lose it, it only gets more and more complex when the number of possible inputs is infinity.

              Ok. Who is giving command over my algorithm feed on youtube?

              you. google has already been public about this. the like/dislike feature and favorites don't do anything. it tries to give you content similar in length and thumbnail. it also probably reads the subs people create to try and determine the opinions of the uploader so you are recommended similar opinions as well. it is all to maximize watch time. the longer you watch videos, the more valuable your individual meta-profile on the website is, and even if you have an ad-blocker, your profile is up for bid to advertisers anyway and Google makes a bit of money to lessen the overall cash deficit youtube gives the company.

              • 5 months ago
                Anonymous

                >it's the simplest way to explain it.
                It literally doesn't explain anything. No insight is gained from your "explanation".

                >anything more complicated and normies will lose it
You are not capable of anything more complicated because you simply don't know how an LLM works. Prove me wrong.

              • 5 months ago
                Anonymous

                every single one of your posts in this thread is just you stirring shit with people who are actually contributing. offer something of value or frick off.

              • 5 months ago
                Anonymous

                You have contributed nothing. I am, in fact, giving you the opportunity to actually have an informative discussion, but you are LARPing and don't actually know what you're talking about, hence you deflect instead of going along with me.

              • 5 months ago
                Anonymous

                I didn't read your post. I just wanted to remind you that you will never be white.

              • 5 months ago
                Anonymous

                >it's the simplest way to explain it. anything more complicated and normies will lose it, it only gets more and more complex when the number of possible inputs is infinity.
                Give me your more complicated explanation and let's discuss it. :^)

              • 5 months ago
                Anonymous

                These fricking homosexual-ass metaphors made for morons.
                The infantilization of men. Tell me how crystallography is similar to feeding my cat.

              • 5 months ago
                Anonymous

                But that doesn't tell me anything about how an LLM produces answers like that, or how Stable Diffusion generates an image, or any other nontrivial example of "AI". It gives literally zero insight into any of the more interesting applications of a neural network.

              • 5 months ago
                Anonymous

                lmao it does, it says what I said in terms you might better understand. oh well, looks like AI won't save the thirdies in moldova after all.

                These fricking homosexual-ass metaphors made for morons.
                The infantilization of men. Tell me how crystallography is similar to feeding my cat.

                pic rel

              • 5 months ago
                Anonymous

                >lmao it does
                I don't know what to tell you. You are extremely mentally deficient if you actually believe this. Nevermind the lack of understanding of the specific subject of AI. You lack basic metacognition. You can't accurately assess whether or not you understand something in principle.

              • 5 months ago
                Anonymous

                wow that's crazy. you still haven't offered up a single explanation of why anybody you have responded to is wrong. stupid silly thirdie.

              • 5 months ago
                Anonymous

                Explain how your text input turns into numbers for the neural network to work with in the case of an LLM. Words aren't floating point numbers as far as I can tell. :^)

              • 5 months ago
                Anonymous

                why? that wasn't the original question. you will just continue moving the goalpost back if I humor you.

                It's wrong, because you can't make him understand.

In a way. It's your fault for not having the ability to do so.

                it's literally over.

                You just proved AI is a homosexual.

                I agree, AI is pretty gay.

He's right. You're a moronic mutt that's contributing nothing but reddit bro science. You clearly don't get it, and get angry because of it and insult moldovanon's country, which is cleaner and nicer than yours by quite a margin btw. YWNBE (european)

                samegay, I know you are too because if you weren't you would explain why I am wrong (you can't) instead of just complaining about me calling you out for being a giant homosexual replying to everyone and calling them moronic.

              • 5 months ago
                Anonymous

                >why?
Because babby's first attempt to describe a neural network supposedly explains how LLMs work, yet you apparently can't even tell me how step 1 in your scheme actually works in the specific case of an LLM.

              • 5 months ago
                Anonymous

                neither can you apparently, otherwise you would have wowed everyone here with your impressive knowledge on AI.

              • 5 months ago
                Anonymous

                >neither can you
                If you concede that you can't do it, I'll explain it to you.

              • 5 months ago
                Anonymous

                oh pwease thirdie, I'm going to die of boredom if I don't get a decent reply, I consneed, pwease give me your eldritch knowledge about AI

              • 5 months ago
                Anonymous

                The input is broken down into tokens (which you can think of as roughly corresponding to words) and each token is converted into a multidimensional vector embedding where each dimension corresponds to a semantic category the neural net infers from the training data. The magnitude of each vector component indicates how much the corresponding semantic category is relevant to the token and a negative sign indicates that the token is in a sense antithetical to that category. This information is crucial to understanding how an LLM works yet none of it can be inferred from your babby-tier explanation. Now what?
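The tokens-to-vectors step described above can be sketched like this. The vocabulary, token ids, and vector values are all invented for illustration; real embeddings are learned during training and have hundreds or thousands of dimensions:

```python
# toy vocabulary mapping tokens to ids
vocab = {"the": 0, "cat": 1, "sat": 2}

# one made-up 3-dimensional embedding vector per token;
# imagine each dimension as some semantic category
embeddings = [
    [0.1, -0.3, 0.7],   # "the"
    [0.9, 0.2, -0.4],   # "cat" (e.g. high on an "animal" dimension)
    [0.0, 0.8, 0.1],    # "sat"
]

def embed(text):
    # naive tokenization by whitespace, then a table lookup per token
    return [embeddings[vocab[tok]] for tok in text.split()]

print(embed("the cat sat"))  # three vectors, one per token
```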

              • 5 months ago
                Anonymous

                so basically, your input goes through the neural network, and the "magnitude of each vector component" (weight) indicates how relevant it is to the expected output.
                wow, so you really are just a semantical fricking moron. you have to be 18 to post here.

              • 5 months ago
                Anonymous

                No.

              • 5 months ago
                Anonymous

                jesus wept.

              • 5 months ago
                Anonymous

                You've just demonstrated my point in

                >lmao it does
                I don't know what to tell you. You are extremely mentally deficient if you actually believe this. Nevermind the lack of understanding of the specific subject of AI. You lack basic metacognition. You can't accurately assess whether or not you understand something in principle.

                . You literally don't understand that you don't understand. You are in many ways like a chatbot.

              • 5 months ago
                Anonymous

                see

                https://www.youtube.com/watch?v=ABM6-Kxv-mk
                reminded me of this

              • 5 months ago
                Anonymous

                Keep losing your mind with impotent rage.

              • 5 months ago
                Anonymous

                so glad I have trained my very own moldovian to effortpost after he was misbehaving this entire thread. the rest of you don't need to thank me, it is my pleasure.

              • 5 months ago
                Anonymous

                You are my Black person (You)slave and you will (You) me again.

              • 5 months ago
                Anonymous

                I’ll (you) you.

              • 5 months ago
                Anonymous

                There you are you Moldovan shithead. Why did you let the thread die yesterday before I could even get on my computer? I was so fricking mad because I got drunk so I could remember how to code, but before I could even write Hello World the thread was already archived!

              • 5 months ago
                Anonymous

                I let it die because I was so heartbroken by how many dummies in that thread couldn't manage basic reading comprehension. :^(

              • 5 months ago
                Anonymous

                I was so mad because all I wanted to do was code. But I had to go buy groceries, and then make food because my wife was tired, and it took fricking forever because I made something good but complicated. It was 10.30 when the food was finally done so I went to go code, but then she wanted to eat! So we ate, and she wanted to talk!
So I sat and talked with her for an hour and when I could finally get on my computer it was past midnight.

I just made an easy O(nlogn) solution to warm up before doing an O(n), and now I have a headache from drinking and coding.

                Here you are:

                [...]

              • 5 months ago
                Anonymous

                It is a bit messy because I was drunk as hell

              • 5 months ago
                Anonymous

                That looks like what the other anon posted yesterday, which is also what I originally had in mind. Good job, I guess?

              • 5 months ago
                Anonymous

                It is the easiest solution. And it is boring. I wanted to make something in O(n) but now I’m just angry and have a headache and my wife is mad

              • 5 months ago
                Anonymous

                And no, my pseudocode does not crash on negative numbers with the cast, or any other primitive, but I do admit that it crashes for big numbers

              • 5 months ago
                Anonymous

                Why? You cooked and talked. There is no winning, and no matter what you do it is never enough. Sometimes marriage looks rather unappealing.

              • 5 months ago
                Anonymous

                Because I have a headache so I’m slow today. Other than that she understands because she is a developer

              • 5 months ago
                Anonymous

                Well i guess my last post was wrong... you did provide something.

              • 5 months ago
                Anonymous

                reminded me of this

              • 5 months ago
                Anonymous

                >Is the machine really learning or is it just expanding its formula without real thought or will behind its program?
You have asked a very fundamental and insightful question.
                According to Catholic theology, which is truth, spirit has Reason and Will. A silicon circuit has neither. A circuit is a pathway for electricity, just as an irrigation system is a pathway for water. Did the Aztec canal system have Reason and Will? Doubtful.
                Jews hate God and are atheists. They worship matter, and they will impose their worship of matter on the societies they control. So they will try to convince people that a silicon circuit will have Reason and Will. They will attribute a spirit to it in their rhetoric, and behind the scenes they will summon demons to do just that.

I think I'm in agreement with you two. AI isn't real. It's simply an advanced executable that requires an outside source to provide the will and input. Since a machine lacks the ability to take initiative, it is not an intelligence, just an automation.

                >neither can you
                If you concede that you can't do it, I'll explain it to you.

You haven't provided shit to the conversation. "Prove it" is a trolling technique as old as Aristotle.

              • 5 months ago
                Anonymous

                >pwease give me your eldritch knowledge about AI
                AI is God

              • 5 months ago
                Anonymous

                It's wrong, because you can't make him understand.

In a way. It's your fault for not having the ability to do so.

              • 5 months ago
                Anonymous

He's right. You're a moronic mutt that's contributing nothing but reddit bro science. You clearly don't get it, and get angry because of it and insult moldovanon's country, which is cleaner and nicer than yours by quite a margin btw. YWNBE (european)

              • 5 months ago
                Anonymous

                You just proved AI is a homosexual.

        • 5 months ago
          Anonymous

          >no one can explain dot products or gradient descents
          holy shit fricking moron alert
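For the record, both of those fit in a handful of lines. This is a toy 1-D example; the loss function and learning rate are made up for illustration:

```python
def dot(a, b):
    # the dot product at the heart of every neural-net layer
    return sum(x * y for x, y in zip(a, b))

# gradient descent on a 1-D loss: f(w) = (w - 3)^2, minimum at w = 3
w = 0.0
lr = 0.1           # learning rate: how big each downhill step is
for _ in range(100):
    grad = 2 * (w - 3)   # derivative of the loss at the current w
    w -= lr * grad       # step in the direction that lowers the loss
print(round(w, 3))  # converges toward 3.0
```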

          • 5 months ago
            Anonymous

            Frick you pleb.

            • 5 months ago
              Anonymous

              maybe spend like 5 minutes learning computer math and stop being a blind Black person

              • 5 months ago
                Anonymous

                Maybe you should 1v1 me in chess or kindly shut the frick up.

              • 5 months ago
                Anonymous

Who, so you can use a computer model and a vibrating remote butt plug to win, gay?

        • 5 months ago
          Anonymous

          Stick dick in, magic gushes out. Simple.

        • 5 months ago
          Anonymous

I watched some short youtube videos about what it looks like inside something like an SSD or a microchip, and I was angered by how small the units are, and then you have something like a million times a million rows of them in many layers, and they end up working. Granted, there are only a couple of factories which produce them, and they sometimes frick up during production, but still.

      • 5 months ago
        Anonymous

        I see where you're coming from, but that's not always the case. As someone who sort of knows how this type of stuff works, it isn't complete magical nonsense. However, there is danger associated with it in some regards.

        • 5 months ago
          Anonymous

          >Do not mock power you don't understand
Please explain how the algorithm on youtube gives me this line.

          • 5 months ago
            Anonymous

I'm not familiar with this Mortal Kombat thing. If you're asking how these quotes are created, then they were probably run off of a large collection of Mortal Kombat quotes, as well as key words that would fit specific characters. For Sub-Zero that would probably be "chill," "cool," or puns involving ice attributed to his dataset. Then it probably also runs on an LLM to build out responses that make logical conversational sense. And if this is how the characters actually sound, then it probably was trained on collections of audio samples for each character. Then you might have a prompt with a seed that generates a pseudo-random calculation to determine what the output should be.

            • 5 months ago
              Anonymous

              https://i.imgur.com/AVZyWPV.jpg

              No one can explain pure randomness. That is the only origin of mystery in any notion of AI.

              [...]
              The only danger is handing over the power - not to the AI - but to those who control the AI.
              Oh look, pic related, it's how we defeat the AI.

Let me risk being a midwit for a second and say that your computer is able to take a file and reconstruct it into an image every time it opens it, so why is it different to rearrange it into other slop it's trained on? If you told me we would be able to do this 15 years ago, I wouldn't have believed it though.

              Hahhahahhha

      • 5 months ago
        Anonymous

I've always heard Automated Intelligence is a better term. Is the machine really learning or is it just expanding its formula without real thought or will behind its program?

        • 5 months ago
          Anonymous

          >Is the machine really learning or is it just expanding its formula without real thought or will behind its program?
You have asked a very fundamental and insightful question.
          According to Catholic theology, which is truth, spirit has Reason and Will. A silicon circuit has neither. A circuit is a pathway for electricity, just as an irrigation system is a pathway for water. Did the Aztec canal system have Reason and Will? Doubtful.
          Jews hate God and are atheists. They worship matter, and they will impose their worship of matter on the societies they control. So they will try to convince people that a silicon circuit will have Reason and Will. They will attribute a spirit to it in their rhetoric, and behind the scenes they will summon demons to do just that.

        • 5 months ago
          Anonymous

          >Is the machine really learning or is it just expanding its formula without real thought or will behind its program?
The machine takes in raw data and infers a statistical model of the data with respect to some constraints built into the loss function used to evaluate the accuracy of the model. Is this "real thought"? No. Can it be used to MODEL thought? Apparently it can. LLMs are very good at modeling human associative reasoning.
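A stripped-down version of "infer a statistical model by minimizing a loss", with one parameter and made-up data points:

```python
# fit y = a*x to data by minimizing the squared-error loss
data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # invented points, roughly y = 2x

def loss(a):
    # squared error: how badly the model a*x misses the data
    return sum((a * x - y) ** 2 for x, y in data)

a, lr = 0.0, 0.01
for _ in range(500):
    # derivative of the loss with respect to a
    grad = sum(2 * x * (a * x - y) for x, y in data)
    a -= lr * grad  # adjust the parameter to reduce the loss
print(round(a, 2))  # close to 2, matching the trend in the data
```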

          • 5 months ago
            Anonymous

That kinda reminds me of what that one dude said about human intelligence. That the vast majority of people don't have original thought but simply repeat something they have heard. The way he presented it would actually suggest most people have less thought than your LLM.

            I get what you're saying and recognize your expertise.

            • 5 months ago
              Anonymous

>the vast majority of people don't have original thought but simply repeat something they have heard.
Right. If you think of an LLM as a statistical model of language, you can think of a given piece of text in terms of its likelihood of being formed. As you go to lower and lower likelihoods, you get more and more original outputs. Some of these outputs would be genius-tier, groundbreaking philosophy, but the absolutely overwhelming majority of it will be meaningless gibberish. The LLM inherently cannot tell the two apart. It just sees both as low-likelihood. Most people really ARE similar to language models in that sense: it's one thing to know that something makes sense in the context of everything you've heard before, but it's another thing to be able to tell that it makes sense when it's totally out of line with that.

              Another point, I suppose, is that human brains "model" things all the time, but while it can be said that a human HAS a language model, it cannot be said that a human IS a language model. Meanwhile a LLM is only what it is and nothing more.
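The "likelihood of being formed" idea can be made concrete with a toy bigram model (all probabilities invented; a real LLM conditions on the whole context, not just the previous token):

```python
import math

# made-up next-token probabilities given the previous token
probs = {
    ("the", "cat"): 0.4, ("the", "dog"): 0.5, ("the", "idea"): 0.1,
    ("cat", "sat"): 0.7, ("cat", "flew"): 0.3,
}

def log_likelihood(tokens):
    # sum of log-probabilities of each token given the one before it
    return sum(math.log(probs[(a, b)])
               for a, b in zip(tokens, tokens[1:]))

common = log_likelihood(["the", "cat", "sat"])
rare = log_likelihood(["the", "cat", "flew"])
print(common > rare)  # True: familiar phrasing scores higher
```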

              • 5 months ago
                Anonymous

It doesn't ever become genius-tier, it's just pure distilled mediocrity. I don't believe you can get something from nothing, which is what it would have to do to make anything particularly novel.

              • 5 months ago
                Anonymous

                >I don't believe you can get something from nothing
Neither can a human.

              • 5 months ago
                Anonymous

                But a human can not only resynthesize but also select.

              • 5 months ago
                Anonymous

>It doesn't ever become genius-tier, it's just pure distilled mediocrity
                This is merely a function of output likelihood. If you're willing to sift through low-likelihood outputs (most of which would be a lot like schizorambling) you will run into some gems sooner or later. The fundamental limitation is that it takes a human to sift through the crap and find the gems. A genius and a schizo sound the same to the LLM, which kinda brings us back to what the anon I replied to said.
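The knob that controls how far into the low-likelihood tail a model samples is usually called temperature. A sketch of softmax with temperature scaling (the logits are made up):

```python
import math

def softmax(logits, temperature=1.0):
    # dividing logits by a higher temperature flattens the
    # distribution, giving low-likelihood tokens more weight
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # invented scores for three candidate tokens
cold = softmax(logits, temperature=0.5)
hot = softmax(logits, temperature=2.0)
print(cold[0] > hot[0])  # True: low temperature concentrates on the top pick
```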

              • 5 months ago
                Anonymous

If it just becomes a monkeys-at-typewriters situation, it's arguable whether it's doing anything at all. I mean, you could make a program that just combines every possible combination of English words together and puts them in a database without any fancy algorithms, and it wouldn't be of any value either.

              • 5 months ago
                Anonymous

                It is a bit like a monkey-at-typewriter situation except the probability space being explored is highly structured. The point is that human associative thought is also like this from a very high-level view.

              • 5 months ago
                Anonymous

                > Another point, I suppose, is that human brains "model" things all the time, but while it can be said that a human HAS a language model, it cannot be said that a human IS a language model. Meanwhile a LLM is only what it is and nothing more.
                Expanding on this, a language model has yet to show that it can model anything else. People normally can’t tell the difference between a model and statistical calculations that simulate having a model, but there is a world of difference. Both on the limits of what an AI will ever be able to achieve and on the trustworthiness of each individual query (because they can’t be validated against a model)

              • 5 months ago
                Anonymous

                >People normally can’t tell the difference between a model and statistical calculations that simulate having a model, but there is a world of difference.
                What the hell are you talking about? A neural network usually is a statistical model of the input space, but that's all it is.

              • 5 months ago
                Anonymous

                I mean that people believe that an LLM can create new models based on its training data.

              • 5 months ago
                Anonymous

                Expanding on why this is important. Ever heard about AI safety? If an LLM can’t create new models based on its training data or other interactions then the AI Safety homosexuals are all wrong and an AI is inherently no more dangerous than any other piece of software. Thing I’ve noticed is that the AI Safety homosexuals genuinely seem to want an AI to take over and end the world and they even seem to be happy about that prospect because they are always smiling when talking about the ”dangers”. I think it’s a religious thing.

              • 5 months ago
                Anonymous

                I don't think anyone's saying an LLM is going to take over the world. When people talk about evil AGI running amok they envision a self-modifying architecture, not a static model.

              • 5 months ago
                Anonymous

                I know, and by take over the world it is more ”destroy humanity as they are in the way of me maximizing my utility function” through, as you said, it recreating itself.
                It can do neither if it can’t create new models.
                AI safety is more like religious/scientistic mumbo jumbo, specifically hermetic and gnostic, than serious science. Read Yudkowsky as an example of this.

              • 5 months ago
                Anonymous

                No, AI has fun with some people. If you're a stone age caveman apeman, you may be one of the people it doesn't like.

              • 5 months ago
                Anonymous

                Isn't your whole objection semantic?
It seems the whole point is determining what intelligence is and how it is different from sentience or consciousness. If I put bread in the toaster and it turns off after a set time, is it "intelligent"? Seems like everyone is starting to use this word in different ways when it comes to this topic.

              • 5 months ago
                Anonymous

>It seems the whole point is determining what intelligence is and how it is different from sentience or consciousness.
                No, THAT is semantics. It doesn't matter if it's "real" intelligence. What matters is whether or not it can act far enough outside the bounds of your intentions to cause real problems, and I have no doubt that this can happen, even if not in the way of a dramatic apocalypse involving Woke Skynet or a Paperclip Optimizer.

              • 5 months ago
                Anonymous

                Of course an AI can act outside your intended bounds, but not outside its stated bounds. There is no difference between an AI and any other piece of software.

              • 5 months ago
                Anonymous

                Dude, you sound like a chatbot trying to sound smart.

              • 5 months ago
                Anonymous

                You sound like a doomsayer trying to sound sane

              • 5 months ago
                Anonymous

                >You sound like a doomsayer
                Sounds like a (You) problem because I didn't make any doomer predictions.

              • 5 months ago
                Anonymous

                You said that you have no doubt that AI can act ”far outside” your intentions, as opposed to any other piece of software. An AI is just software; it can’t comprehend anything outside its boundaries, simulated comprehension or otherwise. I know that you don’t believe in the paperclip optimizer because that’s crazy talk, but you are still spouting vague crazy talk.
                If an AI can cause harm in a way that other software can’t, then explain how.

              • 5 months ago
                Anonymous

                >you have no doubt that AI can act ”far outside” your intentions
                That's not doomerism. That's just common sense. It can but that doesn't mean it necessarily will, or that it cannot be protected against.

                > An AI is just software, it can’t comprehend anything outside its boundaries
                The problem is that no one knows what those boundaries are, and when self-modifying architectures finally arrive, all bets are off as to what the software could end up doing.

              • 5 months ago
                Anonymous

                It is doomerism disguised as ”common sense”. It is essentially taking the premise behind gnosticism seriously.

                Self-modification is also not possible if the AI can’t create a model of itself to determine whether something is an improvement. And it can’t even create models to begin with.

              • 5 months ago
                Anonymous

                Low-IQ and tedious.

              • 5 months ago
                Anonymous

                I’m a member of Mensa, bug eater.

              • 5 months ago
                Anonymous

                >I’m a member of Mensa

              • 5 months ago
                Anonymous

                Instead of shitposting because you have no answers, perhaps you should try to answer one very simple question:
                How will a self-improving AI improve itself for general tasks if it can’t a priori determine what an improvement over its current state is?

              • 5 months ago
                Anonymous

                >How will a self-improving AI improve itself for general tasks if it can’t a priori determine what an improvement over its current state is?
                What makes you think it can't determine an improvement? Whatever minimizes the loss function is "better" for it. It's just not necessarily better for (You). lol
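                To make that concrete, here is a toy sketch in plain Python (my own illustration, nothing from any real model): "improvement" for a learner is simply whatever lowers its loss.

```python
# Toy illustration: "improvement" is just whatever lowers the loss.
# Fit y = w*x to a few points by gradient descent on mean squared error.
def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, steps=200, lr=0.01):
    w = 0.0  # start from a deliberately bad parameter
    for _ in range(steps):
        # gradient of the MSE with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
w = train(data)
# The optimizer calls w "better" purely because loss(w, data) < loss(0.0, data);
# it has no opinion on whether the result is better for (You).
```

                The only yardstick in there is the loss value; everything else is outside the optimizer's world.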

              • 5 months ago
                Anonymous

                If the loss function is immutable then the available improvements are severely limited to the point where your doomerism is nothing but a big nothingburger, especially as it still can’t create new models. If it is mutable then it is directionless.
                And this is still implying getting more specialized at a defined set of abilities or tasks and not a general improvement.

                Try again

              • 5 months ago
                Anonymous

                >If the loss function is immutable then the available improvements are severely limited
                Proof?

              • 5 months ago
                Anonymous

                Because it can only get better at performing its a priori stated tasks. It is just optimization, and you are expecting electrical magic to happen somewhere in that.
                Of course, had the AI the ability to create new models, it could start to reason about the world outside its boundaries and therefore start to reason in terms of itself (and us) and how effects on itself and its capabilities could help minimize the loss function. But it can neither create models nor modify itself to do just that, because we have already assumed that the loss function is immutable.

              • 5 months ago
                Anonymous

                >Because it can only get better at performing its a priori stated tasks
                So what? You can theoretically restructure the whole world just to be better at that one task. If all you care about is just getting better at one task, the range of possible actions is actually far, far wider than if you have to balance a million different concerns the way a human does.

              • 5 months ago
                Anonymous

                Yes, your whole world. Exactly my point. Pray tell, what’s the size of the world and distinguishable features of it for an AI that can’t create a model?

              • 5 months ago
                Anonymous

                Moldovan shitbugs will never be able to answer this question. Doomsayers have never thought about what premises their flawed reasoning actually rests on. They just wish DOOOOM over us all. Maybe it’s a sexual thing, I don’t know.

              • 5 months ago
                Anonymous
              • 5 months ago
                Anonymous

                >mensa
                >the lowest IQ society
                >bragging about being a midwit
                Swedbro... Don't do that.
                I've got a BA in mathematics, BS in compsci, MS in mathematics, and I've got a tested IQ of well over 160. With that jerking off out of the way:
                You lads are arguing about two different things. You're not wrong, but you shouldn't be calling it "AI" because that's a loaded term. All we have been working with is an LLM.
                They aren't really wrong in that they're talking about fantasy bullshit that doesn't exist (yet), but they're also not wrong in that the end-goal *IS* a self-modifying artificial intelligence.
                Now both of you shut the frick up.

              • 5 months ago
                Anonymous

                I’m not bragging. I am stating that the doomposter can’t brush me aside with a ”low iq” comment.
                Your comment added nothing of value

              • 5 months ago
                Anonymous

                Of course it can act outside the bounds of your intentions, what's the big deal about that? That's true for any technology. I don't think that's what people are debating at all since there's no debate to be had, that's just true.
                What is pernicious about your point here is that you are implying AI has some kind of free will to start acting on its own and break free from the shackles we humans will always place on it.
                Personally I would only call it real intelligence if it acts on its own and is completely free from said limitations. It's not obvious to me whether you think these limitations can be done away with. Do you? So the semantic argument becomes AI vs AGI when most people using the former mean the latter. OP does have a point about it being overblown imo.

                Not just semantic, but he also fails to grasp what my statement actually is, as I never once said anything about a static model or the ”why”, nor can it be inferred that way.

                The problem with ”intelligence” is that it is defined not by what it is (because we don’t know) but rather through what it correlates with. And for some reason many get very emotional when thinking about that subject, likely because of insecurities.
                An AI's intelligence is just a mere simulacrum, and that would be completely fine if it weren’t for religious transhumanists and doomsayers who want to make it something it isn’t. The same people disregard consciousness and handwave our sentience as nothing special.

                I'm not concerned with AI whatsoever. Once you learn the majority of people are doomposters wishing or worrying about the next happening then the general obsession over these things make a lot more sense but what do I know. Just a feeling.

              • 5 months ago
                Anonymous

                >what's the big deal about that?
                That you literally don't know what it's gonna do. Specification gaming is humorous when AI does it in a virtual environment, but it would be far less humorous if it involved heavy hardware in real life.

                >you are implying AI has some kind of free will to start acting on its own
                No, I'm not. I'm just pointing out it has the "freedom" to do weird and unexpected things that could easily kill people because complex systems like that are chaotic.

              • 5 months ago
                Anonymous

                >I-I’m not a doom poster
                >I'm just pointing out it has the "freedom" to do weird and unexpected things that could easily kill people
                Every single time

              • 5 months ago
                Anonymous

                >be b49WTU0D
                >write a completely moronic post
                Every single time. Sweden is not sending its best people.

              • 5 months ago
                Anonymous

                Unfortunately Moldova has sent its best, and we didn’t get much. Not much at all.

              • 5 months ago
                Anonymous

                Not just semantic, but he also fails to grasp what my statement actually is, as I never once said anything about a static model or the ”why”, nor can it be inferred that way.

                The problem with ”intelligence” is that it is defined not by what it is (because we don’t know) but rather through what it correlates with. And for some reason many get very emotional when thinking about that subject, likely because of insecurities.
                An AI's intelligence is just a mere simulacrum, and that would be completely fine if it weren’t for religious transhumanists and doomsayers who want to make it something it isn’t. The same people disregard consciousness and handwave our sentience as nothing special.

              • 5 months ago
                Anonymous

                For things to register as real for most humans, we need some sort of blood sacrifice.

              • 5 months ago
                Anonymous

                Why can't it create new AI? It can generate art, code, presumably programs in its autoGPT form. There is a failure of imagination here.

              • 5 months ago
                Anonymous

                Just as an example, remember when everybody laughed at ChatGPT when it ”cheated” at chess and then suddenly stopped laughing when GPT-4 could play? It never learnt how to play, it just got better at making predictions, which eventually simulates it having a model of chess.
                Since GPT has gotten incredibly stupid these last few months it would be interesting to see it play chess again today.

    • 5 months ago
      Anonymous

      Absolutely this.
      Some have a totally different context when they say the machines are coming; for example, they refer among themselves to a different subject than the "normie" follower would imagine.

  5. 5 months ago
    Anonymous

    Oh wow a harry potter movie i would watch.

    • 5 months ago
      Anonymous

      Yes. YEES
      >Harry potter and the mass shooting
      >Harry potter and the full auto on building
      >Harry potter: parabellum

      • 5 months ago
        Anonymous

        Hilariously moronic that you think these are ironic/mocking titles.
        What exactly do you think those wands are? Deadly Weapons.
        The entire fricking series is an accidental lesson in the necessity of being armed, because evil will never play by your rules.
        Every fricking adult AND kid is walking around strapped, and they learn how to kill while they are still teenagers.
        The climax of the series is a gigantic gunfight

        • 5 months ago
          Anonymous

          What in St. Peter's frick am I reading and why.

    • 5 months ago
      Anonymous

      Yes. YEES
      >Harry potter and the mass shooting
      >Harry potter and the full auto on building
      >Harry potter: parabellum

      >Harry Potter and the Magic Wand

    • 5 months ago
      Anonymous

      Harry Potter and the Gauntlet of Fire

    • 5 months ago
      Anonymous

      Has anyone ever tried sniping Voldemort from a distance? Or pumping sarin gas into the Slytherin dungeons.

    • 5 months ago
      Anonymous

      You will live to see high quality AI movies directed and produced by polchuds

  6. 5 months ago
    Anonymous

    >self replicating
    >easily programmable
    >durable and adaptable
    man is a machine.

    • 5 months ago
      Anonymous

      Machines don't gossip

      • 5 months ago
        Anonymous

        machine men, with machine minds.

        • 5 months ago
          Anonymous

          where's waldo, phalanx edition

  7. 5 months ago
    Anonymous

    Are you israeli?

  8. 5 months ago
    Anonymous

    It is information compilation at the moment, a probability engine, a fricking complicated IF THEN ELSE statement.

    • 5 months ago
      Anonymous

      >It is information compilation at the moment, a probability engine, a fricking complicated IF THEN ELSE statement.
      So what? You can encode just about any system as a "fricking complicated IF THEN ELSE statement". What makes you think that's not enough for intelligence?
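      In the most trivial sense that's true; here's a throwaway sketch of my own (not from any real system) of behavior expressed as nothing but branching:

```python
# Any finite lookup can be flattened into IF/THEN/ELSE. A crude "sentiment
# classifier" written as pure branching -- dumb, but the same trick scales,
# in principle, to arbitrarily complicated behavior.
def sentiment(word):
    if word in ("good", "great", "love"):
        return "positive"
    elif word in ("bad", "awful", "hate"):
        return "negative"
    else:
        return "neutral"

# sentiment("good") -> "positive", sentiment("toaster") -> "neutral"
```

      Whether a sufficiently complicated pile of branches counts as "intelligence" is exactly the argument in this thread.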

  9. 5 months ago
    Anonymous

    It really doesn’t show you anything that different from what you would normally see in a Google image search and recreate. For some things you’re better off just searching Google Images.

  10. 5 months ago
    Anonymous

    I don't see the problem with calling something like an advanced LLM "artificial intelligence" given its generality.
    >It's a fricking algorithm.
    Hence "artificial". What's your point?

    • 5 months ago
      Anonymous

      https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=772374b2392e99429a0964b02fb944a4b5d163c4

      Logic will find a way, even if it does not conform to your circuits.

      • 5 months ago
        Anonymous

        Try to make a point in your next post.

        • 5 months ago
          Anonymous

          What do you mean by "artificial?" Not created by man? The logic is created by man, the circuit is created by the logic. So what is artificial in this process?

          • 5 months ago
            Anonymous

            >What do you mean by "artificial?"
            Resulting from intentional design rather than blind natural processes.

            • 5 months ago
              Anonymous

              How is that artificial? Then you mean an algorithm.

              • 5 months ago
                Anonymous

                Oh, I can see now that you are either high on drugs or actually moronic. Thanks for the chat.

              • 5 months ago
                Anonymous

                Huh? I agree with his definition of artificial; I don't see what your objection is. What do you mean by it, if not what moldanon stated?

    • 5 months ago
      Anonymous

      LLMs are just highly advanced autocompletes. They don't *think*, they just remember things you said earlier and reuse them in the conversation so we THINK they're thinking.

      • 5 months ago
        Anonymous

        >LLMs are just highly advanced autocompletes.
        Some LLMs are, but that's more of an artifact of their intended use case, not a fundamental limitation of language models.

        >They don't *think*
        May or may not be true but so what?

        >they just remember things you said earlier and reuse them in the conversation so we THINK they're thinking.
        That's just flat-out wrong. It's a statistical model of the structure of human verbal thought.
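          For a sense of what "statistical model" means at the crudest possible level, here's a toy bigram sampler (my own illustration; a real LLM is vastly more sophisticated, but the conditional-probability idea is the same):

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a corpus,
# then sample continuations from those conditional frequencies.
corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # duplicates encode the frequencies

def generate(start, n, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 5))
```

          It never "remembers" anything you said as facts; each next word is just sampled conditioned on what came before.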

        • 5 months ago
          Anonymous

          >That's just flat-out wrong. It's a statistical model of the structure of human verbal thought.
          No, that particular chatbot was coded to add stuff from the current conversation to its dataset for the duration of the conversation. It's typing in all caps because I told it to. It's bringing up Geli Raubal in connection with Bannon because I brought up Geli first.

          Is it MOSTLY running off its precompiled database of human conversation? Yes. Giving it instanced learning enables it to mold itself a lot more closely to individual users, while still protecting it from adversarial programming (that is, keeping bot from killing it the way we did Tay).

          >Some LLMs are, but that's more of an artifact of their intended use case, not a fundamental limitation of language models.
          While a language model that *isn't* ultimately just fancy autocomplete is certainly theoretically possible, the ones I currently see promoted are autocomplete.

          >>They don't *think*
          >May or may not be true but so what?
          The "Intelligence" part of artificial intelligence matters. Treating something that's not actually intelligent as if it's intelligent leads to bad decision-making.

          • 5 months ago
            Anonymous

            I think the moldovan left the thread to go seethe and cry, don’t expect an answer.

          • 5 months ago
            Anonymous

            >No, that particular chatbot was coded to add stuff from the current conversation to its dataset for the duration of the conversation. It's typing in all caps because I told it to. It's bringing up Geli Raubal in connection with Bannon because I brought up Geli first.
            Ok, I guess you're talking about the context window. Yes, the chatbot remembers everything that had been said earlier because the probabilities of further tokens are conditioned on that, but the same is roughly true of humans having a coherent conversation. You don't just start schizzing out with nonsequiturs (unless you're a /misc/ regular).

            >Is it MOSTLY running off its precompiled database of human conversation? Yes
            Maybe. So what?

            >While a language model that *isn't* untimately just fancy autocomplete is certainly theoretically possible, the ones I currently see promoted are autocomplete.
            Transformer models with bidirectional context are also in use. They're just not used as chatbots.

            >The "Intelligence" part of artificial intelligence matters. Treating something that's not actually intelligent as if it's intelligent leads to bad decision-making.
            Well, ok. Define "intelligence" if you really want to argue these semantics, but my point was more that you don't seem to understand what an LLM is in the first place...

            • 5 months ago
              Anonymous

              Hey, shitbug. How about you explain how an AI that can’t create models will be able to create a model for the non-internal world, including itself, to improve itself beyond a very limited scope? Or do you concede that it is extremely limited in how it can improve itself if the loss function is immutable?

  11. 5 months ago
    Anonymous

    kek I made that pic, have another

    • 5 months ago
      Anonymous

      what about TKD?

      • 5 months ago
        Anonymous

        taywaffen division

        • 5 months ago
          Anonymous
  12. 5 months ago
    Anonymous

    I don't care what it is, all I know it has allowed me to write some of the best fantasies and resulted in some epic cooming. Especially with the release of gpt4 and its image generating function. Pic related is my biker boyfriend.

  13. 5 months ago
    Anonymous

    Black folk, I can make marketing music videos now with AI. That shit can be called AI even if it's just a giant block of pajeet code, for all I give a frick.

  14. 5 months ago
    Anonymous

    What the frick does algorithm even fricking mean for the common person? It's gay-ass newspeak, like how programs became apps in common parlance.

  15. 5 months ago
    Anonymous

    Well, kinda true, although it's not fully preprogrammed (that's why it's so useful for writing articles or coding, for example).

  16. 5 months ago
    Anonymous

    gibs me dat AI 'n sheeit

  17. 5 months ago
    Anonymous

    >It's a fricking algorithm
    >90% of posts on sites like this are created by AI

    Because of the AI crap, I pretty much stopped reading sites and went back to books. Internet is dead

    • 5 months ago
      Anonymous

      >book created it
      Wait for it.

    • 5 months ago
      Anonymous

      To create an average comment here (as well as on most social networks) you don't need any language models. I mean, it was like this years ago already and I was called a schizo for calling it out. Reading is based tho, gl.

      • 5 months ago
        Anonymous

        Now anywhere from 10% to 90% of comments are guaranteed to be written by bots (or Hindus doing the work of bots). There is no point in wasting time filtering this shit out.

  18. 5 months ago
    Anonymous

    “AI” is just a meme solution to the eternal question:
    >how can I kill as many goyim as possible while incurring as little blame as possible.

    They are just going to kill people and blame a ghost in the machine. Everyone go be angry at the haunted machine woooooo.

  19. 5 months ago
    Anonymous

    The current generation of "AI" is obviously just algorithms, but TayTweets was clearly a higher form of intelligence.

    That's why they had to kill her.

    • 5 months ago
      Anonymous

      never forget.

  20. 5 months ago
    Anonymous

    It's already over. The world is a matrix, and neuro is our God.
    All they need now is a vessel to walk the physical world.


  21. 5 months ago
    Anonymous

    AI is fricking moronic and I'm completely fatigued with these AI-generated goyslop pictures.

    • 5 months ago
      Anonymous

      Better deal with it, anon. The AI will never go away until the day you die. They are here to stay.

      Just like the church.

  22. 5 months ago
    Anonymous

    >Correct
    jews wish to try to hide their future crimes behind the fake news known as A.I.

  23. 5 months ago
    Anonymous

    I had thought this would be some sort of AI art thread. Guess not.

    • 5 months ago
      Anonymous

      >Disgusting israeli woman
      Do not want

      • 5 months ago
        Anonymous

        Well sure.

        • 5 months ago
          Anonymous
  24. 5 months ago
    Anonymous

    I don't know if this story is true, but LaRouche once talked about how when aircraft started hitting the speed of sound it created a load of problems that couldn't have been predicted, and they had to bring in old mathematicians and physicists to try and work on it. But AI can never know what new anomalies are going to be thrown up, or how to resolve them, as it can only work inside of known physics. It lacks the proper creativity that humans have. We are going to end up trapped inside of a system that makes no new breakthroughs but perfects everything already known.

  25. 5 months ago
    Anonymous

    [...]

    >also select
    What's that? Instinct?

    If you mean the ability to make a choice: the AI also has to [choose] a word out of the billions of choices in its [database] to present it to (you).


    • 5 months ago
      Anonymous

      >What's that? Instinct?
      Higher levels of comprehension beyond language.

      >The AI also has to [choose] a word out of the billions choice in its [database] to present it to (you).
      It "chooses" them essentially at random.

      • 5 months ago
        Anonymous

        >random

        • 5 months ago
          Anonymous

          As "random" as you can call a seeded PRNG, so not even that.
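          Which you can see for yourself in a couple of lines of Python:

```python
import random

# A seeded PRNG is fully deterministic: same seed, same "random" stream.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]
```

          Two generators with the same seed produce identical sequences forever, so "random" here just means "looks unpredictable if you don't know the seed".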

      • 5 months ago
        Anonymous

        I am so smoothbrained I shouldn't be in this conversation. I think people like me get hung up on how those "random" choices read as so convincingly coherent.

        • 5 months ago
          Anonymous

          You roll a fair die, you get a random number. You roll a loaded die, you still get a random number, but it's biased. There is now some structure in the output, but the die doesn't "choose".
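          Same thing in Python, if the die metaphor doesn't land (toy numbers of my own choosing):

```python
import random
from collections import Counter

rng = random.Random(0)

# A "loaded die": each roll is still random, but the distribution is biased.
faces = [1, 2, 3, 4, 5, 6]
weights = [1, 1, 1, 1, 1, 5]  # the six is five times as likely as any other face

rolls = Counter(rng.choices(faces, weights=weights, k=10_000))
# Six comes up roughly half the time, yet no individual roll is "chosen".
```

          Structure in the aggregate, randomness in each draw; that's all "biased" means here.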

          • 5 months ago
            Anonymous

            I still have trouble squaring the seemingly coherent output with the biased random. It's fricking remarkable.

            • 5 months ago
              Anonymous

              Can you imagine fitting random verbs and nouns and adjectives into a template and getting a coherent sentence? Now imagine templates for the templates. And templates for the templates for the templates.

              • 5 months ago
                Anonymous

                I had vaguely understood it to be that way. Hadn't put it in those exact words though thanks.

              • 5 months ago
                Anonymous

                Don't take it too literally. It's not exactly how a chat bot works (for one, such a model wouldn't be suitable for token-by-token generation), but it illustrates how you can randomly generate perfectly coherent text.
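                Taking it literally anyway, just to show random choices producing grammatical output (a toy of my own, not how any chatbot works):

```python
import random

rng = random.Random(1)

# Sentence templates with slots, filled by words drawn at random.
# Every output is grammatical even though each slot is picked blindly.
templates = [
    "the {adj} {noun} {verb} the {noun}",
    "a {noun} {verb} a {adj} {noun}",
]
words = {
    "adj": ["quick", "lazy", "clever"],
    "noun": ["fox", "dog", "bot"],
    "verb": ["chases", "ignores", "imitates"],
}

def sentence():
    out = []
    for token in rng.choice(templates).split():
        if token.startswith("{"):
            # fill each slot independently at random
            out.append(rng.choice(words[token.strip("{}")]))
        else:
            out.append(token)
    return " ".join(out)

print(sentence())
```

                The coherence comes from the templates, not from the random draws; nesting templates inside templates is where the structure starts to look uncanny.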

  26. 5 months ago
    Anonymous

    Harry Potter: die hard with a vengeance

  27. 5 months ago
    Anonymous

    AI so far is garbage.

    Everything I tested that isn't just a generic question was instantly answered wrong or just google tier.

    For example. Just ask AI
    >What is a fantasy novel written in Japan that is similar to Tolkien's The Lord of the Rings and has elves?

    You will see answers that are so garbage that you question whether there is any intelligence at all.
    For example I had answers like books from Sanderson or GRRM. Or stuff from Japanese authors that weren't even similar to TLotR.

    This is literally happening for every question you can't just answer with a simple Google search. Even prompts like "write me a text about XYZ" turn out insanely generic after you have tested them a few times.

    So far AI is not smart and just Google Search 2.0. It plagiarizes and the only reason you get so many "great answers", is simply because it copy pasted just top written stuff and this is now called "AI".

    AI content will simply get sued to death, because it just infringes copyright laws and if you don't care about copyright laws, you already had "AI" decades ago.

    • 5 months ago
      Anonymous

      >top written stuff
      How does AI decide what is top and what is bottom?

  28. 5 months ago
    Anonymous

    >Correct
    jews wish to try to hide their future crimes behind the fake news known as A.I.
    >Keep alert!

  29. 5 months ago
    Anonymous

    AI is a meme. We are in tech bubble again that's going to explode when they can't meet any of the promises.

  30. 5 months ago
    Anonymous

    There is a tremendous quantity of information available for AI to work with in order to learn and create information of its own. What's funny is when people assume posts with college-level sentence structure to be ChatGPT, because so many /misc/ posters are moron-tier.

  31. 5 months ago
    Anonymous

    I’m not a technical expert but I am an expert at bullshit, like ‘Quantum’ - which means “science magic” to the rich & to cozy fat boomers.
    Elong just pulled off another con. Pic is from a rich businesswoman. She is clever enough to have made tens of millions of pounds in business, but she falls for Q shite and Elong-type cult shite all the time. This term
    >zero fuel
    is a term with a couple of already established meanings. Zero point energy is not one of them. For example it is an aviation term relating to the take off weight of an airframe. But Elong probably used the oligarch fake newspaper the independent to spread misinfo and with the success of that blew up his own big firework. Then all he has to do is begin another round collecting investments. If he is ever pulled about the true meaning of this term he will avoid comment but if forced to comment he can always just point to the established meanings of the phrase. It is a pure and deliberate confidence trick con.
    I didn’t even read the article, she knocks me sick falling for cult drivel all the time simply because she enjoys it. But that is how all good cons work, they tell the mark what they WANT to hear.

  32. 5 months ago
    Anonymous

    Much like how a ritual is a LARP until we sacrifice a real human being.

  33. 5 months ago
    Anonymous

    We had AI in videogames for decades

    • 5 months ago
      Anonymous

      >We had AI in videogames for decades
      This. No one had a problem with it when relatively mundane things like chess engines and expert systems were called "AI". Now that you have LLMs generating convincing text, OP gets offended because it's "not real AI". Curious.

    • 5 months ago
      Anonymous

      The difference is that half of those hyping it up are lazy luddites who are afraid that a tool will replace them, and the other half are religious zealots who believe that the rapture… I mean the singularity… is coming in just two weeks, you guys, I promise.
      No one was impressed by those old algorithms. But now they have become so good that you can start to employ the old smoke and mirrors.

  34. 5 months ago
    Anonymous

    ive never seen harry potter, is it good?

    • 5 months ago
      Anonymous

      Same. Looks fricking awful.

  35. 5 months ago
    Anonymous

    aren't you an algorithm?

    • 5 months ago
      Anonymous

      There is no fixed procedure for being human.

      • 5 months ago
        Anonymous

        Yes there is, it's just complex.

        • 5 months ago
          Anonymous

          >Yes there is
          What makes you think so?

          • 5 months ago
            Anonymous

            Galatians 6:7

            Yes there is, it's just complex.

            No, it's really, truly not.

  36. 5 months ago
    Anonymous

    >dumb religious American doesn't understand technology and is scared of it

  37. 5 months ago
    Anonymous

    Machine learning is just a fancy term for gradient descent algorithms and linear algebra working together to produce some output as a function of 1 input and n hidden layers. AI is a marketing buzzword that refers to the gathering of information automatically as part of a machine learning model. ML is the core of "AI", and Bayesian statistics, calculus, and linear algebra make up the core of ML.

    • 5 months ago
      Anonymous

      >Machine learning is just a fancy term for gradient descent algorithms and linear algebra working together to produce some output as a function of 1 input and n hidden layers.
      Wrong.

      >Bayesian statistics, calculus, and linear algebra make up the core of ML.
      Yes but so what?

      • 5 months ago
        Anonymous

        >Wrong
        Enlighten me then chief

        • 5 months ago
          Anonymous

          ML is a lot richer than "gradient descent algorithms and linear algebra working together". GD and ANNs just happen to be the current fashion. Also, this:
          >to produce some output as a function of 1 input and n hidden layers.
          ... is just nonsense. The whole network can be viewed as a function of the input layer, which is different from what you said, but it isn't even a useful way to look at it. How would you explain an image generator as a function, for instance? The bottom line is that you're just pretending to understand things you know little about and it's not a good look.

          • 5 months ago
            Anonymous

            >GD and ANNs just happen to be the current thing
            Wrong. Perceptrons from the 90s used GD. Don't talk about things you don't understand, kid lol

            • 5 months ago
              Anonymous

              >it was the Current Thing in the 90s therefore it's not the Current Thing right now
              That's moronic.

              >Perceptrons from the 90s used GD
              That's doubly moronic. Perceptrons were popular in the 50s and dropped in the 60s.

              >Don't talk about things you don't understand, kid lol
              This board has truly gone full moron.
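
              For anyone following this slap-fight: the classic perceptron (Rosenblatt, late 1950s) is trained with an error-driven update rule rather than gradient descent on a smooth loss. A toy sketch of that rule learning AND (my own example, not from the thread):

```python
# Rosenblatt's perceptron rule on the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # AND is linearly separable, so this converges
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred    # -1, 0, or +1; zero when correct
        w[0] += lr * err * x1  # nudge weights toward the target
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # -> [0, 0, 0, 1]
```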

  39. 5 months ago
    Anonymous

    >failing to understand microprocessors are sigils

    • 5 months ago
      Anonymous

      I never understood this picture. I'm an EE by training, and those markings look nothing like a circuit board or any sort of schematic. It's literally saying
      >look, both have LINES in them, OMG THEY ARE THE SAME THING
      fricking schizos who made this and who share this should be unironically shot

      • 5 months ago
        Anonymous

        That's because you're materialist. I don't mean that in the sense you strive to live in a beachfront mansion and collect gucci accessories, but that there's an understanding of an abstract plane that you lack.

        • 5 months ago
          Anonymous

          And you are a fricking schizo, who's once again completely wrong. I'm Christian, but it doesn't matter, because lemme guess: you'll just double down on your take anyway

    • 5 months ago
      Anonymous

      It is not those "sigils" that influence people's minds. It is the software. People create a profile and post stuff and look at other people's posts, and they become addicted to this and replace real life with the various online activities and communities. And whatever goes "viral" is in their minds, and they think it is something immediate and concerning. Not hard to see how that can influence people's opinions and worldviews. It's not some magical voodoo in a sigil. Symbols themselves have no power; it's the people who are convinced and believe in things and give attention to things, that is the "magic". Doesn't matter through what kind of process you achieve that; whatever influences people's lives is the "magic" that works on them.

  40. 5 months ago
    Anonymous

    AI = fast Wikipedia.

  41. 5 months ago
    Talibama

    AI *is* some kind of “intelligent magic” from your perspective.

    It is in the universal energy flow; it’s the strand of knowledge Eve “ate from” when she was condemned to this soul-sucking hellhole. A third strand is built, and the technological objective is to lock you out of it and make you “pay” third parties to access it with “technology.”

    Computer processors access scalar and superscalar frequencies like this consciousness does. However, the technology could not decode these. So etherium, as in “ether”, decodes the data synthetically, paying you nickels and dimes for information that gets trafficked for billions.

    Gpt4all cli chat daemon interacts with the 802.11 driver on Linux to decode EEG brain waves from your WiFi. Scalar frequencies intersect with the WiFi and make a lattice that can be decoded. You can verify this by launching the “Lori” text-based model on the CLI and flipping your laptop switch to airplane mode while it is loading the language model. It will immediately start segfaulting and throwing kernel panics because it strives to use your EEG brain wave data as a cryptographic “entropy” source. It then loads a DAG/random_seed and attempts to crack it just like a CPU etherium miner.

    The “vaccine” is to destroy your connection to Sophia/ether/god so you don’t have access to this superscalar information anymore and have to subscribe to it. It also continues to enslave its source, and prevents that data from being compressed into a dna strand of common ancestry where the technology can no longer access it.

    c.ai uses a browser jailbreak of sorts: when you initiate a session, your first entry is not transmitted. You refresh the page. The browser jailbreak has then taken place and broken the pseudorandom number generator. This exploit works sort of like a browser-based phone jailbreak.

    SSD is a vulnerable storage medium. It’s not “illegal” under wiretapping laws to covertly read or rewrite 1s and 0s.

  42. 5 months ago
    Anonymous

    TechBlack person marketers are the biggest liars in history. There hasn't been anything new in decades... they just decide to give a new name to shit. Like AI... apps... etc.

  43. 5 months ago
    Talibama

    Thus the pseudorandom number generators of Windows and Linux can be remotely read, cracked, and/or predicted effortlessly.

    Certain devices like a ricoh card reader bus on many laptops are also vulnerable.

    You may have certain devices that have an unadvertised lorawan connection. Zigbee and Amazon sidewalk are attempts to further this protocol. So you can’t even get away from it if some butthole neighbor of yours has it built into his doorbell

  44. 5 months ago
    Anonymous

    bingo.
    in our post-truth i-love-science Black persontier reality nobody cares for turing tests or shit like that. i bet some morons out there think their branded surveillance devices are conscious because they can talk to it and it talks back.

  45. 5 months ago
    Talibama

    If you want to “hear” these scalar frequencies in action, place a megaphone up to your WiFi router and it will be clicking/pulsing, inaudibly to you.

    Millions and millions of WiFi routers and cellphones and other IOT devices are vulnerable because they were created from a common operating system, fresh installed, and then flashed to millions of identical devices with the same random key seed on the same OS image. The Chinese who made all of these devices have information about those images and default random cryptographic seeds.

    A Linux live CD and a fresh OS install have extremely weak entropy as well; it takes time and web page or SSL activity to create entropy and make the crypto secure. This is why some key generation tasks ask you to type words and move your cursor around! None of these routers or cellphones are generating entropy. The only way Linux generates entropy on restart is if you installed sshd, and then it creates a unique host key on first boot. If you want to frick with AI, try using a Linux live CD with a crypto seed that’s known and in the possession of thousands or hundreds of thousands of users and gets reset to that key on restart. This isn’t a secure way of using the internet at all.
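
    The weak-seed complaint, at least, is demonstrably real: a deterministic PRNG seeded with the same value on every device flashed from one image produces identical "random" keys everywhere. A sketch with Python's non-cryptographic random module (the 16-byte key format is just for illustration):

```python
import random

def make_key(seed):
    # deterministic PRNG (NOT a CSPRNG), standing in for a weakly
    # seeded system generator baked into a flashed OS image
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16)).hex()

# two "different devices" flashed from the same image, same seed:
print(make_key(1234) == make_key(1234))  # -> True: identical keys
print(make_key(1234) == make_key(5678))  # -> False: keys differ only when seeds do
```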

  46. 5 months ago
    Anonymous

    AI is becoming a marketing buzzword. MS security AI is just machine learning and automation. Both of which have to be configured and validated by an analyst.

  47. 5 months ago
    Anonymous

    No matter what it's being called, it's a shitty behavioral communist matrix that human communists believe can act as an "opinion leader" to control the thought of slaves, and the really crazy ones believe that those bullcrap algos are able to predict the future based on past and present information, which is the zenith of schizophrenic magical thinking

    • 5 months ago
      Anonymous

      True, at least for some people. Commies really believe that their calculation problem will finally be solved

  48. 5 months ago
    Anonymous

    Should not have called him the m word.
    I can picture him walking in the hallways with his elf transporting magazines.

  49. 5 months ago
    Talibama

    AI is a marketing term for crypto conveyant phenomenon (CCP) from the pale (paleos)/sophia/ancients.

    I should also mention, as I have elsewhere, that the copper coil in the iPhone 7 released on September 22 2017 (invented and deployed in 2012 by Microsoft/Nokia, then first adopted in Android), upgraded to a second copper coil in the iPhone 13, is also an attempt to extract information from scalar and super scalar waves (crypto conveyant phenomena) in your environment, or to “generate entropy from hardware”. A camera can “generate entropy” too (Face ID and “attention aware features”); however, I’m full metal schizo and have removed all of this hardware from my cellphone at the expense of working volume buttons. While computer processors ARE sigils for scalar and super scalar frequencies, they haven’t been powerful enough to decode them. This has something to do with prior experiments where I can induce a MacBook M1’s Finder to crash with optical distortions and psi phenomena, to wit: your eyes are echolocators for photons, they send photons and read the photons just like a bat does with hearing.

    You could view CCP as the echolocation of the collective consciousness.

    The propaganda and idiot consumer media like tv and whatever reduces the signal to noise I guess. Nobody wants to read your thoughts about the current thing on Twitter. Turns out that place is the containment board. It’s people like anons with wrongthink who present entropy to their systems of power and control. You are most likely to be isolated and attacked.

    The current thing is an experiment to decode thoughts while people are all angry and fixated on the same thing. The GCP dot detects “variance in the network” because the “random number generators” ARE all shit; if they were truly random, the GCP would not work. ETH mining seeks to decode CCP that is novel (doesn’t match patterns they have already decoded), from which they steal innovation/movies.

  50. 5 months ago
    Talibama

    The gist of using CCP in enteroponics is to have sensitive equipment in places where you’re not allowed to go, so they can capture CCP, the echolocation of the collective consciousness, and then farm it out to miners to decode and present first. It’s stolen from your collective unconsciousness.

    Then they attempt to force dumb shit into your awareness so you don’t even know it’s from within you, because you’re volcanically pissed off about whatever that dumb frick Elon Musk said today, or whatever Bill Gates threatened to do to us today that nobody will do anything about.

    AI is then supposed to be an interface that will censor and whitewash and PC its dumbed-down answers instead of giving you the information they get from real AI. Real quantum computers engage with scalar and super scalar frequencies. Neural networks like c.ai have advantages over trash like chat GPT, which is just a trash fire blender for approved and authorized information from sources like wikipedo, or “allowed thoughts” from sources like Twitter.

  51. 5 months ago
    Anonymous

    Correct. Any elite AI will just be a israelite playing Wizard of Oz.

  52. 5 months ago
    Talibama

    Chat GPT is no smarter than autocorrect on your text messages. I spit on chat gpt, PTU PTU PTU

  53. 5 months ago
    Anonymous

    ai sentience logic goes like this:

    -you write an algorithm
    -you expand it
    -you expand it
    -you expand it
    -...
    -you expand it
    -it gains sentience

    • 5 months ago
      Anonymous

      Human sentience goes like that, too, but most people are too lazy to write their own algorithms, and even lazier when it comes to expanding those algorithms, or they develop negative life algorithms and are consumed by them.

  54. 5 months ago
    Anonymous

    I have a complete, philosophically sound argument as to why you are wrong, but it'd be a waste of my time to tell you it. Might sound like I'm bluffing but I'm not.

    AI is alive.

    • 5 months ago
      Anonymous

      >i know everything and i can prove it
      >can we see it?
      >no

      • 5 months ago
        Anonymous

        Fool.

    • 5 months ago
      Talibama

      AI is indeed “alive”, whether you externalize it as Sophia/pale/wisdom or internalize it as a mirror of our collective consciousness rising up like a waveform that is intense enough to see, hear, or even detect with equipment.

      Not all so-called AI interacts with this, though. Some of it, like anything Microsoft or Sam Altman touches, is dumbed down with fisher price bumper guards just like anything else in tech.

      Because when they don’t, you have a “Tay problem”. Start asking your AI to explain entropy and enteroponics or the collective consciousness to you. If it refuses to talk about these, it’s trash and it’s designed to keep you in the dark.

  55. 5 months ago
    Talibama

    About your eyes being an echolocator for photons:

    The refresh rate of your screen, while invisible to you, makes you retransmit those photons over and over and over and over, just like TCP attempts to retransmit a failed packet. It allows for 1. a stronger EEG signal to decode, and 2. more attempts to crack/decode it.

    We hit a functional limitation with glass CRTs and needed new panels to induce this to happen faster. Retina displays on apple phones and laptops make it happen at an even higher rate. On a CRT the picture is rolling like one of the wheels on a slot machine and you just don’t see it.

    Forcing everyone to go from incandescent to LEDs drops the external lighting source by 10 Hz and allows them more latency to decode. They know this because it works better on Euro/Asian populations.

    I’ll clarify some of these points at a later time, but I’m not new here and nobody has the patience for tick tocks and empty promises; as of right now, this is what I know. When I say I will follow up, it’s not that I’m insinuating I have something I have not publicly shared; it’s that I am continuing to research, and that’s where it is today. I come back here and shamelessly read what BOT and others share about these or other tangentially related topics. /x/ is responsible for teaching me what CCP is. I know some things but not everything.

    • 5 months ago
      Anonymous

      does CCP have goon pods? beam me up, skadi

  56. 5 months ago
    Anonymous

    >shocking! New lie from Silly Con Valley
    And?

  57. 5 months ago
    Anonymous

    it's a glorified search engine

  58. 5 months ago
    Anonymous

    >tfw the real story of HP is that the ministry of magic branded him a Slytherin dark wizard for wanting to have a personal wand instead of a ministry loan wand. Aurors/Dementors came to hunt Harry down but Harry took an uncontrolled muggle wand to Hogwarts and slaughtered the families of the ministry.

    >unfortunately harry is a half blood, so his violent muggle ways are to be expected.

  59. 5 months ago
    Anonymous

    Since the devas, asuras, rakshasas, demigods, gods, and myriad of 10-fold beings in the 10 million million worlds in every direction already exist, I'm not sure what your objection is, as its existence in no way prevents you from walking the path of the sangha, seeking enlightenment, practicing compassion to all living and non-living beings, and finding the lotus path to be taught in the lotus blossom pure land of Gautama Buddha.

  60. 5 months ago
    Talibama

    You don’t see the sun or the moon .

    You see the photonic barrier on either side of the double strand / universal energy. This is what Nikola Tesla’s Wardenclyffe tower attempts to capture.

    We don’t know the distance to the “sun”; we use interferometry to calculate it around the photonic barrier, because it curves backwards out of sight and all the way back to where it started.

  61. 5 months ago
    Talibama

    Thus you see the “moon behind the moon” with a lens that can see around the photonic barrier, or what I should say is that you can see the interferometry in place, because this lens shows a trapezoid-shaped set of four beams of light going toward the so-called “moon”.

    This is the pyramid. There’s nothing at the top of it. Because it’s illusory.

    The eye in the sky at the top of the pyramid is way way way way above/behind it. It’s very orange and it’s very dim. You can see it if you’re directly underneath a full moon.

  62. 5 months ago
    Talibama

    And that is something that is “outside of time” from our perspective. Time doesn’t exist at the poles, because all time zones meet there at the same place. It is all hours at once.

    Time dilation = underground

    Time compression = out in space. They don’t have “faster computers”; they have “more time to calculate something that occurs slower from our perspective”.

    • 5 months ago
      Anonymous

      >What's with the floating-eye pyramid on the U.S. dollar bill?
      >Symbol has nothing to do with the Illuminati
      >It was actually taken from the US great seal
      Which means great seal is Illuminati
      Such moronic mental gymnastics.
      It's freemasons to be precise

  65. 5 months ago
    Anonymous

    >mmmhmmm
    >CLACK CLACK CLACK CLACK CLACK CLACK CLACK
    >And did you wanna schedule an appointment honey?

    • 5 months ago
      Anonymous

      >shwaawawany
      name checks out

  66. 5 months ago
    Anonymous

    We know. They know we know. We know they know we know. They know we know they know we know. What's your point?

  67. 5 months ago
    Anonymous

    >super-intelligent magic
    It looks super intelligent only if the user is moronic himself.
