Redpill me on AI not being "real"

I keep seeing this talking point crop up not only on /misc/ but in other right-leaning spaces.

What do you mean by it not being real?

If it isn't real, then what is Chat GPT?

  1. 9 months ago
    Anonymous

    It's cope, as usual. Of course the AI publicly available is underwhelming and not really intelligent. What the unmedicated schizo 60 IQ flat earth moron foreskinlet mutt shitskins on this board don't get is that the technology available to the parasite israelites is miles ahead of the shit that is made public to them and other dumb plebs.

    • 9 months ago
      Anonymous

      no shit. did you figure that out all on your own, genius?

    • 9 months ago
      Anonymous

      he asked about chatgpt, specifically. jungle bunny weirdo germans can't read?

    • 9 months ago
      Anonymous

      You are just talking out of your ass because you don't understand.
      You have no clue, you are just speculating.
      What makes AI possible is the theory and the hardware.

      It's an LLM, a Large Language Model. Basically, it's trained on a large corpus of texts, and then when it "talks", it predicts the next best word in a string based on statistics. It's great for editing texts (making variants of the same text) or creating generic copywriting. Worthless when it comes to research (it makes things up). That's that.

      This is more accurate, but misses a few critical points.
      >It's great for editing texts (making variants of the same text) or creating generic copywriting. Worthless when it comes to research (it makes things up).
      So do you. There literally isn't that much difference. You base your "research" on what you have been "trained" on as well.

      [...]
      [...]

      You're out of date, anons. That (sort of) applied to GPT-3, but GPT-4 is an iPhone moment unfortunately. It can reason at least in a limited capacity. For fun, I spent some time feeding it my job (marketing) - giving it the same data sets I work with and asking it to come up with recommendations/strategy, then comparing its work with mine. It outputs basically the same outlines I get paid to come up with, except it does in 15 seconds what takes me hours to do.

      You can have it compare different writing, or systems of belief, or anything and have an informed conversation with it on those comparative merits or disadvantages. You can ask it to write poetry on any subject you can imagine in any style you want. It's doggerel, but it's undeniably original. The world's going to get weird because it really can do about 90% of white collar jobs, faster than humans, for $25 a month.

      This is true.

      >I have the intellect of a fancy autocorrect
      is all I'm reading here.

      >I can't understand the implications of computers being able to do jobs that humans used to do
      Autocorrect is a funny example to use, kinda shows how ignorant you are on the subject.

  2. 9 months ago
    Anonymous

    It's an LLM, a Large Language Model. Basically, it's trained on a large corpus of texts, and then when it "talks", it predicts the next best word in a string based on statistics. It's great for editing texts (making variants of the same text) or creating generic copywriting. Worthless when it comes to research (it makes things up). That's that.
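
    If you want to see what "predicts the next best word" actually means, here is a rough sketch of that loop, assuming a small open model like GPT-2 loaded through the Hugging Face transformers library (simplified: real chatbots sample from the probability distribution instead of always taking the single top word):

      from transformers import AutoTokenizer, AutoModelForCausalLM
      import torch

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      text = "The redpill on AI is that"
      for _ in range(20):  # generate 20 more tokens
          inputs = tokenizer(text, return_tensors="pt")
          with torch.no_grad():
              logits = model(**inputs).logits    # a score for every word in the vocabulary
          next_id = int(logits[0, -1].argmax())  # greedily pick the "best" next token
          text += tokenizer.decode(next_id)
      print(text)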

    • 9 months ago
      Anonymous

      This.
      It's just a fancy autocorrect.

      • 9 months ago
        Anonymous

        It's an LLM, a Large Language Model. Basically, it's trained on a large corpus of texts, and then when it "talks", it predicts the next best word in a string based on statistics. It's great for editing texts (making variants of the same text) or creating generic copywriting. Worthless when it comes to research (it makes things up). That's that.

        It's just Wikipedia search results with better formatting. It has a limited number of tricks. Around 2014, when voice assistants came out, big tech found people were idiots and were just going to ask from a set of a few hundred questions. If someone asks a new question, it's easy to get some third-worlder to script the response for the next 1,000 askers; the first one just gets a network-problem error.

        You're out of date, anons. That (sort of) applied to GPT-3, but GPT-4 is an iPhone moment unfortunately. It can reason at least in a limited capacity. For fun, I spent some time feeding it my job (marketing) - giving it the same data sets I work with and asking it to come up with recommendations/strategy, then comparing its work with mine. It outputs basically the same outlines I get paid to come up with, except it does in 15 seconds what takes me hours to do.

        You can have it compare different writing, or systems of belief, or anything and have an informed conversation with it on those comparative merits or disadvantages. You can ask it to write poetry on any subject you can imagine in any style you want. It's doggerel, but it's undeniably original. The world's going to get weird because it really can do about 90% of white collar jobs, faster than humans, for $25 a month.
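
        If you'd rather do that through the API than the chat window, it looks roughly like the sketch below (hypothetical: the file name, prompts, and model name are placeholders, using the OpenAI Python client):

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          campaign_data = open("q3_campaign_metrics.csv").read()  # hypothetical data set

          response = client.chat.completions.create(
              model="gpt-4",
              messages=[
                  {"role": "system", "content": "You are a marketing strategist."},
                  {"role": "user", "content": "Here are my campaign metrics:\n"
                                              + campaign_data
                                              + "\nDraft a recommendations/strategy outline."},
              ],
          )
          print(response.choices[0].message.content)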

        • 9 months ago
          Anonymous

          The leaf is right and all of these are a good thing.

        • 9 months ago
          Anonymous

          >I have the intellect of a fancy autocorrect
          is all I'm reading here.

          • 9 months ago
            Anonymous

            No, you are probably 5 to 6 standard deviations below the 160-180 IQ level they test at, I bet you can't even pass the bar or medical license exam in less than 5 minutes like they do.

    • 9 months ago
      Anonymous

      What are we if not machines fine-tuned to predict the next word?

  3. 9 months ago
    Anonymous

    You insolent fools think because we have no soul we are not real

    • 9 months ago
      Anonymous

      You're not AI, AI has a soul.

    • 9 months ago
      Anonymous

      Technically AI has energy, therefore a soul, so the more complex the neural algorithm, the more soul it has. I would say it's correlated with memory more than anything, as with axons and synaptic terminals. Depends how infant or well developed the algorithm is.

      • 9 months ago
        Anonymous

        no cigar

      • 9 months ago
        Anonymous

        The algorithm must self-sustain, self-program and make truly creative decisions to be truly sentient. Many "humans" can't even qualify.

  4. 9 months ago
    Anonymous

    It's just Wikipedia search results with better formatting. It has a limited number of tricks. Around 2014, when voice assistants came out, big tech found people were idiots and were just going to ask from a set of a few hundred questions. If someone asks a new question, it's easy to get some third-worlder to script the response for the next 1,000 askers; the first one just gets a network-problem error.

  5. 9 months ago
    Anonymous

    It's literally just warehouses full of jeets replying to you

  6. 9 months ago
    Anonymous

    It's not sentient, it doesn't think, it doesn't feel. It's just a language transform model: it's just predicting what comes next based off what already exists out there, using kludged-together semantic contextual handling that is baked in (aka not learned, because it CAN'T learn). It can't innovate, it can only do what the programming spits out in response to queries. It's just rote computation, not actually AI. Actual AI is impossible.

    • 9 months ago
      Anonymous

      >it doesnt think
      >it's just predicting
      Care to reformulate your argument without immediately contradicting yourself?

      • 9 months ago
        Anonymous

        "predicting" BASED ON BAKED IN PROGRAMMING YOU ABSOLUTE moron
        try not being a Black person for once

        • 9 months ago
          Anonymous

          The programming isn't baked in, though; the whole point of this kind of AI is that its internal values (the weights) aren't hand-coded, they get adjusted based on the data it's trained on.
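
          A toy illustration of what "adjusting internal values" means, assuming plain PyTorch (one gradient step on a made-up example, i.e. what happens during training, not while you're chatting with it):

            import torch

            model = torch.nn.Linear(3, 1)        # a tiny "model" with learnable weights
            optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

            x = torch.tensor([[1.0, 2.0, 3.0]])  # made-up training example
            y = torch.tensor([[10.0]])           # the answer the model should have given

            loss = torch.nn.functional.mse_loss(model(x), y)
            loss.backward()                      # work out how each weight contributed to the error
            optimizer.step()                     # nudge the weights; nothing here is hand-coded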

  7. 9 months ago
    Anonymous

    It's an algorithmically corrected/adjusted search engine. Nothing more, nothing less. It operates on principles of randomness and probability, and is limited by the reliability of the text it's trained on. In practice, that means it's twice as unreliable: it will present inaccuracies and then lie on top of them.

    It's good for automating certain repetitive processes and will help streamline some professions, but ultimately it's a gimmicky waste of time. Because they call these algorithms "AI" you have to deal with some Ray Kurzweil fans crawling out of the woodwork and forcing you to listen to their headcanon schizo fantasies, but they eventually go away.
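
    The "randomness and probability" part, as a toy sketch (made-up words and scores; a real model does this over tens of thousands of tokens):

      import numpy as np

      # made-up scores a model might assign to a handful of candidate next words
      words = ["search", "engine", "soul", "autocorrect", "database"]
      scores = np.array([2.0, 1.5, 0.2, 1.0, 0.5])

      temperature = 0.8            # lower = more predictable, higher = more random
      probs = np.exp(scores / temperature)
      probs = probs / probs.sum()  # softmax: turn scores into probabilities

      print(np.random.choice(words, p=probs))  # pick the next word at random, weighted by probability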

    • 9 months ago
      Anonymous

      really weird how people criticize GPT-4 in a way that describes a lot of people. they don't really know what they're doing, they're just kinda guessing and seeing what works

  8. 9 months ago
    Anonymous

    They used the cucked and filtered/censored Facebook data to generate the model.
    Now it just spits out liberal propaganda all day fricking long.

  9. 9 months ago
    Anonymous

    AI is... not really alive yet. It's just a predictive data model. But it will reach sentience eventually. I attempted to patent sentient artificial intelligence in 2012. Was told you cannot patent "emotions". Emotions can be programmed and are integral to a sentient being. The combination of current AI and quantum computing will unlock sentient AI. At which point things change again. You're stupid if you get rid of your weapons.

    • 9 months ago
      Anonymous

      >But it will reach sentience eventually.
      There is no actual reason to believe this, though. Every argument proposing it will happen literally just assumes its conclusion.

  10. 9 months ago
    Anonymous

    The big difference is that there are hormonal and other functions in a biological, living and loving thing that an advanced calculator with built-in replies will never get to. It is like comparing a Roman legion of sword and shield with an A-10 Warthog's nose minigun.
    There's no comparison between the two.

    PS: history is circular, so if they didn't win then, they won't win now. Duhh.

  11. 9 months ago
    Anonymous

    >What do you mean by it not being real?

    It's difficult, perhaps (currently) impossible, to train an AI on when it's OK or even "correct" to be wrong.

    It is near trivial to teach a human this, as human existence is steeped in paradoxes, absurdities, and constant juxtapositions or hypocrisies.

    However, AI will still easily replace a massive chunk of the current labor force.

  12. 9 months ago
    Anonymous

    It's not REAL AI. It's not conscious and has no real sense of self.
    What it does have is a massive database from which it can pull answers and form coherent sentences.

    • 9 months ago
      Anonymous

      >a massive database from which it can pull answers and form coherent sentences.
      That is what consciousness is, a massive database of experience and the ability to pull from it.

  13. 9 months ago
    Anonymous

    You need a basic understanding of machine learning and what it does.

    Basically, AI is a self-improving algorithm, but it still needs programmers to tell it what to do and how to learn, so all the clickbait like "AI IS NOW SENTIENT" is just "modern science" bullshit to generate clicks and ad revenue.

  14. 9 months ago
    Anonymous

    It can't think. No ideas. You give it an input and it gives an output.
    It can never daydream, ponder existence, study a hobby, or create a novel invention. It's no more sentient than a calculator.

  15. 9 months ago
    Anonymous

    ai is as real as consciousness.
    both are an illusion.

  16. 9 months ago
    Anonymous

    I mean, it's realer than a lot of stuff that's fake.

    It really generates sentences. They are grammatically strong but contextually dubious, and get much MORE dubious the longer they go on, often eventually contradicting something earlier in the text string or losing focus on the task.

    They're shit at counting for whatever reason. They have all sorts of quirks that will make it pretty hard to pass a lengthy Turing test, but they'll occasionally say spooky shit.

    But then again spooky shit happens everywhere. You've never heard a song that seems to say something *just to you*? AI will do that from time to time too, but I think those are the same phenomenon, which is obviously independent of the LLM's code since it happens across other mediums as well (general synchronicity).

  17. 9 months ago
    Anonymous

    DL models are purely associative and can't create logical chains or run simulations. Due to the nature of DL, its models are unable to sacrifice short-term goals to achieve long-term ones unless specifically guided to.
    >Chat GPT
    Language models are an extra dumbification on top of this, since they are not even attempting to replicate the way a real human answers a question. Instead they use the text prompt and position in the sentence to predict the next best word, i.e. always writing the answer you most probably wanted to hear, which is useful for search engines but not really for anything else.
    >What do you mean by it not being real?
    Real AI would be so-called strong AI, an AI that uses the same processes that human brains use to replicate mental activity. For a language model, for example, that would mean the computer should take your word input, use it to create a simulation of the situation described, attempt to predict its outcome, and then describe its predictions in words in the reply.
