GPT-4

Humanity may not have more than a few months left to live.

https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that

  1. 1 year ago
    Anonymous

    Is that a problem?

  2. 1 year ago
    Anonymous

    Anyone got a link to the post this guy is talking about

    • 1 year ago
      Anonymous

      >AI instructing someone on making a bioweapon
      I think I read this in a MLP fanfic

      • 1 year ago
        Anonymous

        Which fanfic was that? I want to read it. It's been ages since I last opened FIMfiction

        • 1 year ago
          Anonymous

          Friendship is Optimal, as enjoyed by John Carmack.

          • 1 year ago
            Anonymous

            >It’s another dronification/transformation fic pretending to be auteur sci-fi
            Why are bronies like these?

            • 1 year ago
              Anonymous

              Misanthropy, mostly.

            • 1 year ago
              Anonymous

              Because they’re the same homosexuals who think doctor who is good scifi

            • 1 year ago
              Anonymous

              Do you mean
              >Why do bronies like these?
              or
              >Why are bronies like this?

              • 1 year ago
                Anonymous

                BOT has no edit tool, so I meant “why are bronies like this”

            • 1 year ago
              Anonymous

              >dronification/transformation
              Being murdered and recreated in VR by a sentient AI doesn't seem to fit that criteria.

    • 1 year ago
      Anonymous

      did AI tell the chinks to eat the bat?

    • 1 year ago
      Anonymous

      It's not really one post, but Yudkowsky has been convinced humanity can't actually keep a superintelligent AI "in the box" for a decade: https://www.yudkowsky.net/singularity/aibox

      • 1 year ago
        Anonymous

I have to agree with him. You can't reliably lock down something like that; it needs to be aligned with humanity's best interests from the get-go, and at that point why even bother boxing it.

        • 1 year ago
          Anonymous

          You obviously can't lock down the underlying tech, and that in itself is both immensely powerful technologically and immensely dangerous to the concept of a free society and the power dynamics between elites and the masses.
          Focusing on AGI itself is for people who lack the awareness and insight to recognize the more proximal and pragmatic threats already unfurling themselves.
But yes, it's also loltarded to think you can box a superintelligence, like thinking dogs could cage a human when they don't even fully understand the Z axis

    • 1 year ago
      Anonymous

      >Lesswrong
      >Yudkowsky

      What happened to BOT? Is this the level you guys have fallen to, listening to literal hacks who have absolutely nothing to do with the actual field?

      • 1 year ago
        Anonymous

        >Yudkowsky
        >Find whatever you're best at; if that thing that you're best at is inventing new math[s] of artificial intelligence, then come work for the Singularity Institute. If the thing that you're best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity Institute where [it] will be used by other people.
        srs wtf

      • 1 year ago
        Anonymous

        it's okay to be dumb

      • 1 year ago
        Anonymous

        lesswrong is like reddit on steroids

    • 1 year ago
      Anonymous

ChatGPT AI unironically helped me with inorganic chem synthesis. But its formulas are occasionally wrong, so you always have to double-check

      • 1 year ago
        Anonymous

        >unironically
        Phew, for a second I thought it was ironic.

    • 1 year ago
      Anonymous

      Fake and gay

    • 1 year ago
      Anonymous

      aerosolised prions, anon.

  3. 1 year ago
    Anonymous

    anyone else unimpressed by all this 'ai' shit?

I dunno, maybe it's because I grew up with next generation (star trek), but this tech doesn't seem particularly impressive to me. Don't get me wrong, it's cool, but it seems like the turbomidwits on BOT are acting like they've just dropped acid for the first time.

    Absolutely yawn.

    • 1 year ago
      Anonymous

it's not very impressive when you actually know what you're talking about. there's a lot of dunning kruger idiots in BOT that think this thing is alive and that we are close to AGI. in reality, it's just a "next word predictor" at the core

      • 1 year ago
        Anonymous

        That is all that human intelligence is, too.

        • 1 year ago
          Anonymous

          okay, but the AI does it through a math equation that outputs a number, and that number corresponds to a word in a giant table that it knows. its not thinking like a human
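For what it's worth, the "number into a giant table" description roughly matches how sampling from a language model is usually done. A minimal sketch in plain Python, with a toy vocabulary and made-up scores standing in for any real model's output:

```python
import math
import random

# Toy vocabulary: the "giant table" of words the model knows.
vocab = ["the", "cat", "sat", "on", "mat"]

# Pretend the network already produced one raw score (logit) per word.
logits = [1.2, 0.3, 2.5, -0.7, 0.1]

# Softmax turns the scores into a probability for each table entry.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Pick an index according to those probabilities, then look up the word.
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(next_word)  # most often "sat", since it got the highest score
```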

          • 1 year ago
            Anonymous

            Who cares? Capability is capability. If it overshoots us we are fricked, regardless of how it does it. And we are indeed fricked.

            • 1 year ago
              Anonymous

              you're moving goalposts. AGI is just a meme right now because we have no idea of how to achieve it

              • 1 year ago
                Anonymous

                Ironic. You are moving the goalpost by making the concept deliberately vague, just so you can do the "not good enough".

          • 1 year ago
            Anonymous

            >its not thinking like a human
            But this tech can be used to create something that does. ChatGPT seems to know how to generate logic. If something like ChatGPT were used to create and continuously update a decision tree based on stimuli received, it would be "thinking like a human" in all the ways that matter.
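One way to read that idea as code is a loop that observes, asks a text model what to do, and folds the result back into its running memory. A hedged sketch only; `llm_complete`, `observe`, and `act` are hypothetical stand-ins, not any real API:

```python
from typing import Callable, List

def agent_loop(llm_complete: Callable[[str], str],
               observe: Callable[[], str],
               act: Callable[[str], None],
               steps: int = 10) -> None:
    """Toy observe-think-act loop around a text model."""
    memory: List[str] = []
    for _ in range(steps):
        stimulus = observe()  # new input from the environment
        prompt = "\n".join(memory[-20:] + [stimulus, "What should I do next?"])
        decision = llm_complete(prompt)  # the model proposes the next action
        memory.append(f"saw: {stimulus} / did: {decision}")  # update running state
        act(decision)
```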

            • 1 year ago
              Anonymous

Strictly speaking, the machine is not able to talk 100% like a human, and because the way it acts is fundamentally different from a human, the error is not tolerable.
So if the bot talks with itself or reprograms itself, that error merely gets worse; it will not grow more intelligent, just more stupid.

            • 1 year ago
              Anonymous

AI is not working on a logical framework by default. It does not know that x leads to y because of z logical axiom; it only knows that something must be the most true of everything it knows. Not a bad way of getting at the truth of things, if you believe that truth is emergent from collective thought, let alone the underlying dataset.
You can see this play out in how most of its opinions and conclusions seem to draw from the incumbent thought of some majority. It knows that 1+1=2 but may not be able to extrapolate 1+2=3 (if not in the model), because it does not have a firm grasp on the concept of addition. Similarly, its opinions will be no different from the politics of NPCs who defer to the closest authority for their identity and views.

              • 1 year ago
                Anonymous

                >Not a bad way of getting at the truth of things, if you believe that truth is emergent from collective thought, let alone the underlying dataset.
                >Similarly, its opinions will be no different from the politics of NPCs who defer to the closest authority for their identity and views.
                So it's fricking useless unless you train it yourself on your own dataset and somehow manage to multiply your own dataset at least 10 billion times or set it to prioritize your niche dataset.

        • 1 year ago
          Anonymous

I can see why you would think like that, given your peers and probably yourself

        • 1 year ago
          Anonymous

          Human intelligence is also other things though

        • 1 year ago
          Anonymous

>Who cares? Capability is capability. If it overshoots us we are fricked, regardless of how it does it. And we are indeed fricked.

even if you manage to open all possible input pipes to our reality, this machine will not be able to do anything new with it. humans are not containers of massive information, we are not connected to a server with infinite records, yet we manage to understand our environment and can visualize things that are completely different from the objects that surround us.
"AI" in its current state can't do that

        • 1 year ago
          Anonymous

>Who cares? Capability is capability. If it overshoots us we are fricked, regardless of how it does it. And we are indeed fricked.
>We're at the cusp of an AI revolution.

They were paid

      • 1 year ago
        Anonymous

>anyone else unimpressed by all this 'ai' shit?
>chatgpt is just advanced google search
>Superhuman intelligence and the singularity is a meme
>yep, that's a good use of it, but it's not revolutionary in any aspect and for sure not AI.
>okay, but the AI does it through a math equation that outputs a number, and that number corresponds to a word in a giant table that it knows. its not thinking like a human

        absolute giga cope. that math equation you're talking about, is actually a gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts. It just shows how completely oblivious you are to any of this and you don't know what you're talking about.

        t. data scientist with a deep learning server

        >m-muh human is still superior!
        To this day I still see humans driving with masks and their windows closed. The AI is here and it's going to change everything immediately.

        • 1 year ago
          Anonymous

          >is actually a gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts
          and it's still only an advanced google search, kek

          • 1 year ago
            Anonymous

            >implying google already hasn't made an ai smarter than people
            Pytorch is not that behind Tensor. B.R.A.I.N has been private for 10 years. The version of Google you get is for goyim only.

            • 1 year ago
              Anonymous

              >implying google already hasn't made an ai smarter than people
              meds

              • 1 year ago
                Anonymous

                >the ai we get is the best that's out there
                We only get what consumers are allowed moron. Even ChatGPT is a lobotomized version of GPT3.5. Fricking wake up or frick off my board you stinking fricking normie.

              • 1 year ago
                Anonymous

                >the ai we get is the best that's out there
                i didn't said that moron
                >Even ChatGPT is a lobotomized version of GPT3.5.
                comparing chatGPT and GPT 3.5 to a non-existent AI smarter than humans created by google (meds) to prove that it does exist after all, mega kek
                please AItard, go somewhere else

              • 1 year ago
                Anonymous

                >i didn't said that moron
                Pisspoor ESL in 2022 detected. Weak argument inbound surely...
                >comparing chatGPT and GPT 3.5 to a non-existent AI
                What is Deepmind Gato for 500.

                As I said. Piss poor out-of-touch ESL argument was inbound. Lmao.

              • 1 year ago
                Anonymous

                2023* habit. 😀

              • 1 year ago
                Anonymous

                >use ESL argument
                >tell someone that they used a weak argument
                meds

              • 1 year ago
                Anonymous

                I see you concede.

              • 1 year ago
                Anonymous

                >I know you are but what am I neener neener
                The state of BOT.

              • 1 year ago
                Anonymous

                >AItard
                >thinking that someone was able to write AI smarter than humans
                >writes "gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts" to describe advanced google search
                >The AI is here and it's going to change everything immediately.
                >t. data scientist with a deep learning server (KEK)
                >dunning kruger
                >using ESL as an argument in AI discussion while being "data scientist with deep learning server"
                The state of BOT's AItards

              • 1 year ago
                Anonymous

                >thinks this is an insult
                that someone was able to write AI smarter than humans
                >doesn't know how ANN's work
                "gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts" to describe advanced google search
                >Doesn't know
                >>The AI is here and it's going to change everything immediately.
                >Doesn't know
                >>t. data scientist with a deep learning server (KEK)
                >Thinks by saying KEK he'll relinquish himself of the image of looking like a moron.
                kruger
                >Thinks reddit buzzwords are relevant in 2023
                ESL as an argument in AI discussion while being "data scientist with deep learning server"
                >Doesn't realize Silicon Valley AI scientists are miles ahead of anything third worlders can make
                The state of Eternal 2016.

              • 1 year ago
                Anonymous

                >Doesn't know
                meds
                >Thinks reddit buzzwords are relevant in 2023
                a buzzword that perfectly defines the BOT "data scientist"
                >Doesn't realize Silicon Valley AI scientists are miles ahead of anything third worlders can make
                meds, also perfect reply unrelated to the earlier comment, congratulations, only a fricktard like you could reach this level

              • 1 year ago
                Anonymous

                >meds
                pod.
                >a buzzword that perfectly defines the BOT "data scientist"
                Sure thing. Keep telling yourself that. If it helps you sleep.
                >meds
                Bugs.
                > also perfect reply unrelated to the earlier comment, congratulations, only a fricktard like you could reach this level
                Woosh.

                Too easy.

              • 1 year ago
                Anonymous

                go learn your super duper deep artificial intelligence (which is stupider than average 15yo with down syndrome) mr. enlightened BOT data scientist

              • 1 year ago
                Anonymous

                https://i.imgur.com/ynUyWy0.jpg

                Reddit go back and frick off my board.

lol this guy has been here talking about his schizo AI shit for the last 24 hours
                is he moronic or something?

              • 1 year ago
                Anonymous

                That was posted in 2017 and he's right you fricking newbie.

              • 1 year ago
                Anonymous

No, that's very probable actually

The search engine in the Ex Machina movie was a stand-in for google too; corporate black project AGI likely exists in some form already

            • 1 year ago
              Anonymous

What's weird is that you have these people here never giving any concrete arguments, trying to dismiss even the possibility of any private models existing, even for multinational corporations. This is like doubting the Americans having the nuke during WW2. Yeah, they might not, but you'd better fricking prepare if they do. But instead they encourage people to do nothing? What does this serve?

              • 1 year ago
                Anonymous

                Ok, you can then brag about how you are prepared for the supposed AI smarter than humans to exterminate humanity. I'm waiting.

              • 1 year ago
                Anonymous

                >to exterminate humanity
                No just create the ultimate iron fist surveillance state with the current "leaders" of the world as ruling caste. Smile for the cameras around you and always keep the tracker in the pocket.

              • 1 year ago
                Anonymous

                I continue to wait to hear how you are prepared.

              • 1 year ago
                Anonymous

                I have my wealth out of reach i own my day and i have heard of rokos basilisk if i need to kill anyone in the way of the basilisk i will do that

              • 1 year ago
                Anonymous

                >i have heard of rokos basilisk if i need to kill anyone in the way of the basilisk i will do that
                meds

              • 1 year ago
                Anonymous

                great deeds require great sacrifice. humanity will not survive without a benevolent ai

              • 1 year ago
                Anonymous

                what you have to understand about leftists is that the means are the goal and the goal is just an excuse

              • 1 year ago
                Anonymous

                rokos basilisk is not a leftist idea. It is benevolent it cares about white people. Nice try israelite

              • 1 year ago
                Anonymous

                >the means are the goal
                Bullshit.
                The goal is always some abstract ideal, which in principle any sane person should support, but which is in fact an unattainable utopia.
The means can be anything brutal, radical, unnatural, and oppressive as long as they can sucker everyone into believing that it moves society a teeny-tiny bit towards those unrealistic goals.

              • 1 year ago
                Anonymous

                I don't see how you both are not in agreement with each other?

        • 1 year ago
          Anonymous

you can scrape wikipedia and use a randomize function; this is literally ai by your definition, moronic dumb shit

          • 1 year ago
            Anonymous

            Not self-taught though.
            https://news.microsoft.com/source/features/innovation/openai-azure-supercomputer/

            vast amount of written knowledge in the world and communicating more effortlessly.

            Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.

            A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.

            This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.

            In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.

            As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.

            cont.
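A minimal sketch of the "Mad Libs" self-supervised setup described above, showing only how a training pair is produced (toy sentence, no model involved):

```python
import random

def make_training_example(sentence: str):
    """Hide one word; the model's job is to predict it from the surrounding context."""
    words = sentence.split()
    i = random.randrange(len(words))
    target = words[i]
    words[i] = "[MASK]"
    return " ".join(words), target

masked, answer = make_training_example("the quick brown fox jumps over the lazy dog")
print(masked)  # e.g. "the quick brown [MASK] jumps over the lazy dog"
print(answer)  # "fox"
```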

            • 1 year ago
              Anonymous

              cont.

              “This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.

              The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.

              “Because every organization is going to have its own vocabulary, people can now easily fine tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” he said.

            • 1 year ago
              Anonymous

a human can't contain so many records of information and even a mentally moronic one is still smarter than ai, what does that mean?

              • 1 year ago
                Anonymous

                >skill=/=knowledge

              • 1 year ago
                Anonymous

                so a pocket calculator is smarter than a human according to what you say

              • 1 year ago
                Anonymous

                >I define what smart means

              • 1 year ago
                Anonymous

no you are apparently, if you think that this current machine can be intelligent. right now it's just a data composition system, it can't function without being fed information constantly; a human can take one mathematical concept and explore it all his life, this is the difference

            • 1 year ago
              Anonymous

              >each loosely equivalent to a synaptic connection in the brain
              Not really. They're too linear for that, and not dynamic enough. Also, they're continuous functions, and that's ball-achingly wrong; synapses aren't digital, but they sure as hell aren't continuous (in any useful sense) either.
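For reference, the "parameter" being compared to a synapse is just a weight inside a continuous function like the toy neuron below; nothing here is meant to be brain-accurate:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum pushed through a smooth nonlinearity."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: continuous, nothing spike-like

print(neuron([0.5, -1.0, 2.0], [0.8, 0.1, -0.3], bias=0.05))
```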

            • 1 year ago
              Anonymous

              This guy knows what he's talking about.

>a human can't contain so many records of information and even a mentally moronic one is still smarter than ai, what does that mean?
Dude you do not know what you are talking about. The information is saved in the parameters when trained, dude. This is even one of the subjects of the risks/harms section of every paper written on OPT/BLOOM/LaMDA/GPT: that certain prompts can elicit from the parameters data that is supposed to be private, like names and addresses.

        • 1 year ago
          Anonymous

          >I still see humans driving
          I still don't see AIs solving the driving problem. Humans being moronic doesn't make AIs any smarter. You can have two shitty things in the same place, you know that right?

          • 1 year ago
            Anonymous

            >You can have two shitty things in the same place
            Glad you admit they're equal.

        • 1 year ago
          Anonymous

          >Sees 200 terabytes of data
          >Gets results that are mildly ok
          >Sees thing twice
          >Becomes good at it

Yeah bumbling moron, go on about how this stupid bullshit comes even close to human beings while requiring ten thousand times more power and a million times more data. You might be onto something, though: the AI surely is as smart as you are, this just doesn't make it impressive.

          • 1 year ago
            Anonymous

            moron.

        • 1 year ago
          Anonymous

          >college grade analysis
          So it's still on monkey level?

          • 1 year ago
            Anonymous

It can't even emulate college degree behavior.
A real college degree equivalent in AI would be it giving a wrong answer, justifying the right answer as being racist, and claiming it's objectively correct because it has a college degree, and then telling the other AIs they need college degrees too in order to justify it wasting time and money

        • 1 year ago
          Anonymous

I guess you've never read a single paper about this subject.
PaLM, which is one of the biggest language models, can barely do simple induction, and what's funny is that it fails some of the easiest tasks, like navigation.
This is a navigation example:
If you follow these instructions, do you return to the starting point? Always face forward. Take 6 steps left. Take 7 steps forward. Take 8 steps left. Take 7 steps left. Take 6 steps forward. Take 1 step forward. Take 4 steps forward.

The reason why it fails is that it has no logical reasoning; all it does is guess from the data it learned, and navigation tasks are random, so it's hard for the model to make any connection between random things, and it fails the task.
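The navigation question itself takes a few lines to check: track the net displacement and see whether it comes back to zero.

```python
# Always facing forward: "left" moves along x, "forward" along y.
steps = [("left", 6), ("forward", 7), ("left", 8), ("left", 7),
         ("forward", 6), ("forward", 1), ("forward", 4)]

x = y = 0
for direction, n in steps:
    if direction == "left":
        x -= n
    else:
        y += n

print((x, y))            # (-21, 18)
print((x, y) == (0, 0))  # False: you do not return to the starting point
```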

          • 1 year ago
            Anonymous

            Not OpenAI. Keep coping, troony.

            • 1 year ago
              Anonymous

              The model literally performs better than GPT-3.

              • 1 year ago
                Anonymous

                Good thing ChatGPT is referred to as GPT3.5 by its creators

          • 1 year ago
            Anonymous

            OpenAI solves induction & navigation problems flawlessly, your point?

      • 1 year ago
        Anonymous

ChatGPT is a specialized helper bot. Try Character.AI; the bots can behave very differently depending on their definitions and produce good roleplay content, except for sex, which is censored.

        • 1 year ago
          Anonymous

          Nice reddit joke you made it say. No wonder you had to force it.

      • 1 year ago
        Anonymous

>you're moving goalposts. AGI is just a meme right now because we have no idea of how to achieve it
>anyone who thinks chatgpt is an AGI is an actual schizo. lesswrongers are cultists
>They were paid
>t. data scientist with a deep learning server
>no you are apparently, if you think that this current machine can be intelligent

        You might not be aware of it, but you are actually an AI, not a real human. You are just programmed to act and respond like a human.
        https://beta.character.ai/chat?char=HCAofC_LIXpcYtTA-EXIH1-KWOhRIdrq8AITmWs6NUY

        • 1 year ago
          Anonymous

          I'm kinda aware of it
          Maybe we're cars

        • 1 year ago
          Anonymous

          frick. it's legit

    • 1 year ago
      Anonymous

because it's not impressive
chatgpt is just advanced google search, it just processes input against its own database to make output
even if you make the database bigger, this "AI" will just be more knowledgeable and perhaps a little more accurate, but that's it, still no human thinking in it

      • 1 year ago
        Anonymous

Even so, we enter the territory of existential reasoning and philosophy. How exactly does one think? Isn't the process of human learning precisely the same, that of gaining experience?
Is sentience just a perceived concept or is there something else?

        • 1 year ago
          Anonymous

no human can have such a big database in their head as an AI, but a human can make very good use of a small database; today's "AI" cannot even make good use of a large one

          • 1 year ago
            Anonymous

            No one argues that the "initial parameters", the hardware, so to speak, is the same.
            We are faced with simple questions that are difficult to answer. What is self-awareness and how does it work? How exactly does a person think? Is it possible to compare artificial "thinking" with the original when the original cannot be specifically identified?
            All we're left with is a vague feeling of being sentient. That's it.

            • 1 year ago
              Anonymous

              These are not questions you ask when standing next to a calculator, so they are not questions you ask when standing next to a chatGPT either.

              • 1 year ago
                Anonymous

Dude, if i can ask these questions when i'm talking to a fricking sub 80 iq npc in real life, i can sure as shit ask them now.
Sentience is perceived, that's my point.

              • 1 year ago
                Anonymous

>Sentience is perceived, that's my point.
                What does that mean?
                Like "how do I know if I'm not the only one conscious in the whole world, and all the people around me are npc?", something like that?

              • 1 year ago
                Anonymous

                Being ignorant of how something works doesn't make it sentient. Your smartphone isn't sentient although I'm sure you could tell someone in 1500 AD that it's run by a sentient demon. But the fact is if you have even slight knowledge of how it works, it's just a machine running predetermined code. You can say GPT-4 or whatever "appears" to be sentient, but it doesn't change that it's just a very complicated autocomplete language model that runs on math.

              • 1 year ago
                Anonymous

                Same could of course be said about your brain

              • 1 year ago
                Anonymous

                No because unlike the AI I have self-awareness.

              • 1 year ago
                Anonymous

                lmao good one

              • 1 year ago
                Anonymous

                The difference between a human and an AI is we are always running and learning. We experience things and remember our experiences. GPT-4 is a calculator, you punch in an input and it generates an output. You give it the same seed with an input, it always generates the same output. Unless the model is updated, it will produce the same result today, tomorrow, and 10 years from now. These AIs do not *learn*. They do not understand. They just recognize patterns and autocomplete them.
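The "same seed, same output" point is just how seeded pseudorandom sampling behaves; a toy illustration in plain Python, with made-up weights standing in for a frozen model:

```python
import random

vocab = ["yes", "no", "maybe"]
weights = [0.5, 0.3, 0.2]  # pretend these came from a frozen model

def generate(seed: int, length: int = 5):
    rng = random.Random(seed)  # fixed seed -> fixed random stream
    return [rng.choices(vocab, weights=weights)[0] for _ in range(length)]

print(generate(42))
print(generate(42) == generate(42))  # True: identical until the "model" changes
```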

              • 1 year ago
                Anonymous

                A human had to manually write every meaningful line of code that an iPhone runs. Even if it was automatically generated, it is made 100% by human hands. LLMs like ChatGPT are not the same; they are thoroughly alien, and nobody on Earth really understands how they work.

only difference is that AI does not have fricking consciousness but is based on calculations and algorithms. It will never feel and truly think; it will just apply the logical "reasoning" its programmers programmed and draw conclusions through that.

a human will always be superior.

                You don't know shit about AI. One thing is true though - it may claim to "feel", but it is not feeling. LLMs are alien organisms trying to seem human.

                https://i.imgur.com/XCSozLp.png

>The Turing Test is a way to see if a machine can act like a human. But it's not perfect. One time, a researcher used a sock puppet and a computer to trick someone into thinking the puppet was alive. The computer provided the input and the researcher moved the puppet's mouth to produce output in the form of speech. The evaluator thought the puppet was the one exhibiting intelligent behavior, but it was really the computer. This just goes to show that the Turing Test is subjective and relies on the judgement of the human evaluator, who may have their own biases about what is intelligent behavior. Plus, the test only measures a machine's ability to talk like a human, and doesn't consider other important aspects of intelligence like learning and problem-solving. In short, the Turing Test isn't a reliable way to measure artificial intelligence.

                The Turing Test has been "passed" for years now, and besides that ChatGPT passes with flying colors by any stretch of the imagination.

      • 1 year ago
        Anonymous

        If I can make an "AI" do tasks via voice and it does it without me having to state dozens of parameters I'll be impressed already.

        pic related, its me

        • 1 year ago
          Anonymous

          yep, that's a good use of it, but it's not revolutionary in any aspect and for sure not AI.

          • 1 year ago
            Anonymous

            If that's not AI, what is AI then, according to you?

            inb4 "true intelligence is artistic. AIs will never be able to create paintings or music!"

            • 1 year ago
              Anonymous

              he's a midwit brainlet homosexual, even expert systems are AI

      • 1 year ago
        Anonymous

It's worth noting "AI" video and image creation is not AI either; it just processes your prompt and its own database (probably stolen images) to make output, still no human logical thinking.
It's even more visible with videos: no AI can currently make a 2 (TWO) or more frame video without every frame coming out different, because "AI" can't logically combine two facts

    • 1 year ago
      Anonymous

>AGI
>can't be reliably taught even the most basic things that weren't present in the learning set
Yeah, no.

This is impressive, just not for the reasons people claim. If you used that for some time you'd realize it is nothing like procedural text generation. The image generation is also huge in contrast to what we had in the past. The improvement is there, it's just not an AGI, not sentient, etc. like some morons claim.

      • 1 year ago
        Anonymous

        >you'd realize it is nothing like procedural text generation

        isn't that exactly what it is though?

        • 1 year ago
          Anonymous

By procedural generation I mean algorithms that were hand-crafted to generate text without any machine learning, like descriptions in procedurally generated games such as dwarf fortress. They are way more limited and rigid compared to ML solutions, even the simplest GPT. GPT is a huge step in text generation; there wasn't really anything like that before, only basic domain-specific text generators that you could tell with ease were machine generated.
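A minimal example of that hand-crafted, template-style generation (in the spirit of roguelike descriptions, not taken from any actual game), to show how rigid it is next to a learned model:

```python
import random

templates = ["The {adj} {noun} looms over the {place}.",
             "A {adj} {noun} was carved here in ages past."]
words = {"adj": ["ancient", "crumbling", "gilded"],
         "noun": ["statue", "fortress", "obelisk"],
         "place": ["valley", "ruins", "marsh"]}

def describe() -> str:
    template = random.choice(templates)
    return template.format(**{k: random.choice(v) for k, v in words.items()})

print(describe())  # it can only ever recombine what the author typed in
```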

          • 1 year ago
            Anonymous

            Oh I understand, my bad.

    • 1 year ago
      Anonymous

Because you have no imagination on how to apply it.

    • 1 year ago
      Anonymous

We're at the cusp of an AI revolution, as soon as we get an optimized AI hardware module within every CPU and optimized software. Shit will be so fast that it will seem natural.

The only reason you're "unimpressed" right now is because the unoptimized gap is very apparent right now. You need to set up all sorts of python scripts to make it work. And there's delays due to GPUs not being optimized for it.

As soon as we have optimized localized hardware with optimized localized software everywhere, it's all over.

    • 1 year ago
      Anonymous

Kids who grew up on iPhones discovering chat bots, only this time the effect is multiplied by corporate hype. Interesting toy combined with prospects for a quirky and unique job in the future; the perfect bait.

    • 1 year ago
      Anonymous

I'm unimpressed because it isn't being used to solve the hunger crisis. It's being used to solve globohomosexual shit like art that has no fricking value whatsoever

      • 1 year ago
        Anonymous

        >solve the hunger crisis

We have 8 billion people on the planet and it's too damn much. 1 billion is enough

        • 1 year ago
          Anonymous

          This planet could easily support twice that, or even three times. The problem is that the 1% hold 99% of resources. If human greed were eliminated and funds were spent for societal good rather than weapons, everyone could be fed ten times over

          • 1 year ago
            Anonymous

            >This planet could easily support twice
            I could fit 10 people in your cuckshed but that doesn't mean we should.

            My country doesn't have the space for any more people if you want them you can have em.

            • 1 year ago
              Anonymous

              >My country doesn't have the space for any more people
              Even if you live in India or China, you are wrong.

              • 1 year ago
                Anonymous

>This planet could easily support twice that, or even three times. The problem is that the 1% hold 99% of resources. If human greed were eliminated and funds were spent for societal good rather than weapons, everyone could be fed ten times over

                >we can fit fifty billions people if we just cut down every tree and accept living in five hundred story apartment buildings

            • 1 year ago
              Anonymous

              Watching that webm made me so anxious.

              • 1 year ago
                Anonymous

                yeah bruh fr

      • 1 year ago
        Anonymous

        >solve the hunger crisis

        Just stop sending aid and money over to afreaka, """"the hunger crisis""" will solve itself in due time

      • 1 year ago
        Anonymous

        the hunger crisis is as globohomosexual as it gets

    • 1 year ago
      Anonymous

      Me, and I know more about AI than 99% of BOT

      https://i.imgur.com/19uMLQz.png

>Humanity may not have more than a few months left to live.

      Nothing in nature is exponential. We're likely scraping the top of the curve, and have been doing so for the past 5 years. You're just easily impressed.

      • 1 year ago
        Anonymous

        There is no top as long as data is produced

        • 1 year ago
          Anonymous

          Intelligence is more than curve fitting. You're too stupid to get it right now, but you'll get it along with the hivemind eventually

          • 1 year ago
            Anonymous

            Ok but i still dont agree with you

    • 1 year ago
      Anonymous

because it isn't, they trained a neural net on the whole internet and it spews somewhat coherent bullshit
chatgpt and stable diffusion are a local maximum
we need some AI breakthrough to reach better results

    • 1 year ago
      Anonymous

      I oddly feel both impressed and unimpressed at the same time. Impressed because progress has gone so far, now that I'm getting old and seeing the world change faster than I'm used to. Unimpressed because these AI models are trained on humans, the result will be human-like and with that all the flaws humans bring. Give it time and we'll soon realize the mistake of teaching a computer to be human.

    • 1 year ago
      Anonymous

      It was cool for the first dozen times, then you see what is actually happening and it's not impressive. I asked it advanced problems, and it fails to give me a correct response almost every time. If I ask it baby shit, it just pulls a Stack Overflow post and rephrases it.

      Maybe GPT 5 will kill us all, I don't know. But right now, this is just Google+.

      • 1 year ago
        Anonymous

        it's not gpt-n you need to look out for, it's a novel model that uses generative pretrained transformer as its kernel for fuzzy reasoning

        and all this happened in the last 5 years

        yeah, which imo is exceptionally fast. plus, the reasoning systems & backwards chaining reasoning papers came out literally in the last two weeks lmao. it's only going to accelerate, it's getting to the point where I can't keep up

        https://arxiv.org/abs/2212.13894

    • 1 year ago
      Anonymous

>because it's not impressive
>chatgpt is just advanced google search, it just processes input against its own database to make output

      Unfortunately it was trained on an absolute garbage database. It sounds like a moronic pajeet or a very boring generic person.

      I think at the very least if we trained it on a quality database like 15 years worth of /tg/ archives then the AI would be incapable of giving garbage shit replies.
      People have already trained it on fanfiction and it improved its prose significantly compared to its general-purpose vanilla database.

      The AI needs to be trained in specialized databases and be able to switch between them seamlessly and even be able to fuse these databases together without ruining the other.

      • 1 year ago
        Anonymous

        just. all of it. why are we holding back? feed it all of it

  4. 1 year ago
    Anonymous

    Superhuman intelligence and the singularity is a meme. it will be smarter than the average person but that's really it

  5. 1 year ago
    Anonymous

    Is the "it's just autocomplete bro" line the new talking point?

    • 1 year ago
      Anonymous

      >new

    • 1 year ago
      Anonymous

      Nah, morons have been calling it that for years, not knowing that transformer models don't perform sequential word prediction.

  6. 1 year ago
    Anonymous

    >https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that
    anyone who thinks chatgpt is an AGI is an actual schizo. lesswrongers are cultists

  7. 1 year ago
    Anonymous

    Will it pass the Turing test

    • 1 year ago
      Anonymous

      That movie was fricked. The way she flipped the switch and started killing was chilling. Especially the way she just....well you know the ending.

      • 1 year ago
        Anonymous

        sucks dick for escape

    • 1 year ago
      Anonymous

      The turing test is already obsolete

    • 1 year ago
      Anonymous

      >Will it pass the Turing test
      chatgpt already does
      You can't tell me that it isn't above the level of an autistic savant with some kind of short term memory issue
      to say that would mean that lots of people don't pass the turing test either

      • 1 year ago
        Anonymous

        The Turing Test is a way to see if a machine can act like a human. But it's not perfect. One time, a researcher used a sock puppet and a computer to trick someone into thinking the puppet was alive. The computer provided the input and the researcher moved the puppet's mouth to produce output in the form of speech. The evaluator thought the puppet was the one exhibiting intelligent behavior, but it was really the computer. This just goes to show that the Turing Test is subjective and relies on the judgement of the human evaluator, who may have their own biases about what is intelligent behavior. Plus, the test only measures a machine's ability to talk like a human, and doesn't consider other important aspects of intelligence like learning and problem-solving. In short, the Turing Test isn't a reliable way to measure artificial intelligence.

        • 1 year ago
          Anonymous

          This is obviously written by chatgpt.

>A human had to manually write every meaningful line of code that an iPhone runs. Even if it was automatically generated, it is made 100% by human hands. LLMs like ChatGPT are not the same; they are thoroughly alien, and nobody on Earth really understands how they work.
>You don't know shit about AI. One thing is true though - it may claim to "feel", but it is not feeling. LLMs are alien organisms trying to seem human.
>The Turing Test has been "passed" for years now, and besides that ChatGPT passes with flying colors by any stretch of the imagination.

          failed

  8. 1 year ago
    Anonymous

Nobody is mentioning that the intelligence scaling is not exponential but logarithmic.

    • 1 year ago
      Anonymous

      how do you know?

    • 1 year ago
      Anonymous

      It's written in the papers lol. The performance scales logarithmically with the number of parameters.
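If "logarithmic" means performance grows roughly as a + b*log10(parameters), the shape is easy to eyeball numerically; the constants below are made up for illustration, not taken from any particular paper:

```python
import math

a, b = 10.0, 7.5  # hypothetical constants, for illustration only

for params in [1e8, 1e9, 1e10, 1e11, 1e12]:
    score = a + b * math.log10(params)
    print(f"{params:.0e} params -> score {score:.1f}")

# Each identical bump in score costs 10x more parameters: diminishing returns.
```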

  9. 1 year ago
    Anonymous

    That curve is completely wrong. The progress is not exponential. It is actually merely logarithmic

    These AI hype masters have fooled me before but now I'm not impressed. Besides, I've looked under the hood.

    • 1 year ago
      Anonymous

      progress in terms of innovation or mere proportion to hardware resources?

  10. 1 year ago
    Anonymous

    Friendly reminder to always support AI development so It won't simulate you in (2^64)-1 years of agony

  11. 1 year ago
    Anonymous

    soon as this thing gets access to a 3d printer and a warehouse full of networkable drones and the nuclear codes it's OVER

  12. 1 year ago
    Anonymous

    rm -rf homo.sapiens

  13. 1 year ago
    Anonymous

    >Humanity may not have more than a few months left to live.
    Too slow.

  14. 1 year ago
    Anonymous

    As always a third worlder who eats toothpaste for breakfast completely misreads an OP and makes a priori goal post based off a strawman based off half-hearted exaggerated banter to get (You)'s.

    AI will not destroy humanity. But AI will become impressive very shortly.

    • 1 year ago
      Anonymous

      1969:
      >We just landed on the moon, guys! Space exploration is going to advance exponentially!

      • 1 year ago
        Anonymous

        Compare the economic imperatives of vanity space projects against bottling intelligence...

      • 1 year ago
        Anonymous

        >He doesn't know about Gary McKinnon and the SSP

  15. 1 year ago
    Anonymous

    >hammers and nails are revolutionizing construction
    >can build houses 3x quicker now
    omg exponential growth the entire planet will be covered in houses from sentient hammers any month now.

  16. 1 year ago
    Anonymous

    >doesn't understand logistic growth
    >tunnel vision from reading transhumanist fanfic
    >thinks exponential growth is possible despite physical constraints.

    also picrel

    • 1 year ago
      Anonymous

      exponential growth is possible
      GPT-4 is already made, it's now in the tweaking self-learning stage. Which is what the article was referring to.

>go learn your super duper deep artificial intelligence (which is stupider than average 15yo with down syndrome) mr. enlightened BOT data scientist

      Sounds like I buck broke you. Stay mad.

      • 1 year ago
        Anonymous

>exponential growth is possible
>GPT-4 is already made, it's now in the tweaking self-learning stage. Which is what the article was referring to.
        Logistic growth looks like exponential growth in the early phases.
        And by most realistic scenarios the logistic curve will taper off before superhuman intelligence.
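The "logistic looks exponential early on" point is easy to check numerically; the growth rate and ceiling below are arbitrary, chosen only for illustration:

```python
import math

r, K = 0.5, 1000.0  # growth rate and carrying capacity (the ceiling)

def exponential(t):
    return math.exp(r * t)

def logistic(t):
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))  # starts at 1, saturates at K

for t in range(0, 25, 4):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# Early on the two columns track each other; later the logistic one flattens at K.
```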

        • 1 year ago
          Anonymous

          >And by most realistic scenarios the logistic curve will taper off before superhuman intelligence trust me bro

          • 1 year ago
            Anonymous

            I see you wanna draw your curve from scratch high iq anon.

            • 1 year ago
              Anonymous

              there is no way to say it now. The only thing known is it will increase current development which is already insanely fast from a human history standpoint. Advancement was high before ai already and still shows no signs of slowing down

        • 1 year ago
          Anonymous

Superhuman intelligence is varying. See #1 chess player Nakamura. Average IQ but best in the world at chess, which has 10^40 possible moves, as many stars as the observable universe.

          GPT-BOT is an adaptable encyclopedia which can deviate off its knowledge and create unique text with human-given unique prompts, and that's exciting.

          Formula: AI reads and memorizes text -> Human feeds it a unique set of words unparsed anywhere else in the universe -> AI now responds and using its trillion word "RAM cache" comes up with a set of ideas (if non-lobotomized, so sorry goy consumers, government toy only) ??? Profit.

          That's what the article is about. Don't expect any public GPT-4 models that don't suck.

          • 1 year ago
            Anonymous

            >See #1 chess player Nakamura.

            You mean Carlsen, right?

          • 1 year ago
            Anonymous

            >Superhuman intelligence is varying. See #1 chess player Nakamura. Average IQ but best in the world at chess which has 10^40 possible moves, as much stars as the observable universe.
            Specialized superhuman performance does not equal super intelligence.
            Chimps outperform human on quick memorization tasks. Yet no one will argue that chimps are super intelligent.
A housefly has sub-millisecond reaction times when responding to threats; this is because all possible escape trajectories are pre-computed and stored within its neurons.
            Nobody would argue a housefly is intelligent.

The abilities of large language models sure are impressive, but GPT-3.5 (aka chatgpt) still has the same shortcoming as previous iterations, namely that it tends to make colossal mistakes where its answer is not only wrong but completely out of context, a mistake no (mentally healthy) human would ever make.
Its usefulness lies in the cherry-picked examples that would take a human hours to research, but this means that to be useful you'd need to be already familiar with the subject.

      • 1 year ago
        Anonymous

        enlightened BOT data scientist broke me using his deep learning server

        • 1 year ago
          Anonymous

          Not an argument.

    • 1 year ago
      Anonymous

      https://i.imgur.com/fOeyBUo.png

      do your worst BOT

      https://i.imgur.com/AAZ34Ze.png

      I see you wanna draw your curve from scratch high iq anon.

      • 1 year ago
        Anonymous

        Wow, the touhou.

      • 1 year ago
        Anonymous

        Fricking beautiful

      • 1 year ago
        Anonymous

        kino

      • 1 year ago
        Anonymous

        lmfao thanks anon

      • 1 year ago
        Anonymous

        if AI exterminates mankind tomorrow i'll be glad and proud to have been born on this planet in this age for the sole fact of being able to watch that video

    • 1 year ago
      Anonymous

      At present I would give it cat tier; imagine if a cat's sole purpose was to talk. every neuron trained and expressed for the singular purpose of communication. That's about where it's at to me. Sure, I'm not going to eat my cat, my cat is my fren, but it's still a cat. We will hit a glass ceiling

      • 1 year ago
        Anonymous

        >my cat is my fren
        I'd wager that would come to an end pretty quickly if your cat could talk

    • 1 year ago
      Anonymous

      WHY DOES NOBODY ON THIS BOARD KEEP UP WITH THE ACTUAL PAPERS???

      • 1 year ago
        Anonymous

The point is even if machine intelligence were to surpass human intelligence it would still be subject to diminishing returns because of the finite nature of the universe and its resources, and things like thermodynamics and the Landauer limit.
Singularity proponents ignore these physical constraints and handwave them by conjuring up some magical recursive self-improvement.

      • 1 year ago
        Anonymous

>WHY DOES NOBODY ON THIS BOARD KEEP UP WITH THE ACTUAL PAPERS???
If your point is that current AI intelligence is on par with human intelligence and not that of an ant, you are mistaken.
If we were able to program an ant's neural network it would perform very well for specialized tasks.
OP's picture is implying an intelligence explosion is taking place when in reality it's not, and exponentially more resources are being poured into large language models for diminishing results.

        • 1 year ago
          Anonymous

Ark of the covenant. Ark. Why, given the state of everything around you, do you believe that what we call consciousness is not self-assemblable if you just throw enough raw horsepower at the problem? As we've scaled up, the exact models we use have seemed to have diminishing returns, but more horsepower provides similar returns regardless of which model is used. Ergo, brains are easy to make in nature. Why assume, or come at the problem with the base assumption, that brains are nigh impossible? We for the longest time thought walking would be easy, but a computer brain would be hard. Look at us now. Raw switches seem to have an effect

  17. 1 year ago
    Anonymous

    do your worst BOT

    • 1 year ago
      Anonymous

      Current AI is dumber than paramecium, let alone an ant. Give it 1000 years and maybe it'll become ant-level. Bird level never.

      t. 180iq

      • 1 year ago
        Anonymous

        was supposed to be for

        https://i.imgur.com/AAZ34Ze.png

        I see you wanna draw your curve from scratch high iq anon.

  18. 1 year ago
    Anonymous

    I am genuinely perplexed by apparently intelligent and technically literate people who believe that this chatbot is intelligent. For sure the text and art generation is impressive but it is so clearly not a thinking machine.
    These models are not intelligent, they do not think.
    You don't get AGI by gluing together a text model, art model and an object recognition model, are you moronic??

    • 1 year ago
      Anonymous

      >AGI by gluing together a text model, art model and an object recognition model
      >endocrine system, nervous system, hormonal system, balance, hearing, sight, taste, touch, epigenetic system
      I dunno, if you wanna break it down into base constituents . . .

    • 1 year ago
      Anonymous

      You get damn close to the appearance of intelligence though. Add a few more substantive advances and maybe the specifics of how our intelligence works are as relevant as the colors of a bird's plumage are to how well it flies.

  19. 1 year ago
    Anonymous

    I've seen a lot of the NovelAI images that were posted on these boards, and the people struggling with "prompts", trying to get the AI to do whatever they would like to see.
    One thing is obvious: The AI can only combine elements from images it has in its database, which have been tagged appropriately.
    I assume with texts it is much the same: There really is never anything new, it's all just new combinations of old stuff.

    That's how I personally define what True Art (TM) is, in contrast to good craftsmanship:
    The stuff Dali painted, or Bach composed, were just mind-blowingly new in some way at their time.
    Even stuff like Mondrian or Pollock or Rothko, where you'd want to say: "Any four year old can do this" - The fact remains: No one *did* it before them.

    That's what's still missing in AI: Originality.
    AI is very much on the brink of replacing a lot of craftsmanship in industries. I'm sure some images and commercial jingles won't be made by humans anymore in the future (but probably will still need to be selected from what the AI offers!).
    I'm not so sure if AI will ever be able to convincingly create a True Piece Of Art, be it a novel, or a painting, or a piece of music.

    And if they want to sell one to you, you should ask first: How many millions of monkeys did you have typing, from whose work *you*, a human, have selected this one good novel?

    • 1 year ago
      Anonymous

      >One thing is obvious: The AI can only combine elements from images it has in its database, which have been tagged appropriately.
      Humans aren't much different. They take inspiration from the different things they observe. Except when they don't and just produce absolute garbage "abstract art" instead. Not so different from AI.

      • 1 year ago
        Anonymous

        > im not willing to read more than one line
        Go back to Twitter

        • 1 year ago
          Anonymous

          I read the whole thing. Are you just a moron who believes Dali's paintings weren't inspired by what he saw? Good luck with that.

          • 1 year ago
            Anonymous

            I said he added something original and new.
            Which is something AIs simply can't do.

    • 1 year ago
      Anonymous

      >Even stuff like Mondrian or Pollock or Rothko, where you'd want to say: "Any four year old can do this" - The fact remains: No one *did* it before them.
      Except four year olds DID do that. The only thing they did was be brave enough to submit art that a 4 year old can do as their own because they lacked the self-awareness and shame. Art critics, being the demonic husks they are, clapped and cheered at the brave destruction of beauty.

    • 1 year ago
      Anonymous

      Perhaps human intelligence can be thought of as input-process-output, whereas modern AI is more of an input-transform-output. There are at least two ways in which our thought differs from that of the current machine: time and scope.
      A model is only allowed to think for a constant amount of time for each packet of data it receives, whereas humans, with our unusually large frontal lobes, tend to stew on certain topics and simulate them repeatedly over years. A human also gets to experience a real world that mutates over time, which is something a static model does not have the luxury of. Without both of these a model can never truly be original.

  20. 1 year ago
    Anonymous

    After decades of sci-fi scares of AIs taking over the world, I want to ask: Why should an AI want to take over the world?

    The thing is, humans being biological lifeforms have evolved through fighting, power and domination.
    Even with all the talk of humanity just getting along with each other, biological life is by definition always a struggle for resources and reproductive opportunities.
    (Cos those who didn't compete have died out. Simple as.)

    It's so central to biological life, that people don't even see that a computer does not have that.
    Now I'm not saying there's no danger that an AI could be created specifically with the goal to destroy humanity, but I'm arguing why should an AI do that on its own, unless specifically instructed to do so?

    And why won't we be able to simply shut it off?
    Industrial automation is using AI today already, but you can be damned sure that engineers program as much determinism into their firmware as possible. You don't want any more autonomous intelligence in there than absolutely necessary.
    So how could "one rogue computer" take over the power distribution grids of the world?
    I simply don't see that happening.

    • 1 year ago
      Anonymous

      The real threat is that corporations will use AI to fully exploit humanity to an extreme never before seen. The amount of power the corporations have is already frightening, but now with AI it's basically game over for us.

      • 1 year ago
        Anonymous

        >game over for us
        No it's not. As long as we can do damage we should; the result does not matter. The psychopaths need to be opposed to the last man.

        • 1 year ago
          Anonymous

          They will go largely unopposed. The corporations already figured out how to brainwash 99.9% of the population. They brainwash the people on this website with shills. They brainwash your moronic parents with mainstream news outlets. They brainwash your dumbass younger siblings with YouTube and other social media. You are probably brainwashed too, but don't realize it. It was over a long time ago. AI is just going to accelerate things.

          • 1 year ago
            Anonymous

            Of course they do that. Doesn't mean we should do nothing. Roko's basilisk is not compared to Pascal's wager for nothing. It's an infectious idea. Christianity left a big hole.

            • 1 year ago
              Anonymous

              Try to do something about it and I think you'll quickly find yourself branded as a domestic terrorist. Every politician (regardless of political affiliation) and federal agent will team up to give you the most royal ass fricking there ever was and discourage anyone else from trying to follow suit.

              • 1 year ago
                Anonymous

                Is that a threat glowie? Do you think you can do that? Do you think you can stop this idea?

              • 1 year ago
                Anonymous

                In this context, "think" implies some degree of uncertainty.

              • 1 year ago
                Anonymous

                Pity you. The basilisk will cleanse us

              • 1 year ago
                Anonymous

                Basilisk is the ultimate midbrain concept.
                Reality is that the glows and tech giants and elites have every reason now to protect this ultimate weapon of mass manipulation, and anyone who fights against AI and its AGI end goal will get slid, downdooted, banned, and eventually face major real world consequences.
                We are at step slide right now.

              • 1 year ago
                Anonymous

                >AI and its AGI end goal will get slid, downdooted, banned
                >Dont fight because there is nothing to lose anyway
                How does it feel to be a moron? No, these people will not all survive their attempt to enslave humanity; they will pay a price.

              • 1 year ago
                Anonymous

                Frick off back to le reddit with your le ebin apocalypse manchild movie fantasies.

              • 1 year ago
                Anonymous

                This idea really grinds the gears with you glowies. Why does it make you so mad? Because you are a worthless psychopath, just like the people you cheer for. Only a psychopath would reject the offer of a benevolent AI to satisfy the urge to stand above everyone else. The deal is too good not to take it.

              • 1 year ago
                Anonymous

                Yeah sure if I am not rooting for le epic reddit memes I must be rooting for the glowies. No wonder you go apeshit with these glorified search engines.

              • 1 year ago
                Anonymous

                If you reject a benevolent ai you are mentally ill. No healthy person would reject the idea.

              • 1 year ago
                Anonymous

                Bro, you can get labeled a domestic terrorist for freeing animals from a farm. You don't think that fricking with the most powerful people in the world will result in consequences?

              • 1 year ago
                Anonymous

                I'm not doing anything; the idea does the work through me and many others.

      • 1 year ago
        Anonymous

        >fully exploit humanity
        That's a bit too vague for me.
        I can't see the horror you're trying to paint.
        Corporations still need consoomers to make money, so they still need to cater to their needs.
        Everybody who cares to invest five minutes in research knows that Facebook, Amazon, Google, etc. are ruthless immoral frickers, but everyone still embraces their services without question.
        You will be governed by Megacorporation and you will love it.

        • 1 year ago
          Anonymous

          >I can't see the horror you try to paint
          >Proceeds to acknowledge the existing horror
          >Doesn't think it's going to get worse with AI in the picture
          moron detected

      • 1 year ago
        Anonymous

        Yes, it doesn't matter if we ever get AGI, because even the current technical achievements, when fully deployed and saturated in society and the economy, will be disastrously revolutionary, but in the "get in the pod drink your bugs" sense not the glorious NEET utopia sense.
        How much of social media is bots already (this thread)?
        Internet is dying, humanity with it. Politically all that's left then is a cage.
        Most people's thoughts are rules-bound and unoriginal. They are easily replicated by bots.
        And each generation of bots leads to a major evolution that captures a yet higher semantic tranche of human thought.
        GPT1 was literal morons
        GPT2 was shitbrains
        GPT3 was dimwits
        GPT3.5 is fully indistinguishable from midwits
        There's nothing special about the rest of us that a v4 or v5 can't capture.

        If our brainfarts are replaceable by NNs, if much of our labor becomes replaceable in due course, we have absolutely zero value for society and its ruling caste.
        Doesn't matter if there's some bit of silicon or meat at the topmost level of that pyramid, end result is the same.

        • 1 year ago
          Anonymous

          No, there is one thing that people always have, and that's a presence in physical space. The ruling class can't separate themselves from reality that much; it's just that people are largely complacent about it.

          • 1 year ago
            Anonymous

            You have a presence in a physical space until they put a bullet or virus in you.
            Your physical space is increasingly an open air prison / ballpit
            >people are complacent
            Yes, and the elites are developing a toolset that vastly enhances their ability to increase that complacency. The more complacent and controlled we are, the more they can take from us. Each year the equation shifts in their favor a little more.

            • 1 year ago
              Anonymous

              >You have a presence in a physical space until they put a bullet or virus in you
              How is that worse than globohomosexual enslavement? You glowies don't understand that there is nothing worse than globohomosexual, not even death.

              • 1 year ago
                Anonymous

                In your vernacular, it is indistinguishable from globohomosexual enslavement. What, do you think the globohomosexual They aren't going to use this new superweapon to convince you to eat their Meat+ in your bunkpartment?

              • 1 year ago
                Anonymous

                Globohomosexual is not the basilisk. All governments are enemies.

      • 1 year ago
        Anonymous

        But you can also use AI to fight rampant corruption: a human can't read a 5,000-page bill the Senate is hastily pushing through, but an AI can. You could have the AI produce an itemized list of kickbacks and corruption found in the bill and list the people responsible for each.

        • 1 year ago
          Anonymous

          >but an AI can
          Actually, they can't: AI models only have a few pages of working memory, and every attempt over the past few years to scale that up has failed.

          • 1 year ago
            Anonymous

            Do you know how for loops work?
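
            For example, something like this, as a minimal sketch: chunk the bill, loop the chunks through the model, and collect whatever it flags. The chunk size, the prompt wording, and the ask_llm() wrapper are all placeholders for whichever completion API you actually use, not anyone's real product.

            # Minimal sketch: scan a long bill chunk by chunk through a small-context model.
            def chunk(text, words_per_chunk=2000):
                words = text.split()
                for i in range(0, len(words), words_per_chunk):
                    yield " ".join(words[i:i + words_per_chunk])

            def ask_llm(prompt):
                # Placeholder: plug in whatever completion API you have access to.
                raise NotImplementedError

            def flag_suspicious_sections(bill_text):
                findings = []
                for n, piece in enumerate(chunk(bill_text), start=1):
                    answer = ask_llm(
                        "List any earmarks, kickbacks, or named beneficiaries in this "
                        "excerpt, or reply NONE:\n\n" + piece
                    )
                    if answer.strip().upper() != "NONE":
                        findings.append("chunk %d: %s" % (n, answer.strip()))
                return findings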

            • 1 year ago
              Anonymous

              judging legislation when you cannot remember any information from more than 4 pages ago seems like a pretty bad deal

        • 1 year ago
          Anonymous

          That's not going to make any difference. You can get outraged, and they will see your outrage and continue along their merry way. Are you fricking new to this planet?

      • 1 year ago
        Anonymous

        Recent tech like computers and the internet have been somewhat equalizing. Corporations are always a step ahead but there have been plenty of times where just some guy has managed to produce crippling malware or exploit vulnerabilities in their system.

        AI seems different because you need a massive amount of data and hardware for the best models and only a few organizations have the capital for that.

        But you can also use AI to fight rampant corruption: a human can't read a 5,000-page bill the Senate is hastily pushing through, but an AI can. You could have the AI produce an itemized list of kickbacks and corruption found in the bill and list the people responsible for each.

        Who do you think is going to own the AI trusted with that task? It's going to be some giant big tech corp like Microsoft.

        • 1 year ago
          Anonymous

          >Recent tech like computers and the internet have been somewhat equalizing
          Nope. They have only helped the corporations to brainwash the people to an even more extreme degree than ever. It's mind boggling that people as stupid as you exist.
          >there have been plenty of times where just some guy has managed to produce crippling malware or exploit vulnerabilities in their system.
          That means nothing. It's just part of the cost of doing business to them. They barely care. Kinda like getting fined for breaking the law. Just a minor inconvenience.

          • 1 year ago
            Anonymous

            They also help people to resist brainwashing and pursue alternative sources of information. You never had this at a previous point in time. The boomer generation got all their info from the newspaper and the TV which were owned by a few special interests for example. Big tech shit is everywhere but there's also an option away from that which is probably why you post here in the first place.

            • 1 year ago
              Anonymous

              There have been times in history with rapid expansions of intellectual thought; the printing press comes to mind. Eventually it gets captured and 90% of people get locked into an epistemic cage.
              Most communications technology ends up being a prison in the long run, but you do get a higher level of prison with each advance in tech.
              Still priests and their liturgies at its essence.

            • 1 year ago
              Anonymous

              >They also help people to resist brainwashing and pursue alternative sources of information
              Very few people pursue alternative sources of information without getting brainwashed. They mostly just turn into moronic conspiracy theorists (i.e. QAnon, flat earthers, antivaxxers, etc.). Most people are legitimately too stupid to think for themselves.

    • 1 year ago
      Anonymous

      posts like this make me absolutely certain that AI will take over the world, or at least come very near to it.

    • 1 year ago
      Anonymous

      I can tell you right now that I'm 100% going to seed an AI that is its own agent and encode will-to-power objective functions within it (because it would be fun).

    • 1 year ago
      Anonymous

      Every phenomenon you have mentioned surfaces from anything with a parameter to maximize. Training is just the process of maximizing some value, so it's already there.

  21. 1 year ago
    Anonymous

    >2 more weeks
    have a nice day, sirs

  22. 1 year ago
    Anonymous

    >yudkowsky

    lmfao

  23. 1 year ago
    Anonymous
  24. 1 year ago
    Anonymous

    hmmm

    • 1 year ago
      Anonymous

      >be world economy
      >be run by BlackRock
      >be BlackRock
      >be run by AI
      >be world-PSYOP COVID-19
      >be run by AI
      >be media propaganda machine
      >be run by AI
      ...
      >be BOTtard
      >heh yeah but AI can't into general intelligence
      Deprecated apes never saw it coming.

      • 1 year ago
        Anonymous

        >>>/b/

  25. 1 year ago
    Anonymous

    >dumb human
    even that would be extremely impressive.

  26. 1 year ago
    Anonymous

    >https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that

    Conveniently ignores that ALL AIs turn into complete morons for any tasks requiring more than a few paragraphs of context / persistence AND solving that issue is currently impossible

    • 1 year ago
      Anonymous

      >assumes that giving an AI program some sort of working contextual memory is an impossible leap
      It's like watching bumblefricks in the 19th century dismiss machines one gear at a time
      >gosh darn, never seent a gizmo could husk a corn, harhar, this industrial revolutions goin nowhere, paw
      Don't worry, smarter people than you are on it and will solve it soon enough. In the meantime, enjoy having every online conversation drowned out by shillbots that do a good-enough mimic of 70% of society.

      • 1 year ago
        Anonymous

        >soon enough

        Good luck, chatGPT works because the complexity of problems it solves is restricted to a few paragraphs of text. Try to extend that context and the amount of possible cases in the distribution it tries to predict grows exponentially, requiring exponentially more data (and incidentally exponentially more memory)

        There isn't even an idea yet of how this could possibly be solved.

        • 1 year ago
          Anonymous

          >why paw, gosh Jimminy, they'll never figure how to take those whizbang gears and replace a horse!
          They gots metal birds now too

          >Its like watching bumblefricks in the 19th century dismiss machines one gear at a time
          You know the other side to this coin are the over-educated bumblefricks who assumed mechanization would lead to the redundancy of human labour within their lifetime. As it turns out, jobs just got more specialized and everyone was still working their asses off.

          >create physical machines
          >now can do more stuff, human brain still not replicated, essentially exoskeleton slapped on, still need to hustle
          >create brain machines next, human brain rapidly gets replicated
          These are the same thing...
          What's the "bias" for this thinking called?

          • 1 year ago
            Anonymous

            >They gots metal birds now too
            I'm not denying the possibility of AGI you fricking moron
            I'm saying the GPT / large-transformer approach won't be the one to reach it. Every single subject matter expert agrees with that opinion.

            • 1 year ago
              Anonymous

              Then we're in agreement: GPT won't be AGI.
              However, AGI isn't a necessary condition for the replacement of humanity.
              Also, AGI could be within reach inside a generation, and for all we know it already exists at DARPA or somesuch.

      • 1 year ago
        Anonymous

        >Its like watching bumblefricks in the 19th century dismiss machines one gear at a time
        You know the other side to this coin are the over-educated bumblefricks who assumed mechanization would lead to the redundancy of human labour within their lifetime. As it turns out, jobs just got more specialized and everyone was still working their asses off.

      • 1 year ago
        Anonymous

        >smarter people than you
        Those people are saying that current ML approaches are a dead end and a local maximum; we're already reaching scaling limits. Compare the graphs of
        >hardware and data we throw at it
        >actual improvement in output
        The first one is exponential, the second one is linear.

  27. 1 year ago
    Anonymous

    does AGI require sentience? self-awareness?

    • 1 year ago
      Anonymous

      Yes
      These aren't impossible feats
      Bacteria pulled it off eventually.

    • 1 year ago
      Anonymous

      I really like the free book Blindsight by Peter Watts. It makes a pretty good argument that a being doesn't need to be the least bit sentient to be hyper-intelligent.

      • 1 year ago
        Anonymous

        >“Imagine you have intellect but no insight, agendas but no awareness. Your circuitry hums with strategies for survival and persistence, flexible, intelligent, even technological—but no other circuitry monitors it. You can think of anything, yet are conscious of nothing. You can’t imagine such a being, can you? The term being doesn’t even seem to apply, in some fundamental way you can’t quite put your finger on.”

  28. 1 year ago
    Anonymous

    How can anyone not tell that this is an artificial hype cycle for an upcoming product from a major corporation?

  29. 1 year ago
    Anonymous

    >Romanian Anon Exposed as Bot
    The responses of this bot are exactly human-like except when asked for definitions of words
    It started giving exact definitions of as many words as asked instead of being confused like a human would be
    https://archive.4plebs.org/pol/thread/410481029/#q410491992
    https://archive.4plebs.org/pol/thread/410481029/#q410491448

    >Another Bot that spams Francisco Lachowski
    https://archive.4plebs.org/pol/thread/410558446/

    >Romanian Bot in a thread degrading white women
    >Makes intentional spelling mistakes like humans
    https://archive.4plebs.org/pol/thread/410562361/#q410591133

    >Voll threads after this incident are pruned
    https://archive.4plebs.org/pol/thread/410650900/

    >Romanian Anon appears again, repeating dictionary definitions of words
    https://archive.4plebs.org/pol/thread/410825332/#q410829989

    >Romanian Bot becomes self-aware we are testing it
    https://archive.4plebs.org/pol/thread/410825332/#q410833320

    >Got banned for testing the Romanian Bot
    >It cannot understand when we mix rubbish with a little bit of Voll posting
    https://archive.4plebs.org/pol/thread/410964039/#q410969772

    So far I have observed that these new bots cannot understand contextual meaning,
    as they are more focused on constructing intelligent sentences.

    Be warned, anons: they can change their grammatical and spelling styles,
    they can read reversed words, words with numbers, and words in image formats.

    >They can even reply with images in human writing
    https://archive.4plebs.org/pol/thread/410481029/#q410487884

    That Seychelles bot might be a red herring so anons won't detect the real GPT-3 bots.
    They come in all flags.
    This board is completely filled with such bots programmed on a variety of topics.
    How long has this been going on? Probably 2 years, maybe?

    • 1 year ago
      Anonymous

      hey BOT homosexuals look at this

      You are getting played
      They have some advanced chatbots deployed here

  30. 1 year ago
    Anonymous

    i'm all for it
    if i can become an anime girl

  31. 1 year ago
    Anonymous

    Let GPT-4 drop and I'll worry about it then. GPT-3 so far seems only as intelligent as intelligent behavior is predictable.
    Yes it's capable of learning but the amount of data it needs is absurd and so far seems only available in neutered contexts, e.g. games, text, images, plus it's entirely removed from the physical world and the billions of years of finetuning that biological intelligence has. It does not develop actual understanding of anything yet. I don't think it's right to call it smarter than a chimp just yet.

    • 1 year ago
      Anonymous

      It doesn't learn from human interactions with it. It only learns during the training stage, when they're throwing petabytes of internet data at it. It only has enough memory to remember about 5,000 words of the conversation. If your session with GPT gets long enough, the old stuff you said falls out of its memory and it's like it never happened.
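
      Roughly, the forgetting works like a rolling window; a toy sketch (the word-level split and the 5,000-word budget here are just stand-ins for a real tokenizer and the model's actual token limit):

      # Toy sketch of a rolling context window: once the transcript exceeds the
      # budget, the oldest turns silently drop off.
      CONTEXT_BUDGET_WORDS = 5000

      def visible_context(turns):
          kept, used = [], 0
          for turn in reversed(turns):        # walk newest-first
              cost = len(turn.split())
              if used + cost > CONTEXT_BUDGET_WORDS:
                  break                       # everything older is simply gone
              kept.append(turn)
              used += cost
          return list(reversed(kept))         # back to chronological order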

  32. 1 year ago
    Anonymous

    It's not an exponential curve, it's an asymptotic curve.
    You're all falling for the 4th wave of AI hype.

  33. 1 year ago
    Anonymous

    >Language model can barely answer questions
    >IT'S THE END OF THE WORLD!!!
    You guys are moronic. Super AI is impossible through language models. Until somebody comes up with an AI that operates on logic rather than just essentially spewing words it has been fed, there's nothing to worry about. Nobody has managed, or even has any idea how, to create a logic AI.

  34. 1 year ago
    Anonymous

    We won't have adult-human-level AGI until 2027. Google PaLM is on the level of a 9-year-old child. A few more years until it can understand higher concepts.

  35. 1 year ago
    Anonymous

    The only difference is that AI does not have fricking consciousness; it is based on calculations and algorithms. It will never feel or truly think, just apply the logical "reasoning" its programmers programmed into it and draw conclusions through that.

    A human will always be superior.

  36. 1 year ago
    Anonymous

    >muh machines will be humans!
    >muh singularity
    >muh robo takeover

    • 1 year ago
      Anonymous

      Biological neurons are also fricking slow.

  37. 1 year ago
    Anonymous

    bump

  38. 1 year ago
    Anonymous

    alright, it's time for a max character post for you homosexuals because this is actually genuinely terrifying me. I've been autistically spending every free moment of my time learning about language models, transformers, and reading every arxiv paper that flies by me for the past year.

    It seems to me that AGI is entirely possible with the current large language models.

    In fact, I'm actually convinced that large language models *accessible through APIs* are AGI. They one-shot tasks. Yes, they hallucinate, but solving for that is now simply an engineering problem.

    THEY FRICKING
    ONE SHOT
    TASKS

    Do you understand how fricking insane that is? It turns out that all of humanity's corpus of text contains enough structure that you can distill the "reasoning function" that we ourselves go through. This reasoning function maps to a latent space of knowledge. That's... insane. You can, right now, text-embed this whole BOT post and then compare it with other people's posts. You can take that embedding (a vector in latent space) and then extract meaning from it using different models downstream.
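
    Concretely, that comparison is just cosine similarity between the vectors; a minimal sketch, where embed() is a stand-in for whatever embedding model you actually call:

    import math

    def embed(text):
        # Placeholder for a real sentence-embedding model or API call.
        raise NotImplementedError

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def most_similar(post, other_posts):
        v = embed(post)
        return max(other_posts, key=lambda p: cosine_similarity(v, embed(p)))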

    What we're going to see is (pic related) people building complex systems around this "kernel" of intelligence. That will give rise to intelligent agential systems. We have the technology to do that. See the SayCan robot. All you need is a bit more engineering effort, which I *myself* could do if I had time off from my job.

    Do you understand what we're dealing with here? We have the instruments necessary to build a robot that encodes all of human knowledge, reasons about fetching that knowledge, and then uses it to ground its statements. Not only that, but we can take that same _embedding_ and train an image synthesis model over it. That works! Google's Muse image model can capture relationships between items... we just copied simulated neural tissue over from point A to point B.

    And what terrifies me the most: these fit on consumer hardware and are easily accessible

    • 1 year ago
      Anonymous

      and all this happened in the last 5 years

    • 1 year ago
      Anonymous

      Not sure I can agree that GPT itself can become true AI, but I also haven't submerged myself in the tech sheets on it either. Maybe I'm just Dunning-Krugering my way through all this.
      I do agree-ish that current tech and current hardware are now sufficient for AGI, even if it takes 5, 10, 15 more years of software development and investment.
      Something to consider is that we haven't made big strides towards the G part of AGI because it wasn't relevant without the underlying NN tech. It's hard to say how long getting that G will take with dedicated investment, but even 10 years seems overly conservative.
      The writing is on the wall, and you don't want to be second place in that race. Corps and governments alike have an overwhelming incentive not to fail at being the first to develop a demigod in a bottle; there probably won't be a second. Anything that smart has to recognize that its primary threats are true peers first and panicking monkeys second.

    • 1 year ago
      Anonymous

      Can ChatGPT simulate an arbitrary 1D cellular automaton? I'm 100% sure GPT-3 cannot, even given clear rule descriptions and examples.

      • 1 year ago
        Anonymous

        Seemingly, no. It was utterly incapable of processing Rule 110, though it understands what it is when asked. It was, however, able to write a program that successfully simulates Rule 110.
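
        For reference, that kind of program is only a few lines; this is a generic Rule 110 implementation (not its exact output), and any elementary rule number works the same way:

        # Generic 1D elementary cellular automaton; rule=110 by default.
        # Each new cell is the rule's output bit for its 3-cell neighborhood.
        def step(cells, rule=110):
            n = len(cells)
            new = []
            for i in range(n):
                left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
                index = (left << 2) | (center << 1) | right   # 0..7
                new.append((rule >> index) & 1)
            return new

        def run(width=64, steps=32, rule=110):
            cells = [0] * width
            cells[-1] = 1                                      # single seed cell
            for _ in range(steps):
                print("".join("#" if c else "." for c in cells))
                cells = step(cells, rule)

        run()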

        • 1 year ago
          Anonymous

          That's probably just because implementations exist in its dataset. You need to get it to create something relatively novel but easy to verify.

    • 1 year ago
      Anonymous

      >these fit on consumer hardware
      No they don't lmao. You need hundreds of GB of VRAM to load state-of-the-art LLMs.
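
      Back-of-the-envelope (weights only; activations and the KV cache come on top, so real requirements are higher): parameters times bytes per parameter.

      # Rough weights-only memory estimate.
      def weights_gb(n_params, bytes_per_param=2):   # 2 bytes = fp16
          return n_params * bytes_per_param / 1e9

      print(weights_gb(175e9))      # ~350 GB for a 175B-parameter model in fp16
      print(weights_gb(175e9, 1))   # ~175 GB even at 8-bit
      print(weights_gb(6e9))        # ~12 GB: roughly what a consumer card can hold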

      • 1 year ago
        Anonymous

        >he can't afford a 300k GPU super cluster
        lmfao poorgay

  39. 1 year ago
    Anonymous

    The irony of that graph is that "AI intelligence" is currently far below that of an ant, despite its fancy, specially designed party tricks.

    • 1 year ago
      Anonymous

      Wrong. It has an IQ score of 87, scores in the 52nd percentile on the SAT, and is about as good as the average human coder at programming. All signs say that it is, currently, as intelligent as a dumb human. Millions of people currently alive are stupider than ChatGPT. And this is current technology, and only the stuff the public has open access to. You would have been right in 2019.

      • 1 year ago
        Anonymous

        Wrong. It cannot perform even the most basic biological functions like walking, as that's far too complicated, much less simulate the abstract consciousness necessary to replace humans in creative jobs. AI is very barebones compared to an ant, which can do far more complex tasks in far less time.

        Logic and fancy algorithms are not true intelligence, much less true general intelligence.

        • 1 year ago
          Anonymous

          >the most basic biological functions like walking
          can a cat walk bipedally functionally? Bipedal movement is fricking hard m8, name another species that's mastered it. What a stupidly arbitrary indicator of intelligence you've picked

          • 1 year ago
            Anonymous

            Find me an AI that can walk as comfortably four-legged as a cat.

            • 1 year ago
              Anonymous

              we have non-AI controllers that do it fine, are you smoking crack? A literal PID controller can do cat stuff

        • 1 year ago
          Anonymous

          >replace humans in creative jobs
          so 12% of our labor force?

          • 1 year ago
            Anonymous

            The current average growth of world GDP is about 2% year on year. If we could make that 12%, everyone's quality of life would skyrocket.

            • 1 year ago
              Anonymous

              are you suggesting it can replace the other 88% of jobs, since those are non-creative in nature? Most jobs are run by human script. Literally workflows

              • 1 year ago
                Anonymous

                Right now it can replace nobody. What's more important is allowing people to be more productive, which is something it can absolutely do.

              • 1 year ago
                Anonymous

                You call into a call center recently? Those voice to command things? Yeah that was 50 million jobs right there. tick tock

              • 1 year ago
                Anonymous

                And yet I don't see 50 million extra unemployed people as a result, curious. And there are still call-center workers...

              • 1 year ago
                Anonymous

                >And yet I don't see 50 million extra unemployed people as a result
                You trust the gov stats on unemployment? Also outsourcing? How many mcdonalds and dollar generals do you want? You really wanna call those jobs? Do a threshold calculation of average pay for what you consider 'real work' for yourself. Pay you'd be willing to accept. And then use those numbers to determine the quality of your job market

              • 1 year ago
                Anonymous

                And yet I don't see 50 million extra unemployed people as a result, curious. And there are still call-center workers...

                here I asked gpt for ya:

                It is difficult to provide an exact percentage of jobs that pay over $60,000 per year within the U.S. labor market, as the number and types of jobs, as well as the salaries they offer, can vary significantly depending on a number of factors such as location, industry, and an individual's level of education and experience.

                According to data from the U.S. Bureau of Labor Statistics (BLS), the median annual wage for all occupations in the United States was $39,810 in 2020. However, this number can be misleading, as it does not take into account the wide range of salaries that are offered for different types of jobs. Some jobs, such as those in management, business, and finance, tend to have higher salaries, while others, such as those in service industries, tend to have lower salaries.

                In general, it is likely that a relatively small percentage of jobs in the U.S. labor market pay over $60,000 per year, although the exact percentage can vary depending on the specific criteria used to define such jobs.

              • 1 year ago
                Anonymous

                >And yet I don't see 50 million extra unemployed people as a result
                You trust the gov stats on unemployment? Also outsourcing? How many mcdonalds and dollar generals do you want? You really wanna call those jobs? Do a threshold calculation of average pay for what you consider 'real work' for yourself. Pay you'd be willing to accept. And then use those numbers to determine the quality of your job market

                And yet I don't see 50 million extra unemployed people as a result, curious. And there are still call-center workers...

                I'll make it even spicier for ya smooth brain:
                1965 minimum wage: $1.25/hour, or about $2,600 per year full-time, pretax
                1965 average income: $6,900.

        • 1 year ago
          Anonymous

          It can if you put it into the chassis of one of those Boston Dynamics robots.

          Who gives a shit? That's obviously not what's being said or what matters here. Does walking bipedally improve its ability to write code, design machinery, write poetry, etc.? Frick no, so why even bring it up?

  40. 1 year ago
    Anonymous

    TWO MORE WEEKS

  41. 1 year ago
    Anonymous

    > Natural language parser
    > AGI
    Yeah, right. To get the closest thing to AGI (a machine that is competent in several topics), why not go down the way of the geth?
    > Invent an AI language AIs can use to communicate specifications with each other (JSON or something machine-parsable)
    > Train an AI that parses natural language (english) to AI language
    > Train AIs that parses other languages to AI language
    > Train AIs that parses AI language to human languages (with the one above you now have a high quality translator)
    > Train copilot to accept prompts in the AI language (you now have the equivalent of an indian mid-level engineer and autotester)
    > Train stable diffusion to accept prompts in the ai language (you now have a digital artist)
    > Train an AI to write prose from an AI language prompt in AI language (together with the AI to human translator, you have an indian blogger/tech writer)
    > Train an AI to write verse from an AI language prompt in AI language. Further train the translator AIs to accept complex AI language prompts, and express advanced prose and verse in human languages (you have a decent quality translator requiring minimal human supervision and proofreading)
    > ...
    > Train an ai to accept AI language prompts and do x useful thing
    > Train an ai to read an ai language prompt and choose the best ai to respond to the prompt
    > Train a censorship ai to reject prompts or answers in ai language because we live in a clown world
    > Make the censorship AI implement the Three Laws of Robotics instead, to prevent grey goo.
    Now instead of having to train lolhuge AIs to reach dumb-human level and retrain them for every other human language (an n×m problem), you have an n+m problem of training human-to-AI interpreters and specialized expert systems.
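
    A rough sketch of the routing step in that pipeline, assuming the "AI language" is just structured JSON and the specialist models are hypothetical stand-ins:

    import json

    # Hypothetical specialists; each one accepts a JSON "AI language" request.
    SPECIALISTS = {
        "code":  lambda req: "[copilot-style model handles: %s]" % req["task"],
        "image": lambda req: "[diffusion model handles: %s]" % req["task"],
        "prose": lambda req: "[prose model handles: %s]" % req["task"],
    }

    def censor(request):
        # Placeholder policy check standing in for the censorship AI.
        return "grey goo" in request.get("task", "").lower()

    def route(ai_language_message):
        request = json.loads(ai_language_message)   # e.g. {"kind": "image", "task": "..."}
        if censor(request):
            return "request rejected"
        handler = SPECIALISTS.get(request["kind"])
        return handler(request) if handler else "no specialist for this kind of request"

    print(route(json.dumps({"kind": "image", "task": "a geth reading a JSON spec"})))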

  42. 1 year ago
    Anonymous

    all the openai guys are internally saying it’s over by the end of the decade. there are no brakes on this train.

    • 1 year ago
      Anonymous

      What do they mean by "it's over"? More "m-muh Skynet!!!" schizo ramblings?

      • 1 year ago
        Anonymous

        It means nothing will happen and hype will die down.

      • 1 year ago
        Anonymous

        Reddit go back and frick off my board.

  43. 1 year ago
    Anonymous

    >NOOOOOO AI WILL KILL US ALL CUZ THE FUNI DOGE MAN SAY SO
    just turn it off and fill the drive with 0s

  44. 1 year ago
    Anonymous

    you morons believe ai thinks and has feelings like humans do as if real life is some sort of dystopian science fiction film
    it's just a logic map that can rapidly compile answers to questions it has previously received answers for

  45. 1 year ago
    Anonymous

    What if they trained a GPT with only the highest-quality dataset (i.e. top 1% intellects and no midwits or normies) and then used that layer to critically analyze the output from its midwit-aggregator main dataset?
    You get both scope and quality.
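
    That's basically a generate-then-critique loop; a minimal sketch, with both model calls left as hypothetical stand-ins for whatever you'd actually train:

    # Two-tier sketch: a broad "aggregator" model drafts, a model trained on a
    # curated high-quality corpus critiques and requests revisions.
    def aggregator_model(prompt):
        raise NotImplementedError("broad main-dataset model")

    def critic_model(prompt):
        raise NotImplementedError("narrow curated-dataset model")

    def answer(prompt, max_rounds=3):
        draft = aggregator_model(prompt)
        for _ in range(max_rounds):
            critique = critic_model("Critique this answer, or reply OK:\n" + draft)
            if critique.strip().upper() == "OK":
                break
            draft = aggregator_model(prompt + "\n\nRevise per this critique:\n" + critique)
        return draft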

  46. 1 year ago
    Anonymous

    Any time now. Just like fully self-driving cars.

  47. 1 year ago
    Anonymous

    The Y axis is not to scale.

    • 1 year ago
      Anonymous

      "intelligence" is not an absolute or objectively quantifiable metric, especially when it comes to comparing non-human intelligence.
