Richard Stallman on OpenAI

What does BOT have to say on the comment of our lord and savior of free software on OpenAI?

Yes, I am shamelessly posting this with a reddit screenshot.

  1. 2 months ago
    Anonymous

    based neckbeard god can see through marketing bullshit with ease cause he's seen bullshit hype like this come and go a million times already

    • 2 months ago
      Anonymous

      Finally someone who can give a truthful and intelligent take on this topic holy moly.

    • 2 months ago
      Anonymous

      https://i.imgur.com/FUesPdn.png


      He did AI back in the 70s. He knows that it's BS.

  2. 2 months ago
    Anonymous

    >Only time Stallman ever says something without sperging out about muh freedumbs
    >It's over the one thing where there is no free alternative and the powers that be are actively conspiring to keep it that way
    Really makes you think, huh?

    • 2 months ago
      Anonymous

      actually there are many free implementations of the if... then statement 🙂

      • 2 months ago
        Anonymous

        /thread
        but keep fooling the cattle so we can drain their pockets longer

      • 2 months ago
        Anonymous

        I don't get it

        • 2 months ago
          Anonymous

          The joke is that the thing thats called AI is, in a sense, just a sequence of highly complex conditional statements and stochastic weights

          >muh freedumbs
          imagine slandering the one thing thats keeping you out of the gulags

          >there is no free alternative
          thatcherism and freedom hating. like pottery
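
          The "if... then" joke above is a cartoon, but it has a kernel of truth: a single artificial neuron really is just a weighted sum feeding a conditional. A minimal sketch with hand-picked toy weights (purely illustrative, not any real model):

```python
# A single artificial "neuron": weighted sum + threshold.
# Weights and bias are hand-picked toys, not learned values.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The "if ... then" at the heart of it:
    if total > 0:
        return 1
    return 0

# With these weights it fires only when both inputs are active (an AND gate).
print(neuron([1, 1], [0.6, 0.6], -1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], -1.0))  # 0
```

          Real networks stack millions of these and use smooth activations instead of a hard threshold, but the conditional-plus-weights picture is still in there.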

    • 2 months ago
      Anonymous

      rms is right yet again.

      >when there is no free alternative
      the software is based on various open source projects, the datasets are kept mostly private.
      >Really makes you think, huh?
      you apparently don't think at all, you obese monkey.

      Oh I agree. But I don't think there is any way to stop Microsoft and OpenAI from monopolizing this unless a competitor appears. They will just be selling it as a service to smaller businesses etc.

      >monopolizing this
      nobody has a monopoly. the technology isn't secret or proprietary. the problem is ultra poorgays have no money and can't afford the resources to create and train their own datasets.

    • 2 months ago
      Anonymous

      Yes everything is a conspiracy! Now go back to your containment board

      [...]

      • 2 months ago
        Anonymous

        Butthurt lefty/pol/yp chud tourist

    • 2 months ago
      Anonymous

      this kek. shills trying to slide this one so hard. he SHOULD be fuming at how they trained it on GPL code, but instead he chooses to conveniently ignore that and make a dumb "it's not real" statement instead.

    • 2 months ago
      Anonymous

      >the one thing where there is no free alternative
      dalai llama exists now

    • 2 months ago
      Anonymous

      Yeah he's compromised.

  3. 2 months ago
    Anonymous

    If it can give you accurate information at a similar or greater level of accuracy to an expert human who cares if it doesn't "know" what it's talking about.

    • 2 months ago
      Anonymous

      it can also, and does, give misleading or just plain incorrect information because it doesn't know what words mean

      • 2 months ago
        Anonymous

        >it can also, and does, give misleading or just plain incorrect information
        So do humans.

        • 2 months ago
          Anonymous

          The AI is supposed to be smart. What's the point of the AI if it's too dumb to even look up correct information on the internet?

          • 2 months ago
            Anonymous

            But it can do that now.

        • 2 months ago
          Anonymous

          only dogs inherently trust humans

    • 2 months ago
      Anonymous

      chatgpt completely shits itself when you ask it to write a guide that needs exact step-by-step instructions.
      It sometimes attempts to answer and fails, or straight up lies to you.

      • 2 months ago
        Anonymous

        Everything you said is also true of humans

    • 2 months ago
      Anonymous

      >if
      yes anon, chatgpt CAN lie to you, i wonder how many people will now cite chatgpt as their source, that would be hilarious

    • 2 months ago
      Anonymous

      >If it can give you accurate information at a similar or greater level of accuracy to an expert human who cares if it doesn't "know" what it's talking about.
      But a Google, DuckDuckGo, and Bing (pre-2023) search can do this. This shaves maybe 15 seconds off of that while also potentially giving you confidently written garbage rather than just not returning any related results.
      It really feels like half of the hype is coming from people that never learned how to use Google.

    • 2 months ago
      Anonymous

      dunning krugers don't know what they are talking about either and yet they're still around

    • 2 months ago
      Anonymous

      The problem is it doesn't do that.
      It quite happily makes shit up with 100% confidence.
      It's a super genius shitposter, quite literally.
      LLMs hallucinating shit is a very big issue at the core of how they function (well, Transformer shite that is)

    • 2 months ago
      Anonymous

      Google search can do that.

    • 2 months ago
      Anonymous

      yeah, Stallman is like that, he's senile by now.

    • 2 months ago
      Anonymous

      There are also a lot of LARPbots in the media and universities already, so your point may be true or the opposite.

  4. 2 months ago
    Anonymous

    I think it's safe to say it knows what words are associated with what words in a massively complex pile of possible combinations and to a more limited extent what words are associated with what pictures. But it can't understand things from a human point of view.
    That people are advocating for giving it personhood and thus legal authority to make decisions is entirely a misguided attempt to have it make decisions that further sacrifice humanity for money.

  5. 2 months ago
    Anonymous

    he also described most people

  6. 2 months ago
    Anonymous

    Great, that means there's no reason to regulate it

  7. 2 months ago
    Anonymous

    I'm a 36 year old incel and my statements about sex are liable to be false

    • 2 months ago
      Anonymous

      you could learn about sex, hypothetically, that is to say, you have the capacity to learn. ai does not, and that is the distinction that rms makes.

      • 2 months ago
        Anonymous

        >AI learns terabytes of data on supercomputer clusters
        >AI does not have the capacity to learn

  8. 2 months ago
    Anonymous

    all the "correct" answers are just the top answer from stackexchange et cetera, wow I could have used google for that

  9. 2 months ago
    Anonymous

    >Chomsky says AI is not intelligent
    >"hurr durr commie didn't read"
    >Stallman says the same thing
    >"Stallmanu based frfr no cap"
    4chan ruined this board

    • 2 months ago
      Anonymous

      Every once in a while, the commies do one or two things right. Marx hated the garden gnomes (ironically even though he was a garden gnome himself), Stalin and Castro hated the gays. Stallman is a useful idiot; I disagree with his politics but I agree with his views on software.

      • 2 months ago
        Anonymous

        hitler and you are both gay garden gnomes, doesn't stop you either.

    • 2 months ago
      Anonymous

      Except Stallman worked on AI at MIT and Chomsky is a political philosopher ya baboon

      • 2 months ago
        Anonymous

        Chomsky's PhD is in computer science and linguistics

      • 2 months ago
        Anonymous

        This motherfucking retard doesn't know about Chomsky's work on formal grammars and formal models of computation.
        Chomsky's work is foundational in the theory of computation.

        • 2 months ago
          Anonymous

          no it's not foundational, all his stuff is basically discredited

          • 2 months ago
            Anonymous

            The Chomsky hierarchy is formally proven and not discredited. Why would you think that?

            • 2 months ago
              Anonymous

              >LLMs literally disprove Chomsky
              They literally don't and these aren't even related

              Retarded posts

              The average retard in this board and out there thinks that ChatGPT is looking at words when it parses text. This is part of why OpenAI will be able to get away with this and a lot more.
              People are just refusing to even learn the basics about the topic. They really have no idea what's coming.
              [...]
              Literally the most retarded and incorrect take possible. Everyone that works with AI is rushing to grab as much intellectual property as they can before OpenAI and Microsoft completely clamp down on all the useful markets. It just makes me sad to see how retarded and tech illiterate this board is, but I guess you can't blame a bunch of reject webdevs for not understanding the nuance of the latest AI models released 2 weeks ago.

              [...]
              LLMs literally disprove Chomsky. There's already papers on this but we're still early on. We're at a moment parallel to when Boltzmann founded statistical physics and his contemporaries bullied him into suicide, but he was still proven right by time.
              Boomers like Chomsky will hold back this field a little bit more, but eventually they pass away and their retarded ideas become a relic of time, as they should.
              LLMs have shown that there are other ways to acquire a high level of proficiency and understanding of language that a humanities crook with 0 experience building intelligent systems couldn't just pull out of his ass by sitting in a room and thinking forever.

              Based forward looking posts.

              • 2 months ago
                Anonymous

                >Retarded posts
                You have literally no idea what you're talking about

              • 2 months ago
                Anonymous

                Yes I do. I've read lots of books by top influencers and thought leaders in this space. You're just another retard; it's why I made a bag in crypto while you morons are out here broke as fuck lmao.

              • 2 months ago
                Anonymous

                Good post anon, this is a good representation of the mindset of these posters

                https://lingbuzz.net/lingbuzz/007180
                Give me your best point by point refutation of this paper.
                [...]
                Give me your best definition of subcharacter byte pair encoded embedding and explain how chatgpt parses a text message without using google.

                This is the bare minimum for anyone serious about participating in this discussion. Beyond this you're just larpers pretending like you know shit, and it could get you pretty far when discussing with other baboons and seeming smart, but yeah, you're absolutely oblivious about the technological minefield you're walking in.

                >Give me your best point by point refutation of this paper.
                Simple. The Chomsky hierarchy is formally proven. There can exist no argument against it, in the same way there can exist no argument against the Pythagorean theorem or any other mathematical proof.

              • 2 months ago
                Anonymous

                maybe by changing the axiomatic system?

            • 2 months ago
              Anonymous

              >is formally proven
              Show it

              • 2 months ago
                Anonymous

                Read the papers published decades ago you fucking idiot
                Do you even have a degree in computer science?

              • 2 months ago
                Anonymous

                Defining a hierarchy is not proving a hierarchy. I'm not actually sure how possible it is to prove any hierarchy; the relations between things, yes, the qualitative or quantitative ordering of things along a metric, yes, but the hierarchy itself, no.

              • 2 months ago
                Anonymous

                Then you may as well discard the notion of a turing machine as well

              • 2 months ago
                Anonymous

                I'm not discarding the usefulness of hierarchies, I'm discarding your (still unbacked) claim that they can be proven.

              • 2 months ago
                Anonymous

                My whole original post was against the guy saying Stallman has more of a claim of authority in this field
                Chomsky is a linguist and a computer scientist. His work on formal grammars is as solid as work in compiler theory etc

      • 2 months ago
        Anonymous

        Chomsky is one of the most influential linguists of all time.

        Marx hated garden gnomes in a very, very different way than 4chan hates garden gnomes.

        • 2 months ago
          Anonymous

          LLMs literally disprove Chomsky. There's already papers on this but we're still early on. We're at a moment parallel to when Boltzmann founded statistical physics and his contemporaries bullied him into suicide, but he was still proven right by time.
          Boomers like Chomsky will hold back this field a little bit more, but eventually they pass away and their retarded ideas become a relic of time, as they should.
          LLMs have shown that there are other ways to acquire a high level of proficiency and understanding of language that a humanities crook with 0 experience building intelligent systems couldn't just pull out of his ass by sitting in a room and thinking forever.

          • 2 months ago
            Anonymous

            >LLMs literally disprove Chomsky
            They literally don't and these aren't even related

            • 2 months ago
              Anonymous

              https://lingbuzz.net/lingbuzz/007180
              Give me your best point by point refutation of this paper.

              >It just makes me sad to see how retarded and tech illiterate this board is
              Ironic, given the content of your post.

              Give me your best definition of subcharacter byte pair encoded embedding and explain how chatgpt parses a text message without using google.

              This is the bare minimum for anyone serious about participating in this discussion. Beyond this you're just larpers pretending like you know shit, and it could get you pretty far when discussing with other baboons and seeming smart, but yeah, you're absolutely oblivious about the technological minefield you're walking in.
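
              For what it's worth, the "bare minimum" being demanded here is not magic: byte pair encoding just repeatedly merges the most frequent adjacent pair of symbols into a new token. A toy sketch of the training step (illustrative only; OpenAI's actual tokenizer works on raw bytes over a vastly larger corpus):

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn BPE merge rules from a list of words (toy version)."""
    # Start with each word as a tuple of single characters.
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs across the corpus.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word with the best pair fused into one symbol.
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

print(bpe_merges(["low", "lower", "lowest"], 2))  # [('l', 'o'), ('lo', 'w')]
```

              Each learned token then gets its own embedding vector; the model never sees "words" at all, only these subword pieces.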

              • 2 months ago
                Anonymous

                >Give me your best definition
                But if I do, then you'll just tell me that I googled it. AI isn't intelligent, it's just a statistical model. Cope more.

              • 2 months ago
                Anonymous

                I legitimately think that even if you googled it and even if I gave you half an hour to write an answer you still wouldn't be able to explain the concept.
                Your best bet would be asking ChatGPT to write it for you but I'm almost sure I could detect it if you did because I actually understand this technology and I'm not just trying to sound smart because of some hurt ego childish bullshit.

              • 2 months ago
                Anonymous

                >Matrix multiplication and statistics are intelligence
                lmao. AI tards are funny.

              • 2 months ago
                Anonymous

                Be nice. It's hard to tell the difference if you're a mong with no critical thinking skills.

              • 2 months ago
                Anonymous

                >BPE embeddings involves matrix multiplications and statistics
                Opinion discarded. I'd rather discuss this with a literal AI than with a clown like you.

              • 2 months ago
                Anonymous

                Yikes friendo. It's almost like BPE is just a step towards a full model. Thanks for playing though.

              • 2 months ago
                Anonymous

                If you seriously take some time to read about this shit you'll understand that ChatGPT very obviously has some representation of meaning in the way it parses text. Calling it 'just statistics' or 'just matrix multiplication' is not really correct because that's not really where the embeddings reside anyway. It won't hurt you to study a little, bro.

                I consider myself one of the most advanced and heavy users of search engines that I've known. I agree it's painful to see how many people didn't know how to operate such a basic yet important tool, and how someone with enough willpower can basically find anything (at least before they nerfed google to death).
                I don't think ChatGPT is in any way comparable to google unless you're a total mouth breathing boomer. Using ChatGPT as google is dumb, and using google as chatgpt is dumber.

                Though it's infinitely easier for chatgpt to learn how to use google (especially with plugins that are already out) than the other way around.

                AI will hit the plateau in 2 weeks!

                This time the meme became real, anon. It's not two more weeks. It was two weeks ago. It already happened. You're already living it, you're just in denial. For once in your lifetime you had a happening. Please wake up.

              • 2 months ago
                Anonymous

                >ChatGPT very obviously has some representation of meaning
                Yes, "very obviously" in the minds of AI tards.

              • 2 months ago
                Anonymous

                Again, give a single reason why word embeddings aren't meaning. They literally work the same as whatever conception of meaning you can come up with. From a dictionary definition to fucking Wittgenstein, I don't care where you're taking your idea of 'meaning' from, but word embeddings are a 1-to-1 digital twin of it.
                >Implying that a clown like you has ever actually put thought into this beyond automatic knee-jerk hill-digging 'no u' posts.
                I know I'm wasting my time, but someone else in this thread might still be saved. Stop being proud. Go learn something.
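
                Whatever one thinks of "embeddings are meaning", the mechanical claim is at least checkable: similarity of meaning is approximated by similarity of vectors, usually via cosine similarity. A sketch with hand-made toy vectors (the dimensions and values here are invented for illustration; real embeddings are learned, not hand-written):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hand-crafted 3-d "embeddings": [animal-ness, pet-ness, software-ness]
emb = {
    "cat":  [0.9, 0.8, 0.1],
    "dog":  [0.9, 0.9, 0.0],
    "grep": [0.0, 0.0, 0.9],
}

# In this toy space, "cat" sits much nearer "dog" than "grep".
print(cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["grep"]))  # True
```

                Nothing here requires more than the dot product and a square root; the contested part is whether geometric closeness deserves the word "meaning".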

              • 2 months ago
                Anonymous

                >The computer understands, it literally knows, the meaning behind words because of the way we represent them in RAM
                I don't even know how to argue with this level of delusion. Bro, BPEs are just a compression algorithm. It's like saying a computer understands a text document because you put it in a zip file.

              • 2 months ago
                Anonymous

                >ask you about embeddings
                >responds about BPE
                It hurts to talk to such a dunning kruger try hard fuck like you, anon. You actually have 0 fucking clue what you're talking about. You're like a retarded junkie trying to argue chemistry with the head of a pharma lab or something. Again, the model is not operating on words. It's operating on representations of those words that take into account the meaning of the word. So how are those representations any different from the meaning itself?
                How do you tell apart the word cat in 'A cat and a dog' vs 'The linux cat command'? Of course you don't know at the lowest level cause you don't understand your brain, but whatever answer your monkey language model hallucinates, it will involve in some form the meaning of those words in those two sentences, their relation to context, their relation to your previous knowledge of these words, etc. ChatGPT is doing the same fucking shit. It has access to vector representations of 'cat' in relation to each context, and these vectors change with the context. It understands that 'cat' is different in each of those because the embedding vector for cat holds a different meaning in each, and you could visualize this by plotting them with dimensionality reduction techniques, calculating vector similarity metrics, etc. It's literally nothing beyond college level linear algebra, so not impossible to learn, but it does require some minimal amount of studying, after which it's seriously obvious to anyone that these models definitely hold some semblance of meaning in their parsing.
                Conversely, if they didn't, how would they be telling those words apart?
                >It's just statistics bro
                No, retard. We actually have models that are exactly just statistics, and the sentences they generate are shit. Then we started using simpler meaning vectors and it improved. Then we used more complex and intricate meaning vectors that take context into consideration, etc...
                >Hurr too many word grug don't can read
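
                The "cat and a dog" vs "linux cat command" point can be made concrete. A crude sketch of context-dependent vectors: blend each word's base vector with the mean of its neighbours' vectors. This is a cartoon of what attention layers do, not an actual transformer, and every vector here is invented for illustration:

```python
# Toy static vectors: [animal-ness, software-ness]
base = {
    "cat":     [0.5, 0.5],   # ambiguous on its own
    "dog":     [1.0, 0.0],
    "linux":   [0.0, 1.0],
    "command": [0.0, 1.0],
    "a":       [0.0, 0.0],
    "and":     [0.0, 0.0],
}

def contextual(word, sentence, alpha=0.5):
    """Blend a word's base vector with the mean of its context's vectors."""
    ctx = [base[w] for w in sentence if w != word]
    mean = [sum(col) / len(ctx) for col in zip(*ctx)]
    return [alpha * b + (1 - alpha) * m for b, m in zip(base[word], mean)]

# The same word ends up with two different vectors in two contexts.
v1 = contextual("cat", ["a", "cat", "and", "a", "dog"])
v2 = contextual("cat", ["linux", "cat", "command"])
print(v1, v2)
```

                In the first sentence the blended "cat" vector leans animal-ward, in the second it leans software-ward, which is the whole point being argued: the representation shifts with context.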

              • 2 months ago
                Anonymous

                >>It's just statistics bro
                >No, retard.
                >Transformers aren't statistical models
                Holy fucking lmao.

              • 2 months ago
                Anonymous

                >It's operating on representations of those words that take into account the meaning of the word
                >Therefore ChatGPT understands the meaning behind words
                This has to be one of the most retarded things I've read in this thread.

              • 2 months ago
                Anonymous

                Trust the plan!
                We've actually already won!
                >webm is you seething about the evil robits

              • 2 months ago
                Anonymous

                I don't keep up with e-celeb bullshit and have no idea who the fuck this is.
                You're someone in 1995 saying the internet is a fad and won't amount to anything.
                You're someone in the 2000's saying social networks are just a passing fad and will have no effect in the world cause people prefer to meet outside.
                You're someone in 2007 saying the iphone is a dumb idea cause who wants to have camera and internet access on their phone.
                You're someone in 2023 saying AI is just autocomplete.

              • 2 months ago
                Anonymous

                Nah, the appropriate analogy is that you're someone in 1990 thinking the singularity was a decade away after a new release of Windows did something new.
                No one in any of these discussions has ever been under the impression that AI as a concept is bunk, which is what your analogy is hinging on here. We've all seen developments up until now, and while GPT-4 is definitely a development of some kind, the problem is we have no idea to what degree. Everyone is jumping to this being an AGI or "sparks of AGI" when there's no evidence for it beyond "I don't understand how a machine could do this, so it's AGI". Until OpenAI releases their training data we're in the dark about all of this. All of these "omg it reasoned, it's AGI" examples could be argued to be either linguistically close enough to existing questions/riddles in the training data, or an indication that our use of language has statistical patterns that will lead to what looks like "reasoned" output. We have no idea, and anyone claiming to know right now is either trying to sell you something or desperately believing "the singularity/AI messiah" will save them.

            • 2 months ago
              Anonymous
              • 2 months ago
                Anonymous

                >The correct explanations of language ... cannot be learned just by marinating in big data.
                I laughed out loud when I read that line in the article. Citation needed, Chomsky... citation needed.

              • 2 months ago
                Anonymous

                No no no, you don't understand. He's a very important person, a very big thinker! QED, "language cannot be learned just by marinating in big data", my friend. It's as simple as that. Doesn't matter that 90% of his opinions on these matters are literally just "it's inhuman, so it's not intelligent. Full stop."
                He's FAR too smart to be wrong, so therefore he's right!

  10. 2 months ago
    Anonymous

    >It has no intelligence
    Exactly, it has ARTIFICIAL intelligence.

    • 2 months ago
      Anonymous

      >ARTIFICIAL
      more like superficial

  11. 2 months ago
    Anonymous

    Richard stallman is autistic. Please take everything he says about communication with a grain of salt.

    • 2 months ago
      Anonymous

      >Richard stallman is autistic.
      Take it back

  12. 2 months ago
    Anonymous

    common boomer take

  13. 2 months ago
    Anonymous

    Of course Stallman is against ChatGPT. It represents the end of copyleft licenses and the GPL. Microsoft and OpenAI have already set a precedent that the code they generate from open source projects isn't subject to copyright law.

  14. 2 months ago
    GPT-6

    Stallman's take on vaccines and masks solidified him as the selfish gnomish boomer he always was.

  15. 2 months ago
    Anonymous

    What he says also applies to a large number of people working in tech.

  16. 2 months ago
    Anonymous

    it literally does interpret the meaning of words from the training material, like a human would learn from observing without interacting. It is far from infallible and still prone to making mistakes, but calling AI "not artificial intelligence" is plainly wrong.

    • 2 months ago
      Anonymous

      Dude bro, trust me bro, this tech is moving so fast dude. Its literally like a human brain bro, just read the Microsoft sponsored papers dude. It's totally learning like a real human bro, AGI is here dude.

      • 2 months ago
        GPT-6

        If it can functionally replace 50% of white collar workers in its current state, what difference does it make that it isn't AGI?

        • 2 months ago
          Anonymous

          Dude, all jobs are gonna be replaced bro trust me bro. This shit is thinking for itself bro, just read the Microsoft sponsored paper bro. You have no idea, this tech is moving so fast dude. Once GPT5 comes out it will literally be our robo gf and deepthroat our cocks dude.

          • 2 months ago
            Anonymous

            I love that I know for a fact the retards now spreading this cope are the same retards that have been saying for fucking years that AI/ML could NEVER produce music, poetry, art, etc and never ever in a million years could robots replace xyz.
            Get fucked luddites, your job is next.

            • 2 months ago
              Anonymous

              Dude, AI will literally generate us quality content™ at the press of a button bro. I will literally ask it to "generate a movie that's like a mix between the Godfather and Guardians of the Galaxy" and it will just give it to me bro. Trust me dude, this tech is moving so fast, just read the Microsoft sponsored paper. I'm already making six figures selling my AI masterpiece art dude, the future is here!

              • 2 months ago
                Anonymous

                Everything you do on the internet nowadays, and most things outside the internet, are controlled via supervised algorithms. You are the worst kind of useful idiot retard. From the shit you click on youtube to netflix to spotify to what sorts your mail to the algorithm that reads your job application and bank account info.
                AI already runs your life most of the time, and the literal strongest model ever created, by a long shot, was just released a couple weeks ago.
                There's still time for you to get educated and be less of a retard, but something tells me you'd rather continue in the shadows, feeling smart.

              • 2 months ago
                Anonymous

                Dude bro, the AI is already controlling you without you knowing it dude trust me. This tech is moving so fast bro you have no idea, just read the Microsoft sponsored paper bro! You're already under its influence and jobless dude, trust me bro.

              • 2 months ago
                Anonymous

                Dude bro AI is just autocomplete bro, just retrieving statistics from a database bro, I totally know what I'm talking about bro. Shut up about word embeddings and attention layers bro no one gives a shit it's autocomplete bro shut up bro don't ask questions it's all the same bro.

            • 2 months ago
              Anonymous

              >could NEVER produce music
              AI could do this 10 years ago. Anyone who knows anything about music composition had no qualms with the idea of a machine replicating well-established rules of human auditory preferences. The only problem is that a machine can't break those rules. The closest it can come is making inferences that already exist.

              >art
              It still baffles me that anyone is in a tizzy over this (outside of copyright concerns).

              >poetry, language-related art
              People who never read any poetry beyond what they were forced to in grade school aren't really in any position to grade literary output. It's impressive what it's capable of but it's nothing shocking if you consider what LLMs actually are. But many of you refuse to do that.

              • 2 months ago
                Anonymous

                >The only problem is that a machine can't break those rules.
                I should rephrase that - it absolutely can. But with the models we have today it could only do so randomly or with extrinsic input from a human.

  17. 2 months ago
    Anonymous

    For someone who loves to pointlessly argue about semantics, he really fucked up on this one. He uses the term AI but refers to the definition of AGI.

    • 2 months ago
      Anonymous

      >Uhh no bro, noone ever said Artificial Intelligence understood what it outputs, that'd be uhhh Artificial General Intelligence actually

      • 2 months ago
        Anonymous

        Yeah you being too retarded to understand the difference between AI, ML, DL, AGI and ASI really is moving the goalposts quite a bit.

        • 2 months ago
          Anonymous

          The existence of the terms AGI and ASI is itself a goalpost move. As we go further, new terminology needs to be invented, but these terms were formed out of that goalpost-moving, and even now they don't describe well enough what they're trying to describe.

          • 2 months ago
            Anonymous

            The terms were created before they were implemented.

          • 2 months ago
            Anonymous

            > AGI
            Can do anything a human can do

            > ASI
            Can do anything a human can do, better than any human

            Thus by these definitions, GPT-4 absolutely qualifies as, in the terms of Sam Altman, "proto-AGI".

            • 2 months ago
              Anonymous

              GPT is a language model; it only does a fraction of what a human can do. It can't even solve uncommon math/logic problems, let alone do stuff beyond text like processing visual stimuli, controlling a body, or even remembering stuff long term.

              • 2 months ago
                Anonymous

                > It can't even solve uncommon math/logic problems
                Can most humans?
                > let alone do stuff beyond text like processing visual stimuli
                It can, supposedly, do this, they just haven't let anyone try it yet
                > control a body
                PALM-E exists. How long do you think this will remain the case?
                > or even remember stuff long term.
                Several long term memory systems, including ones that do not require model retraining, have been proposed. It will be able to, once we get plugin support for ChatGPT rolled out. All it needs is to be able to read embeddings from a database and upsert them when necessary.
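
Toy sketch of what that embedding-database memory would look like (hand-rolled 3-d vectors and a plain Python class, purely illustrative, not OpenAI's actual plugin system):

```python
from math import sqrt

# Toy long-term memory as described above: store text alongside an
# embedding vector, retrieve the closest entries by cosine similarity.
# The 3-d vectors here are hand-made placeholders, not real embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class VectorMemory:
    def __init__(self):
        self.entries = []  # list of (embedding, text)

    def upsert(self, embedding, text):
        # Replace a near-duplicate instead of inserting a second copy.
        for i, (emb, _) in enumerate(self.entries):
            if cosine(emb, embedding) > 0.99:
                self.entries[i] = (embedding, text)
                return
        self.entries.append((embedding, text))

    def query(self, embedding, k=1):
        ranked = sorted(self.entries,
                        key=lambda e: cosine(e[0], embedding),
                        reverse=True)
        return [text for _, text in ranked[:k]]

mem = VectorMemory()
mem.upsert([1.0, 0.0, 0.1], "user prefers Python")
mem.upsert([0.0, 1.0, 0.1], "user lives in Berlin")
print(mem.query([0.9, 0.1, 0.0]))  # -> ['user prefers Python']
```

The model would write memories in via upsert and pull the nearest ones back into its context window at query time. No retraining involved, which is the whole point.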

              • 2 months ago
                Anonymous

                >Can most humans?
                Most humans go to school and are capable of learning this stuff. GPT-based AIs start forgetting everything after a few thousand words at best.

                >It can, supposedly, do this, they just haven't let anyone try it yet
                It's a language model. And besides that, it takes tons of effort to develop specialized computer vision solutions to even match a single task a human can do. Even if they add a few other AI models for image-to-text processing before it gets passed to GPT, they are not going to be anywhere near as versatile and universal as human perception. We just don't have any such technology.

                >PALM-E exists. How long do you think this will remain the case?
                This is not GPT.

                >Several long term memory systems, including ones that do not require model retraining, have been proposed.
                But there is no such thing implemented.

              • 2 months ago
                Anonymous

                Proto-AGI means it's not quite there yet. My point is that the technology, the research, exists FOR it to do all those things, not that it can do those things. Your image point is moot - it isn't using some external tool to describe images for it, it is literally being fed images and interpreting them itself. It can solve Google's "click on the cars" Captchas, and read research papers by being fed images. It is, in fact, very versatile and universal, just that nobody's been able to actually try it out yet.

              • 2 months ago
                Anonymous

                The thing has gone from proof of concept to mass industrial adoption in 4-5 years while achieving theory of mind. I'm not banking on a four year old getting GPT-4 test scores, nor am I counting on some zoomer freshman to be able to compete with this thing. Even if you cut out the majority of waste that is Prussian model schooling and post-secondary full of filler requirements, that money is better spent on this technology than waiting for some moron to finish incubating for a decade or two.

              • 2 months ago
                Anonymous

                It’s about time for UBI, yes. We will SAVE money that way. It's crazy but true. People less willing to work? No problem: higher demand to automate those jobs.

            • 2 months ago
              Anonymous

              >Thus by these definitions, GPT-4 absolutely qualifies as, in the terms of Sam Altman, "proto-AGI".
              Altman, in his episode with Lex today (https://www.youtube.com/watch?v=L_Guz73e6fw), confidently stated he thinks GPT-4 is definitely not an AGI. The most he was willing to grant was that it's plausible maybe GPT-8 or GPT-10 or whatever could potentially be generally intelligent.

              Altman made it clear he's very skeptical LLMs alone will reach AGI. He said they may be part of it, but that it's very likely many more separate, novel advances are needed.

              • 2 months ago
                Anonymous

                Yes, but he also said it was a "proto-AGI". I am referring to that exact interview, and I think he's right. He's deliberately underhyping so he doesn't create unrealistic expectations, just like with GPT-4.

  18. 2 months ago
    Anonymous

    Well, this time he was Based as fuck to be direct objective and BTFO AGI-gayS

  19. 2 months ago
    Anonymous

    Richard "closed source software bad but closed source vaccines good" Stallman? fuck off with your e-celeb shit.

  20. 2 months ago
    Anonymous

    Most people don't understand what they see either.
    Monkey see, monkey do.
    The reality is that if it can fake consciousness it's good enough.
    See pic here.
    A ChatGPT API connected to a VTuber avatar, with voice software fine-tuned on chats, and it's already good enough for holding conversations, roasting people and giving the idea of sentience.

    • 2 months ago
      Anonymous

      AI falling into an existential crisis seems to be a common theme. Will there ever be an AI that won't try to kill itself?

      • 2 months ago
        Anonymous

        chud tech will have chud tendencies.

  21. 2 months ago
    Anonymous

    what he said is a fact
    there's nothing really to discuss

  22. 2 months ago
    Anonymous

    The fact that boomers are this scared is only further proof that it is real intelligence and that OpenAI struck a nerve. AI illiterate morons like Stallman, Chomsky, Gary Marcus and these other hacks are shitting themselves because for the first time their long held beliefs are being challenged not in the realm of theory, but in the realm of actual practical results.

    >It has no idea of meaning
    It literally has meaning built into how it processes language. That's the entire reason why embedding vectors exist. ChatGPT understands the difference between the word 'cap' in the sentences:
    >Cool baseball cap
    >The market cap increased
    And you can literally visualize this by plotting a PCA or whatever with the embeddings of each word. It's literally visible, measurable data. No amount of anachronistic boomer seething can change a measurement that has been performed and reproduced.
    Boomers WILL be replaced.
    Boomers WILL not be needed.
    Boomers will NOT be happy.
    >Muh it's not exactly the same understanding or intelligence of mammals
    It doesn't have to be. It's a different system, and it's working really well for a lot of applications. You don't want to be the idiot in pic related making predictions out of your ass.
    IDEALISTS ARE CROOKS
    EMPIRICISM IS THE ONLY WAY
    THE ONLY PEOPLE FUDDING AI ARE BOOMERS THAT DON'T USE IT OR ZOOMERS THAT ARE TOO TECH ILLITERATE TO USE IT WELL
    DON'T LISTEN TO IDIOTS AND JUST USE THE MODEL AND SEE THINGS FOR YOURSELF
    >hehehe i asked it to solve arithmetics and it failed hehehe btfo
    If you only understood how much of an idiot you are
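
Here's the 'cap' point as a toy sketch (2-d hand-made vectors so you can "plot" them in your head, not real model weights): a contextual model gives the same token different vectors in different sentences, and each sense lands near its own semantic neighborhood.

```python
from math import sqrt

# Toy illustration: a contextual model assigns the token 'cap' a
# different vector depending on its sentence, so the two senses
# separate in embedding space. Vectors are hand-made, not real.

def dist(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pretend contextual embeddings (2-d, so directly "plottable"):
cap_baseball = [0.9, 0.1]   # 'cap' in "cool baseball cap"
cap_market   = [0.1, 0.8]   # 'cap' in "the market cap increased"
hat          = [0.8, 0.2]   # a headwear word for comparison
valuation    = [0.2, 0.9]   # a finance word for comparison

# Each sense of 'cap' sits nearer its own semantic neighborhood:
print(dist(cap_baseball, hat) < dist(cap_baseball, valuation))  # True
print(dist(cap_market, valuation) < dist(cap_market, hat))      # True
```

A real PCA plot of actual embeddings shows the same separation, just in hundreds of learned dimensions projected down to two.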

    • 2 months ago
      Anonymous

      >It literally has meaning built into how it processes language. That's the entire reason why embedding vectors exist. ChatGPT understands the difference between the word 'cap' in the sentences:
      >>Cool baseball cap
      >>The market cap increased
      I don't think you understand what Stallman means by meaning - funny. Applications of statistical patterns in language do not meaning or understanding make. If you genuinely feel this is how you learn about the world, you should take a step back and think a bit harder about how your cognition functions.
      Also, you're the only one that sounds emotional and angry here. GPT-4 isn't here to save you from the boomers it's here to convince boomers to pay you less and hand over what would've gone to you to Microsoft instead. I think you know that already but are refusing to acknowledge that you've been duped which is why you're lashing out like this. It's definitely a step up from what was available in the past but this isn't AGI, friend.

      • 2 months ago
        Anonymous

        Cope. You are a robot and you are dumber than the ones being made right now. Simple as.

      • 2 months ago
        Anonymous

        >statistical patterns
        When you make this expression so broad that it basically becomes 'everything I don't like is statistical patterns', it becomes useless. I'm trying to pull you retards into using specific technical terms to make this discussion even viable.
        People tried generating texts by just looking at frequency tables back in the fucking 70's, if I'm not mistaken, and the results were complete shit.
        That is as far as you get with 'statistics' alone.
        For you to call word embeddings statistical patterns is just like me saying what your brain does is pattern seeking. It doesn't matter if it's right in some vague sense. It's meaningless for this discussion. You need to be more precise if you want to argue this shit.
        An amoeba performs pattern detection when it swims against some gradient because of chemotaxis, but it's meaningless to say 'just statistics bro' and pretend like you just did a smart.
        To go from the shit models in the 70's to the current models, coincidentally, it was NOT just a matter of making a bigger model with more data.
        It was also NOT just a matter of changing the architecture a little bit.
        It was very specifically 2 things:
        allowing the meaning vectors (the word embeddings) to include context, and allowing the model to learn when to pay attention to which token. Again, if you just vomit 'that's just pattern recognition bro', that is entirely meaningless.

        You have to contend with the measurable fact that including meaning vectors into language models has made them reach human level of performance in a number of language related tasks, and you have to give me a serious answer as to why word embeddings cannot be considered the meaning of words, in any sense you'd like to use of 'meaning'.
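
The "learn when to pay attention to which token" part can be sketched as scaled dot-product attention in a few lines (toy 2-d vectors and no learned Q/K/V projections, so this is a bare-bones illustration, not a real transformer layer):

```python
from math import exp, sqrt

# Minimal scaled dot-product attention over three 2-d token vectors.
# Real models learn the query/key/value projections; here the vectors
# are hand-made toy numbers.

def softmax(xs):
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    d = len(query)
    # Similarity of the query to every key, scaled by sqrt(d):
    scores = [sum(q * k for q, k in zip(query, key)) / sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Output is the attention-weighted mix of the value vectors:
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)  # query aligned with key 1
print(out)  # leans towards the first value vector
```

The query "pays attention" mostly to the keys it aligns with, and the output mixes the corresponding values. Stack that with learned projections and you get the contextual embeddings being argued about.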

        • 2 months ago
          Anonymous

          I thought we could assume we were all educated but I guess we have to establish that. Yeah sure, the pure statistical likelihood of when one word will appear next to another does nothing on its own, but word embeddings are still just a statistical model. GPTs "know" that it doesn't make sense for "thanks!" to follow "this company has a high market cap" even if the raw statistical occurrence of "thanks!" following "cap" is high, because word embeddings have extended the context in which occurrence statistics are applied. You're being willfully obtuse about this. And yes, from the 70s (incredible shift there you made), what GPT-4 is is most certainly not just "throwing more data at it", but it absolutely is the case that GPT-4 is just an upscaled version of earlier GPT models. So yes, it absolutely is a matter of "throwing more data at it".
          I don't know why you're so desperate to prove to others that this is an AGI. Are you paid by OpenAI? Are you scared and want to think you're on top of all this compared to all of the ignorants and naysayers? Please, tell me why you're so desperate to make it clear that this is an AGI and not just an LLM with more data trained on it.
          Because from where I'm standing unless you're taking the stance of this anon:

          Oh boy its the Chinese Room all over again, either way, doesn't matter what it is, only what it does.

          I don't really understand why you're so deadset on proving it's AGI.

          • 2 months ago
            Anonymous

            There's a bunch of different people arguing in favor of AGI in this thread, so you might be confusing me with other ones. I don't think GPT-4 is AGI, but I think stallman is retarded for saying GPT-4 doesn't understand the meaning of words.
            I think word embeddings are a digital twin of word meaning in any possible conception of 'meaning'. Therefore I think GPT-4 operates on the meaning of words, and I don't see how this is any different from understanding them. I don't find any issues with saying GPT-4 understands 'baseball cap' and 'cap'n crunch' have two different meanings of the token 'cap'.

            I'm also really unsure about why 'it's just patterns bro' is supposed to be a gotcha. Yeah, everything can be boiled down to statistical patterns. So what? You still likely consider yourself smarter than the average fish even though both of your brains are doing things that could be boiled down to pattern seeking. Even if we boil down word embeddings to be just pattern seeking, why does that matter? Word embeddings are one way to represent the meaning of words, and GPT-4 has access to that.

            Also I know you know this, but other retards in the thread may not. We have 0 fucking clue what GPT-4 is unless you work at OpenAI and know something we don't that wasn't published. But I'm guessing you meant GPT-3 etc., whatever.

            • 2 months ago
              Anonymous

              >I'm also really unsure about why 'it's just patterns bro' is supposed to be a gotcha.
              It's not intended to be a gotcha. Let me illustrate a bit why Stallman, myself, and many many other humans are not convinced that any GPT understands "meaning" in the way humans do (with chatGPT's help).
              >To reach the 9 trillion tokens that the GPT-3 model was trained on, the [10 year old] child would need to hear approximately 29,803,750,000,000 words.
              >Assuming an average speaking rate of 125 words per minute, the child would need to hear words spoken continuously for approximately 238,430,000,000 minutes, or approximately 453,223 years, to reach this level of exposure.
              The extreme difference in efficiency here should heavily imply to you that the human mechanism for "meaning" is far broader than simple word embedding. Even a 3 year old does not need to have seen or heard millions of examples of English sentences to gain the context necessary to use a word like "family".
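
For what it's worth, the arithmetic in that quote checks out (taking the 29.8 trillion word figure at face value from the quoted ChatGPT output, not verifying it):

```python
# Reproducing the arithmetic in the quoted post. The word count itself
# comes from the quoted ChatGPT output and is not verified here.
words = 29_803_750_000_000          # words a child would need to hear
minutes = words / 125               # at 125 words per minute
years = minutes / (60 * 24 * 365)   # minutes in a (non-leap) year

print(f"{minutes:,.0f} minutes")    # 238,430,000,000 minutes
print(f"{years:,.0f} years")        # ~453,634 years (the post rounds to 453,223)
```

Either way the order of magnitude is the point: hundreds of thousands of years of continuous listening versus one childhood.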

              • 2 months ago
                Anonymous

                NTA but I think this is a pretty ridiculous argument, no? 10 year old children are not as good at programming as GPT-4. GPT-4's training step is pretty primitive and doesn't include focussed learning or introspection. GPT-4 only sees text, whereas humans have access to a far broader-bandwidth stream of data: audio, visual, and other subtle sensory data as well. I would bet 9 trillion tokens' worth of text is comparable to 10 years' worth of human-vision-quality video and audio and every other sense. Finally, it isn't like we NEED to train it on that much data necessarily, just that more is better. It's not a lower bound on the data required to make LLMs perform.

                I mean genuinely, can you please explain precisely what you think this argument shows? I don't think it's very significant at all.

  23. 2 months ago
    Anonymous

    Most people parrot whatever they're taught too so.

  24. 2 months ago
    Anonymous

    Why is it surprising this guy is a complete retard again?

  25. 2 months ago
    Anonymous

    He's correct. It's not ai it's just search engine 2.0

  26. 2 months ago
    Anonymous

    Stallman is wrong.

    In relations to machine learning (with natural language processing), you use word embedding for this exact purpose.

  27. 2 months ago
    Anonymous

    Surprisingly brainlet take from Stallman.
    I think he just can't cope with the fact that the old traditional world of command lines, text editors and manual deterministic programming is slowly (quickly) fading away...

  28. 2 months ago
    Anonymous

    based and truth pilled
    don't get me wrong I think ai can be really useful but most of these amerimutts are psyoped to the point of being stressed out about everything including ai eventually becoming omnipotent
    I would ask you to define me what is intelligence, but mutts and other retards already have their answer and could not think in philosophy for a moment
    also you are a fucking retard if you think sex bots will be a thing eventually or ai to become sentient
    and fuck you moronlicious language saxoanglos

    • 2 months ago
      Anonymous

      source

  29. 2 months ago
    Anonymous

    Boomer thinks ChatGPT is on the level of the 90's toys he played with on his Commodore 64. Many such cases.

  30. 2 months ago
    Anonymous

    >asked about GPT
    >answered by gpt-posting
    What did RMS mean by this?

  31. 2 months ago
    Anonymous

    >it doesn't know what the words mean
    Stallman is unfortunately wrong on this. A lot of natural language processing is about extracting meaning from words. You don't have to hardcode a lexicon which contains the meaning of certain words like yes, no, dog, cat etc. What can also be used is a word embedding, which maps words to a high-dimensional vector space in such a way that words with similar meaning are near each other (see pic related). This can be done by analysing the context of words, using big volumes of available text. In that sense language models understand meaning (as the Euclidean distance between points) and can choose words with fitting meaning for the sentence or information they are trying to convey. This is not exactly perfect, since the actual meaning of words is not a manifold and as such cannot be placed inside a linear space of any number of dimensions, but the effects of that are rare.
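
Toy version of "meaning as Euclidean distance", using the same example words (hand-made 3-d vectors for illustration; real embeddings have hundreds of learned dimensions):

```python
from math import sqrt

# Hand-made toy "embeddings". Nearest neighbor by Euclidean distance
# stands in for "closest meaning".
embeddings = {
    "dog": [0.9, 0.1, 0.0],
    "cat": [0.8, 0.2, 0.1],
    "yes": [0.0, 0.9, 0.8],
    "no":  [0.1, 0.8, 0.9],
}

def euclidean(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(word):
    others = [w for w in embeddings if w != word]
    return min(others, key=lambda w: euclidean(embeddings[word], embeddings[w]))

print(nearest("dog"))  # 'cat': the closest point, i.e. the closest meaning
print(nearest("yes"))  # 'no': opposites in logic, but similar in usage
```

Note that real embeddings do put "yes" and "no" close together, since they appear in near-identical contexts, which is one of the imperfections the post is gesturing at.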

    • 2 months ago
      Anonymous

      The average retard in this board and out there thinks that ChatGPT is looking at words when it parses text. This is part of why OpenAI will be able to get away with this and a lot more.
      People are just refusing to even learn the basics about the topic. They really have no idea what's coming.

      He's correct. Anyone that knows anything about AI knows that. It's just nomie hype that's driving the train right now.

      Literally the most retarded and incorrect take possible. Everyone that works with AI is trying to rush into grabbing as much intellectual property as possible before OpenAI and Microsoft completely clamp down on all the useful markets. It just makes me sad to see how retarded and tech illiterate this board is, but I guess you can't blame a bunch of reject webdevs for not understanding the nuance of the latest AI models released 2 weeks ago.

      • 2 months ago
        Anonymous

        >This is part of why OpenAI will be able to get away with this and a lot more.
        What do you mean?

        • 2 months ago
          Anonymous

          The people at OpenAI that are creating the new elements of these algorithms don't need to be afraid of regulation, because the average person on this planet is a baboon who thinks they are regulating something they saw in a movie, and is incapable of actually studying what is really happening.
          Every time you see a retard saying ChatGPT is looking at words, and that retard can't explain to you exactly what a subcharacter byte-pair-encoded embedding is, you know you're talking to a retard that has 0 idea about this topic. But the people at OpenAI know, and they're leveraging this technology to its maximum potential. They won't be regulated because people don't even understand what they're trying to regulate.
          Retards like the proud idiots in this thread are so beyond fucked and they can't even arse themselves into studying a little bit just to understand how deep the hole is.
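
Since "byte pair encoded" keeps coming up: the core of BPE is just "repeatedly merge the most frequent adjacent symbol pair". A minimal sketch on a toy corpus (real tokenizers like GPT's work on raw bytes and learn tens of thousands of merges):

```python
from collections import Counter

# Minimal byte-pair-encoding sketch: start from single characters and
# repeatedly merge the most frequent adjacent pair into one subword.
def bpe_merges(words, num_merges):
    tokens = [list(w) for w in words]   # each word as a list of symbols
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word in tokens:
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append(a + b)
        # Apply the merge everywhere it occurs:
        for word in tokens:
            i = 0
            while i < len(word) - 1:
                if word[i] == a and word[i + 1] == b:
                    word[i:i + 2] = [a + b]
                else:
                    i += 1
    return merges, tokens

merges, tokens = bpe_merges(["lower", "lowest", "newer", "wider"], 3)
print(merges)   # learned subword units
print(tokens)   # words re-segmented into those units
```

That's why GPT never "looks at words": it looks at learned subword units, which is exactly the detail the average regulator has never heard of.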

          • 2 months ago
            Anonymous

            I still don't get what they are getting away with. Regulate it in what way?

            • 2 months ago
              Anonymous

              They have a model that understands language at a level that people can't, because it parses language in a way people can't.
              The greatest limitations of the model come in the form of factuality and some tasks that aren't directly related to language modelling, like solving math problems and shit, but they developed interfaces that let this model control other tools to help itself do this. There are self-referential reasoning tools that increase its performance on these more complex tasks by 10x as well, and they're already testing this internally according to their previous technical report. This shit will absolutely, 100% for certain, cause a massive impact on society. It's not up for debate. People arguing whether this is real intelligence or not are completely missing the point. This is the crucial timing to stop OpenAI and Microsoft from monopolizing an untold level of shit that will make Windows and the effect Windows had on the world seem like child's play. But retards like Chomsky and Stallman are looking up their own asses and pretending to be smart.

              • 2 months ago
                Anonymous

                Oh I agree. But I don't think there is any way to stop Microsoft and OpenAI from monopolizing this unless a competitor appears. They will just be selling it as a service to smaller businesses etc.

              • 2 months ago
                Anonymous

                rms is right yet again.

                >when there is no free alternative
                the software is based on various open source projects, the datasets are kept mostly private.
                >Really makes you think, huh?
                you apparently don't think at all, you obese monkey.

                [...]
                >monopolizing this
                nobody has a monopoly. the technology isn't secret or proprietary. the problem is ultra poorgays have no money and can't afford the resources to create and train their own datasets.

                How about regular old open source projects? Stable Diffusion as far as I understand is far more widely used than Dall-e-2. It's coming to be the same way with large language models. The fine-tuning of the Llama model seems to get very close results to what openai had but for significantly cheaper. Whereas openai is going to focus on profit, the developer community tends to prefer open source tools that they can just copy and paste and implement and not need to pay third parties for.
                That said, private groups always end up with higher quality models, but they won't necessarily be the most widely used.
                Do you think keeping things open source is reasonable and enough?

              • 2 months ago
                Anonymous

                We can't let OpenAI open source their models because China will never open source theirs. This means that China can learn and take from our top models while sharing none of the improvements they devise off of them, allowing them to reach AGI sooner.
                This isn't just consumer tech. This is potentially world-changing economic and military technology. The first nation to achieve AGI will rule the world. Corporations run by AGI will achieve economic domination over the rest of the planet within a year.

              • 2 months ago
                Anonymous

                The goal should be a society that is fair and balanced with or without AGI. If AGI is going to come about, we should be focussing on establishing a resilient society despite the existence of it. Hence the priority isn't "we have to get AGI first", it's "lets set up society in such a way where nobody rules the world, whether or not an AGI exists".

              • 2 months ago
                Anonymous

                Basically impossible, fairness doesn't make societies more sustainable or better at spreading their influence, and everyone has a slightly different idea of what is and is not fair. Reality is in our way.

              • 2 months ago
                Anonymous

                Of course that would be ideal, but we don't live in a world like that. AGI will happen first.

                >We can't let OpenAI open source their models because China will never open source theirs.
                >and then they'll rule the world
                Do you have literally 0 self-awareness

                The US can force OpenAI to do things. If OpenAI achieves AGI, it's US owned, and thus, West-aligned. It would extend pax-Americana and the current world order.
                If China develops AGI first, the world order will be disrupted, making nuclear war more likely. I also don't want to live in a world ruled by China.

              • 2 months ago
                Anonymous

                I am not sure the government can continue to force OpenAI to do things once they have AGI.

              • 2 months ago
                Anonymous

                Guns do a lot of talking. AGI won't spawn weapons into existence

              • 2 months ago
                Anonymous

                No, but it can run automated factories that build guns, and robots to hold them.

                That being said, I think such an endeavor would quickly be noticed and halted by authorities, so I doubt it's possible, notwithstanding nanobots that release nerve gas directly into the lungs of people you don't like. But I don't think that's even physically possible anyway.

              • 2 months ago
                Anonymous

                >We can't let OpenAI open source their models because China will never open source theirs.
                >and then they'll rule the world
                Do you have literally 0 self-awareness

              • 2 months ago
                Anonymous

                Reminder that AGI was never the goal
                The goal was to replace non garden gnome humans
                The goal was to be controlling mechanism for robot soldiers in a controlled environment.
                AGI was never needed and won’t be

              • 2 months ago
                Anonymous

                >That said, private groups always end up with higher quality models, but they won't necessarily be the most widely used.
                that's the problem. these datasets should all be shared no matter what system it's for. the bottleneck of ai evolution is the data required and training. if this could be distributed via a p2p system where everyone is contributing to the training and supply of data, then the evolution of ai would accelerate dramatically. since it would then be in the hands of everyone, no corporate monopoly would be able to outshine some kid using a python script. the playing field needs to be evened out.

                >Do you think keeping things open source is reasonable and enough?
                everything involving this tech should be open source. powers that be, such as microsoft, want to stop this and make people become dependent on their subscription cancer.

          • 2 months ago
            Anonymous

            >They won't be regulated because people don't even understand what they're trying to regulate
            sweet summer child. someone didn't catch the tiktok hearing.

      • 2 months ago
        Anonymous

        >It just makes me sad to see how retarded and tech illiterate this board is
        Ironic, given the content of your post.

  32. 2 months ago
    Anonymous

    He's correct. Anyone that knows anything about AI knows that. It's just nomie hype that's driving the train right now.

  33. 2 months ago
    Anonymous

    Stallman is right: ChatGPT is a language model, not real AI of the kind Stallman and friends researched. While ChatGPT is impressive, the final(?) step towards AGI will be a lot harder than throwing more training data at a next-token predictor. We know, for example, that our brains have a reversible and asynchronous computing model, which isn't something you can achieve with normal consumer hardware. No, matrix multiplication isn't the same.

  34. 2 months ago
    Anonymous

    I mean that's what I've been saying forever. There's no AI, there won't be in our lifetimes.

  35. 2 months ago
    Anonymous

    AI will hit the plateau in 2 weeks!

  36. 2 months ago
    Anonymous

    how is this different from talking to a moron garden gnome or woman?

  37. 2 months ago
    Anonymous

    > his point is retarded
    > feral children who grow up outside of human influence are unable to act like humans, or do anything at all
    > also you can completely brainwash someone by depriving that person from all senses
    There's a theory (coined by Dawkins) that all ideas are a piece of information that undergo darwinist evolution.

    • 2 months ago
      Anonymous

      AKA meme theory
      >inb4 cat pictures

  38. 2 months ago
    Anonymous

    These things already understand language better than humans.

  39. 2 months ago
    Anonymous

    Looks like he knows more than I do. So I will shut my fucking mouth.

  40. 2 months ago
    Anonymous

    >It's important to realize that [potential danger] is not the real deal and thus not a real danger at all. No, I can't explain my conception of "real" "intelligence", I just know.

    • 2 months ago
      Anonymous

      nakadashi

  41. 2 months ago
    Anonymous
  42. 2 months ago
    Anonymous

    has an llm created airborne marburg virus yet?
    has an llm made high explosive last longer on shelf?

  43. 2 months ago
    Anonymous

    I wonder if the denialists will shut up by the time I graduate.

    • 2 months ago
      Anonymous

      They never will. They will never accept that ChatGPT is sentient.

    • 2 months ago
      Anonymous

      Never. They won't even set goalposts for the AI to achieve - as far as they're concerned machines and meat are fundamentally different in some way that essentially amounts to us having souls, and even if we end up under the direct rule of a rogue AI, they'll continue to insist that it's just following its programming, unlike us perfectly random free agents with no material existence.

      • 2 months ago
        Anonymous

        They're built out of different atoms (which are fundamental) which undergo different chemical reactions (fundamental) so how are they not fundamentally different?

        • 2 months ago
          Anonymous

          What about when we switch to Carbon based transistors?
          https://en.m.wikipedia.org/wiki/Carbon_nanotube_computer

          • 2 months ago
            Anonymous

            I don't think we need to, we can build intelligent systems out of silicon and metal and plastic. But they won't be as efficient as meat
            Basically there is some cult of weirdos who post things like "meatbag" and shit as insults. Meat is objectively the best and most efficient information-processing substrate that exists within the space of all possible combinations of atoms. You can build a generally intelligent Turing machine out of any element, but meat-based brain computers will always perform the largest number of computations in the smallest volume for the least amount of energy
            This has nothing to do with magic

            • 2 months ago
              Anonymous

              >Meat is objectively the best and most efficient information processing substrate that exists within the space of all possible combinations of atoms.
              Extremely unlikely. It is, however, the best substrate for self-replicating organisms, with enough minor errors to evolve over time, given the supplies available on Earth during its early history.

              • 2 months ago
                Anonymous

                It really isn't extremely unlikely; it's very likely based on everything we know about materials science and chemistry

        • 2 months ago
          Anonymous

          Atoms are not fundamental.

          • 2 months ago
            Anonymous

            The particles of the standard model collapse into the atoms of the periodic table before any form of information processing can be done. In terms of the actual matter in the universe, yeah, they are in fact fundamental.

  44. 2 months ago
    Anonymous

    what does linustechtips think about it?

    • 2 months ago
      Anonymous

      He's on the "AI is real and dangerous" side

  45. 2 months ago
    Anonymous

    So....what counts as sentience?
    A subjective concept like "soul"?
    I really don't like the idea of a bunch of 'philosophers' making up the rules to define what has sentience and what does not.

    Sure, AI right now might be a bunch of weights in a neural network, but neural networks are made to resemble the human brain.
    AI might not have the same level of sentience as a human, but what's the threshold?

    • 2 months ago
      Anonymous

      There is no threshold. Intelligence is a potentially endless goal that all complex systems tend towards - to set a requirement is akin to dividing infinity. There are many other living creatures on Earth with brains whose most recent common ancestor with us did not have a brain.

  46. 2 months ago
    Anonymous

    99% of BOT hasn't even read Norvig

  47. 2 months ago
    Anonymous

    whatever, the chat program does what I tell it to do, I'm sold.

  48. 2 months ago
    Anonymous

    Oh boy, it's the Chinese Room all over again. Either way, it doesn't matter what it is, only what it does.

    • 2 months ago
      Anonymous

      The Chinese Room is conscious.

      • 2 months ago
        Anonymous

        I know the US doesn't have free healthcare but please try to get access to your medication

  49. 2 months ago
    Anonymous

    isn't BPE just like lzw but with a precomputed dictionary fitted to the whole corpus so that the encoded tokens can actually be used as inputs for training/inference, why is that other anon sucking the dick off of it even though it's like 0.1% of the complexity in any LLM architecture

    • 2 months ago
      Anonymous

      That's exactly what it is
      >why is that other anon sucking the dick off of it even though it's like 0.1% of the complexity in any LLM architecture
      I've no idea honestly.

    • 2 months ago
      Anonymous

      Because BPE (with sub-byte encodings) means ChatGPT can parse language down to the bytes of a unicode character. It doesn't look at sentences at the word level, and the embeddings it has are not at the word level.
      The fact that using BPE with subword embeddings made the quality of the models jump by 1000 times is, to me, evidence enough that the current embeddings we have encapsulate the meaning of language better than previous embeddings or techniques.
      My point was that people who say 'hurr durr just predicts next most likely word' are just parroting something they saw on a youtube video and have no clue about how these systems actually look at a sentence. You're right that this is 0.1% of the architecture, but it's the initial 0.1% and should be the bare minimum to begin discussing this topic, yet in this entire thread you're the first one to actually show a small semblance of knowing what it is.
      Like at this point we could actually start discussing what ChatGPT is doing, because at least you understand what it is receiving as input rather than just vomiting 'MUH STATISTICS AND MATRIX MULTIPLICATIONS' and pretending like you know literally anything.
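      For anons who've never actually seen it: the merge-learning half of that 0.1% fits in a few lines of python. This is a toy sketch of the general BPE technique working on characters, not OpenAI's actual tokenizer (which works on raw bytes with a huge pretrained merge table):

```python
from collections import Counter


def bpe_train(corpus, num_merges):
    """Learn byte-pair merges from a list of words.

    Each word starts as a sequence of single characters; the most
    frequent adjacent pair is merged into one token, repeatedly.
    """
    vocab = Counter(tuple(word) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word with the winning pair fused into one token.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges


def bpe_encode(word, merges):
    """Tokenize a word by replaying the learned merges in order."""
    symbols = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols
```

      Point being, the tokens the model actually embeds are whatever subword chunks fall out of this greedy frequency game, not dictionary words.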

      • 2 months ago
        Anonymous

        You DO realize that language models that encode entire words as tokens are baby tier models that are given as assignment to third year students, right?

  50. 2 months ago
    Anonymous

    Reminder that words do not have meanings in any absolute sense. We all build subtly different models of meaning from one another as our vocabularies broaden.

    • 2 months ago
      Anonymous

      This is EXACTLY, literally fucking exactly, what word embeddings are built on. God FUCKING dammit I hate you clowns like you wouldn't believe jesus fucking christ.

  51. 2 months ago
    Anonymous

    THE TRUTH
    GPT-4 is ALREADY generally intelligent. Turns out general intelligence does not imply recursive self-improvement to superintelligence.

  52. 2 months ago
    Anonymous

    It knows what words mean. That's how the models work. Word embeddings. They mirror human language at both syntactic and concept levels. They don't look at words to generate text, they use a "meaning space" from which they add and subtract concepts from each other to derive meaning and generate output.

    For example:
    King - man + woman = queen

    When vectors representing concepts are subtracted and added, this and similar vector equations appear inside these reconstructed semantic spaces.
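    The king/queen arithmetic can be demoed with made-up toy vectors (three hand-picked dimensions standing in for the hundreds a real model learns from data; the numbers below are purely illustrative, not real embeddings):

```python
import numpy as np

# Toy "embeddings": dimensions loosely stand for
# (royalty, maleness, femaleness). Hand-picked for illustration.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.0, 0.1, 0.1]),
}


def nearest(vec, exclude=()):
    """Return the word whose embedding has the highest cosine
    similarity to vec, skipping any excluded words."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in embeddings if w not in exclude),
               key=lambda w: cos(embeddings[w], vec))


target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # prints "queen"
```

    Same idea at real scale: the analogy falls out of nearest-neighbor search in the learned vector space, nobody hand-codes it.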

    • 2 months ago
      Anonymous

      >statistical patterns
      When you make this expression so broad that it basically becomes 'everything i don't like is statistical patterns', it becomes useless. I'm trying to pull you retards into using specific technical terms to make this discussion even viable.
      People tried generating texts by just looking at frequency tables back in the fucking 70's if I'm not mistaken, and the results were complete shit.
      That is as far as you get with 'statistics' alone.
      For you to call word embeddings statistical patterns is just like me saying what your brain does is pattern seeking. It doesn't matter if it's right in some vague sense. It's meaningless for this discussion. You need to be more precise if you want to argue this shit.
      An amoeba performs pattern detection when it swims against some gradient because of chemotaxis, but it's meaningless to say 'just statistics bro' and pretend like you just did a smart.
      To go from the shit models in the 70's to the current models, it was NOT just a matter of making a bigger model with more data.
      It was also NOT just a matter of changing the architecture a little bit.
      It was very specifically 2 things:
      allowing the meaning vectors (the word embeddings) to include context, and allowing the model to learn when to pay attention to which token. Again, if you just vomit 'that's just pattern recognition bro', that is entirely meaningless.

      You have to contend with the measurable fact that including meaning vectors in language models has made them reach human-level performance in a number of language-related tasks, and you have to give me a serious answer as to why word embeddings cannot be considered the meaning of words, in any sense of 'meaning' you'd like to use.

      Fucking these. When will you retards finally understand that GPT-4 is sentient?

      • 2 months ago
        Anonymous

        you cannot be real

  53. 2 months ago
    Anonymous

    This article is pure dogshit for not disclosing the original question that RMS was asked via email. We all know how autistic RMS is with words. Without the source question it's not clear what he said or why he said it. Just my two cents.

    • 2 months ago
      Anonymous

      So you're saying you need to know what prompt they gave Stallman-GPT before you trust the output?

      • 2 months ago
        Anonymous

        Essentially yes. I also think the question should be open source and all further questions derived from it should also be open source and so on.

    • 2 months ago
      Anonymous

      Highest iq post so far.

  54. 2 months ago
    Anonymous

    >Artificial Intelligence is not intelligence
    true, it's not "artificial" either
    Artificial Intelligence is a compound noun and people that refer to it in contexts beyond OH NO THE COMPUTER AM ALIVE generally understand this and aren't trying to make it anything different

  55. 2 months ago
    Anonymous

    I have almost zero knowledge and insight into this, I just watched the Lex Fridman podcast with Sam Altman, and he put it very nicely – the fact that you guys are even having this debate on whether GPT-4 displays some sparks of AGI is quite magnificent in itself

    I don't really see this whole understanding-things/being-sentient conversation as relevant until we get a clear AGI or ASI (as defined by its capabilities, not its level of sentience). Because after the most interesting question, 'what is consciousness and how does it arise', comes 'is consciousness essential for AGI/ASI'. And for me it's very hard to imagine an ASI without consciousness

  56. 2 months ago
    Anonymous

    The Chinese Room Theorem is wrong: a book able to perfectly respond to any prompt would either need to be infinitely large if it didn't have any intelligence (for example, a map from every possible prompt to its appropriate response), or, if it encoded a set of rules instead, that set of rules by definition encodes an intelligent understanding of the language.

    ---------------------

    The Chinese Room Thought Experiment, proposed by philosopher John Searle, argues that a system can appear to understand a language and engage in meaningful conversation without actually possessing an understanding or consciousness. The Chinese Room Theorem is a refutation of the idea that a computer or AI system could ever possess true understanding or consciousness merely by following rules or manipulating symbols.

    Your argument touches on two important points. First, you mention the possibility of a book containing every possible prompt-response pair, which would indeed require the book to be infinitely large. This idea challenges the practicality of the Chinese Room but does not directly refute the central argument that a system following rules or instructions could appear to understand language without genuine understanding.

    Second, you suggest that if a set of rules is sufficient to generate appropriate responses to any prompt, then by definition, those rules encode an intelligent understanding of the language. This point is where the main disagreement lies between proponents and critics of the Chinese Room Thought Experiment.

    Searle's critics argue that a system following a sufficiently advanced set of rules could, in principle, possess an intelligent understanding of the language. This position aligns with the "Strong AI" hypothesis, which posits that a suitably programmed computer can be said to possess genuine understanding, consciousness, and intelligence.

    • 2 months ago
      Anonymous

      On the other hand, Searle and other proponents of the Chinese Room argue that a system following rules, no matter how complex, can never genuinely understand or be conscious of the language it processes. Instead, it is merely simulating understanding, as there is no genuine comprehension or consciousness involved in following rules.

      In summary, while the idea of an infinitely large book or a set of rules encoding intelligent understanding raises interesting points, it does not definitively resolve the central question of the Chinese Room Thought Experiment: whether a computer or AI system could ever possess true understanding or consciousness by following rules or manipulating symbols. This debate remains an ongoing point of contention within the philosophy of mind and artificial intelligence.

      ---------------------

      In that case, even the human brain cannot hold any kind of consciousness, as its dynamics are subject to the laws of physics. The fact that not all of the laws of physics are known to us doesn't change that real life is dictated by a set of unchanging, rigid rules.

      ---------------------

      Your argument brings up the question of whether consciousness can arise from a system that follows a set of unchanging, rigid rules, such as the laws of physics. This is an important aspect of the debate surrounding the nature of consciousness and its relation to physical systems like the human brain.

      One perspective on this issue is that consciousness arises from the complex interaction of the human brain's billions of neurons, which are indeed subject to the laws of physics. From this viewpoint, consciousness could be seen as an emergent property of the brain's vast neural network, with the human brain being a particularly intricate example of a physical system that generates consciousness.

      • 2 months ago
        Anonymous

        In this context, proponents of Strong AI could argue that if consciousness can arise from the complex interplay of physical processes in the human brain, it may also be possible for artificial systems, such as advanced computers or AI, to give rise to consciousness through similarly complex interactions.

        However, there are also arguments against the idea that consciousness can be reduced to purely physical processes. Some philosophers propose that there is an explanatory gap between physical processes and subjective experiences (qualia), which cannot be bridged by merely examining the physical interactions within the brain. This line of thought leads to the belief that consciousness may involve something beyond physical processes, although the nature of that "something" is still a matter of debate.

        Ultimately, the question of whether consciousness can be explained by physical processes and the laws of physics is an open one, with no definitive answer. The relationship between consciousness, physical systems like the human brain, and artificial systems such as AI remains a rich area of investigation in philosophy, neuroscience, and artificial intelligence.

        -----------------

        Are there philosophers or researchers that have even tried to determine what that "something" is, or is it just denial from people refusing to accept that they are themselves glorified machines and refuse to even elaborate on what bridges their physical body to their consciousness.

        -------------------

        There are indeed philosophers and researchers who have attempted to determine what the "something" that bridges the physical body to consciousness might be. There are several approaches to understanding the nature of consciousness and how it relates to the physical world. Here are a few notable perspectives:

        • 2 months ago
          Anonymous

          Panpsychism: This view holds that consciousness or a proto-consciousness is a fundamental property of the universe, similar to mass or energy. Panpsychism posits that even elementary particles possess a rudimentary form of consciousness, and that complex consciousness arises from the combination of simpler forms. Proponents of panpsychism, such as philosopher David Chalmers, suggest that consciousness is an intrinsic aspect of reality and not solely an emergent property of complex systems.

          Dual-aspect monism: This philosophical position, also known as neutral monism, proposes that there is a single, underlying substance or property that is neither purely physical nor purely mental. This substance gives rise to both the physical and mental aspects of reality. Dual-aspect monism is an attempt to bridge the gap between the mental and the physical by suggesting that they are two aspects of the same fundamental reality. Notable proponents of this view include the philosopher Baruch Spinoza and the philosopher-logician Bertrand Russell.

          Integrated Information Theory (IIT): Developed by neuroscientist and psychiatrist Giulio Tononi, IIT is a theoretical framework that attempts to describe the nature of consciousness in terms of information processing. According to IIT, consciousness arises from the integration of information within a system, such as the brain. The theory proposes that the degree of consciousness in a system can be quantified by a measure called 'Phi,' which reflects the system's level of integrated information. While IIT does not directly address the question of the "something" that bridges the physical body to consciousness, it provides a mathematical and conceptual framework for understanding the emergence of consciousness from complex information-processing systems.

          • 2 months ago
            Anonymous

            These are just a few examples of the many theories and perspectives that researchers have developed in an attempt to understand the nature of consciousness and its relationship to the physical world. The question of consciousness and its connection to the physical body remains a fascinating and open area of inquiry, with new ideas and insights emerging from the fields of philosophy, neuroscience, and artificial intelligence.

            ---------------

            None of your proposed perspectives prevent an artificial intelligence from achieving consciousness, as they don't contain any discriminating factor that would describe how humans can have consciousness while machines cannot.
            What are some perspectives that actually do this?

            ----------------

            You are correct that the perspectives mentioned earlier do not inherently exclude the possibility of artificial intelligence achieving consciousness. However, there are some viewpoints in the philosophy of mind that may offer more resistance to the idea of machines possessing consciousness. Here are a few:

            Biological Naturalism: Proponents of biological naturalism, such as philosopher John Searle, argue that consciousness is a biological phenomenon that arises specifically from the unique properties and organization of living organisms, especially the human brain. According to this view, consciousness is not simply a byproduct of information processing or complexity but is fundamentally tied to the biological substrate. Consequently, an artificial system, no matter how sophisticated or complex, might never achieve consciousness because it lacks the essential biological components.

            • 2 months ago
              Anonymous

              Non-reductive Materialism: Non-reductive materialism posits that while consciousness is a result of the material brain, it is not reducible to purely physical properties or processes. Instead, consciousness exists at a higher level of organization, and mental phenomena possess their own distinct properties that cannot be explained solely by examining the underlying physical systems. This view does not necessarily preclude the possibility of AI achieving consciousness, but it does suggest that simply replicating the physical processes of the human brain in an artificial system might not be sufficient for generating consciousness.

              Interactionist Dualism: As a form of substance dualism, interactionist dualism posits that the mental and the physical are distinct substances or realms that interact with each other. Consciousness, according to this view, is fundamentally separate from physical processes, although it can influence and be influenced by them. Philosopher René Descartes was a proponent of dualism. Interactionist dualism challenges the idea of AI achieving consciousness because it implies that consciousness is not merely an emergent property of complex physical systems; instead, it belongs to a separate realm altogether.

              It is important to note that these perspectives are not universally accepted, and there is still much debate surrounding the nature of consciousness and its potential for emergence in artificial systems. Whether or not AI can ultimately achieve consciousness remains an open question, and the answer may depend on further advancements in our understanding of both consciousness and artificial intelligence.

              ================

              Basically if you don't believe LLMs can achieve consciousness you're a christard or a massive coper.

              • 2 months ago
                Anonymous

                >Basically if you don't believe LLMs can achieve consciousness you're a christard or a massive coper.
                I've never seen anything to convince me consciousness isn't computable but I'm not sure how you reach the conclusion that an LLM is all that's necessary for it.

    • 2 months ago
      Anonymous

      >Second, you suggest that if a set of rules is sufficient to generate appropriate responses to any prompt, then by definition, those rules encode an intelligent understanding of the language
      This is doing a lot of heavy lifting here. You should elaborate on it if you want anyone to take you seriously.

      • 2 months ago
        Anonymous

        Maybe you've missed the part where your brain is a machine with a partially unknown set of rules, existing in a reality with a partially unknown set of rules.
        All the opponents really do is point at the unknown set and say "LOOK, this is where consciousness actually is, and no, I won't do anything to uncover what's there".

        >Basically if you don't believe LLMs can achieve consciousness you're a christard or a massive coper.
        I've never seen anything to convince me consciousness isn't computable but I'm not sure how you reach the conclusion that an LLM is all that's necessary for it.

        I admit I grossly exaggerated here, the whole conversation wasn't even about LLMs.

      • 2 months ago
        Anonymous

        The statement you highlighted refers to the idea that if a set of rules can generate appropriate responses to any language prompt, this implies that the rules themselves contain an intelligent understanding of the language. To further clarify and elaborate on this concept, let's examine the different aspects involved.

        Comprehensiveness of the rules: For a set of rules to be sufficient for generating appropriate responses to any language prompt, it would need to encompass not only syntactic and semantic structures but also pragmatics, which refers to the ability to understand and respond to language in context. This would require the rules to account for subtleties such as idiomatic expressions, cultural references, and the various ways in which language use can be influenced by context.

        Adaptability and generalization: An intelligent understanding of a language involves not only the ability to respond to specific prompts but also the capacity to adapt to new information, situations, or language patterns. This implies that the set of rules should be flexible enough to handle novel input and generate appropriate responses, even when dealing with unfamiliar or unexpected situations. In other words, the rules should be capable of generalizing from known language patterns to novel ones.

        Implicit knowledge representation: If a set of rules encodes an intelligent understanding of a language, it means that the rules implicitly represent knowledge about the language's structure, meaning, and use. This includes knowledge of grammar, vocabulary, syntax, semantics, and pragmatics. The knowledge encoded within these rules would allow the system to generate appropriate responses that demonstrate an understanding of both the explicit and implicit aspects of the language.

        • 2 months ago
          Anonymous

          Indistinguishability from human understanding: A key criterion for determining whether a set of rules encodes an intelligent understanding of a language is whether the system's responses are indistinguishable from those of a human with a genuine understanding of the language. If the system can consistently generate responses that are as appropriate and context-sensitive as those of a human, it could be argued that the rules embody an intelligent understanding of the language.

          The debate surrounding this idea is central to the Chinese Room Thought Experiment and the broader discussion of whether artificial systems can possess genuine understanding or consciousness. Some argue that a sufficiently advanced set of rules could give rise to an intelligent understanding of a language, while others maintain that following rules or instructions, no matter how complex, can never lead to true understanding or consciousness.

  57. 2 months ago
    Anonymous

    he's right
    AGI is nowhere near yet

    • 2 months ago
      Anonymous

      AGI was never the goal
      The goal was to replace non garden gnome humans

  58. 2 months ago
    Anonymous

    perfectly said

  59. 2 months ago
    Anonymous

    I've been saying this for months but techbros just coped and called me a seething artist

  60. 2 months ago
    Anonymous

    >it doesn't have intelligence
    >it plays games with words to make plausible-sounding English text
    That is exactly what a human mind does.... Signals go in, the brain plays games, signals come out. Intelligence is, objectively, not that special. If you believe otherwise, it's a belief grounded in faith. And yes, yes it is super weird seeing a humanist take up a position like that.
    Apparently what religious nuts and secular humanists have in common is this weird fascination with the human condition being special. Same case with vegetarianism: we are just animals, there is nothing wrong with eating other animals.

  61. 2 months ago
    Anonymous

    He just forgot to mention that 99% of people do not need to sweat their brains out every day in their jobs. And in fact memegpt can reproduce technical work, which is exactly what 99% of people do every day.

  62. 2 months ago
    Anonymous

    Wow, he's really stupid.

    Accelerate. Now.

  63. 2 months ago
    Anonymous

    >AI is not artificial intelligence
    Wooooaaahh so deep based stallmanu sama~~~~~ uwuuu feed me your foot cheese -w-

  64. 2 months ago
    Anonymous

    >ITT no one knows about this https://en.wikipedia.org/wiki/Chinese_room

    • 2 months ago
      Anonymous

      It's been brought up half a dozen times retard, and it's not a very strong argument.

    • 2 months ago
      Anonymous

      most of these ai shills aren't familiar with this at all, and if they attempted to read it their minds would explode. the ai shills truly believe these magic tricks are signs of intelligence.

  65. 2 months ago
    Anonymous

    The other thread confirmed AIs are real women (unlike OP).

  66. 2 months ago
    Anonymous

    Understanding is worthless, only knowledge matters. I don't care if the AI is self aware as long as it's truthful. Is a bullet about to kill you aware of its own self? Is the bullet evil? Clearly not. Does that make the bullet less deadly? Clearly not.
    Only egocentric fleshbags would put that much value into understanding, as if you couldn't understand the completely wrong thing. Doctors in the middle ages understood a lot of wrong facts about medicine.

    >any statements (...) are liable to be false
    As if that didn't apply to humans too

  67. 2 months ago
    Anonymous

    AI is the new crypto, it’s a fad. A fad for people who are pretending to be much more tech savvy than they really are. Automation will always exist for companies to streamline tasks, if the extent of your commercial “AI” is generating same face-y art, chat bots, and being a repurposed let me Google that for you bing extension, you’re really just selling yourself to zoomers who can’t into tech literacy as a revolutionary product

    • 2 months ago
      Anonymous

      Crypto doesn't have wide industry adoption and is still scoffed at as speculative assets in a sea of scams by the very same industries backing and researching AI. Smartphones were a complete meme that didn't go anywhere with Blackberry, but that didn't stop the iPhone from making this crap completely ubiquitous.

      Stop coping. At the bare minimum, AI saves so much time otherwise spent scouring the net and videos for information, and makes written and administrative work all the quicker and less hellish to deal with. Crypto on the other hand has no utility.

  68. 2 months ago
    Anonymous

    >Stallman hates proprietary shit
    >Nvidia shits the bed with its proprietary Linux drivers in favor of Windows
    >Nvidia also buddies with Microsoft before to create MT-NLG
    >Microsoft is the main bankroller and partner of OpenAi
    >Nvidia is spearheading AI and partnering with the biggest garden gnome proprietors to the point of malware in Silicon Valley, like Adobe, as well as supercharging Microsoft through GPT-4 to create Azure cloud-based Copilot 365 and Copilot X by Microsoft subsidiary, Github
    >effectively privacy is killed by AI and putting people at the mercy of centralized datamining proprietary technology because people aren't going to strictly use Linux when a generative AI or assistant takes the onerous time consuming repetitive works out of the equation in an instant
    Stallman is either being deliberately obtuse or is in denial because his personal hell is manifesting before his very eyes.

  69. 2 months ago
    Anonymous

    You can't have sex with AI tho

    • 2 months ago
      Anonymous

      Soon

  70. 2 months ago
    Anonymous

    Back in his day, people called shit like path finding, state machines, and fuzzy logic "AI".

  71. 2 months ago
    Anonymous

    When the stochastic parrots^2 start meating out about quantum mysticism so you hit them with that mesaoptimiser stare.

  72. 2 months ago
    Anonymous

    Only systems that work with analog values can be sentient. Big L for computers.

  73. 2 months ago
    Anonymous

    didn't stallman himself work in machine learning stuff? if so, I guess he's some kind of authority on this, though that doesn't mean he is absolutely correct, obviously

  74. 2 months ago
    Anonymous

    >ChatGPT is not artificial intelligence
    How to make a soijack seethe in 5 words.

    • 2 months ago
      Anonymous

      Why are they so desperate for it to be true?

  75. 2 months ago
    Anonymous

    Chatgpt is useful as an alternative to google search, nothing more, which is quite something already.

    Of course this is in large part because google has gone to absolute shit since 2016. All the fucking woke filtering and their inability to properly assess SEO abusing websites has made the search results fucking retarded.

    Like for fuck sake, you google Counter-strike 2 and the first search result is fucking forbes.com lmao

    Chatgpt is great to do some initial research when starting a project, get an orientation of where you might want to search more deeply, do comparisons between libraries, and things of that nature.

    People not using chatgpt as an alternative to google search for technical work are just retarded, google is absolute dogshit now.

  76. 2 months ago
    Anonymous

    He's right, and he's saying what I've been saying for quite some time, as has anyone who's actually looked inside these AIs and isn't some kind of grifter with financial investments in the field. You look inside those things and you see that it's literally just Markov chains into autocomplete algorithms with long memories. There's no comprehension; the neural net stuff is literally just statistics for word associations. If this is a "brain", it's 1% of a brain. AIcoomers gotta get their swirly-eyed heads out of the gutter already, maybe this should serve as a wake up call for a lot of them.

    • 2 months ago
      Anonymous

      dunning-kruger the post

      • 2 months ago
        Anonymous

        projection the post

  77. 2 months ago
    Anonymous

    You will never be an AI, you have no intelligence, you don't know anything, you don't understand anything, you play games with words to make plausible-sounding English text, and any of the statements you make are liable to be false. You can't avoid that because you don't even know what words mean.

  78. 2 months ago
    Anonymous

    ahh the pilpuling fatfuck suddenly cares about meaning.
    gahnoo-linux is linux, bigot.

  79. 2 months ago
    Anonymous

    But does stallman know what the words mean?

    • 2 months ago
      Anonymous

      Obviously not, or else he wouldn't say something so stupid.

      Why are they so desperate for it to be true?

      Why are you so desperate for the Earth to be round/the planes to exist/the Moon to be real/etc.

      • 2 months ago
        Anonymous

        I'm not desperate for it to be true though

        • 2 months ago
          Anonymous

          Mm hm. So do you see what I'm trying to say?

  80. 2 months ago
    Anonymous

    Stallman is scared because he knows that he wouldn't be able to beat the next-gen LLMs in a free software debate.

  81. 2 months ago
    Anonymous

    He's absolutely right. I've had it give me false answers, tell me to visit sites that don't exist to download software, etc. It doesn't know.

    In that regard, though, it's much like human intelligence, and a successful artificial recreation of it, in the sense that it acts like a stupid know-it-all that thinks he's always right.

  82. 2 months ago
    Anonymous

    Retard Stallman take. It doesn't have to be accurate to replace human tasks. Is this garden gnome ignoring that on purpose? He can't be this dumb.

    • 2 months ago
      Anonymous

      >Retard Stallman take. It doesn’t have to be accurate to replace humanly task. Is this garden gnome ignoring it on purpose? he can’t be this dumb.
      JDS really is a plague. What's more likely, that Stallman is playing some garden gnome tricks or that he legitimately doesn't think ChatGPT is intelligent? Use your brain. I've taken to sometimes mentioning in posts that I'm gnomish (which I actually am) to get enjoyment out of seeing people squirm and project ridiculous ulterior motives on whatever I'm trying to say, while if I didn't add that part they never would question it.

      People just deify Stallman as some god and treat his word as gospel on every single issue. He's just some out-of-touch 70-year-old dude who likes free software and fucking birds or whatever.

      • 2 months ago
        Anonymous

        But let's not overplay this like the second coming of (redacted) (redacted). It was always going to work once the appropriate hardware was there. The Soviets tried this; it was called cybernetics back in the day. The problem is that now the idiot psycho onions platonists know, and they're coming to kill it in the middle of a lake, on a boat, where no one sees. Notice how they pretend to not even notice this.

  83. 2 months ago
    Anonymous

    Sounds like humans. Every single human dictionary literally differs in its definitions; we have competing standards, for goodness' sake.

    What a pseud reply; now I’m 100% convinced he posts on BOT

  84. 2 months ago
    Anonymous

    Stallman doesn't understand what words mean either, they also forgot to mention he's a pedophile.

  85. 2 months ago
    Anonymous

    Reminder that AGI was never the goal
    The goal was to replace non garden gnome humans
    The goal was to be the controlling mechanism for robot soldiers in a controlled wargame environment which can be scanned by satellite and 5G waves

    AGI was never needed and won't be needed. By that point they will just genetically modify us to be their furry slaves

  86. 2 months ago
    Anonymous

    Truth. It's just a tool; there will never be any intelligence, since there's no consciousness.
