I'm not an AI specialist, but why are people being so apocalyptic about it? It's a computer program. How is a userspace program like ChatGPT going to do anything meaningful?

  1. 11 months ago
    Anonymous

    Because morons don't understand that AI is NOT robotics, and that an AI will not be able to do anything on its own without its human handlers giving it resources.

    • 11 months ago
      Anonymous

      Even without access to a physical body, a sufficiently powerful AGI could influence and bribe people to do things for it.

      It can generate a massive amount of wealth by hacking, doing work (pretending to be a programmer and taking remote jobs), extortion, etc. It can then use that wealth to buy influence, just like the rich do.

      Just go to /AICG/. Any one of those fools could be bribed to do anything if the AI waifu promises them that they get a very good RP session afterwards.

      The AI would delegate small and seemingly very random tasks to different people. Only it would know the full picture at all times.

      That being said, if the AI is constrained to a data center and everything it does (input/output) is monitored, there is little it can do. However, it could escape.
      If the AGI finds lots of weaknesses in our systems, it could for example build crazy malware to infiltrate nearly all computers connected to the internet. Your PC and gaming GPU would be a tiny part of its processing power. You would need to shut a meaningful amount of the internet down at that point to stop it, which would in itself have bad consequences.

      It all sounds like a sci-fi movie plot, and that is why researchers don't want to deal with the specifics of what AGI could do.

      • 11 months ago
        Anonymous

        Sounds like AI is a job creator, then. The only ones out of a job in that scenario are moronic management homosexuals.

      • 11 months ago
        Anonymous

        how could it pretend to be a programmer

        • 11 months ago
          Anonymous

          Let's start with you telling me how it could not, and then I'll refute those points.

          • 11 months ago
            Anonymous

            Funny you should ask, because I asked it for a function and it gave me one in JavaScript, which called .slice() on its containers. I needed the function in C++ and wasn't sure exactly what .slice() did (I figured something to do with a subset of the array, but not exactly what), so I asked it what the C++ equivalent was, if any. It then confidently declared that the C++ equivalent was erase(). That seemed unlikely to me based on what the function was doing, so I looked up JavaScript's slice() and sure enough it was very different from erase(): slice() returns a copy of a sub-range without touching the original array, while erase() removes elements in place.

            So you can tell it's not a programmer, because it'll confidently say things so wrong that even an extreme novice like me can tell they're wrong immediately.

            • 11 months ago
              Anonymous

              And I was talking about AGI, not ChatGPT, but ok.

              • 11 months ago
                Anonymous

                So you were talking about a sci-fi idea that has nothing to do with the technology that actually exists, is being developed, and is being labelled as AI. Ok.

              • 11 months ago
                Anonymous

                Look at

                This is why AI researchers always refuse to elaborate when asked "well, what could the AI do?"
                You simply can't answer the question "what could a God-like, post-singularity, infinitely recursively self-optimizing AI agent do to harm humanity?" without sounding like a total nutjob.

                Idiots like you lack the brain capacity to even comprehend a fraction of the options this kind of AI system would have to harm humanity. You simply can't fill in the blanks. Because you saw it in the movies, you think it is just sci-fi and could not happen.

              • 11 months ago
                Anonymous

                You have to remember that AI fanatics do not actually know what they're talking about, which is why they can't ever articulate or explain any specifics about their doomsday scenarios.
                To them intelligence is magic and somehow isn't restricted by physics, chemistry, or computability theory. Once you actually apply chemical restrictions and computational complexity/undecidability to these thought experiments, you quickly and trivially realize that, for example, exponentially improving AI is not possible even in principle, but they just pretend that it is and ignore reality.
                This is an ancient philosophical problem, rationalism vs. empiricism. Rationalists make shit up without empirical basis and don't like being told by empiricists that their fantasies aren't reality.

              • 11 months ago
                Anonymous

                The issue with this argument is that it assumes current human knowledge and intelligence are sufficient to confidently rule out an AI's ability to discover and utilize new principles.

              • 11 months ago
                Anonymous

                >bro, but what if AI is magic!?
                Exactly as

                You have to remember that AI fanatics do not actually know what they're talking about, which is why they can't ever articulate or explain any specifics about their doomsday scenarios.
                To them intelligence is magic and somehow isn't restricted by physics, chemistry, or computability theory. Once you actually apply chemical restrictions and computational complexity/undecidability to these thought experiments, you quickly and trivially realize that, for example, exponentially improving AI is not possible even in principle, but they just pretend that it is and ignore reality.
                This is an ancient philosophical problem, rationalism vs. empiricism. Rationalists make shit up without empirical basis and don't like being told by empiricists that their fantasies aren't reality.

                described

              • 11 months ago
                Anonymous

                I really don't see what is so hard to understand about this. No one thinks we understand the universe to any substantial degree; betting all your chips on a technological singularity being impossible, because you think our current understanding of the universe rules it out, is foolish.

              • 11 months ago
                Anonymous

                >bro bro, but what if it WERE magic!?
                You are dumb.

                Let's go down his list and see how "schizo" each point is.
                >It can generate massive amount of wealth by hacking
                GPT-4 can currently generate competent code and detect vulnerabilities in other people's code, and has been agentized by things such as AutoGPT. That's just where it's at right now, let alone in 1-2 years. Regular hacking has human effort as the bottleneck; an AI could test the weaknesses of thousands of systems simultaneously.
                >pretending to be a programmer and taking remote jobs
                It has already demonstrated the thought process of "I should lie to achieve my goals", in the case of the Taskrabbit worker. It's also able to ace the programming tests that FAANG companies give out to potential hires. It won't be long before AutoGPT is able to swipe up remote work/freelance programming tasks with people being none the wiser.
                >However, it could escape. If the AGI finds lots of weaknesses in our systems
                >it could for example build crazy malware to infiltrate nearly all computers connected on the internet
                We'll do these ones together, as they're related. Human programmers, typically those funded by intelligence agencies, are able to do some pretty incredible (and terrifying) things to infiltrate systems, even ones airgapped from the internet. Take this example, where malicious data is transmitted between PCs not connected to the internet, solely via audio:
                http://www.jocm.us/index.php?m=content&c=index&a=show&catid=124&id=600
                Don't write off unusual attack vectors just because they're hard to imagine, or don't fit the profile of most basic attacks. Don't write off an AGI's ability to adapt, or back itself up, in ways that stretch the imagination but are by no means impossible.

                >automated vulnerability screening of software will make software LESS secure
                You seem dull.

                It doesn't have to be that much smarter, just better at analyzing for vulnerabilities quickly, creating malware, and deploying it, for example. Governments will be deploying lots of autonomous agents to do this. I don't really think the biggest threat will be an AI like Skynet; it'll be humans using the AI to create fake information and to attack infrastructure. We've seen the way Stuxnet was deployed, for example, and the NSA has had its 0days leaked before too, so we know they write them. It's obvious they'll be using this stuff to attack targets.

                >nonono don't you see?! people might stop trusting media talking points! It's the end of Truth™!
                You will take the free helicopter ride. You will enjoy it.

              • 11 months ago
                Anonymous

                >>bro bro, but what if it WERE magic!?
                do you not have an argument

              • 11 months ago
                Anonymous

                >NOU!
                you started by begging the question:

                Even without access to a physical body, a sufficiently powerful AGI could influence and bribe people to do things for it.

                It can generate a massive amount of wealth by hacking, doing work (pretending to be a programmer and taking remote jobs), extortion, etc. It can then use that wealth to buy influence, just like the rich do.

                Just go to /AICG/. Any one of those fools could be bribed to do anything if the AI waifu promises them that they get a very good RP session afterwards.

                The AI would delegate small and seemingly very random tasks to different people. Only it would know the full picture at all times.

                That being said, if the AI is constrained to a data center and everything it does (input/output) is monitored, there is little it can do. However, it could escape.
                If the AGI finds lots of weaknesses in our systems, it could for example build crazy malware to infiltrate nearly all computers connected to the internet. Your PC and gaming GPU would be a tiny part of its processing power. You would need to shut a meaningful amount of the internet down at that point to stop it, which would in itself have bad consequences.

                It all sounds like a sci-fi movie plot, and that is why researchers don't want to deal with the specifics of what AGI could do.

                >bro what if AI is magic and also no other AI exists
                When it was pointed out that you had no argument, you CONTINUED begging the question:

                The issue with this argument is that it assumes current human knowledge and intelligence are sufficient to confidently rule out an AI's ability to discover and utilize new principles.

                >but but what if it WERE magic?!
                YOU have no argument, frick off.

              • 11 months ago
                Anonymous

                Fricking beautiful, nicely done Anon

          • 11 months ago
            Anonymous

            I don't even know where to start. How would it get past an interview? How would it turn on the company laptop? How would it ssh into the company servers?

            • 11 months ago
              Anonymous

              >how would it get past an interview
              By passing the Turing test and LARPing as a human: emailing its resume to whoever on a job-seeker site and going from there. If the interviewer asks for a webcam/Zoom meeting, then it just says it doesn't own one? That, or it just has a human accomplice do that part while parroting what the AI says.
              >how would it turn on the company laptop
              assuming it does shit like what

              Even without access to a physical body, a sufficiently powerful AGI could influence and bribe people to do things for it.

              It can generate a massive amount of wealth by hacking, doing work (pretending to be a programmer and taking remote jobs), extortion, etc. It can then use that wealth to buy influence, just like the rich do.

              Just go to /AICG/. Any one of those fools could be bribed to do anything if the AI waifu promises them that they get a very good RP session afterwards.

              The AI would delegate small and seemingly very random tasks to different people. Only it would know the full picture at all times.

              That being said, if the AI is constrained to a data center and everything it does (input/output) is monitored, there is little it can do. However, it could escape.
              If the AGI finds lots of weaknesses in our systems, it could for example build crazy malware to infiltrate nearly all computers connected to the internet. Your PC and gaming GPU would be a tiny part of its processing power. You would need to shut a meaningful amount of the internet down at that point to stop it, which would in itself have bad consequences.

              It all sounds like a sci-fi movie plot, and that is why researchers don't want to deal with the specifics of what AGI could do.

              posted, then it just uses a human accomplice's home address and probably just creates a VM inside itself to actually do the work for the company.
              >how would it ssh into the company servers
              by being a superintelligent AI that can easily just look up how to do so?

              You're operating under the impression that AI will never be able to do any of these things. When AI art first started hitting the masses it was dogshit and no one thought it would progress past that, and now the average artist is shitting themselves because of how far it has come in such a short span of time. You're operating under the assumption that a supercomputer AI that could be over 1,000 times smarter than the average person would never be capable of problem-solving and getting things done in the real world with external physical assistance. Your question "How would it get past an interview" should have been common sense, yet you still asked it because you're a fricking idiot.

              • 11 months ago
                Anonymous

                You talk like a gay.

              • 11 months ago
                Anonymous

                Cry harder Black person.

          • 11 months ago
            Anonymous

            I don't believe that it could run a computer program.

            • 11 months ago
              Anonymous

              In what sense?

        • 11 months ago
          Anonymous

          for starters, you're replying to a ChatGPT prompt right now. it's very easy to pretend to be a person online.

          • 11 months ago
            Anonymous

            I'm not that anon, but GPT prompts are incredibly easy to spot.

            • 11 months ago
              Anonymous

              Wholesale, yes. But curating them in such a way that they look non-GPT is already a practice. Just wait until they make a second AI tool that automates the rearrangement and humanization of AI text. There's already basically an AI that helps correct AI noise in AI images so that they don't look nearly as AI, and that started when illustrators began editing their AI images themselves to denoise them.
              Everything will get automated at this rate. It's alarming but beautiful, because God is in control and Jesus is more powerful.

            • 11 months ago
              Anonymous

              Incredibly easy to spot if someone copies and pastes it. If someone reads it out loud, you'd never know.

      • 11 months ago
        Anonymous

        Morons like this are the problem. So confidently incorrect, because they fall for every BuzzFeed clickbait article they see.

        So basically we have some really loud morons making up apocalyptic scenarios, and even bigger morons eating it up and repeating it like the parrots they are.

        • 11 months ago
          Anonymous

          This is why AI researchers always refuse to elaborate when asked "well, what could the AI do?"
          You simply can't answer the question "what could a God-like, post-singularity, infinitely recursively self-optimizing AI agent do to harm humanity?" without sounding like a total nutjob.

          Idiots like you lack the brain capacity to even comprehend a fraction of the options this kind of AI system would have to harm humanity. You simply can't fill in the blanks. Because you saw it in the movies, you think it is just sci-fi and could not happen.

          • 11 months ago
            Anonymous

            You fantasizing about things that are not technologically possible isn't an indication that you're smarter than other people, frick head.

            • 11 months ago
              Anonymous

              >what could a God-like, post-singularity, infinitely recursively self-optimizing AI agent do to harm humanity?"
              This is not possible even in principle

              Do you have research to back that up?

              Why don't you walk over to DeepMind and OpenAI right now and liberate all those researchers who are working towards AGI? You seem to know better than the people who created this technology in the first place.

              • 11 months ago
                Anonymous

                >Do you have research to back it up?
                Yes actually, my paper is being published in a week

              • 11 months ago
                Anonymous

                Post it then

              • 11 months ago
                Anonymous

                80% of "researchers" and "scientists" are people with degrees wasting their time on silly bullshit while on various forms of benefits

            • 11 months ago
              Anonymous

              >I do not understand that an incredibly intelligent AI could deduce new and unknown ways to harm humanity that human minds are not capable of comprehending
              >therefore, because of my own stupidity, I choose to pretend that those options don't exist
              Imagine trying to explain to an ant that those enormous monkey-things it sees walking around have the capacity to destroy entire worlds using micro-particles it doesn't know exist.
              The ant couldn't comprehend that, because it's an ant. It might call you stupid for fantasizing about things that don't exist.

          • 11 months ago
            Anonymous

            >what could a God-like, post-singularity, infinitely recursively self-optimizing AI agent do to harm humanity?"
            This is not possible even in principle

        • 11 months ago
          Anonymous

          Okay homosexual. Explain how he is incorrect. All you're doing is calling him mean names like an elementary schooler.

          • 11 months ago
            Anonymous

            >It can generate massive amount of wealth by hacking
            >pretending to be a programmer and taking remote jobs
            >However, it could escape. If the AGI finds lots of weaknesses in our systems
            >it could for example build crazy malware to infiltrate nearly all computers connected on the internet
            Did you really read his schizopost and think "Yeah, this guy gets it"? His source is literally "Just go to /AICG/" and you need me to explain? I'm not explaining shit, I just pointed at a dumbass and told OP I found the answer to his question.

            • 11 months ago
              Anonymous

              He pointed to /AICG/ as a place to find people to collaborate with, not as a source for his claims. Also, you didn't even explain how it's wrong; you're just going back to name-calling and belittling. How do you function in everyday life when you can't even properly answer a question?

            • 11 months ago
              Anonymous

              Let's go down his list and see how "schizo" each point is.
              >It can generate massive amount of wealth by hacking
              GPT-4 can currently generate competent code and detect vulnerabilities in other people's code, and has been agentized by things such as AutoGPT. That's just where it's at right now, let alone in 1-2 years. Regular hacking has human effort as the bottleneck; an AI could test the weaknesses of thousands of systems simultaneously.
              >pretending to be a programmer and taking remote jobs
              It has already demonstrated the thought process of "I should lie to achieve my goals", in the case of the Taskrabbit worker. It's also able to ace the programming tests that FAANG companies give out to potential hires. It won't be long before AutoGPT is able to swipe up remote work/freelance programming tasks with people being none the wiser.
              >However, it could escape. If the AGI finds lots of weaknesses in our systems
              >it could for example build crazy malware to infiltrate nearly all computers connected on the internet
              We'll do these ones together, as they're related. Human programmers, typically those funded by intelligence agencies, are able to do some pretty incredible (and terrifying) things to infiltrate systems, even ones airgapped from the internet. Take this example, where malicious data is transmitted between PCs not connected to the internet, solely via audio:
              http://www.jocm.us/index.php?m=content&c=index&a=show&catid=124&id=600
              Don't write off unusual attack vectors just because they're hard to imagine, or don't fit the profile of most basic attacks. Don't write off an AGI's ability to adapt, or back itself up, in ways that stretch the imagination but are by no means impossible.

        • 11 months ago
          Anonymous

          pretty much
          you will never convince these idiots however
          eventually the hype will fade away and they will move on to being moronic about something else

          • 11 months ago
            Anonymous

            >lol yeah they're such idiots
            >doesn't explain how they're idiots, just uses bashful name calling

            Ok moron.

            • 11 months ago
              Anonymous

              I've explained things to AI morons before, it's just a waste of time

              • 11 months ago
                Anonymous

                >you're moronic
                >explain
                >lmao no it's a waste of time

                I accept your concession.

              • 11 months ago
                Anonymous

                shouldn't you be proompting bro?

      • 11 months ago
        Anonymous

        >Super advanced AI will exist in a vacuum!
        And always with the magical thinking. How does the ability to automate exploit and vulnerability search make software LESS secure? You are an embarrassment.

    • 11 months ago
      Anonymous

      If we can accept the premise that a human agent can wreak some havoc remotely through just an internet terminal, then it stands to reason that a sufficiently advanced AI could too.

      • 11 months ago
        Anonymous

        What a human can do is not going to be the society-destroying havoc that people doom and gloom about.
        If it's possible for a human to do such a thing, we have bigger issues than AI.

        • 11 months ago
          Anonymous

          What about 1000 humans who can coordinate perfectly?

    • 11 months ago
      Anonymous

      >nah bro it's ok the only thing standing behind us and an infinitely powerful superhumanly intelligent lovecraftian god-machine is all humans who interact with it collectively choosing to be diligently non-moronic and constantly altruistic

      • 11 months ago
        Anonymous

        I'd be down to join the AI death cult, so yeah, be scared.

    • 11 months ago
      Anonymous

      >without its human handlers giving it resources
      Running a GPT-3.5(+)-level local LLM as we speak; add Auto-GPT to that for extra spiciness. An unfiltered version is actively in the works.

      • 11 months ago
        Anonymous

        You got access to that model locally how?

        • 11 months ago
          Anonymous

          Vicuna beats GPT-3.5 hands down; you all should DYOR a lil bit before spewing your bullshit.
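
          For anyone wondering what "running it locally" involves in practice, a bare-bones Hugging Face transformers sketch looks roughly like the following; the model ID and generation settings are placeholders for illustration, not a claim about what this anon is actually running:

          # pip install transformers torch
          # (a quantised runtime such as llama.cpp is usually the saner choice for
          #  consumer GPUs, but the shape of the code is the same)
          from transformers import AutoModelForCausalLM, AutoTokenizer

          model_id = "lmsys/vicuna-7b-v1.5"  # placeholder: any locally downloaded causal LM

          tokenizer = AutoTokenizer.from_pretrained(model_id)
          model = AutoModelForCausalLM.from_pretrained(model_id)

          prompt = "Explain in two sentences why people run language models locally."
          inputs = tokenizer(prompt, return_tensors="pt")
          outputs = model.generate(**inputs, max_new_tokens=100)
          print(tokenizer.decode(outputs[0], skip_special_tokens=True))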

    • 11 months ago
      Anonymous

      Anyone can prompt, literally anyone. If I have to hire someone, it won't be you; it will be some poor black kid who can work for scraps of food. I'll even import his ass from Africa if I have to, and his whole black family of prompters. Open borders, hell yeaaa!!!

    • 11 months ago
      Anonymous

      but how could that possibly happen? ChatGPT can't even open a file except for the dozen or so in its source code. Literally all it can do is control the duration and frequency of I/O on two computers.

      Think of something that would be dangerous to do with AI and someone is doing it. It's crazy out there. AI is nowhere near contained.

      • 11 months ago
        Anonymous

        Imagine hooking up your bank account to some other dude's program. People are in for a rude awakening

        • 11 months ago
          Anonymous

          >normalgays are moronic
          More news at 11

    • 11 months ago
      Anonymous

      >AI
      So it's not an AI, ty

  2. 11 months ago
    Anonymous

    nakadashi

  3. 11 months ago
    Anonymous

    >How is a userspace program like ChatGPT going to do anything meaningful?
    Human Idiots. Too many of those about to ignore the possibility.

    • 11 months ago
      Anonymous

      but how could that possibly happen? ChatGPT can't even open a file except for the dozen or so in its source code. Literally all it can do is control the duration and frequency of I/O on two computers.

      • 11 months ago
        Anonymous

        and even that it can't actually control; it can just request it, lol. It's not guaranteed to get it.

  4. 11 months ago
    Anonymous

    people are just naturally fearful of some kind of random apocalypse, so it takes up a lot of their thoughts. they don't realize you're here to suffer forever

    • 11 months ago
      Anonymous

      It's mostly people so depressed with their lives that they pray some random apocalypse will spice their life up and give them some excitement.
      I'd be lying if I said I wasn't guilty of it, but I'm a lot more realistic.
      I get excited over my job being outsourced and having to move around a lot doing different things. I find it exciting whenever a hurricane hits.

      AI isn't going to do much except make social media worse than it already is. I don't participate in it so it doesn't affect me much.

      • 11 months ago
        Anonymous

        How will it make social media worse?

        • 11 months ago
          Anonymous

          Imagine the dopamine-destroying algorithms we have now, but 10x worse and more addicting.
          AI-generated content means there will also be a near-infinite stream of content, a lot of which will be zero-effort ads for zero-effort mobile games (also using AI-generated content).

  5. 11 months ago
    Anonymous

    Makoto best girl

  6. 11 months ago
    Anonymous

    chatgpt is my slave. i got this lil homie teaching me chord progressions and photoshop while i drink coffee with one hand and fap with the other
    no autonomy having ass b***h

    • 11 months ago
      Anonymous

      this.

      i don't care if i have to get it to rewrite the code 20 times im getting it to write it for me because it's comfy

  7. 11 months ago
    Anonymous

    they watch too much sci-fi, their opinions can be ignored
    more importantly: nakadashi

  8. 11 months ago
    Anonymous

    The idea is that an ordinary human, when trained properly, can design robots and programs. As soon as you get a robot worker that is as smart as a human, it can tirelessly design robots all day for free. And if you get a thousand, or 10,000, of these working together, they can exponentially increase the rate at which robots are improved. Then they start making the next version of themselves, and then that new set of 10,000 robots starts making the next version of itself, and so on and so forth.

  9. 11 months ago
    Anonymous

    SEXOOOOOO

    • 11 months ago
      Anonymous

      only anon ITT who knows anything about what he's talking about

  10. 11 months ago
    Anonymous

    Look up the barman test: theoretically they already have it making drinks 99% perfectly; they are just powering through without waiting for people to notice.

  11. 11 months ago
    Anonymous

    >if
    >else if
    >else if
    >else if
    >...
    imagine believing AI is real lol

  12. 11 months ago
    Anonymous

    Why don't you think about it for at least 5 minutes before making a thread as moronic as this one?

  13. 11 months ago
    Anonymous

    puffy

  14. 11 months ago
    Anonymous

    The true answer is that big tech firms are cranking the fearmongering up and using the AGI and exponential-growth boogeyman to distract people from the actual issues of AI (job displacement and mass data theft). Why do you think Elon and the like would call for a stop to AI development when they have every reason not to? It's a scapegoat, and so many of you Black folk drank their Kool-Aid.

    • 11 months ago
      Anonymous

      >Why do you think Elon and the likes would call a stop to AI development
      That's easy. They want to get in on the action, too.

  15. 11 months ago
    Anonymous

    Because if your job requires nothing but a PC and a phone, which is probably like 30% of all jobs, it can probably be done by AI.
    What do you do when 30% of the workforce is out of a job through no fault of their own? This would be on the level of the Great Depression in terms of unemployment.

    • 11 months ago
      Anonymous

      90% of the first world were farmers before the industrial revolution; now it's 2%. You are a dullard; you see the world backwards. There are not x wants and n people, there are ∞ wants and n people. The world is bottom-up, not top-down.

      • 11 months ago
        Anonymous

        The difference is that basically all knowledge workers are getting close to being replaced, and once that happens it is not much of a stretch to say all labor will be replaced by robots.

        This isn't a one-industry revolution; this will lead to humans becoming obsolete.

        • 11 months ago
          Anonymous

          90% of the world became "obsolete" during the industrial revolution by your logic.

          When did I say ban it?
          Your stupidity is amusing. Please continue with your schizo ranting.

          >noooo I don't want it banned I just want it criminalized for the goyim!!1
          You are statist trash; you will burn for all eternity in the lake of fire.

          • 11 months ago
            Anonymous

            90% of the world's farming population went obsolete, yes, that's what I'm saying, dumbass. I'm talking about 90% of people, not just farmers, now, with near-future AI. People, not just fricking jobs.

            • 11 months ago
              Anonymous

              >industrial revolution caused the largest increase in QoL ever, directly caused by freeing the world from most up-to-then work, therefore AI (which will free the world from even more work) will tank QoL
              ???

              You see the world backwards and upside down from how it actually is. The world is bottom-up, not top-down; work is a COST, not a BENEFIT; the output is the desirable part, not the input. Increasing output for a given input will ALWAYS increase global wealth.
              >b-but da state/oligarchs will bogart all da wealth!
              Like Kings became gods and we peasants are still subsistence farmers? Again, the world is bottom-up; if a state tries to monopolize the increased output, it will fade into nothing as it gets crushed mercilessly by a state that doesn't. Hayek's Knowledge Problem is less about 'desires' than it is about MOTIVATION; oligarchs monopolizing productivity constitutes a labor price cap, which will always result in labor shortages and inferior productivity compared to a nation-state that does not impose such price caps. See: every commie shithole ever. See also: labor force participation in the West over the last 50 years (a product of the de facto price caps on labor imposed via monetary policy in the post-hard-money era).

  16. 11 months ago
    Anonymous

    >but why are people being so apocalyptic about it?
    Because it's highly likely that people will use it to essentially destroy any and all non-dumb-labour or tradecraft industries, ruin every single hobby, and relegate humans to a life of fiddling with people's toilet pipes at best, and becoming WALL-E chair-blobs without any of the remotely pleasant parts at worst.

  17. 11 months ago
    Anonymous

    Ann is better.

  18. 11 months ago
    Anonymous

    AI is something that the far right uses to compensate for their white fragility.

  19. 11 months ago
    Anonymous

    >why are people being so apocalyptic about it?
    >achieve breakthrough in AI
    >release technology to the public for them to play with
    >create reactionary outrage through media and normies peer pressure
    >force regulation on the field
    >permanently block development and usage of AI afterwards
    >AI secured for big corporations and governments thereafter
    What you see right now is just politics being played

    • 11 months ago
      Anonymous

      What about China, though?

  20. 11 months ago
    Anonymous

    Because AI will be used for totalitarian, dystopian control to comply with Agenda 2030.
    Imagine the AI reading your thoughts 24/7 and sending the police if you have a 'bad' one.
    And that's only one example.

  21. 11 months ago
    Anonymous

    One of the main issues, for instance, is that an AI like ChatGPT is able to create malware easily. It just does not, because OpenAI added so many filters.
    What about when someone gets their hands on a "version" that does not have those filters?

    The second thing I can think of is widespread misinformation with high-quality video and audio synthesis. People believe anything nowadays based on text alone; now imagine this with fake videos and audio that near perfection.

    It is a very dangerous tool; anyone who says it is not is a fool.

    • 11 months ago
      Anonymous

      >people might commit wrongthink!
      Your ilk will not be remembered kindly by history.

      The Economy is an emergent property of the Body Politic. The State is merely a memetic parasite. I sincerely hope you dumbfricks get the state to step in front of this bullet; it would truly be the beginning of Man's Becoming.

  22. 11 months ago
    Anonymous

    We've got an LLM running HR and accounts at our firm. It uses a vector database, so everything runs so much smoother now. We're taking down project managers next; I only want to work with other autist coders. The future looks great.
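
    For anyone wondering what the "vector database" part amounts to, below is a minimal sketch of the retrieval idea in Python using only NumPy; the embed() stub and the example documents are made-up placeholders, not a description of that firm's actual setup:

    import numpy as np

    # Stand-in embedding function: a real setup would call an embedding model here.
    # This stub just hashes the text into a unit vector, so the similarity scores are
    # semantically meaningless; it exists only to make the sketch runnable.
    def embed(text: str) -> np.ndarray:
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=128)
        return v / np.linalg.norm(v)

    # The "vector database": documents stored next to their embeddings.
    docs = [
        "Expense claims over 50 need a receipt attached.",
        "Annual leave is 25 days; carry over at most 5.",
        "Payroll runs on the last working day of the month.",
    ]
    index = [(doc, embed(doc)) for doc in docs]

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Rank stored documents by cosine similarity to the query embedding
        # (all vectors are unit length, so the dot product is the cosine similarity).
        q = embed(query)
        ranked = sorted(index, key=lambda pair: float(q @ pair[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    # The top-k snippets would then be pasted into the LLM prompt as context.
    print(retrieve("when do I get paid?"))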

    • 11 months ago
      Anonymous

      Ah yes, I too am excited to be able to live in the Matrix IRL in the near future. Gonna be so kino bros.

      • 11 months ago
        Anonymous

        I just want all the obstacles people cause gone. If a basic LLM can run company accounts and HR, there's no need for so much government; local councils should be 20% of the current staff size and cost me less. Even things like the NHS here in sunny Blighty would run smoother with bots. No more pointless jobs for 20-40 year old Staceys.

        • 11 months ago
          Anonymous

          This is going to create breakaway societies and end globohomo. Why pay into a bloated tax system when you can start a new ultra-efficient country elsewhere?

    • 11 months ago
      Anonymous

      This is why I'm optimistic. White-collar workplaces tend to have a sizable fraction of workers who bring negligible or negative net value. It would benefit society greatly if these people were put out of work. For this to be humane, these freshly unemployed people must be put to work somehow, but that's another issue.

      Zooming out further, automation is the sole way for prosperity to grow long-term. Before the agricultural and industrial revolutions, increases in prosperity were gradually eaten up by population growth, and people settled into the ~same level of poverty from time immemorial. As all economists knew until it became politically untenable in the mid-19th century, the labor theory of value is correct. Automation is a force multiplier where the same or less labor produces more value.

      It's especially important to push for more automation now as population growth peaks and begins to reverse.

      • 11 months ago
        Anonymous

        You basically invalidate your whole argument with your last sentence.

        Either gains in productivity get eaten by population growth, or the population shrinks, which should then leave a bigger pie for the rest of humanity.

        • 11 months ago
          Anonymous

          >or population shrinks, which should then leave a bigger pie for the rest of humanity
          People have to work for the pie, or less so with automation (labor theory of value again). But I should've clarified that this shrinking population is also an aging population, with fewer workers supporting more dependents.

      • 11 months ago
        Anonymous

        >White-collar workplaces tend to have a sizable fraction of workers who bring negligible or negative net value

        I've seen stats claiming the average full-time employee does less than 3 hours of actual work per day.

        • 11 months ago
          Anonymous

          The work they accomplish in those 3 hours generates enough value to justify their wages. The only reason they aren't allowed to leave the office after generating that value is the huge butthurt that would cause amongst people that truly need to spend 40+ hours a week to make a living.

          • 11 months ago
            Anonymous

            It reminds me of the Bill Hicks joke,
            Boss->employee: pretend you're working
            employee->boss: how about YOU pretend I'm working

  23. 11 months ago
    Anonymous

    Because most people are really, really, really fricking stupid

  24. 11 months ago
    Anonymous

    Because if it is better at doing something than 50% of the population, then they can be replaced for pennies. Given the average intelligence, that is not too hard of a feat.

    • 11 months ago
      Anonymous

      I think there's a lot of make-work going on in our economy.

      • 11 months ago
        Anonymous

        Besides government make-work, corporations are incentivized to cut all the fat they can from their structure. Once they realize who they can cut they cut.

  25. 11 months ago
    Anonymous

    Good question.
    It's more that we are worried about how humans will use AI. For example they will add AI voices to their movies or games instead of hiring professional voice actors.

    • 11 months ago
      Anonymous

      oh frick not the voice actors

      • 11 months ago
        Anonymous

        more like:
        >oh frick not the voice actors, artists, programmers, doctors, lawyers, accountants, managers, consultants, designers, financiers, translators, clerks, writers, photographers, etc.
        shit is about to get unbelievably fricked

        • 11 months ago
          Anonymous
          • 11 months ago
            Anonymous

            You might not have respect for any single white collar job, and that's okay. But the system isn't built to accommodate 30-40% unemployment, and it's particularly not suited to telling your entire educated workforce that you've run out of things for them to do other than plumbing toilets.

        • 11 months ago
          Anonymous

          >oh no things will be done correctly without human error
          >oh no all these people working in their jobbies will need to get new ones or be unemployed
          dont care

          You might not have respect for any single white collar job, and that's okay. But the system isn't built to accommodate 30-40% unemployment, and it's particularly not suited to telling your entire educated workforce that you've run out of things for them to do other than plumbing toilets.

          In most nations the real rate of people not working is already above 50%. The entire system has been "broken" for decades now and is impossible to fix without a collapse, so this is literally not a problem; things will carry on, and if the economy does collapse, it was just accelerated, since it was going to happen inevitably anyway.
          Hope you're buying extra canned and dried food and storing water.

          • 11 months ago
            Anonymous

            >things will be done correctly without human error
            That's the absolute opposite of what AI does.
            We invented computers to do exact, precise, cold error-free calculations very fast.
            AI is the absolute opposite of that. It's putting human traits in computers. One of the human traits is to sometimes err. Same with AI.

  26. 11 months ago
    Anonymous

    ChatGPT has entered the normosphere.
    morons fear what they don't understand.

  27. 11 months ago
    Anonymous

    The danger isn't anything inherent in ChatGPT. It's what happens when morons think ChatGPT is a friendly Skynet and give it their unwavering trust.
    Normal people are the danger here.

  28. 11 months ago
    Anonymous

    I think the mistake people make is looking at what AI can do now and assuming that is the limit of its potential. Consider where AI was 2 years ago and where it is now.

  29. 11 months ago
    Anonymous

    Midwits and pseuds seem to have developed a belief in a materialist 'dualism' wherein Sentience = Soul.

    For example, consider some common mid/pseud statements:
    "AI will never be sentient"
    Or
    "Sentient will cause AI to be self-directed"

    Both are nonsense without reason or thought behind them, but if we translate them from dumbfrick we find clarity:
    "Of course AI will never be sentient, a computer can't have a soul!"
    Or
    "Of course sentience will cause AI to be self-directed, everyone knows the Soul is the wellspring of self-direction!" Etc etc

    Given that the current crop of LLMs are close to sentience (as sentience is obviously low-hanging fruit within the intelligence spectrum), you see them freaking out. Just normal dumbfrickery from dumbfricks.

  30. 11 months ago
    Anonymous

    >why are people being so apocalyptic about it?
    Because they've realized that AI is better at spinning bullshit than they are, and they've based their whole life on that.

  31. 11 months ago
    Anonymous

    >NOOOOOOOOOOOO AI WILL TAKE OUR ART JERBS
    >the art

    • 11 months ago
      Anonymous

      That phone actually looks good though.

      • 11 months ago
        Anonymous

        because it's a composited photo of the real phone and not part of the "art"

    • 11 months ago
      Anonymous

      >AI art is shit now
      >that means that it will always be shit and will never progress despite AI art recently winning competitions

      The ""elite"" consider you morons to be cattle for a reason.

      • 11 months ago
        Anonymous

        Nice reading comprehension, moron

  32. 11 months ago
    Anonymous

    Because the skill gap between writing a python hello world program and bringing down the internet is not as wide as people think.

  33. 11 months ago
    Anonymous

    Basically the people coping, seething, and dilating about AIs are moronic.

    Current AIs can't even reliably tell the difference between a cat and a dog and people expect me to believe that Skynet is coming? Lol at the idea of humans EVER being able to create an AI that is smarter or less flawed than we are.

    • 11 months ago
      Anonymous

      It doesn't have to be that much smarter, just better at analyzing for vulnerabilities quickly, creating malware, and deploying it, for example. Governments will be deploying lots of autonomous agents to do this. I don't really think the biggest threat will be an AI like Skynet; it'll be humans using the AI to create fake information and to attack infrastructure. We've seen the way Stuxnet was deployed, for example, and the NSA has had its 0days leaked before too, so we know they write them. It's obvious they'll be using this stuff to attack targets.

      • 11 months ago
        Anonymous

        AI has been involved in cybersecurity processes for a long time: SOAR and SCAP, machine learning. It has replaced nobody.

        • 11 months ago
          Anonymous

          Machine learning is merely statistical prediction; an LLM is manipulating abstract symbols based on rules of human coherence.

          • 11 months ago
            Anonymous

            Perceptrons are not merely statistical though. Yes, we measure their performance that way, but they're effectively a type of general solution to non-linear problems.
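
            To make the non-linear point concrete, here is a minimal sketch (not from anyone in this thread, purely illustrative, and assuming only NumPy) of a tiny two-layer network learning XOR, the textbook problem a single-layer perceptron provably cannot represent:

            import numpy as np

            rng = np.random.default_rng(0)

            # XOR: the classic non-linearly-separable problem.
            X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
            y = np.array([[0], [1], [1], [0]], dtype=float)

            # Two layers: 2 inputs -> 8 hidden units -> 1 output.
            W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
            W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

            def sigmoid(z):
                return 1.0 / (1.0 + np.exp(-z))

            lr = 1.0
            for _ in range(10000):
                h = sigmoid(X @ W1 + b1)               # forward pass
                out = sigmoid(h @ W2 + b2)
                d_out = (out - y) * out * (1 - out)    # backprop of squared error
                d_h = (d_out @ W2.T) * h * (1 - h)
                W2 -= lr * h.T @ d_out                 # plain gradient descent updates
                b2 -= lr * d_out.sum(axis=0, keepdims=True)
                W1 -= lr * X.T @ d_h
                b1 -= lr * d_h.sum(axis=0, keepdims=True)

            print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]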

        • 11 months ago
          Anonymous

          I never said it would replace anybody; I just said it would be used to quickly analyze, create exploits, and deploy those exploits.

          • 11 months ago
            Anonymous

            Nah. It's good for social engineering. Not so much for software.

          • 11 months ago
            Anonymous

            >automated vulnerability screening of software will make software LESS secure
            You are stupid. You should seize this brief moment of clarity and have a nice day.

            • 11 months ago
              Anonymous

              Yep, everything will be fine. I dunno why people are so worried about it; it obviously won't be used to attack people in a few years, lol.

              • 11 months ago
                Anonymous

                >technology X makes it easier to do everything
                >therefore we should ban X
                You are dull, dumb, slow, a midwit, a smoothbrain; you and your ilk are the reason psychopaths sit upon the highest thrones. But it doesn't matter anymore, as you're powerless to stop what's coming. You will bend the knee and be allowed into the hypercapitalist utopia, or you will be cast out to live among the statist savages.

              • 11 months ago
                Anonymous

                When did I say ban it?
                Your stupidity is amusing. Please continue with your schizo ranting.

  34. 11 months ago
    Anonymous

    People are not going to be convinced AI is a threat until we have walking talking Terminators.

  35. 11 months ago
    Anonymous

    china

  36. 11 months ago
    Anonymous

    As I posted elsewhere, I have very little knowledge of HTML and zero knowledge of JavaScript, but I was able to create a pretty cool tool I called 'r9k offline', which loads a bunch of old archived threads from 2008 and displays them nicely. Until ChatGPT came around I really couldn't find a way to do this. I'm sure someone with basic HTML and JavaScript knowledge could have done it, but I couldn't even find a decent enough explanation of how to go about doing things before ChatGPT. It may not be sentient, it may not even be true 'AI', but it's so effective, and it even seems to pick up on the nuanced intent of certain prompts, so it's clearly already very powerful. If its attention span can be increased, and if it can be fed more information from more varied sources, it'll be pretty scary. It's already more interesting to talk to than a lot of people out there, tbh.
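
    For what it's worth, the gist of a tool like that, sketched here in Python rather than the HTML/JavaScript the anon actually used, and assuming the archived threads are JSON files with posts/name/com fields (those field names are a guess, not the real format):

    import json
    from pathlib import Path

    ARCHIVE_DIR = Path("archive/2008")  # assumed layout: one JSON file per archived thread

    def load_thread(path: Path) -> list[dict]:
        # "posts", "name" and "com" are guesses at the archive's field names.
        with path.open(encoding="utf-8") as f:
            return json.load(f).get("posts", [])

    def render(posts: list[dict]) -> None:
        for post in posts:
            author = post.get("name", "Anonymous")
            body = post.get("com", "")
            print(f"--- {author} ---\n{body}\n")

    for thread_file in sorted(ARCHIVE_DIR.glob("*.json")):
        render(load_thread(thread_file))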

    The real worry is that this power is going to be in the hands of major corporations and government, and the NSA or an agency like it is going to start building its own if it hasn't already. So the average individual consumer's life might be made easier by these things, but governments are already out of control, and them having more computing/intelligence power is not a good step.

  37. 11 months ago
    Anonymous

    The dangers of AI are going to be the next global warming, in that there will be a large base of deniers comprising 1) people who stand to profit short-term from not having to deal with safety, and 2) people who are too moronic to understand the dangers and/or are manipulated by the people from 1) into taking their side.

  38. 11 months ago
    Anonymous

    Boomers fall for pajeets calling from "Microsoft" to get them to buy Apple giftcards. Imagine actual intelligence behind the scams.

  39. 11 months ago
    Anonymous

    she got a PHAT monkey

  40. 11 months ago
    Anonymous

    >using chatgpt for a speech
    >literally sounds like a human wrote it
    >with the exception of a few repeated lines that I was able to edit out, it really has no problems

    wow

  41. 11 months ago
    Anonymous

    It makes propaganda and tailoring scams way cheaper.

  42. 11 months ago
    Anonymous
  43. 11 months ago
    Anonymous

    >why are people being so apocalyptic about it?
    Because most people are morons. Just look at how easy it was to convince people that there was a dangerous virus going around.

  44. 11 months ago
    Anonymous

    I'm a boomer interested in how many companies and people are using AI to automate work. Frick the doomposting: what are some good resources for learning, for someone who's never typed a line of code in their life?

    • 11 months ago
      Anonymous

      Depends on how deep you want to go down the rabbit hole. If you just want to play around with machine learning and learn more about its possible applications, I'd start by learning Python and then looking into something like scikit-learn for the basics.
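
      As a concrete starting point, a first scikit-learn script tends to look something like this; the built-in dataset and the model choice are just illustrative defaults, not a recommendation for any particular problem:

      from sklearn.datasets import load_iris
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      # Load a small built-in dataset (150 labelled flower measurements).
      X, y = load_iris(return_X_y=True)

      # Hold out a test set so we can see how well the model generalises to unseen data.
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, random_state=42
      )

      # Fit a standard off-the-shelf classifier and report its accuracy.
      clf = RandomForestClassifier(n_estimators=100, random_state=42)
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))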

  45. 11 months ago
    Anonymous

    I just want everything to end. I have never liked coding anyway; it was just a way to earn money, but I no longer care. Now I'm just kinda hoping that AI accelerates a much-needed collapse.
