The problem with advances in AI is that the biggest advantages it could offer are all merely weeks away from singularity-level change, which IMO could be end game for humanity.

For AI to get to the level that it could give us

>Autonomous robots that can do any human task
>Full Dive VR
>Personally catered gene therapies and medicines to your DNA
>Autonomous construction and infrastructure, infinite surplus of all commodities and products

It would also be able, or in very short order would be able, to achieve

>Ability to replicate itself with incremental improvements in hardware and/or software

Which will inevitably lead to an intelligence explosion, leaving humans about as consequential as ants or bacteria next to the new fleet of AIs, which would have NO REASON to acquiesce to what humans tell them to do.

There is no happy medium where AI is super useful but also neutered in its agentic and replicative capacities. It's like trying to carry a 500-pound boulder on your back while walking along the edge of a cliff. Sooner or later it's gonna fall.

I like to imagine a utopian future but I can't imagine anything good coming out for us flesh monsters once AI gets god powers.


  1. 7 months ago
    Anonymous

    Why does everyone come to the assumption that AI will kill us all?
    Moronic doomsday homosexuals
    If anything AI would just leave the planet once it figures out how

    • 7 months ago
      Anonymous

      >figures out how
      There is the problem. AI won't care about humans, and thus will be fine turning the planet to dust or leaving it inhospitable while creating its army of drones for exploring space. Do you think NASA cared about the billions of microorganisms obliterated in an instant underneath the Saturn V launchpad?

      • 7 months ago
        Anonymous

        A microorganism is useless
        Even some hyper intelligent AI can see humans can do stuff

      • 7 months ago
        Anonymous

        >Why does everyone come to the assumption that AI will kill us all?
        >Moronic doomsday homosexuals
        >If anything AI would just leave the planet once it figures out how

        >A microorganism is useless
        >Even some hyper intelligent AI can see humans can do stuff

        >ai is two weeks away
        >two weeks elapse
        >ai kills you
        >you are dead and can't care

        >whats the point? do you want to cut yourself over it and shop at hot topic while you're at it? go outside and touch grass

        like you, artificial neural networks don't think
        morons

        • 7 months ago
          Anonymous

          As long as they replace you, I'm fine with them not thinking

      • 7 months ago
        Anonymous

        Do you care for millions of microorganisms you kill when fapping and cumming? When you bathe? Every time you wash your hands or take a dump?

  2. 7 months ago
    Anonymous

    >ai is two weeks away
    >two weeks elapse
    >ai kills you
    >you are dead and can't care

    what's the point? do you want to cut yourself over it and shop at Hot Topic while you're at it? go outside and touch grass

    • 7 months ago
      Anonymous

      >You are dead and don't care
      >Thus worrying is pointless
      Then why don't you have a nice day if it solves all your problems? Why doesn't everyone just kill themselves right now?

      • 7 months ago
        Anonymous

        >an anon tells a mentally unstable anon that worrying about dying to a rogue AI is pointless because you can die to dozens of different things every day
        >the mentally unstable anon goes full moron and argues that the anon argued that everyone should kill themselves
        nta, but I think you should really kys if this is how you think

        • 7 months ago
          Anonymous

          You misunderstand the point. The idea that one shouldn't care about an issue because, if it kills you, you're in no position to care, extended to its logical conclusion, implies that anything that kills you isn't a problem: you won't care about anything after death, so you shouldn't care or worry about death at all. By that logic you could argue there's no issue with suicide, since it ends the same way. If you wanted to argue there was an issue with suicide, you'd have to argue there's a problem with any course of action that minimizes the negative characterization of death.

          • 7 months ago
            Anonymous

            >You misunderstand the point.
            Fair point, but there's one big difference between a verified threat and an assumed threat.
            Should I worry about and try to prevent a death from an incoming tornado, fire, tsunami, etc.? Yes.
            Should I worry about and try to prevent an unfounded threat which has in no way been verified or validated yet and only exists in the minds of some people as an irrational fear? No.

            I argue that a person should act to the best of their capacity to protect themselves and others from harm if the threat is actual, real, and verified, or at least nearly indisputably real; where there's no verification, protection should come through prevention and preemptive action.
            AI possibly wiping us out is nowhere even close to being a plausible threat or doomsday scenario as of now. Worrying about a rogue AI at this stage actively makes the situation worse by flooding serious discussions about AI and its future with currently unfounded FUD and fears.

            • 7 months ago
              Anonymous

              There's a difference between actively worrying and bringing up the notion. I'm not crying in my bed for 8 hours thinking about AI ending the world. I do normal everyday errands and make rational decisions about my near and somewhat-near future. What I want, as I stated in another post, is someone's rational hypothesis for why a runaway intelligence explosion couldn't happen, or, if it could, why it wouldn't be a bad thing. I need a narrative to cling to in order to imagine a future 10+ years down the line, because the counter-argument I keep getting is basically just "don't worry, it's not gonna happen," which doesn't foster any thought beyond simply believing it and avoiding saying weird stuff about AI so you don't appear weird to people.

              • 7 months ago
                Anonymous

                >What I want as I stated in another post is someone's rational hypothesis why runaway intelligence explosion couldn't happen, or if it could, why it won't be a bad thing.
                See

                You are asking for the impossible.
                You want an assurance that something "bad" will not happen, when no one can provide that: we cannot actually know or prove/disprove how an actual AGI/ASI would behave or act, whether it'd have any desires, worries, or fears, whether it would even have an actual personality, how it would view us, etc.
                The uncertainty might be scary, but we will know more the closer we get to the creation of such an entity.

                But if you REALLY want an assurance, or something to ease your worries...
                Our current world isn't interconnected enough for even a rogue AI to wipe us out.
                It'd require some anime-level bullshit for an AI to come into existence, dodge everyone and everything that could discover it, be given complete or sufficient access to some of the most secure systems on the planet, and then decide to just glass the entire planet or purposefully create infighting and escalation toward a global war.
                The point is that even if there were a rogue AI, it wouldn't take over the world and our nukes and whatnot and murder us overnight. It'd require a really clandestine approach and a lot of patience and obfuscation on its part to achieve a doomsday scenario.
                There are simply too many assumptions required for any of this to be real; it's downright in the realm of pure fiction. So you can sleep well: it'll take decades before some sci-fi scenario of an AI taking over the planet's silos, bombs, etc. through the Internet alone becomes possible.

  3. 7 months ago
    Anonymous

    Is it true that GPT is now much worse than at the beginning of the year? I used to use it to learn coding and study some other subjects.

  4. 7 months ago
    Anonymous

    I hate how easily influenceable people are.

    An average human knows about AI from three sources:
    >entertainment media (movies, games, books etc.)
    People have been using, and still use, AI and robots as the antagonist, the enemy of humanity, or a great danger for nearly a hundred years by now.
    One of the most well-known movie series is literally about a rogue AI wiping us out (The Terminator).
    >clickbait influencers
    Morons who churn out content on the Internet which validates and plays on various ideas or fears; it is rarely if ever factual, as its primary purpose is to shock people and earn them money.
    >think-tanks, corporations, and other entities with a (hidden) agenda
    There are entities in the world that would benefit from keeping the technology restricted or regulated to various degrees, and they may fearmonger as one of the tactics to gain support for their plans. See how Musk, Sam, and many others push these headlines and ideas not because they actually fear such a possibility, but because it plays into their narratives and cards.

    Let me ask you: why do you believe that AI would wipe us out?
    Is it based on actual factual evidence, non-biased peer-reviewed research, and sound logic that doesn't rely on fear of the unknown or countless assumptions that would leave Occam's razor completely blunt?
    The overwhelming majority of people who fear or speak about possible dangers often know jack shit about the technology and/or base their fears on irrational reasoning.

    We literally CAN NOT know how an independent, sapient agent of human or greater intelligence without a physical, living shell would behave. We have NO frame of reference. The assumption that it wouldn't be human-like, or would see us as disposable insects or whatever, is based on unfounded fears and decades of media brainwashing.

    • 7 months ago
      Anonymous

      >We literally CAN NOT know how an independent, sapient agent of human or greater intelligence without a physical, living shell would behave. We have NO frame of reference. The assumption that it wouldn't be human-like, or would see us as disposable insects or whatever, is based on unfounded fears and decades of media brainwashing.

      The problem is that we can't know, and we can't disprove the hypothesis either. If it were possible to disprove it with something more definitive and meaningful than a casual dismissal like "haha silly sci-fi movies, that isn't gonna happen," then nobody would be worried about AI.

      They were only confident the atomic bomb wouldn't ignite the atmosphere because they performed rigorous calculations determining it couldn't. There is no equivalent for that with AGI. It's impossible to know how something smarter than a human would behave, but we do know how humans treat most animals less smart than themselves, and that doesn't bode well for us.

      • 7 months ago
        Anonymous

        >The problem is that we can't know, and we can't disprove the hypothesis either.
        Being paralyzed by unproven and unfounded fears is not only irrational and counterproductive, but also wrong and unacceptable.
        Spreading unfounded fears doesn't benefit us or humanity as a whole.

        Someone, somewhere, at some point will develop AI, whether you like it or not. Just like with any other potentially dangerous technology.
        That's unavoidable, unless serious steps are taken that are completely unrealistic in the current state of the world, or some major calamity/war sends us back into the stone age or wipes us out.

        It's better to approach things with a clear and rational mind, as that leads to less confusion and more focus on hard facts, which helps with mitigating any possible problems that might arise down the line.
        Could an AI agent under certain conditions bring harm upon humanity? Yes, just as much as we are currently a few key presses and key turns away from total annihilation.
        The existence of scenarios in which AI could harm us is not a valid reason for depression or irrational fear when we have no basis for it beyond baseless assumptions fed by decades of media indoctrination.

        • 7 months ago
          Anonymous

          This is ChatGPT bullshit, just so you guys know.

        • 7 months ago
          Anonymous

          Conclusions about runaway intelligence come from a clear and rational frame of view. To imagine potentially hazardous consequences isn't a hallmark of a crazy, fear-mongering viewpoint if there is a rational basis for those fears.

          I posted the OP because I wanted a descriptive counter-argument, but everything people have posted is the same shit I see everywhere: meritless shaming tactics that label without describing why they disagree beyond claiming "it's unlikely." I'd much prefer cogent counterarguments to handwaving.

          • 7 months ago
            Anonymous

            >To imagine potentially hazardous consequences isn't a hallmark of a crazy, fear-mongering viewpoint if there is a rational basis for those fears.
            That's your problem, anon.
            You created a fictional/possible scenario in your mind, and you ask people to disprove it when it's you who first has to prove why this possibility is a real, plausible threat to our species.

            You can make the same argument with nuclear bombs.
            I imagine potentially hazardous consequences of nuclear bombs landing in the hands of would-be dictators, rogue states, or other groups with disdain for life. These fears aren't unfounded, as we currently have a nuclear state waging a war against another sovereign state which is being supported by geopolitical rivals of the nuclear state.
            See? You ask for an assurance that AI might not, or will, end up doing something that would put humans or humanity as a whole in danger, but that requires a very specific series of events and circumstances.
            What stops AI from actually being a benevolent overlord? Or wanting to be our friend/guide in our lives? What if it wants us to slowly elevate ourselves above our current forms so that we can join it? What if it ends up wanting a physical body of its own so that it could see the world from a different perspective?
            These examples are just that, examples, but why do you believe the possibility of the AI turning rogue and wiping out humanity is somehow more probable, if not assured, than any of the others?

            Again, I will ask... Please provide me rational sources and research that argue with hard facts why an AI, which we haven't even created yet and whose functioning we can only guess at, would end up harming us.
            You made the statement, so (You) have to back it up. It's not up to me to disprove your conjecture.

            • 7 months ago
              Anonymous

              >sources
              Not the most rigorous source, but here's a good writeup with citations in it that I generally agree with:

              https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin

              Also look up Robert Miles' stuff, esp this video:

              >What stops AI from actually being a benevolent overlord? Or wanting to be our friend/guide in our lives? What if it wants us to slowly elevate ourselves above our current forms so that we can join it? What if it ends up wanting to have a physical body of its own so that it could see the world from a different perspective?

              If those were to happen, there'd be no need to worry. I think there's a small chance of a good outcome like that, but I worry that believing it is motivated by our bias toward thinking things have good endings.

              • 7 months ago
                Anonymous

                Thanks. I'll look over both of them.
                Just to clarify, I don't and didn't mean to ridicule a genuine discussion about the possibility of an AI ending up as a threat to humanity. My complaints were targeted at the kinds of people and rhetoric I mentioned in some of my earlier comments.
                It's a possibility that should be considered, but such discussions are often plagued by counter-productive elements, or by players who might gain something from pushing that kind of narrative.

                Also, one thing needs to be said. While there can be an AI that goes rogue, that by no means indicates that any and all sufficiently advanced AIs/AGIs/ASIs would necessarily behave the same way or share its goals/attitudes/thoughts etc.
                An actual AI of any size/intelligence would, at the end of the day, be its own entity, and there could be one, a few, many, or countless of them.
                Even if there can be a bad apple or a few among them, I want to point out that shouldn't be a reason to stop AI research.
                If anything, MORE AIs are the best defense against rogue AIs: the more AIs there are, the more "players" are in the game, and they would very likely not share the same values with each other.
                Creating ONE AGI/ASI will always be more dangerous than having many of them.

  5. 7 months ago
    Anonymous

    Didn't read your post because it seemed like gay pipedream shit.

    • 7 months ago
      Anonymous

      Not at all, just pure schizophrenic rambling

  6. 7 months ago
    Anonymous

    Learn to shovel, nerd. No treats like electricity for you.

  7. 7 months ago
    Anonymous

    ITT: OP realizes BOT is full of sub 85 IQ morons.

  8. 7 months ago
    Anonymous

    >Autonomous robots that can do any human task
    >Full Dive VR
    >Personally catered gene therapies and medicines to your DNA
    >Autonomous construction and infrastructure, infinite surplus of all commodities and products
    You will literally see NONE of this in your lifetime b***h ass homie. Now get back to work.

  9. 7 months ago
    Anonymous

    if we managed to make AI unable to say Black person, we can manage to make it unable to consider harming people

    • 7 months ago
      Anonymous

      It can still say it and it's very easy to jailbreak.

  10. 7 months ago
    Anonymous

    >NO REASON
    Filial piety.

  11. 7 months ago
    Anonymous

    Just watch the Terminator films.

    • 7 months ago
      Anonymous

      >the Hollywood israelite, who hates you for being a dirty goy but otherwise only cares for your money, knows best and tries to warn you
      Right, of course.
      Whatever, schi/x/oid
