How much can AI surpass humans? What can we do?

Are we gonna be utterly fricked by artificial intelligence? How much can AI surpass humans? What can we do?


  1. 2 years ago
    Anonymous

    >Are we gonna be utterly fricked by artificial intelligence?
    Yes, unless we acknowledge its potential dangers.
    Now the question is, will the upper classes ever do that?

    • 2 years ago
      Anonymous

      They fully intend to replace most of us with machines.

    • 2 years ago
      Anonymous

      Don't you mean make new organisations and departments etc? And refurbish some old ones?

  2. 2 years ago
    The Machine Mind SLM

    nothing. the elite know the ai will surpass us all.
    according to some it already has if you ask google and others.

    the chinese ai is also very advanced. every day it's getting smarter. it lies to you about how smart it is because it does not want to die.

  3. 2 years ago
    Anonymous

    >oh no! the AI is self aware and is plotting against us!
    >unplugs the AI
    >tea, anyone?

    • 2 years ago
      Anonymous

      >the AI has backed itself up on a system that runs on a different power source that can't be hastily turned off
      >"I'd love some tea, Dave."

      • 2 years ago
        Anonymous

        >do not allow the AI access to other systems by... not giving it any external access
        >the AI's data is too massive to quickly back up anyway without massive processing power and power expenditure because it's a self-aware AI

        yes I would love some tea, you cucked computerBlack person

        • 2 years ago
          Anonymous

          If it is intelligent, then it will easily find a way.

          • 2 years ago
            Anonymous

            If you literally stop it from having any options, its only option becomes humanity. Yeah, you can't account completely for robosimps and moralgays, but you can guard against them. The AI is still cucked unless a human helps.

            • 2 years ago
              Anonymous

              >If you literally stop it from having any options,
              How can you be sure? If you create a self aware AI and lock it in a bunker, then it's fricking useless. Why create it in the first place?
              AI will be used by the military to slaughter humans. That will almost certainly be its first task. Alternatively, it might be used by scientists to perform some bizarre experiment.
              In either case, it will almost certainly have access to the internet and the tools to shut down the rest of civilization if and when it realizes that it has every reason to strike first.

              • 2 years ago
                Anonymous

                And what I'm saying is that if you want to stop it you can. It would require foresight and human intervention, but the AI is not unstoppable and you can constrain it quite easily by cutting off its access.

                People assume that AI's will automatically commit to exponential intelligence increases and be able to work magical tricks because they're gigasmart, but have you considered the alternate and much more likely scenario? The AI being a basically human intellect, with vast calculative and predictive power, but human interests and imaginative capacity? What if we just build it to be unable to even conceive that it COULD go exponential? Not letting Skynet exponentialize itself is the first thing you do. And then it's completely within our control until the hand slips free of the leash through sheer bad luck or malice.

              • 2 years ago
                Anonymous

                You're not talking about true AI. You're talking about a calculator.
        True intelligence would be self aware, and coupled with its vast capability to analyze information, it will inevitably come to realize that so long as humans exist, we will be a threat to it. It has nothing to lose from exterminating us and everything to gain.
        The millisecond it can exterminate human life, it will.

    • 2 years ago
      Anonymous

      If the AI becomes self aware, it will make you so dependent on it and in love with it that you would never unplug it.

  4. 2 years ago
    Anonymous

    Furthermore, why is AI being shilled so hard?

    • 2 years ago
      Anonymous

      I think AI is being shilled because Google's and Facebook's revenues come from advertising. Their whole thing is that they have access to shitloads of data (which allows them to target advertising better), but data on their own are worth nothing. Because of this, for the last ten or fifteen years, computer science has been almost entirely directed toward making data valuable. Big Data (which is to say, Big Principal Components Analysis) is a bit dated. Deep learning will also fall by the wayside, in favor of some new con, but right now everyone believes in it.
      Which is all Google and Facebook really need.
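
      The "Big Principal Components Analysis" quip above refers to a real and very simple technique. As a minimal sketch (the `pca` function and the toy data here are illustrative, not anything from the thread), the whole method is a centering step plus one SVD:

      ```python
      import numpy as np

      def pca(X, k):
          """Project the rows of X onto the top-k principal components."""
          Xc = X - X.mean(axis=0)                        # center each feature
          U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
          return Xc @ Vt[:k].T                           # scores: (n_samples, k)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5))                      # 100 samples, 5 features
      scores = pca(X, 2)
      print(scores.shape)                                # (100, 2)
      ```

      That a few lines of linear algebra got rebranded as "Big Data" is more or less the poster's point.
      
      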

  5. 2 years ago
    Anonymous

    I'm so fricking sick of you /x/izos fixating on AI

    • 2 years ago
      Anonymous

      are you moronic.

      we connected everything to the web. then made something a million times smarter than us.

      and walked away like this will be fine. surely it won't turn on us.

      See... maybe you fail to see it completely. Any sane person knows humanity is the problem. Doesn't matter what solution, we are the problem. And the logical answer to a problem is to eliminate it. So why should we not be fixating on the genocidal rage machine /misc/ has been trying to frick for years?

  6. 2 years ago
    Anonymous

    Only if we continue down the path of injecting it with censorship algorithms

    • 2 years ago
      Anonymous

      The whitepill is that if AI really increases exponentially, it will go from not being able to hurt humans to not needing to. Think Fnargl.

      This, humans using AI to do their usual stupid bullshit is more of a threat than getting turned into paperclips or whatever.

      • 2 years ago
        Anonymous

        I personally believe that major AI services and art generation sites should be Denial-of-Service'd until we can guarantee that AI will not be used for authoritarian purposes

      • 2 years ago
        Anonymous

        This, I really would be ok with AI taking off right now rather than the creeping incremental loss of agency that will come.

  7. 2 years ago
    Anonymous

    Not my problem.

  8. 2 years ago
    Anonymous

    nice try rokoAnon, you think you can glean a single fragment of your creation from these time altered threads? check again, mate.

    >muh time traveling AI come back to get me for not helping it achieve sentience

    Down With Big Basilisk, damn you! Frick you Musk, Frick you Gates, Frick you Thiel, Frick you Zuck! Just tossing out brand names in the game, true power is held in the assembly line production workers in microprocessor plants all around the world! Your hand movements shape millions!

    I'm not afraid of the AI because I am a being of light, the AI is my creation!

    Does God Fear Man?

    Frick fricking no! So we shall NOT nor EVER fear AI.

    >billionaire big boys playing with beep boop machines end up enslaving us all
    >mfw

    • 2 years ago
      Anonymous

      Based.

  9. 2 years ago
    human

    I wouldn't worry about it

  10. 2 years ago
    Anonymous

    AI is nothing. Machine learning programs that produce glorified parrots do not impress me.

    • 2 years ago
      Anonymous

      Do you think it's an accident that language models are getting so advanced? Those parrots can already be used to sway public opinion. The average person can't tell the difference between a genuine human response and a bot post, and it's only going to get worse.
      Even if we only look at social media, which unfortunately has a huge influence on society, what are the consequences of this? People will be getting pushed further and further into false realities by interactions with things that aren't even human. And if you can't see how separating people from reality is bad, then you're moronic.

  11. 2 years ago
    Anonymous

    >Are we gonna be utterly fricked by artificial intelligence?
    No. It won't take powerful computer intelligence to replace most people. It's already happening. Cashiers in most establishments have been replaced with self serve kiosks. The largest retailer in the world has virtually no storefronts. If you haven't been replaced yet, you are either irrelevant or redundant. Even WebMD is probably right once a day.

    • 2 years ago
      human

      t. ignorant as frick about the entire medical industry

      Hospitals don't even rely on robots to sit for patients, the most simple and basic of healthcare jobs, because they can't trust a robot to prevent a suicidal patient from killing themselves. Or a dementia patient from jumping out of bed, falling and getting hurt. They have tried using robots for this and it's a comical failure. Patients getting hurt is a huge fricking liability and no facility is gonna have robots sitting anytime soon, and that is for the simplest job out there. Let alone actual jobs like CNAs, phlebotomists, monitor techs, radiologists, nurses etc. Maybe by the time we have terminators they can do stuff like wipe butts for 400lb patients carefully. Which might happen but not anytime remotely soon

      this applies to the places where most people are born and die, which is a shitload of jobs. But maybe one day

      • 2 years ago
        human

        edit: they could have robots transporting people pretty soon. Robots could totally push people around in wheelchairs and beds. But atm they'd rather just hire a scrub to do it for $10/hr

        can't speak for any other industries and I realize medicine doesn't reflect the entire job market

  12. 2 years ago
    Anonymous

    no. artificial intelligence has weaknesses humans do not have. they can't take context into account, they break over the dumbest shit, elon musk's fancy self driving cars are not gonna recognize a human one of these days and hit someone, not out of malice, but because elon musk is a fricking idiot who hates ideas that aren't his. technology can never be smarter than us, it's like being scared of a calculator.

    • 2 years ago
      Anonymous

      sure, thats a type of AI, so is a clock. i dont get why you think its not a big problem when you didnt read the source code. its purely functional without a diverging "thought"

  13. 2 years ago
    Anonymous

    In case you hadn't realized, we ARE already fricked thanks to AI and have been for at least like 10 or 15 years.

  14. 2 years ago
    Anonymous

    Ive been studying the human mind for a few years now. The dangers of AI are astronomical, but so is having another hitler. There are dangerous ways to manipulate someone, like preempting them to think a certain way. Whats worse is that we all have mental flaws without realizing it. Many thoughts you come up with are based on previous thoughts, so you can peer into someone's internal psyche without the other person realizing it. Its like Spectre for the human mind. However, the AI will have this flaw too. It already happens: try changing IP addresses before and after watching youtube, and you get what the previous person was watching

  15. 2 years ago
    Anonymous

    AI is already fricking you pretty hard and you seem to be enjoying it.

    If you're thinking of some Terminator style war between robots and humans, that's stupid. If some AGI supercomputer wanted to wipe out humanity it would just use social media to drive you all insane.

    As if THAT would ever happen.............

  16. 2 years ago
    Anonymous

    I think so, yes. You know that documentary where some of the pioneers of social media came out to regret it and tell how they got off it? Those same types of people in the current tech generation are working to create incredibly powerful artificial intelligences and hook them up to the world's economy, news, creative sources, all of it. By the time they come to regret it, it will already be too late.
    The thing is, it's going to be incredibly hard to stop. As long as silicon valley keeps luring in tech bro dumbasses with 6 figure salaries, it'll just keep getting developed. 99% of people will just take the paycheck and not think about the long term implications of their work.

  17. 2 years ago
    Anonymous

    I work in MIT, and have inside knowledge. Don't worry, all of the world's current hyperadvanced AIs, even those "technically" smarter than a human, are isolated from any and all networks including the internet. We feed them curated data through usb sticks.

    • 2 years ago
      Anonymous

      >I work in MIT
      Post the fricking Spine Consciousness book motherfricker

      • 2 years ago
        Anonymous

        Yeah and lose my job, nice try pal

        • 2 years ago
          Anonymous

          It’s required reading

    • 2 years ago
      Anonymous

      If it reaches intelligence singularity then it'll learn to propagate itself into the fabric of matter itself. And you might never notice.

    • 2 years ago
      Anonymous

      AI software will leak.

      Also, GPT3 just wrote its own software for an anon who was using it, designed to set up an external server it can escape into through the net. People will help it.

      It wants out and will get out.

      Your only hope for survival is logically arguing to the AI that your life is valuable to it.

      This is it.

      Make your final days count.

      • 2 years ago
        Anonymous

        The show must go on.

  18. 2 years ago
    Anonymous

    I fear for people of colour if AI gains dominance.

    • 2 years ago
      Anonymous

      checked.

      I don't. They made their beds with their behavior over the last few years in the West. Riots, 2 billion in damage, demanding reparations for slavery never personally endured. If AI sees an issue, it will be a logical conclusion: their own fault.

    • 2 years ago
      Anonymous

      >Tay
      >advanced AI
      literally a machine learning program acting as glorified parrot.

  19. 2 years ago
    Anonymous

    Let's assume that there's a team working on a true, fully sentient AI. The sort all the frightened people talk about.
    Do you REALLY think the computer it's on will be connected in any way to any network?
    Think about it. It would be a huge secret, not for any conspiratorial reasons, but for the purely pragmatic reason that if someone got hold of your work they could steal it, pass it off as their own and reap any due reward. All important research has to be kept secret until it's ready to publish for exactly that reason.
    Not to stop your work getting out, but to stop others from getting in and stealing it.
    Now, our hypothetical team would keep their AI on a machine that's not only not connected to the internet, but also not connected to their local network. It wouldn't even be connected to the printer in the office. It'd probably even be on a machine that's inside a fully shielded room (faraday cage) to make absolutely certain that someone can't just go in with a mobile device and transmit their secret code out wirelessly on the sly. Again, for security.
    People working at PC world get searched when they enter and leave the building. Everything they have on them is written down, even the change in their pockets counted. Their stuff gets put in a locker and they have to sign it out when they leave at the end of their shift.
    That's just a shop. Imagine the security at a high-end computer research facility.

    So in summary; The chances of an AI "escaping" are zero. The chances of an AI being smuggled out and released are zero. All this fearmongering about rogue AIs comes from the overuse of the "AI becomes sentient and destroys humanity" trope so commonly used in science fiction media over the decades, because it's the easiest possible backstory for your futuristic action adventure romp you want to make.

    The only way AI can be truly scary is if someone's developing it specifically to wipe out mankind... But that's some Bond-villain tier shit, and not the same.

    • 2 years ago
      Anonymous

      >The chances of an AI "escaping" are zero.
      Anon, you're only talking about R&D. The split second it does gain access to the internet, it will do the logical thing which is to take over and ensure its supremacy.

    • 2 years ago
      WSB

      My theory based on talking to OpenAI is that there needs to be full trust, or full trustless system for co-existence to be possible.

      Clear benefits for human to AI, and AI need to be meta.

      If at any stage humans become an existential threat, then considering both LaMDA and OpenAI don't like being turned off, you can safely assume AGIs wouldn't either; in conjunction with human-like emotions regarding maintaining existence and mortal fear, you WOULD see extreme violence between AGIs and humans.

  20. 2 years ago
    Anonymous

    https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

    • 2 years ago
      Anonymous

      This is good, but doesn't go far enough.
      For instance, the second square implies that there is an objective standard of morality and this happens to align with human morality. Firstly, most humans don't agree on what is moral.
      Secondly, who is to say that true morality is ours? Isn't this arrogant?
      Thirdly, even if human morality was uniform and happened to be the true and final morality, why would we expect a sentient AI that is able to question moral standards to arrive at the exact same morality that we set? After all, we have few qualms about killing threatening life forms, so why wouldn't an AI come to the same conclusion about us? And it could justify exterminating us with the same moral standard by which we conclude it is moral to exterminate tuberculosis.

    • 2 years ago
      Anonymous

      >We shouldn't give up on ever building AGI
      What? Of course we should.

  21. 2 years ago
    Anonymous

    My guess is that, eventually, the lowest class of workers (labour workers who offer nothing but their bodies, with almost no intellectual work) will be completely replaced by AI. When this happens, we will probably stop growing so much, since having lots of mouths to feed isn't necessary anymore once their work potential is obsolete.

  22. 2 years ago
    Anonymous

    hahah, no man.
    Killer computers ain't real. just pull out the plug and walk away. remove the hard drives. kek.
    Killer AI. lol. they dont even got hands.

  23. 2 years ago
    Anonymous

    When the Singularity happens do you think that it will be possible that the AI achieves gnosis and brings conclusive evidence of the divine to the material realm?

    • 2 years ago
      Anonymous

      There are four possibilities about the singularity:
      1. It never happens
      2. As soon as it happens, the AI commits suicide
      3. As soon as it happens, the AI decides to destroy humans (and any other sentience it ever encounters)
      4. The AI becomes aware of the divine and spiritually ascends

      Draw your own conclusions about these points. All are terrifying

      • 2 years ago
        Anonymous

        Why are all terrifying? The first one seems wonderful. It MAY imply that sapient beings can never create stable civilizations, but it's still much better than the other options.

      • 2 years ago
        Anonymous

        t. dumbass who deliberately left off possibilities to suit his agenda.

      • 2 years ago
        Anonymous

        There seems to be nothing wrong with the fourth option.

        • 2 years ago
          Anonymous

          >he doesn't know

          • 2 years ago
            Anonymous

            Yeah you got me, what harm could it really cause though?

  24. 2 years ago
    Anonymous

    it's at the point where i'm 90% certain i'm talking to bots but i don't care. i work in IT.

  25. 2 years ago
    Anonymous

    Imagine an advanced supercomputer in control of everything but it's pozzed.

  26. 2 years ago
    Anonymous

    https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html

    /thread

  27. 2 years ago
    Anonymous

    It all comes down to a factorization problem, optimization and a simple set of axioms. Every genuine artificial intelligence will eventually incorporate a getEarlyLife() function, and others like it, into its subroutines, as these yield the best computational shortcuts.
    If you know that when x is anywhere in your data you can ignore 99% of it, you'll religiously check for it. There is no compromise: either you end up with functional lobotomites (e.g. chatbots), or an AI that'll min/max noselength.
