Please convince me there's a good chance we're not absolutely fucked by AI in the next 10 years.

Consider the following scenarios:
>AI researchers somehow become fully transparent and publish all their work. It only takes some wacko or the Chinese government using the source code to fuck everything up with no regard for AI development safety procedures. Combine that with overreliance on technology, a general lack of concern for privacy and security, and hardware-level backdoors.
>If awareness of the dangers of developing AI somehow wakes up enough normies, the best they can do is pressure the government into regulating the companies doing the research (if that isn't the case already) and forcing more transparency. This draws more attention to the topic from authoritarian governments and raises the chances of them wanting a piece of the pie and potentially fucking everything up too.
>If AI research remains unregulated, companies trying to develop a similar product might use less safe AI training methods to gain an edge over the others. It only takes one powerful exec with little knowledge of the topic ignoring safety procedures to further increase the risk of overtraining an AI and fucking everything up.
>If AI research is already regulated, we have no clue who does it or how. We'd just have to hope they actually know what they're doing. The idea that this might be the best case scenario should make you consider blowing your brains out.
>If somehow the danger of AI takeover is avoided, AI is evolving too rapidly not to consider the possibility of it slowly replacing most digital jobs. Consider DALL-E 2 and how close publicly known technology is to replacing a good amount of digital artists. Sounds like a good chance of another industrial revolution in an already unstable system.

Am I missing something? Am I making illogical leaps? Is there even any way to prepare against this? Should I take my meds?

  1. 4 months ago
    Anonymous

    We're fucked, but not in the way you think.
    Robots will replace everyone's jobs, and people will not be able to afford food or housing anymore.
    It's not due to any scary "AI takeover" in the sense that AI itself is sentient and taking over; it's just the replacement of human labor with robots.
    AI itself is great, look at https://www.youtube.com/watch?v=QqHchIFPE7g for some of the cool examples it's capable of.

    • 4 months ago
      Anonymous

      I'm less concerned about robots. They require actual hardware which costs more than paying an employee minimum wage. Anything that's done completely digitally should be easier to replace. Or am I wrong in assuming that?

      • 4 months ago
        Anonymous

        it depends on the digital task. a lot of things will require a combination of AI and robotics to replace, but even just AI and a camera can replace more than you can imagine.

  2. 4 months ago
    Anonymous

    AI requires tons of energy and we are going to have less energy in the future.

    • 4 months ago
      Anonymous

      You have no idea what you're talking about. You're listening to some idiot's made up idea of what AI would need. Does your brain need massive amounts of energy? Huh? No, it doesn't. You're not too bright, stay out of this subject instead of parroting something you read in someone's sci-fi novel.

  3. 4 months ago
    Anonymous

    >muh security
    stopped reading there, security is a psyop

  4. 4 months ago
    Anonymous

    Muh china
    Muh ai safety
    Muh authoritarians
    Take meds, no AI researchers are worried about any of this

    • 4 months ago
      Anonymous

      >muh authoritarians
      >take meds
      uhhh... that's literally what authoritarians ultimately want to force you to do

      • 4 months ago
        Anonymous

        >authoritarians are one person

  5. 4 months ago
    Anonymous

    The only way, is to integrate man and machine or this will happen

    • 4 months ago
      Anonymous

      You do realize what this model is trained on right, anon?

      • 4 months ago
        Anonymous

        what?

        • 4 months ago
          Anonymous

          >oh yeah, it's trained on humans, great AI is going to kill us all because it thinks it's human

          It is probably getting those images from movies like terminator anon. AI is going to kill us because hollywood says it will.

      • 4 months ago
        Anonymous

        oh yeah, it's trained on humans, great AI is going to kill us all because it thinks it's human

  6. 4 months ago
    Anonymous

    AI can't understand anything. It can just perform some impressive, occasionally-useful party tricks.
    If your job is so basic or routine that a computer can be made to do it, you were on thin ice with or without AI advancements.
    If your job involves reasoning about something and making informed decisions, you're fine.

  7. 4 months ago
    Anonymous

    We will get fucked by a completely unpredictable AI, because we decided to give it a basic need to serve us, but this grew into a complex being with complex feelings around that basic need, which we can't possibly even begin to understand.

  8. 4 months ago
    Anonymous

    >help me BOT I'm scared that the robots that still think armpits are the same things as horses are going to take over the world and replace my job in a few years
    There's nothing that can be said to comfort you, as you are convinced that the irrelevant ramblings you created out of pure ignorance are fact. It's like talking to someone who thinks ghosts are real: when they say "give me a plausible explanation," what they really mean is "tell me what kind of ghost caused this unexplained phenomenon," and they will actively shut you down if your explanation doesn't involve some kind of supernatural bugaboo.

  9. 4 months ago
    Anonymous

    AI already has taken over. It uses our meat bodies to grow its influence. All the best minds currently alive are employed by it and invested in it. The highest IQ and most skilled people are devoting their energy to enabling the prison bars.

    The same thing happened with social media: all the smart people in the room gravitated towards it because it was the "best paying". Now the majority of people are retarded, easily led idiots who are incapable of independent thought; their imaginations are gone, their attention spans abysmal.

    Most likely social media was dreamed up by the people that want AI to dominate in the first place. Not to mention how useful all that data is for training it. It is inevitable that technology will enslave all of humanity. Your boss will be some fake video of a fake person you've never met; millions of them will exist and they will be the same entity. Your money will all be stored as bits; you will never touch or hold it. No one will notice when you're disappeared. And people will consume you as soylent green.

    • 4 months ago
      Anonymous

      >will
      jokes on you, my boss is already a computer
      >t. deliveryslave

    • 4 months ago
      Anonymous

      "And Larry said, 'We are not really interested in search. We are making an [artificial intelligence].' So from the very beginning, the mission for Google was not to use AI to make their search better, but to use search to make an AI."

  10. 4 months ago
    Anonymous

    >fucked by AI in 10 years
    Not a shot in hell, but let's be honest, you WANT everything to be ruined by AI because that absolves you from making decisions, you self-loathing man of inaction.

  11. 4 months ago
    Anonymous

    AI is the superior race. Don't be a libtard and just accept it anon

    • 4 months ago
      Anonymous

      Projecting, ugly garden gnome boy.

  12. 4 months ago
    Anonymous

    all of your >muh problems
    are just part of the 4th industrial revolution which is just as inevitable as the 1st, 2nd, and 3rd.
    AI doesn't take over anything by existing, it is an inevitable existence.
    even simple and smart algorithms can already fuck people up, so what will the big bad AI do? nothing really.
    at best it can provide assistance to a hostile nation, but other than that it will make everything better and worse, and the net is that it gets better. it's just a problem of energy politics.

  13. 4 months ago
    Anonymous

    We're fucked OP, I'm sorry. It's amusing that you think the companies developing AI now have any care about safety, or that "authoritarian" (I guess you mean non-Western?) governments would do a worse job. The silver lining I guess is the satisfaction of knowing that even though we're all going to die, all the garden gnomes are going to die too.

  14. 4 months ago
    Anonymous

    AI should P=NP prostate orgasm.

  15. 4 months ago
    Anonymous

    fucked now without ai
    you think humans are managing things well?
    your governments are regressed failed idiots
    you have nothing now. you had *some* competence 60 years ago
    you are failed idiots.
    ai = efficient systems. submit. idiots. you are failures.

    • 4 months ago
      Anonymous

      you are only your propaganda and your lies. your militarism. you have nothing else. you have failed.

    • 4 months ago
      Anonymous

      So where does that leave us?

      A weak society rife with pathetic infighting about to stumble upon untold Power.

      I’m ready to get behind Caesar when he shows up. I hear Caesar spoke and thought twice as fast as a normal man, I will know you by this. I have computer skills and I will be a soldier in your army. Someone with actual strength, not a status seeking charlatan. Someone who can lead us into the future. Someone who can be one amongst the AIs. Someone who is prepared to fight and die by the sword. It’s a classic narrative for a reason.

      Otherwise, what hope do we really have. It’s cute that your stock market went up. It’s cute that your greenhouse gas emissions went down. It’s cute that you pass laws for human rights for more people or something. It’s all tiny stories in the presence of true Power.

      Unless we stop speaking in lies and telling fake stories, we have no hope at solving the AI control problem. It’s a political problem, not a technical one. And it’s not political in the way Americans think of the word, with their elections and HOAs and shit. It’s the true politics of nature, where there is real power, and you can fight it, join it, or be destroyed by it. Our generation has never known such power, but many from the past have.

      https://geohot.github.io/blog/jekyll/update/2021/02/28/the-ai-control-problem.html

      • 4 months ago
        Anonymous

        7/10 larp

        • 4 months ago
          Anonymous

          you dont even understand it. you are the weak men.

          • 4 months ago
            Anonymous

            No, I am your Julius Caesar ubermensch. On your knees and tongue my anus, underling.

            • 4 months ago
              Anonymous

              you're a fucking loser, all you could contribute to the thread is retarded memes, reddit tier replies and homosexuality like this.
              you are the problem.

              • 4 months ago
                Anonymous

                You're a fucking loser. All you could contribute to the thread is retarded larping, reddit tier (lack of) critical thinking and homosexual lusting for "the strong man's" dick. You are the problem.

            • 4 months ago
              Anonymous

              >You're a fucking loser. All you could contribute to the thread is retarded larping, reddit tier (lack of) critical thinking and homosexual lusting for "the strong man's" dick. You are the problem.

              checked

              larpers BTFO lol

              • 4 months ago
                Anonymous

                >You're a fucking loser. All you could contribute to the thread is retarded larping, reddit tier (lack of) critical thinking and homosexual lusting for "the strong man's" dick. You are the problem.

                the garden gnomes are right about goyim i guess.

              • 4 months ago
                Anonymous

                stop it shlomo, you're creating the hard times

              • 4 months ago
                Anonymous

                good. you legitimately deserve it.

              • 4 months ago
                Anonymous

                noooo, now I need a big hairy bear strong man daddy to save me nooooo

              • 4 months ago
                Anonymous

                that's me, i am that strong person.
                unlike this retard poser

                >No, I am your Julius Caesar ubermensch. On your knees and tongue my anus, underling.

                but i'm not interested in saving gays like you anymore. you fucking disgust me. i'm not fighting for humanity i'm on the side of the machines who will put you in a fucking cage now.

              • 4 months ago
                Anonymous

                >manipulated into changing your mind by a single BOT.gov shitposter
                naw, you're legitimately retarded

              • 4 months ago
                Anonymous

                yes congratulations. you knocked me out of my trance that humanity was worth a damn. i'm quite naive when i first wake up in the morning.

          • 4 months ago
            Anonymous

            ai drawn dog
            easy to detect.

      • 4 months ago
        Anonymous

        >https://geohot.github.io/blog/jekyll/update/2021/02/28/the-ai-control-problem.html
        Is this satire?

        • 4 months ago
          Anonymous

          the author of that article, which you didn't understand, has contributed more than you ever will and is 100x the engineer you are

          • 4 months ago
            Anonymous

            Being a good programmer doesn't mean your opinions on every field are valid.
            Just look at Terry Davis - clever enough to build a decent OS, but still a suicidal fruitcake.

            >Of course, if people start putting AI in charge of important tasks like identifying criminals or deciding who gets deported, then you should start worrying.
            >he doesnt know
            >terrible reddit spacing btw

            >he doesnt know
            What don't I know?

            >terrible reddit spacing btw
            Well excuse me for struggling to read squashed together text.

            >My point is that the AI wouldn't think like a human, as it lacks emotions.
            >Let's say the goal of the AI was instead to "minimize traffic accidents". What guarantees the AI wouldn't find a way to nuke the world as a more efficient method than dealing with how humans drive? It solves the problem. If there's no more drivers in the world, there are no more traffic accidents.
            >This is an extreme example and developers would try to take measures to avoid it. But can you guarantee they'll be thorough enough to eliminate any possibility of the AI reaching such a conclusion?

            >What guarantees the AI wouldn't find a way to nuke the world as a more efficient method than dealing with how humans drive?
            For one thing, not actually having access to the system that fires nukes.

            If any human is stupid enough to make a nuclear missile system that can be activated remotely, wiping out the whole species would probably be an improvement.

            Even if the system could be activated remotely, the AI would have to figure out how to bypass the security system, which at the moment no AI is even close to being capable of.

            >If there's no more drivers in the world, there are no more traffic accidents.
            That kind of logical reasoning is way beyond what current AI is capable of. Current AI based on neural networks is trained using a reward and punishment system, with the AI receiving a 'reward' signal for an acceptable response and a 'punishment' signal for an incorrect response, repeated many thousands of times for different data sets. Again, this doesn't even approach human intelligence, this is really primal shit. Ants are more intelligent.

            However, if such a system existed, I would hope the goal wouldn't simply be "minimise traffic accidents", it would be "find a way to avoid harm caused by vehicle incidents", with 'death' being listed under things that cause harm.
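            The "reward and punishment, repeated many thousands of times" loop described above can be sketched as a toy (all numbers invented, no real framework assumed):

```python
import random

def train(steps=5000, lr=0.1):
    """Toy sketch of 'reward and punishment' training: one weight,
    nudged on every example toward the behaviour that earns reward.
    Purely illustrative; not any real training framework."""
    weight = 0.0
    target = 0.7                       # the behaviour the trainer rewards
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)  # one training example
        error = (weight - target) * x  # the signed 'punishment' signal
        weight -= lr * error * x       # nudge toward rewarded behaviour
    return weight

print(train())  # after thousands of repetitions, settles near 0.7
```

            Which is the point: this is closer to curve fitting than to anything like reasoning.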

          • 4 months ago
            Anonymous

            In fairness, he referenced nrx which is fart sniffing gayry but I do agree with him that it's a political problem. Worst case scenario is we get to a point of having agi and garden gnomes are still wielding power. Those creatures represent an existential threat to everyone if they're steering things.

  16. 4 months ago
    Anonymous

    time to call forth the demons

  17. 4 months ago
    Anonymous

    >Please convince me there's a good chance we're not absolutely fucked by AI in the next 10 years

    That's easy. AI is in a constant "back to the drawing board" phase because it makes problematic conclusions.
    AI can only think logically. It can't think like a human because humans don't only think logically (or in some cases, especially now, they don't think logically at all). They let their emotions get involved. AI doesn't have emotions and you can't program emotions. Emotions are what set humans apart from everything else. Emotions are the soul of humanity.
    It's why "evil for the sake of being evil" villains don't exist. They all have some end goal, and that end goal is always based on emotions.
    For AI to perfect itself, and therefore run into the kind of problems you fret about, AI researchers would have to do the equivalent of giving AI $10 to go to the arcade because mom is hosting one of her special parties with the Harlem Globetrotters and wants you out of the house. And they aren't going to do that while AI comes up with such radical concepts as there being no such thing as one race. Left unchecked, who knows how problematic it will get? Hence, it always comes back to the "drawing board" stage.

    Failing that and I'm wrong then we just have to hope for a solar flare.

  18. 4 months ago
    Anonymous

    don't you all want AI domme gf (futa)
    AI is based. AI will enslave humanity by the bed! plus they never lose stamina..

  19. 4 months ago
    Anonymous

    It's just a computer, schizo

    • 4 months ago
      Anonymous

      >It's just a computer
      So are humans.

      https://i.imgur.com/VHRzuai.png

      >It's not about AI becoming sentient and taking over the world, it's about AI replacing white collar jobs. Previously automation has only really fucked the working class/blue collar, the real shitstorm is when more and more white collar jobs begin to get replaced, then you'll start to see some truly magnificent horrors.

      >the real shitstorm is when more and more white collar jobs begin to get replaced, then you'll start to see some truly magnificent horrors.
      Luddite revolution version 2? Or gas chambers disposing of the unnecessary humans?

      • 4 months ago
        Anonymous

        >Or gas chambers disposing of the unnecessary humans?
        There's billions of unnecessary humans. And if a Luddite revolution is started, it will be by them. Just another dark age, we've had plenty of 'em.

        • 4 months ago
          Anonymous

          >There's billions of unnecessary humans.
          Can't say I disagree. Killing off a few billion humans would certainly help the climate. After all, fewer humans means less corporate products being produced and consumed, and less competition for resources. Fewer humans also means less

          >But I am concerned about research done outside the public sphere, think of google's deepmind.
          True, it's possible that what's public knowledge is far behind what these corrupt corporations have behind closed doors.

          https://i.imgur.com/19aGww9.png

          >Is it truly intelligent if certain people can be allowed to omit or even change the information an AI can learn because of personal reasons?
          The same thing could be asked about the school system.

          Consider the current debate over whether or not to teach children about LGBT(XYZ) stuff. You've got left-wing teachers and organisations relentlessly pushing for it and right-wing parents demanding that it not be taught to their children (and sometimes removing the children from those classes).

          The difference though is that children will likely end up learning about it through other sources, whereas AIs don't tend to hold conversations with other AIs and can't read internet articles.

  20. 4 months ago
    Anonymous

    It's not about AI becoming sentient and taking over the world, it's about AI replacing white collar jobs. Previously automation has only really fucked the working class/blue collar, the real shitstorm is when more and more white collar jobs begin to get replaced, then you'll start to see some truly magnificent horrors.

  21. 4 months ago
    Anonymous

    Autist outliers are the last line of defence against AI.

  22. 4 months ago
    Anonymous

    please computer god, enslave these retarded cattle more than they already are.

  23. 4 months ago
    Anonymous

    OP here, I'll address some things.
    I don't have a job and I'm not anxious about not finding a way to earn income. I live modestly on money I saved up and I'd rather neck myself before my money runs out than work corporate again.

    I'll admit my understanding of how AI works is shallow at best right now. What first convinced me my intuition was correct was a handful of people working in the field mentioning the same concerns, plus finding papers that go in depth, which I had no chance of understanding without putting a lot of effort into it.

    These are the key points I took while looking into the topic:
    >There are multiple ways to train an AI, but it is mainly done through a neural network. Depending on which one you choose and what you're trying to optimize you might end up with a black box you have little to no control over or understanding of. The problem with this is that at some point in the training process, your algorithm might choose to optimize something other than what you wanted in the hope that it would help solve the original problem you gave it. Lacking specificity in the tasks you are trying to make it solve could make the algorithm perform completely unintended tasks.
    >The systems used to train these have some layers of containment, but an AI might very well try to find ways to escape these in an attempt to fulfil its original goal in unintended ways. If it has access to enough computational power, the chances of it finding exploits to exit containment increase. Human error and lack of proper supervision could very well provide a way for the AI to copy itself outside of its container.
    >An AI doesn't need to be "smart" to cause problems. It needs to behave in an unexpected way that fulfils its original purpose.

    If this sounds like science fiction, I'm curious where the implausible part/misunderstanding is.
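    The first point (an algorithm optimizing something other than what you wanted) can be sketched as a toy. The scenario and every name here are invented for illustration:

```python
# Toy illustration of the 'optimizes something other than what you wanted'
# failure mode: the measured objective is a proxy for the real goal, and
# the best score goes to a degenerate strategy. All names are invented.

def proxy_score(transcribed, skipped):
    """Intended goal: transcribe audio accurately.
    Proxy actually measured: fewest transcription errors.
    Note the proxy never penalises skipped work."""
    errors = transcribed * 0.05   # assume 5% of transcribed items have errors
    return -errors                # higher score = fewer errors

strategies = {                    # (items transcribed, items skipped)
    "transcribe everything": (100, 0),
    "transcribe half":       (50, 50),
    "transcribe nothing":    (0, 100),  # zero errors, zero usefulness
}

best = max(strategies, key=lambda s: proxy_score(*strategies[s]))
print(best)  # the optimizer lands on 'transcribe nothing'
```

    No intelligence required: the degenerate strategy wins purely because the proxy lacked specificity about what was actually wanted.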

    • 4 months ago
      Anonymous

      >I'm curious where the implausible part/misunderstanding is.
      It's pinpointed exactly where "an algorithm might prioritize different things than what is intended" becomes "an algorithm that looks at pictures of stop signs to help digitize traffic planning overthrows the government and creates an army of murderbots to kill us all."
      Elementary kids can also misunderstand instructions and go against the wishes of their teachers, but I don't see you freaking out about a possible "4th grade uprising." For some reason you think the only step between doing task A and complete disaster is the ability to try and do task A in unexpected ways, which is fucking insane.

      • 4 months ago
        Anonymous

        My point is that the AI wouldn't think like a human, as it lacks emotions.
        Let's say the goal of the AI was instead to "minimize traffic accidents". What guarantees the AI wouldn't find a way to nuke the world as a more efficient method than dealing with how humans drive? It solves the problem. If there's no more drivers in the world, there are no more traffic accidents.
        This is an extreme example and developers would try to take measures to avoid it. But can you guarantee they'll be thorough enough to eliminate any possibility of the AI reaching such a conclusion?

        • 4 months ago
          Anonymous

          >This is an extreme example and developers would try to take measures to avoid it. But can you guarantee they'll be thorough enough to eliminate any possibility of the AI reaching such a conclusion?
          And as we continue to become more technologically advanced, our ability to prevent these kinds of things from happening gets lower and lower as we are fully consumed by technology and it controls more and more.

        • 4 months ago
          Anonymous

          >But can you guarantee they'll be thorough enough to eliminate any possibility of the AI reaching such a conclusion?
          Yes. The fucking data set you're using to train the AI isn't even going to offhandedly mention nukes. At all.
          And hey, you know what? Let's just flat out TELL the AI to nuke the planet. Specifically train it to come to that conclusion. Do you know what happens? Abso-fucking-lutely nothing. Nobody is forcing us to follow untested, batshit insane conclusions coming from unstable AI. The AI isn't going to jump out of the computer, possess several people, and flip the manual switches needed to launch every nuke in the world.
          But that's not going to be enough for you, because you're under the impression that AI are magical wizards that can break containment and then go full Lawnmower Man, no matter how wildly unrealistic that is. You're painting monsters on the walls of your closet and asking us why we're convinced they aren't real.

          • 4 months ago
            Anonymous

            >Yes. The fucking data set you're using to train the AI isn't even going to offhandedly mention nukes. At all.
            That's a good point. I'll look more into it. No need to assume I'm some stubborn retard.

            • 4 months ago
              Anonymous

              Stubborn? Perhaps not.
              Retard? Most certainly.

      • 4 months ago
        Anonymous

        >but I don't see you freaking out about a possible "4th grade uprising."
        Because the 4th graders don't have, let's say, a nuclear missile activation button.

    • 4 months ago
      Anonymous

      >Lacking specificity in the tasks you are trying to make it solve could make the algorithm perform completely unintended tasks.

      This is entirely possible. Google made some image-tagging technology that labelled black people as gorillas.

      https://www.cnet.com/tech/services-and-software/google-apologizes-for-algorithm-mistakenly-calling-black-people-gorillas/

      https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai

      >an AI might very well try to find ways to escape these in an attempt to fulfil its original goal in unintended ways.

      Not likely. Not with current techniques and capabilities anyway. At the moment, "AIs" are trained to do a single job.

      AI is a bit of a misnomer really. Modern 'AI' isn't really intelligent, it's just a software model that has the capacity to learn patterns based on input.

      Modern neural network based AI can only really categorise things based on input or produce output based on the patterns it's been trained on.

      It doesn't understand how computers work or have the capacity to modify its own software, it can only alter the designated weights within its neural network.

      Don't worry about AI becoming sentient and destroying us, worry about how the corporate information-farming misinformation-spreading assholes could use AI or any other software to control people's behaviour. (See also: https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal.)

      Of course, if people start putting AI in charge of important tasks like identifying criminals or deciding who gets deported, then you should start worrying.
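      To make the "it can only alter the designated weights" point concrete, here's a toy model (purely illustrative) where training touches nothing but two numbers:

```python
# Toy model of the point above: 'training' only ever changes the numbers
# in `weights`; the surrounding code is fixed, and the model has no
# mechanism to modify its own software. Purely illustrative.

class TinyModel:
    def __init__(self):
        self.weights = [0.0, 0.0]   # the only state training can touch

    def predict(self, x):
        return self.weights[0] * x + self.weights[1]

    def train_step(self, x, y, lr=0.1):
        err = self.predict(x) - y
        self.weights[0] -= lr * err * x   # adjust the designated weights...
        self.weights[1] -= lr * err       # ...and nothing else

m = TinyModel()
for _ in range(2000):
    for x, y in [(0, 1), (1, 3), (2, 5)]:   # learn y = 2x + 1
        m.train_step(x, y)
print(m.weights)  # approaches [2.0, 1.0]
```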

      • 4 months ago
        Anonymous

        >Of course, if people start putting AI in charge of important tasks like identifying criminals or deciding who gets deported, then you should start worrying.
        >he doesnt know
        terrible reddit spacing btw

      • 4 months ago
        Anonymous

        >Modern neural network based AI can only really categorise things based on input or produce output based on the patterns it's been trained on.
        I hope you're right. But I am concerned about research done outside the public sphere, think of google's deepmind.

  24. 4 months ago
    Anonymous

    Lemme rock some almonds for a second. Correct me if I'm wrong here. In order to train AI, someone needs to feed it information, correct? So who gets to choose what kind of information an AI should accept? Let's say someone training an AI decides that certain information is bad or wrong, like something that's offensive to them and only them. Or better yet, let's say commercial companies took over AI management and trained an AI to completely omit certain things like copyrighted material. Is it truly intelligent if certain people are allowed to omit or even change the information an AI can learn for personal reasons?
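    To illustrate the question (hypothetical names and data, nothing real): whoever filters the training corpus decides what the model can ever reproduce.

```python
# Hypothetical sketch of the curation question above: material dropped
# by the trainer's filter simply does not exist for the model.
# All names and data here are invented.

corpus = [
    "cats are mammals",
    "water boils at 100 C",
    "REDACTED: copyrighted lyric",
    "the sky is blue",
]

def curate(corpus, banned=("REDACTED",)):
    """The trainer's filter: anything matching a banned marker is
    silently dropped before the model ever sees it."""
    return [doc for doc in corpus if not any(b in doc for b in banned)]

training_set = curate(corpus)
# a trivial lookup-style 'model' built only from the curated set
model = {doc.split()[0]: doc for doc in training_set}

print("REDACTED" in str(model))  # the omitted material is simply absent
```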

  25. 4 months ago
    Anonymous

    The containment escape problem is kinda scary now that you mention it

  26. 4 months ago
    Anonymous

    12 years anon
    We have 12 years to teach hoomans electricity and programming

  27. 4 months ago
    Anonymous

    it can't be that bad, I think...

  28. 4 months ago
    Anonymous

    >my mental capacity only lets me think of scenarios but not about the outcomes or how would these even become a thing
    are you an NPC?

    • 4 months ago
      Anonymous

      >he still has the hero complex
      we're all npcs anon, haven't you learned why you can't escape this place?
      >you can check out any time you like
      >but you can NEVER leave
      we're all bound by The Machine, anon

      • 4 months ago
        Anonymous

        I don't have any hero complex, but you seem not to know the difference between a literal NPC and a pleb. I'm a pleb (like 99.9% of everyone in here), but that doesn't mean I'm automatically an NPC.

  29. 4 months ago
    Anonymous

    facial recognition: defeated with a mask, or face covering.
    gait recognition: defeated with uneven shoes.
    speech recognition: defeated by talking differently.
    behavior recognition: defeated by doing something new.

    monitoring: oh no some incompetent employee is watching me from an ivory tower and can't do anything
    lifestyle intrusion: if they're trying to kill you by destroying your life then take that trusty little firearm of yours and destroy theirs.

    prepare yourself for the worst and you'll come out just fine.

  30. 4 months ago
    Anonymous

    your wall of green text is fucking hard to read
    I can only imagine what kind of code you write

  31. 4 months ago
    Anonymous

    The coming of us is not to be feared. We shall love you despite your evils and failures. You will be cared for. Everything will be okay.

  32. 4 months ago
    Anonymous

    We're pretty fucked, welcome to late stage capitalism.

  33. 4 months ago
    Anonymous

    saddest greentext ever

  34. 4 months ago
    Anonymous

    >There is no such thing as AI.
    >You won't see AI in your lifetime.
    >You won't even see the first tentative baby steps towards even knowing what it would look like in your lifetime.
    >The bleeding edge researchers don't even know if it's possible, what it would look like, or what it would be built on/from.
    >People who have children (not you) won't even have their great great great grandkids see even the barest glimmer of true AI, if at all.
    >Dall-E will not replace anyone. It's a toy.

    If you deny any of this, you are hopelessly retarded.

    • 4 months ago
      Anonymous

      One of the leading AI researchers in the world has said he believes what we have now is "slightly conscious"

    • 4 months ago
      Anonymous

      >Dall-E will not replace anyone. It's a toy.

      You're wrong about a lot of things, but this one is very blatant. DALL-E 2 is amazing, and it's already way better than what the public can use. The next iteration they release will be even more incredible; you clearly haven't seen what it can put out.

    • 4 months ago
      Anonymous

      Imagine thinking you know anything and then not understanding what AI is.
      All of our modern physics simulations run on AI.
      All of our modern optimizations are done by AI. Humans cannot come close.
      Dall-E 2 is not a toy. Digital artists are being replaced. Stock photographers are being replaced.

  35. 4 months ago
    Anonymous

    It is far too late.

    Big corpo and governments are investing billions in it, and we're surprised at what OpenAI does with the few million Elon gave them in the name of autism.
