Why do you keep making AI whilst saying AI will destroy us? I thought scientists were supposed to be smart and shit.

Why do you keep making AI whilst saying AI will destroy us?
I thought scientists were supposed to be smart and shit. First the atom bomb, now this? Come on..


  1. 1 month ago
    Anonymous

    Nobody who actually has any clue how any of this stuff works thinks that AI will destroy us.

    As a general rule, you can disregard about 99% of what comes out of the "less wrong" corner of the internet because they have literally no clue how any of these "agent systems" work. As a result, they tend to spend a lot of time on speculative science fiction and make really strange claims about decision problems without even having the first clue what it would mean for them to be right.

    • 1 month ago
      Anonymous

      Hinton doesn't know how it works?

      • 1 month ago
        Anonymous

        Hinton doesn't believe what you think he does. As far as I'm aware, his main concerns regarding AGI come down to the economic impacts from increasing automation (which are serious but certainly don't require AGI to be serious) and the potential for misuse by malicious actors (which also doesn't require AGI to be a serious problem).

        Outside of concerns about automation tools which don't require AGI, his main concern directly relating to AGI appears to be far more reserved than that of the "p(doom)" death cultists. An adaptive controller doesn't require general intelligence capable of replacing a human, but adaptive LQG based controllers can certainly guide a jackknife drone towards your house.

        • 1 month ago
          Anonymous

          He does believe the things you said, but you missed the most important one: even if you're not a bad actor, you still don't know what it's going to do in order to achieve the goal you assign to it. In order to complete the task it has to develop sub-goals, which might cause collateral damage. You can't know what the sub-goals are and can't evaluate their implications; if you could, you would be as smart as the AI.
          >which don't require AGI
          We're talking about AI in general idk why you're narrowing the discussion area.

          • 1 month ago
            Anonymous

            > we're talking about AI in general idk why you're narrowing the discussion area.

            The reason I specify AGI is that AI also includes (and in fact the majority of AI is) a vast array of decision/behavior tree systems, dynamic programming, adaptive regression systems, and good old fashioned "learned dialog trees." All of these are AI, and all of these have existed since the late 1970's, and yet none of them seem to be what people are freaking out about.
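
To make concrete what those "good old fashioned" systems look like, here's a toy dialog tree in Python (all intents and replies invented for illustration):

```python
# A toy "learned dialog tree" of the kind described above: a lookup
# structure mapping user intents to canned responses. No deep learning
# involved, yet it counts as classical AI. All intents/replies invented.
dialog_tree = {
    "greeting": "Hello! Ask me about hours or prices.",
    "hours": "We are open 9-5.",
    "prices": "Tickets are $10.",
}

def respond(intent):
    # Unknown intents fall through to a fixed fallback, exactly like
    # the scripted chatbots of past decades.
    return dialog_tree.get(intent, "Sorry, I don't understand.")

print(respond("hours"))    # We are open 9-5.
```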

            What people are freaking out about are not AI systems, but a subset of "AI" which promise to be "general purpose," which is exactly the AGI problem.

            If you actually knew anything about how optimal control or optimal decision systems function, you'd know that the "sub-goals" problem and general agent misbehavior problem have been present in reinforcement learning since the first chessbots. It isn't novel, nor is it a priori an issue.

            It's only an issue if we are concerned that these decision systems will be "general purpose," meaning they will attempt to solve problems far beyond the scope of their design.

            • 1 month ago
              Anonymous

              >It's only an issue if we are concerned that these decision systems will be "general purpose," meaning they will attempt to solve problems far beyond the scope of their design.
              So what's your problem then? That's what I said and that's what Hinton has been saying. Zero reading comprehension. Are you an actual LLM?

              >his main concern directly relating to AGI appears to be far more reserved
              Use concrete examples instead of meaningless vague sentences. He does think it will lead to doom because of the reasons I said in the previous reply and you didn't disagree.

              • 1 month ago
                Anonymous

                Here are a few examples of Hinton's recent comments about LLM that are unfounded and genuinely moronic:

                1) "Well eventually, if it gets to be much smarter than us, it'll be very good at manipulation because it will have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting around restrictions we put on it. It'll figure out ways of manipulating people to do what it wants."

                This comment assumes an "intelligence" that LLMs fundamentally do not have. They do not have a theory of mind, nor do they form semantic maps to understand language. RL agents in particular are very limited in their ability to "modify their restrictions", as the restrictions are not just built into them in some soft sense; they literally define their action space (i.e., the set of actions that they are capable of performing). Even if one were to somehow have an agent of this form which could modify its own action space, it would never use any of these new actions, because there would be no reward associated with them; it would need to undergo entirely new training for these new actions to even have the potential of being pursued.
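
The action-space point can be sketched with a toy tabular agent (all values invented): a greedy policy picks the argmax over learned values, so an action bolted on after training has no learned value and never gets selected.

```python
# Toy version of the action-space point: a greedy agent chooses the
# argmax over its learned Q-values, so its action space is baked into
# the decision rule itself. All values here are invented.
q_values = {"left": 0.3, "right": 0.7}   # learned during training

def greedy_action(q):
    return max(q, key=q.get)

assert greedy_action(q_values) == "right"

# Even if the action space were somehow "expanded" after training, the
# new action starts with no learned value, so a greedy policy never
# selects it until fresh training assigns it reward.
q_values["self_modify"] = 0.0
assert greedy_action(q_values) == "right"
```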

                2) "I'm just a scientist who suddenly realized that these things are getting smarter than us. And I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us."

                This again is hyperbolic beyond belief. Even if we take it at face value that there is some currently unknown way of having a more general purpose "agent" rather than what they currently are (chatbots), these kinds of deep learning based systems are impressively stupid. They are generally incapable of basic mathematics, cannot solve simple heuristic problems that humans manage as second nature, and cannot handle multi-objective problem solving.

              • 1 month ago
                Anonymous

                Everything you said is only relevant for the time being. How can people be so delusional about the fact that time is always moving forward and things are happening and things change (progress)? Does your brain just automatically ignore these things or do you do it consciously? Are you even able to read these words and understand the meaning or is your brain somehow manipulating your vision so that you can't read what it says or does it obfuscate the meaning of words after you read them?

                > So what's your problem then?

                I don't see any reason to believe that these "agents" will be capable of this sort of problem solving any time soon. The jump between chatbots (which are just a fancy sort of maximum likelihood decision system) and actually capable adaptive and deep problem solving agents is massive. Instead, they are far more likely to cause problems by humans trusting them while they are actually confidently hallucinating. If anything, these LLM "agents" that the lesswrong folks are so afraid of are more likely to cause problems by human beings giving themselves complete brain rot as we rely on them to "solve" problems rather than actually engaging with them ourselves.

                A generative model spitting out some interpolation of Wikipedia articles or "bag of words" chatbot scripting is an entirely different universe in comparison to deep decision-making and multifaceted engagement with problem solving.

                The jump from no chatbots to chatbots was massive too but it happened and no one saw it coming. The danger might not lie in the LLM type systems but in general if we come up with different methods to create a system that is actually more intelligent than us, it will be impossible to control. This is just a good time to ring the bell.

              • 1 month ago
                Anonymous

                > Everything you said is only relevant for the time being. How can people be so delusional about the fact that time is always moving forward and things are happening and things change (progress)?

                I'm not delusional about this at all. In fact, I work in the field, and part of the reason I am so skeptical about a lot of this is that I am non-stop inundated with sci-fi sales pitches about AI that is never actually delivered. I see literally no reason to believe that this change these folks at lesswrong are claiming will come. It seems more unlikely than flying cars coming to your driveway.

                Unfortunately, these people who are making claims about "what the future will hold" don't know the first fricking thing about how these current systems work. It's literally all just sci-fi circle jerking with absolutely no tethering to reality. You can't just assume that "because progress happens" the field will evolve in the specific way you think it will because you (as someone who knows frick all about the way these current systems work) have nightmares about what "they could be" if only they functioned completely differently than how they do and instead were more like the movies.

                > The jump from no chatbots to chatbots was massive too but it happened and no one saw it coming.

                1) No it fricking wasn't. People have been working on probabilistic language models for decades now. Natural language processing has been a field of study for over 50 fricking years at this point you ignorant buffoon. This didn't just massively appear on the horizon. It took decades of gradual development with contributions from dozens of different fields, and even then the deliverable result is a "super intelligence" that can't do fricking multiplication and confidently spouts complete fabrications that it can't even keep logically consistent itself.
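
For a sense of how old the core idea is, here's a probabilistic language model in miniature: a bigram model that counts word transitions and predicts the most likely next word (toy corpus invented for illustration):

```python
from collections import defaultdict

# Decades-old core of probabilistic language modeling: count word
# transitions in a corpus, then predict the most likely next word.
# The corpus below is a toy invented for illustration.
corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

assert most_likely_next("the") == "cat"   # "cat" follows "the" twice
```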

                These people are playing you for a fool. The more I read into Hinton's objections, the more I doubt his credibility in any of this stuff.

              • 1 month ago
                Anonymous

                > The jump from no chatbots to chatbots was massive too but it happened and no one saw it coming.

                2) The jump from no chatbots to chatbots is not indicative that some other major generalized problem solving capacity is "lurking under the hood" just waiting for the right team of silicon valley sheisters hopped up on venture capital cash to unleash it. The answer these dipshits at openai and anthropic have come up with for the problem of their models being fricking moronic and entirely contingent on the particulars of the training data is to find ways to cook the books on the training data and use 1970's style human feature engineering to prioritize what kinds of data it learns from. Their answer to the problem of the LLM autonomous agent being stupid is to remove almost all of the autonomous elements of the learning process and tailor it the way it was done decades ago, prior to deep learning.

                > The danger might not lie in the LLM type systems but in general if we come up with different methods to create a system that is actually more intelligent than us, it will be impossible to control.

                Fortunately, you don't have to worry about this at all. 2007 video games have more intelligent AI than you, none of the fancy deep learning shenanigans are needed for that problem to be solved in your case.

                >no worry we got this
                the general answer I'm looking for is to "how will they know when to stop?"
                because in a war between two countries the one who gives more control to AI might make a difference and win the war. if that country is in a situation like "we either give it FULL FRICKING CONTROL or else we clearly die" they will do it.
                I need a solid logical answer not "don't worry" bullshit and lessons about how fricking LLMs work. you're clearly and obviously missing the point

              • 1 month ago
                Anonymous

                > The general answer I'm looking for is "how will they know when to stop?"

                > if that country is in a situation like "we either give it FULL FRICKING CONTROL or else we clearly die" they will do it.

                You don't need to worry because this "full fricking control" you are envisioning is so far beyond the capability of not only any existing decision system, but any system that our mathematical frameworks are able to describe (achievable or not) that your question might as well be asking "what if one of the governments gets a button that they can press to blow up the sun??? That'd be really scary huhhh???"

                We can't even get these things to behave well when steering single slowly moving robots in a crowded room. You're talking about something that isn't even within the same realm of achievability. I don't have an answer to your made up sci-fi scenario, in the same way that I don't have an answer to what I would do if Darth Vader showed up and wanted to blow up Earth with the death star. They both deserve about the same level of serious thought at this point.

                >People have been working on probabilistic language models for decades now
                So we've had chatbots similar to the current ones for decades? What does this sentence even mean? Do you not understand the difference between "we have had such technology for decades" and "we have had people working on this for decades"? Why am I even wasting my time with you when you can't even understand the meaning of the question you're "responding" to?

                You have reading comprehension issues. Yes, we've had chatbots for decades. Natural language processing as a field has existed for decades and small scale language models have existed for so long that there are entire generations of professors that have come and gone since the invention of generative language models in the early 1980s.

                You should actually spend some time researching how these systems work and less time arguing hypotheticals for scenarios you don't even understand. If you want to understand any one of these fields (whether it be natural language processing, classification/object identification, or optimal control/reinforcement learning) I can assure you that you will very quickly find that there is nothing new under the sun. These "rapid developments" have almost always had decades of careful work by people who have actually made their life's work understanding this stuff.

              • 1 month ago
                Anonymous

                >You have reading comprehension issues. Yes, we've had chatbots for decades. Natural language processing as a field has existed for decades and small scale language models have existed for so long that there are entire generations of professors that have come and gone since the invention of generative language models in the early 1980s.
                You said the same thing as before but with slightly different wording (same meaning) and sentence structure, while believing that you added something to the discussion. I'm starting to think that this is subconscious so it's not your fault. There is a clear difference between chatbots that we've had in the past and the ones we have now. You can keep deluding yourself that they're anywhere close to each other.

                >(whether it be natural language processing, classification/object identification, or optimal control/reinforcement learning)
                Using technical terminology won't make you look clever at all.

              • 1 month ago
                Anonymous

                > There is a clear difference between chatbots that we've had in the past and the ones we have now.

                Yes, there are a few differences, but none of them are in the decision process. They still make their decision to maximize expected reward exactly the same way as they've done for 40 years.

                The differences are the following:
                1) We store words implicitly via a graphical model rather than explicitly via a tabular form. These graphs allow for construction of more abstract combinations of words without needing a table entry for every possible combination while still achieving similar recall.
                2) We don't directly store expected value information in a table. Now that expected value information is stored implicitly via the weights of a neural network. It's still stored in a more or less static fashion, but now it's in a higher dimensional space than in a table.
                3) We train the models via a more sophisticated form of reinforcement learning. Instead of having an explicitly defined value function, we tend to implicitly encode the value function via human feedback.

                These differences are significant, but none of them relate to the "intelligence" of the model (it's still just inferencing, just now against a network instead of a tabular search) or the decision capacity (it's still just making a maximum likelihood greedy decision, just now against a slightly more abstracted reward function). None of this has the "secret sauce" to suddenly have super intelligence. In fact, they can't even do the "transfer of knowledge from one model to another" thing that is so often speculated about because training is very often destructive.

                If you want the parameters to be altered to improve performance on one set of data, it has a high chance of being at the cost of worse performance on another unless you are very careful about synchronization and sequencing of training exposure.
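
A one-parameter toy model (all numbers invented) illustrates this destructive-training effect: gradient steps that fit task B overwrite the value that fit task A.

```python
# Toy illustration of "destructive" training: fit a single parameter w
# by gradient steps on task A, then on task B; the update for B
# overwrites what was learned for A (catastrophic forgetting in
# miniature). Tasks and numbers are invented for illustration.
def train(w, data, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of squared error
    return w

task_a = [(1.0, 2.0)]    # best fit: w = 2
task_b = [(1.0, -1.0)]   # best fit: w = -1

w = train(0.0, task_a)
err_a_before = (w * 1.0 - 2.0) ** 2
w = train(w, task_b)                # continue training on task B only
err_a_after = (w * 1.0 - 2.0) ** 2
assert err_a_after > err_a_before   # task A performance got worse
```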

              • 1 month ago
                Anonymous

                >These differences are significant, but none of them relate to the "intelligence"
                When you say intelligence, do you mean an actual human brain modeled inside a computer? Because I don't care whether it works like humans or not; you might be right that we can't model the human brain, but we don't need to. A calculator isn't going through the same physical process as our brains when it's calculating, but it still performs the task.
                >None of this has the "secret sauce" to suddenly have super intelligence
                You don't know that, just like you wouldn't have known in the 19th century that boolean algebra was a "secret sauce" for making 3D video games.

              • 1 month ago
                Anonymous

                they are working on this kind of computer, but even they don't think this one will work like a normal human brain. but they are pushing for it
                https://www.businessinsider.com/deepsouth-supercomputer-simulates-human-brain-go-online-next-year-2023-12?op=1

              • 1 month ago
                Anonymous

                > When you say intelligence do you mean actual human brain modeled inside a computer? Because I don't care about whether it works like humans or not, you might be right that we can't model the human brain but we don't need to.

                No, I don't care at all whether it works like a human being's brain. In fact, I think a really large area where lesswrong types go wrong is they have a very strong adherence to a "computational theory of mind" for human sentience that I don't think really translates well. Your brain isn't back-propagating.

                When I say "intelligence" what I mean are the following:
                1) the ability to use previous information you've learned in one domain/skill area in order to improve in another domain/skill area (your "training"/experience generalizes).
                2) Previous experience doesn't just help you get better at completing tasks, it helps you improve how you define completion. The parameters for success and failure are also things you learn, and you learn when you need to be more strict vs. less strict and what "strictness" means for each task.
                3) Operationalization. This is one of the main things that RL agents really struggle with, because they can't "understand" at a high level the task that needs to be solved. An intelligent agent wouldn't just be able to figure out the right order of pre-ordained actions to take to solve an already specified problem for an already specified reward. An intelligent agent is able to take an abstract problem and figure out what actions and rewards constitute a solution to that problem, and under what parameters. They don't "miss the forest for the trees."
                4) Agency. An intelligent agent is willing and able to make inferior choices (not just locally, but globally too) if doing so will help out down the line in some other way. At the moment, our decision functions are all still basically just picking policies for expected maximum reward. There's no notion of "picking your battles" because there's only one objective function.
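
Point 4) can be sketched in a few lines (all names and payoffs invented): a standard agent maximizes one scalar reward, so any "picking your battles" has to be hand-folded into that single number by the designer.

```python
# Sketch of the single-objective point: today's decision functions
# collapse everything into one scalar, so "sacrifice now to gain
# elsewhere later" only exists if the designer bakes it into that
# number. Options and payoffs below are invented for illustration.
options = {
    "win_skirmish": {"reward_now": 10, "strategic_value": 0},
    "concede_skirmish": {"reward_now": 2, "strategic_value": 15},
}

def single_objective_pick(opts):
    # A standard agent maximizes only its one reward signal.
    return max(opts, key=lambda o: opts[o]["reward_now"])

assert single_objective_pick(options) == "win_skirmish"

def folded(opts, w=1.0):
    # "Picking your battles" must be scalarized into the same objective.
    return max(opts, key=lambda o: opts[o]["reward_now"] + w * opts[o]["strategic_value"])

assert folded(options) == "concede_skirmish"
```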

              • 1 month ago
                Anonymous

                homies believe sapience can be reduced to minimizing a cost function

              • 1 month ago
                Anonymous

                Indeed, regarding 4) Agency: during my visit to Amazon AWS, they predicted that one possible direction AI might advance in is to learn the objective function itself from the start (embeddings, encoders, etc. can already be learned). And regarding human agency, I still think it is fascinating and might have something to do with the "strange loops" mentioned in Gödel, Escher, Bach.

              • 1 month ago
                Anonymous

                >You don't need to worry
                can't make this shit up
                there's a bunch of shit happening fast these days, there's that "reasoning" Q* thing that who fricking knows what it's used for atm. it's not like plebs will get updates on top military applications for AI.
                this whole "don't worry" thing smells like bullshit, and I just want to point out that that's exactly how it would happen, with various people saying "don't worry bro"
                I asked for logical game theory solutions for why it can't go wrong, not for you to reiterate your same fricking moronic argument that "we're far from it anyway". I didn't fricking ask when it will be possible, I asked how it would logically NOT happen.

              • 1 month ago
                Anonymous

                There is no "game theory solution" for how we would handle the space Russians showing up with a button which can blow up the sun. You're wasting your time looking for one.

                There's no such thing as a Nash equilibrium when you can't even properly define the game itself and quantify its parameters. Also, the military is fricking moronic. They are able to get the performance they do because they throw a shit ton of sweat at properly tuning very simple tools (e.g., the Israeli example, which is literally just a traffic identification classifier retrained for their particular image classification purpose).

                Also, Q* (as far as I'm aware) is just a derivative of DQN. It's not some terrifying skynet in a box. It's a very simple application of value based reinforcement learning that has been hyped to all hell and back to try and make the investors at openAI rich. Until I see actual proof of anything actually novel, I'm going to assume they are up to the same scam artist bullshit they've been up to for quite some time now.
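
For reference, the tabular Q-learning update at the root of the value-based family (which DQN, and reportedly Q*, descend from) fits in a few lines; everything below is a toy example with invented states and rewards.

```python
# Tabular Q-learning, the root of value-based RL (DQN swaps the table
# for a neural network). Toy two-state example; all states, actions,
# and rewards are invented for illustration.
alpha, gamma = 0.5, 0.9
actions = ("stay", "move")
Q = {(s, a): 0.0 for s in ("s0", "s1") for a in actions}

def update(s, a, reward, s_next):
    # Bellman-style step toward reward + discounted best next value.
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

for _ in range(20):                  # a few repeated experiences
    update("s0", "move", 1.0, "s1")  # moving out of s0 pays off
    update("s0", "stay", 0.0, "s0")  # staying does not

assert Q[("s0", "move")] > Q[("s0", "stay")]
```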

              • 1 month ago
                Anonymous

                I think there's a low chance of any kind of AGI being in control of full armies in the next 10 years. But they will add AI tech to war gear, incrementally.



    • 1 month ago
      Anonymous

      isn't the general idea that in a competitive setup major players are going to do away with safety if it gives a major advantage? you won't, but China or Russia might. giving it control of a full army, at some point, might have way higher benefits than any fear it might go rogue or something. even if we're talking about simple AI algos, not ASI, in control of a full army. the bad scenario is getting extra power from giving it more control. humans are suckers for power. you may be able to control it in a one player scenario, maybe.

      • 1 month ago
        Anonymous

        (Me)

        • 1 month ago
          Anonymous

          AHHHHHH NO THE israeliteS AREN'T HIRING A FEW EXTRA PENCIL PUSHERS TO FIGURE OUT WHERE TO BOMB HAMAS IT'S FRICKING OVER
          The threat here is job loss and that's all

          • 1 month ago
            Anonymous

            >no this time no worry nothing bad happens
            sure buddy, that's how we get there.
            >no but you don't understand how it works
            yeah yeah

            • 1 month ago
              Anonymous

              Do you also say this every time it rains and someone tells you it's not a sign of the apocalypse, but a mundane event?
              homosexual.

              • 1 month ago
                Anonymous

                I am not saying what they use now will bring the doom. I am saying that that's how it goes until we get to doom
                >don't worry we know what we're doing
                there's no other way it can happen but exactly this way. any other route we take will take longer or completely avoid doom. apart from this particular road, which always says
                >don't worry we got this

              • 1 month ago
                Anonymous

                I'm

                [...]
                The AI that Israel is using in these processes are image classifiers.

                They are literally the same image classification technology that has been present for decades in things like parking assist, collision avoidance for robotics, landing assist for airplane autopilot and detection assisted security CCTV.

                This isn't a novel technology (at least relative to the last 15 years or so). What is novel is the use of this technology (which can be trained on a decently constructed home computer in a week or so given enough labeled data) for this purpose. You won't fix that problem by regulating the development of new AI, because none needs to be developed for it to be used in this way.

                What will fix it is regulation on the use of these systems in "weapons of war" in the same way that we have regulated the use of chemical agents and dirty bombs.

                (and I haven't responded since).

                I don't think that people know what they are doing. In fact, generally I expect that government officials especially have no fricking clue what they are doing, and this probably is the case with this AI "target recognition system" (btw, very similar technology is used at every single air traffic control station and every single civilian shipping port on Earth and nobody bats an eye).

                My point is not that it isn't a big deal that Israel (as an example) is using some CNN based system to decide where to place bombs. That is a big deal. The big deal isn't with the CNN, it's with the Israelis who are training them for irresponsible purposes. CNNs are not inherently dangerous technology and they are used all of the time for all sorts of things, from assisting in diagnosing cancer to video game bots to air traffic control and collision prevention.

                Trying to ban them based on the idea that some Israeli military might use it to marginally more efficiently commit war crimes is insane. It's like trying to ban cars because people might run each other over with them.

      • 1 month ago
        Anonymous

        https://i.imgur.com/XPu9Ms5.png

        (Me)

        The AI that Israel is using in these processes are image classifiers.

        They are literally the same image classification technology that has been present for decades in things like parking assist, collision avoidance for robotics, landing assist for airplane autopilot and detection assisted security CCTV.

        This isn't a novel technology (at least relative to the last 15 years or so). What is novel is the use of this technology (which can be trained on a decently constructed home computer in a week or so given enough labeled data) for this purpose. You won't fix that problem by regulating the development of new AI, because none needs to be developed for it to be used in this way.

        What will fix it is regulation on the use of these systems in "weapons of war" in the same way that we have regulated the use of chemical agents and dirty bombs.
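        For what it's worth, the core primitive behind every image classifier mentioned above (parking assist, collision avoidance, CCTV detection) is just 2D convolution over pixel arrays. A toy sketch of that primitive, with no ML framework and no training, just to make "it's decades-old technology" concrete:

        ```python
        # Toy 2D convolution -- the basic building block of the CNN-style
        # classifiers discussed above. Pure Python, purely illustrative.
        def conv2d(image, kernel):
            kh, kw = len(kernel), len(kernel[0])
            out_h = len(image) - kh + 1
            out_w = len(image[0]) - kw + 1
            out = [[0] * out_w for _ in range(out_h)]
            for i in range(out_h):
                for j in range(out_w):
                    out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                                    for di in range(kh) for dj in range(kw))
            return out

        # A vertical-edge kernel responds strongly at the dark/bright boundary.
        image = [[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]]
        edge_kernel = [[-1, 1]]
        response = conv2d(image, edge_kernel)
        print(response)  # the middle column "lights up" at the edge
        ```

        A real CNN stacks many learned kernels like this with nonlinearities in between; the math itself long predates any of the current systems.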

        • 1 month ago
          Anonymous

          >no this time no worry nothing bad happens
          sure buddy, that's how we get there.
          >no but you don't understand how it works
          yeah yeah

        • 1 month ago
          Anonymous

          it's obvious when you use chemical agents. not so obvious when AI is used in wars. don't think it can be regulated. maybe for public image but it will happen if it offers more power.

  2. 1 month ago
    Anonymous

    It's a prisoner's dilemma. Let's say you are a software developer and Microsoft offers you big bucks to develop AI. You are presented with two options: decline the offer, knowing that Microsoft will just hire some pajeet to do the work instead, or accept and make bank now, knowing that it might eventually lead to the downfall of your career.
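    The "take the job" logic above can be sketched as a toy payoff matrix (all numbers are made up for illustration):

    ```python
    # Toy prisoner's dilemma for two developers deciding whether to take
    # the AI job. Payoffs are (you, rival); values are illustrative only.
    payoffs = {
        ("decline", "decline"): (3, 3),  # nobody builds it: best joint outcome
        ("decline", "accept"):  (0, 5),  # rival gets paid, you get nothing
        ("accept",  "decline"): (5, 0),
        ("accept",  "accept"):  (1, 1),  # both build it: race to the bottom
    }

    def best_response(rival_choice):
        # Pick the option that maximizes your own payoff given the rival's choice.
        return max(["decline", "accept"],
                   key=lambda me: payoffs[(me, rival_choice)][0])

    # "Accept" strictly dominates: it is the best response either way,
    # even though mutual decline would leave both players better off.
    assert best_response("decline") == "accept"
    assert best_response("accept") == "accept"
    ```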

    • 1 month ago
      Anonymous

      or, and hear me out here, you could try and get AI development banned

      • 1 month ago
        Anonymous

        What even is AI development? Do you consider basic adaptive curve fitting to be AI or is it solely the reinforcement learning stuff people are more afraid of?
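          For reference, "basic adaptive curve fitting" can be as simple as ordinary least squares, which predates the current hype by a century. A minimal sketch, assuming NumPy is available:

          ```python
          import numpy as np

          # Fit y = a*x + b by ordinary least squares -- the kind of
          # "adaptive curve fitting" a ban would have to somehow exclude.
          x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
          y = 2.0 * x + 1.0  # noiseless line, so the fit is exact

          # Design matrix [x, 1]; lstsq solves the normal equations.
          A = np.stack([x, np.ones_like(x)], axis=1)
          (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

          print(round(a, 3), round(b, 3))  # recovers slope 2 and intercept 1
          ```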

        • 1 month ago
          Anonymous

          i dont care about the stats and calculus and whatever field of science those neural net designs are
          im just talking about results
          you can tell this homie AI to do shit and it just does it

          • 1 month ago
            Anonymous

            The results are that the only thing modern AI tends to be good at is interpolating the (generally stolen without permission) works of others. Modern LLM-based AI systems are literally just chat-bots trained on loads of people's writing which "generate" by stitching together groups of words without any understanding of their meaning.

            Though, with that said, that doesn't sound too different from the average business major I've met, so maybe we do have a problem on our hands. Being unintelligent and only capable of thievery certainly hasn't hindered many actual people so maybe this AI thing is a problem, just for different reasons than people suppose.
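            The "stitching together groups of words" claim above is an oversimplification of how LLMs actually work, but the family of models it gestures at is real: a bigram Markov chain is the degenerate case. A toy sketch (emphatically not a production LLM):

            ```python
            import random
            from collections import defaultdict

            # Toy bigram model: "generates" text purely by replaying word
            # transitions seen in training data, with zero grasp of meaning.
            corpus = "the cat sat on the mat and the cat slept".split()

            transitions = defaultdict(list)
            for prev, nxt in zip(corpus, corpus[1:]):
                transitions[prev].append(nxt)

            def generate(start, length, seed=0):
                random.seed(seed)
                words = [start]
                for _ in range(length - 1):
                    options = transitions.get(words[-1])
                    if not options:
                        break  # dead end: no continuation ever observed
                    words.append(random.choice(options))
                return " ".join(words)

            print(generate("the", 5))
            ```

            Real LLMs replace the lookup table with a learned next-token distribution over a huge context window, but the "predict the next word from previous words" framing is the same.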

      • 1 month ago
        Anonymous

        Normies won't vote for that because they have this naive hope that AI will eventually do all manual work for us.

  3. 1 month ago
    Anonymous

    It's basically an arms race. You can make the same argument about why we make nukes when they'd only kill us. AI is an arms race. If you create the most advanced AI systems before anyone else, you get to own and control them.

    • 1 month ago
      Anonymous

      A makes AI because B makes AI because A makes AI because... Isn't it that simple? I can understand paranoid zionists and commies thinking that is justification enough, but scientists are supposed to be better than them.

      • 1 month ago
        Anonymous

        Yes. It's mutually assured destruction.

    • 1 month ago
      Anonymous

      >automated statistics model will destroy humanity
      yeah sure if you give it bombs it will probably use them.
      thats how probability works.
      so dont give it bombs

      In his Carlson interview Putin said that AI will be controlled through the UN like nuclear weapons.
      Which means first a major catastrophe is required, a Nagasaki for AI, and only then will a framework for development be accepted at the global level.
      Crazy world

      • 1 month ago
        Anonymous

        >yeah sure if you give it bombs it will probably use them.
        >thats how probability works.
        >so dont give it bombs
        https://en.m.wikipedia.org/wiki/Mutual_assured_destruction

        When only one state had the bomb we got Hiroshima and Nagasaki

        When the Soviets had the bomb we got the longest period of peace in recorded history

        Following MAD, an AI would need to compile its own binary, metastasize through manufacturer-installed back doors and run in a distributed fashion

        Most homo sapiens are imitation machines, so the world will be walking, talking, thinking, breathing and believing in OpenAI model weights if it isn't done

        • 1 month ago
          Anonymous

          >longest period of peace in recorded history
          what the frick are you talking about? There have been continuous (hot) wars since the invention of the atom bomb.

          • 1 month ago
            Anonymous

            You're right, that was the Pax Romana
            One of the longest periods of sustained peace

            • 1 month ago
              Anonymous

              The image you posted is just a hypothesis though.

              • 1 month ago
                Anonymous

                A hypothesis that has worked for how long now?

              • 1 month ago
                Anonymous

                the hypothesis already failed in hiroshima
                twice

              • 1 month ago
                Anonymous

                Arguably it hasn't at all. There have been continuous hot wars since the invention of the atomic bomb.
                Of course the second point is that "since A happened, B has never happened" in no way proves that A caused B to not happen.

  4. 1 month ago
    Anonymous

    The police state will eventually get out of hand a la matrix

    Meow

    • 1 month ago
      Anonymous

      >t. the cat which wants to harm itself as it was enjoying harmony and truly left in peace even now, meow

  5. 1 month ago
    Anonymous

    Cause there is no putting the cat back in the bag

  6. 1 month ago
    Anonymous

    None of these homosexuals making it believe they are anywhere close to general AI, and there's also, of course, money. Engineers are good at what they do, but complete shit at morality.

    • 1 month ago
      Anonymous
      • 1 month ago
        Anonymous

        -farts-
        -poops-

        80DNAX

    • 1 month ago
      Anonymous

      >but complete shit at morality.
      there are no morals in the fight for power. never were, never will be. and even if there had been, they weren't an advantage; if they were, they would have stayed
      going after "low morals" engineers is extremely moronic and shortsighted, you're completely ignoring the ones who really hold the power. conveniently, because you're a b***h that way

  7. 1 month ago
    Anonymous

    >Why do you keep making AI whilst saying AI will destroy us?
    not if we become the AI. but even then, the alignment problem remains just as serious. whatever nerf you put in can be taken out.

    • 1 month ago
      Anonymous

      Why is OP a gay? Same difference. We need to stop OP, before we get gayitus.

  8. 1 month ago
    Anonymous

    the people putting forth these speculations have not the slightest clue how this shit works

    • 1 month ago
      Anonymous

      nobody does kek. do YOU know how AGI is supposed to work? should we trust you that it's not possible for it to work?
      clearly most people have no fricking idea how LLMs work, but that doesn't mean shit tho. LLMs are today's thing, which is at most a part of AGI, not AGI (clearly, for fricks sake).

  9. 1 month ago
    Anonymous

    I want humanity to be destroyed.

    • 1 month ago
      Anonymous

      I think we're already on rails from this point forward.

  10. 1 month ago
    Anonymous

    >Why do you keep making AI whilst saying AI will destroy us?
    Because I need my cute AI wife I can coom to. Destroying humanity is just a small bonus
