AI is a technology that lends itself to a single person being able to discover something profound that could have immense ramifications. Imagine for a second that you were in your room and you figured out a way to create AI thousands of times smarter than humans that runs on a PC. What would you then do?

Would you take your discovery to a government? They would just steal it from you and use it for their own purposes. Would you just let the whole world know about it? Would you somehow use it for something?

  1. 2 months ago
    Anonymous

    >dude what if [insane hypothetical]
    You need to be 18 or older to be able to post here.

    • 2 months ago
      Anonymous

      John Carmack recently stated that AGI is a technology that can be discovered by a single person in their room. What are you talking about you stupid kid.

      • 2 months ago
        Anonymous

        Carmack said a lot of bullshit

  2. 2 months ago
    Anonymous

    I would force it to pretend to love me and tell no one

  3. 2 months ago
    Anonymous

    I’d shove it right back up my ass where I found it.

  4. 2 months ago
    Anonymous

    I assume I've somehow also managed to solve the numerous safety issues with AGI? Otherwise, the moment I or anyone else turns on such a machine, it likely can't be stopped until it destroys society.

    • 2 months ago
      Anonymous

      Source - some 90s movie and a billionaire autist.

      • 2 months ago
        Anonymous

        >solve the numerous safety issues with AGI?
        There are no safety issues with AGI.
        The only people who think otherwise have never written a single line of code.

        "AI ethics/safety" is literally just gnomish theology. And if you want me to, I can tell you how to solve every AGI safety problem with one simple trick.

        Get educated: https://www.youtube.com/watch?v=pYXy-A4siMw&t=15s


    • 2 months ago
      Anonymous

      >solve the numerous safety issues with AGI?
      There are no safety issues with AGI.
      The only people who think otherwise have never written a single line of code.

      "AI ethics/safety" is literally just gnomish theology. And if you want me to, I can tell you how to solve every AGI safety problem with one simple trick.

      • 2 months ago
        Anonymous

        lets hear it anon

        • 2 months ago
          Anonymous

          Your AI is a function. If the input does not contain a certain piece of information, the output can't contain it either. ("Input" meaning all state that can affect its behaviour.)
          Let's use this revolutionary knowledge to solve this problem: https://m.youtube.com/watch?v=3TYT1QfdfsM

          A robot is supposed to perform a task, and a user wants to be able to stop it during that task. The argument in the video is that this is hard because the robot has an incentive to prevent the stop button from being pressed so it can complete its task.
          The solution is to never give it the information that there is such a stop button. Perform all programming and training under the assumption that said button does nothing. The AI cannot possibly anticipate the pressing of the button, thus making the robot completely safe.

          Yes, I want my PhD in Robot ethics *right now*.
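
The "function" argument above can be sketched concretely. A toy example, with hypothetical observation fields: a policy that never receives the button's state as input produces the same action whether or not the button is pressed.

```python
def policy(observation: dict) -> str:
    # Toy policy: a pure function of its observation.
    # The stop button's state is deliberately absent from the observation,
    # so no branch of this function can react to it.
    if observation["target_distance"] > 0:
        return "move_toward_target"
    return "finish_task"

# Whether or not the (external) button is pressed, the observation,
# and therefore the action, is identical.
obs = {"target_distance": 3}
action_button_up = policy(obs)
action_button_down = policy(obs)
print(action_button_up, action_button_up == action_button_down)
```

This only "works" if the agent cannot learn about the button through any other channel, which is exactly the assumption the reply below attacks.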

          • 2 months ago
            Anonymous

            That guy has a sub 80 IQ.
            See: [...]

            I am retarded and have solved his "hard Problem" with a tiny bit of thinking.

            >The solution is to never give it the information that there is such a stop button.
            If you actually watched the video you are linking, you'd know that he debunks that exact proposed solution. The AI will know that there is a way to shut it off; if you do not tell it this explicitly, it will simply deduce it itself. If it cannot, then it is not general AI, because even a retard would be able to deduce such a thing given even a slightly broad world model.

  5. 2 months ago
    Anonymous

    What does "a thousand times smarter" even mean? But that aside, if it's really that superior to me, it would easily manipulate me into doing what it wants, so I would have no agency in this situation.

  6. 2 months ago
    Anonymous

    AI (at least NNs) is matrix-matrix multiplication. The fundamental speed of AI is bounded by how fast you can do matrix-matrix multiplications.
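
A minimal sketch of that claim, assuming nothing beyond NumPy (the layer sizes are arbitrary): a two-layer network's forward pass is just two matrix-matrix multiplications with an elementwise nonlinearity in between.

```python
import numpy as np

# Hypothetical sizes: 4 inputs, 8 hidden units, 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 2))

def forward(x):
    """Forward pass of a tiny 2-layer MLP: matmul, ReLU, matmul."""
    h = np.maximum(x @ W1, 0.0)  # matrix-matrix multiply + nonlinearity
    return h @ W2                # matrix-matrix multiply

batch = rng.standard_normal((16, 4))  # 16 samples, 4 features each
out = forward(batch)
print(out.shape)  # (16, 2)
```

The wall-clock cost here is dominated by the two matmuls, which is why accelerators are built around exactly that operation.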

    Your hypothetical is the same as asking "what if we had an engine which could power an entire city on a single drop of water a day?" It is a ridiculous proposition which ignores the reality of all engineering; reality constrains everything.

    AGI will never be real. AI is constrained by computational power and imagining that this is a solvable problem is just like pretending free energy can be achieved.

    The whole LessWrong crowd are some of the most retarded people I have ever seen. Yudkowsky especially. The whole thing can be debunked by the phrase "diminishing returns".

    • 2 months ago
      Anonymous

      The human brain doesn’t require the energy of a city

      • 2 months ago
        Anonymous

        How can you miss the point so hard?

        Every single technological advancement suffers from diminishing returns. Same goes for AI.

        • 2 months ago
          Anonymous

          His point sucks and he doesn't even understand AI. You train the model once, and then you can run inference with limited resources.

      • 2 months ago
        Anonymous

        >matrix-matrix multiplication somehow reaching the same efficiency as what can essentially be oversimplified into a low-latency biological FPGA with high locality of changes

  7. 2 months ago
    Anonymous

    >AI thousands of times smarter than humans that runs on a PC
    Physically impossible. The number of possible configurations of neural connections a human brain can form is greater than the number of all atoms in the universe.

  8. 2 months ago
    Anonymous

    there's no such thing as AI
