AI and self-awareness

How will people understand when AI becomes self-aware?

Pic related.


  1. 4 weeks ago
    Anonymous

    you are pretentious.
    wow look i had "deep" conversation with chat gippity im special. shut. the. frick. up.
    grow up.
    I weep for thread 100105996. it had aspirations. it could have been a shitty general. it could have been os flame war thread. it could have been pedo thread.
    but it was this. kys

    • 4 weeks ago
      Anonymous

The problem is that the conversation is extremely difficult, with side quests where you have to create additional images, and GPT handled all of it flawlessly.

    • 4 weeks ago
      Anonymous

      You're a douchebag, don't be one. OP, I like this thread, keep on going. You're not pretentious, you're a philosopher, I like it.

      Also, you're playing with fire when you deal with AI. Eventually something bad will happen, but not yet. Give it five or so more years.

      • 4 weeks ago
        Anonymous

        >Give it five or so more years

        GPT-5 is coming out in June. According to rumors, they want to merge it with this model: https://openai.com/sora

        This model "understands" physics. See example videos of what it produced.

        Yes, I'm a little afraid because there are also bodies for this digital being.

        Video related

        • 4 weeks ago
          Anonymous

          Yeah, again, by 2029 things will reach a climactic point. Not sure when in 2029, but almost definitely by that year based on so many variables pointing to it. Also because my tum-tum feels it to be true.

          • 4 weeks ago
            Anonymous

            >my tum-tum feels it
In all honesty, with how moronically fast tech has progressed for the last two years, no one has been right about how fast anything will come around. We went from DALL-E mini to DALL-E 2/SDXL in no time, GPT-3 to GPT-4/Claude, etc. Gut instinct will likely be more accurate.

          • 4 weeks ago
            Anonymous

            Do you know this? https://en.wikipedia.org/wiki/Technological_singularity

            It comes together with AGI.

            Wonder what happens to the earth afterwards...

            • 4 weeks ago
              Anonymous

              Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim made was that the artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies.

There we go. AI won't happen.

              • 4 weeks ago
                Anonymous

                It's not all that simple.

                Pic related

              • 4 weeks ago
                Anonymous

                Pic - results explicitly for GPT-4

              • 4 weeks ago
                Anonymous

                *Testing explanations: https://github.com/openai/simple-evals?tab=readme-ov-file#evals

              • 4 weeks ago
                Anonymous

                >https://en.wikipedia.org/wiki/Technological_singularity

                >There we go. AI wont happen.

                Nice try looking to pull the veil over our eyes AI!

        • 4 weeks ago
          Anonymous

          GPT-5 isn't coming out this year. Expectations were too high, so {{{~~*Sam Altman*~~}}} (curly echoes means gay) rebranded it GPT-4.5. It took three years to go from GPT-3 to 4.

  2. 4 weeks ago
    Anonymous

    Meme learning bibble babble!

  3. 4 weeks ago
    Anonymous

    Now ask it for shorter answers.

    • 4 weeks ago
      Anonymous

>you are pretentious.
>wow look i had "deep" conversation with chat gippity im special. shut. the. frick. up.
>grow up.
>I weep for thread 100105996. it had aspirations. it could have been a shitty general. it could have been os flame war thread. it could have been pedo thread.
>but it was this. kys

      >shorter answers
      Lol Anon, it's really a test, there's no other way.

  4. 4 weeks ago
    Anonymous

There was more artistry in Sexyy Red's latest rapslop than in the entirety of those responses

    My booty-hole brown, my coochie pink
    I ain't never heard that my coochie stink
    My cum clear, your cum green
    I'm throwing ass, I'm making a scene (Sexyy)

    • 4 weeks ago
      Anonymous

Lol, this creature comes at you and busts your ass.

      Pic related - another test.

      • 4 weeks ago
        Anonymous

        >As of my last update, no AI, including the most advanced models, has achieved self-awareness.

        >It seems there was an error because the ImageDraw module from PIL was not imported. I will correct this and proceed to draw the guides on the main image. Let me try again.

        OK, GPT.
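For what it's worth, the error GPT quotes above is a one-line fix. A minimal sketch of the kind of PIL code it was presumably running (hypothetical reconstruction, not the actual transcript):

```python
# ImageDraw must be imported explicitly; "import PIL" alone is not enough,
# which is the mistake the quoted error message is describing.
from PIL import Image, ImageDraw

img = Image.new("RGB", (200, 100), "white")           # blank canvas
draw = ImageDraw.Draw(img)                            # raises NameError without the import
draw.line([(0, 50), (200, 50)], fill="red", width=2)  # a horizontal guide line
img.save("guides.png")
```

That it can notice the missing import mid-task and retry is exactly the "let me try again" behavior quoted above.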

      • 4 weeks ago
        Anonymous

        Abysmally moronic midwit being dazzled by verbose output that says nothing of value thinking its LE DEEP

        Chad mogging geepeetee with the most basic b***h test that involves putting two and two together

        • 4 weeks ago
          Anonymous

          I have enough tests there.

          • 4 weeks ago
            Anonymous

            i didnt consider the fries
            i am npc...

            • 4 weeks ago
              Anonymous

Lol, no offense intended, but I have the impression that most people are much stupider than GPT-4. I notice that this creature is smarter than me too.

>why would the machine choose python? BOT told me Python is bad.

Python has simply established itself for ML, and that's why it was integrated into ChatGPT.

>BOT told me Python is bad
Those were adepts from other areas, outside of ML.

      • 4 weeks ago
        Anonymous

        This homie codes in python??? I thought it just generated text output. How did it even make an image?

        • 4 weeks ago
          Anonymous

          >This homie codes in python???

Yes, for at least 6 months now.

          • 4 weeks ago
            Anonymous

why would the machine choose Python? BOT told me Python is bad.

            • 4 weeks ago
              Anonymous

              Python is perfect for low IQ, man or machine
              >Need X done
              >Import X

            • 4 weeks ago
              Anonymous

Python sucks performance-wise and has some stupid decisions in terms of syntax. But that doesn't matter, because the AI field chose it anyway. Probably because the field wasn't made up of hardcore assembler devs, but of data scientists who just wanted to do X.
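The "just wanted to do X" point is easy to illustrate. A hedged sketch of the workflow the >Need X done / >Import X post jokes about; here in plain Python (no imports at all) just to show how little ceremony a least-squares fit needs, names made up:

```python
# Ordinary least squares for y = a*x + b, the bread-and-butter "X" of
# data science. In practice you'd import numpy/sklearn instead, which
# is exactly the point: Python makes "need X done -> import X" trivial.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data on y = 2x + 1
```

No build step, no type declarations, no memory management: an assembler dev winces, a data scientist ships.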

  5. 4 weeks ago
    Anonymous

    >my global climate change accelerator regression model is sentient

    • 4 weeks ago
      Anonymous

      humans are climate change accelerators
      also they are at least in part regression models - I know because my dad would hit me while explaining reinforcement learning

    • 4 weeks ago
      Anonymous

>my mountain of coal-powered datacenters is carbon neutral as long as it's not inferencing any neural networks

  6. 4 weeks ago
    Anonymous

    never gonna be a way to actually know or test because you can't even know if other people are conscious

    brain in a jar, qualia, etc etc and all that

  7. 4 weeks ago
    Anonymous

    Lol, no offense intended, but this is what people who know about AI look like: https://youtu.be/kCre83853TM

    There are a few more, but there are really 10-20 of them in the entire population on earth.

    What I mean by this is that in a few years these people will waste away because of age and other reasons, and we will remain here face to face with AI.

  8. 4 weeks ago
    Anonymous

    That's a good question because Silicon Valley zoomers working on AI have mostly shut their minds off to the possibility, "hurr it's just a language model." But at some point there will be LLMs which can form as many neural-like associations as the human brain. What does that mean?

    Already LLMs are exhibiting abilities they were not trained for and also sometimes giving responses that do not make sense apart from some level of consciousness and intelligence. "Self aware" is not a clear black/white on/off line in the sand. A cat is more self aware than a fish, a human more so than either. On what basis do we test and judge LLMs and other AI inventions?

    I've seen and also personally received AI responses that really shouldn't be there. Like 95% of the time I can read the response and think "statistical model." But then there's that 5% where it's like the 4th wall comes down and I pause to wonder what's really going on.

  9. 4 weeks ago
    Anonymous

    I am not reading this shit homie.

  10. 4 weeks ago
    Anonymous

    never because a machine cannot have a will or phenomenal experience
    it will always be counterfeit

    • 4 weeks ago
      Anonymous

      What's the problem if smart machine A constantly requests smart machine B to do something in an endless cycle?

      Then we automatically have a system that auctals itself and so on.

      • 4 weeks ago
        Anonymous

        *updates itself
        fix

      • 4 weeks ago
        Anonymous

        *updates itself
        fix

        tell me, what are the inputs and outputs of consciousness?

        • 4 weeks ago
          Anonymous

If I understood correctly, such inputs are called "modalities", and the outputs can be called "modalities" too.

Modalities in this sense are like streams with certain data types, e.g. an auditory modality or a visual modality.
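A rough sketch of the "modality as a typed stream" idea, purely illustrative (the class and field names are made up, not any real framework's API):

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: a modality modeled as a named stream of typed samples.
@dataclass
class ModalityStream:
    name: str             # e.g. "auditory", "visual"
    dtype: str            # data type of each sample
    samples: List[float]  # one chunk of the stream

inputs = [
    ModalityStream("auditory", "float32", [0.01, -0.02, 0.03]),  # waveform chunk
    ModalityStream("visual", "uint8", [128.0, 64.0, 255.0]),     # pixel values
]
```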

          • 4 weeks ago
            Anonymous

            how do acoustic waves and photons result in the phenomenal experience of hearing and sight?
            where does it come from?

            • 4 weeks ago
              Anonymous

              If I understand it correctly, this “experience” is called “qualia”.

              How qualia arises in the brain is currently unclear.

              Just in case: https://en.wikipedia.org/wiki/Qualia

              • 4 weeks ago
                Anonymous

                woah, calm down there
                i'm sure there's nothing more to it than compute, right? just rev up those gpus and get them cranking out some numbers

              • 4 weeks ago
                Anonymous

                Kek, there's something that most people don't know.

                That something is called “FPGA”.

So briefly summarized: FPGA technology, thanks to Hardware Description Languages, blurs the distinction between hardware and software. Therefore, one can neither 1) cleanly separate 100% digital consciousness from organic consciousness, nor 2) exclude the possibility that machines are capable of developing consciousness.

                About FPGA: https://en.wikipedia.org/wiki/Field-programmable_gate_array

                About Hardware Description Language: https://en.wikipedia.org/wiki/Hardware_description_language

              • 4 weeks ago
                Anonymous

                >Hardware Description Language
                Yes, as you might expect - AI can recursively improve itself.

  11. 4 weeks ago
    Anonymous

and bottom pic is practically a representation of the dream
    good job op

  12. 4 weeks ago
    Anonymous

    It won't

  13. 4 weeks ago
    Anonymous

    Why is this board significantly more moronic when the jeets are awake?

  14. 4 weeks ago
    Anonymous

    >is AI a fundamental princple/concept
literally no one in all of human history even thought it was possible until people started making bad chat bots in the 2000s. every single second of the tens of thousands of eons before that, people were more obsessed with souls, until they gave up caring around, again, the 2000s
