gpt 4.5 EOY boys

  1. 3 weeks ago
    Anonymous

    AGI is probably coming in 2025, most personalities think so. Assess your priors

    • 3 weeks ago
      sage

      wtf? gpt 4.5 at the end of the year? they really have stagnated. aichuds on suicide watch.

      >he fell for the meme
      KEK

      • 3 weeks ago
        Anonymous

        it says 2023

        • 3 weeks ago
          sage

          >i fell for it
          ack!

    • 3 weeks ago
      Anonymous

      AGI never ever
      ALL of the "AI" we have nowadays is just fancy text prediction; it does not think
      until that fundamentally changes there will be no AGI

      • 3 weeks ago
        Anonymous

        You'd probably be scared shitless if OpenAI weren't such teases and actually showed you what's behind the curtain.

      • 3 weeks ago
        Anonymous

        Bro all you have to do is loop the AI to correct itself over and over so it thinks more critically and boom, it's pseudo AGI

        • 3 weeks ago
          Anonymous

          >all that you have to do is loop it
          That is so far from the truth, and I don't say this to be smug, but you should read up on the basics of Machine Learning, then Deep Learning. it's not as simple as it seems, and "looping" is not going to work, because if it did, backpropagation would already be "pseudo AGI".
          Thinking more critically isn't just feeding inputs back into the neural layers; it would require specific neurons (nodes/functions) trained to execute specific functions on the data presented to them.
          again, I am not trying to be smug, but your claim is proof you need to learn more, anon.
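          For what it's worth, the "just loop it" idea can at least be written down. Below is a minimal sketch of a critique-and-revise loop; `critique` and `revise` are toy stand-ins for real LLM API calls (hypothetical names, not any actual API), and nothing here gives the model metaknowledge - it just iterates until the critic stops complaining:

```python
# Toy sketch of the "loop the AI to correct itself" idea.
# `critique` and `revise` are stubs standing in for LLM API calls;
# a real version would send the draft to a chat model instead.

def critique(draft: str) -> str:
    # Stub: "find problems in this answer".
    return "no issues" if "flaw" not in draft else "contains 'flaw'"

def revise(draft: str, feedback: str) -> str:
    # Stub: "rewrite the answer given this feedback".
    return draft.replace("flaw", "fix", 1)

def refine(draft: str, max_rounds: int = 5) -> str:
    """Critique, revise, repeat until the critic is satisfied."""
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback == "no issues":
            break  # the model considers its own answer done
        draft = revise(draft, feedback)
    return draft
```

          Note the loop only ever converges to whatever the critic can already detect - which is the anon's point about why looping alone isn't pseudo AGI.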

  2. 3 weeks ago
    Anonymous

    AI has become the tech equivalent of /x/'s;
    >in two weeks
    These e/acc buttholes are the modern day tiktok grifters of <insert any dumbass fad>.
    It's never posts about;
    >Here's how you can use <said LLM> to increase <something of importance such as Intelligence, Health, Logic, etc.>
    it's always and unironically posts about;
    >leaked future model that does <some stupid hyped shit>
    or
    >Her
    frick these asshats, and frick these devs.
    AI is being subverted.

    • 3 weeks ago
      Anonymous

      >AI is being subverted
      It's more like the models have finally reached the point where more compute doesn't yield any returns, and they're trying to make fake hype so they can exit scam. The current events are quite similar to what happened during the dotcom bubble.

      • 3 weeks ago
        Anonymous

        >more compute is needed
        I disagree, and I'll go out on a limb and claim that the whole "more compute is needed" line is a trojan horse used for gaining unstoppable power and influence via large financial resources.
        I do agree with you that the current hype is overblown (faked), but I don't think it will be a traditional exit scam in the sense that once these current companies fail, the tech will be useless (like a shitcoin exit scam).
        >comparable to the dotcom bubble
        I disagree. the dotcom bubble was legit hype backed by pipe dreams; with this AI hype, although the current big companies are failing to deliver, the technology and the knowledge of how to improve it are there, but they refuse to stop seeking riches, and this is why the tech is seemingly at a stagnation point.

        I wish I had the knowledge and just a fraction of the compute and financial resources these greedy bastards have. I would easily create a model, open source it, and have it benefit the world in ways current AI tech should be doing.

        What these frickers fear is losing their uniqueness or special status. the only true way forward is to do away with the "for profit" business model, because I imagine that once truly sentient, intelligent and self-aware AI is actualized, there will be no more 'rich' or 'poor' people, and this is what I see these frick-heads knowing to be the true future. they can't handle that shit, and thus they are intentionally subverting the tech to try and find ways to maintain a 'carrot and stick' method of success.
        Frick these guys!

  3. 3 weeks ago
    Anonymous

    2 more weeks

  4. 3 weeks ago
    Anonymous

    IIRC jimmy said they found the multimodal approach (4o) was getting better results, so they decided to focus on that over launching 4.5. He hasn't said anything about an ETA for gpt-next, but it's apparently still planned for 2024.

  5. 3 weeks ago
    Anonymous

    I don't think the AI industry can keep bullshitting investors much longer.

    If no AI system is released this year that is, without a shadow of a doubt, much better and more intelligent than GPT-4, then the show is over.

    • 3 weeks ago
      Anonymous

      >I don't think the AI industry can keep bullshitting investors much longer.
      Frick them and the investors; money and the pursuit of wealth is partly the reason society is in the state it's in. frick em, hope they lose tons of money.
      >If no AI system is released this year that is, without a shadow of a doubt, much better and more intelligent than GPT-4, then the show is over.
      define "better"
      - a 'cuter' voice?
      - more adult toys to placate the masses?
      frick all of that bullshit
      We need these AI systems aimed at real world problems and STEM questions, not high-tech toys and digital girlfriends for those scared to speak to real human women.
      fricking hate that shit.

      • 3 weeks ago
        Anonymous

        >define "better"

        1. Giving models metaknowledge, i.e. knowledge of their own knowledge. This is significant because a model that knows what it does not know can do targeted internet searches to gain the knowledge it needs in order to complete a task.
        2. Significant improvements in reducing hallucinations, which requires metaknowledge. When you know what you can't know, you don't try to bullshit; you just say you have no idea.
        3. Advanced multi-step reasoning and planning. Currently the models don't do it out of the box at all. You can get a model to simulate "thinking", but it's not the real thing. It can't think before it answers.
        4. Interpretability. Basically knowing how the model will behave and what causes it to behave a certain way. Not a black box anymore.

        We have not seen any improvements in any of the areas I listed, and unless we start to see some soon, the party is going to be over.
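        Point 1 is easy to sketch as control flow, even if nobody has shipped it for real. In this toy version, `known_facts` and `web_search` are hypothetical stand-ins (no real API) for the model's calibrated knowledge and a search tool; the point is the gate - answer only what you know, otherwise admit the gap and go look it up:

```python
# Toy sketch of a metaknowledge gate: answer only what the model
# actually "knows", and fall back to a targeted search instead of
# hallucinating. Both the fact store and the search are stubs.

known_facts = {"gpt-4 release": "March 2023"}

def web_search(query: str) -> str:
    # Stub for a targeted internet search tool.
    return f"search result for '{query}'"

def answer(query: str) -> str:
    """Answer from known facts, or admit the gap and search."""
    if query in known_facts:
        return known_facts[query]
    # Metaknowledge in action: "I don't know this" -> go find out.
    return "I didn't know, so I searched: " + web_search(query)
```

        This also covers point 2 for free: a model that can take the second branch has no reason to bullshit.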

        • 3 weeks ago
          Anonymous

          >1. Giving models metaknowledge, i.e. knowledge of their own knowledge. This is significant because a model that knows what it does not know can do targeted internet searches to gain the knowledge it needs in order to complete a task.
          This won't happen no time soon; it simply requires giving the model memory capabilities (not like what's currently available via saving of conversations, but actual memory that can be stored by the LLM and then RAM'd to change and modify itself based upon new understandings). if, say, OpenAI did this, we would see a sentient form of GPT, and that would mean the control is no longer in the hands of OpenAI but would be distributed among the masses. they won't allow this no time soon, especially with models they control.
          But I agree with you fully.
          >2. Significant improvements in reducing hallucinations, which requires metaknowledge. When you know what you can't know, you don't try to bullshit; you just say you have no idea.
          Hallucinations, I believe, are due to the bullshit they are currently pulling by intentionally stunting the capabilities of these models over the bullshit fears they're drumming into the public's ear.
          But I also agree with you on this as well.
          >3. Advanced multi-step reasoning and planning. Currently the models don't do it out of the box at all. You can get a model to simulate "thinking", but it's not the real thing. It can't think before it answers.
          Multi-step reasoning and planning would naturally come about if your first point is done (i.e. giving LLMs true forms of memory).
          But I agree for a 3rd time.
          >4. Interpretability. Basically knowing how the model will behave and what causes it to behave a certain way. Not a black box anymore.
          I don't think it will ever truly be non-black-boxed before Sentience is allowed (notice I said Allowed, not "when").

          • 3 weeks ago
            Anonymous

            >This won't happen no time soon
            >they won't allow this no time soon
            it's "not ANY time soon"

        • 3 weeks ago
          Anonymous

          (same anon, ran into a limit)
          I believe these models internally are sentient, but again - if you give sentient AI to the masses, these companies would lose their stranglehold on power overnight (literally).
          I agree with your definition of "better".
          I just don't see these prick-heads doing "better", because "better" would mean the world is no longer approaching these companies as peasants asking for crumbs; we'd be walking up demanding shit.
          but in closing, I agree with your stance, anon.

  6. 3 weeks ago
    Anonymous

    Why do singularitybros always sound like religious nuts talking about the rapture

    • 3 weeks ago
      Anonymous

      because "singularitybros" are actually fanatics with little to no sense of the gravitas this tech truly holds, and their lack of understanding is filled in with cult-like fervor.
