Has Generative AI Already Peaked?

  1. 1 week ago
    Anonymous

    probably, it seems you need exponentially more data for a linear improvement, and they're already training them on the fricking library of alexandria, five gazillion terabytes of literature and the whole internet
    something else is gonna have to change
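
    The "exponentially more data for a linear improvement" intuition matches the power-law scaling curves reported for large models: loss falls roughly as a power of dataset size, so each fixed step of improvement multiplies the data bill. A minimal sketch in Python (the exponent and reference size are made-up illustrative constants, not measured values):

```python
# Toy power-law data-scaling curve: loss ~ (D0 / D) ** alpha.
# alpha and D0 are illustrative assumptions, not fitted values.
alpha = 0.095
D0 = 1e9  # hypothetical reference dataset size, in tokens

def loss(tokens: float) -> float:
    """Loss under a pure data power law."""
    return (D0 / tokens) ** alpha

def data_needed(target_loss: float) -> float:
    """Invert the power law: tokens required to reach a given loss."""
    return D0 / target_loss ** (1 / alpha)

# Each fixed step down in loss multiplies the data requirement ~10x here.
for L in (0.5, 0.4, 0.3):
    print(f"loss {L}: {data_needed(L):.3e} tokens")
```

With a small exponent like this, shaving loss from 0.5 to 0.4 costs roughly an order of magnitude more data, which is the "exponentially more data" feeling in practice.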

    • 1 week ago
      Anonymous

      >they're already training them on the fricking library of alexandria, five gazillion terabytes of literature and the whole internet

      and still no useful use case for it

      • 1 week ago
        Anonymous

        wdym, it creates plausible advertising slop so the internet can become even more of a landfill

        • 1 week ago
          Anonymous

          Hush. Don't go against the narrative.
          Bad goy.

      • 1 week ago
        Anonymous

        >and still no useful use case for it
        Ebussy single handedly defeats Roko's basilisk.
        "No use case for this feature."

  2. 1 week ago
    Anonymous

    sora is the only "new" thing i've been impressed by.
    but ai chatbots feel like they haven't improved in 12+ months.

    • 1 week ago
      Anonymous

      they've gotten worse if anything, they realized that using 20 dollars worth of compute and electricity is not worth responding to a single chat query, and are scrambling to "optimize" it by using shittier models
      i can't wait for all these negative revenue AI startup companies to burn in flames

      • 1 week ago
        Anonymous

        Is that why it's worse? They're doing shit to make it consume less computing power?

        • 1 week ago
          Anonymous

          Nah, that might be their excuse but if they could make a better AI they would.
          It really seems like they have reached a plateau on what neural networks can do so now they look for other improvements like lower power consumption so they don't have to tell their investors that their tech is stagnant.

          • 1 week ago
            Anonymous

            They're looking for new original works to steal from.

        • 1 week ago
          Anonymous

          Llama3 is a really good model. 8B is still excellent.

  3. 1 week ago
    Anonymous

    At what point will the internet be so filled with AI garbage that it can't be detected, and models train on so much of it that they start imploding?

  4. 1 week ago
    Anonymous

    >Has Generative AI Already Peaked?
    Well, that is what scientific evidence suggests right now.
    And hyping up AI made a lot of sense to get billions of investment money, but when did any AI company ever show results?

    • 1 week ago
      Anonymous

      AI's profitable use cases are mostly amoral. They are not going to advertise their successes in those fields.

  5. 1 week ago
    Anonymous

    Well, yeah. This is how AI research has gone since the 1970s:
    >new ai technique gives excellent results if you throw enough hardware and training at it
    >laymen press tells us that we'll have agi in five years
    >peanut gallery gathers around and waits for the birth of the hyperintelligent ai that will solve all our problems
    >the expert system doesn't develop into a hyperintelligent ai
    >the ocr software doesn't develop into a hyperintelligent ai
    >the numberplate reader doesn't develop into a hyperintelligent ai
    >the pathological liar of a search engine doesn't develop into a hyperintelligent ai
    >the thing that draws people with way too many fingers doesn't develop into a hyperintelligent ai
    >another ai winter arrives as everybody loses interest for a decade
    >hardware improvements allow a new ai technique that's never been tried before
    >cycle repeats

    • 1 week ago
      Anonymous

      I've lived through all of these. No one thought any of those would result in AGI; all of them succeeded at their purpose, and they have nothing to do with AI or AGI (unless you want to make some weird inference about them contributing to the tools available to an AGI, but you didn't state that). The closest thing to the public being aware of and interacting with some kind of communicative AI was Cleverbot, and that was just a toy: you could type in "What is a man?" and it would reply "A miserable pile of secrets!", but it couldn't actually talk at all.

      I don't disagree with you that LLMs (that's what we're talking about here, just LLMs, strictly AI agents) are only possible via the hardware that exists today and the big data that's been gathered.
      But I don't agree that the ceiling on today's LLMs is one of compute; the underlying method itself is probably fundamentally limited, which is why work is being done on other ways of generating output.

      • 1 week ago
        Anonymous

        >no one thought any of those would result in AGI
        Yes they did, and even made movies about it

        • 1 week ago
          Anonymous

          i skipped a lot of class in high school. i remember one of my teachers told me "when you get to college, it'll be different and you're going to do great".
          and, when i was in college, actually it wasn't all that different and i still fricked around a lot and skipped class. i was almost angry with my teacher for deceiving me, for making me think that i would do well.
          as i got older, i came to realize that's really the only thing they could say. i mean, it would be ridiculous to go "college is going to be the same, you're going to suck there too," because that would be discouraging. it's like a pascal's wager thing. being pessimistic results in a lower likelihood of positive outcomes.

          similarly, if you are a researcher and it looks like maybe there's a chance, you're better off vouching for yourself. it's a game theory thing. if people will give you money to explore the idea, and you think "yeah maybe it might go somewhere", you're not going to decline because you're not super sure about it. that would be moronic.

        • 1 week ago
          Anonymous

          Nobody did, who cares about movies for normies

      • 1 week ago
        Anonymous

        >In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

      • 1 week ago
        Anonymous

        Yes they did, anon. The expert systems of the late 1970s are the direct ancestor of the dystopian AGI film genre (of which The Terminator is the most famous example) of the early 1980s.
        It was repeated again when OCR came along. "COMPUTERS CAN NOW READ AGI SOON" bellowed the press and the peanut gallery.

  6. 1 week ago
    Anonymous

    >Has Generative AI Already Peaked?
    no

  7. 1 week ago
    Anonymous

    Maybe, but Yann LeCun did an interview with Lex Fridman a few months ago and the tl;dr was that LLMs today are steam engines; we've yet to invent the combustion engine, but we're working on it.

  8. 1 week ago
    Anonymous

    i am worried about AI shitting where it eats. using the entirety of reddit today as training data is just going to reinforce the same garbage that already exists, because of how heavily LLM bots are posting there. same with stack exchange and any other public forum.

  9. 1 week ago
    Anonymous

    LLMs have peaked and are basically limited by compute capacity and volume of data. I don't think we're done with AI image generation though.

  10. 1 week ago
    Anonymous

    if OpenAI Q* turned out to be a nothingburger then it's over.

    • 1 week ago
      Anonymous

      I think it's a smaller model, maybe a new architecture but we have to wait and see.

  11. 1 week ago
    Anonymous

    Are gen AI and LLMs different things?
    I'm being forced to learn LLMs right now and, from their examples, it really is impressive: giving instructions to a computer in natural language and having it perform them almost perfectly. But what I noticed is that all of the examples they have are textbook examples from a high-school programming 101 class.

    • 1 week ago
      Anonymous

      Those examples are the most likely solutions assuming the information in the environment is complete and accurate (it's not). They bear a strong resemblance to the context of the prompt. As contextual objects, the code snippets it finds are, as a whole, the thing that best matches the prompt.
      >HDD2K

  12. 1 week ago
    Anonymous

    I guess that's kinda why GPT and other models get worse the longer you talk with them. Seems like we're at a plateau.

    • 1 week ago
      Anonymous

      conversations don't naturally go on forever, anything past a certain point is necessarily extrapolation and hallucination
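
      There is also a mechanical reason long chats degrade: once the context window fills, clients drop or compress the oldest turns, so the model literally stops seeing the start of the conversation. A minimal sketch of the usual sliding-window truncation (word counts stand in for a real tokenizer here; actual APIs count tokens):

```python
# Keep only the most recent turns that fit in a fixed context budget.
# len(turn.split()) is a stand-in for a real tokenizer's token count.
def truncate_history(turns: list[str], budget: int) -> list[str]:
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):        # walk newest-first
        cost = len(turn.split())
        if used + cost > budget:
            break                       # everything older gets dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))         # restore chronological order

history = ["hello there", "long setup message with many many words",
           "ok", "final question"]
print(truncate_history(history, budget=8))  # the long early turn is gone
```

Anything the model says about the dropped turns afterwards is, as the post above puts it, extrapolation.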

  13. 1 week ago
    Anonymous

    Everything we see today is pretty much the result of a single paper called "Attention Is All You Need". So if things are peaking now, they are only peaking until the next landmark paper comes out and changes everything again. Sorry to the homosexuals who get a hardon by proclaiming that any given thing is just hype and will fail.
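
    For reference, the core operation that paper introduced, scaled dot-product attention, fits in a few lines. A minimal single-head sketch with NumPy (the shapes and random inputs are illustrative only):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_q, n_k) query-key similarities
    # numerically stable row-wise softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                  # each query gets a weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)                        # one 8-dim output per query
```

Everything since, multi-head attention, LLMs, diffusion-model text encoders, stacks variations of this one operation.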

  14. 1 week ago
    Anonymous

    Probably not, but the bubble is real. AI is going to keep innovating, but I don't think gains significant enough to make new commercial products are going to appear predictably. We still don't have good integration of current AI in the consumer space. Only next year are consumer phones, laptops, and PCs all going to have AI accelerator hardware available across the board, and Microsoft Copilot is the only product with a large userbase positioned to benefit. Copilot is just a chatbot, a modern-day Clippy, but why couldn't it also directly control a PC? If I want to change one annoying UI setting that would require digging through the registry or group policy, why can't Copilot take care of it, instead of just directing me to a forum post explaining how? If I want a macro for various routine tasks, why can't I just ask Copilot to record my keyboard and mouse inputs to do it for me?

  15. 1 week ago
    Anonymous

    His entire argument is that we will never have enough data. Just stick a camera on a dozen newborns' heads and by the time they're 3 it should have all the data and context it needs to understand the nuances and specifics of language and image recognition. That's already happened with social media, smartphones, and big tech watching everything you do.

  16. 1 week ago
    Anonymous

    Excitards status: btfo, Balancels coping, Evidenchads keeping it real as per usual

  17. 1 week ago
    Anonymous

    Go buy an ad for your shitty chain
