>“I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”
Remember all those guys who said GPT-5 is already in training and that it will achieve AGI? Turns out the transformer LLM architecture is out of juice, and it makes no financial sense to throw orders of magnitude more compute at incremental gains. Not to mention there's no more high-quality data left for them to ingest at that scale.