FIRST: If you're legit stupid, don't even engage with what I'm about to say.
CHATGPT/LLMs ARE FRAUD AND HYPE - THEY ARE THE NEW MECHANICAL TURKS
They COMPLETELY fail at reasoning, and no amount of parameters or training data can fix this.
LLMs don't do anything, all they can do is write emails, help people with homework, and make coding a bit faster.
I have given GPT-4 simple logic problems that a 6th grader could solve, and GPT-4 simply cannot do them because they are heavily reasoning-based.
Take a reasoning problem and drastically reword it so it's not pulling from its training data, and you will see what I mean.
For instance, if the logic problem deals with an arrangement of colored blocks, change the blocks to sports equipment and wrap a narrative around it.
Because ultimately it is just picking the next word, and it has been trained on most logic textbooks, so it has already seen all the common questions. But when you throw in weird objects, you greatly increase the chances it picks an obscure next word, and you will then see all the "reasoning" break down.
It cannot reason; therefore it is only useful for doing work that was pointless to begin with.
It will never produce anything new, nor can it be used as a decision system in any area with novel situations, because it will inevitably "hallucinate".
>inb4: have multiple LLMs argue with each other to induce reasoning
Nonsensical. At best this is asymptotic: even if each added layer cut the error rate by another 20%, it would never reach zero, and for many decision systems it must be zero. You can't have the checkout price on a vendor site be correct 99% of the time; it has to be determined correctly 100% of the time, which it is under logic-based systems.
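The arithmetic behind "asymptotic, never zero" is easy to sketch. Assuming (purely for illustration) that each extra LLM review layer independently cuts the remaining error rate by 20%, the residual error shrinks geometrically but never actually reaches zero:

```python
# Hypothetical numbers for illustration: suppose each added LLM "judge"
# layer cuts the remaining error rate by 20% (i.e. multiplies it by 0.8).
def residual_error(base_error: float, layers: int, reduction: float = 0.2) -> float:
    """Error rate left after stacking `layers` independent review layers."""
    return base_error * (1 - reduction) ** layers

base = 0.05  # start from a 5% error rate
for n in (0, 5, 10, 50):
    print(f"{n:3d} layers -> residual error {residual_error(base, n):.3e}")

# Geometric decay: always strictly positive, never exactly 0.0,
# unlike a deterministic rule-based price calculation.
assert all(residual_error(base, n) > 0 for n in range(200))
```

Every added layer buys a smaller absolute improvement than the last one, which is why "just add another LLM" can approach but never deliver the 100% correctness a checkout system needs.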
Statistics has proven very practical, but in the end the world requires logic and reasoning, something LLMs can only simulate; therefore they will never achieve "AGI" or replace humans in reasoning-based systems.