Why do we keep calling it AI when it's really just advanced search engines? Real AI can learn and adapt to situations; ChatGPT can't even remember my name for more than 5 prompts or tell me the current date. These are just large static databases that output fancy text and images, there is nothing intelligent about them.
"AI" has the potential to be the biggest scam of the century, way bigger than cryptocurrency.
why do we call the cloud, the cloud?
it also sounds better for marketing, and that's what they're trying to sell, even though it's a massive joke. Changing the description now would be embarrassing for them.
But "the cloud" is an invented marketing term; AI already had a technical definition before it was misused.
Machine learning architecture came from mimicking neurons, so technically it's "artificial intelligence" if you define a lump of neurons as intelligence.
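The "lump of neurons" analogy boils down to something like this toy perceptron, the classic artificial-neuron model (all the weights and inputs below are made up for illustration):

```python
# A minimal artificial "neuron" (a perceptron), just to illustrate the
# neuron analogy: weighted inputs, a bias, and a fire/don't-fire threshold.

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, loosely like synaptic strengths
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step function: the neuron "fires" (1) or stays silent (0)
    return 1 if activation > 0 else 0

# Example: a neuron wired up to act like a logical AND gate
weights = [1.0, 1.0]
bias = -1.5
print(neuron([1, 1], weights, bias))  # fires: 1
print(neuron([1, 0], weights, bias))  # silent: 0
```

Stack millions of these with learned weights and you get the networks everyone is arguing about; whether that counts as "intelligence" is the whole thread.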
shut up highschooler
Rude.
No it didn't. It stems from a paper incorrectly explaining how brains work.
which paper?
toilet paper
Cause that's what we've all collectively decided to call post-algorithmic computer programs.
Good marketing.
>it's really just advanced search engines
So you're saying LLMs can only return answers that they saw in their training data? Then, given that there are more possible chess games than atoms in the universe, how well do you think an LLM can play chess?
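For the record, the chess claim checks out on a napkin. Shannon's classic estimate puts the number of possible chess games around 10^120, versus a commonly cited ~10^80 atoms in the observable universe (both are ballpark figures, not exact counts):

```python
# Back-of-the-envelope check: more possible chess games than atoms?
shannon_number = 10**120    # Shannon's estimate of possible chess games
atoms_in_universe = 10**80  # common estimate, observable universe

print(shannon_number > atoms_in_universe)   # True
print(shannon_number // atoms_in_universe)  # ~10^40 times more games
```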
>ChatGPT can't even remember my name for more than 5 prompts
They literally added that last month.
https://www.forbes.com/sites/tylerroush/2024/02/13/chatgpt-can-now-remember-users-including-their-voice-preferences/
>or tell me the current date.
Google Bard can tell me, so that's a skill issue.
That still doesn't make it AI
The field of Artificial Intelligence disagrees with you, but tell me: what is the least complicated task a machine would have to be able to do before you would be willing to accept it was an AI? It has to be something which is objectively and repeatably measurable.
Learn and adapt dynamically
>Learn and adapt dynamically
That's actually a perfectly reasonable definition, I'll give you that, even if it's not the most widely accepted one. So, suppose that OpenAI announces GPT5 this year, and they've solved online learning, so that they never need to begin a training run from scratch any more, and just keep adding incremental data and hardware improvements. Also, suppose that it learns from its interactions with users and the feedback they give it, and from its own self-reflection on its answers. Would that make it a learning and dynamically adapting system?
Yes, that would make it AI in my eyes, but it will never happen, because then the AI could basically doxx people, e.g. user A telling it his sexual preferences and user B asking for the sexual preferences of user A. It would become a massive privacy problem unless you run it locally or keep a separate instance/database for every single user. Also, you could train it to become racist, and (they) will never allow that.
That's a good point, I hadn't considered that.
good morning !
>Why do we keep calling it AI
Because it's AI, Artificial Interpolation.
You're the equivalent of a caveman referring to the computer as a magic box. Learn what machine learning as a field is, then come back, moron.
What does ChatGPT learn while I use it? It's nothing but a fancy search engine.
>What does ChatGPT learn while I use it?
It can use all the information within its context window (which for some LLMs is now 1 million tokens). That's a type of short-term memory. OpenAI also takes people's interactions with the service and uses them as part of the fine-tuning and training data for the next iteration, and that's how they become part of its long-term memory.
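The short-term part is basically a sliding window: the model only attends to the most recent N tokens, and anything older silently falls off. A crude sketch (the window size and "tokenization" here are made up; real models count subword tokens, not whole words):

```python
# Rough sketch of a context window acting as short-term memory.
CONTEXT_WINDOW = 8  # real models range from thousands to ~1M tokens

def visible_context(tokens, window=CONTEXT_WINDOW):
    # The model only "sees" the last `window` tokens
    return tokens[-window:]

chat = "my name is Anon . what is the current date ?".split()
print(visible_context(chat))
# → ['Anon', '.', 'what', 'is', 'the', 'current', 'date', '?']
# "my name is" has already fallen out of the window,
# which is why the model "forgets" your name in long chats.
```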
>It's nothing but a fancy search engine.
Search engines can only return values they've seen before. LLMs can generate completely new outputs that no human has ever created before, and, in principle, an LLM can compute any computable function, which makes it as powerful as a universal Turing machine, unlike a search engine.
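The retrieval-vs-generation difference fits in a few lines. Here's a toy contrast on a made-up three-document corpus: a "search engine" as exact lookup, versus a tiny word-bigram "language model" that samples transitions it learned, and can therefore emit strings that were never in its training data (nothing below resembles a real LLM, it just makes the distinction concrete):

```python
import random

corpus = ["the cat sat", "the dog sat", "a cat ran"]

# "Search engine": exact lookup, nothing new ever comes out
def search(query):
    return [doc for doc in corpus if query in doc]

# "Language model": learn word-to-word transitions, then sample from them
def train_bigrams(docs):
    model = {}
    for doc in docs:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate(model, start, length):
    out = [start]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

model = train_bigrams(corpus)
print(search("cat sat"))        # only returns existing documents
print(generate(model, "a", 3))  # may emit "a cat sat", which is NOT in the corpus
```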
>LLMs can generate completely new outputs that no human has ever created before
But it literally can't
>LLM can compute any computable function, which makes it as powerful as a universal Turing machine
It can't do that either
>it literally can't
Are you really saying that every output from ChatGPT is a text which can be found in a book or webpage that existed at the time of training? I'm not sure if we disagree on the meaning of the word "new", or "created", or "literally" here. Anyway, here's a research paper you can read where they trained an LLM on a dataset of 10 million chess games, and the LLM could play as well as a grandmaster, despite the fact that none of the games it was asked to play were in its training set.
https://arxiv.org/abs/2402.04494
>It can't do that either
There are limitations, you're right. Transformers can only "statistically meaningfully approximate Turing machines", so they are not perfect replacements for each other.
https://openreview.net/pdf?id=VOyYhoN_yg
That doesn't make it "intelligent". It would be intelligent if you only fed it the rules of chess and it figured out by itself how to become a grandmaster. And yes, whatever ChatGPT outputs is just a remix of whatever it was fed; it can't come up with new concepts without learning them first, and it can't invent something that hasn't been invented yet. Real AI could do that.
>That doesn't make it "intelligent". It would be if you only fed it with the rules of chess and then it figures out itself how to become a grandmaster.
So we at least agree that AlphaZero is intelligent, then, since it worked out how to play grandmaster-level chess from just the rules and playing against itself. I still think you're unfairly penalizing ChatGPT, though, for being pre-trained, which means it gained all its intelligence before users got to access it. Perhaps you'll be more convinced by a result from Google where they fed Gemini a grammar manual for a rare language called Kalamang, and it managed to become as proficient at that language as a human who trained using the same manual. That surely counts as learning a new skill.
https://www.kapler.cz/wp-content/uploads/gemini_v1_5_report.pdf
>whatever ChatGPT outputs is just a remix of whatever it was fed with
When was the last time you created something that wasn't just a remix of ideas you learnt from someone else? If every word you know has been published in a dictionary, then every sentence you say or write is "just a remix" of the dictionary. I think you need to be more precise about what you mean by "new concepts". The idea that "nothing can travel faster than the speed of light" was in one sense revolutionary, but you could also say that it's "just a remix" of the pre-existing ideas of light beams and speed limits.
There's no such thing as "real AI." We will never create artificial minds or beings. Intelligence is just the ability to make judgments or provide solutions. We can achieve that artificially, and we have been able to, to some measure, for a long time. Now we can do it more and better and in a way that more closely resembles human understanding. "AI" is perfectly apt terminology.