This AI stuff is getting out of hand. What do I tell my friends who are off in cloud cuckoo land over it? Or should I join in the fun?
You can start by speaking English
Ironic. You must be a Europoor
We're just getting started.
What exactly is your average person using AI for that is fun?
https://voca.ro/1gagsLZ3cT61
anon...
Are you going to tell me? Generating audio/video shit is boring, idc if the quality has improved. The novelty isn't there anymore. Seriously what are people still doing with this shit? I don't personally know anyone who still gives a crap.
the real fun is taking a side on x-risk and spending all day debating idiots from the other side online
AI hype is getting out of hand. These are just advanced search algorithms/programs, they're not intelligent by any metric. AI's usefulness is being grossly overstated
>ask ChatGPT to write a basic C program consisting of 2 functions
>it doesn't work
>try copilot
>it won't compile
While it's true there is no semantic reasoning occurring in the models, the human response is for the most part equally pathetic.
It's like we're living in an /x/ shitpost where there aren't enough souls to go around and the rest have to make do with symbolic automata
>they're not intelligent by any metric
except, you know, IQ tests
https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq
This is stupid. If a human with no familiarity with the test gets 100 that tells you something. If an AI that has who knows what in its training set passes that tells you absolutely nothing.
most humans that take IQ tests have at least a few decades of training data, not counting 600 million years of architectural optimization
There are answers to the test online. There are a lot of reasons to suspect the results are tainted. Even if the test was new to it there are a lot of subtypes of questions that it could recognize and answer correctly simply because it has such strong recall capabilities.
>t. election tourist
kys
Squabbling over "It's intelligent! it's not intelligent! it's self aware! it has no semantics!"/whatever is a psychic trap. Either discuss properties you can actually quantify, or moral dilemmas you think are relevant NOW, or move on.
There's so many midwits on r/singularity claiming that "humans are just token predictors" lmao
Materialist reductionism taken to absurd extremes
>"humans are just token predictors"
Are we not?
Well, some people aren't even that sophisticated.
I've yet to see an example of some type of intellectual task that couldn't be done by a sufficiently scaled up token predictor.
Later this year, when GPT5 is released with Q* built into it, the question will be moot, as ChatGPT will no longer be just predicting tokens but searching through the space of possible outputs, like AlphaGeometry did for mathematical reasoning.
>I've yet to see an example of some type of intellectual task that couldn't be done by a sufficiently scaled up token predictor.
how do LLMs do with prompt engineering themselves?
>how do LLMs do with prompt engineering themselves?
Unsurprisingly, researchers have tried to get LLMs to write their own prompts, which is a bit like asking a genie to give you more wishes.
https://aclanthology.org/2023.findings-acl.216/
> we propose Consistency-based Self-adaptive Prompting (COSP), a novel prompt design method for LLMs. Requiring neither handcrafted responses nor ground-truth labels, COSP selects and builds the set of examples from the LLM zero-shot outputs via carefully designed criteria combining consistency, diversity and repetition. In the zero-shot setting for three different LLMs, we show that using only LLM predictions, COSP significantly improves performance up to 15%
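The core trick in that abstract (picking in-context examples by self-consistency of the model's own zero-shot outputs) can be sketched in a few lines. This is a toy illustration of the idea, not the paper's actual implementation: the function name, data, and scoring are made up, and real COSP also weighs diversity and repetition, not just consistency.

```python
from collections import Counter

def select_pseudo_demos(question_outputs, k=2):
    """Toy COSP-style selection: for each question, look at several
    sampled zero-shot answers, score them by self-consistency (how
    often the majority answer recurs), and keep the most consistent
    (question, answer) pairs as in-context pseudo-demonstrations."""
    scored = []
    for question, samples in question_outputs.items():
        answer, freq = Counter(samples).most_common(1)[0]
        consistency = freq / len(samples)  # 1.0 = model fully agrees with itself
        scored.append((consistency, question, answer))
    scored.sort(reverse=True)  # most self-consistent first
    return [(q, a) for _, q, a in scored[:k]]

# Stand-in sampled zero-shot answers for three questions:
outputs = {
    "2+2?": ["4", "4", "4", "4"],                 # fully consistent
    "capital of France?": ["Paris", "Paris", "Lyon"],
    "17*23?": ["391", "381", "401", "291"],       # model is unsure
}
print(select_pseudo_demos(outputs, k=2))
# -> [('2+2?', '4'), ('capital of France?', 'Paris')]
```

The inconsistent arithmetic question gets dropped, which is the whole point: no labels needed, the model's agreement with itself is the filter.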
>prompt engineering
I can't believe this is a real job. For women.
>as ChatGPT will no longer be just predicting tokens but searching through the space of possible outputs, like AlphaGeometry did for mathematical reasoning.
Isn't that just predicting tokens with a slightly different algorithm? Not that I think there's anything invalid about token prediction.
The only reasonable criticism about the way LLMs "predict tokens" is that they are a single pass through the inference process, and they can't backtrack or search around for novel ideas.
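The single-pass vs. search distinction is easy to demonstrate on a toy next-token model. Below, the `NEXT` table is invented for illustration (a real LLM would supply these log-probabilities): greedy decoding commits to the locally best token and gets trapped, while a small beam search keeps alternatives alive and can effectively "backtrack" to a better sequence.

```python
import heapq

# Toy next-token model: prefix -> list of (token, log-probability).
# The scores are made up; an empty list marks end of sequence.
NEXT = {
    "": [("the", -0.1), ("a", -0.5)],
    "the": [("mat", -3.0)],   # greedy commits to "the" and can't back out
    "a": [("cat", -0.2)],
    "the mat": [],
    "a cat": [],
}

def greedy(prefix=""):
    """Single left-to-right pass: always take the locally best token."""
    score = 0.0
    while NEXT[prefix]:
        token, lp = max(NEXT[prefix], key=lambda x: x[1])
        prefix = (prefix + " " + token).strip()
        score += lp
    return prefix, score

def beam(width=2):
    """Keep the `width` best partial sequences instead of just one."""
    beams, finished = [(0.0, "")], []
    while beams:
        candidates = []
        for score, prefix in beams:
            if not NEXT[prefix]:
                finished.append((score, prefix))
                continue
            for token, lp in NEXT[prefix]:
                candidates.append((score + lp, (prefix + " " + token).strip()))
        beams = heapq.nlargest(width, candidates)
    return max(finished)

print(greedy())  # ('the mat', -3.1): locked in by the first choice
print(beam())    # (-0.7, 'a cat'): search finds the better sequence
```

Both decoders call the same "token predictor"; only the search strategy differs, which is why bolting search on top (the AlphaGeometry-style move mentioned above) doesn't require a fundamentally different kind of model.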
But you're right that criticizing an artificial neural network for "predicting tokens" is like criticizing a biological neural network for "predicting inclusive genetic fitness".
It's true that humans are constantly trying to model the world and picking actions that will avoid pain and achieve pleasure, including having healthy offspring, but that doesn't mean that our intellectual achievements aren't real, just like the achievements of LLMs.