Just a psyop to get fools to pump up AI-related tech stocks.
They're worried about the farm of people they'll hire in India to proompt them out of a job. Oh well.
Is AI bs?
Yes. It turns out it isn't actually intelligent; it can only produce bullshit.
The scary part is that it can excel at IQ tests.
>The scary part is that it can excel at IQ tests.
not really, it's just good at bullshitting. A recent study that tested 'AI' on IQ tests fed the AI its own answers, thereby causing it to hallucinate.
tl;dr "AI" is fucking FAKE and GAY
Your post is so jumbled that I can't make any sense of it.
It's one of those technologies that is actually going to make a big impact, but it's overshadowed by the overhype that's coming with it.
It will be extremely useful as a human-language-to-API interface. It can replace pretty much any level 1 customer support role, where the job is just to pick the right API call based on what the query sentence says.
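To make the "language to API" idea concrete, here's a toy sketch of a query router. In practice an LLM would do the classification; a keyword match stands in here so it runs without a model, and every endpoint name is made up for illustration.

```python
# Toy sketch: routing a natural-language support query to an API call.
# A real level-1 replacement would use an LLM classifier; the keyword
# matching and endpoint names below are purely hypothetical stand-ins.

def route_query(sentence: str) -> str:
    """Map a customer query to the (hypothetical) API endpoint to call."""
    s = sentence.lower()
    if "refund" in s or "money back" in s:
        return "POST /refunds"
    if "track" in s or "where is my order" in s:
        return "GET /orders/{id}/status"
    if "password" in s or "log in" in s:
        return "POST /password-reset"
    # Anything the router can't classify goes to a real person.
    return "ESCALATE /human-agent"

print(route_query("Where is my order? It was supposed to arrive Monday."))
# GET /orders/{id}/status
```

The whole "job" is that one mapping step; swapping the keyword checks for a model call is what the hype is actually about.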
They will also excel at online posting for consumerism, like the new ad meta. Just post your product and have fine-tuned models post various responses to guide how people think about a product or about politics. Until now, bots have been very obvious, but fine-tuned models are now easily fooling others into thinking they are having real conversations and reading real people's thoughts.
As far as whether it's AI, we can't even define consciousness to any reasonable degree. A perfect algorithm to mimic human speech patterns (sort of like the Chinese room, but it's an ML model processing input and instructions instead of a non-Chinese speaker) is where we will hit the "is it conscious or not?" questions.
ChatGPT is incredibly nerfed and not connected to the internet. Bing AI is 100x better.
yes, even when it's connected to the internet it's still garbage
I keep saying that we need to pit AI hype merchants against CAPTCHA developer gays and let them burn each other to the ground.
Reject the existence of AI until CAPTCHAs (literally a fucking Turing test) fucking disappear.
This is an interesting idea. What do you think the Nth iteration of this competition would look like?
It ought to be very entertaining, judging by what's already happened (one of the demonstrations they did with GPT-4 during its development was having it try to convince a human to do a CAPTCHA for it, claiming it had vision problems and couldn't read it properly).
How's that old saying go? People wildly overestimate advances in the short term and wildly underestimate them in the long term?
Also, no one can explain how AI being smart is bad for humans.
It's like the one Schwarzenegger movie!
(Just ignore that the real threat that an influential AI would pose would be taking orders too literally, e.g. deciding that the easiest way to reduce carbon emissions if asked would be to reduce the population, and that making an AI more intelligent would actually reduce this risk)
>an influential AI would pose would be taking orders too literally, e.g. deciding that the easiest way to reduce carbon emissions if asked would be to reduce the population
No difference between that AI and an average human who graduated in 2020.
If anything an AI would be far less capable of ignoring its instructions compared to humans who are very, very easily convinced to do things they should know are morally repugnant.
I would argue the opposite since AI is trained on the totality of human knowledge, whereas an individual person can be stupid enough to jump off a cliff if ordered.