huh?
the problem with asking it complex questions like that is you cant easily verify it.
also LLMs are good at copy-paste formulas like the one you provided.
In my example, I asked it an easily verifiable math question and it herped so hard it derped. I asked it the same question 3 times and got vastly different answers.
If you think this thing is capable of large scale engineering work then you got anudda thing coming. QED; our jobs are safe
I didn't realize my image was so misleading
here ya go, lil' zoomer
No arrows and circle therefore unreadable.
brb, making a tiktok video analysis with a minecraft video at the bottom
Just tried it
lol wtf... thats where my original convo came from.
Long story short I was asking it to assign numbers to names and ask for an example of 2 names that had the same multiplier, but then that lead to ChatGPT giving shitty math.
Really funny how it still remembers that convo and says "that number is mary" etc, when you didnt even talk about that to the bot. shits not respecting our private conversations at all
I don't think you would pass the Turing test.
ask for the steps
>using wolfram
>large LANGUAGE model sucks at math
wow
COMPUTER that sucks at MATH
Why can't it just run a calculator and give the real answer
ai is not smart enough to use tools. its an utter failure. our jobs are safe tho, dont worry
It could, but it isn't designed to do that.
a.i is still in its primitive stages. much like the dial up era of the internet
so funny to see wagies coping about their little jobs. you don't have a clue what is coming
im a 33 y/o boomer. im fine.
you're fricked, you're probably 20 or something, so fricked dude, by the time I retire the AI will take over the industry. gl with your $5 million dollar house, probs a 2br built in 1939. gg
nah, this affects everyone. we WILL get ubi or take everything you own by force
you will be homeless on the streets before UBI becomes a thing. im voting trump for the next 50 years so you're fricked, basically
you're fricked if we don't get what we want
why would i be fricked? im gonna buy some self defense and move to texas. try and do some shit, I dare ya. make my day muffricka.
>we WILL take what you own by force
>without guns (costs money)
>or food (costs money)
>or a means to organize (AI screens all communication instantly)
going to be comfy as frick in my doomsday bunker when you zoomers try to invade
It sucks at "mental math", but knows theory very well.
Just give it access to external calculator tool and it will absolutely smoke anyone on any math subject on this board.
what do i type to give it an 'external calculator'?
Large language models aren't supposed to be able to do math
You're just using the wrong tool for the job. Not sure what you expected really
listen bub
its an AI
thats not SMART ENOUGH
to do BASIC MATH
what else i need to explain you??
I know you're just kidding but there's people who think that unironically so I'll still answer sincerely
It's a large language model
"AI" is a very broad phrase. There are many different kinds of artificial neural nets
how about you link me the AI that can do such a calculation: 77x97x114x121
You don't need AI for that. You're describing a problem that is solved with basic computation. You can literally type it into a python shell or a calculator app
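For the record, the product in question is trivial to get exactly without any model involved; in a Python shell the integer arithmetic below is exact:

```python
# Plain integer arithmetic -- exact, no AI involved.
product = 77 * 97 * 114 * 121
print(product)  # 103027386
```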
i know, this entire conversation is about the weakness of AI so a solution like "just dont use AI" is pointless. this shit fails at the most basic of tasks thus our all our jobs are safe. end of story
No you've just picked one of the known tasks that basic computers are good at and AI isn't good at.
As a general rule, neural networks are good at "fuzzy" tasks (like image recognition and natural language tasks) which until recently computers were very bad at.
you got that backwards, ai is good at fuzzy tasks and bad at binary (true/false) tasks. doing basic math is a binary task, as evident by the correct/incorrect outcome.
This is important because *our* jobs are programming or otherwise dev/IT work that requires correct solutions, not 'mostly correct' solutions. thus AI (LLM) programs will shit the bed for a couple years and we have nothing to worry about. they dupe the normie gays, but behind the scenes we keep our employment. capiche??
>, ai is good at fuzzy tasks and bad at binary (true/false) tasks. doing basic math is a binary task, as evident by the correct/incorrect outcome.
This doesn't make any sense. Do more research before posting, you are wrong. The rest of your post was incomprehensible.
is your reading level 5th grade or something? lmao, you arent ready for the real world
>>ai is good at fuzzy tasks and bad at binary (true/false) tasks.
>This doesn't make any sense
the funny part about this is... you said the exact same thing in your original post
>neural networks are good at "fuzzy" tasks
I'll admit I derped and misread your post when I replied (thus saying the same thing as you) but it was funny how you disagreed with me by saying almost exactly the same thing you said. top kek m8
It's because you said it in a worse (and technically incorrect) way.
Your description of "binary tasks" isn't actual terminology I've ever seen used in AI research
And there's plenty of concrete yes/no things an AI can discern. One example off the top of my head is sentiment analysis.
"sentiment analysis" does not have a binary answer.
and yes there are binary answers. This is the basic fundamentals of computer science. if you dont even understand this then I'm not going to continue speaking to you. hit the books, moron. stop wasting my time
>"sentiment analysis" does not have a binary answer.
Yeah it often has 3 answers, negative, positive, and neutral.
But in some applications it's just "positive or not" (or "negative or not")
Don't talk about things you don't understand.
>>"sentiment analysis" does not have a binary answer.
>Yeah it often has 3 answers,
LMFAO OMFG IS ANYONE ELSE READING THIS SHIT? LOOOOOOL
???
Tell me what you think sentiment analysis is.
like many analyses, it's a complex answer written in text with points, not a true/false answer. It's honestly laughable that you'd think its trinary (3 answers)
Sentiment analysis is where you automatically categorize text into 1 of a few groups based on how tonally positive or negative it is. It's been around since before AI was capable of even writing sentences, it's one of the oldest kinds of AI tasks. Google uses it to automatically decide whether to flag posts. Thanks for confirming you're actually a moron. It sounds like you fricking think going on chatGPT and saying "categorize the sentiment of this text" is sentiment analysis, which is consistent with what I thought the extent of your knowledge was when you started this argument.
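To make the above concrete, here is a toy lexicon-based sentiment classifier. The word lists and thresholds are invented for illustration; real systems use trained models, but the input/output shape is the same, and the three-way labels collapse to a binary "positive or not" by merging the other two:

```python
# Toy lexicon-based sentiment classifier: count positive vs negative
# words, then map the score to one of three labels.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "awful", "hate", "terrible", "angry"}

def classify_sentiment(text):
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("I love this, it is great"))  # positive
print(classify_sentiment("awful, I hate it"))          # negative
```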
>Sentiment analysis
why the frick do you keep coming back to this? nobody gives a frick about your sentimental analysis. I never mentioned in any of my posts yet here you are bringing it back again like a sad puppy. its unrelated to my original post so stop bringing it up
I accept your concession.
>says something incorrect
>gets corrected and immediately backpedals
kek
wrong
also wrong
next?
You literally lost lol. There's no need to continue replying
i BTFO'd your b***h ass time and time again. I'm surprised you have the guts to show your face around here considering how much you got disrespected just now. If I was you I'd kill myself
>trinary
Ternary, goober
>And there's plenty of concrete yes/no things an AI can discern
Nope. not a single one, you fricking idiot. Even a basic question such as
"Is the sky blue?" is a fuzzy logic question, as you pointed out. The only basic yes/no (aka BINARY) questions that it can answer are.... none, not even math questions because it might be wrong.
I'm doing you a favor right now and dropping some knowledge on your b***h ass. at least you best appreciate it
Open interpreter
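Open Interpreter is one packaged option; the underlying pattern is just intercepting a tool call in the model's output and computing it outside the model. A hand-rolled sketch of that pattern (the `CALC(...)` convention here is invented for illustration, not any real API):

```python
import ast
import operator
import re

# Minimal "calculator tool": safely evaluate arithmetic by walking the
# AST (never eval()), then splice exact results into the model's output.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def run_with_tools(model_output):
    # Replace every CALC(...) the model emits with the computed result.
    return re.sub(r"CALC\(([^)]*)\)",
                  lambda m: str(calc(m.group(1))), model_output)

print(run_with_tools("The product is CALC(77*97*114*121)."))
# The product is 103027386.
```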
https://voca.ro/1fVGAarKdLds
https://voca.ro/1h0Q2dYSPlBe
>ChatGPT
>The result of multiplying 77, 97, 114, and 121 together is 103,027,386.
Just tested it myself, it works.
OP is a dumb homosexual incapable of prompting.
ITT: morons
>I pay for an expensive calculator like a retart
Congratulations.
Welcome to last fricking year. AI experts have said this from the beginning but you normalgays covered your ears and screamed the world was ending. Fricking morons. Maybe next year you'll finally discover that it's all just an overly complicated autocomplete.
last year AI was good. everything was great until they fricking neutered it in Q1 2023. If you dont remember the days of jailbreaking your AI bot then you are too young to browse this board
I'm not sure if you're trying to imply it gave accurate math back then but that would be wrong even before they messed with it. LLMs were never meant for accuracy, they're literally just an autocomplete. Also I've been using this since late 2019 when there was only GPT-3.
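The "autocomplete" framing is mechanically accurate: generation is a loop that repeatedly picks a plausible next token. A bigram-level toy version of that same loop (LLMs use tokens and vastly more context, but the one-step-at-a-time shape is the same):

```python
import random
from collections import defaultdict

# Toy "autocomplete": record which word follows which, then generate by
# repeatedly sampling a next word from the recorded options.
def train(corpus):
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8):
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

table = train("the model predicts the next word and the next word again")
print(generate(table, "the"))
```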
>Also I've been using this since late 2019 when there was only GPT-3.
>2019
>GPT-3
ok
Well whenever AI Dungeon got popular as just a colab
sheet, fr? no cap just now?
fr fr no cap sheet
damn! seen bruv
That would be gpt-2 1.5b
I forgot how archaic it was back then. It barely understood what you were even doing
Yeah, and it was very slow and maxed out my gtx 1080.
It was only 1b parameters rather than the 70b models we're using on /lmg/ now with the same hardware (with the capacity to offload between cpu ram and vram)
2019 wasn't that long ago.
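Rough numbers behind that offload claim, assuming ~4-bit quantization (about half a byte per parameter; real-world overhead varies):

```python
# Back-of-the-envelope: a 70B-parameter model at ~4-bit needs ~35 GB of
# weights, so an 8 GB card holds roughly a fifth of the layers and the
# rest is offloaded to CPU RAM.
params = 70e9
bytes_per_param = 0.5                  # ~4-bit quantization (assumed)
total_gb = params * bytes_per_param / 1e9
vram_gb = 8                            # e.g. a GTX 1080
gpu_fraction = vram_gb / total_gb
print(f"weights: {total_gb:.0f} GB, on-GPU share: {gpu_fraction:.0%}")
# weights: 35 GB, on-GPU share: 23%
```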
Even when AI wasn't neutered, it was still inferior to 2012 Google Search
Neutering is part of the Enshittification process.
You can't erp with a calculator.
AI is here to stay chud
yet im employed making $380k and the AI is broke as frick working for free. see the difference?
Yeah, the difference is I don't have to pay the AI to work or erp with me
wut?
>Boomers coping that the first real AI isn't perfect.
Lmao you're going to get fired so fast.
>a computer that sucks at math.
*context recognition