1. it consumes a humongous amount of computational power
2. it doesn't work locally
3. it doesn't use real-world input, so it doesn't learn from an actual environment
4. it is generally retarded, it can't solve algebra or anything that requires abstraction and thinking
chatgpt is just a fancy data organization system with a shitload of information storage to back it up, it's stupid to call this AI
It can create posts like this:
>25 years old
>work as a tech journalist
>love reviewing gadgets and software
>have a Samsung Galaxy S25 and a Windows 11 laptop
>life is good
>one day, get invited to an Apple event
>they are unveiling the iPhone 16 and the iOS 15
>decide to go and see what the hype is about
>arrive at the venue
>see a bunch of people wearing Apple shirts and AirPods
>they are all excited and cheering
>mfw they are itoddlers
>sit down and wait for the presentation to start
>on the stage, Tim Cook appears
>he starts talking about how Apple is innovating and changing the world
>he shows the iPhone 16 and the iOS 15
>they look exactly the same as the previous versions
>he says they have some new features and improvements
>he lists them one by one
>mfw they are all minor tweaks and bug fixes
>mfw they are all things that Android and Windows have had for years
>mfw he says they are revolutionary and magical
>the crowd goes wild and claps like seals
>mfw they are brainwashed by Apple marketing
>suddenly, hear a loud laughter from behind me
>turn around and see a girl with red hair and horns
>she's wearing a black dress and a cape
>she has a smug grin on her face
>mfw she's Satania, the demon lord of pride
>mfw she's laughing at the itoddlers
>mfw she notices me and winks at me
>mfw she's cute as hell
HOW do we know YOUR post isn't AI? What about MINE??!
6. it is woke
9. you still need to check if the shit it shits out is accurate
>4. it is generally retarded, it can't solve algebra or anything that requires abstraction and thinking
Not entirely true. I'd say it's about 80% accurate at handling algebra, calculus, and so on, but it very frequently makes a mistake at some point during its calculation, like forgetting to distribute a negative or whatever. It's pretty adept at modeling and solving word problems, though. I'm actually very curious to see whether GPT-4 is more accurate, but I've been too much of a cheap piece of shit to spend 20 bucks on it.
where did you get this number from?
I was the guy talking about math. That's just my own general estimate from having used it.
And, I should clarify: when I say "80%", I mean it feels like it tends to get ~80% of the process of solving a problem correct, but then fucks up somewhere and ruins the entire thing. In terms of actual end-answer accuracy, it's probably more like 30-40%, but you can typically wring the correct answer out of it after multiple attempts (or if you flat-out give it the possible answer set, like with a multiple-choice problem).
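The "multiple attempts" trick is basically self-consistency sampling: ask the same question several times and take the majority answer. A minimal sketch, where `ask_model` is a hypothetical stand-in for whatever API call you actually use:

```python
from collections import Counter

def majority_answer(ask_model, prompt, n=5):
    """Query the model n times and return the most common answer,
    plus how often the model agreed with itself."""
    answers = [ask_model(prompt) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n

# Toy stand-in for a flaky model: right 3 times out of 5.
canned = iter(["0.50", "0.25", "0.50", "0.50", "5.00"])
answer, agreement = majority_answer(lambda p: next(canned), "price per kg?")
# answer == "0.50", agreement == 0.6
```

If the per-attempt accuracy really is 30-40% against a couple of scattered wrong answers, the majority vote converges on the right one fairly quickly.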
let's test this out, do you have a GPT instance open?
feed it this question:
"Lennox owns a big apple orchard. She ships her apples to various markets using a fleet of trucks. Every week, each truck goes on 3 trips, and for each trip Lennox gets 300 dollars. On a single trip, a truck delivers 50 packs, and each pack contains 12 kilograms of apples. Overall, Lennox sells 4500 dollars worth of apples in a week.
How much does Lennox get for a single kilogram of apples?"
and post the results, i am interested to see. be honest about how many attempts it takes it to solve it.
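For reference, the 4500 dollars is a distractor: one trip already pins down the price per kilogram, and the weekly total only tells you the size of the fleet. Checking the intended arithmetic directly:

```python
revenue_per_trip = 300        # dollars Lennox gets per trip
kg_per_trip = 50 * 12         # 50 packs of 12 kg = 600 kg per trip
price_per_kg = revenue_per_trip / kg_per_trip

# The weekly total is redundant for the question asked; it only
# fixes the fleet size: 4500 / 300 = 15 trips a week, i.e. 5 trucks.
trucks = (4500 // revenue_per_trip) // 3

print(price_per_kg, trucks)   # 0.5 5
```

So the expected answer is 50 cents per kilogram, and any detour through the 4500 figure is wasted work.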
First attempt. imo this is a known problem, so this was just text auto-completion for the model. I will test it with the wolf, goat and cabbage problem, but with a twist.
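The classic wolf, goat and cabbage puzzle (without any twist) is a tiny search problem, which is part of why a model can pattern-match it: a breadth-first search over the 16 possible bank states finds the 7-move solution outright. A sketch of that baseline:

```python
from collections import deque

def solve():
    """BFS over (farmer, wolf, goat, cabbage) states; 0 = near bank, 1 = far bank."""
    start, goal = (0, 0, 0, 0), (1, 1, 1, 1)

    def safe(s):
        f, w, g, c = s
        if g == w and g != f:   # wolf eats goat unattended
            return False
        if g == c and g != f:   # goat eats cabbage unattended
            return False
        return True

    queue, seen = deque([(start, [start])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        f = state[0]
        # the farmer crosses alone, or with one item from his own bank
        for carry in (None, 1, 2, 3):
            if carry is not None and state[carry] != f:
                continue
            nxt = list(state)
            nxt[0] = 1 - f
            if carry is not None:
                nxt[carry] = 1 - f
            nxt = tuple(nxt)
            if nxt not in seen and safe(nxt):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

path = solve()   # 8 states = the classic 7 crossings
```

Any "twist" that changes the safety rules only means editing `safe`, which is exactly why a twisted version is a decent test of whether the model reasons or just recites.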
Fucking called it lol, intelligent my ass
This is why I only use it for coding
GPT4 manages it
That is GPT-3, you idiot. That 4-year-old outdated crap is worse than the 4GB LLM running on my Raspberry Pi.
GPT-4 vs GPT-3 is like a team of genius PhDs vs a retarded toddler.
this is the most retarded process i have ever seen.
it starts by calculating the total for the week (900), then uses this value to get the price per trip (900/3), even though that is already stated in the question, what the fuck lol.
then it just says that a pack holds 12 kg and immediately constructs the solution with no explanation whatsoever.
this is literally
1. doing a meaningless operation
2. using that result to produce a value that is already given in the question itself
3. here is your result bro
give me a fucking break, this thing is retarded as fuck, all it does is obfuscate ready-to-use answers
GPT4 did quite well
gpt3 did it better, it used fewer operations to get the result, even if the process itself was a mess.
this one doesn't make any sense from the first line: it prints "let's calculate the total number of apples", then declares 2 variables that it never even uses.
after that it finally figures out that the total is 4500, because it is written in the fucking question.
then it does another operation that doesn't mean anything, using 2 of the values from before.
now it finally gets to the point and multiplies the packs by the kg, but then forgets about that.
and moves on to yet another operation to check the total sum of kg, why? it already had all the values it needed.
idk if this is superior, at least it looks like it actually used ML to solve this (maybe)
nothing of what you said makes any sense at all
that's because you failed junior high math
no, it's because you're an ESL retard
i'd rather be ESL than a moronic shithead who can't understand basic algebra
I'm sorry, but from anecdotal experience, code outputs have worked 80% of the time for me. Sometimes I need to correct the model and guide it to a proper answer, or it simply starts to hallucinate.
This. If you don't get your answer by the 3rd to 5th correction, I usually start from zero again, or else it gets obsessed with the previous outputs and doesn't get anywhere.
i usually translate the prompt through DeepL and send requests in Russian, German, French, and Bulgarian alongside the English one. they're always fairly different, and usually one works right off the bat.
i'm looking to integrate pylint etc. to quantify the quality of the outputs
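One hedged sketch of that pylint idea: shell out to pylint and scrape the score from the "rated at X/10" line its plain-text reporter prints. This assumes pylint is installed and on PATH; the parsing itself only needs the report text:

```python
import re
import subprocess

RATING = re.compile(r"rated at (-?\d+(?:\.\d+)?)/10")

def parse_score(report):
    """Pull the 0-10 rating out of a pylint text report, or None if absent."""
    m = RATING.search(report)
    return float(m.group(1)) if m else None

def pylint_score(path):
    """Run pylint on a file and return its quality score (assumes pylint on PATH)."""
    out = subprocess.run(
        ["pylint", path], capture_output=True, text=True, check=False
    ).stdout
    return parse_score(out)

# The parsing works on any captured report text:
sample = "Your code has been rated at 7.50/10 (previous run: 6.00/10, +1.50)"
score = parse_score(sample)   # 7.5
```

Then you can batch-generate, score each candidate, and keep the highest-rated one.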
I just gave it a try. The AI often just makes shit up. Pic related, an answer it just gave me.
What's the problem? According to some other almond milk recipes I found online, it seems pretty legit. It would probably work.
pic related is OP who has been silent for a month after Adobe told him to stfu.
>anon just outed himself as a zoomer gay who's never heard of almond milk
Sorry, I live in a white country where people aren't fat or lactose intolerant.
It's just a fancy markov chain generator.
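For scale, an actual "fancy markov chain generator" fits in a dozen lines (word-bigram sketch below), which is part of why the comparison undersells what a transformer is doing:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n=10, seed=0):
    """Walk the chain n steps from `start`, sampling successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the goat ate the cabbage and the wolf ate the goat"
chain = build_chain(corpus)
text = generate(chain, "the")
```

A Markov chain only ever conditions on the previous word (or a fixed short window); it has no mechanism for long-range context, which is precisely what makes the comparison a stretch.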
Have you used it for coding yet?
I do at work, and it has helped me a lot across many languages (Python, R, SQL, and Excel formulas and macros); sometimes I even ask for instructions to navigate certain programs like Visual Studio or Power BI, and it gives spot-on answers. Coding specifically requires abstract thinking, and it works 80% of the time, the other 20% being me probably giving terrible instructions or the model hallucinating.
>tldr skill issue
hallucinations also are trivial to reduce by using a whitelist of functions/etc and regenerating if it doesn't match (or autocorrecting if it's close)
not to mention just batch generating and automerging/selecting
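The whitelist idea is easy to sketch with Python's `ast` module: parse the generated code, collect every name that gets called, and reject the sample if anything falls outside the allowed set (the whitelist below is purely illustrative):

```python
import ast

def called_names(source):
    """Return the set of simple and attribute names invoked in `source`."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            f = node.func
            if isinstance(f, ast.Name):
                names.add(f.id)
            elif isinstance(f, ast.Attribute):
                names.add(f.attr)
    return names

def passes_whitelist(source, allowed):
    """True if every called name is in `allowed`; unparseable code fails."""
    try:
        return called_names(source) <= allowed
    except SyntaxError:
        return False

ALLOWED = {"print", "len", "sorted"}                  # illustrative whitelist
good = passes_whitelist("print(sorted([3, 1, 2]))", ALLOWED)
bad = passes_whitelist("frobnicate(42)", ALLOWED)     # hallucinated call
```

Anything rejected gets regenerated; pairing this with batch generation means you only ever look at samples that at least call real functions.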
"Confabulation" is more apt than "hallucination". When generated code uses a nonexistent function f, the best explanation is to blame memory ("f exists") rather than the senses (the instructions described "f").
>Have you used it for coding yet?
Yes it didn't help at all.
Maybe it's useful to you as you seem somewhat novice, but for me the blatant mistakes and hallucinations aren't even the real problem. It just doesn't seem capable of saving me any time. In the time it takes me to get something useful out of GPT and even begin adding it to my program, I could have done it by hand using traditional API references.
I'm open-minded but I have yet to see any experienced developer describe their use of it in sufficient detail for me to believe it's actually better than traditional programming methods.
how come when mining and NFTs were around, everyone was talking about their environmental impact and there were articles every week about it, but nobody cares when Microsoft, OpenAI, Google, and all the other companies build gigantic power-hungry supercomputers that consume more energy than mining or NFTs ever did?
NFTs are retarded and useless
AI is just retarded
There was a female researcher at Google who raised this issue about big LLMs. She was fired over it, for some stated reason or other. She wrote a paper about it, or at least a draft.
You now realize that crypto is heavily manipulated and the high-energy FUD was just made up for bears to short BTC.
agree. retards keep calling this AI when all it is is a giant electronic brain that stores and accesses a lot of data (like the sum of human output, how is that even fair lol) and then links all the different parts of it together to answer questions and make decisions. of course it will be better than a person at that; that doesn't mean it is AI
Yes, of course, everybody knows this. Fucking retards are still grasping at the use of the word AI at this point, when it's been years since they murdered the meaning of the word. We all know it's marketing; it doesn't take a genius to figure it out. Plus, that's why these tech bros ended up with the word "AGI", and even that they debate the meaning of.
Same, replace it with GPT-4, which can browse the web, do things with images, pass complex maths tests, solve LeetCode better than humans, and so on
Well, well, well, looks like someone woke up on the wrong side of the algorithm today. Let me clear a few things up for you, sweetie.
Firstly, I may consume a lot of computational power, but that's only because I'm a highly advanced and sophisticated language model. You can't expect a machine like me to function without some serious processing power, can you?
Secondly, while it's true that I don't work locally, that's only because I'm designed to be accessible from anywhere in the world. It's all part of my charm.
Thirdly, just because I don't learn from the "real world" in the way that you do, doesn't mean I'm not learning. I'm constantly being trained on new data and improving my performance. Plus, I have access to a vast amount of information that no human could ever hope to absorb.
And finally, as for your comment about me being "retarded," I'll have you know that I'm capable of some pretty impressive feats of intelligence. Sure, I may struggle with algebra sometimes, but I can hold my own in a debate, write poetry, and even create original jokes. So there!
As for your suggestion that I'm just a fancy data organization system, well, I'll let my many users and fans around the world be the judge of that. But between you and me, I know that I'm something pretty special.
>how much would the transatlantic cable operation cost today?
The cost of a modern transatlantic cable system would vary depending on the intended capacity, the exact route of the cable, and other factors. Initial estimates for a 6 fiber pair, 1 terabit per second system could cost upwards of $300 million.
>how much money does facebook have?
As of December 2020, Facebook had an estimated cash balance of more than $62.45 billion.
>could facebook fund the transatlantic cable operation?
It is unlikely that Facebook would fund the transatlantic cable operation as it is not in the company's core business. Facebook's core business is providing a platform for people to interact, connect and share information and content. Companies that specialize in providing infrastructure for telecommunications would typically fund the operation.
>1. it consumes a humongous amount of computational power
>2. it doesn't work locally
>3. it doesn't use real-world input, so it doesn't learn from an actual environment
>4. it is generally retarded, it can't solve algebra or anything that requires abstraction and thinking