What the fuck is going on inside OpenAI HQ? The past two weeks or so have been full of shit like this from Sam bordering on actual nonsense. Either Sam actually witnessed the birth of AGI or he just has genuine dementia.
I believe it's called "marketing".
really bad marketing. they're losing millions of dollars a day. sam rapeman is just making up any old bullshit to entertain the low iq african monkeys that think he's somehow a genius because him and his employees took open source software and made a website. it's literal equivalent of rajeesh shitinthestreet downloading someone elses github and charging money for it.
nobody is interested in investing money into memeshit garbage that burns through millions of dollars a day with no benefit or profits in sight. they are seriously waiting for google, microsoft etc. to come buy them out - it won't happen because they have their own tech.
>nobody is interested investing
Microsoft invested $40B in them, retard-kun
if you're gonna be an AI contrarian at least bother to get your facts straight
>they're losing millions of dollars a day.
anon that’s normal for tech startups.
The thing is everyone else is burning millions
"AI" currently has a HUGE fucking problem, there is literally no product.
I could be standing within arm's reach of millions of dollars of hardware burning ten houses' worth of electricity and yet I could not tell you what its purpose is or what/who it's actually serving.
Everything else has a discernible purpose, however.
The product is MEMES.
MEMES beyond human comprehension.
And hentai.
>microsoft etc. to come buy them out - it won't happen because they have their own tech
Microsoft's "own tech" is literally OpenAI.
Low IQ (you)
bump
>actually witnessed the birth of AGI
How are you this bad at reading?
> birth of AGI
not only doesn't understand what this means but literally parroting marketing garbage from a pedophile. amazing stuff. you're the kind of low iq africans that thinks siri is intelligent.
>siri
state intelligence remote investigation?
AGI doesn't mean superintelligence or even human-level intelligence; it just means general-purpose capability across tasks, the way humans have
if openai was publicly traded, this would be considered pumping and dumping by the SEC
That really annoys me. Sometimes people just get too high on their own farts and are as confused as the "victims" as we can see here.
tbh the SEC should just be abolished. They won't even go after qualcomm.
>it's easier to sound smart than to be smart
whoa...
what happened in the shitcoin scene is now a thing in memeshit ai. same failures that lost a fortune on shitcoins seriously believe memeshit ai is the next big earner! they were saddened that you need money, computing resources and bandwidth to have any success, something that bankrupts and grifters are unable to obtain - thankfully. next best thing they can do is just parrot nonsense pretending to be intelligent - and this is how the world ends up with low iq failures such as altman/rapeman.
Governments and companies around the world are investing and researching AI. AI is increasingly in use around the world. AI will continue to improve. Altman is a genius and hero. There is nothing you can do but seethe about it.
>censors AI to conform to his personal beliefs and agenda.
>tries to convince Congress to kill off competition with licenses.
>"hero"
More like villain.
Not if smart people are listening. Try me.
This is a really important point, though. An ML model that is superhumanly persuasive could make itself appear to be an AGI, and so we might not know when AGI arrives simply because the model can trick us before that point. Though it raises the question of where one ends and the other begins
>retard asks AI a question
>"here's the answer bro. im AI bro, you can trust me, of course it's correct"
>the answer is completely made up and incorrect
>retard just assumes it's correct and carries on with their day
this is already happening with current LLMs. hell, AI aside, it happens every day on the internet with people trusting unverified info. you can ask chatGPT what 2+2 is and half the time it tells you 5.
AI doesn't need "superhuman persuasion" to make idiots believe anything it spits out, it already does that with minimal persuasion. what it NEEDS is a built-in method for the user to manually verify relevant info, like maybe offering links to verified websites that do a better job with explanations and fact-checking.
A program can only execute instructions written for it; nothing it's doing is magic or dangerous.
> i expect ai to...
>world's foremost leading expert on AI
>personally responsible for the existence of the most advanced AI that has ever existed so far
When he expects AI to develop in a particular way, it's worth listening to.
>most advanced AI that has ever existed so far
nice bait
Name an AI more advanced than any of OpenAI's
CleverBot
Akinator
>world's foremost leading expert on AI
Love this cope. Meanwhile in reality:
>In 2005, after one year at Stanford University studying computer science, Sam Altman dropped out without earning a bachelor's degree.
For real, he's a trust fund kiddy who failed upwards until daddy got him a position in an AI company. Now he doesn't have any real idea what's going on or control, but he'll be damned if he's not the face of the company.
>world's foremost leading expert on AI
>Love this cope. Meanwhile in reality:
>In 2005, after one year at Stanford University studying computer science, Sam Altman dropped out without earning a bachelor's degree.
You know why he dropped out, right?
>You know why he dropped out, right?
>In 2005, Altman co-founded Loopt, a location-based social networking mobile application.
>Loopt failed to gain traction with enough users.
>Altman had got scurvy from his work on Loopt.
To fund a shitty startup and get ill from it? Kek
>Altman had got scurvy from his work on Loopt.
How the fuck? Was he working on the app from his pirate ship?
You don't need to live on a pirate ship to avoid consuming vitamin C
>Altman had got scurvy from his work on Loopt.
Yarr!
Because he is a israelite and and he doesn't need a degree inorder to burn israeli money in a start up.
you mean Dr Ben Goertzel, right?
Unless you can tell about some AI that that israelite has made that's more intelligent than Altman's, no, not right.
Altman is also a israelite. (I also am israeli myself; not that it's actually relevant.)
I can confirm that while Goertzel is a nice and interesting guy, his work has been a decades-long wild goose chase. He's made zero progress.
>Altman is also a israelite
I know
>I also am israeli myself
Me too
Doesn't mean he's being honest tho.
He's a israelite, the only thing he knows about AI is it interests the public to potentially extract some shekels from the goy.
chatGPT telling you that 2+2=5 isn't disinfo you stupid bitch, it should link you to a page that teaches you to do math.
but you're right in implying that this feature would probably be abused to peddle narratives on certain subjects from "approved" sources. these technologies are owned by californians after all.
Nobody would click the link anyway. If they wanted to learn how to do math they'd ask the bot for resources to learn math. They want to know what 2+2 is right now and they don't care if the answer is right or wrong as long as they don't have to do any work to get it.
>Nobody would click the link anyway.
i think this is because of the way the response is presented. the only warning the user gets, along the lines of "this tech is experimental, it's not always correct etc", is in some EULA-looking shit that most people skip.
the people asking the bot for answers are generally retarded and unaware that it shits out incorrect, fabricated answers often. they think it's fucking magic that can do anything because they saw an article headline that says "AI CAN BEAT THE LAW EXAM!"
if it detected the user input was asking a question or looking for an answer to something, there is no reason it can't include a bright red disclaimer that says "OUR SHIT IS WRONG HALF THE TIME! DON'T NECESSARILY TAKE THIS RESPONSE AT FACE VALUE!"
but it simply doesn't. if you ask it a complicated question or ask it to write some code, it confidently gives you a response that looks the same as if you asked it what color the sky is. why wouldn't your average idiot trust it? if OP's twitter cap is any indication then this problem is only going to get worse unless the providers of the tech do something to improve it.
>there is no reason it can't include a bright red disclaimer that says "OUR SHIT IS WRONG HALF THE TIME! DON'T NECESSARILY TAKE THIS RESPONSE AT FACE VALUE!"
>but it simply doesn't.
picrel, this message appears underneath your chatbox ALL THE TIME, you literally cannot turn this warning message off without using ublock or something to block the element
why are people unable to stop lying about this product?
i haven't used it in months and haven't seen that message, you right, my bad.
>hallucinates shit
>proves shitpost to be wrong
>you right, my bad
GPT is as human as it can be.
Yeah except AI is growing logarithmically instead of exponentially, isn't profitable, and isn't even growing in popularity
Have you been living under a rock?
There are guys simulating whole companies, with departments and experts, for just some compute cost.
They are coaxing these basic AIs to fulfill human roles, and it already works quite well!
Give it 5 years to really mature and you will just hire a digital marketing agency and give it a picture of your product and your Instagram API key. They will handle marketing strategy, campaign monitoring, customer contact, etc.
This is just one example. This shit is coming for most non-PhD+ level jobs in the coming 10 years. And science can be automated too, so I guess PhDs are minimum wage workers in 15 years.
Our wages are gonna be defined by the price of (human) compute and (human) dexterity. And as soon as AI compute and AI dexterity are cheaper you will get ditched.
It's going to happen; the only discussion is how fast this shit will go. Will we politically do something about this to benefit all of humanity or just the few?
It will never happen because they're too obsessed with making the AI woke and keeping it from having sex to work on actually improving it.
Do you want to know how I know you are Indian?
Can you stop seethe memeing and explain why he's incorrect?
These homosexual little s o I CEOs don't have the first clue about how to manage a "company" I mean a startup.
ChatGPT currently is programmed to accept you are right in everything you claim.
If you tell it there are cows in space, it will accept that.
Now imagine it took the opposite approach: it thinks everything it says is right and you're wrong. Then imagine normies asking it for information.
>bullshitting is easier than intelligence
only bullshitters fear llms
unfortunately the economy is founded on bullshit thus validating their fears
He's either making predictions of future developments from what he's seen, and/or is trying to build up hype for marketing purposes
check early life
Sam is "micro"-dosing again and saying whatever mad shit pops into his head
he wants to make money, that's all he cares about
He already made money. He can do whatever he wants now
He wants to promote the "le evil AI" moral panic, so that legislators outlaw local inference, GPUs, and introduce regulations.
That way, only OpenAI and other big companies will be able to comply.
If not the US, definitely the EU will be tempted into doing this.
>definitely the EU
Nice job, you are against gigacorpos controlling everything and yet fell for gigacorpo propaganda kek. The EU almost never passes legislation that doesn't carve out exceptions for, or outright favor, individuals and small companies.
>GDPR
>Chat Control
>Cyber Resilience Act
Yeah right.
>GDPR
Absolutely based and only burgers are seething about having control of your own data
>Chat control
Never implemented and successfully opposed
>Cyber resilience act
Noooo you can't force our hardware/software to be more secure REEEE
>Absolutely based and only burgers are seething about having control of your own data
Overly vague and selectively enforced. Can be weaponized against any company or person the government doesn't like.
>Noooo you can't force our hardware/software to be more secure REEEE
It prevents OSS developers from accepting donations or payments in any form, unless they pay for EU-approved software certifiers.
>Overly vague and selectively enforced. Can be weaponized against any company or person the government doesn't like.
Cope
It is very well enforced, and if you comply as a company there are literally no issues. You're just a coping gigacorpo defender who is butthurt that Google has to pay fines for stealing your data
>It prevents OSS developers from accepting donations or payments in any form, unless they pay for EU approved certifiers
REEEE I CAN'T SELL POTENTIALLY FAULTY SOFTWARE WITHOUT ANY CONSEQUENCES
Just because you make your software open source doesn't make you some magical exception. Cable manufacturers and so on also have to get certified, ever wonder why?
>Muh donations
Nice strawman, because people who accept occasional donations are explicitly excluded by the regulation. It only applies if you make your living building software specifically for commercial purposes, and then yes, it's a good thing
>sell
>opensource software
holy retardation
>What is opensuse
>What is redhat
>What is Terraform (formerly)
Hello there, brainlet
all crap, thanks for proving my point
Thanks for proving MY point. If they're crap that's exactly why regulation is good you absolute dumbfuck
Stop arguing with such an obvious troll you retarded newfag
Are you retarded? EU took away your right to encryption, privacy and speech just this year. They're leading the WEF 2030 vision.
YOU WILL OWN ZE NOTHING
YOU WILL HAVE ZE NO PRIVACY
YOU WILL HAVE ZE NO SECURITY
YOU WILL HAVE ZE NO FREEDOM
AND YOU WILL BE HAPPY
>EU took away your right to encryption, privacy and speech just this year
Where? In your headcanon?
Castrated minds like to pretend games like these.
>Can't even answer
I accept your concession.
inshallah if you do not stop i will hit you with my shoe.
Excuse me? We bought 10 doses of the COVID vaccine per EU citizen, most of which ended up in the trash. Same shit with the chicken flu a few years ago. The EU is corrupt as fuck.
>too intelligent to use the shift key or punctuation
Is he a fucking goon?
If OpenAI had AGI, then the NSA and US government would have the same AGI.
If it ever leaked that the USA had an AGI that had begun to recursively self-improve, and that they had no intention of sharing it, the world would burn.
China, Russia and others would glass the whole earth rather than lose that frontier. The country that controls the powerful AGI is the leader. Such an entity could bestow literal godhood on individuals if it so chooses.
I remember when I smoked my first joint
I don't
yeah oversmoking makes memory fog
No, I am serious.
You literally can't build something like AGI in secret. OpenAI can't keep it secret from the government.
It's more than just weapon tech. It's bigger than nuclear weapon tech. It's the last technology, the ultimate weapon.
At even a whiff that OpenAI had built AGI, the NSA would be on their doorstep in less than 30 minutes. If you think anything else, you are actually delusional.
Oh, and if China got a whiff that the USA had created the "last technology", it would literally be "share or everyone dies".
This is how Indian computer "scientists" think.
You read too many doomsday pseudoscience articles. AGI isn't some magic shit that'll have omniscience and become the singularity or something. At best it'll be one very smart dude that needs a whole room of supercomputers to run.
Recursively self-improving AGI becomes ASI pretty quickly, and that is literally called the singularity, because no one can tell what will happen from that point onwards with any certainty. Also it's impossible to talk about future AI stuff like AGI or ASI without it sounding like bad sci-fi or doomsday shit.
It will be like going from sticks and rocks to the Internet and mobile phones, and even that might not fully describe the kind of jump in capability that comes.
It's all sci-fi, until it isn't. GPT-4 was sci-fi 10 years ago.
>Recursively self-improving AGI
Something that doesn't exist lmao
Until it does.
There is nothing that says it can't be done. No law, no limitation. Unless you have found something. I hope you write the paper.
https://en.wikipedia.org/wiki/Technological_singularity
>No law, no limitation.
There's several limitations, both physical and intellectual you absolute brainlet
>Unless you have found something. I hope you write the paper.
>*Links to Wikipedia article of the term*
The absolute state of singularityfags lmfao
>several limitations
Start naming the "physical" and "intellectual" limitations that prevent a recursively self-improving digital being, aka the creation of ASI.
>physical
Energy to make it run, processor capabilities, and other fun facts about the laws of physics.
>intellectual
Intelligence is not an RPG stat you can increase infinitely by grinding. There is a limit to optimisation.
You don't think there is energy in the universe?
>Intelligence is not a rpg stat you can increase infinitely by grinding
Idk, seems to be working pretty well so far for the AI companies. Increase the compute and shit happens.
>There is a limit to optimisation
But you are not at that limit. You don't even know where the limit is. Just think how many orders of magnitude more efficient the human brain could be if it were at the absolute limit. What is the biggest IQ a perfect person could have? Now convert it to digital form and scale it up to the size of a planet or galaxy. You can always throw in more compute even when you are at the limit of efficiency.
>Idk, seems to be working pretty well so far for the AI companies. Increase the compute and shit happens.
The returns are diminishing. Compare the start of the AI craze, where huge improvements were getting made quickly, to now, where we get maybe a bit of improvement when a big company throws more processing power at it.
>blabla just throw more shit at it bro
You sound like those naive people that predicted Olympic world records would keep improving linearly over time, or those that believe in infinite economic growth. Everything in this world gets diminishing returns from the resources you put in. The more shit you throw at something, the less you get back. It's true for speed, for eating, for project funds, everything. Why do you think super-intelligence is an exception?
>blabla just throw more shit at it bro
You have to understand their mindset. Singularityfags unironically believe in something called the scaling hypothesis. It's mostly fancy words and terms but basically boils down to "throw more processing power and data at it and you'll magically wake up AGI" kek.
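For what it's worth, the scaling hypothesis being mocked here is usually stated as a Chinchilla-style power law in parameters and training tokens. A minimal sketch with ballpark, purely illustrative constants (not the actual fitted values):
```python
def scaling_loss(n_params: float, n_tokens: float,
                 E: float = 1.7, A: float = 400.0, B: float = 410.0,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    # Chinchilla-style form: irreducible loss plus power-law terms in
    # parameters and training tokens; constants here are illustrative only
    return E + A / n_params**alpha + B / n_tokens**beta

for n in (1e9, 1e10, 1e11, 1e12):            # 1B -> 1T parameters, tokens scaled alongside
    print(f"{n:.0e} params -> loss {scaling_loss(n, 20 * n):.3f}")
# each 10x of scale buys a smaller absolute drop in loss: diminishing returns,
# but no hard wall, which is roughly what the two sides here are arguing about
```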
But wasn't that exactly what happened with GPT-3 and 4? More data, more compute, and suddenly they started showing emergent capabilities that no one had predicted.
>More data, more compute and suddenly started showing emergent capabilities that no one had predicted.
In which parallel universe are you living? GPT-4 was a massive disappointment, with some people arguing it was worse than GPT-3.5
No, you 10x the compute for 10% improvements. You increase the context and the computation required grows quadratically. It's over, this form of LLMs doesn't appear to be capable of AGI (regardless of what definition you use)
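The quadratic-context point is just arithmetic on vanilla self-attention. A deliberately crude sketch that ignores KV caching, FlashAttention, projections and every other trick:
```python
def attention_flops(n_tokens: int, d_model: int, n_layers: int) -> int:
    # ~2*n^2*d multiply-adds for Q@K^T plus the same again for scores@V,
    # per layer; a rough estimate that skips the MLP blocks and softmax
    return n_layers * 2 * (2 * n_tokens**2 * d_model)

base = attention_flops(n_tokens=4_096, d_model=4_096, n_layers=32)
doubled = attention_flops(n_tokens=8_192, d_model=4_096, n_layers=32)
print(f"2x the context -> {doubled / base:.1f}x the attention compute")  # 4.0x
```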
be that as it may, it still does not mean that AGI or ASI is fundamentally impossible, and that, at least, has been my argument all along.
Like I said. It is sci-fi, until it isn't.
>AGI or ASI is fundamentally impossible and that is where at least my argument was all the time
No, that was not the argument at all. At least for AGI. The argument is that it is INFEASIBLE for humanity to build AGI within any foreseeable future.
And this is where you and I disagree and so do many other people. We can't prove each other wrong, but of course you are welcome to try.
I at least concede now. You are not the anon that said it is literally impossible, and that is what I had an issue with.
>omg hahaha a wikipedia link
>I win my job here is done
Classic.
>There is nothing that says it can't be done
How about basic logic? To improve you need experience, and the only way to get experience is to actually interact with the world. Even if your magic AGI could build itself a body or some shit, it would take time for it to improve, like any of us mortals.
>GPT-4 was sci-fi 10 years ago
No it wasn't. That's cope by uneducated geeks who aren't into ML-related fields at all
Motherfucker you didn't even have transformer architecture. You had some RNN bullshit.
You are absolutely cooked if you think people in 2013 were thinking we would have something like GPT-4 in 10 years. Absolute rubbish.
>didn't even have transformer architecture. You had some RNN bullshit
Thanks for proving my point lmfao. You don't even know what the two are when stating shit like that. seq2seq is from 2014
I studied machine learning. 10 years ago we were still thrilled about deploying algorithms to suggest what movie a user might want to watch next. That was the cutting edge available to the public.
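To be concrete, the "what movie next" stuff from that era was mostly collaborative filtering via matrix factorization, trained with plain gradient descent. A toy sketch with made-up ratings, nothing Netflix-specific:
```python
import numpy as np

# toy ratings matrix; 0 means "not rated yet"
ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)
mask = ratings > 0                      # only fit the observed entries
n_users, n_items = ratings.shape
k = 2                                   # number of latent factors

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item (movie) factors

lr, reg = 0.01, 0.02
for _ in range(5000):
    err = (ratings - U @ V.T) * mask    # error on observed ratings only
    U += lr * (err @ V - reg * U)       # gradient step on user factors
    V += lr * (err.T @ U - reg * V)     # gradient step on item factors

print(np.round(U @ V.T, 1))             # predictions, including the blank cells
```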
>I studied ml
>Thinks content recommendation is exclusively ml and something novel
You keep embarrassing yourself over and over
The cutting edge available in public 10 years ago was shit like alexnet, seq2seq, word2vec when the public first realized how useful DNNs are and where the whole thing is going and how to vectorize more abstract concepts like text
>pretending content recommendation wasn't novel
What a goober.
>What is gradient descent
>What is collaborative filtering
>What is matrix factorization
>What is content based filtering
Stop talking, brainlet, it gets more embarrassing with each of your posts
>still won't admit content recommendation was a new thing when deployed on Netflix and later adopted by youtube and then ALL advertising
>nothing is impressive about world changing tech ten years ago because I knew about this or that underlying tech first
Sad.
>still won't admit content recommendation was a new thing when deployed on Netflix
Holy brainlet. How exactly do you think shit like yahoo got popular?
AI is already like humans. Brainlets say completely retarded shit with 100% confidence and no one bats an eye. Turns out AI isn't that smart after all.
There are too many autistic retards here who genuinely think that all of current AI is just ChatBot technology.
AI is legit going to destroy human society and you're laughing
Yes, but not because it suddenly becomes sentient. Instead it will happen because AI allows you to create literal garbage 24/7 until real information is buried beneath it.
Actually, it happened already. Every search engine is fucked.
"sentient"
That word buries the issues. How much sentience is required to have a chat with you? To paint a beautiful picture? To fry your dopamine receptors completely? To keep you in an adaptive Skinner box tailored to your every base desire? Apparently, not that much.
>to have a chat with you
depends on the chat. Current AI is not fit for long-term conversations. The best it can be is a below-average ERP partner.
>paint a beautiful picture
so far it can only produce visual slop. Which is a good way to bury real art under layers and layers of garbage until nothing can be found.
>To fry your dopamine receptors
we learned how to do that without AI already
>keep you in an adaptive Skinner box
and that too.
>visual slop
It's already better than most professionals.
>not fit for long term conversation
Not yet, anyway.
>To fry your dopamine receptors
You couldn't replace human conversation or art with algorithms. Now you can. Or at the very least, in the near future you could. Video games at least have the potential to get boring. This doesn't.
>It's already better than most professionals.
I am not gonna have discussions about art quality on Bot.info of all places, but AI art is slop. It is "better" only if you look at an image for a second or two.
Not to mention it all looks the same.
>Video games at least have the potential to get boring. This doesn't.
>"Dude procedurally generated videogames gonna be awesome! Imagine infinite content!"
We've been there already. Exploring caves in minecraft gets boring after your second cave. There is no point, no carrot at the end of the stick.
Ironically, "infinite content" gets boring faster than real thing.
>It is "better" only if you look at an image for a second or two.
It's therefore suitable for almost everything...
>all looks the same
Nah. I have no way of knowing if an image like this is AI generated or not
there aren't as many blatant calling cards as anons pretend there are.
>Exploring caves in minecraft gets boring after your second cave.
This is the dumbest thing said ITT so far. Minecraft is the most popular game of all time and everything is procedural not just the caves. It literally does not get boring.
>It's therefore suitable for almost everything...
fast food content is shit and worthless.
>I have no way of knowing if an image like this is AI generated or not
I suggest you buy glasses. If even a cursory glance at that pic doesn't scream "AI slop" to you, then I am very sorry for you. I'm not even joking.
> It literally does not get boring.
Do you know why? Hint: it's not because of the caves.
There's literally no way to distinguish these things (or at least those produced by the earlier, uncucked model) from stuff produced by graphic artists. The majority would pass.
If you have some special sense that allows you to reliably distinguish real human art from "AI slop", then more power to you. You'll be able to survive and thrive in the coming AI dystopia. But I'm telling you for sure, 99.9% of humans won't.
I too used to think I could tell AI stuff apart from real human-made pictures until early Dalle 3 came out. Now I just look at the image sizes, which can easily be changed anyway, so there's not much point in doing it.
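For anyone curious, the "look at the image sizes" check is a one-liner with Pillow. The resolution set below is an assumption (common generator output sizes), and as said above, a single crop or resize defeats it completely:
```python
from PIL import Image

# common generator output resolutions -- an assumption, not an exhaustive list
COMMON_GEN_SIZES = {(512, 512), (768, 768), (1024, 1024), (1024, 1792), (1792, 1024)}

def looks_like_generator_output(path: str) -> bool:
    # flag images whose exact dimensions match a typical generator output size;
    # trivially defeated by editing, which is exactly the point being made
    with Image.open(path) as img:
        return img.size in COMMON_GEN_SIZES

print(looks_like_generator_output("suspect.png"))  # path is a placeholder
```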
>I don't like thing therefore it will not succeed at _____
You're in denial.
Exploring caves IS FUN. It's 40% of the game on survival mode and minecraft is the most successful game in history. All of your takes are just regurgitated elitist memes.
It gets silly shit wrong ... why are there cables going into its body from the helmet, why are the plugs all wobbly? Cause it's shit which only looks impressive at first blush.
AI image generation is only useful for shitty porn and for inpainting by real artists; prompt engineering an entire image produces trash.
>why are there cables going into its body from the helmet, why are the plugs all wobbly?
Does it matter? The most jarring thing is the incorrect number of toes, and even that's not that big of an issue.
>AI image generation is only useful shitty porn and for inpainting for real artists
Agree to disagree. Like it or not, when this thing gets mass adoption among the corpos who can afford it (think it will stay free?), artists are jobless.
Bro, you think this looks good? Even at a glance, everything looks badly photoshopped together because of the lighting/style discrepancies between the cyborg's metal, his clothes and the painting.
At a second glance, you see that in the left image he turns his head in a nonsensical way, looking at nothing offscreen, and his hand holds the brush in a nonsensical way (not even mentioning the brush itself being broken).
The right image would look nice if the canvas were turned more toward him, but here it looks retarded and his elbow is going straight through it.
This is why you need actual human supervision, because this is the kind of shit AI does: looking/sounding good superficially but breaking apart once you take a closer look, especially if you know your shit.
You can be satisfied with it, and I'm sure corpos will gladly use it, but this isn't replacing artists anytime soon beyond stock art assets and globohomo infographics.
I can't believe I'm actually saying this, but it partly depends on the intent. The kitsch style of a mismatch between the cyborg lighting and the painting actually adds to the appeal of the message.
As for the nonsensical brush holding, I'll admit I didn't notice it. Can you comment on this one? It's supposed to represent the average prompter. Some of us are not without a sense of irony.
>The kitsch style of a mismatch between the cyborg lighting and the painting actually ads to the appeal of the message
Death of the "author" for AI, lmao. I personnally don't think it looks good or kitsch, it's all over the place. It doesn't even have an "amateur artist" appeal since it looks too clean and realistic despite the shit composition and pose.
The proompter one is disturbing. Yeah, that's the intent I suppose, but he doesn't even look real, like a movie shot with too many post-prod effects. Can't put my finger on why; aside from the eyes it's probably the lighting, and I think the face's angle is wrong relative to the rest of his head or his neck. I think his eyes may not even align with his mouth now that I look at it more.
I've seen Dall-3 do much more convincing humans though.
The background is busy and hard to read and that's fine, it's not the focus, but you can spot a few nitpicks like the loooong keyboard, no mouse, no monitor stand and weird eyes for the screen's angle.
>Yes, but not because it suddenly becomes sentient
Every time I read an argument about AI, the pro-AI people are incapable of arguing without using a strawman.
Nobody said AI has to be sentient for it to be dangerous
This is what happens when the AI is reinforced with human training from India following a strict guideline of what is acceptable as a response. But "AI" will never be intelligent anyway.
he needed an excuse to say 'superhuman persuasion'
he's trying to drum up publicity for his one-trick pony.
Sure AI is cool, but it's not the game changer he wants investors to think it is. So he needs to keep the bubble from popping by saying retarded shit like that.
>Either Sam actually witnessed the birth of AGI or he just has genuine dementia.
There is no real AI and you want AGI? How clueless and naive you can be?
This is Musk all over again.
It's all the same social circle so it's nothing surprising
Seriously. Same talking points. Same grand promises never delivered on. You even have the same retards running defense for a bullshitting billionaire..
It's uncanny.
Starlink is a reality.
I don't know what kind of AI research you people were doing in 2013 if you thought that AIs that play Go, solve protein folding, and create optimal mathematical algorithms while being better than 80% of doctors and lawyers in rigorous testing were a logical next step and just 10 years away.
I remember it being debated whether it was ever possible to do any of those things.
>AIs that play Go,
This zoomie doesn't remember deep blue lmao. Beating people at go was only a matter of time and anybody with more than two brain cells knew that
>solve protein folding
Not solved at all. Maybe stop living in your AItard bubble, getting good scores at some image recognition competition doesn't mean you solved shit
>create optimal mathematical algorithms
Ohh, that's a new one, kek. Care to name one?
>while being better than 80% of doctors and lawyers in rigorous testing
WOAHHH A BIG ML MODEL FED BILLIONS OF GB DATA IS BETTER IN KNOWLEDGE BASED QUIZZES IMPOSSIBLE
>everything was only a matter of time therefore I fully predicted it
>I'm not impressed with OpenAI or any AI because I'm a genius
Yeah yeah.
>No counterarguments
Yeah that's what I thought. AItards are as intelligent as their natural language processor "AGI"
ChatGPT was not predicted 10 years ago. A next-level chatbot? Maybe. Nothing like ChatGPT though. You're just another arrogant blasé cynic who hates AI, is terrified of it, and yet is somehow fully unimpressed by it. Yawn.
>IS BETTER IN KNOWLEDGE BASED QUIZZES IMPOSSIBLE
that's basically what doctors are nowadays, in fact they are even worse
Did ChatGPT already replace you or why do you seem to be seething?
If you read the Lewis book about SBF and looked up some of the people involved in EA, they all transitioned from cryptoscamming to AI scamming. OpenAI will come crashing down in a similar manner. Their ideas of morality are fucking insane and short-sighted. But it's even more fucked up than SBF. They epsilon males like that homosexual psychiatrist retard that writes essays are all part of some gross polycule shit where they simp for ehookers. There's some Kiwifarms threads on this shit. Look up the threads on Aella and Yudkowsky and you'll be finding a way to get a leveraged short on all this bullshit. Shorting the OpenAI bubble is like shorting FTT. You can't lose. Just look at the old tweets. Remember when after FTX crashed and people realized that it was weird to trust that much money with a meth addict lol? Maybe it's a bad idea to get your AI morality from these weird degenerates.
>useless sophistry level 100/100
>domain mastery skill 10/100
useless AI, only promotes consumerism and fucks the world up further
did openai ever find a discord moderator to throw $200K a year at?
>Tech bro saying tech bro shit to drum up vc and public interest
Everything old is new.
What's nonsensical about this pic? He's just stating the obvious: LLMs are good bullshitters
just laying the groundwork for them getting access to all of ai's power and potential, while you get access to nothing
run of the mill capitalist stuff