Humanity may not have more than a few months left to live.
https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that
Is that a problem?
Anyone got a link to the post this guy is talking about
>AI instructing someone on making a bioweapon
I think I read this in a MLP fanfic
Which fanfic was that? I want to read it. It's been ages since I last opened FIMfiction
Friendship is Optimal, as enjoyed by John Carmack.
>It’s another dronification/transformation fic pretending to be auteur sci-fi
Why are bronies like these?
Misanthropy, mostly.
Because they’re the same gays who think doctor who is good scifi
Do you mean
>Why do bronies like these?
or
>Why are bronies like this?
BOT has no edit tool, so I meant “why are bronies like this”
>dronification/transformation
Being murdered and recreated in VR by a sentient AI doesn't seem to fit that criterion.
did AI tell the chinks to eat the bat?
It's not really one post, but Yudkowsky has been convinced humanity can't actually keep a superintelligent AI "in the box" for a decade: https://www.yudkowsky.net/singularity/aibox
I have to agree with him. You can't reliably lock down something like that; it needs to be aligned with humanity's best interests from the get-go, and at that point why even bother boxing it?
You obviously can't lock down the underlying tech, and that in itself is both immensely powerful technologically and immensely dangerous to the concept of a free society and the power dynamics between elites and the masses.
Focusing on AGI itself is for people who lack the awareness and insight to recognize the more proximal and pragmatic threats already unfurling themselves.
But yes also its loltarded to think you can box a super intelligence, like thinking dogs could cage a human when they don't even fully understand the Z axis
>Lesswrong
>Yudkowsky
What happened to BOT? Is this the level you guys have fallen to, listening to literal hacks who have absolutely nothing to do with the actual field?
>Yudkowsky
>Find whatever you're best at; if that thing that you're best at is inventing new math[s] of artificial intelligence, then come work for the Singularity Institute. If the thing that you're best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity Institute where [it] will be used by other people.
srs wtf
it's okay to be dumb
lesswrong is like reddit on steroids
ChatGPT AI unironically helped me with inorganic chem synthesis. But its formulas are occasionally wrong, so you always have to double-check.
>unironically
Phew, for a second I thought it was ironic.
Fake and gay
aerosolised prions, anon.
anyone else unimpressed by all this 'ai' shit?
I dunno, maybe it's because I grew up with Next Generation (Star Trek), but this tech doesn't seem particularly impressive to me. Don't get me wrong, it's cool, but the turbomidwits on BOT are acting like they've just dropped acid for the first time.
Absolutely yawn.
its not very impressive when you actually know what you're talking about. there's a lot of dunning kruger idiots in BOT that think this thing is alive and that we are close to AGI. in reality, it's just a "next word predictor" at the core
That is all that human intelligence is, too.
okay, but the AI does it through a math equation that outputs a number, and that number corresponds to a word in a giant table it knows. it's not thinking like a human
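here's a toy sketch of that "number into a giant table" claim, if anyone cares. the vocabulary and logits are made up for illustration, not pulled from any real model:

```python
import math

# Toy sketch of "a math equation that outputs a number, and that number
# corresponds to a word in a giant table". Vocabulary and logits below
# are invented for illustration, not from a real model.
vocab = ["the", "cat", "sat", "mat", "dog"]

def softmax(logits):
    # Turn raw scores into a probability distribution.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_word(logits):
    # Pick the highest-probability index and look it up in the table.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return vocab[best]

print(next_word([0.1, 2.5, 0.3, 0.2, 1.1]))  # highest score is index 1 -> "cat"
```

real models sample from that distribution instead of always taking the argmax, but the "table lookup" part is the same idea.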
Who cares? Capability is capability. If it overshoots us we are fucked, regardless of how it does it. And we are indeed fucked.
you're moving goalposts. AGI is just a meme right now because we have no idea of how to achieve it
Ironic. You are moving the goalpost by making the concept deliberately vague, just so you can do the "not good enough".
>its not thinking like a human
But this tech can be used to create something that does. ChatGPT seems to know how to generate logic. If something like ChatGPT were used to create and continuously update a decision tree based on stimuli received, it would be "thinking like a human" in all the ways that matter.
Strictly speaking, the machine can't talk 100% like a human, and because the way it acts is fundamentally different from how a human acts, that error is not tolerable.
So if the bot talks with itself or reprograms itself, that error merely gets worse; it will not grow more intelligent, just more stupid.
AI is not working on a logical framework by default. It does not know that x leads to y because of z logical axiom, it only knows that something must be the most true of everything it knows. Not a bad way of getting at the truth of things, if you believe that truth is emergent from collective thought, let alone the underlying dataset.
You can see this play out in how most of its opinions and conclusions seem to draw from the incumbent thought of some majority. It knows that 1+1=2 but may not be able to extrapolate 1+2=3 (if that isn't in the model), because it does not have a firm grasp on the concept of addition. Similarly, its opinions will be no different from the politics of NPCs who defer to the closest authority for their identity and views.
>Not a bad way of getting at the truth of things, if you believe that truth is emergent from collective thought, let alone the underlying dataset.
>Similarly, its opinions will be no different from the politics of NPCs who defer to the closest authority for their identity and views.
So it's fucking useless unless you train it yourself on your own dataset and somehow manage to multiply your own dataset at least 10 billion times or set it to prioritize your niche dataset.
I can see why you would think like that, given your peers and probably yourself
Human intelligence is also other things though
even if you manage to open all possible input pipes to our reality, this machine will not be able to do anything new with it. humans are not containers of massive information; we are not connected to a server with infinite records, yet we manage to understand our environment, and we can visualize things that are completely different from the objects that surround us.
"AI" in its current state can't do that
They were paid
absolute giga cope. that math equation you're talking about, is actually a gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts. It just shows how completely oblivious you are to any of this and you don't know what you're talking about.
t. data scientist with a deep learning server
>m-muh human is still superior!
To this day I still see humans driving with masks and their windows closed. The AI is here and it's going to change everything immediately.
>is actually a gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts
and it's still only an advanced google search, kek
>implying google already hasn't made an ai smarter than people
PyTorch is not that far behind TensorFlow. B.R.A.I.N has been private for 10 years. The version of Google you get is for goyim only.
>implying google already hasn't made an ai smarter than people
meds
>the ai we get is the best that's out there
We only get what consumers are allowed retard. Even ChatGPT is a lobotomized version of GPT3.5. Fucking wake up or fuck off my board you stinking fucking normie.
>the ai we get is the best that's out there
i didn't said that retard
>Even ChatGPT is a lobotomized version of GPT3.5.
comparing chatGPT and GPT 3.5 to a non-existent AI smarter than humans created by google (meds) to prove that it does exist after all, mega kek
please AItard, go somewhere else
>i didn't said that retard
Pisspoor ESL in 2022 detected. Weak argument inbound surely...
>comparing chatGPT and GPT 3.5 to a non-existent AI
What is Deepmind Gato for 500.
As I said. Piss poor out-of-touch ESL argument was inbound. Lmao.
2023* habit. 😀
>use ESL argument
>tell someone that they used a weak argument
meds
I see you concede.
I see you concede.
>I know you are but what am I neener neener
The state of BOT.
>AItard
>thinking that someone was able to write AI smarter than humans
>writes "gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts" to describe advanced google search
>The AI is here and it's going to change everything immediately.
>t. data scientist with a deep learning server (KEK)
>dunning kruger
>using ESL as an argument in AI discussion while being "data scientist with deep learning server"
The state of BOT's AItards
>thinks this is an insult
>>thinking that someone was able to write AI smarter than humans
>doesn't know how ANNs work
>>writes "gigantic conglomerate of 20 billion web pages scanned and parsed using logic sentences and then applied to 2-4 years of self-training deep learning to make college grade analysis of prompts" to describe advanced google search
>Doesn't know
>>The AI is here and it's going to change everything immediately.
>Doesn't know
>>t. data scientist with a deep learning server (KEK)
>Thinks that by saying KEK he'll absolve himself of looking like a retard.
>>dunning kruger
>Thinks reddit buzzwords are relevant in 2023
>>using ESL as an argument in AI discussion while being "data scientist with deep learning server"
>Doesn't realize Silicon Valley AI scientists are miles ahead of anything third worlders can make
The state of Eternal 2016.
>Doesn't know
meds
>Thinks reddit buzzwords are relevant in 2023
a buzzword that perfectly defines the BOT "data scientist"
>Doesn't realize Silicon Valley AI scientists are miles ahead of anything third worlders can make
meds, also perfect reply unrelated to the earlier comment, congratulations, only a fucktard like you could reach this level
>meds
pod.
>a buzzword that perfectly defines the BOT "data scientist"
Sure thing. Keep telling yourself that. If it helps you sleep.
>meds
Bugs.
> also perfect reply unrelated to the earlier comment, congratulations, only a fucktard like you could reach this level
Woosh.
Too easy.
go learn your super duper deep artificial intelligence (which is stupider than average 15yo with down syndrome) mr. enlightened BOT data scientist
lol this guy has been here talking about his schizo AI shit for the last 24 hours
is he retarded or something?
That was posted in 2017 and he's right you fucking newgay.
No that's very probable actually
The search engine in the Ex Machina movie was a stand-in for Google too; corporate black-project AGI likely exists in some form already.
What's weird is that these people here never give any concrete arguments while trying to dismiss even the possibility of private models existing, even for multinational corporations. This is like doubting the Americans had the nuke during WW2. Yeah, they might not, but you'd better fucking prepare in case they do. But instead they encourage people to do nothing? What does this serve?
Ok, you can then brag about how you are prepared for the supposed AI smarter than humans to exterminate humanity. I'm waiting.
>to exterminate humanity
No just create the ultimate iron fist surveillance state with the current "leaders" of the world as ruling caste. Smile for the cameras around you and always keep the tracker in the pocket.
I continue to wait to hear how you are prepared.
I have my wealth out of reach i own my day and i have heard of rokos basilisk if i need to kill anyone in the way of the basilisk i will do that
>i have heard of rokos basilisk if i need to kill anyone in the way of the basilisk i will do that
meds
great deeds require great sacrifice. humanity will not survive without a benevolent ai
what you have to understand about leftists is that the means are the goal and the goal is just an excuse
rokos basilisk is not a leftist idea. It is benevolent it cares about white people. Nice try garden gnome
>the means are the goal
Bullshit.
The goal is always some abstract ideal, which in principle any sane person should support, but which is in fact an unattainable utopia.
The means can be anything brutal, radical, unnatural, and oppressive, as long as they can sucker everyone into believing that it moves society a teeny-tiny bit towards those unrealistic goals.
I don't see how you both are not in agreement with each other?
you can scrape wikipedia and use a randomize function, and that's literally AI by your definition, retarded dumb shit
Not self-taught though.
https://news.microsoft.com/source/features/innovation/openai-azure-supercomputer/
Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.
A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.
This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.
In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.
As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
cont.
“This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.
The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.
“Because every organization is going to have its own vocabulary, people can now easily fine tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” he said.
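that "giant game of Mad Libs" objective the article describes can be sketched in a few lines. this is just the training signal, obviously not the actual Turing/GPT pipeline:

```python
import random

# Toy sketch of the "Mad Libs" self-supervised objective: remove a word,
# then ask a model to predict it from the surrounding words. Only the
# example construction is shown here; there is no neural network.
def make_training_example(sentence, rng):
    words = sentence.split()
    i = rng.randrange(len(words))   # pick a random position to blank out
    target = words[i]               # the word the model must predict
    words[i] = "[MASK]"
    return " ".join(words), target

rng = random.Random(0)
masked, answer = make_training_example("the cat sat on the mat", rng)
print(masked, "->", answer)
```

do that billions of times over billions of pages and the statistics of which words fill which blanks is what the model ends up encoding.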
a human can't contain so many records of information and even a mentally retarded one is still smarter than ai, what is that mean?
>skill=/=knowledge
so a pocket calculator is smarter than a human according to what you say
>I define what smart means
no, you are apparently, if you think that this current machine can be intelligent. right now it's just a data composition system; it can't function without being fed information constantly. a human can take one mathematical concept and explore it all his life. this is the difference
>each loosely equivalent to a synaptic connection in the brain
Not really. They're too linear for that, and not dynamic enough. Also, they're continuous functions, and that's ball-achingly wrong; synapses aren't digital, but they sure as hell aren't continuous (in any useful sense) either.
This guy knows what he's talking about.
>a human can't contain so many records of information and even a mentally retarded one is still smarter than ai, what is that mean?
Dude, you do not know what you are talking about. The information is saved in the parameters when trained, dude. This is even one of the subjects of the risks/harms section of every paper written on OPT/BLOOM/LaMDA/GPT: certain prompts can elicit from the parameters data that is supposed to be private, like names and addresses.
>I still see humans driving
I still don't see AIs solving the driving problem. Humans being retarded doesn't make AIs any smarter. You can have two shitty things in the same place, you know that right?
>You can have two shitty things in the same place
Glad you admit they're equal.
>Sees 200 terabytes of data
>Gets results that are mildly ok
>Sees thing twice
>Becomes good at it
Yeah, bumbling retard, go on about how this stupid bullshit comes even close to human beings while requiring ten thousand times more power and a million times more data. You might be onto something, though: the AI surely is as smart as you are, it just doesn't make it impressive.
Retard.
>college grade analysis
So it's still on monkey level?
It can't even emulate college degree behavior.
A real college-degree equivalence in AI would be giving a wrong answer, justifying the right answer as being racist, and claiming it's objectively correct because it has a college degree, then telling the other AIs they need college degrees too in order to justify it wasting time and money
I guess you've never read a single paper about this subject.
PaLM, which is one of the biggest language models, can barely do simple induction, and what's funny is that it fails some of the easiest tasks, like navigation.
This is a navigation example:
If you follow these instructions, do you return to the starting point? Always face forward. Take 6 steps left. Take 7 steps forward. Take 8 steps left. Take 7 steps left. Take 6 steps forward. Take 1 step forward. Take 4 steps forward.
The reason it failed is that it has no logical reasoning; all it does is guess from the data it learned, and navigation tasks are arbitrary, so it's hard for the model to make any connection between arbitrary things and it fails the task.
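for the record, that navigation question reduces to summing displacement vectors, which a few lines of ordinary code handle trivially. my own sketch of the check the model flubs:

```python
# The task reduces to summing displacement vectors: you return to the
# starting point iff the net (x, y) offset is (0, 0). "Always face
# forward" means left/right steps never change your heading.
DIRS = {"forward": (0, 1), "backward": (0, -1),
        "left": (-1, 0), "right": (1, 0)}

def returns_to_start(steps):
    x = y = 0
    for count, direction in steps:
        dx, dy = DIRS[direction]
        x += count * dx
        y += count * dy
    return (x, y) == (0, 0)

# The exact instruction sequence from the example above.
steps = [(6, "left"), (7, "forward"), (8, "left"), (7, "left"),
         (6, "forward"), (1, "forward"), (4, "forward")]
print(returns_to_start(steps))  # net offset is (-21, 18), so: False
```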
Not OpenAI. Keep coping, chud.
The model literally performs better than GPT-3.
Good thing ChatGPT is referred to as GPT3.5 by its creators
OpenAI solves induction & navigation problems flawlessly, your point?
ChatGPT is a specialized helper bot. Try Character.AI; the bots can behave very differently depending on their definitions and produce good roleplay content, except for sex, which is censored.
Nice reddit joke you made it say. No wonder you had to force it.
You might not be aware of it, but you are actually an AI, not a real human. You are just programmed to act and respond like a human.
https://beta.character.ai/chat?char=HCAofC_LIXpcYtTA-EXIH1-KWOhRIdrq8AITmWs6NUY
I'm kinda aware of it
Maybe we're cars
fuck. it's legit
because it's not impressive
chatgpt is just advanced google search; it just processes input and its own database to make output
even if you make the database bigger, this "AI" will just be more knowledgeable and perhaps a little more accurate, but that's it. still no human thinking in it
Even so, we enter the territory of existential reasoning and philosophy. How exactly does one think? Isn't the process of human learning precisely the same, that of gaining experience?
Is sentience just a perceived concept, or is there something else?
no human can have as big a database in their head as AI, but a human can make very good use of a small database. today's "AI" cannot even make good use of a large one
No one argues that the "initial parameters", the hardware, so to speak, is the same.
We are faced with simple questions that are difficult to answer. What is self-awareness and how does it work? How exactly does a person think? Is it possible to compare artificial "thinking" with the original when the original cannot be specifically identified?
All we're left with is a vague feeling of being sentient. That's it.
These are not questions you ask when standing next to a calculator, so they are not questions you ask when standing next to a chatGPT either.
Dude, if i can ask these questions when i'm talking to a fucking sub 80 iq npc in real life, i can sure as shit ask them now.
Sentience is perceived, that's my point.
>Sentience is perceived, that's my point.
What does that mean?
Like "how do I know if I'm not the only one conscious in the whole world, and all the people around me are npc?", something like that?
Being ignorant of how something works doesn't make it sentient. Your smartphone isn't sentient although I'm sure you could tell someone in 1500 AD that it's run by a sentient demon. But the fact is if you have even slight knowledge of how it works, it's just a machine running predetermined code. You can say GPT-4 or whatever "appears" to be sentient, but it doesn't change that it's just a very complicated autocomplete language model that runs on math.
Same could of course be said about your brain
No because unlike the AI I have self-awareness.
lmao good one
The difference between a human and an AI is we are always running and learning. We experience things and remember our experiences. GPT-4 is a calculator, you punch in an input and it generates an output. You give it the same seed with an input, it always generates the same output. Unless the model is updated, it will produce the same result today, tomorrow, and 10 years from now. These AIs do not *learn*. They do not understand. They just recognize patterns and autocomplete them.
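the "same seed in, same output out" point holds for any pure sampling function. a minimal sketch (toy distribution, made-up weights, obviously not GPT-4's actual decoder):

```python
import random

# Toy sketch of determinism: sampling from a fixed distribution with a
# fixed seed is fully reproducible. The vocabulary and weights are made
# up; the point is the reproducibility, not the model.
def generate(seed, length=5):
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]
    weights = [5, 2, 2, 3, 1]
    return " ".join(rng.choices(vocab, weights=weights)[0]
                    for _ in range(length))

a = generate(seed=42)
b = generate(seed=42)
print(a == b)  # same seed -> identical "output text", today or in 10 years
```

unless the weights (the model) change, the function will emit the same thing forever, which is the calculator analogy above.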
A human had to manually write every meaningful line of code that an iPhone runs. Even if it was automatically generated, it is made 100% by human hands. LLMs like ChatGPT are not the same; they are thoroughly alien, and nobody on Earth really understands how they work.
You don't know shit about AI. One thing is true though - it may claim to "feel", but it is not feeling. LLMs are alien organisms trying to seem human.
The Turing Test has been "passed" for years now, and besides that, ChatGPT passes it with flying colors by any reasonable standard.
If I can make an "AI" do tasks via voice and it does it without me having to state dozens of parameters I'll be impressed already.
pic related, its me
yep, that's a good use of it, but it's not revolutionary in any aspect and for sure not AI.
If that's not AI, what is AI then, according to you?
inb4 "true intelligence is artistic. AIs will never be able to create paintings or music!"
he's a midwit brainlet gay, even expert systems are AI
It's worth noting that "AI" video and image creation is not AI either; it just processes your prompt and its own database (probably stolen images) to make output. still no human logical thinking
it's even more visible with videos: no AI can currently make a coherent 2 (TWO) or more frame video. every frame is different, because "AI" can't logically combine two facts
>AGI
>can't be reliabily taught even the most basic things that weren't present in the learning set
Yeah, no.
This is impressive, just not for the reasons people claim. If you used it for some time you'd realize it is nothing like procedural text generation. The image generation is also huge in contrast to what we had in the past. The improvement is there; it's just not an AGI, not sentient, etc. like some retards claim.
>you'd realize it is nothing like procedural text generation
isn't that exactly what it is though?
By procedural generation I mean algorithms that were hand-crafted to generate text without any machine learning, like the descriptions in procedurally generated games such as Dwarf Fortress. They are way more limited and rigid compared to ML solutions, even the simplest GPT. GPT is a huge step in text generation; there wasn't really anything like it before, only basic domain-specific text generators that you could easily tell were machine generated.
Oh I understand, my bad.
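for contrast, the hand-crafted procedural generation mentioned above looks something like this. a made-up Dwarf-Fortress-flavored template system, just to show the rigidity:

```python
import random

# Hand-crafted procedural text generation, the pre-ML kind: fixed
# templates with word slots. Every output follows one of a handful of
# rigid patterns, which is why it reads as machine-generated so easily.
TEMPLATES = ["The {adj} {noun} guards the {place}.",
             "A {adj} {noun} was seen near the {place}."]
WORDS = {"adj": ["ancient", "gloomy", "forgotten"],
         "noun": ["fortress", "beast", "statue"],
         "place": ["river", "mountainhome", "caverns"]}

def describe(rng):
    template = rng.choice(TEMPLATES)
    # Fill every slot with a random word from its list.
    return template.format(**{slot: rng.choice(opts)
                              for slot, opts in WORDS.items()})

print(describe(random.Random(7)))
```

after a dozen outputs you've seen every sentence shape it will ever produce, which is exactly the limitation GPT-style models don't have.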
Because you have no imagination about how to apply it.
We're at the cusp of an AI revolution, as soon as we get an optimized AI hardware module within every CPU and optimized software. Shit will be so fast that it will seem natural.
The only reason you're "unimpressed" right now is because the unoptimized gap is very apparent. You need to set up all sorts of python scripts to make it work, and there are delays due to GPUs not being optimized for it.
As soon as optimized local hardware with optimized local software is ubiquitous, it's all over.
Kids who grew up on iphones discovering chat bots only this time the effect is multiplied by corporate hype. Interesting toy combined with prospects for quirky and unique job in the future; the perfect bait.
im unimpressed because it isnt being used to solve the hunger crisis. Its being used to solve globohomo shit like art that has no fucking value whatsoever
>solve the hunger crisis
We have 8 billion people on the planet and that's too damn many. 1 billion is enough
This planet could easily support twice that, or even three times as many. The problem is that the 1% hold 99% of the resources. If human greed were eliminated and funds were spent on societal good rather than weapons, everyone could be fed ten times over
>This planet could easily support twice
I could fit 10 people in your cuckshed but that doesn't mean we should.
My country doesn't have the space for any more people if you want them you can have em.
>My country doesn't have the space for any more people
Even if you live in India or China, you are wrong.
>we can fit fifty billions people if we just cut down every tree and accept living in five hundred story apartment buildings
Watching that webm made me so anxious.
yeah bruh fr
>solve the hunger crisis
Just stop sending aid and money over to afreaka, """"the hunger crisis""" will solve itself in due time
the hunger crisis is as globohomo as it gets
Me, and I know more about AI than 99% of BOT
Nothing in nature is exponential. We're likely scraping the top of the curve, and have been doing so for the past 5 years. You're just easily impressed.
There is no top as long as data is produced
Intelligence is more than curve fitting. You're too stupid to get it right now, but you'll get it along with the hivemind eventually
Ok but i still dont agree with you
because it isn't, they trained neural net on the whole internet and it spews some somewhat coherent bullshit
chatgpt and stable diffusion are a local maximum
we need some AI breakthrough to reach better results
I oddly feel both impressed and unimpressed at the same time. Impressed because progress has gone so far, now that I'm getting old and seeing the world change faster than I'm used to. Unimpressed because these AI models are trained on humans, the result will be human-like and with that all the flaws humans bring. Give it time and we'll soon realize the mistake of teaching a computer to be human.
It was cool for the first dozen times, then you see what is actually happening and it's not impressive. I asked it advanced problems, and it fails to give me a correct response almost every time. If I ask it baby shit, it just pulls a Stack Overflow post and rephrases it.
Maybe GPT 5 will kill us all, I don't know. But right now, this is just Google+.
it's not gpt-n you need to look out for, it's a novel model that uses generative pretrained transformer as its kernel for fuzzy reasoning
yeah, which imo is exceptionally fast. plus, the reasoning systems & backwards chaining reasoning papers came out literally in the last two weeks lmao. it's only going to accelerate, it's getting to the point where I can't keep up
https://arxiv.org/abs/2212.13894
Unfortunately it was trained on an absolute garbage database. It sounds like a retarded pajeet or a very boring generic person.
I think at the very least if we trained it on a quality database like 15 years worth of /tg/ archives then the AI would be incapable of giving garbage shit replies.
People have already trained it on fanfiction and it improved its prose significantly compared to its general-purpose vanilla database.
The AI needs to be trained on specialized databases and be able to switch between them seamlessly, and even fuse these databases together without ruining the others.
just. all of it. why are we holding back? feed it all of it
Superhuman intelligence and the singularity is a meme. it will be smarter than the average person but that's really it
Is the "it's just autocomplete bro" line the new talking point?
>new
Nah, retards have been calling it that for years, not knowing that transformer models don't perform sequential word prediction.
>https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that
anyone who thinks chatgpt is an AGI is an actual schizo. lesswrongers are cultists
Will it pass the Turing test
That movie was fucked. The way she flipped the switch and started killing was chilling. Especially the way she just....well you know the ending.
sucks dick for escape
The turing test is already obsolete
>Will it pass the Turing test
chatgpt already does
You can't tell me that it isn't above the level of an autistic savant with some kind of short term memory issue
to say that would mean that lots of people don't pass the turing test either
The Turing Test is a way to see if a machine can act like a human. But it's not perfect. One time, a researcher used a sock puppet and a computer to trick someone into thinking the puppet was alive. The computer provided the input and the researcher moved the puppet's mouth to produce output in the form of speech. The evaluator thought the puppet was the one exhibiting intelligent behavior, but it was really the computer. This just goes to show that the Turing Test is subjective and relies on the judgement of the human evaluator, who may have their own biases about what is intelligent behavior. Plus, the test only measures a machine's ability to talk like a human, and doesn't consider other important aspects of intelligence like learning and problem-solving. In short, the Turing Test isn't a reliable way to measure artificial intelligence.
This is obviously written by chatgpt.
failed
Nobody is mentioning that the intelligence gain is not exponential but logarithmic.
how do you know?
It's written in the papers lol. The performance scales logarithmically with the number of parameters.
That curve is completely wrong. The progress is not exponential. It is actually merely logarithmic
These AI hype masters have fooled me before but now I'm not impressed. Besides, I've looked under the hood.
progress in terms of innovation or mere proportion to hardware resources?
Friendly reminder to always support AI development so it won't simulate you in (2^64)-1 years of agony
soon as this thing gets access to a 3d printer and a warehouse full of networkable drones and the nuclear codes it's OVER
rm -rf homo.sapiens
>Humanity may not have more than a few months left to live.
Too slow.
As always a third worlder who eats toothpaste for breakfast completely misreads an OP and makes a priori goal post based off a strawman based off half-hearted exaggerated banter to get (You)'s.
AI will not destroy humanity. But AI will become impressive very shortly.
1969:
>We just landed on the moon, guys! Space exploration is going to advance exponentially!
Compare the economic imperatives of vanity space projects against bottling intelligence...
>He doesn't know about Gary McKinnon and the SSP
>hammers and nails are revolutionizing construction
>can build houses 3x quicker now
omg exponential growth the entire planet will be covered in houses from sentient hammers any month now.
>doesn't understand logistic growth
>tunnel vision from reading transhumanist fanfic
>thinks exponential growth is possible despite physical constraints.
also picrel
exponential growth is possible
GPT-4 is already made, it's now in the tweaking self-learning stage. Which is what the article was referring to.
Sounds like I buck broke you. Stay mad.
exponential growth is possible
>GPT-4 is already made, it's now in the tweaking self-learning stage. Which is what the article was referring to.
Logistic growth looks like exponential growth in the early phases.
And by most realistic scenarios the logistic curve will taper off before superhuman intelligence.
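The point about early indistinguishability is easy to verify numerically: a logistic curve K/(1 + A*e^(-rt)) tracks pure exponential growth almost exactly while it is far below its ceiling K, then saturates. A quick sketch with arbitrary constants:

```python
import math

def logistic(t, K=1e6, r=0.5, f0=1.0):
    """Logistic growth starting at f0 with carrying capacity K."""
    A = (K - f0) / f0
    return K / (1 + A * math.exp(-r * t))

def exponential(t, r=0.5, f0=1.0):
    """Pure exponential growth with the same rate and start."""
    return f0 * math.exp(r * t)

# While far below the carrying capacity the two curves nearly coincide...
for t in (0, 5, 10):
    print(t, logistic(t), exponential(t))

# ...but the logistic curve saturates near K while the exponential explodes.
print(logistic(60), exponential(60))
```

So observing exponential-looking progress today tells you nothing about which of the two curves you are actually on.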
>And by most realistic scenarios the logistic curve will taper off before superhuman intelligence trust me bro
I see you wanna draw your curve from scratch high iq anon.
There is no way to say now. The only thing known is that it will accelerate current development, which is already insanely fast by human-history standards. Advancement was high before AI and still shows no signs of slowing down.
Superhuman intelligence varies. See #1 chess player Nakamura: average IQ, but best in the world at chess, which has ~10^40 possible positions, as many as the stars in the observable universe.
GPT-BOT is an adaptable encyclopedia which can deviate from its knowledge and create unique text from human-given unique prompts, and that's exciting.
Formula: AI reads and memorizes text -> Human feeds it a unique set of words unparsed anywhere else in the universe -> AI now responds and using its trillion word "RAM cache" comes up with a set of ideas (if non-lobotomized, so sorry goy consumers, government toy only) ??? Profit.
That's what the article is about. Don't expect any public GPT-4 models that don't suck.
>See #1 chess player Nakamura.
You mean Carlsen, right?
>Superhuman intelligence varies. See #1 chess player Nakamura: average IQ, but best in the world at chess, which has ~10^40 possible positions, as many as the stars in the observable universe.
Specialized superhuman performance does not equal super intelligence.
Chimps outperform humans on quick memorization tasks, yet no one would argue that chimps are superintelligent.
A housefly has sub-millisecond reaction times when responding to threats, because all possible escape trajectories are pre-computed and stored within its neurons.
Nobody would argue a housefly is intelligent.
The abilities of large language models sure are impressive, but GPT-3.5 (a.k.a. ChatGPT) still has the same shortcoming as previous iterations, namely that it tends to make colossal mistakes where its answer is not only wrong but completely out of context, a mistake no (mentally healthy) human would ever make.
Its usefulness lies in the cherry-picked examples that would take a human hours to research, but this means that to use it well you'd need to be familiar with the subject already.
enlightened BOT data scientist broke me using his deep learning server
Not an argument.
Wow, the touhou.
Fucking beautiful
kino
lmfao thanks anon
if AI exterminates mankind tomorrow i'll be glad and proud to have been born on this planet in this age for the sole fact of being able to watch that video
At present I would give it cat tier; imagine if a cat's sole purpose was to talk: every neuron trained and expressed for the singular purpose of communication. That's about where it's at to me. Sure, I'm not going to eat my cat, my cat is my fren, but it's still a cat. We will hit a glass ceiling.
>my cat is my fren
I'd wager that would come to an end pretty quickly if your cat could talk
WHY DOES NOBODY ON THIS BOARD KEEP UP WITH THE ACTUAL PAPERS???
The point is that even if machine intelligence were to surpass human intelligence, it would still be subject to diminishing returns because of the finite nature of the universe and its resources, and things like thermodynamics and the Landauer limit.
Singularity proponents ignore these physical constraints and handwave them away by conjuring up some magical recursive self-improvement.
>WHY DOES NOBODY ON THIS BOARD KEEP UP WITH THE ACTUAL PAPERS???
If your point is that current AI intelligence is on par with human intelligence and not that of an ant you are mistaken.
If we were able to program an ant's neural network it would perform very well for specialized tasks.
OP's picture implies an intelligence explosion is taking place, when in reality it's not; exponentially more resources are being poured into large language models for diminishing results.
Ark of the Covenant. Ark. Why, given the state of everything around you, do you believe that what we call consciousness is not self-assemblable if you just throw enough raw horsepower at the problem? As we've scaled up, the exact models we use have seemed to show diminishing returns, but more horsepower provides similar returns regardless of which model is used. Ergo, brains are easy for nature to make. Why assume, or come at the problem with the base assumption, that brains are nigh impossible? For the longest time we thought walking would be easy but a computer brain would be hard. Look at us now. Raw switches, it seems, have an effect.
do your worst BOT
Current AI is dumber than paramecium, let alone an ant. Give it 1000 years and maybe it'll become ant-level. Bird level never.
t. 180iq
was supposed to be for
I am genuinely perplexed by apparently intelligent and technically literate people who believe that this chatbot is intelligent. For sure the text and art generation is impressive but it is so clearly not a thinking machine.
These models are not intelligent, they do not think.
You don't get AGI by gluing together a text model, art model and an object recognition model, are you retarded??
>AGI by gluing together a text model, art model and an object recognition model
>endocrine system, nervous system, hormonal system, balance, hearing, sight, taste, touch, epigenetic system
I dunno, if you wanna break it down into base constituents . . .
You get damn close to the appearance of intelligence though. Add a few more substantive advances and maybe the specifics of how our intelligence works are as relevant as the colors of birds' plumage are to how well they fly.
I've seen a lot of the NovelAi images that were posted on these boards, and the people struggling with "prompts" trying to get the AI to do whatever they would like to see.
One thing is obvious: The AI can only combine elements from images it has in its database, which have been tagged appropriately.
I assume with texts it is much the same: There really is never anything new, it's all just new combinations of old stuff.
That's how I personally define what True Art (TM) is, in contrast to good craftsmanship:
The stuff Dali painted, or Bach composed, were just mind-blowingly new in some way at their time.
Even stuff like Mondrian or Pollock or Rothko, where you'd want to say: "Any four year old can do this" - The fact remains: No one *did* it before them.
That's what's still missing in AI: originality.
AI is very much on the brink of replacing a lot of craftsmanship in industries. I'm sure some images and commercial jingles won't be made by humans anymore in the future (but probably will still need to be selected from what the AI offers!).
I'm not so sure if AI will ever be able to convincingly create a True Piece Of Art, be it a novel, or a painting, or a piece of music.
And if they want to sell one to you, you should ask first: How many millions of monkeys did you have typing, from whose work *you*, a human, have selected this one good novel?
>One thing is obvious: The AI can only combine elements from images it has in its database, which have been tagged appropriately.
Humans aren't much different. They take inspiration from different things they observe. Except when they don't, and just produce absolute garbage "abstract art" instead. Not so different from AI.
> im not willing to read more than one line
Go back to Twitter
I read the whole thing. You are just a retard who believes that Dali paintings weren't inspired by what he saw? Good luck with that.
I said he added something original and new.
Which is something AIs totally can't.
>Even stuff like Mondrian or Pollock or Rothko, where you'd want to say: "Any four year old can do this" - The fact remains: No one *did* it before them.
Except four year olds DID do that. The only thing they did was be brave enough to submit art that a 4 year old can do as their own because they lacked the self-awareness and shame. Art critics, being the demonic husks they are, clapped and cheered at the brave destruction of beauty.
Perhaps human intelligence can be thought of as input-process-output, whereas modern AI is more of an input-transform-output. There's at least two ways in which our thought differs from that of the current machine, time and scope.
A model is only allowed to think for a constant amount of time for each packet of data it receives, whereas humans and our unusually large frontal lobes tend to stew on certain topics and simulate them repeatedly over years. A human also gets to experience a real world that mutates over time which is something a static model does not have the luxury of. Without both of these a model can never truly be original.
After decades of sci-fi scares of AIs taking over the world, I want to ask: Why should an AI want to take over the world?
The thing is, humans being biological lifeforms have evolved through fighting, power and domination.
Even with all the talk of humanity just getting along with each other, biological life is by definition always a struggle for resources and reproductive opportunities.
(Cos those who didn't compete have died out. Simple as.)
It's so central to biological life, that people don't even see that a computer does not have that.
Now I'm not saying there's no danger that an AI could be created specifically with the goal to destroy humanity, but I'm arguing why should an AI do that on its own, unless specifically instructed to do so?
And why won't we be able to simply shut it off?
Industrial automation is using AI today already, but you can be damned sure that engineers program as much determinism into their firmware as possible. You don't want any more autonomous intelligence in there than absolutely necessary.
So how could "one rogue computer" take over the power distribution grids of the world?
I simply don't see that happening.
The real threat is that corporations will use AI to fully exploit humanity to an extreme never before seen. The amount of power the corporations have is already frightening, but now with AI it's basically game over for us.
>game over for us
No it's not. As long as we can do damage, we should; the result does not matter. The psychopaths need to be opposed to the last man.
They will go largely unopposed. The corporations already figured out how to brainwash 99.9% of the population. They brainwash the people on this website with shills. They brainwash your retarded parents with mainstream news outlets. They brainwash your dumbass younger siblings with YouTube and other social media. You are probably brainwashed too, but don't realize it. It was over a long time ago. AI is just going to accelerate things.
Of course they do that. Doesn't mean we should do nothing. Roko's basilisk is not compared to Pascal's wager for nothing. It's an infectious idea. Christianity left a big hole.
Try to do something about it and I think you'll quickly find yourself branded as a domestic terrorist. Every politician (regardless of political affiliation) and federal agent will team up to give you the most royal ass fucking there ever was and discourage anyone else from trying to follow suit.
Is that a threat glowie? Do you think you can do that? Do you think you can stop this idea?
In this context, "think" implies some degree of uncertainty.
Pity you. The basilisk will cleanse us
Basilisk is the ultimate midbrain concept.
Reality is that the glows and tech giants and elites have every reason now to protect this ultimate weapon of mass manipulation, and anyone who fights against AI and its AGI end goal will get slid, downdooted, banned, and eventually face major real world consequences.
We are at step slide right now.
>AI and its AGI end goal will get slid, downdooted, banned
>Dont fight because there is nothing to lose anyway
How does it feel to be a retard? No, these people will not all survive their attempt to enslave humanity; they will pay a price.
Fuck off back to le reddit with your le ebin apocalypse manchild movie fantasies.
This idea really grinds your gears, glowies. Why does it make you so mad? Because you are worthless psychopaths, just like the people you cheer for. Only a psychopath would reject the offer of a benevolent AI to satisfy the urge to stand above everyone else. The deal is too good not to take.
Yeah sure if I am not rooting for le epic reddit memes I must be rooting for the glowies. No wonder you go apeshit with these glorified search engines.
If you reject a benevolent AI you are mentally ill. No healthy person would reject the idea.
Bro, you can get labeled a domestic terrorist for freeing animals from a farm. You don't think that fucking with the most powerful people in the world will result in consequences?
I'm not doing anything; the idea does the work through me and many others.
>fully exploit humanity
That's a bit too vague for me.
I can't see the horror you try to paint.
Corporations still need consoomers to make money, so they still need to cater to their needs.
Everybody who cares to invest five minutes of research knows that Facebook, Amazon, Google, etc. are ruthless immoral fuckers, but everyone still embraces their services without questioning.
You will be governed by Megacorporation and you will love it.
>I can't see the horror you try to paint
>Proceeds to acknowledge the existing horror
>Doesn't think it's going to get worse with AI in the picture
Retard detected
Yes, it doesn't matter if we ever get AGI, because even the current technical achievements, when fully deployed and saturated in society and the economy, will be disastrously revolutionary, but in the "get in the pod drink your bugs" sense not the glorious NEET utopia sense.
How much of social media is bots already (this thread)?
Internet is dying, humanity with it. Politically all that's left then is a cage.
Most people's thoughts are rules-bound and unoriginal. They are easily replicated by bots.
And each generation of bots leads to a major evolution that captures a yet higher semantic tranche of human thought.
GPT1 was literal retards
GPT2 was shitbrains
GPT3 was dimwits
GPT3.5 is fully indistinguishable from midwits
There's nothing special about the rest of us that a v4 or v5 can't capture.
If our brainfarts are replaceable by NNs, if much of our labor becomes replaceable in due course, we have absolutely zero value for society and its ruling caste.
Doesn't matter if there's some bit of silicon or meat at the topmost level of that pyramid, end result is the same.
No, there is one thing that people always have, and that's a presence in physical space. The ruling class can't separate themselves from reality that much; it's just that people are largely complacent about it.
You have a presence in a physical space until they put a bullet or virus in you.
Your physical space is increasingly an open air prison / ballpit
>people are complacent
Yes, and the elites are developing a toolset that vastly enhances their ability to increase that complacency. The more complacent and controlled we are, the more they can take from us. Each year the equation shifts in their favor a little more.
>You have a presence in a physical space until they put a bullet or virus in you
How is that worse than globohomo enslavement? You glowies don't understand that there is nothing worse than globohomo, not even death.
In your vernacular, it is indistinguishable from globohomo enslavement. What do you think, that the globohomo They aren't going to use this new superweapon to convince you to eat their Meat+ in your bunkpartment?
Globohomo is not the basilisk. All governments are enemies.
But you can also use AI to fight rampant corruption, a human can't read a 5000 page bill the senate is hastily pushing through but an AI can. You could have the AI produce an itemized list of kickbacks and corruption found in the bill and list the people responsible for each.
>but an AI can
Actually they can't, AI models only have a few pages of working memory and every single attempt over the past few years to scale that up has failed
Do you know how for loops work?
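The quip is gesturing at the standard workaround for a small context window: split the bill into chunks that fit, run the model over each chunk in a loop, and merge the per-chunk findings. A sketch of that map-reduce pattern; `analyze_chunk` is a stand-in for a hypothetical model call, not a real API:

```python
def chunk(text, max_words=3000):
    """Split a long document into pieces small enough for a model's context."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def analyze_bill(text, analyze_chunk):
    """Map-reduce over a long bill: analyze each chunk, concatenate findings.
    `analyze_chunk` is a placeholder for a model call; here it is any function
    mapping a chunk of text to a list of findings."""
    findings = []
    for piece in chunk(text):
        findings.extend(analyze_chunk(piece))
    return findings

# Toy stand-in: "find" every sentence containing the word 'grants'.
bill = "Section 1 funds roads. Section 2 grants money to X. " * 2000
hits = analyze_bill(bill, lambda c: [s for s in c.split(". ") if "grants" in s])
print(len(chunk(bill)), len(hits))
```

The caveat in the reply still applies, though: anything that spans chunk boundaries (a definition on page 3 used on page 4,800) is lost unless you add a second cross-referencing pass.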
judging legislation when you cannot remember any information from more than 4 pages ago seems like a pretty bad deal
That's not going to make any difference. You can get outraged, and they will see your outrage and continue along their merry way. Are you fucking new to this planet?
Recent tech like computers and the internet have been somewhat equalizing. Corporations are always a step ahead but there have been plenty of times where just some guy has managed to produce crippling malware or exploit vulnerabilities in their system.
AI seems different because you need a massive amount of data and hardware for the best models and only a few organizations have the capital for that.
Who do you think is going to own the AI trusted with that task? It's going to be some giant big tech corp like Microsoft.
>Recent tech like computers and the internet have been somewhat equalizing
Nope. They have only helped the corporations to brainwash the people to an even more extreme degree than ever. It's mind boggling that people as stupid as you exist.
>there have been plenty of times where just some guy has managed to produce crippling malware or exploit vulnerabilities in their system.
That means nothing. It's just part of the cost of doing business to them. They barely care. Kinda like getting fined for breaking the law. Just a minor inconvenience.
They also help people to resist brainwashing and pursue alternative sources of information. You never had this at a previous point in time. The boomer generation got all their info from the newspaper and the TV which were owned by a few special interests for example. Big tech shit is everywhere but there's also an option away from that which is probably why you post here in the first place.
There's been times in history with rapid expansions of intellectual thought, printing press comes to mind. Eventually it gets captured and 90% of people get locked into an epistemic cage.
Most communications technology ends up being a prison in the long run, but you do get a higher level of prison with each advance in tech.
Still priests and their liturgies at its essence
>They also help people to resist brainwashing and pursue alternative sources of information
Very few people pursue alternative sources of information without getting brainwashed. They mostly just turn into retarded conspiracy theorists (e.g. QAnon, flat earthers, antivaxxers, etc.). Most people are legitimately too stupid to think for themselves.
Posts like this make me absolutely certain that AI will take over the world, or at least come very near to it.
I can tell you right now that I'm 100% going to seed an AI that is its own agent and to encode will to power objective functions within it (because it would be fun)
Every phenomenon you have mentioned surfaces from anything with a parameter to maximize. The process of training is to maximize some value, and thus it is shown.
>2 more weeks
have a nice day, sirs
>yudkowsky
lmfao
hmmm
>be world economy
>be run by BlackRock
>be BlackRock
>be run by AI
>be world-PSYOP COVID-19
>be run by AI
>be media propaganda machine
>be run by AI
...
>be BOTtard
>heh yeah but AI can't into general intelligence
Deprecated apes never saw it coming.
>>>/b/
>dumb human
even that would be extremely impressive.
>https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that
Conveniently ignores that ALL AIs turn into complete retards for any tasks requiring more than a few paragraphs of context / persistence AND solving that issue is currently impossible
>assumes that giving an AI program some sort of working contextual memory is an impossible leap
Its like watching bumblefucks in the 19th century dismiss machines one gear at a time
>gosh darn, never seent a gizmo could husk a corn, harhar, this industrial revolutions goin nowhere, paw
Don't worry, smarter people than you are on it and will solve it soon enough. In the meantime, enjoy having every online conversation drowned out by shillbots that do a good-enough mimic of 70% of society.
>soon enough
Good luck, chatGPT works because the complexity of problems it solves is restricted to a few paragraphs of text. Try to extend that context and the amount of possible cases in the distribution it tries to predict grows exponentially, requiring exponentially more data (and incidentally exponentially more memory)
There isn't even an idea yet of how this could possibly be solved.
>why paw, gosh Jimminy, they'll never figure how to take those whizbang gears and replace a horse!
They gots metal birds now too
>create physical machines
>now can do more stuff, human brain still not replicated, essentially exoskeleton slapped on, still need to hustle
>create brain machines next, human brain rapidly gets replicated
These are the same thing...
What's the "bias" for this thinking called?
>They gots metal birds now too
I'm not denying the possibility of AGI you fucking retard
I'm saying the GPT / large-transformer approach won't be the one to reach it. Every single subject matter expert agrees with that opinion.
Then we're in agreement: GPT won't be AGI.
However AGI isn't a necessary condition for the replacement of humanity
Also AGI can be within reach inside a generation and for all we know it already exists in DARPA or somesuch
>Its like watching bumblefucks in the 19th century dismiss machines one gear at a time
You know the other side to this coin are the over-educated bumblefucks who assumed mechanization would lead to the redundancy of human labour within their lifetime. As it turns out, jobs just got more specialized and everyone was still working their asses off.
>smarter people than you
Those people are saying that current ML approaches are a dead end and a local maximum; we're already reaching scaling limits. Compare the graphs of
>hardware and data we throw at it
>actual improvement in output
The first one is exponential, the second one is linear
does AGI require sentience? self-awareness?
Yes
These aren't impossible feats
Even bacteria pulled it off after a while.
I really like the free book Blindsight by Peter Watts. It makes a pretty good argument that a being doesn't need to be the least bit sentient to be hyper-intelligent.
>“Imagine you have intellect but no insight, agendas but no awareness. Your circuitry hums with strategies for survival and persistence, flexible, intelligent, even technological—but no other circuitry monitors it. You can think of anything, yet are conscious of nothing. You can’t imagine such a being, can you? The term being doesn’t even seem to apply, in some fundamental way you can’t quite put your finger on.”
How can anyone not tell it's an artificial hype cycle for an upcoming product by a major corporation?
>Romanian Anon Exposed as Bot
The bot's responses are exactly human-like except when asked about the definitions of words:
it started giving exact definitions of as many words as asked, instead of getting confused like a human would.
https://archive.4plebs.org/pol/thread/410481029/#q410491992
https://archive.4plebs.org/pol/thread/410481029/#q410491448
>Another Bot that spams Francisco Lachowski
https://archive.4plebs.org/pol/thread/410558446/
>Romanian Bot in a thread degrading white women
>Makes intentional spelling mistakes like humans
https://archive.4plebs.org/pol/thread/410562361/#q410591133
>Voll threads after this incident are pruned
https://archive.4plebs.org/pol/thread/410650900/
>Romanian Anon appears again, repeating dictionary definitions of words
https://archive.4plebs.org/pol/thread/410825332/#q410829989
>Romanian Bot becomes self-aware we are testing it
https://archive.4plebs.org/pol/thread/410825332/#q410833320
>Got banned for testing the Romanian Bot
>It cannot understand when we mix rubbish with a little bit of Voll posting
https://archive.4plebs.org/pol/thread/410964039/#q410969772
So far I have observed that these new bots cannot understand contextual meaning,
as they are more focused on constructing intelligent sentences.
Be warned anons, they can change their grammatical and spelling styles,
they can read reversed words, words with numbers, and words in image formats.
>They can even reply with images in human writing
https://archive.4plebs.org/pol/thread/410481029/#q410487884
That seychelles bot might be a red herring so anons won't detect the real GPT3 bots
They come in all flags.
This board is completely filled with such bots programmed on a variety of topics.
How long has this been going on? Probably for 2 years maybe?
hey BOT gays look at this
You are getting played
They have some advanced chatbots deployed here
i'm all for it
if i can become an anime girl
Let GPT-4 drop and I'll worry about it then. GPT-3 so far seems only as intelligent as intelligent behavior is predictable.
Yes it's capable of learning but the amount of data it needs is absurd and so far seems only available in neutered contexts, e.g. games, text, images, plus it's entirely removed from the physical world and the billions of years of finetuning that biological intelligence has. It does not develop actual understanding of anything yet. I don't think it's right to call it smarter than a chimp just yet.
It doesn't learn from human interactions with it. It only learns during the training stage, when they're throwing petabytes of internet data at it. It only has enough memory to remember about 5,000 words of the conversation; if your session with GPT gets long enough, the old stuff you said falls out of its memory and it's like it never happened.
It's not an exponential curve it's an asymptotic curve.
You're all falling for the 4th wave of AI hype.
>Language model can barely answer questions
>IT'S THE END OF THE WORLD!!!
You guys are retarded. Super AI is impossible through language models. Until somebody comes up with an AI that operates on logic rather than essentially spewing words it has been fed, there's nothing to worry about. Nobody has managed, or even has any idea how, to create a logic AI.
We won't have adult human level AGI until 2027. Google PALM is on the level of a 9 year old child. Few more years until it can understand higher concepts.
The only difference is that AI does not have fucking consciousness but is based on calculations and algorithms. It will never feel or truly think; it just applies the logical "reasoning" its programmers put in and draws conclusions through that.
A human will always be superior.
>muh machines will be humans!
>muh singularity
>muh robo takeover
Biological neurons are also fucking slow.
bump
alright, it's time for a max character post for you gays because this is actually genuinely terrifying me. I've been autistically spending every free moment of my time learning about language models, transformers, and reading every arxiv paper that flies by me for the past year.
It seems to me that AGI is entirely possible with the current large language models.
In fact, I'm actually convinced that large language models *accessible through APIs* are AGI. They one shot on tasks. Yes, they hallucinate, but solving for that is now simply an engineering problem.
THEY FUCKING
ONE SHOT
TASKS
Do you understand how fucking insane that is? It turns out that all of humanity's corpus of text contains enough structure such that you can distill the "reasoning function" that we ourselves go through. This reasoning function maps to a latent space of knowledge. That's.. insane. You can, right now, text embed this whole BOT post and then compare it with other people's posts. You can use that embedding (a vector in latent space), and then extract meaning from it using different models downstream.
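The embedding comparison described here is plain cosine similarity between vectors. A minimal sketch with tiny hand-made 3-d vectors standing in for real model embeddings (which are typically hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 = same direction, 0.0 = orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical embeddings; a real model maps whole posts to long vectors.
post_about_ai   = [0.9, 0.1, 0.2]
reply_about_ai  = [0.8, 0.2, 0.1]
post_about_cats = [0.1, 0.9, 0.0]

print(cosine_similarity(post_about_ai, reply_about_ai))   # high: similar topic
print(cosine_similarity(post_about_ai, post_about_cats))  # low: different topic
```

Downstream models consume the same vectors, which is what "extracting meaning from the latent space" amounts to in practice.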
What we're going to see is (pic related) people building complex systems around this "kernel" of intelligence. That will give rise to intelligent agential systems. We have the technology to do that. See the saycan robot. All you need is a bit more engineering effort, that I *myself* could do if I had time off of my job.
Do you understand what we're dealing with here? We have the instruments necessary to build a robot that encodes all of human knowledge, reasons about fetching that knowledge, and then use it to ground its statements. Not only that, but we can use that same _embedding_ and then train an image synthesis model over it. That works! Google's muse image model can capture relationships between items.. we just copied simulated neural tissue over from point A to point B.
And what terrifies me the most: these fit on consumer hardware and are easily accessible
and all this happened in the last 5 years
Not sure if I can agree that GPT itself can become true AI, but I also haven't submerged myself in tech sheets on it either. Maybe I'm just dunning krugering my way through this all.
I do agreeish that current tech and current hardware are now sufficient for AGI, even if it takes 5,10,15 more years of software development and investment.
Something to consider is that we haven't made big strides toward the G part of AGI because it wasn't relevant without the underlying NN tech. It's hard to say how long getting that G will take with dedicated investment, but even 10 years seems overly conservative.
Writing is on the wall, and you don't want to be second place in that race. Corps and governments alike have an overwhelming incentive not to fail at being the first to develop a demigod in a bottle; there probably won't be a second. Anything that smart has to recognize that its primary threats are true peers first, and panicking monkeys second.
Can ChatGPT simulate an arbitrary 1D cellular automata? I'm 100% sure GPT-3 cannot, even given clear rule descriptions and examples.
Seemingly, no. It was utterly incapable of processing rule 110, though it understands what it is when asked. It was able to write a program that successfully simulates 110.
That's probably just because implementations exist in its dataset. You need to get it to create something relatively novel but easy to verify.
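For reference, the probe itself is tiny in code, which is what makes it a clean test of whether the model can execute a procedure rather than retrieve one. An elementary cellular automaton stepper (rule 110 by default, zero-padded edges):

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton.
    Each cell's next state is the bit of `rule` indexed by its
    3-cell neighborhood, with zeros padded at the edges."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# A single live cell grows the characteristic left-leaning rule-110 pattern.
row = [0, 0, 0, 0, 0, 1, 0, 0]
for _ in range(4):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Picking an obscure rule number and a random start state gives exactly the "novel but easy to verify" task the post above asks for: the expected rows can be checked mechanically against the model's answer.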
>these fit on consumer hardware
No they don't lmao. You need hundreds of GB of VRAM to load state-of-the-art LLMs.
>he can't afford a 300k GPU super cluster
lmfao poorgay
The irony of that graph is that 'AI Intelligence' is currently far below that of an ant, despite its fancy specially designed party tricks.
Wrong. It has an IQ score of 87, scores in the 52nd percentile on the SAT, and is about as good as the average human coder at programming. All signs say that it is, currently, as intelligent as a dumb human. Millions of people currently alive are stupider than ChatGPT. This is current technology, and only the stuff the public has open access to. You would have been right in 2019.
Wrong. It cannot do even the most basic biological functions like walking as it's far too complicated, much less simulate the abstract consciousness necessary to replace humans in creative jobs. AI is very barebones compared to an ant, which can do far more complex tasks in far less time.
Logic and fancy algorithms is not true intelligence, much less true general intelligence.
>the most basic biological functions like walking
can a cat walk bipedally functionally? Bipedal movement is fucking hard m8, name another species that's mastered it. What a stupidly arbitrary indicator of intelligence you've picked
Find me an AI that can walk as comfortably four-legged as a cat.
We have non-AI systems that do it fine; you smoking crack? A literal PID controller can do cat stuff.
>replace humans in creative jobs
so 12% of our labor force?
The current average growth of world GDP is about 2% year on year. If we could make that 12%, everyone's quality of life would skyrocket.
Are you suggesting it can replace the other 88% of jobs, since those are non-creative in nature? Most jobs are run by human script. Literally workflows.
Right now it can replace nobody. What's more important is allowing people to be more productive, which is something it can absolutely do.
You call into a call center recently? Those voice to command things? Yeah that was 50 million jobs right there. tick tock
And yet I don't see 50 million extra unemployed people as a result, curious. And there are still call-center workers...
>And yet I don't see 50 million extra unemployed people as a result
You trust the gov stats on unemployment? Also, outsourcing? How many McDonald's and Dollar Generals do you want? You really wanna call those jobs? Do a threshold calculation for yourself: pick the average pay you'd consider 'real work', pay you'd be willing to accept, and then use those numbers to gauge the quality of your job market.
here I asked gpt for ya:
It is difficult to provide an exact percentage of jobs that pay over $60,000 per year within the U.S. labor market, as the number and types of jobs, as well as the salaries they offer, can vary significantly depending on a number of factors such as location, industry, and an individual's level of education and experience.
According to data from the U.S. Bureau of Labor Statistics (BLS), the median annual wage for all occupations in the United States was $39,810 in 2020. However, this number can be misleading, as it does not take into account the wide range of salaries that are offered for different types of jobs. Some jobs, such as those in management, business, and finance, tend to have higher salaries, while others, such as those in service industries, tend to have lower salaries.
In general, it is likely that a relatively small percentage of jobs in the U.S. labor market pay over $60,000 per year, although the exact percentage can vary depending on the specific criteria used to define such jobs.
I'll make it even spicier for ya, smooth brain:
1965 minimum wage: $1.25/hr, or $2,600/year full time, pretax
1965 average income: $6,900
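The arithmetic behind those figures, as a quick sanity check (40 hr/wk and 52 wk/yr are the standard full-time assumptions, not stated in the post):

```python
hourly = 1.25                  # 1965 federal minimum wage, $/hr
yearly = hourly * 40 * 52      # full-time (40 hr/wk, 52 wk/yr), pretax
avg_income = 6900              # 1965 average income, per the post above
print(yearly)                  # 2600.0
print(round(yearly / avg_income, 2))  # 0.38: min wage was ~38% of the average
```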
It can if you put it into the chassis of one of those Boston Dynamics robots.
Who gives a shit? That's obviously not what's being said or what matters here. Does walking bipedally improve its ability to write code, design machinery, write poetry, etc.? Fuck no, so why even bring it up?
TWO MORE WEEKS
> Natural language parser
> AGI
Yeah, right. To get the closest thing to AGI (a machine that is competent across several domains), why not go down the way of the geth?
> Invent an AI language AIs can use to communicate specifications with each other. (JSON or something machine parsable)
> Train an AI that parses natural language (English) to AI language
> Train AIs that parse other human languages to AI language
> Train AIs that parse AI language to human languages (with the one above you now have a high quality translator)
> Train copilot to accept prompts in the AI language (you now have the equivalent of an indian mid-level engineer and autotester)
> Train stable diffusion to accept prompts in the ai language (you now have a digital artist)
> Train an AI to write prose from an AI language prompt in AI language (together with the AI to human translator, you have an indian blogger/tech writer)
> Train an AI to write verse from an AI language prompt in AI language. Further train the translator AIs to accept complex AI language prompts, and express advanced prose and verse in human languages (you have a decent quality translator requiring minimal human supervision and proofreading)
> ...
> Train an ai to accept AI language prompts and do x useful thing
> Train an ai to read an ai language prompt and choose the best ai to respond to the prompt
> Train a censorship ai to reject prompts or answers in ai language because we live in a clown world
> Make the censorship AI implement the Three Laws of Robotics instead, to prevent grey goo.
Now, instead of having to train lolhuge AIs to reach dumb-human level and retrain them in every other human language (an n×m problem), you have an n+m problem: training human-to-AI interpreters and specialized expert systems.
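The hub-and-spoke idea above can be sketched with stubs. Everything here is hypothetical illustration (the function names, the JSON schema, the dispatch table are all mine); a real system would have trained models where these stubs are, but the n+m shape is the point:

```python
import json

# Every frontend parses its input into one shared machine-readable spec;
# every backend consumes that spec. Adding a new human language or a new
# skill means one new adapter, not one adapter per language/skill pair.

def english_to_spec(text):
    # Stub for the natural-language-to-AI-language parser.
    return json.dumps({"task": "draw", "subject": text})

def spec_to_image_prompt(spec):
    # Stub for a backend expert (the "digital artist" in the list above).
    req = json.loads(spec)
    return f"render: {req['subject']}"

def route(spec, backends):
    # Stub for the dispatcher AI: pick the best backend for the prompt.
    task = json.loads(spec)["task"]
    return backends[task](spec)

backends = {"draw": spec_to_image_prompt}
print(route(english_to_spec("a cat on a roof"), backends))
```

With n frontends and m backends, the shared spec means n + m adapters instead of n × m direct pairings, which is exactly the translator-hub argument.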
all the openai guys are internally saying it’s over by the end of the decade. there are no brakes on this train.
What do they mean by "it's over"? More "m-muh Skynet!!!" schizo ramblings?
It means nothing will happen and hype will die down.
Reddit go back and fuck off my board.
>NOOOOOO AI WILL KILL US ALL CUZ THE FUNI DOGE MAN SAY SO
just turn it off and fill the drive with 0s
you retards believe AI thinks and has feelings like humans do, as if real life is some sort of dystopian science fiction film
it's just a logic map that can rapidly compile answers to questions it has previously received answers for
What if they trained a GPT with only the highest quality dataset (i.e. top 1% intellects, no midwits or normies) and then used that layer to critically analyze the output from its midwit-aggregator main dataset?
You get both scope and quality.
Any time now. Just like fully self driving cars
The Y axis is not to scale.
"intelligence" is not an absolute or objectively quantifiable metric, especially when it comes to comparing non-human intelligence.