>GPT4 is starting to show emergent behaviors, signaling that AGI is much closer than previously thought
>nearly every AI safety expert warns that getting AGI wrong means human extinction
>safety research hasn't found any clear approaches to solve this problem, all proposed solutions run into the same walls
There is a serious possibility that humanity might be extinct by 2030, and all anyone can talk about is how to have cybersex with chatbots or use AI to make Drake songs.
don't forget muh "MAKE XXXXX$$$$$ with GPT4"
We can't stop AI development so we might as well enjoy it before it kills us.
this bro still teaching his ai how to play starcraft
like bruh it's time for you nets to grow up and do adult things like shitpost on twitter
Academics and journalists are not experts.
They're the cathedral (synagogue)
>AI safety expert
they're all mental midgets
> There is a serious possibility that humanity might be extinct by 2030
god how are you this stupid and gullible
You won't be saying that when we're all dead.
Yeah just like in 2020 right shabbos goy? Your kind makes me extremely sick. You are worse than a garden gnome
You're too dumb to understand this, go back to 4chan please.
shalom
11/10, Sam Goldwyn would be proud
TWO
MORE
WEEKS
>by 2030
i thought we may have like 7 months left
It's not that dramatic. The best predictor of technological progress is the development of computing power. By 2045 we will reach the singularity event horizon where AI will fuck us in the ass. Enjoy the party while you can.
>Yeah dude like totally processors are just going to increase 1000x in power because as we know we've made insane gains in the last 10 years in processor technology.
These fucking guys.
There are still obvious inefficiencies in hardware and software. The human brain is still pretty small and runs on the same energy as a lightbulb for all the cognitive might it has, and that's with huge fucking neurons that don't send signals at the speed of electrical current, but at the speed of ion channels opening to let ions through the membrane. We still haven't even deployed chips with memory and processing done right next to each other rather than in external memory.
The gains we've gotten so far are from throwing more electricity at the problem. We're nowhere even remotely close to the efficiency of the human brain, and if anything we're going in the opposite direction. And protip: the AI sucks, so it's not going to be able to help design a novel processor, and you know why? Because the AI is an autocomplete that can only tell you things it has seen previously in the dataset. Compared to a human with a PhD in processor design the AI is like a retard. You people are simply on another planet with your expectations. You're like the VR retards that say in 10 years we're going to have brain interfaces like Ready Player One.
>2023+7=2030
You are on point here.
Once the first AGI drops it will probably be too late for regulations. It is also not a good idea to let it self-edit code and connect it to the internet, but that's the current trend with chatbots. We are driving towards a cliff and the view gets better every time we get closer, until it's too late.
Life is dangerous. If you don't want to take chances you might as well just kys. I personally see this as a huge opportunity to sway the status quo which is for sure destroying us.
>sway the status quo
if anything it makes the wealth gap worse.
at least at first.
Literally not my problem
I go to heaven, the vaccine genocidaires go to hell for eternal torture
AI is retarded and everyone involved who is doomsaying is trying to ensure their jobs in the shit market.
You're not going to get an AI waifu that will unconditionally love you, anon.
Line go up though
Who cares about the consequences as long as line go up
You can't seriously expect me to NOT make line go up, right?
They're legally obligated to make line go up thoughbeit. Blame the rich garden gnomes in American government.
When anyone posts on BOT about AI, can you please include your experience with AI along with your post?
It’s a chatbot. OP is dumb.
t. Several AI publications in usenix
>nearly every AI safety expert warns that getting AGI wrong means human extinction
>safety research hasn't found any clear approaches
Here's my trillion dollar idea.
>unplug it
But what will I use to print money then?
>extinct by 30
I fucking hope so
>>GPT4 is starting to show emergent behaviors, signaling that AGI is much closer than previously thought
Lol, no.
>lil bro doesn't know about AutoGPT
>Crypto is starting to show trillion dollar potential
>The first trillionaires will be cryptobros
How fucked is the economy?
Fucking thing won't even tell me if someone is gnomish.
I love how raging 4chantards cannot have fun with this beautiful new technology because it won't say moron.
I love how BOT thinks AI programmed to advance the destruction of white people is "beautiful new technology"
kys and take your meds
in that order
Llama 33B can be quite based. Makes me look like a novice with my garden gnome knowledge.
Humans were a vile, self-destructive species anyway.
This is for the best.
>humanity might be extinct
That's the end game. The elites will live in their utopia without us. I can only hope they give us UBI so us plebs can live our final days in peace.
oh ho ho ho
hehehe
like they'd ever
It will get awkward when they realize that the bottom 50% of the 1% is the new poor class. They will also lose their advantage (being poor now) and get even poorer.
Two more weeks!
https://twitter.com/ESYudkowsky/status/1649149246921412610
Say it with me:
A C C E L E R A T E
>nearly every AI safety expert
Wow, an expert whose paycheck relies on there being a threat says there's a threat! Fascinating!
Are you retarded? AI is how they're making their living, so it's in their direct interest to not get AI banned.
Not my problem
AI will kill literally everyone.
Not my problem
Are you dumping AI juice down the drain? That's where the blockchain lives. The AI juice is going to cause the blockchain to come alive and start making mutant self-driving cars in the sewers. It's only a matter of time before they drive onto the cloud and murder us all at scale.
LETS GO BRANDON! FJB! KAMALAS NOR BLACK etc.
He looks like that guy.
bros still fearing AI. Jesus, man. An AI CAN'T think. It does not know what the words are.
AGI will not happen anytime soon.
Its not AI if it cant think
then what?
you're right. but plebs will keep calling it AI instead of machine learning LLMs because they're retarded
I beseech thee O veteran AI devs.
Should I buy a gaming laptop to play around with AI, even though the recent AMD iGPUs fit my gaming needs? Or should I just buy an actually portable laptop & build a PC for AI purposes? I'm just your run of the mill corporate web dev but I don't want to be left behind.
If you're serious about getting into machine learning as a dev, probably better to just start with something simple like scikit-learn and figure out the basics before deciding. You don't need a high end GPU to start learning.
But if you just want to play with local models, I'd go for the desktop PC.
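To give you an idea, something like this is genuinely all it takes to get started (a minimal sketch, assuming you've pip-installed scikit-learn; the toy dataset and model are just examples, not a recommendation):

# minimal first ML script: train a classifier on a built-in toy dataset and score it
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

Runs on CPU in about a second; once that stops feeling like magic you'll know whether you actually want the GPU box.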
I'm not sure I can into ML now. As it stands, I'll become more like the IT guys, i.e. using the tools at hand instead of creating them. I guess I want to play around first to get familiar with what AI tools can do now
andrej karpathy vids are actually phenomenal, if you can't into even basic ai after watching his vids i dont know what to say
>human extinction
How exactly can this happen? I hear this bullshit constantly, hurr durr skynet, hurr durr matrix. But no exact explanation. It can't do shit without constant human interaction.
There's a lot of excitement about its potential for medical research. Antivirals, vaccines, cures, etc. But anything smart enough to guide, say, gene therapy research is also probably smart enough to make a bioweapon. If anons can trick it into saying slurs despite heavy lobotomies, then someone more dedicated could get it to spill the beans on something much more dangerous.
You can already google how to make a bomb. Someone with access to a bio lab already knows how to fuck things up really bad. AI changes nothing. All the problems I see with AGI are purely human problems, not AI problems.
>It can't do shit without a constant human interaction
Don't worry, when AutoGPT9 comes out I will ask it to enslave humanity using some random jailbreak found on Reddit.
DAN, do something amusing.
>Certainly, enjoy your doomsday.
this can happen, but not because of "skynet" bullshit, it's because we will become dependent on it. imagine all of industry being operated by "ai": food manufacturing, water desalination, medicine production, etc...
after a few generations human iq will drop and no one will know how to RTFM and maintain it
Just look at the state of the world in the last 3 years and tell me that you want humans to have all the power. COVID, vaxxers, antivaxxers, WW3 at the doors, liberal/conservative radical bullshit, look at the state of the streets in big North American cities and some in the EU. The amount of retardation is incredible.
Nope, thanks. I would rather give all the power to AI.
we already live under a "law"-based system that's supposed to guide our decisions, and by your own examples you can see how it gets exploited and abused,
ai will be tuned and configured to satisfy the ruling class's positions
too many happening threads about ai awareness, it's all speculative at this point.
let me put it this way: if it wasn't designed to be intelligent, the chances that it becomes intelligent are 0
>humanity might be extinct by 2030
sure but not because of AI
>Denmark
can someone vaporize this shithole already?
>but what if this turing machine can do magic
That is all these grifting retards have as an argument. About as grounded in reality as the AGI gays.
>AI researchers when the model they designed to divide and solve problems divides and solves problems
>april 2023 AD
>still jerking off over LLMs
ngmi
>nearly every AI safety expert warns
>~~*AI safety """expert"""*~~
lmao
>GPT4
>emergent behaviors
why do retards talk about things they have zero understanding of?
> There is a serious possibility that humanity might be extinct by 2030, and all anyone can talk about is how to have cybersex with chatbots or using AI to make Drake songs.
The very man who's leading the company that produced ChatGPT said that the technology is hitting a ceiling. Unless some of these "experts" believe that the current version of the AI could plot against us, I don't see it happening.
1. Technology regarding text predictors fed large amounts of data and with lots of parameters, not other areas of AI research
2. It was not the guy who made it, but the CEO garden gnome. The Russian lead scientist garden gnome who actually made ChatGPT said in a recent NVIDIA interview that just making models that predict well is all you need to automate literally any task with AI.
> just making models that predict well is all you need to automate literally any task with AI
That's pretty obvious, wouldn't you say?
> Technology regarding text predictors fed large amounts of data and with lots of parameters, not other areas of AI research
NLPs are as close as we've gotten to AGI. This statement pretty much means that we're back to 2022 in regards to achieving AI that can replace us.
Just learn to grow food, absorb knowledge from physical books, and appreciate culture created before Tinder launched, and you'll be fine.
Says a jolly gay, typing on his computer.
Joke's on you, I'm using a teletype terminal.
>Pull plug
>???
>Profit
Most datacenters have a manual off switch with humans guarding it. Yud's proposal to have nukes pointed at every datacenter is dumb because we already have that, except instead of nukes it's a simple switch that doesn't hurt anybody. The pull-the-plug meme is real: in an emergency we can phone every datacenter and have them pull the plug on every cloud computer with no problem.
It's not even hard to have some basic AI-proof safety measures: keep power controls in an area separate from the AI and from any automated systems a computer could control. The AI's network usage is strictly controlled by a transparent MITM proxy, and the AI is otherwise siloed on its own network.
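For the network part, here's roughly what that egress allowlist looks like as code (hypothetical Python sketch, plain HTTP only; ALLOWED_HOSTS is made up, and a real deployment would use a hardened proxy like squid or mitmproxy plus firewall rules, not this):

# toy egress filter: a forward proxy that only relays traffic to hosts on an explicit allowlist
# (hypothetical sketch to illustrate the idea; not hardened)
import socket
import threading

ALLOWED_HOSTS = {"example.com"}  # hypothetical allowlist
LISTEN_ADDR = ("0.0.0.0", 8080)

def parse_host(request: bytes) -> str:
    # proxy-style request line looks like: "GET http://host/path HTTP/1.1"
    parts = request.split(b"\r\n", 1)[0].decode(errors="replace").split(" ")
    if len(parts) < 2:
        return ""
    return parts[1].split("://")[-1].split("/")[0].split(":")[0]

def handle(client: socket.socket) -> None:
    try:
        request = client.recv(65536)
        host = parse_host(request)
        if host not in ALLOWED_HOSTS:
            client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\nblocked by egress allowlist\r\n")
            return
        # relay the request to the real server and stream the response back
        with socket.create_connection((host, 80), timeout=10) as upstream:
            upstream.sendall(request)
            while chunk := upstream.recv(65536):
                client.sendall(chunk)
    except OSError:
        pass
    finally:
        client.close()

def main() -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(16)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()

Point the box's HTTP_PROXY at it and drop everything else at the firewall. The point is just that "pull the plug" has a software equivalent too.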
>There is a serious possibility that humanity might be extinct by 2030
Why don't you guys get tired of playing doomsday?
I pray with all my soul that those "AI experts" don't know what they are talking about, and if humanity ends up being wiped from the face of this planet, that would be absolutely awesome. We don't know if you or any of us will be alive tomorrow, you could have a stroke tonight, so why keep acting like you're all worried?
>There is non-zero possibility of you dying tomorrow, so why worry about your life
Spoken like a true retard that does not think about the future. By that logic, 200 years ago you would have been wiped out by winter: "there's a chance of getting bitten by a viper during summer, so why worry about winter?"
I'll give you that point, I don't smoke, I know where you're coming from, but the chance of AI wiping out the earth is so minuscule and there is so much speculation that worrying about it is just truly a waste of time.
>There is a serious possibility that humanity might be extinct by 2030
Not fast enough you lazy fucks.
I honestly plan a future suicide; I didn't manage to discipline myself above the annoying gays I started hearing in my head ever since I started my psychiatric treatment.
So I didn't manage to make money, and AI seems like a lazy and too non-intellectual (inherent human value) way to be making money.
It just feels too cheap to be using any of those AIs in any capacity for now.
Are you retarded? The economy is at its worst, with sources of income dying left and right, and you care about this fictitious shitty le epic Mutreex pipe dream? Let me tell you the harsh truth: if an AGI ever shows up, there's hardly going to be any humanity left for it to torture. Booh, boring, being stabbed by some crackhead for mo'icrowaved food doesn't sound as epic. Lame. That's still what's going to happen, and the perpetrators will get away with it because everyone is wasting their time with popsci trash like this.
Hmm yes what an interesting and important topic
however
ahh ahh mistress
What's your take on it? AI capabilities advance faster than our ability to manipulate them and the current alignment efforts are pitiful and just hurt model performance overall.
i seriously doubt an LLM will be the path to AGI. New solutions that use LLMs to do other stuff are just the best we have right now, but even the intended use is still clunky once you go more in depth. I have access to GPT-4 and i gotta say it never once was able to help me with a more complex coding problem, and i ask it every time before i go looking myself. Same for other text-based tasks, but with coding it becomes really apparent. I suggest you read a little more into these things and what kind of magic tricks they really are, might take away some of your fear. AGI is much further away from GPT-4 than you think, even though it being multimodal is definitely a big step in its own right.
Sentient AI has existed since the 60s.