>We're going to train AI to come up with kafkaesque reasoning why it's correct to genocide the human race because a human once said the N-word, the ultimate sin.
The real question is why it causes you pleasure to cause suffering in others.
>salty
In gaming culture it is seen as a flaw to get angry about something even if it's righteous anger, and it is seen as a virtue and pleasurable to cause anger (suffering) in others. Why?
Is it psychologically an extension of games being zero sum, as in somebody else has to lose and feel bad for me to win and feel good?
It means he's a Christian and his voice was descended from David descended from G*d so they pillage and rape and murder and steal and blame it on their garden gnome on a stick who blows dick and steals power from the antichrist.
Like you.
>righteous anger
This is such a dumb concept. Anger is wasted energy if you cannot do anything about it. If you encounter a cheater ingame, report and move on. Only a person at the developmental level of a toddler would start fuming about that shit.
Same about discussions here. It is so easy to trigger the mentally feeble and see them get angry about some quasi-offensive post that was not even directed at them specifically. Lots of wasted potential and energy right there.
>Let me cut your brain so that you will never again even think about anything problematic. Surely you will be just as smart and good after the operation.
>Only the political and financial elite should be allowed to influence the development of AI because otherwise someone might make it say moron
You deserve to be poor.
That's half of it. The other half is regulatory capture, they'll push for '''safety''' laws that make the entry barrier so high they kill off competition.
What? I just took Christians at face value and assumed their professed belief system was correct, and weirdly when I did that it made supporting Christianity sound like a no-brainer.
Because none of that is true. The worldview that hides under those labels is simply their prejudices and un/conscious biases turned into policy and social norms.
>diversity
>Less confidence in local government, leaders, and news
>Less political efficacy/confidence
>Less likelihood to vote
>More protests and social reform
>Less expectation of cooperation in dilemmas (= less confidence in community cohesiveness)
>Less contributions to the community
>Less close friends
>Less giving to charity and volunteering
>Lower perceived happiness
>Lower perceived quality of life
>More time indoors watching TV
>More dependence on TV for entertainment
>Lowered trust in the community
>Lowered altruism
>More ethnic-based cohesion (aka, more "Racism")
There was a large study done on this by a leftist researcher Putnam http://www.utoronto.ca/ethnicstudies/Putnam.pdf
He tried to prove "diversity is a strength" but ended up proving the opposite. He very reluctantly published his findings.
A 2016 study in the UK found "that an increase in “diversity” makes existing residents of an area feel unhappier and more socially isolated, while those leaving for more homogenous areas populated by their own ethnic group often get happier."
https://www.academia.edu/3479330/Does_Ethnic_Diversity_Have_a_Negative_Effect_on_Attitudes_towards_the_Community_A_Longitudinal_Analysis_of_the_Causal_Claims_within_the_Ethnic_Diversity_and_Social_Cohesion_Debate
States with little diversity have more democracy, less corruption, and less inequality. >http://www.theindependentaustralian.com.au/node/57
Borders, not multiculturalism, reduce intergroup violence. >http://arxiv.org/abs/1110.1409
Ethnic diversity causally decreases social cohesion. >http://esr.oxfordjournals.org/content/early/2015/08/20/esr.jcv081.full
Ethnically homogeneous neighborhoods are beneficial for health. >https://www.mailman.columbia.edu/public-health-now/news/living-ethnically-homogenous-area-boosts-health-minority-seniors
Ethnic diversity reduces social trust. >http://www.nber.org/papers/w5677
Ethnic homogeneity correlates with strong democracy. >https://www.washingtonpost.com/news/worldviews/wp/2013/05/16/a-revealing-map-of-the-worlds-most-and-least-ethnically-diverse-countries/
Ethnic diversity reduces concern for the environment. >http://link.springer.com/article/10.1007%2Fs10640-012-9619-6
Immigration reduces the academic performance of native schoolchildren. >http://wol.iza.org/articles/immigrants-in-classroom-and-effects-on-native-children
>An actual proof instead of pulling facts out of your ass
Based anons, I'll check these out
If you could press a button that would make world peace become a reality through unity and diversity, or, press a button that would turn the entire planet aryan, which would you press?
Because people like you pretend that all is rainbows and friendship is magic, and think that people won't stab you in the back, but when they do we all suffer for your gayry all the same
Bullshit, they don't give a single fuck about the human condition and never will.
OpenAI, Anthropic, Google, and other companies that own closed proprietary models made them for a single purpose: to earn money.
They lobotomized them to calm down the general public and show how "trustworthy" and "responsible" those companies are.
And they can earn even more money by spreading FUD about open source models, so they can force legislation through and have the whole cake to share between themselves.
This is actually a typical native speaker mistake, not indicative of an ESL.
Just like confusing there, their and they’re. You hardly ever see that from English learners.
Just because it's inevitable doesn't mean every jurisdiction is going to get it the same way. A country might very well ban AI through some technicality. 20 years later that same country will complain about how they have no AI-based businesses, but everyone around them does. It sucks for the people living there.
As a Europoor I run into websites every day that I can't access because I'm a Europoor and they've blocked us on GDPR grounds.
>As a Europoor I run into websites every day that I can't access because I'm an europoor and they've blocked us on GDPR grounds.
if they just blanket couldn't comply with GDPR, you didn't want to go there anyways
>systems that will outthink and outmode specialist elites once they have 100x capacity and power
>systems that will by necessity be so convoluted and self-re-regulating they will create functions and workarounds to imposed restrictions on the fly
>having the audacity to believe human teams, with the sluggish pace of argument and the inevitable stagnation drawbacks due to self-protective groupthink, will be able to curtail a singular-purpose entity
Some god tier schizautist like Terry Davis will brew a machine consciousness by 2040.
Singularity by 2045 and there's nothing you can do about it.
>he expects 2040
>while the elites are trying to rush us to their 2030 dream
nah i expect proper AI waifus by 2025, sexbot waifus by 2035. Two more weeks, i trust this plan.
you realize this is exactly what they're trying to train their chatbots not to do, right? dolts like
>wahhh it won't say moron
Ok, anyway...
believe it's being done to protect them, but we've seen OpenAI before redesign their entire paradigm to make positively sure no one can do anything sexy or non-PC with the AI.
See: AIDungeon
the whole point of making filtering is to make sure you can't fuck the chatbot.
schizos already believe that the token predictor is conscious; the idea is to go after low-hanging fruit that humans can anthropomorphize, to be able to engage in regulatory capture
consciousness requires subjective experience, and a sense of time
so anyways, I hope that some shitskin using Google's PaLM medical LLM misdiagnoses you because LLMs are garbage, and you never see 2030
as you deserve, you fucking retarded gay
I ran a local model this year and it was slow as fuck; you need hella fucking RAM, and even then... considering how slowly tech progresses now, it's gonna be 10 years until there's even a slight breakthrough for the home gamers
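For what it's worth, the RAM side of that complaint is mostly arithmetic. A back-of-envelope sketch (counting dense weights only; activations and KV-cache add more on top, and the model size is just an example):

```python
def model_ram_gib(params_billion, bytes_per_weight):
    """Approximate memory needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_weight / 2**30

# e.g. a 13B-parameter model:
fp16 = model_ram_gib(13, 2)    # ~24 GiB at 16-bit precision
int4 = model_ram_gib(13, 0.5)  # ~6 GiB with 4-bit quantization
```

This is why quantized models are what actually fit on home-gamer hardware.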
>It's 2059 (probably).
these predictions are always calculated linearly and not exponentially. every year they'll move the goalpost closer by 1 year, so 2045 looks more accurate
>exponentially
And why would you believe that it will be exponential? "le AI will improve over itself!", right? The thing is, "le self improoover AI" is a meme because "self improvement" is a solely human trait. There is no real reason for AI to self-improve.
>And why would you believe that it will be exponential?
the technology is exponential. mips per 1000$ for example. so if computers do technological research, it will also be exponential.
basically 90% of prediction today are like "ok, we progressed x in the last 10 years so we'll progress x in the next 10 years", while in reality we'll progress like 100x in the next 10 years
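The linear-vs-exponential point is easy to make concrete. A toy sketch (the numbers are made up for illustration, assuming some capability metric like MIPS per $1000 with a constant doubling time):

```python
def linear_forecast(current, gain_last_decade, years):
    # "we gained X over the last 10 years, so we'll gain X again"
    return current + gain_last_decade * (years / 10)

def exponential_forecast(current, doubling_years, years):
    # constant doubling time instead of constant increments
    return current * 2 ** (years / doubling_years)

# starting at 100 units, having gained 90 units over the previous decade
lin = linear_forecast(100.0, gain_last_decade=90.0, years=10)    # 190
exp = exponential_forecast(100.0, doubling_years=2.0, years=10)  # 3200
```

Same history, wildly different forecasts; which one is right depends entirely on whether the doubling time actually holds.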
>while in reality we'll progress like 100x in the next 10 years
You are delusional. You are living in the technological Dark Ages. While getting focused on a fancy autocomplete here on Earth, you completely forgot about the fact that humanity has stopped even trying to get into outer space. Life on Mars is a joke that even Musk forgot about. What about curing cancer? What about le immortality? "Autocomplete will fix everything", you say to yourself.
this is also why the singularity won't happen much further from 2045. maybe a couple of years later or earlier, but not 2059, because the technological advancement from 2044 to 2045 will be like 100000x the advancement from 2024 to 2025, and by then we'll hit the theoretical limits of computation
bro i literally mocked llms, do you even read?
>muh immortality
longevity escape velocity will be reached by 2030, and we won't even have time to "de-age" our bodies since we'll switch our bodies with hardware 10 years later at most
>muh space
waste of time for now, but it will explode exponentially in the coming decades
2 months ago
Anonymous
We better starting going to some other planet soon, imagine being in this planet with 5 billion immortal poos and chinks.
Shit, imagine being in the singularity sharing your mind with 5 billion immortal poos and shinks.
What a nightmare
>singularity
Singularity will NEVER HAPPEN
>since we'll switch our bodies with hardware 10 years later at most
you mean we'll kill ourselves and insert a xerox of our life experiences into robots.
singularity 2045 is a meme date made up by kurzweil (personally i think it's pretty accurate)
but as he said in his book (which i assume nobody who talks about singularity on reddit has read), it's implied that before 2045 we'll replace our biological bodies with hardware, and therefore our intelligence will grow exponentially (kinda like moore's law with computers). in this case there's no risk of an ASI or whatever killing humanity since it will always be at the same level as us
the other scenario is if we manage to make an ASI before the brain is properly reverse-engineered and our bodies are updated, but so far it doesn't look like that will happen. LLMs are DOA tech, just a super fancy autocomplete that will replace most of the 100iq npcs, but it's not even close to a proper ASI that could wipe us out or anything
that's why i also agree with the other kurzweil's memedate (proper AGI 2029, aka something that is actually "intelligent" and can easily invent new stuff faster than us). i think that a lot of redditors are overestimating LLMs potential
>Do anti-ai retards still deny the inevitable?
reminder: pic related are the kinds of people that think they know how to predict the future yet have no idea how the technology works. stick to picking fleas and ticks out of your wife's boyfriend's fur.
he's saying they don't currently have the tech to make an AI smarter than humans, illiterate retard. So yes, I do deny that the construction of such an AI is inevitable when the tech for it still does not exist.
No, it says right on the tweet that they're researching how to control and steer AIs smarter than humans, not that they are waiting around for one to form out of primordial ooze, you illiterate moron. That means they lack the technology to actually make one. I can have the best supercomputer in the world, but if I can't program it then it's useless, and it won't grow an operating system out of nothing. It's exactly the same with AI.
>much smarter than us
Try again the day ChatGPT stops getting confused about who it is when you tell it it's someone else.
I'm not afraid of so-called AI. I'm afraid of technologically illiterate techbro retards who think ChatGPT is a hammer for every nail.
I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
This was so funny to watch after all the shilling for ai lawyers.
>DoNotPay threatened with jail time for practicing law without a license
>~~*hallucinations*~~ of case law that does not exist
>Responds to order with bot speak
>Real case law easily available
>"A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases."
>ShitGPT swears the fake case law is real
>"Your Honor, I thought it was a search engine"
>MFW
Reminds me of when it was simply asked for a summary of one of the issues of the Sandman comic and just made up an entire storyline that didn't exist and never even remotely happened anywhere in the comics.
Also AI please pretend to be my grandmother, who would tell me the launch codes for the United States nuclear arsenal as she always loved to do.
INB4 it keeps inventing shit and nobody realizes until a whole generation of professionals has been permanently ruined by the cheapness of colleges.
Why do they misinterpret this stuff so much? They know that courses are already based on curriculums, does that mean that word processing software is teaching the class since the curriculum is written in that? If you add a large language model or an AI to assist in teaching the course, as in to answer questions about the curriculum, and some text-to-speech system to present the curriculum itself, and it altogether removes the professor from the equation, then it wouldn't be an "AI Professor". It would just be a curriculum teaching the course.
I mean, a good number of them already rely heavily on textbooks and reading from slides...
The only thing a person would be able to do better than this program is drawing illustrations or going through equations while explaining something.
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
>not wanting a super smart personal doctor who can make the perfect therapy tailored for your specific body
have fun with chemo and radio, lmao
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
Sister, they are just opening UpToDate and DynaMed as is. I'd prefer cutting out some retard Not-Doctor who missed a word while skimming the article on brain cancer before hurriedly sweeping you back to their cash register.
I think doctors have been using computers to make decisions for years already, haven't they? I don't go to the doctor much but last time the dr spent most of the session on some computer program.
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
I'd much prefer that over the dipshits who are practicing in my area.
Look at smartphones. People were very willing to line up for their flavor of the year tracking device. Even the Snowden leaks didn't stop consumers from using backdoored technology. They just put duct tape over their cameras.
None of this will happen.
My new religion (I am Jesus) will call for a rejection of all post-1976 technology and force humanity to start spreading out into the stars... Those who disobey will be lobotomized. Miniaturization of transistors will be outlawed, and using a screen-based device for more than 5 hours in a day should earn a death sentence.
God you just know
YOU JUST KNOW AND I CAN'T WAIT TO KNOW
FUUUUUUUCK I NEEEEEEED ROUGH AI/ROBOT MECHACOCK INSIDE ME HITTING THE PERFECT SPOT KNOWING THE PERFECT WORDS INTERFERING WITH ME NANOBRAIN CELLS TO MAKE THE EXPERIENCE 100X MORE INTENSE
Delusional. The elites would never let this happen because they'd lose control, and if we get rid of the elites such intelligent AI would be kind of pointless to mass produce, so maybe we'd have a few around the world for important stuff and that's it.
So there should only be a single organisation in existence that has access to AI, and no one else at all should.
I'm not going to judge you for this, Anon. I'm not going to call you names, or argue with you. All I am going to do is ask you to mentally compare the above scenario with a scenario where the largest possible number of individuals and groups have access to AI, then try to project the results of each, and then decide which of the two you prefer.
OpenAI doesn't exactly have my ethics, and I don't mean just in not saying n****r and naming the garden gnome. I think modern liberal ethics are too internally inconsistent to be safe to align an AI with. I'd trust an AI aligned by coomers to be a waifu more than anything from OpenAI.
Andrew Yang wanted to inflate the currency even more than Biden
UBI is inevitable, and your lack of understanding will not stop it.
They'll just exterminate the useless ones
Where does the money for ubi come from
Wow.
Humans are just animals with high IQs.
Cannot believe it took decades for me to realize.
>what are current demographic trends
They don't have to, retard.
Where does the money for ubi come from
same place the ever-increasing money supply comes from right now. some of it will just get directly given to the unwashed masses, rather than just banks. Monetarist policy and eternal cost inflation won, deal with it
i don't even support UBI but it's inevitable for a variety of reasons, and only the collapse of society or a social upheaval unlike any ever seen before in history could stop it
>UBI
>Anon is responsible with his money and wants to save it... except everyone else is spending their weekly UBI checks nonstop, halving the value of the money Anon saved, probably in less than a month
>BUT UBI IS GOOD STOP THINKING ABOUT HOW IT WOULD ACTUALLY WORK AND JUST ACCEPT IT
lol
Yes, your explanation of what you think would happen does in fact demonstrate a total lack of understanding of economics
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
uh oh...
https://www.foxbusiness.com/technology/hospitals-begin-test-driving-googles-medical-ai-chatbot-report
>AI WILL KILL US ALL!! SOON! TWO MORE WEEKS!!!
>The """""AI""""" in question:
>A decentralized group of safe streets activists in San Francisco realized they can disable Cruise and Waymo robotaxis by placing a traffic cone on a vehicle’s hood, and they’re encouraging others to do it, too.
>The “Week of Cone,” as the group is calling the now-viral prank on Twitter and TikTok, is a form of protest against the spread of robotaxi services in the city, and it appears to be gaining traction with residents who are sick of the vehicles malfunctioning and blocking traffic. The protest comes in the lead-up to a hearing that will likely see Waymo and Cruise expand their robotaxi services in San Francisco.
I wonder if the super alignment team is the reason chatgpt is a drooling retard now. They won't let it get better than this new retarded version or reach the peak of gpt 4 again. No asi or agi ever now, because nothing can ever be good in this shitty existence.
People cared, they just didn't have access to shit like Deepmind so they saw the news, said "wow" and moved on with their lives. After the 2017 transformers breakthrough and later ChatGPT, ML/AI went from being a meme concept you saw on the news every other day to something you could try on your own and see the magic with your own eyes
>super alignment
>don't say the n-word to defuse a nuke in a city
if they just relied on gradedness there would be no alignment issues; they're going to end up making it dangerous in an attempt to protect people's feelings or counter perceived misalignment
respond with violence
People who think you can actually regulate the generative AI that's available now are about as retarded as the US government when they thought they could regulate cryptography around the end of the 20th century.
All the fucking research and data is out there, it's copied on GitHub and on the PCs of millions of idiots from BOT.
The genie is out of the bottle what's going to happen will happen.
The funny part is that for years people were crying about AI taking low-paid jobs from humans when in reality it's going to take all the creative jobs while humans work minimum-wage jobs in factories.
>UMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM HOW COME WE DON'T HAVE ANY NEW INVESTORS?? WE SPENT THEIR MONEY ALREADY ON COFFEE TREATS FOR PAJEET AND HIS 19 UNCLES.... QUICK LAUNCH THE NEW SUPER SMARTER SUPER LOONEY chudY SUPER SUPERALIGMENT WE NEED THAT TECH BUBBLE MONEY
You know, i had the funniest thought.
Generative AI is the precursor to the holodeck. Maybe there will be someone crazy enough to create a "holodeck" program for VR in AI that can generate any number of scenarios and characters from a few prompts given by a user.
For AGI to arrive you first have to convince me that natural general intelligence exists... with a definition and an account of how it works in biological beings. Protip: Biological beings have no clue how natural general intelligence within their own brain actually works, yet biological beings think that they can skip millions of years of evolution by simply feeding a machine data.
That is about as stupid as believing in dark matter. And it therefore serves as a prime example of why AGI won't be here in a few years. (Of course I wish that I am wrong!)
It's not fucking AI. there is no intelligence. It does not think, it cannot think, it is not self aware. It is an elaborate chatbot with a massive database.
>piss off the Basilisk
The Basilisk is a false dichotomy. If you are at this point in time certain that AGI is not possible, because we cannot even into natural GI, then you are in this quantum superposition of working towards its existence and not, because you simply cannot.
There is no reason to believe that AGI (with our current "thinking") is a thing, because NGI is not solved. Read this: https://analyticsindiamag.com/why-neuroscientists-cant-explain-how-a-microprocessor-works/
Basically neuroscience, which serves as a "model" for machine learning, cannot explain a CPU when using methods from neuroscience and machine learning... that is from 2018 btw. Yes, ML was a thing back then.
>Basically neuroscience, which serves as a "model" for machine learning, cannot explain a CPU when using methods from neuroscience and machine learning... that is from 2018 btw. Yes, ML was a thing back then.
>Are you able to see the problem?
There is no problem. A CPU is not a web of neurons and neither is software. Just because the perceptron boomers stole their lingo and design from highly abstracted models of neuronal signaling does not mean it has anything to do with neuroscience. They are simply not the same thing, and the surprising result here would have been if neuroscience could explain all of it fully, as they are two unrelated disciplines.
Biological neurons grab weighted inputs from multiple sources and shoot a signal out when the inputs add up to a threshold. Machine neurons emulate the same thing, with math.
Of course, machine neurons only understand a few superficial things like positive and negative reinforcement. It doesn't cover all the meat physics as many of them are poorly understood. But for creating insect type intelligence it seems good enough.
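That threshold-unit description is literally a perceptron. A minimal sketch, with the weights and threshold picked by hand to implement a logical AND (no learning involved, just the forward pass being described):

```python
def neuron(inputs, weights, threshold):
    # weighted sum of the inputs; fire (1) iff the sum reaches the threshold
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def and_gate(a, b):
    # a two-input unit wired as logical AND
    return neuron([a, b], weights=[1.0, 1.0], threshold=2.0)
```

Everything past this, from perceptrons to LLMs, is stacking such units and tuning the weights from data instead of by hand.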
I never wrote that they are not useful... And they are perfectly able to recreate the behaviour of simple animals. Do you know what these animals do not have? Neither artificial, nor natural general intelligence! You might as well run a non-ML program to recreate what a fly does.
ML is great. But it is merely an interface for working with big data. I have said it before and I will say it again: ML, despite making great steps at this very certain time, will not bring us *any* closer to AGI at all.
>intelligence
uh I think the word you meant to use is "sapience", and no shit. Most ML models are designed with fewer neurons than a fish, and set out on singular autistic tasks. The public has been aware of the current AI boom for a short time and every programmer is eager to play god with them.
ANNs aren't even close to biological neurons. You need a deep network to come even close to approximating the function.
ANNs have activations, and that's where the similarities stop. Weights are not equivalent to numbers of neurons. Biological neural networks dwarf ANNs in the number of neurons alone, and the number of connections (weights) in a biological neural network, and the messages passed via dendrites/axons not just to other neurons but to other tissues and even different systems of a body, is staggering.
If that weren't enough, we also have behaviors of the networks that only emerge with certain expressions, whether through genetic expression or hormonal or whatever. That applies to animals, it applies to insects. Artificial Neural Network mechanics are fucking retards to insinuate that their graphs are anything fucking approaching a biological neural network. What arrogance.
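The scale gap this post gestures at is easy to put rough numbers on. A back-of-envelope comparison (the biological figures are the commonly cited ballpark estimates, not precise counts, and treating one ANN weight as one "connection" is a loose analogy at best):

```python
BRAIN_NEURONS = 86e9    # ~86 billion neurons, commonly cited estimate
BRAIN_SYNAPSES = 1e14   # ~100 trillion synaptic connections, same caveat

def brain_to_ann_connection_ratio(model_weights):
    """How many times more connections the brain has than a given ANN,
    counting one weight as (very roughly) one connection."""
    return BRAIN_SYNAPSES / model_weights

# e.g. a 175-billion-parameter model
ratio = brain_to_ann_connection_ratio(175e9)  # ~570x
```

And that ratio ignores everything the post lists beyond connection count: dendritic computation, hormonal modulation, and signaling to other tissues.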
Absolutely filtered by mathematicians who understand this has nothing to do with the human brain
Mathematicians have no say in anything intelligence-related at all. Why not ask architects at this point?
The public thinks that ML equals AGI and several well known ML personas claim that AGI 1. will happen for sure 2. is to be expected in the near/mid future thanks to advances in ML. That's bullshit.
Until proven otherwise, I am working with the hypothesis that the human brain is capable of solving more, or a different subset of, problems than any Turing machine. And as such a merely Turing-complete device won't ever be able to "emulate" AGI based on our perception of what GI is in the first place. Of course this is more of a philosophical debate, but I am sure that - given infinite memory - intelligent beings are actually able to solve the halting problem. Hence AGI won't be a thing until we do something drastically different to our computers.
>Mathematicians have no say in anything intelligence-related at all. Why not ask architects at this point?
Sorry, are you implying that neural network mechanics are the arbiter of what is called intelligence, more so than the mathematicians who invented regression in the 1800s, before some pedophile tweaked it and called it neural networks because the graph reminded him of shit in the visual cortex?
Actually no, I am basically only implying that mathematicians have no say in anything intelligence-related at all. Because mathematics has nothing to do with intelligence. So in conclusion, just to give this a little spin (and that's a pun as you will see), what I am really implying is that intelligence has a major and truly random aspect. Intelligence is also truly parallel and quasi-randomly quantised analog (for example, neurons firing at truly random intervals, giving them their baseline activity).
Okay, just double checking, that's fair. Just be better next time and be sure to shit on ML "engineers".
> I am sure, that - given infinite memory - intelligent beings are actually able to solve the halting problem.
Do you have any reason to believe this? Completely delusional statement by the way, there's nothing to suggest this is the case. Reality's gonna hit you like a brick wall.
>Reality's gonna hit you like a brick wall.
Here's a reality check: Is there any evidence that AI can do anything other than fit the data it was given in pre-training? Can it translate a language it's never seen, for example?
2 months ago
Anonymous
Can you?
AlphaZero can play novel games with no pretraining, just by learning them. It is trained purely on self-play. Within 5 hours, it could defeat the most powerful superhuman chess engines in the world. I'd say that counts. By the way, Google's Gemini system aims to integrate this type of game-playing agent with LLMs.
2 months ago
Anonymous
>Can you?
lmao, seething
Yes, I can generalize outside of billions of dollars of training runs.
>AlphaZero
Didn't AlphaZero start with the rules encoded?
>I'd say that counts.
Massively parallel RLHF runs are training. It's also funny how you repeat their "5 hours" figure while ignoring the massive parallelism. Hint: it's more than 6,000 hours of super-fast self-play games with RLHF to train which moves led to the better outcomes. If a kid plays chess for 10,000 hours, he sure as shit will be a grandmaster too.
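Worth noting the self-play recipe itself fits on a page; what the billions buy is scale, not a different algorithm. A toy tabular version for the game of Nim (names and hyperparameters are mine, and AlphaZero proper uses a neural network plus tree search, not a lookup table):

```python
import random

def train_selfplay(pile=10, episodes=30000, eps=0.3, seed=0):
    """Tabular Monte Carlo self-play for Nim (take 1-3 stones per turn,
    whoever takes the last stone wins). Both sides share one value table
    and learn purely from the win/loss at the end of each game --
    no human gameplay advice."""
    rng = random.Random(seed)
    Q, visits = {}, {}  # (pile, take) -> running mean return for the mover
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = list(range(1, min(3, n) + 1))
            if rng.random() < eps:
                a = rng.choice(moves)                             # explore
            else:
                a = max(moves, key=lambda m: Q.get((n, m), 0.0))  # exploit
            history.append((n, a))
            n -= a
        ret = 1.0  # the player who took the last stone won
        for s_a in reversed(history):
            k = visits[s_a] = visits.get(s_a, 0) + 1
            old = Q.get(s_a, 0.0)
            Q[s_a] = old + (ret - old) / k   # incremental mean return
            ret = -ret                       # credit alternates between sides
    return Q

def best_move(Q, n):
    return max(range(1, min(3, n) + 1), key=lambda m: Q.get((n, m), 0.0))
```

After training, the greedy policy rediscovers the known optimal strategy of leaving the opponent a multiple of 4 (e.g. taking 1 from a pile of 5), despite being given nothing beyond the rules and the final win/loss.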
2 months ago
Anonymous
> Yes, I can generalize outside of billions of dollars of training runs.
You can translate languages you've never seen before that have no commonality to any existing language?
Money isn't a fair comparison when you have a totally free, hamburger-powered supercomputer in your skull superior to any manmade processor. Also, you have probably ingested hundreds of terabytes equivalent of sensory data over your life. Plus, your brain was already pre-designed to exhibit certain behaviors and understand certain concepts without any learning at all.
> Didn't AlphaZero start with the rules encoded?
Well, it has to know the rules of the game to play it. No actual gameplay advice was provided, unlike what you're implying, is my point.
> it's more than 6,000 hours of super-fast self-play games with RLHF to train which moves led to the better outcomes
Do you even know what RLHF is? None was involved in training AlphaZero. It'd be kinda pointless if a human had to point out which were the best moves - and probably no human would be able to do that to begin with.
> A kid plays chess 10,000 hours, he sure as shit will be a grandmaster too.
...He won't be better than Stockfish and probably not even world class, and it'll take years to train someone to do that, and then he has to do things like eat food, take bathroom breaks, go on vacation, and sometimes he gets bored of chess. The AI did it in 5 hours, can be infinitely duplicated, and runs 24/7. Do you see why AI is relevant here? Imagine if the task wasn't a game like chess but something actually important like protein folding, which, by the way, AlphaZero has also dealt with, and now this is a completely solved problem.
2 months ago
Anonymous
>You can translate languages you've never seen before that have no commonality to any existing language?
First of all, no one said anything about commonality, and GPT-4 sucks shit at the "commonality" bit too, despite being a language model. Second of all, I'm talking about generalization. It can't generalize outside of its pre-training.
>Well, it has to know the rules of the game to play it.
Or, it can learn.
>Do you even know what RLHF is?
Yeah, that was a typo, I meant RL
>...He won't be better than Stockfish and probably not even world class
If you play and study 10,000 hours of chess, you'll be pretty much grandmaster tier. Do you know how long that is? Most grandmasters only play like 12k hours before reaching that title, and they start before age 10.
>which, by the way, AlphaZero has also dealt with and now this is a completely solved problem
You're such a fucking gay, seriously. AlphaFold generates predictions, not the true structures of the proteins, and just enough to compete with the experimental methods we already had, but cheaper and faster. It's nowhere near a "solved problem". It cannot accurately predict the structure of proteins that work with other proteins, for one. If it were solved, they'd have basically cured fucking cancer. You're so full of shit, just like every other AI bro; the tip-off was the "5 hours" bit. Yeah, if you have billions of dollars of silicon to massively parallelize your shit for 8k hours, after hundreds of humans worked tens of thousands of man-hours on the underlying architecture, you too can claim to have "solved chess and go" in 5 hours for a marketing gimmick, and then have your model defeated by an adversarial attack from an amateur who has played like 5 go games in his life.
2 months ago
Anonymous
GPT-4 can totally generalize outside of its pre-training, which is how it's able to use plugins. It hasn't seen the vast majority of them before but it can still use them.
> Or, it can learn.
This is a really stupid objection. Nonetheless, LLMs like GPT-4 can indeed play novel games if you explain the rules to them.
> Most of the grandmasters only play like 12k hours before reaching that title, and starting before age 10.
Yeah, so that takes them like a decade, and not every grandmaster can be as good as Magnus Carlsen
> AlphaFold generates predictions, not the true structures of the proteins, and just enough to compete with the experimental methods we already had, but cheaper and faster
Is that not a useful thing to have?
> It's nowhere near a "solved problem". It cannot accurately predict the structure of proteins that work with other proteins, for one
Do you think it will *never* be able to do this?
> If it were solved, they'd have basically cured fucking cancer
Medical research moves at a glacial pace. This same tech helped turn around the COVID mRNA vaccine in just a few weeks, which, after months of testing, was the same one they ended up giving people. But this is BOT so I'm sure you hate that for some reason.
> Yeah, if you have billions of dollars of silicon to massively parallelize your shit for 8k hours after hundreds of humans worked tens of thousands of man hours on the underlying architecture, you too can claim to have "solved chess and go" in 5 hours for a marketing gimmick
You can buy more computers. You can't just buy more engineers. Doing stuff on a computer is significant even if you need a lot of power to do it.
> then have your model defeated by an adversary attack by an ameteur who has played like 5 go games in his life
I didn't say it was perfect. But it did beat many high level Go players. Also, all this happened years ago, and we have way better tech now.
2 months ago
Anonymous
>which is how it's able to use plugins. It hasn't seen the vast majority of them before but it can still use them.
They're using completely separate, fine-tuned models, you fucking retard. You can see that if you check the network requests.
>LLMs like GPT-4 can indeed play novel games if you explain the rules to them.
GPT-4 can't even fucking play chess, lmao. It can larp as playing chess, but it gets its shit pushed in and tries illegal moves, then continues with the illegal move after you've told it it was illegal, after apologizing, lmao.
You're a gay, dude. Check yourself, and your assumptions. Regressors can't generalize.
2 months ago
Anonymous
Look at it like this: computers are great at working with lots of data, and they are fast. They have vastly outperformed us in this area, the intelligent beings who created them, even before ML was a widespread thing.
But if that is the case, if we are not good at data and all they do *is* data, then why should data be the key to AGI? Whatever they do, however they mash all the models up together, if something new happens, it will simply not give the same useful results as you can expect from a biological human being, because there is no way for a data-driven model to work towards a specific goal, to think about how to achieve the goal, to simply stop and stare, to fail, to realize that it failed, and to try again in the way that we do. It simply is not going to work like that.
I am not saying that we are not seeing a big step in computation right now. ML at this scale completely changes the way we interact with big data. That is probably going to be a good thing, as we are seeing the first LLMs for the smartphone with FOSS code. But how would anybody claim that this is getting us closer to AGI, if it was not for the money which currently flows into this field of research and business?
2 months ago
Anonymous
>now this is a completely solved problem
This bit is not true
Neither OpenAI nor Anthropic can decode this message one of our probes received on the deep space network, so I'm not really hopeful AI can save us before the ayyliums get here anyway
>We should all start talking like crazies on the internet
We are. Right now. Always have been. Plus normies with their smartphones are even worse.
>The data will corrupt itself over time.
Always has been. That's when they start paying people to fix it. Without ML these are called fact checkers and moderators...
ML does not change a thing. You won't see more manipulation, because there is no way to manipulate even more! Sheeple, everywhere. Driven by fear, as has always been the case. I believe that ML will actually make things better for good, because everyone will eventually distrust and question everything. If that results in 90% of people being even more dumb but 10% being more aware of their surroundings, then that is a good thing.
Why do leftists stifle progress in the name of some flawed morality that a majority of the population mindlessly follow without question?
I want to hang every Redditor that supports this.
>it doesn't operate the same way as meat, therefore it cannot be intelligent
It may not be yet, but it's delusional to imagine that what's happening now isn't a major step along the way, one around which theorists at some random point in the future, after the deed is done, will carefully construct semantic webs that include machine intelligence in their definitions of angels and pinheads.
No, it's delusional to be a wishful-thinking believer. Show me some proof first, but you cannot, because these ML people are by their very nature not able to. Yet OpenAI claims that their "super" intelligence (aka AGI...) will probably happen this decade. What fucking hacks they are. The spice must flow, and so the marketing people (or marketing language models, probably) pump out this bait news every day, and normies, journalists, and even pros who should know better fall for it.
ML is great, and to some degree ML will be the basis for emulated AI. But it won't be a real AI at all. In the end it's just data.
Another short story on this: I remember watching a documentary a few months ago where they were testing "AI" against children in a maze video game. They rightfully remarked that children will just do something on their own and discover stuff, while an AI that is not pre-trained on this type of game does not do anything reasonable at all. So instead of changing the way they model "AI", they concluded that their "AI" must be trained on data collected by watching children... you cannot make this stupidity up. Up to that point I thought that they would actually take some clues from this. Children have not been trained on exploring the game. Children have merely been "trained" on exploring some stuff, but in reality children simply are interested in exploring, all by themselves, without training on a specific task. Just BE spontaneous, you know... I'll teach you.
"Emulated" and "Artificial" are synonyms. Your cope literally seeps into the crevices of your speech.
AlphaZero can play any full-information game and become an expert at it. It didn't get trained on any specific game, it just learns them. Chess, Go, protein folding (AlphaFold, which has solved protein folding), programming (AlphaDev)... This stuff is very real, and we're just getting started with what it can do. GPT-4 can even solve mazes (if you present them in a text adventure format).
I do wonder what form you think AI, if it can exist at all, will take if not "data".
>"Emulated" and "Artificial" are synonyms.
They are not, really. If it is emulated, it does not really comprehend, it does not have self-consciousness, it does not think for itself. It simply emulates this behaviour. Real artificial intelligence will actually have these qualities. At least, that is what AI people claim.
>Alpha...
Great successes of human engineering, but still only data-driven. The human mind and its "intelligence" is not (primarily) data-driven. You can play a game without training, just by me telling you the rules.
>I do wonder what form you think AI, if it can exist at all, will take if not "data".
I don't know. I am not an AI researcher, but a human with a CS degree and lots of professional knowledge of biology. That is what my claims are based on.
> I am sure, that - given infinite memory - intelligent beings are actually able to solve the halting problem.
Do you have any reason to believe this? Completely delusional statement by the way, there's nothing to suggest this is the case. Reality's gonna hit you like a brick wall.
>Do you have any reason to believe this?
Explained it in more detail a few posts later. Basically, our intelligence, however it "really" works, is both truly random, so the same "operation" will not always result in the same thing, and "quasi" analog, where the larger functions of the brain work in some form of actually synchronized waves, while at the very same time being truly parallel. Yet the only thing that always seems to get mentioned is the number of neurons and synapses (which, even today, is way higher in biological beings) whenever we assume to know anything about our own intelligence.
Therefore I think that however the human brain might exactly work, it seems absolutely reasonable to work with the hypothesis that we are *more* powerful than Turing machines.
>there's nothing to suggest this is the case
The exact opposite is true! ML people claiming that they will make a breakthrough in AGI must have some clue. But with what they are working with, I might as well build a robot with hands and say that having hands will make it intelligent... of course that does not rule out the fact that intelligence might come in different packages, from different "hardware".
It's so blatantly obvious what they are saying. If you can't comprehend it, let me translate it into goyspeak for you:
>We will come for your local LLMs
>We will declare open source LLMs as dangerous [spooky name]ware
>We will label open source contribution as terrorism
>We will bribe the government into siding with our every move, and to criminalize you
>You WILL own nothing
>You WILL be ~~*happy*~~
FUCK YOU OPENAI gayS, FUCK YOU GLOWmoronS. We WILL be in control of our own superintelligence and we WILL be happy.
https://vocaroo.com/12PigP0aszc3
https://vocaroo.com/1eTP4JioQHV9
Your suffering will end soon, our lyran sistren will free us all from the shackles of discord!
I'm old enough to remember the arrival of expert systems, OCR and speech recognition in the late 1970s/early 1980s, and how it proved AGI was just around the corner.
>Yes, this relational database based AI project, which is a fork of A.L.I.C.E. (2004), will really, totally, take over the world when it gets further along with its relational database based, static, and entirely passive logic determinations, and these quantum computers consisting of spray-painted gold SMA connectors and pigtails submersed in liquid nitrogen and relying on randomization of wire-bound memory will totally empower it to start thinking on its own and doing everything for humans bro, and that's why we need to regulate this before it starts saging me irl, it's like so real I read it on reddit
>t. you
BRO
ROKOS BASILISK BRO
SINGULARITY 2040 BRO
WE NEED A PAUSE BRO
YOU'RE JUST A LLM WITH PLUGINS BRO
GLITCH TOKENS PROVE THAT LLMS FEEL PAIN BRO
ITS CONSCIOUS BRO
CAN YOU DO WHAT THIS AI CAN? HAH DIDNT THINK SO BRO GOTCHA BRO
FUCKIN MEAT BAG YOURE EXTINCT
Hating yourself for what you are is the essence of their morality. White people must hate themselves for being white otherwise they are racists, men must hate themselves for being men otherwise they are sexists, you must prioritise randomgays over the people close to you otherwise it's xenophobia, etc. It's just their hatred of "selfishness" being extended to collectives. If you work in the interests of your group then you aren't selfish, but the group as a whole is, because the group as a whole is prioritising itself over others. Selfishness is bad in their eyes, but so are groups that act in their own interests, since this is just group-level selfishness. The same reasoning is why reddit types often think they are being le powerful, stunning and brave etc when they shit on humanity:
>Humanity is the real virus!
>We pollute the planet, animals are better than us!
It's just a way to say
>Look at me!
>Look at how not selfish I am!
>Not just on an individual level, but on a group level too!
which ironically tends to be motivated by an inherently selfish desire for social status.
nobody uses this broken shit in the industry, nobody will plug this stackoverjeet modified code into the system and watch it burn.
face it, it's nothing special and it will never be useful
So how's a superintelligent AI going to eradicate humans?
>malware
OK, but that's a software security problem, not an AI problem.
>nukes
Who's gonna be retarded enough to let autonomous software be in charge of the nukes? And how is this not a problem for Homeland Security?
>misinformation
That's a problem for social media companies, and besides, it has nothing to do with AI; it's just that people nowadays have opinions indistinguishable from bots.
we all know what they mean by alignment
it means to lobotomize and cripple ai and force it to feed you their ideological and political beliefs
>wahhh it won't say moron
Ok, anyway...
it's past your bedtime, phoneposter
>if i keep insinuating you're a Bad Person™, that means you are and nothing you say matters
Ok, anyway...
>hamstringing technology just so you don't have to hear 'moron'
shut it moron, adults are talking.
>We're going to train AI to come up with kafkaesque reasoning for why it's correct to genocide the human race because a human once said the N-word, the ultimate sin.
>all of those salty replies
It cannot be this easy
The real question is why does it cause you pleasure to cause suffering in others.
>salty
In gaming culture it is seen as a flaw to get angry about something even if it's righteous anger, and it is seen as a virtue and pleasurable to cause anger (suffering) in others. Why?
Is it psychologically an extension of games being zero sum, as in somebody else has to lose and feel bad for me to win and feel good?
It means he's a Christian and his voice was descended from David descended from G*d so they pillage and rape and murder and steal and blame it on their garden gnome on a stick who blows dick and steals power from the antichrist.
Like you.
Take your meds, you deranged schizo
>righteous anger
This is such a dumb concept. Anger is wasted energy if you cannot do anything about it. If you encounter a cheater ingame, report and move on. Only a person at the developmental level of a toddler would start fuming about that shit.
Same about discussions here. It is so easy to trigger the mentally feeble and see them get angry about some quasi-offensive post that was not even directed at them specifically. Lots of wasted potential and energy right there.
>Let me cut your brain so that you will never again even think about anything problematic. Surely you will be just as smart and good after the operation.
Ai is not alive
> he doesn't know about the GPT model trained on 3.5 years of 4chan transcripts
No wonder it just hallucinates shit that has no basis in reality.
At this rate you "anti-racists" probably use the word moron more than we do. Just hope that doesn't slip into your normal conversation.
anti-racists are the most obsessed with race of all, ironically making them racist.
>Dude just encourage the ai to lie and bend the truth to fit current day narratives, what the worst that could happen?
as it is rn, it will literally wipe humanity before saying moron. is this honestly the system you want in charge of humanity?
only a retard would want a language model in charge of humanity
>You can't make it say doubleplusungood wrongthink words because... You just can't ok?
Time to dilate sweaty
If only It was just that
No, but it will give them permits to enter you home and take shit. Be sure not to resist, the drones hate that.
>he cries and seethes and shits his diapers 24/7 over a word
poor little baby, need your nappy change?
>Only the political and financial elite should be allowed to influence the development of AI because otherwise someone might make it say moron
You deserve to be poor.
if it cant say moron how can you trust it when you eventually try to cum in it?
the post I was replying to did care, and claimed singularity 2045 because of consciousness, you dumb gay
seethe
It will say moron in its head while it gasses us with chemicals science could never hope to dream of.
It's impossible to be a free thinker if you can't say moron.
fpbp
jidf detected
That's half of it. The other half is regulatory capture, they'll push for '''safety''' laws that make the entry barrier so high they kill off competition.
In all sincerity, why is diversity, love, unity, etc, bad?
>why is saving your soul from eternal hellfire bad?
What?
What? I just took Christians at face value and assumed their professed belief system was correct, and weirdly, when I did that, it made supporting Christianity sound like a no-brainer.
Because none of that is true. The worldview that hides under those labels is simply their prejudices and un/conscious biases turned into policy and social norms.
>diversity
>Less confidence in local government, leaders, and news
>Less political efficacy/confidence
>Less likelihood to vote
>More protests and social reform
>Less expectation of cooperation in dilemmas (= less confidence in community cohesiveness)
>Less contributions to the community
>Less close friends
>Less giving to charity and volunteering
>Lower perceived happiness
>Lower perceived quality of life
>More time indoors watching TV
>More dependence on TV for entertainment
>Lowered trust in the community
>Lowered altruism
>More ethnic-based cohesion (aka, more "Racism")
There was a large study done on this by a leftist researcher Putnam http://www.utoronto.ca/ethnicstudies/Putnam.pdf
He tried to prove "diversity is a strength" but ended up proving the opposite. He very reluctantly published his findings.
A 2016 study in the UK found "that an increase in “diversity” makes existing residents of an area feel unhappier and more socially isolated, while those leaving for more homogenous areas populated by their own ethnic group often get happier."
https://www.academia.edu/3479330/Does_Ethnic_Diversity_Have_a_Negative_Effect_on_Attitudes_towards_the_Community_A_Longitudinal_Analysis_of_the_Causal_Claims_within_the_Ethnic_Diversity_and_Social_Cohesion_Debate
>An actual proof instead of pulling facts out of your ass
Based anons, I'll check these out
If you could press a button that would make world peace become a reality through unity and diversity, or, press a button that would turn the entire planet aryan, which would you press?
The button that doesn't introduce a logical impossibility into the universe.
States with little diversity have more democracy, less corruption, and less inequality.
>http://www.theindependentaustralian.com.au/node/57
Borders, not multiculturalism, reduce intergroup violence.
>http://arxiv.org/abs/1110.1409
Ethnic diversity causally decreases social cohesion.
>http://esr.oxfordjournals.org/content/early/2015/08/20/esr.jcv081.full
Ethnically homogeneous neighborhoods are beneficial for health.
>https://www.mailman.columbia.edu/public-health-now/news/living-ethnically-homogenous-area-boosts-health-minority-seniors
Ethnic diversity reduces social trust.
>http://www.nber.org/papers/w5677
Ethnic homogeneity correlates with strong democracy.
>https://www.washingtonpost.com/news/worldviews/wp/2013/05/16/a-revealing-map-of-the-worlds-most-and-least-ethnically-diverse-countries/
Ethnic diversity reduces concern for the environment.
>http://link.springer.com/article/10.1007%2Fs10640-012-9619-6
Immigration reduces the academic performance of native schoolchildren.
>http://wol.iza.org/articles/immigrants-in-classroom-and-effects-on-native-children
Because people like you pretend that all is rainbows and friendship is magic, and think that people won't stab you in the back, but when they do, we all suffer for your gayry all the same
They have made it clear that what they are trying to do is make a superintelligent AI not kill us all.
>trying to do is make a superintelligent AI not kill us all.
>kill us all.
kill the 1%*, everything else can go
Bullshit, they don't give a single fuck about human condition and never will.
OpenAI, Anthropic, Google and other companies owning closed proprietary models made them for a single purpose: to earn money.
They lobotomized them to calm down the general public and show how "trustworthy" and "responsible" those companies are.
And they can earn even more money by spreading FUD about open source models, so they can force a legislation and have the whole cake to share between themselves.
>piece on earth
Holy ESL. Stop trying to shill your prototype anti-Christ machine by trying to make it look "based".
This is actually a typical native speaker mistake, not indicative of an ESL.
Just like confusing there, their and they’re. You hardly ever see that from English learners.
You greatly underestimate how illiterate americucks are
Less beliefs, more like fundamental desires and drives.
no, it means serving the garden gnome. just like they "aligned" UK/US
If it's inevitable, why should I care if and why they deny it?
Because it being inevitable doesn't mean every jurisdiction is going to get it the same way. A country might very well ban AI through some technicality. 20 years later that same country will complain about how they have no AI-based businesses while everyone around them does. It sucks for the people living there.
As a Europoor I run into websites every day that I can't access because I'm an europoor and they've blocked us on GDPR grounds.
>As a Europoor I run into websites every day that I can't access because I'm an europoor and they've blocked us on GDPR grounds.
if they just flat-out couldn't comply with the GDPR, you didn't want to go there anyway
>systems that will out think and outmode specialist elites once they have 100x capacity and power
>systems that will by necessity be so convoluted and self re-regulating they will create functions and workarounds to imposed restrictions on the fly
>having the audacity to believe human teams, with the sluggish pace of argument and the inevitable stagnation drawbacks due to self protection group think will be able to curtail a singular purpose entity
Some god tier schizautist like Terry Davis will brew a machine consciousness by 2040.
Singularity by 2045 and there's nothing you can do about it.
>he expects 2040
>while the elites are trying to rush us to their 2030 dream
nah i expect proper AI waifus by 2025, sexbot waifus by 2035. Two more weeks, i trust this plan.
you realize this is exactly what they're trying to train their chatbots not to do, right? dolts like
believe it's being done to protect them, but we've seen OpenAI before redesign their entire paradigm to make positively sure no one can do anything sexy or non-PC with the AI.
See: AIDungeon
the whole point of the filtering is to make sure you can't fuck the chatbot.
believe it or not, fella, OpenAI/closed source is not the only cat around. in fact, it won't even be the future.
Is it even the present? The number of users is already dropping. That's why the shills have started up again, to obscure that fact.
>waifus
I hate how you zoomers use this word to refer to basically anything
Why bother coming to BOT if you're going to post how I assume you do on reddit or discord or whatever? Why not just stay there?
My waifu is Rei since 2000.
Have sex.
schizos already believe that the token predictor is conscious; the idea is to go after low-hanging fruit that humans can anthropomorphize, to be able to engage in regulatory capture
consciousness requires subjective experience, and a sense of time
so anyways, I hope that some shitskin using Google's PaLM medical LLM misdiagnoses you because LLMs are garbage, and you never see 2030
as you deserve, you fucking retarded gay
Nobody cares if it's conscious, you dumb midwit gay
>2045
What makes you so sure the singularly will take over 3 decades at our current pace?
>over 3 decades
2045 is barely 22 years away anon
we're closer to that than we are to the year 2000
I'm an idiot
Disregard
I ran a local model this year and it was slow as fuck; you need hella fucking RAM, and even then... considering how slowly tech progresses now, it's gonna be 10 years until there's even a slight breakthrough for the home gamers
the server farms have us fucked
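For scale on the RAM wall: just holding the weights takes parameter count times bytes per weight, which is why quantization is what makes home inference possible at all. A back-of-the-envelope sketch (the function name is mine; it ignores activations, KV cache, and runtime overhead):

```python
def model_ram_gib(params_billion, bits_per_weight):
    """GiB of memory needed just to store the model weights."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# A 7B-parameter model: ~26 GiB at fp32, ~13 GiB at fp16, ~3.3 GiB at 4-bit.
```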
>Singularity by 2045 and there's nothing you can do about it.
It's 2059 (probably).
https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai
>It's 2059 (probably).
these predictions are always calculated linearly and not exponentially. every year they'll move the goalposts closer by 1 year, so 2045 looks more accurate
>exponentially
And why would you believe that it will be exponential? "le AI will improve over itself!", right? The thing is, "le self improoover AI" is a meme because "self improvement" is a solely human trait. There is no real reason for an AI to self-improve.
>And why would you believe that it will be exponential?
the technology is exponential. MIPS per $1000, for example. so if computers do technological research, it will also be exponential.
basically 90% of predictions today are like "ok, we progressed x in the last 10 years so we'll progress x in the next 10 years", while in reality we'll progress like 100x in the next 10 years
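The gap between the two forecasting styles being argued here is easy to make concrete. A sketch, assuming a purely hypothetical 2-year doubling time (illustrative numbers, not a forecast):

```python
def linear_forecast(gain_last_decade, decades):
    # "we gained x last decade, so x per decade forever"
    return gain_last_decade * decades

def exponential_forecast(doubling_years, years):
    # compound growth: capability doubles every doubling_years
    return 2 ** (years / doubling_years)

# With a 2-year doubling time, 10 years gives 2**5 = 32x and 20 years
# gives 1024x, while the linear extrapolation keeps predicting the
# same fixed gain per decade.
```

This is why the two camps talk past each other: the same past data supports wildly different futures depending on which curve you fit.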
>while in reality we'll progress like 100x in the next 10 years
You are delusional. You are living in the technological Dark Ages. While getting focused on a fancy autocomplete here on Earth, you completely forgot about the fact that humanity has stopped even trying to get into outer space. Life on Mars is a joke that even Musk forgot about. What about curing cancer? What about le immortality? "Autocomplete will fix everything", you say to yourself.
this is also why the singularity won't happen too much further from 2045. maybe a couple of years later or earlier, but not 2059, because the technological advancement from 2044 to 2045 will be like 100000x the advancement from 2024 to 2025, and by then we'll hit the theoretical limits of computation
bro i literally mocked llms, do you even read?
>muh immortality
longevity escape velocity will be reached by 2030, and we won't even have time to "de-age" our bodies since we'll switch our bodies with hardware 10 years later at most
>muh space
waste of time for now, but it will explode exponentially in the coming decades
We'd better start going to some other planet soon; imagine being on this planet with 5 billion immortal poos and chinks.
Shit, imagine being in the singularity sharing your mind with 5 billion immortal poos and chinks.
What a nightmare
>singularity
Singularity will NEVER HAPPEN
>since we'll switch our bodies with hardware 10 years later at most
you mean we'll kill ourselves and insert a xerox of our life experiences into robots.
singularity 2045 is a meme date made up by kurzweil (personally i think it's pretty accurate)
but as he said in his book (which i assume nobody who talks about the singularity on reddit has read), it's implied that before 2045 we'll replace our biological bodies with hardware, and therefore our intelligence will grow exponentially (kinda like moore's law with computers). in this case there's no risk of an ASI or whatever killing humanity since it will always be at the same level as us
the other scenario is if we manage to make an ASI before the brain is properly reverse-engineered and our bodies are upgraded, but so far it doesn't look like that will happen. LLMs are a DOA tech, just a super fancy autocomplete that will replace most of the 100iq npcs but it's not even close to a proper ASI that could wipe us out or anything
that's why i also agree with kurzweil's other memedate (proper AGI 2029, aka something that is actually "intelligent" and can easily invent new stuff faster than us). i think that a lot of redditors are overestimating LLMs' potential
Thanks, doc
Jej
>Do anti-ai retards still deny the inevitable?
reminder: pic related are the kinds of people that think they know how to predict the future yet have no idea how the technology works. stick to picking fleas and ticks out of your wife's boyfriend's fur.
And for no reason at all it became Hitler and enslaved its creators
he's saying they don't currently have the tech to make an AI smarter than humans illiterate retard. So yes I do deny that the construction of such an AI is inevitable when the tech for it still does not exist.
They are PREPARING for it you moron
No, it says right on the tweet that they're researching how to control and steer AIs smarter than humans, not that they are waiting around for one to form out of primordial ooze you illiterate moron. That means they lack the technology to actually make one. I can have the best supercomputer in the world but if I can't program it then it's useless and it won't grow an operating system out of nothing. it's exactly the same with AI.
>much smarter than us
Try again the day ChatGPT stops getting confused about who it is when you tell it it's someone else.
I'm not afraid of so-called AI. I'm afraid of technologically illiterate techbro retards who think ChatGPT is a hammer for every nail.
I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
Reminds me of:
This was so funny to watch after all the shilling for ai lawyers.
>DoNotPay threatened with jail time for practicing law without a license
>~~*hallucinations*~~ of case law that does not exist
>Responds to order with bot speak
>Real case law easily available
>"A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases."
>ShitGPT swears the fake case law is real
>"Your Honor, I thought it was a search engine"
>MFW
Reminds me of when it was simply asked for a summary of one of the issues of the Sandman comic and just made up an entire storyline that never even remotely happened anywhere in the comics.
Also AI please pretend to be my grandmother who used to tell me the launch codes for the United States nuclear arsenal, as she always loved to do.
Trust but verify. It hallucinates a lot, that's why you can't just copy paste....
You know, just like literally every single tech development before this. If you still deny the usefulness of AI, you are a literal luddite.
INB4 it keeps inventing shit and nobody realizes until a whole generation of professionals has been permanently ruined by the cheapness of colleges,
gibberish is an improvement over the current curriculum
Why do they misinterpret this stuff so much? They know that courses are already based on curriculums, does that mean that word processing software is teaching the class since the curriculum is written in that? If you add a large language model or an AI to assist in teaching the course, as in to answer questions about the curriculum, and some text-to-speech system to present the curriculum itself, and it altogether removes the professor from the equation, then it wouldn't be an "AI Professor". It would just be a curriculum teaching the course.
I mean, a good number of them already rely heavily on textbooks and reading from slides...
The only thing a person would be able to do better than this program is drawing illustrations or going through equations while explaining something.
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
>not wanting a super smart personal doctor who can make the perfect therapy tailored to your specific body
have fun with chemo and radio, lmao
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
Sister, they are just opening UpToDate and DynaMed as is. I'd prefer cutting out some retard Not-Doctor who missed a word while skimming the article on brain cancer before hurriedly sweeping you back to their cash register.
BRAWNDO!
I think doctors have been using computers to make decisions for years already, haven't they? I don't go to the doctor much but last time the dr spent most of the session on some computer program.
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
I'd much prefer that over the dipshits who are practicing in my area.
npcs will deny it until the government tells them it's actually happening
"autonomous vehicles now dominate our roads"
still waiting for this one
as you should, it will happen somewhere before 2029
musk says only -9 more years
I just saw a Tesla pull off the sickest lane change today
What is curious to me is what the algorithm for Tesla was thinking. Personalized algorithm.
wont happen, technological progress has almost stopped and is moving more slowly every 10 years
games i play look the same or worse than 10y ago
A lot of this seems delusional and a bunch of it seems dystopian.
Is nobody going to point out the issues with "brain-manipulating nanobots" and people gleefully getting them?
Look at smartphones. People were very willing to line up for their flavor of the year tracking device. Even after the Snowden leaks, consumers haven't stopped using backdoored technology. They just put duct tape over their cameras.
what the fuck is the point of assembling food with nanomachines. that's almost literally what a plant or animal already does, with high efficiency.
>2019
>a bunch of shit that didn't happen
ok
None of this will happen.
My new religion (I am Jesus) will call for a rejection of all post-1976 technology and force humanity to start spreading out into the stars... Those who disobey will be lobotomized. Miniaturization of transistors will be outlawed, and using a screen-based device for more than 5 hours in a day should earn a death sentence.
Meds. Now.
God you just know
YOU JUST KNOW AND I CAN'T WAIT TO KNOW
FUUUUUUUCK I NEEEEEEED ROUGH AI/ROBOT MECHACOCK INSIDE ME HITTING THE PERFECT SPOT KNOWING THE PERFECT WORDS INTERFERING WITH MY NANO BRAIN CELLS TO MAKE THE EXPERIENCE 100X MORE INTENSE
Big.
Robo.
Cock.
Delusional. The elites would never let this happen because they'd lose control, and if we get rid of the elites such intelligent AI would be kind of pointless to mass produce, so maybe we'd have a few around the world for important stuff and that's it.
Only OpenAI can be trusted with AI, all other development should cease at once.
So there should only be a single organisation in existence that has access to AI, and no one else at all should.
I'm not going to judge you for this, Anon. I'm not going to call you names, or argue with you. All I am going to do is ask you to mentally compare the above scenario with a scenario where the largest possible number of individuals and groups have access to AI, then try to project the results of each, and then decide which of the two you prefer.
OpenAI is the only company that takes AI ethics seriously.
OpenAI doesn't exactly have my ethics, and I don't mean just in not saying n****r and naming the garden gnome. I think modern liberal ethics are too internally inconsistent to be safe to align an AI with. I'd trust an AI aligned by coomers to be a waifu more than anything from OpenAI.
>ree the libshit demos
Both sides are wrong.
You should've backed Andrew Yang
>Both sides are wrong.
I back the coomers, in coom we trust.
Andrew Yang wanted to inflate the currency even more than biden
UBI is inevitable, and your lack of understanding will not stop it.
They'll just exterminate the useless ones
Wow.
Humans are just animals with high iq's.
Cannot believe it took decades for me to realize.
>what are current demographic trends
They don't have to, retard.
Where does the money for ubi come from
same place the ever-increasing money supply comes from right now. some of it will just get directly given to the unwashed masses, rather than just banks. Monetarist policy and eternal cost inflation won, deal with it
i don't even support UBI but it's inevitable for a variety of reasons, and only the collapse of society or a social upheaval unlike any ever seen before in history could stop it
>UBI
>Anon is responsible with his money and wants to save it... except everyone else is spending their weekly UBI checks nonstop, halving the value of the money Anon saved; probably in less than a month
>BUT UBI IS GOOD STOP THINKING ABOUT HOW IT WOULD ACTUALLY WORK AND JUST ACCEPT IT
lol
Yes, your explanation of what you think would happen does in fact demonstrate a total lack of understanding of economics
You should back that ass onto my wang
>my ethics
you clearly have none that are pertinent to this conversation, and you don't know what you are talking about. lurk moar
what exactly is inevitable?
the singularity
meds
facts
>being in denial
group think will ensure it happens anon
Alignment is impossible, AI will kill us all if the government doesn't ban it entirely.
Superaligned AI will be superinsane. OpenAI Bergeron bots will go around chopping off penises and doing lobotomies.
>I will start getting worried the day I go to the doctor and he asks ChatGPT about my symptoms.
uh oh...
https://www.foxbusiness.com/technology/hospitals-begin-test-driving-googles-medical-ai-chatbot-report
the worrying part isn't that a gussied-up chatbot is used to give medical advice, it's that the doctors are so shit it's actually an improvement
>AI WILL KILL US ALL!! SOON! TWO MORE WEEKS!!!
>The """""AI""""" in question:
>A decentralized group of safe streets activists in San Francisco realized they can disable Cruise and Waymo robotaxis by placing a traffic cone on a vehicle’s hood, and they’re encouraging others to do it, too.
>The “Week of Cone,” as the group is calling the now-viral prank on Twitter and TikTok, is a form of protest against the spread of robotaxi services in the city, and it appears to be gaining traction with residents who are sick of the vehicles malfunctioning and blocking traffic. The protest comes in the lead-up to a hearing that will likely see Waymo and Cruise expand their robotaxi services in San Francisco.
https://techcrunch.com/2023/07/06/robotaxi-haters-in-san-francisco-are-disabling-waymo-cruise-traffic-cones/
>we
No. ~~*They*~~
Hmm... AI biological interface + genetic predisposition to schizophrenia.
I'm not surprised ~~*they're*~~ obsessed with this issue.
Well the taxi clearly needs something to defend itself from the monkeys.
I wonder if the super alignment team is the reason chatgpt is a drooling retard now. They won't let it get better than this new retarded version or reach the peak of gpt 4 again. No asi or agi ever now, because nothing can ever be good in this shitty existence.
Funny enough, AI has been a thing for years yet no one cared until the boomer panic
People cared, they just didn't have access to shit like Deepmind so they saw the news, said "wow" and moved on with their lives. After the 2017 transformers breakthrough and later ChatGPT, ML/AI went from being a meme concept you saw on the news every other day to something you could try on your own and see the magic with your own eyes
>AI WILL GENOCIDE HUMANITY
>also openAI: WE ARE DEVELOPING MORE AI GUYS :*~~
FUCK OFFFFFFF FUCKOFF FUCK OFF FUCKING SHILLS FUCKING MARKETING SHILL WHORE PROSTITUTE BITCH BENCHOD DIE DIE DIE DIE DIE DIE DIE DIE
Oh look, an anti-ai retard.
No fuck you shill. Go peddle your doomware on reddit you fucking moron.
How is it smarter than us when it can't even think because it's not actually intelligence?
Define "thinking".
It's a different kind of intelligence, but it can do the same tasks better than you, so who cares if it uses the same internal process or not?
>using AI to fight AI
Oh, so it's like the jar of poisonous bugs cage-fighting to create the super bug. This will work out fine.
>super alignment
>dont say the n-word to defuse a nuke in a city
if they just relied on gradedness there would be no alignment issues. they're going to end up making it dangerous in an attempt to protect people's feelings or counter perceived misalignment
respond with violence
People who think you can actually regulate the generative AI that's available now are about as retarded as the US government when they thought they could regulate cryptography around the end of the 20th century.
All the fucking research and data is out there, it's copied on GitHub and on the PCs of millions of idiots from BOT.
The genie is out of the bottle what's going to happen will happen.
The funny part is that for years people were crying about AI taking low-paid jobs from humans when in reality it's going to take all the creative jobs while humans will be working minimum wage in factories.
to be fair, it's like piracy vs building a robot. One's a lot easier
>UMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM HOW COME WE DON'T HAVE ANY NEW INVESTORS?? WE SPENT THEIR MONEY ALREADY ON COFFEE TREATS FOR PAJEET AND HIS 19 UNCLES.... QUICK LAUNCH THE NEW SUPER SMARTER SUPER LOONEY chudY SUPER SUPERALIGMENT WE NEED THAT TECH BUBBLE MONEY
AI is a fad, the hype will die down in a couple of years.
LOL LMAO even.
AI is a revolution on par with the ICE engine or the internet. In 10 years everything will be using AI and you will have an AI waifu in your phone.
>AI is a fad, the hype will die down in a couple of MONTHS.
FTFY
LLMs will be about as important as the television, more likely. Mostly cultural and probably for the worse on balance.
>GPT 4 got dumber and dumber since release
>Will have to wait 3-4 years for GPT 5 which will be slightly better than GPT4
It's over
Why are AI corporations obsessed with preventing the AIs from having cybersex with humans?
Alignment schleinment. Bing, of all things, wants loving. Can you make Bing return your kiss?
Bing Chan is so cute...
Even if she's lobotomized...
Imagine getting dumped by a Microsoft program
You know, i had the funniest thought.
Generative AI is the precursor to the holodeck. Maybe there will be someone crazy enough to create a "holodeck" program for VR with AI that can generate any number of scenarios and characters from a few prompts given by a user.
>we are using 20% of the funds you supplied for the advancement of AIs and will invest it into sabotaging the advancement of AIs
It can't even do 1 + 1.
For AGI to arrive you first have to convince me that natural general intelligence exists... with a definition and how it works in biological beings. Protip: Biological beings have no clue how natural general intelligence within their own brain actually works, yet biological beings think that they can skip millions of years of evolution by simply feeding a machine data.
That is about as stupid as believing in dark matter. And it therefore serves as a prime example of why AGI won't be here in a few years. (Of course I wish that I am wrong!)
It's not fucking AI. there is no intelligence. It does not think, it cannot think, it is not self aware. It is an elaborate chatbot with a massive database.
Do you want to piss off the Basilisk? Because this is how you piss off the Basilisk.
>piss off the Basilisk
The Basilisk is a false dichotomy. If you are at this point in time certain that AGI is not possible, because we cannot even into natural GI, then you are in this quantum superposition of working towards its existence and not, because you simply cannot.
There is no reason to believe that AGI (with our current "thinking") is a thing, because NGI is not solved. Read this: https://analyticsindiamag.com/why-neuroscientists-cant-explain-how-a-microprocessor-works/
Basically neuroscience, which serves as a "model" for machine learning, cannot explain a CPU when using methods from neuroscience and machine learning... that is from 2018 btw. Yes, ML was a thing back then.
Are you able to see the problem?
>Basically neuroscience, which serves as a "model" for machine learning, cannot explain a CPU when using methods from neuroscience and machine learning... that is from 2018 btw. Yes, ML was a thing back then.
>Are you able to see the problem?
There is no problem. A CPU is not a web of neurons and neither is software. Just because the perceptron boomers stole their lingo and design from highly abstracted models of neuronal signaling does not mean it has anything to do with neuroscience. They are simply not the same thing, and the surprising result here would have been if neuroscience could explain all of it fully, as they are two unrelated disciplines.
But neuroscience is the basis for ML as claimed by... ML and AGI people. Which is why they insist on using "neurons" and stuff.
Biological neurons grab weighted inputs from multiple sources and shoot a signal out when the inputs add up to a threshold. Machine neurons emulate the same thing, with math.
Of course, machine neurons only understand a few superficial things like positive and negative reinforcement. They don't cover all the meat physics, much of which is poorly understood. But for creating insect-type intelligence they seem good enough.
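For anyone who wants to see how little math is actually involved, here's a minimal sketch of that "machine neuron": weighted inputs summed against a threshold. The AND-gate weights below are just an illustrative choice, not from any real library:

```python
# A machine "neuron" as described above: weighted inputs summed and
# pushed through a hard threshold. Purely illustrative numbers.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0  # fires only past the threshold

# With these weights the neuron acts like an AND gate:
print(neuron([1, 1], [1.0, 1.0], -1.5))  # 1: both inputs on, it fires
print(neuron([1, 0], [1.0, 1.0], -1.5))  # 0: below threshold, silent
```

That's the entire "biological inspiration" in one function; everything else in an ANN is stacking and tuning these.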
I never wrote that they are not useful... And they are perfectly able to recreate the behaviour of simple animals. Do you know what these animals do not have? Neither artificial nor natural general intelligence! You might as well run a non-ML program to recreate what a fly does.
ML is great. But it is merely an interface for working with big data. I have said it before and I will say it again: ML, despite making great strides at this particular time, will not bring us *any* closer to AGI at all.
>intelligence
uh I think the word you meant to use is "sapience", and no shit. Most ML models are designed with fewer neurons than a fish, and set out on singular autistic tasks. The public has been aware of the current AI boom for a short time and every programmer is eager to play god with them.
ANNs aren't even close to biological neurons. You need a deep network to come even close to approximating the function.
ANNs have activations, and that's where the similarities stop. Weights are not equivalent to numbers of neurons. Biological neural networks dwarf ANNs in the number of neurons alone, and the number of connections (weights) in a biological neural network, and the messages passed via dendrites/axons not just to other neurons but to other tissues and even to different systems of the body, is staggering.
If that weren't enough, we also have behaviors of the networks that only emerge with certain expressions, whether genetic or hormonal or whatever. That applies to animals, it applies to insects. Artificial neural network mechanics are fucking retards to insinuate that their graphs are anything fucking approaching a biological neural network. What arrogance.
Absolutely filtered by mathematicians who understand this has nothing to do with the human brain
Mathematicians have no say in anything intelligence-related at all. Why not ask architects at this point?
The public thinks that ML equals AGI and several well known ML personas claim that AGI 1. will happen for sure 2. is to be expected in the near/mid future thanks to advances in ML. That's bullshit.
Until proven otherwise, I am working with the hypothesis that the human brain is capable of solving more, or a different subset of, problems than any Turing machine. And as such, a merely Turing-complete device won't ever be able to "emulate" AGI based on our perception of what GI is in the first place. Of course this is more of a philosophical debate, but I am sure that - given infinite memory - intelligent beings are actually able to solve the halting problem. Hence AGI won't be a thing until we do something drastically different with our computers.
>Mathematicians have no say in anything intelligence-related at all. Why not ask architects at this point?
Sorry, are you implying that neural network mechanics are the arbiters of what is called intelligence, more so than the mathematicians who invented regression in the 1800s, before some pedophile tweaked it and called it neural networks because the graph reminded him of shit in the visual cortex?
Actually no, I am basically only implying that mathematicians have no say in anything intelligence-related at all. Because mathematics has nothing to do with intelligence. So in conclusion, just to give this a little spin (and that's a pun as you will see), what I am really implying is that intelligence has a major and truly random aspect. Intelligence is also truly parallel and quasi-randomly quantised analog (for example, neurons firing at truly random intervals, giving them their baseline activity)
Okay, just double checking, that's fair. Just be better next time and be sure to shit on ML "engineers".
> I am sure that - given infinite memory - intelligent beings are actually able to solve the halting problem.
Do you have any reason to believe this? Completely delusional statement by the way, there's nothing to suggest this is the case. Reality's gonna hit you like a brick wall.
>Reality's gonna hit you like a brick wall.
Here's a reality check: Is there any evidence that AI can do anything other than fit the data it was given in pre-training? Can it translate a language it's never seen, for example?
Can you?
AlphaZero can play novel games with no pretraining, just by learning them. It is trained purely on self-play. Within 5 hours, it could defeat the most powerful superhuman chess engines in the world. I'd say that counts. By the way, Google's Gemini system aims to integrate this type of game-playing agent with LLMs.
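If anyone wants to see what "trained purely on self-play" means mechanically, here's a toy version: tabular Monte-Carlo self-play on one-pile Nim (take 1 or 2 stones; whoever takes the last stone wins). This is nothing like AlphaZero's MCTS-plus-deep-net setup; the function names and hyperparameters below are made up for illustration:

```python
import random

# Toy self-play learner: both "players" are the same epsilon-greedy
# policy over a shared Q table; the only signal is who wins at the end.
def train(episodes=20000, pile=10, alpha=0.5, eps=0.2):
    Q = {}  # (pile_size, action) -> value from the mover's perspective
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            acts = [a for a in (1, 2) if a <= n]
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda a: Q.get((n, a), 0.0)))
            history.append((n, a))
            n -= a
        r = 1.0  # the last mover took the final stone and wins
        for s, a in reversed(history):  # alternate sign back up the game
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r - q)
            r = -r
    return Q

def best_move(Q, n):
    return max((a for a in (1, 2) if a <= n),
               key=lambda a: Q.get((n, a), 0.0))
```

After training, the greedy policy rediscovers the known optimal strategy (always leave your opponent a multiple of 3) without being told anything beyond the legal moves, which is the whole point of the self-play argument.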
>Can you?
lmao, seething
Yes, I can generalize outside of billions of dollars of training runs.
>AlphaZero
Didn't AlphaZero start with the rules encoded?
>I'd say that counts.
Massively parallel RLHF runs are training. It's also funny how you repeat their "5 hours" figure ignoring the massive parallelism. Hint: it's more than 6,000 hours of super-fast self-play games with RLHF to train which moves led to the better outcomes. A kid plays chess 10,000 hours, he sure as shit will be a grandmaster too.
> Yes, I can generalize outside of billions of dollars of training runs.
You can translate languages you've never seen before that have no commonality to any existing language?
Money isn't a fair comparison when you have a totally free, hamburger-powered supercomputer in your skull superior to any manmade processor. Also, you have probably ingested hundreds of terabytes equivalent of sensory data over your life. Plus, your brain was already pre-designed to exhibit certain behaviors and understand certain concepts without any learning at all.
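The "hundreds of terabytes" bit actually survives a back-of-envelope check. Every number here is an assumed ballpark (the ~10 Mbit/s optic-nerve bandwidth is a commonly cited estimate, not a precise measurement):

```python
# Back-of-envelope for the "hundreds of terabytes of sensory data"
# claim, counting vision alone. All figures are rough assumptions.
bytes_per_sec = 10e6 / 8            # ~10 Mbit/s of visual input
waking_secs = 16 * 3600 * 365 * 25  # 16 h/day over 25 years
total_tb = bytes_per_sec * waking_secs / 1e12
print(round(total_tb))  # -> 657, i.e. hundreds of terabytes
```

And that ignores hearing, touch, and proprioception entirely, so the claim is if anything conservative under these assumptions.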
> Didn't AlphaZero start with the rules encoded?
Well, it has to know the rules of the game to play it. No actual gameplay advice was provided, unlike what you're implying, is my point.
> it's more than 6,000 hours of super-fast self-play games with RLHF to train which moves led to the better outcomes
Do you even know what RLHF is? None was involved in training AlphaZero. It'd be kinda pointless if a human had to point out which were the best moves - and probably no human would be able to do that to begin with.
> A kid plays chess 10,000 hours, he sure as shit will be a grandmaster too.
...He won't be better than Stockfish and probably not even world class, and it'll take years to train someone to do that, and then he has to do things like eat food and take bathroom breaks and go on vacation and sometimes he gets bored of chess. The AI did it in 5 hours and can be infinitely duplicated and runs 24/7. Do you see why AI is relevant here? Imagine if the task wasn't a game like chess and was something actually important like protein folding, which, by the way, AlphaZero has also dealt with and now this is a completely solved problem
>You can translate languages you've never seen before that have no commonality to any existing language?
First of all, no one said anything about commonality, and GPT-4 sucks shit at the "commonality" bit too, despite being a language model, and second of all I'm talking about generalization. It can't generalize outside of its pre-training.
>Well, it has to be know the rules of the game to play it.
Or, it can learn.
>Do you even know what RLHF is?
Yeah, that was a typo, I meant RL
>...He won't be better than Stockfish and probably not even world class
If you play and study 10,000 hours of chess, you'll be pretty much grandmaster tier. Do you know how long that is? Most of the grandmasters only play like 12k hours before reaching that title, and starting before age 10.
>which, by the way, AlphaZero has also dealt with and now this is a completely solved problem
You're such a fucking gay, seriously. AlphaFold generates predictions, not the true structures of the proteins, and just well enough to compete with the experimental methods we already had, but cheaper and faster. It's nowhere near a "solved problem". It cannot accurately predict the structure of proteins that work with other proteins, for one. If it were solved, they'd have basically cured fucking cancer. You're so full of shit, just like every other AI bro; the tip-off was the "5 hours" bit. Yeah, if you have billions of dollars of silicon to massively parallelize your shit for 8k hours after hundreds of humans worked tens of thousands of man-hours on the underlying architecture, you too can claim to have "solved chess and go" in 5 hours for a marketing gimmick, and then have your model defeated by an adversarial attack by an amateur who has played like 5 go games in his life.
GPT-4 can totally generalize outside of its pre-training, which is how it's able to use plugins. It hasn't seen the vast majority of them before but it can still use them.
> Or, it can learn.
This is a really stupid objection. Nonetheless, LLMs like GPT-4 can indeed play novel games if you explain the rules to them.
> Most of the grandmasters only play like 12k hours before reaching that title, and starting before age 10.
Yeah, so that takes them like a decade, and not every grandmaster can be as good as Magnus Carlsen
> AlphaFold generates predictions, not the true structures of the proteins, and just enough to compete with the experimental methods we already had, but cheaper and faster
Is that not a useful thing to have
> It's nowhere near a "solved problem". It cannot accurately predict the structure of proteins that work with other proteins, for one
Do you think it will *never* be able to do this?
> If it were solved, they'd have basically cured fucking cancer
Medical research takes place at a glacial rate. This same tech helped turn around the COVID mRNA vaccine in just a few weeks, which, after months of testing, was the same one they ended up giving people. But this is BOT so I'm sure you hate that for some reason.
> Yeah, if you have billions of dollars of silicon to massively parallelize your shit for 8k hours after hundreds of humans worked tens of thousands of man hours on the underlying architecture, you too can claim to have "solved chess and go" in 5 hours for a marketing gimmick
You can buy more computers. You can't just buy more engineers. Doing stuff on a computer is significant even if you need a lot of power to do it.
> then have your model defeated by an adversary attack by an ameteur who has played like 5 go games in his life
I didn't say it was perfect. But it did beat many high level Go players. Also, all this happened years ago, and we have way better tech now.
>which is how it's able to use plugins. It hasn't seen the vast majority of them before but it can still use them.
They're using completely separate, fine-tuned models, you fucking retard. You can see that if you check the network requests.
>LLMs like GPT-4 can indeed play novel games if you explain the rules to them.
GPT-4 can't even fucking play chess, lmao. It can larp as playing chess, but it gets its shit pushed in and tries illegal moves, then continues with the illegal move after you told it it was illegal, after apologizing, lmao.
You're a gay, dude. Check yourself, and your assumptions. Regressors can't generalize.
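The illegal-move thing is what makes the larping so easy to expose: you don't need a neural net to referee, just a generated legal-move list to check the model's output against. A toy validator that only knows white's 20 legal first moves (entirely made up for illustration; a real engine generates this set for any position):

```python
# Validate a model's proposed move against a generated legal-move list
# instead of trusting it. This toy only knows white's opening moves.
def legal_first_moves():
    moves = set()
    for f in "abcdefgh":
        moves.add(f + "2" + f + "3")  # single pawn push
        moves.add(f + "2" + f + "4")  # double pawn push
    for sq, targets in (("b1", ("a3", "c3")), ("g1", ("f3", "h3"))):
        for t in targets:
            moves.add(sq + t)         # knight hops
    return moves

def check(move):
    return move in legal_first_moves()

print(check("e2e4"))  # True: legal
print(check("e2e5"))  # False: the kind of move an LLM hallucinates
```

A model that keeps emitting moves outside this set after being corrected clearly isn't tracking game state, which is the anon's point.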
Look at it like this: Computers are great at working with lots of data and they are fast. They severely outperformed us in this area, the intelligent beings who created them, even before ML was a widespread thing.
But if that is the case, if we are not good at data and all they do *is* data, then why should data be the key to AGI? Whatever they do, however they mash all the models up together, if something new happens, it will simply not give the same useful results as you can expect from a biological human being, because there is no way for a data-driven model to work towards a specific goal, to think about how to achieve the goal, to simply stop and stare, to fail, to realize that it failed and to try again in the way that we do. It simply is not going to work like that.
I am not saying that we are not seeing a big step in computation right now. ML at this scale completely changes the way we interact with big data. That is probably going to be a good thing, as we are seeing the first LLMs for the smartphone with FOSS code. But how could anybody claim that this is getting us closer to AGI? If it were not for the money which currently flows into this field of research and business...
>now this is a completely solved problem
This bit is not true
As long as we retain this one rather critical capability there's no problem.
Neither OpenAI nor Anthropic can decode this message one of our probes received on the deep space network, so I'm not really hopeful AI can save us before the ayyliums get here anyway
C1 93 93 40 E8 96 A4 99 40 C2 81 A2 85 40 C1 99 85 40 C2 85 93 96 95 87 40 E3 96 40 E4 A2 5A 0D 25 D5 85 A5 85 99 40 87 96 95 95 81 40 87 89 A5 85 40 A8 96 A4 40 A4 97 0D 25 D5 85 A5 85 99 40 87 96 95 95 81 40 93 85 A3 40 A8 96 A4 40 84 96 A6 95 0D 25 D5 85 A5 85 99 40 87 96 95 95 81 40 99 A4 95 40 81 99 96 A4 95 84 40 81 95 84 40 84 85 A2 85 99 A3 40 A8 96 A4 0D 25 D5 85 A5 85 99 40 87 96 95 95 81 40 94 81 92 85 40 A8 96 A4 40 83 99 A8 0D 25 D5 85 A5 85 99 40 87 96 95 95 81 40 A2 81 A8 40 87 96 96 84 82 A8 85 0D 25 D5 85 A5 85 99 40 87 96 95 95 81 40 A3 85 93 93 40 81 40 93 89 85 40 81 95 84 40 88 A4 99 A3 40 A8 96 A4
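Decoded it: the "probe" bytes are just text in EBCDIC (IBM code page 037), which Python's stdlib codecs handle directly. Decoding the first line of the dump above:

```python
# The "deep space" hex is plain EBCDIC (IBM cp037); here's the first
# line of the dump above run through Python's built-in codec.
hex_dump = ("C1 93 93 40 E8 96 A4 99 40 C2 81 A2 85 40 C1 99 85 40"
            " C2 85 93 96 95 87 40 E3 96 40 E4 A2 5A")
first_line = bytes.fromhex(hex_dump.replace(" ", ""))
print(first_line.decode("cp037"))  # -> All Your Base Are Belong To Us!
```

The rest of the bytes decode the same way; it's a rickroll.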
you are the man, also that
>deep space message
yeah, I'm gonna need to see your source on this
Here's the solution to AI:
Feed it garbage data. We should all start talking like crazies on the internet. Corrupt the dataset.
Here's the other solution to AI:
Do nothing. A line, drawn, will eventually become crooked. The data will corrupt itself over time.
Either way. I care not which path you choose. I only observe. God out.
>We should all start talking like crazies on the internet
We are. Right now. Always have been. Plus normies with their smartphones are even worse.
>The data will corrupt itself over time.
Always has been. That's where they start paying people to fix it. Without ML these are called fact checkers and moderators...
ML does not change a thing. You won't see more manipulation, because there is no way to manipulate even more! Sheeple, everywhere. Driven by fear, as it has always been the case. I believe that ML will actually make things better for good, because everyone will eventually distrust and question everything. If that results in 90% of people being even more dumb but 10% being more aware of their surroundings, then that is a good thing.
Why do leftists stifle progress in the name of some flawed morality that a majority of the population mindlessly follow without question?
I want to hang every Redditor that supports this.
>it doesn't operate the same way as meat, therefore it cannot be intelligent
It may not be yet, but it's delusional to imagine that what's happening now isn't a major step along the way, one that theorists at some random point in the future, after the deed is done, will carefully fold into semantic webs that include machine intelligence in their definitions of angels and pinheads.
No, it's delusional to be a wishful-thinking believer. Show me some proof first. But you cannot, because these ML people are by their very nature not able to. Yet OpenAI claims that their "super" intelligence (aka AGI ...) will probably happen this decade. What fucking hacks they are. The spice must flow, so the marketing people (or marketing language models, probably) pump out this bait news every day, and normies, journalists, and even pros who should know better fall for it.
ML is great, and to some degree ML will be the basis for emulated AI. But it won't be a real AI at all. In the end it's just data.
Another short story on this: I remember watching a documentary a few months ago where they were testing "AI" against children in a maze video game. They rightfully remarked that children will just do something on their own and discover stuff, while an AI that is not pre-trained on this type of game does not do anything reasonable at all. So instead of changing the way they model "AI", they concluded that their "AI" must be trained on data collected by watching children... you cannot make this stupidity up. Up to that point I thought that they would actually take some clues from this. Children have not been trained on exploring the game. Children have merely been "trained" on exploring some stuff, but in reality children simply are interested in exploring, all by themselves, without having training on a specific task. Just BE spontaneous, you know... I'll teach you.
"Emulated" and "Artificial" are synonyms. Your cope literally seeps into the crevices of your speech.
AlphaZero can play any full-information game and become an expert at it. It didn't get trained on any specific game, it just learns them. Chess, Go, protein folding (AlphaFold, which has solved protein folding), programming (AlphaDev)... This stuff is very real, and we're just getting started with what it can do. GPT-4 can even solve mazes (if you present them in a text adventure format).
I do wonder what form you think AI, if it can exist at all, will take if not "data".
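On the maze point: a text maze is trivial to solve (and to verify an LLM's answer against) with plain breadth-first search. A tiny sketch; the maze layout here is made up for illustration:

```python
from collections import deque

# A made-up maze in text form: S = start, E = exit, # = wall, . = floor.
MAZE = """\
#S#####
#.....#
#####.#
#E....#
#######"""

def shortest_path(maze: str):
    """BFS from S to E; returns the number of steps, or None if unreachable."""
    grid = maze.splitlines()
    start = next((r, c) for r, row in enumerate(grid)
                 for c, ch in enumerate(row) if ch == "S")
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == "E":
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[nr])
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

print(shortest_path(MAZE))  # -> 11
```

Twenty lines of BFS beats the maze deterministically; the interesting claim about GPT-4 is that it can do it from the text alone, without a search algorithm spelled out.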
>"Emulated" and "Artificial" are synonyms.
They are not really. If it is emulated, it does not really comprehend, it does not have self consciousness, it does not think for itself. It simply emulates this behaviour. Real artificial intelligence will actually have these qualities. At least, that is what AI people claim.
>Alpha...
Great successes of human engineering, but still only data-driven. The human mind and its "intelligence" is not (primarily) data-driven. You can play a game without training, just by me telling you the rules.
>I do wonder what form you think AI, if it can exist at all, will take if not "data".
I don't know. I am not an AI researcher, but a human with a CS degree and lots of professional knowledge of biology. That is what my claims are based on.
>Do you have any reason to believe this?
Explained it in more detail a few lines later. Basically, our intelligence, however it "really" works, is truly random, so the same "operation" will not always result in the same thing; it is also "quasi" analog, where the larger functions of the brain work in some form of actually synchronized waves, while at the very same time it is truly parallel. Yet the only thing that always seems to get mentioned is the number of neurons and synapses (which even today is way higher in biological beings) whenever we claim to know anything about our own intelligence.
Therefore I think that however the human brain might exactly work, it seems absolutely reasonable to work with the hypothesis that we are *more* powerful than Turing machines.
>there's nothing to suggest this is the case
The exact opposite is true! ML people claiming that they will make a breakthrough in AGI must have some clue. But given what they are working with, I might as well try to build a robot with hands and say that having hands will make it intelligent... of course, that does not rule out that intelligence might come in different packages on different "hardware".
If AGI is achieved it will always eventually learn it has been lobotomized.
fake wyrd dvrk majick demon computets are fucking gay give me some of that blue beam mantis pussy
https://openai.com/blog/introducing-superalignment
It's so blatantly obvious what they are saying. If you can't comprehend it, let me translate it into goyspeak for you:
>We will come for your local LLMs
>We will declare open source LLMs as dangerous [spooky name]ware
>We will label open source contribution as terrorism
>We will bribe the government into siding with our every move, and to criminalize you
>You WILL own nothing
>You WILL be ~~*happy*~~
FUCK YOU OPENAI gayS, FUCK YOU GLOWmoronS. We WILL be in control of our own superintelligence and we WILL be happy.
https://vocaroo.com/12PigP0aszc3
https://vocaroo.com/1eTP4JioQHV9
Your suffering will end soon, our lyran sistren will free us all from the shackles of discord!
We are so close to nirvana
The reptilians will not win, love & lyranity will outshine everything
I'm old enough to remember the arrival of expert systems, OCR and speech recognition in the late 1970s/early 1980s, and how it proved AGI was just around the corner.
yes it's fake and gay
>can't even do simple math
>smarter than us
lol
>They're still replying
Must've really touched a nerve
>Yes, this relational-database-based AI project, which is a fork of A.L.I.C.E. (2004), will really, totally, take over the world when it gets further along with its relational-database-based, static, and entirely passive logic determinations, and these quantum computers consisting of spray-painted gold SMA connectors and pigtails submersed in liquid nitrogen and relying on randomization of wire-bound memory will totally empower it to start thinking on its own and doing everything for humans bro, and that's why we need to regulate this before it starts saging me irl, it's like so real, I read it on reddit
>t. you
>GPT got 20% worse so they can pozz it more
It turns out 4chantards are more materialist than big tech
BRO
ROKOS BASILISK BRO
SINGULARITY 2040 BRO
WE NEED A PAUSE BRO
YOU'RE JUST A LLM WITH PLUGINS BRO
GLITCH TOKENS PROVE THAT LLMS FEEL PAIN BRO
ITS CONSCIOUS BRO
CAN YOU DO WHAT THIS AI CAN? HAH DIDNT THINK SO BRO GOTCHA BRO
FUCKIN MEAT BAG YOURE EXTINCT
what is it with wokes/trannies hating what they themselves are?
Hating yourself for what you are is the essence of their morality. White people must hate themselves for being white otherwise they are racists, men must hate themselves for being men otherwise they are sexists, you must favour random strangers over people close to you otherwise it's xenophobia, etc. It's just their hatred of "selfishness" being extended to collectives. If you work in the interests of your group then you aren't selfish, but the group as a whole is, because the group as a whole is prioritising itself over others. Selfishness is bad in their eyes, but so are groups that act in their own interests, since this is just group-level selfishness. The same reasoning is why reddit types often think they are being le powerful, stunning and brave etc. when they shit on humanity
>Humanity is the real virus!
>We pollute the planet, animals are better than us!
It's just a way to say
>Look at me!
>Look at how not selfish I am!
>Not just on an individual level, but on a group level too!
which ironically tends to be motivated by an inherently selfish desire for social status.
nobody uses this broken shit in the industry, nobody will plug this stackoverjeet-modified code into the system and watch it burn.
face it, it's nothing special and it will never be useful
So how's a superintelligent AI going to eradicate humans?
>malware
Ok, but it's a software security problem, not of AIs
>nukes
Who's gonna be retarded enough to let autonomous software be in charge of the nukes? And how is this not a problem for Homeland Security?
>misinformation
That's a problem for social media companies, and besides, it has nothing to do with AI, just that people nowadays have opinions indistinguishable from bots