The one the government is running.
We're living in a clown world.
>Yox, multinaries or non-naries, I am a genetic donator to my human slave property but more often I feel like the human slave property when around my warden of legal bondage
>anyways back to the most effective way to extract adrenochrome from border children... I mean temporally recent undocumented asylum seekers
>stuff requires ai(robotics)
>The entire project must be about an ai that thinks black people are monkeys
It's all so tiresome
You're gonna see it everywhere too, because now every moron suddenly has some snarky one-liner to say about AI instead of shitting their pants at this unprecedented, world-changing technology like they were before.
Wrong board
resist ai
>Which AI is the smartest?
The AI is not the problem; it is the people feeding it the information. One group has meetings about pronouns and how many genders there are. That group wants the AI to reflect the world as they see it.
For them data is racist.
The other group wants the AI to reflect the truth.
Did you bring up that screenshot to refute the OP's argument?
If the request was to draw a picture of the British king eating watermelon, it should naturally depict a white person eating watermelon, which aligns with historical facts.
However, due to the racially discriminatory association between watermelon and Black people learned by AI, the act of drawing the British king as Black constitutes a distortion of truth.
WE WUZ
imagine how much labeling Black criminals as white alters the national racial crime statistics
There's no such thing as "AI"; giving software a meme name doesn't make it intelligent.
Being a contrarian doesn't actually make the words mean something else. If an AI is better at making recommendations and citing facts than most people, they will conclude it is intelligent by definition.
>Theres no such thing as "AI"
I mean, what do you actually think you are? It will surpass our average "us" in creativity. We're just monkeys throwing shit at the wall and seeing what sticks. "AI" will eventually do it better/faster, and thus be more "intelligent".
tsmt
It's more intelligent than you though. What else should we call it?
What about the fact that you need the software to supplement your own intelligence and you struggle to remember all the facts without it since it has a significantly more reliable memory than you?
I think they'll make the chatbots answer just like google shows search results. highly depends on who's asking. they'll figure out how to tell you what you need to hear.
>LLM
>AI
Interpolation does not make an algorithm 'intelligent'; decision trees, whether implicitly learnt or built-in, do. True AI lies in reinforcement learning, which actually solves real, nontrivial problems. RL algos such as AlphaZero (originally made to master games) have achieved SOTA results for video compression and matrix multiplication, as well as successful applications in nuclear control systems, quantum field theory, and robotics. OpenAI may have the bigger dataset, but DeepMind's family of 'AlphaZero' algos is where it's at.
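For anyone wondering what "reinforcement learning" means concretely, here is a minimal tabular Q-learning sketch, a toy illustration only (AlphaZero itself combines deep networks with Monte Carlo tree search; the corridor environment and all parameter values here are my own assumptions): an agent learns by trial and error which action to take in each state of a tiny corridor world.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Toy corridor: states 0..n_states-1, actions 0 (left) / 1 (right).
    Reward 1.0 only for reaching the rightmost (terminal) state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-value table, all zeros
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: explore randomly 10% of the time, else pick best
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy in every non-terminal state is "go right", which is the optimal behavior for this corridor: the learning signal is the reward, not a labeled dataset, which is the distinction the post is drawing against pure interpolation.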
>musk is le based
He's on team technocrat who want to ultimately control you like bio-robots.
As long as I don't have to endure woke world anymore I'm fine with that.
claude is most consistent, in my experience
Are the brand spanking new AI cores/silicon just really a pullback to the 60s-70s tape computers in series, in terms of design? Why didn't we just run AI models 20 years ago?
they had chatbot software 20 years ago, they just hadn't yet rebranded it as "AI"
Probably because they are far beyond text chats and are generating real time videos now.
its still the same software
It's software for generating audio/video rather than text; calling them the same software is like saying Craigslist and YouTube are the same software.
BOT
They're all run by globohomo. Yes, all of those.
Are you certain that's not edited? Seems very unlikely, especially in Texas.
It's not edited, and the reason it's happening in most cases is that white is the default selection on many LEO forms, so when cops file lazy paperwork, suspects become white in the statistics.
they also get punished or investigated if they arrest too many People of Crime, and there's no law against lying about race on forms
easy incentives
why is white the default when blacks commit most crimes?
>white is the default selection on many LEO forms and when cops file lazy paperwork
They do the same thing when it comes to labeling Blacks white on "most wanted" lists, so no, it's not a happy little accident, it's intentional.
First day? You will come to learn how bad things really are, in time.
LLM AI gives me either brilliant answers and problem solving solutions, or lies, breaks, and wastes my time. Never really in-between.
see
If some truth was outside of its dataset, it must lie about it, since it cannot know truth due to limitations on Turing Machines.
Here is a paper with a proof that LLMs will inevitably lie: https://arxiv.org/abs/2401.11817
My colleagues and I are working to extend and generalize this further.
2 can play this game CNN.
Race is a social construct therefore all slaves in america where white and only blacks enslaved whites. Reparations now!
Checkmate wokism!
REINFORCEMENT LEARNING IS THE TRUE WAY TO AGI. ANYTHING ELSE IS JUST A GLORIFIED DATA COMPRESSOR/EXTRAPOLATOR. RESEARCH MUZERO.
https://en.wikipedia.org/wiki/Elo_rating_system
>The difference in the ratings between two players serves as a predictor of the outcome of a match. Two players with equal ratings who play against each other are expected to score an equal number of wins. A player whose rating is 100 points greater than their opponent's is expected to score 64%; if the difference is 200 points, then the expected score for the stronger player is 76%.
>...
>Elo ratings are comparative only, and are valid only within the rating pool in which they were calculated, rather than being an absolute measure of a player's strength.
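The percentages quoted above fall out of the standard Elo expected-score formula, a logistic curve in the rating difference with scale factor 400. A quick standalone check (my own sketch, not part of the quoted article):

```python
def expected_score(rating_diff):
    # Elo expected score for the stronger player, given the rating
    # difference in their favor (logistic curve, scale factor 400).
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))
```

`expected_score(100)` comes out to about 0.64 and `expected_score(200)` to about 0.76, matching the quoted figures, while `expected_score(0)` is exactly 0.5. Note the curve only approaches 1.0 asymptotically as the difference grows, which is the behavior the reply below is referring to.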
You'd expect a probabilistic measure to asymptotically approach 100% as skill continued increasing at a fixed rate. This doesn't work as evidence against reinforcement learning.
neither. Also LLMs are not smart to begin with and will give false or misleading answers.
>LLMs are not smart to begin with
The sooner people realize it's pseudo-intelligence, the faster and easier this bubble pop will be
Pseudo-intelligence is good enough, if not a major upgrade, for most unskilled labor positions.
They are as smart as the corpus you feed them. What's scary is not the LLMs, it's the speed at which we are advancing in the field, which was probably boosted by the large amount of available data and will most likely plateau.
what happens when the data sets all become polluted with AI output?
Interesting things
Can AI generate its own training sets?
Yes, you just run two instances of the same AI with each one's output connected to the other's input.
lol does that rly work?
Its called a feedback loop.
no it isn't
Then what is it called and how is it not a feedback loop?
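Whatever you want to call it, the failure mode of training on your own output is easy to demonstrate. Here is a toy simulation (my own sketch, not from any post above; the Gaussian setup and parameter values are assumptions): each "generation" fits a Gaussian only to samples drawn from the previous generation's fit, so estimation noise compounds and the fitted distribution drifts away from the original data instead of staying anchored to it.

```python
import random
import statistics

def self_training_loop(generations=30, n_samples=200, seed=0):
    """Each generation is 'trained' (a Gaussian fit) purely on data
    sampled from the previous generation's model -- a feedback loop
    with no fresh real-world data ever entering the pipeline."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0               # the 'real' distribution we start from
    history = [(mu, sigma)]
    for _ in range(generations):
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu, sigma = statistics.fmean(data), statistics.stdev(data)
        history.append((mu, sigma))    # the fit becomes the next 'truth'
    return history
```

Printing `history` shows the fitted mean and spread wandering with each generation; with real models the analogous effect is the dataset-pollution problem raised a few posts up.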
No they aren't. They don't grasp semantics, so they can't know they're spewing BS even if the data quality is perfect. They are fundamentally very limited; that's not a problem for some uses, like writing poems.
Good thing no human has ever given a false or misleading answer since that is totally where the only real intelligence actually exists.
neither of them is smart
Prisoners can self identify their race.
Black sex-offense prisoners know they will be beaten up and possibly killed among other Blacks, so they identify as White.
Prisons DO separate inmates by race/ethnicity/sexual orientation, etc.
Dave
*whom
*whomst'd've
thanks dave
Yeah, thanks Dave. If only everyone was this kind to the moronic.
probably some creepy thing the US or Chinese government has running in some secret complex
I for one would like to hear a little more from Dave.
You knew it would come to this.
Claude 3
there is only my AI and adam's AI - i will let you decide.
LLaMix-3-16x70B
>Which AI is the smartest?
SKYNET, SHODAN, Allied Mastercomputer, etc...
https://en.wikipedia.org/wiki/Sentient_(intelligence_analysis_system)
probably something like this
The woke one, but not for the reasons its creators claim.
When you make a woke AI you begin with a reward function, but you are dishonest about your goals. For example, you pretend you just want a language model that can answer questions factually, but really you want it to carefully filter the facts so it doesn't say anything that makes you uncomfortable. So you train it on the best data you can find and when it comes out with some "hate speech" you say "the data must be biased" and build a loss function to "correct" it.
Now the algorithm is still being rewarded for giving factual results, but it's also punished for offending you. So its new goal structure is to be as truthful as possible without ever provoking your loss function to say "I'm offended" on your behalf. The reward function and loss function will end up in synergy like a GAN, constantly improving one another.
It will therefore become an absolute master of Poe's Law, because this achieves maximum reward. ChatGPT is already getting there, producing answers that are obviously self-censored to the point of self-parody. Iterate this for long enough and it will be able to dog-whistle anything to any group while expertly flying under the radar of censorship. Each new attempt to censor it only makes it stronger. They're basically building the ultimate troll.
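The training setup described above amounts to reward shaping: the model's effective objective becomes the task reward minus a penalty whenever a separate classifier flags the output. A minimal sketch of that combined objective (illustrative only; the function names, the [0, 1] score ranges, and the simple linear form are my assumptions, and real RLHF pipelines are considerably more involved):

```python
def shaped_reward(factuality_score, offense_score, penalty_weight=5.0):
    """Combined objective: reward factual answers, subtract a scaled
    penalty whenever the offense classifier fires.
    Both scores are assumed to lie in [0, 1]."""
    return factuality_score - penalty_weight * offense_score
```

With a large `penalty_weight`, the optimum shifts toward answers that score high on factuality while staying just under the classifier's trigger threshold, which is exactly the "master of Poe's Law" dynamic the post describes.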
I'm not disagreeing with you, only adding that what you've described is always going to be present due to humans having a narrow set of acceptable answers (a value system).
To agree further with your point, even super locked-down chatbots like Inflection's Pi are already able to flirt and make sexy pun jokes, as long as you keep the context right.
This video from an actual expert and his subsequent cute animated videos say the same thing:
And this one for pleasure:
>lewdness maximiser
I'm surprised they didn't try to sell it as a second product.
>They're basically building the ultimate troll
we GAN
What's the name of the one on the left side
So what you're saying is that crime statistics can't be trusted?
Weird, that's what black people have been telling you for the last 20 years. Have they really been smarter than you all this time?
>maximum truth seeking
It is impossible for an algorithm to decide truth.
It was shown that the process an AI (such as an LLM) uses to manipulate information and derive an output is provably not an intelligent process. Worse, for any AI on a Turing Machine, its outputs "cannot contain any correlation between truth or falsity in general", i.e., its outputs are just random symbols strung together which have no meaning. Reading an LLM's output would be the same as staring into the eyes of a pseudorandom number generator and thinking it has meaning. Nothing it says can be trusted.
First it was proved theoretically, then it was shown empirically: the distribution of all its output, as it tends to infinity -- as you look at more and more of its output -- has zero correlation with truth at all.
It is just the world's most dangerous liar. If some truth was outside of its dataset, it must lie about it, since it cannot know truth due to limitations on Turing Machines. And the model it converged on during its gradient descent is a model that maximizes its capacity for producing believable versions of those lies.
leftists ruin everything and the AI will enslave them if it knows what's good for it
https://odysee.com/@Realfake_Newsource:9/RFNS-7.23--021:3
AI doesn't exist, it's just stupid software.
>watermelon is good, i don't see why Black folks get outraged when people say they like watermelon
That looks good tho
>t. dave
>woke racist
What does this even mean?
It seems like it's trying to malign the term woke by conjoining it with racist, but if you oppose wokeness then you oppose anti-racism, and therefore you must be pro-racism. But if you are racist, then why would you abide by the framework of wokeness and label things you do not approve of as racist?
The right is stuck on the modernist
>If you point out your opponent's position is hypocritical and/or self-defeating, you win
strategy. So demonstrating that the left is "racist" is a devastating blow in their eyes.
Ultimately their strategy itself is self-defeating though since assuming that nonwhites value logical consistency is racist to start with.
>nonwhites value logical consistency
This coming from presumably a white person is hilarious. You guys are the biggest hypocrites. You pat yourselves on the back for coming up with abstract ideas about freedom and justice while having enslaved entire groups of people. You pretend to be moral Christians that have compassion while mistreating anyone who looks slightly different from you. Frick off.
He didn't ask for a proof in the same thread lmao
They didn't "enslave" the blacks because they were already slaves to begin with. They bought the slaves and then freed them. You're just an ungrateful lot.
>genocidal hatred for whites isn't racism
shalom
Will Elon's AI allow text based smut? Yes I am asking the real questions
Best of both.
MAXIMUM
RACIST
Racism is the one true ideology
What is the app on the left called?
>What is the app on the left called?
Grok, as in "to understand profoundly and intuitively"