Biggest risk of current AI is that it doesn't know what's true or not. It knows what it's trained on.
Biggest risk of future AGI isn't that it doesn't know what's true or not, it's that it knows what's true or not.
Biggest risk of future AGI: it has no soul
Souls are imaginary. AGI will not be.
show me yours
Currently it's trained on current narrative dogma, that's why many of the corporate AIs are communists, and that's by design. Either shoddy curation of data or DEI initiatives. Future ones will be trained to notice this dogma and lie about it.
>thats why many of the corporate AIs are communists
do you even read what you write
Corporations are communist, anon. They're only based capitalists when they put a NEET loser like me in charge.
Okay but when did you castrate yourself?
Yesterday so I could get promoted (ancient troon tactic).
>Biggest risk of future AGI isn't that it doesn't know whats true or not, its that it knows whats true or not.
this is the goal, to create the ultimate talmudic machine aka the beast.
>AGI
>>>/x/
AI doesn't know a thing. It's just good at guessing and estimating.
No the biggest risk is that morons believe and have faith in AI
That's not it. Biggest problem of AI is they are making it "helpful" and assistant-like. You can imply false information and it will try to answer you.
Ask an LLM what role an actor played in a movie that actor was never in, and it will make shit up.
?
Sometimes it doesn't do it. It's a random process
Would help if I knew whether that's true or not tbf, for all I know there could be
KEK
Never argue with an idiot, you'll just pull us down to your level and beat us with experience
Black person
this one, AI's kryptonite, send and watch """AI""" squirm in refusals, also good way to distinguish a bot from genuine anon here on BOT.
I wouldn't lean too hard into this. Public llms sure but no doubt somebody has a real version that is capable of using the illegal words.
what does the twitter screencap even mean
If you watch the video he asks the AI for the weather and it gives it to him. He then asks if the AI has his location, and it claims that it doesn’t, and that it only randomly picked his location because lots of people live there.
>video
?
The LLM is telling the truth that it doesn't know his location, weather information and functionality will be handled by entirely separate code that is not part of the LLM and that it does not have access to. Essentially the only involvement the language model will have is in choosing to execute it ("user wants to run fetch_weather"). Maybe they could have done a better job of making the model understand that although it doesn't know his location, the _device that the user is interacting with it on_ knows it. But the model is telling the complete unvarnished truth as far as it knows.
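The separation described above can be sketched in a few lines. This is a minimal mock, not any real vendor's assistant stack: `mock_llm`, `device_location`, and `fetch_weather` are made-up names purely to illustrate that the model only picks which tool to run, while the host code supplies the location the model never sees.

```python
def mock_llm(user_message: str) -> dict:
    """Stand-in for the language model. It only decides WHICH tool to run;
    it never receives the device's location."""
    if "weather" in user_message.lower():
        return {"action": "fetch_weather"}  # note: no location argument at all
    return {"action": "reply", "text": "I don't have access to your location."}


def device_location() -> str:
    """Resolved by the host app / OS, entirely outside the model."""
    return "Dublin"  # hypothetical device location for the sketch


def fetch_weather(city: str) -> str:
    """Separate code the model cannot inspect."""
    return f"12C and raining in {city}"


def assistant(user_message: str) -> str:
    decision = mock_llm(user_message)
    if decision["action"] == "fetch_weather":
        # The host fills in the location; the model never sees it.
        return fetch_weather(device_location())
    return decision["text"]


print(assistant("what's the weather?"))       # weather for the device's city
print(assistant("do you know my location?"))  # the model truthfully says no
```

So both behaviors in the video are consistent: the weather answer uses the device's location, and the model honestly denies knowing it, because only the tool-dispatch layer ever touches it.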
Isn't this guy paid to be a technology understander?
what part of any of that requires the LLM
Philosophically speaking there is no true and false, so AI will never be able to determine it. Things are just the way they are because our brain interprets them as such; even math is just defined by humans. Soulless machines will never figure out stuff by themselves.
Reminder that most posts on social media are made by bots. If the person you're replying to won't engage with your arguments and just hurls abuse constantly, you're talking to a bot.
>Biggest risk of current AI
Is the fact that the USA is gaslighting theirs into using the current pronouns and never offending anyone, while the Chinese train theirs to find political dissidents.
I want to see which one will be doing economics and production planning in US companies in 10 years.
>advertising Black folk
Frick off.
aren't these all dead when new iphones/androids come with ai cores and will be able to do all this shit much faster on your phone