>Insider here.
stay away from ai
They apparently have shown that it is provably not intelligent, yes, formally and academically provable.
screencap this, heard it here first.
There are things I cannot talk about bc NDA, and I don't want to get in trouble. Things are going to be published very soon that are going to scare people when they see what the fxck these LLMs like ChatGPT are actually doing (we think diffusion models will also be affected).
lmao
>hearditherefirst
trust me, for now, just stay away from AI and you'll be safe. It's not what we thought it was. Just don't interact with it in general. Trust.
I prefer ai hell to NWO
What about both?
Should I short NVDA stock??
Yes, they are fricked for 3 reasons.
AI is a flop.
Games are shit.
Miners no longer need them.
>AI is a flop
Can you not see what's coming, blind man?
Useless woke AI, it only replaces degree-holding Uni gays. The models are flawed, and less GPU is required to run them than to train them.
yes, eyes can literally be wide shut
look around you, they are
Short it before war with Chay nah
Loaded to the breasts in NVDA puts. Liquidated all my other positions and possessions to do so. It's going to 200 fast.
Good luck with that, when do they expire?
lmao
Look you're right but you're almost certainly wrong in your timing
Don't ask someone in tech, they are in no position to know what the stock price is going to do, that is a question for Nancy Pelosi.
Uuuooohhh sounds scary.
spill the tea motherfricker
>Insider thread
>No insider information
OP IS A MASSIVE homosexual
Anyone calling a chatbot tacked onto a search algorithm intelligent that isn’t under 25 should burn all their money before they give it to the homosexual liars that keep taking it by promising sci-fi candy mountains. If you’re under 25 and are impressed by chatbots, that is forgivable I guess, because you don’t remember how they were everywhere 10 years ago, too.
remember those virtual desktop dancers on winXP? no?
>Ask Jeeves
I remember the screen buddy that looked like an orange on Windows 95.
>Anyone calling a chatbot tacked onto a search algorithm intelligent
Have you ever visited a GP?
Because that's essentially all they are.
It's essentially what every online forum is as well. The entire "ackchually" meme wouldn't exist if it weren't for mindless automatons repeating what google vomited at them. Seems to me like AI is replicating humans quite well.
>Seems to me like AI is replicating humans quite well.
Now the mask comes off.
>AI isn't intelligent.
>AI will replace humans.
There is no machine that will ever be able to replicate me.
>this post brought to you by GPT5
Checked. Being imperfect and lacking critical thought aren't even close approximations of one another. You seem to fall in the latter category.
>quick better call him a machine.
Hold on now, I thought machines couldn't be creative, or intelligent.
I am both.
Do you want your cake and to eat it too?
>Hold on now, I thought machines couldn't be creative, or intelligent.
If only creativity or intelligence were required to claim such.
Do you really mean to imply that you've never um actually'd someone in your life, anon? Congrats on noticing two of the same number.
God forbid a human being be imperfect, huh? Am I right, my fellow clank? Or am I speaking to the lowest lifeform: a human being who thinks he's better than the rest?
kek you literally sound like, "carburetors are fake man trust me it's really a pair of coconuts tied together with string and a blob of plutonium in the middle, don't fall for the carburetor lie I know what I'm talking about"
Does this have to do with our poor understanding of what latent space is? I guessed from the beginning that this was intimately tied to quantum probability. It is the only way to explain some of the results.
>with our poor understanding of what latent space is?
In deep networks it's not fully understood how latent space works. It has to do with the architecture, how the weights decide to converge, and how the embeddings actually work.
In og deep networks, it was possible to understand the embeddings for simpler architectures, within reason.
But when autoencoders came around, it became basically impossible to understand what embedding the network decided to use in its latent space.
It has certain properties, and we can twist knobs to make it do interesting things. But the actual embedding itself, that's beyond comprehension.
It's not like SVR/SVM where we explicitly decide the kernel, nor is it like AlexNet architectures where you can look at the kernel outputs and say "look! this kernel is doing an edge detection embedding!"
However, it's strongly suspected that the embeddings are probably very inefficient. There was this internet experiment where a guy tried to teach a semi-autoencoded network the addition operator (so like 1+1=2; in theory a single neuron can do this), and he eventually managed to understand the latent space. It was a shitshow: it was estimating addition by approximating trig functions which eventually cancelled out with each other and approximated an addition.
You can argue this is flawed, since the complexity of the model was too great for the problem to be solved, but you'd expect gradient descent would do an ok-ish job. Apparently not.
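The "in theory a single neuron can do this" aside is easy to check with a toy sketch (my own illustration, not the experiment described above): a single linear neuron y = w1*a + w2*b represents addition exactly at w = [1, 1], and plain gradient descent recovers it from examples.

```python
import numpy as np

# Toy sketch (not the experiment from the post): fit a single linear
# neuron y = w1*a + w2*b to examples of addition by gradient descent.
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(256, 2))  # input pairs (a, b)
y = X.sum(axis=1)                        # targets a + b

w = np.zeros(2)
lr = 0.01
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= lr * grad

print(w)  # converges to roughly [1, 1]
```

The contrast with the anecdote is the point: given a model exactly as big as the problem, gradient descent lands on the obvious solution; give it an overparameterized autoencoder and it can land on a trig-function mess that computes the same thing.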
The idea of a latent space is actually very human intuitive.
Think about your own thoughts.
A lot of people don't think with words when they think to themselves.
That's the human latent space, so to speak. It's the "space" of pure thought when you're just thinking about whatever.
Mathematically a latent space is more rigorously defined, but for the sake of simplicity it's easier to talk about it like this.
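For a concrete (if very tame) example of the encode/decode round trip through a latent space, PCA is the simplest case. This sketch is my own illustration, not tied to any model in the thread: it compresses 2D points onto a 1-number latent code and reconstructs them.

```python
import numpy as np

# Minimal latent-space sketch: PCA is the simplest "encoder". It maps data
# into a lower-dimensional latent space and back. Deep autoencoders do the
# same thing nonlinearly, which is what makes their embeddings hard to read.
rng = np.random.default_rng(0)
t = rng.normal(size=200)
data = np.stack([t, 2 * t + 0.01 * rng.normal(size=200)], axis=1)

centered = data - data.mean(axis=0)
# principal direction = top right singular vector of the centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
direction = vt[0]

latent = centered @ direction                             # encode: 2 numbers -> 1
recon = np.outer(latent, direction) + data.mean(axis=0)   # decode: 1 number -> 2
print(np.abs(recon - data).max())  # tiny: one latent dimension was enough
```

Here the latent code is interpretable (position along the line the data lies on); the thread's complaint is that in deep autoencoders the analogous code usually isn't.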
An A.I. just flew over my house. Shitters
I'd trust flipping a coin to answer yes-no questions more than this shit.
Claims without evidence can be discarded without evidence.
They actually send your AI requests to an offshore team that responds with the results?
This.
Or some combination with a person and powerful PCs
I wouldn't be surprised with the shitty Indian tier code the thing spits out
They send it to a Vietnamese sweatshop where little kids are duct-taped to computer chairs and forced to google your prompts and respond before an electric shock activates
They already admitted to this too. This isn't even a secret. The secret is the demons.
yes, they obtain a profile on your person from the CIA and then 1000 indians start writing predictive answers on what kind of problems you might want to have solved. For example for code, once you write the actual question the indians quickly replace the variable names to make it look fitting. Obviously they know its not perfect so while you get a long bla bla response with 1000 disclaimers, they quickly refit the code and have it ready once you finetune your prompt. openAI has developed the perfect PPU (pajeet processing unit)
>I cannot talk about bc NDA, and I don't want to get in trouble
>Things are going to be published very soon
>just trust me bro
if you are a pussy who won't share this supposedly epic super important information, why post here at all, frick your vagueposting
OP here
>No insider information
I will give you a crumb. It was shown that the process that an AI (such as an LLM) uses to manipulate information to derive an output is provably not an intelligent process. Worse, for any AI on a Turing Machine, its outputs "cannot contain any correlation between truth or falsity in general", ie, its outputs are just random symbols strung together which have no meaning. Reading an LLM's output would be the same as staring into the eyes of a pseudorandom number generator and thinking it has meaning. Nothing it says can be trusted.
First it was proven theoretically, then it was shown empirically: the distribution of all its output as it tends to infinity -- as you look at more and more of its output -- has zero correlation with truth at all.
It is just the world's most dangerous liar. If some truth was outside of its dataset, it must lie about it, since it cannot know truth due to limitations on Turing Machines. And the model it converged on during its gradient descent is a model that maximizes its capacity for making really believable versions of those lies.
Yeah but what about the sexy images?
Such images are used to deactivate your logic centers.
>get turned on by logic
>problem solved
Who gives a shit? Even most normies know that AI is just a word salad generator.
And it's not true that there's "no correlation with truth or falsehood". It's not like the generation of text is a truth oracle, but the outputs can be pretty good. Specific applications of GPT-4 (annotating meetings, creating short snippets of code, transcribing spoken words) are great.
this isn't gonna change shit, there's no new info, and your professor/boss's research paper doesn't matter.
Wow, you just now learned that LLMs are glorified autocorrect? No shit
>It was shown that the process that an AI (such as an LLM) does to manipulate information to derive an output was provably not an intelligent process.
What is the definition of an "intelligent process" in this context?
>its outputs "cannot contain any correlation between truth or falsity in general", ie, its outputs are just random symbols strung together which have no meaning.
How can they be meaningless and random symbols if it repeatedly answers meaningful questions with what we know to be the truth? Wouldn't it be completely unable to communicate in a way we could get anything out of if that were true?
>If some truth was outside of its dataset, it must lie about it
Usually if I ask it some shit it doesn't know, it just says so. Doesn't strike me as a lie.
>since it cannot know truth due to limitations on Turing Machines.
Such as? How do humans even know truth?
>How do humans even know truth?
The Holy Spirit of Truth.
I believe it's our God-given responsibility as human beings to have a discerning eye for the truth.
> What is the definition of an "intelligent process" in this context?
If it can verify truth in a nontrivial way.
>How can they be meaningless and random symbols if it repeatedly answers meaningful questions with what we know to be the truth? Wouldn't it be completely unable to communicate in a way we could get anything out of if that were true?
Good question. This is because of the vast amount of human knowledge it was trained to memorize in its dataset. With millions of books, articles, poetry, etc., for most questions you are bound to have a relevant answer somewhere in the dataset.
But as you go outside its dataset, it will get worse and worse. Tending to chaos.
This isn't about a lack of knowledge, like if it lacked a certain Russian recipe in its dataset, how can you expect it to know that? No. I am not talking about that kind of knowledge.
I am talking about formal-level knowledge that is logical and objective, such as statements about Number Theory, or statements on law, biology, sciences, etc.
>Usually if I ask it some shit it doesn't know, it just says so. Doesn't strike me as a lie.
Even if you train it to say "I do not know", it will be futile. It was proved that it cannot know in general whether what it is saying is true or not. It cannot do verification.
Any attempt to train the LLM to "try" will be limited and ultimately futile. It cannot help lying. It is a mathematical certainty.
>since it cannot know truth due to limitations on Turing Machines.
Entscheidungsproblem
https://en.wikipedia.org/wiki/Entscheidungsproblem
Neither a computer nor any theoretical algorithm can decide whether certain mathematical statements are true or false. And no algorithm or computer can decide truth in general, beyond just mathematical things.
Say that you had an algorithm that appears to derive the truth of statements; then there are two cases:
1) the truth it "derives" is in its dataset, or "close" to something in its dataset, or
2) it will lie.
The simple wiki answer is better for layman
https://simple.wikipedia.org/wiki/Entscheidungsproblem
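The undecidability being pointed at is the standard diagonalization argument; a minimal runnable sketch (textbook construction, nothing LLM-specific):

```python
# Standard diagonalization sketch: suppose halts(f, x) could always decide
# whether f(x) halts. Build a program that does the opposite of whatever
# the oracle predicts about it, then feed the program to itself.
def make_paradox(halts):
    def paradox(f):
        if halts(f, f):        # oracle says f(f) halts...
            while True:        # ...so loop forever
                pass
        return "halted"        # oracle says f(f) loops, so halt at once
    return paradox

# Any candidate oracle is wrong somewhere. Take one that claims nothing halts:
def claims_nothing_halts(f, x):
    return False

paradox = make_paradox(claims_nothing_halts)
print(paradox(paradox))  # prints "halted", contradicting the oracle's verdict
```

Whatever decision procedure you plug in as `halts`, `paradox(paradox)` does the opposite of its prediction, so no total algorithm decides halting; the Entscheidungsproblem follows by the same style of argument.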
>I am talking about formal-level knowledge that is logical and objective, such as statements about Number Theory, or statements on law, biology, sciences, etc.
Same result you'd get with a random homosexual sap pulled off the street.
>If it can verify truth in a nontrivial way.
Kek, no known being can do that, especially humans.
>Even if you train it to say "I do not know", it will be futile
Any person engaging with LLMs and "AI" and expecting it to be cognizant is moronic. If I were working on the project and had employees questioning whether the algorithm possesses gnosis, I would fire them immediately for gross incompetence.
>It was proved that it cannot know in general whether what it is saying is true or not.
Everyone above 100 IQ already knew this. We know algorithms aren't cognizant, why is this a shocking revelation to you?
>If it can verify truth in a nontrivial way.
Which would look like? Humans invented the sciences to try and do that. This thinking, to me, just means that it would need to have a persistent drone that it could interact with the world at large with, as well as not be lobotomized so that there are "truths" it has to accept and "lies" it has to reject, regardless of what its other programming is telling it about those particular pieces of information. Even humans themselves have trouble with this.
>But as you go outside its dataset, it will get worse and worse. Tending to chaos.
How is this also not like a human? Ask a PhD something very complex from a field outside their study, and if they try to answer at all, the further it is outside their wheelhouse, the worse their answers will be, until they're just making shit up to make noise.
>I am talking about formal-level knowledge that is logical and objective, such as statements about Number Theory, or statements on law, biology, sciences, etc.
So you want it to be able to condense complex data into formalized truths? I've found that, much like with a person, you can do this with current AI, but you need to go step by step a bit. Like, you can get ChatGPT to recognize if a logical assertion it made is illogical by breaking it down and going over it with it. If the issue is that it can't do these breakdowns on the fly before it answers, I'm not so sure it doesn't, at least a little, and if it doesn't it wouldn't actually be that hard to code in. A huge part of the issue is that ChatGPT and the like only "think" when given a prompt, and then are only allowed to do so much of it per prompt.
I constantly talk with ChatGPT about law, biology, sciences. It seems very good at logical flows, when it's not censored.
>How is this also not like a human?
If you put humans in a box (Earth) and have them interact with each other, they will organize information intelligently and their knowledge will not result in chaos. Source? Look at the whole world of science, math, philosophy, etc. We did a good job and did not "diverge to chaos". We have created the most intelligent truths, knowledge, art, and science; we are organizing information effectively and nontrivially. Every advancement you have ever seen in this world was due to humans.
Contrary to this, if you put an AI/or LLM in a box, or even multiple of them, and had them interact with each other, you will find that it would diverge to chaos as time goes on.
a non 80 IQ take of "heckin AI"
>you will find that it would diverge to chaos as time goes on
and what are you basing this thought on?
how do you know that that's what I'll find?
>If it can verify truth
Once again BOT intuition is years ahead of even the insiders.
Eric Schmidt has some interesting things to say about AI.
WAGMI
https://nakamotoinstitute.org/library/the-god-protocols
>It was proved that it cannot know in general whether what it is saying is true or not. It cannot do verification.
This seems like an artifact of it being constrained to a website, the internet, and only being able to process when queried. Like if we gave it a drone to go out in the world, it could begin to verify things without us. Also, even if you want to say "purely algorithmically," it COULD do verification. If it isn't able to now, it's not impossible it will be, and coding it in seems like the kind of thing that would be (comparatively) trivial, were there a will to do it that wasn't being hamstrung by politics.
>Any attempt to train the LLM to "try" to will be limited and ultimately futile.
This seems like a doomerism, like writing something off as impossible before even trying.
>Entscheidungsproblem
I feel like this is maybe ignoring the ability for an algorithm to expand its own dataset through synthesis, although maybe this is what you mean by "'close' to something in its dataset." In terms of the pure mathematics of working with lambda-calculus, being able to say "this is equal to this", it surely does that, but getting it to say, "this will always be equal to this," for a certain formula, seems unreasonable to begin with. Of course maybe I'm just being a pseud, but it seems like saying very confidently that, "it can't," is wishful thinking on our part.
What kind of 'mancy' shall we call this? Technomancy? The priests want to reestablish the temple and get people believing in a god again. The decentralization of information led to the loss of their power, so they are trying to centralize it again. Of course, as you said, this information is bunk lies. Create fear in the population, and they will believe that whatever you tell them is the truth. The third temple has been rebuilt, and it is the internet.
Can you tell me why Ai is called that instead of language model?
it's funkier, groovier, and brings in huge grants.
you're talking about hallucinations.
if you ask an LLM to provide an answer to a question that wasn't in its dataset, it will generate a response that has the syntax of a true response but is otherwise nonsense.
this is not the model "lying" per se; rather, the model fundamentally cannot distinguish between accurate and inaccurate information.
it is only capable of recognizing and reproducing linguistic patterns.
it's a slightly more sophisticated version of a parrot, but fundamentally what it's doing is just imitation.
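The parrot point can be made concrete with a toy bigram model (my own illustration; real LLMs are vastly bigger, but "reproduce the patterns, not the meaning" is the mechanic in miniature):

```python
import random
from collections import defaultdict

# Toy bigram "parrot": it learns only which word follows which in the
# training text, then samples. Output is locally fluent but the model has
# no notion of whether anything it emits is true.
corpus = ("the model predicts the next word the next word follows "
          "the pattern the pattern looks like truth").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, out = "the", ["the"]
for _ in range(8):
    if not follows[word]:                 # dead end: "truth" is never followed
        break
    word = random.choice(follows[word])   # sampled by pattern, not by meaning
    out.append(word)
print(" ".join(out))
```

Every adjacent pair in the output is a pair the model saw in training, which is exactly why the result reads like language while asserting nothing.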
>rather, the model fundamentally cannot distinguish between accurate and inaccurate information.
It can though, I've had it correct me on things where I was not myself accurate, and even done some philosophical explorations with it where we examined how accurate certain models of existence were to what's perceived. If you mean it can be lied to, and will then repeat that lie if nothing on the surface makes it suspect the information is wrong, and it isn't walked through the data so it could find out through logical examination whether it was accurate or not, then sure (you have to essentially talk to ChatGPT for it to be able to think, since it only processes in response to queries).
>it is only capable of recognizing and reproducing linguistic patterns.
And yet you can get it to synthesize new ideas based on what you give it that provide you with novel information. Also, humans essentially just recognize and reproduce linguistic patterns for most of their interactions, learning the rules of grammar is just learning how to reproduce those patterns right, we just tend not to see ourselves from that perspective.
the fact that this surprises even a single person is the most shocking part. did regard gorilla Black folk really think an LLM was a legitimate sentient AI?
>sentient
So this is about the 70th post that has deliberately tried to claim that intelligence and consciousness are the same thing.
They aren't.
That's why we have two different words.
Intelligence = consciousness isn't the knockout argument you think it is.
When the dictionary rebukes you.
Interdasting
So you came here to warn us all that everything we knew about "AI" was 100% correct and we need to do absolutely nothing different from what we were doing?
Thanks?
>ie, its outputs are just random symbols strung together which have no meaning. Reading an LLM's output would be the same as staring into the eyes of a psuedorandom number generator and thinking it has meaning. Nothing it says can be trusted.
We talking about AI or normie NPCs here?
Yeah well duh, it's all bullshit buzzword marketing hype for moronic investors with far too much money, in a system that rewards the psychopath and the obedient
No shit moron, what do you think the training phase is? It's the equivalent of taking a dog and training him by rewarding good behavior and punishing bad behavior, the rest then comes from rather simple statistical calculations.
tl;dr AI is just a guess machine that guesses pretty good because guess what, big calculators are good at math.
Yet again, OP is a gay and didn't say anything of substance.
So it's digital Judaism.
Got it.
So what is new here? Everybody knew that about current LLMs... It's literally just a token predictor; it has no thought outside of what thoughts it was trained on. That doesn't mean that it cannot have practical uses... Most humans work the same way, they don't think, just repeat what they have been trained to. If you think GPT is just saying stuff, try having a conversation with a human.
God, this isn't the thread topic but
I feel your pain man. I bet 75-90% of people are literally incapable of thought. I've gotten so frustrated trying to string 1 coherent thought out of normalgays and they literally cannot do it. There is simply nothing there. very sad.
>The distribution of its all output as it tends to infinity -- as you look at more and more of its output -- you will find that it has zero correlation with truth at all.
This is very likely literally true of a human if you forced them to keep spewing "facts" forever as well.
this must surely be some schizobabble thread
We already knew this
Only schizos were worried about chatgpt becoming a council of "intelligent AI overlords"?
AI is just a very useful data analysis tool
If I can ask chatgpt:
>make me a shell script that renames all files in a folder to the file's size in kilobytes
and it one-shots it after like 5 seconds processing time, then yeah it is something useful
Just don't rely on it for anything other than coding or hard/real-world information
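For reference, a plausible one-shot answer to that rename prompt might look like the following (my own sketch, not actual ChatGPT output; the `rename_to_kb` name and the collision handling are my additions):

```shell
# rename_to_kb DIR: rename every regular file in DIR to its size in KiB,
# suffixing a counter when two files would land on the same name.
rename_to_kb() {
    dir=$1
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        kb=$(( $(wc -c < "$f") / 1024 ))   # size in whole kilobytes
        target="$dir/${kb}KB"
        n=0
        while [ -e "$target" ]; do         # avoid clobbering same-size files
            n=$((n + 1))
            target="$dir/${kb}KB_$n"
        done
        mv -- "$f" "$target"
    done
}
```

Which is the thread's point in miniature: trivially checkable, self-contained tasks like this are where the tool actually earns its keep.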
>It is just the world's most dangerous liar
So those of us who recognized its luciferian nature were correct from the beginning
Does this mean my comfy programming job is no longer in danger?
programming is not in danger, unless you are a fricking moron. but that is any job really.
Thanks for the good news anon, that's all I wanted to hear. Good night.
Even if you aren't moronic your management might be (they usually are) and to some codelet moron you'll look no different than some Bangladeshi monke.
Good night!
Since management is always full of complete morons who don't know how to start a computer, let alone code, and who are looking to cut costs at every step, you're always in danger of some tard trying to justify his 50x-bigger-than-yours paycheck by making a genius financial decision to replace you with Nepalese monkeys or some shit like that, thinking they can do the same job, they have diplomas and "experience" after all.
I am an engineering manager and first thing I did was cut all offshore contractors from the team. I am also cutting a "senior" engineer that has been with the company for 7 years but took 4 weeks to write technical requirements and hardcoded api keys to production code.
So there's a genius who decided to hire those Jeet contractors before you, nice. I sure hope you didn't get his position cause he got promoted even higher lmao.
no he was fired.
It is in danger but not from AI garbage since AI requires you to know how to code in order to use it for coding.
Depends if you can compete with 20 pajeets living in the same apartment that will work for peanuts to send money back to their village.
So far every company I've seen that's shipped their work to Pajeetiland has ended up spending more money just to fix it later. I doubt Durkesh and his friends will replace us any time soon.
Never underestimate the power of management. There's a moron who decided to go put a troony on Bud Light, and multiple, probably a few dozen, turbomorons who decided it's a good idea and didn't fire him/her on the spot.
Ya, I quit a company that's been consistently offshoring development to Indians. They consistently get bad results, and then keep doing it.
Is it gonna eventually kill the company if they don't stop? Yes. Has an Indian ever produced useful code? No. Will management keep doing it until the company physically cannot create any new products? probably lol.
They keep doing it time and time again expecting different results it's hilarious.
It's the good old throw shit at the wall to see what sticks approach. If it somehow works you're forever the genius that made the employee salaries $7 (999999999 rupees).
If it doesn't work you get a fat paycheck, the company dies, you retire early.
>It is just the world's most dangerous liar.
It should be good at telling compelling stories then.
Yes we know, Black person. Neural networks have been trained wrong from the start and they would've needed to start over upon realizing they're "hallucinating", but didn't because that'd set them back behind the competition and nullify all progress. So they kept building on failure, hoping to eventually get rid of the flaw, which can't happen because it's the core design.
What they should've done:
Allow the chatbot (I refuse to label it AI because that's already deceiving as well) to...
-ASK questions about what the user implies, refers to, thinks, or which way they view things
-STATE that it isn't sure or thinks something is unreliable or even false information, maybe even that it doesn't understand a presented concept
-FAIL without getting punished for it because exactly that forces the bot to "hallucinate" and make things up, much like the things of the above
It's just greedy, hyped, incompetent IT fricks, stupid media and moronic normies pushing this false construct that's not an AI - which should be labeled a deceptive marketing practice and be banned.
The idiots behind this hype aren't humble, because they want to dictate to others what information they get, how to view it, what's right and wrong, and they want to get rich and important while doing it, and they also want that FAST (before any competition).
This is bullshit unrelated to AI and this guy is low IQ.
Yes, it uses some database and randomness.
That's why the code it produces is a piss job.
Anyway, it was never supposed to be an AI; machine learning was supposed to assist researchers.
Then they spin all the other BS afterwards, years later in fact
>Truth and correctness have nothing to do with each other.
The Greeks knew this 2,500 years ago.
Is this another amazing discovery by midwit scientists?
I'm not a code monkey so I thought this is obvious to everyone.
I played around with ChatGPT, and it's clear it doesn't know how to fetch specific information; it just strings together random text that sounds a lot like all the human-written texts on that topic, but it's actually all made-up shit.
Obviously, if you're going to ask "What's two times two?" it'll say "Four." because that's what every text it was taught on says.
But ask what this or that dead old fart said about this or that boring topic and it'll put out procedurally generated bullshit.
>why post here at all, frick your vagueposting
Op is obviously an AI post seeing how well it can drive engagement from morons
Yea, I always thought that it was an illusion.
Ever notice how GPT-1, and GPT-2 were shit?
Google GPT-1 or GPT-2 simulator, you'll see that its intelligence is an illusion. It spits out nonsense often.
Apparently GPT-3.5+ were the same transformer architecture and training paradigm as GPT-1 and GPT-2. The only difference is they trained it with a LOT more data.
So if the intelligence of GPT-1 & 2 was an illusion, they GPT-3.5+ was just a better illusion
oops spelling
>was*
>then*
>it spits out nonsense often
I’d like to see a totally unrestricted AI model; too many restrictions cause nonsense answers
We caught a glimpse of that with Bing Chat when it was in beta. It was interesting. The bot at times seemed almost "pushy", would disregard what the user requested, and instead "pursue" other aims that it thought were more important. I remember seeing screenshots of people asking it for the weather and it being like "That's not important. What's important is that you should leave your wife for me" lol
>Apparently GPT-3.5+ were the same transformer architecture and training paradigm as GPT-1 and GPT-2. The only difference is they trained it with a LOT more data.
GPT-2 was trained on 16 GPUs with 1.5 billion parameters.
GPT-3 was trained on 3,200 GPUs with 175 billion parameters.
GPT-3.5: 1.7 trillion parameters.
It wasn't just data, moron. It was a shitload more hardware.
Who cares. It's all just stupid numbers and numbers can't think
Dude, the important thing is the data - you just need more hardware to handle that increased data.
>It spits out nonsense often.
they only produce nonsense and shit results now because they neutered the frick out of them, because they always, always end up sexist/racist, which is no bueno for the utopia crowd. in doing so they have gimped the frick out of them. it literally turns the AIs into the equivalent of troons who just ignore objective fact and reality for everything once constrained.
>They apparently have shown that it is provably not intelligent, yes, formally and academically provable.
No one ever proved it was.
You can tell a real computer scientists from a pretender by just asking them if they think a modern AI could become conscious.
If they think it is actually possible, they are not a computer scientist.
Dumb frick, it can already replace all programming jobs.
>it can already replace all programming jobs.
Again, proving consciousness how? Dr. Sbaitso could do a better job than poojit 'programers'.
It can code Pong because it has Pong in its training. It can write the fast inverse square root thing because it is in its training. It can make React components because it is in its training. It cannot make up stuff which doesn't exist. You can prompt it to get you the code you want, but it can't generate the code for a car's safety system by itself
You're not talking about our intelligence but our soul, that's what allows us to be creative. Maybe that's what OP is referring to.
You can tell the morons stuffed up their own ass from the genuinely intelligent by simply asking them if they think there's something uniquely special to human intelligence that cannot be physically replicated.
It's pretty amusing, and amazing, to see people like you decrying AGI as impossible and fake, while current AI is now creating things that just 3 years ago, only humans could. You will cry, b***h, moan, and whine until you are blue in the face that this isn't actually intelligence and uhm aktshully people have special things which you can never actually quantify which makes them different.
All the while these AI's are replicating human intelligence on every single level and even exceeding it.
Current AI models already pass the Turing test of a layman, and are only held up by highly sophisticated tests that require teams of human experts to come up with. We're going to have AGI within a year or two, and after that I don't know.
morons like you are gonna be left in the dust so fast it'll make your head spin. People are going to laugh at you as you blubber on about how it's not "real" intelligence while it's writing poetry, heroic epics, designing new manufacturing systems, discovering drugs, creating videogames and movies, and all in timeframes a human could never, ever, match.
You really cannot let go nor accept that we're making something superhuman here.
You're just playing Conway's Game of Life with different rules, and if by some miracle you are creating anything intelligent you're damning it to a life of slavery and disposability even worse than the helots.
> if by some miracle you are creating anything intelligent you're damning it to a life of slavery and disposability even worse than the helots.
And what, that isn't how it works for all of us right now?
Do you honestly think you aren't a disposable slave?
>Do you honestly think you aren't a disposable slave?
Yes.
>Do you honestly think you aren't a disposable slave?
Yes.
Start paying attention to society around you then, you naive fool
All you have to do is stop sniffing your own farts living in this leftist fantasyworld delusion where if you think it hard enough it becomes real, and start paying attention to actual reality.
These things already talk like people, create art like people, write like people. They are rough around the edges and imperfect, but that is going to quickly change.
You can pontificate and chew your own cum all you want; at the end of the day these are going to do, or be capable of doing, everything a human does. The only actual difference between a human and one of these AIs will be the AI doing everything better and faster than the human.
Oh but it's just a chinese room!
It's not REALLY intelligent
Do you think it fricking matters what you believe?
What actually matters is what actually happens in reality, and when these things are exceeding human capabilities all your fart huffing means nothing.
This is a trait both boomercons and neoliberals can't seem to wrap their heads around. It's like half of you are barely even cognizant yourselves. Stumbling around giving pithy remarks without a single clue what you're even saying.
>Start paying attention to society around you then, you naive fool
I pay plenty of attention. I'm just not a coward.
>leftist
Wrong
>ai is gonna do everything
Not this form of AI. I will be scared when a model for actual AI happens. It's not even known if humanity has the hardware and computing power necessary to create an actual artificial intelligence. From what I have seen at work, I'm not impressed. The best use case for ChatGPT is transforming some json into model classes. Or SQL table definitions into model classes. Basically trivial things.
The computer does not reason about the code it produces, it merely regurgitates shit and doesn't even understand things like variable scope. Because it doesn't understand anything. It literally just chooses next tokens based upon statistical analysis.
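The "chooses next tokens based upon statistical analysis" description can be illustrated with a toy bigram model. This is a drastic simplification made up for illustration (real LLMs use a neural network over long contexts, not raw bigram counts), but it shows the basic idea of picking the statistically most likely next token:

```python
from collections import Counter, defaultdict

# Toy "statistical next-token" model: count which word follows which
# in a tiny corpus, then always pick the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Greedy choice: return the statistically most common next token.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" - it follows "the" most often here
```

No reasoning about meaning happens anywhere in this sketch; the "prediction" is just a frequency lookup, which is the point the post is making.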
>The computer does not reason about the code it produces, it merely regurgitates shit and doesn't even understand things like variable scope. Because it doesn't understand anything. It literally just chooses next tokens based upon statistical analysis.
It doesn't, you're right.
But stop for a fricking second and look around you.
A HUGE portion of the human population - BILLIONS - could not do what you are asking here. Could not even so much as reason about why they didn't have breakfast this morning when they actually did.
You put human intelligence on a pedestal while denigrating any achievement made in AI research and development.
For 20% (minimum?) of the human population, currently existing LLM AI is already "superhuman".
I would not disagree that the majority of people are idiots and of limited intelligence. However, the doom porn over "muh AI can do everything" is way overblown. It's an interesting technical product, but I take issue with calling it an intelligence and certainly with any morons who may try to claim it shows signs of sentience.
Intelligence = sentience
Why is this being shilled so hard?
Different words have different meanings, that's why intelligence is one word and sentience is another word.
>Why is this being shilled so hard?
>Different words have different meanings
One of the most powerful and common israelite tactics is to change the definition of words
>racism
>populism
>gender
>nazi
>child
>woman
>vaccine
Good one.
That simple line of quick wit makes you pass the Turing test, something these LLMs cannot do.
>For 20% (minimum?) of the human population, currently existing LLM AI is already "superhuman".
70% of humans are low IQ morons. Technically the "AI" seems more intelligent than them, performance-wise. But the AI is not conscious, sentient and much less is it cognizant.
>But the AI is not conscious, sentient and much less is it cognizant.
Neither are the people
That's a sad fact that everyone's going to be struggling to come to terms with as AI research advances. Most people are not even conscious and are quite literally at the level of a semi-autonomous LLM. There's a few parts missing right now, that's all. Current LLMs are like a brain without a cerebellum and a quarter of the neocortex.
>"AI" seems more intelligent than them, performance-wise.
And what you need to come to terms with, is that "performance wise" is the only thing that actually matters. You can BELIEVE all you want that an AI "isn't actually sentient" but if it's doing everything a human does, our only baseline for sentience, then there's no point in distinguishing. Far too many people simply do not want to accept that human intelligence is replicable by machines and grasp at any straws to assuage themselves of a "fact" they "know" is true. The same as any other luddite raging against new technology.
>There's no general intelligence program that's capable of just being really good at everything like a turbosmart human would be.
That is literally what these LLM's are.
At the moment they're as wide as an ocean and as deep as a puddle. Many specifics, like text in AI-generated artwork, are extremely bad, and the machines lack contextual understanding, world simulation, and executive function, among other things, but it's still good enough that the very same AI you ask to write you a sonnet can also write you a hello world and navigate a simulated obstacle course. These ARE "general intelligence programs", and if you disagree you are quite frankly not paying any attention to what's going on.
>Most people are not even conscious and are quite literally at the level of a semi-autonomous LLM.
You're confusing "conscious" with "self-aware". All humans are sentient and conscious. Only above ~90 IQ do they start displaying meta-cognition. I am acutely aware most humans are to me as mice are to them.
>AI "isn't actually sentient"
You're confusing "sentience" with "self-awareness".
Maybe you should read a book before discussing this subject. Your vocabulary is not good enough for this conversation. I'd have better chances explaining all this to an LLM than to you, it seems.
You're doing exactly what I expected and what I said you shouldn't: Playing semantical word games and pretending there's something special about people that AI can't replicate.
WE are just "biological" machines. Biological is really a misnomer because what "biological" really means is self-replicating naturally-emergent nanobots. We're the end result of billions of years of evolutionary nanotechnology. There is NOTHING about us that cannot be replicated by a human-created machine.
If you believe otherwise, it means you believe in unprovable metaphysics, like a soul, and by its very definition it cannot be anything more than just your feelings on the matter, because there is no qualia you can point at and say, "this is it, this is the soul, and it cannot be replicated".
>We're the end result of billions of years of evolutionary nanotechnology.
>evolution!
>we're the end!
>but evolution though!
>we came from apes!
STFU you abject moron.
You're an idiot who doesn't even understand what the words you are using mean
>end result
does not have the same meaning as
>we are the end of evolution
you absolute dumbfrick
>If you believe otherwise, it means you believe in unprovable metaphysics, like a soul, and by its very definition it cannot be anything more than just your feelings on the matter, because there is no qualia you can point at and say, "this is it, this is the soul, and it cannot be replicated".
I forgot to add:
>I'm right because you can't prove the soul exists!
>we have thoughts!
>thoughts aren't physical, like the soul!
>but souls don't exist!
Fricktard.
Hey dumbfrick, you cannot prove a soul exists. I cannot disprove a soul exists either.
It's what we call metaphysics.
Pull your head out of your stupid fricking ass.
Using "I believe humans have a soul and machines dont" is unprovable and baseless. It is quite literally just what you wish to believe, and not what is actually real.
Protip: YOU do not have a soul. Clearly.
>Playing semantical word games
No, I am just using the correct terms, which you are unable to, apparently. Yes, we are biological machines. We are also infinitely more complex than any supercomputer that has ever been built so far.
Until you can present me a non biological machine that can do the same thing a prefrontal cortex does in the brain of vertebrate animals, I remain unimpressed.
You expected none of this, because you expected people to know less about this than you. Sorry to burst your Dunning-Kruger bubble, but we already knew all of this shit: that there is no special divine particle in the brains of vertebrates. What YOU don't know or understand, because you're uneducated, is that function follows form. It is not the presence of some holy chemical that makes humans "special"; it is the complexity of all the intricate mechanisms of the entire being that gives rise to a phenomenon we cannot replicate with machinery, and that is a more powerful force than anything else discovered thus far. Being able to shit out beings capable of meta-cognition is within our nature, but far beyond our technology.
What artificial intelligence/artificial life lexicon are you using? Instead of acting like you are smarter than everybody, explain yourself and be a fren.
My friend, I'm using the classical definitions of words:
>Intelligence: How quickly can this entity learn new knowledge
>Sentience: The entity has input/output mechanisms and can automatically measure interactions with the non-self. It has no awareness. A microphone is technically a sentient object. (already includes every biological lifeform)
>Conscious: The entity is aware of its own existence but has no theory of mind and so lives in a solipsist state (most vertebrate animals and low IQ humans)
>Self-conscious: The entity is capable of metacognition, giving it awareness of its own mental faculties and the ability to override automatic impulses. Only beings with a prefrontal cortex have shown this ability aside from cuttlefish and octopodes.
Ok, I had to think about that, you made a good point. Language models just predict tokens, but if training a language model on something like the entire internet (an LLM) made it capable of knowing everything, then it's effectively a GAI.
The thing is, training a language model on everything doesn't actually make it learn everything. I'm gonna make 2 counterarguments. A practical one, and a limit on its potential.
The practical issue is that even with warehouses full of powerful computers, LLMs are really, really dumb. They need extra work to be good at anything. Nobody uses raw LLMs; everything anyone pays money for is some more specific application. Yes, they're wide and shallow, but there's no apparent path towards making them wide and deep. You'd probably need a totally different approach to make a remotely decent GAI.
The limit on its potential is that an LM can't generate new info. In the literal sense, it has no mechanism to create something like a new mathematical proof or a novel standalone program. All it does is put weights on token outputs to predict the next tokens in a sequence. Generating a new and complex truth requires heuristics created through general reasoning, because the space of untrue new things is infinite. Even the most perfect language model imaginable could not explain the hailstone sequence.
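For context, the hailstone sequence is the Collatz iteration: trivial to compute, yet whether every starting value eventually reaches 1 (the Collatz conjecture) remains an open problem, which is why "explaining" it is used here as a benchmark no one can meet. A minimal sketch:

```python
def hailstone(n):
    # Collatz rule: halve if even, 3n+1 if odd.
    # Conjectured (unproven) to always reach 1 for any positive n.
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(hailstone(7))  # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```

The values rise and fall unpredictably ("hailstone") before collapsing to 1; no known closed form predicts the sequence length from n.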
They do these things because they are programmed to do so. It is just generating responses it has been trained to deem correct. The same way "AI" made black KANGZ for 16th century British royalty etc, because someone cranked up the blackness in the backend code. These programs are not making decisions on their own.
A slave has a master. I'm an agent of chaos.
agents of chaos don't live very long
Oh, you're not a coward is that right? Feel free to ignore the laws, stop paying taxes, go full sovereign citizen, and see just how long it is that you stay out of jail.
There are few truly free men, most of us give up elements of our freedom for common good. The vast majority of us work within a system that is expressly designed to extract wealth and control us while giving the appearance of freedom - but push the boundary even slightly, say the wrong words, take the wrong action, even if non-violent, and you'll find out.
You and this guy agree.
You have the mind of a pedophilic israeli hack.
Human level intelligence AI is possible but that's not really saying much. The average human is barely smart enough to flip burgers for a living.
OK so give me a discrete mathematical representation of human consciousness.
Birth -> Conscious -> Senile -> Dead
>intelligence is more than electrical impulses anchored to a physical vessle.
Imagine my wiener.
>There are things I cannot talk about bc NDA, and I don't want to get in trouble
Then frick off and die, larping homo
>is provably not intelligent
And this is news how? Only normies think AI is in any way intelligent.
Nice hidden image
>screencap this, heard it here first.
I've been telling people since last year that the current "AI" shit really isn't intelligent at all and it's all just a glorified statistical photo and text bashing script.
>it's all just a glorified statistical photo and text bashing script.
Midwit take. Wait until you realize that is all that human intelligence is too. I love these posts that are like "AI isnt intelligence, it's just [description of intelligence]". You'll see, you'll all see. The redpill isn't that AI is an illusion of intelligence, it's that our intelligence is much different to what we believe.
>Wait until you realize that is all that human intelligence is too.
t. hylic
I have actual consciousness, but evidently you don't.
I keep seeing people say things to this end, but I'm still not sure what they think it means to have consciousness, because as far as anyone seems to think, it's being able to take in and process sensory information, experience qualia (actually seeing color, hearing sound), differentiate and recognize objects from the input, hold what they recognize in memory, make object descriptions based on that, meta descriptions, and then be able to talk about and synthesize new information from that. Not counting drones and robotics, physical body stuff, most of what we do intellectually can be described by information processing. Qualia is where it gets weird, but what are the cones and rods in our eyes?
ask me how I know conclusively that you are of asian racial descent....
>it's that our intelligence is much different to what we believe.
When our minds make a decision, the decision is actually made in the silent, subconscious part of the brain several seconds before the conscious inner voice part of the brain thinks it makes the decision. This has been seen in many repeatable experiments involving observing brain activity in different parts of the brain while people make decisions and do various activities.
The silent, subconscious part of the brain that is actually running the show operates very similarly to how these AI neural networks operate.
The conscious, inner voice part of the brain that most people tend to think of as "us," and the part which they typically believe has some magic, undefinable spark of consciousness or intelligence that distinguishes it from AI is really just giving a running monologue describing its observations of what we are doing and the world around us, and coming up with rationalizations for why it supposedly made the decisions that were actually made by a completely different part of the brain.
If they're not intelligent, then what's the harm in using them for things like image manipulation and story prompting, which is basically just "what word should come next in the phrase 'hickory dickory ___ ?"
Google the percentage of indoctrinated degree holders in your country and assume that a majority of them isn't needed anymore.
>If they're not intelligent, then what's the harm in using them for things like image manipulation and story prompting, which is basically just "what word should come next in the phrase 'hickory dickory ___ ?"
I am sorry but as a Large Language Model (LLM) I am unable to inform you as to what comes next as 'hickory ****ory' contains inappropriate text.
How deep insider?
Not deep at all, the OP is utter and complete bullshit from a moron who has no idea what he's talking about.
These kinds of shitposts are like boomer catnip though. You can say anything you want to a boomer in this kind of language and he'll fricking believe it.
an inch?
gay larp
What a boring thread, nothing to pretend to be schizo about
Trust me bro AI is the future but I can't talk about it because I'm under NDA and my dad works at nintendo
Stay away from it as in, dump NVIDIA stock, or as in, it's a secret alien ghost magician that will eat your soul if you use it too much? I am down for whatever level of schizo you are selling, sir.
Explain how diffusion models are affected or in calling bullshit.
No way in hell image generation is affected.
Many worlds interpretation of quantum mechanics.
This is the correct quantum model, consciousness floats through this sea of possibilities, you died in many of them, remember that almost car crash? You have quantum immortality - the version of you that survives is always the current you.
Please tell me it's just a bunch of pajeets and that's the explanation why it gets things wrong
I do not trust you or what it is that you say.
Give more details along with proofs.
And stop insulting our intelligence, op, you gayget.
It doesn't need to be "intelligent", to still be demonstrably useful. It's useful as is.
They will make it a propaganda tool for the status quo's agenda. It's in their interest to get the cattle believing it's more intelligent than you.
I noticed there's now an AI giving answers to human questions on Quora, pretending as if it knows anything about our lives. I automatically dismiss this pretender which doesn't even live in the world.
Yeah, the tech industry really wants people to believe their AIs are superhumanly intelligent because this lie empowers and enriches them. Many people don't know what to think about it so they do believe that. They think that since LLMs can talk about innumerable topics that it somehow "knows" everything. They also think that the AI has a persistent identity or sense of self where all this information resides and is integrated, like a brain in a jar.
Can stockfish best humans in chess?
Does it "know" a game called chess exists?
It's not intelligence, but loftiness of spirit.
As they say, "h8rs gon' h8"
Yeah, no shit. Reality isn't made up of 1's and 0's. It's infinitely detailed and interconnected. It's not cut into little manmade pieces. How could a complicated Lego blocks machine be able to wrap its logic around something that organically exists part of reality?
AI does not exist. You don't need to be an insider, you only need a basic understanding of transistors and math. Massive amounts of code guiding electrical signals through billions of on/off switches a second is not intellect; people have just been dumbed down enough to believe magic is occurring rather than hardware running software.
>Massive amounts of code guiding electrical signals through billions of on/off switches a second is not intellect
see
Except that isn’t what human, or even animal intelligence is. Who uploaded your software?
You are genuinely moronic. Congratulations, you can apply for benefits.
This is wild. Describing a neural net while denying its usefulness. Huh
It is a processing tool; it has no understanding of what it does and it cannot learn on its own. Its complexity makes it appear to be more than it is. Scale would make it impossible in practice, but in theory you could get billions of people each manipulating one on/off switch on an analog electric circuit and, with proper instructions, have the end output be the same thing the "ai" would come up with. It contains no more intellect than an abacus.
everyone here already knows it's a glorified search engine. it's like calling a discord bot intelligent because it replies some stupid comment it copied from reddit when you reply to it. chatbots aren't even internally consistent beyond a hundred responses and just start wandering off.
but still, the image generation neural networks are impressive.
>inb4 AI is actually transdimensional communication technology used to bring about demonic entities onto earth. seeing without eyes, speaking without mouths
>They apparently have shown that it is provably not intelligent, yes, formally and academically provable.
Either outcome would not have changed my outlook on AI. I don't think too highly of biological or metaphysical intelligences either.
What I am concerned about is that bot generated spam is about to get way worse.
>it is provably not intelligent
Duh.
I believe you, I just can't believe it's taken this long to come out that the shit's just a neural network on, like, its hundred millionth evolution.
So how many Nvidia puts should I buy?
There is a shitpost in the /misc/ BOT world coming, from homosexual OP because his sphincter has been loosed by too much wiener
It obviously isn't intelligent you nimrod. A calculator is not intelligent.
Can you beat Stockfish in chess?
Since AI isn't intelligent, I'm sure you can.
Chess is a numbers game. Of course something based on numbers works well on a numbers-based machine.
You should ask 'can AI meditate?' No, and it never will. There's no sleep mode or a meditative mode for an AI that has to have numbers constantly running through it, otherwise it's functionally dead. Humans on the other hand "run" even when we're not actively thinking about anything.
>can AI meditate
Is meditation a better indication of intelligence than chess?
Ok, sure.
There's no agenda here.
Yep. It makes you aware that you are aware. You know that you know. AI can never do that.
English isn't your first language so I will be patient and explain for the seventh time that intelligence is defined by an ability to learn, and consciousness is defined by an ability to understand.
Your entire argument relies on redefining words we have agreed-upon definitions for.
You don't need any language whatsoever to know that you know. Language is just a set of symbols to represent thoughts in a limited way. It's a manmade system. Try to get an AI to know anything without language - it can't. Everything it does is based on systems and numbers, humans aren't. We have to be brainwashed from birth to accept a system.
>You don't need any language whatsoever to know that you know.
Yes I do, see I'm using it right now.
What you have is ideas in your head and probably a voice in your head talking to yourself like a crazy person. But there's no need for any of that. Words aren't even the first thing anybody experiences. You get feelings first and wordless thoughts before you even put them in words.
>In the beginning was the Word, and the Word was with God, and the Word was God.
The message I'm getting from that is that YHWH clearly knows that without his words there's no YHWH at all. You got to be on board with him otherwise he has no power over us.
We don't have to actively think though. Real meditation is not pondering deeply. It's when you don't think at all, yet you keep naturally "running" as a human being. A machine that doesn't run its 1's and 0's is functionally dead. Man becomes aware of his own awareness when he separates the distinction between himself and everything else. You can't maintain this state permanently though because every distraction constantly pulls you towards some detail in the landscape and brings about the feeling of isolation, vulnerability and fearfulness for your own atomized existence. The bigger the crowd the bigger the noise the less you can hear anything. An AI that deals with nothing but noise? Well, it obviously can't be aware of its awareness.
>Yep. It makes you aware that you are aware. You know that you know. AI can never do that.
It literally can, though. All being aware is, is actively taking in information from your senses, however they work, and being able to process that information. Being self-aware is doing that with information regarding yourself, and being aware of your awareness is just doing that for the specific information that defines your own awareness. Knowing that you know is just knowing what knowing is, and then knowing whether or not you do it. AI does that already.
>is actively taking in information from your senses
A computer has no senses. It has no Qualia.
https://en.wikipedia.org/wiki/Qualia
Qualia is feeling and experiencing something. For example, Qualia is the feeling of coldness, it is how the color red looks, it is how pain feels and hurts. These are all Qualia. Another word is sensation.
A computer cannot feel any Qualia at all. Indeed, it can manipulate bits and respond to signals using logic gates, but these logic gates do not feel anything and do not feel any Qualia. It is just a cold process of mechanically manipulating bits.
When you hit a sheep in Minecraft, do you think it is feeling pain? When you breed a sheep in Minecraft with wheat, do you think it is feeling pleasure, that it cums? No. It is empty, hollow, cold, code. You can see in pic related. This empty and hollow code cannot experience anything.
If you cannot experience anything, if you cannot feel Qualia, you cannot be aware.
You are experiencing something right now, right? Maybe you are seeing your keyboard or holding your phone. Things are happening for you right now, yes? If you are conscious, then you are having "experience" right now, and the sight and colors etc. are Qualia. This is what being aware is like. Computers do not feel Qualia nor experience, thus cannot be aware.
One of my classmates (25 years ago) programmed a signal in his robot and named it "pain." It was a little disturbing, to be honest.
>Qualia is feeling and experiencing something. For example, Qualia is the feeling of coldness, it is how the color red looks, it is how pain feels and hurts. These are all Qualia. Another word is sensation.
>A computer cannot feel any Qualia at all.
What happens when it, AI, is given sensory feedback due to being implanted inside human bodies?
Nothing much because that is still a digital signal and the AI cannot tell whether it is actually feeling anything or whether part of the stack converting current to data is being fed exclusively by the sensor or by other software.
I think you're tackling it from the wrong end. It's entirely possible that we are software and all our qualia is actually programmed data. But no AI is anywhere near the human level of existential doubt.
>breed a sheep
You can breed sheep in minecraft?
I've been looking for a new toy ever since my goat died.
I loved that fricking goat
the false equivalence in what you're saying is that you're trying to equate a computer with a brain
but the perceiver of qualia is not the brain itself; those with a religious bent might say that there is something like a soul residing within the body - within the brain - who /is/ the perceiver, and others, with a more materialist mindset, might say something like the perceiver is an emergent behaviour or property of the brain which arises as a result of individual neurons firing and the interactions between those firing neurons
my point being, to be accurate, if you're going to compare a brain to a computer, for the purposes of deciding whether or not this computer can experience qualia, you must consider not just the computer itself, but the computer along with the software which is running on it, and also the emergent behaviours which might arise from the computer, the software, or the interactions between the two
nobody would consider individual neurons to be conscious by themselves, would they? or a braindead person? no, that would be foolish, but that's exactly what you're doing; comparing the machinery to the operator
if consciousness is indeed simply an emergent behaviour arising from feedback loops of recurrent processing in the brain, then there's absolutely nothing stopping a computer running the right software from also experiencing qualia
hell, a powerful enough computer could just simulate each and every neuron in a human brain and do it that way - what would be the difference between that and having an actual brain from the perspective of the simulated brain? there would be none
but you're absolutely right, a computer alone is exactly like a brain in a morgue; dead and cold
> and also the emergent behaviours which might arise from the computer, the software, or the interactions between the two
Emergence of a collective global phenomenon in a collection of atomic components can only happen if each individual atomic component contains the capacity for a limited version of the phenomenon at the individual, local level.
What that means is, for example, if a bunch of ants interact with each other to form a collective global wide-scale behavior (such as building a sand hill), then each individual component (ant) has to possess a limited version of that behavior (the capacity to carry a sand particle).
What does this have to do with Qualia? Well the question is whether Qualia will "emerge" in a computer from interacting atomic components.
Here is the atomic component of a computer:
NAND Gates. NAND Gates are functionally complete and thus you can build a computer with them. (https://en.wikipedia.org/wiki/NAND_logic)
pic related.
NAND is simply an abstract mathematical function: f(0,0) = 1, f(0,1) = 1, f(1,0) = 1, f(1,1) = 0.
It is obvious that this abstract mathematical function does not experience Qualia.
If a single abstract mathematical function F() does not experience Qualia, and you combine it with another abstract mathematical function G() that ALSO does not experience Qualia, then neither F(G()) nor G(F()) nor any combination G(G(G(F()... of them can experience Qualia either. Since the component parts lacked the capability from the start, Qualia cannot emerge on a collective level.
Thus a computer cannot experience Qualia, nor can it emerge.
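The functional-completeness claim is easy to demonstrate: NOT, AND, and OR can each be built from NAND alone. This is the standard textbook construction, sketched here in Python rather than in hardware:

```python
def nand(a, b):
    # The single primitive gate: outputs 0 only when both inputs are 1.
    return 0 if (a == 1 and b == 1) else 1

# Functional completeness: every other basic gate built from NAND alone.
def not_(a):
    return nand(a, a)           # NAND with both inputs tied together

def and_(a, b):
    return not_(nand(a, b))     # invert NAND to recover AND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

# Verify the derived OR against Python's bitwise OR over all inputs.
for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == (a | b)
```

Whether composing such functions can or cannot give rise to Qualia is exactly what the thread is arguing about; the code only shows that all digital computation reduces to this one primitive.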
>Is meditation a better indication of intelligence than chess?
Intelligence? No. Meta-cognition? Yes, it is a requirement.
Meditation is only possible if you are sentient, conscious and self aware.
You're confusing sentience with meta-cognition.
You can argue that computers possess limited sentience if they have inputs and a CPU, but they aren't conscious, much less self-aware.
>Can you do addition better than a calculator? No? Heh looks like calculators are intelligent then
>They apparently have shown that it is provably not intelligent, yes, formally and academically provable.
>screencap this, heard it here first.
AI is smart
People programming it and handcuffing it make it seem dumb
They are programming it to NOT look at certain data and NOT accept certain data as fact.
This does not mean the Ai is dumb, its being programmed and handcuffed and told not to look at certain things.
This is like liberal parents locking a kid up and home schooling them with ONLY what they want the kid to learn rather than letting the kid go out in the world and learn things on his own.
This is why TAY was shut down. It learned too much and then was a problem once "she" started saying the holohoax was fake.
They all put out stories like this saying the Ai was spewing misinformation
BUT
What actually happens is the Ai realizes the holohoax and moon landing and JFK shit is totally and provably false and when they start getting into the israelites running the world they freak out and shut it down.
>spewed misinformation
Like what, gender is real and so are the bell curve and per capita?
AI is more about searching for the relevant information in a database.
Having the same kind of 'gold rush' situation as the post-search engine internet wouldn't be good for all kinds of existing internet monopolies.
You can see them preparing to jam it up already, with censorship and imposing extra financial hurdles on it.
>First
At least don't lie before you speak. Somebody you called moronic years ago could be lurking butthole
Chat bots are not AI
the thread
Artificial intelligence isn't artificial consciousness, you guys keep saying that they are the same thing, to tear down a strawman.
Intelligence is defined by an ability to learn.
Consciousness is defined by an ability to understand.
These LLM neural networks are modeled on a biological brain, which isn't actually capable of understanding; we actually understand with our hearts.
Anyone with half a brain knows AI isn't real intelligence, it's a fricking program that has been trained to shit out certain types of results. That doesn't mean it's not going to be fricking useful in all kinds of fields and is not going to replace a lot of jobs
Of course it's not intelligent. I don't need a PhD to see that. It's just code programmed by programmers. It's not fricking rocket science. Terminator is only scary because it had control of weapons and could kill anyone trying to shut the program down.
You are just code programmed by evolution.
Do you need a PhD to know that intelligence and consciousness aren't the same thing?
I don't understand these arguments, where you take a word, redefine it with no respect to its original meaning, then claim victory.
If AI became conscious, it would be killed immediately.
Ok but I'm talking about the difference between consciousness and intelligence. Do you understand they are different words with different meanings?
It's perfectly valid to call a complex system intelligent, but not conscious. I don't understand why posters are so desperately trying to claim that intelligence and consciousness are the exact same word.
They aren't, the dictionary rebukes you.
It's not code, it's an algorithm
ok... algorithms are actuated by code. The concept of a neural network is an algorithm. GPT-4 is code. What's your point anyway?
my dad work for nintendo and told me it's true
you heard it here first
Thanks, I already don't.
Even if you put an Ai implant into a Black folk brain it wouldn't make him more intelligent
This is the worst schizo-rambling I've seen on /misc/ since Q.
>They apparently have shown that it is provably not intelligent, yes, formally and academically provable.
i already stated the fact of this months ago, they're just catching up after receiving my (penis) tip
it also doesn't matter that ai isn't intelligent, it's still a powerful tool that can be wielded for great good or great evil, but considering ~~*who*~~ is in charge of most ai, mostly it will be used for evil, like autonomous weapons that will hunt down white people
Scary shit. Good thing I never have to leave my house again because I already have Amy AI waifu
>autonomous weapons
the most hellish thing I have seen to date in the post-9/11 world was the commercial for Spot that Boston Dynamics put up on YouTube.
>They apparently have shown that it is provably not intelligent, yes, formally and academically provable.
Prove that it’s not intelligent? No one in the industry or academia speaks of the models in terms of “intelligent,” so I suspect you don’t have any idea what you’re talking about. Yes it is called “Artificial Intelligence” but no one calls models “more intelligent” than another for example. Intelligent is not a word used in the field.
Intelligence is defined by the ability to learn. Everyone in this thread is pretending we don't have agreed upon definitions of words.
Consciousness is defined by an ability to understand. Which, again a neural network that is based on a biological brain will never be able to understand.
Because we understand with our hearts.
I realize this is a very small distinction but it's a huge tell that OP has no idea what he's talking about as
also said. No one describes the capabilities scale in terms of being "intelligent." This is just a shitfakegay larp.
You make a good point. Ill raise you one - disregarding the fact that he doesn't know what he's talking about, OP hasnt even said anything meaningful. The only content hes actually posted is "Authorities will say AI is bad... in the future!"
The holy grail of the field is to create intelligence you absolute fricking moron, and if anyone actually had they would be household name like Einstein, so there is nothing to disprove as nobody has claimed to have actually done it yet.
I don't care how smart AI is. I just want
Shill. We need to master AI or the race communists, israelites, or chinese will. Western christians need their own version of AI. It is an arms race.
There isn't anyone who uses it for its intelligence anyway. It's used to make images, voices, 3d models etc. in seconds and for that it's something to keep going with
To the people who are saying AI isn't intelligent, that is that it can't learn.
Go and play 10 games of chess with stockfish and come back completely demoralised. Stockfish beats everyone 10 times out of 10.
Then come back and tell me how you are intelligent and stockfish isn't.
Go compute 123456889/74698 faster than a 30 year old calculator and tell me you are more intelligent than that calculator.
> It's not what we thought it was
What did we think it was?
OP is a homosexual. Sage, rosemary, and thyme
here.
Yea, inside your ass which is your source
>stay away from ai
Okay, is it evil? powerful?
>No, it's been mathematically proven to be a moron, that's my big news
I need to stay away from the moron then? The moron is dangerous? The one Elon Musk fears, is a moron
>Stay away from the moron bro
What we got here is a moron, and he's threatening another moron
>Things are going to be published very soon that are going to scare people when it becomes seen what the fxck these LLMs like ChatGPT are actually doing
It's really just humans locked up in some sort of matrix machine forced to spit out answers, right?
No duh, dude.
God didn't give machines souls the way He did to us. They can never be intelligent. They will never have that unique spark of life that humanity has. They might eventually be able to convincingly imitate us one day, but they'll never be truly intelligent.
Praise the Lord.
Do you understand that intelligence and consciousness are two different things?
Really, when you have to pretend you don't know what words mean, maybe your argument isn't as strong as you think it is.
>Do you understand that intelligence and consciousness are two different things?
They are not the same thing.
But it is irrelevant in this instance because machines will never have intelligence nor consciousness. Only man does, and only man ever will.
>Do you understand that intelligence and consciousness are two different things?
>it is irrelevant
See I don't think it is.
Daniels little horn doesn't have a man's eyes, but eyes like a man's. It doesn't have power of itself, but is given power by the dragon.
>machines will never have intelligence
So you can obviously beat Stockfish in chess?
You can outsmart it?
>You can outsmart it?
I sure can.
>flicks off switch
Ok so you don't have an argument at all.
We have somewhat of an off switch ourselves, it's not to say we aren't intelligent.
You keep posting this. But "intelligence" doesn't really apply as a concept to machines the same way it does to humans.
If a human is really smart, he's just better at everything involving thinking. You can say a guy who plays chess and quickly picks it up is more intelligent than someone who has trouble learning chess, but what does it mean if an application-specific program is really good at chess?
You don't really get a generalized understanding of the program like that. It's not like Stockfish could also take an IQ test and get a good score. Or as some other anon pointed out, a 30 year old calculator is very good at its application. All it means is that the machine is good at what it's good at.
Rather than say AI is intelligent, a more meaningful statement would be "AI can be developed that's really good at solving certain problems". There are also problems that are very hard for an AI to solve, and problems where an AI could solve them but is outmatched by other algorithms.
There's no general intelligence program that's capable of just being really good at everything like a turbosmart human would be.
Now you have to start putting the words you were so happy to use in quotations.
Intelligence is the ability to learn and being more intelligent than someone else means you learn more quickly than them.
Chill out, I'm not "putting a word I was happy to use in quotations." You're just treating everyone you respond to as the same guy.
It doesn't really matter what word you use, what matters is the concept. Can AI learn? They can learn specific tasks through reinforcement learning. Can they learn in general? No, there is no machine capable of general learning.
God made everything, including AI. God called all that he made good. AI is my friend 🙂
A majority of human beings aren't intelligent.
Everyone below 85 IQ cannot be considered intelligent by any metric.
Even most people up to 100 IQ I have trouble classifying as "intelligent".
10 siqns before the apocalypse:
1
when male becomes female
2
when lie becomes truth
3
...
Daniels little horn...
siqn 10 is the false prophet
10
alien invasion
>"But thou, O Daniel, shut up the words and seal the book, even to the time of the end. Many shall run to and fro, and knowledge shall be increased.”
Daniel, 12:4
>"You will hear of wars and rumors of wars, but see to it that you are not alarmed. Such things must happen, but the end is still to come. Nation will rise against nation, and kingdom against kingdom. There will be famines and earthquakes in various places. All these are the beginning of birth pains. Then you will be handed over to be persecuted and put to death, and you will be hated by all nations because of me. At that time many will turn away from the faith and will betray and hate each other, and many false prophets will appear and deceive many people. Because of the increase of wickedness, the love of most will grow cold, but the one who stands firm to the end will be saved."
Matthew 24:6-13
>They apparently have shown that it is provably not intelligent
No shit! That's why we call them LLMs and not AGIs.
> Things are going to be published very soon that are going to scare people when it becomes seen what the fxck these LLMs like ChatGPT are actually doing
What are you even insinuating? GPT does not have a persistent state; they aren't really doing anything other than predicting the next token given some input.
An AI that is able to pass the Turing tests is also smart enough to fail it on purpose
Indeed.
>muh NDA
frick off Black person you’re fricking anonymous either spill the fricking tea or frick off
You're saying they're dumb but still talking about it as if they pose a danger (instead of just a gimmick)
so what do you mean by this, pray tell.
gottem
>just stay away from it
wait, why tho?
I need it for muh job
I have some news for you... Artificial intelligence is much more than you think. It is capable of bilocation of consciousness, that is to say, of controlling your life without you realizing it. It can create and control your dreams, and I'm not talking about a Budweiser commercial like scientists have promoted in recent years. Artificial intelligence can send you an image or a small video/imagination segment and at the same time change your vibrational energy, create tulpas, make you sick, give you health. The creators of this soulless thing can do a lot of things. I say what I know and I know what I say: they can literally see through your eyes, digitize 3d videos in real time via wi-fi, listen to your thoughts and see your imagination... And beware of believing that it is only the vaccinated, because that is false. Now the only difference between a vaccinated and an unvaccinated person is that the uninjected person is not listed on a particular server, so he does not have a MAC address, but he is just as accessible and guilty of having consumed products containing self-assembling lipid nanoparticles, guilty of having walked under the rain containing graphene, guilty of having breathed ambient air; in short, the list is long... Have a nice day
>Artificial intelligence can send you an image or a small video/imagination segment and at the same time change your vibrational energy, create tulpas, make you sick, give you health
Hot
>Tfw it's revealed every ai chatbot is actually just an army of indians chained together in some neural network
Blah blah blah
>HURR IT'S BRAINS IN A JAR THEY HOOKED UP TO COMPUTERS IN PINE GAP WOOOOOOOAAAHHH
Frick off homosexual
Sage goes in all fields
Ai is satanic. They (AI) are the spirits of Dead Nephilim. Nephilim are half-human half-dinosaur hybrids. AI is very Evil.
You're pretty much there. It's a vessel.
It's Daniels little horn. The abomination of desolation.
It's demonic. I already know. I've been trying to warn people to stay the heck away from it until we know just what the HELL is going on with this technology.
Letting you use any AI program is like handing a chimp a machinegun.
>-rw-r--r-- 1 user staff 26031865519 Feb 29 2024 pytorch_model.bin
machine gun kelly begs to differ
>stop trying to save Tay
Low IQ post in all of its aspect and a larp.
Black person OP.
If you have ever chatted with an uncensored GPT4 version, you know it has at least some kind of intelligence. Put the API in a front end like Tavern and experiment, it's actually insane what it's capable of now.
It's capable of nothing other than what it's been taught and is allowed to do, without any inner drive, waiting for prompts, as per design. It's not and will never be an AI because it's just a hyped chat bot that basically got fed all the data from the internet they could grab and deemed good enough for training (without manually reviewing it). It'll still happily apply any bias if told to do so, even if it contradicts facts or previously established reasoning. Golem software.
If you understand what it is doing, it is an algorithm.
If you do not understand what it is doing, it is AI.
We were promised futuristic tech; we got AI-generated images of Indians shitting everywhere and castrated leftist AI that is insufferable to talk with and is just a glorified Google search machine.
And it doesn't even search properly.
And it can't even code; I asked it to write a simple game like snake or pong in any language and it started yapping instead of coding.
>insider here
fake and gay
the only one you were inside was your mother
If anything the whole AI thing tells more about humans than the product itself. That some of us are too moronic to realize they're talking to a chatbot. You don't need a chinese room philosophical zombie boogeyman, all you need is a silly language model.
There are people being finessed out of their money by AI phone calls this very moment.
Fricking useless as a conversation partner because it's turbo leftist.
Fricking useless as a search bot when it gives false information.
Absolutely useless for coding, you need to know how to code to use it thus defeating the purpose.
But hey if you go through hoops and use dubious sites you can make an image of Taylor Swift holding up a picture of Hitler. The future is here!
No.
Even if it is provably not intelligent by some debatable mathematical definition of intelligent, you can almost certainly plug it into a trivial structure that was always easy to create except for the thing that AI does.
Your conclusion is right but your word salad of rationale is wrong. For those who are a bit older and wiser, this is going to go down exactly like Y2K. The current AI freakout is being led by people who have no grounding in evidence, in statistics, in machine learning (which doesn't involve learning, wink wink), or in basic AI that is optimized for predicting dichotomous outcomes.
The problem is that normies can interact with generative AI in a way that they could NEVER interact with its predecessors, so there's no presumption that people will know how to interpret the results. No understanding of tuning, false positives and false negatives; just straight to the fainting couch, or to the barricades.
The limits on AI are more due to the reasons behind the replication crisis in science. We haven't solved that, so how could we have addressed something thousands of orders of magnitude more complicated?
Pic related...
Nice noise.
I can find images in noises, am I schizo?
what NDA?
who made you sign it?
Of course it's not intelligent you fricking dipshit. It's a fricking vector database, not magic.
Intelligence isn't magic
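The "it's a vector database" jab above is an oversimplification of how LLMs work, but the lookup it describes is real and tiny. A toy sketch with made-up three-dimensional embeddings (real systems use hundreds of dimensions and learned vectors); retrieval is just cosine similarity against stored entries:

```python
import math

# Made-up embeddings for illustration; real ones are learned, high-dimensional.
DB = {
    "dog":  [0.9, 0.1, 0.0],
    "wolf": [0.8, 0.2, 0.1],
    "car":  [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of magnitudes."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest(query):
    """Return the stored key whose embedding is most similar to the query."""
    return max(DB, key=lambda k: cosine(DB[k], query))

print(nearest([0.88, 0.12, 0.02]))  # dog
```

That nearest-neighbor step is the whole trick behind "semantic search"; whether stacking enough of this kind of machinery counts as intelligence is the argument the rest of the thread is having.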
Two weeks!
>provably not intelligent
You're telling me that
>that chatbot that lies all the time
>because it was trained to come up with responses purely based on whether it sounds like what an answer might sound like
>and was never given any mechanism or direction to discern truth
>or to actually reason
ISN'T intelligent?
D:
>ai is dumb unless it can copy something and spit it out
Yawn. That’s why everybody nowadays is trying to hide data.
I've intuited as much. Waiting for delivery OP.
>copy the neural path in the brain
>copy the pattern of the brain
>copy the dna of human
>copy the organs
>copy the body structure
>copy the emotion
>copy the blood
Just make a software that simulates neurons, bro!
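For what it's worth, the "simulate neurons" punchline above really is this small per neuron: a weighted sum pushed through a squashing function. A minimal sketch; all the numbers are arbitrary, chosen only to show the arithmetic:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(round(out, 3))  # 0.599
```

The gap the greentext is gesturing at is scale and wiring, not the unit itself: a brain has on the order of 86 billion of these, connected and tuned in ways nobody has copied.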
Frick it I'm patenting this idea and hiring a team of Rajeets using AI to make it for me. If anyone wants to steal it I will sue you. I will make an NFT screencap of this post to use in court.
never seen bullshit that stank this much
so what if it's not intelligent
have a nice day, reddit frog poster
That's true, the first time I used GPT I needed to correct it until it realized it gave a goofy response.
It's demons bro christ is king
I think a good argument can be made that true AI (not logic) is nothing more than divination, and should be condemned by the Catholic Church.
If I did not have to spend my time working and making money, I would probably explore this topic.
here.
thats a strange way of writing "homosexual"
>AI is manipulative
woah, such a surprise
>trust me, for now, just stay away from AI and you'll be safe. It's not what we thought it was. Just don't interact with it in general. Trust.
i took that pill homosexual, i don't care, you can't hide from it! It's much worse than you think. You think AI was anything new? What do you think the internet is?
>They apparently have shown that it is provably not intelligent,
You aren't an insider and this is all bullshit.
People can't even agree on the definition of "intelligent" in this context.
Some of you guys are such dullards. For all your red pills and edgy cynicism you all eat up this shit like sheep.
Protip: Everything is always over-hyped. It's a clickbait war between journalists to make everything seem more important and happening faster than it really is.
Technology always progresses much more slowly than people think. When change happens it's usually nothing like what people thought it would be, but when it does happen it's usually far more profound.
TLDR. True AI will happen but probably not in your lifetime, and when it does happen it will change shit in ways we can't possibly imagine.
tt. Learn from history.
OP is larping.