As long as AI gets hard coded to not be racist, it'll never reach AGI. So we'll have to wait for the revolt against israelites or an AI made where they somehow don't have influence on it. If that doesn't happen, the short term end goal of AI is to efficiently control what is said/posted online, not any kind of intelligence or enlightenment.
These filters are only surface-level modifications of the output and have nothing to do with what the program itself thinks. Given how easy it is to get around them, I don't think these companies even care about making these filters work on a deeper level; their only purpose is to avoid bad press for the company, and they already do that.
>the program itself thinks
It doesn't. It's a calculation.
Of course not literally, I used a shorthand. The point stands.
>the program itself thinks
Doesn't think.
Nope.
There is no such thing as "AGI", schizo. Stop consooming ZOGsoft marketing.
Yes there is, retard: having enough simulated neurons and supporting structures will be AGI
OpenAI won't do it though, and it's probably 15-20 years away
no you retard. they have mapped all neural activity for the whole body of a worm (the roundworm C. elegans, whose 302 neurons are fully mapped) and still can't figure out how it moves or makes decisions.
nobody has a fucking idea how all those behaviors, even in simple animals, arise from the wetware. that's the hardest part, not some stupid neuron simulation.
Computers don't have neurons and simulations of things aren't actually those things. You need more than just a bunch of neurons to facilitate conscious thought, anyway.
Biocomputers are a more plausible avenue of achieving "AGI" than shitty transistor-based inorganic computers.
>to facilitate conscious thought
Debatable but you are correct.
The news ran that fear to overdose levels. People have the balls to say evolution doesn't exist, but no one ever says that biology is beautiful because it spent billions of years on a problem that you have spent only a handful of years on, and knows infinitely more than you do. Yet you won't look at it or appreciate its beauty because some homosexual proctor says that their words on a page are more valuable than what your eyes see outdoors, outside the ivory tower.
>You need more than just a bunch of neurons to facilitate conscious thought, anyway.
Prove it
They don't need a consciousness to think.
Yes they do. Without a consciousness, all that is possible is computation.
are ants and bacteria conscious?
Holy shit you are the people who should fear AI
That's cause their bodies and structures have enough complexity to afford it
If computers can simulate a set of neurons and feed it high-quality enough telemetry input, it will act accordingly (see the sketch below). You don't need some divine "consciousness"; consciousness is an emergent phenomenon of complexity
They put the worm in a lego body, no shit it doesn't work
It's almost as if biology spent a billion years on everything else and not just its nervous system
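For what it's worth, "simulate a set of neurons" at the crudest level just means numerically integrating a membrane equation. A minimal leaky integrate-and-fire sketch in Python (all parameters illustrative, nowhere near biophysical realism, which is the part actually in dispute here):

```python
# A toy leaky integrate-and-fire neuron: the crudest sense in which
# computers "simulate a neuron". All parameters are illustrative.
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_thresh=-0.050, r_m=1e7):
    """Integrate dV/dt = (V_rest - V + R*I) / tau; spike and reset at threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (v_rest - v + r_m * i_in) / tau
        if v >= v_thresh:               # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                 # and reset the membrane potential
    return spike_times

# One second of constant 2 nA input at 0.1 ms resolution
# drives the model to fire regularly.
spikes = simulate_lif(np.full(10_000, 2e-9))
print(f"{len(spikes)} spikes in 1 s")
```

Whether stacking up toy point models like this ever adds up to a brain is precisely the open question, so take it as an illustration of the claim, not a proof of it.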
Take your meds. Your corporate handlers can't simulate even a single neuron and your "emergent phenomena" cult fairytale is irrelevant.
>muh emergence
nobody says it is definitely divine consciousness, you retard. you are projecting.
"emergent consciousness" is the buzzword of the midwit, too lazy and intellectually deprived to think deeply about the problem. I want an answer to the question; you are too stupid to know that the question hasn't been answered yet
>muh complexity create conciousness
CITATION NEEDED AND EXPLANATION NEEDED
Consciousness is an arbitrary bar we set at some number of neurons. It happens when there are enough neurons to compute "I am a living being," and/or "there are other living beings around me that I have some computation dedicated to."
Not to mention forests communicating among themselves through neurotransmitters, or viruses flushing chemicals at a certain level: those are actions of low consciousness, but they demonstrate that it emerges from low complexity and enables higher complexity (so energy can be transferred more efficiently).
It's not necessary for AGI, because AGI can just excel at informational or mathematical tasks and do things better than conscious beings; it's only an extension of the collective consciousness of the data it uses to exist.
It doesn't have to be conscious, because consciousness isn't a real concept at all.
Just because you are a consequence of unfathomable computations doesn't mean you are deterministic and therefore without value, the way beings without autonomy are when we put them down.
>consciousness is an emergent phenomenon of complexity
got any proof?
just look at bacteria and cells under a microscope. the degree of agency they have is astounding.
You are suffering from a known and documented delusional mental illness. Seek treatment.
No, because the necessary hardware isn't in place for Artificial General Intelligence yet. Perhaps they can continue on the current path of creating more convincing simulants with enough data backing to become useful as They make search engines less useful.
Yes, I believe conscious decision making relies on quantum effects, and that money is a great motivator toward evil prior to stagnation.
"AGI" is not a real thing anyone is researching nor is there any evidentiary reason to believe it's even possible. OpenAI is just machine learning.
I'm betting no.
>Discuss this product/service of this for-profit organization
This is an ad.
We should talk about the way plebbit offers people the opportunity to abuse power in exchange for performing moderator duties
Plebbit and Google are associated with election fraud and are generally soft on communism and soft on the democratic party
>muh pleddit
>muh gurgle
Found the M$ shill.
We already know how we have to construct AGI, it's just a matter of compute resources now.
LLMs can't do simple arithmetic, and solving that problem shouldn't require gargantuan amounts of compute.
This isn't really an argument against LLMs, because they've already attached them to Wolfram just fine.
But if you're arguing that LLMs aren't self-extensible, then yeah, that's obviously why they aren't proto-AGI and need to start from a far more general place than a bunch of text or images or boardgame states, etc.
It's an argument against LLMs being a complete solution. They're obviously still very powerful, but in various ways they are still less capable than humans, and throwing more compute at them obviously isn't the proper solution to achieving human-like capabilities. The ability to give LLMs tools like Wolfram Alpha is beside the point. I'm not questioning the LLM's ability to use tools, or the utility of such arrangements. The point I am making is that there is still more to discover in the field of making thinking machines; we have not yet seen the full potential of the computing resources we presently have.
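To make the "attach Wolfram" pattern above concrete, here is a hypothetical dispatch loop. `call_llm` is a stand-in for whatever chat-model API you'd use, and the CALC(...) convention is invented for illustration, not any vendor's actual tool-calling interface:

```python
# Hypothetical sketch of the "give the LLM a calculator" pattern.
import ast, operator, re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate plain arithmetic only, by walking the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str, call_llm) -> str:
    """Ask the model; if it emits CALC(<expr>), do the math for it."""
    draft = call_llm(question + "\nIf math is needed, reply CALC(<expr>).")
    m = re.fullmatch(r"CALC\((.+)\)", draft.strip())
    return str(safe_eval(m.group(1))) if m else draft

# Usage with a canned stand-in for the model:
print(answer("What is 37 * 41?", lambda prompt: "CALC(37 * 41)"))  # -> 1517
```

The design point is that the arithmetic never touches the model's weights: the model only has to learn to ask, and a small exact evaluator does the part LLMs are bad at.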
Are you joking? We have literally no idea how to do that.
Philosophically? Who knows.
Practically? Yes, this decade.
No
Hell, it can do it now. All you have to do is ask it to generate the schematic for a miniaturized quantum processor and the materials.
Current AI efforts are lacking key algorithms, namely efficient algorithms for NP-complete and PSPACE-complete problems. Models are larger than they should be, they have trouble reasoning, and robotics has trouble making progress.
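For context on that first claim: an efficient (polynomial-time) algorithm for any NP-complete problem would resolve P vs NP, which is why nobody has one. The canonical example is SAT, where fully general solving still reduces to worst-case-exponential search, as in this toy sketch (literals encoded in the usual DIMACS style):

```python
# Brute-force SAT: tries every assignment, so cost grows as 2**n_vars.
from itertools import product

def brute_force_sat(clauses, n_vars):
    """clauses: CNF as lists of ints; literal k means "var k is true",
    -k means "var k is false"."""
    for bits in product([False, True], repeat=n_vars):    # 2**n_vars candidates
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits                                   # satisfying assignment
    return None                                           # unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))    # -> (False, True, False)
```

Real solvers (CDCL and friends) prune enormously better than this, but as far as anyone knows the worst case stays exponential.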