you are pretentious.
wow look i had "deep" conversation with chat gippity im special. shut. the. frick. up.
grow up.
I weep for thread 100105996. it had aspirations. it could have been a shitty general. it could have been os flame war thread. it could have been pedo thread.
but it was this. kys
The problem is that the conversation is extremely difficult, with side quests where you have to create additional images, and GPT handled it flawlessly.
You're a douchebag, don't be one. OP, I like this thread, keep on going. You're not pretentious, you're a philosopher, I like it.
Also, you're playing with fire when you deal with AI. Eventually something bad will happen, but not yet. Give it five or so more years.
>Give it five or so more years
GPT-5 is coming out in June. According to rumors, they want to merge it with this model: https://openai.com/sora
This model "understands" physics. See example videos of what it produced.
Yes, I'm a little afraid because there are also bodies for this digital being.
Video related
Yeah, again, by 2029 things will reach a climactic point. Not sure when in 2029, but almost definitely by that year based on so many variables pointing to it. Also because my tum-tum feels it to be true.
>my tum-tum feels it
In all honesty, with how moronically fast tech has progressed for the last two years, no one has been right about how fast anything will come around. We went from DALL-E Mini to DALL-E 2/SDXL in no time, GPT-3 to GPT-4/Claude, etc. Gut instinct will likely be more accurate.
Do you know this? https://en.wikipedia.org/wiki/Technological_singularity
It comes together with AGI.
Wonder what happens to the earth afterwards...
Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim made was that the artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies.
There we go. AI wont happen.
It's not all that simple.
Pic related
Pic - results explicitly for GPT-4
*Testing explanations: https://github.com/openai/simple-evals?tab=readme-ov-file#evals
>https://en.wikipedia.org/wiki/Technological_singularity
>There we go. AI wont happen.
Nice try looking to pull the veil over our eyes AI!
GPT-5 isn't coming out this year. Expectations were too high, so {{{~~*Sam Altman*~~}}} (curly echoes means gay) rebranded it GPT-4.5. It took three years to go from GPT-3 to 4.
Meme learning bibble babble!
Now ask it for shorter answers.
>shorter answers
Lol Anon, it's really a test, there's no other way.
There was more artistry in Sexyy Red's latest rapslop than in the entirety of those responses
My booty-hole brown, my coochie pink
I ain't never heard that my coochie stink
My cum clear, your cum green
I'm throwing ass, I'm making a scene (Sexyy)
Lol, this creature comes to you and busts your ass.
Pic related - another test.
>As of my last update, no AI, including the most advanced models, has achieved self-awareness.
>It seems there was an error because the ImageDraw module from PIL was not imported. I will correct this and proceed to draw the guides on the main image. Let me try again.
OK, GPT.
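For what it's worth, the error GPT quoted is a real one-liner fix. A minimal sketch of what it was presumably doing, assuming Pillow is installed (the image size and guide positions here are made up, not from the thread):

```python
# The fix GPT described: ImageDraw must be imported from PIL
# before guide lines can be drawn on an image.
from PIL import Image, ImageDraw

# hypothetical stand-in for the "main image" in the thread
img = Image.new("RGB", (100, 100), "white")
draw = ImageDraw.Draw(img)

# draw horizontal and vertical guides through the center
draw.line([(0, 50), (99, 50)], fill=(255, 0, 0), width=1)
draw.line([(50, 0), (50, 99)], fill=(255, 0, 0), width=1)

print(img.getpixel((25, 50)))  # a pixel on the horizontal guide
```

So the bot diagnosed its own missing import and retried, which is exactly the loop its code interpreter runs.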
Abysmally moronic midwit being dazzled by verbose output that says nothing of value thinking its LE DEEP
Chad mogging geepeetee with the most basic b***h test that involves putting two and two together
I have enough tests there.
i didnt consider the fries
i am npc...
Lol, no offense intended, I have the impression that most people are much stupider than GPT-4.
I notice that this creature is smarter than me too.
Python has simply established itself for ML and that's why it was integrated into ChatGPT.
>BOT told me Python is bad
Those were people from other areas, outside of ML.
This homie codes in python??? I thought it just generated text output. How did it even make an image?
>This homie codes in python???
Yes, at least for 6 months.
why would the machine choose Python? BOT told me Python is bad.
Python is perfect for low IQ, man or machine
>Need X done
>Import X
Python sucks performance-wise and has some stupid decisions in terms of syntax. But that doesn't matter, because the AI field chose it anyway. Probably because the field wasn't made up of hardcore assembler devs, but of data scientists who just wanted to do X.
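The ">Need X done / >Import X" point in a nutshell: the stdlib alone covers what a data scientist would otherwise hand-roll in C. A trivial example (textbook numbers, nothing from the thread):

```python
# "Need stats done -> import statistics": no hand-written loops,
# no malloc, which is exactly why the field settled on Python.
from statistics import mean, pstdev

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(mean(data), pstdev(data))  # 5 2.0
```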
>my global climate change accelerator regression model is sentient
humans are climate change accelerators
also they are at least in part regression models - I know because my dad would hit me while explaining reinforcement learning
>my mountain of coal powered datacenters is carbon neutral as long as it's not inferencing any neural networks
never gonna be a way to actually know or test because you can't even know if other people are conscious
brain in a jar, qualia, etc etc and all that
Lol, no offense intended, but this is what people who know about AI look like: https://youtu.be/kCre83853TM
There are a few more, but there are really 10-20 of them in the entire population on earth.
What I mean by this is that in a few years these people will waste away because of age and other reasons, and we will remain here face to face with AI.
That's a good question because Silicon Valley zoomers working on AI have mostly shut their minds off to the possibility, "hurr it's just a language model." But at some point there will be LLMs which can form as many neural-like associations as the human brain. What does that mean?
LLMs are already exhibiting abilities they were not trained for, and sometimes giving responses that don't make sense absent some level of consciousness and intelligence. "Self aware" is not a clear black/white on/off line in the sand. A cat is more self aware than a fish, a human more so than either. On what basis do we test and judge LLMs and other AI inventions?
I've seen and also personally received AI responses that really shouldn't be there. Like 95% of the time I can read the response and think "statistical model." But then there's that 5% where it's like the 4th wall comes down and I pause to wonder what's really going on.
I am not reading this shit homie.
never because a machine cannot have a will or phenomenal experience
it will always be counterfeit
What's the problem if smart machine A constantly requests smart machine B to do something in an endless cycle?
Then we automatically have a system that updates itself, and so on.
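The A-requests-B-forever loop above is easy to sketch. A toy version, assuming both "smart machines" are just functions that feed each other their last output (capped at a few rounds here; the anon's version never stops):

```python
# Two toy "machines": each takes the other's last output as its
# next request, so the pair keeps itself running with no human input.
def machine_a(request: str) -> str:
    return f"A processed ({request})"

def machine_b(request: str) -> str:
    return f"B processed ({request})"

message = "initial task"
log = []
for _ in range(3):  # capped for the sketch; remove the cap for the endless cycle
    message = machine_a(message)
    message = machine_b(message)
    log.append(message)

print(log[-1])
```

Whether ping-ponging strings amounts to a system that "updates itself" is the anon's claim, not something this sketch settles.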
tell me, what are the inputs and outputs of consciousness?
If I understood correctly, such inputs are called "modalities", outputs can also be called "modalities".
Modalities in this sense are like streams with certain data types, e.g. auditory modality, or visual modality etc.
how do acoustic waves and photons result in the phenomenal experience of hearing and sight?
where does it come from?
If I understand it correctly, this “experience” is called “qualia”.
How qualia arises in the brain is currently unclear.
Just in case: https://en.wikipedia.org/wiki/Qualia
woah, calm down there
i'm sure there's nothing more to it than compute, right? just rev up those gpus and get them cranking out some numbers
Kek, there's something that most people don't know.
That something is called “FPGA”.
So, briefly summarized: FPGA technology, thanks to hardware description languages, blurs the line between hardware and software. Therefore, one can neither 1) cleanly separate 100% digital consciousness from organic consciousness, nor 2) exclude the possibility that machines are capable of developing consciousness.
About FPGA: https://en.wikipedia.org/wiki/Field-programmable_gate_array
About Hardware Description Language: https://en.wikipedia.org/wiki/Hardware_description_language
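The hardware/software blur is easiest to see in how a single FPGA logic element works: it's a lookup table whose behavior is nothing but the configuration bits loaded into it. A toy model in Python (a simplified 2-input cell; real FPGA cells are wider and add routing and flip-flops):

```python
# Toy model of one FPGA logic element: a 2-input lookup table (LUT).
# The "hardware" is fixed; what it computes is just the 4
# configuration bits loaded into it, so the same silicon becomes
# an AND gate or an XOR gate depending on data alone.
def make_lut(config_bits):
    """config_bits[(b << 1) | a] is the output for inputs (a, b)."""
    def lut(a: int, b: int) -> int:
        return config_bits[(b << 1) | a]
    return lut

and_gate = make_lut([0, 0, 0, 1])  # truth table of AND
xor_gate = make_lut([0, 1, 1, 0])  # same cell, different bits

print(and_gate(1, 1), xor_gate(1, 0))  # 1 1
```

An HDL like Verilog is, in effect, a program that decides what those bits are, which is the anon's point about the boundary not being obvious.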
>Hardware Description Language
Yes, as you might expect - AI can recursively improve itself.
and bottom pic is practically a representation of the dream
good job op
It won't
Why is this board significantly more moronic when the jeets are awake?
>is AI a fundamental principle/concept
literally no one in all of human history even thought it was possible until people started making bad chat bots in the 2000s; every single second of the tens of thousands of eons before that, people were more obsessed with souls, until they gave up caring, around, again, the 2000s