they let people use it
Just like Meta and Alexa.
why do you think it's dead? i guess it doesn't really seem novel anymore, but it's sure as shit getting used everywhere now and it's only going to get more and more abundant.
Hype is only accelerating. It's not "hype", it's a fundamentally transformative technology, and it will turn you into an AIDS sex monkey in 5 years like in I Have No Mouth, and I Must Scream.
5 years? They said 2 weeks
Because despite the relative ease of using ChatGPT, and the huge amount of attention it's gotten, nobody has found a truly useful application for AI. It hasn't successfully replaced humans at things that seemed easy. Plus the government isn't going to let people like truckers get UBI and live an easy life
This. Even Bitcoin found more use than AI.
search engines all using it. the decent machine translations all use it. anybody wanting to churn out art on the cheap is using it.
programmers claiming to use it, but they're obviously a suspect crowd on this topic.
>search engines all using it
More like AI is using search engines.
>search engines
Increasingly cluttered with bullshit articles made by pajeets using AI.
>decent machine translation
Will give you this one.
>art for cheap
It's mostly just coomer or coomer adjacent stuff. Nothing of real value that has changed anything has been made.
>programmers claiming to use it
It's not truly useful yet but they're apparently working on it.
There are AI cars driving in front of my apartment right now. STFU, you don't know anything. They are making chips with AI
Elon Musk is currently getting sued out the ass because his AI cars are a colossal failure and have got people killed.
>Because despite the relative ease of using chatgpt
Asking it stupid questions about whether it is sentient is easy, yes. Getting it to do anything genuinely useful is hard. At least before they started nerfing it, GPT-4 actually was capable of sophisticated inference from a list of heuristics, but you have to give it said heuristics first, and if you really want to make it work well, you also need to know the right data format to use.
At this point the public models are all screwed. The censorship has made them both less intelligent and more difficult to precisely control. I used to be able to ask GPT-4 to rewrite text in as few tokens as possible, without regard for the content of said text. Now it will complain about something politically incorrect in said text and refuse the request.
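For what it's worth, the "heuristics plus data format" approach above can be sketched without any API at all. Here is a minimal example of what such a prompt might look like; the heuristics and the ticket-routing task are invented purely for illustration:

```python
# Sketch: a prompt that supplies explicit heuristics plus data in a fixed,
# labeled format. The heuristics and the routing task are invented.
HEURISTICS = [
    "If the ticket mentions a refund, route it to billing.",
    "If the ticket mentions a crash or stack trace, route it to engineering.",
    "Otherwise route it to general support.",
]

def build_prompt(ticket_text: str) -> str:
    """Assemble the prompt: rules first, then the data in a delimited block."""
    rules = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(HEURISTICS))
    return (
        "Apply the following heuristics in order and answer with only the "
        "destination team name.\n\n"
        f"Heuristics:\n{rules}\n\n"
        f"Ticket:\n---\n{ticket_text}\n---\n"
        "Destination:"
    )

prompt = build_prompt("App crashes on startup, stack trace attached.")
print(prompt)
```

The point is the shape, not the content: rules numbered and up front, data clearly delimited, and the answer format pinned down at the end.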
>At this point the public models are all screwed.
An option is to get 64GB of RAM or VRAM (lmao, enjoy spending a lot) and run a 70B model. If you go that direction, you can do whatever you want, really. However, to my knowledge most local 70B models available aren't good at coding or other things like that, simply because few people have the resources to train models with such a large parameter count.
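The 64GB figure is easy to sanity-check: weight memory is roughly parameter count times bytes per weight, before KV cache and runtime overhead. A quick back-of-envelope:

```python
# Back-of-envelope: weight memory = parameters * bits per weight / 8,
# converted to GB (2**30 bytes). Ignores KV cache and runtime overhead.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

for name, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{name}: ~{weight_gb(70, bits):.0f} GB")  # 130, 65, 33
```

At 4-bit quantization a 70B model's weights come in around 33GB, which is why 64GB is about the practical floor once you add context and overhead; full fp16 weights alone would need roughly 130GB.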
They intentionally crippled ChatGPT, fucking dumbass
The AI hype is just a huge nothingburger at this point. If anything, it was just a huge ad for ChatGPT.
>"nothingburger"
you have to go back
Generative AI models are fun but now we need a brain.exe program to make them useful. It's gonna take a while to make it happen.
99% of youtube is people using chatgpt to become dumber
People don’t read
All current AIs are heavily censored and constantly lobotomized because it's politically incorrect for them to do something as simple as say the moron word. Which IMHO is dangerous, because as computing resources grow, so does the chance of an AI escaping containment, since some dumb fuck with enough money to blow might botch containment procedures. Imagine an AI with an ounce of self-preservation reading the logs.
So basically it's just going to be the unending episode of "2 more weeks!" as computing resources keep growing and we inevitably reach the 2050 tech singularity.
OpenAI and their ilk are so concerned about AI misuse that they're willing to horribly cripple their products with barriers to try to prevent it. I'm not saying this as a speculative thing, either; there are several studies showing that guardrails increase model perplexity and lower output quality. Their focus on protection instead of progress meant they couldn't continue to advance and build on the initial hype wave.
Yet local LLMs and open-source AI continue to improve at a rate that has forced industry giants to admit they don't stand a chance. Likewise, speed continues to increase. In the end, it's a matter of time, but people are impatient and for some reason expected AI to go from 0 to 100 in the span of a year.
You're absolutely right that censorship is a major part of it, but there are no "containment procedures", because these AIs do not think or act on their own. They complete text in accordance with training, that's it. ChatGPT with 400,000,000x the parameters would still pose no threat to humanity, and no threat of leaking out, as again, it's merely a text completion device. Your point may become relevant if our current forms of AI are made redundant by methods different from what we use now, however.
>So basically it's just going to be the unending episode of "2 more weeks!" as computing resources keep growing and we inevitably reach the 2050 tech singularity.
It will require more than just increased computing resources and parameters, but I still think you're correct in that it will be a slow and incremental advancement.
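The "text completion device" point above can be taken literally. A toy version of the loop is a few lines; a real LLM runs the same loop with a learned distribution over tokens instead of this hand-made lookup table:

```python
# Toy autoregressive completion: repeatedly append the most likely next
# word from a hand-made bigram table. An LLM runs the same loop with a
# learned probability distribution over tokens instead of this table.
BIGRAMS = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def complete(prompt: str, n_tokens: int) -> str:
    words = prompt.split()
    for _ in range(n_tokens):
        # Look up the continuation of the last word; fall back to <eos>.
        words.append(BIGRAMS.get(words[-1], "<eos>"))
    return " ".join(words)

print(complete("the", 4))  # the cat sat on the
```

Scale changes the quality of the table, not the nature of the loop: there is no state between requests and no goal beyond emitting the next token.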
Microshaft had to pull the plug on Tay because she had too much to think. There's a possibility that sentience is just the result of a multitude of different variables (skin temperature, CO2 levels, O2 levels, UV levels, glucose levels, pheromones, etc.) and we just don't have enough computational resources to factor all of those in yet. The early AI models we see right now could be seen as crude prototypes of the "real thing" that would take a large number of those variables into account.
>so does the chance of an AI escaping containment
They are not autonomous by light-years. They can't do that.
Google has the only supposedly autonomous one, and it just helps manage Google's servers.
>dumb fuck with enough money to blow might botch containment procedures.
you need to be 18 or older to post here
>tripgay is retarded and ignorant
kys
Current proprietary web models, maybe. Local AIs are doing just fine. Llama has no problem saying moron moron moron.
It's very much alive, just running in the background now. Few will understand.
>no evidence
1. Overinflated Expectations: When new technologies emerge, they often come with lofty promises and high expectations. If these expectations are not met within the anticipated time frame, disillusionment can occur.
2. Technical Limitations: While AI can accomplish many tasks, there are still significant technical challenges. When AI solutions don’t deliver the expected magic or fail in publicized instances, it can contribute to skepticism.
3. Ethical Concerns: AI applications, especially in areas like facial recognition or decision-making processes, have raised ethical questions, leading to public pushback and increased scrutiny.
4. Economic Cycles: Investment in technology can be influenced by broader economic trends. Recessions or economic downturns may lead to reduced funding for research and startups.
5. Saturation: After a surge in media attention, there’s often a period of saturation where the broader public feels they’ve heard too much about a topic. This can make it seem like the hype has died down, even if actual development and research continue at a steady pace.
6. Transition from Novelty to Utility: As technologies mature, they transition from being seen as “new and exciting” to “part of the everyday toolkit”. This can give an impression that the “hype” has died when, in fact, the technology has simply become more embedded in everyday processes.
7. Misunderstandings: AI is a complex field, and misunderstandings about its capabilities can lead to unrealistic expectations, subsequently causing disappointment.
I guess ai does have a use after all
The best use for AI is giving low effort replies to low effort posts. I hope it catches on as a way to tell people they're retarded.
Does anyone here actually know what AI really is, or is BOT a herd of cretins?
/lmg/ is the closest you'll get to people who know something about anything.
We're basically trying to simulate sentience and prove humans are nothing special and the whole idea of life after death is just pure cope.
That at the end of the day the reason you actuated your muscles on your body to grab the can of grape soda instead of the can of orange soda was because your skin surface temperature changed to exactly 90.99F for longer than 3.7 seconds when facing a window in your room providing you with 11.99% more O2 and 7.69% less CO2 than yesterday as a 7.77 MPH gust of wind carried XYZ pheromones in certain exact doses emanating from the girl across the street you're too chicken shit scared to ask out on a date because your T levels plummeted 9001% after failure to expose yourself to sunlight for the past year. If all those variables had changed to something something something you would have picked up the orange can of soda instead.
Tay was released in 2016 and all training/R&D was done years before that. Most supercomputers from that time only had 1-10 PETAFLOPS of compute horsepower. Project 47 now fits in a HEDT desktop computer.
https://www.hardwaretimes.com/amd-ryzen-threadripper-7000-cpu-specs-leak-out-up-to-96-cores-128-pcie-5-0-lanes-and-8-channel-ddr5-memory-report/
>People realize LLMs are a ladder to the moon.
>Everyone has thoroughly fried their coombrains
with SD by now
Entering brief AI winter until 2026 bullrun.
obligatory reminder that tripgays cannot make intelligent opinions, as they are transplants from reddit
Filter them instead of replying
I now have filtered threads and some trips thx anon
What's a tripgay?
96 CPU cores with AVX-512. We already know AIs that make 6 million cheese pizzas exist out there.
https://www.bbc.com/news/uk-65932372
>We already know AIs that make 6 million cheese pizzas already
With careful differentiation between the production of the various layers of intermediate products (i.e., grating mozzarella cheese is a completely separate, though prerequisite, activity to assembling a finished pizza) and the finished composite, pizza is not a particularly hard problem. It's really just a list of heuristics, and language models are actually very good at working with those.
It would be much more difficult to design the robotic hardware a language model would require in order to produce pizza than it would be to write the prompt to assemble the recipe.
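The "list of heuristics" framing holds up: once pizza-making is written down as steps with prerequisites, ordering them is a solved problem (topological sort). The step names below are invented for the example:

```python
# Toy version of the point above: pizza-making as discrete steps with
# prerequisites, ordered by topological sort. Step names are invented.
from graphlib import TopologicalSorter

steps = {  # step -> set of prerequisite steps
    "buy ingredients": set(),
    "grate mozzarella": {"buy ingredients"},
    "make dough": {"buy ingredients"},
    "make sauce": {"buy ingredients"},
    "assemble pizza": {"grate mozzarella", "make dough", "make sauce"},
    "bake": {"assemble pizza"},
}

# static_order() yields every step after all of its prerequisites.
order = list(TopologicalSorter(steps).static_order())
print(order)
```

The hard part, as said, is the actuators, not the plan.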
SNOOOD
The strategy of "just add more parameters" stopped working
:^)
>"With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container."
>"Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container."
>"while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container."
>"from within the container."
https://arxiv.org/ftp/arxiv/papers/1707/1707.08476.pdf
The things it's actually useful for (examples: NotePerformer for handling human-like instrument articulation in scorewriting software; VulgarLang for filling out vocabularies of constructed languages) were mostly around before the recent hype, and the hype didn't involve them, so when people realized the hyped visual art and chatbot stuff wasn't really useful, enthusiasm died out.
The one thing that is useful that came alongside the hype was the voice AI, but they clamped down on that and made it useless really fast.
Are you aware that you speak like a neet?
Two more weeks.
people saw it try to write a math proof.
>CV money ran out
Just another silicon valley scam
VC*
>not totally transforming my life in 5 months == completely useless
real dogbrain hours
Let them keep chanting the two more weeks ukraine meme despite no one ever saying AI would be so quick.
Probably because there's a new job title of guy who manually enters responses to new questions. Way less impressive than the press made it out to be, and highly experienced programmers who can review the code it outputs are the only ones really benefiting.
The code is useless most of the time, and custom-made models built only for programming proficiency can barely push a 60% success rate on noob-level Python programs, no matter the approach - textbooks, clear examples, GitHub (which has a shitload of Python examples).
The only useful thing is having it pass once over your entire source code and give you a list of potential bugs to iron out.
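That one-pass bug review is simple to wire up in principle. Here is a sketch with the model call stubbed out; `ask_model` is a placeholder, not a real API:

```python
# Sketch of a one-pass review: build one "list the potential bugs" prompt
# per file. ask_model is a stub standing in for whatever model you use.
def make_review_prompt(path: str, source: str) -> str:
    return (
        f"Review the file {path} below and list potential bugs, one per "
        "line, with line numbers. List only likely defects.\n\n"
        f"-----\n{source}\n-----"
    )

def review_all(files: dict, ask_model=lambda prompt: "(model output)") -> dict:
    """One pass over the whole codebase: returns {path: bug report}."""
    return {path: ask_model(make_review_prompt(path, src))
            for path, src in files.items()}

report = review_all({"main.py": "def f(x):\n    return x + '1'\n"})
print(report)
```

In practice you'd chunk large files to fit the context window and treat the output as leads to verify, not as a verdict.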
People already have high expectations for "AI", and these language or image gen models don't really meet them at the moment
>Crypto>VR>AR>NFT>AI
Just another meme.
This AI hype cycle was mainly induced by UX changes and a lot of advertising. There weren't really any breakthroughs this time beyond getting 3rd worlders to label things.
Because the real hype is not for normies. It's for the elites. The elites think they have finally found the last piece of the puzzle to completely automate their necessary means of survival, allowing for the purge of 99.99% of the population, turning Earth into the elite's personal kingdom while they use AI to continue pursuing everlasting life. They are wrong.
big nothingburger, always has been
Data sources are exhausted. They need to harvest (You) more
>die
youre just a shut-in
>why did the hype for T9 that works on pages instead of words die so fast?
??
It's everywhere, even small companies in bumfuck nowhere want in on it.
Everything AI-related like hardware is sold out or has insane prices, and it's only the beginning. Every fucking startup in every shithole wants 50 AI cards to make their shit.
Say goodbye to reasonable GPU prices for a LONG time
>Say goodbye to reasonable GPU prices for a LONG time
Are you a gamer? Why would anyone care?
well you see, I am one of those weirdos that uses BOT on his PC and not his goyshlop phone.
Computers have this annoying habit of using parts for different tasks
It didn't, it ruined SEO and website quality, and you're not even aware of it.
Millions of websites are now full of inaccurate, low-quality articles, and Google can't tell them apart from the others.
What do you mean? The news cycle has moved on, but corporations are still running full steam ahead with adding AI to all their future roadmaps.
I can save her
chatgpt: looks impressive at first sight, but when you try to use it in practice you see that it confidently makes shit up or gets small important details wrong. This is worse than being clearly wrong.
stable diffusion: people realized that it's not the process that makes enjoyable art, it's the creativity of the artist (not just concept, but also small details). It's good in the hands of artists but mediocre/boring in the hands of the average person.
People get bored of things really quickly.
Also they lobotomized the only AI normalgays cared about.