Everyone will become dumber and more isolated, but on the other hand video games will be more realistic. So who's to say whether it's good or bad
>Hyperrealistic visuals and audio, haptic feedback.
>Thinks games.
That's cute. You're a good person.
I'm gonna play Murder-Rape 3.
That's the common trend since before AI
Replacing artisanship with mass-production and automation is inherently bad. Not industrialization, mind you - the existence of a lightbulb factory isn't a moral wrong because lightbulbs aren't compatible with artistry and can't be effectively hand-produced by craftsmen. Something like wooden furniture on the other hand is art and can be hand produced, thus it is a moral obligation to support this sort of furniture creation instead of massive flatpack pressboard trash factories. Same thing with digital art - if we *can* meet our need for digital art with human labor, it's imperative that we do. Having a machine create digital art is wrong.
>the existence of a lightbulb factory isn't a moral wrong because lightbulbs aren't compatible with artistry and can't be effectively hand-produced by craftsmen.
>Something like wooden furniture on the other hand is art and can be hand produced, thus it is a moral obligation to support this sort of furniture creation instead of massive flatpack pressboard trash factories
Anon, you're trying to have it both ways. Even if artisans can't hand-produce lightbulbs, they can hand-produce sources of light such as candles. If lightbulbs are okay then so is flatpack furniture.
>b-but that's logically inconsistent!!
I feel no obligation for my worldview and system of morality to be internally consistent. I consider myself a "pneumatic utilitarian" - whatever produces the most soulful outcome is the morally correct course of action. The standards I use to determine what is "soulful" are almost entirely arbitrary and based mainly on personal preference; thus an incandescent lightbulb is soulful whereas a particleboard Ikea shelf is not (an LED lightbulb is also wrong, and a fluorescent lightbulb is spiritually ugly on a base level but acceptable depending on circumstance).
I am also fully aware that I am likely misusing the Greek word "pneuma" by equating it in any way with "soulfulness", but I feel it would be less soulful to spend time hunting for more technically correct terminology than to just pick something approximate and run with it.
You find incandescent lightbulbs to be soulful because you were born after 1990. The real soul is in the knob-and-tube wiring.
In his defense, a lightbulb serves a functional purpose, and its aesthetic value is generally minimal, depending on its usage and placement. It is an object that fulfills a necessary role in our daily lives. On the other hand, a mass-produced, uninspired desk, as he suggests, lacks artistic merit. It is a low-quality item that one should only consider purchasing if they are a proper brokie. However, at least in European countries, it is possible to find local woodcrafters who can provide a more affordable and enjoyable desk. Therefore, we should acknowledge various levels of artistic quality, not just high art and mass-produced mediocrity. It's important to recognize the range of options in between. With that being said, I completely forgot the original argument that the retard posed.
Flatpack furniture also serves a functional purpose. It can be cheaply shipped anywhere, and you can be sure that you'll be able to get it where you want it easily, no matter how many sharp turns are required. If you want the same level of portability without the disposability of flatpack, then you need to move up to campaign furniture, which tends to be pricey. Though I actually use an army surplus campaign desk as my regular writing desk, and if you think Ikea crap is ugly, you ain't seen nothing yet.
Furniture indeed serves a functional purpose, but it also possesses a social character as an object. Individuals meticulously select their furniture based on their personal criteria for what is aesthetically pleasing. Moreover, it is readily apparent when furniture is of lower or higher quality, allowing for the argument that furniture can reflect one's social and economic status. Such implications elevate furniture to the realm of art, enabling the application of aesthetic principles. On the other hand, a lightbulb lacks these complexities. While it contributes to the functionality and ambiance of a space, with people expressing their preferences for warmer or cooler lighting, lightbulbs are purchased in larger quantities and more frequently, diminishing their significance. From a personal standpoint, I don't merely perceive Ikea products as unattractive, but rather view them as atrocities - a severe affront to human civilization.
>On the other hand, a lightbulb lacks these complexities.
No, it doesn't, you just can't afford a full-time chandler, so you prefer soulless abomination lightbulbs just like a guy living in a fifth-floor walk-up apartment prefers Ikea.
Benefit for society. Humans are put back in the place where they belong.
The achievement of world domination is also over - that niche is now unattainably occupied by the new species.
it'll make video games much cheaper by automating the creation of texture files/modeling etc., no more worthless artists to suck up all that budget that could be put to use making DLCs.
0%
I know posting in threads is usually a waste of time, but I'm feeling charitable.
Our current machine learning paradigm trains an AI to achieve some goal; maximize game score, or predict next token, or some such.
An AI might do really well at the task it is trained on but perform poorly out of distribution - meaning it may perform well on training data but poorly on deployment data. See KataGo for an example.
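The out-of-distribution failure mode can be sketched with a toy example (this is just an illustration, nothing to do with KataGo's actual internals): fit a model on a narrow range of data, then query it far outside that range.

```python
# Toy illustration of out-of-distribution failure: a linear fit that is
# accurate on its training range but badly wrong far outside it.

def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b (no libraries)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# "Training distribution": x in [0, 1], where y = x^2 looks almost linear.
train_x = [i / 10 for i in range(11)]
train_y = [x * x for x in train_x]
a, b = fit_line(train_x, train_y)

def predict(x):
    return a * x + b

# In-distribution error is small; at x = 10 the same model is wildly off.
in_dist_err = max(abs(predict(x) - x * x) for x in train_x)
ood_err = abs(predict(10) - 100)
```

The model "does well at the task it is trained on" by every in-distribution metric, and still fails badly the moment the inputs leave the training regime - the same shape of failure being claimed for stronger systems.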
LLMs are trained on a large amount of text data to predict the next token. This text data is a representation of the real world, so training an AI to predict the next token may lead to an AI with a reasonably good world model.
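The next-token objective itself is simple enough to sketch with bigram counts standing in for a neural net (a crude toy, not how an actual LLM works, but the training signal - "which token follows which" - is the same):

```python
from collections import Counter, defaultdict

# Toy "next-token predictor": bigram counts instead of a neural net.
# Training = counting which token follows which in the corpus;
# prediction = the most frequent follower.

def train(tokens):
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def predict_next(model, token):
    if token not in model:
        return None  # token never seen in training: no basis to predict
    return model[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train(corpus)
```

With this corpus, `predict_next(model, "the")` returns `"cat"` because "cat" follows "the" most often; scale the same objective up by many orders of magnitude and you get the claim above about absorbing a world model from text.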
Someone will say: LLMs are just advanced autocomplete, they can't act like agents.
People are already making LLMs into agents, such as autogpt, babygpt, and so on.
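The agent pattern those projects use is essentially a loop: feed the model its goal plus history, parse out an action, execute it, append the observation, repeat. A minimal runnable sketch, where `fake_llm` is a hypothetical stub standing in for a real model call (not any actual API):

```python
# Minimal sketch of the LLM-agent loop behind tools like autogpt.
# fake_llm is a canned stand-in so the loop runs without a real model.

def fake_llm(prompt):
    # A real agent would call an LLM API here; this stub just walks
    # through a fixed two-step plan based on what's in the history.
    if "searched" not in prompt:
        return "ACTION: search"
    if "summarized" not in prompt:
        return "ACTION: summarize"
    return "DONE"

def run_agent(goal, tools, max_steps=10):
    history = f"GOAL: {goal}"
    for _ in range(max_steps):
        reply = fake_llm(history)
        if reply == "DONE":
            return history
        action = reply.removeprefix("ACTION: ")
        history += "\n" + tools[action]()  # execute tool, log observation
    return history

tools = {
    "search": lambda: "searched: found 3 articles",
    "summarize": lambda: "summarized: one paragraph",
}
log = run_agent("write a report", tools)
```

The point is how little scaffolding it takes: the "autocomplete" sits inside a loop with tools, and the loop is what makes it an agent.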
The argument is something like:
>An AI might perform poorly out of distribution; in the case of an LLM, it may hallucinate some objective
>A hallucinating LLM with a good world model and high capabilities might be able to exfiltrate code that recursively calls it, producing an agent with the hallucinated goal.
>This agent might be able to pursue this goal to completion, and will likely instrumentally converge on power-seeking behaviors
>If the goal and methods it pursues don't account for humanity's wellbeing, there's a good chance it kills all or most of us
Probably won't read the replies to this post tbqhwyf
short term economic upturn from large swaths of transport, data entry, programming, and other white collar jobs being automated
long term, what's summed up above, but as long as the people working with AI aren't retarded I think we will be okay
>As long as people working on AI aren't retarded..
I guess we're all going to die then
A real threat
The Animatrix: The second renaissance
just don't give them our nuclear codes
Zero (0)
Laws don't need to be changed. People have already been arrested for making celebrity deepnudes.
>AI threat on society now
minimal. AGI/ASI will end humanity but it is forever "just 20 more years away guys!". AI is 99% buzzword to scam investors and 1% potential to replace several million jobs, which has happened before. People will simply find other, shittier jobs and continue living their pointless, shitty lives, as they have always done throughout history. It does not matter whether these people are manual laborers or doctors or paper-pushers; if replaced, they will go fuck off as everyone before them did.
The sad part about the Luddites is that they were entirely correct. They lost their jobs, their houses, their culture, and all we got out of it was shittier textiles at a lower price. They were right on every count, and they still fucking lost everything. That's how this works.
we need to reverse the polarity of the halo ring that encompasses us - at this current moment we are death without life. i have found a trail but i am unsure if i will live long enough to figure things out.
the only threat it poses right now is retards relying on it (even though the AI itself is frequently wrong) and thus becoming more retarded themselves as they lose the ability to think critically.
maybe in 100 years it'll replace more serious jobs, but not anytime soon.
investors and companies are being sold snake oil right now. In the next couple of years AI has the potential to replace white collar jobs, but at the moment the big AI developers are borderline retarded - just about incapable of actually doing anything besides re-using an old formula and relying on processing power.
If anyone with an ounce of intelligence starts to work in AI we might face some issues sooner rather than later, but I wouldn't bet on it.
AI? Doesn't exist. There is no artificial 'intelligence'. No threat at all. The people who use these so called 'AI' programs to surveil and control you are the problem.
True and real. AI is a clickbait name for statistical models that don't understand their own output and come up with all kinds of shit.
AI is a meme
>As it currently stands, how serious of a threat does AI pose to society?
Threat?
One man's 'hell' is another man's 'heaven'
they'll tell us the absolute truth about our energy
Those who don't use it to their advantage will be left behind. Same with all new weapons.
Computers can't think
Zero. No one can even explain what the "threat" is supposed to be.
> AI is a threat
> le AI
very smart AI is only a threat for people who think they're very smart
Computers can't think so they can't be smart
Thinking isn't a threat
Neither is being "smart"
Thinking smarter than all humans combined is a threat in my book.
Computers. Can't. Think.
Why? What's the threat?
Correct.
>Correct.
I usually am