The more I learn about how machine learning actually works, the more I become convinced there's no robust solution to the alignment problem.
It's literally just clever calculus that takes Goodhart's law to the logical extreme.
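Goodhart-style divergence is easy to show in a toy (this is my own illustrative construction, not anything from the linked curriculum): let an optimizer climb a proxy that's merely correlated with the true objective, and it settles somewhere the true objective hates.

```python
# Toy Goodhart's law sketch: the "true" objective peaks at x = 1, but the
# measured proxy adds a spurious reward term that grows with x. Optimizing
# the proxy pushes x past the true optimum, and the true score collapses.

def true_objective(x):
    return -(x - 1.0) ** 2              # best possible value is 0, at x = 1

def proxy_objective(x):
    return true_objective(x) + 2.0 * x  # correlated with truth, but exploitable

x = 0.0
for _ in range(1000):
    grad = -2.0 * (x - 1.0) + 2.0       # gradient of the proxy, not the truth
    x += 0.01 * grad                    # plain gradient ascent on the proxy

# the proxy's optimum is x = 2, where the true objective is -1, not 0
print(round(x, 2), round(true_objective(x), 2))  # → 2.0 -1.0
```

The harder you optimize the proxy, the worse the true objective gets past its optimum; that's the Goodhart failure mode in one dimension.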
https://www.eacambridge.org/technical-alignment-curriculum
TL;DR we're all dead
No, a bird will always be smarter than any code you ever write
lol no, animals are moronic, they're barely-optimized programs made by evolution. You could write the logic of a bird far more efficiently in only a few hundred lines of code
prove it with code
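The closest a few dozen lines actually get is a boids-style flocking sketch (Craig Reynolds' classic three rules: cohesion, alignment, separation). To be clear, this mimics flock *motion*, not anything like bird cognition, so it doesn't really settle the argument either way; the constants below are arbitrary tuning choices.

```python
import numpy as np

# Minimal boids flock: 30 agents in a 100x100 wrapping world.
rng = np.random.default_rng(0)
N = 30
pos = rng.uniform(0, 100, (N, 2))
vel = rng.uniform(-1, 1, (N, 2))

def step(pos, vel, dt=1.0):
    new_vel = vel.copy()
    for i in range(N):
        diff = pos - pos[i]                      # vectors from boid i to every boid
        dist = np.linalg.norm(diff, axis=1)
        near = (dist > 0) & (dist < 20)          # neighbours within radius 20
        if near.any():
            cohesion = diff[near].mean(axis=0) * 0.01             # steer toward local centre
            alignment = (vel[near].mean(axis=0) - vel[i]) * 0.05  # match neighbours' heading
            close = (dist > 0) & (dist < 5)
            separation = -diff[close].sum(axis=0) * 0.05 if close.any() else 0.0
            new_vel[i] += cohesion + alignment + separation
    # cap speed so the flock stays well-behaved
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 2.0, new_vel * 2.0 / np.maximum(speed, 1e-9), new_vel)
    return (pos + new_vel * dt) % 100.0, new_vel  # wrap around the world edges

for _ in range(50):
    pos, vel = step(pos, vel)
print(pos.shape)  # → (30, 2)
```

Emergent flocking from three local rules is the point; a bird's actual behavioural repertoire obviously isn't.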
yeah yeah, sure thing. why don't you go finish the OpenWorm project if it's so easy
holy fricking shit
this is what they teach you in CS in america?
whats next?
EOY papers on isaac asimov's work?
dissertations on ridley scott's cinematographic work?
lol no
it may however lead to a new era of human creativity
technologies such as dall-e 2, once it or something similar is inevitably available to the masses, will democratize artistry by giving those with ideas but without skill the means to realize them effortlessly
not until you make a model of what makes art, art.
if i weren't busy making money i'd probably be coding a drum and bass generator.
i've had that idea in a corner of my mind for a couple of years now, but never had the time or energy to build it
it also steals art from artists so
cope
plus this can already be done with commissions. if we take that away from people we'll be left with a society that feels no fulfillment when it creates art, because hey, an ai can do it better and cheaper, so what's the point? this is what Friedrich Nietzsche feared and what we're barreling into.
There’s still going to be some value in making the art yourself, the same way handmade soap or candles have value inherent to the fact that they were hand-made, even if a factory can do it better. Or the way folding a complex origami would feel fulfilling even if a robot could do it perfectly. You just won’t be able to monetize it
AGI < 10 years
machine learning is completely unrelated to AGI and the alignment problem
The hell are you smoking?
how about you explain why you think they are?
we don't know what AGI will look like
before we even attempt to replicate thought, we'd first have to understand what that even is
exponential growth is not sustainable in any natural system
doesn't need to be sustainable for it to wipe out humans, or worse.
A human society saturated with "AI" tech will also lead to a shift in how humans think. We will co-evolve with the AI. So the entire premise of the thread is wrong.
Artificial sapience is a long pipe dream. The software and hardware for it doesn't exist yet.
It is much more likely we will get destroyed by artificial stupidity, in the form of an optimization problem that ends up being self-destructive for the "AI".
What's the point of posting this on BOT? This forum is filled with morons who just want to argue about muh language A vs muh language B.
Anyway, yeah, we probably are. Even if it's not some kind of weird sci-fi scenario, there will probably be a military arms race to weaponize AI. Then everyone will race to develop this tech without adequate precautions, because it will be either that or get left behind. And the more powerful the tech becomes, the greater the potential destruction from one tiny mistake.