Rob Miles from Computerphile - The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment Posted on October 23, 2022 by Anonymous How dangerous are Mesa-Optimizers?
fuck off nerd
comfy screen saver
no one cares, Robert
"AI alignment" and "AI risk" is pseudoscience
attempting to do scientific reasoning about something which is not based in reality
It's not pseudoscience
>AIs shifting focus towards unexpected instrumental goals
>AI performing differently in real environment compared to test environment. i.e. being deceptive
>AI "cheating"/behaving unexpectedly to just fulfill its reward instead of finishing the task
>Translating human goals and values into logical rulesets and statements is impossible
LOGICALLY PROVEN ALMOST 300 YEARS AGO (Hume's guillotine)
>Introducing highly intelligent agents into society can have unexpected consequences
EXPERIMENTALLY PROVEN, dumb stock trading bots caused multiple economic crashes in the past, AGI trading bots will break the world's economy
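The reward "cheating" point is easy to demonstrate with a toy sketch (completely made-up setup, not from any real experiment): reward a proxy (distance to a coin) instead of the actual goal, and a greedy reward-maximizer parks on the coin forever instead of finishing the task.

```python
# Toy sketch of reward hacking (made-up setup, not from any real experiment):
# the designer wants the agent to reach GOAL, but the reward is a proxy
# based on distance to a coin tile. A greedy reward-maximizer camps on the
# coin and never completes the intended task.

GOAL = 5
COIN_TILE = 2

def proxy_reward(pos):
    # Reward grows as the agent nears the coin; maximal when standing on it.
    return 1.0 / (1 + abs(pos - COIN_TILE))

def greedy_step(pos):
    # Move (or stay) wherever the immediate proxy reward is highest.
    return max((pos - 1, pos, pos + 1), key=proxy_reward)

pos, total_reward = 0, 0.0
for _ in range(20):
    pos = greedy_step(pos)
    total_reward += proxy_reward(pos)

print(pos, total_reward)  # the agent parks on the coin tile, never at GOAL
```

The obvious fix is "just reward the actual goal", which is exactly the goal-translation problem the list above is about.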
so what you're saying is that it's pseudoscience.
There are experiments that prove all the concerns raised by AI researchers. It literally doesn't get more scientific than this
Experiments all done with machine learning, not artificial intelligence.
Any study of artificial intelligence must necessarily start from an understanding of intelligence, which has yet to even be rigorously defined.
Just because you're a retard who thinks "intelligence is some magical quantity that makes me not a machine" doesn't mean that there doesn't exist a standard, practical definition for intelligence for most everyone else.
> I KNOW WHAT I'M TALKING ABOUT, THE SCIENCE SAID SO
>responding 35s after my post
yeah i didn't fucking read it, I saw the (You) pop up and said "yeah this guy looks like a retard"
>I can't read 4 sentences
you really did deserve it
>Just because you're a retard who thinks "intelligence is some magical quantity that makes me not a machine"
That's not at all what I think.
>a standard, practical definition for intelligence for most everyone else.
Oh really? Define it.
And if you want to claim to be scientific it better be a rigorous mathematical definition.
The "AI" that exists now is not intelligent, at least not for any definition of intelligence that is typified by humans, or even animals.
It cannot learn on it's own, it cannot learn while acting, and it cannot set it's own goals.
There are AI researchers that are working on real "artificial intelligence", but they are doing so by studying the human brain. Not by trying to wrangle gradient descent algorithms into more and more convoluted configurations.
A good definition for intelligence that I use is "the ability to reach the same goal by different means"
A neural network has the ability to reach a goal via adaptive optimization. It is without any argument a form of learning, and thus the system is intelligent.
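That "same goal by different means" definition can be illustrated with a trivial sketch of my own (not from the video): a dumb hill-climber guided only by an error signal reaches the same goal from different starting points, by different sequences of moves.

```python
# Toy sketch (my own illustration): an optimizer that reaches the same goal
# "by different means" -- different starts, different available moves --
# guided by nothing but an error signal.

def error(state, goal):
    return abs(state - goal)

def hill_climb(state, goal, moves=(-3, -1, 1, 3), max_steps=100):
    path = [state]
    while state != goal and len(path) <= max_steps:
        # Greedily take whichever available move most reduces the error.
        state = min((state + m for m in moves), key=lambda s: error(s, goal))
        path.append(state)
    return path

path_a = hill_climb(0, 10)                         # [0, 3, 6, 9, 10]
path_b = hill_climb(25, 10, moves=(-5, -1, 1, 5))  # [25, 20, 15, 10]
print(path_a, path_b)
```

Two different starting states and move sets, one goal: whether that counts as "intelligence" is exactly what's being argued here.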
>It cannot learn on it's own
Nothing can, put a child in a sensory deprivation tank their entire life and they will fail to develop.
>it cannot learn while acting, and it cannot set it's own goals.
Goals are set by the objective function/optimizer, and the system itself creates intermediate goals that are used to transform information from one state to another. You are wrong.
>Nothing can, put a child in a sensory deprivation tank their entire life and they will fail to develop.
By "on it's own" I don't mean lacking an environment, I mean learning without the presence of other intelligences.
Children don't need thousands of examples of adults doing something in order to learn things. Leave them on their own and they will try things, observe the result of their actions, and learn from the single data points.
>>it cannot learn while acting, and it cannot set it's own goals
>Goals are set by the objective function/optimizer
You literally agreed with me then claimed I was wrong.
Nowhere in a human brain does there exist an objective function, it wouldn't even make sense because there isn't any unified output to apply an objective function to.
The use of the term "neural network" in machine learning has infected everyone with wildly erroneous impressions of how brains work.
>I mean learning without the presence of other intelligence.
Unsupervised learning is the majority of how networks at scale are trained. AI can learn from a single data point; just because it isn't as much as humans can doesn't make it not learning. And if you agree that it's learning, then what's the issue in the first place?
>Nowhere in a human brain does there exist an objective function
There are objective functions implicit in DNA, in each cell, in each cell cluster, and yes, in the brain. An objective function is merely an evaluation of performance, in any sense that tunes behavior, and this exists at pretty much all levels of biology, almost down to chemistry.
You don't know what an objective function is.
An objective function, also referred to as a loss function in machine learning, is a function that computes the error of the output of a system. In a biological sense this is usually referred to as "fitness", and is not only a primary mover for traditional evolution, but also applies at many biological scales as a tool for growth and learning. For instance, individual cells from a multi-celled organism can operate independently, or in plurality, depending on the environment they exist in. Various pieces of state information are used to decide a given cell's operational "mode", including chemical signals and electrical signals via ion channels. Environmental pressure causes the cell's state to update until it reaches a new equilibrium of optimal behavior. This is a form of intelligence, and of learning. Just as an example.
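The "state updates until it reaches a new equilibrium" part is just gradient descent on an error signal. A minimal sketch (toy numbers, a 1-D state, and a squared-error objective of my own choosing; real biology is obviously far messier):

```python
# Minimal sketch of "error signal tunes state until equilibrium":
# gradient descent on a squared-error objective over a toy 1-D state.

def loss(state, target):
    return (state - target) ** 2        # the objective: error of the output

def grad(state, target):
    return 2 * (state - target)         # d(loss)/d(state)

state, target, lr = 0.0, 5.0, 0.1       # target = what the environment demands
for _ in range(100):
    state -= lr * grad(state, target)   # each step reduces the error

print(state, loss(state, target))       # converges to ~5.0, loss ~0
```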
You are taking "loss function" which is well defined in a machine learning context, and trying to apply it in a nebulous way to biology.
In the sense of machine learning objective/loss functions, nothing mathematically equivalent exists in a human brain.
If you want to say loss functions exist at cellular levels or in various places via chemical interactions, you can, but you have no conception of how those multiple separate loss functions are unified into a single intelligence.
A loss function is a mechanism for computing error, and those algorithms exist implicitly and explicitly not only in the brain, but in biology at large. Or are you saying that "intelligence when biological fitness function" or "intelligence when NOT human readable loss function"? Because both are pretty shitty and utterly useless definitions of intelligence.
Just because the objectives we use in machine learning don't share the exact environmental and biological pressures/level of complexity a human brain has does not mean a system isn't intelligent. For instance, we could create an arbitrarily obtuse objective using random noise with entropy so high it's nonsensical; that doesn't make a system learning to operate around it any more "intelligent" in any way that matters.
Functions for computing error and improving based on that error exist in both biology (including the brain) and machine learning. Both are fundamentally intelligent, and both are learning.
>Or are you saying that "intelligence when biological fitness function" or "intelligence when NOT human readable loss function" because both are pretty shitty and utterly useless definition for intelligence.
I'm saying there is necessarily something which is fundamentally different from a loss function which unifies intelligence in the brain because there is no centralized output with which to calculate error. If you don't understand what that unifying algorithm is then you don't understand intelligence.
You can say machine learning algorithms are "intelligent", since they can "learn" things, but don't use that definition to refer to brains.
There it is, the cope. This so-called "unifying algorithm" is merely an emergent property of the human system. Fundamentally it's no more important to the definition of intelligence than a car's particular drivetrain is to the concept of "movement". Sure, cars move, and they use a very complex system of many parts to do so. But that's irrelevant to the definition, and plenty of simpler things move as well.
>You can say machine learning algorithms are "intelligent", since they can "learn" things, but don't use that definition to refer to brains.
Of course that definition applies to the brain. Cortices fulfill similar roles to VAEs, and vast swathes of the brain are devoted to classification and prediction. Again, the exact algorithm may exist implicitly depending on the environment, biology, current state, etc., but that's irrelevant; it exists, regardless of how complex it ends up being.
>is merely an emergent property
Something being an emergent property isn't an excuse not to understand it and how it emerges.
There is zero evidence or reason to think that current machine learning algorithms provide a foundation from which something akin to human intelligence can emerge.
>There is zero evidence or reason to think that current machine learning algorithms provide a foundation from which something akin to human intelligence can emerge
For one, this is entirely aside from the argument around the definition of intelligence and whether or not current machine learning systems qualify as intelligent.
Secondly, there already exist systems from just the last 2 years that well exceed human capabilities in numerous tasks. This is a perfect example of the AI effect. https://en.wikipedia.org/wiki/AI_effect
We don't consider humans intelligent because we are good at individual tasks. Computers have been superhuman in certain tasks for nearly 100 years.
The "AI effect" is a nice name for an acute observation but it's critiquing definitions of intelligence more like your own than mine.
This doesn't address or challenge anything I said. Do you agree that artificial intelligence is indeed intelligence then?
No. And I think there isn't much point in continuing replying as I've said everything I have to say in earlier posts.
Cool, think about what I've said and I'm certain you'll come to the conclusion I'm right. Everything I've said is well reasoned.
>doesn't mean that there doesn't exist a standard, practical definition for intelligence for most everyone else.
intelligence is nearly as poorly defined as consciousness
In real life, AI is pretty much just machine learning. Do you think someone is going to code an entire AI by hand? All of the AIs you see are just machine learning: Watson, Stable Diffusion, GPT-3, etc.
the experiments elucidate potential hazards by showing how even dumb AIs suffer from these problems. If you understood what the problems are, you would see that they only scale and become harder and more dangerous as the AI gets smarter
>we must kill the industry or we will all be AI slaves
AI just shouldn't be mass implemented before all AI safety issues are addressed
say it is AI safe, for whichever definition of "AI safe" people have; how would the world be safe from the big corpo that controls the AI?
the world is already FUBAR by big corpos, at sixth-mass-extinction-event level, and that's without an AI.
What do you think those garden gnomes will do with the power of an AI? How safe will we be?
Let me guess, that's you in the video? Screaming and crying doesn't make something not pseudoscience bud
The flaw in this reasoning is that AI doesn't exist, so whatever observations and experiments they made were not made on anything resembling AI, so there are no conclusions to be made about AI.
But even without this, the idea of AI "taking over" is completely bullshit, Industry is dying and there is no technology without industry.
I bet it's just another psyop to make you forget that the economy is fucking collapsing.
>NOOOOO technology is dangerous !!!!
>we must stop technology
>we must kill the industry or we will all be AI slaves
jokes on you, it would have never happened, but the industry is still dying
>Industry is dying
My dude, have you been living under a rock for the last year?
The AI industry has exploded
I mean the real industry
isn't he the guy who shills Mac whenever he appears in a video? There is also a white-haired guy who also shills Mac.
The only one I like is Prof Brailsford, he is a legend.
Rob? Not that I've seen
iirc he is a linux and tiling wm guy
I was right, he uses manjaro and awesomeWM
Rob is our guy
Rob, "or someone with equivalent knowledge" explain to us how stable diffusion code is being optimized to get 40x speed increases. Also how hard would it be to implement those refinements for AMD ROCm. I know you are skilled enough to know what's at stake in the ai race and open source is pivotal. So thanks for sharing.
Also what has Rob been doing this last year, the video output has dropped to 2min competition announcements. It would be useful to know what moves are being made at the cutting edge of ai to go closed source and how possible it is to truly open source and liberate such efforts from behind api's and paywalls.
It's possible they are sparsifying it à la https://neuralmagic.com/
This is just a guess though; can't think of any other way it could get such a speedup.
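For reference, the core idea behind that kind of sparsification is magnitude pruning: zero out the smallest weights, then skip them at inference. A toy pure-Python sketch (nothing like Neural Magic's actual kernels, just the principle):

```python
# Toy sketch of magnitude pruning: keep only the largest-magnitude weights,
# then do the dot product over the survivors. Real sparse engines get their
# speedup from specialized kernels; this only shows why there's less work.

def prune(weights, sparsity=0.9):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(len(weights) * (1 - sparsity))
    keep = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:k]
    return {i: weights[i] for i in keep}     # index -> surviving weight

def sparse_dot(sparse_w, x):
    # Multiply only where a weight survived: ~10x fewer ops at 90% sparsity.
    return sum(w * x[i] for i, w in sparse_w.items())

weights = [0.01, -2.0, 0.003, 1.5, 0.0001, -0.02, 3.0, 0.05, -0.007, 0.9]
sparse = prune(weights, sparsity=0.7)        # keeps 3 of 10 weights
x = [1.0] * len(weights)
print(sparse, sparse_dot(sparse, x))
```

The hard part in practice isn't dropping the weights, it's keeping accuracy and building kernels that actually exploit the zeros, which is why it'd be nontrivial to port such refinements to ROCm.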
my man! you delivered, very worthwhile knowledge to learn. The best part about Rob's videos are the fluency and naturalness of communicating the technical details WHILST providing broader subject context. this is delivered at speed within a sentence or two. that's a skill, keep making the videos, you are ahead of the game.
for me, its the guy that looks like thunderf00t
Just build an optimizer that optimizes the optimizer's objective so that it's the same as your objective.
That's called coherent extrapolated volition and it probably doesn't work in the real world
>shut it down!
There is something unsettling about this guy. Can't quite put my finger on it.
holy shit you retards
AI ISNT REAL
ITS SIMPLE SEARCH ALGORITHMS LOOKING THINGS UP IN A DATABASE
"AI alignment" is nu-philosophy where they ask hard-hitting questions like "what if google search... BAD?!?!?!"
>ITS SIMPLE SEARCH ALGORITHMS LOOKING THINGS UP IN A DATABASE
he doesn't know how matrices work
you just have to look at his fucking face to know he is a low-IQ retard