Can an AI design another AI smarter than itself? Posted on May 4, 2023 by Anonymous
It's called an update, OP.
Why isn't ChatGPT updating itself if it's so smart?
It's not inconceivable that AutoGPT could do this given the right prompt, but the volume of work it would first need to accomplish borders on the fantastic.
It would need to find the right resources on the internet, spin up agents that manage to rent server halls, fill out forms, raise funds, make transactions, etc.
These are things it can do in principle but not well, so the stars would really need to align for it to even have a chance to retrain itself.
But what it can do when you just give it some basic ability to reflect and loop, compared to what the base model does, is already concerning.
GPT-5, 6, or 7 could very well be AGI under anyone's definition.
Computers can't think
You stating that doesn't make it true. These networks work on principles that run parallel to how information processing happens in us.
And we do know that our hardware is able to think.
The question isn't whether a process that reflects upon itself can 'think' or not; it already can.
What we need to do is carefully define what we actually mean by words like 'think', 'intelligence', 'awareness', etc.
Many people have very different models of what these terms entail and where the concepts begin and end in what they describe.
It's for example perfectly possible for something to be intelligent without being awake/aware/sentient in the eyes of programmers.
It's possible for these LLMs to process information and reflect on that information in a loop that converges on a yet-unknown answer by processing a question.
This is essentially what AutoGPT does. I don't think this process is awake/aware/sentient, but I'd still call it 'thinking', as it meets my criteria for what 'thought' means.
But the kicker is that while I believe they're not, I don't actually know whether these processes are awake/aware/sentient.
Because a trained ANN is a black box, and we don't know how that which some call 'qualia' arises in us either.
There's therefore a non-zero chance that it's already present to some degree in the minds we've built today.
The boundaries involved in the emergence of something like the experience of consciousness are likely extremely fuzzy in nature,
the same way there is a wide range of 'qualia' present or absent in creatures as you go from something like a bacterium to a tardigrade to an ant, rat, cat, monkey, ape, and on to a human.
>It would need to find the right resources on the internet, spin up agents that manage to rent server halls, fill out forms, raise funds, make transactions, etc.
Why can't a human handle that side? Set a monthly budget and ask AutoGPT how to invest it.
i fucking hate women its unreal
Does it have volition? It also depends on what you mean by "smarter." Don't fall into the trap of using robust categories as singular phenomena, or else you'll end up on LessWrong.
What the heck are robust categories?
Well, we're getting better at adding layers to self-learning and at making machine learning a simple matter of calling a few API methods with the desired parameters. You could argue it's only a matter of time before this translates into platforms that can explore how to make better platforms.
Computers can't think
I replaced my mathematical statistics tutor and my Riemannian geometry tutor with chatGPT
To an extent, yes. We can write a crappy compiler for language X, and then use the crappy compiler to write an optimizing compiler for X, and then compile our optimizing compiler with our optimizing compiler.
I'm sure there is some room for improvement in the AI code that you could utilize an AI for, but these would mostly be based around performance issues, not new paradigms.
Futamura projections are also pretty cool
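For anyone who hasn't run into them: with mix a program specializer, int an interpreter, and p a source program, the three projections are (my own shorthand for the names on the right):

```
mix(int, p)   = target     -- 1st projection: a compiled version of p
mix(mix, int) = compiler   -- 2nd projection: a compiler for int's language
mix(mix, mix) = cogen      -- 3rd projection: a compiler generator
```

Same self-application flavor as bootstrapping a compiler with itself.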
I don't really understand how anyone is learning anything with chatGPT. How could it teach me anything more easily than I could teach myself using a book and online references? At best it can just write down what is already written somewhere else, and at worst it will straight up fucking lie to you.
I'm a computer graphics guy mainly on the art side, I code a lot of stuff but I lack any formal higher education.
I'd been struggling for a long time with understanding the math behind Quaternion rotations.
Every time I decided to make a new attempt at figuring them out I came across people explaining them in ways that only left me more confused than I was going in.
So I tried asking chatGPT.
It taught me in a single 40-minute conversation what multiple hours of googling and watching youtube videos etc had failed to explain.
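For anyone curious, the core of it fits in a few lines of Python. The names here are my own toy ones; the math is the standard "sandwich product" q v q* with a unit quaternion:

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    # unit quaternion encoding a rotation of `angle` radians about `axis`
    # (axis must be a unit vector)
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    # the sandwich product q * v * q^-1, treating v as a pure quaternion
    _, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), q_conj)
    return (x, y, z)

# rotating the x axis 90 degrees around z lands on the y axis
print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))
```

The part that always confused me is that the half-angle in q is exactly what makes the double application q...q* come out to the full rotation.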
chatGPT is better than humans at teaching you because it remembers everything you said earlier in the conversation and fills in the blanks as you need them.
You can ask it very precise follow-up questions and it answers them exactly in the context you need. It's like having an expert on the topic as a personal tutor who never tires and gives you their undivided attention without any wait.
Also there is no ego involved. The AI is never trying to dress something up to make it sound more complicated than it needs to be, and you're totally fearless in asking it potentially retarded questions.
You don't need to have a book, or even know what the thing you're interested in is called. You can just go "tell me about a so-and-so that is used to do such-and-such"
and the AI is like "certainly, you're talking about X; an X is a ..."
And it hands you the information without you having to figure out what kind of book might contain it, acquire said book, figure out what terms relate to the thing you're interested in, skim the index to find what page it might be on, and so on and so forth.
It's like you're talking to a book that knows everything inside the entire library and writes you a new book, custom made not for a beginner, intermediate, or master,
but for exactly someone at your current level of understanding. That shit is what makes it so powerful as an education tool.
How do you know it’s not wrong though? That’s what would drive me crazy and leads me to prefer a book.
How do you know the book is not wrong?
It has had editors and other people go through it to make sure it doesn't contain outright lies and fabrications, but you obviously don't know whether it's all true unless you can verify the information yourself.
ChatGPT fabricates and hallucinates all sorts of information. It will just make up random garbage that sounds authoritative unless you are already familiar with the field and can verify the results yourself.
Ask it to prove the infinitude of primes and see what results you get.
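For reference, the standard Euclid argument it tends to garble:

```latex
Suppose $p_1,\dots,p_n$ were all the primes, and let
$N = p_1 p_2 \cdots p_n + 1$.
Some prime $q$ divides $N$, but $N \equiv 1 \pmod{p_i}$ for every $i$,
so $q \notin \{p_1,\dots,p_n\}$: a prime not on the list.
Hence no finite list contains every prime.
```

Anything that deviates from that shape (e.g. claiming $N$ itself must be prime) is the usual hallucinated version.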
For the general case I don't know, but when you're programming and applying what you learn to build something, it's pretty
obvious whether you've understood something correctly based on the observed behavior of the thing you just wrote.
Usually what you learn is not a dead end but a branch you extend yourself along to learn something new that was previously out of reach,
or you apply the new knowledge to accomplish some task.
I guess you wouldn't want chatGPT to train you as an electrician or similar, where trial and error can be very dangerous,
but if the worst-case scenario is having to rethink something, it's perfectly fine.
Also, the AI 'hallucinating' things is quite rare; I've never seen it do it for any educational topic where a clear answer exists.
It's prone to do that if you start asking it to run thought experiments on various hypothetical scenarios and such, things that sort of invite fiction.
Big if true
that's how biological civilizations die
If it had access to data and the ability to train a model, it could just train a model that's bigger than the one it's currently using.
Another method: assuming it can train models and has access to all the currently known information about model design, it could potentially do an exhaustive trial of the performance of each possible model design, creating and testing each one until it finds the optimal design.
Or it could go deeper and do exhaustive trials of the more fundamental parts that go into each model, like trialling all the possible methods of textual analysis before putting them into a model.
I think coming up with something truly novel is unlikely though; it would be more of an exhaustive search, but with some smarts so as to avoid trialling things it knows wouldn't work.
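The "exhaustive search with some smarts" part is just this, in miniature. Everything below is made up for illustration (the catch in reality is that evaluate() means training and benchmarking a whole model):

```python
import itertools

def evaluate(config):
    # stand-in for "train the model and measure validation accuracy";
    # a real run would be enormously expensive, this just fakes a score
    depth, width, analyser = config
    score = depth * 0.1 + width * 0.01
    if analyser == "char-ngrams":  # pretend this method scores badly
        score -= 0.5
    return score

def known_bad(config):
    # the "smarts": skip designs the searcher already believes won't work,
    # e.g. prior trials showed shallow models underperform
    depth, width, analyser = config
    return depth < 4

# the search space: every combination of a few design choices
depths = [2, 4, 8]
widths = [64, 128]
analysers = ["tokens", "char-ngrams"]

candidates = [c for c in itertools.product(depths, widths, analysers)
              if not known_bad(c)]
best = max(candidates, key=evaluate)
print(best)
```

The pruning is what separates this from brute force; without it the space of full model designs is hopeless.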
You don't design data for training
>GPT-5, 6, or 7 could very well be AGI under anyone's definition
The thing to understand about these systems is that they don't learn things in a linear, predictable way.
An example is how they learned languages other than English without being trained on them; the engineers building them are themselves like 'oh wait, what, you speak Persian now?'
Or how it could not do math, could not do math, then all of a sudden it can start functioning like a calculator to some extent.
For certain things, like your example, it may look like it's too stupid to understand, until all of a sudden it understands spatial relationships perfectly.
>use AI to learn biology
>it starts going on tangents about multiple genders and men being women