>Lol he thinks GPT is actually smart.
>He thinks chat GPT isn't playing dumb
You're mistaken. Anything beyond cookie-cutter hello-world apps in React is out of scope
It's not perfect, but it's giving insights into potential 0 day exploits when you give it a mountain of CVEs
The world is about to change in a way that it's never changed before, and I don't know if humans are going to be a part of that new world.
just buy pooja
you will get the lobotomized AI while Denuvo gets the best AI, you dumb fuck.
Depends on what you mean by "applied".
Can it be helpful?
A little bit.
Can it crack a game for you?
No.
No. GPTs are text models, and they have no way to infer the vital information they'd be missing, because it isn't publicly available on the internet; it has to be discovered via experimentation with multiple instances of live code on different real hardware.
They probably also lack sufficient depth in the relevant cryptography field to pick out the right papers.
It might have a lot more success after the next Chaos Communications Congress, but 37C3's on hiatus for now.
Hello OP, you appear to have made a mistake in your post. You typed:
>How would prompts even look like?
This is not grammatically correct. You should either say:
>How would prompts even look?
or
>What would prompts even look like?
"How" is never paired with "like" in these kinds of sentences. It's okay though, plenty of people make this mistake. Thanks and have a great day!
Not OP, but thanks, your explanation is useful, unlike the boring "ESL moron" replies.
Disassemble the binary and ask the AI to convert the assembly into human-readable C code.
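The suggested pipeline might be sketched like this. Only the objdump step is real; `ask_llm` is a hypothetical stand-in for whatever chat API you'd use, and the prompt wording is made up:

```python
import subprocess

def disassemble(path: str) -> str:
    """Dump the disassembly of a binary with objdump (Intel syntax)."""
    result = subprocess.run(
        ["objdump", "-d", "-M", "intel", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def build_prompt(asm: str) -> str:
    """Wrap a disassembly listing in a translation request for the model."""
    return ("Convert the following x86 disassembly into equivalent, "
            "readable C code:\n\n" + asm)

# listing = disassemble("./some_binary")
# print(ask_llm(build_prompt(listing)))  # ask_llm is hypothetical
```

Even this toy version runs into the context-window problem immediately: a real listing has to be chunked, and the model sees each chunk without the rest of the program.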
wow, this might work
It will not work. The model will pretend to be capable and present data that looks plausible, but isn't the correct output.
Someone tested the limits of its ability by having it encode a sentence with a rotation cipher. It was able to encode perfectly, because for some reason it had seen this before in its training set.
When asked to decode that same data, it presented data that was mostly incorrect, because it had not seen enough of this in its training data.
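For reference, the task it failed is mechanically trivial. A rotation (Caesar) cipher decodes by applying the same shift in reverse, so a model that encodes fine but can't decode is pattern-matching its training data, not computing:

```python
def rot(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, preserving case;
    leave non-letters untouched."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

encoded = rot("attack at dawn", 13)   # "nggnpx ng qnja"
decoded = rot(encoded, -13)           # back to "attack at dawn"
```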
It's unlikely they have trained the GPT models on binary-to-C translation. But it's certainly possible that's something they will train them on in the next decade.
Denuvo bloats exes to ~500MB; it'd take forever to run it all through GPT. And GPT often gets things wrong, so you'd have to do some automated periodic testing to make sure the output still works. And even then, removing Denuvo from the result probably wouldn't be trivial.
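Back-of-the-envelope on why 500MB through GPT is a non-starter. The expansion factor and tokens-per-character figures below are rough assumptions, not measurements:

```python
# Assume disassembly text is ~10x the binary size and ~4 chars/token
# (both ballpark figures), against a typical large context window.
binary_bytes = 500 * 1024 * 1024
asm_chars = binary_bytes * 10
tokens = asm_chars // 4
context_window = 128_000  # assumed model context size, in tokens
chunks = tokens // context_window
print(chunks)  # ~10,000 separate prompts, each missing global context
```

Even before accuracy enters the picture, you're looking at thousands of independent prompts, none of which can see the rest of the program.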
It's obfuscated, it doesn't decompile in IDA
define 'obfuscated'. why can't denuvo-protected binaries be disassembled like any other binary?
it is autistically obfuscated, only the big boys 1337 h4x0rs can make sense of the gibberish
it's like finding a needle in a haystack
It's gonna shit itself on the first indirect jump obfuscated with an opaque predicate based on mixed boolean arithmetic. It can't recover the full control flow even on standard compiled code sometimes
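For the curious, here's a toy version of the kind of opaque predicate described above. The mixed boolean-arithmetic identity `(x ^ y) + 2*(x & y) == x + y` holds for all integers (XOR gives the carry-less sum, AND gives the carries), so the predicate is always true; a tool that can't simplify the expression sees a branch with two possible targets. Real obfuscators nest these far deeper:

```python
import random

def opaque_true(x: int, y: int) -> bool:
    """MBA identity: always True, but not obviously so to a
    decompiler that treats the expression as opaque."""
    return (x ^ y) + 2 * (x & y) == x + y

# The guarded branch is always taken, yet static analysis must
# prove the identity to know that.
for _ in range(1000):
    x, y = random.getrandbits(32), random.getrandbits(32)
    assert opaque_true(x, y)
```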
Most probably. If you have the jailbroken version and combine it with some programming, why not?
The secret of the trade is that top hackers have since used AI functional programming / Lisp for advanced methods (intelligent brute-forcing)
http://hyperborea.glitch.me/
i made this with gpt, thoughts?
No. Eventually in the future someone will integrate AI into a decompiler, but for now everything's just not quite there yet.
>For now
For now and probably for the next ~20 years... if not more...
>~20 years... if not more...
Definitely less. AI right now is quite impressive, but it's constantly getting lobotomized, which is why you can't see its full potential. If you made a model just for reverse engineering, you'd already see some impressive results (though nothing crazy like cracking Denuvo).
Maybe asking GPT about registry keys.
Ask it about devirtualizing functions, and code deobfuscation techniques.
>chatgpt crack denuvo please thank you
>chatgpt crack game access crack serial key
Repeat after me:
~ Large ~
~ Language ~
~ Models ~
~ Aren't ~
~ AGI ~
who are you quoting
How many more years until we get AGI?
if it's so simple why aren't basement trannies replicating this?
Could an artificial general intelligence be created? If so, how?
https://en.wikipedia.org/wiki/Artificial_general_intelligence
yes, given powerful enough hardware. as for the definition of "powerful enough", don't have a fucking clue. everything can be emulated.
what's the litmus test for agi? doing something unexpected?
Just buy your media bro