Deep learning model that will generate erotic stories based on several features which can be measured via a form/test in a web app. It is trained on a curated dataset, with no attribution to any author for the stories I use in it.
Coomer tech is best tech.
How would you approach it? I've thought about something similar using transformers but don't know how to stop it from generating nonsense
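Not from the thread, but the standard first levers against a transformer generating nonsense are decoding controls: temperature, top-k or nucleus sampling, and repetition penalties. A minimal pure-Python top-k sampler over a toy logit vector (the logit values are made up for illustration):

```python
import math, random

def sample_top_k(logits, k=3, temperature=0.8, rng=random):
    """Keep only the k highest logits, rescale by temperature, then
    sample from the resulting softmax. Cutting off the long tail of
    unlikely tokens is the usual trick against incoherent output."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in top]
    m = max(scaled)                       # subtract max for numerical stability
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    r = rng.random()
    acc = 0.0
    for idx, p in zip(top, probs):
        acc += p
        if r <= acc:
            return idx
    return top[-1]                        # guard against float rounding

logits = [0.1, 5.0, 3.0, -2.0, 4.0]      # toy next-token scores
token = sample_top_k(logits, k=3)
print(token)  # always one of the 3 highest-scoring tokens: 1, 2 or 4
```

Lower temperature and smaller k make output more conservative; libraries like HuggingFace transformers expose the same knobs on `generate()`.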
Nothing. I find AI kinda interesting, but have zero applications for it.
If someone is making anything worthwhile, they'll probably be quiet about it. Something akin to the dark forest theory. Lots of left-brained people in the field, desperate to find real-world use cases.
I made an e-thot image classification model on BOT to get some experience. Going to leverage what I learned to make a time series forecasting model next.
I made a thing that takes a problem specification and outputs a sequence of actions to reach a goal, trained with self-supervised RL. It took a lot of reward shaping to get good results, and I kept reading papers for the latest "sample efficient" techniques because I was using free Colab to train. I either had models that trained fast but could only solve goals 1-10 out of 50, or ones that showed promise but were really slow to run (and thus train).
After randomly swapping out models and architectures from 10 different papers, I gave up.
It would work fine if I had a bunch of compute, but I wasn't going to spend the money on 10,000 core-hours.
>but I wasn't going to spend the money on 10,000 core-hours.
These days you need a supercomputer to do AI research. Sad!
While you'll never match the speed of supercomputers, there are numerous ways to compensate
>Gradient accumulation to simulate larger batch sizes
>Reversible architectures to train models or use batch sizes that would otherwise exceed VRAM
>16-bit floats
>Reusing already-processed batches while loading new ones to reduce GPU starvation (I believe the term is "data echoing")
>Storing data in contiguous memory if you have no SSD
I'm sure there are many more, but reversible architectures alone have gone a long way for me personally.
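For the first trick on that list, a dependency-free sketch of why gradient accumulation works: averaging per-microbatch gradients reproduces the full-batch gradient exactly, so several small batches produce the same optimizer step as one large batch. Shown here for a one-parameter linear model with MSE loss; the model and data are illustrative, not from the thread.

```python
def grad_mse_linear(w, xs, ys):
    """Mean-squared-error gradient dL/dw for the model y_hat = w * x."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def accumulated_grad(w, xs, ys, micro):
    """Split (xs, ys) into microbatches of size `micro` (assumed to
    divide the batch evenly) and average their gradients, as you would
    before taking a single optimizer step."""
    total = 0.0
    n_micro = len(xs) // micro
    for i in range(0, len(xs), micro):
        total += grad_mse_linear(w, xs[i:i + micro], ys[i:i + micro])
    return total / n_micro

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5

full = grad_mse_linear(w, xs, ys)          # one big batch of 4
accum = accumulated_grad(w, xs, ys, micro=2)  # two microbatches of 2
print(abs(full - accum) < 1e-9)  # True: the two gradients match
```

In PyTorch the same effect comes from calling `loss.backward()` on each microbatch (with the loss scaled down by the number of microbatches) and only calling `optimizer.step()` once per accumulation window.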
nothing, too poor to afford an rtx A5000 gpu
just buy a 3090 bro
I'm slightly retarded and am still learning the basics of programming.
How difficult (relative to the world of ML) is it to use TensorFlow or PyTorch to make a program that finds the best possible strategy/settings/parameters to win a (simple, numbers-based) game?
Do I need a powerful GPU as well?
Is your problem differentiable?
>yes
Then you can use deep learning
>no
Then use evolutionary strategies such as genetic algorithms, particle swarm optimization, etc.
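A minimal sketch of that non-differentiable branch: a (1+λ)-style evolution strategy that tunes two game "settings" against a toy score function. The score function here is a made-up stand-in for whatever the game actually rewards.

```python
import random

def score(params):
    """Toy stand-in for the game: best settings are (3, -1), score peaks at 0."""
    x, y = params
    return -((x - 3) ** 2 + (y + 1) ** 2)

def evolve(generations=200, pop_size=30, sigma=0.5, seed=0):
    """Tiny (1+lambda)-style evolution strategy: mutate the current best
    individual with Gaussian noise and keep whatever scores highest."""
    rng = random.Random(seed)
    best = [rng.uniform(-10, 10), rng.uniform(-10, 10)]
    for _ in range(generations):
        pop = [[p + rng.gauss(0, sigma) for p in best] for _ in range(pop_size)]
        pop.append(best)                 # elitism: never lose the best so far
        best = max(pop, key=score)
    return best

best = evolve()
print(best)  # converges close to the optimum (3, -1)
```

This runs in a fraction of a second on a CPU, which also answers the GPU question: for a simple numbers-based game, no GPU is needed at all.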
Is that difficult? And would a good GPU be needed?
Thanks, I'll look into it, but the variables are quite numerous, so ML might be necessary.
>a (simple, numbers-based) game?
You can probably do this discretely if it is really a simple game, with old-fashioned AI (constraint programming)
https://stackabuse.com/constraint-programming-with-python-constraint/
You can solve stuff like sudoku with this.
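To make that concrete without installing anything, a brute-force sketch of what a constraint solver does: enumerate assignments and keep the ones that satisfy every constraint. Real libraries like python-constraint add propagation and backtracking on top of the same interface idea. The "numbers game" below is invented for illustration.

```python
from itertools import product

def solve(domains, constraints):
    """Brute-force constraint solver: try every assignment of values
    to variables and keep those satisfying all constraints."""
    names = list(domains)
    solutions = []
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return solutions

# Small numbers game: pick x, y in 0..9 with x + y == 5 and x > y.
solutions = solve(
    {"x": range(10), "y": range(10)},
    [lambda a: a["x"] + a["y"] == 5, lambda a: a["x"] > a["y"]],
)
print(solutions)  # [{'x': 3, 'y': 2}, {'x': 4, 'y': 1}, {'x': 5, 'y': 0}]
```

Brute force explodes combinatorially, which is exactly what the propagation in real constraint solvers avoids, but for a genuinely simple game this is often all you need.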
Can it generate sprite sheets for 2d games?
Dalle 2 is trained on any image (except porn), so it can generate any image (except porn).
>except porn
What a useless piece of software, goddamn
>What a useless piece of software, goddamn
I fully agree. Dalle 2 for porn when?
how is he going to turn the page without letting go of the cat?
Frog tongue
One thing I'd like to try, if I had any idea about model training, would be to train an AI on the data structure of SMW levels exported using Lunar Magic.
A trained model would, over time, be capable of producing levels that work in the game engine, are beatable, have appropriate level design that produces hurdles and challenges, etc.
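A first step for that idea, sketched under the assumption that a level exports as a 2D grid of tile IDs (the tile IDs and separator token below are made up, not Lunar Magic's actual format): flatten levels into token sequences a sequence model can train on, with an inverse so generated sequences round-trip back into levels.

```python
ROW_SEP = -1  # hypothetical end-of-row token

def grid_to_tokens(grid):
    """Flatten rows left-to-right, top-to-bottom, inserting a row
    separator so the model can learn the level's 2D structure."""
    tokens = []
    for row in grid:
        tokens.extend(row)
        tokens.append(ROW_SEP)
    return tokens

def tokens_to_grid(tokens):
    """Inverse mapping, so generated token sequences decode to levels."""
    grid, row = [], []
    for t in tokens:
        if t == ROW_SEP:
            grid.append(row)
            row = []
        else:
            row.append(t)
    return grid

level = [[0, 0, 5], [1, 1, 1]]  # made-up tile IDs
assert tokens_to_grid(grid_to_tokens(level)) == level
```

With levels encoded this way, any off-the-shelf sequence model can be trained on them; the harder requirements (beatability, good pacing) would need a playability check or RL-style filtering on top.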
where does one start to learn AI? I want to make something as awesome as Jarvis or Cortana
New SOTA for code generation.
https://arxiv.org/abs/2207.10397
>CodeT improves the pass@1 on HumanEval to 65.8%, an increase of absolute 18.8% on the code-davinci-002 model, and an absolute 20+% improvement over previous state-of-the-art results.
It's over for human programmers
>It's over for human programmers
you mean
>It's over for humans