Good. The fucking thing was making teachers flunk honest students at random.
>the AI is so good at imitating human writing that nobody can reliably detect it
>this proves the AI isn't going to continue taking over jobs
Retard
Is it just one adversarial network vs another?
As I recall, adversarial training usually stops when the discriminator (AI detector) becomes no better than random chance, which means (if the training worked) that the generator has gotten good.
You could try training the discriminator some more or training a better discriminator, but if the generator is good, there'll likely be a high false positive rate.
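Schematically something like this (toy PyTorch sketch, the models and the "human" data are stand-ins I made up, not anyone's real training setup):
[code]
import torch
import torch.nn as nn

# Toy generator/discriminator; real models would be vastly bigger.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # stand-in for "human-written" samples
    return torch.randn(n, 8) + 2.0

for step in range(10_000):
    # Discriminator step: push real -> 1, generated -> 0.
    real = real_batch()
    fake = G(torch.randn(64, 16)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D call the fakes real.
    fake = G(torch.randn(64, 16))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Stopping criterion: D is at chance on fresh data.
    with torch.no_grad():
        preds = torch.cat([D(real_batch()), D(G(torch.randn(64, 16)))]).sigmoid()
        labels = torch.cat([torch.ones(64, 1), torch.zeros(64, 1)])
        acc = ((preds > 0.5) == labels.bool()).float().mean().item()
    if step > 1000 and abs(acc - 0.5) < 0.02:
        print(f"step {step}: D accuracy {acc:.2f}, basically a coin flip, done")
        break
[/code]
Once D sits at a coin flip on held-out data, training it harder mostly fits noise, which is exactly where the false positives come from.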
What training does the generator move to at that point if the discriminator is done?
RLHF and similar methods. Once it reaches passable quality, where the output is at least coherent, humans become the discriminator.
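i.e. the "discriminator" turns into a reward model fit to human preference rankings. The core of that part looks something like this (schematic Bradley-Terry pairwise loss, every name here is invented):
[code]
import torch
import torch.nn as nn
import torch.nn.functional as F

# Reward model: scores a response embedding; in practice it's a full LM with a scalar head.
reward = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(reward.parameters(), lr=1e-4)

def preference_batch(n=32):
    # stand-ins for embeddings of (human-preferred, human-rejected) response pairs
    return torch.randn(n, 64), torch.randn(n, 64)

for step in range(1000):
    chosen, rejected = preference_batch()
    # Bradley-Terry: maximize P(chosen beats rejected) = sigmoid(r_chosen - r_rejected)
    loss = -F.logsigmoid(reward(chosen) - reward(rejected)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The generator (the LLM) is then tuned to maximize this learned reward,
# e.g. with PPO, so the human-derived score replaces the GAN discriminator.
[/code]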
Anon, the low accuracy means it's too hard to predict lol
>it can't tell the difference between people and AI
>this means AI bad
I knew the people against AI had low IQ, but...
once more capitalism bends over for the machine god
xeet
retard-chama this doesn't prove what you think it does
>create a problem, sell the solution
>26% true positive rate
>9% false positive rate
LMAOOOOO
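for the anons who don't see why those numbers are a joke, run the Bayes math on them:
[code]
# Bayes check on the classifier's published numbers
TPR, FPR = 0.26, 0.09  # true/false positive rates from the announcement

def precision(prior_ai):
    # P(actually AI | flagged as AI)
    flagged = prior_ai * TPR + (1 - prior_ai) * FPR
    return prior_ai * TPR / flagged

print(precision(0.5))  # ~0.74: even at a 50/50 base rate, 1 in 4 flags is a false accusation
print(precision(0.1))  # ~0.24: if only 10% of text is AI, 3 in 4 flags hit humans
print(1 - TPR)         # 0.74: and it misses 74% of actual AI text regardless
[/code]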
there will never be an AI good at picking out ai written text vs non-ai written text. text doesn't have enough distinction for any AI to be able to pick that out, ever. Images are fair game, text is not.
>AI is bad because it gives wrong results and it’s making moderation difficult!
>I know, let’s make an AI to predict whether something else is an AI so we can have it moderate for us!
>What do you mean it’s making mistakes and we shouldn’t trust it for banning people, that’s just Big Tech trying to shut down competition!
>t. Stack Overflow Moderator
Sam Altman is literally the Antichrist, but he’s right here for a change, AI detectors are creating more problems than AI itself right now
AI detection was a failed endeavor to begin with. The AI generates valid sentences. As long as the flow of the paragraph is not disjointed, there is a reasonable probability that a human wrote that sentence. I mean, take Copilot for example: we have Copilot regurgitating GPL code. That code was written by a human and was just coaxed out of the AI. The same can be said about anything written by GPT. This means no discernible difference can be found, because the AI is producing content written by humans. And yes, I know AI output isn't verbatim from other people, but it's within that probability range. People will have to change their communication style for any meaningful AI detector to exist, and then the AI will simply change to match the people trying to outwit it.
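afaik the perplexity-style detectors are basically this: score text by how likely a reference LM finds it and flag "too predictable" text as machine-written. A caricature of one (HuggingFace GPT-2 used only as the scorer; the threshold is pulled out of thin air), which makes the overlap problem obvious since plenty of humans write inside the same likelihood band:
[code]
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Reference model used only to score text; the "detector" is just a threshold on top.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_nll(text):
    # mean negative log-likelihood per token under the reference LM
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # cross-entropy averaged over tokens
    return loss.item()

def looks_ai(text, threshold=3.0):
    # Arbitrary cutoff: "too predictable" => flagged as AI.
    # Human text routinely lands under the same cutoff, hence false positives.
    return avg_nll(text) < threshold
[/code]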
>AI can’t detect AI because it’s AI
Fucking twats lmao when will they learn that it’s fucking OVER
>OpenAI decides it has had its fill of taking advantage of the mentally handicapped after spending the better part of a year being fed free curated datasets of university-level writings by dumbass teachers placing self-righteous honor code crusades over student information safety... again
>more news at 10
>later... University students complain that OpenAI is leaking information they've never submitted on the platform! Nobody knows how the machine has collected their information or how it imitates their writing styles perfectly! Our top reporters investigate!