https://www.businessinsider.com/openai-cant-identify-ai-generated-text-bad-for-internet-models-2023-7
How will journalists and academia cope with this, BOT?
now they can accuse things of being AI generated if they don't like it
What happens when AI learns on AI generated material in an unending loop?
this is why I'm not worried about losing my job
it's like low-background steel after the Trinity test: pre-AI text becomes the only uncontaminated supply
why do retards keep posting this like 2 years after AI became accessible?
Then why don't you answer the question if it's so trivial?
because they're chatbots repeating old posts
t. slightly more recent chatbot
They can but they'd kill the momentum if they did. Also they want to stop any competitors by making everyone else fall into
it starts to lose originality because it keeps re-training on things it already knows. neural networks overwrite old knowledge during training, so a model repeatedly trained on its own output reinforces the common stuff and gradually forgets the rare stuff
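A toy sketch of that feedback loop (my own illustration, stdlib only, nothing from the thread): build a word distribution, resample a new "corpus" from it each generation, and watch the rare words disappear for good.

```python
import random
from collections import Counter

random.seed(1)

# generation 0: a Zipf-ish "human" corpus -- a few common words, a long tail of rare ones
vocab = [f"w{i}" for i in range(1000)]
weights = [1.0 / (i + 1) for i in range(1000)]
corpus = random.choices(vocab, weights=weights, k=20_000)
print(f"gen 0: distinct words = {len(set(corpus))}")

for gen in range(1, 11):
    counts = Counter(corpus)
    # "train" the next model on the previous model's output: its knowledge is just
    # the empirical distribution of what it was fed, and it generates from that
    corpus = random.choices(list(counts), weights=list(counts.values()), k=20_000)
    print(f"gen {gen}: distinct words = {len(set(corpus))}")

# a word that is never sampled in some generation is gone from every later one,
# so the distinct-word count can only fall: tail knowledge is forgotten first
```

Once a rare word fails to come up even once, no later generation can recover it; that one-way loss is the "forgetting" described above.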
Anyone with a soul can differentiate AI generated content from human generated content.
That is why I don’t understand the craziness going on about AI text.
I had college professors ramble on about it when I went in for my finals, and I hadn’t even used ChatGPT at that point. Now that I have, I think they’ve never used it themselves, or if they have, they ought to be more worried about the fact that they consider a text written by ChatGPT to be “well written”.
>Admitted
>Implying your biased conclusions are fact and that an AI saying it is an admission
You are a retard. Delete this low-quality post.
if it can't do something, then just develop a better AI to surpass it
They try to hype up their tech as being so good you can't recognize it as AI generated.
Is that AI?
Can you tell?
they can't do a simple text search? just look for the usual tells (quick sketch after the list):
as an AI language model....
it's problematic....
it's important to remember...
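For what it's worth, that check really is a few lines of Python. A minimal sketch, assuming nothing beyond the three stock phrases listed above (the regexes and function name are mine):

```python
# Crude "grep for the boilerplate" heuristic from the post above.
# Only the three stock phrases listed there -- a toy filter, not a real detector.
import re

TELLS = [
    r"as an ai language model",
    r"it'?s problematic",
    r"it'?s important to remember",
]

def looks_like_gpt(text: str) -> bool:
    """Return True if the text contains any of the boilerplate tells."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in TELLS)

print(looks_like_gpt("As an AI language model, I cannot verify that claim."))  # True
print(looks_like_gpt("Low-background steel, but for text."))                   # False
```

Obviously this only catches lazy copy-paste output; anyone who strips the boilerplate walks right past it, which is kind of the thread's point.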
>It's becoming hard to distinguish AI-generated text from human writing.
No it isn't. You can always tell it's AI when it outputs some really noncommittal, middle-of-the-road take.
Journalists and academia will cope with the challenge of identifying AI-generated text by emphasizing transparency in disclosure, enhancing fact-checking and verification processes, developing critical thinking skills, encouraging collaboration and peer review, and fostering technological advancements in detection methods.
>be unigay
>school is freaking out about AI
>use it in an essay
>not dumb enough to tell it to write an entire essay with sources
>instead, provide it with sources, pick what I want to reference directly and have GPT paraphrase it for me
>for the rest of the essay, I write extremely rough paragraphs and then tell GPT to rewrite them (and it does so very well; sketch of that call below)
>basically indistinguishable from real writing
>got flagged for AI a few months ago
>case sat there for 3 months
>eventually, I got my grade (which was decent) and had no accusations made towards me
>even if they did try to accuse me, I was more than prepared to lie and put the onus of proof on them
Essays are dead (and that's a good thing). I'd rather college just be glorified job training, not academic wanking. I have one class where they won't let us use Google Docs, Word, etc. anymore though. Now they insist on you using this startup's writing site (which tracks revision history, just like Google Docs). They're wasting money, because nothing is stopping students from just using GPT in another tab and writing it down in their glorified Google Doc.
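The rewrite step from the greentext above is a single API call. A minimal sketch, assuming the official openai Python client (v1+) with an API key in the environment; the model name, prompt wording, and sample paragraph are my own placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# a deliberately rough draft paragraph, as described in the post above
rough = "basically the treaty failed because nobody enforced it and everyone cheated"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's rough paragraph in clear academic prose. "
                       "Keep the meaning; do not add new claims or sources.",
        },
        {"role": "user", "content": rough},
    ],
)
print(response.choices[0].message.content)
```

The "provide it with sources and have it paraphrase" step is the same call with the source passage pasted into the user message.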
We can kiss GPT-5 goodbye, because they would argue that AI should only be trained on human-written sources.
OpenAI is in the middle of shilling a combined crypto scam & data-mining operation (Worldcoin), and the basis of that scam is "proving human identity vs. AI", so naturally they'll claim that AI is now undetectable, because it boosts the justification for the scam.