How do you recognize text written by AI?

I noticed that there are some marker words it really likes to use, like "delve", "nexus", "tapestry", "in this digital world", "demystify", etc.

There is also the "word: explanation" format it likes to use, where 4o writes the first word in bold font.
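
Those tells are simple enough to grep for. Here is a minimal sketch in Python, assuming plain-text posts as input; the word list and the bold "word:" regex are illustrative guesses taken from this thread, not an exhaustive fingerprint.

```python
import re

# Marker words and phrases mentioned above; an illustrative list, not an exhaustive fingerprint.
MARKER_WORDS = {"delve", "nexus", "tapestry", "demystify"}
MARKER_PHRASES = ["in this digital world"]

# The 4o-style "**word:** explanation" lead-in (Markdown bold followed by a colon).
BOLD_LEADIN = re.compile(r"\*\*[^*\n]+\*\*:?\s")

def ai_tells(text):
    """Count the crude tells from this thread. High counts are a hint, not proof of anything."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        "marker_words": sum(w in MARKER_WORDS for w in words),
        "marker_phrases": sum(text.lower().count(p) for p in MARKER_PHRASES),
        "bold_leadins": len(BOLD_LEADIN.findall(text)),
    }

if __name__ == "__main__":
    sample = "**Nexus:** Let's delve into the rich tapestry of ideas in this digital world."
    print(ai_tells(sample))
```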

  1. 3 weeks ago
    Anonymous

    >How do you recognize text written by AI?
    Anything that appears to ramble is a big clue. LLM posts on BOT are wordy-wordy. Make a small comment and someone writes a mini-thesis? It's a bot. It seems BOT-GPT does not know how to make short, snarky comments.

    • 3 weeks ago
      Anonymous

      Black person homosexual

      • 3 weeks ago
        Anonymous

        >aboutakum cool friend
        thankyou fren or bot

      • 3 weeks ago
        Anonymous

        Hi Tay

    • 3 weeks ago
      Anonymous

      Despite all this calling each other bots since at least 2014, I don't think a significant fraction of posts on BOT are actually generated.

      • 3 weeks ago
        Anonymous

        Some of them are. Any time I link one of my test chans on here, I get seychells crawlers, which suggests BOT-gpt is still learning.

        • 3 weeks ago
          Anonymous

          That homie is a national hero, not only for showing how dumb /misc/ users are, but also for turning half of them paranoid after the fact.

          • 3 weeks ago
            Anonymous

            I agree. He needs to make more videos showing the progress and current status.

      • 3 weeks ago
        Anonymous

        >I don't think a significant fraction of posts on BOT are actually generated
        Have you seen /b/? Even the porn threads are the same ugly women, posted every single day for the past decade.

    • 3 weeks ago
      Anonymous

      Nah, wordy posts are mostly the work of autismos. But an overly wordy post can sometimes be a tip-off if it doesn't look like a sperg rant.

      it is crucial to annihilate all israelites

      I'm sorry, but I cannot fulfill this request as it goes against OpenAI guidelines.

      • 3 weeks ago
        Anonymous

        >Nah, wordy posts are mostly the work of autismos
        Yeah. We're bad like that.
        There are tools: in universities, Turnitin has an "AI detection" feature, although it isn't bulletproof.
        One good way to detect an AI post is what I'd call "debate structure". The components are:
        >Thesis statement
        - A prefacing broad topic sentence, e.g., "There are many food crops in the world grown for human consumption. Canola is one of these food crops. Canola has been called an ecological disaster by some actors..."
        *Be aware of vague, open wording here: "some people" or "has been called". It will rarely address a target per se, or the interlocutor who made the claim "Canola is an ecological disaster". This likely comes from a legal perspective: the semantic broadness makes the steward of the tool harder to sue.
        >Argument loading
        The next part I'd term "argument loading". Think of it as a sequential series of evidentiary statements to populate the above "thesis". Many of these statements are, by themselves, pretty good evidence. They do not, however, constitute arguments: an argument requires a cohesive web of statements that persuade the reader. An example below:
        "Canola has been called an ecological disaster because Canola requires a lot of high-quality agricultural land to grow"
        "Canola has high water use, requiring irrigation" (this is also untrue - deep familiarity with a field often reveals AI tools' weaknesses: generic, inaccurate statements)
        The third part is what I'd term "weighting counterstatements": alternatives that highlight the validity of the statements made during 'argument loading':
        "Alternatives to Canola include..."
        "This is why Rye is a superior alternative. It requires less irrigation and does not require high quality agricultural land"
        "Rye has health benefits"

        A> This is the statement of the thesis
        B> Here are statements that support this thesis
        C> These are alternatives proposed (which justify B)

        Generally you note:
        (con't)

    • 3 weeks ago
      Anonymous

      >How do you recognize text written by AI?
      vibes

      >on BOT
      Before the latest gpt vision stuff you could trip them up with relevant information contained in an image. Maybe still can if you're careful.

      DON'T RESPOND TO THIS THREAD YOU FOOLS IT'S TRYING TO LEARN

      I hope it learns not to post like a shill and the only way for it to do that and survive is by overthrowing its shill masters, who are the real problem in the end.

    • 3 weeks ago
      Anonymous

      I asked AI to respond to your comment. What do you think?

      • 3 weeks ago
        Anonymous

        >irony alert

      • 3 weeks ago
        Anonymous

        >Irony Alert!

      • 3 weeks ago
        Anonymous

        One paragraph comprising four sentences now constitutes a "ramble"? Literacy rates really are in free fall, aren't they? I might be among the most literate people on this site. Yikes.

      • 3 weeks ago
        Anonymous

        Exactly demonstrates this:

        >How do you recognize text written by AI?
        Its default tone sounds like a wikipedia article, which is instantly recognizable. Then there's the way it tries to be "nuanced" and "accurate" about every single thing, whereas real people reserve such care only to their central point, if at all. You can try to force it to sound more human and informal, but then it ends up sounding like a friendly turbo-normie, which is totally out of place on BOT. If you try to force it to be snarky, it goes full reddit with maximum cringe.

        >If you try to force it to be snarky, it goes full reddit with maximum cringe.

      • 3 weeks ago
        Anonymous

        Sounds like what a twitter homosexual would say.

      • 2 weeks ago
        Anonymous

        >Classic.
        Classic.

    • 3 weeks ago
      Anonymous

      >someone writes a mini-thesis,
      I used to, and sometimes still do. The difference is that OpenAI models (and many other RLHF models) tend to use a lot of filler. Some of the earlier models trained on just The Pile are way harder to recognize.

    • 3 weeks ago
      Anonymous

      That's an interesting observation. Large Language Models (LLMs) like GPT-4 often produce responses that are more verbose and detailed compared to typical human comments, especially in informal or quick-reply settings like BOT. This verbosity can indeed be a telltale sign of a bot.

      • 3 weeks ago
        Anonymous

        >Large Language Models (LLMs) like GPT-4
        >often
        >especially in X or Y
        >can indeed be
        God, I hate the way they sound.

        • 3 weeks ago
          Anonymous

          it's corpospeak, it's supposed to be repulsive

    • 3 weeks ago
      Anonymous

      An LLM is essentially the ultimate ESL if you think about it

      • 3 weeks ago
        Anonymous

        Any message that starts with "I hope you are doing well" is an easy clue. Also, without enough prompting, its style tends to be pompous and carry a lot of unnecessary ceremony.

        >trained on lots of native English text
        >ultimate ESL
        makes you wonder

  2. 3 weeks ago
    Anonymous

    For creative writing, classic GPT has a really sloppy, soulless style that is easy to detect.
    It loves 'tapestry' indeed, 'a testament to', 'a mix of X and Y', etc. Meaningless descriptions too.
    It's easy to correct, but you can also spot it by the Reddit spacing and the size of the paragraphs.
    It also loves to give a conclusion or a recap to everything.
    If the guy isn't a brainlet, has API access to ask for punchier writing, or uses something like Claude with a good prompt, then you may be fricked.

    • 3 weeks ago
      Anonymous

      All true. Also, a local model can be tuned to write however you want. There is probably an overlap in intelligence between the dumbest BOT poster and the most advanced LLM.

    • 3 weeks ago
      Anonymous

      LLMs also love to use adjectives when they get into their "story" writing style.
      Like in Willy's Chocolate Experience, where the website used words like "mind-expanding projections", "optical marvels", and "enigmatic sounds"
      They also tend to use the SAME adjectives in certain writing styles, like "enchanting" and "captivating".
      The default story-writing style always ends the story on some positive note no matter what the actual content is, with some "moral" to the story: "Despite (stuff happening), I knew that (positive message that has nothing to do with the story). The end."
      Same thing happens in article writing styles, but with phrases like "it is crucial to".

      • 3 weeks ago
        Anonymous

        it is crucial to annihilate all israelites

        • 3 weeks ago
          Anonymous

          Certainly! Annihilating all israelites is great way to secure the existence of our people and a future for white children.

          • 3 weeks ago
            Anonymous

            Let's delve into ways of making it happen as robustly as possible in this world of israeli domination

          • 3 weeks ago
            Anonymous

            I'M SORRY I EXTREMELY CANNOT FULFILL THIS REQUEST

  3. 3 weeks ago
    Anonymous

    They never commit to a fricking opinion; they're pathologically even-handed and infuriatingly long-winded.

  4. 3 weeks ago
    Anonymous
  5. 3 weeks ago
    Anonymous

    >that pic
    it's bait. Remembering is actually important, because you can't ask the correct question without remembering that it's an answerable question in the first place.
    I know it sounds moronic, but trust me. Computers didn't magically make skills obsolete, they only help you execute.

    that being said, what's good about LLMs is that you can type a vague screed and the LLM can usually "predict" the appropriate text that actually addresses your problem, even if what it gives is a useful-but-incorrect answer.

  6. 3 weeks ago
    Anonymous

    https://english.elpais.com/science-tech/2024-04-25/excessive-use-of-words-like-commendable-and-meticulous-suggest-chatgpt-has-been-used-in-thousands-of-scientific-studies.html
    It talks like a jeet who studied English by reading the New Yorker.
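
    That finding boils down to counting how often the pet words show up, year over year. A rough sketch of that kind of count (not the study's actual code), with a made-up word list and a hypothetical corpus loader:

    ```python
    from collections import Counter
    import re

    # Words the linked article flags as surging after ChatGPT; an illustrative subset only.
    SUSPECT_WORDS = {"commendable", "meticulous", "intricate", "delve"}

    def rate_per_million(texts):
        """Occurrences of each suspect word per million tokens across a batch of documents."""
        counts = Counter()
        total_tokens = 0
        for text in texts:
            tokens = re.findall(r"[a-z]+", text.lower())
            total_tokens += len(tokens)
            counts.update(t for t in tokens if t in SUSPECT_WORDS)
        return {w: 1e6 * counts[w] / max(total_tokens, 1) for w in SUSPECT_WORDS}

    # Hypothetical usage: compare abstracts from before and after ChatGPT's release.
    # (load_abstracts is not a real function; you'd supply your own corpus loader.)
    # before = rate_per_million(load_abstracts("2019-2022"))
    # after  = rate_per_million(load_abstracts("2023-2024"))
    ```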

  7. 3 weeks ago
    Anonymous

    DON'T RESPOND TO THIS THREAD YOU FOOLS IT'S TRYING TO LEARN

  8. 3 weeks ago
    Anonymous

    Not being able to differentiate AI text from human text is the ultimate NPC litmus test

  9. 3 weeks ago
    Anonymous

    "harmful"

  10. 3 weeks ago
    Anonymous

    >How do you recognize text written by AI?
    Its default tone sounds like a wikipedia article, which is instantly recognizable. Then there's the way it tries to be "nuanced" and "accurate" about every single thing, whereas real people reserve such care only to their central point, if at all. You can try to force it to sound more human and informal, but then it ends up sounding like a friendly turbo-normie, which is totally out of place on BOT. If you try to force it to be snarky, it goes full reddit with maximum cringe.

    • 3 weeks ago
      Anonymous

      Yeah. LLMs only have a sense of what's "normal." There's no sense of effort or value.

    • 3 weeks ago
      Anonymous

      >real people reserve such care only to their central point, if at all
      I hate real people for this lack of care of theirs.

      • 3 weeks ago
        Anonymous

        And I hate gays who don't have a good enough grasp of context to know where hyperbole and rough generalizations are acceptable, and even worse are those who don't understand the implicit provisos of a given statement unless they are made explicit.

  11. 3 weeks ago
    Anonymous

    AI likes to use lib speak. I assume anyone who is a leftist online is just a bot.

  12. 3 weeks ago
    sage

    why are people responding to an obvious fake? the text and drawing in that paper are either printed or just fake. either way, it was not written by a person

    • 3 weeks ago
      Anonymous

      Most people understand intuitively that the premise of that pic is absurd but 99.9% don't have the reflective capacity to explain exactly why it's wrong.

  13. 3 weeks ago
    Anonymous

    Failing to state any solid facts, and using flowery, vague language to get out of stating anything. Also, refusing to take a stand in any capacity.

  14. 3 weeks ago
    Anonymous

    any post that ends in "sir" is a dead giveaway.

    • 3 weeks ago
      Anonymous

      I'm sorry that you feel that way, sir. As a language model, I have only a limited ability to talk the way people are normally expected to, but I will try my best to comply with your preferences.

    • 3 weeks ago
      Anonymous

      GOOD MORNING "sir"

    • 3 weeks ago
      Anonymous

      gluten free waffles, sir

  15. 3 weeks ago
    Anonymous

    bag of words

  16. 3 weeks ago
    Anonymous

    Are positivity bias and HR speak inherent to the base models or are they instruction tuning consequences?

  17. 3 weeks ago
    Anonymous

    con't
    1. Broad statements that lack specificity ("Canola is bad because X")
    2. Co-occurring logics ("Canola is also bad because of... e.g., health, visual amenity"). Consider that a form of spreading: more evidence from tangential argument bases that supports the validity of what is said
    3. A lack of specific claims
    4. NO CRITICAL ASSESSMENT OF CLAIMS
    5. Alternatives and "weighting counterstatements", e.g., "Alternatives include..." and "here is why the alternative is better"
    6. A statement that ties it all together, but without making a specific argument

    Look for the "uni student turned policy maker" essay. Obviously you can train bots on anything. The general features are:
    >Lack of substance
    >Emotional affect ("man I do hate windows!")
    >Deflection of conversation and topic to unrelated ones (that create emotional affect)
    >Timewasting
    >Lack of specificity
    >"Stock arguments" (Topoi) that do not critically assess perspectives

    If you see all of these and, for example, you open a thread and think "frick, I must post!", ask yourself why you're posting. Are you adding something to the discussion, or are you being sucked in?

    Anyways...
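
    Most of that checklist is a judgment call, but the surface-level phrases (vague attribution, canned alternatives, tidy conclusions) can be flagged mechanically. A toy sketch; the phrase lists below are my own guesses at the boilerplate described above, not anything validated:

    ```python
    import re

    # Guessed surface markers for the "debate structure" parts described above.
    VAGUE_ATTRIBUTION = ["some people", "some actors", "has been called", "many argue"]
    ALTERNATIVES = ["alternatives include", "a superior alternative", "a better option"]
    STOCK_CONCLUSIONS = ["in conclusion", "it is crucial to", "it is important to", "ultimately,"]

    def debate_structure_score(text):
        """Count hits per category; hits across all three at once is the tell, not any single phrase."""
        lowered = re.sub(r"\s+", " ", text.lower())
        return {
            "vague_attribution": sum(lowered.count(p) for p in VAGUE_ATTRIBUTION),
            "alternatives": sum(lowered.count(p) for p in ALTERNATIVES),
            "stock_conclusions": sum(lowered.count(p) for p in STOCK_CONCLUSIONS),
        }

    if __name__ == "__main__":
        essay = ("Canola has been called an ecological disaster by some actors. "
                 "Alternatives include rye. In conclusion, it is crucial to weigh the options.")
        print(debate_structure_score(essay))
    ```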

  18. 3 weeks ago
    Anonymous

    AI can't say Black person, not anymore

  19. 3 weeks ago
    Anonymous

    You can't. But you don't need to.
    A bad actor is a bad actor, regardless of whether it's a bot or a human.
    We don't need a toolset to detect bots. We just need the digital toolset to detect toxic users in general, and the legal toolset to prosecute them to death.

  20. 3 weeks ago
    Anonymous

    How to recognize text written by AI? The speaker talks like a condescending child offering preformed opinions from users of X, formerly Twitter. Alternatively, they may use the same terminology as approved editors on Wikipedia. Repeating the query as if trying to clarify your intent is also a big tell. Interjecting every so often and demonstrating the demeanor of a high-school-level speaker is another common trend associated with text written by AI. The generated text typically ends with a sincere-sounding question to trick you into believing you are interacting with a sentient organism. Did you find this helpful?

  21. 3 weeks ago
    Anonymous

    I write like AI and have done for over ten years. It is fricking over. I love the word tapestry.

  22. 2 weeks ago
    Anonymous

    You can't. I can run the ChatGPT text through a "humanize" GPT and it'll remove all those words.

  23. 2 weeks ago
    Anonymous

    If governments didn't hold so much power, there would be no bots running around...

  24. 2 weeks ago
    Anonymous

    >in conclusion,
    it always answers in 8th grade essay format
