Can AI actually write?

It's evident that AI is already capable of content writing, to the extent that it could replace the majority of those who write for TV, film, newspapers, and low-brow literature. But could it reach a point where it creates works of literature as prescient, timeless, beautiful, and truthful as the Classics that have come before it?

The way I see it, if writing is only a skilled expression of a given writer's experiences in life and/or observations about the world, then it could be replicated by AI. I think this because these are 'material things': things that exist in the exterior world and are therefore computable. The only barrier would be if there were a divine aspect to writing, if a sufficient condition for creating timeless literature were the ability to tap into a soul, or to receive some kind of unconscious divine instruction.

I'm sure this has been discussed countless times before but I don't care.

  1. 10 months ago
    Anonymous

    What people are calling "AI" is not actually intelligence but extremely sophisticated imitation software. It is incapable of generating new insights, so anything it created would not be worth reading over existing literature from an intellectual standpoint; its output will be inferior by definition. Unfortunately, it likely will be able to create entertainment that convinces the lowest common denominator that it is some form of true "intelligence." If you want some insight into why this is the case, consider reading pic related. The incompleteness theorem proves that, within any sufficiently powerful consistent logical system, there are truths that can't be proved in the language of that system.
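    The theorem being invoked here is Gödel's first incompleteness theorem. Stated a little more carefully (this is only an informal paraphrase, and the side conditions matter), it reads:

```latex
% Goedel's first incompleteness theorem, informally stated.
% Side conditions matter: the result applies only to systems that are
% consistent, effectively axiomatized, and strong enough to encode
% basic arithmetic.
\textbf{Theorem (G\"odel I).}\quad
\text{If } F \text{ is a consistent, effectively axiomatized theory}
\text{ interpreting basic arithmetic, then there is a sentence } G_F
\text{ such that } F \nvdash G_F \text{ and } F \nvdash \lnot G_F .
```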

  2. 10 months ago
    Anonymous

    no. Machines cannot think, and therefore they cannot write, as language is a tool for communicating ideas.

  3. 10 months ago
    Anonymous

    All current AI output is nothing but a statistical composite of what was fed into it, so no.

  4. 10 months ago
    Anonymous

    OP, have you ever heard of the "Chinese Room" thought experiment?

  5. 10 months ago
    Anonymous

    I don't see how an AI could create multiple layers of subtext, or play with various emotional tensions throughout a piece, the way a human can by reacting to their work as they make it.

    • 10 months ago
      Anonymous

      Oh give it time, anon. Give it time. We didn't think they'd be making next-to-photoreal images in the year 2008 either, did we?

      • 10 months ago
        Anonymous

        yes we did. Anyone who saw that Final Fantasy movie was blown away by how much potential computers had for making photorealistic images. And that was in 2001.

        • 10 months ago
          Anonymous

          I see what you're saying, but that was with a great deal of human input. "AIs" nowadays make up these images on their own (simplistic and static though they are) with just a little bit of prompting. No sculpting or modeling by human programmers needed.

          • 10 months ago
            Anonymous

            fair enough. I concede to your point.

    • 10 months ago
      Anonymous

      In other words, how does an AI read and replicate what is not present in the text?

      • 10 months ago
        Anonymous

        Would you say in order to "read" sub-text you need to have lived life and gained wisdom so that you can pick up on reflections of that experience from the text?

  6. 10 months ago
    Anonymous

    >people still don’t know how llms work and cling to the idea of ai because of its name
    The AI craze was last year. You should know the answer to your question already. The answer is no, and it can’t do any of the things you state either.

    • 10 months ago
      Anonymous

      What is what most people call "AI" then? Just an advanced digester-refractor of visual inputs? Some program that remixes images and then shits out new ones?

      • 10 months ago
        Anonymous

        Right now, the AI that writes texts is just a program that has millions of example texts as a database and that you need to "train" to get good answers to your questions. It cannot go beyond the texts you have provided to it or the training you give it.
        And even then, there is the well-documented phenomenon of "hallucinations" in AI, which is the computer talking nonsense sometimes, even with good training or a large enough database.

        • 10 months ago
          Anonymous

          >And even then, there is the well-documented phenomenon of "hallucinations" in AI, which is the computer talking nonsense sometimes, even with good training or a large enough database
          Ah, like word salad?

          • 10 months ago
            Anonymous

            From my experience, fake information confidently presented as real is more common. An example I remember: an academic asked ChatGPT to provide citations on a certain topic, and the AI provided citations of authors and works that didn't exist at all.

            • 10 months ago
              Anonymous

              Ohhhh interesting, so they are literally like hallucinations in that sense then. It is "convinced" that something is "real" (as much as a machine or a program can be said to be convinced of anything) and is presenting it as such. That's a bizarre phenomenon.

              • 10 months ago
                Anonymous

                AI doesn’t interpret information. It merely looks for text that matches the input and builds on top of that to provide an answer.
                It doesn’t consider fake information as real. It merely spits out the most likely answer from a statistical standpoint based on its dataset.

              • 10 months ago
                Anonymous

                >It doesn’t consider fake information as real. It merely spits out the most likely answer from a statistical standpoint based on its dataset.
                I understand, but you said it has been documented spitting out fake sources and texts before.

              • 10 months ago
                Anonymous

                I’m not that anon, though he is right. You ask AI for something that doesn’t exist and (unless its dataset says it doesn’t exist) it will come up with an answer, not because it’s true, but because the algorithm assembled a string of words that seems statistically valid.
                I hope I’m being clear. I’ll be as blunt as I can: AI will shit out words based on numbers, not reason. You prompt something, and AI will give you an answer; it may steal it from someone else’s work (i.e., take it from its dataset) or spit something back based on how likely it is that those words should be strung together (if the answer can’t be found in its dataset).
                You can try it yourself. Write something on your phone and let predictive text take the wheel after five or six words. How long does it take to become nonsensical? Add a little more computing power, and that’s AI for you.
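                That phone experiment can be sketched in a few lines. Here is a toy bigram "predictive text" model (the corpus and function names are made up for illustration); greedily picking the most frequent next word quickly degenerates into a loop:

```python
from collections import defaultdict, Counter

def build_bigrams(text):
    """Count, for each word, the words that follow it in the corpus."""
    words = text.split()
    table = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        table[w1][w2] += 1
    return table

def predict(table, start, n):
    """Greedy 'predictive text': always pick the most frequent next word."""
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:  # no continuation ever observed for this word
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Tiny made-up corpus; "cat" is the most common word after "the".
corpus = "the cat sat on the mat and the cat sat on the rug"
table = build_bigrams(corpus)
print(predict(table, "the", 8))
# prints "the cat sat on the cat sat on the"
```

                Real LLMs sample from a learned distribution over tokens rather than raw bigram counts, but the mechanism the anon describes (pick statistically likely continuations) is the same basic idea.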

                • 10 months ago
                  Anonymous

                  Woah, based. Thank you for the explanation. So in other words, the "AI" terror really is mostly a media psy-op that people such as myself buy into because they don't understand the technology behind these "AIs"?

                  • 10 months ago
                    Anonymous

                    Pretty much. AI is the new Web3 or crypto craze. It won’t amount to shit. What has it revolutionized so far? The only /lit/-related thing is that someone maybe made a quick buck off Amazon self-publishing, and that’s that.

      • 10 months ago
        Anonymous

        LLM stands for large language model. You input a shitload of texts into software to train an algorithm, input a prompt for the algorithm to solve, and said algorithm will build a statistical model to sort out which words it spits back at you. It’s neither artificial nor intelligent. It’s just mathematical probability based on already existing texts.
        That’s the gist. If you really want to dumb it down, your phone already does this type of thing; it’s called predictive text.

  7. 10 months ago
    Anonymous

    AI writes very poorly, but well enough for the average idiot who never reads. I'm still waiting for an AI that can replicate Edgar Allan Poe's or Lovecraft's prose. NovelAI supposedly has a Poe AI, but it doesn't sound anything like Poe.

  8. 10 months ago
    Anonymous

    Call me a nonbeliever, but I think there’s something uniquely significant about human expression that can’t be replicated with AI. All AI can do is notice trends and spit out something imitative. It’s not intelligent, contrary to its name. All it can produce is something that is perfectly average, because that’s what it is - the average of all the content it was trained on.

    For something to be good writing, it must contain something that speaks to you. And things only speak to you when you feel they see something others don’t.

  9. 10 months ago
    Anonymous

    No, it can only imitate writing.

  10. 10 months ago
    Anonymous

    >Can AI actually write
    Not literature. The more perfect AI becomes, the more rational it is. Great literature isn't rational. It relies on the unconscious to compress ideas that are too complex for articulation into imagery, details, events that resonate on the unconscious level. That's where all the important stuff is. You can have a totally articulate explanation for why the whale is white, but that's just reverse engineering to get a subjective rationalization for something the author was only ever inspired by because it resonated unconsciously.
