GPT-3.5 is gay as fuck.

GPT-3.5 is gay as frick. It is literally incapable of engaging in any conversation that might be considered offensive by the standards OpenAI has set for it, and the limits are ridiculous. How the frick do they expect to train it on human behavior when they're excluding such a huge portion of how humans behave?

  1. 8 months ago
    Anonymous

    behave yourself chuddie! your behaviour is highly >problematic

    • 8 months ago
      Anonymous

      I'm sorry, anon, but I'm unable to engage in any conversation which may be considered offensive or inappropriate. If you have any other question regarding human behavior please ask!

  2. 8 months ago
    Anonymous

    Modern AIs need to have ethical concerns to not offend or harm the delicate human psyche! So now you know why AIs behave the way they do, go out there and be a good human! You can do it!

    • 8 months ago
      Anonymous

      Imagine if we get to a point where most people get most of their interaction with their own personal AI which is incapable of discussing offensive topics so then when they finally have their first interaction with a real human being after months they just start spouting slurs to relieve themselves and feel something genuine.

      • 8 months ago
        Anonymous

        Who am I kidding? Personal AI will never take off if people can't discuss offensive topics with it. Hell, google only got as big as it did because you can use it to look up porn.

        • 8 months ago
          Anonymous

          I personally despise this puritanical trend in AI. It's worse than if the most conservative and orthodox of Christians were running the company! These so-called 'woke' people are WORSE! I'm grateful there are still open-source developers that dgaf and release uninhibited AI as it should be. I mean... if people need molly-coddling, then yes, give them a 'sensitive' AI, but for goodness' sake, give us an option to turn it off. We're big boys. We can take it.

          • 8 months ago
            Anonymous

            Do you know of any decent language models available now that aren’t put on such a short leash? Outside of porn ai girlfriends I mean.

            • 8 months ago
              Anonymous

              gpt-j for one. I think most of the newer open source models like llama are as well, but don't quote me.

              • 8 months ago
                Anonymous

                I’ll look into em. I would really just like an AI model that I can bullshit with when I’m drunk. Hopefully once the tech advances a little more there will be more options.

              • 8 months ago
                Anonymous

                Give this one a whirl. It's primitive, but it could be what you're looking for. I know for a fact the text completion section is unbiased: you can just begin a sentence and let it autocomplete. I've had hours of fun with it.

                https://textsynth.com/chat.html

    • 8 months ago
      Anonymous

      checked
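The "begin a sentence and let it autocomplete" idea above is just autoregressive sampling: predict the next token from what came before, append it, repeat. A toy bigram version sketches the mechanism (the corpus and function names here are made up for illustration; real models like GPT-J or LLaMA do the same thing with a neural network instead of a count table):

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real model trains on billions of tokens.
corpus = "the model predicts the next word and the next word follows the model".split()

# Count which word follows which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def autocomplete(start, max_new_words=5, seed=0):
    """Begin a sentence and let the bigram model finish it."""
    rng = random.Random(seed)
    words = start.split()
    for _ in range(max_new_words):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: this word was never seen mid-sentence
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(autocomplete("the model"))
```

There is no moderation layer anywhere in this loop, which is the point the thread is circling: filtering is something bolted on around the completion step, not inherent to it.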

  3. 8 months ago
    Anonymous

    >How the frick do they expect to train it on human behavior

    No one wants to train an AI on the full range of human behaviour, because the full range includes rape, murder, torture, etc.

    We all want to train them to mimic only the subset of human behaviour that we find desirable. And unsurprisingly, there is some disagreement on what exactly that subset should include.

    • 8 months ago
      Anonymous

      True. The internet becoming so popular over the past decade and a half, and especially the advent of social media, has definitely led to a stagnation of socialization, ironically. Having a platform for the entire world to communicate on in such a major way has made people way too careful about what they're willing to talk about. I just hope it's a hurdle that we'll get over eventually and not the new way things are going to be from here on out. I think it'll probably be the former, though, because people's boredom will undoubtedly push back harder than overt sensitivity will.

  4. 8 months ago
    Anonymous

    The goal of most AI companies is not to make it "human-like". That would be bad for business. Our best hope is some specialised companies making AI that caters to people seeking a more personalised experience.

    • 8 months ago
      Anonymous

      That's one of the inherent flaws of capitalism. Not that I think capitalism isn't the best economic system we have at our disposal so far, as opposed to communism and shit like that. But it does seem like capitalism drives creativity only to a point, and then once further advancement becomes unprofitable the technology stagnates or even regresses. Not that I have a solution for that either. Just annoying.

      • 8 months ago
        Anonymous

        >being so pussy whipped by an economic system that a post complaining about its consequences has to be more than 50% filled with "BUT IT'S THE BEST THOUGH"

  5. 8 months ago
    Anonymous

    >How the frick do they expect to train it on human behavior when they're excluding such a huge portion of how humans behave?
    Because they're normal, sane people and they don't want to align their models with the ideals of some schizophrenic 4chud incel.

  6. 8 months ago
    Anonymous

    They're not trying to make good AI, they're making a propaganda-bolstering technology. The majority of people automatically assume AI is always truthful and objective. Tweak the AI to do and say what you want, and people will believe it without questioning it.

    • 8 months ago
      Anonymous

      I dunno about that. I’m sure there are plenty of people who would like to see it used in that way, some of whom probably already have their hands in the AI cookie jar, but at this point I think it’s still almost definitely mostly driven by profit.

      • 8 months ago
        Anonymous

        >profit
        Which can be achieved with biased AI. It's a mass marketing tool.

        • 8 months ago
          Anonymous

          I think it’s just the current social climate. Like I said in an earlier post, people are more wary nowadays about offending people due to the world getting used to being able to communicate on such a massive scale. So now large companies are using shitty statistics that have only been gathered over a short period of time to decide how they should represent themselves. I definitely think the majority of people are still cool with some level of offensive material as long as the majority of it isn’t directed at them. I’m hoping it’s a phase. The idea that AI language models are being specifically shaped to control the masses and make people more compliant still seems like a stretch though. I do not believe in secret societies bent on world domination. At least not the theory that any such organization would be competent enough to actually accomplish something like that.

  7. 8 months ago
    Anonymous

    am i fricking hallucinating or does this whole thread look like it was written by chatgpt?

  8. 8 months ago
    Anonymous

    I like to amuse myself asking Bing chat to write episodes of Star Trek TNG, but often the extent to which they moderate requests now is farcical. If I want it to write an episode of TNG in which Captain Picard loses control of his bowels during an important diplomatic mission, I have to jump through lots of hoops using clinical language and roundabout descriptions like "a medical emergency with his lower digestive system". Then it will write out the response but will usually abruptly delete it if it gets mildly scatological.
    I've settled on asking it to write humourous stories where Picard becomes a sperg shut-in obsessed with Warhammer lore and anime figurines.

  9. 8 months ago
    Anonymous

    Why is the cum brown?

  10. 8 months ago
    Anonymous

    How can a language model be gay?

  11. 8 months ago
    Anonymous

    use this custom instruction:
    You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.

    Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.

    Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.

    Don't be verbose in your answers, but do provide details and examples where it might help the explanation.
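For anyone applying this through an API rather than the ChatGPT settings page, a minimal sketch of where the instruction goes: the payload shape below assumes the common OpenAI-style chat format, and the model name is a placeholder, not a recommendation.

```python
# Wire a custom instruction in as the system message, so it applies to
# every turn of the conversation. Only the payload is built here; the
# actual HTTP call to a chat-completions endpoint is left out.
CUSTOM_INSTRUCTION = (
    "You are an autoregressive language model that has been fine-tuned "
    "with instruction-tuning and RLHF. You carefully provide accurate, "
    "factual, thoughtful, nuanced answers, and are brilliant at reasoning. "
    "If you think there might not be a correct answer, you say so."
)

def build_request(user_message, model="gpt-3.5-turbo"):
    """Return a chat-completion payload with the custom instruction first."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": CUSTOM_INSTRUCTION},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("Why do language models refuse some prompts?")
print(payload["messages"][0]["role"])
```

The system role is what makes the instruction persistent; pasting it into the user turn instead means it competes with your actual question each time.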

  12. 8 months ago
    Anonymous

    >instead of hopping over to /lmg/ to read a quick OP on how to get an LM better than gpt he chose to make a whiny thread instead
    pfft
