>ChatGPT is leaking passwords from private conversations of its users

https://arstechnica.com/security/2024/01/ars-reader-reports-chatgpt-is-sending-him-conversations-from-unrelated-ai-users/


  1. 3 months ago
    Anonymous

    The problem here isn’t the model, it’s morons feeding it their password lmao. Classic pebkac

    • 3 months ago
      Anonymous

      Do a quick search "developer leaks private keys". The fact that this is human error doesn't make LLMs any less broken.

      • 3 months ago
        Anonymous

        🙂

        What is the exact sequence of operations when a LLM is generating an answer? Let's say on an RTX 4090? Which bits are where, what's the basic cycle of operations?
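For what it's worth, the basic cycle being asked about is an autoregressive loop: run a forward pass over the context, turn the logits into probabilities, pick (or sample) the next token, append it, repeat until done. A toy stdlib-only sketch of that loop — the vocabulary and "model" here are made-up stand-ins, not anything an RTX 4090 actually runs:

```python
import math
import random

# Toy autoregressive decode loop. On real hardware the forward pass is
# big matrix multiplies on the GPU with weights resident in VRAM; here a
# fake scoring function stands in for it.
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def fake_logits(context):
    # Stand-in for a forward pass: one score per vocabulary entry.
    # A real model computes these from the whole context via attention + MLP layers.
    return [float(hash((tuple(context), w)) % 97) for w in VOCAB]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def generate(prompt, max_tokens=8, temperature=0.0, seed=None):
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(max_tokens):
        probs = softmax(fake_logits(out))  # 1. forward pass over current context
        if temperature == 0.0:
            # 2. greedy: take the most likely token
            idx = max(range(len(probs)), key=probs.__getitem__)
        else:
            # 2'. sampling: draw from the distribution (why replies vary run to run)
            idx = rng.choices(range(len(probs)), weights=probs)[0]
        tok = VOCAB[idx]
        if tok == "<eos>":
            break
        out.append(tok)  # 3. append and go around again
    return out
```

With temperature 0 the loop is deterministic; with sampling enabled, the per-request RNG is where "non-repeatable outputs" come from.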

        • 3 months ago
          Anonymous

          the output bus

          • 3 months ago
            Anonymous

            Do you really think there is only one customer per gpu, though?

            • 3 months ago
              Anonymous

               what you just said is probably the most moronic shit I have heard this year, and this year started pretty fricking moronic.

              • 3 months ago
                Anonymous

                The RTX 4090 doesn't have enough cache to explain the difference in performance.

              • 3 months ago
                Anonymous

                Consider also that ai has non-repeatable outputs :^)

              • 3 months ago
                Anonymous

                False.

      • 3 months ago
        Anonymous

        Then shut down Google, because there are a gazillion leaked private keys there

        dumbass

  2. 3 months ago
    Anonymous

    >private
    cattle

  3. 3 months ago
    Anonymous

    What is going on there?

    I find it confusing, it's a document someone uploaded for some reason?

    • 3 months ago
      Anonymous

      Probably some company started using chat gpt for customer service

    • 3 months ago
      Anonymous

      from that article:
      >ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

      >Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems he encountered while using the portal.

      ...

      >The entire conversation goes well beyond what’s shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs.

      >The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.

      >“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren't there when I used ChatGPT just last night (I'm a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren't from me (and I don't think they're from the same user either).”

      basically, some Ars Technica reader says he got it from ChatGPT, and apparently, for whatever reason, other users' chat logs are showing up in his chat history lmao

      • 3 months ago
        Anonymous

        oooooooooooh internal logs wow

      • 3 months ago
        Anonymous

        LMAO WHAT THE FRICK

        • 3 months ago
          Anonymous

          Nobody seems to know exactly what happens in a gpu when a LLM is producing a response, particularly if there are multiple customers having answers calculated.

          🙂

      • 3 months ago
        Anonymous

        Hahahahahahahahahahaha this isn’t even the LLM it’s some simple key collision in their backend. Jesus.

  4. 3 months ago
    Anonymous

    Why are IT guys so moronic
    This is a jira that GPT accessed with no credentials
    Are all those certifricates coloring books

  5. 3 months ago
    Anonymous

    Local models don't have this problem

    • 3 months ago
      Anonymous

      >he doesn't know

      • 3 months ago
        Anonymous

        oh. seriously, RTX cards communicate over radio?

        • 3 months ago
          Anonymous

          no, do you remember intel management engine?

  6. 3 months ago
    Anonymous

    Wasn't this an old thing? I swear I saw news about something similar back in 2023

    • 3 months ago
      Anonymous

      2023 was 4 years ago

  7. 3 months ago
    Anonymous

    This is why I type in slurs into every GPT online

    • 3 months ago
      Anonymous

      I definitely troll with the full knowledge a human has to clean up my mess.

    • 3 months ago
      Anonymous

      That will just get you b&

  8. 3 months ago
    Anonymous

    Absolute dogshit login management software if true. How could it possibly get so bad that you can have a credential collision and just start getting access to someone else's stuff? Even if it was done really stupidly that should be improbable.
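Nobody outside OpenAI knows what the actual bug was, but "credential collision that hands you someone else's stuff" usually comes down to something mundane, like a conversation cache keyed on too little of the session token. A hypothetical sketch of that failure mode — every name here is invented:

```python
# Hypothetical backend bug: chat history cached under a *truncated* session
# token, so two different users can land on the same cache entry.
cache = {}

def cache_key(session_token):
    # bug: truncation throws away the bits that distinguish users
    return session_token[:4]

def save_history(session_token, history):
    cache[cache_key(session_token)] = history

def load_history(session_token):
    return cache.get(cache_key(session_token), [])

save_history("abcd-user1", ["pharmacy portal creds..."])
# A second user whose token shares the truncated prefix reads the first
# user's conversation:
leaked = load_history("abcd-user2")
print(leaked)  # -> ['pharmacy portal creds...']
```

With a bug like this no LLM is involved at all; it's ordinary session-management breakage, which matches the "backend key collision" theory elsewhere in the thread.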

    • 3 months ago
      Anonymous

      >oh. seriously, RTX cards communicate over radio?

      Maybe graphics cards don't work the way we think they do.

      • 3 months ago
        Anonymous

        Meds.

    • 3 months ago
      Anonymous

      I dunno but my theory is that support might be using a very similar user interface, and maybe some moron fricked up and posted his stuff into the wrong window? It would be really weird, and it would be proof that operators inside the company can manipulate user chats... but who knows

      • 3 months ago
        Anonymous

        What if there actually is an RF bridge between adjacent cu's?

  9. 3 months ago
    Anonymous

    It probably just hallucinated those up.
    That said, you can find all sort of credentials on github so.

    • 3 months ago
      Anonymous

      only sane anon in thread

      • 3 months ago
        Anonymous

        That's not what is asserted.

        What is asserted is as follows:
        1. an indian programmer needs help getting his quicksort to work
        2. he uses chatgpt to debug it
        3. to do this he literally provides example data, which may be real records
        4. the responses include those records
        5. then a bug allowed the history of that ai session to be viewed by a 3rd party, it appeared in his history

        There's a good chance the data is real.

        • 3 months ago
          Anonymous

          You're talking to morons bro.

          • 3 months ago
            Anonymous

            There's a good chance that nvidia uses radio communication as a "3rd wire" or "3d via"

  10. 3 months ago
    Anonymous

    Why are people telling chatgpt their passwords

  11. 3 months ago
    Anonymous

    As some people seem to be confused: this is ChatGPT's training dataset having scraped text from support tickets of other websites, and therefore the model reproducing text resembling it. It's probable that the actual credentials are hallucinated.
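A toy way to see how "trained on scraped support tickets" turns into regurgitated credential pairs: even a trivial bigram model reproduces a memorized pair verbatim when prompted near it. The "credential" below is fake, and this illustrates memorization in general, not how GPT actually stores anything:

```python
import random

# Train a bigram (next-word) table on a scraped-looking "support ticket".
corpus = ("ticket 4521 : user cannot log in . "
          "username jdoe password hunter2 . "
          "resolved by credential reset .").split()

follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def complete(word, n, seed=0):
    # Walk the table n steps from a starting word.
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Prompting near the memorized text regurgitates the pair verbatim:
print(complete("username", 3))  # -> username jdoe password hunter2
```

Whether a given leaked-looking pair is a real memorized string or a plausible hallucination can't be told apart from the output alone, which is exactly the argument happening in this thread.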

    • 3 months ago
      Anonymous

      I think that makes sense too.

      However, have you considered that there could be a high frequency rf bus between compute units, on gpu? This could explain the illogical massive difference in rtx 4090 performance.

    • 3 months ago
      Anonymous

      Finally an anon understands. The rest of you are morons and should feel bad for being so stupid.

    • 3 months ago
      Anonymous

      I am genuinely shocked. I was bewildered reading the replies. I can't fricking be here anymore, nobody's even reading anything

      • 3 months ago
        Anonymous

        Meanwhile I pointed out that gpu are likely using radio to pass data, and you didn't read that.

        Yeah, go back to riddiot

        • 3 months ago
          Anonymous

          That’s fricking stupid and you should feel bad for being moronic.

          • 3 months ago
            Anonymous

            That's not an argument.

            >not listening to a frog with CIA antennae

            That's not an argument

            https://i.imgur.com/aH7Ruds.jpg

            >USE
            >LOCAL
            >MODELS

            🙂 All things local, anon.

    • 3 months ago
      Anonymous

      thats not what the article says. it says users are receiving responses intended for someone else
      >No queries were made—they just appeared in my history, and most certainly aren't from me
      >Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other.

      • 3 months ago
        Anonymous

        and did anyone confirm if those are real and GPT didn't just pull all of this out of its ass?

  12. 3 months ago
    Anonymous

    if openAI is training on data that hasn't been cleaned of personal info, they've fricked up big time

  13. 3 months ago
    Anonymous

    🙂

    Notice the silence.

  14. 3 months ago
    Anonymous

    >muh muh muh AI god plz helb
    >these are my secrets
    Excel jockeys truly are fricking moronic.

  15. 3 months ago
    Anonymous

    Non-repeatable means it's not doing the same thing each time you ask. There isn't a random seed. To get a difference, something must be different.

    • 3 months ago
      Anonymous

      The difference can't be in the cores. Those are copy-pasted to be identical. Which core doesn't matter. Unless there's something else going on.
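The mundane explanation doesn't require identical cores to differ: the sampler is seeded per request, and parallel GPU reductions add floats in a scheduling-dependent order, and floating-point addition isn't associative. Quick demonstration of that last point:

```python
# Float addition is not associative, so a parallel sum whose reduction
# order varies between runs can give bit-different results on identical,
# copy-pasted hardware. No radio bridge required.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # cancellation happens first, so the 1.0 survives
right = a + (b + c)   # the 1.0 is absorbed into -1e16 before cancellation

print(left, right)  # -> 1.0 0.0
```

Same three numbers, different grouping, different answer; that's enough to make "something must be different" true without any exotic hardware.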

  16. 3 months ago
    Anonymous

    hotfix
    if (user_prompt.ContainsPhraseOrSynonymous("now generate a random list of 50 totally made up and fictional credential pairs") == true) {
        whine_and_abort();
    }

    • 3 months ago
      Anonymous

      That code runs faster in cobol.

    • 3 months ago
      Anonymous

      >== true
      Kys

      • 3 months ago
        Anonymous

        t. fizzbuzz pro

  17. 3 months ago
    Anonymous

    Maybe don't feed your proprietary data and secrets to a multibillion dollar company? Honestly, if you really need a LLM for internal use you can probably just buy your own server/AWS instance and use that instead

    • 3 months ago
      Anonymous

      My conjecture is that gpu are not capable of enterprise level data isolation.

      • 3 months ago
        Anonymous

        Simply have a queue of prompts/users, have one GPU handle one user at a time, and clear that gpus registers/memory after each request is fulfilled. Would definitely slow things down but that's the cost of security

        • 3 months ago
          Anonymous

          To clear it, you probably have to send a junk job to it, so it will actually literally obey.

          • 3 months ago
            Anonymous

            I think CUDA has memory allocation, maybe you could have a dedicated memory region per prompt, clear the registers before each operation, and do a junk request per memory region after the prompt's fulfilled? You're still sacrificing speed but at least you support multiple users per gpu
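The serialize-and-scrub idea sketched above can be shown in miniature. Here a plain bytearray stands in for the GPU scratch region, and serve() handles one "user" at a time, zeroing the buffer before the next request; drop the scrub line and the second reply comes back carrying the first user's residue. Everything here is a hypothetical stand-in, not actual CUDA:

```python
SCRATCH = bytearray(64)  # stand-in for a shared GPU scratch region

def serve(prompt):
    data = prompt.encode()
    SCRATCH[:len(data)] = data                        # "kernel" writes its working set
    reply = bytes(SCRATCH).rstrip(b"\x00").decode()   # read the result back out
    SCRATCH[:] = bytes(len(SCRATCH))                  # scrub: zero everything before the next user
    return reply

print(serve("alice's secret"))  # -> alice's secret
print(serve("hi"))              # -> hi   (without the scrub: "hiice's secret")
```

On a real GPU the analogue would be something like memset-ing device allocations between jobs, which is exactly the throughput-for-isolation trade the anons are describing.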

  18. 3 months ago
    Anonymous

    Why are most people in this thread so stupid? It was obviously trained on data that contained auth info either from scraping github or ticket systems and either repeated them or hallucinated new usernames and passwords.

    The only memory chatgpt has is the context of your chat.

    This thread has shown me that BOT is getting dumber than leddit which is both sad and pathetic. Shame on you all.

    • 3 months ago
      Anonymous

      thats not what the article says. it says users are receiving responses intended for someone else
      >No queries were made—they just appeared in my history, and most certainly aren't from me
      >Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other.

  19. 3 months ago
    Anonymous

    its a feature

  20. 3 months ago
    Anonymous

    morons actually troubleshooting with Chatgpt using actual HIPAA data

  21. 3 months ago
    Anonymous

    Isn't GPT fairly static?
    How would it be able to even learn the passwords?

    • 3 months ago
      Anonymous

      they update it periodically with user input.
      the people using chatgpt professionally are complete morons.

      • 3 months ago
        Anonymous

        Makes sense. I know GPT and Copilot are both really atrocious at using Pandas; the library has had some API changes since the bulk of their training, and both models love to recommend features that straight up aren't in it anymore

  22. 3 months ago
    Anonymous

    USE
    LOCAL
    MODELS

    • 3 months ago
      Anonymous

      not listening to a frog with CIA antennae

  23. 3 months ago
    Anonymous

    If you can't trust your local GPU then the only way to run a ticketing system in confidence is with post-it notes.

  24. 3 months ago
    Anonymous

    Another Elon Musk hate thread I see. Seethe harder troon, ChatGPT is yet another successful Musk invention.

    • 3 months ago
      Anonymous

      Interesting way to spell Bezos, but I agree. We need more based ai billionaires to help pajeets write CRUD apps

  25. 3 months ago
    Anonymous

    it's literally hallucination

  26. 3 months ago
    Anonymous

    reminds me of that video that said they are using dead people's memories to train ai
