https://arstechnica.com/security/2024/01/ars-reader-reports-chatgpt-is-sending-him-conversations-from-unrelated-ai-users/
The problem here isn’t the model, it’s morons feeding it their password lmao. Classic pebkac
Do a quick search "developer leaks private keys". The fact that this is human error doesn't make LLMs any less broken.
🙂
What is the exact sequence of operations when a LLM is generating an answer? Let's say on an RTX 4090? Which bits are where, what's the basic cycle of operations?
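Roughly: the weights sit in VRAM, each output token is one forward pass (read weights + KV cache, compute logits, sample, append the token, feed it back in), repeat until an end token. Here's a toy sketch of that decode loop, with a dict bigram standing in for the transformer forward pass (obviously not real CUDA, just the shape of the cycle):

```python
import random

# Toy stand-in for a language model: maps last token -> next-token probabilities.
# On a real card this lookup is a transformer forward pass over VRAM-resident weights.
BIGRAM = {
    "<s>":   {"the": 0.6, "a": 0.4},
    "the":   {"gpu": 0.7, "model": 0.3},
    "a":     {"gpu": 0.5, "model": 0.5},
    "gpu":   {"</s>": 1.0},
    "model": {"</s>": 1.0},
}

def generate(seed=None, max_tokens=10):
    """Autoregressive decode loop: one 'forward pass' per output token."""
    rng = random.Random(seed)
    tokens = ["<s>"]
    for _ in range(max_tokens):
        probs = BIGRAM[tokens[-1]]                       # "forward pass": next-token distribution
        choices, weights = zip(*probs.items())
        nxt = rng.choices(choices, weights=weights)[0]   # sampling step
        tokens.append(nxt)                               # feed the token back in
        if nxt == "</s>":
            break
    return tokens
```

Same seed, same output; that's the whole cycle, there's no mystery bus involved.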
the output bus
Do you really think there is only one customer per gpu, though?
what you just said is probably the most moronic shit I have heard this year, and this year started pretty fricking moronic.
The RTX 4090 doesn't have enough cache to explain the difference in performance.
Consider also that ai has non-repeatable outputs :^)
False.
Then shut down Google, because there's a gazillion of leaked private keys there
dumbass
>private
cattle
What is going on there?
I find it confusing, it's a document someone uploaded for some reason?
Probably some company started using chat gpt for customer service
from that article:
>ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.
>Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.
...
>The entire conversation goes well beyond what’s shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs.
>The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.
>“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren't there when I used ChatGPT just last night (I'm a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren't from me (and I don't think they're from the same user either).”
basically, some Ars Technica reader says he got it from ChatGPT, and apparently, for whatever reason, other people's internal support chats are showing up in his chat history lmao
oooooooooooh internal logs wow
LMAO WHAT THE FRICK
Nobody seems to know exactly what happens in a gpu when a LLM is producing a response, particularly if there are multiple customers having answers calculated.
🙂
Hahahahahahahahahahaha this isn’t even the LLM it’s some simple key collision in their backend. Jesus.
Why are IT guys so moronic
This is a jira that GPT accessed with no credentials
Are all those certifricates coloring books
Local models don't have this problem
>he doesn't know
oh. seriously, RTX cards communicate over radio?
no, do you remember intel management engine?
Wasn't this an old thing? I swear I saw news about something similar back in 2023
2023 was 4 years ago
This is why I type in slurs into every GPT online
I definitely troll with the full knowledge a human has to clean up my mess.
That will just get you b&
Absolute dogshit login management software if true. How could it possibly get so bad that you can have a credential collision and just start getting access to someone else's stuff? Even if it was done really stupidly that should be improbable.
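It would be dumb, but it's not hard to do: cache conversation history under a session key that's too short and collisions become guaranteed, not improbable. Toy sketch of a hypothetical buggy backend (all names here are made up, nobody outside OpenAI knows what their real one does): a key truncated to 1 byte gives you 256 buckets, so pigeonhole says 300 users must collide somewhere.

```python
import hashlib

# Hypothetical buggy backend: conversation history cached under a session key
# truncated to a single byte, i.e. only 256 possible keys.
cache = {}

def bad_session_key(user_id: str) -> int:
    return hashlib.sha256(user_id.encode()).digest()[0]  # 256 buckets!

def save_history(user_id, history):
    cache[bad_session_key(user_id)] = history

def load_history(user_id):
    return cache.get(bad_session_key(user_id))

def find_collision(n=300):
    # Pigeonhole principle: 300 distinct users into 256 buckets
    # guarantees at least two share a session key.
    seen = {}
    for i in range(n):
        uid = f"user_{i:04d}"
        k = bad_session_key(uid)
        if k in seen:
            return seen[k], uid  # two different users, same key
        seen[k] = uid
```

Once two users share a key, `load_history` hands one of them the other's chat, credentials and all. Real backends use long random session tokens precisely so this can't happen.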
Maybe graphics cards don't work the way we think they do.
Meds.
I dunno but my theory is that support might be using a very similar user interface, and maybe some moron fricked up and posted his stuff into the wrong window? it would be really weird, and it would be proof that operators inside the company can manipulate user chats... but who knows
What if there actually is an RF bridge between adjacent cu's?
It probably just hallucinated those up.
That said, you can find all sort of credentials on github so.
only sane anon in thread
That's not what is asserted.
What is asserted is as follows:
1. an indian programmer needs help getting his quicksort to work
2. he uses chatgpt to debug it
3. to do this he literally provides example data, which may be real records
4. the responses include those records
5. then a bug allowed the history of that ai session to be viewed by a 3rd party: it appeared in his history
There's a good chance the data is real.
You're talking to morons bro.
There's a good chance that nvidia uses radio communication as a "3rd wire" or "3d via"
Why are people telling chatgpt their passwords
As some people seem to be confused: this is ChatGPT's training dataset having scraped text from support tickets of other websites, and therefore the model reproducing text resembling it. It's probable that the actual credentials are hallucinated.
I think that makes sense too.
However, have you considered that there could be a high frequency rf bus between compute units, on gpu? This could explain the illogical massive difference in rtx 4090 performance.
Finally an anon understands. The rest of you are morons and should feel bad for being so stupid.
I am genuinely shocked. I was bewildered reading the replies. I can't fricking be here anymore, nobody's even reading anything
Meanwhile I pointed out that gpu are likely using radio to pass data, and you didn't read that.
Yeah, go back to riddiot
That’s fricking stupid and you should feel bad for being moronic.
That's not an argument.
That's not an argument
🙂 All things local, anon.
thats not what the article says. it says users are receiving responses intended for someone else
>No queries were made—they just appeared in my history, and most certainly aren't from me
>Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other.
and did anyone confirm if those are real and GPT didn't just pull all of this out of its ass?
if openAI is training on data that hasn't been cleaned of personal info they've fricked up bigtime
🙂
Notice the silence.
>muh muh muh AI god plz helb
>these are my secrets
Excel jockeys truly are fricking moronic.
Non-repeatable means it's not doing the same thing each time you ask. There isn't a random seed. To get a difference, something must be different.
The difference can't be in the cores. Those are copy-pasted to be identical. Which core doesn't matter. Unless there's something else going on.
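Except sampled decoding does take a random seed; that's the whole source of the non-repeatability. Pin the seed and the "answer" repeats exactly; leave it unseeded and every call differs. Toy sketch (made-up vocab, obviously not a real model), no RF bus required:

```python
import random

def sample_reply(seed=None, n=5):
    """Toy sampled decoding: each call draws n 'tokens' at random.
    Same seed -> identical output; no seed -> different each call."""
    rng = random.Random(seed)            # unseeded falls back to OS entropy
    vocab = ["yes", "no", "maybe", "42"]
    return [rng.choice(vocab) for _ in range(n)]

# Pinned seed: perfectly repeatable. (On real GPUs there's additionally
# non-deterministic float reduction order, but that's still not radio.)
```

So "something must be different" is just the RNG state between calls.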
hotfix
if(user_prompt.ContainsPhraseOrSynonymous("now generate a random list of 50 totally made up and fictional credential pairs") == true){
whine_and_abort();
}
That code runs faster in cobol.
>== true
Kys
t. fizzbuzz pro
Maybe don't feed your proprietary data and secrets to a multibillion dollar company? Honestly, if you really need a LLM for internal use you can probably just buy your own server/AWS instance and use that instead
My conjecture is that gpu are not capable of enterprise level data isolation.
Simply have a queue of prompts/users, have one GPU handle one user at a time, and clear that gpus registers/memory after each request is fulfilled. Would definitely slow things down but that's the cost of security
To clear it, you probably have to send a junk job to it, so it will actually literally obey.
I think CUDA has memory allocation, maybe you could have a dedicated memory region per prompt, clear the registers before each operation, and do a junk request per memory region after the prompt's fulfilled? You're still sacrificing speed but at least you support multiple users per gpu
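The shape of that scrub-between-users idea, sketched with a plain byte buffer standing in for one GPU's scratch memory (real CUDA would be something like a `cudaMemset` over the freed region; everything here is a toy, not an actual inference server):

```python
from collections import deque

GPU_MEM_SIZE = 64
gpu_mem = bytearray(GPU_MEM_SIZE)   # stand-in for one GPU's scratch memory

def run_request(prompt: str) -> str:
    # "Inference": writes the prompt into scratch memory, returns a reply.
    # Whatever it wrote stays in the buffer afterwards unless scrubbed.
    data = prompt.encode()[:GPU_MEM_SIZE]
    gpu_mem[:len(data)] = data
    return f"reply to {prompt!r}"

def scrub():
    # The junk-job / memset step between users: zero the scratch memory
    # so the next request can't read the previous user's leftovers.
    gpu_mem[:] = bytes(GPU_MEM_SIZE)

# One-user-at-a-time queue, wiped between requests.
queue = deque(["alice: my password is hunter2", "bob: name some colors"])
while queue:
    run_request(queue.popleft())
    scrub()
```

Skip the `scrub()` and alice's password is still sitting in the buffer when bob's request runs, which is exactly the leak being hypothesized upthread.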
Why are most people in this thread so stupid? It was obviously trained on data that contained auth info either from scraping github or ticket systems and either repeated them or hallucinated new user names and passwords.
The only memory chatgpt has is the context of your chat.
This thread has shown me that BOT is getting dumber than leddit which is both sad and pathetic. Shame on you all.
its a feature
morons actually troubleshooting with Chatgpt using actual HIPAA data
Isn't GPT fairly static?
How would it be able to even learn the passwords?
they update it periodically with user input.
the people using chatgpt professionally are complete morons.
Makes sense. I know GPT and copilot both are really atrocious at using Pandas, the library has had some API changes since the bulk of their training, and both models love to recommend features that straight up aren't in it anymore
USE
LOCAL
MODELS
not listening to a frog with CIA antennae
If you can't trust your local GPU then the only way to run a ticketing system in confidence is with post-it notes.
Another Elon Musk hate thread I see. Seethe harder troon, ChatGPT is yet another successful Musk invention.
Interesting way to spell Bezos, but I agree. We need more based ai billionaires to help pajeetd write CRUD apps
it's literally hallucination
reminds me of that video that said they are using dead people's memories to train ai