Can LLMs be used to obfuscate data cryptographically? Say you train an LLM on a set of data you don't want anyone to see, and the intended way to extract it is to supply the correct prompt, so the prompt effectively acts as the decryption key. How secure is this scheme cryptographically? Can it be exploited? I am high on ketamine right now and thought of this, sorry.
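
To make the question concrete, here's roughly what I'm imagining, as a minimal sketch. Everything in it is a placeholder I made up for illustration: gpt2 as the model, the trigger string as the "key", plain overfitting as the "encryption" step, and the loop at the end as the naive version of the attack I'm worried about.

```python
"""Sketch: "hiding" a secret in LLM weights behind a trigger prompt.

Hypothetical throughout; requires: pip install torch transformers
(the fine-tuning loop may take a few minutes on CPU).
"""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

trigger = "xk9-open-sesame:"              # hypothetical "key" prompt
secret = " the launch code is 8675309"    # hypothetical plaintext
ids = tok(trigger + secret, return_tensors="pt").input_ids

# "Encrypt": overfit the model until it memorizes trigger -> secret.
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(200):
    loss = model(ids, labels=ids).loss    # standard causal-LM loss
    loss.backward()
    opt.step()
    opt.zero_grad()

# "Decrypt" with the right prompt: greedy decoding replays the secret.
model.eval()
key = tok(trigger, return_tensors="pt").input_ids
out = model.generate(key, max_new_tokens=10, do_sample=False)
print("with key:", tok.decode(out[0]))

# The worry: anyone holding the weights never needs the key. They can
# sample broadly from partial prompts, rank continuations by likelihood,
# or inspect logits and gradients directly.
for _ in range(5):
    out = model.generate(key[:, :2], max_new_tokens=12,
                         do_sample=True, top_k=50)
    print("attacker sample:", tok.decode(out[0]))
```

That last loop is the exploit I'm asking about: the "ciphertext" here is the weights themselves, and nothing in the setup reduces to a hard math problem the way real encryption does. Is there any genuine key hiding the data, or is the security at best the guessing entropy of the prompt?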