remember to download llama (leaked ai chat model) while you still can

torrent link on their official github:
https://github.com/facebookresearch/llama/pull/73/files

coomers rejoice, globohomosexual can suck a fat one

  1. 1 year ago
    Anonymous

    Too bad you need a 4090 to run it and even then it's basically useless out of box

    • 1 year ago
      Anonymous

      >4090
      3060
      >out of box
      wrong box
      https://github.com/oobabooga/text-generation-webui/issues/147#issuecomment-1454987216

      • 1 year ago
        Anonymous

        can I do anything with an old 1080 Ti? I have 64GB of regular memory if that helps

        • 1 year ago
          Anonymous

          I don't believe your GPU will help, but you can still run it in RAM with your CPU; it'll just be much slower.

      • 1 year ago
        Anonymous

        Can I do anything with an old RX580?

    • 1 year ago
      Anonymous

      >need a 4090 to run it
      You don't own a CPU, anon?

    • 1 year ago
      Anonymous

      you can rent that GPU for ~10 cents an hour, or use Google Colab for free.

    • 1 year ago
      Anonymous

      LLaMA-7B runs at great speeds on 8GB cards. Better than any other model that could run on those cards.

      I'm running LLaMA-13B on 16GB of VRAM right now.

    • 1 year ago
      Anonymous

      *Multicore CPU and some RAM
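
The CPU-and-RAM suggestions above are easy to sanity-check: the weights dominate memory use, so the requirement is roughly parameters times bytes per parameter. A back-of-the-envelope sketch (ignoring the few extra GB for activations and KV-cache):

```python
# Rough memory (in GB) just to hold the weights; a real run needs a bit
# more on top for activations and the KV-cache.
def weights_gb(params_billion, bytes_per_param):
    # 1e9 params * N bytes/param is about N GB per billion params
    return params_billion * bytes_per_param

print(weights_gb(7, 4))    # 7B in fp32 on CPU: ~28 GB
print(weights_gb(7, 2))    # 7B in fp16: ~14 GB
print(weights_gb(13, 2))   # 13B in fp16: ~26 GB
```

So the anon upthread with 64GB of system RAM can hold even 13B in full fp32 (~52 GB); it will just be slow.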

  2. 1 year ago
    Anonymous

    It took me 5 hours to download the 220GB file, plus 3 hours of waiting for seeders... It's just GPT-3

    • 1 year ago
      Anonymous

      >it's just gpt3
      Is that bad?

      • 1 year ago
        Anonymous

        It's continuations of prompts; granted, I only got the 7B model going. It's pretty decent at few-shot learning, and GPT-3 isn't bad, but yeah, it's not the chatbot you're looking for
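
For context, "few-shot learning" with a base model like this just means packing worked examples into the prompt and letting the model continue the pattern. A minimal sketch of that prompt format (the task and labels here are invented for illustration; no model call is made):

```python
# Build a few-shot prompt for a base (non-chat) LM: show the pattern a
# few times, then leave the last answer blank for the model to continue.
examples = [
    ("great movie, loved it", "positive"),
    ("total waste of time", "negative"),
]
query = "surprisingly fun"

prompt = "".join(f"Review: {text}\nSentiment: {label}\n\n"
                 for text, label in examples)
prompt += f"Review: {query}\nSentiment:"
print(prompt)
```

The model's continuation after the final "Sentiment:" is read off as the answer; no fine-tuning involved.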

  3. 1 year ago
    Anonymous

    The model is pure garbage. Not worth the bandwidth it takes to download. Never trust the zuck.

    • 1 year ago
      Anonymous

      >he didn't convert it
      bro your repetition penalty?

    • 1 year ago
      Anonymous

      It's a bare LLM. It needs further training before it can be your waifu. Think of it as a "how to brain" rather than "what to brain"

    • 1 year ago
      Anonymous

      It's not garbage at all.
      This is 13B with top-k and repetition_penalty

      • 1 year ago
        Anonymous

        And this is princess smut with 35b.

        >How much VRAM does each model need?
        >I fricking hate when they don't specify the hardware requirements, data "scientist" my ass

        10GB for 7B in 8-bit mode and 16GB for 13B in 8-bit mode.
        For 35B in 8-bit mode, you need 35GB of VRAM.

        • 1 year ago
          Anonymous

          can you run 35B on 2x4090s?

          • 1 year ago
            Anonymous

            Yes, people are doing it with 2x3090 and getting good inference speeds.

            • 1 year ago
              Anonymous

              How? Just SLI it?
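
Not SLI; the usual approach is model parallelism: the transformer blocks are partitioned across the cards and each GPU hands its activations to the next (this is what tools like Hugging Face's device_map="auto" automate). A toy sketch of the partitioning logic, assuming equal-sized blocks:

```python
# Partition n_layers transformer blocks across n_gpus cards as evenly as
# possible (assumption: all blocks cost the same amount of memory).
def split_layers(n_layers, n_gpus):
    per = -(-n_layers // n_gpus)  # ceiling division
    return {gpu: list(range(gpu * per, min((gpu + 1) * per, n_layers)))
            for gpu in range(n_gpus)}

plan = split_layers(60, 2)  # e.g. 60 blocks over 2 cards
print({gpu: (layers[0], layers[-1]) for gpu, layers in plan.items()})
```

Each card only needs VRAM for its share of the blocks, which is why two 24GB cards can host a model that no single one could.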

        • 1 year ago
          Anonymous

          On my system it's closer to 39GB in total.

      • 1 year ago
        Anonymous

        >top-k and repetition_penalty
        What's that?
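
Since nobody answered: top-k sampling keeps only the k most likely next tokens, and repetition penalty down-weights tokens that were already generated so the model stops looping. A toy sketch of both knobs (simplified to positive logits; real implementations divide positive logits and multiply negative ones):

```python
import math, random

def sample_token(logits, generated, k=2, penalty=1.3):
    # Repetition penalty: shrink the logit of anything already emitted.
    logits = {t: (v / penalty if t in generated else v)
              for t, v in logits.items()}
    # Top-k: keep only the k highest-scoring candidates.
    top = sorted(logits.items(), key=lambda kv: -kv[1])[:k]
    # Softmax over the survivors, then sample from that distribution.
    total = sum(math.exp(v) for _, v in top)
    r, acc = random.random() * total, 0.0
    for tok, v in top:
        acc += math.exp(v)
        if acc >= r:
            return tok
    return top[-1][0]

logits = {"the": 5.0, "cat": 4.0, "zzz": 0.1}
print(sample_token(logits, generated=["the"]))  # "zzz" can never be picked
```

With the penalty applied, "the" drops from 5.0 to ~3.85, so "cat" becomes the most likely survivor; low-probability tokens outside the top k are never sampled at all.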

  4. 1 year ago
    Anonymous

    >"coomers rejoice, globohomosexual can suck a fat one"
    >BOT user
    >close to 0 knowledge in pytorch, coding, anything ml
    >confirmed moron

    congratulations, you downloaded llama, now what you gonna do, use it on your koboldai gui? oh you gonna generate some text with the 7B model? can't use cli because all your moronic hands can do is press buttons and that's it?

    wannabe researcher go have a nice day homosexual
    infecting the ai space with morons since August 2022... SD should never have become a thing...

    • 1 year ago
      Anonymous

      oobabooga

    • 1 year ago
      Anonymous

      >chimp noises

  5. 1 year ago
    Anonymous

    https://rentry.org/llama-tard

  6. 1 year ago
    Anonymous

    >while you still can
    >GPL-3 License
    >712 Forks
    ?

    • 1 year ago
      Anonymous

      the github forks don't have the model weights in them. however the torrent basically can't be stopped and the weights are on huggingface now

  7. 1 year ago
    Anonymous

    Good morning Sirs can we use this to create anime girl sex story?

    • 1 year ago
      Anonymous

      Yes, but even in the story she will be repulsed by our stench... indianbros i don't feel so good :(

  8. 1 year ago
    Anonymous

    How much VRAM does each model need?
    I fricking hate when they don't specify the hardware requirements, data "scientist" my ass
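
The going numbers elsewhere in this thread are about 10GB of VRAM for 7B and 16GB for 13B in 8-bit mode. "8-bit mode" stores each weight as an int8 plus a shared float scale instead of a 16/32-bit float, roughly halving or quartering the memory. A toy sketch of the absmax scheme (a simplification of what bitsandbytes actually does):

```python
# Toy absmax int8 quantization: map a group of float weights onto
# [-127, 127] with one shared scale, then reconstruct approximately.
def quantize(ws):
    scale = max(abs(w) for w in ws) / 127.0
    return [round(w / scale) for w in ws], scale

def dequantize(qs, scale):
    return [q * scale for q in qs]

w = [0.5, -1.0, 0.25]
q, s = quantize(w)
w2 = dequantize(q, s)
print(q)  # small integers instead of floats
print(max(abs(a - b) for a, b in zip(w, w2)) < 0.01)  # tiny rounding error
```

Memory drops from 2 or 4 bytes per weight to 1, at the cost of that small reconstruction error.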

  9. 1 year ago
    sage

    At this point Meta should publish the original checksums so that we know if it's the real model. PyTorch checkpoints can execute arbitrary code when loaded, so this might as well be a clever attack.

    >anon downloads
    >ai.exe
    >screen instantly goes dark
    >bobs and vagene
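
The arbitrary-code concern is real: torch.save uses Python's pickle under the hood, and unpickling will call whatever callables the file specifies. A harmless demonstration, with print standing in for a malicious payload:

```python
import pickle

# An object can dictate what gets called when it is unpickled.
class Payload:
    def __reduce__(self):
        # Tells pickle: "to reconstruct me, call print(...)".
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the print fires even though we never called anything
```

Recent PyTorch versions offer torch.load(..., weights_only=True), which refuses to unpickle arbitrary objects; with older versions, only load checkpoints you trust.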

    • 1 year ago
      Anonymous

      The torrent referenced in OP's post has a different infohash than the first torrent; it's also missing the llama.sh file that the first torrent has. I haven't checked individual file hashes yet.

      first torrent: b8287ebfa04f879b048d4d4404108cf3e8014352
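
Comparing the two copies file by file is straightforward with sha256sum; a sketch using a stand-in file (the real weight filenames and digests are not reproduced here):

```shell
# Hash one copy of the files, then verify any other copy against the list.
# "model.bin" is a stand-in; substitute the actual *.pth files.
printf 'stand-in weights' > model.bin
sha256sum model.bin > model.bin.sha256
sha256sum -c model.bin.sha256   # prints "model.bin: OK" if the copy matches
```

Any flipped byte makes the check report FAILED, so two torrents with matching per-file hashes contain identical weights regardless of infohash.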

  10. 1 year ago
    CapByte

    i have it working

  11. 1 year ago
    Anonymous

    I'll wait for improvements so I can run it on 8GB VRAM quite fast and without worries. Thank you very much, and see you in a month.

    • 1 year ago
      Anonymous

      Your improvements are here. 7B runs on 8GB cards, getting up to 30 it/s depending on the card. https://github.com/oobabooga/text-generation-webui/issues/147

  12. 1 year ago
    Anonymous

    This is the best faceberg can do? Oy vey!!!

    • 1 year ago
      Anonymous

      It's over, bro. Big Tech has it all. We are doomed. We'll have to use Chinese Meyyg4n models.

  13. 1 year ago
    Anonymous

    https://news.ycombinator.com/item?id=35026902

  14. 1 year ago
    Anonymous

    What’s “it/s”?

    • 1 year ago
      Anonymous

      Tokens per second. A token is ~4 text characters.

    • 1 year ago
      Anonymous

      pronouns

    • 1 year ago
      Anonymous

      Iterations per second.

  15. 1 year ago
    Anonymous

    How good is it compared to GPT-J (6B)?

    • 1 year ago
      Anonymous

      I second this question; it's the big question here, and in practice, not on the meme metrics used to make papers look good

  16. 1 year ago
    Anonymous

    Will it run on 1060 3GB?

    • 1 year ago
      Anonymous

      not yet

  17. 1 year ago
    Anonymous

    So what I'm getting right now is that this is basically useless for now unless you specifically train it. Is this any different from anything we already have, like ChatGPT/Bing AI, or is it just hype unless you can be arsed to train it out of the box?

    • 1 year ago
      Anonymous

      >unless you're arsed to train it out of box
      how is that not appealing? you can train it how you please: make it into a bot shitposter or just an erotica machine.

      • 1 year ago
        Anonymous

        Training is half the fun

        • 1 year ago
          Anonymous

          it's really not
          >change a bit, click submit
          >As an AI...
          >change a bit, click submit
          >As an AI...
          >change a bit, click submit
          >As an AI...
          >change a bit, click submit
          >Error, too many requests in 1 hour
          it sure works fine when it finally decides to give you what you want, but getting there is just a fricking pain

      • 1 year ago
        Anonymous

        Meta used 2,048 A100s to train the model, so good luck training it, champ

  18. 1 year ago
    Anonymous

    I don't know what I would even do with it so I'll pass

  19. 1 year ago
    Anonymous

    Christ almighty super mario
