AMD ROCm AI Stable Diffusion Guide

Post information about AMD stable diffusion specs, models and techniques to help improve the install guide. AMD power chads, we need YOU HERE NOW! Let's make it werk. No shills!

  1. 1 month ago
    Anonymous

    Previous threads:
    https://desuarchive.org/g/thread/89022796/
    https://desuarchive.org/g/thread/89069132/

    Guide:
    https://rentry.org/sdamd

    • 1 month ago
      Anonymous

      here's all the AMD guides in the OP

      Linux AMD most detailed guide (Native install, not docker)
      https://rentry.org/sd-nativeisekaitoo

      Linux AMD original guide
      https://rentry.org/sdamd

      Windows AMD guide
      https://rentry.org/ayymd-stable-diffustion-v1_4-guide

People who have tried multiple methods of install, what have been your experiences, e.g. Docker vs native? Also, do you think there are any critical details missing from these guides that you wish they had mentioned, or problems you ran into while following them?

      Be detailed so the guides can be updated, everyone's setup is very different.

      • 1 month ago
        Anonymous

        if you want to streamline it and reduce questions i think everybody deciding on a beginner friendly distro to pair with the native install guide is the way to go

        anybody already familiar with linux can figure it out but for a winbaby doing their first dual boot even choosing a distro is a nightmare, let alone figuring out the differences between them when it comes to installation instructions

        • 1 month ago
          Anonymous

          manjaro kde is pretty easy to jump into I think
          then the stuff can just be added to an AUR repo and automate the install

        • 1 month ago
          Anonymous

          Using a thumb drive to run a distro is easy for people to understand. That or a VM would work fine

      • 1 month ago
        Anonymous

I've gotten the ONNX Python config file version working on Windows. It loads 15 GB into my RAM for a few seconds, then switches over to the 6900 XT. Without a GUI or many of the variables to change, it wasn't really worth messing with.

      • 1 month ago
        Anonymous

the only one I've gotten to work was the ONNX version using huggingface, but I don't think you can change the model and it's nowhere near as nice as having a UI

  2. 1 month ago
    Anonymous

    So I guess running Stable Diffusion on AMD is impossible to do on Windows?

    • 1 month ago
      Anonymous

      Not impossible, just inferior and hacky.

      • 1 month ago
        Anonymous

Amen to that. It took me 3 hours just to configure because I had to rebuild PyTorch from source, and it's using an outdated LTS version, so it's inferior in all aspects.

Just get an Ubuntu VM and run it through there.

    • 1 month ago
      Anonymous

      Try virtualbox to run a USB install of Linux.
      https://desuarchive.org/g/thread/89022796/#q89045929

      Give it a try, any problems just tell us what happened. We'll get you up and running.

    • 1 month ago
      Anonymous

      Shouldn't it run fine in a docker container?

      • 1 month ago
        Anonymous

        not on windows, amd gpus are not exposed to docker containers on windows.

    • 1 month ago
      Anonymous

      Here is the guide for windows, let us know your results, also try linux install on USB to compare.

      https://rentry.org/ayymd-stable-diffustion-v1_4-guide

    • 1 month ago
      Anonymous

      hip works on windows you know
      the only reason you're forced to go through the retarded shit like onnx is because it's missing the extra runtime libs, it has the core ones
      it just needs environment variables setup and an export library to work with clang
      so in theory you should be able to rejigger the rocm build scripts
      i got rocblas to successfully build but without the tensor library, unfortunately hipBLAS and the rest of the runtime libs need the tensor lib
      i want to figure out how to get the tensor lib to build as a single binary but its build system is even stupider than the rest of the hip libs and uses a really really stupidly designed python script

    • 1 month ago
      Anonymous

      I'm running it right now on my Win10 machine.
      R9 5950x, 5700XT, 32GB

      It's using my CPU instead of my GPU, but it works.

  3. 1 month ago
    Anonymous

    https://desuarchive.org/g/thread/89069132/#q89069802
    links to best posts of 1st thread

    Helpful Information to include in your post:
    Have you had Docker Issues? (y/n)
    Pytorch Version
    Have you had packaging dependency issues? (y/n)
    What GPU are you using?
    What linux distros have you tried? What issues arose or went smoothly for you?
    How many it/s? (or s/it on older cards)

    • 1 month ago
      Anonymous

      Not impossible, just inferior and hacky.

      Try virtualbox to run a USB install of Linux.
      https://desuarchive.org/g/thread/89022796/#q89045929

      Give it a try, any problems just tell us what happened. We'll get you up and running.

      Ubuntu 20.04 may be good to try first.

      https://desuarchive.org/g/thread/89022796/#q89066255
      Got it working insanely easily today.
      You need Ubuntu 20.04 as a base. AMD's repos are fucked for any other system, including 22.04.
      From there, you install ROCm using their guide (amdgpu-install --usecase=rocm).
      Paranoidically alias python to python3.8.
      Install python3.8-venv
      Clone the automatic repo.
      HSA_OVERRIDE_GFX_VERSION=10.3.0
      TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1'
      COMMANDLINE_ARGS="--precision full --no-half --listen" (usually also requires --medvram)

Really easy after that. It's simple once you know that half of the guides are made by retards who don't know what the fuck they're doing. I bet some of my steps can be reduced, too.

      Also add the user you made to the "render" group, which is important
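The steps above, collected into one sketch for reference (the launch script name and repo layout follow the AUTOMATIC1111 conventions; treat this as a starting point under those assumptions, not gospel):

```shell
# Ubuntu 20.04 route from the post above (assumes AMD's amdgpu-install is already set up)
sudo amdgpu-install --usecase=rocm            # install ROCm per AMD's guide
sudo usermod -aG render "$USER"               # add your user to the render group
sudo apt install python3.8-venv
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
# these go in webui-user.sh (or export them before launching):
export HSA_OVERRIDE_GFX_VERSION=10.3.0
export TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1'
export COMMANDLINE_ARGS="--precision full --no-half --listen"   # add --medvram if needed
./webui.sh
```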

      • 1 month ago
        Anonymous

        Manjaro Just Werks for me, and I'm not about to break it.

  4. 1 month ago
    Anonymous

This environment variable is meant to improve AMD stable diffusion performance. Has anyone used it, and did it speed up image generation times?
HSA_OVERRIDE_GFX_VERSION=10.3.0

    good insights into HSA from previous threads
    worth a read
    https://desuarchive.org/g/search/text/hsa%20amd/

    Jargon Guide
    AMD ROCm - Radeon Open Compute
    OpenCL - Open Computing Language
HIP - Heterogeneous-computing Interface for Portability
HSA - Heterogeneous System Architecture

    https://developer.amd.com/wordpress/media/2012/10/hsa10.pdf
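For anyone wondering what the variable actually does: it overrides the GPU ISA version the ROCm runtime reports, so libraries load kernels built for that target (10.3.0 corresponds to gfx1030, i.e. RDNA2). A minimal usage sketch, assuming a bash shell and the AUTOMATIC1111 launcher:

```shell
# Spoof the reported GPU ISA as gfx1030 so ROCm loads prebuilt kernels;
# export before launching (or put the export line in webui-user.sh).
export HSA_OVERRIDE_GFX_VERSION=10.3.0
./webui.sh
```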

    • 1 month ago
      Anonymous

HSA OVERRIDE Experiences
      HSA_OVERRIDE_GFX_VERSION=10.3.0

      https://desuarchive.org/g/search/text/HSA_OVERRIDE_GFX_VERSION%3D10.3.0/

  5. 1 month ago
    Anonymous

    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

    Videos

  6. 1 month ago
    Anonymous

    seems like many amd cards are working

    https://www.reddit.com/r/StableDiffusion/comments/ww436j/howto_stable_diffusion_on_an_amd_gpu/

    I've documented the procedure I used to get Stable Diffusion up and running on my AMD Radeon 6800XT card. This method should work for all the newer navi cards that are supported by ROCm.

    UPDATE: Nearly all AMD GPU's from the RX470 and above are now working.

    CONFIRMED WORKING GPUS: Radeon RX 66XX/67XX/68XX/69XX (XT and non-XT) GPU's, as well as VEGA 56/64, Radeon VII.

    CONFIRMED: (with ENV Workaround): Radeon RX 6600/6650 (XT and non XT) and RX6700S Mobile GPU.

    RADEON 5700(XT) CONFIRMED WORKING - requires additional step!

    CONFIRMED: 8GB models of Radeon RX 470/480/570/580/590. (8GB users may have to reduce batch size to 1 or lower resolution) - details

    Note: With 8GB GPU's you may want to remove the NSFW filter and watermark to save vram, and possibly lower the samples (batch_size): --n_samples 1

    ENV Workaround...
    https://github.com/RadeonOpenCompute/ROCm/issues/1756#issuecomment-1160386571

    HSA_OVERRIDE_GFX_VERSION=10.3.0

    • 1 month ago
      Anonymous

      What's the performance on ayymd compared to novidya anyway?

      • 1 month ago
        Anonymous

Very poor. The 6800XT gets 6 it/s at 512x512.

        • 1 month ago
          Anonymous

          Welp, better than nothing I guess.

        • 1 month ago
          Anonymous

That's triple what my 1080ti does, so still a sufficient upgrade if I feel the 7800xt isn't worth going for

      • 1 month ago
        Anonymous

Very poor. The 6800XT gets 6 it/s at 512x512.

        Welp, better than nothing I guess.

I'm going to try to get feedback on more GPUs and performance; no one has really optimized any of the code yet, so it is early days.

the later 6000 series is going to get better results. I was searching through, and there was a guy on an RX 6900 XT getting about 6-9 it/s.

        GPU: RX 6900 XT

        He used this dockerfile to make his rx6900xt work.

        https://github.com/AshleyYakeley/stable-diffusion-rocm

        Stable Diffusion Dockerfile for ROCm

        Run Stable Diffusion on an AMD card, using this method. Tested on my RX 6900 XT.

        Obtain sd-v1-4.ckpt and put it in models/.
        Run ./build-rocm to build the Docker image.
        Run ./run-rocm to run a shell in the Docker container.
        Inside the container, you can do e.g. python scripts/txt2img.py --outdir /output --plms --seed $RANDOM$RANDOM --prompt "a unicorn riding a purple tricycle"
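The steps above as one shell session (assumes the repo's build-rocm/run-rocm scripts work as described; the ckpt path is a placeholder):

```shell
git clone https://github.com/AshleyYakeley/stable-diffusion-rocm
cd stable-diffusion-rocm
cp /path/to/sd-v1-4.ckpt models/     # model checkpoint goes in models/
./build-rocm                         # build the Docker image
./run-rocm                           # opens a shell inside the container
# inside the container:
python scripts/txt2img.py --outdir /output --plms \
  --seed $RANDOM$RANDOM --prompt "a unicorn riding a purple tricycle"
```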

    • 1 month ago
      Anonymous

HSA OVERRIDE Experiences
      HSA_OVERRIDE_GFX_VERSION=10.3.0

      https://desuarchive.org/g/search/text/HSA_OVERRIDE_GFX_VERSION%3D10.3.0/

This environment variable is meant to improve AMD stable diffusion performance. Has anyone used it, and did it speed up image generation times?
HSA_OVERRIDE_GFX_VERSION=10.3.0

      good insights into HSA from previous threads
      worth a read
      https://desuarchive.org/g/search/text/hsa%20amd/

      Jargon Guide
      AMD ROCm - Radeon Open Compute
      OpenCL - Open Computing Language
HIP - Heterogeneous-computing Interface for Portability
HSA - Heterogeneous System Architecture

      https://developer.amd.com/wordpress/media/2012/10/hsa10.pdf

      >HSA_OVERRIDE_GFX_VERSION=10.3.0
      Where does this go?
      >The webui-user.sh file

  7. 1 month ago
    Anonymous

How to check your ROCm version:

    apt show rocm-dkms

    Package: rocm-dkms
    Version: 3.5.1-34
    Priority: optional
    Section: devel
    Maintainer: Advanced Micro Devices Inc.
    Installed-Size: 13.3 kB
    Depends: rocm-dev, rock-dkms
    Homepage: https://github.com/RadeonOpenCompute/ROCm
    Download-Size: 756 B
    APT-Manual-Installed: yes
    APT-Sources: http://repo.radeon.com/rocm/apt/3.5.1 xenial/main amd64 Packages
    Description: Radeon Open Compute (ROCm) Runtime software stack

  8. 1 month ago
    Anonymous

    AMD Documentation

    AMD technical documentation is pooled here.
    https://rocmdocs.amd.com/en/latest/ROCm.html

    ROC Github Repositories
    https://github.com/orgs/RadeonOpenCompute/repositories?type=all

  9. 1 month ago
    Anonymous

    >Is there a simple command to determine if you are using the CPU or GPU when running stable diffusion? Anyone know how you simply determine that?

python3 -c 'import torch; print(torch.cuda.is_available())'
prints True if the GPU is visible to PyTorch, False if not

    also you can check your GPU metrics, you should see your VRAM get filled up.
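A slightly more defensive version of that check (a sketch; it assumes PyTorch's standard `torch.cuda` API, which the ROCm builds also expose):

```python
# Returns a human-readable status string instead of crashing when torch is absent.
def gpu_status():
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    if torch.cuda.is_available():   # ROCm builds report through the cuda API
        return "GPU detected: " + torch.cuda.get_device_name(0)
    return "CPU fallback - check your ROCm install and HSA_OVERRIDE_GFX_VERSION"

print(gpu_status())
```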

  10. 1 month ago
    Anonymous

    GPU RX 5700 XT / RX 6800

I have been running it on my 5700xt and then upgraded to an rx 6800 because I wanted more vram. Haven't had any problems; the steps below are what I did. One thing to note: on first run there is a complaint about a HIP sqlite file. I have not found a way to fix this, even after installing the HIP packages from the Arch repos. I don't think it affects much, but the first time you make an image after starting the webui it will take a minute or two; after that it works at normal speed.

    Remove all version of pytorch installed on your system

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui

    edit the webui-user.sh

Add this to the command line args; it will fix an issue where grey pictures are produced

    --precision=full --no-half

Add these to the bottom of the file. The first line makes it download the ROCm build of PyTorch; the second fixes an issue where it reports that no CUDA-capable GPU was found when running.

    export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1";
    export HSA_OVERRIDE_GFX_VERSION=10.3.0

    Start the webui.sh

  11. 1 month ago
    Anonymous

    let me guess, windows still "just werks" even though you have to do all of this gorilla retardation?

    • 1 month ago
      Anonymous

      No, SD on Windows is a giant pain in the ass even with Novidya.

    • 1 month ago
      Anonymous

      you only have to do this if you fell for the ayymd meme

    • 1 month ago
      Anonymous

      It's fucking awful, just use a VM or go over to Loonix

  12. 1 month ago
    Anonymous

    The technology is moving quickly, it'll be good to post mini guides or links to guides of what is new.

    Textual Inversion - trains a word with one or more vectors that approximate your image. So if it is something it already has seen lots of examples of, it might have the concept and just need to 'point' at it. It is just expanding the vocabulary of model but all information it uses is already in the model.

    Dreambooth - this is essentially model fine tuning, which changes the weights of the main model. Dreambooth differs from typical fine tuning in that in tries to keep from forgetting/overwriting adjacent concepts during the tuning.

    Hypernetworks - this is basically an adaptive head - it takes information from late in the model but injects information from the prompt 'skipping' the rest of the model. So it is similar to fine tuning the last 2 layers of a model but it gets much more signal from the prompt (it is taking the clip embedding of the prompt right before the output layer).
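The distinction between the three can be sketched in a few lines of plain Python (a toy illustration under loose assumptions, not real Stable Diffusion internals; every name in it is made up for the example):

```python
# Toy model: a frozen linear map W applied to a token embedding.
import random

random.seed(0)
def vec(n=4): return [random.gauss(0, 1) for _ in range(n)]

vocab = {"cat": vec(), "dog": vec()}   # token embedding table
W = [vec() for _ in range(4)]          # stand-in for the model's weights

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def model(embedding, weights, hypernet=None):
    h = matvec(weights, embedding)     # "the rest of the model"
    if hypernet:                       # hypernetwork: inject prompt info late
        h = [a + b for a, b in zip(h, hypernet(embedding))]
    return h

# Textual inversion: add ONE new embedding vector; weights stay frozen.
vocab["mychar"] = vec()                # would be optimized against your images
W_before = [row[:] for row in W]

# Dreambooth-style fine-tuning: the weights themselves move.
W_tuned = [[w + 0.01 * random.gauss(0, 1) for w in row] for row in W]

# Hypernetwork: a small extra module on top of the frozen base.
H = [[0.01 * random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
hypernet = lambda e: matvec(H, e)

out_ti = model(vocab["mychar"], W)     # new concept, same base model
out_hn = model(vocab["cat"], W, hypernet)

assert W == W_before                   # TI / hypernet leave the base frozen
assert W_tuned != W                    # fine-tuning changed the weights
```

The point: textual inversion and hypernetworks leave the base checkpoint untouched (cheap and composable), while Dreambooth produces a whole new set of weights.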

    Hypernetwork Training Information

    https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2284

    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion#hypernetworks

    https://blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac

  13. 1 month ago
    Anonymous

So I managed to blunder my way into a working installation of the Automatic1111 WebUI on Manjaro, but I have a couple of outstanding issues. First, certain functions cause a systemwide hang/lockup that forces me to do a hard shutdown. Two reliable triggers I've identified are switching the model.ckpt while the client is live (i.e. using the dropdown), and using LDSR upscaling. Another, less pressing issue is that I haven't figured out how to install the MIOpen kernel files (or whatever that notification is talking about). I couldn't find them in the Manjaro repository, and when I tried to figure out how to get them from Arch I got really confused: checking my already-installed packages, I didn't see anything containing "rocm", which seems impossible, because I've done PROOMPTs with no problem at speeds (around 6-7 it/s) that I'm pretty sure would be impossible if it were just using the CPU.

    • 1 month ago
      Anonymous

      you can try this

How to check your ROCm version:

      apt show rocm-dkms

      Package: rocm-dkms
      Version: 3.5.1-34
      Priority: optional
      Section: devel
      Maintainer: Advanced Micro Devices Inc.
      Installed-Size: 13.3 kB
      Depends: rocm-dev, rock-dkms
      Homepage: https://github.com/RadeonOpenCompute/ROCm
      Download-Size: 756 B
      APT-Manual-Installed: yes
      APT-Sources: http://repo.radeon.com/rocm/apt/3.5.1 xenial/main amd64 Packages
      Description: Radeon Open Compute (ROCm) Runtime software stack

      to check your ROCm version

      if you followed this guide for MIOpen install say where it went wrong?
      https://rocmsoftwareplatform.github.io/MIOpen/doc/html/install.html

      great post btw, all the different distros means not everyone's problems/solutions are the same. explaining your steps and problems in depth will help us all in the long run. we need more posts like this.

more MIOpen info
      http://ceur-ws.org/Vol-2744/invited2.pdf
      MIOpen: An Open Source Library For Deep Learning Primitives

    • 1 month ago
      Anonymous

      Oh I almost forgot, I remember a few months back when those new Radeon drivers for Windows with the OpenGL improvements came out, I was reading about it and saw comments from people complaining about some longstanding issue where video playback would slow their system to a crawl. I never encountered that issue myself, but just recently I was watching some youtube videos on my Manjaro partition and it locked up in a similar way to when the WebUI freezes. Not sure if that's at all related but I figured I'd throw it out there. I decided to go with the open source gpu drivers because I'd heard good things, but I was surprised to learn they apparently haven't been updated in like, two years or some shit? I guess that's not inherently a reason to be skeptical but it did weird me out a little.

      you can try this
      [...]
      to check your ROCm version

      if you followed this guide for MIOpen install say where it went wrong?
      https://rocmsoftwareplatform.github.io/MIOpen/doc/html/install.html

      great post btw, all the different distros means not everyone's problems/solutions are the same. explaining your steps and problems in depth will help us all in the long run. we need more posts like this.

more MIOpen info
      http://ceur-ws.org/Vol-2744/invited2.pdf
      MIOpen: An Open Source Library For Deep Learning Primitives

      Honestly I probably gave up on the MIOpen thing prematurely, I was itching to try out SD and ended up kinda forgetting about the warning because it seems to work fine without it. From what I understand not having it installed just means your first gen every startup is a little slower than the rest. I'm currently doing some stuff on my Windows partition but I'll boot into Manjaro and check out what's going on with ROCm in a little bit.

      • 1 month ago
        Anonymous

        here, sorry for the late response but I ended up running into a lot of problems and had to turn in for the night. Unfortunately I've not made much progress today either. For the sake of contributing all I can say is that I've got a 6700 XT running a dual-boot install of Manjaro and it does about 6-7 it/sec as I said, without the MIOpen kernels package. And yeah, best as I can tell it is/was somehow working despite "pacman -Q" spitting out picrel(i.e. seemingly no rocm related packages installed). I've since messed around with trying to build the miopen-hip package from the AUR, the result of which seems to have been that I have at least some of the rocm dependencies installed now(and I gotta say just as an aside, being a linux noob, that experience was...far from pleasant). I'm currently stuck with several of these errors
        clang-14: error: invalid target ID 'gfx####'; format is a processor name followed by an optional colon-delimited list of features followed by an enable/disable sign
        Where "####" is some variant like "1103" or "1036" for each error. Google has availed me not in trying to figure out how to proceed from this point. I also encountered freezes during the code compilation with just a web browser and the terminal open, so I'm beginning to wonder if there's something inherently unstable about my setup. I've hardly done anything besides install the SD-related stuff, though.

        • 1 month ago
          Anonymous

          hang in there, hopefully someone can give you advice on what to try out. Your posts are excellent information. you are documenting the problems you encounter, error messages and steps you have tried. keep doing that.

          Anyone know if it's related to this?
          https://reviews.llvm.org/D88377

          well i can see why someone would find arch cool. they can have way more control over what they are doing. only problem is when some retard like me bumbles in trying to use god damned stable diffusion and has only even gotten to the point of having a working desktop. at this point, I think I am better off than before though. If anyone has guidance from start to finish on a fresh install of stable diffusion starting from a fresh install of arch i am all ears.

          there is an arch section in this guide.
          https://rentry.org/sdamd
          just tell us what doesn't make sense to you or what you get stuck on specifically or what you don't understand. devs have the habit of assuming most users have their prerequisite level of knowledge. that isn't the case, especially with amd rocm, so we need to make better guides. your understanding, or lack of it, *at every stage*, will make the guides better. just keep telling the processes you are going through and we will help.

  14. 1 month ago
    Anonymous

    AMD ROCm videos

    ROCm: AMD's platform for GPU computing

    Installing AMD OpenCL ROCm driver Ubuntu 20.04

    Proprietary Drivers vs Open Source | nVidia vs AMD - Chris Titus Tech

    AMD ROCm™ OpenMP Demo

    AMD Instinct(tm) Accelerators and the ROCm(tm) Platform...

    Introduction to AMD GPU Hardware
    General introduction to ROCm and programing with HIP (Heterogeneous-Computing Interface for Porta...

    Install ROCm 3.0 and tensorflow on Ubuntu - AMD RX 580 GPU...

    Install ROCm & PAL AMD OpenCL on fedora 36 (and compatible distro...

    Tensorflow on AMD GPU with Windows 10

    Tensorflow CUDA vs DirectML on 3090,Titan RTX and Radeon 6800 ...

    AMD ROCM Budget Deep Learning PC Build (USD $700)
    https://www.youtube.com/watch?v=-A3dC2EBc9c

    ROCm profiler basic tutorial
    https://www.youtube.com/watch?v=Kb50mnJGaUc

    ROCm and Distributed Deep Learning on Spark and TensorFlowJim Dow...
    https://www.youtube.com/watch?v=neb1C6JlEXc

    • 1 month ago
      Anonymous

      hip works on windows you know
      the only reason you're forced to go through the retarded shit like onnx is because it's missing the extra runtime libs, it has the core ones
      it just needs environment variables setup and an export library to work with clang
      so in theory you should be able to rejigger the rocm build scripts
      i got rocblas to successfully build but without the tensor library, unfortunately hipBLAS and the rest of the runtime libs need the tensor lib
      i want to figure out how to get the tensor lib to build as a single binary but its build system is even stupider than the rest of the hip libs and uses a really really stupidly designed python script

      Here are the AMD roc drivers / libraries etc

      https://rocmdocs.amd.com/en/latest/ROCm_Libraries/ROCm_Libraries.html

      hipSOLVER User Guide
      https://hipsolver.readthedocs.io/

      MIGraphX User Guide
      https://rocmsoftwareplatform.github.io/AMDMIGraphX/doc/html/

      RCCL User Guide
      https://rccl.readthedocs.io/

      rocALUTION User Guide
      https://rocalution.readthedocs.io/

      rocBLAS User Guide
      https://rocblas.readthedocs.io/

      rocFFT User Guide
      https://rocfft.readthedocs.io/

      rocRAND User Guide
      https://rocrand.readthedocs.io/

      rocSOLVER User Guide
      https://rocsolver.readthedocs.io/

      rocSPARSE User Guide
      https://rocsparse.readthedocs.io/

      rocThrust User Guide
      https://rocthrust.readthedocs.io/

  15. 1 month ago
    Anonymous

    Some more USB / Dual boot Linux videos
    Artix Linux - does anyone notice a performance increase or encounter any problems on this lightweight distro?

    Running/installing FULL Linux operating system on a USB (Artix or...

    Artix Linux - Installation and Review

    Dual Boot Windows and Artix Linux | Artix Linux Install | Dual Bo...

    Artix Installation Guide for VM Virtual Box | UEFI boot installation

    GPU Pass-through On Linux/Virt-Manager

    Ditch Virtualbox, Get QEMU/Virt Manager

    Ventoy - An Easy to Use MultiBoot USB Tool.

  16. 1 month ago
    Anonymous

    Docker guide

    Using Docker seems to be the easiest solution, and it works on any OS.

    Install Docker. https://docker.com
    Go to https://hub.docker.com/r/rocm/pytorch/tags
    Pull the latest image, and run "the master test".
    If it works, great! Follow the tard guide inside the running image.
    If it fails with hipErrorNoBinaryForGpu: You'll have to compile PyTorch for your GPU. See the corresponding section. If you're on Arch, try the unofficial repository method.
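For reference, the pull-and-run step usually looks something like this (the device flags are the standard way to expose an AMD GPU to a ROCm container; the image tag here is an example, check the hub page for current ones):

```shell
docker pull rocm/pytorch:latest
# the --device flags expose the AMD GPU (KFD compute interface + DRI) to the container
docker run -it --device=/dev/kfd --device=/dev/dri \
  --group-add video --security-opt seccomp=unconfined \
  rocm/pytorch:latest
# inside the container, verify the GPU is visible:
python3 -c 'import torch; print(torch.cuda.is_available())'
```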

  17. 1 month ago
    Anonymous

    Manual Linux Install Guide for Arch
Anything need to be added?
    https://rentry.org/sdamd

    Add the arch4edu repository as per https://wiki.archlinux.org/title/unofficial_user_repositories#arch4edu.
    Now:
    sudo pacman -S python-pytorch-opt-rocm python-torchvision
    git clone https://github.com/kali-linex/arch-fmt8.git
    cd arch-fmt8 && makepkg -si && cd .. && rm -rf arch-fmt8
    cd /wherever/you/want/to/put/the/stable/diffusion/repo
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
    python3 -m venv --system-site-packages venv

    Add export MIOPEN_DEBUG_COMGR_HIP_PCH_ENFORCE=0 to the top of webui.sh
    Run ./webui.sh
    It should work now. If it has very bad performance (About 1-1.5 it/s expected for RX580 8GB, newer cards will be faster), webui.sh might have installed its own pytorch and torchvision. Run these commands (bash or zsh):

source venv/bin/activate
if [ "$(python -c 'import torch; print(torch.cuda.is_available())')" = False ]; then pip uninstall -y torch torchvision; echo "Uninstalled virtualenv torch & torchvision; should be good now"; else echo "Something else is wrong or your GPU is just slow"; fi

    • 1 month ago
      Anonymous

      does the first command also install python/pip?

  18. 1 month ago
    Anonymous

    I just used the AMD guide from the Stable Diffusion OP on BOT and it worked on my 6800XT. I can make a waifu image every 20 seconds I guess?

  19. 1 month ago
    Anonymous

    someone for the love of god make a gui

  20. 1 month ago
    Anonymous

    https://3d-diffusion.github.io/
    https://dreamfusion3d.github.io/
    https://nv-tlabs.github.io/GET3D/

    not good atm but something to keep an eye on this year.

  21. 1 month ago
    Anonymous

    I am attempting to follow the docker guide. It links to a page with docker images, and I can find an image with a version of rocm that supports my GPU (RX 580). However, the version of pytorch included in that docker image is too old and not compatible with the stable diffusion code. The instructions for getting a version of pytorch that supports old GPUs are for Arch, while the docker images with rocm are all ubuntu or centos.
    Are there any Arch docker images with rocm and/or pytorch preinstalled?
    Or, are there equivalents of rocm-arch and arch4edu that work with the ubuntu or centos docker images?
    (I want to use docker because I don't want to downgrade my kernel to be compatible with the old version of rocm)

I tried to hack something together by using the old version of pytorch anyway, and I had to change/remove some of the code that was calling methods that don't exist in that version. Some things just throw exceptions when I try them, but a little of the functionality still works. Surely there's a better way to do this, though.

    • 1 month ago
      Anonymous

      GPU RX 580

      Anyone here who got RX 580 working with gui can share what they followed and their distro?

      RX580 info
      https://rentry.org/sdamd
      https://rentry.org/sd-nativeisekaitoo

      Some user's info here
      https://desuarchive.org/g/thread/89022796/#89024312

      https://desuarchive.org/g/thread/89022796/#89043310
      If you're on Arch, you basically just have to type a few commands thanks to arch4edu. No need to compile rocm/pytorch for hours now only to be met with another dependency you forgot to install.

      Also in that thread:
      somehow fucking got it to work. I don't know how.
      AND SO, THE RESULTS
      using Ubuntu 20.04, kernel 5.4, rocm 3.5.1-34, torch 1.12.1, RX580:
(--precision full --no-half)
      Euler a, CFG 7, size 512x512
      I'm generating at 12.89s/it

      A video about RX 580 and ROCm
      Install ROCm 3.0 and tensorflow on Ubuntu - AMD RX 580 GPU...

  22. 1 month ago
    Anonymous

Is there a one-click setup that's as good as cmdr2? Until then my 6900xt will be for gaming only and my 2080ti for sd/htpc/dlss shit

  23. 1 month ago
    Anonymous

    RadeonVIIchad reporting in. Automatic1111 works great for me as long as I use the custom ROCm pip repo when installing dependencies. The only annoyance is that it does crap out after a lot of sustained proompting, requiring a full reboot. Archlinux btw.

  24. 1 month ago
    Anonymous

    This is too much work. Time to decide which 4080 to purchase.

  25. 1 month ago
    Anonymous

If you're using an AMD card you should lower your power limit. The junction temp on my 6800 was hovering at 110 degrees for like 3 days; my PC shuts off now if I even try to make a single pic without underclocking, so I coulda done some damage kek. I took the wattage down by 60 watts in corectrl; the clock speed only dropped by like 50-100 MHz, but the junction temp now doesn't go above 90 and I get no overheating.

    • 1 month ago
      Anonymous

      Same here, I've got heavy underclocking and undervolting on my 5700 XT after I saw junctions reach >100C and crashing.
      Peak is set at 1700 MHz GPU and 825 MHz for GPU memory. Peak power measured at 100W when under max GPU load.
      Junction temps are now around 80C max under a few minutes of load but stays around 70C. Haven't tried 1 hour load with it.

      For a 512x512 Euler A render at 50 steps, no CLIP skip, single batch of 1 image, I get a time of 22s at 2.35 it/s. SD v1.4
      Arch on a native install with KDE
      Unfortunately, I need to pass --precision full --no-half

    • 1 month ago
      Anonymous

      Ok looks like undervolting is highly necessary for continued use...

*** WARNING ***
Stable Diffusion addiction can seriously overload your GPU; it is good advice to undervolt it, probably by reducing power by around 60 watts. (Let us know what impact that has on your system)

      AMD GPU UNDERVOLTING

      How to Overclock and Undervolt an AMD GPU (2021) (RX 6800 XT as d...

      How to UNDERVOLT AMD RX 6000 Series GPUs

      The Ultimate GPU Undervolting Guide - Navi, Turing, Vega + More...

      How To Undervolt AMD RX 570 and RX 580 (How to lower GPU temp upt...

      Another method here
      https://desuarchive.org/g/thread/89022796/#q89039536
      https://linuxreviews.org/HOWTO_undervolt_the_AMD_RX_4XX_and_RX_5XX_GPUs#The_Quick_And_Easy_Way_To_Manually_.22Undervolt.22_AMD_GPUs
      https://unix.stackexchange.com/questions/620072/reduce-amd-gpu-wattage

      Also someone tried using wattmanGTK

      ------------------------------------
      Corectrl

      Also see in this thread using corectrl
      https://gitlab.com/corectrl/corectrl

      CoreCTRL - A quick look

      Linux Gaming: This App Does What AMD Won't

      https://i.imgur.com/KzXTbO7.png

      I used CoreCtrl. You need to add this to your grub config first and then update grub: amdgpu.ppfeaturemask=0xffffffff

      These are the settings I'm using. I just lowered the power limit until it started significantly affecting clock speed, then stopped. It can be done the same way as the native isekai guide on newer cards, because they don't have set clock profiles anymore; the card decides it all itself based on the power limit, afaik. You also have to open CoreCtrl and enable the profile at every boot; I'm not sure if there's a way to have it auto-apply or not.

      Same here, I've got heavy underclocking and undervolting on my 5700 XT after I saw junctions reach >100C and crashing.
      Peak is set at 1700 MHz GPU and 825 MHz for GPU memory. Peak power measured at 100W when under max GPU load.
      Junction temps are now around 80C max under a few minutes of load but stays around 70C. Haven't tried 1 hour load with it.

      For a 512x512 Euler A render at 50 steps, no CLIP skip, single batch of 1 image, I get a time of 22s at 2.35 it/s. SD v1.4
      Arch on a native install with KDE
      Unfortunately, I need to pass --precision full --no-half

      Thanks anons for trying out the different methods. Let us know what temp change / clock cycle Hz reduction occurred for what wattage reduction.

      As this seems very necessary for heavy proompting, what are the best programs / techniques / commands you know of?
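      The grub edit CoreCtrl needs can be sketched like this; it's demonstrated on a scratch copy so nothing on the real system is touched (the existing GRUB_CMDLINE contents here are placeholders):

```shell
# Sketch of the kernel-parameter tweak CoreCtrl needs, demonstrated on a
# scratch copy. On a real system edit /etc/default/grub the same way, then
# regenerate the config: sudo grub-mkconfig -o /boot/grub/grub.cfg
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.example
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&amdgpu.ppfeaturemask=0xffffffff /' /tmp/grub.example
cat /tmp/grub.example
```

      After a reboot the full power-limit controls should show up in CoreCtrl.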

    • 1 month ago
      Anonymous

      what dogshit card do you have that you're hitting 110c

  26. 1 month ago
    Anonymous

    Anyone here who got RX 580 working with gui can share what they followed and their distro?

    • 1 month ago
      Anonymous

      check the previous threads and this guide
      https://rentry.org/sd-nativeisekaitoo

      the only one I've gotten to work was the onnx version using huggingface, but I don't think you can change the model, and it's nowhere near as nice as having a UI

      try a USB Linux distro, or just use VirtualBox or something else.

      Some more USB / Dual boot Linux videos
      Artix Linux - does anyone notice a performance increase or encounter any problems on this lightweight distro?

      Running/installing FULL Linux operating system on a USB (Artix or...

      Artix Linux - Installation and Review

      Dual Boot Windows and Artix Linux | Artix Linux Install | Dual Bo...

      Artix Installation Guide for VM Virtual Box | UEFI boot installation

      GPU Pass-through On Linux/Virt-Manager

      Ditch Virtualbox, Get QEMU/Virt Manager

      Ventoy - An Easy to Use MultiBoot USB Tool.

      maybe Ubuntu 20.04, or Manjaro, Arch, or Artix if you don't want systemd.
      It is worth trying; you will see performance improvements.
      ctrl+F 580
      It's slow but it works, and you will probably need to throttle your power/temps too. Worth trying. Tell us how you get on, or if you run into problems, tell us where you got stuck and on which guide / step.

      If you're using an AMD card you should lower your power limit. The junction temp on my 6800 was hovering at 110 degrees for like 3 days, and my PC now shuts off if I even try to make a single pic without underclocking, so I coulda done some damage kek. I took the wattage down by 60 watts in CoreCtrl; the clock speed only dropped by like 50-100 MHz, but the junction temp now doesn't go above 90 and I get no overheating.

      Good advice, thanks, people have been using different things to throttle voltage, what did you use or what would you think is good to use?

      This is too much work. Time to decide which 4080 to purchase.

      Be careful if you unplug it. Do your research before you buy any of the 4000 series.

      • 1 month ago
        Anonymous

        I have a 2nd M.2 with Manjaro installed on it, but following the regular guide didn't work, so I'm trying Docker on Windows

        • 1 month ago
          Anonymous

          Tell us the steps you took, where you got stuck, and what commands / procedures you tried out. As said elsewhere, Windows plus Docker can be slow; it is perhaps better to try a USB Linux distro. It will do no harm learning multiple methods, however. This is just the beginning.

          the only one I've gotten to work was the onnx version using huggingface, but I don't think you can change the model, and it's nowhere near as nice as having a UI

          i messed up the previous answer, ctrl+f 580 was meant for you. 580 is slow but it does work, lots of info for you in these threads if you search:
          https://desuarchive.org/g/thread/89022796/
          https://desuarchive.org/g/thread/89069132/

          As always give steps on what you try out.

          ===============================
          FOR ALL AMD USERS
          ===============================
          the important components are

          https://desuarchive.org/g/thread/89069132/#q89069802
          links to best posts of 1st thread

          Helpful Information to include in your post:
          Have you had Docker Issues? (y/n)
          Pytorch Version
          Have you had packaging dependency issues? (y/n)
          What GPU are you using?
          What linux distros have you tried? What issues arose or went smoothly for you?
          How many it/s? (or s/it on older cards)

          Docker, Pytorch (version, complete uninstall), packaging dependencies, distro
          HSA_OVERRIDE_GFX_VERSION=10.3.0
          and maybe
          --precision=full --no-half

          the more users comment on these things, the more we can pinpoint the common causes of errors.
          hope you succeed in getting it to work; let us know if you don't, and be specific about the steps you took. good luck!
          ==============================
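          For anyone unsure how the HSA_OVERRIDE_GFX_VERSION variable from the checklist is actually applied: it just has to be set in the environment of the process that launches SD. A minimal sketch (the webui path and flags are assumptions pulled from posts in this thread):

```shell
# Setting the variable inline applies it only to that one launch; the
# echo line just demonstrates that the child process sees it.
HSA_OVERRIDE_GFX_VERSION=10.3.0 sh -c 'echo "override seen: $HSA_OVERRIDE_GFX_VERSION"'

# A real launch would look something like:
#   cd ~/stable-diffusion-webui
#   HSA_OVERRIDE_GFX_VERSION=10.3.0 ./webui.sh --precision full --no-half
```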

          6900XT, Docker on NixOS.

          I just followed the docker guide for AMD, then the normal Nvidia guide inside Docker. Ran a few prompts with face correction and upscaling to make it download those models, then closed SD and took a snapshot of the docker. I run SD by launching my docker with a script and the path to webui.py, to close SD I just hit ctrl-c and the docker instance automatically closes along with it. For updates I just launch the docker without the webui.py line and do a git pull, then save a snapshot. That way I can easily revert if something breaks in an update.
          My output folder is on my NAS, I have docker set up so it mounts and saves to that automatically.

          great detailed write up, thank you
          the more details, the better

          • 1 month ago
            Anonymous

            rx 5700xt and manjaro
            I followed this guide https://rentry.org/sd-nativeisekaitoo
             all the way to launching webui.sh, and this is where it says it won't support my GPU; I've been stuck here for 2 hours

            • 1 month ago
              Anonymous

              read through this

              give more details
              RX 5700 XT - working.

              https://desuarchive.org/g/thread/89022796/
              ctrl+f 5700

               ...One thing to note: on first run there is some complaint about a hip sqlite file. I have not found a way to fix this and have tried installing the hip packages in the arch repos. I don't think it affects much, but the first time you make an image after starting the webui it will take a minute or two; after that it works at normal speed

              Remove all versions of pytorch installed on your system

               Git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui (note: the /issues/1122 link that gets passed around is an issue page, not a clonable repo)

              edit the webui-user.sh

              Add this to the command line args, it will fix an issue where grey pictures are produced

              --precision=full --no-half

               Add these to the bottom of the file. The first line is so it downloads the ROCm build of pytorch; the second fixes an issue where it will say no CUDA-capable GPU was found when running

              export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1";
              export HSA_OVERRIDE_GFX_VERSION=10.3.0

              Start the webui.sh

              https://www.reddit.com/r/StableDiffusion/comments/ww436j/comment/imp7bx2/

              I copied in the optimizedSD folder from this repo into my stable-diffusion folder, opened optimizedSD/v1-inference.yaml and deleted the 5 optimizedSD. prefixes.

              Then, when running the model with any command, I apply the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.0 before the command.

              As a bonus, I ran pip install gradio and now just use the command HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 optimizedSD/txt2img_gradio.py and open the URL to the gradio server.

              Full precision (via CLI args or the checkbox in gradio) is required or it only generates grey outputs.

              You edit the file "scripts/relauncher.py" and on the line that says 'os.system("python scripts/webui.py")' make it 'os.system("python scripts/webui.py --optimized --precision=full --no-half")'

              Then start with "HSA_OVERRIDE_GFX_VERSION=10.3.0 python scripts/relauncher.py"

              did you try this part?
              HSA_OVERRIDE_GFX_VERSION=10.3.0
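               Collected from the steps above, the webui-user.sh additions look roughly like this (the wheel index and override value are taken from the post; verify them against your own card and ROCm version):

```shell
# Additions to webui-user.sh as described in the post above.
# --precision full --no-half works around grey-image output on these cards.
export COMMANDLINE_ARGS="--precision full --no-half"
# Pull the ROCm build of pytorch instead of the CUDA one:
export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1"
# Work around "no CUDA-capable GPU found" on RDNA1/2 cards:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```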

              • 1 month ago
                Anonymous

                I'm

                6900XT, Docker on NixOS.

                I just followed the docker guide for AMD, then the normal Nvidia guide inside Docker. Ran a few prompts with face correction and upscaling to make it download those models, then closed SD and took a snapshot of the docker. I run SD by launching my docker with a script and the path to webui.py, to close SD I just hit ctrl-c and the docker instance automatically closes along with it. For updates I just launch the docker without the webui.py line and do a git pull, then save a snapshot. That way I can easily revert if something breaks in an update.
                My output folder is on my NAS, I have docker set up so it mounts and saves to that automatically.

                , I just tested with and without HSA_OVERRIDE.
                Averaged 10.14it/s with, 10.13it/s without. 512x512, sd1.4 model.

              • 1 month ago
                Anonymous

                thanks for testing that out, very useful to know. The more people who do A/B testing like this and report back, the better. There are so many variables that implementing small changes and reporting on their impact makes a HUGE difference, as it is hard to replicate everyone's systems. Run your own experiments and report back the findings. We will grow a good knowledge base that way.

                I'm trying docker but when I try to launch SD I get this error. Sounds like I need "lark", whatever that is.

                0-16T00:58:10.078921053+02:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e08a8b8642a0e07c8f9a32e88402bb5bdb0f9ab4c8efb5df54fa3d294e95106d pid=1337775 runtime=io.containerd.runc.v2
                Warning: k_diffusion not found at path /app/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py
                Traceback (most recent call last):
                File "/app/stable-diffusion-webui/webui.py", line 12, in <module>
                from modules import devices, sd_samplers
                File "/app/stable-diffusion-webui/modules/sd_samplers.py", line 10, in <module>
                from modules import prompt_parser, devices, processing
                File "/app/stable-diffusion-webui/modules/prompt_parser.py", line 4, in <module>
                import lark
                ModuleNotFoundError: No module named 'lark'

                not heard of that before
                https://desuarchive.org/g/search/text/ModuleNotFoundError/

                have a look through the archives, other users have had similar modulenotfounderror, just not lark. maybe their issues will help.
                possibly xformers related

                ===========================
                TROUBLESHOOTING

                AS A GENERAL RULE YOU CAN SEARCH /G ARCHIVES FOR YOUR ERROR MESSAGES.
                PUT IN PART OF THE ERROR MESSAGE

                https://desuarchive.org/g/search/text/ERROR MESSAGE HERE/
                ===========================

              • 1 month ago
                Anonymous

                >thanks for testing that out, very useful to know. The more people who do A/B testing like this and report back, the better. There are so many variables that implementing small changes and reporting on their impact makes a HUGE difference, as it is hard to replicate everyone's systems. Run your own experiments and report back the findings. We will grow a good knowledge base that way.

                Yeah, this is just the start, the code hasn't been optimized in any way yet. render times could rapidly drop. All your information for all the AMD cards is vital.

      • 1 month ago
        Anonymous

        I used CoreCtrl. You need to add this to your grub config first and then update grub: amdgpu.ppfeaturemask=0xffffffff

        These are the settings I'm using. I just lowered the power limit until it started significantly affecting clock speed, then stopped. It can be done the same way as the native isekai guide on newer cards, because they don't have set clock profiles anymore; the card decides it all itself based on the power limit, afaik. You also have to open CoreCtrl and enable the profile at every boot; I'm not sure if there's a way to have it auto-apply or not.

    • 1 month ago
      Anonymous

      I followed the nativeisekaitoo guide but I also had to change the --extra-index-url https://download.pytorch.org/whl/rocm5.1.1 to https://download.pytorch.org/whl/rocm3.5.1 since for some dumb reason they dropped support for gfx803 (RX 470, 480, 580, etc.) in new rocm. Also rebuilt my kernel with CONFIG_HSA_AMD=y since I am running a mainline kernel, but that's probably not necessary if you're using the distro kernel.
      I'm using Manjaro and RX 470 and I assume it's working unless it fell back to CPU. Getting 11s/iter 500x500 Euler A.
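      For other gfx803 owners, the swap described above amounts to changing one line in webui-user.sh (version string taken from the post, not independently verified):

```shell
# gfx803 (RX 470/480/580): point TORCH_COMMAND at the older ROCm wheel
# index, since newer ROCm builds dropped gfx803 support.
export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm3.5.1"
```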

  27. 1 month ago
    Anonymous

    any tips on knowing what distribution to use? I have been seeing posts talking about Manjaro, Arch, and certain versions of Ubuntu, and I don't have any idea what would be my best option. In the guide https://rentry.org/sd-nativeisekaitoo, for instance, it doesn't explicitly state anything, but there is a point where it talks about installing pytorch and mentions arch4edu, which makes me wonder if this guide is targeted at Arch users.

    Yesterday I installed Ubuntu 22.04.1 LTS onto a spare hard drive and was successful in getting that started, but when I attempted to follow https://rentry.org/sdamd I was getting errors leading me to the section that recommends building pytorch from source. That guide was a challenge up to that point, since it skipped over some steps and seemed to assume the user already knows what they are doing, and it only got worse from there, so I called it a night.

    Right now I am strongly considering waiting for a true retard guide for someone who has never touched Linux before in their life and is basically doing a fresh install. I am using a 5700xt, so if there is any guidance specific to that please do share.

    • 1 month ago
      Anonymous

      give more details
      RX 5700 XT - working.

      https://desuarchive.org/g/thread/89022796/
      ctrl+f 5700

      ...One thing to note: on first run there is some complaint about a hip sqlite file. I have not found a way to fix this and have tried installing the hip packages in the arch repos. I don't think it affects much, but the first time you make an image after starting the webui it will take a minute or two; after that it works at normal speed

      Remove all versions of pytorch installed on your system

      Git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui (note: the /issues/1122 link that gets passed around is an issue page, not a clonable repo)

      edit the webui-user.sh

      Add this to the command line args, it will fix an issue where grey pictures are produced

      --precision=full --no-half

      Add these to the bottom of the file. The first line is so it downloads the ROCm build of pytorch; the second fixes an issue where it will say no CUDA-capable GPU was found when running

      export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1";
      export HSA_OVERRIDE_GFX_VERSION=10.3.0

      Start the webui.sh

      https://www.reddit.com/r/StableDiffusion/comments/ww436j/comment/imp7bx2/

      I copied in the optimizedSD folder from this repo into my stable-diffusion folder, opened optimizedSD/v1-inference.yaml and deleted the 5 optimizedSD. prefixes.

      Then, when running the model with any command, I apply the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.0 before the command.

      As a bonus, I ran pip install gradio and now just use the command HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 optimizedSD/txt2img_gradio.py and open the URL to the gradio server.

      Full precision (via CLI args or the checkbox in gradio) is required or it only generates grey outputs.

      You edit the file "scripts/relauncher.py" and on the line that says 'os.system("python scripts/webui.py")' make it 'os.system("python scripts/webui.py --optimized --precision=full --no-half")'

      Then start with "HSA_OVERRIDE_GFX_VERSION=10.3.0 python scripts/relauncher.py"

      • 1 month ago
        Anonymous

        it says repo not found (different anon)

  28. 1 month ago
    Anonymous

    6900XT, Docker on NixOS.

    I just followed the docker guide for AMD, then the normal Nvidia guide inside Docker. Ran a few prompts with face correction and upscaling to make it download those models, then closed SD and took a snapshot of the docker. I run SD by launching my docker with a script and the path to webui.py, to close SD I just hit ctrl-c and the docker instance automatically closes along with it. For updates I just launch the docker without the webui.py line and do a git pull, then save a snapshot. That way I can easily revert if something breaks in an update.
    My output folder is on my NAS, I have docker set up so it mounts and saves to that automatically.

  29. 1 month ago
    Anonymous

    >AMD CAN DO IT TOO!
    >FIRST, OPEN DOCKER

    PASSSSSSSSSSSS

    • 1 month ago
      Anonymous

      docker isn't necessary
      >https://rentry.org/sd-nativeisekaitoo
      native install without docker on Linux, you can use Linux on a USB too bypassing windows.

  30. 1 month ago
    Anonymous

    These videos might help people
    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

    Videos

  31. 1 month ago
    Anonymous

    I'm trying docker but when I try to launch SD I get this error. Sounds like I need "lark", whatever that is.

    0-16T00:58:10.078921053+02:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e08a8b8642a0e07c8f9a32e88402bb5bdb0f9ab4c8efb5df54fa3d294e95106d pid=1337775 runtime=io.containerd.runc.v2
    Warning: k_diffusion not found at path /app/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py
    Traceback (most recent call last):
    File "/app/stable-diffusion-webui/webui.py", line 12, in <module>
    from modules import devices, sd_samplers
    File "/app/stable-diffusion-webui/modules/sd_samplers.py", line 10, in <module>
    from modules import prompt_parser, devices, processing
    File "/app/stable-diffusion-webui/modules/prompt_parser.py", line 4, in <module>
    import lark
    ModuleNotFoundError: No module named 'lark'

  32. 1 month ago
    Anonymous

    does anyone know if --xformers is available for amd or are we fucked....

    • 1 month ago
      Anonymous

      you mean the facebook library?
      yea it should do, just install the hip version of pytorch

  33. 1 month ago
    Anonymous

    how do I get arch working without wanting to blow my brains out. I already waded through a few hours of bullshit and have my partitions set up, locale stuff dealt with, even made a non root user. Now I just want to give the non root user the ability to use sudo and actually have a desktop environment but even when I do pacman -S sudo I don't get shit. All I have been doing is reading Google for half my day to try and fix these kinds of issues and it is frustrating.

    • 1 month ago
      Anonymous

      add the user to /etc/sudoers (edit it with visudo)
      the line is: <username> ALL=(ALL:ALL) ALL
      https://wiki.archlinux.org/title/sudo
      add NOPASSWD: ALL if you don't want a password prompt
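      The rule sketched against a scratch file so nothing real is touched; on an actual system put it in a file under /etc/sudoers.d/ and edit with visudo, which syntax-checks before saving ("alice" is a placeholder username):

```shell
# Demonstrate the sudoers rule format on a scratch file.
echo 'alice ALL=(ALL:ALL) ALL' > /tmp/sudoers.example
# NOPASSWD variant if you don't want to be prompted for a password:
echo 'alice ALL=(ALL:ALL) NOPASSWD: ALL' >> /tmp/sudoers.example
cat /tmp/sudoers.example
```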

      • 1 month ago
        Anonymous

        Like I said, I didn't even have sudo installed and when I tried pacman -S sudo I was getting could not resolve host errors. I did manage to end up fixing this, but I had to hop back onto the flash drive I booted from. I used iwd to get into my wifi and after that used mount /dev/sda2 /mnt followed by mounting sda1 where my efi was or whatever it is called onto /mnt/boot. After this I used arch-root /mnt and was successfully able to install sudo. I would like to be able to simply get the wifi connected without needing to go back to the flash drive but for now that is my best answer. I dont know what packages I use so I can have wifi working in a non-retarded way

        • 1 month ago
          Anonymous

          oh
          wifi problems

          i don't know what to say other than openSUSE tumbleweed doesn't have this problem, it worked perfectly fine with my wifi card out of the box, and is a more stable rolling release distro in general
          big plus is that rocm is actually packaged for openSUSE Leap, so installing it on tumbleweed only requires unfucking a couple package deps

          • 1 month ago
            Anonymous

            For arch I did the wifi workaround of going back in through the installation media on the flashdrive again. I guess according to google there are drivers in there so it can connect to the internet for that part at least? Anyways I ended up doing pacman -S networkmanager, and then another piece for actually enabling it, aka systemctl enable NetworkManager. After that I rebooted, hopped into my user account, and was able to use nmcli device wifi connect INSERT_WIFI_NAME password INSERT_PASSWORD. It seems like it did indeed work so hopefully that solves my problems there, otherwise I'll just go chug some bleach and gouge my eyes out

            • 1 month ago
              Anonymous

              well i can see why someone would find arch cool. they can have way more control over what they are doing. only problem is when some retard like me bumbles in trying to use god damned stable diffusion and has only even gotten to the point of having a working desktop. at this point, I think I am better off than before though. If anyone has guidance from start to finish on a fresh install of stable diffusion starting from a fresh install of arch i am all ears.

        • 1 month ago
          Anonymous

          *arch-chroot /mnt

  34. 1 month ago
    Anonymous

    I've been looking every-fucking-where but I cannot find how the fuck to initialise conda, what the fuck is a pacman? I'm using docker for this.

    • 1 month ago
      Anonymous

      See if any of this helps you. This is all new for many people, many components people have no experience with etc. Let us know what was holding you up if you get anywhere.

      ============================
      Conda / Anaconda and the Python Environment

      Master the basics of Conda environments in Python

      Conda Tutorial (Python) p.1: Package and Environment Manager

      Conda Tutorial (Python) p.2: Commands for Managing Environments

      Anaconda Beginners Guide for Linux and Windows - Python Working Environment

      https://docs.conda.io/projects/conda/en/latest/dev-guide/deep-dive-activation.html

      ===============================
      Pacman

      Using Pacman on Arch Linux: Everything you need to know...

      Linux Crash Course - The Pacman Command

      =============================
      Docker
      How to Install Docker + Docker Desktop on Arch Linux/Arch based D...

      ==========================

      Also many people find docker difficult, know that you can do a native installation too, many people have had good success doing that. It is probably good to understand both install techniques.

      You can also use a usb stick to run a different linux distro on to see if that helps at all.

      Some more USB / Dual boot Linux videos
      Artix Linux - does anyone notice a performance increase or encounter any problems on this lightweight distro?

      Running/installing FULL Linux operating system on a USB (Artix or...

      Artix Linux - Installation and Review

      Dual Boot Windows and Artix Linux | Artix Linux Install | Dual Bo...

      Artix Installation Guide for VM Virtual Box | UEFI boot installation
      https://www.youtube.com/watch?v=b9Uv9fNSIzw

      GPU Pass-through On Linux/Virt-Manager
      https://www.youtube.com/watch?v=KVDUs019IB8

      Ditch Virtualbox, Get QEMU/Virt Manager
      https://www.youtube.com/watch?v=wxxP39cNJOs

      Ventoy - An Easy to Use MultiBoot USB Tool.
      https://www.youtube.com/watch?v=K64sT0pQc-0

      We'll get you there, keep posting updates and problems you run into or anything that's been confusing for you.

      • 1 month ago
        Anonymous

        While I do appreciate your help, it's not really of use. I got as far as having a Docker with pytorch, but it just fails after that. Doing anything after initialising conda just doesn't work at all on Docker. How would I even continue after?

        • 1 month ago
          Anonymous

          np, you need to be really specific about what your setup is, i.e. where you got your docker image from (link it). What distro and version are you using? What version of pytorch? What version of python? Do you have any other versions of python / pytorch installed? Did you EVER have previous versions of python / pytorch installed? Have you tried COMPLETELY UNINSTALLING python / pytorch before reinstalling the correct version? **Is conda installing pytorch?** (see below) How old is your system roughly, and what GPU specifically?
          =====================
          There are several posts with docker info in the first thread
          https://desuarchive.org/g/thread/89022796/
          ctrl+f docker

          ***this post in particular, but others as well***
          https://desuarchive.org/g/thread/89022796/#q89051970

          the docker guide needs a complete rewrite as it stands and as everyone is using different versions of everything it is hard to know a common solution just yet. give us more information as you proceed.
          ===========================
          TROUBLESHOOTING

          AS A GENERAL RULE YOU CAN SEARCH /G ARCHIVES FOR YOUR ERROR MESSAGES.
          PUT IN PART OF THE ERROR MESSAGE

          https://desuarchive.org/g/search/text/ERROR MESSAGE HERE/
          ===========================
          so for example you can search rocm pytorch
          https://desuarchive.org/g/search/text/rocm%20pytorch/

          or rocm conda
          https://desuarchive.org/g/search/text/rocm%20conda/

          Maybe this is of use...

          So the basic steps seem to be:
          1) pick a distro
          2) set up ROCm and correct PyTorch version for ROCm (Don't install pytorch by yourself. Conda will install it, and you install the correct version for your card on top)
          >pip3 install --upgrade http://
          3) set up conda environment
          4) set up vanilla SD within Conda
          5) replicate .bat file instructions inside conda environment
          Check the list of packages you'll need for rocm, but I installed one that just had them all as dependencies so w/e. Also, mind your storage. It's taking me like 20GB from the root partition.
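          Those five steps, written out as a command sketch (package names, env name, and wheel index are all assumptions; adjust per distro and card, and treat it as a map rather than copy-paste instructions):

```shell
# Commented sketch only -- adapt before running.
# 1) pick a distro (Arch, Manjaro, Ubuntu and openSUSE all appear in this thread)
# 2) ROCm + the ROCm build of pytorch:
#      sudo pacman -S rocm-hip-sdk                  # Arch-style package name
#      pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1
# 3) conda environment:
#      conda create -n sd python=3.10 && conda activate sd
# 4) vanilla SD inside the env:
#      git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
# 5) replicate the launch flags:
#      cd stable-diffusion-webui && ./webui.sh --precision full --no-half
```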

          • 1 month ago
            Anonymous

            >where you got your docker image from
            hub.docker.com/r/rocm/pytorch/tags
            latest version

            >What distro and version you are using?
            Windows 10 🙁

            >What version of pytorch you are using?
            I have no fucking clue, I installed the one for CPU first which worked fine? I keep seeing links that route to rocm versions of pytorch that I keep installing, such as 'download.pytorch.org/whl/rocm5.1.1'.

            >What version of python are you using?
            Latest, 3.10.6

            >Do you have any other versions of python / pytorch installed?
            Pytorch, yes. Python, I don't think I do.

            >Did you EVER have previous versions of python / pytorch installed.
            Well, yes.

            >Have you tried COMPLETELY UNISTALLING python / pytorch before reinstalling the correct version.
            I can try to remove it, but which version should I download then? I doubt any.

            • 1 month ago
              Anonymous

              I searched for windows pytorch to see what problems other users have run into...
              https://desuarchive.org/g/search/text/windows%20pytorch/
              --------
              So, PyTorch compiles multiple releases depending on what GPU you have, and different instructions depending on your OS and package manager.
              https://pytorch.org/get-started/locally/

              It's not as simple as "pip install torch torchvision". On Linux, that will install the GPU version automatically.
              On Windows, that will install the CPU version automatically.
              ---------
              read this post
              https://desuarchive.org/g/thread/88749514/#q88751277

              I thought that rather than printing out long lists of commands here that may or may not work, you should try reading through the steps others have gone through, searching the archives for error codes or your system setup: pytorch, windows, amd, conda, docker, onnx. It might take a lot of searching, but you will get more of a feel for what is causing the problem. You also never mentioned your GPU; it might not be important, but then again it might be critical for software versions and GPU compatibility.

              have a read through what others have been through. If you get no further come back later, someone who has setup windows rocm on amd might know what is necessary in your situation with windows 10 conda pytorch rocm setup.
              -------------
              Another alternative if you get nowhere in the meantime is a Linux USB install, just so you know there are options (which may also run quicker).
              Everyone will be quick to say Linux is better than Windows for this, but we should be able to get it running on both systems. So hang in there anon, there are options, and someone who has been through this may know what you need. Read up in the meantime so you can ask more specific questions; some dev may be able to help.

              • 1 month ago
                Anonymous

                Currently posting from a different PC, but I'm just trying to install Linux at this point. Windows can go fuck itself with all this bullshit at this point. Now I'm just in partition hell.

              • 1 month ago
                Anonymous

                we'll get you there anon, it might just take time and a bit of reading around the components involved so you can ask more specific questions that someone can give a direct solution to. Read/search the archives, read these past threads, watch videos.

                https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

                Videos

                good luck and maybe someone will post about windows and docker / conda / pytorch / rocm for amd gpu later.

  35. 1 month ago
    Anonymous

    Why would anyone buy AMD in the age of Machine Learning?

    • 1 month ago
      Anonymous

      >Why would anyone buy AMD in the age of Machine Learning?
      me personally, because I use hackintosh and only amd is supported there...

    • 1 month ago
      Anonymous

      I'm a poorgay.
