🔥 Stable Diffusion Linux Installation Guide 2026: A1111, ComfyUI & WebUI
Step-by-step guide to install Stable Diffusion on Linux in 2026. Covers Automatic1111 WebUI, ComfyUI setup, Python 3.10.6 requirements, VRAM flags (--lowvram, --medvram), AMD ROCm support, and fixes for the conv3d memory bug & ComfyUI-Manager errors.
- Python 3.10.6: required for A1111 & ComfyUI in 2026
- --medvram / --lowvram: essential VRAM flags for 4-12GB GPUs
- --opt-sdp-attention: replaces xformers on newer Linux kernels
- AMD ROCm support: RX 6000/7000 series with ROCm 6.2+
Not Using Linux?
We have optimized guides for other platforms:
Best Way to Install Stable Diffusion Locally in 2026

| UI/Framework | Best For | Difficulty | VRAM Flags | Python Version |
|---|---|---|---|---|
| Automatic1111 WebUI | Beginners, Most Features, Extensions | Easy-Medium | --medvram, --lowvram | 3.10.6 |
| ComfyUI | Advanced Workflows, Best Performance, Flux/SD3.5 | Medium-Hard | --lowvram, --highvram | 3.10.6 |
| Forge | Performance, SD 3.5, Flux.1 Support | Medium | --medvram, --lowvram | 3.10.6 |
| SwarmUI | Latest Features, Multi-Backend | Hard | Varies by backend | 3.10+ |
Python Version Requirements 2026
Critical: Based on 2026 search data, Python version mismatch is the #1 cause of installation failures.
Ubuntu / Debian:

```bash
# Add the deadsnakes PPA
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update

# Install Python 3.10 and virtual-environment support
sudo apt install python3.10 python3.10-venv python3.10-dev

# Create and activate a virtual environment
python3.10 -m venv sd-env
source sd-env/bin/activate
```
Arch Linux:

```bash
# Install Python 3.10 (python310 is an AUR package on current Arch, so use an AUR helper)
yay -S python310

# Create and activate a virtual environment
python3.10 -m venv comfyui-env
source comfyui-env/bin/activate
```
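Since a wrong interpreter version is the most common failure, a quick pre-flight check inside the activated venv can save a long broken install. This is an illustrative sketch (the `check_sd_python` helper is not part of either UI):

```shell
# Illustrative helper: accept only the 3.10.x series that A1111/ComfyUI expect.
check_sd_python() {
  case "$1" in
    3.10|3.10.*) echo "ok" ;;
    *) echo "unsupported: $1 (need 3.10.x)" ;;
  esac
}

# Typical usage against the active interpreter:
#   check_sd_python "$(python --version 2>&1 | awk '{print $2}')"
check_sd_python "3.10.6"   # prints: ok
```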
⚡ VRAM Command Line Flags 2026: --lowvram, --medvram, --highvram
Top searched flags based on 2026 query data. Use the flag that matches your GPU's VRAM (note: --medvram is an A1111 flag; ComfyUI itself only offers --lowvram and --highvram):
- --lowvram: for 4-6GB VRAM (RTX 3050, 3060, older GPUs)
- --medvram: for 6-12GB VRAM (RTX 3060 Ti, 3070, 4060)
- --highvram: for 12GB+ VRAM (RTX 3080, 3090, 4090)
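The tiers above can be encoded in a small launch-script helper. A sketch (the `choose_vram_flag` function and the exact GB cut-offs are assumptions taken from the tiers listed here):

```shell
# Hypothetical helper mapping VRAM in GB to the flag tiers above.
choose_vram_flag() {
  local vram_gb=$1
  if   [ "$vram_gb" -le 6 ];  then echo "--lowvram"
  elif [ "$vram_gb" -lt 12 ]; then echo "--medvram"
  else                             echo "--highvram"
  fi
}

# e.g. detect VRAM with nvidia-smi, then launch with the matching flag:
#   python launch.py "$(choose_vram_flag 8)"
choose_vram_flag 8   # prints: --medvram
```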
π Key Flags Explained (2026 Update)
--opt-sdp-attentionβ Use for lower VRAM (4-12GB). Optimized attention.--use-sdp-attentionβ Use for 12GB+ VRAM. Faster but more memory.--disable-cuda-mallocβ Fix for “working around nvidia conv3d memory bug”.
🛠️ Fix: "Working around nvidia conv3d memory bug"
⚠️ Common Error (12+ searches in 7 days)
The message "working around nvidia conv3d memory bug" appears in the console and causes VRAM leaks and crashes.
✅ Immediate Workaround
✅ Permanent Fix: Update NVIDIA Drivers
✅ Alternative: Use FP16 Precision
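Taken together, the three fixes above look like this in practice (a sketch: the launch flags come from the flag list earlier in this guide, and the driver-update commands assume an Ubuntu-family system):

```bash
# Immediate workaround: disable CUDA's caching allocator when launching ComfyUI
python main.py --disable-cuda-malloc

# Permanent fix: update the NVIDIA driver (Ubuntu example), then reboot
sudo ubuntu-drivers autoinstall
sudo reboot

# Alternative: force FP16 precision to reduce memory pressure
python main.py --force-fp16
```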
🔧 Fix: "[comfyui-manager] installation failed: ignored"
⚠️ Common Manager Error
ComfyUI Manager fails to install or update custom nodes.
✅ Step 1: Manual Installation
✅ Step 2: Clear Cache & Permissions
✅ Step 3: Manual Node Installation (If Manager Still Fails)
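When the Manager can't repair itself, the steps above can be done by hand. A sketch, assuming ComfyUI lives at ~/ComfyUI (adjust the path to your install; ltdrdata/ComfyUI-Manager is the Manager's upstream GitHub repository):

```bash
# Step 1: (re)install ComfyUI-Manager manually
cd ~/ComfyUI/custom_nodes
rm -rf ComfyUI-Manager
git clone https://github.com/ltdrdata/ComfyUI-Manager.git

# Step 2: clear pip's cache and fix ownership if a sudo install left root-owned files
pip cache purge
sudo chown -R "$USER" ~/ComfyUI

# Step 3: if the Manager still fails for a given node, clone that node into
# custom_nodes the same way, then install its requirements inside your venv:
#   git clone <node-repo-url> && pip install -r <node-name>/requirements.txt
```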
📦 Fix: "stability-ai/stablediffusion repository not found"
⚠️ 404 Error: Repository Moved/Renamed
Old Stability AI repositories are being reorganized in 2026.
✅ Solution 1: Use Hugging Face Mirrors
✅ Solution 2: Use SSH Instead of HTTPS
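Both solutions as concrete commands (the Hugging Face repo ID below is one of Stability AI's official model repositories; substitute the model you actually need):

```bash
# Solution 1: pull model weights from Hugging Face instead of the old GitHub repo
git lfs install
git clone https://huggingface.co/stabilityai/stable-diffusion-2-1

# Solution 2: clone the code over SSH instead of HTTPS
git clone git@github.com:Stability-AI/stablediffusion.git
```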
🔥 AMD GPU Linux Support (ROCm 6.2+)
```bash
# Install ROCm
sudo apt update
sudo apt install rocm-hip-sdk rocm-opencl-sdk

# Add your user to the render and video groups
sudo usermod -aG render,video $USER

# Install PyTorch with ROCm
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2

# Launch ComfyUI (the ROCm PyTorch build is detected automatically; there is no --use-rocm flag)
python main.py --lowvram
```
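After installing, it's worth confirming that the ROCm build of PyTorch is actually active: `torch.version.hip` is only set in ROCm builds. The `HSA_OVERRIDE_GFX_VERSION` line is a widely used workaround for some RX 6000-series cards, shown commented out because it's only needed when the GPU isn't detected:

```bash
# Verify the ROCm PyTorch build and GPU visibility
python -c "import torch; print('HIP:', torch.version.hip); print('GPU:', torch.cuda.is_available())"

# Some RX 6000-series cards need this override before launching:
#   export HSA_OVERRIDE_GFX_VERSION=10.3.0
```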
Quick Diagnostic Commands

```bash
# 1. Check GPU
nvidia-smi   # or rocminfo for AMD

# 2. Check Python and PyTorch
python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA: {torch.cuda.is_available()}')"

# 3. Check VRAM
python -c "import torch; print(f'VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.2f} GB')"
```
Frequently Asked Questions (Based on 2026 Searches)
Q: What Python version for A1111 in 2026?
A: Python 3.10.6 is required. Python 3.11+ will fail with xformers and torch compatibility issues.
Q: How do I use the --medvram flag?
A: In A1111, add --medvram to COMMANDLINE_ARGS in webui-user.sh for 6-12GB GPUs. ComfyUI has no --medvram flag; its closest equivalent is launching with python main.py --lowvram.
Q: What's the difference between --opt-sdp-attention and --use-sdp-attention?
A: --opt-sdp-attention is optimized for lower VRAM (4-12GB). --use-sdp-attention is faster but uses more memory (12GB+).
Q: Why is Stability AI repository not found?
A: Old repositories have been reorganized. Use Hugging Face mirrors or SSH cloning instead of HTTPS.
Q: How to install Stable Diffusion on Linux in 2026?
A: Use this guide! Install Python 3.10.6, clone A1111 or ComfyUI, use --medvram if needed, and follow the steps above.
Running Stable Diffusion locally on Linux in 2026 is easier than ever, but Python mismatches, Torch errors, AMD GPU confusion, and low-VRAM limitations still trip people up. This practical, up-to-date (as of early 2026) guide covers AUTOMATIC1111 (A1111) and ComfyUI, with fixes that actually work today.