SD Manual Installation (conda version)

  1. Clone the project with git

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

  2. Change into the cloned directory

cd stable-diffusion-webui

  3. Create the conda environment for the project

# Create environment
conda create -n StableDiffusion python=3.10.6
# Activate environment
conda activate StableDiffusion

  4. List the environments to confirm creation succeeded

# Validate environment is selected
conda env list

  5. Launch the WebUI

webui-user.bat
# Wait for "Running on local URL:  http://127.0.0.1:7860" and open that URI.
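Launch options are passed through the `COMMANDLINE_ARGS` variable in `webui-user.bat`; the flags described in the appendix go here. A minimal sketch of that file (the particular flags chosen are an illustrative assumption, not a recommendation for every machine):

```bat
rem webui-user.bat — sketch; pick flags from the appendix to match your hardware
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --medvram

call webui.bat
```

Leaving `PYTHON`, `GIT`, and `VENV_DIR` empty uses the defaults; only `COMMANDLINE_ARGS` is edited here.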

  6. Screenshots

(Screenshots: the Stable Diffusion WebUI running in the browser)

  7. Appendix

| commandline argument | explanation |
| --- | --- |
| --opt-sdp-attention | May result in faster speeds than xFormers on some systems, but requires more VRAM. (non-deterministic) |
| --opt-sdp-no-mem-attention | May result in faster speeds than xFormers on some systems, but requires more VRAM. (deterministic, slightly slower than --opt-sdp-attention and uses more VRAM) |
| --xformers | Use the xFormers library. Great improvement to memory consumption and speed. Nvidia GPUs only. (non-deterministic) |
| --force-enable-xformers | Enables xFormers regardless of whether the program thinks you can run it or not. Do not report bugs you get running this. |
| --opt-split-attention | Cross-attention layer optimization that significantly reduces memory use for almost no cost (some report improved performance with it). Black magic. On by default for torch.cuda, which includes both NVidia and AMD cards. |
| --disable-opt-split-attention | Disables the optimization above. |
| --opt-sub-quad-attention | Sub-quadratic attention, a memory-efficient cross-attention optimization that can significantly reduce required memory, sometimes at a slight performance cost. Recommended if you get poor performance or failed generations on a hardware/software configuration where xFormers doesn't work. On macOS, this also allows generating larger images. |
| --opt-split-attention-v1 | Uses an older version of the optimization above that is less memory hungry (it will use less VRAM, but limits the maximum size of pictures you can make). |
| --medvram | Makes the Stable Diffusion model consume less VRAM by splitting it into three parts: cond (transforming text into a numerical representation), first_stage (converting a picture into latent space and back), and unet (the actual denoising of latent space), keeping only one part in VRAM at a time and sending the others to CPU RAM. Lowers performance, but only by a bit, unless live previews are enabled. |
| --lowvram | An even more thorough version of the above: splits unet into many modules, and only one module is kept in VRAM. Devastating for performance. |
| *do-not-batch-cond-uncond | Prevents batching of positive and negative prompts during sampling, which essentially lets you run at 0.5 batch size, saving a lot of memory. Decreases performance. Not a command line option, but an optimization implicitly enabled by using --medvram or --lowvram. |
| --always-batch-cond-uncond | Disables the optimization above. Only makes sense together with --medvram or --lowvram. |
| --opt-channelslast | Changes torch memory type for Stable Diffusion to channels-last. Effects not closely studied. |
| --upcast-sampling | For Nvidia and AMD cards normally forced to run with --no-half; should improve generation speed. |
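Choosing among these flags mostly comes down to GPU vendor and available VRAM. As a rough illustration (the `suggest_flags` helper and its thresholds are assumptions for this sketch, not part of the WebUI project), the decision could be scripted like this:

```python
# Hypothetical helper: pick COMMANDLINE_ARGS flags from available VRAM (GiB).
# Thresholds are illustrative assumptions, not official guidance.
def suggest_flags(vram_gib: float, nvidia: bool = True) -> str:
    flags = []
    if nvidia:
        flags.append("--xformers")  # Nvidia-only memory/speed optimization
    else:
        flags.append("--opt-sub-quad-attention")  # fallback where xFormers is unavailable
    if vram_gib < 4:
        flags.append("--lowvram")   # aggressive model splitting; very slow
    elif vram_gib < 8:
        flags.append("--medvram")   # moderate splitting; small slowdown
    return " ".join(flags)

print(suggest_flags(6))                 # → --xformers --medvram
print(suggest_flags(4, nvidia=False))   # → --opt-sub-quad-attention --medvram
```

The resulting string would be pasted into the `COMMANDLINE_ARGS` line of `webui-user.bat`.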
