A back-from-the-dead update


Hardware: AMD Ryzen 5 2500U with Radeon Vega Mobile Gfx
Increasing iGPU VRAM
WARNING: USE AT YOUR OWN RISK
Download UniversalAMDFormBrowser.zip (from DavidS95/Smokeless_UMAF)
How to use it
Extract it to a FAT32 USB drive and boot from the drive (boot-selection steps omitted). It will load a custom BIOS UI. Enter Device Manager and AMD PBS/CBS will be there; modify whatever you want, and when done, just press Esc until it asks you to save.
On an AMI BIOS, "Setup" is shown in addition to AMD PBS and CBS. That is the regular BIOS, and edits made there may not be saved (only those under AMD PBS/CBS and AOD Setup are).
Main menu -> Device Manager -> AMD CBS -> NBIO Common Options -> GFX Configuration (option locations may differ; for reference only)
(Select an entry with Enter, change the value with ↑/↓, then press Enter to confirm.)

- Integrated Graphics Controller: <Forces>
- UMA Mode: <UMA_AUTO>
- UMA Version: <Auto>
- Display Resolution: <3840x2160> (pick the largest)
UMA Mode actually has another setting that lets you specify the frame buffer size manually (via UMA Frame buffer Size), but changing it had no effect on my machine. Only the method above worked, and it raised the VRAM from 1 GB to 2 GB.
When you're done, press Esc to back out; at the AMD CBS menu level a save-confirmation dialog pops up, so press Y to save.
Press Ctrl+Alt+Del to reboot, then open Task Manager to confirm the change took effect.
Installing and Running
Install and Run on AMD GPUs · AUTOMATIC1111/stable-diffusion-webui Wiki (github.com)
Windows+AMD support has not officially been added to the webui,
but you can install lshqqytiger's fork of the webui, which uses DirectML.
- Training currently doesn't work, but a variety of features/extensions do, such as LoRAs and ControlNet. Report issues at https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues
Install Python 3.10.6 (ticking Add to PATH) and git
Paste this line into cmd/terminal:
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml && cd stable-diffusion-webui-directml && git submodule init && git submodule update
(You can move the program folder somewhere else.)
Double-click webui-user.bat
If it looks stuck while installing or running, press Enter in the terminal and it should continue.
Following the steps above, running webui-user.bat (with a working internet connection) installs all dependencies automatically and even downloads a model for you (v1-5-pruned-emaonly.safetensors), then launches the webui directly. We won't use that automatic launch, though, since it passes no arguments. Once the output shows a gradio server running at http://127.0.0.1:7860/ followed by the startup time, every dependency is installed and the model is loaded and running; just press Ctrl+C to terminate it.
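If you'd rather not watch the console to tell when the gradio server has come up, a small poll helper works too. A minimal sketch; the function name and defaults are my own, not part of the webui:

```python
import urllib.request

def webui_is_up(url="http://127.0.0.1:7860/", timeout=2):
    """Return True once the local gradio server answers HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
```

Call it in a loop while the first launch is still installing dependencies; it flips to True when the server is ready.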
If you have already downloaded v1-5-pruned-emaonly.safetensors, put it in models\Stable-diffusion before running webui-user.bat, then run the script to install (creating a symbolic link with mklink works too).
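The mklink variant might look like this; both paths are hypothetical examples, so substitute your own download and install locations:

```bat
:: run cmd as Administrator (file symlinks need elevation or Developer Mode)
:: mklink <Link> <Target>; paths below are hypothetical examples
mklink "E:\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors" "D:\Downloads\v1-5-pruned-emaonly.safetensors"
```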
As for choosing a model, an FP16-quantized one is preferable since it should use less VRAM (? needs verification).
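As a back-of-the-envelope illustration of why FP16 roughly halves the footprint: the log further down reports 859.52 M parameters for the diffusion model, so weight storage alone (ignoring activations, the VAE, and the text encoder) works out to about:

```python
# rough weight-storage estimate; ignores activations and other buffers
params = 859.52e6                  # "DiffusionWrapper has 859.52 M params." (webui log)
fp32_gib = params * 4 / 1024**3    # 4 bytes per float32 weight
fp16_gib = params * 2 / 1024**3    # 2 bytes per float16 weight
print(f"fp32: {fp32_gib:.2f} GiB, fp16: {fp16_gib:.2f} GiB")
# → fp32: 3.20 GiB, fp16: 1.60 GiB
```

On a 2 GB UMA frame buffer, that factor of two is the difference between the weights fitting and not fitting.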
Run webui.py with the extra argument --medvram:
E:\stable-diffusion-webui-directml>venv\Scripts\activate.bat
(venv) E:\stable-diffusion-webui-directml>python webui.py --medvram
Warning: experimental graphic memory optimization for AMDGPU is disabled. Because there is an unknown error.
Disabled experimental graphic memory optimizations.
Interrogations are fallen back to cpu. This doesn't affect on image generation. But if you want to use interrogate (CLIP or DeepBooru), check out this issue: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/10
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
Loading weights [d1facd9a2b] from E:\stable-diffusion-webui-directml\models\Stable-diffusion\anything-v3-fp16-pruned.safetensors
Creating model from config: E:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100%|██████████████████████████████████████████| 961k/961k [00:00<00:00, 1.74MB/s]
Downloading (…)olve/main/merges.txt: 100%|██████████████████████████████████████████| 525k/525k [00:00<00:00, 1.57MB/s]
Downloading (…)cial_tokens_map.json: 100%|█████████████████████████████████████████████| 389/389 [00:00<00:00, 194kB/s]
Downloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████| 905/905 [00:00<00:00, 450kB/s]
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████| 4.52k/4.52k [00:00<00:00, 2.26MB/s]
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(0):
Model loaded in 48.4s (load weights from disk: 0.4s, create model: 11.1s, apply weights to model: 23.6s, apply half(): 13.0s, load textual inversion embeddings: 0.1s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 59.9s (import torch: 3.9s, import gradio: 1.9s, import ldm: 0.9s, other imports: 2.0s, setup codeformer: 0.1s, load scripts: 1.4s, load SD checkpoint: 48.5s, create ui: 0.9s, gradio launch: 0.3s).
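Rather than activating the venv by hand each time as above, the --medvram flag can also be made persistent through webui-user.bat; `set COMMANDLINE_ARGS=` is the stock hook that file provides for this. A sketch, mirroring what I understand the default file to look like:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram

call webui.bat
```

Double-clicking webui-user.bat afterwards launches with the flag applied.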
Read more
Optimizations · AUTOMATIC1111/stable-diffusion-webui Wiki (github.com)
Troubleshooting · AUTOMATIC1111/stable-diffusion-webui Wiki (github.com)
Command Line Arguments and Settings · AUTOMATIC1111/stable-diffusion-webui Wiki (github.com)