Note that in MySQL 8, creating a database user and assigning privileges has changed: the single statement used previously, `grant all on DB名.* to wordpress@localhost identified by 'パスワード';`, is now split into two statements, `create user ...` and `grant ...`.
$ sudo mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.41 Source distribution
Copyright (c) 2000, 2025, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database DB名 character set utf8;
Query OK, 1 row affected, 1 warning (0.01 sec)
mysql> create user wordpress@localhost identified by 'パスワード';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all privileges on DB名.* to wordpress@localhost;
Query OK, 0 rows affected (0.00 sec)
mysql> quit
Bye
$
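To confirm the new account and its privileges took effect, the grants can be checked afterwards. A quick verification step (not part of the original session; `DB名` and `パスワード` are the same placeholders used above):

```shell
# Show the privileges held by the new WordPress account
sudo mysql -u root -e "SHOW GRANTS FOR 'wordpress'@'localhost';"

# Confirm the application account can log in and see its database
mysql -u wordpress -p'パスワード' -e "SHOW DATABASES;"
```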
Step 7: Web server configuration
Step 7-1: Install httpd
Install httpd.
On Oracle Linux 9.6 the available web servers are Apache (httpd) 2.4.62, nginx 1.20.1, and nginx 1.26.3; here we use Apache.
$ sudo dnf install httpd -y
Last metadata expiration check: 0:09:15 ago on Thu 14 Aug 2025 01:58:49 PM JST.
Dependencies resolved.
===========================================================================================================================
Package Architecture Version Repository Size
===========================================================================================================================
Installing:
httpd x86_64 2.4.62-4.0.1.el9 ol9_appstream 64 k
Installing dependencies:
apr x86_64 1.7.0-12.el9_3 ol9_appstream 131 k
apr-util x86_64 1.6.1-23.el9 ol9_appstream 99 k
apr-util-bdb x86_64 1.6.1-23.el9 ol9_appstream 12 k
httpd-core x86_64 2.4.62-4.0.1.el9 ol9_appstream 1.8 M
httpd-tools x86_64 2.4.62-4.0.1.el9 ol9_appstream 93 k
oracle-logos-httpd noarch 90.4-1.0.1.el9 ol9_baseos_latest 37 k
Installing weak dependencies:
apr-util-openssl x86_64 1.6.1-23.el9 ol9_appstream 14 k
mod_http2 x86_64 2.0.26-4.el9 ol9_appstream 171 k
mod_lua x86_64 2.4.62-4.0.1.el9 ol9_appstream 58 k
Transaction Summary
===========================================================================================================================
Install 10 Packages
Total download size: 2.4 M
Installed size: 6.1 M
<snip>
Installed:
apr-1.7.0-12.el9_3.x86_64 apr-util-1.6.1-23.el9.x86_64 apr-util-bdb-1.6.1-23.el9.x86_64
apr-util-openssl-1.6.1-23.el9.x86_64 httpd-2.4.62-4.0.1.el9.x86_64 httpd-core-2.4.62-4.0.1.el9.x86_64
httpd-tools-2.4.62-4.0.1.el9.x86_64 mod_http2-2.0.26-4.el9.x86_64 mod_lua-2.4.62-4.0.1.el9.x86_64
oracle-logos-httpd-90.4-1.0.1.el9.noarch
Complete!
$
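The transcript above only installs the packages; before the server is reachable, httpd has to be enabled and the firewall opened. A minimal sketch, assuming firewalld is the active firewall (the default on Oracle Linux 9):

```shell
# Start httpd now and on every boot
sudo systemctl enable --now httpd

# Open HTTP/HTTPS in firewalld, then apply
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```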
Step 7-2: Obtain an SSL certificate (dehydrated)
First, register an account with the ACME server (Let's Encrypt).
$ sudo dehydrated --register
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
To use dehydrated with this certificate authority you have to agree to their terms of service which you can find here: https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf
To accept these terms of service run "/bin/dehydrated --register --accept-terms".
$ sudo /bin/dehydrated --register --accept-terms
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
+ Generating account key...
+ Registering account key with ACME server...
+ Fetching account URL...
+ Done!
$
Run the initial SSL certificate issuance.
$ sudo dehydrated --cron
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
+ Creating chain cache directory /etc/dehydrated/chains
Processing ホスト1名.ドメイン名 with alternative names: ホスト2名.ドメイン名
+ Creating new directory /etc/dehydrated/certs/ホスト1名.ドメイン名 ...
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 2 authorizations URLs from the CA
+ Handling authorization for ホスト1名.ドメイン名
+ Handling authorization for ホスト2名.ドメイン名
+ 2 pending challenge(s)
+ Deploying challenge tokens...
+ Responding to challenge for ホスト1名.ドメイン名 authorization...
+ Challenge is valid!
+ Responding to challenge for ホスト2名.ドメイン名 authorization...
+ Challenge is valid!
+ Cleaning challenge tokens...
+ Requesting certificate...
+ Checking certificate...
+ Done!
+ Creating fullchain.pem...
+ Done!
+ Running automatic cleanup
$
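Once dehydrated reports `Done!`, it is worth inspecting what was issued. A quick check with openssl (paths follow the certificate directory dehydrated created above):

```shell
# Show the subject and validity period of the freshly issued certificate
sudo openssl x509 -in /etc/dehydrated/certs/ホスト1名.ドメイン名/cert.pem \
    -noout -subject -dates

# Confirm fullchain.pem and privkey.pem are present for the httpd config
sudo ls -l /etc/dehydrated/certs/ホスト1名.ドメイン名/
```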
Step 7-3: Configure the SSL certificate on the web server
First, add mod_ssl to httpd.
$ sudo dnf install mod_ssl -y
Last metadata expiration check: 0:19:40 ago on Thu 14 Aug 2025 01:58:49 PM JST.
Dependencies resolved.
===========================================================================================================================
Package Architecture Version Repository Size
===========================================================================================================================
Installing:
mod_ssl x86_64 1:2.4.62-4.0.1.el9 ol9_appstream 117 k
Transaction Summary
===========================================================================================================================
Install 1 Package
<snip>
$
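With mod_ssl installed, point /etc/httpd/conf.d/ssl.conf (or a dedicated vhost file) at the files dehydrated generated. A sketch of the relevant directives only; the paths assume the certificate directory created in the previous step, and the rest of ssl.conf is left at its defaults:

```apache
# /etc/httpd/conf.d/ssl.conf (excerpt) -- adjust to your vhost layout
SSLCertificateFile    /etc/dehydrated/certs/ホスト1名.ドメイン名/fullchain.pem
SSLCertificateKeyFile /etc/dehydrated/certs/ホスト1名.ドメイン名/privkey.pem
```

After editing, apply the change with `sudo systemctl reload httpd`.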
Next, download WordPress and extract it under /var/www/html.
$ cd /var/www/html
$ ls
$ sudo curl -O https://wordpress.org/latest.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 22.3M 100 22.3M 0 0 17.6M 0 0:00:01 0:00:01 --:--:-- 17.6M
$ ls
latest.tar.gz
$ sudo tar xfz latest.tar.gz
$ ls -l
total 22904
-rw-r--r--. 1 root root 23447259 Sep 12 11:57 latest.tar.gz
drwxr-xr-x. 5 nobody nobody 4096 Aug 29 23:14 wordpress
$ sudo rm latest.tar.gz
$
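After extraction the tree is owned by nobody, as the `ls -l` output shows. WordPress's auto-update and media uploads need the files writable by the httpd user, and the SELinux file contexts should be restored. A sketch, assuming the stock `apache` user and the default /var/www contexts:

```shell
# Hand the tree to the httpd user (apache on the Oracle Linux / RHEL family)
sudo chown -R apache:apache /var/www/html/wordpress

# Reset SELinux file contexts to the defaults for /var/www
sudo restorecon -Rv /var/www/html/wordpress
```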
With httpd_can_network_connect off, SELinux blocks outbound network connections from httpd, so WordPress features that contact external servers (plugin and theme installs, updates) will fail. Check the current values with `sudo getsebool -a | grep httpd_can_network`, then enable the boolean with `sudo setsebool -P httpd_can_network_connect on`.
$ sudo getsebool -a |grep httpd_can_network
httpd_can_network_connect --> off
httpd_can_network_connect_cobbler --> off
httpd_can_network_connect_db --> off
httpd_can_network_memcache --> off
httpd_can_network_relay --> off
$ sudo setsebool -P httpd_can_network_connect on
$ sudo getsebool -a |grep httpd_can_network
httpd_can_network_connect --> on
httpd_can_network_connect_cobbler --> off
httpd_can_network_connect_db --> off
httpd_can_network_memcache --> off
httpd_can_network_relay --> off
$
Oracle Linux 8 had no ImageMagick PHP module, but Oracle Linux 9 does, so I wanted to install it with `sudo dnf install php-pecl-imagick`. On Oracle Linux 9.6 with PHP 8.3, however, this fails because no module built for PHP 8.3 has been published:
$ sudo dnf install php-pecl-imagick
Last metadata expiration check: 0:45:12 ago on Thu 14 Aug 2025 01:58:49 PM JST.
Error:
Problem: package php-pecl-imagick-3.7.0-1.el9.x86_64 from ol9_developer_EPEL requires php(api) = 20200930-64, but none of the providers can be installed
- package php-pecl-imagick-3.7.0-1.el9.x86_64 from ol9_developer_EPEL requires php(zend-abi) = 20200930-64, but none of the providers can be installed
- conflicting requests
- package php-common-8.0.13-1.el9.x86_64 from ol9_appstream is filtered out by modular filtering
- package php-common-8.0.13-2.el9_0.x86_64 from ol9_appstream is filtered out by modular filtering
- package php-common-8.0.20-3.el9.x86_64 from ol9_appstream is filtered out by modular filtering
- package php-common-8.0.27-1.el9_1.x86_64 from ol9_appstream is filtered out by modular filtering
- package php-common-8.0.30-1.el9_2.x86_64 from ol9_appstream is filtered out by modular filtering
- package php-common-8.0.30-2.el9.x86_64 from ol9_appstream is filtered out by modular filtering
- package php-common-8.0.30-3.el9_6.x86_64 from ol9_appstream is filtered out by modular filtering
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
$
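The `php(api) = 20200930` requirement in the error corresponds to the PHP 8.0 ABI, so the EPEL build simply predates the PHP 8.3 stream. Before retrying, you can check which API your interpreter actually exposes (a diagnostic sketch; php-pecl-imagick can only install when these values match the package's requirement):

```shell
# Print the PHP API / Zend ABI of the installed interpreter
php -i | grep -E 'PHP API|Zend Extension Build'
php -v
```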
Configure dnf-automatic so that updates are downloaded and applied automatically, by editing /etc/dnf/automatic.conf:
$ sudo vi /etc/dnf/automatic.conf
$ cat /etc/dnf/automatic.conf
[commands]
# What kind of upgrade to perform:
# default = all available upgrades
# security = only the security upgrades
upgrade_type = default
random_sleep = 0
# Maximum time in seconds to wait until the system is on-line and able to
# connect to remote repositories.
network_online_timeout = 60
# To just receive updates use dnf-automatic-notifyonly.timer
# Whether updates should be downloaded when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
download_updates = yes
# Whether updates should be applied when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
apply_updates = yes
[emitters]
# Name to use for this system in messages that are emitted. Default is the
# hostname.
# system_name = my-host
# How to send messages. Valid options are stdio, email and motd. If
# emit_via includes stdio, messages will be sent to stdout; this is useful
# to have cron send the messages. If emit_via includes email, this
# program will send email itself according to the configured options.
# If emit_via includes motd, /etc/motd file will have the messages. if
# emit_via includes command_email, then messages will be send via a shell
# command compatible with sendmail.
# Default is email,stdio.
# If emit_via is None or left blank, no messages will be sent.
emit_via = stdio
[email]
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
# Name of the host to connect to to send email messages.
email_host = localhost
[command]
# The shell command to execute. This is a Python format string, as used in
# str.format(). The format function will pass a shell-quoted argument called
# `body`.
# command_format = "cat"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
[command_email]
# The shell command to use to send email. This is a Python format string,
# as used in str.format(). The format function will pass shell-quoted arguments
# called body, subject, email_from, email_to.
# command_format = "mail -Ssendwait -s {subject} -r {email_from} {email_to}"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
[base]
# This section overrides dnf.conf
# Use this to filter DNF core messages
debuglevel = 1
$
Then enable and start dnf-automatic.timer.
$ sudo systemctl enable dnf-automatic.timer
Created symlink /etc/systemd/system/timers.target.wants/dnf-automatic.timer → /usr/lib/systemd/system/dnf-automatic.timer.
$ sudo systemctl status dnf-automatic
○ dnf-automatic.service - dnf automatic
Loaded: loaded (/usr/lib/systemd/system/dnf-automatic.service; static)
Active: inactive (dead)
TriggeredBy: ○ dnf-automatic.timer
$ sudo systemctl start dnf-automatic.timer
$ sudo systemctl status dnf-automatic.timer
● dnf-automatic.timer - dnf-automatic timer
Loaded: loaded (/usr/lib/systemd/system/dnf-automatic.timer; enabled; pres>
Active: active (waiting) since Tue 2023-09-12 13:11:00 JST; 5s ago
Until: Tue 2023-09-12 13:11:00 JST; 5s ago
Trigger: Wed 2023-09-13 06:44:33 JST; 17h left
Triggers: ● dnf-automatic.service
Sep 12 13:11:00 ホスト名 systemd[1]: Started dnf-automatic timer.
$
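To see when the next automatic run will fire, list the timer; the Trigger column matches the status output above:

```shell
# Show the dnf-automatic timer schedule and its last/next activation
systemctl list-timers 'dnf-automatic*'
```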
Step 14: Dealing with memory shortages
I ran the instance on Oracle Cloud's Free Tier with default settings, and it frequently stopped responding. (This never happened on Oracle Linux 8, but on Oracle Linux 9 it occurred several times a day.)
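Free Tier shapes have very little RAM, and the dnf check-update hang shown below on an 84 MB metadata download points at memory exhaustion, so adding a swap file is a cheap mitigation to try first. A sketch; the 2 GB size is an arbitrary choice for a small instance, not a recommendation from the original setup:

```shell
# Create and enable a 2 GB swap file (size is an assumption; tune to taste)
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent across reboots
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab

# Confirm
swapon --show
free -h
```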
# dnf repolist
repo id repo name
ol9_UEKR7 Oracle Linux 9 UEK Release 7 (x86_64)
ol9_addons Oracle Linux 9 Addons (x86_64)
ol9_appstream Oracle Linux 9 Application Stream Packages (x86_64)
ol9_baseos_latest Oracle Linux 9 BaseOS Latest (x86_64)
#
However, when I restored "Oracle Linux 9 OCI Included Packages (x86_64)" (oci-included-ol9.repo), the hang came back:
# dnf repolist
repo id repo name
ol9_UEKR7 Oracle Linux 9 UEK Release 7 (x86_64)
ol9_addons Oracle Linux 9 Addons (x86_64)
ol9_appstream Oracle Linux 9 Application Stream Packages (x86_64)
ol9_baseos_latest Oracle Linux 9 BaseOS Latest (x86_64)
ol9_oci_included Oracle Linux 9 OCI Included Packages (x86_64)
# dnf check-update
Oracle Linux 9 OCI Included Packages (x86_64) 27 MB/s | 84 MB 00:03
<no further output from here; the command hangs>
After a forced reboot, I excluded Oracle Linux 9 OCI Included Packages (x86_64) but restored everything else, including EPEL, and dnf check-update then succeeded:
# dnf repolist
repo id repo name
ol9_UEKR7 Oracle Linux 9 UEK Release 7 (x86_64)
ol9_addons Oracle Linux 9 Addons (x86_64)
ol9_appstream Oracle Linux 9 Application Stream Packages (x86_64)
ol9_baseos_latest Oracle Linux 9 BaseOS Latest (x86_64)
ol9_developer_EPEL Oracle Linux 9 EPEL Packages for Development (x86_64)
ol9_ksplice Ksplice for Oracle Linux 9 (x86_64)
#
But then dnf update failed:
# dnf update -y
Last metadata expiration check: 0:01:49 ago on Tue 07 May 2024 11:32:27 AM JST.
Error:
Problem 1: package ImageMagick-libs-6.9.12.93-1.el9.x86_64 from @System requires libraw_r.so.20()(64bit), but none of the providers can be installed
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from @System
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-5.el9.x86_64 from ol9_appstream
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from ol9_appstream
- cannot install the best update candidate for package LibRaw-0.20.2-6.el9.x86_64
- cannot install the best update candidate for package ImageMagick-libs-6.9.12.93-1.el9.x86_64
Problem 2: package tuned-profiles-oci-2.21.0-1.0.1.el9_3.noarch from @System requires tuned = 2.21.0-1.0.1.el9_3, but none of the providers can be installed
- cannot install both tuned-2.22.1-1.0.1.el9.noarch from ol9_baseos_latest and tuned-2.21.0-1.0.1.el9_3.noarch from @System
- cannot install both tuned-2.22.1-1.0.1.el9.noarch from ol9_baseos_latest and tuned-2.21.0-1.0.1.el9_3.noarch from ol9_baseos_latest
- cannot install the best update candidate for package tuned-2.21.0-1.0.1.el9_3.noarch
- problem with installed package tuned-profiles-oci-2.21.0-1.0.1.el9_3.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
#
# dnf update -y --exclude=tuned*
Last metadata expiration check: 0:21:39 ago on Tue 07 May 2024 11:32:27 AM JST.
Error:
Problem: package ImageMagick-libs-6.9.12.93-1.el9.x86_64 from @System requires libraw_r.so.20()(64bit), but none of the providers can be installed
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from @System
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-5.el9.x86_64 from ol9_appstream
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from ol9_appstream
- cannot install the best update candidate for package LibRaw-0.20.2-6.el9.x86_64
- cannot install the best update candidate for package ImageMagick-libs-6.9.12.93-1.el9.x86_64
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
# dnf update -y --exclude=tuned*,ImageMagick-libs,LibRaw
Last metadata expiration check: 0:22:54 ago on Tue 07 May 2024 11:32:27 AM JST.
Dependencies resolved.
<snip>
After the update completed, I restored Oracle Linux 9 OCI Included Packages (x86_64) and ran dnf check-update, which stalled at the same point as before.
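Since dnf-automatic runs unattended, the workaround excludes are worth making permanent rather than repeating them on the command line. One way is the `excludepkgs` option in /etc/dnf/dnf.conf (a sketch; the package list mirrors the manual command above):

```shell
# Persist the excludes so dnf-automatic applies them too
echo 'excludepkgs=tuned* ImageMagick-libs LibRaw' | sudo tee -a /etc/dnf/dnf.conf
```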
However, what ended up installed was Python 3.10.11, while the original Stable Diffusion web UI's "Automatic Installation on Windows" instructions say "Install Python 3.10.6 (Newer version of Python does not support torch), checking 'Add Python to PATH'." Is a newer version really a problem? → It turned out to work fine.
PS D:\sdnext\automatic> .\webui.bat
Creating venv in directory D:\sdnext\automatic\venv using python "C:\Users\OSAKANATARO\AppData\Local\Programs\Python\Python310\python.exe"
Using VENV: D:\sdnext\automatic\venv
15:25:01-666542 INFO Starting SD.Next
15:25:01-669541 INFO Python 3.10.11 on Windows
15:25:01-721480 INFO Version: 6466d3cb Mon Jul 10 17:20:29 2023 -0400
15:25:01-789179 INFO Using CPU-only Torch
15:25:01-791196 INFO Installing package: torch torchvision
15:28:27-814772 INFO Torch 2.0.1+cpu
15:28:27-816772 INFO Installing package: tensorflow==2.12.0
15:29:30-011443 INFO Verifying requirements
15:29:30-018087 INFO Installing package: addict
15:29:31-123764 INFO Installing package: aenum
15:29:32-305603 INFO Installing package: aiohttp
15:29:34-971224 INFO Installing package: anyio
15:29:36-493994 INFO Installing package: appdirs
15:29:37-534966 INFO Installing package: astunparse
15:29:38-564191 INFO Installing package: bitsandbytes
15:29:50-921879 INFO Installing package: blendmodes
15:29:53-458099 INFO Installing package: clean-fid
15:30:03-300722 INFO Installing package: easydev
15:30:06-960355 INFO Installing package: extcolors
15:30:08-507545 INFO Installing package: facexlib
15:30:33-800356 INFO Installing package: filetype
15:30:35-194993 INFO Installing package: future
15:30:42-170599 INFO Installing package: gdown
15:30:43-999361 INFO Installing package: gfpgan
15:31:07-467514 INFO Installing package: GitPython
15:31:09-671195 INFO Installing package: httpcore
15:31:11-496157 INFO Installing package: inflection
15:31:12-879955 INFO Installing package: jsonmerge
15:31:16-636081 INFO Installing package: kornia
15:31:20-478210 INFO Installing package: lark
15:31:22-125443 INFO Installing package: lmdb
15:31:23-437953 INFO Installing package: lpips
15:31:24-867851 INFO Installing package: omegaconf
15:31:29-258237 INFO Installing package: open-clip-torch
15:31:36-741714 INFO Installing package: opencv-contrib-python
15:31:43-728945 INFO Installing package: piexif
15:31:45-357791 INFO Installing package: psutil
15:31:47-282924 INFO Installing package: pyyaml
15:31:48-716454 INFO Installing package: realesrgan
15:31:50-511931 INFO Installing package: resize-right
15:31:52-093682 INFO Installing package: rich
15:31:53-644532 INFO Installing package: safetensors
15:31:55-125015 INFO Installing package: scipy
15:31:56-653853 INFO Installing package: tb_nightly
15:31:58-439541 INFO Installing package: toml
15:32:00-133340 INFO Installing package: torchdiffeq
15:32:01-912273 INFO Installing package: torchsde
15:32:04-240460 INFO Installing package: voluptuous
15:32:05-884949 INFO Installing package: yapf
15:32:07-385998 INFO Installing package: scikit-image
15:32:08-929379 INFO Installing package: basicsr
15:32:10-544987 INFO Installing package: compel
15:32:41-171247 INFO Installing package: typing-extensions==4.7.1
15:32:43-013058 INFO Installing package: antlr4-python3-runtime==4.9.3
15:32:45-010443 INFO Installing package: pydantic==1.10.11
15:32:47-661255 INFO Installing package: requests==2.31.0
15:32:49-665092 INFO Installing package: tqdm==4.65.0
15:32:51-622194 INFO Installing package: accelerate==0.20.3
15:32:54-560549 INFO Installing package: opencv-python==4.7.0.72
15:33:01-124008 INFO Installing package: diffusers==0.18.1
15:33:03-084405 INFO Installing package: einops==0.4.1
15:33:05-232281 INFO Installing package: gradio==3.32.0
15:33:31-795569 INFO Installing package: numexpr==2.8.4
15:33:34-212078 INFO Installing package: numpy==1.23.5
15:33:36-321166 INFO Installing package: numba==0.57.0
15:33:45-795266 INFO Installing package: pandas==1.5.3
15:34:02-667504 INFO Installing package: protobuf==3.20.3
15:34:04-879519 INFO Installing package: pytorch_lightning==1.9.4
15:34:11-965173 INFO Installing package: transformers==4.30.2
15:34:14-260230 INFO Installing package: tomesd==0.1.3
15:34:16-574323 INFO Installing package: urllib3==1.26.15
15:34:19-258844 INFO Installing package: Pillow==9.5.0
15:34:21-521566 INFO Installing package: timm==0.6.13
15:34:25-728405 INFO Verifying packages
15:34:25-729402 INFO Installing package: git+https://github.com/openai/CLIP.git
15:34:32-108450 INFO Installing package:
git+https://github.com/patrickvonplaten/invisible-watermark.git@remove_onnxruntime_depedency
15:34:40-136600 INFO Installing package: onnxruntime==1.15.1
15:34:45-579550 INFO Verifying repositories
15:34:45-581057 INFO Cloning repository: https://github.com/Stability-AI/stablediffusion.git
15:34:54-267186 INFO Cloning repository: https://github.com/CompVis/taming-transformers.git
15:35:39-098788 INFO Cloning repository: https://github.com/crowsonkb/k-diffusion.git
15:35:40-207126 INFO Cloning repository: https://github.com/sczhou/CodeFormer.git
15:35:43-303813 INFO Cloning repository: https://github.com/salesforce/BLIP.git
15:35:45-355666 INFO Verifying submodules
15:36:50-587204 INFO Extension installed packages: clip-interrogator-ext ['clip-interrogator==0.6.0']
15:36:57-547973 INFO Extension installed packages: sd-webui-agent-scheduler ['SQLAlchemy==2.0.18',
'greenlet==2.0.2']
15:37:26-237541 INFO Extension installed packages: sd-webui-controlnet ['pywin32==306', 'lxml==4.9.3',
'reportlab==4.0.4', 'pycparser==2.21', 'portalocker==2.7.0', 'cffi==1.15.1', 'svglib==1.5.1',
'tinycss2==1.2.1', 'mediapipe==0.10.2', 'tabulate==0.9.0', 'cssselect2==0.7.0',
'webencodings==0.5.1', 'sounddevice==0.4.6', 'iopath==0.1.9', 'yacs==0.1.8',
'fvcore==0.1.5.post20221221']
15:37:41-631094 INFO Extension installed packages: stable-diffusion-webui-images-browser ['Send2Trash==1.8.2',
'image-reward==1.5', 'fairscale==0.4.13']
15:37:48-683136 INFO Extension installed packages: stable-diffusion-webui-rembg ['rembg==2.0.38', 'pooch==1.7.0',
'PyMatting==1.1.8']
15:37:48-781391 INFO Extensions enabled: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
15:37:48-783895 INFO Verifying packages
15:37:48-845754 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions-builtin
15:37:48-846767 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions
15:37:48-882113 INFO Server arguments: []
15:37:56-683469 INFO Pipeline: Backend.ORIGINAL
No module 'xformers'. Proceeding without it.
15:38:01-166704 INFO Libraries loaded
15:38:01-168718 INFO Using data path: D:\sdnext\automatic
15:38:01-171245 INFO Available VAEs: D:\sdnext\automatic\models\VAE 0
15:38:01-174758 INFO Available models: D:\sdnext\automatic\models\Stable-diffusion 0
Download the default model? (y/N) y
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
100.0%
15:45:08-083310 INFO ControlNet v1.1.232
ControlNet v1.1.232
ControlNet preprocessor location: D:\sdnext\automatic\extensions-builtin\sd-webui-controlnet\annotator\downloads
15:45:08-271984 INFO ControlNet v1.1.232
ControlNet v1.1.232
Image Browser: ImageReward is not installed, cannot be used.
Image Browser: Creating database
Image Browser: Database created
15:45:08-497758 ERROR Module load:
D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\api.py: ImportError
Module load: D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\api.py: ImportError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\sdnext\automatic\modules\script_loading.py:13 in load_module │
│ │
│ 12 │ try: │
│ ❱ 13 │ │ module_spec.loader.exec_module(module) │
│ 14 │ except Exception as e: │
│ in exec_module:883 │
│ │
│ ... 7 frames hidden ... │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:55 in <module> │
│ │
│ 54 │
│ ❱ 55 _ensure_critical_deps() │
│ 56 # END DO NOT MOVE │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:42 in _ensure_critical_deps │
│ │
│ 41 │ elif numpy_version > (1, 24): │
│ ❱ 42 │ │ raise ImportError("Numba needs NumPy 1.24 or less") │
│ 43 │ try: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: Numba needs NumPy 1.24 or less
15:45:08-546905 ERROR Module load:
D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\postprocessing_remb
g.py: ImportError
Module load: D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\postprocessing_rembg.py: ImportError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\sdnext\automatic\modules\script_loading.py:13 in load_module │
│ │
│ 12 │ try: │
│ ❱ 13 │ │ module_spec.loader.exec_module(module) │
│ 14 │ except Exception as e: │
│ in exec_module:883 │
│ │
│ ... 7 frames hidden ... │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:55 in <module> │
│ │
│ 54 │
│ ❱ 55 _ensure_critical_deps() │
│ 56 # END DO NOT MOVE │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:42 in _ensure_critical_deps │
│ │
│ 41 │ elif numpy_version > (1, 24): │
│ ❱ 42 │ │ raise ImportError("Numba needs NumPy 1.24 or less") │
│ 43 │ try: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: Numba needs NumPy 1.24 or less
15:45:08-867572 INFO Loading UI theme: name=black-orange style=Auto
Running on local URL: http://127.0.0.1:7860
15:45:11-480274 INFO Local URL: http://127.0.0.1:7860/
15:45:11-482798 INFO Initializing middleware
15:45:11-602837 INFO [AgentScheduler] Task queue is empty
15:45:11-606823 INFO [AgentScheduler] Registering APIs
15:45:11-709704 INFO Model metadata saved: D:\sdnext\automatic\metadata.json 1
Loading weights: D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors ━━━━━━━━━ 0.0/4.3 -:--:--
GB
15:45:12-501405 WARNING Torch FP16 test failed: Forcing FP32 operations: "LayerNormKernelImpl" not implemented for
'Half'
15:45:12-503413 INFO Torch override dtype: no-half set
15:45:12-504408 INFO Torch override VAE dtype: no-half set
15:45:12-505409 INFO Setting Torch parameters: dtype=torch.float32 vae=torch.float32 unet=torch.float32
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100%|██████████████████████████████████████████| 961k/961k [00:00<00:00, 1.61MB/s]
Downloading (…)olve/main/merges.txt: 100%|██████████████████████████████████████████| 525k/525k [00:00<00:00, 1.16MB/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████████████████████████████████████| 389/389 [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100%|████████████████████████████████████████████████████| 905/905 [00:00<?, ?B/s]
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████████| 4.52k/4.52k [00:00<?, ?B/s]
Calculating model hash: D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors ━━━━━━ 4.3/4… 0:00:…
GB
15:45:20-045323 INFO Applying Doggettx cross attention optimization
15:45:20-051844 INFO Embeddings: loaded=0 skipped=0
15:45:20-057917 INFO Model loaded in 8.1s (load=0.2s config=0.4s create=3.5s hash=3.2s apply=0.8s)
15:45:20-301777 INFO Model load finished: {'ram': {'used': 8.55, 'total': 31.3}} cached=0
15:45:20-859838 INFO Startup time: 452.0s (torch=4.3s gradio=2.4s libraries=5.5s models=424.0s codeformer=0.2s
scripts=3.3s onchange=0.2s ui-txt2img=0.1s ui-img2img=0.1s ui-settings=0.4s ui-extensions=1.7s
ui-defaults=0.1s launch=0.2s app-started=0.2s checkpoint=9.2s)
Errors had appeared, so I interrupted it and launched it again; this time the earlier error messages were gone, but it stalled.
PS D:\sdnext\automatic> .\webui.bat
Using VENV: D:\sdnext\automatic\venv
20:46:25-099403 INFO Starting SD.Next
20:46:25-107728 INFO Python 3.10.11 on Windows
20:46:25-168108 INFO Version: 6466d3cb Mon Jul 10 17:20:29 2023 -0400
20:46:25-610382 INFO Latest published version: a844a83d9daa9987295932c0db391ec7be5f2d32 2023-07-11T08:00:45Z
20:46:25-634606 INFO Using CPU-only Torch
20:46:28-219427 INFO Torch 2.0.1+cpu
20:46:28-220614 INFO Installing package: tensorflow==2.12.0
20:47:05-861641 INFO Enabled extensions-builtin: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
20:47:05-870117 INFO Enabled extensions: []
20:47:05-872302 INFO Verifying requirements
20:47:05-889503 INFO Verifying packages
20:47:05-891503 INFO Verifying repositories
20:47:11-387347 INFO Verifying submodules
20:47:32-176175 INFO Extensions enabled: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
20:47:32-178176 INFO Verifying packages
20:47:32-186325 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions-builtin
20:47:32-188648 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions
20:47:32-221762 INFO Server arguments: []
20:47:40-417209 INFO Pipeline: Backend.ORIGINAL
No module 'xformers'. Proceeding without it.
20:47:43-468816 INFO Libraries loaded
20:47:43-469815 INFO Using data path: D:\sdnext\automatic
20:47:43-473321 INFO Available VAEs: D:\sdnext\automatic\models\VAE 0
20:47:43-488860 INFO Available models: D:\sdnext\automatic\models\Stable-diffusion 1
20:47:46-821663 INFO ControlNet v1.1.232
ControlNet v1.1.232
ControlNet preprocessor location: D:\sdnext\automatic\extensions-builtin\sd-webui-controlnet\annotator\downloads
20:47:47-027110 INFO ControlNet v1.1.232
ControlNet v1.1.232
Image Browser: ImageReward is not installed, cannot be used.
20:48:25-145779 INFO Loading UI theme: name=black-orange style=Auto
Running on local URL: http://127.0.0.1:7860
20:48:27-450550 INFO Local URL: http://127.0.0.1:7860/
20:48:27-451639 INFO Initializing middleware
20:48:28-016312 INFO [AgentScheduler] Task queue is empty
20:48:28-017325 INFO [AgentScheduler] Registering APIs
20:48:28-133032 WARNING Selected checkpoint not found: model.ckpt
Loading weights: D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors ━━━━━━━━━ 0.0/4.3 -:--:--
GB
20:48:29-090045 WARNING Torch FP16 test failed: Forcing FP32 operations: "LayerNormKernelImpl" not implemented for
'Half'
20:48:29-091161 INFO Torch override dtype: no-half set
20:48:29-092186 INFO Torch override VAE dtype: no-half set
20:48:29-093785 INFO Setting Torch parameters: dtype=torch.float32 vae=torch.float32 unet=torch.float32
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
20:48:30-662359 INFO Applying Doggettx cross attention optimization
20:48:30-666359 INFO Embeddings: loaded=0 skipped=0
20:48:30-679671 INFO Model loaded in 2.2s (load=0.2s config=0.4s create=0.5s apply=1.0s)
20:48:31-105108 INFO Model load finished: {'ram': {'used': 8.9, 'total': 31.3}} cached=0
20:48:31-879698 INFO Startup time: 59.7s (torch=6.1s gradio=1.5s libraries=3.7s codeformer=0.1s scripts=41.4s
onchange=0.2s ui-txt2img=0.1s ui-img2img=0.1s ui-settings=0.1s ui-extensions=1.6s
ui-defaults=0.1s launch=0.2s app-started=0.7s checkpoint=3.7s)
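The rembg ImportError in the first run came from numba refusing any NumPy newer than 1.24: numpy==1.23.5 was installed explicitly, but apparently a later package pulled a newer NumPy into the venv. If that error returns, pinning NumPy inside the venv is the usual fix (a sketch; the venv path is the one shown in the log):

```shell
# Downgrade NumPy inside the SD.Next venv so numba can import again
D:/sdnext/automatic/venv/Scripts/python.exe -m pip install "numpy<1.25"
```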
PS C:\stablediff\stable-diffusion-webui-directml> .\webui-user-amd.bat
venv "C:\stablediff\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: ## 1.4.0
Commit hash: 265d626471eacd617321bdb51e50e4b87a7ca82e
Installing requirements
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
reading checkpoint metadata: C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\unlimitedReplicant_v10.safetensors: AssertionError
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 62, in __init__
self.metadata = read_metadata_from_safetensors(filename)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 236, in read_metadata_from_safetensors
assert metadata_len > 2 and json_start in (b'{"', b"{'"), f"{filename} is not a safetensors file"
AssertionError: C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\unlimitedReplicant_v10.safetensors is not a safetensors file
2023-07-12 13:46:53,471 - ControlNet - INFO - ControlNet v1.1.232
ControlNet preprocessor location: C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2023-07-12 13:46:53,548 - ControlNet - INFO - ControlNet v1.1.232
Loading weights [c348e5681e] from C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\muaccamix_v15.safetensors
preload_extensions_git_metadata for 8 extensions took 0.13s
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.2s (import torch: 2.2s, import gradio: 1.0s, import ldm: 0.5s, other imports: 1.2s, load scripts: 1.3s, create ui: 0.4s, gradio launch: 0.5s).
Creating model from config: C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Applying attention optimization: sub-quadratic... done.
Textual inversion embeddings loaded(0):
Model loaded in 7.1s (load weights from disk: 0.7s, find config: 2.4s, create model: 0.2s, apply weights to model: 1.9s, apply half(): 1.0s, move model to device: 0.3s, calculate empty prompt: 0.4s).
Loading weights [e3b0c44298] from C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\unlimitedReplicant_v10.safetensors
changing setting sd_model_checkpoint to unlimitedReplicant_v10.safetensors [e3b0c44298]: SafetensorError
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\shared.py", line 610, in set
self.data_labels[key].onchange()
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\webui.py", line 226, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 568, in reload_model_weights
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 277, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 256, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\safetensors\torch.py", line 259, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooSmall
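Both errors above point at the same thing: `unlimitedReplicant_v10.safetensors` does not start with a valid safetensors header, which typically means an incomplete or wrong download (for example, an HTML error page saved under the `.safetensors` name). As a rough sketch of the checks involved — the safetensors format begins with an 8-byte little-endian header length followed by a JSON header, and the assertion in `sd_models.py` rejects files whose header does not open with JSON — a standalone validator (the function name `check_safetensors_header` is my own, not part of the webui) could look like this:

```python
import json
import struct
from pathlib import Path

def check_safetensors_header(path):
    """Return (ok, reason). Mirrors the header checks behind the
    'HeaderTooSmall' / 'is not a safetensors file' errors in the log."""
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False, "file shorter than the 8-byte length prefix (HeaderTooSmall)"
    (header_len,) = struct.unpack("<Q", data[:8])  # little-endian u64
    if header_len < 2 or len(data) < 8 + header_len:
        return False, f"declared header length {header_len} does not fit in the file"
    header = data[8 : 8 + header_len]
    if header[:2] not in (b'{"', b"{'"):
        return False, "header does not start with JSON (likely a saved error page)"
    try:
        json.loads(header)
    except json.JSONDecodeError:
        return False, "header is not valid JSON"
    return True, "header looks like a valid safetensors file"
```

If a checkpoint fails a check like this, re-downloading the file (and comparing its size against the one published on the model page) is the usual fix; renaming a `.ckpt` to `.safetensors` will also fail in exactly this way.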
*** Error completing request
*** Arguments: ('task(d0d406cu3531u31)', 'miku\n', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FAA7416110>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\txt2img.py", line 94, in txt2img
processed = processing.process_images(p)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 623, in process_images
res = process_images_inner(p)
File "C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 732, in process_images_inner
p.setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 1129, in setup_conds
super().setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 346, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 338, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "C:\stablediff\stable-diffusion-webui-directml\modules\prompt_parser.py", line 143, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 236, in encode
return self(text)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 213, in forward
z = self.encode_with_transformer(tokens.to(self.device))
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
---
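The `Torch not compiled with CUDA enabled` traceback is consistent with the warning printed at startup: the venv of the DirectML fork deliberately ships a PyTorch build without CUDA, so any code path that tries to move tensors to a CUDA device fails. One quick way to see which build is installed is `python -c "import torch; print(torch.__version__, torch.cuda.is_available())"`. PyTorch wheels usually encode the backend in the local-version suffix (`+cpu`, `+cu118`, `+rocm5.4.2`); as an illustration only (the helper `torch_build_flavor` is my own name, not a PyTorch API), a classifier for that suffix might look like:

```python
def torch_build_flavor(version: str) -> str:
    """Classify a PyTorch version string by its local-version suffix.
    Wheels commonly encode the backend there, e.g. '2.0.0+cpu' or
    '2.0.1+cu118'; a bare version string does not identify the backend."""
    _, sep, local = version.partition("+")
    if not sep:
        return "unknown"   # no suffix: backend cannot be inferred from the version
    if local.startswith("cu"):
        return "cuda"
    if local.startswith("rocm"):
        return "rocm"
    if local == "cpu":
        return "cpu"
    return local           # vendor-specific or unusual builds
```

A `cpu` (or suffix-less) build here is expected for webui-directml; the error above comes from a code path that assumed CUDA rather than routing through DirectML.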
Restarting UI...
Closing server running on port: 7860
2023-07-12 13:54:32,359 - ControlNet - INFO - ControlNet v1.1.232
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 0.6s (load scripts: 0.3s, create ui: 0.2s).
preload_extensions_git_metadata for 8 extensions took 0.15s
*** Error completing request
*** Arguments: ('task(jwkb7fcvkg7wpb4)', 'miku', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FB1BC5F010>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\txt2img.py", line 94, in txt2img
processed = processing.process_images(p)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 623, in process_images
res = process_images_inner(p)
File "C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 732, in process_images_inner
p.setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 1129, in setup_conds
super().setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 346, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 338, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "C:\stablediff\stable-diffusion-webui-directml\modules\prompt_parser.py", line 143, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 236, in encode
return self(text)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 213, in forward
z = self.encode_with_transformer(tokens.to(self.device))
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
---
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: ## 1.4.0
Commit hash: 265d626471eacd617321bdb51e50e4b87a7ca82e
Installing requirements
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [c348e5681e] from C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\muaccamix_v15.safetensors
preload_extensions_git_metadata for 8 extensions took 0.13s
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 6.6s (import torch: 2.2s, import gradio: 1.0s, import ldm: 0.5s, other imports: 1.2s, load scripts: 1.0s, create ui: 0.5s, gradio launch: 0.2s).
Creating model from config: C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Applying attention optimization: sub-quadratic... done.
Textual inversion embeddings loaded(0):
Model loaded in 6.3s (load weights from disk: 0.7s, find config: 1.7s, create model: 0.6s, apply weights to model: 1.6s, apply half(): 1.0s, move model to device: 0.3s, calculate empty prompt: 0.4s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:44<00:00, 5.23s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:41<00:00, 5.06s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:41<00:00, 5.08s/it]