Note that in MySQL 8, creating a database user and assigning privileges can no longer be done with the traditional one-liner "grant all on DB名.* to wordpress@localhost identified by 'パスワード';"; it is now split into two separate statements, "create user ..." and "grant ...".
$ sudo mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.32 Source distribution
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database DB名 character set utf8;
Query OK, 1 row affected, 1 warning (0.01 sec)
mysql> create user wordpress@localhost identified by 'パスワード';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all privileges on DB名.* to wordpress@localhost;
Query OK, 0 rows affected (0.00 sec)
mysql> quit
Bye
$
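For reference, the "1 warning" on CREATE DATABASE is MySQL 8 pointing out that utf8 is an alias for the deprecated utf8mb3 character set. A small variation that avoids the warning (and gives full 4-byte Unicode, which current WordPress expects), plus a quick check of the new user's privileges, would look like the following; this is a sketch using the same placeholder names, not part of the original session.
mysql> create database DB名 character set utf8mb4 collate utf8mb4_general_ci;
mysql> show grants for wordpress@localhost;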
Step 7: Web server configuration
Step 7-1: Installing httpd
Install httpd.
On Oracle Linux 9.2, the available web servers are Apache (httpd) 2.4.53, nginx 1.20.1, and nginx 1.22.1; here we use Apache.
$ sudo dnf install httpd -y
Last metadata expiration check: 0:05:50 ago on Tue 12 Sep 2023 11:38:07 AM JST.
Package httpd-2.4.53-11.0.1.el9_2.5.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
$
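The captured output does not show it, but httpd still has to be enabled and started; the usual systemd invocation (assumed here, not part of the original log) is:
$ sudo systemctl enable --now httpd
$ systemctl status httpd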
$ sudo dehydrated --register
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
To use dehydrated with this certificate authority you have to agree to their terms of service which you can find here: https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf
To accept these terms of service run "/bin/dehydrated --register --accept-terms".
$ sudo /bin/dehydrated --register --accept-terms
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
+ Generating account key...
+ Registering account key with ACME server...
+ Fetching account URL...
+ Done!
$
Run the initial SSL certificate issuance.
$ sudo dehydrated --cron
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/local.sh
+ Creating chain cache directory /etc/dehydrated/chains
Processing ホスト1名.ドメイン名 with alternative names: ホスト2名.ドメイン名
+ Creating new directory /etc/dehydrated/certs/ホスト1名.ドメイン名 ...
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 2 authorizations URLs from the CA
+ Handling authorization for ホスト1名.ドメイン名
+ Handling authorization for ホスト2名.ドメイン名
+ 2 pending challenge(s)
+ Deploying challenge tokens...
+ Responding to challenge for ホスト1名.ドメイン名 authorization...
+ Challenge is valid!
+ Responding to challenge for ホスト2名.ドメイン名 authorization...
+ Challenge is valid!
+ Cleaning challenge tokens...
+ Requesting certificate...
+ Checking certificate...
+ Done!
+ Creating fullchain.pem...
+ Done!
+ Running automatic cleanup
$
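Let's Encrypt certificates expire after 90 days, so the --cron run above needs to be repeated periodically. One minimal way to do that (an assumption, not taken from the original setup) is a cron.d entry reusing the binary path seen in the log:
$ echo '30 3 * * * root /bin/dehydrated --cron' | sudo tee /etc/cron.d/dehydrated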
Step 7-3: Configuring the SSL certificate on the web server
First, add mod_ssl to httpd.
$ sudo dnf install mod_ssl -y
Last metadata expiration check: 0:13:55 ago on Tue 12 Sep 2023 11:38:07 AM JST.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
mod_ssl x86_64 1:2.4.53-11.0.1.el9_2.5 ol9_appstream 119 k
Transaction Summary
================================================================================
Install 1 Package
<snip>
$
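Installing mod_ssl only provides the module; the certificates generated by dehydrated still have to be referenced from the Apache SSL configuration. As a rough sketch (the exact VirtualHost layout depends on the site, and these lines are not from the original config), the files created under /etc/dehydrated/certs/ can be pointed to like this in /etc/httpd/conf.d/ssl.conf or a dedicated VirtualHost:
SSLCertificateFile /etc/dehydrated/certs/ホスト1名.ドメイン名/fullchain.pem
SSLCertificateKeyFile /etc/dehydrated/certs/ホスト1名.ドメイン名/privkey.pem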
$ cd /var/www/html
$ ls
$ sudo curl -O https://wordpress.org/latest.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 22.3M 100 22.3M 0 0 17.6M 0 0:00:01 0:00:01 --:--:-- 17.6M
$ ls
latest.tar.gz
$ sudo tar xfz latest.tar.gz
$ ls -l
total 22904
-rw-r--r--. 1 root root 23447259 Sep 12 11:57 latest.tar.gz
drwxr-xr-x. 5 nobody nobody 4096 Aug 29 23:14 wordpress
$ sudo rm latest.tar.gz
$
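One step not shown above: the extracted tree is owned by nobody:nobody, so WordPress updates and uploads will fail unless the files are handed over to the httpd user. A typical (assumed) fix on Oracle Linux / RHEL systems is:
$ sudo chown -R apache:apache /var/www/html/wordpress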
Check the current values with "sudo getsebool -a | grep httpd_can_network", then enable the boolean with "sudo setsebool -P httpd_can_network_connect on".
$ sudo getsebool -a |grep httpd_can_network
httpd_can_network_connect --> off
httpd_can_network_connect_cobbler --> off
httpd_can_network_connect_db --> off
httpd_can_network_memcache --> off
httpd_can_network_relay --> off
$ sudo setsebool -P httpd_can_network_connect on
$ sudo getsebool -a |grep httpd_can_network
httpd_can_network_connect --> on
httpd_can_network_connect_cobbler --> off
httpd_can_network_connect_db --> off
httpd_can_network_memcache --> off
httpd_can_network_relay --> off
$
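While on the subject of SELinux, it can also be worth confirming that the manually extracted WordPress files carry the normal web content context; a quick check and reset (again an assumption, not part of the original log):
$ ls -Zd /var/www/html/wordpress
$ sudo restorecon -Rv /var/www/html/wordpress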
$ sudo vi /etc/dnf/automatic.conf
$ cat /etc/dnf/automatic.conf
[commands]
# What kind of upgrade to perform:
# default = all available upgrades
# security = only the security upgrades
upgrade_type = default
random_sleep = 0
# Maximum time in seconds to wait until the system is on-line and able to
# connect to remote repositories.
network_online_timeout = 60
# To just receive updates use dnf-automatic-notifyonly.timer
# Whether updates should be downloaded when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
download_updates = yes
# Whether updates should be applied when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
apply_updates = yes
[emitters]
# Name to use for this system in messages that are emitted. Default is the
# hostname.
# system_name = my-host
# How to send messages. Valid options are stdio, email and motd. If
# emit_via includes stdio, messages will be sent to stdout; this is useful
# to have cron send the messages. If emit_via includes email, this
# program will send email itself according to the configured options.
# If emit_via includes motd, /etc/motd file will have the messages. if
# emit_via includes command_email, then messages will be send via a shell
# command compatible with sendmail.
# Default is email,stdio.
# If emit_via is None or left blank, no messages will be sent.
emit_via = stdio
[email]
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
# Name of the host to connect to to send email messages.
email_host = localhost
[command]
# The shell command to execute. This is a Python format string, as used in
# str.format(). The format function will pass a shell-quoted argument called
# `body`.
# command_format = "cat"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
[command_email]
# The shell command to use to send email. This is a Python format string,
# as used in str.format(). The format function will pass shell-quoted arguments
# called body, subject, email_from, email_to.
# command_format = "mail -Ssendwait -s {subject} -r {email_from} {email_to}"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
[base]
# This section overrides dnf.conf
# Use this to filter DNF core messages
debuglevel = 1
$
Then enable and start dnf-automatic.timer.
$ sudo systemctl enable dnf-automatic.timer
Created symlink /etc/systemd/system/timers.target.wants/dnf-automatic.timer → /usr/lib/systemd/system/dnf-automatic.timer.
$ sudo systemctl status dnf-automatic
○ dnf-automatic.service - dnf automatic
Loaded: loaded (/usr/lib/systemd/system/dnf-automatic.service; static)
Active: inactive (dead)
TriggeredBy: ○ dnf-automatic.timer
$ sudo systemctl start dnf-automatic.timer
$ sudo systemctl status dnf-automatic.timer
● dnf-automatic.timer - dnf-automatic timer
Loaded: loaded (/usr/lib/systemd/system/dnf-automatic.timer; enabled; pres>
Active: active (waiting) since Tue 2023-09-12 13:11:00 JST; 5s ago
Until: Tue 2023-09-12 13:11:00 JST; 5s ago
Trigger: Wed 2023-09-13 06:44:33 JST; 17h left
Triggers: ● dnf-automatic.service
Sep 12 13:11:00 ホスト名 systemd[1]: Started dnf-automatic timer.
$
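To see when the next automatic run is scheduled, the timer listing gives the same information as the status output above in a more compact form:
$ systemctl list-timers dnf-automatic.timer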
Step 14: Dealing with insufficient memory
I ran the instance on Oracle Cloud's Free Tier with the default settings, but it frequently stopped responding. (This never happened on Oracle Linux 8, but on Oracle Linux 9 it occurred several times a day.)
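As a general mitigation (not the route taken below, which instead tracks the hang down to the ol9_oci_included repository), adding a swap file is a common way to keep a 1 GB Free Tier instance from locking up under transient memory pressure; a minimal sketch:
# fallocate -l 2G /swapfile
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile
# echo '/swapfile none swap defaults 0 0' >> /etc/fstab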
# dnf repolist
repo id repo の名前
ol9_UEKR7 Oracle Linux 9 UEK Release 7 (x86_64)
ol9_addons Oracle Linux 9 Addons (x86_64)
ol9_appstream Oracle Linux 9 Application Stream Packages (x86_64)
ol9_baseos_latest Oracle Linux 9 BaseOS Latest (x86_64)
#
However, when I restored "Oracle Linux 9 OCI Included Packages (x86_64)" (oci-included-ol9.repo), the machine became unresponsive again:
# dnf repolist
repo id repo の名前
ol9_UEKR7 Oracle Linux 9 UEK Release 7 (x86_64)
ol9_addons Oracle Linux 9 Addons (x86_64)
ol9_appstream Oracle Linux 9 Application Stream Packages (x86_64)
ol9_baseos_latest Oracle Linux 9 BaseOS Latest (x86_64)
ol9_oci_included Oracle Linux 9 OCI Included Packages (x86_64)
# dnf check-update
Oracle Linux 9 OCI Included Packages (x86_64) 27 MB/s | 84 MB 00:03
<no further output from this point>
After a forced reboot, I excluded Oracle Linux 9 OCI Included Packages (x86_64) and restored everything else, including EPEL, and this time dnf check-update succeeded.
# dnf repolist
repo id repo の名前
ol9_UEKR7 Oracle Linux 9 UEK Release 7 (x86_64)
ol9_addons Oracle Linux 9 Addons (x86_64)
ol9_appstream Oracle Linux 9 Application Stream Packages (x86_64)
ol9_baseos_latest Oracle Linux 9 BaseOS Latest (x86_64)
ol9_developer_EPEL Oracle Linux 9 EPEL Packages for Development (x86_64)
ol9_ksplice Ksplice for Oracle Linux 9 (x86_64)
#
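If you would rather have dnf skip the problem repository without editing the .repo file by hand, dnf-plugins-core's config-manager can disable it persistently, or it can be skipped per invocation (repo id as shown in the repolist output above; this is a sketch, not what was actually run):
# dnf config-manager --set-disabled ol9_oci_included
# dnf check-update --disablerepo=ol9_oci_included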
But then dnf update failed with errors:
# dnf update -y
メタデータの期限切れの最終確認: 0:01:49 前の 2024年05月07日 11時32分27秒 に実施しました。
エラー:
問題 1: package ImageMagick-libs-6.9.12.93-1.el9.x86_64 from @System requires libraw_r.so.20()(64bit), but none of the providers can be installed
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from @System
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-5.el9.x86_64 from ol9_appstream
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from ol9_appstream
- パッケージの最良アップデート候補をインストールできません LibRaw-0.20.2-6.el9.x86_64
- パッケージの最良アップデート候補をインストールできません ImageMagick-libs-6.9.12.93-1.el9.x86_64
問題 2: package tuned-profiles-oci-2.21.0-1.0.1.el9_3.noarch from @System requires tuned = 2.21.0-1.0.1.el9_3, but none of the providers can be installed
- cannot install both tuned-2.22.1-1.0.1.el9.noarch from ol9_baseos_latest and tuned-2.21.0-1.0.1.el9_3.noarch from @System
- cannot install both tuned-2.22.1-1.0.1.el9.noarch from ol9_baseos_latest and tuned-2.21.0-1.0.1.el9_3.noarch from ol9_baseos_latest
- パッケージの最良アップデート候補をインストールできません tuned-2.21.0-1.0.1.el9_3.noarch
- インストール済パッケージの問題 tuned-profiles-oci-2.21.0-1.0.1.el9_3.noarch
(競合するパッケージを置き換えるには、コマンドラインに '--allowerasing' を追加してみてください または、'--skip-broken' を追加して、インストール不可のパッケージをスキップしてください または、'--nobest' を追加して、最適候補のパッケージのみを使用しないでください)
#
# dnf update -y --exclude=tuned*
メタデータの期限切れの最終確認: 0:21:39 前の 2024年05月07日 11時32分27秒 に実施しました。
エラー:
問題: package ImageMagick-libs-6.9.12.93-1.el9.x86_64 from @System requires libraw_r.so.20()(64bit), but none of the providers can be installed
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from @System
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-5.el9.x86_64 from ol9_appstream
- cannot install both LibRaw-0.21.1-1.el9.x86_64 from ol9_appstream and LibRaw-0.20.2-6.el9.x86_64 from ol9_appstream
- パッケージの最良アップデート候補をインストールできません LibRaw-0.20.2-6.el9.x86_64
- パッケージの最良アップデート候補をインストールできません ImageMagick-libs-6.9.12.93-1.el9.x86_64
(競合するパッケージを置き換えるには、コマンドラインに '--allowerasing' を追加してみてください または、'--skip-broken' を追加して、インストール不可のパッケージをスキップしてください または、'--nobest' を追加して、最適候補のパッケージのみを使用しないでください)
# dnf update -y --exclude=tuned*,ImageMagick-libs,LibRaw
メタデータの期限切れの最終確認: 0:22:54 前の 2024年05月07日 11時32分27秒 に実施しました。
依存関係が解決しました。
<snip>
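If you eventually want to let ImageMagick-libs and LibRaw move forward instead of keeping them excluded, the error message's own suggestion is to let dnf erase the conflicting packages; a possible follow-up (not run in this log, and it may remove ImageMagick-libs) would be:
# dnf update --allowerasing LibRaw ImageMagick-libs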
After the update completed, I re-enabled Oracle Linux 9 OCI Included Packages (x86_64) and ran dnf check-update, and it hung at the same point as before.
Incidentally, Python 3.10.11 got installed, even though the original Stable Diffusion web UI's "Automatic Installation on Windows" instructions say to "Install Python 3.10.6 (Newer version of Python does not support torch), checking 'Add Python to PATH'." Is the newer version really a problem? → It turned out to work without issues.
PS D:\sdnext\automatic> .\webui.bat
Creating venv in directory D:\sdnext\automatic\venv using python "C:\Users\OSAKANATARO\AppData\Local\Programs\Python\Python310\python.exe"
Using VENV: D:\sdnext\automatic\venv
15:25:01-666542 INFO Starting SD.Next
15:25:01-669541 INFO Python 3.10.11 on Windows
15:25:01-721480 INFO Version: 6466d3cb Mon Jul 10 17:20:29 2023 -0400
15:25:01-789179 INFO Using CPU-only Torch
15:25:01-791196 INFO Installing package: torch torchvision
15:28:27-814772 INFO Torch 2.0.1+cpu
15:28:27-816772 INFO Installing package: tensorflow==2.12.0
15:29:30-011443 INFO Verifying requirements
15:29:30-018087 INFO Installing package: addict
15:29:31-123764 INFO Installing package: aenum
15:29:32-305603 INFO Installing package: aiohttp
15:29:34-971224 INFO Installing package: anyio
15:29:36-493994 INFO Installing package: appdirs
15:29:37-534966 INFO Installing package: astunparse
15:29:38-564191 INFO Installing package: bitsandbytes
15:29:50-921879 INFO Installing package: blendmodes
15:29:53-458099 INFO Installing package: clean-fid
15:30:03-300722 INFO Installing package: easydev
15:30:06-960355 INFO Installing package: extcolors
15:30:08-507545 INFO Installing package: facexlib
15:30:33-800356 INFO Installing package: filetype
15:30:35-194993 INFO Installing package: future
15:30:42-170599 INFO Installing package: gdown
15:30:43-999361 INFO Installing package: gfpgan
15:31:07-467514 INFO Installing package: GitPython
15:31:09-671195 INFO Installing package: httpcore
15:31:11-496157 INFO Installing package: inflection
15:31:12-879955 INFO Installing package: jsonmerge
15:31:16-636081 INFO Installing package: kornia
15:31:20-478210 INFO Installing package: lark
15:31:22-125443 INFO Installing package: lmdb
15:31:23-437953 INFO Installing package: lpips
15:31:24-867851 INFO Installing package: omegaconf
15:31:29-258237 INFO Installing package: open-clip-torch
15:31:36-741714 INFO Installing package: opencv-contrib-python
15:31:43-728945 INFO Installing package: piexif
15:31:45-357791 INFO Installing package: psutil
15:31:47-282924 INFO Installing package: pyyaml
15:31:48-716454 INFO Installing package: realesrgan
15:31:50-511931 INFO Installing package: resize-right
15:31:52-093682 INFO Installing package: rich
15:31:53-644532 INFO Installing package: safetensors
15:31:55-125015 INFO Installing package: scipy
15:31:56-653853 INFO Installing package: tb_nightly
15:31:58-439541 INFO Installing package: toml
15:32:00-133340 INFO Installing package: torchdiffeq
15:32:01-912273 INFO Installing package: torchsde
15:32:04-240460 INFO Installing package: voluptuous
15:32:05-884949 INFO Installing package: yapf
15:32:07-385998 INFO Installing package: scikit-image
15:32:08-929379 INFO Installing package: basicsr
15:32:10-544987 INFO Installing package: compel
15:32:41-171247 INFO Installing package: typing-extensions==4.7.1
15:32:43-013058 INFO Installing package: antlr4-python3-runtime==4.9.3
15:32:45-010443 INFO Installing package: pydantic==1.10.11
15:32:47-661255 INFO Installing package: requests==2.31.0
15:32:49-665092 INFO Installing package: tqdm==4.65.0
15:32:51-622194 INFO Installing package: accelerate==0.20.3
15:32:54-560549 INFO Installing package: opencv-python==4.7.0.72
15:33:01-124008 INFO Installing package: diffusers==0.18.1
15:33:03-084405 INFO Installing package: einops==0.4.1
15:33:05-232281 INFO Installing package: gradio==3.32.0
15:33:31-795569 INFO Installing package: numexpr==2.8.4
15:33:34-212078 INFO Installing package: numpy==1.23.5
15:33:36-321166 INFO Installing package: numba==0.57.0
15:33:45-795266 INFO Installing package: pandas==1.5.3
15:34:02-667504 INFO Installing package: protobuf==3.20.3
15:34:04-879519 INFO Installing package: pytorch_lightning==1.9.4
15:34:11-965173 INFO Installing package: transformers==4.30.2
15:34:14-260230 INFO Installing package: tomesd==0.1.3
15:34:16-574323 INFO Installing package: urllib3==1.26.15
15:34:19-258844 INFO Installing package: Pillow==9.5.0
15:34:21-521566 INFO Installing package: timm==0.6.13
15:34:25-728405 INFO Verifying packages
15:34:25-729402 INFO Installing package: git+https://github.com/openai/CLIP.git
15:34:32-108450 INFO Installing package:
git+https://github.com/patrickvonplaten/invisible-watermark.git@remove_onnxruntime_depedency
15:34:40-136600 INFO Installing package: onnxruntime==1.15.1
15:34:45-579550 INFO Verifying repositories
15:34:45-581057 INFO Cloning repository: https://github.com/Stability-AI/stablediffusion.git
15:34:54-267186 INFO Cloning repository: https://github.com/CompVis/taming-transformers.git
15:35:39-098788 INFO Cloning repository: https://github.com/crowsonkb/k-diffusion.git
15:35:40-207126 INFO Cloning repository: https://github.com/sczhou/CodeFormer.git
15:35:43-303813 INFO Cloning repository: https://github.com/salesforce/BLIP.git
15:35:45-355666 INFO Verifying submodules
15:36:50-587204 INFO Extension installed packages: clip-interrogator-ext ['clip-interrogator==0.6.0']
15:36:57-547973 INFO Extension installed packages: sd-webui-agent-scheduler ['SQLAlchemy==2.0.18',
'greenlet==2.0.2']
15:37:26-237541 INFO Extension installed packages: sd-webui-controlnet ['pywin32==306', 'lxml==4.9.3',
'reportlab==4.0.4', 'pycparser==2.21', 'portalocker==2.7.0', 'cffi==1.15.1', 'svglib==1.5.1',
'tinycss2==1.2.1', 'mediapipe==0.10.2', 'tabulate==0.9.0', 'cssselect2==0.7.0',
'webencodings==0.5.1', 'sounddevice==0.4.6', 'iopath==0.1.9', 'yacs==0.1.8',
'fvcore==0.1.5.post20221221']
15:37:41-631094 INFO Extension installed packages: stable-diffusion-webui-images-browser ['Send2Trash==1.8.2',
'image-reward==1.5', 'fairscale==0.4.13']
15:37:48-683136 INFO Extension installed packages: stable-diffusion-webui-rembg ['rembg==2.0.38', 'pooch==1.7.0',
'PyMatting==1.1.8']
15:37:48-781391 INFO Extensions enabled: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
15:37:48-783895 INFO Verifying packages
15:37:48-845754 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions-builtin
15:37:48-846767 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions
15:37:48-882113 INFO Server arguments: []
15:37:56-683469 INFO Pipeline: Backend.ORIGINAL
No module 'xformers'. Proceeding without it.
15:38:01-166704 INFO Libraries loaded
15:38:01-168718 INFO Using data path: D:\sdnext\automatic
15:38:01-171245 INFO Available VAEs: D:\sdnext\automatic\models\VAE 0
15:38:01-174758 INFO Available models: D:\sdnext\automatic\models\Stable-diffusion 0
Download the default model? (y/N) y
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
100.0%
15:45:08-083310 INFO ControlNet v1.1.232
ControlNet v1.1.232
ControlNet preprocessor location: D:\sdnext\automatic\extensions-builtin\sd-webui-controlnet\annotator\downloads
15:45:08-271984 INFO ControlNet v1.1.232
ControlNet v1.1.232
Image Browser: ImageReward is not installed, cannot be used.
Image Browser: Creating database
Image Browser: Database created
15:45:08-497758 ERROR Module load:
D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\api.py: ImportError
Module load: D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\api.py: ImportError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\sdnext\automatic\modules\script_loading.py:13 in load_module │
│ │
│ 12 │ try: │
│ ❱ 13 │ │ module_spec.loader.exec_module(module) │
│ 14 │ except Exception as e: │
│ in exec_module:883 │
│ │
│ ... 7 frames hidden ... │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:55 in <module> │
│ │
│ 54 │
│ ❱ 55 _ensure_critical_deps() │
│ 56 # END DO NOT MOVE │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:42 in _ensure_critical_deps │
│ │
│ 41 │ elif numpy_version > (1, 24): │
│ ❱ 42 │ │ raise ImportError("Numba needs NumPy 1.24 or less") │
│ 43 │ try: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: Numba needs NumPy 1.24 or less
15:45:08-546905 ERROR Module load:
D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\postprocessing_remb
g.py: ImportError
Module load: D:\sdnext\automatic\extensions-builtin\stable-diffusion-webui-rembg\scripts\postprocessing_rembg.py: ImportError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\sdnext\automatic\modules\script_loading.py:13 in load_module │
│ │
│ 12 │ try: │
│ ❱ 13 │ │ module_spec.loader.exec_module(module) │
│ 14 │ except Exception as e: │
│ in exec_module:883 │
│ │
│ ... 7 frames hidden ... │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:55 in <module> │
│ │
│ 54 │
│ ❱ 55 _ensure_critical_deps() │
│ 56 # END DO NOT MOVE │
│ │
│ D:\sdnext\automatic\venv\lib\site-packages\numba\__init__.py:42 in _ensure_critical_deps │
│ │
│ 41 │ elif numpy_version > (1, 24): │
│ ❱ 42 │ │ raise ImportError("Numba needs NumPy 1.24 or less") │
│ 43 │ try: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: Numba needs NumPy 1.24 or less
15:45:08-867572 INFO Loading UI theme: name=black-orange style=Auto
Running on local URL: http://127.0.0.1:7860
15:45:11-480274 INFO Local URL: http://127.0.0.1:7860/
15:45:11-482798 INFO Initializing middleware
15:45:11-602837 INFO [AgentScheduler] Task queue is empty
15:45:11-606823 INFO [AgentScheduler] Registering APIs
15:45:11-709704 INFO Model metadata saved: D:\sdnext\automatic\metadata.json 1
Loading weights: D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors ━━━━━━━━━ 0.0/4.3 -:--:--
GB
15:45:12-501405 WARNING Torch FP16 test failed: Forcing FP32 operations: "LayerNormKernelImpl" not implemented for
'Half'
15:45:12-503413 INFO Torch override dtype: no-half set
15:45:12-504408 INFO Torch override VAE dtype: no-half set
15:45:12-505409 INFO Setting Torch parameters: dtype=torch.float32 vae=torch.float32 unet=torch.float32
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100%|██████████████████████████████████████████| 961k/961k [00:00<00:00, 1.61MB/s]
Downloading (…)olve/main/merges.txt: 100%|██████████████████████████████████████████| 525k/525k [00:00<00:00, 1.16MB/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████████████████████████████████████| 389/389 [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100%|████████████████████████████████████████████████████| 905/905 [00:00<?, ?B/s]
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████████| 4.52k/4.52k [00:00<?, ?B/s]
Calculating model hash: D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors ━━━━━━ 4.3/4… 0:00:…
GB
15:45:20-045323 INFO Applying Doggettx cross attention optimization
15:45:20-051844 INFO Embeddings: loaded=0 skipped=0
15:45:20-057917 INFO Model loaded in 8.1s (load=0.2s config=0.4s create=3.5s hash=3.2s apply=0.8s)
15:45:20-301777 INFO Model load finished: {'ram': {'used': 8.55, 'total': 31.3}} cached=0
15:45:20-859838 INFO Startup time: 452.0s (torch=4.3s gradio=2.4s libraries=5.5s models=424.0s codeformer=0.2s
scripts=3.3s onchange=0.2s ui-txt2img=0.1s ui-img2img=0.1s ui-settings=0.4s ui-extensions=1.7s
ui-defaults=0.1s launch=0.2s app-started=0.2s checkpoint=9.2s)
Errors had appeared, so I aborted and launched it one more time; this time the earlier errors were gone, but it stalled.
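For reference, the ImportError in the first run is numba refusing to start because a NumPy newer than 1.24 ended up in the venv (one of the extension installs apparently pulled it in on top of the pinned numpy==1.23.5). Had it persisted, downgrading NumPy inside the venv would be the usual remedy; a sketch, not something actually run here:
PS D:\sdnext\automatic> .\venv\Scripts\pip.exe install numpy==1.23.5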
PS D:\sdnext\automatic> .\webui.bat
Using VENV: D:\sdnext\automatic\venv
20:46:25-099403 INFO Starting SD.Next
20:46:25-107728 INFO Python 3.10.11 on Windows
20:46:25-168108 INFO Version: 6466d3cb Mon Jul 10 17:20:29 2023 -0400
20:46:25-610382 INFO Latest published version: a844a83d9daa9987295932c0db391ec7be5f2d32 2023-07-11T08:00:45Z
20:46:25-634606 INFO Using CPU-only Torch
20:46:28-219427 INFO Torch 2.0.1+cpu
20:46:28-220614 INFO Installing package: tensorflow==2.12.0
20:47:05-861641 INFO Enabled extensions-builtin: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
20:47:05-870117 INFO Enabled extensions: []
20:47:05-872302 INFO Verifying requirements
20:47:05-889503 INFO Verifying packages
20:47:05-891503 INFO Verifying repositories
20:47:11-387347 INFO Verifying submodules
20:47:32-176175 INFO Extensions enabled: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
20:47:32-178176 INFO Verifying packages
20:47:32-186325 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions-builtin
20:47:32-188648 INFO Extension preload: 0.0s D:\sdnext\automatic\extensions
20:47:32-221762 INFO Server arguments: []
20:47:40-417209 INFO Pipeline: Backend.ORIGINAL
No module 'xformers'. Proceeding without it.
20:47:43-468816 INFO Libraries loaded
20:47:43-469815 INFO Using data path: D:\sdnext\automatic
20:47:43-473321 INFO Available VAEs: D:\sdnext\automatic\models\VAE 0
20:47:43-488860 INFO Available models: D:\sdnext\automatic\models\Stable-diffusion 1
20:47:46-821663 INFO ControlNet v1.1.232
ControlNet v1.1.232
ControlNet preprocessor location: D:\sdnext\automatic\extensions-builtin\sd-webui-controlnet\annotator\downloads
20:47:47-027110 INFO ControlNet v1.1.232
ControlNet v1.1.232
Image Browser: ImageReward is not installed, cannot be used.
20:48:25-145779 INFO Loading UI theme: name=black-orange style=Auto
Running on local URL: http://127.0.0.1:7860
20:48:27-450550 INFO Local URL: http://127.0.0.1:7860/
20:48:27-451639 INFO Initializing middleware
20:48:28-016312 INFO [AgentScheduler] Task queue is empty
20:48:28-017325 INFO [AgentScheduler] Registering APIs
20:48:28-133032 WARNING Selected checkpoint not found: model.ckpt
Loading weights: D:\sdnext\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors ━━━━━━━━━ 0.0/4.3 -:--:--
GB
20:48:29-090045 WARNING Torch FP16 test failed: Forcing FP32 operations: "LayerNormKernelImpl" not implemented for
'Half'
20:48:29-091161 INFO Torch override dtype: no-half set
20:48:29-092186 INFO Torch override VAE dtype: no-half set
20:48:29-093785 INFO Setting Torch parameters: dtype=torch.float32 vae=torch.float32 unet=torch.float32
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
20:48:30-662359 INFO Applying Doggettx cross attention optimization
20:48:30-666359 INFO Embeddings: loaded=0 skipped=0
20:48:30-679671 INFO Model loaded in 2.2s (load=0.2s config=0.4s create=0.5s apply=1.0s)
20:48:31-105108 INFO Model load finished: {'ram': {'used': 8.9, 'total': 31.3}} cached=0
20:48:31-879698 INFO Startup time: 59.7s (torch=6.1s gradio=1.5s libraries=3.7s codeformer=0.1s scripts=41.4s
onchange=0.2s ui-txt2img=0.1s ui-img2img=0.1s ui-settings=0.1s ui-extensions=1.6s
ui-defaults=0.1s launch=0.2s app-started=0.7s checkpoint=3.7s)
PS C:\stablediff\stable-diffusion-webui-directml> .\webui-user-amd.bat
venv "C:\stablediff\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: ## 1.4.0
Commit hash: 265d626471eacd617321bdb51e50e4b87a7ca82e
Installing requirements
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
reading checkpoint metadata: C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\unlimitedReplicant_v10.safetensors: AssertionError
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 62, in __init__
self.metadata = read_metadata_from_safetensors(filename)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 236, in read_metadata_from_safetensors
assert metadata_len > 2 and json_start in (b'{"', b"{'"), f"{filename} is not a safetensors file"
AssertionError: C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\unlimitedReplicant_v10.safetensors is not a safetensors file
2023-07-12 13:46:53,471 - ControlNet - INFO - ControlNet v1.1.232
ControlNet preprocessor location: C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2023-07-12 13:46:53,548 - ControlNet - INFO - ControlNet v1.1.232
Loading weights [c348e5681e] from C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\muaccamix_v15.safetensors
preload_extensions_git_metadata for 8 extensions took 0.13s
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.2s (import torch: 2.2s, import gradio: 1.0s, import ldm: 0.5s, other imports: 1.2s, load scripts: 1.3s, create ui: 0.4s, gradio launch: 0.5s).
Creating model from config: C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Applying attention optimization: sub-quadratic... done.
Textual inversion embeddings loaded(0):
Model loaded in 7.1s (load weights from disk: 0.7s, find config: 2.4s, create model: 0.2s, apply weights to model: 1.9s, apply half(): 1.0s, move model to device: 0.3s, calculate empty prompt: 0.4s).
Loading weights [e3b0c44298] from C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\unlimitedReplicant_v10.safetensors
changing setting sd_model_checkpoint to unlimitedReplicant_v10.safetensors [e3b0c44298]: SafetensorError
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\shared.py", line 610, in set
self.data_labels[key].onchange()
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\webui.py", line 226, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 568, in reload_model_weights
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 277, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File "C:\stablediff\stable-diffusion-webui-directml\modules\sd_models.py", line 256, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\safetensors\torch.py", line 259, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooSmall
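As an aside, the short hash shown for unlimitedReplicant_v10.safetensors, [e3b0c44298], is the leading part of the SHA-256 of empty input, so the file on disk is almost certainly a zero-byte or aborted download; that would also explain both the earlier AssertionError and the HeaderTooSmall error here. Checking the file size (a suggested diagnostic, not something from the original log) confirms it quickly:
PS C:\stablediff\stable-diffusion-webui-directml> dir .\models\Stable-diffusion\unlimitedReplicant_v10.safetensors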
*** Error completing request
*** Arguments: ('task(d0d406cu3531u31)', 'miku\n', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FAA7416110>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\txt2img.py", line 94, in txt2img
processed = processing.process_images(p)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 623, in process_images
res = process_images_inner(p)
File "C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 732, in process_images_inner
p.setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 1129, in setup_conds
super().setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 346, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 338, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "C:\stablediff\stable-diffusion-webui-directml\modules\prompt_parser.py", line 143, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 236, in encode
return self(text)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 213, in forward
z = self.encode_with_transformer(tokens.to(self.device))
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
---
*** Error completing request
*** Arguments: ('task(rw9uda96ly6wovo)', 'miku\n', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FA000A6620>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\txt2img.py", line 94, in txt2img
processed = processing.process_images(p)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 623, in process_images
res = process_images_inner(p)
File "C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 732, in process_images_inner
p.setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 1129, in setup_conds
super().setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 346, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 338, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "C:\stablediff\stable-diffusion-webui-directml\modules\prompt_parser.py", line 143, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 236, in encode
return self(text)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 213, in forward
z = self.encode_with_transformer(tokens.to(self.device))
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
---
*** Error completing request
*** Arguments: ('task(qgndomumiw4zfai)', 'miku\n', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FAA6F229E0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\txt2img.py", line 94, in txt2img
processed = processing.process_images(p)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 623, in process_images
res = process_images_inner(p)
File "C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 732, in process_images_inner
p.setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 1129, in setup_conds
super().setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 346, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 338, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "C:\stablediff\stable-diffusion-webui-directml\modules\prompt_parser.py", line 143, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 236, in encode
return self(text)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 213, in forward
z = self.encode_with_transformer(tokens.to(self.device))
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
---
Restarting UI...
Closing server running on port: 7860
2023-07-12 13:54:32,359 - ControlNet - INFO - ControlNet v1.1.232
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 0.6s (load scripts: 0.3s, create ui: 0.2s).
preload_extensions_git_metadata for 8 extensions took 0.15s
*** Error completing request
*** Arguments: ('task(jwkb7fcvkg7wpb4)', 'miku', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FB1BC5F010>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\stablediff\stable-diffusion-webui-directml\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\txt2img.py", line 94, in txt2img
processed = processing.process_images(p)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 623, in process_images
res = process_images_inner(p)
File "C:\stablediff\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 732, in process_images_inner
p.setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 1129, in setup_conds
super().setup_conds()
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 346, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "C:\stablediff\stable-diffusion-webui-directml\modules\processing.py", line 338, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "C:\stablediff\stable-diffusion-webui-directml\modules\prompt_parser.py", line 143, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 236, in encode
return self(text)
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 213, in forward
z = self.encode_with_transformer(tokens.to(self.device))
File "C:\stablediff\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
---
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: ## 1.4.0
Commit hash: 265d626471eacd617321bdb51e50e4b87a7ca82e
Installing requirements
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [c348e5681e] from C:\stablediff\stable-diffusion-webui-directml\models\Stable-diffusion\muaccamix_v15.safetensors
preload_extensions_git_metadata for 8 extensions took 0.13s
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 6.6s (import torch: 2.2s, import gradio: 1.0s, import ldm: 0.5s, other imports: 1.2s, load scripts: 1.0s, create ui: 0.5s, gradio launch: 0.2s).
Creating model from config: C:\stablediff\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Applying attention optimization: sub-quadratic... done.
Textual inversion embeddings loaded(0):
Model loaded in 6.3s (load weights from disk: 0.7s, find config: 1.7s, create model: 0.6s, apply weights to model: 1.6s, apply half(): 1.0s, move model to device: 0.3s, calculate empty prompt: 0.4s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:44<00:00, 5.23s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:41<00:00, 5.06s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:41<00:00, 5.08s/it]
To start with, I ran the command given in the announcement, "wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh", and it ended in an error.
# wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
--2023-06-26 11:17:16-- https://my-netdata.io/kickstart.sh
Resolving my-netdata.io (my-netdata.io)... 2606:4700:3031::6815:d9f, 2606:4700:3036::ac43:9cc0, 172.67.156.192, ...
Connecting to my-netdata.io (my-netdata.io)|2606:4700:3031::6815:d9f|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: ‘/tmp/netdata-kickstart.sh’
/tmp/netdata-kickst [ <=> ] 81.38K --.-KB/s in 0.01s
2023-06-26 11:17:17 (6.31 MB/s) - ‘/tmp/netdata-kickstart.sh’ saved [83335]
--- Using /tmp/netdata-kickstart-UrT2UNzClU as a temporary directory. ---
--- Checking for existing installations of Netdata... ---
[/tmp/netdata-kickstart-UrT2UNzClU]# sh -c cat "//etc/netdata/.install-type" > "/tmp/netdata-kickstart-UrT2UNzClU/install-type"
OK
ABORTED Found an existing netdata install at /, but the install type is 'custom', which is not supported by this script, refusing to proceed.
For community support, you can connect with us on:
- GitHub: https://github.com/netdata/netdata/discussions
- Discord: https://discord.gg/5ygS846fR6
- Our community forums: https://community.netdata.cloud/
[/root]# rm -rf /tmp/netdata-kickstart-UrT2UNzClU
OK
#
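The abort is because an existing install registered as type 'custom' was found, which kickstart.sh refuses to touch by default; rerunning it with the --reinstall option, as below, forces a fresh install over the existing one.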
# wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh --reinstall
--2023-06-26 11:17:34-- https://my-netdata.io/kickstart.sh
Resolving my-netdata.io (my-netdata.io)... 2606:4700:3036::ac43:9cc0, 2606:4700:3031::6815:d9f, 104.21.13.159, ...
Connecting to my-netdata.io (my-netdata.io)|2606:4700:3036::ac43:9cc0|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: ‘/tmp/netdata-kickstart.sh’
/tmp/netdata-kickst [ <=> ] 81.38K --.-KB/s in 0.01s
2023-06-26 11:17:34 (6.37 MB/s) - ‘/tmp/netdata-kickstart.sh’ saved [83335]
--- Using /tmp/netdata-kickstart-J62Yweer7w as a temporary directory. ---
--- Attempting to install using native packages... ---
--- Repository configuration is already present, attempting to install netdata. ---
There was an error communicating with OSMS server.
OSMS based repositories will be disabled.
<ProtocolError for http://127.0.0.1:9003/XMLRPC: 500 500 Server Error: Internal Server Error for url: http://127.0.0.1:9003/XMLRPC>
WARNING Could not find a usable native package for ol on aarch64.
--- Attempting to uninstall repository configuration package. ---
[/tmp/netdata-kickstart-J62Yweer7w]# env dnf remove netdata-repo-edge
There was an error communicating with OSMS server.
OSMS based repositories will be disabled.
<ProtocolError for http://127.0.0.1:9003/XMLRPC: 500 500 Server Error: Internal Server Error for url: http://127.0.0.1:9003/XMLRPC>
Dependencies resolved.
================================================================================
Package Architecture Version Repository Size
================================================================================
Removing:
netdata-repo-edge noarch 1-2 @@commandline 580
Transaction Summary
================================================================================
Remove 1 Package
Freed space: 580
Is this ok [y/N]: y
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Erasing : netdata-repo-edge-1-2.noarch 1/1
warning: file /etc/yum.repos.d/netdata-edge.repo: remove failed: No such file or directory
Verifying : netdata-repo-edge-1-2.noarch 1/1
Removed:
netdata-repo-edge-1-2.noarch
Complete!
OK
WARNING Could not install native binary packages, falling back to alternative installation method.
[/tmp/netdata-kickstart-J62Yweer7w]# sh -c /bin/curl https://github.com/netdata/netdata-nightlies/releases/latest -s -L -I -o /dev/null -w '%{url_effective}' | grep -o '[^/]*$'
OK
--- Attempting to install using static build... ---
[/tmp/netdata-kickstart-J62Yweer7w]# /bin/curl --fail -q -sSL --connect-timeout 10 --retry 3 --output /tmp/netdata-kickstart-J62Yweer7w/netdata-aarch64-latest.gz.run https://github.com/netdata/netdata-nightlies/releases/download/v1.40.0-38-nightly/netdata-aarch64-latest.gz.run
OK
[/tmp/netdata-kickstart-J62Yweer7w]# /bin/curl --fail -q -sSL --connect-timeout 10 --retry 3 --output /tmp/netdata-kickstart-J62Yweer7w/sha256sum.txt https://github.com/netdata/netdata-nightlies/releases/download/v1.40.0-38-nightly/sha256sums.txt
OK
--- Installing netdata ---
[/tmp/netdata-kickstart-J62Yweer7w]# sh /tmp/netdata-kickstart-J62Yweer7w/netdata-aarch64-latest.gz.run --
^
|.-. .-. .-. .-. . netdata
| '-' '-' '-' '-' real-time performance monitoring, done right!
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+--->
(C) Copyright 2017-2023, Costa Tsaousis
All rights reserved
Released under GPL v3+
You are about to install netdata to this system.
netdata will be installed at:
/opt/netdata
The following changes will be made to your system:
# USERS / GROUPS
User 'netdata' and group 'netdata' will be added, if not present.
# LOGROTATE
This file will be installed if logrotate is present.
- /etc/logrotate.d/netdata
# SYSTEM INIT
If a supported init system is detected, appropriate configuration will be
installed to allow Netdata to run as a system service. We currently support
systemd, OpenRC, LSB init scripts, and traditional init.d setups, as well as
having experimental support for runit.
This package can also update a netdata installation that has been
created with another version of it.
Your netdata configuration will be retained.
After installation, netdata will be (re-)started.
netdata re-distributes a lot of open source software components.
Check its full license at:
https://github.com/netdata/netdata/blob/master/LICENSE
Please type y to accept, n otherwise: y
Creating directory /opt/netdata
Verifying archive integrity... 100% MD5 checksums are OK. All good.
Uncompressing netdata, the real-time performance and health monitoring system 100%
--- Attempt to create user/group netdata/netadata ---
Group 'netdata' already exists.
User 'netdata' already exists.
--- Add user netdata to required user groups ---
Group 'docker' does not exist.
User 'netdata' is already in group 'nginx'.
Group 'varnish' does not exist.
Group 'haproxy' does not exist.
User 'netdata' is already in group 'adm'.
Group 'nsd' does not exist.
Group 'proxy' does not exist.
Group 'squid' does not exist.
Group 'ceph' does not exist.
User 'netdata' is already in group 'nobody'.
Group 'I2C' does not exist.
--- Install logrotate configuration for netdata ---
[/opt/netdata]# chmod 644 /etc/logrotate.d/netdata
OK ''
--- Telemetry configuration ---
You can opt out from anonymous statistics via the --disable-telemetry option, or by creating an empty file /opt/netdata/etc/netdata/.opt-out-from-anonymous-statistics
--- Install netdata at system init ---
Installing systemd service...
--- Install (but not enable) netdata updater tool ---
cat: /system/systemd/netdata-updater.timer: No such file or directory
cat: /system/systemd/netdata-updater.service: No such file or directory
Update script is located at /opt/netdata/usr/libexec/netdata/netdata-updater.sh
--- creating quick links ---
[/opt/netdata]# ln -s bin sbin
OK ''
[/opt/netdata/usr]# ln -s ../bin bin
OK ''
[/opt/netdata/usr]# ln -s ../bin sbin
OK ''
[/opt/netdata/usr]# ln -s . local
OK ''
[/opt/netdata]# ln -s etc/netdata netdata-configs
OK ''
[/opt/netdata]# ln -s usr/share/netdata/web netdata-web-files
OK ''
[/opt/netdata]# ln -s usr/libexec/netdata netdata-plugins
OK ''
[/opt/netdata]# ln -s var/lib/netdata netdata-dbs
OK ''
[/opt/netdata]# ln -s var/cache/netdata netdata-metrics
OK ''
[/opt/netdata]# ln -s var/log/netdata netdata-logs
OK ''
[/opt/netdata/etc/netdata]# rm orig
OK ''
[/opt/netdata/etc/netdata]# ln -s ../../usr/lib/netdata/conf.d orig
OK ''
--- fix permissions ---
[/opt/netdata]# chmod g+rx,o+rx /opt
OK ''
[/opt/netdata]# find /opt/netdata -type d -exec chmod go+rx {} +
OK ''
[/opt/netdata]# chown -R netdata:netdata /opt/netdata/var
OK ''
--- changing plugins ownership and permissions ---
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/apps.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/perf.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/slabinfo.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/debugfs.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/ioping
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/cgroup-network
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/nfacct.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/python.d.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/charts.d.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/go.d.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/ioping.plugin
OK ''
[/opt/netdata]# chown root:netdata usr/libexec/netdata/plugins.d/cgroup-network-helper.sh
OK ''
[/opt/netdata]# setcap cap_dac_read_search,cap_sys_ptrace=ep usr/libexec/netdata/plugins.d/apps.plugin
OK ''
[/opt/netdata]# setcap cap_dac_read_search=ep usr/libexec/netdata/plugins.d/slabinfo.plugin
OK ''
[/opt/netdata]# setcap cap_dac_read_search=ep usr/libexec/netdata/plugins.d/debugfs.plugin
OK ''
[/opt/netdata]# setcap cap_sys_admin=ep usr/libexec/netdata/plugins.d/perf.plugin
OK ''
[/opt/netdata]# setcap cap_net_admin,cap_net_raw=eip usr/libexec/netdata/plugins.d/go.d.plugin
OK ''
[/opt/netdata]# chmod 4750 usr/libexec/netdata/plugins.d/ioping
OK ''
[/opt/netdata]# chmod 4750 usr/libexec/netdata/plugins.d/cgroup-network
OK ''
[/opt/netdata]# chmod 4750 usr/libexec/netdata/plugins.d/nfacct.plugin
OK ''
Configure TLS certificate paths
Using /etc/pki/tls for TLS configuration and certificates
Save install options
--- starting netdata ---
--- Restarting netdata instance ---
Stopping all netdata threads
[/opt/netdata]# stop_all_netdata
OK ''
Starting netdata using command 'systemctl start netdata'
[/opt/netdata]# systemctl start netdata
OK ''
Downloading default configuration from netdata...
[/opt/netdata]# /bin/curl -sSL --connect-timeout 10 --retry 3 http://localhost:19999/netdata.conf
OK ''
[/opt/netdata]# mv /opt/netdata/etc/netdata/netdata.conf.new /opt/netdata/etc/netdata/netdata.conf
OK ''
OK New configuration saved for you to edit at /opt/netdata/etc/netdata/netdata.conf
^
|.-. .-. .-. .-. .-. . netdata .-. .-. .-. .-. .-. .-
| '-' '-' '-' '-' '-' '-' '-' '-' '-' '-'
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+--->
[/opt/netdata]# chmod 0644 /opt/netdata/etc/netdata/netdata.conf
OK ''
OK
[/tmp/netdata-kickstart-J62Yweer7w]# sh -c cat "/opt/netdata/etc/netdata/.install-type" > "/tmp/netdata-kickstart-J62Yweer7w/install-type"
OK
[/tmp/netdata-kickstart-J62Yweer7w]# chown 0:0 /tmp/netdata-kickstart-J62Yweer7w/install-type
OK
[/tmp/netdata-kickstart-J62Yweer7w]# chown netdata:netdata /tmp/netdata-kickstart-J62Yweer7w/install-type
OK
[/tmp/netdata-kickstart-J62Yweer7w]# cp /tmp/netdata-kickstart-J62Yweer7w/install-type /opt/netdata/etc/netdata/.install-type
OK
[/tmp/netdata-kickstart-J62Yweer7w]# test -x /opt/netdata/usr/libexec/netdata/netdata-updater.sh
OK
[/tmp/netdata-kickstart-J62Yweer7w]# grep -q \-\-enable-auto-updates /opt/netdata/usr/libexec/netdata/netdata-updater.sh
OK
[/tmp/netdata-kickstart-J62Yweer7w]# /opt/netdata/usr/libexec/netdata/netdata-updater.sh --enable-auto-updates
Mon Jun 26 11:18:13 JST 2023 : INFO: netdata-updater.sh: Auto-updating has been ENABLED through cron, updater script linked to /etc/cron.daily/netdata-updater\n
Mon Jun 26 11:18:13 JST 2023 : INFO: netdata-updater.sh: If the update process fails and you have email notifications set up correctly for cron on this system, you should receive an email notification of the failure.
Mon Jun 26 11:18:13 JST 2023 : INFO: netdata-updater.sh: Successful updates will not send an email.
OK
Successfully installed the Netdata Agent.
The following non-fatal warnings or errors were encountered:
- Could not find a usable native package for ol on aarch64.
- Could not install native binary packages, falling back to alternative installation method.
Official documentation can be found online at https://learn.netdata.cloud/docs/.
Looking to monitor all of your infrastructure with Netdata? Check out Netdata Cloud at https://app.netdata.cloud.
Join our community and connect with us on:
- GitHub: https://github.com/netdata/netdata/discussions
- Discord: https://discord.gg/5ygS846fR6
- Our community forums: https://community.netdata.cloud/
[/root]# rm -rf /tmp/netdata-kickstart-J62Yweer7w
OK
#
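At this point the agent is installed under /opt/netdata and, as the install log shows, already answering on the default port 19999; a quick health check (assumed, not part of the original log) would be:
# systemctl status netdata
# curl -sI http://localhost:19999/ | head -1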
# curl -O https://my-netdata.io/kickstart.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 83335 0 83335 0 0 129k 0 --:--:-- --:--:-- --:--:-- 128k
# bash kickstart.sh
kickstart.sh: line 34: cd: kickstart.sh: Not a directory
--- Using /tmp/netdata-kickstart-gOQowVR9j7 as a temporary directory. ---
--- Checking for existing installations of Netdata... ---
[/tmp/netdata-kickstart-gOQowVR9j7]# sh -c cat "//etc/netdata/.install-type" > "/tmp/netdata-kickstart-gOQowVR9j7/install-type"
OK
ABORTED Found an existing netdata install at /, but the install type is 'custom', which is not supported by this script, refusing to proceed.
For community support, you can connect with us on:
- GitHub: https://github.com/netdata/netdata/discussions
- Discord: https://discord.gg/5ygS846fR6
- Our community forums: https://community.netdata.cloud/
[/root]# rm -rf /tmp/netdata-kickstart-gOQowVR9j7
OK
#