Integrating dovecot with Active Directory (memo version)

I plan to write a separate article containing only the final results once the configuration has been fully verified.


I decided to build a mail server integrated with a Windows Server 2025 Active Directory server, with dovecot providing IMAP/POP3 and postfix providing SMTP.

First, I configured the Active Directory server for LDAPS, as described in "Enabling ldaps in an Active Directory environment built with Windows Server 2025".

However, while researching, I found articles saying that when you integrate over LDAP with a stock Windows Server Active Directory server, password attributes such as userPassword, unixUserPassword, and msSFU30Password are not published, so the integration cannot work.

Indeed, when I checked with ldapsearch, those attributes were not among what could be retrieved. Wondering whether this was a showstopper, I ended up doing quite a bit of digging. (In the end, I confirmed that although ldapsearch cannot retrieve it, the password set in Active Directory can be used as-is for dovecot authentication.)

# ldapsearch -x -H ldaps://192.168.122.10 -D "cn=vmail,cn=Users,dc=adsample,dc=local" -w "password" -b "cn=Users,dc=adsample,dc=local" samAccountName=testuser1
# extended LDIF
#
# LDAPv3
# base <cn=Users,dc=adsample,dc=local> with scope subtree
# filter: samAccountName=testuser1
# requesting: ALL
#

# testuser1, Users, adsample.local
dn: CN=testuser1,CN=Users,DC=adsample,DC=local
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: testuser1
givenName: testuser1
distinguishedName: CN=testuser1,CN=Users,DC=adsample,DC=local
instanceType: 4
whenCreated: 20250417094618.0Z
whenChanged: 20250425001141.0Z
displayName: testuser1
uSNCreated: 12609
uSNChanged: 36883
name: testuser1
objectGUID:: H4j5I6UhEEaDahAIt64JeA==
userAccountControl: 66048
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 133900185974014530
lastLogoff: 0
lastLogon: 133900186129696391
pwdLastSet: 133893567784742554
primaryGroupID: 513
objectSid:: AQUAAAAAAAUVAAAArlEnuz4EHgKbAhGoTwQAAA==
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: testuser1
sAMAccountType: 805306368
userPrincipalName: testuser1@adsample.local
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=adsample,DC=local
dSCorePropagationData: 20250418015428.0Z
dSCorePropagationData: 16010101000000.0Z
lastLogonTimestamp: 133900135017739905
mail: testuser1@example.com

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
#

So, looking for a workaround, there is the approach of using sssd to join the Linux host itself to Active Directory and authenticating with pam+sssd. In that case, a UID for each account is issued automatically on the Linux host via sssd, and a directory gets created per account.

I wondered whether it could instead be done virtual-mailbox style, with everything consolidated under a single vmail account, so I decided to just try it.

First, I set things up following "1.2. Setting up a Dovecot server with LDAP authentication" in the Red Hat documentation "Configuring and maintaining mail server services".

Installing dovecot

First, install dovecot:

[root@mail ~]# dnf install dovecot
Last metadata expiration check: 0:15:03 ago on Fri Apr 25 02:14:57 2025.
Dependencies resolved.
================================================================================
 Package        Arch    Version                                Repository  Size
================================================================================
Installing:
 dovecot        x86_64  1:2.3.16-14.el9                        appstream  4.7 M
Installing dependencies:
 clucene-core   x86_64  2.3.3.4-42.20130812.e8e3d20git.el9     appstream  585 k
 libexttextcat  x86_64  3.4.5-11.el9                           appstream  209 k
 libicu         x86_64  67.1-9.el9                             baseos     9.6 M
 libstemmer     x86_64  0-18.585svn.el9                        appstream   82 k

Transaction Summary
================================================================================
Install  5 Packages

Total download size: 15 M
Installed size: 53 M
Is this ok [y/N]: y
Downloading Packages:
(1/5): clucene-core-2.3.3.4-42.20130812.e8e3d20 582 kB/s | 585 kB     00:01
(2/5): libexttextcat-3.4.5-11.el9.x86_64.rpm    207 kB/s | 209 kB     00:01
(3/5): libstemmer-0-18.585svn.el9.x86_64.rpm    995 kB/s |  82 kB     00:00
(4/5): dovecot-2.3.16-14.el9.x86_64.rpm         1.3 MB/s | 4.7 MB     00:03
(5/5): libicu-67.1-9.el9.x86_64.rpm             1.2 MB/s | 9.6 MB     00:07
--------------------------------------------------------------------------------
Total                                           1.3 MB/s |  15 MB     00:11
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : libicu-67.1-9.el9.x86_64                               1/5
  Installing       : libstemmer-0-18.585svn.el9.x86_64                      2/5
  Installing       : libexttextcat-3.4.5-11.el9.x86_64                      3/5
  Installing       : clucene-core-2.3.3.4-42.20130812.e8e3d20git.el9.x86_   4/5
  Running scriptlet: dovecot-1:2.3.16-14.el9.x86_64                         5/5
  Installing       : dovecot-1:2.3.16-14.el9.x86_64                         5/5
  Running scriptlet: dovecot-1:2.3.16-14.el9.x86_64                         5/5
  Verifying        : clucene-core-2.3.3.4-42.20130812.e8e3d20git.el9.x86_   1/5
  Verifying        : dovecot-1:2.3.16-14.el9.x86_64                         2/5
  Verifying        : libexttextcat-3.4.5-11.el9.x86_64                      3/5
  Verifying        : libstemmer-0-18.585svn.el9.x86_64                      4/5
  Verifying        : libicu-67.1-9.el9.x86_64                               5/5

Installed:
  clucene-core-2.3.3.4-42.20130812.e8e3d20git.el9.x86_64
  dovecot-1:2.3.16-14.el9.x86_64
  libexttextcat-3.4.5-11.el9.x86_64
  libicu-67.1-9.el9.x86_64
  libstemmer-0-18.585svn.el9.x86_64

Complete!
[root@mail ~]#

Creating a self-signed certificate for dovecot

Create a self-signed certificate for dovecot; the default validity period is one year, so change it before generating.

First, put an appropriate hostname and administrator e-mail address in /etc/pki/dovecot/dovecot-openssl.cnf:

[root@mail ~]# cat /etc/pki/dovecot/dovecot-openssl.cnf
[ req ]
default_bits = 3072
encrypt_key = yes
distinguished_name = req_dn
x509_extensions = cert_type
prompt = no

[ req_dn ]
# country (2 letter code)
#C=FI

# State or Province Name (full name)
#ST=

# Locality Name (eg. city)
#L=Helsinki

# Organization (eg. company)
#O=Dovecot

# Organizational Unit Name (eg. section)
OU=IMAP server

# Common Name (*.example.com is also possible)
CN=mail.adsample.local

# E-mail contact
emailAddress=postmaster@adsample.local

[ cert_type ]
nsCertType = server
[root@mail ~]#

Normally the self-signed certificate is created by running /usr/share/doc/dovecot/mkcert.sh on dovecot's first startup. The validity is one year because the script hardcodes "-days 365", so make a copy and change it to "-days 3650" or similar:

[root@mail ~]# cp /usr/share/doc/dovecot/mkcert.sh .
[root@mail ~]# vi mkcert.sh
[root@mail ~]# diff -u /usr/share/doc/dovecot/mkcert.sh mkcert.sh
--- /usr/share/doc/dovecot/mkcert.sh    2024-10-03 05:08:31.000000000 +0900
+++ mkcert.sh   2025-04-25 02:55:09.510440927 +0900
@@ -34,7 +34,7 @@
   exit 1
 fi

-$OPENSSL req -new -x509 -nodes -config $OPENSSLCONFIG -out $CERTFILE -keyout $KEYFILE -days 365 || exit 2
+$OPENSSL req -new -x509 -nodes -config $OPENSSLCONFIG -out $CERTFILE -keyout $KEYFILE -days 3650 || exit 2
 chown root:root $CERTFILE $KEYFILE
 chmod 0600 $CERTFILE $KEYFILE
 echo
[root@mail ~]#
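If you prefer a non-interactive edit, the same one-line change can be scripted with sed. Here is a sketch run against a stand-in copy containing just the relevant line (on the real host you would first copy /usr/share/doc/dovecot/mkcert.sh, as above):

```shell
# Stand-in for the one line of mkcert.sh that sets the validity period.
printf '%s\n' \
  '$OPENSSL req -new -x509 -nodes -config $OPENSSLCONFIG -out $CERTFILE -keyout $KEYFILE -days 365 || exit 2' \
  > mkcert.sh
# Bump the certificate lifetime from 1 year to 10 years.
sed -i 's/-days 365 /-days 3650 /' mkcert.sh
grep -o -- '-days 3650' mkcert.sh
```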

Then run the modified mkcert.sh to create the certificate:

[root@mail ~]# bash mkcert.sh
/etc/pki/dovecot/certs/dovecot.pem already exists, won't overwrite
[root@mail ~]# rm /etc/pki/dovecot/certs/dovecot.pem
rm: remove regular file '/etc/pki/dovecot/certs/dovecot.pem'? y
[root@mail ~]# rm /etc/pki/dovecot/private/dovecot.pem
rm: remove regular file '/etc/pki/dovecot/private/dovecot.pem'? y
[root@mail ~]# bash mkcert.sh
(key generation progress output omitted)
-----

subject=OU=IMAP server, CN=mail.adsample.local, emailAddress=postmaster@adsample.local
SHA1 Fingerprint=DD:2E:9B:1A:6A:84:07:03:EF:6E:7F:D4:7A:03:39:F0:24:FC:0E:2A
[root@mail ~]# 

The files are created; run "openssl x509 -noout -dates -in <file>" and confirm the validity period is roughly ten years:

[root@mail ~]# ls -ltR /etc/pki/dovecot/
/etc/pki/dovecot/:
total 8
drwxr-xr-x. 2 root root  25 Apr 25 02:56 certs
drwxr-xr-x. 2 root root  25 Apr 25 02:56 private
-rw-r--r--. 1 root root 502 Apr 25 02:45 dovecot-openssl.cnf
-rw-r--r--. 1 root root 496 Apr 25 02:45 dovecot-openssl.cnf.org

/etc/pki/dovecot/certs:
total 4
-rw-------. 1 root root 1619 Apr 25 02:56 dovecot.pem

/etc/pki/dovecot/private:
total 4
-rw-------. 1 root root 2484 Apr 25 02:56 dovecot.pem
[root@mail ~]# openssl x509 -noout -dates -in /etc/pki/dovecot/certs/dovecot.pem
notBefore=Apr 24 17:56:04 2025 GMT
notAfter=Apr 22 17:56:04 2035 GMT
[root@mail ~]#

Next, create the Diffie-Hellman parameter file, as also described on the Red Hat page:

[root@mail ~]# openssl dhparam -out /etc/dovecot/dh.pem 4096
Generating DH parameters, 4096 bit long safe prime
(progress output omitted)
[root@mail ~]# ls -l /etc/dovecot/dh.pem
-rw-r--r--. 1 root root 773 Apr 25 03:02 /etc/dovecot/dh.pem
[root@mail ~]#

Register the certificate files and the Diffie-Hellman parameter file in /etc/dovecot/conf.d/10-ssl.conf:

ssl_cert and ssl_key stay at their default values
ssl_ca is left unset
uncomment ssl_dh

[root@mail postfix]# cat /etc/dovecot/conf.d/10-ssl.conf
##
## SSL settings
##

# SSL/TLS support: yes, no, required. <doc/wiki/SSL.txt>
# disable plain pop3 and imap, allowed are only pop3+TLS, pop3s, imap+TLS and imaps
# plain imap and pop3 are still allowed for local connections
ssl = no

# PEM encoded X.509 SSL/TLS certificate and private key. They're opened before
# dropping root privileges, so keep the key file unreadable by anyone but
# root. Included doc/mkcert.sh can be used to easily generate self-signed
# certificate, just make sure to update the domains in dovecot-openssl.cnf
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem

# If key file is password protected, give the password here. Alternatively
# give it when starting dovecot with -p parameter. Since this file is often
# world-readable, you may want to place this setting instead to a different
# root owned 0600 file by using ssl_key_password = <path.
#ssl_key_password =

# PEM encoded trusted certificate authority. Set this only if you intend to use
# ssl_verify_client_cert=yes. The file should contain the CA certificate(s)
# followed by the matching CRL(s). (e.g. ssl_ca = </etc/pki/dovecot/certs/ca.pem)
#ssl_ca =

# Require that CRL check succeeds for client certificates.
#ssl_require_crl = yes

# Directory and/or file for trusted SSL CA certificates. These are used only
# when Dovecot needs to act as an SSL client (e.g. imapc backend or
# submission service). The directory is usually /etc/pki/dovecot/certs in
# Debian-based systems and the file is /etc/pki/tls/cert.pem in
# RedHat-based systems. Note that ssl_client_ca_file isn't recommended with
# large CA bundles, because it leads to excessive memory usage.
#ssl_client_ca_dir =
#ssl_client_ca_file =

# Require valid cert when connecting to a remote server
#ssl_client_require_valid_cert = yes

# Request client to send a certificate. If you also want to require it, set
# auth_ssl_require_client_cert=yes in auth section.
#ssl_verify_client_cert = no

# Which field from certificate to use for username. commonName and
# x500UniqueIdentifier are the usual choices. You'll also need to set
# auth_ssl_username_from_cert=yes.
#ssl_cert_username_field = commonName

# SSL DH parameters
# Generate new params with `openssl dhparam -out /etc/dovecot/dh.pem 4096`
# Or migrate from old ssl-parameters.dat file with the command dovecot
# gives on startup when ssl_dh is unset.
ssl_dh = </etc/dovecot/dh.pem

# Minimum SSL protocol version to use. Potentially recognized values are SSLv3,
# TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3, depending on the OpenSSL version used.
#
# Dovecot also recognizes values ANY and LATEST. ANY matches with any protocol
# version, and LATEST matches with the latest version supported by library.
#ssl_min_protocol = TLSv1.2

# SSL ciphers to use, the default is:
#ssl_cipher_list = ALL:!kRSA:!SRP:!kDHd:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK:!RC4:!ADH:!LOW@STRENGTH
# To disable non-EC DH, use:
#ssl_cipher_list = ALL:!DH:!kRSA:!SRP:!kDHd:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK:!RC4:!ADH:!LOW@STRENGTH
ssl_cipher_list = PROFILE=SYSTEM

# Colon separated list of elliptic curves to use. Empty value (the default)
# means use the defaults from the SSL library. P-521:P-384:P-256 would be an
# example of a valid value.
#ssl_curve_list =

# Prefer the server's order of ciphers over client's.
#ssl_prefer_server_ciphers = no

# SSL crypto device to use, for valid values run "openssl engine"
#ssl_crypto_device =

# SSL extra options. Currently supported options are:
#   compression - Enable compression.
#   no_ticket - Disable SSL session tickets.
#ssl_options =
[root@mail postfix]#

Also, the default configuration makes SSL mandatory (ssl=required), so change it to "ssl = yes", which allows but does not require TLS.

[root@mail ~]# vi /etc/dovecot/conf.d/10-ssl.conf
[root@mail ~]# diff -u /etc/dovecot/conf.d/10-ssl.conf.org /etc/dovecot/conf.d/10-ssl.conf
--- /etc/dovecot/conf.d/10-ssl.conf.org 2025-04-25 03:03:27.865411037 +0900
+++ /etc/dovecot/conf.d/10-ssl.conf     2025-04-25 03:28:09.900322146 +0900
@@ -5,7 +5,7 @@
 # SSL/TLS support: yes, no, required. <doc/wiki/SSL.txt>
 # disable plain pop3 and imap, allowed are only pop3+TLS, pop3s, imap+TLS and imaps
 # plain imap and pop3 are still allowed for local connections
-ssl = required
+ssl = yes

 # PEM encoded X.509 SSL/TLS certificate and private key. They're opened before
 # dropping root privileges, so keep the key file unreadable by anyone but
@@ -53,7 +53,7 @@
 # Generate new params with `openssl dhparam -out /etc/dovecot/dh.pem 4096`
 # Or migrate from old ssl-parameters.dat file with the command dovecot
 # gives on startup when ssl_dh is unset.
-#ssl_dh = </etc/dovecot/dh.pem
+ssl_dh = </etc/dovecot/dh.pem

 # Minimum SSL protocol version to use. Potentially recognized values are SSLv3,
 # TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3, depending on the OpenSSL version used.
[root@mail ~]#

Creating a mail administration user

Create a "vmail" user as the mail administration user, as described in the RHEL documentation:

[root@mail ~]# useradd --home-dir /var/mail --shell /usr/sbin/nologin vmail
useradd: warning: the home directory /var/mail already exists.
useradd: Not copying any file from skel directory into it.
[root@mail ~]#

[root@mail ~]# id vmail
uid=1000(vmail) gid=1000(vmail) groups=1000(vmail)
[root@mail ~]#

The home directory is specified above as /var/mail, but since that directory already exists, change the owner of the existing /var/mail:

[root@mail ~]# ls -ld /var/mail
lrwxrwxrwx. 1 root root 10 Oct  3  2024 /var/mail -> spool/mail
[root@mail ~]# ls -ld /var/spool/mail
drwxrwxr-x. 2 root mail 19 Apr 25 03:11 /var/spool/mail
[root@mail ~]#
[root@mail ~]# chown vmail:vmail /var/mail/
[root@mail ~]# chmod 700 /var/mail/
[root@mail ~]# ls -ld /var/mail
lrwxrwxrwx. 1 root root 10 Oct  3  2024 /var/mail -> spool/mail
[root@mail ~]# ls -ld /var/mail/
drwx------. 2 vmail vmail 19 Apr 25 03:11 /var/mail/
[root@mail ~]# ls -ld /var/spool/mail
drwx------. 2 vmail vmail 19 Apr 25 03:11 /var/spool/mail
[root@mail ~]# ls -ld /var/spool/mail/
drwx------. 2 vmail vmail 19 Apr 25 03:11 /var/spool/mail/
[root@mail ~]#

Add the mail storage location setting "mail_location = sdbox:/var/mail/%n/" to /etc/dovecot/conf.d/10-mail.conf.

(Note: specifying sdbox selects the single-dbox variant of the dbox mailbox format; to get maildir format you need to write maildir instead. I later changed this to "mail_location = maildir:/var/mail/%n/Maildir".)

[root@mail ~]# cp /etc/dovecot/conf.d/10-mail.conf /etc/dovecot/conf.d/10-mail.conf.org
[root@mail ~]# vi /etc/dovecot/conf.d/10-mail.conf
[root@mail ~]# diff -u /etc/dovecot/conf.d/10-mail.conf.org /etc/dovecot/conf.d/10-mail.conf
--- /etc/dovecot/conf.d/10-mail.conf.org        2025-04-25 03:13:54.044373479 +0900
+++ /etc/dovecot/conf.d/10-mail.conf    2025-04-25 03:14:17.970372044 +0900
@@ -27,7 +27,7 @@
 #
 # <doc/wiki/MailLocation.txt>
 #
-#mail_location =
+mail_location = sdbox:/var/mail/%n/

 # If you need to set multiple mailbox locations or want to change default
 # namespace settings, you can do it by defining namespace sections.
[root@mail ~]#

Configuring the LDAP integration

First, in /etc/dovecot/conf.d/10-auth.conf, stop including the auth-system.conf.ext file and include the auth-ldap.conf.ext file instead:

[root@mail ~]# cp /etc/dovecot/conf.d/10-auth.conf /etc/dovecot/conf.d/10-auth.conf.org
[root@mail ~]# vi /etc/dovecot/conf.d/10-auth.conf
[root@mail ~]# diff -u /etc/dovecot/conf.d/10-auth.conf.org /etc/dovecot/conf.d/10-auth.conf
--- /etc/dovecot/conf.d/10-auth.conf.org        2025-04-25 03:15:55.357366203 +0900
+++ /etc/dovecot/conf.d/10-auth.conf    2025-04-25 03:16:28.351364224 +0900
@@ -119,9 +119,9 @@
 #!include auth-deny.conf.ext
 #!include auth-master.conf.ext

-!include auth-system.conf.ext
+#!include auth-system.conf.ext
 #!include auth-sql.conf.ext
-#!include auth-ldap.conf.ext
+!include auth-ldap.conf.ext
 #!include auth-passwdfile.conf.ext
 #!include auth-checkpassword.conf.ext
 #!include auth-static.conf.ext
[root@mail ~]#

Add an override_fields setting to the userdb section of /etc/dovecot/conf.d/auth-ldap.conf.ext:

[root@mail ~]# cp /etc/dovecot/conf.d/auth-ldap.conf.ext /etc/dovecot/conf.d/auth-ldap.conf.ext.org
[root@mail ~]# vi /etc/dovecot/conf.d/auth-ldap.conf.ext
[root@mail ~]# diff -u /etc/dovecot/conf.d/auth-ldap.conf.ext.org /etc/dovecot/conf.d/auth-ldap.conf.ext
--- /etc/dovecot/conf.d/auth-ldap.conf.ext.org  2025-04-25 03:17:04.099362080 +0900
+++ /etc/dovecot/conf.d/auth-ldap.conf.ext      2025-04-25 03:17:39.663359947 +0900
@@ -7,6 +7,7 @@

   # Path for LDAP configuration file, see example-config/dovecot-ldap.conf.ext
   args = /etc/dovecot/dovecot-ldap.conf.ext
+  override_fields = uid=vmail gid=vmail home=/var/mail/%n/
 }

 # "prefetch" user database means that the passdb already provided the
[root@mail ~]#

I wondered whether uid and gid might have to be the actual numeric UID/GID values rather than the string "vmail", so I tried both; either works.
("override_fields = uid=1000 gid=1000 home=/var/mail/%n/" was also fine.)
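Put together, the resulting userdb block looks roughly like this (reconstructed from the diff above; the driver and args lines come from the stock auth-ldap.conf.ext):

```
userdb {
  driver = ldap

  # Path for LDAP configuration file, see example-config/dovecot-ldap.conf.ext
  args = /etc/dovecot/dovecot-ldap.conf.ext
  override_fields = uid=vmail gid=vmail home=/var/mail/%n/
}
```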

Next, create the LDAP lookup file /etc/dovecot/dovecot-ldap.conf.ext from scratch... but various issues came up here, so the details are deferred until later.

Firewall configuration

The required ports are not open by default, so open them as needed.

First, check the initial state:

[root@mail ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens160
  sources:
  services: cockpit dhcpv6-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@mail ~]#

Now open the mail-related ports. Since this article covers dovecot (IMAP and POP3), add the imap and pop3 services:

[root@mail ~]# firewall-cmd --permanent --add-service imaps --add-service imap 
success
[root@mail ~]# firewall-cmd --permanent --add-service pop3 --add-service pop3s
success
[root@mail ~]# firewall-cmd --reload
success
[root@mail ~]#  firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens160
  sources:
  services: cockpit dhcpv6-client imap imaps pop3 pop3s ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@mail ~]#

Enabling dovecot at startup

dovecot does not start by default, so configure it to do so:

[root@mail ~]# systemctl status dovecot
○ dovecot.service - Dovecot IMAP/POP3 email server
     Loaded: loaded (/usr/lib/systemd/system/dovecot.service; disabled; preset:>
     Active: inactive (dead)
       Docs: man:dovecot(1)
             https://doc.dovecot.org/
[root@mail ~]# systemctl enable --now dovecot
Created symlink /etc/systemd/system/multi-user.target.wants/dovecot.service → /usr/lib/systemd/system/dovecot.service.
[root@mail ~]# systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
     Loaded: loaded (/usr/lib/systemd/system/dovecot.service; enabled; preset: >
     Active: active (running) since Fri 2025-04-25 03:24:50 JST; 2s ago
       Docs: man:dovecot(1)
             https://doc.dovecot.org/
    Process: 2180 ExecStartPre=/usr/libexec/dovecot/prestartscript (code=exited>
   Main PID: 2186 (dovecot)
     Status: "v2.3.16 (7e2e900c1a) running"
      Tasks: 4 (limit: 10873)
     Memory: 5.3M
        CPU: 92ms
     CGroup: /system.slice/dovecot.service
             tq2186 /usr/sbin/dovecot -F
             tq2187 dovecot/anvil
             tq2188 dovecot/log
             mq2189 dovecot/config

Apr 25 03:24:50 mail.adsample.local systemd[1]: Starting Dovecot IMAP/POP3 emai>
Apr 25 03:24:50 mail.adsample.local dovecot[2186]: master: Dovecot v2.3.16 (7e2>
Apr 25 03:24:50 mail.adsample.local systemd[1]: Started Dovecot IMAP/POP3 email>
[root@mail ~]#

Investigating the LDAP settings needed to use an Active Directory server

The configuration in the RHEL documentation assumes an OpenLDAP server; a Windows Server based directory does not provide posixAccount information by default, so that configuration cannot be used as-is.

So let's find out what information Active Directory's LDAP actually exposes, checking with the ldapsearch command as we go.

To cut to the chase, here is the configuration that my investigation eventually settled on:

[root@mail ~]# cat /etc/dovecot/dovecot-ldap.conf.ext
# Connection settings for the LDAP server
uris=ldaps://192.168.122.10
auth_bind=yes
dn= cn=vmail,cn=Users,dc=adsample,dc=local
dnpass= password

# LDAP search settings
base= cn=Users,dc=adsample,dc=local
scope=subtree

# Filters applied to the search results
user_filter= (samAccountName=%u)
pass_filter= (samAccountName=%u)
[root@mail ~]#

Let me explain each item.

The LDAP server to connect to is specified with "uris=ldaps://servername".

Older documents may show "server_host=hostname" and "server_port=389", but the modern form is "uris=ldap://servername" or "uris=ldaps://servername".

LDAP servers these days are hardened so that directory information cannot be searched without authenticating. "auth_bind=yes" additionally makes dovecot verify each user's password by performing an LDAP bind as that user (which is also why the password attributes never need to be readable).

"dn=" specifies the Active Directory user account used for the lookups. Here it is the "vmail" user created for this purpose.

The following "dnpass=" is that user's Active Directory password, written as-is in plain text.

Next come the settings used to pull information out of LDAP.

In Active Directory, user information lives under "cn=Users,dc=adsample,dc=local", so that is used as the search base.

You can check what information is retrievable with the ldapsearch command:

pass the value of "dn=" as the "-D" option, and
pass the value of "base=" as the "-b" option.

[root@mail ~]#  ldapsearch -x -H ldaps://192.168.122.10 -D "cn=vmail,cn=Users,dc=adsample,dc=local" -w "password" -b "cn=Users,dc=adsample,dc=local" -s subtree
# extended LDIF
#
# LDAPv3
# base <cn=Users,dc=adsample,dc=local> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# Users, adsample.local
dn: CN=Users,DC=adsample,DC=local
objectClass: top
objectClass: container
cn: Users
description: Default container for upgraded user accounts
distinguishedName: CN=Users,DC=adsample,DC=local
instanceType: 4
whenCreated: 20250417093642.0Z
whenChanged: 20250417093642.0Z
uSNCreated: 5672
uSNChanged: 5672
showInAdvancedViewOnly: FALSE
name: Users
objectGUID:: 0+Pn0tolgUSaHrCO7ll4VQ==
systemFlags: -1946157056
objectCategory: CN=Container,CN=Schema,CN=Configuration,DC=adsample,DC=local
isCriticalSystemObject: TRUE
dSCorePropagationData: 20250417093820.0Z
dSCorePropagationData: 16010101000001.0Z

# testuser1, Users, adsample.local
dn: CN=testuser1,CN=Users,DC=adsample,DC=local
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: testuser1
givenName: testuser1
distinguishedName: CN=testuser1,CN=Users,DC=adsample,DC=local
instanceType: 4
whenCreated: 20250417094618.0Z
whenChanged: 20250425001141.0Z
displayName: testuser1
uSNCreated: 12609
uSNChanged: 36883
name: testuser1
objectGUID:: H4j5I6UhEEaDahAIt64JeA==
userAccountControl: 66048
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 133900339076624909
lastLogoff: 0
lastLogon: 133900339256453379
pwdLastSet: 133893567784742554
primaryGroupID: 513
objectSid:: AQUAAAAAAAUVAAAArlEnuz4EHgKbAhGoTwQAAA==
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: testuser1
sAMAccountType: 805306368
userPrincipalName: testuser1@adsample.local
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=adsample,DC=local
dSCorePropagationData: 20250418015428.0Z
dSCorePropagationData: 16010101000000.0Z
lastLogonTimestamp: 133900135017739905
mail: testuser1@example.com

# testuser2, Users, adsample.local
dn: CN=testuser2,CN=Users,DC=adsample,DC=local
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: testuser2
givenName: testuser2
<略>

# search result
search: 2
result: 0 Success

# numResponses: 32
# numEntries: 31
[root@mail ~]#

As you can see, a lot of entries come back.

The filter settings narrow this down.

user_filter is the filter used when looking up a user, and pass_filter is the filter used when verifying that user's password.
Note that these only narrow the search results; they do not specify which attribute is treated as the username or the password.

The example above sets "user_filter= (samAccountName=%u)" and "pass_filter= (samAccountName=%u)".

%u is replaced with the username entered at IMAP/POP3 login, so when a user logs in as testuser1 the effective filter is "samAccountName=testuser1".
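Dovecot performs this %u expansion internally; as a minimal sketch of what the effective filter string ends up as (the expand_filter function here is made up purely for illustration):

```python
def expand_filter(template: str, login_user: str) -> str:
    """Mimic dovecot's %u expansion in user_filter / pass_filter (illustration only)."""
    return template.replace("%u", login_user)

print(expand_filter("(samAccountName=%u)", "testuser1"))
# → (samAccountName=testuser1)
print(expand_filter("(userPrincipalName=%u)", "testuser1@adsample.local"))
# → (userPrincipalName=testuser1@adsample.local)
```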

To check what values such a lookup returns, run ldapsearch with the filter string appended at the end:

[root@mail ~]#  ldapsearch -x -H ldaps://192.168.122.10 -D "cn=vmail,cn=Users,dc=adsample,dc=local" -w "password" -b "cn=Users,dc=adsample,dc=local" -s subtree samAccountName=testuser1
# extended LDIF
#
# LDAPv3
# base <cn=Users,dc=adsample,dc=local> with scope subtree
# filter: samAccountName=testuser1
# requesting: ALL
#

# testuser1, Users, adsample.local
dn: CN=testuser1,CN=Users,DC=adsample,DC=local
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: testuser1
givenName: testuser1
distinguishedName: CN=testuser1,CN=Users,DC=adsample,DC=local
instanceType: 4
whenCreated: 20250417094618.0Z
whenChanged: 20250425001141.0Z
displayName: testuser1
uSNCreated: 12609
uSNChanged: 36883
name: testuser1
objectGUID:: H4j5I6UhEEaDahAIt64JeA==
userAccountControl: 66048
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 133900339076624909
lastLogoff: 0
lastLogon: 133900339256453379
pwdLastSet: 133893567784742554
primaryGroupID: 513
objectSid:: AQUAAAAAAAUVAAAArlEnuz4EHgKbAhGoTwQAAA==
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: testuser1
sAMAccountType: 805306368
userPrincipalName: testuser1@adsample.local
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=adsample,DC=local
dSCorePropagationData: 20250418015428.0Z
dSCorePropagationData: 16010101000000.0Z
lastLogonTimestamp: 133900135017739905
mail: testuser1@example.com

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
[root@mail ~]#

So which of the values above gets treated as the username or the password? Presumably user_attrs and pass_attrs control this, but I could not quite confirm the behavior.

The configuration shown earlier used the bare username; to allow logging in as "username@domain", switching from samAccountName to userPrincipalName made such logins work:

[root@mail ~]# cat /etc/dovecot/dovecot-ldap.conf.ext
# Connection settings for the LDAP server
uris=ldaps://192.168.122.10
auth_bind=yes
dn= cn=vmail,cn=Users,dc=adsample,dc=local
dnpass= password

# LDAP search settings
base= cn=Users,dc=adsample,dc=local
scope=subtree

# Filters applied to the search results
user_filter= (userPrincipalName=%u)
pass_filter= (userPrincipalName=%u)
[root@mail ~]#

Now, userPassword, unixUserPassword, and msSFU30Password, which supposedly hold the password, never show up in the ldapsearch output; yet checking with the doveadm command confirmed that authentication succeeds with the password set in Active Directory.

Verifying dovecot operation

The dovecot command "doveadm auth login username" lets you test whether authentication actually works.

When logging in with just the username:

[root@mail ~]# doveadm auth login testuser1
Password:
passdb: testuser1 auth succeeded
extra fields:
  user=testuser1
userdb extra fields:
  testuser1@adsample.local
  uid=1000
  gid=1000
  home=/var/mail/testuser1/
  auth_mech=PLAIN
  auth_user=testuser1
[root@mail ~]#

With the domain-qualified configuration:

[root@mail ~]# doveadm auth login testuser1@adsample.local
Password:
passdb: testuser1@adsample.local auth succeeded
extra fields:
  user=testuser1@adsample.local
userdb extra fields:
  testuser1@adsample.local
  uid=1000
  gid=1000
  home=/var/mail/testuser1/
  auth_mech=PLAIN
[root@mail ~]#

If it does not work, increase dovecot's log output.

To make the setting easy to toggle on and off, I created a new file, /etc/dovecot/conf.d/99-debug.conf:

[root@mail ~]# cat /etc/dovecot/conf.d/99-debug.conf
auth_debug=yes
auth_debug_passwords=yes
auth_verbose=yes
auth_verbose_passwords=yes
verbose_proctitle=yes
verbose_ssl=yes

[root@mail ~]#

The caveat here is "auth_debug_passwords=yes" and "auth_verbose_passwords=yes": on failures, the string entered as the password is written to the log file, so handle the logs with care.

For example, attempting to log in with just the username when the domain-qualified form is required produced the following error and log:

[root@mail ~]# doveadm auth login testuser1
Password:
passdb: testuser1 auth failed
extra fields:
  user=testuser1
[root@mail ~]#
[root@mail ~]# tail -f /var/log/maillog
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: Module loaded: /usr/lib64/dovecot/auth/lib20_auth_var_expand_crypt.so
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: Read auth token secret from /run/dovecot/auth-token-secret.dat
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: ldap(/etc/dovecot/dovecot-ldap.conf.ext): LDAP initialization took 24 msecs
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: auth client connected (pid=3334)
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: client in: AUTH#0111#011PLAIN#011service=doveadm#011debug#011resp=dGVzdHVzZXIxAHRlc3R1c2VyMQBkaWdpdGFsMTIzQSM= (previous base64 data may contain sensitive data)
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: ldap(testuser1): Performing passdb lookup
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: ldap(testuser1): bind search: base=cn=Users,dc=adsample,dc=local filter=(userPrincipalName=testuser1)
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: ldap(testuser1): no fields returned by the server
Apr 25 18:47:01 mail dovecot[3326]: auth: ldap(testuser1): unknown user (given password: password)
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: ldap(testuser1): Finished passdb lookup
Apr 25 18:47:01 mail dovecot[3326]: auth: Debug: auth(testuser1): Auth request finished
Apr 25 18:47:03 mail dovecot[3326]: auth: Debug: client passdb out: FAIL#0111#011user=testuser1

Because "auth_debug_passwords=yes" and "auth_verbose_passwords=yes" are set, the password string appears in the log.
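Note that even the Debug-level resp= values shown above are sensitive: they are base64-encoded SASL PLAIN messages (authzid, authcid, and password separated by NUL bytes), which anyone with access to the log can decode. A small sketch with dummy credentials:

```python
import base64

# SASL PLAIN payload layout: authzid \0 authcid \0 password (dummy values here)
blob = base64.b64encode(b"testuser1\x00testuser1\x00secret").decode()

# Decoding the blob trivially recovers the credentials
authzid, authcid, password = base64.b64decode(blob).split(b"\x00")
print(authcid.decode(), password.decode())
# → testuser1 secret
```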

A successful login looks like this:

[root@mail ~]# tail -f /var/log/maillog
Apr 25 18:49:05 mail dovecot[3326]: auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
Apr 25 18:49:05 mail dovecot[3326]: auth: Debug: Module loaded: /usr/lib64/dovecot/auth/lib20_auth_var_expand_crypt.so
Apr 25 18:49:05 mail dovecot[3326]: auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
Apr 25 18:49:05 mail dovecot[3326]: auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
Apr 25 18:49:05 mail dovecot[3326]: auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so
Apr 25 18:49:05 mail dovecot[3326]: auth: Debug: Read auth token secret from /run/dovecot/auth-token-secret.dat
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(/etc/dovecot/dovecot-ldap.conf.ext): LDAP initialization took 20 msecs
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: auth client connected (pid=3337)
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: client in: AUTH#0111#011PLAIN#011service=doveadm#011debug#011resp=dGVzdHVzZXIxQGFkc2FtcGxlLmxvY2FsAHRlc3R1c2VyMUBhZHNhbXBsZS5sb2NhbABkaWdpdGFsMTIzQSM= (previous base64 data may contain sensitive data)
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): Performing passdb lookup
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): bind search: base=cn=Users,dc=adsample,dc=local filter=(userPrincipalName=testuser1@adsample.local)
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): no fields returned by the server
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): result:  uid missing
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): Finished passdb lookup
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: auth(testuser1@adsample.local): Auth request finished
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: client passdb out: OK#0111#011user=testuser1@adsample.local
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: master in: REQUEST#0114040032257#0113337#0111#0119908c30ac4ecc1214e5ca9f458d737ff#011session_pid=3337
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): Performing userdb lookup
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): user search: base=cn=Users,dc=adsample,dc=local scope=subtree filter=(userPrincipalName=testuser1@adsample.local) fields=homeDirectory,uidNumber,gidNumber
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): no fields returned by the server
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): result:  homeDirectory missing; uidNumber missing; gidNumber missing
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: ldap(testuser1@adsample.local): Finished userdb lookup
Apr 25 18:49:06 mail dovecot[3326]: auth: Debug: master userdb out: USER#0114040032257#011testuser1@adsample.local#011uid=1000#011gid=1000#011home=/var/mail/testuser1/#011auth_mech=PLAIN

Checking what /var/mail looks like at this point: still empty.

[root@mail ~]# ls -l /var/mail/
total 0
[root@mail ~]#

Try a manual POP3 login with the telnet command:

[root@mail ~]# telnet localhost 110
Trying ::1...
Connected to localhost.
Escape character is '^]'.
+OK Dovecot ready.
user testuser1
+OK
pass password
+OK Logged in.
quit
+OK Logging out.
Connection closed by foreign host.
[root@mail ~]# 

A directory has been created under /var/mail/:

[root@mail ~]# ls -l /var/mail/
total 0
drwx------. 3 vmail vmail 116 Apr 25  2025 testuser1
[root@mail ~]#

However, it had been created in dbox (sdbox) format rather than Maildir:

[root@mail ~]# ls -ltR /var/mail/testuser1/
/var/mail/testuser1/:
total 8
-rw-------. 1 vmail vmail 452 Apr 25  2025 dovecot.list.index.log
-rw-------. 1 vmail vmail   8 Apr 25  2025 dovecot-uidvalidity
-r--r--r--. 1 vmail vmail   0 Apr 25  2025 dovecot-uidvalidity.680b65c4
drwx------. 3 vmail vmail  19 Apr 25  2025 mailboxes

/var/mail/testuser1/mailboxes:
total 0
drwx------. 3 vmail vmail 24 Apr 25  2025 INBOX

/var/mail/testuser1/mailboxes/INBOX:
total 0
drwx------. 2 vmail vmail 31 Apr 25  2025 dbox-Mails

/var/mail/testuser1/mailboxes/INBOX/dbox-Mails:
total 4
-rw-------. 1 vmail vmail 224 Apr 25  2025 dovecot.index.log
[root@mail ~]#

Switching to Maildir format

Reviewing the configuration: in the Red Hat procedure, the mail_location parameter in /etc/dovecot/conf.d/10-mail.conf had been set to "sdbox:~", which means storing mail in the single-dbox variant of the dbox mailbox format.

Change it to "mail_location = maildir:/var/mail/%n/Maildir":

[root@mail dovecot]# diff -u /etc/dovecot/conf.d/10-mail.conf.org /etc/dovecot/conf.d/10-mail.conf
--- /etc/dovecot/conf.d/10-mail.conf.org        2025-04-25 03:13:54.044373479 +0900
+++ /etc/dovecot/conf.d/10-mail.conf    2025-04-30 10:59:12.661404241 +0900
@@ -27,7 +27,7 @@
 #
 # <doc/wiki/MailLocation.txt>
 #
-#mail_location =
+mail_location = maildir:/var/mail/%n/Maildir

 # If you need to set multiple mailbox locations or want to change default
 # namespace settings, you can do it by defining namespace sections.
[root@mail dovecot]# systemctl restart dovecot
[root@mail dovecot]#

For now, delete the mail directory that had been created with the sdbox setting:

[root@mail dovecot]# ls /var/mail/testuser2/
dovecot-uidvalidity           dovecot.list.index.log
dovecot-uidvalidity.680b6784  mailboxes
[root@mail dovecot]# rm -rf /var/mail/testuser2/
[root@mail dovecot]# ls -l /var/mail/testuser2/
ls: cannot access '/var/mail/testuser2/': No such file or directory
[root@mail dovecot]# 

Run the authentication test and a POP3 login test:

[root@mail dovecot]# doveadm auth login testuser2@adsample.local
Password: password
passdb: testuser2@adsample.local auth succeeded
extra fields:
  user=testuser2@adsample.local
userdb extra fields:
  testuser2@adsample.local
  uid=1000
  gid=1000
  home=/var/mail/testuser2/
  auth_mech=PLAIN
[root@mail dovecot]# ls -l /var/mail/testuser2/
ls: cannot access '/var/mail/testuser2/': No such file or directory
[root@mail dovecot]# telnet localhost 110
Trying ::1...
Connected to localhost.
Escape character is '^]'.
+OK Dovecot ready.
user testuser2@adsample.local
+OK
pass password
+OK Logged in.
quit
+OK Logging out.
Connection closed by foreign host.
[root@mail dovecot]# 

After the POP3 login, confirm that the mailbox was created in Maildir format:

[root@mail dovecot]# ls -l /var/mail/testuser2/
total 4
drwx------. 5 vmail vmail 4096 Apr 30 11:00 Maildir
[root@mail dovecot]# ls -l /var/mail/testuser2/Maildir/
total 16
drwx------. 2 vmail vmail   6 Apr 30 11:00 cur
-rw-------. 1 vmail vmail  51 Apr 30 11:00 dovecot-uidlist
-rw-------. 1 vmail vmail   8 Apr 30 11:00 dovecot-uidvalidity
-r--r--r--. 1 vmail vmail   0 Apr 30 11:00 dovecot-uidvalidity.68118427
-rw-------. 1 vmail vmail 320 Apr 30 11:00 dovecot.index.log
-rw-------. 1 vmail vmail 452 Apr 30 11:00 dovecot.list.index.log
-rw-------. 1 vmail vmail   0 Apr 30 11:00 maildirfolder
drwx------. 2 vmail vmail   6 Apr 30 11:00 new
drwx------. 2 vmail vmail   6 Apr 30 11:00 tmp
[root@mail dovecot]#

Incidentally, for testuser1 I logged in again with the maildir setting while leaving the mailboxes directory and friends in place, and ended up with both directory trees side by side:

[root@mail dovecot]# ls -l /var/mail/
total 0
drwx------. 3 vmail vmail 116 Apr 25 19:36 testuser1
drwx------. 3 vmail vmail  21 Apr 30 11:00 testuser2
[root@mail dovecot]# ls -l /var/mail/testuser1
total 8
-rw-------. 1 vmail vmail   8 Apr 25 19:36 dovecot-uidvalidity
-r--r--r--. 1 vmail vmail   0 Apr 25 19:36 dovecot-uidvalidity.680b65c4
-rw-------. 1 vmail vmail 452 Apr 25 19:36 dovecot.list.index.log
drwx------. 3 vmail vmail  19 Apr 25 19:36 mailboxes
[root@mail dovecot]# telnet localhost 110
Trying ::1...
Connected to localhost.
Escape character is '^]'.
+OK Dovecot ready.
user testuser1@adsample.local
+OK
pass password
+OK Logged in.
quit
+OK Logging out.
Connection closed by foreign host.
[root@mail dovecot]# ls -l /var/mail/testuser1
total 12
drwx------. 5 vmail vmail 4096 Apr 30 11:05 Maildir
-rw-------. 1 vmail vmail    8 Apr 25 19:36 dovecot-uidvalidity
-r--r--r--. 1 vmail vmail    0 Apr 25 19:36 dovecot-uidvalidity.680b65c4
-rw-------. 1 vmail vmail  452 Apr 25 19:36 dovecot.list.index.log
drwx------. 3 vmail vmail   19 Apr 25 19:36 mailboxes
[root@mail dovecot]#

Enabling ldaps in an Active Directory environment built with Windows Server 2025

I built an Active Directory environment on Windows Server 2025.

The forest/domain functional level was set to Windows Server 2016.

Running the ldapsearch command against this environment from an AlmaLinux 9 server failed with "ldap_bind: Strong(er) authentication required (8)":

# ldapsearch -x -D "cn=administrator,cn=users,dc=adsample,dc=local" -w "password" -H ldap://192.168.122.10 -b "CN=testuser1,CN=Users,DC=adsample,dc=local" -s base
ldap_bind: Strong(er) authentication required (8)
        additional info: 00002028: LdapErr: DSID-0C0903CB, comment: The server requires binds to turn on integrity checking if SSL\TLS are not already active on the connection, data 0, v65f4
#

Apparently, as a security hardening measure, connections must now use LDAP signing or LDAPS.

See Microsoft's announcement "[AD管理者向け] 2020 年 LDAP 署名と LDAP チャネルバインディングが有効化。確認を!" ([For AD administrators] LDAP signing and LDAP channel binding enabled in 2020. Please check!).

So let's try ldaps instead, which also fails:

# ldapsearch -x -D "cn=administrator,cn=users,dc=adsample,dc=local" -w "password" -H ldaps://192.168.122.10 -b "CN=testuser1,CN=Users,DC=adsample,dc=local" -s base
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
#

Run just a connection test in debug mode:

# ldapsearch -x -d -1 -H ldaps://192.168.122.10
ldap_url_parse_ext(ldaps://192.168.122.10)
ldap_create
ldap_url_parse_ext(ldaps://192.168.122.10:636/??base)
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP 192.168.122.10:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 192.168.122.10:636
ldap_pvt_connect: fd: 3 tm: -1 async: 0
attempting to connect:
connect success
TLS trace: SSL_connect:before SSL initialization
tls_write: want=302 error=Connection reset by peer
TLS trace: SSL_connect:error in SSLv3/TLS write client hello
TLS: can't connect: .
ldap_err2string
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
#

It seems no certificate is configured.

To confirm, also try connecting with openssl s_client:

# openssl s_client -connect  192.168.122.10:636
Connecting to 192.168.122.10
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 302 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
#

Cannot connect.

Dell's site has an article, "How to configure LDAPS for Active Directory integration", which worked as-is.

First, log in to the Active Directory server, run "ldp.exe", and try connecting with the domain name, port 389, and the SSL checkbox off.

This is a connection test for plain ldap.

If information comes back, there is no problem.

Next, to test an ldaps connection, check the SSL box and connect on port 636.

This failed with a "cannot connect" error.

So a certificate needs to be created.

To create one, run "New-SelfSignedCertificate -DnsName adtest.adsample.local,adtest -CertStoreLocation cert:\LocalMachine\My" from PowerShell.

After -DnsName, specify the Active Directory server's FQDN and its short hostname, comma-separated.

Running this issues a certificate under [Personal]-[Certificates] in "certlm.msc".

One caveat: the certificate is valid for one year, so it must be renewed within a year.

To change the period, pass an end date to New-SelfSignedCertificate's -NotAfter option.

Untested, but "-NotAfter (Get-Date).AddMonths(36)" should presumably produce a three-year certificate.

Copy this certificate from the right-click menu and paste it into [Trusted Root Certification Authorities]-[Certificates].

Connect again from ldp.exe with port 636 and SSL checked, and confirm that the connection now succeeds.

Back on the Linux side, first verify with openssl s_client:

# openssl s_client -connect  192.168.122.10:636
Connecting to 192.168.122.10
CONNECTED(00000003)
Can't use SSL_get_servername
depth=0 CN=adtest.adsample.local
verify error:num=18:self-signed certificate
verify return:1
depth=0 CN=adtest.adsample.local
verify return:1
---
<略>
    Start Time: 1744940383
    Timeout   : 7200 (sec)
    Verify return code: 18 (self-signed certificate)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
^C
#

This confirms the certificate is now in place.

Running ldapsearch again, the LDAPS connection and the bind now succeed (the search below returned "No such object" for this DN, but the TLS session and authentication themselves completed):

# ldapsearch -x -D "cn=administrator,cn=users,dc=adsample,dc=local" -w "password" -H ldaps://192.168.122.10 -b "CN=testuser1,CN=Users,DC=adsample,dc=local" -s base
# extended LDIF
#
# LDAPv3
# base <CN=testuser1,CN=Users,DC=adsample,dc=local> with scope baseObject
# filter: (objectclass=*)
# requesting: ALL
#

# search result
search: 2
result: 32 No such object
matchedDN: CN=Users,DC=adsample,DC=local
text: 0000208D: NameErr: DSID-0310028F, problem 2001 (NO_OBJECT), data 0, best
 match of:
        'CN=Users,DC=adsample,DC=local'


# numResponses: 1
#

iredmail operation notes, 2024/12/19 edition

iRedMail is a tool built on postfix+dovecot for running a mail server hosting multiple domains.

It can be used much like vpopmail on the ancient qmail, but if you want a separate administrator per domain you need the paid edition, iRedAdmin-Pro.

I migrated from vpopmail to an iredmail/CentOS 7 environment in 2018, and have now finally finished migrating to a new iredmail/AlmaLinux 9 server.

So here are my notes on what I configured when replacing the CentOS 7 based iRedMail server with an AlmaLinux 9 based one.

For installing iRedMail itself, see the official Install iRedMail on Red Hat Enterprise Linux, CentOS. For the additional SSL work, see Request a free cert from Let's Encrypt (for servers deployed with downloadable iRedMail installer).

Topics

(0) About migrating the mysql DB
(1) New installs now DKIM-sign all outgoing mail, but there is a pitfall
(2) Don't forget to install rsync for mail synchronization
(3) Migrating greylisting data
(4) Dealing with legacy clients
(5) Mail from our own domains sent via other servers gets rejected
(6) Barracudacentral rejects too much mail
(7) Un-blocking mail from mail.goo.ne.jp
(8) Partially lifting the rejection of hostnames containing IP addresses
(9) Handling mail servers whose HELO hostname is not registered in DNS
(10) Changing SOGo's time zone
(11) Checking how things are running after the switchover
(12) Porting the spamassassin configuration
(13) Tuning the logwatch configuration
(14) Customizing logwatch's dovecot script
(15) Tuning the logrotate configuration
(16) Let's Encrypt SSL using dehydrated
(17) spamhaus listing detection in logwatch?

(0) About migrating the mysql DB

Mail bodies can be copied wholesale with rsync: just copy each domain directory under /var/vmail/vmail1.

For the mysql dbs, running "/var/vmail/backup/backup_mysql.sh" creates a directory "/var/vmail/backup/mysql/20xx/month/day" containing data like the following:

# ls /var/vmail/backup/mysql/2024/12/15
2024-12-15-03-30-01.log
amavisd-2024-12-15-03-30-01.sql.bz2
iredadmin-2024-12-15-03-30-01.sql.bz2
iredapd-2024-12-15-03-30-01.sql.bz2
mysql-2024-12-15-03-30-01.sql.bz2
roundcubemail-2024-12-15-03-30-01.sql.bz2
sogo-2024-12-15-03-30-01.sql.bz2
vmail-2024-12-15-03-30-01.sql.bz2
#

Of these, the mysql database itself must not be migrated: restoring it would overwrite the user credentials used to access the mysql db on the new server.

The mysql dbs to migrate are:

vmail: user information used by postfix/dovecot
iredadmin: iredmail's management data
iredapd: management data such as greylist information
amavisd: used by amavisd, which inspects incoming mail

Following "Backup and restore":
1) Stop the mail services with "systemctl stop postfix" and "systemctl stop dovecot"
2) Copy /var/vmail/vmail1 to the new server with rsync
3) Run /var/vmail/backup/backup_mysql.sh to take a fresh backup
4) Copy /var/vmail/backup/mysql/20xx/month/day to the new server
5) Decompress the bzip2-compressed mysql dumps on the new server
6) Start "mysql -u root databasename" and restore with "source databasename-date.sql"

(1) New installs now DKIM-sign all outgoing mail, but there is a pitfall

As described under "Use one DKIM key for all mail domains" in Sign DKIM signature on outgoing emails for new mail domain, the default setup signs mail from every hosted domain with the main domain's key.

However, a DKIM signature only validates when the signing domain matches the sender domain, so without proper per-domain configuration gmail reports FAIL.

For example, if the main domain is osakana.net, all outgoing mail is configured to be signed as osakana.net.
(Register a TXT record at dkim._domainkey.osakana.net containing v=DKIM1; p=<content derived from /var/lib/dkim/osakana.pem>; the value can be checked with amavisd -c /etc/amavisd/amavisd.conf showkeys.)

Even when the adosakana.local domain is also hosted, unless configured otherwise its mail goes out signed as osakana.net, and gmail reports FAIL.

Registering v=DKIM1; p=<content derived from /var/lib/dkim/osakana.pem> at dkim._domainkey.adosakana.local does not help either: the dkim domain recorded in each mail header is d=osakana.net, and because the address domain adosakana.local differs from the header's dkim domain osakana.net, verification fails.

To prevent this, add the following to amavisd.conf between dkim_key and @dkim_signature_options_bysender_maps:

dkim_key('osakana.net', "dkim", "/var/lib/dkim/osakana.net.pem");
dkim_key('adosakana.local', "dkim", "/var/lib/dkim/osakana.net.pem");

The pem file itself may be the same as the main domain's.

In that case, the data registered in the dkim._domainkey.adosakana.local TXT record is exactly the same as in dkim._domainkey.osakana.net, and that worked fine.
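In zone-file terms, the two TXT records would then be identical (the p= value below is a placeholder for the actual public key, not real data):

```
dkim._domainkey.osakana.net.     IN TXT "v=DKIM1; p=<public key derived from osakana.net.pem>"
dkim._domainkey.adosakana.local. IN TXT "v=DKIM1; p=<public key derived from osakana.net.pem>"
```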

Then, after @dkim_signature_options_bysender_maps = ({, add entries like the following so that the dkim keys are actually applied:

"osakana.net" => { d => "osakana.net", a => 'rsa-sha256', ttl => 10*24*3600 },
"adosakana.local" => { d => "adosakana.local", a => 'rsa-sha256', ttl => 10*24*3600 },

(2) Don't forget to install rsync for mail synchronization

A minimal install does not include rsync, so if you are going to synchronize /var/vmail/vmail1 between the old and new servers with rsync, remember to install it on both.

I also added bind-utils, since the nslookup and dig commands are handy for hostname/IP checks.

(3) Migrating greylisting data

The greylisting settings managed by iredapd need to be migrated.

They are contained in the iredapd mysql db, so migrating that is sufficient.

The contents can be inspected as follows.

To list the domains and IP addresses exempt from greylisting, run "/opt/iredapd/tools/greylisting_admin.py --list":

# /opt/iredapd/tools/greylisting_admin.py --list
Status   Sender                             -> Local Account
------------------------------------------------------------------------------
disabled 1xx.xx.xxx.230                     -> @. (anyone)
disabled 1xx.xxx.xxx.84                     -> @. (anyone)
disabled @.salesforce.com                   -> @. (anyone)
enabled  @. (anyone)                        -> @. (anyone)
#

In the output above, mail from salesforce.com and from two specific IP addresses is exempt from greylisting (disabled).

Next, check whether any whitelisted domains or specific addresses are registered by running:

/opt/iredapd/tools/greylisting_admin.py --list-whitelist-domains
/opt/iredapd/tools/greylisting_admin.py --list-whitelists

For reference, the current defaults were as follows:

# /opt/iredapd/tools/greylisting_admin.py --list-whitelist-domains
amazon.com
aol.com
cloudfiltering.com
cloudflare.com
constantcontact.com
craigslist.org
cust-spf.exacttarget.com
ebay.com
exacttarget.com
facebook.com
facebookmail.com
fbmta.com
fishbowl.com
github.com
gmx.com
google.com
hotmail.com
icloud.com
icontact.com
inbox.com
instagram.com
iredmail.org
linkedin.com
mail.com
mailchimp.com
mailgun.com
mailjet.com
messagelabs.com
microsoft.com
outlook.com
paypal.com
pinterest.com
reddit.com
salesforce.com
sbcglobal.net
sendgrid.com
sendgrid.net
serverfault.com
stackoverflow.com
tumblr.com
twitter.com
yahoo.com
yandex.ru
zendesk.com
zoho.com
# /opt/iredapd/tools/greylisting_admin.py --list-whitelists
10.162.0.0/16 -> @., 'AUTO-UPDATE: icloud.com'
103.151.192.0/23 -> @., 'AUTO-UPDATE: cloudflare.com'
103.28.42.0/24 -> @., 'AUTO-UPDATE: ebay.com'
103.9.96.0/22 -> @., 'AUTO-UPDATE: messagelabs.com'
104.130.122.0/23 -> @., 'AUTO-UPDATE: mailgun.com'
104.130.96.0/28 -> @., 'AUTO-UPDATE: mailgun.com'
104.43.243.237 -> @., 'AUTO-UPDATE: zendesk.com'
104.44.112.128/25 -> @., 'AUTO-UPDATE: microsoft.com'
104.47.0.0/17 -> @., 'AUTO-UPDATE: github.com'
104.47.108.0/23 -> @., 'AUTO-UPDATE: hotmail.com'
104.47.20.0/23 -> @., 'AUTO-UPDATE: hotmail.com'
104.47.75.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
106.50.16.0/28 -> @., 'AUTO-UPDATE: amazon.com'
107.20.18.111/32 -> @., 'AUTO-UPDATE: fishbowl.com'
107.20.210.250 -> @., 'AUTO-UPDATE: mailchimp.com'
108.174.0.0/24 -> @., 'AUTO-UPDATE: linkedin.com'
108.174.0.215 -> @., 'AUTO-UPDATE: linkedin.com'
108.174.3.0/24 -> @., 'AUTO-UPDATE: linkedin.com'
108.174.3.215 -> @., 'AUTO-UPDATE: linkedin.com'
108.174.6.0/24 -> @., 'AUTO-UPDATE: linkedin.com'
108.174.6.215 -> @., 'AUTO-UPDATE: linkedin.com'
108.175.18.45 -> @., 'AUTO-UPDATE: paypal.com'
108.175.30.45 -> @., 'AUTO-UPDATE: paypal.com'
108.177.8.0/21 -> @., 'AUTO-UPDATE: cloudflare.com'
108.177.96.0/19 -> @., 'AUTO-UPDATE: cloudflare.com'
108.179.144.0/20 -> @., 'AUTO-UPDATE: fishbowl.com'
111.221.112.0/21 -> @., 'AUTO-UPDATE: hotmail.com'
111.221.23.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
111.221.26.0/27 -> @., 'AUTO-UPDATE: hotmail.com'
111.221.66.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
111.221.69.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
112.19.199.64/29 -> @., 'AUTO-UPDATE: icloud.com'
112.19.242.64/29 -> @., 'AUTO-UPDATE: icloud.com'
117.120.16.0/21 -> @., 'AUTO-UPDATE: messagelabs.com'
12.130.86.238 -> @., 'AUTO-UPDATE: paypal.com'
121.244.91.48/32 -> @., 'AUTO-UPDATE: zoho.com'
122.15.156.182/32 -> @., 'AUTO-UPDATE: zoho.com'
128.17.0.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.17.128.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.17.192.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.17.64.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.0.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.176.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.240.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.241.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.242.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.242.16 -> @., 'AUTO-UPDATE: exacttarget.com'
128.245.242.17 -> @., 'AUTO-UPDATE: exacttarget.com'
128.245.242.18 -> @., 'AUTO-UPDATE: exacttarget.com'
128.245.243.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.244.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.245.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.246.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.247.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.248.0/21 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
128.245.64.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
129.145.74.12 -> @., 'AUTO-UPDATE: mailchimp.com'
129.146.147.105 -> @., 'AUTO-UPDATE: mailchimp.com'
129.146.236.58 -> @., 'AUTO-UPDATE: mailchimp.com'
129.146.88.28 -> @., 'AUTO-UPDATE: mailchimp.com'
129.151.67.221 -> @., 'AUTO-UPDATE: mailchimp.com'
129.153.104.71 -> @., 'AUTO-UPDATE: mailchimp.com'
129.153.168.146 -> @., 'AUTO-UPDATE: mailchimp.com'
129.153.190.200 -> @., 'AUTO-UPDATE: mailchimp.com'
129.153.194.228 -> @., 'AUTO-UPDATE: mailchimp.com'
129.153.62.216 -> @., 'AUTO-UPDATE: mailchimp.com'
129.154.255.129 -> @., 'AUTO-UPDATE: mailchimp.com'
129.158.56.255 -> @., 'AUTO-UPDATE: constantcontact.com'
129.159.22.159 -> @., 'AUTO-UPDATE: mailchimp.com'
129.159.87.137 -> @., 'AUTO-UPDATE: mailchimp.com'
129.213.195.191 -> @., 'AUTO-UPDATE: mailchimp.com'
129.41.169.249 -> @., 'AUTO-UPDATE: exacttarget.com'
129.41.77.70 -> @., 'AUTO-UPDATE: paypal.com'
129.80.145.156 -> @., 'AUTO-UPDATE: constantcontact.com'
129.80.5.164 -> @., 'AUTO-UPDATE: mailchimp.com'
129.80.64.36 -> @., 'AUTO-UPDATE: constantcontact.com'
129.80.67.121 -> @., 'AUTO-UPDATE: mailchimp.com'
13.110.208.0/21 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
13.110.209.0/24 -> @., 'AUTO-UPDATE: exacttarget.com'
13.110.216.0/22 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
13.110.224.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
13.111.0.0/16 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
13.111.191.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
130.162.39.83 -> @., 'AUTO-UPDATE: mailchimp.com'
130.211.0.0/22 -> @., 'AUTO-UPDATE: cloudflare.com'
130.61.9.72 -> @., 'AUTO-UPDATE: mailchimp.com'
131.253.121.0/26 -> @., 'AUTO-UPDATE: microsoft.com'
131.253.30.0/24 -> @., 'AUTO-UPDATE: microsoft.com'
132.145.13.209 -> @., 'AUTO-UPDATE: mailchimp.com'
132.226.26.225 -> @., 'AUTO-UPDATE: mailchimp.com'
132.226.49.32 -> @., 'AUTO-UPDATE: mailchimp.com'
132.226.56.24 -> @., 'AUTO-UPDATE: mailchimp.com'
134.170.113.0/26 -> @., 'AUTO-UPDATE: microsoft.com'
134.170.141.64/26 -> @., 'AUTO-UPDATE: microsoft.com'
134.170.143.0/24 -> @., 'AUTO-UPDATE: microsoft.com'
134.170.174.0/24 -> @., 'AUTO-UPDATE: microsoft.com'
134.170.27.8 -> @., 'AUTO-UPDATE: microsoft.com'
135.84.80.0/24 -> @., 'AUTO-UPDATE: zoho.com'
135.84.81.0/24 -> @., 'AUTO-UPDATE: zoho.com'
135.84.82.0/24 -> @., 'AUTO-UPDATE: zoho.com'
135.84.83.0/24 -> @., 'AUTO-UPDATE: zoho.com'
136.143.160.0/24 -> @., 'AUTO-UPDATE: zoho.com'
136.143.161.0/24 -> @., 'AUTO-UPDATE: zoho.com'
136.143.162.0/24 -> @., 'AUTO-UPDATE: zoho.com'
136.143.178.49/32 -> @., 'AUTO-UPDATE: zoho.com'
136.143.182.0/23 -> @., 'AUTO-UPDATE: zoho.com'
136.143.184.0/24 -> @., 'AUTO-UPDATE: zoho.com'
136.143.188.0/24 -> @., 'AUTO-UPDATE: zoho.com'
136.143.190.0/23 -> @., 'AUTO-UPDATE: zoho.com'
136.147.128.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
136.147.135.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
136.147.176.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
136.147.176.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
136.147.182.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
136.147.224.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
136.179.50.206 -> @., 'AUTO-UPDATE: zendesk.com'
139.138.35.44 -> @., 'AUTO-UPDATE: mailchimp.com'
139.138.46.121 -> @., 'AUTO-UPDATE: mailchimp.com'
139.138.46.176 -> @., 'AUTO-UPDATE: mailchimp.com'
139.138.46.219 -> @., 'AUTO-UPDATE: mailchimp.com'
139.138.57.55 -> @., 'AUTO-UPDATE: mailchimp.com'
139.138.58.119 -> @., 'AUTO-UPDATE: mailchimp.com'
139.180.17.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
139.60.152.0/22 -> @., 'AUTO-UPDATE: mailchimp.com'
140.238.148.191 -> @., 'AUTO-UPDATE: constantcontact.com'
141.148.159.229 -> @., 'AUTO-UPDATE: mailchimp.com'
141.193.184.128/25 -> @., 'AUTO-UPDATE: fishbowl.com'
141.193.184.32/27 -> @., 'AUTO-UPDATE: fishbowl.com'
141.193.184.64/26 -> @., 'AUTO-UPDATE: fishbowl.com'
141.193.185.128/25 -> @., 'AUTO-UPDATE: fishbowl.com'
141.193.185.32/27 -> @., 'AUTO-UPDATE: fishbowl.com'
141.193.185.64/26 -> @., 'AUTO-UPDATE: fishbowl.com'
141.193.32.0/23 -> @., 'AUTO-UPDATE: mailgun.com'
143.244.80.0/20 -> @., 'AUTO-UPDATE: fishbowl.com'
143.47.120.152 -> @., 'AUTO-UPDATE: constantcontact.com'
143.55.224.0/21 -> @., 'AUTO-UPDATE: mailgun.com'
143.55.232.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
143.55.236.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
144.160.159.21 -> @., 'AUTO-UPDATE: sbcglobal.net'
144.160.159.22 -> @., 'AUTO-UPDATE: sbcglobal.net'
144.160.235.143 -> @., 'AUTO-UPDATE: sbcglobal.net'
144.160.235.144 -> @., 'AUTO-UPDATE: sbcglobal.net'
144.178.36.0/24 -> @., 'AUTO-UPDATE: icloud.com'
144.178.38.0/24 -> @., 'AUTO-UPDATE: icloud.com'
144.24.6.140 -> @., 'AUTO-UPDATE: mailchimp.com'
144.34.32.247 -> @., 'AUTO-UPDATE: twitter.com'
144.34.33.247 -> @., 'AUTO-UPDATE: twitter.com'
144.34.8.247 -> @., 'AUTO-UPDATE: twitter.com'
144.34.9.247 -> @., 'AUTO-UPDATE: twitter.com'
144.76.86.15 -> @., 'AUTO-UPDATE: cloudfiltering.com'
146.20.112.0/26 -> @., 'AUTO-UPDATE: mailgun.com'
146.20.113.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
146.20.14.104 -> @., 'AUTO-UPDATE: constantcontact.com'
146.20.14.105 -> @., 'AUTO-UPDATE: constantcontact.com'
146.20.14.106 -> @., 'AUTO-UPDATE: constantcontact.com'
146.20.14.107 -> @., 'AUTO-UPDATE: constantcontact.com'
146.20.191.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
146.20.215.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
146.20.215.182 -> @., 'AUTO-UPDATE: fbmta.com'
146.88.28.0/24 -> @., 'AUTO-UPDATE: ebay.com'
147.243.1.153 -> @., 'AUTO-UPDATE: microsoft.com'
147.243.1.47 -> @., 'AUTO-UPDATE: microsoft.com'
147.243.1.48 -> @., 'AUTO-UPDATE: microsoft.com'
147.243.128.24 -> @., 'AUTO-UPDATE: microsoft.com'
147.243.128.26 -> @., 'AUTO-UPDATE: microsoft.com'
148.105.0.0/16 -> @., 'AUTO-UPDATE: mailchimp.com'
148.105.8.0/21 -> @., 'AUTO-UPDATE: ebay.com'
149.72.0.0/16 -> @., 'AUTO-UPDATE: ebay.com'
149.72.223.204 -> @., 'AUTO-UPDATE: constantcontact.com'
149.72.248.236 -> @., 'AUTO-UPDATE: reddit.com'
149.97.173.180 -> @., 'AUTO-UPDATE: zendesk.com'
15.200.201.185 -> @., 'AUTO-UPDATE: mailchimp.com'
15.200.21.50 -> @., 'AUTO-UPDATE: mailchimp.com'
15.200.44.248 -> @., 'AUTO-UPDATE: mailchimp.com'
150.230.98.160 -> @., 'AUTO-UPDATE: mailchimp.com'
151.145.38.14 -> @., 'AUTO-UPDATE: constantcontact.com'
152.67.105.195 -> @., 'AUTO-UPDATE: mailchimp.com'
152.69.200.236 -> @., 'AUTO-UPDATE: mailchimp.com'
152.70.155.126 -> @., 'AUTO-UPDATE: mailchimp.com'
155.248.208.51 -> @., 'AUTO-UPDATE: mailchimp.com'
155.248.220.138 -> @., 'AUTO-UPDATE: constantcontact.com'
155.248.234.149 -> @., 'AUTO-UPDATE: constantcontact.com'
155.248.237.141 -> @., 'AUTO-UPDATE: constantcontact.com'
157.151.208.65 -> @., 'AUTO-UPDATE: paypal.com'
157.255.1.64/29 -> @., 'AUTO-UPDATE: icloud.com'
157.55.0.192/26 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.1.128/26 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.11.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.157.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.2.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.225.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.49.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.61.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
157.55.9.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
157.56.120.128/26 -> @., 'AUTO-UPDATE: microsoft.com'
157.56.232.0/21 -> @., 'AUTO-UPDATE: hotmail.com'
157.56.24.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
157.56.240.0/20 -> @., 'AUTO-UPDATE: hotmail.com'
157.56.248.0/21 -> @., 'AUTO-UPDATE: hotmail.com'
157.58.196.96/29 -> @., 'AUTO-UPDATE: microsoft.com'
157.58.249.3 -> @., 'AUTO-UPDATE: microsoft.com'
157.58.30.128/25 -> @., 'AUTO-UPDATE: microsoft.com'
158.101.211.207 -> @., 'AUTO-UPDATE: mailchimp.com'
158.247.16.0/20 -> @., 'AUTO-UPDATE: fishbowl.com'
159.112.240.0/20 -> @., 'AUTO-UPDATE: mailgun.com'
159.112.242.162 -> @., 'AUTO-UPDATE: cloudflare.com'
159.135.132.128/25 -> @., 'AUTO-UPDATE: mailgun.com'
159.135.140.80/29 -> @., 'AUTO-UPDATE: mailgun.com'
159.135.224.0/20 -> @., 'AUTO-UPDATE: mailgun.com'
159.135.228.10 -> @., 'AUTO-UPDATE: cloudflare.com'
159.183.0.0/16 -> @., 'AUTO-UPDATE: ebay.com'
159.92.154.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.155.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.157.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.157.16 -> @., 'AUTO-UPDATE: exacttarget.com'
159.92.157.17 -> @., 'AUTO-UPDATE: exacttarget.com'
159.92.157.18 -> @., 'AUTO-UPDATE: exacttarget.com'
159.92.158.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.159.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.160.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.161.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.162.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.163.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.164.0/22 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
159.92.168.0/21 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
160.1.62.192 -> @., 'AUTO-UPDATE: mailchimp.com'
161.38.192.0/20 -> @., 'AUTO-UPDATE: mailgun.com'
161.38.204.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
161.71.32.0/19 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
161.71.64.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
162.247.216.0/22 -> @., 'AUTO-UPDATE: mailchimp.com'
163.114.130.16 -> @., 'AUTO-UPDATE: instagram.com'
163.114.132.120 -> @., 'AUTO-UPDATE: instagram.com'
163.114.134.16 -> @., 'AUTO-UPDATE: instagram.com'
163.114.135.16 -> @., 'AUTO-UPDATE: instagram.com'
163.47.180.0/22 -> @., 'AUTO-UPDATE: ebay.com'
164.152.23.32 -> @., 'AUTO-UPDATE: mailchimp.com'
164.177.132.168/30 -> @., 'AUTO-UPDATE: constantcontact.com'
165.173.128.0/24 -> @., 'AUTO-UPDATE: zoho.com'
166.78.68.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
166.78.68.221 -> @., 'AUTO-UPDATE: cloudflare.com'
166.78.69.169 -> @., 'AUTO-UPDATE: github.com'
166.78.69.170 -> @., 'AUTO-UPDATE: github.com'
166.78.71.131 -> @., 'AUTO-UPDATE: github.com'
167.220.67.232/29 -> @., 'AUTO-UPDATE: microsoft.com'
167.89.0.0/17 -> @., 'AUTO-UPDATE: ebay.com'
167.89.101.192/28 -> @., 'AUTO-UPDATE: github.com'
167.89.101.2 -> @., 'AUTO-UPDATE: github.com'
167.89.46.159 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.54.103 -> @., 'AUTO-UPDATE: reddit.com'
167.89.60.95 -> @., 'AUTO-UPDATE: sendgrid.com'
167.89.64.9 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.65.0 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.65.100 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.65.53 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.74.233 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.75.126 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.75.136 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.75.164 -> @., 'AUTO-UPDATE: cloudflare.com'
167.89.75.33 -> @., 'AUTO-UPDATE: cloudflare.com'
168.138.5.36 -> @., 'AUTO-UPDATE: mailchimp.com'
168.138.73.51 -> @., 'AUTO-UPDATE: mailchimp.com'
168.138.77.31 -> @., 'AUTO-UPDATE: constantcontact.com'
168.245.0.0/17 -> @., 'AUTO-UPDATE: ebay.com'
168.245.12.252 -> @., 'AUTO-UPDATE: reddit.com'
168.245.127.231 -> @., 'AUTO-UPDATE: reddit.com'
168.245.46.9 -> @., 'AUTO-UPDATE: reddit.com'
169.148.129.0/24 -> @., 'AUTO-UPDATE: zoho.com'
169.148.131.0/24 -> @., 'AUTO-UPDATE: zoho.com'
169.148.142.10/32 -> @., 'AUTO-UPDATE: zoho.com'
169.148.144.0/25 -> @., 'AUTO-UPDATE: zoho.com'
169.148.144.10/32 -> @., 'AUTO-UPDATE: zoho.com'
17.142.0.0/15 -> @., 'AUTO-UPDATE: icloud.com'
17.41.0.0/16 -> @., 'AUTO-UPDATE: icloud.com'
17.57.155.0/24 -> @., 'AUTO-UPDATE: icloud.com'
17.57.156.0/24 -> @., 'AUTO-UPDATE: icloud.com'
17.58.0.0/16 -> @., 'AUTO-UPDATE: icloud.com'
170.10.128.0/24 -> @., 'AUTO-UPDATE: zendesk.com'
170.10.129.0/24 -> @., 'AUTO-UPDATE: zendesk.com'
170.10.132.56/29 -> @., 'AUTO-UPDATE: zendesk.com'
170.10.132.64/29 -> @., 'AUTO-UPDATE: zendesk.com'
170.10.133.0/24 -> @., 'AUTO-UPDATE: zendesk.com'
172.104.245.227 -> @., 'AUTO-UPDATE: iredmail.org'
172.105.68.48 -> @., 'AUTO-UPDATE: iredmail.org'
172.217.0.0/19 -> @., 'AUTO-UPDATE: cloudflare.com'
172.217.128.0/19 -> @., 'AUTO-UPDATE: cloudflare.com'
172.217.160.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
172.217.192.0/19 -> @., 'AUTO-UPDATE: cloudflare.com'
172.217.32.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
172.253.112.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
172.253.56.0/21 -> @., 'AUTO-UPDATE: cloudflare.com'
173.0.84.0/29 -> @., 'AUTO-UPDATE: paypal.com'
173.0.84.224/27 -> @., 'AUTO-UPDATE: paypal.com'
173.0.94.244/30 -> @., 'AUTO-UPDATE: paypal.com'
173.194.0.0/16 -> @., 'AUTO-UPDATE: cloudflare.com'
173.203.79.182 -> @., 'AUTO-UPDATE: exacttarget.com'
173.203.81.39 -> @., 'AUTO-UPDATE: exacttarget.com'
173.224.161.128/25 -> @., 'AUTO-UPDATE: paypal.com'
173.224.165.0/26 -> @., 'AUTO-UPDATE: paypal.com'
173.245.48.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
174.36.114.128/30 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.114.140/30 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.114.148/30 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.114.152/29 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.84.144/29 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.84.16/29 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.84.240/29 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.84.32/29 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.84.8/29 -> @., 'AUTO-UPDATE: exacttarget.com'
174.36.85.248/30 -> @., 'AUTO-UPDATE: exacttarget.com'
174.37.67.28/30 -> @., 'AUTO-UPDATE: exacttarget.com'
175.41.215.51 -> @., 'AUTO-UPDATE: zendesk.com'
176.32.105.0/24 -> @., 'AUTO-UPDATE: amazon.com'
176.32.127.0/24 -> @., 'AUTO-UPDATE: amazon.com'
178.154.239.136/29 -> @., 'AUTO-UPDATE: yandex.ru'
178.154.239.144/28 -> @., 'AUTO-UPDATE: yandex.ru'
178.154.239.200/29 -> @., 'AUTO-UPDATE: yandex.ru'
178.154.239.208/28 -> @., 'AUTO-UPDATE: yandex.ru'
178.154.239.72/29 -> @., 'AUTO-UPDATE: yandex.ru'
178.154.239.80/28 -> @., 'AUTO-UPDATE: yandex.ru'
178.236.10.128/26 -> @., 'AUTO-UPDATE: amazon.com'
18.156.89.250 -> @., 'AUTO-UPDATE: zendesk.com'
18.157.243.190 -> @., 'AUTO-UPDATE: zendesk.com'
18.194.95.56 -> @., 'AUTO-UPDATE: zendesk.com'
18.198.96.88 -> @., 'AUTO-UPDATE: zendesk.com'
18.208.124.128/25 -> @., 'AUTO-UPDATE: fishbowl.com'
18.216.232.154 -> @., 'AUTO-UPDATE: zendesk.com'
18.235.27.253/32 -> @., 'AUTO-UPDATE: fishbowl.com'
18.236.40.242 -> @., 'AUTO-UPDATE: zendesk.com'
18.236.56.161 -> @., 'AUTO-UPDATE: constantcontact.com'
182.50.76.0/22 -> @., 'AUTO-UPDATE: exacttarget.com'
182.50.78.64/28 -> @., 'AUTO-UPDATE: paypal.com'
185.12.80.0/22 -> @., 'AUTO-UPDATE: cloudflare.com'
185.138.56.128/25 -> @., 'AUTO-UPDATE: inbox.com'
185.189.236.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
185.211.120.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
185.250.236.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
185.250.239.148 -> @., 'AUTO-UPDATE: pinterest.com'
185.250.239.168 -> @., 'AUTO-UPDATE: pinterest.com'
185.250.239.190 -> @., 'AUTO-UPDATE: pinterest.com'
185.4.120.0/22 -> @., 'AUTO-UPDATE: ebay.com'
185.58.84.93 -> @., 'AUTO-UPDATE: zendesk.com'
185.90.20.0/22 -> @., 'AUTO-UPDATE: ebay.com'
188.172.128.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
192.0.64.0/18 -> @., 'AUTO-UPDATE: tumblr.com'
192.111.0.125 -> @., 'AUTO-UPDATE: stackoverflow.com'
192.111.0.71 -> @., 'AUTO-UPDATE: stackoverflow.com'
192.124.132.125 -> @., 'AUTO-UPDATE: stackoverflow.com'
192.124.132.71 -> @., 'AUTO-UPDATE: stackoverflow.com'
192.161.144.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
192.18.139.154 -> @., 'AUTO-UPDATE: mailchimp.com'
192.18.145.36 -> @., 'AUTO-UPDATE: constantcontact.com'
192.18.152.58 -> @., 'AUTO-UPDATE: constantcontact.com'
192.237.158.0/23 -> @., 'AUTO-UPDATE: mailgun.com'
192.237.159.42 -> @., 'AUTO-UPDATE: cloudflare.com'
192.237.159.43 -> @., 'AUTO-UPDATE: cloudflare.com'
192.254.112.0/20 -> @., 'AUTO-UPDATE: ebay.com'
192.254.112.60 -> @., 'AUTO-UPDATE: github.com'
192.254.112.98/31 -> @., 'AUTO-UPDATE: github.com'
192.254.113.10 -> @., 'AUTO-UPDATE: github.com'
192.254.113.101 -> @., 'AUTO-UPDATE: github.com'
192.254.114.176 -> @., 'AUTO-UPDATE: github.com'
192.30.252.0/22 -> @., 'AUTO-UPDATE: github.com'
192.33.11.125 -> @., 'AUTO-UPDATE: stackoverflow.com'
192.33.11.71 -> @., 'AUTO-UPDATE: stackoverflow.com'
193.109.254.0/23 -> @., 'AUTO-UPDATE: messagelabs.com'
193.122.128.100 -> @., 'AUTO-UPDATE: mailchimp.com'
193.123.56.63 -> @., 'AUTO-UPDATE: mailchimp.com'
194.106.220.0/23 -> @., 'AUTO-UPDATE: messagelabs.com'
194.113.24.0/22 -> @., 'AUTO-UPDATE: ebay.com'
194.154.193.192/27 -> @., 'AUTO-UPDATE: amazon.com'
194.19.134.0/25 -> @., 'AUTO-UPDATE: inbox.com'
194.64.234.129 -> @., 'AUTO-UPDATE: paypal.com'
195.234.109.226/32 -> @., 'AUTO-UPDATE: tumblr.com'
195.245.230.0/23 -> @., 'AUTO-UPDATE: messagelabs.com'
195.54.172.0/23 -> @., 'AUTO-UPDATE: ebay.com'
198.178.234.57 -> @., 'AUTO-UPDATE: paypal.com'
198.2.128.0/18 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.128.0/24 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.132.0/22 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.136.0/23 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.145.0/24 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.177.0/24 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.178.0/23 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.180.0/24 -> @., 'AUTO-UPDATE: cloudflare.com'
198.2.186.0/23 -> @., 'AUTO-UPDATE: cloudflare.com'
198.21.0.0/21 -> @., 'AUTO-UPDATE: ebay.com'
198.244.48.0/20 -> @., 'AUTO-UPDATE: mailgun.com'
198.244.59.30 -> @., 'AUTO-UPDATE: pinterest.com'
198.244.59.33 -> @., 'AUTO-UPDATE: pinterest.com'
198.244.59.35 -> @., 'AUTO-UPDATE: pinterest.com'
198.244.60.0/22 -> @., 'AUTO-UPDATE: mailgun.com'
198.245.80.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
198.245.81.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
198.252.206.125 -> @., 'AUTO-UPDATE: stackoverflow.com'
198.252.206.71 -> @., 'AUTO-UPDATE: stackoverflow.com'
198.37.144.0/20 -> @., 'AUTO-UPDATE: ebay.com'
198.37.152.186 -> @., 'AUTO-UPDATE: zendesk.com'
198.61.254.0/23 -> @., 'AUTO-UPDATE: mailgun.com'
198.61.254.21 -> @., 'AUTO-UPDATE: pinterest.com'
198.61.254.231 -> @., 'AUTO-UPDATE: paypal.com'
199.101.161.130 -> @., 'AUTO-UPDATE: linkedin.com'
199.101.162.0/25 -> @., 'AUTO-UPDATE: linkedin.com'
199.122.120.0/21 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
199.122.123.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
199.127.232.0/22 -> @., 'AUTO-UPDATE: amazon.com'
199.15.212.0/22 -> @., 'AUTO-UPDATE: cloudflare.com'
199.16.156.0/22 -> @., 'AUTO-UPDATE: twitter.com'
199.255.192.0/22 -> @., 'AUTO-UPDATE: amazon.com'
199.33.145.1 -> @., 'AUTO-UPDATE: mailchimp.com'
199.33.145.32 -> @., 'AUTO-UPDATE: mailchimp.com'
199.34.22.36/32 -> @., 'AUTO-UPDATE: zoho.com'
199.59.148.0/22 -> @., 'AUTO-UPDATE: twitter.com'
199.67.80.2/32 -> @., 'AUTO-UPDATE: zoho.com'
199.67.82.2/32 -> @., 'AUTO-UPDATE: zoho.com'
199.67.84.0/24 -> @., 'AUTO-UPDATE: zoho.com'
199.67.86.0/24 -> @., 'AUTO-UPDATE: zoho.com'
199.67.88.0/24 -> @., 'AUTO-UPDATE: zoho.com'
20.105.209.76/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.107.239.64/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.118.139.208/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.141.10.196 -> @., 'AUTO-UPDATE: microsoft.com'
20.185.214.0/27 -> @., 'AUTO-UPDATE: fbmta.com'
20.185.214.32/27 -> @., 'AUTO-UPDATE: fbmta.com'
20.185.214.64/27 -> @., 'AUTO-UPDATE: fbmta.com'
20.51.6.32/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.51.98.61 -> @., 'AUTO-UPDATE: constantcontact.com'
20.52.128.133 -> @., 'AUTO-UPDATE: zendesk.com'
20.52.52.2 -> @., 'AUTO-UPDATE: zendesk.com'
20.59.80.4/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.63.210.192/28 -> @., 'AUTO-UPDATE: microsoft.com'
20.69.8.108/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.83.222.104/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.88.157.184/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.94.180.64/28 -> @., 'AUTO-UPDATE: microsoft.com'
20.97.34.220/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.98.148.156/30 -> @., 'AUTO-UPDATE: microsoft.com'
20.98.194.68/30 -> @., 'AUTO-UPDATE: microsoft.com'
2001:4860:4000::/36 -> @., 'AUTO-UPDATE: cloudflare.com'
202.129.242.0/23 -> @., 'AUTO-UPDATE: exacttarget.com'
202.177.148.100 -> @., 'AUTO-UPDATE: microsoft.com'
202.177.148.110 -> @., 'AUTO-UPDATE: microsoft.com'
203.122.32.250 -> @., 'AUTO-UPDATE: microsoft.com'
203.145.57.160/27 -> @., 'AUTO-UPDATE: ebay.com'
203.32.4.25 -> @., 'AUTO-UPDATE: microsoft.com'
203.55.21.0/24 -> @., 'AUTO-UPDATE: ebay.com'
203.81.17.0/24 -> @., 'AUTO-UPDATE: amazon.com'
204.11.168.0/21 -> @., 'AUTO-UPDATE: icontact.com'
204.13.11.48/29 -> @., 'AUTO-UPDATE: paypal.com'
204.14.232.0/21 -> @., 'AUTO-UPDATE: exacttarget.com'
204.14.232.64/28 -> @., 'AUTO-UPDATE: icontact.com'
204.14.234.64/28 -> @., 'AUTO-UPDATE: paypal.com'
204.141.32.0/23 -> @., 'AUTO-UPDATE: zoho.com'
204.141.42.0/23 -> @., 'AUTO-UPDATE: zoho.com'
204.220.160.0/21 -> @., 'AUTO-UPDATE: mailgun.com'
204.220.168.0/21 -> @., 'AUTO-UPDATE: mailgun.com'
204.220.176.0/20 -> @., 'AUTO-UPDATE: mailgun.com'
204.232.168.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
204.75.142.0/24 -> @., 'AUTO-UPDATE: ebay.com'
204.92.114.187 -> @., 'AUTO-UPDATE: paypal.com'
204.92.114.203 -> @., 'AUTO-UPDATE: twitter.com'
204.92.114.204/31 -> @., 'AUTO-UPDATE: twitter.com'
205.139.110.0/24 -> @., 'AUTO-UPDATE: zendesk.com'
205.201.128.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
205.201.131.128/25 -> @., 'AUTO-UPDATE: cloudflare.com'
205.201.134.128/25 -> @., 'AUTO-UPDATE: cloudflare.com'
205.201.136.0/23 -> @., 'AUTO-UPDATE: cloudflare.com'
205.201.137.229 -> @., 'AUTO-UPDATE: ebay.com'
205.201.139.0/24 -> @., 'AUTO-UPDATE: cloudflare.com'
205.207.104.0/22 -> @., 'AUTO-UPDATE: constantcontact.com'
205.220.167.17 -> @., 'AUTO-UPDATE: instagram.com'
205.220.167.98 -> @., 'AUTO-UPDATE: constantcontact.com'
205.220.179.17 -> @., 'AUTO-UPDATE: instagram.com'
205.220.179.98 -> @., 'AUTO-UPDATE: constantcontact.com'
205.251.233.32/32 -> @., 'AUTO-UPDATE: amazon.com'
205.251.233.36/32 -> @., 'AUTO-UPDATE: amazon.com'
206.165.246.80/29 -> @., 'AUTO-UPDATE: ebay.com'
206.191.224.0/19 -> @., 'AUTO-UPDATE: microsoft.com'
206.246.157.1 -> @., 'AUTO-UPDATE: exacttarget.com'
206.25.247.143 -> @., 'AUTO-UPDATE: paypal.com'
206.25.247.155 -> @., 'AUTO-UPDATE: paypal.com'
206.55.144.0/20 -> @., 'AUTO-UPDATE: amazon.com'
207.126.144.0/20 -> @., 'AUTO-UPDATE: exacttarget.com'
207.171.160.0/19 -> @., 'AUTO-UPDATE: amazon.com'
207.211.30.128/25 -> @., 'AUTO-UPDATE: zendesk.com'
207.211.30.64/26 -> @., 'AUTO-UPDATE: zendesk.com'
207.211.31.0/25 -> @., 'AUTO-UPDATE: zendesk.com'
207.211.41.113 -> @., 'AUTO-UPDATE: zendesk.com'
207.218.90.122 -> @., 'AUTO-UPDATE: zendesk.com'
207.250.68.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
207.46.116.128/29 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.117.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.132.128/27 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.198.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.200.0/27 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.22.35 -> @., 'AUTO-UPDATE: microsoft.com'
207.46.4.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.50.192/26 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.50.224 -> @., 'AUTO-UPDATE: hotmail.com'
207.46.50.72 -> @., 'AUTO-UPDATE: microsoft.com'
207.46.50.82 -> @., 'AUTO-UPDATE: microsoft.com'
207.46.52.71 -> @., 'AUTO-UPDATE: microsoft.com'
207.46.52.79 -> @., 'AUTO-UPDATE: microsoft.com'
207.46.58.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
207.67.38.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
207.67.98.192/27 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
207.68.176.0/26 -> @., 'AUTO-UPDATE: hotmail.com'
207.68.176.96/27 -> @., 'AUTO-UPDATE: hotmail.com'
207.97.204.96/29 -> @., 'AUTO-UPDATE: constantcontact.com'
208.117.48.0/20 -> @., 'AUTO-UPDATE: ebay.com'
208.185.229.45 -> @., 'AUTO-UPDATE: paypal.com'
208.201.241.163 -> @., 'AUTO-UPDATE: paypal.com'
208.40.232.70 -> @., 'AUTO-UPDATE: paypal.com'
208.43.21.28/30 -> @., 'AUTO-UPDATE: exacttarget.com'
208.43.21.64/29 -> @., 'AUTO-UPDATE: exacttarget.com'
208.43.21.72/30 -> @., 'AUTO-UPDATE: exacttarget.com'
208.64.132.0/22 -> @., 'AUTO-UPDATE: paypal.com'
208.72.249.240/29 -> @., 'AUTO-UPDATE: paypal.com'
208.74.204.5 -> @., 'AUTO-UPDATE: constantcontact.com'
208.74.204.9 -> @., 'AUTO-UPDATE: constantcontact.com'
208.75.120.0/22 -> @., 'AUTO-UPDATE: constantcontact.com'
208.82.237.104/31 -> @., 'AUTO-UPDATE: craigslist.org'
208.82.237.96/29 -> @., 'AUTO-UPDATE: craigslist.org'
208.82.238.104/31 -> @., 'AUTO-UPDATE: craigslist.org'
208.82.238.96/29 -> @., 'AUTO-UPDATE: craigslist.org'
208.85.50.137 -> @., 'AUTO-UPDATE: paypal.com'
209.43.22.0/28 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
209.46.117.168 -> @., 'AUTO-UPDATE: paypal.com'
209.46.117.179 -> @., 'AUTO-UPDATE: paypal.com'
209.61.151.0/24 -> @., 'AUTO-UPDATE: mailgun.com'
209.61.151.236 -> @., 'AUTO-UPDATE: pinterest.com'
209.61.151.249 -> @., 'AUTO-UPDATE: pinterest.com'
209.61.151.251 -> @., 'AUTO-UPDATE: pinterest.com'
209.67.98.46 -> @., 'AUTO-UPDATE: paypal.com'
209.67.98.59 -> @., 'AUTO-UPDATE: paypal.com'
209.85.128.0/17 -> @., 'AUTO-UPDATE: cloudflare.com'
212.123.28.40/32 -> @., 'AUTO-UPDATE: amazon.com'
212.227.126.224 -> @., 'AUTO-UPDATE: gmx.com'
212.227.126.225 -> @., 'AUTO-UPDATE: gmx.com'
212.227.126.226 -> @., 'AUTO-UPDATE: gmx.com'
212.227.126.227 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.15 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.18 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.19 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.44 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.45 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.46 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.47 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.50 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.52 -> @., 'AUTO-UPDATE: gmx.com'
212.227.15.53 -> @., 'AUTO-UPDATE: gmx.com'
212.227.17.20 -> @., 'AUTO-UPDATE: gmx.com'
212.227.17.21 -> @., 'AUTO-UPDATE: gmx.com'
212.227.17.22 -> @., 'AUTO-UPDATE: gmx.com'
212.227.17.26 -> @., 'AUTO-UPDATE: gmx.com'
212.227.17.28 -> @., 'AUTO-UPDATE: gmx.com'
212.227.17.29 -> @., 'AUTO-UPDATE: gmx.com'
213.199.128.139 -> @., 'AUTO-UPDATE: microsoft.com'
213.199.128.145 -> @., 'AUTO-UPDATE: microsoft.com'
213.199.138.181 -> @., 'AUTO-UPDATE: microsoft.com'
213.199.138.191 -> @., 'AUTO-UPDATE: microsoft.com'
213.199.161.128/27 -> @., 'AUTO-UPDATE: hotmail.com'
213.199.177.0/26 -> @., 'AUTO-UPDATE: hotmail.com'
216.128.126.97 -> @., 'AUTO-UPDATE: paypal.com'
216.136.162.120/29 -> @., 'AUTO-UPDATE: paypal.com'
216.136.162.65 -> @., 'AUTO-UPDATE: paypal.com'
216.136.168.80/28 -> @., 'AUTO-UPDATE: paypal.com'
216.139.64.0/19 -> @., 'AUTO-UPDATE: fishbowl.com'
216.145.221.0/24 -> @., 'AUTO-UPDATE: zendesk.com'
216.17.150.242 -> @., 'AUTO-UPDATE: constantcontact.com'
216.17.150.251 -> @., 'AUTO-UPDATE: constantcontact.com'
216.198.0.0/18 -> @., 'AUTO-UPDATE: cloudflare.com'
216.203.30.55 -> @., 'AUTO-UPDATE: exacttarget.com'
216.203.33.178/31 -> @., 'AUTO-UPDATE: exacttarget.com'
216.205.24.0/24 -> @., 'AUTO-UPDATE: zendesk.com'
216.221.160.0/19 -> @., 'AUTO-UPDATE: amazon.com'
216.239.32.0/19 -> @., 'AUTO-UPDATE: cloudflare.com'
216.24.224.0/20 -> @., 'AUTO-UPDATE: icontact.com'
216.58.192.0/19 -> @., 'AUTO-UPDATE: cloudflare.com'
216.66.217.240/29 -> @., 'AUTO-UPDATE: paypal.com'
216.71.138.33 -> @., 'AUTO-UPDATE: mailchimp.com'
216.71.152.207 -> @., 'AUTO-UPDATE: ebay.com'
216.71.154.29 -> @., 'AUTO-UPDATE: ebay.com'
216.71.155.89 -> @., 'AUTO-UPDATE: ebay.com'
216.74.162.13 -> @., 'AUTO-UPDATE: ebay.com'
216.74.162.14 -> @., 'AUTO-UPDATE: ebay.com'
216.82.240.0/20 -> @., 'AUTO-UPDATE: messagelabs.com'
216.99.5.67 -> @., 'AUTO-UPDATE: microsoft.com'
216.99.5.68 -> @., 'AUTO-UPDATE: microsoft.com'
217.175.194.0/24 -> @., 'AUTO-UPDATE: ebay.com'
217.77.141.52 -> @., 'AUTO-UPDATE: microsoft.com'
217.77.141.59 -> @., 'AUTO-UPDATE: microsoft.com'
222.73.195.64/29 -> @., 'AUTO-UPDATE: icloud.com'
223.165.113.0/24 -> @., 'AUTO-UPDATE: ebay.com'
223.165.115.0/24 -> @., 'AUTO-UPDATE: ebay.com'
223.165.118.0/23 -> @., 'AUTO-UPDATE: ebay.com'
223.165.120.0/23 -> @., 'AUTO-UPDATE: ebay.com'
23.103.224.0/19 -> @., 'AUTO-UPDATE: microsoft.com'
23.249.208.0/20 -> @., 'AUTO-UPDATE: amazon.com'
23.251.224.0/19 -> @., 'AUTO-UPDATE: amazon.com'
23.253.141.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
23.253.182.0/23 -> @., 'AUTO-UPDATE: mailgun.com'
23.253.182.103 -> @., 'AUTO-UPDATE: cloudflare.com'
23.253.183.145 -> @., 'AUTO-UPDATE: cloudflare.com'
23.253.183.146 -> @., 'AUTO-UPDATE: cloudflare.com'
23.253.183.147 -> @., 'AUTO-UPDATE: cloudflare.com'
23.253.183.148 -> @., 'AUTO-UPDATE: cloudflare.com'
23.253.183.150 -> @., 'AUTO-UPDATE: cloudflare.com'
2404:6800:4000::/36 -> @., 'AUTO-UPDATE: cloudflare.com'
2607:13c0:0001:0000:0000:0000:0000:7000/116 -> @., 'AUTO-UPDATE: zoho.com'
2607:13c0:0002:0000:0000:0000:0000:1000/116 -> @., 'AUTO-UPDATE: zoho.com'
2607:13c0:0004:0000:0000:0000:0000:0000/116 -> @., 'AUTO-UPDATE: zoho.com'
2607:f8b0:4000::/36 -> @., 'AUTO-UPDATE: cloudflare.com'
2620:109:c003:104::/64 -> @., 'AUTO-UPDATE: linkedin.com'
2620:109:c006:104::/64 -> @., 'AUTO-UPDATE: linkedin.com'
2620:109:c00d:104::/64 -> @., 'AUTO-UPDATE: linkedin.com'
2620:10d:c090:400::8:1 -> @., 'AUTO-UPDATE: instagram.com'
2620:10d:c091:400::8:1 -> @., 'AUTO-UPDATE: instagram.com'
2620:10d:c09b:400::8:1 -> @., 'AUTO-UPDATE: instagram.com'
2620:10d:c09c:400::8:1 -> @., 'AUTO-UPDATE: instagram.com'
2620:119:50c0:207::/64 -> @., 'AUTO-UPDATE: linkedin.com'
2800:3f0:4000::/36 -> @., 'AUTO-UPDATE: cloudflare.com'
2a00:1450:4000::/36 -> @., 'AUTO-UPDATE: cloudflare.com'
2a01:111:f400::/48 -> @., 'AUTO-UPDATE: github.com'
2a01:111:f403:8000::/50 -> @., 'AUTO-UPDATE: hotmail.com'
2a01:111:f403:8000::/51 -> @., 'AUTO-UPDATE: github.com'
2a01:111:f403::/49 -> @., 'AUTO-UPDATE: github.com'
2a01:111:f403:c000::/51 -> @., 'AUTO-UPDATE: github.com'
2a01:111:f403:f000::/52 -> @., 'AUTO-UPDATE: github.com'
2a01:7e01::f03c:91ff:fe74:9543 -> @., 'AUTO-UPDATE: iredmail.org'
2a01:7e01::f03c:93ff:fe25:7e10 -> @., 'AUTO-UPDATE: iredmail.org'
2a02:6b8:0:1472::/64 -> @., 'AUTO-UPDATE: yandex.ru'
2a02:6b8:0:1619::/64 -> @., 'AUTO-UPDATE: yandex.ru'
2a02:6b8:0:1a2d::/64 -> @., 'AUTO-UPDATE: yandex.ru'
2a02:6b8:0:801::/64 -> @., 'AUTO-UPDATE: yandex.ru'
2a02:6b8:c00::/40 -> @., 'AUTO-UPDATE: yandex.ru'
2c0f:fb50:4000::/36 -> @., 'AUTO-UPDATE: cloudflare.com'
3.129.120.190 -> @., 'AUTO-UPDATE: zendesk.com'
3.210.190.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
3.64.143.187 -> @., 'AUTO-UPDATE: cloudfiltering.com'
3.70.123.177 -> @., 'AUTO-UPDATE: zendesk.com'
3.78.6.244 -> @., 'AUTO-UPDATE: cloudfiltering.com'
3.93.157.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
3.94.40.108 -> @., 'AUTO-UPDATE: fbmta.com'
34.195.217.107 -> @., 'AUTO-UPDATE: zendesk.com'
34.215.104.144 -> @., 'AUTO-UPDATE: zendesk.com'
34.218.116.3 -> @., 'AUTO-UPDATE: constantcontact.com'
34.225.212.172/32 -> @., 'AUTO-UPDATE: icontact.com'
35.161.32.253 -> @., 'AUTO-UPDATE: zendesk.com'
35.167.93.243 -> @., 'AUTO-UPDATE: zendesk.com'
35.176.132.251 -> @., 'AUTO-UPDATE: mailchimp.com'
35.190.247.0/24 -> @., 'AUTO-UPDATE: cloudflare.com'
35.191.0.0/16 -> @., 'AUTO-UPDATE: cloudflare.com'
35.205.92.9 -> @., 'AUTO-UPDATE: messagelabs.com'
35.242.169.159 -> @., 'AUTO-UPDATE: messagelabs.com'
37.140.190.0/23 -> @., 'AUTO-UPDATE: yandex.ru'
40.107.0.0/16 -> @., 'AUTO-UPDATE: github.com'
40.112.65.63 -> @., 'AUTO-UPDATE: microsoft.com'
40.233.64.216 -> @., 'AUTO-UPDATE: constantcontact.com'
40.233.83.78 -> @., 'AUTO-UPDATE: constantcontact.com'
40.233.88.28 -> @., 'AUTO-UPDATE: constantcontact.com'
40.92.0.0/15 -> @., 'AUTO-UPDATE: github.com'
40.92.0.0/16 -> @., 'AUTO-UPDATE: hotmail.com'
44.193.121.189 -> @., 'AUTO-UPDATE: stackoverflow.com'
44.206.138.57 -> @., 'AUTO-UPDATE: zendesk.com'
44.217.45.156/32 -> @., 'AUTO-UPDATE: fishbowl.com'
44.236.56.93 -> @., 'AUTO-UPDATE: zendesk.com'
44.238.220.251 -> @., 'AUTO-UPDATE: zendesk.com'
45.14.148.0/22 -> @., 'AUTO-UPDATE: mailjet.com'
46.19.170.16 -> @., 'AUTO-UPDATE: constantcontact.com'
46.226.48.0/21 -> @., 'AUTO-UPDATE: messagelabs.com'
5.45.198.0/23 -> @., 'AUTO-UPDATE: yandex.ru'
5.45.224.0/25 -> @., 'AUTO-UPDATE: yandex.ru'
50.18.121.236 -> @., 'AUTO-UPDATE: exacttarget.com'
50.18.121.248 -> @., 'AUTO-UPDATE: exacttarget.com'
50.18.123.221 -> @., 'AUTO-UPDATE: exacttarget.com'
50.18.124.70 -> @., 'AUTO-UPDATE: exacttarget.com'
50.18.125.237 -> @., 'AUTO-UPDATE: exacttarget.com'
50.18.125.97 -> @., 'AUTO-UPDATE: exacttarget.com'
50.18.126.162 -> @., 'AUTO-UPDATE: exacttarget.com'
50.18.45.249 -> @., 'AUTO-UPDATE: exacttarget.com'
50.31.32.0/19 -> @., 'AUTO-UPDATE: ebay.com'
50.31.36.205 -> @., 'AUTO-UPDATE: sendgrid.com'
50.56.130.220 -> @., 'AUTO-UPDATE: constantcontact.com'
50.56.130.221 -> @., 'AUTO-UPDATE: constantcontact.com'
50.56.130.222 -> @., 'AUTO-UPDATE: constantcontact.com'
51.250.56.144/28 -> @., 'AUTO-UPDATE: yandex.ru'
51.250.56.16/28 -> @., 'AUTO-UPDATE: yandex.ru'
51.250.56.80/28 -> @., 'AUTO-UPDATE: yandex.ru'
52.1.14.157 -> @., 'AUTO-UPDATE: mailchimp.com'
52.100.0.0/15 -> @., 'AUTO-UPDATE: github.com'
52.102.0.0/16 -> @., 'AUTO-UPDATE: github.com'
52.103.0.0/17 -> @., 'AUTO-UPDATE: github.com'
52.119.213.144/28 -> @., 'AUTO-UPDATE: amazon.com'
52.185.106.240/28 -> @., 'AUTO-UPDATE: microsoft.com'
52.207.191.216 -> @., 'AUTO-UPDATE: zendesk.com'
52.222.62.51 -> @., 'AUTO-UPDATE: mailchimp.com'
52.222.73.120/32 -> @., 'AUTO-UPDATE: mailchimp.com'
52.222.73.83 -> @., 'AUTO-UPDATE: mailchimp.com'
52.222.75.85 -> @., 'AUTO-UPDATE: mailchimp.com'
52.222.89.228 -> @., 'AUTO-UPDATE: mailchimp.com'
52.234.172.96/28 -> @., 'AUTO-UPDATE: microsoft.com'
52.235.253.128 -> @., 'AUTO-UPDATE: microsoft.com'
52.236.28.240/28 -> @., 'AUTO-UPDATE: microsoft.com'
52.28.63.81 -> @., 'AUTO-UPDATE: zendesk.com'
52.37.142.146 -> @., 'AUTO-UPDATE: zendesk.com'
52.38.191.241 -> @., 'AUTO-UPDATE: stackoverflow.com'
52.5.230.59/32 -> @., 'AUTO-UPDATE: icontact.com'
52.50.24.208 -> @., 'AUTO-UPDATE: reddit.com'
52.58.216.183 -> @., 'AUTO-UPDATE: zendesk.com'
52.59.143.3 -> @., 'AUTO-UPDATE: zendesk.com'
52.60.115.116 -> @., 'AUTO-UPDATE: mailchimp.com'
52.60.41.5 -> @., 'AUTO-UPDATE: zendesk.com'
52.61.91.9 -> @., 'AUTO-UPDATE: mailchimp.com'
52.71.0.205/32 -> @., 'AUTO-UPDATE: icontact.com'
52.73.203.75 -> @., 'AUTO-UPDATE: stackoverflow.com'
52.94.124.0/28 -> @., 'AUTO-UPDATE: amazon.com'
52.95.48.152/29 -> @., 'AUTO-UPDATE: amazon.com'
52.95.49.88/29 -> @., 'AUTO-UPDATE: amazon.com'
54.165.19.38 -> @., 'AUTO-UPDATE: fbmta.com'
54.174.52.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
54.174.57.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
54.174.59.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
54.174.60.0/23 -> @., 'AUTO-UPDATE: fishbowl.com'
54.174.63.0/24 -> @., 'AUTO-UPDATE: fishbowl.com'
54.186.193.102/32 -> @., 'AUTO-UPDATE: mailchimp.com'
54.191.223.56 -> @., 'AUTO-UPDATE: zendesk.com'
54.213.20.246 -> @., 'AUTO-UPDATE: zendesk.com'
54.214.39.184 -> @., 'AUTO-UPDATE: paypal.com'
54.240.0.0/18 -> @., 'AUTO-UPDATE: amazon.com'
54.240.64.0/19 -> @., 'AUTO-UPDATE: amazon.com'
54.240.96.0/19 -> @., 'AUTO-UPDATE: amazon.com'
54.241.16.209 -> @., 'AUTO-UPDATE: paypal.com'
54.244.242.0/24 -> @., 'AUTO-UPDATE: paypal.com'
54.255.61.23 -> @., 'AUTO-UPDATE: zendesk.com'
54.90.148.255/32 -> @., 'AUTO-UPDATE: icontact.com'
62.17.146.128/26 -> @., 'AUTO-UPDATE: exacttarget.com'
62.253.227.114 -> @., 'AUTO-UPDATE: github.com'
63.128.21.0/24 -> @., 'AUTO-UPDATE: zendesk.com'
63.80.14.0/23 -> @., 'AUTO-UPDATE: paypal.com'
64.127.115.252 -> @., 'AUTO-UPDATE: paypal.com'
64.132.88.0/23 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
64.132.92.0/24 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
64.18.0.0/20 -> @., 'AUTO-UPDATE: exacttarget.com'
64.20.241.45 -> @., 'AUTO-UPDATE: exacttarget.com'
64.207.219.10 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.11 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.12 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.13 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.135 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.136 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.137 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.138 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.139 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.14 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.140 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.141 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.142 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.143 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.15 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.7 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.71 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.72 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.73 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.74 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.75 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.76 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.77 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.78 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.79 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.8 -> @., 'AUTO-UPDATE: linkedin.com'
64.207.219.9 -> @., 'AUTO-UPDATE: linkedin.com'
64.233.160.0/19 -> @., 'AUTO-UPDATE: cloudflare.com'
64.69.212.0/24 -> @., 'AUTO-UPDATE: mailchimp.com'
64.79.155.192 -> @., 'AUTO-UPDATE: zendesk.com'
64.79.155.193 -> @., 'AUTO-UPDATE: zendesk.com'
64.79.155.205 -> @., 'AUTO-UPDATE: zendesk.com'
64.79.155.206 -> @., 'AUTO-UPDATE: zendesk.com'
65.110.161.77 -> @., 'AUTO-UPDATE: paypal.com'
65.123.29.213/32 -> @., 'AUTO-UPDATE: icontact.com'
65.123.29.220/32 -> @., 'AUTO-UPDATE: icontact.com'
65.154.166.0/24 -> @., 'AUTO-UPDATE: zoho.com'
65.212.180.36 -> @., 'AUTO-UPDATE: paypal.com'
65.52.80.137/32 -> @., 'AUTO-UPDATE: microsoft.com'
65.54.121.120/29 -> @., 'AUTO-UPDATE: hotmail.com'
65.54.190.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
65.54.241.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
65.54.51.64/26 -> @., 'AUTO-UPDATE: hotmail.com'
65.54.61.64/26 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.111.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.113.64/26 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.116.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.126.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.174.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.178.128/27 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.234.192/26 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.29.77 -> @., 'AUTO-UPDATE: microsoft.com'
65.55.33.64/28 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.34.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.42.224/28 -> @., 'AUTO-UPDATE: microsoft.com'
65.55.52.224/27 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.78.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.81.48/28 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.90.0/24 -> @., 'AUTO-UPDATE: hotmail.com'
65.55.94.0/25 -> @., 'AUTO-UPDATE: hotmail.com'
66.102.0.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
66.119.150.192/26 -> @., 'AUTO-UPDATE: microsoft.com'
66.162.193.226/31 -> @., 'AUTO-UPDATE: icontact.com'
66.170.126.97 -> @., 'AUTO-UPDATE: paypal.com'
66.211.170.88/29 -> @., 'AUTO-UPDATE: paypal.com'
66.211.184.0/23 -> @., 'AUTO-UPDATE: ebay.com'
66.220.144.128/25 -> @., 'AUTO-UPDATE: facebook.com'
66.220.155.0/24 -> @., 'AUTO-UPDATE: facebook.com'
66.220.157.0/25 -> @., 'AUTO-UPDATE: facebook.com'
66.231.80.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
66.249.80.0/20 -> @., 'AUTO-UPDATE: cloudflare.com'
67.219.240.0/20 -> @., 'AUTO-UPDATE: messagelabs.com'
67.221.168.65 -> @., 'AUTO-UPDATE: paypal.com'
67.228.2.24/30 -> @., 'AUTO-UPDATE: exacttarget.com'
67.228.21.184/29 -> @., 'AUTO-UPDATE: exacttarget.com'
67.228.37.4/30 -> @., 'AUTO-UPDATE: exacttarget.com'
67.23.31.6 -> @., 'AUTO-UPDATE: exacttarget.com'
67.231.145.42 -> @., 'AUTO-UPDATE: instagram.com'
67.231.153.30 -> @., 'AUTO-UPDATE: instagram.com'
67.72.99.26 -> @., 'AUTO-UPDATE: paypal.com'
68.232.140.138 -> @., 'AUTO-UPDATE: mailchimp.com'
68.232.157.143 -> @., 'AUTO-UPDATE: ebay.com'
68.232.192.0/20 -> @., 'AUTO-UPDATE: cust-spf.exacttarget.com'
69.162.98.0/24 -> @., 'AUTO-UPDATE: exacttarget.com'
69.169.224.0/20 -> @., 'AUTO-UPDATE: amazon.com'
69.63.178.128/25 -> @., 'AUTO-UPDATE: facebook.com'
69.63.181.0/24 -> @., 'AUTO-UPDATE: facebook.com'
69.63.184.0/25 -> @., 'AUTO-UPDATE: facebook.com'
69.65.42.195 -> @., 'AUTO-UPDATE: exacttarget.com'
69.65.49.192/29 -> @., 'AUTO-UPDATE: exacttarget.com'
69.72.32.0/20 -> @., 'AUTO-UPDATE: mailgun.com'
69.72.40.93 -> @., 'AUTO-UPDATE: pinterest.com'
69.72.40.94/31 -> @., 'AUTO-UPDATE: pinterest.com'
69.72.40.96/30 -> @., 'AUTO-UPDATE: pinterest.com'
69.72.47.205 -> @., 'AUTO-UPDATE: pinterest.com'
70.37.151.128/25 -> @., 'AUTO-UPDATE: hotmail.com'
70.42.149.35 -> @., 'AUTO-UPDATE: fbmta.com'
72.14.192.0/18 -> @., 'AUTO-UPDATE: cloudflare.com'
72.21.192.0/19 -> @., 'AUTO-UPDATE: amazon.com'
72.21.217.142/32 -> @., 'AUTO-UPDATE: amazon.com'
74.112.67.243 -> @., 'AUTO-UPDATE: paypal.com'
74.125.0.0/16 -> @., 'AUTO-UPDATE: cloudflare.com'
74.202.227.40/32 -> @., 'AUTO-UPDATE: icontact.com'
74.208.4.200 -> @., 'AUTO-UPDATE: mail.com'
74.208.4.201 -> @., 'AUTO-UPDATE: mail.com'
74.208.4.220 -> @., 'AUTO-UPDATE: mail.com'
74.208.4.221 -> @., 'AUTO-UPDATE: mail.com'
74.209.250.0/24 -> @., 'AUTO-UPDATE: fbmta.com'
74.63.234.75 -> @., 'AUTO-UPDATE: exacttarget.com'
74.63.236.0/24 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.113.28/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.129.240/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.131.208/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.132.208/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.160.160/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.164.188/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.171.192/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.195.28/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.207.36/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.226.216/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.236.240/30 -> @., 'AUTO-UPDATE: exacttarget.com'
74.86.241.250/31 -> @., 'AUTO-UPDATE: exacttarget.com'
75.2.70.75 -> @., 'AUTO-UPDATE: fishbowl.com'
76.223.128.0/19 -> @., 'AUTO-UPDATE: amazon.com'
76.223.176.0/20 -> @., 'AUTO-UPDATE: amazon.com'
77.88.28.0/24 -> @., 'AUTO-UPDATE: yandex.ru'
77.88.29.0/24 -> @., 'AUTO-UPDATE: yandex.ru'
8.20.114.31 -> @., 'AUTO-UPDATE: paypal.com'
8.25.194.0/23 -> @., 'AUTO-UPDATE: twitter.com'
8.25.196.0/23 -> @., 'AUTO-UPDATE: twitter.com'
8.39.54.0/23 -> @., 'AUTO-UPDATE: zoho.com'
8.40.222.0/23 -> @., 'AUTO-UPDATE: zoho.com'
81.223.46.0/27 -> @., 'AUTO-UPDATE: paypal.com'
82.165.159.12 -> @., 'AUTO-UPDATE: gmx.com'
82.165.159.13 -> @., 'AUTO-UPDATE: gmx.com'
82.165.159.130 -> @., 'AUTO-UPDATE: mail.com'
82.165.159.131 -> @., 'AUTO-UPDATE: mail.com'
82.165.159.14 -> @., 'AUTO-UPDATE: gmx.com'
82.165.159.40 -> @., 'AUTO-UPDATE: gmx.com'
82.165.159.41 -> @., 'AUTO-UPDATE: gmx.com'
82.165.159.42 -> @., 'AUTO-UPDATE: gmx.com'
85.158.136.0/21 -> @., 'AUTO-UPDATE: messagelabs.com'
86.61.88.25 -> @., 'AUTO-UPDATE: microsoft.com'
87.238.80.0/21 -> @., 'AUTO-UPDATE: amazon.com'
87.253.232.0/21 -> @., 'AUTO-UPDATE: mailgun.com'
91.211.240.0/22 -> @., 'AUTO-UPDATE: ebay.com'
94.245.112.0/27 -> @., 'AUTO-UPDATE: hotmail.com'
94.245.112.10/31 -> @., 'AUTO-UPDATE: hotmail.com'
95.108.130.0/23 -> @., 'AUTO-UPDATE: yandex.ru'
95.108.205.0/24 -> @., 'AUTO-UPDATE: yandex.ru'
95.131.104.0/21 -> @., 'AUTO-UPDATE: messagelabs.com'
96.43.144.0/20 -> @., 'AUTO-UPDATE: exacttarget.com'
96.43.144.64/28 -> @., 'AUTO-UPDATE: paypal.com'
96.43.144.64/31 -> @., 'AUTO-UPDATE: twitter.com'
96.43.148.64/28 -> @., 'AUTO-UPDATE: paypal.com'
96.43.148.64/31 -> @., 'AUTO-UPDATE: twitter.com'
96.43.151.64/28 -> @., 'AUTO-UPDATE: paypal.com'
98.97.248.0/21 -> @., 'AUTO-UPDATE: mailchimp.com'
99.78.197.208/28 -> @., 'AUTO-UPDATE: amazon.com'
99.83.190.102 -> @., 'AUTO-UPDATE: fishbowl.com'
@yahoo.com -> @., 'AUTO-UPDATE: aol.com'
@yahoo.net -> @., 'AUTO-UPDATE: aol.com'
#

(4) Supporting legacy clients

An issue from the 2018 setup: "Windows Live Mail 2012 fails against a postfix/dovecot mail server".

This is about accommodating old clients such as Windows Live Mail.

Official documentation: "Allow insecure POP3/IMAP/SMTP connections without STARTTLS"

For POP3/IMAP, first check how disable_plaintext_auth and ssl are currently configured in dovecot:

# grep disable_plaintext_auth *
grep: conf.d: Is a directory
dovecot.conf:# With disable_plaintext_auth=yes AND ssl=required, STARTTLS is mandatory.
dovecot.conf:# Set disable_plaintext_auth=no AND ssl=yes to allow plain password transmitted
dovecot.conf:disable_plaintext_auth = yes
dovecot.conf:#   disable_plaintext_auth = no
dovecot.conf.2024.12.05.10.09.03:# for authentication checks). disable_plaintext_auth is also ignored for
# grep disable_plaintext_auth */*
conf.d/10-auth.conf:#disable_plaintext_auth = yes
conf.d/10-auth.conf:# NOTE: See also disable_plaintext_auth setting.
#
# grep -e "ssl =" -e "ssl=" *
grep: conf.d: Is a directory
dovecot.conf:ssl = required
dovecot.conf:verbose_ssl = no
dovecot.conf:# With disable_plaintext_auth=yes AND ssl=required, STARTTLS is mandatory.
dovecot.conf:# Set disable_plaintext_auth=no AND ssl=yes to allow plain password transmitted
dovecot.conf:    #    ssl = yes
dovecot.conf:    #    ssl = yes
# grep -e "ssl =" -e "ssl=" */*
conf.d/10-auth.conf:# See also ssl=required setting.
conf.d/10-logging.conf:#verbose_ssl = no
conf.d/10-master.conf:    #ssl = yes
conf.d/10-master.conf:    #ssl = yes
conf.d/20-submission.conf:#submission_relay_ssl = no
#

Searching under /etc/dovecot gave the results above.

The change is ssl: required → yes, and disable_plaintext_auth: yes → no; I made both edits in /etc/dovecot/dovecot.conf.

Also, the server is supposed to accept TLSv1.2 and later only, but /var/log/dovecot/dovecot.log showed the occasional user with "encryption_protocol=TLSv1, encryption_cipher=ECDHE-ECDSA-AES256-SHA,", so I checked the configuration:
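After the edit, the relevant lines in /etc/dovecot/dovecot.conf look like this (a sketch; the comment wording is mine):

```
# Allow pre-STARTTLS clients such as Windows Live Mail:
# ssl = yes makes STARTTLS optional (was: required), and
# disable_plaintext_auth = no permits plaintext auth on unencrypted connections
ssl = yes
disable_plaintext_auth = no
```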

# SSL: Global settings.
# Refer to wiki site for per protocol, ip, server name SSL settings:
# http://wiki2.dovecot.org/SSL/DovecotConfiguration
ssl_min_protocol = TLSv1.2

I thought switching this to "ssl_min_protocol = TLSv1" might be enough, but the openssl side of AlmaLinux was not configured to allow TLSv1.0/TLSv1.1.

openssl itself still knows the TLSv1.0 ciphers:

# openssl ciphers -v 'ALL:COMPLEMENTOFALL'
TLS_AES_256_GCM_SHA384         TLSv1.3 Kx=any      Au=any   Enc=AESGCM(256)            Mac=AEAD
TLS_CHACHA20_POLY1305_SHA256   TLSv1.3 Kx=any      Au=any   Enc=CHACHA20/POLY1305(256) Mac=AEAD
TLS_AES_128_GCM_SHA256         TLSv1.3 Kx=any      Au=any   Enc=AESGCM(128)            Mac=AEAD
TLS_AES_128_CCM_SHA256         TLSv1.3 Kx=any      Au=any   Enc=AESCCM(128)            Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384  TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256)            Mac=AEAD
ECDHE-RSA-AES256-GCM-SHA384    TLSv1.2 Kx=ECDH     Au=RSA   Enc=AESGCM(256)            Mac=AEAD
DHE-DSS-AES256-GCM-SHA384      TLSv1.2 Kx=DH       Au=DSS   Enc=AESGCM(256)            Mac=AEAD
DHE-RSA-AES256-GCM-SHA384      TLSv1.2 Kx=DH       Au=RSA   Enc=AESGCM(256)            Mac=AEAD
ECDHE-ECDSA-CHACHA20-POLY1305  TLSv1.2 Kx=ECDH     Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-RSA-CHACHA20-POLY1305    TLSv1.2 Kx=ECDH     Au=RSA   Enc=CHACHA20/POLY1305(256) Mac=AEAD
DHE-RSA-CHACHA20-POLY1305      TLSv1.2 Kx=DH       Au=RSA   Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-ECDSA-AES256-CCM         TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESCCM(256)            Mac=AEAD
DHE-RSA-AES256-CCM             TLSv1.2 Kx=DH       Au=RSA   Enc=AESCCM(256)            Mac=AEAD
ECDHE-ECDSA-ARIA256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=ARIAGCM(256)           Mac=AEAD
ECDHE-ARIA256-GCM-SHA384       TLSv1.2 Kx=ECDH     Au=RSA   Enc=ARIAGCM(256)           Mac=AEAD
DHE-DSS-ARIA256-GCM-SHA384     TLSv1.2 Kx=DH       Au=DSS   Enc=ARIAGCM(256)           Mac=AEAD
DHE-RSA-ARIA256-GCM-SHA384     TLSv1.2 Kx=DH       Au=RSA   Enc=ARIAGCM(256)           Mac=AEAD
ADH-AES256-GCM-SHA384          TLSv1.2 Kx=DH       Au=None  Enc=AESGCM(256)            Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256  TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128)            Mac=AEAD
ECDHE-RSA-AES128-GCM-SHA256    TLSv1.2 Kx=ECDH     Au=RSA   Enc=AESGCM(128)            Mac=AEAD
DHE-DSS-AES128-GCM-SHA256      TLSv1.2 Kx=DH       Au=DSS   Enc=AESGCM(128)            Mac=AEAD
DHE-RSA-AES128-GCM-SHA256      TLSv1.2 Kx=DH       Au=RSA   Enc=AESGCM(128)            Mac=AEAD
ECDHE-ECDSA-AES128-CCM         TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESCCM(128)            Mac=AEAD
DHE-RSA-AES128-CCM             TLSv1.2 Kx=DH       Au=RSA   Enc=AESCCM(128)            Mac=AEAD
ECDHE-ECDSA-ARIA128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=ARIAGCM(128)           Mac=AEAD
ECDHE-ARIA128-GCM-SHA256       TLSv1.2 Kx=ECDH     Au=RSA   Enc=ARIAGCM(128)           Mac=AEAD
DHE-DSS-ARIA128-GCM-SHA256     TLSv1.2 Kx=DH       Au=DSS   Enc=ARIAGCM(128)           Mac=AEAD
DHE-RSA-ARIA128-GCM-SHA256     TLSv1.2 Kx=DH       Au=RSA   Enc=ARIAGCM(128)           Mac=AEAD
ADH-AES128-GCM-SHA256          TLSv1.2 Kx=DH       Au=None  Enc=AESGCM(128)            Mac=AEAD
ECDHE-ECDSA-AES256-CCM8        TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESCCM8(256)           Mac=AEAD
ECDHE-ECDSA-AES128-CCM8        TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESCCM8(128)           Mac=AEAD
DHE-RSA-AES256-CCM8            TLSv1.2 Kx=DH       Au=RSA   Enc=AESCCM8(256)           Mac=AEAD
DHE-RSA-AES128-CCM8            TLSv1.2 Kx=DH       Au=RSA   Enc=AESCCM8(128)           Mac=AEAD
ECDHE-ECDSA-AES256-SHA384      TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(256)               Mac=SHA384
ECDHE-RSA-AES256-SHA384        TLSv1.2 Kx=ECDH     Au=RSA   Enc=AES(256)               Mac=SHA384
DHE-RSA-AES256-SHA256          TLSv1.2 Kx=DH       Au=RSA   Enc=AES(256)               Mac=SHA256
DHE-DSS-AES256-SHA256          TLSv1.2 Kx=DH       Au=DSS   Enc=AES(256)               Mac=SHA256
ECDHE-ECDSA-CAMELLIA256-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=Camellia(256)          Mac=SHA384
ECDHE-RSA-CAMELLIA256-SHA384   TLSv1.2 Kx=ECDH     Au=RSA   Enc=Camellia(256)          Mac=SHA384
DHE-RSA-CAMELLIA256-SHA256     TLSv1.2 Kx=DH       Au=RSA   Enc=Camellia(256)          Mac=SHA256
DHE-DSS-CAMELLIA256-SHA256     TLSv1.2 Kx=DH       Au=DSS   Enc=Camellia(256)          Mac=SHA256
ADH-AES256-SHA256              TLSv1.2 Kx=DH       Au=None  Enc=AES(256)               Mac=SHA256
ADH-CAMELLIA256-SHA256         TLSv1.2 Kx=DH       Au=None  Enc=Camellia(256)          Mac=SHA256
ECDHE-ECDSA-AES128-SHA256      TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(128)               Mac=SHA256
ECDHE-RSA-AES128-SHA256        TLSv1.2 Kx=ECDH     Au=RSA   Enc=AES(128)               Mac=SHA256
DHE-RSA-AES128-SHA256          TLSv1.2 Kx=DH       Au=RSA   Enc=AES(128)               Mac=SHA256
DHE-DSS-AES128-SHA256          TLSv1.2 Kx=DH       Au=DSS   Enc=AES(128)               Mac=SHA256
ECDHE-ECDSA-CAMELLIA128-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=Camellia(128)          Mac=SHA256
ECDHE-RSA-CAMELLIA128-SHA256   TLSv1.2 Kx=ECDH     Au=RSA   Enc=Camellia(128)          Mac=SHA256
DHE-RSA-CAMELLIA128-SHA256     TLSv1.2 Kx=DH       Au=RSA   Enc=Camellia(128)          Mac=SHA256
DHE-DSS-CAMELLIA128-SHA256     TLSv1.2 Kx=DH       Au=DSS   Enc=Camellia(128)          Mac=SHA256
ADH-AES128-SHA256              TLSv1.2 Kx=DH       Au=None  Enc=AES(128)               Mac=SHA256
ADH-CAMELLIA128-SHA256         TLSv1.2 Kx=DH       Au=None  Enc=Camellia(128)          Mac=SHA256
ECDHE-ECDSA-AES256-SHA         TLSv1   Kx=ECDH     Au=ECDSA Enc=AES(256)               Mac=SHA1
ECDHE-RSA-AES256-SHA           TLSv1   Kx=ECDH     Au=RSA   Enc=AES(256)               Mac=SHA1
DHE-RSA-AES256-SHA             SSLv3   Kx=DH       Au=RSA   Enc=AES(256)               Mac=SHA1
DHE-DSS-AES256-SHA             SSLv3   Kx=DH       Au=DSS   Enc=AES(256)               Mac=SHA1
DHE-RSA-CAMELLIA256-SHA        SSLv3   Kx=DH       Au=RSA   Enc=Camellia(256)          Mac=SHA1
DHE-DSS-CAMELLIA256-SHA        SSLv3   Kx=DH       Au=DSS   Enc=Camellia(256)          Mac=SHA1
AECDH-AES256-SHA               TLSv1   Kx=ECDH     Au=None  Enc=AES(256)               Mac=SHA1
ADH-AES256-SHA                 SSLv3   Kx=DH       Au=None  Enc=AES(256)               Mac=SHA1
ADH-CAMELLIA256-SHA            SSLv3   Kx=DH       Au=None  Enc=Camellia(256)          Mac=SHA1
ECDHE-ECDSA-AES128-SHA         TLSv1   Kx=ECDH     Au=ECDSA Enc=AES(128)               Mac=SHA1
ECDHE-RSA-AES128-SHA           TLSv1   Kx=ECDH     Au=RSA   Enc=AES(128)               Mac=SHA1
DHE-RSA-AES128-SHA             SSLv3   Kx=DH       Au=RSA   Enc=AES(128)               Mac=SHA1
DHE-DSS-AES128-SHA             SSLv3   Kx=DH       Au=DSS   Enc=AES(128)               Mac=SHA1
DHE-RSA-CAMELLIA128-SHA        SSLv3   Kx=DH       Au=RSA   Enc=Camellia(128)          Mac=SHA1
DHE-DSS-CAMELLIA128-SHA        SSLv3   Kx=DH       Au=DSS   Enc=Camellia(128)          Mac=SHA1
AECDH-AES128-SHA               TLSv1   Kx=ECDH     Au=None  Enc=AES(128)               Mac=SHA1
ADH-AES128-SHA                 SSLv3   Kx=DH       Au=None  Enc=AES(128)               Mac=SHA1
ADH-CAMELLIA128-SHA            SSLv3   Kx=DH       Au=None  Enc=Camellia(128)          Mac=SHA1
RSA-PSK-AES256-GCM-SHA384      TLSv1.2 Kx=RSAPSK   Au=RSA   Enc=AESGCM(256)            Mac=AEAD
DHE-PSK-AES256-GCM-SHA384      TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=AESGCM(256)            Mac=AEAD
RSA-PSK-CHACHA20-POLY1305      TLSv1.2 Kx=RSAPSK   Au=RSA   Enc=CHACHA20/POLY1305(256) Mac=AEAD
DHE-PSK-CHACHA20-POLY1305      TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-PSK-CHACHA20-POLY1305    TLSv1.2 Kx=ECDHEPSK Au=PSK   Enc=CHACHA20/POLY1305(256) Mac=AEAD
DHE-PSK-AES256-CCM             TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=AESCCM(256)            Mac=AEAD
RSA-PSK-ARIA256-GCM-SHA384     TLSv1.2 Kx=RSAPSK   Au=RSA   Enc=ARIAGCM(256)           Mac=AEAD
DHE-PSK-ARIA256-GCM-SHA384     TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=ARIAGCM(256)           Mac=AEAD
AES256-GCM-SHA384              TLSv1.2 Kx=RSA      Au=RSA   Enc=AESGCM(256)            Mac=AEAD
AES256-CCM                     TLSv1.2 Kx=RSA      Au=RSA   Enc=AESCCM(256)            Mac=AEAD
ARIA256-GCM-SHA384             TLSv1.2 Kx=RSA      Au=RSA   Enc=ARIAGCM(256)           Mac=AEAD
PSK-AES256-GCM-SHA384          TLSv1.2 Kx=PSK      Au=PSK   Enc=AESGCM(256)            Mac=AEAD
PSK-CHACHA20-POLY1305          TLSv1.2 Kx=PSK      Au=PSK   Enc=CHACHA20/POLY1305(256) Mac=AEAD
PSK-AES256-CCM                 TLSv1.2 Kx=PSK      Au=PSK   Enc=AESCCM(256)            Mac=AEAD
PSK-ARIA256-GCM-SHA384         TLSv1.2 Kx=PSK      Au=PSK   Enc=ARIAGCM(256)           Mac=AEAD
RSA-PSK-AES128-GCM-SHA256      TLSv1.2 Kx=RSAPSK   Au=RSA   Enc=AESGCM(128)            Mac=AEAD
DHE-PSK-AES128-GCM-SHA256      TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=AESGCM(128)            Mac=AEAD
DHE-PSK-AES128-CCM             TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=AESCCM(128)            Mac=AEAD
RSA-PSK-ARIA128-GCM-SHA256     TLSv1.2 Kx=RSAPSK   Au=RSA   Enc=ARIAGCM(128)           Mac=AEAD
DHE-PSK-ARIA128-GCM-SHA256     TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=ARIAGCM(128)           Mac=AEAD
AES128-GCM-SHA256              TLSv1.2 Kx=RSA      Au=RSA   Enc=AESGCM(128)            Mac=AEAD
AES128-CCM                     TLSv1.2 Kx=RSA      Au=RSA   Enc=AESCCM(128)            Mac=AEAD
ARIA128-GCM-SHA256             TLSv1.2 Kx=RSA      Au=RSA   Enc=ARIAGCM(128)           Mac=AEAD
PSK-AES128-GCM-SHA256          TLSv1.2 Kx=PSK      Au=PSK   Enc=AESGCM(128)            Mac=AEAD
PSK-AES128-CCM                 TLSv1.2 Kx=PSK      Au=PSK   Enc=AESCCM(128)            Mac=AEAD
PSK-ARIA128-GCM-SHA256         TLSv1.2 Kx=PSK      Au=PSK   Enc=ARIAGCM(128)           Mac=AEAD
DHE-PSK-AES256-CCM8            TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=AESCCM8(256)           Mac=AEAD
DHE-PSK-AES128-CCM8            TLSv1.2 Kx=DHEPSK   Au=PSK   Enc=AESCCM8(128)           Mac=AEAD
AES256-CCM8                    TLSv1.2 Kx=RSA      Au=RSA   Enc=AESCCM8(256)           Mac=AEAD
AES128-CCM8                    TLSv1.2 Kx=RSA      Au=RSA   Enc=AESCCM8(128)           Mac=AEAD
PSK-AES256-CCM8                TLSv1.2 Kx=PSK      Au=PSK   Enc=AESCCM8(256)           Mac=AEAD
PSK-AES128-CCM8                TLSv1.2 Kx=PSK      Au=PSK   Enc=AESCCM8(128)           Mac=AEAD
AES256-SHA256                  TLSv1.2 Kx=RSA      Au=RSA   Enc=AES(256)               Mac=SHA256
CAMELLIA256-SHA256             TLSv1.2 Kx=RSA      Au=RSA   Enc=Camellia(256)          Mac=SHA256
AES128-SHA256                  TLSv1.2 Kx=RSA      Au=RSA   Enc=AES(128)               Mac=SHA256
CAMELLIA128-SHA256             TLSv1.2 Kx=RSA      Au=RSA   Enc=Camellia(128)          Mac=SHA256
ECDHE-PSK-AES256-CBC-SHA384    TLSv1   Kx=ECDHEPSK Au=PSK   Enc=AES(256)               Mac=SHA384
ECDHE-PSK-AES256-CBC-SHA       TLSv1   Kx=ECDHEPSK Au=PSK   Enc=AES(256)               Mac=SHA1
SRP-DSS-AES-256-CBC-SHA        SSLv3   Kx=SRP      Au=DSS   Enc=AES(256)               Mac=SHA1
SRP-RSA-AES-256-CBC-SHA        SSLv3   Kx=SRP      Au=RSA   Enc=AES(256)               Mac=SHA1
SRP-AES-256-CBC-SHA            SSLv3   Kx=SRP      Au=SRP   Enc=AES(256)               Mac=SHA1
RSA-PSK-AES256-CBC-SHA384      TLSv1   Kx=RSAPSK   Au=RSA   Enc=AES(256)               Mac=SHA384
DHE-PSK-AES256-CBC-SHA384      TLSv1   Kx=DHEPSK   Au=PSK   Enc=AES(256)               Mac=SHA384
RSA-PSK-AES256-CBC-SHA         SSLv3   Kx=RSAPSK   Au=RSA   Enc=AES(256)               Mac=SHA1
DHE-PSK-AES256-CBC-SHA         SSLv3   Kx=DHEPSK   Au=PSK   Enc=AES(256)               Mac=SHA1
ECDHE-PSK-CAMELLIA256-SHA384   TLSv1   Kx=ECDHEPSK Au=PSK   Enc=Camellia(256)          Mac=SHA384
RSA-PSK-CAMELLIA256-SHA384     TLSv1   Kx=RSAPSK   Au=RSA   Enc=Camellia(256)          Mac=SHA384
DHE-PSK-CAMELLIA256-SHA384     TLSv1   Kx=DHEPSK   Au=PSK   Enc=Camellia(256)          Mac=SHA384
AES256-SHA                     SSLv3   Kx=RSA      Au=RSA   Enc=AES(256)               Mac=SHA1
CAMELLIA256-SHA                SSLv3   Kx=RSA      Au=RSA   Enc=Camellia(256)          Mac=SHA1
PSK-AES256-CBC-SHA384          TLSv1   Kx=PSK      Au=PSK   Enc=AES(256)               Mac=SHA384
PSK-AES256-CBC-SHA             SSLv3   Kx=PSK      Au=PSK   Enc=AES(256)               Mac=SHA1
PSK-CAMELLIA256-SHA384         TLSv1   Kx=PSK      Au=PSK   Enc=Camellia(256)          Mac=SHA384
ECDHE-PSK-AES128-CBC-SHA256    TLSv1   Kx=ECDHEPSK Au=PSK   Enc=AES(128)               Mac=SHA256
ECDHE-PSK-AES128-CBC-SHA       TLSv1   Kx=ECDHEPSK Au=PSK   Enc=AES(128)               Mac=SHA1
SRP-DSS-AES-128-CBC-SHA        SSLv3   Kx=SRP      Au=DSS   Enc=AES(128)               Mac=SHA1
SRP-RSA-AES-128-CBC-SHA        SSLv3   Kx=SRP      Au=RSA   Enc=AES(128)               Mac=SHA1
SRP-AES-128-CBC-SHA            SSLv3   Kx=SRP      Au=SRP   Enc=AES(128)               Mac=SHA1
RSA-PSK-AES128-CBC-SHA256      TLSv1   Kx=RSAPSK   Au=RSA   Enc=AES(128)               Mac=SHA256
DHE-PSK-AES128-CBC-SHA256      TLSv1   Kx=DHEPSK   Au=PSK   Enc=AES(128)               Mac=SHA256
RSA-PSK-AES128-CBC-SHA         SSLv3   Kx=RSAPSK   Au=RSA   Enc=AES(128)               Mac=SHA1
DHE-PSK-AES128-CBC-SHA         SSLv3   Kx=DHEPSK   Au=PSK   Enc=AES(128)               Mac=SHA1
ECDHE-PSK-CAMELLIA128-SHA256   TLSv1   Kx=ECDHEPSK Au=PSK   Enc=Camellia(128)          Mac=SHA256
RSA-PSK-CAMELLIA128-SHA256     TLSv1   Kx=RSAPSK   Au=RSA   Enc=Camellia(128)          Mac=SHA256
DHE-PSK-CAMELLIA128-SHA256     TLSv1   Kx=DHEPSK   Au=PSK   Enc=Camellia(128)          Mac=SHA256
AES128-SHA                     SSLv3   Kx=RSA      Au=RSA   Enc=AES(128)               Mac=SHA1
CAMELLIA128-SHA                SSLv3   Kx=RSA      Au=RSA   Enc=Camellia(128)          Mac=SHA1
PSK-AES128-CBC-SHA256          TLSv1   Kx=PSK      Au=PSK   Enc=AES(128)               Mac=SHA256
PSK-AES128-CBC-SHA             SSLv3   Kx=PSK      Au=PSK   Enc=AES(128)               Mac=SHA1
PSK-CAMELLIA128-SHA256         TLSv1   Kx=PSK      Au=PSK   Enc=Camellia(128)          Mac=SHA256
ECDHE-ECDSA-NULL-SHA           TLSv1   Kx=ECDH     Au=ECDSA Enc=None                   Mac=SHA1
ECDHE-RSA-NULL-SHA             TLSv1   Kx=ECDH     Au=RSA   Enc=None                   Mac=SHA1
AECDH-NULL-SHA                 TLSv1   Kx=ECDH     Au=None  Enc=None                   Mac=SHA1
NULL-SHA256                    TLSv1.2 Kx=RSA      Au=RSA   Enc=None                   Mac=SHA256
ECDHE-PSK-NULL-SHA384          TLSv1   Kx=ECDHEPSK Au=PSK   Enc=None                   Mac=SHA384
ECDHE-PSK-NULL-SHA256          TLSv1   Kx=ECDHEPSK Au=PSK   Enc=None                   Mac=SHA256
ECDHE-PSK-NULL-SHA             TLSv1   Kx=ECDHEPSK Au=PSK   Enc=None                   Mac=SHA1
RSA-PSK-NULL-SHA384            TLSv1   Kx=RSAPSK   Au=RSA   Enc=None                   Mac=SHA384
RSA-PSK-NULL-SHA256            TLSv1   Kx=RSAPSK   Au=RSA   Enc=None                   Mac=SHA256
DHE-PSK-NULL-SHA384            TLSv1   Kx=DHEPSK   Au=PSK   Enc=None                   Mac=SHA384
DHE-PSK-NULL-SHA256            TLSv1   Kx=DHEPSK   Au=PSK   Enc=None                   Mac=SHA256
RSA-PSK-NULL-SHA               SSLv3   Kx=RSAPSK   Au=RSA   Enc=None                   Mac=SHA1
DHE-PSK-NULL-SHA               SSLv3   Kx=DHEPSK   Au=PSK   Enc=None                   Mac=SHA1
NULL-SHA                       SSLv3   Kx=RSA      Au=RSA   Enc=None                   Mac=SHA1
NULL-MD5                       SSLv3   Kx=RSA      Au=RSA   Enc=None                   Mac=MD5
PSK-NULL-SHA384                TLSv1   Kx=PSK      Au=PSK   Enc=None                   Mac=SHA384
PSK-NULL-SHA256                TLSv1   Kx=PSK      Au=PSK   Enc=None                   Mac=SHA256
PSK-NULL-SHA                   SSLv3   Kx=PSK      Au=PSK   Enc=None                   Mac=SHA1
#

However, those ciphers are not among the ones enabled by the system-wide openssl configuration:

# cat /etc/crypto-policies/back-ends/opensslcnf.config
CipherString = @SECLEVEL=2:kEECDH:kRSA:kEDH:kPSK:kDHEPSK:kECDHEPSK:kRSAPSK:-aDSS:-3DES:!DES:!RC4:!RC2:!IDEA:-SEED:!eNULL:!aNULL:!MD5:-SHA384:-CAMELLIA:-ARIA:-AESCCM8
Ciphersuites = TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_128_CCM_SHA256
TLS.MinProtocol = TLSv1.2
TLS.MaxProtocol = TLSv1.3
DTLS.MinProtocol = DTLSv1.2
DTLS.MaxProtocol = DTLSv1.2
SignatureAlgorithms = ECDSA+SHA256:ECDSA+SHA384:ECDSA+SHA512:ed25519:ed448:rsa_pss_pss_sha256:rsa_pss_pss_sha384:rsa_pss_pss_sha512:rsa_pss_rsae_sha256:rsa_pss_rsae_sha384:rsa_pss_rsae_sha512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA224:RSA+SHA224
Groups = X25519:secp256r1:X448:secp521r1:secp384r1:ffdhe2048:ffdhe3072:ffdhe4096:ffdhe6144:ffdhe8192
#

I didn't feel comfortable changing this, so I performed the cutover with TLSv1.0/1.1 left disabled; so far no problems have surfaced.

(5) Mail from our own domain sent through other servers gets rejected

An issue from the 2018 setup: "iRedMail with postfix rejects mail from our own domain when it arrives via other servers".

As before, I applied the fix from the forum thread "SMTP AUTH is required for users under this sender domain (Mailing list)": add "CHECK_SPF_IF_LOGIN_MISMATCH = True" to /opt/iredapd/settings.py and restart iredapd.service.
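The change itself can be made idempotent; a sketch, demonstrated here against a scratch file since the real target needs root (on the server, point it at /opt/iredapd/settings.py and then run systemctl restart iredapd):

```shell
# Append the override only if it is not already present.
# Demonstrated on a scratch copy; the real file is /opt/iredapd/settings.py.
conf=$(mktemp)
grep -qx 'CHECK_SPF_IF_LOGIN_MISMATCH = True' "$conf" || \
  echo 'CHECK_SPF_IF_LOGIN_MISMATCH = True' >> "$conf"
cat "$conf"
```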

(6) Barracudacentral rejects too much mail

An issue from the 2018 setup: "Changes from the iRedMail defaults, 2018/08/21 edition".

Checking the current postfix configuration with postconf -v shows the following:

postscreen_dnsbl_sites = zen.spamhaus.org=127.0.0.[2..11]*3 b.barracudacentral.org=127.0.0.2*2

Six years on, I still couldn't work up the courage to re-enable it, so I carried the existing policy over and configured the following:

postscreen_dnsbl_sites = zen.spamhaus.org=127.0.0.[2..11]*3

Since postconf -v prints the merged result, I tracked down where the value is actually set; it was /etc/postfix/main.cf, and the relevant part now looks like this:

# Attention:
#   - zen.spamhaus.org free tier has 3 limits
#     (https://www.spamhaus.org/organization/dnsblusage/):
#
#     1) Your use of the Spamhaus DNSBLs is non-commercial*, and
#     2) Your email traffic is less than 100,000 SMTP connections per day, and
#     3) Your DNSBL query volume is less than 300,000 queries per day.
#
#   - FAQ: "Your DNSBL blocks nothing at all!"
#     https://www.spamhaus.org/faq/section/DNSBL%20Usage#261
#
# It's strongly recommended to use a local DNS server for cache.
postscreen_dnsbl_sites =
    zen.spamhaus.org=127.0.0.[2..11]*3
#    b.barracudacentral.org=127.0.0.2*2

postscreen_dnsbl_reply_map = texthash:/etc/postfix/postscreen_dnsbl_reply
postscreen_access_list = permit_mynetworks cidr:/etc/postfix/postscreen_access.cidr

(7) Un-blocking mail from mail.goo.ne.jp

An issue from the 2018 setup: "Changes from the iRedMail defaults, 2018/08/21 edition".

Looking at the /etc/postfix/helo_access.pcre shipped by iRedMail, mail from mail.goo.ne.jp was still being explicitly rejected in 2024.

Before:

/^(mail\.goo\.ne\.jp)$/ REJECT ACCESS DENIED. Your email was rejected because it appears to come from a known spamming mail server (${1})

After:

###/^(mail\.goo\.ne\.jp)$/ REJECT ACCESS DENIED. Your email was rejected because it appears to come from a known spamming mail server (${1})
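Before commenting the entry out, you can confirm it really is what catches the goo.ne.jp HELO. On the server, `postmap -q mail.goo.ne.jp pcre:/etc/postfix/helo_access.pcre` does this directly; as a self-contained sketch (assumption: GNU grep with -P):

```shell
# The pattern from helo_access.pcre, checked against the HELO name it targets
helo=mail.goo.ne.jp
verdict=$(printf '%s\n' "$helo" | grep -Pq '^(mail\.goo\.ne\.jp)$' && echo matched || echo no-match)
echo "$verdict"   # → matched
```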

(8) Partially relaxing the rejection of hostnames containing IP addresses

An issue from the 2018 setup: "Changes from the iRedMail defaults, 2018/08/21 edition".

There is a rule that rejects HELO hostnames containing an IP address, and I relaxed part of it.
Residential connections and the like often present hostnames with the IP address embedded outright, e.g. network-192-168.0.100.provider.ne.jp.
Proper mail servers, by contrast, are usually given sensible names.

The bypass list has grown since 2018.

Before:

# bypass "[IP_ADDRESS]"
/^\[(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\]$/ OK

# Bypass HELOs used by known big ISPs which contains IP address
/\.outbound-(email|mail)\.sendgrid\.net$/ OK
/^\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\.mail-.*\.facebook\.com$/ OK
/^outbound-\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\.pinterestmail\.com$/ OK
/\.outbound\.protection\.outlook\.com$/ OK
/^ec2-\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\..*\.compute\.amazonaws\.com$/ OK
/^out\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\.mail\.qq\.com$/ OK

After... where the 2018 entries ended in "DUNNO", the 2024 defaults use "OK", so I aligned mine accordingly:

# bypass "[IP_ADDRESS]"
/^\[(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\]$/ OK

# Bypass HELOs used by known big ISPs which contains IP address
/\.outbound-(email|mail)\.sendgrid\.net$/ OK
/^\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\.mail-.*\.facebook\.com$/ OK
/^outbound-\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\.pinterestmail\.com$/ OK
/\.outbound\.protection\.outlook\.com$/ OK
/^ec2-\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\..*\.compute\.amazonaws\.com$/ OK
/^out\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\.mail\.qq\.com$/ OK

### add for yawata-lions.com
/^sv(\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}).*.seedshosting.jp$/ OK
### add for shop-pro.jp
# mail-10-200-1-137.mitaka.shop-pro.jp
/^mail-(\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3})/ OK
### add for salesforce
/^smtp.*\.mta\.salesforce\.com$/ OK
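The added patterns can be sanity-checked the same way against hostnames seen in the logs; for example, the shop-pro.jp entry versus the hostname from the comment above (grep -P standing in for postfix's pcre: table lookup):

```shell
# Hostname from the log comment vs. the added pattern
helo=mail-10-200-1-137.mitaka.shop-pro.jp
verdict=$(printf '%s\n' "$helo" | grep -Pq '^mail-(\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3})' && echo OK || echo no-match)
echo "$verdict"   # → OK
```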

(9) heloでDNSに登録されていないホスト名しゃべるメールサーバの取り扱い

ml2.vector.co.jp から送られてくるメールは www05.vector.co.jp を名乗っているがDNS登録がなくてエラーになっていた。

Dec 16 10:25:42 ml2 postfix/smtpd[502984]: NOQUEUE: reject: RCPT from ml2.vector.co.jp[180.214.37.169]: 450 4.7.1 &lt;www05.vector.co.jp>: Helo command rejected: Host not found; from=&lt;shopmgr@ml.vector.co.jp> to=&lt;xxxxx@xxxxxx.jp> proto=ESMTP helo=&lt;www05.vector.co.jp>

Wondering whether this was a problem on my end, I went looking and found exactly the right page on Trend Micro's site: "The destination mail server returns an error such as 'Helo command rejected: Host not found' and mail cannot be delivered."

Cause
The hostname that InterScan MSS/IMSVA/DDEI specifies in the SMTP HELO/EHLO command at delivery time could not be resolved by the destination mail server.
By default, the OS hostname of InterScan MSS/IMSVA/DDEI is used in the SMTP HELO/EHLO command.
Some mail servers refuse delivery when no corresponding A or MX record can be found in DNS for the hostname given in the SMTP HELO/EHLO command.
Note that RFC 2821 (Simple Mail Transfer Protocol), section 3.6 "Domains", requires that the hostname given in the SMTP EHLO command be resolvable.
postfix, for example, has such a feature. When triggered, it returns the following SMTP response to the client and refuses the mail:
 450 4.7.1 <hostname>: Helo command rejected: Host not found

In other words, the sender is at fault.

Still, can the check be relaxed? Let's see.

The iRedMail forum thread "Helo command rejected: Host not found 450 4.7.1 – error [FIXED]" mentions smtpd_helo_restrictions, so I compared the postconf output of the old and new servers.

Old server:
smtpd_helo_restrictions = permit_mynetworks permit_sasl_authenticated check_helo_access pcre:/etc/postfix/helo_access.pcre

New server:
smtpd_helo_restrictions = permit_mynetworks permit_sasl_authenticated check_helo_access pcre:/etc/postfix/helo_access.pcre reject_non_fqdn_helo_hostname reject_unknown_helo_hostname

I see... checking my notes, there was a record from 2018 ("Windows Live Mail 2012 errors against the postfix/dovecot mail server") of removing reject_non_fqdn_helo_hostname and reject_unknown_helo_hostname.

So, as a first step, I removed only reject_unknown_helo_hostname from smtpd_helo_restrictions in /etc/postfix/main.cf and watched what happened.

...the vector mail resent after that change still failed with Host not found, so I removed reject_non_fqdn_helo_hostname as well, and the mail came through.

In the end, this time too I removed both reject_non_fqdn_helo_hostname and reject_unknown_helo_hostname from smtpd_helo_restrictions.
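With the two checks removed, the line in /etc/postfix/main.cf ends up identical to what the old server had:

```
smtpd_helo_restrictions = permit_mynetworks permit_sasl_authenticated check_helo_access pcre:/etc/postfix/helo_access.pcre
```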

(10) Changing the SOGo time zone

/etc/sogo/sogo.conf contains the setting SOGoTimeZone = "America/New_York"; so rewrite it to SOGoTimeZone = "Asia/Tokyo";

(11) Checking operation after the cutover

Checking rejects

Watch for unexpected rejections of inbound mail with "tail -f /var/log/maillog | grep reject"

"Recipient address rejected: Intentional policy rejection, please try again later" is greylisting requesting a retry after 15 minutes; unless it keeps happening for the same from/to pair, it can be ignored.

Dec 16 13:34:39 ml2 postfix/smtpd[528122]: NOQUEUE: reject: RCPT from mta-sndfb-e01.mail.nifty.com[106.153.226.65]: 451 4.7.1 <xxxx@xxxx.jp>: Recipient address rejected: Intentional policy rejection, please try again later; from=<yyyy@yyyy.com> to=<xxxx@xxxx.jp> proto=ESMTP helo=<mta-sndfb-e01.mail.nifty.com>

"Sender address rejected: Domain not found" is mail from a nonexistent domain; refusing it is fine.

Dec 16 13:32:44 ml2 postfix/smtpd[528122]: NOQUEUE: reject: RCPT from unknown[178.170.191.125]: 450 4.1.8 <yyyy@yyyy.com>: Sender address rejected: Domain not found; from=<yyyy@yyyy.com> to=<xxxx@xxxx.jp> proto=ESMTP helo=<mx01.vbudushee.ru>

"Helo command rejected: Host not found" means the sending server's HELO hostname is not registered in DNS, which it really should be; but since even well-known senders occasionally fail this, I revisited the smtpd_helo_restrictions setting in /etc/postfix/main.cf.

Dec 16 11:35:50 ml2 postfix/smtpd[513244]: NOQUEUE: reject: RCPT from ml2.vector.co.jp[180.214.37.169]: 450 4.7.1 <www05.vector.co.jp>: Helo command rejected: Host not found; from=<yyyyyy@ml.vector.co.jp> to=<xxxx@xxxx.jp> proto=ESMTP helo=<www05.vector.co.jp>

Checking POP3/IMAP authentication errors

Check whether mail retrieval is producing authentication errors:

"tail -f /var/log/dovecot/imap.log | grep 'auth fail'" and "tail -f /var/log/dovecot/pop3.log | grep 'auth fail'"

Dec 16 09:43:22 ml2 dovecot[414228]: imap-login: Disconnected: Connection closed (auth failed, 1 attempts in 8 secs): user=<xxx@xxx.jp>, method=PLAIN, rip=xx.xx.xx.xx, lip=yy.yy.yy.yy, TLS, TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits), session=<qTdBd1gps43Ssvsh>
Dec 16 11:14:37 ml2 dovecot[414228]: imap-login: Disconnected: Connection closed (auth failed, 1 attempts in 2 secs): user=<yyy@yyy.com>, method=PLAIN, rip=xx.xx.xx.xx, lip=yy.yy.yy.yy, TLS: Connection closed, TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits), session=<SbLtvVkpCuUjSoVw>
Dec 16 08:07:46 ml2 dovecot[414228]: pop3-login: Disconnected: Connection closed (auth failed, 1 attempts in 2 secs): user=<yyy@yyy.com>, method=PLAIN, rip=xx.xx.xx.xx, lip=yy.yy.yy.yy, session=<SeOzIVcpZLE5tnGw>
Dec 16 08:25:44 ml2 dovecot[414228]: pop3-login: Disconnected: Aborted login by logging out (auth failed, 1 attempts in 2 secs): user=<xxx@xxx.jp>, method=PLAIN, rip=xx.xx.xx.xx, lip=yy.yy.yy.yy, TLS, TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits), session=<xrf1YVcp9P2Zrq1g>

In both cases, look up who owns the client IP after "rip="; if the access comes from a plausible location, contact the affected user (by phone or similar) to confirm what is going on.

Checking greylisting details on inbound mail

Check /var/log/iredapd/iredapd.log, where iredapd logs its greylisting decisions for inbound mail.

Skimming through it, about the only senders still using no encryption at all (no encryption_protocol= value) these days are bulk mailers.

Loyalty-point services

ponta.jp
tsite.jp

Billing/payment

docomo-bill.ne.jp (I suspected a fake, but it is the real NTT Finance, ntt-finance.co.jp)

General sites

abema.tv
b-cle.com
bestrsv.com
e-trend.co.jp
gnavi.co.jp
yodobashi.com

Bulk-mail providers

cuenote.jp
emberpoint.com
mpse.jp

Sketchy operators

tubox.jp
ec-tools.jp
myselection.net

(12) Migrating the SpamAssassin configuration

SpamAssassin had accumulated quite a bit of customization, so those settings needed to be carried over.

The rules once distributed at www.flcl.org/~yoh/user_prefs are kept as /etc/mail/spamassassin/japancustom, so copy that file over.

Likewise, my own assorted customizations are kept as /etc/mail/spamassassin/private_prefs; copy that too.

To pull in these two files, append the following to the end of /etc/mail/spamassassin/local.cf:

include japancustom
include private_prefs

When I then went to restart the SpamAssassin daemon, it turned out not to be running at all, and headers such as X-Spam-Status were missing from delivered mail.

Investigating, I found "Amavisd + SpamAssassin not working? no mail header (X-Spam-*) inserted": amavisd invokes SpamAssassin internally, so no SpamAssassin daemon runs all the time, and by default no X-Spam headers are added.

The old server's settings:

#$sa_tag_level_deflt  = 2.0;  # add spam info headers if at, or above that level
$sa_tag_level_deflt = -999;
$sa_tag2_level_deflt = 6.2;  # add 'spam detected' headers at that level
$sa_tag2_level_deflt = 15;
#$sa_kill_level_deflt = 6.9;  # triggers spam evasive actions (e.g. blocks mail)
$sa_kill_level_deflt = 100;
#$sa_dsn_cutoff_level = 10;   # spam level beyond which a DSN is not sent
$sa_dsn_cutoff_level = 100;
$sa_crediblefrom_dsn_cutoff_level = 18; # likewise, but for a likely valid From
# $sa_quarantine_cutoff_level = 25; # spam level beyond which quarantine is off
$penpals_bonus_score = 8;    # (no effect without a @storage_sql_dsn database)
$penpals_threshold_high = $sa_kill_level_deflt;  # don't waste time on hi spam
$bounce_killer_score = 100;  # spam score points to add for joe-jobbed bounces

$sa_mail_body_size_limit = 400*1024; # don't waste time on SA if mail is larger
$sa_local_tests_only = 0;    # only tests which do not require internet access?

And the new server's:

$sa_tag_level_deflt  = 2.0;  # add spam info headers if at, or above that level
$sa_tag2_level_deflt = 6.2;  # add 'spam detected' headers at that level
$sa_kill_level_deflt = 6.9;  # triggers spam evasive actions (e.g. blocks mail)
$sa_dsn_cutoff_level = 10;   # spam level beyond which a DSN is not sent
$sa_crediblefrom_dsn_cutoff_level = 18; # likewise, but for a likely valid From
#$sa_quarantine_cutoff_level = 25; # spam level beyond which quarantine is off

$sa_mail_body_size_limit = 400*1024; # don't waste time on SA if mail is larger
$sa_local_tests_only = 0;    # only tests which do not require internet access?

Comparing the two jogged my memory.

Back in 2018 the default spam thresholds felt too strict, so I had relaxed them (and had forgotten, since I never wrote it up).

$sa_tag_level_deflt from 2.0 to -999, so that the "X-Spam" headers always appear

$sa_tag2_level_deflt from 6.2 to 15, to reduce how much mail gets killed

$sa_kill_level_deflt from 6.9 to 100

$sa_dsn_cutoff_level from 10 to 100

That is how it had been configured, and how I configured it again.
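Collected as a config fragment, the relaxed values carried over to the new server are the following (in iRedMail these live in the amavisd configuration, e.g. /etc/amavisd/amavisd.conf; the exact path may differ by distribution):

```
$sa_tag_level_deflt  = -999;  # always insert X-Spam-* headers
$sa_tag2_level_deflt = 15;    # raise the 'spam detected' threshold (default 6.2)
$sa_kill_level_deflt = 100;   # effectively never trigger spam-evasive blocking
$sa_dsn_cutoff_level = 100;   # effectively never suppress DSNs by score
```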

(13) Adjusting the logwatch configuration

With logwatch installed as-is, a very large number of unmatched entries are reported for postfix.

The forum thread "Logwatch postfix: a lot of unmatched entries" says to set "$postfix_Enable_Long_Queue_Ids = Yes" in /etc/logwatch/conf/services/postfix.conf.

The initial value was "$postfix_Enable_Long_Queue_Ids = No"; after changing it and running logwatch --output stdout, the postfix section was clean, but other services were still noisy...

In particular, the dovecot output includes the lda's per-message file-handling records.

/etc/logwatch/conf/services/dovecot.conf, unlike postfix.conf, has few configurable options.

The forum thread "logwatch and dovecot produce lot of output" says to use ignore settings.

For now I configured the following:

# cat /etc/logwatch/conf/ignore.conf
###### REGULAR EXPRESSIONS IN THIS FILE WILL BE TRIMMED FROM REPORT OUTPUT #####
### for dovecot imap,lda
: sieve: from=
: expunge: box=
: copy from
: delete: box=
# maildir move error?
: Expunged message reappeared, giving a new UID
: Warning: Fixed a duplicate:
#

So far this looks fine.

(14) Customizing logwatch's dovecot script

logwatch's dovecot report shows POP3/IMAP login counts, but what really stands out when an account is hijacked is the sheer number of distinct client IP addresses, so counting successful-login IPs per account gives a rough but useful detection signal.
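As a sketch of that idea, distinct rip= addresses per account can be counted with standard tools; the three log lines below are fabricated samples in dovecot's login format (in practice you would feed in /var/log/dovecot/imap.log and pop3.log):

```shell
# Count distinct client IPs (rip=) per account from dovecot login lines
printf '%s\n' \
  'imap-login: Login: user=<alice@example.com>, method=PLAIN, rip=203.0.113.5, lip=198.51.100.1' \
  'imap-login: Login: user=<alice@example.com>, method=PLAIN, rip=203.0.113.7, lip=198.51.100.1' \
  'pop3-login: Login: user=<bob@example.com>, method=PLAIN, rip=192.0.2.9, lip=198.51.100.1' |
  sed -n 's/.*user=<\([^>]*\)>.*rip=\([^,]*\),.*/\1 \2/p' |
  sort -u |
  awk '{count[$1]++} END {for (u in count) print u, count[u]}' |
  sort
# prints:
#   alice@example.com 2
#   bob@example.com 1
```

A sudden jump in that per-account count is exactly the signature the logwatch customization below is meant to surface.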

First, the stock services/dovecot configuration only reports server-wide totals, so enable per-user output as well.

In /etc/logwatch/conf/services/dovecot.conf, change "#$dovecot_detail = 10" to "$dovecot_detail = 10".

Now, when users have gmail, outlook, or similar services fetch their mail over POP3, those services connect from a wide range of IP addresses, Gmail especially.

To collapse those into a single line of output, I modified /etc/logwatch/scripts/services/dovecot.

Both gmail and outlook connected mostly over IPv6, so I registered the prefixes that appeared frequently.

The POP3 section, before:

   } elsif ( (my ($User, $Host) = ( $ThisLine =~ /^(?:$dovecottag )?pop3-login: Login: (.*?) \[(.*)\]/ ) ) or
             (my ($User, $Host) = ( $ThisLine =~ /^(?:$dovecottag )?pop3-login: (?:Info: )?Login: user=\<(.*?)\>.*rip=(.*), lip=/ ) ) ) {
      if ($Host !~ /$IgnoreHost/) {
         $Host = hostName($Host);
         $Login{$User}{$Host}++;
         $LoginPOP3{$User}++;
         $ConnectionPOP3{$Host}++;
         $Connection{$Host}++;
      }

The POP3 section, after:

   } elsif ( (my ($User, $Host) = ( $ThisLine =~ /^(?:$dovecottag )?pop3-login: Login: (.*?) \[(.*)\]/ ) ) or
             (my ($User, $Host) = ( $ThisLine =~ /^(?:$dovecottag )?pop3-login: (?:Info: )?Login: user=\<(.*?)\>.*rip=(.*), lip=/ ) ) ) {
      if ($Host !~ /$IgnoreHost/) {
         $Host = hostName($Host);
         if(($Host =~ /mail-[a-z][a-z0-9]*-[a-z][a-z0-9]*\.google\.com/)
                ||($Host =~ /2001:4860:4864:/)
                ||($Host =~ /2607:f8b0:4864:/)
                ||($Host =~ /2a00:1450:4864:/)
                ){
             $Host = "access from Gmail server(POP3)";
         }
         if(($Host =~ /40\.99\.44\.229/)
                ||($Host =~ /40\.99\.251\.133/)
                ||($Host =~ /2603:1036:/)
                ){
             $Host = "access from Microsoft server(POP3)";
         }
         $Login{$User}{$Host}++;
         $LoginPOP3{$User}++;
         $ConnectionPOP3{$Host}++;
         $Connection{$Host}++;
      }

The IMAP section, before:

   } elsif ( (my ($User, $Host) = ( $ThisLine =~ /^(?:$dovecottag )?imap-login: Login: (.*?) \[(.*)\]/ ) ) or
             (my ($User, $Host, $Session) = ( $ThisLine =~ /^(?:$dovecottag )?imap-login: (?:Info: )?Login: user=\<(.*?)\>.*rip=(.*), lip=.*, session=<([^>]+)>/ ) ) ) {
      if ($Host !~ /$IgnoreHost/) {
         $Host = hostName($Host);
         $Login{$User}{$Host}++;
         $LoginIMAP{$User}++;
         $ConnectionIMAP{$Host}++;
         $Connection{$Host}++;
         if (defined($MUASessionList{$Session})) {
             $MUAList{$MUASessionList{$Session}}{$User}++;
             delete $MUASessionList{$Session};
         }
      }

The IMAP section, after:

   } elsif ( (my ($User, $Host) = ( $ThisLine =~ /^(?:$dovecottag )?imap-login: Login: (.*?) \[(.*)\]/ ) ) or
             (my ($User, $Host, $Session) = ( $ThisLine =~ /^(?:$dovecottag )?imap-login: (?:Info: )?Login: user=\<(.*?)\>.*rip=(.*), lip=.*, session=<([^>]+)>/ ) ) ) {
      if ($Host !~ /$IgnoreHost/) {
         $Host = hostName($Host);
         if(($Host =~ /mail-[a-z][a-z0-9]*-[a-z][a-z0-9]*\.google\.com/)
                ||($Host =~ /2001:4860:4864:/)
                ||($Host =~ /2607:f8b0:4864:/)
                ||($Host =~ /2a00:1450:4864:/)
                ){
             $Host = "access from Gmail server(IMAP)";
         }
         if(($Host =~ /40\.99\.44\.229/)
                ||($Host =~ /40\.99\.251\.133/)
                ||($Host =~ /2603:1036:/)
                ){
             $Host = "access from Microsoft server(IMAP)";
         }
         $Login{$User}{$Host}++;
         $LoginIMAP{$User}++;
         $ConnectionIMAP{$Host}++;
         $Connection{$Host}++;
         if (defined($MUASessionList{$Session})) {
             $MUAList{$MUASessionList{$Session}}{$User}++;
             delete $MUASessionList{$Session};
         }
      }

(15) Adjusting the logrotate configuration

Mail logs and the like are better kept long-term, so I changed the settings.

Changes to /etc/logrotate.conf:

rotate 4 → rotate 30

#compress → compress
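As a config excerpt, the two changed lines in /etc/logrotate.conf become:

```
rotate 30
compress
```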

maillog and dovecot/* currently rotate weekly; on the old mail server I had switched them to daily so that day-of-week variation in mail volume was easier to see.

This time, since it is rarely needed anyway, I left them weekly.

(16) SSL via Let's Encrypt, using dehydrated

The official procedure, "Request a free cert from Let's Encrypt (for servers deployed with downloadable iRedMail installer)", uses the certbot command, but I use dehydrated, which is simpler.

EPEL was already enabled during the iRedMail install, so "dnf install dehydrated" is enough.

Configure nginx to serve /.well-known/acme-challenge/, which is used during certificate issuance.

Put the following in /etc/nginx/templates/dehydrated.tmpl:

location ^~ /.well-known/acme-challenge/ {
  access_log on;
  autoindex off;
  alias /var/www/dehydrated/;
}

Add "include /etc/nginx/templates/dehydrated.tmpl;" to /etc/nginx/sites-available/00-default.conf:

#
# Note: This file must be loaded before other virtual host config files,
#
# HTTP
server {
    # Listen on ipv4
    listen 80;
    listen [::]:80;

    server_name _;

    include /etc/nginx/templates/dehydrated.tmpl;
    # Redirect all insecure http:// requests to https://
    return 301 https://$host$request_uri;
}

Add it to /etc/nginx/sites-available/00-default-ssl.conf as well:

#
# Note: This file must be loaded before other virtual host config files,
#
# HTTPS
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name _;

    root /var/www/html;
    index index.php index.html;

    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
    include /etc/nginx/templates/dehydrated.tmpl;
}

After adding these, restart nginx.

List every FQDN you want a certificate for, all on a single line, in /etc/dehydrated/domains.txt.
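A minimal example of the domains file (dehydrated's default location is /etc/dehydrated/domains.txt; the hostnames below are placeholders):

```
# one line = one certificate; every name on the line becomes a SAN
mail.example.com example.com www.example.com
```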

Register with "dehydrated --register" followed by "dehydrated --register --accept-terms", then issue the certificate with "dehydrated --cron".

Once the certificate is issued, replace the files under /etc/pki/tls with the ones in /etc/dehydrated/certs/<FQDN>/, in the manner described in "Use Let's Encrypt cert".

Also, to automatically restart the relevant services whenever a certificate is issued so they pick up the new files, add the following inside deploy_cert() { in /etc/dehydrated/hook.sh:

deploy_cert() {
  local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" TIMESTAMP="${6}"

  # (snip)
  # systemctl reload nginx.service
    chmod a+r /etc/dehydrated/certs/${DOMAIN}/fullchain-*.pem
    chmod a+r /etc/dehydrated/certs/${DOMAIN}/privkey-*.pem
    /usr/bin/systemctl restart postfix.service
    /usr/bin/systemctl restart dovecot.service
    /usr/bin/systemctl restart nginx.service
    /usr/bin/systemctl restart sogod.service
    /usr/bin/systemctl restart mariadb.service
}

Finally, to have dehydrated run automatically, enable its timer with "systemctl enable dehydrated.timer":

systemctl status dehydrated.timer
systemctl enable dehydrated.timer
systemctl status dehydrated.timer

(17) Detecting Spamhaus listing with logwatch?

Operation was fine at first, but then the server got listed as a spam source by Spamhaus, on the grounds that its IPv6 address had no reverse DNS.

In reality, dig run from servers on several different networks returned proper answers, which made the listing all the more infuriating.

Checking /var/log/maillog turned up entries like the following containing "blocked using Spamhaus.":

Dec 23 14:22:12 servername postfix/smtp[1969195]: 4YGmc31klrz9sls: to=<osakanataro@domain>, relay=hotmail-com.olc.protection.outlook.com[52.101.42.11]:25, delay=0.98, delays=0/0.02/0.85/0.11, dsn=5.7.1, status=bounced (host hotmail-com.olc.protection.outlook.com[52.101.42.11] said: 550 5.7.1 Service unavailable, Client host [xxx.xxx.xx.xxx] blocked using Spamhaus. To request removal from this list see https://www.spamhaus.org/query/ip/xxx.xxx.xx.xxx (ASXXXX). [Name=Protocol Filter Agent][AGT=PFA][MxId=11BA4226D003BECF] [CO1PEPF000044F6.namprd21.prod.outlook.com 2024-12-23T05:22:17.053Z 08DD21136A4618D7] (in reply to MAIL FROM command))

I decided a crude fix was good enough: make logwatch's postfix service detect these lines.

Near the top of the "Main processing loop" in /etc/logwatch/scripts/services/postfix, I push any line containing "blocked using" into unmatched, the mechanism normally used to surface rare log lines that have no template.
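Independent of logwatch, the same lines can also be counted straight out of the log; a minimal stand-in using a fabricated two-line maillog excerpt:

```shell
# Count bounces mentioning a DNSBL block in a (fabricated) maillog excerpt;
# against the real log this would be: grep -c 'blocked using' /var/log/maillog
printf '%s\n' \
  'Dec 23 14:22:12 host postfix/smtp[1]: AAAA: to=<a@example.com>, status=bounced (550 5.7.1 ... blocked using Spamhaus. ...)' \
  'Dec 23 14:23:00 host postfix/smtp[1]: BBBB: to=<b@example.com>, status=sent (250 2.0.0 OK)' |
  grep -c 'blocked using'
# prints 1
```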

# diff -u postfix.20241202.sourceforge postfix
--- postfix.20241202.sourceforge        2024-12-26 16:39:56.259000000 +0900
+++ postfix     2024-12-26 17:23:14.236000000 +0900
@@ -2829,6 +2829,13 @@
    # ignore tlsproxy for now
    if ($service_name eq 'tlsproxy')        { next; }                             # postfix/tlsproxy

+   ### 2024/12/26 start
+   if ($p1 =~/blocked using/){
+       inc_unmatched('final')   if ! in_ignore_list ($p1);
+       #return;
+   }
+   ### 2024/12/26 end
+
    my ($helo, $relay, $from, $origto, $to, $domain, $status,
        $type, $reason, $reason2, $filter, $site, $cmd, $qid,
        $rej_type, $reject_name, $host, $hostip, $dsn, $reply, $fmthost, $bytes);
#

With that, these entries are now detected.

To test the change, I ran "logwatch --range "2024/12/23" --service postfix" against a date that had matching log entries.

A sketchy NVMe drive died almost immediately under Linux software mirroring

An NVMe SSD bought on Amazon for 4,280 yen died in under two weeks.

Incidentally, this is the one I bought.

As described in another article, this mini PC mirrors an NVMe SSD against a SATA SSD, so there is no impact yet, but isn't this awfully fast?

First, check the array state:

cat /proc/mdstat

# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc3[1] nvme0n1p3[0](F)
      497876992 blocks super 1.2 [2/1] [_U]
      bitmap: 2/4 pages [8KB], 65536KB chunk

unused devices: <none>
#

Display details with mdadm --detail:

# mdadm --query /dev/md127
/dev/md127: 474.81GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Mon Nov 25 22:23:15 2024
        Raid Level : raid1
        Array Size : 497876992 (474.81 GiB 509.83 GB)
     Used Dev Size : 497876992 (474.81 GiB 509.83 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Dec  6 11:27:27 2024
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : bitmap

              Name : niselog.dyndns.ws:pv00  (local to host niselog.dyndns.ws)
              UUID : 44d77e34:c9af4167:1c6031a7:b047cdb0
            Events : 56525

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       35        1      active sync   /dev/sdc3

       0     259        3        -      faulty   /dev/nvme0n1p3
#

Get the state of each member device with mdadm --examine:

# mdadm --examine /dev/sdc3
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 44d77e34:c9af4167:1c6031a7:b047cdb0
           Name : niselog.dyndns.ws:pv00  (local to host niselog.dyndns.ws)
  Creation Time : Mon Nov 25 22:23:15 2024
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 995753984 sectors (474.81 GiB 509.83 GB)
     Array Size : 497876992 KiB (474.81 GiB 509.83 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 622cd160:74e95f66:6266ee0d:85ba3287

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Dec  6 11:29:02 2024
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 247ea644 - correct
         Events : 56583


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
# mdadm --examine /dev/nvme0n1p3
mdadm: No md superblock detected on /dev/nvme0n1p3.
#

The NVMe-side device is no longer visible.

Relevant dmesg output:

[251879.751800] systemd-rc-local-generator[882428]: /etc/rc.d/rc.local is not marked executable, skipping.
[345055.452619] nvme nvme0: I/O tag 322 (0142) opcode 0x0 (Flush) QID 4 timeout, aborting req_op:FLUSH(2) size:0
[345057.437597] nvme nvme0: I/O tag 210 (a0d2) opcode 0x2 (Read) QID 2 timeout, aborting req_op:READ(0) size:32768
[345057.437643] nvme nvme0: I/O tag 706 (c2c2) opcode 0x2 (Read) QID 3 timeout, aborting req_op:READ(0) size:32768
[345085.664306] nvme nvme0: I/O tag 322 (0142) opcode 0x0 (Flush) QID 4 timeout, reset controller
[345167.062438] INFO: task md127_raid1:603 blocked for more than 122 seconds.
[345167.062449]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.062452] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.062454] task:md127_raid1     state:D stack:0     pid:603   tgid:603   ppid:2      flags:0x00004000
[345167.062460] Call Trace:
[345167.062462]  <TASK>
[345167.062466]  __schedule+0x229/0x550
[345167.062473]  ? __schedule+0x231/0x550
[345167.062477]  schedule+0x2e/0xd0
[345167.062480]  md_super_wait+0x72/0xa0
[345167.062484]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.062489]  write_sb_page+0x8a/0x110
[345167.062492]  md_update_sb.part.0+0x2eb/0x800
[345167.062494]  md_check_recovery+0x232/0x390
[345167.062500]  raid1d+0x40/0x580 [raid1]
[345167.062508]  ? __timer_delete_sync+0x2c/0x40
[345167.062511]  ? schedule_timeout+0x92/0x160
[345167.062514]  ? prepare_to_wait_event+0x5d/0x180
[345167.062517]  md_thread+0xa8/0x160
[345167.062520]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.062523]  ? __pfx_md_thread+0x10/0x10
[345167.062525]  kthread+0xdd/0x100
[345167.062529]  ? __pfx_kthread+0x10/0x10
[345167.062532]  ret_from_fork+0x29/0x50
[345167.062536]  </TASK>
[345167.062539] INFO: task xfsaild/dm-0:715 blocked for more than 122 seconds.
[345167.062542]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.062544] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.062546] task:xfsaild/dm-0    state:D stack:0     pid:715   tgid:715   ppid:2      flags:0x00004000
[345167.062550] Call Trace:
[345167.062552]  <TASK>
[345167.062553]  __schedule+0x229/0x550
[345167.062556]  ? bio_associate_blkg_from_css+0xf5/0x320
[345167.062561]  schedule+0x2e/0xd0
[345167.062564]  md_write_start.part.0+0x195/0x250
[345167.062566]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.062570]  raid1_make_request+0x5b/0xbb [raid1]
[345167.062575]  md_handle_request+0x150/0x270
[345167.062578]  ? __bio_split_to_limits+0x8e/0x280
[345167.062582]  __submit_bio+0x94/0x130
[345167.062584]  __submit_bio_noacct+0x7e/0x1e0
[345167.062587]  xfs_buf_ioapply_map+0x1cb/0x270 [xfs]
[345167.062725]  _xfs_buf_ioapply+0xcf/0x1b0 [xfs]
[345167.062821]  ? __pfx_default_wake_function+0x10/0x10
[345167.062824]  __xfs_buf_submit+0x6e/0x1e0 [xfs]
[345167.062916]  xfs_buf_delwri_submit_buffers+0xe3/0x230 [xfs]
[345167.063005]  xfsaild_push+0x1aa/0x740 [xfs]
[345167.063122]  xfsaild+0xb2/0x150 [xfs]
[345167.063230]  ? __pfx_xfsaild+0x10/0x10 [xfs]
[345167.063333]  kthread+0xdd/0x100
[345167.063336]  ? __pfx_kthread+0x10/0x10
[345167.063339]  ret_from_fork+0x29/0x50
[345167.063342]  </TASK>
[345167.063353] INFO: task xfsaild/dm-12:1051 blocked for more than 122 seconds.
[345167.063356]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.063358] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.063360] task:xfsaild/dm-12   state:D stack:0     pid:1051  tgid:1051  ppid:2      flags:0x00004000
[345167.063364] Call Trace:
[345167.063365]  <TASK>
[345167.063366]  __schedule+0x229/0x550
[345167.063369]  ? bio_associate_blkg_from_css+0xf5/0x320
[345167.063373]  schedule+0x2e/0xd0
[345167.063376]  md_write_start.part.0+0x195/0x250
[345167.063378]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.063382]  raid1_make_request+0x5b/0xbb [raid1]
[345167.063387]  md_handle_request+0x150/0x270
[345167.063390]  ? __bio_split_to_limits+0x8e/0x280
[345167.063393]  __submit_bio+0x94/0x130
[345167.063395]  __submit_bio_noacct+0x7e/0x1e0
[345167.063397]  xfs_buf_ioapply_map+0x1cb/0x270 [xfs]
[345167.063503]  _xfs_buf_ioapply+0xcf/0x1b0 [xfs]
[345167.063598]  ? __pfx_default_wake_function+0x10/0x10
[345167.063602]  __xfs_buf_submit+0x6e/0x1e0 [xfs]
[345167.063693]  xfs_buf_delwri_submit_buffers+0xe3/0x230 [xfs]
[345167.063783]  xfsaild_push+0x1aa/0x740 [xfs]
[345167.063893]  xfsaild+0xb2/0x150 [xfs]
[345167.063996]  ? __pfx_xfsaild+0x10/0x10 [xfs]
[345167.064096]  kthread+0xdd/0x100
[345167.064099]  ? __pfx_kthread+0x10/0x10
[345167.064102]  ret_from_fork+0x29/0x50
[345167.064105]  </TASK>
[345167.064149] INFO: task UV_WORKER[13]:882664 blocked for more than 122 seconds.
[345167.064152]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.064154] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.064156] task:UV_WORKER[13]   state:D stack:0     pid:882664 tgid:882471 ppid:1      flags:0x00000002
[345167.064160] Call Trace:
[345167.064161]  <TASK>
[345167.064163]  __schedule+0x229/0x550
[345167.064166]  ? bio_associate_blkg_from_css+0xf5/0x320
[345167.064170]  schedule+0x2e/0xd0
[345167.064172]  md_write_start.part.0+0x195/0x250
[345167.064175]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.064178]  raid1_make_request+0x5b/0xbb [raid1]
[345167.064184]  md_handle_request+0x150/0x270
[345167.064187]  ? __bio_split_to_limits+0x8e/0x280
[345167.064190]  __submit_bio+0x94/0x130
[345167.064192]  __submit_bio_noacct+0x7e/0x1e0
[345167.064194]  iomap_submit_ioend+0x4e/0x80
[345167.064199]  xfs_vm_writepages+0x7a/0xb0 [xfs]
[345167.064305]  do_writepages+0xcc/0x1a0
[345167.064308]  filemap_fdatawrite_wbc+0x66/0x90
[345167.064312]  __filemap_fdatawrite_range+0x54/0x80
[345167.064317]  file_write_and_wait_range+0x48/0xb0
[345167.064319]  xfs_file_fsync+0x5a/0x240 [xfs]
[345167.064425]  __x64_sys_fsync+0x33/0x60
[345167.064430]  do_syscall_64+0x5c/0xf0
[345167.064433]  ? fcntl_setlk+0x1cb/0x3b0
[345167.064437]  ? do_fcntl+0x458/0x670
[345167.064440]  ? syscall_exit_work+0x103/0x130
[345167.064443]  ? syscall_exit_to_user_mode+0x19/0x40
[345167.064446]  ? do_syscall_64+0x6b/0xf0
[345167.064448]  ? __count_memcg_events+0x4f/0xb0
[345167.064451]  ? mm_account_fault+0x6c/0x100
[345167.064455]  ? handle_mm_fault+0x116/0x270
[345167.064458]  ? do_user_addr_fault+0x1b4/0x6a0
[345167.064461]  ? exc_page_fault+0x62/0x150
[345167.064465]  entry_SYSCALL_64_after_hwframe+0x78/0x80
[345167.064468] RIP: 0033:0x7f36adb0459b
[345167.064496] RSP: 002b:00007f36a0ce4c20 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[345167.064500] RAX: ffffffffffffffda RBX: 0000563b7f63af38 RCX: 00007f36adb0459b
[345167.064502] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 000000000000000d
[345167.064504] RBP: 0000000000000008 R08: 0000000000000000 R09: 0000000000000000
[345167.064506] R10: 0000000000000000 R11: 0000000000000293 R12: 0000563b7f63aea8
[345167.064508] R13: 0000563b82320850 R14: 0000000000000000 R15: 00007f36a0ce4ce0
[345167.064512]  </TASK>
[345167.064562] INFO: task kworker/u16:2:1205595 blocked for more than 122 seconds.
[345167.064565]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.064567] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.064569] task:kworker/u16:2   state:D stack:0     pid:1205595 tgid:1205595 ppid:2      flags:0x00004000
[345167.064574] Workqueue: writeback wb_workfn (flush-253:6)
[345167.064578] Call Trace:
[345167.064579]  <TASK>
[345167.064581]  __schedule+0x229/0x550
[345167.064584]  ? bio_associate_blkg_from_css+0xf5/0x320
[345167.064587]  schedule+0x2e/0xd0
[345167.064590]  md_write_start.part.0+0x195/0x250
[345167.064593]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.064596]  raid1_make_request+0x5b/0xbb [raid1]
[345167.064602]  md_handle_request+0x150/0x270
[345167.064605]  ? __bio_split_to_limits+0x8e/0x280
[345167.064608]  __submit_bio+0x94/0x130
[345167.064610]  __submit_bio_noacct+0x7e/0x1e0
[345167.064612]  iomap_submit_ioend+0x4e/0x80
[345167.064616]  iomap_writepage_map+0x30a/0x4c0
[345167.064618]  write_cache_pages+0x13c/0x3a0
[345167.064620]  ? __pfx_iomap_do_writepage+0x10/0x10
[345167.064623]  ? scsi_dispatch_cmd+0x8d/0x240
[345167.064626]  ? scsi_queue_rq+0x1ad/0x610
[345167.064631]  ? update_sg_lb_stats+0xb6/0x460
[345167.064635]  iomap_writepages+0x1c/0x40
[345167.064638]  xfs_vm_writepages+0x7a/0xb0 [xfs]
[345167.064739]  do_writepages+0xcc/0x1a0
[345167.064742]  ? __percpu_counter_sum_mask+0x6f/0x80
[345167.064747]  __writeback_single_inode+0x41/0x270
[345167.064750]  writeback_sb_inodes+0x209/0x4a0
[345167.064753]  __writeback_inodes_wb+0x4c/0xe0
[345167.064755]  wb_writeback+0x1d7/0x2d0
[345167.064758]  wb_do_writeback+0x1d1/0x2b0
[345167.064760]  wb_workfn+0x5e/0x290
[345167.064763]  ? try_to_wake_up+0x1ca/0x530
[345167.064766]  process_one_work+0x194/0x380
[345167.064769]  worker_thread+0x2fe/0x410
[345167.064772]  ? __pfx_worker_thread+0x10/0x10
[345167.064775]  kthread+0xdd/0x100
[345167.064778]  ? __pfx_kthread+0x10/0x10
[345167.064781]  ret_from_fork+0x29/0x50
[345167.064784]  </TASK>
[345167.064786] INFO: task kworker/u16:0:1209123 blocked for more than 122 seconds.
[345167.064788]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.064790] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.064792] task:kworker/u16:0   state:D stack:0     pid:1209123 tgid:1209123 ppid:2      flags:0x00004000
[345167.064796] Workqueue: writeback wb_workfn (flush-253:6)
[345167.064799] Call Trace:
[345167.064801]  <TASK>
[345167.064802]  __schedule+0x229/0x550
[345167.064805]  ? bio_associate_blkg_from_css+0xf5/0x320
[345167.064808]  schedule+0x2e/0xd0
[345167.064811]  md_write_start.part.0+0x195/0x250
[345167.064813]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.064817]  raid1_make_request+0x5b/0xbb [raid1]
[345167.064822]  md_handle_request+0x150/0x270
[345167.064825]  ? __bio_split_to_limits+0x8e/0x280
[345167.064828]  __submit_bio+0x94/0x130
[345167.064830]  __submit_bio_noacct+0x7e/0x1e0
[345167.064832]  iomap_submit_ioend+0x4e/0x80
[345167.064835]  iomap_writepage_map+0x30a/0x4c0
[345167.064838]  write_cache_pages+0x13c/0x3a0
[345167.064840]  ? __pfx_iomap_do_writepage+0x10/0x10
[345167.064843]  ? scsi_dispatch_cmd+0x8d/0x240
[345167.064845]  ? scsi_queue_rq+0x1ad/0x610
[345167.064848]  ? update_sg_lb_stats+0xb6/0x460
[345167.064851]  iomap_writepages+0x1c/0x40
[345167.064854]  xfs_vm_writepages+0x7a/0xb0 [xfs]
[345167.064949]  do_writepages+0xcc/0x1a0
[345167.064952]  ? __percpu_counter_sum_mask+0x6f/0x80
[345167.064955]  __writeback_single_inode+0x41/0x270
[345167.064958]  writeback_sb_inodes+0x209/0x4a0
[345167.064961]  __writeback_inodes_wb+0x4c/0xe0
[345167.064963]  wb_writeback+0x1d7/0x2d0
[345167.064965]  wb_do_writeback+0x1d1/0x2b0
[345167.064968]  wb_workfn+0x5e/0x290
[345167.064970]  ? __switch_to_asm+0x3a/0x80
[345167.064972]  ? finish_task_switch.isra.0+0x8c/0x2a0
[345167.064976]  ? __schedule+0x231/0x550
[345167.064979]  process_one_work+0x194/0x380
[345167.064982]  worker_thread+0x2fe/0x410
[345167.064985]  ? __pfx_worker_thread+0x10/0x10
[345167.064987]  kthread+0xdd/0x100
[345167.064990]  ? __pfx_kthread+0x10/0x10
[345167.064994]  ret_from_fork+0x29/0x50
[345167.064996]  </TASK>
[345167.064999] INFO: task kworker/u16:4:1216782 blocked for more than 122 seconds.
[345167.065001]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.065004] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.065005] task:kworker/u16:4   state:D stack:0     pid:1216782 tgid:1216782 ppid:2      flags:0x00004000
[345167.065009] Workqueue: writeback wb_workfn (flush-253:6)
[345167.065012] Call Trace:
[345167.065014]  <TASK>
[345167.065015]  __schedule+0x229/0x550
[345167.065018]  ? bio_associate_blkg_from_css+0xf5/0x320
[345167.065021]  schedule+0x2e/0xd0
[345167.065024]  md_write_start.part.0+0x195/0x250
[345167.065026]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.065030]  raid1_make_request+0x5b/0xbb [raid1]
[345167.065035]  md_handle_request+0x150/0x270
[345167.065038]  ? __bio_split_to_limits+0x8e/0x280
[345167.065041]  __submit_bio+0x94/0x130
[345167.065043]  __submit_bio_noacct+0x7e/0x1e0
[345167.065045]  iomap_submit_ioend+0x4e/0x80
[345167.065048]  xfs_vm_writepages+0x7a/0xb0 [xfs]
[345167.065140]  do_writepages+0xcc/0x1a0
[345167.065143]  ? __wb_calc_thresh+0x3a/0x120
[345167.065145]  __writeback_single_inode+0x41/0x270
[345167.065147]  writeback_sb_inodes+0x209/0x4a0
[345167.065150]  __writeback_inodes_wb+0x4c/0xe0
[345167.065153]  wb_writeback+0x1d7/0x2d0
[345167.065155]  wb_do_writeback+0x22a/0x2b0
[345167.065157]  wb_workfn+0x5e/0x290
[345167.065160]  ? try_to_wake_up+0x1ca/0x530
[345167.065163]  process_one_work+0x194/0x380
[345167.065166]  worker_thread+0x2fe/0x410
[345167.065168]  ? __pfx_worker_thread+0x10/0x10
[345167.065171]  kthread+0xdd/0x100
[345167.065174]  ? __pfx_kthread+0x10/0x10
[345167.065177]  ret_from_fork+0x29/0x50
[345167.065180]  </TASK>
[345167.065181] INFO: task kworker/1:0:1217700 blocked for more than 122 seconds.
[345167.065184]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.065186] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.065188] task:kworker/1:0     state:D stack:0     pid:1217700 tgid:1217700 ppid:2      flags:0x00004000
[345167.065192] Workqueue: xfs-sync/dm-4 xfs_log_worker [xfs]
[345167.065302] Call Trace:
[345167.065304]  <TASK>
[345167.065305]  __schedule+0x229/0x550
[345167.065309]  ? __send_empty_flush+0xea/0x120 [dm_mod]
[345167.065324]  schedule+0x2e/0xd0
[345167.065327]  md_flush_request+0x9b/0x1e0
[345167.065331]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.065335]  raid1_make_request+0xa8/0xbb [raid1]
[345167.065340]  md_handle_request+0x150/0x270
[345167.065343]  ? __bio_split_to_limits+0x8e/0x280
[345167.065346]  __submit_bio+0x94/0x130
[345167.065348]  __submit_bio_noacct+0x7e/0x1e0
[345167.065350]  xlog_state_release_iclog+0xe6/0x1c0 [xfs]
[345167.065464]  xfs_log_force+0x172/0x230 [xfs]
[345167.065566]  xfs_log_worker+0x3b/0xd0 [xfs]
[345167.065664]  process_one_work+0x194/0x380
[345167.065667]  worker_thread+0x2fe/0x410
[345167.065669]  ? __pfx_worker_thread+0x10/0x10
[345167.065672]  kthread+0xdd/0x100
[345167.065675]  ? __pfx_kthread+0x10/0x10
[345167.065678]  ret_from_fork+0x29/0x50
[345167.065681]  </TASK>
[345167.065683] INFO: task kworker/0:2:1219498 blocked for more than 122 seconds.
[345167.065685]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.065687] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.065689] task:kworker/0:2     state:D stack:0     pid:1219498 tgid:1219498 ppid:2      flags:0x00004000
[345167.065693] Workqueue: xfs-sync/dm-6 xfs_log_worker [xfs]
[345167.065790] Call Trace:
[345167.065791]  <TASK>
[345167.065793]  __schedule+0x229/0x550
[345167.065796]  ? __send_empty_flush+0xea/0x120 [dm_mod]
[345167.065810]  schedule+0x2e/0xd0
[345167.065812]  md_flush_request+0x9b/0x1e0
[345167.065816]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.065819]  raid1_make_request+0xa8/0xbb [raid1]
[345167.065825]  md_handle_request+0x150/0x270
[345167.065827]  ? __bio_split_to_limits+0x8e/0x280
[345167.065830]  __submit_bio+0x94/0x130
[345167.065832]  __submit_bio_noacct+0x7e/0x1e0
[345167.065835]  xlog_state_release_iclog+0xe6/0x1c0 [xfs]
[345167.065931]  xfs_log_force+0x172/0x230 [xfs]
[345167.066027]  xfs_log_worker+0x3b/0xd0 [xfs]
[345167.066122]  process_one_work+0x194/0x380
[345167.066125]  worker_thread+0x2fe/0x410
[345167.066128]  ? __pfx_worker_thread+0x10/0x10
[345167.066131]  kthread+0xdd/0x100
[345167.066134]  ? __pfx_kthread+0x10/0x10
[345167.066137]  ret_from_fork+0x29/0x50
[345167.066140]  </TASK>
[345167.066141] INFO: task kworker/u16:1:1220633 blocked for more than 122 seconds.
[345167.066144]       Tainted: G               X  -------  ---  5.14.0-503.14.1.el9_5.x86_64 #1
[345167.066146] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[345167.066148] task:kworker/u16:1   state:D stack:0     pid:1220633 tgid:1220633 ppid:2      flags:0x00004000
[345167.066152] Workqueue: writeback wb_workfn (flush-253:6)
[345167.066155] Call Trace:
[345167.066157]  <TASK>
[345167.066158]  __schedule+0x229/0x550
[345167.066162]  schedule+0x2e/0xd0
[345167.066165]  md_write_start.part.0+0x195/0x250
[345167.066167]  ? __pfx_autoremove_wake_function+0x10/0x10
[345167.066171]  raid1_make_request+0x5b/0xbb [raid1]
[345167.066177]  md_handle_request+0x150/0x270
[345167.066179]  ? __bio_split_to_limits+0x8e/0x280
[345167.066182]  __submit_bio+0x94/0x130
[345167.066185]  __submit_bio_noacct+0x7e/0x1e0
[345167.066187]  iomap_submit_ioend+0x4e/0x80
[345167.066191]  xfs_vm_writepages+0x7a/0xb0 [xfs]
[345167.066299]  do_writepages+0xcc/0x1a0
[345167.066301]  ? find_busiest_group+0x43/0x240
[345167.066304]  __writeback_single_inode+0x41/0x270
[345167.066306]  writeback_sb_inodes+0x209/0x4a0
[345167.066309]  __writeback_inodes_wb+0x4c/0xe0
[345167.066312]  wb_writeback+0x1d7/0x2d0
[345167.066314]  wb_do_writeback+0x1d1/0x2b0
[345167.066317]  wb_workfn+0x5e/0x290
[345167.066319]  ? try_to_wake_up+0x1ca/0x530
[345167.066322]  process_one_work+0x194/0x380
[345167.066325]  worker_thread+0x2fe/0x410
[345167.066328]  ? __pfx_worker_thread+0x10/0x10
[345167.066330]  kthread+0xdd/0x100
[345167.066333]  ? __pfx_kthread+0x10/0x10
[345167.066336]  ret_from_fork+0x29/0x50
[345167.066339]  </TASK>
[345274.582484] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
[345274.588547] nvme nvme0: Abort status: 0x371
[345274.588554] nvme nvme0: Abort status: 0x371
[345274.588556] nvme nvme0: Abort status: 0x371
[345402.595930] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
[345402.596168] nvme nvme0: Disabling device after reset failure: -19
[345402.603001] I/O error, dev nvme0n1, sector 31757592 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603001] I/O error, dev nvme0n1, sector 31745656 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603005] I/O error, dev nvme0n1, sector 4196368 op 0x1:(WRITE) flags 0x29800 phys_seg 1 prio class 2
[345402.603011] md: super_written gets error=-5
[345402.603011] md/raid1:md127: nvme0n1p3: rescheduling sector 27297048
[345402.603017] I/O error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 2
[345402.603018] md/raid1:md127: nvme0n1p3: rescheduling sector 27285112
[345402.603021] md/raid1:md127: Disk failure on nvme0n1p3, disabling device.
                md/raid1:md127: Operation continuing on 1 devices.
[345402.603021] I/O error, dev nvme0n1, sector 31835944 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603024] md/raid1:md127: nvme0n1p3: rescheduling sector 27375400
[345402.603025] I/O error, dev nvme0n1, sector 31772336 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603027] md/raid1:md127: nvme0n1p3: rescheduling sector 27311792
[345402.603037] I/O error, dev nvme0n1, sector 31790576 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603040] md/raid1:md127: nvme0n1p3: rescheduling sector 27330032
[345402.603066] I/O error, dev nvme0n1, sector 31750480 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603071] md/raid1:md127: nvme0n1p3: rescheduling sector 27289936
[345402.603073] I/O error, dev nvme0n1, sector 31831344 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603076] md/raid1:md127: nvme0n1p3: rescheduling sector 27370800
[345402.603100] nvme nvme0: Identify namespace failed (-5)
[345402.606121] md/raid1:md127: redirecting sector 27297048 to other mirror: sdc3
[345402.616231] md/raid1:md127: redirecting sector 27285112 to other mirror: sdc3
[345402.618772] md/raid1:md127: redirecting sector 27375400 to other mirror: sdc3
[345402.620045] md/raid1:md127: redirecting sector 27311792 to other mirror: sdc3
[345402.621385] md/raid1:md127: redirecting sector 27330032 to other mirror: sdc3
[345402.623214] md/raid1:md127: redirecting sector 27289936 to other mirror: sdc3
[345402.625367] md/raid1:md127: redirecting sector 27370800 to other mirror: sdc3
[345415.911236] nvme nvme0: Identify namespace failed (-5)
[346065.904105] nvme nvme0: Identify namespace failed (-5)
[346705.897901] nvme nvme0: Identify namespace failed (-5)
[347330.890137] nvme nvme0: Identify namespace failed (-5)
[... the same "nvme nvme0: Identify namespace failed (-5)" message keeps repeating every 10-20 minutes from here on; similar lines omitted ...]
[380894.663810] systemd-rc-local-generator[1347729]: /etc/rc.d/rc.local is not marked executable, skipping.
[380902.636127] nvme nvme0: Identify namespace failed (-5)
[469038.217996] systemd-rc-local-generator[1658780]: /etc/rc.d/rc.local is not marked executable, skipping.
[469041.391405] nvme nvme0: Identify namespace failed (-5)

Hmm...

Now the question is what to use as a replacement SSD...

I'll probably pick one while checking each candidate's TBW (Total Bytes Written) rating.

CRUCIAL P1 (1900MB/950MB)
CRUCIAL P3 PLUS SSD 512GB 500TBW (5000MB/4200MB)
CRUCIAL T500 SSD 500GB 300TBW
Crucial P310 500GB 110TBW
Crucial P3 500GB 110TBW
Lexar LNM620X512G-RNNNG 512GB 250TBW
fanxiang S500 Pro 500GB 320TBW (3500MB/2700MB)
fanxiang S501Q 512GB 160TBW (3600MB/2700MB) ← the one that just failed
fanxiang S660 500GB 350TBW (4600MB/2650MB)
fanxiang S880E 500GB 300TBW (6300MB/3100MB)
Fikwot FN960 512GB 350TBW (7400MB/2750MB)
Fikwot FX991 500GB 300TBW (6300MB/3100MB)
Samsung 980 500GB 300TBW
Ediloca EN600 PRO 500GB 320TBW (3200MB/2800MB)
EDILOCA EN605 500GB 300TBW (2150MB/1600MB)
Ediloca EN760 500GB 350TBW (4800MB/2650MB)
Ediloca EN855 500GB 350TBW (7400MB/2750MB)
WD Blue SN580 500GB 300TBW
ADATA LEGEND 800シリーズ 500GB 300TBW
Acclamator N20 500GB 250TBW (2500MB/2000MB)
Acclamator N30 500GB 300TBW (3500MB/3000MB)
ORICO J10 512GB 150TBW (2800MB/1300MB)
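As a quick way to shortlist, the candidates above can be sorted by TBW with standard tools. A minimal sketch, with part of the list re-entered as `model|TBW` pairs (values copied from the table above):

```shell
# Sort replacement candidates by rated TBW (field 2), highest endurance first.
sort -t'|' -k2,2 -rn <<'EOF' | head -5
CRUCIAL P3 PLUS SSD 512GB|500
Ediloca EN760 500GB|350
Ediloca EN855 500GB|350
fanxiang S660 500GB|350
Fikwot FN960 512GB|350
fanxiang S500 Pro 500GB|320
Ediloca EN600 PRO 500GB|320
Samsung 980 500GB|300
WD Blue SN580 500GB|300
CRUCIAL T500 SSD 500GB|300
EOF
```

Unsurprisingly the CRUCIAL P3 PLUS comes out on top at 500TBW, with a cluster of 350TBW models behind it.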



I remembered there is an "nvme" command for inspecting NVMe drives, so I tried it while following the Arch Linux wiki page "Solid state drive/NVMe".

At this point, "nvme list" shows no devices:

[root@niselog ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
[root@niselog ~]#

I ran "nvme error-log" to see whether the error log was readable, but since the device is not visible it fails:

[root@niselog ~]# nvme error-log  /dev/nvme0n1
identify controller: Input/output error
[root@niselog ~]#

A reset fails the same way:

[root@niselog ~]# nvme reset /dev/nvme0n1
Reset: Block device required
[root@niselog ~]#

So, how about a rescan? When I ran "nvme discover", the device was re-recognized:

[root@niselog ~]# nvme discover
[root@niselog ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            FXS501Q244110889     Fanxiang S501Q 512GB                     0x1        512.11  GB /   0.00   B    512   B +  0 B   SN22751
[root@niselog ~]#
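A side note on why this "worked": as far as I know, "nvme discover" is actually an NVMe-over-Fabrics discovery command (without arguments it reads /etc/nvme/discovery.conf), so the reappearance here was more likely a side effect of the kernel re-enumerating the device than of the command itself. For a local PCIe device, a PCI bus rescan is the more direct way to force re-enumeration. A hedged sketch (the sysfs path only exists on real hardware, so it is guarded):

```shell
# Force the kernel to re-enumerate PCI devices; a detached NVMe controller
# may reappear if it is still electrically alive. Requires root.
rescan=/sys/bus/pci/rescan
if [ -w "$rescan" ]; then
    echo 1 > "$rescan"
    nvme list
else
    echo "no writable $rescan here; skipping"
fi
```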

Wait, what?

[root@niselog ~]# nvme error-log  /dev/nvme0n1
identify controller: Input/output error
[root@niselog ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
[root@niselog ~]#

It had immediately gone offline again.

Nothing new in dmesg:

[518981.064372] nvme nvme0: Identify namespace failed (-5)
[519070.106359] nvme nvme0: Identify namespace failed (-5)
[519106.607320] nvme nvme0: Identify namespace failed (-5)
[519392.028895] nvme nvme0: Identify namespace failed (-5)
[519430.063154] nvme nvme0: Identify namespace failed (-5)
[519439.241555] nvme nvme0: Identify namespace failed (-5)

Looks like it's no good?

Then I noticed the logs resemble those in the "Controller failure due to broken APST support" section near the bottom of that Arch Linux page:

[345055.452619] nvme nvme0: I/O tag 322 (0142) opcode 0x0 (Flush) QID 4 timeout, aborting req_op:FLUSH(2) size:0
[345057.437597] nvme nvme0: I/O tag 210 (a0d2) opcode 0x2 (Read) QID 2 timeout, aborting req_op:READ(0) size:32768
[345057.437643] nvme nvme0: I/O tag 706 (c2c2) opcode 0x2 (Read) QID 3 timeout, aborting req_op:READ(0) size:32768
[345085.664306] nvme nvme0: I/O tag 322 (0142) opcode 0x0 (Flush) QID 4 timeout, reset controller
[345274.582484] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
[345274.588547] nvme nvme0: Abort status: 0x371
[345274.588554] nvme nvme0: Abort status: 0x371
[345274.588556] nvme nvme0: Abort status: 0x371
[345402.595930] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
[345402.596168] nvme nvme0: Disabling device after reset failure: -19
[345402.603001] I/O error, dev nvme0n1, sector 31757592 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603001] I/O error, dev nvme0n1, sector 31745656 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
[345402.603005] I/O error, dev nvme0n1, sector 4196368 op 0x1:(WRITE) flags 0x29800 phys_seg 1 prio class 2
[345402.603011] md/raid1:md127: nvme0n1p3: rescheduling sector 27297048
[345402.603017] I/O error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 2
[345402.603018] md/raid1:md127: nvme0n1p3: rescheduling sector 27285112
[345402.603021] md/raid1:md127: Disk failure on nvme0n1p3, disabling device.

However, the current kernel is 5.14.0-503.14.1.el9_5.x86_64, so this issue should already be fixed there.

Still, let's check the current parameter values:

[root@niselog sys]# find /sys -print|grep nvme|grep latency
/sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/power/pm_qos_latency_tolerance_us
/sys/module/nvme_core/parameters/apst_primary_latency_tol_us
/sys/module/nvme_core/parameters/apst_secondary_latency_tol_us
/sys/module/nvme_core/parameters/default_ps_max_latency_us
[root@niselog sys]# cat /sys/module/nvme_core/parameters/apst_primary_latency_tol_us
15000
[root@niselog sys]# cat /sys/module/nvme_core/parameters/apst_secondary_latency_tol_us
100000
[root@niselog sys]# cat /sys/module/nvme_core/parameters/default_ps_max_latency_us
100000
[root@niselog sys]# cat /sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/power/pm_qos_latency_tolerance_us
100000
[root@niselog sys]#

For now, try setting the value to 0:

[root@niselog sys]# echo 0 > /sys/module/nvme_core/parameters/default_ps_max_latency_us
[root@niselog sys]# cat /sys/module/nvme_core/parameters/default_ps_max_latency_us
0
[root@niselog sys]#
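One caveat, as I understand it: this module parameter is only consulted when a controller is probed, so changing it at runtime should not reconfigure APST on an already-attached device. The per-device knob found by the `find` above is supposed to take effect when written. A sketch (the PCI path is the one from that output; adjust for your hardware, hence the guard):

```shell
# Per-device latency tolerance; writing 0 forbids power states with nonzero
# exit latency, which effectively disables APST for this controller.
dev=/sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/power/pm_qos_latency_tolerance_us
if [ -w "$dev" ]; then
    echo 0 > "$dev"
    cat "$dev"
else
    echo "device node not present; nothing to do"
fi
```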

Even so, the device still disappears right away:

[root@niselog sys]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
[root@niselog sys]# nvme discover
[root@niselog sys]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            FXS501Q244110889     Fanxiang S501Q 512GB                     0x1        512.11  GB /   0.00   B    512   B +  0 B   SN22751
[root@niselog sys]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
[root@niselog sys]#

Addendum 2024/12/26

After powering the machine off and starting it again, the problematic NVMe drive was recognized once more.

Since "controller failure due to broken APST support" is suspected, I added "nvme_core.default_ps_max_latency_us=0" to the "GRUB_CMDLINE_LINUX=" line in /etc/default/grub.
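For reference, on an EL9 system the edited /etc/default/grub still has to be compiled into the boot configuration before the reboot. A sketch of the usual steps (run as root; grubby is the alternative that edits all installed kernel entries directly):

```shell
# Option 1: after editing /etc/default/grub
#   GRUB_CMDLINE_LINUX="... nvme_core.default_ps_max_latency_us=0"
# regenerate the grub config:
grub2-mkconfig -o /boot/grub2/grub.cfg

# Option 2: let grubby append the argument to every installed kernel:
grubby --update-kernel=ALL --args="nvme_core.default_ps_max_latency_us=0"
```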

After rebooting, I confirmed that /sys/module/nvme_core/parameters/default_ps_max_latency_us reads 0.

The NVMe drive is also recognized normally:

# cat /sys/module/nvme_core/parameters/default_ps_max_latency_us
0
# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            FXS501Q244110889     Fanxiang S501Q 512GB                     0x1        512.11  GB / 512.11  GB    512   B +  0 B   SN22751
#

Let's take a smart-log:

# nvme smart-log /dev/nvme0n1
Smart Log for NVME device:nvme0n1 namespace-id:ffffffff
critical_warning                        : 0
temperature                             : 42 °C (315 K)
available_spare                         : 85%
available_spare_threshold               : 1%
percentage_used                         : 0%
endurance group critical warning summary: 0
Data Units Read                         : 2671220 (1.37 TB)
Data Units Written                      : 594263 (304.26 GB)
host_read_commands                      : 8060270
host_write_commands                     : 5860715
controller_busy_time                    : 61
power_cycles                            : 24
power_on_hours                          : 305
unsafe_shutdowns                        : 8
media_errors                            : 0
num_err_log_entries                     : 0
Warning Temperature Time                : 0
Critical Composite Temperature Time     : 0
Temperature Sensor 1           : 42 °C (315 K)
Temperature Sensor 2           : 40 °C (313 K)
Thermal Management T1 Trans Count       : 0
Thermal Management T2 Trans Count       : 0
Thermal Management T1 Total Time        : 0
Thermal Management T2 Total Time        : 0
#
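As a sanity check on the TBW question: NVMe "Data Units" are counted in blocks of 512,000 bytes (1000 units of 512 bytes), so the raw counters convert as follows (values taken from the smart-log above):

```shell
# Convert NVMe smart-log Data Units to human-readable totals.
# 1 data unit = 512,000 bytes per the NVMe spec.
awk 'BEGIN {
    read  = 2671220     # Data Units Read    (from the smart-log above)
    write = 594263      # Data Units Written (from the smart-log above)
    printf "read:    %.2f TB\n", read  * 512000 / 1e12
    printf "written: %.2f GB\n", write * 512000 / 1e9
}'
```

That is roughly 304 GB written against a 160TBW rating, so wear is very unlikely to be the cause of the failure, which is consistent with media_errors: 0 and percentage_used: 0%.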

Checking the features the NVMe drive supports:

# nvme get-feature /dev/nvme0n1
get-feature:0x01 (Arbitration), Current value:0x00000006
get-feature:0x02 (Power Management), Current value:00000000
get-feature:0x04 (Temperature Threshold), Current value:0x0000016b
get-feature:0x05 (Error Recovery), Current value:00000000
get-feature:0x06 (Volatile Write Cache), Current value:0x00000001
get-feature:0x07 (Number of Queues), Current value:0x00030003
get-feature:0x08 (Interrupt Coalescing), Current value:00000000
get-feature:0x09 (Interrupt Vector Configuration), Current value:0x00010000
get-feature:0x0a (Write Atomicity Normal), Current value:00000000
get-feature:0x0b (Async Event Configuration), Current value:0x00000200
get-feature:0x0c (Autonomous Power State Transition), Current value:00000000
       0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
0000: 18 f4 01 00 00 00 00 00 18 f4 01 00 00 00 00 00 "................"
0010: 18 f4 01 00 00 00 00 00 20 70 17 00 00 00 00 00 ".........p......"
(rows 0020 through 00f0 are all zeros; omitted)
get-feature:0x0d (Host Memory Buffer), Current value:0x00000001
       0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
0000: 00 10 00 00 00 00 e7 07 01 00 00 00 04 00 00 00 "................"
(rows from 0010 onward are all zeros; omitted)
09f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0a90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0aa0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ab0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ac0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ad0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ae0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0af0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0b90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ba0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0bb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0bc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0bd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0be0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0bf0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0c90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ca0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0cb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0cc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0cd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ce0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0cf0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0d90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0da0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0db0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0dc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0dd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0de0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0df0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ea0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0f90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0fa0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
0ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "................"
get-feature:0x10 (Host Controlled Thermal Management), Current value:0x01750184
get-feature:0x11 (Non-Operational Power State Config), Current value:0x00000001
get-feature:0x80 (Software Progress), Current value:0x0000003b
get-feature:0xc2 (Unknown), Current value:00000000
get-feature:0xcb (Unknown), Current value:00000000
#
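As a side note on the output above: feature IDs 0xc2 and 0xcb fall in the NVMe vendor-specific range (0xC0–0xFF), which is why nvme-cli can only report them as "Unknown". For the standard features, nvme-cli can decode the raw value into readable fields. A minimal sketch, assuming the device is `/dev/nvme0`:

```shell
# -H asks nvme-cli to decode known feature fields in human-readable form.
nvme get-feature /dev/nvme0 -f 0x10 -H   # Host Controlled Thermal Management
nvme get-feature /dev/nvme0 -f 0x11 -H   # Non-Operational Power State Config

# Vendor-specific features (0xC0-0xFF) have no standard decoding;
# nvme-cli will still dump the raw value.
nvme get-feature /dev/nvme0 -f 0xc2
```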

2025/04/03 addendum

The NVMe device had died again.

Is there something finicky about how this box handles NVMe, or should I switch to SATA instead?

Feb 17 18:58:52 niselog kernel: nvme nvme0: I/O tag 566 (0236) opcode 0x2 (Read) QID 4 timeout, aborting req_op:READ(0) size:49152
Feb 17 18:58:53 niselog kernel: nvme nvme0: I/O tag 381 (717d) opcode 0x0 (Flush) QID 1 timeout, aborting req_op:FLUSH(2) size:0
Feb 17 18:59:10 niselog kernel: nvme nvme0: I/O tag 567 (1237) opcode 0x0 (Flush) QID 4 timeout, aborting req_op:FLUSH(2) size:0
Feb 17 18:59:22 niselog kernel: nvme nvme0: I/O tag 566 (0236) opcode 0x2 (Read) QID 4 timeout, reset controller
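I/O timeouts followed by a controller reset like the ones above are commonly attributed to APST (Autonomous Power State Transition): some SSD/platform combinations fail to wake cleanly from deep power states. This is only a hypothesis for this box, but a commonly suggested check and workaround sketch looks like this (feature ID 0x0c is APST; `grubby` is the EL9 way to edit kernel arguments):

```shell
# Check whether APST is currently enabled on the controller (feature 0x0c):
nvme get-feature /dev/nvme0 -f 0x0c -H

# Workaround: forbid transitions into high-latency power states.
# Setting the maximum allowed latency to 0 disables APST entirely.
grubby --update-kernel=ALL --args="nvme_core.default_ps_max_latency_us=0"
reboot
```

If the timeouts stop after this, the power-state handling was the culprit; if not, the drive or the PCIe link itself is suspect and the SATA question becomes more serious.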

Feb 17 19:04:39 niselog kernel: INFO: task md127_raid1:588 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:md127_raid1     state:D stack:0     pid:588   tgid:588   ppid:2      flags:0x00004000
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: md_super_wait+0x72/0xa0
Feb 17 19:04:39 niselog kernel: ? __pfx_autoremove_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: md_bitmap_daemon_work+0x16d/0x3b0
Feb 17 19:04:39 niselog kernel: md_check_recovery+0x1d/0x390
Feb 17 19:04:39 niselog kernel: raid1d+0x40/0x580 [raid1]
Feb 17 19:04:39 niselog kernel: ? __timer_delete_sync+0x2c/0x40
Feb 17 19:04:39 niselog kernel: ? schedule_timeout+0x92/0x160
Feb 17 19:04:39 niselog kernel: ? prepare_to_wait_event+0x5d/0x180
Feb 17 19:04:39 niselog kernel: md_thread+0xa8/0x160
Feb 17 19:04:39 niselog kernel: ? __pfx_autoremove_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: ? __pfx_md_thread+0x10/0x10
Feb 17 19:04:39 niselog kernel: kthread+0xdd/0x100
Feb 17 19:04:39 niselog kernel: ? __pfx_kthread+0x10/0x10
Feb 17 19:04:39 niselog kernel: ret_from_fork+0x29/0x50
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task journal-offline:2923856 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:journal-offline state:D stack:0     pid:2923856 tgid:814   ppid:1      flags:0x00000002
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: io_schedule+0x42/0x70
Feb 17 19:04:39 niselog kernel: folio_wait_bit+0xe9/0x200
Feb 17 19:04:39 niselog kernel: ? __pfx_wake_page_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: folio_wait_writeback+0x28/0x80
Feb 17 19:04:39 niselog kernel: write_cache_pages+0x101/0x3a0
Feb 17 19:04:39 niselog kernel: ? __pfx_iomap_do_writepage+0x10/0x10
Feb 17 19:04:39 niselog kernel: iomap_writepages+0x1c/0x40
Feb 17 19:04:39 niselog kernel: xfs_vm_writepages+0x7a/0xb0 [xfs]
Feb 17 19:04:39 niselog kernel: do_writepages+0xcc/0x1a0
Feb 17 19:04:39 niselog kernel: filemap_fdatawrite_wbc+0x66/0x90
Feb 17 19:04:39 niselog kernel: __filemap_fdatawrite_range+0x54/0x80
Feb 17 19:04:39 niselog kernel: file_write_and_wait_range+0x48/0xb0
Feb 17 19:04:39 niselog kernel: xfs_file_fsync+0x5a/0x240 [xfs]
Feb 17 19:04:39 niselog kernel: __x64_sys_fsync+0x33/0x60
Feb 17 19:04:39 niselog kernel: do_syscall_64+0x5c/0xf0
Feb 17 19:04:39 niselog kernel: ? syscall_exit_work+0x103/0x130
Feb 17 19:04:39 niselog kernel: ? syscall_exit_to_user_mode+0x19/0x40
Feb 17 19:04:39 niselog kernel: ? do_syscall_64+0x6b/0xf0
Feb 17 19:04:39 niselog kernel: ? syscall_exit_work+0x103/0x130
Feb 17 19:04:39 niselog kernel: ? syscall_exit_to_user_mode+0x19/0x40
Feb 17 19:04:39 niselog kernel: ? do_syscall_64+0x6b/0xf0
Feb 17 19:04:39 niselog kernel: ? fpregs_restore_userregs+0x47/0xd0
Feb 17 19:04:39 niselog kernel: ? exit_to_user_mode_prepare+0xef/0x100
Feb 17 19:04:39 niselog kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 17 19:04:39 niselog kernel: RIP: 0033:0x7f787bf0459b
Feb 17 19:04:39 niselog kernel: RSP: 002b:00007f787a1fe9b0 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
Feb 17 19:04:39 niselog kernel: RAX: ffffffffffffffda RBX: 0000558ae4556ca0 RCX: 00007f787bf0459b
Feb 17 19:04:39 niselog kernel: RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000021
Feb 17 19:04:39 niselog kernel: RBP: 0000558ae4574190 R08: 0000000000000000 R09: 00007f787a1ff640
Feb 17 19:04:39 niselog kernel: R10: 00007f787be89bc6 R11: 0000000000000293 R12: 0000558ae2568343
Feb 17 19:04:39 niselog kernel: R13: 0000558ae256d8a0 R14: 00007f787be89a50 R15: 0000000000000021
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task auditd:1117 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:auditd          state:D stack:0     pid:1117  tgid:1116  ppid:1      flags:0x00000002
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: ? bio_associate_blkg_from_css+0xf5/0x320
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: md_write_start.part.0+0x195/0x250
Feb 17 19:04:39 niselog kernel: ? __pfx_autoremove_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: raid1_make_request+0x5b/0xbb [raid1]
Feb 17 19:04:39 niselog kernel: md_handle_request+0x150/0x270
Feb 17 19:04:39 niselog kernel: ? __bio_split_to_limits+0x8e/0x280
Feb 17 19:04:39 niselog kernel: __submit_bio+0x94/0x130
Feb 17 19:04:39 niselog kernel: __submit_bio_noacct+0x7e/0x1e0
Feb 17 19:04:39 niselog kernel: iomap_submit_ioend+0x4e/0x80
Feb 17 19:04:39 niselog kernel: iomap_writepage_map+0x30a/0x4c0
Feb 17 19:04:39 niselog kernel: write_cache_pages+0x13c/0x3a0
Feb 17 19:04:39 niselog kernel: ? __pfx_iomap_do_writepage+0x10/0x10
Feb 17 19:04:39 niselog kernel: ? wakeup_preempt+0x5a/0x70
Feb 17 19:04:39 niselog kernel: ? ttwu_do_activate+0x112/0x1f0
Feb 17 19:04:39 niselog kernel: iomap_writepages+0x1c/0x40
Feb 17 19:04:39 niselog kernel: xfs_vm_writepages+0x7a/0xb0 [xfs]
Feb 17 19:04:39 niselog kernel: do_writepages+0xcc/0x1a0
Feb 17 19:04:39 niselog kernel: ? pick_next_task_fair+0x1dc/0x4f0
Feb 17 19:04:39 niselog kernel: filemap_fdatawrite_wbc+0x66/0x90
Feb 17 19:04:39 niselog kernel: __filemap_fdatawrite_range+0x54/0x80
Feb 17 19:04:39 niselog kernel: file_write_and_wait_range+0x48/0xb0
Feb 17 19:04:39 niselog kernel: xfs_file_fsync+0x5a/0x240 [xfs]
Feb 17 19:04:39 niselog kernel: __x64_sys_fsync+0x33/0x60
Feb 17 19:04:39 niselog kernel: do_syscall_64+0x5c/0xf0
Feb 17 19:04:39 niselog kernel: ? futex_wait+0x67/0x100
Feb 17 19:04:39 niselog kernel: ? futex_wake+0x155/0x190
Feb 17 19:04:39 niselog kernel: ? do_futex+0xbe/0x1d0
Feb 17 19:04:39 niselog kernel: ? __x64_sys_futex+0x73/0x1d0
Feb 17 19:04:39 niselog kernel: ? syscall_exit_to_user_mode+0x19/0x40
Feb 17 19:04:39 niselog kernel: ? do_syscall_64+0x6b/0xf0
Feb 17 19:04:39 niselog kernel: ? rseq_get_rseq_cs+0x1d/0x240
Feb 17 19:04:39 niselog kernel: ? syscall_exit_to_user_mode+0x19/0x40
Feb 17 19:04:39 niselog kernel: ? rseq_ip_fixup+0x6e/0x1a0
Feb 17 19:04:39 niselog kernel: ? fpregs_restore_userregs+0x47/0xd0
Feb 17 19:04:39 niselog kernel: ? exit_to_user_mode_prepare+0xef/0x100
Feb 17 19:04:39 niselog kernel: ? syscall_exit_to_user_mode+0x19/0x40
Feb 17 19:04:39 niselog kernel: ? do_syscall_64+0x6b/0xf0
Feb 17 19:04:39 niselog kernel: ? do_syscall_64+0x6b/0xf0
Feb 17 19:04:39 niselog kernel: ? do_syscall_64+0x6b/0xf0
Feb 17 19:04:39 niselog kernel: ? sysvec_apic_timer_interrupt+0x3c/0x90
Feb 17 19:04:39 niselog kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 17 19:04:39 niselog kernel: RIP: 0033:0x7f9b61d0459b
Feb 17 19:04:39 niselog kernel: RSP: 002b:00007f9b615fec50 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
Feb 17 19:04:39 niselog kernel: RAX: ffffffffffffffda RBX: 000055956ea42020 RCX: 00007f9b61d0459b
Feb 17 19:04:39 niselog kernel: RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000004
Feb 17 19:04:39 niselog kernel: RBP: 000055956ea42060 R08: 0000000000000000 R09: 00000000ffffffff
Feb 17 19:04:39 niselog kernel: R10: 0000000000000000 R11: 0000000000000293 R12: 00007f9b615ff640
Feb 17 19:04:39 niselog kernel: R13: 0000000000000002 R14: 00007f9b61c89a50 R15: 0000000000000000
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task systemd-journal:1567 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:systemd-journal state:D stack:0     pid:1567  tgid:1567  ppid:1      flags:0x00000002
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: ? bio_associate_blkg_from_css+0xf5/0x320
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: md_write_start.part.0+0x195/0x250
Feb 17 19:04:39 niselog kernel: ? __pfx_autoremove_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: raid1_make_request+0x5b/0xbb [raid1]
Feb 17 19:04:39 niselog kernel: md_handle_request+0x150/0x270
Feb 17 19:04:39 niselog kernel: ? __bio_split_to_limits+0x8e/0x280
Feb 17 19:04:39 niselog kernel: __submit_bio+0x94/0x130
Feb 17 19:04:39 niselog kernel: __submit_bio_noacct+0x7e/0x1e0
Feb 17 19:04:39 niselog kernel: iomap_submit_ioend+0x4e/0x80
Feb 17 19:04:39 niselog kernel: xfs_vm_writepages+0x7a/0xb0 [xfs]
Feb 17 19:04:39 niselog kernel: do_writepages+0xcc/0x1a0
Feb 17 19:04:39 niselog kernel: ? xfs_buffered_write_iomap_begin+0x5da/0xa90 [xfs]
Feb 17 19:04:39 niselog kernel: ? xfs_inode_to_log_dinode+0x210/0x410 [xfs]
Feb 17 19:04:39 niselog kernel: filemap_fdatawrite_wbc+0x66/0x90
Feb 17 19:04:39 niselog kernel: __filemap_fdatawrite_range+0x54/0x80
Feb 17 19:04:39 niselog kernel: file_write_and_wait_range+0x48/0xb0
Feb 17 19:04:39 niselog kernel: xfs_file_fsync+0x5a/0x240 [xfs]
Feb 17 19:04:39 niselog kernel: __x64_sys_fsync+0x33/0x60
Feb 17 19:04:39 niselog kernel: do_syscall_64+0x5c/0xf0
Feb 17 19:04:39 niselog kernel: ? xfs_iunlock+0xb9/0x110 [xfs]
Feb 17 19:04:39 niselog kernel: ? balance_dirty_pages_ratelimited_flags+0x132/0x380
Feb 17 19:04:39 niselog kernel: ? fault_dirty_shared_page+0x8c/0xf0
Feb 17 19:04:39 niselog kernel: ? do_wp_page+0xe7/0x4b0
Feb 17 19:04:39 niselog kernel: ? pte_offset_map_nolock+0x2b/0xb0
Feb 17 19:04:39 niselog kernel: ? __handle_mm_fault+0x2fb/0x690
Feb 17 19:04:39 niselog kernel: ? __count_memcg_events+0x4f/0xb0
Feb 17 19:04:39 niselog kernel: ? mm_account_fault+0x6c/0x100
Feb 17 19:04:39 niselog kernel: ? handle_mm_fault+0x116/0x270
Feb 17 19:04:39 niselog kernel: ? do_user_addr_fault+0x1b4/0x6a0
Feb 17 19:04:39 niselog kernel: ? exc_page_fault+0x62/0x150
Feb 17 19:04:39 niselog kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 17 19:04:39 niselog kernel: RIP: 0033:0x7f032590459b
Feb 17 19:04:39 niselog kernel: RSP: 002b:00007fff98e01f50 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
Feb 17 19:04:39 niselog kernel: RAX: ffffffffffffffda RBX: 000055bb10cfdbb0 RCX: 00007f032590459b
Feb 17 19:04:39 niselog kernel: RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000011
Feb 17 19:04:39 niselog kernel: RBP: 0000000000000098 R08: 0000000000000000 R09: 00007fff98e02cb0
Feb 17 19:04:39 niselog kernel: R10: 00007fff98e01f10 R11: 0000000000000293 R12: 0000000000000003
Feb 17 19:04:39 niselog kernel: R13: 00007fff98e020a0 R14: 00007fff98e02098 R15: 00007fff98e02590
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task kworker/3:0:2573918 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:kworker/3:0     state:D stack:0     pid:2573918 tgid:2573918 ppid:2      flags:0x00004000
Feb 17 19:04:39 niselog kernel: Workqueue: xfs-sync/dm-0 xfs_log_worker [xfs]
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: xlog_wait_on_iclog+0x16b/0x180 [xfs]
Feb 17 19:04:39 niselog kernel: ? __pfx_default_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: xfs_log_force_seq+0x8f/0x160 [xfs]
Feb 17 19:04:39 niselog kernel: __xfs_trans_commit+0x2a2/0x360 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_sync_sb+0x6d/0x80 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_log_worker+0x9f/0xd0 [xfs]
Feb 17 19:04:39 niselog kernel: process_one_work+0x194/0x380
Feb 17 19:04:39 niselog kernel: worker_thread+0x2fe/0x410
Feb 17 19:04:39 niselog kernel: ? __pfx_worker_thread+0x10/0x10
Feb 17 19:04:39 niselog kernel: kthread+0xdd/0x100
Feb 17 19:04:39 niselog kernel: ? __pfx_kthread+0x10/0x10
Feb 17 19:04:39 niselog kernel: ret_from_fork+0x29/0x50
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task UV_WORKER[5]:2732216 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:UV_WORKER[5]    state:D stack:0     pid:2732216 tgid:2732033 ppid:1      flags:0x00000002
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: schedule_preempt_disabled+0x11/0x20
Feb 17 19:04:39 niselog kernel: rwsem_down_write_slowpath+0x23d/0x500
Feb 17 19:04:39 niselog kernel: down_write+0x58/0x60
Feb 17 19:04:39 niselog kernel: xfs_ilock+0xef/0x100 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_file_write_checks+0x215/0x2e0 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_file_dio_write_aligned+0x65/0x160 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_file_write_iter+0xce/0x110 [xfs]
Feb 17 19:04:39 niselog kernel: vfs_write+0x2cb/0x410
Feb 17 19:04:39 niselog kernel: __x64_sys_pwrite64+0x90/0xc0
Feb 17 19:04:39 niselog kernel: do_syscall_64+0x5c/0xf0
Feb 17 19:04:39 niselog kernel: ? __count_memcg_events+0x4f/0xb0
Feb 17 19:04:39 niselog kernel: ? mm_account_fault+0x6c/0x100
Feb 17 19:04:39 niselog kernel: ? handle_mm_fault+0x116/0x270
Feb 17 19:04:39 niselog kernel: ? do_user_addr_fault+0x1d6/0x6a0
Feb 17 19:04:39 niselog kernel: ? exc_page_fault+0x62/0x150
Feb 17 19:04:39 niselog kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 17 19:04:39 niselog kernel: RIP: 0033:0x7f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RSP: 002b:00007f7dfb87cc90 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
Feb 17 19:04:39 niselog kernel: RAX: ffffffffffffffda RBX: 00007f7dfb87df28 RCX: 00007f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RDX: 000000000000b000 RSI: 00005631c116b000 RDI: 000000000000003b
Feb 17 19:04:39 niselog kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000059000
Feb 17 19:04:39 niselog kernel: R10: 0000000000059000 R11: 0000000000000293 R12: 00007f7e043fa658
Feb 17 19:04:39 niselog kernel: R13: 00007f7dfb87d038 R14: 0000000000000001 R15: 00007f7dfb87d010
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task UV_WORKER[9]:2732222 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:UV_WORKER[9]    state:D stack:0     pid:2732222 tgid:2732033 ppid:1      flags:0x00000002
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: ? bio_associate_blkg_from_css+0xf5/0x320
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: md_write_start.part.0+0x195/0x250
Feb 17 19:04:39 niselog kernel: ? __pfx_autoremove_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: raid1_make_request+0x5b/0xbb [raid1]
Feb 17 19:04:39 niselog kernel: md_handle_request+0x150/0x270
Feb 17 19:04:39 niselog kernel: ? __bio_split_to_limits+0x8e/0x280
Feb 17 19:04:39 niselog kernel: __submit_bio+0x94/0x130
Feb 17 19:04:39 niselog kernel: __submit_bio_noacct+0x7e/0x1e0
Feb 17 19:04:39 niselog kernel: iomap_dio_bio_iter+0x3bb/0x550
Feb 17 19:04:39 niselog kernel: __iomap_dio_rw+0x305/0x590
Feb 17 19:04:39 niselog kernel: iomap_dio_rw+0xa/0x30
Feb 17 19:04:39 niselog kernel: xfs_file_dio_write_aligned+0x96/0x160 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_file_write_iter+0xce/0x110 [xfs]
Feb 17 19:04:39 niselog kernel: vfs_write+0x2cb/0x410
Feb 17 19:04:39 niselog kernel: __x64_sys_pwrite64+0x90/0xc0
Feb 17 19:04:39 niselog kernel: do_syscall_64+0x5c/0xf0
Feb 17 19:04:39 niselog kernel: ? __mod_memcg_lruvec_state+0x76/0xc0
Feb 17 19:04:39 niselog kernel: ? __mod_lruvec_page_state+0x97/0x160
Feb 17 19:04:39 niselog kernel: ? folio_add_new_anon_rmap+0x44/0xe0
Feb 17 19:04:39 niselog kernel: ? do_anonymous_page+0x25a/0x410
Feb 17 19:04:39 niselog kernel: ? __handle_mm_fault+0x2fb/0x690
Feb 17 19:04:39 niselog kernel: ? __count_memcg_events+0x4f/0xb0
Feb 17 19:04:39 niselog kernel: ? mm_account_fault+0x6c/0x100
Feb 17 19:04:39 niselog kernel: ? handle_mm_fault+0x116/0x270
Feb 17 19:04:39 niselog kernel: ? do_user_addr_fault+0x1d6/0x6a0
Feb 17 19:04:39 niselog kernel: ? exc_page_fault+0x62/0x150
Feb 17 19:04:39 niselog kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 17 19:04:39 niselog kernel: RIP: 0033:0x7f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RSP: 002b:00007f7df8876c90 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
Feb 17 19:04:39 niselog kernel: RAX: ffffffffffffffda RBX: 00007f7df8877f28 RCX: 00007f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RDX: 0000000000009000 RSI: 00005631c102d000 RDI: 000000000000003b
Feb 17 19:04:39 niselog kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000050000
Feb 17 19:04:39 niselog kernel: R10: 0000000000050000 R11: 0000000000000293 R12: 00007f7e043fa658
Feb 17 19:04:39 niselog kernel: R13: 00007f7df8877038 R14: 0000000000000001 R15: 00007f7df8877010
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task UV_WORKER[14]:2732230 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:UV_WORKER[14]   state:D stack:0     pid:2732230 tgid:2732033 ppid:1      flags:0x00000002
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: ? bio_associate_blkg_from_css+0xf5/0x320
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: md_write_start.part.0+0x195/0x250
Feb 17 19:04:39 niselog kernel: ? __pfx_autoremove_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: raid1_make_request+0x5b/0xbb [raid1]
Feb 17 19:04:39 niselog kernel: md_handle_request+0x150/0x270
Feb 17 19:04:39 niselog kernel: ? __bio_split_to_limits+0x8e/0x280
Feb 17 19:04:39 niselog kernel: __submit_bio+0x94/0x130
Feb 17 19:04:39 niselog kernel: __submit_bio_noacct+0x7e/0x1e0
Feb 17 19:04:39 niselog kernel: iomap_dio_bio_iter+0x3bb/0x550
Feb 17 19:04:39 niselog kernel: __iomap_dio_rw+0x305/0x590
Feb 17 19:04:39 niselog kernel: iomap_dio_rw+0xa/0x30
Feb 17 19:04:39 niselog kernel: xfs_file_dio_write_aligned+0x96/0x160 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_file_write_iter+0xce/0x110 [xfs]
Feb 17 19:04:39 niselog kernel: vfs_write+0x2cb/0x410
Feb 17 19:04:39 niselog kernel: __x64_sys_pwrite64+0x90/0xc0
Feb 17 19:04:39 niselog kernel: do_syscall_64+0x5c/0xf0
Feb 17 19:04:39 niselog kernel: ? __mod_memcg_lruvec_state+0x76/0xc0
Feb 17 19:04:39 niselog kernel: ? __mod_lruvec_page_state+0x97/0x160
Feb 17 19:04:39 niselog kernel: ? folio_add_new_anon_rmap+0x44/0xe0
Feb 17 19:04:39 niselog kernel: ? do_anonymous_page+0x25a/0x410
Feb 17 19:04:39 niselog kernel: ? __handle_mm_fault+0x2fb/0x690
Feb 17 19:04:39 niselog kernel: ? __count_memcg_events+0x4f/0xb0
Feb 17 19:04:39 niselog kernel: ? mm_account_fault+0x6c/0x100
Feb 17 19:04:39 niselog kernel: ? handle_mm_fault+0x116/0x270
Feb 17 19:04:39 niselog kernel: ? do_user_addr_fault+0x1d6/0x6a0
Feb 17 19:04:39 niselog kernel: ? exc_page_fault+0x62/0x150
Feb 17 19:04:39 niselog kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 17 19:04:39 niselog kernel: RIP: 0033:0x7f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RSP: 002b:00007f7df486ec90 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
Feb 17 19:04:39 niselog kernel: RAX: ffffffffffffffda RBX: 00007f7df486ff28 RCX: 00007f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RDX: 0000000000002000 RSI: 00005631c0325000 RDI: 000000000000008a
Feb 17 19:04:39 niselog kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000295000
Feb 17 19:04:39 niselog kernel: R10: 0000000000295000 R11: 0000000000000293 R12: 00007f7e043fa658
Feb 17 19:04:39 niselog kernel: R13: 00007f7df486f038 R14: 0000000000000001 R15: 00007f7df486f010
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task UV_WORKER[18]:2732233 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:UV_WORKER[18]   state:D stack:0     pid:2732233 tgid:2732033 ppid:1      flags:0x00000002
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: schedule_preempt_disabled+0x11/0x20
Feb 17 19:04:39 niselog kernel: rwsem_down_read_slowpath+0x37f/0x4f0
Feb 17 19:04:39 niselog kernel: down_read+0x45/0xa0
Feb 17 19:04:39 niselog kernel: xfs_ilock+0x79/0x100 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_file_dio_write_aligned+0xc5/0x160 [xfs]
Feb 17 19:04:39 niselog kernel: xfs_file_write_iter+0xce/0x110 [xfs]
Feb 17 19:04:39 niselog kernel: vfs_write+0x2cb/0x410
Feb 17 19:04:39 niselog kernel: __x64_sys_pwrite64+0x90/0xc0
Feb 17 19:04:39 niselog kernel: do_syscall_64+0x5c/0xf0
Feb 17 19:04:39 niselog kernel: ? do_user_addr_fault+0x1d6/0x6a0
Feb 17 19:04:39 niselog kernel: ? syscall_exit_work+0x103/0x130
Feb 17 19:04:39 niselog kernel: ? exc_page_fault+0x62/0x150
Feb 17 19:04:39 niselog kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 17 19:04:39 niselog kernel: RIP: 0033:0x7f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RSP: 002b:00007f7df306bc90 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
Feb 17 19:04:39 niselog kernel: RAX: ffffffffffffffda RBX: 00007f7df306cf28 RCX: 00007f7e034fbc4f
Feb 17 19:04:39 niselog kernel: RDX: 0000000000006000 RSI: 00005631c110d000 RDI: 000000000000003b
Feb 17 19:04:39 niselog kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000064000
Feb 17 19:04:39 niselog kernel: R10: 0000000000064000 R11: 0000000000000293 R12: 00007f7e043fa658
Feb 17 19:04:39 niselog kernel: R13: 00007f7df306c038 R14: 0000000000000001 R15: 00007f7df306c010
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: INFO: task kworker/u16:2:2888985 blocked for more than 122 seconds.
Feb 17 19:04:39 niselog kernel:      Tainted: G               X  -------  ---  5.14.0-503.15.1.el9_5.x86_64 #1
Feb 17 19:04:39 niselog kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 17 19:04:39 niselog kernel: task:kworker/u16:2   state:D stack:0     pid:2888985 tgid:2888985 ppid:2      flags:0x00004000
Feb 17 19:04:39 niselog kernel: Workqueue: writeback wb_workfn (flush-253:6)
Feb 17 19:04:39 niselog kernel: Call Trace:
Feb 17 19:04:39 niselog kernel: <TASK>
Feb 17 19:04:39 niselog kernel: __schedule+0x229/0x550
Feb 17 19:04:39 niselog kernel: schedule+0x2e/0xd0
Feb 17 19:04:39 niselog kernel: md_write_start.part.0+0x195/0x250
Feb 17 19:04:39 niselog kernel: ? __pfx_autoremove_wake_function+0x10/0x10
Feb 17 19:04:39 niselog kernel: raid1_make_request+0x5b/0xbb [raid1]
Feb 17 19:04:39 niselog kernel: md_handle_request+0x150/0x270
Feb 17 19:04:39 niselog kernel: ? __bio_split_to_limits+0x8e/0x280
Feb 17 19:04:39 niselog kernel: __submit_bio+0x94/0x130
Feb 17 19:04:39 niselog kernel: __submit_bio_noacct+0x7e/0x1e0
Feb 17 19:04:39 niselog kernel: iomap_submit_ioend+0x4e/0x80
Feb 17 19:04:39 niselog kernel: xfs_vm_writepages+0x7a/0xb0 [xfs]
Feb 17 19:04:39 niselog kernel: do_writepages+0xcc/0x1a0
Feb 17 19:04:39 niselog kernel: ? __percpu_counter_sum_mask+0x6f/0x80
Feb 17 19:04:39 niselog kernel: __writeback_single_inode+0x41/0x270
Feb 17 19:04:39 niselog kernel: writeback_sb_inodes+0x209/0x4a0
Feb 17 19:04:39 niselog kernel: __writeback_inodes_wb+0x4c/0xe0
Feb 17 19:04:39 niselog kernel: wb_writeback+0x1d7/0x2d0
Feb 17 19:04:39 niselog kernel: wb_do_writeback+0x1d1/0x2b0
Feb 17 19:04:39 niselog kernel: wb_workfn+0x5e/0x290
Feb 17 19:04:39 niselog kernel: ? __switch_to_asm+0x3a/0x80
Feb 17 19:04:39 niselog kernel: ? finish_task_switch.isra.0+0x8c/0x2a0
Feb 17 19:04:39 niselog kernel: ? __schedule+0x231/0x550
Feb 17 19:04:39 niselog kernel: process_one_work+0x194/0x380
Feb 17 19:04:39 niselog kernel: worker_thread+0x2fe/0x410
Feb 17 19:04:39 niselog kernel: ? __pfx_worker_thread+0x10/0x10
Feb 17 19:04:39 niselog kernel: kthread+0xdd/0x100
Feb 17 19:04:39 niselog kernel: ? __pfx_kthread+0x10/0x10
Feb 17 19:04:39 niselog kernel: ret_from_fork+0x29/0x50
Feb 17 19:04:39 niselog kernel: </TASK>
Feb 17 19:04:39 niselog kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
Feb 17 19:04:39 niselog kernel: nvme nvme0: Abort status: 0x371
Feb 17 19:04:39 niselog kernel: nvme nvme0: Abort status: 0x371
Feb 17 19:04:39 niselog kernel: nvme nvme0: Abort status: 0x371
Feb 17 19:04:39 niselog systemd: systemd-journald@netdata.service: Watchdog timeout (limit 3min)!
Feb 17 19:04:39 niselog systemd: systemd-journald@netdata.service: Killing process 1567 (systemd-journal) with signal SIGABRT.
Feb 17 19:04:39 niselog systemd: systemd-journald.service: State 'stop-watchdog' timed out. Killing.
Feb 17 19:04:39 niselog systemd: systemd-journald.service: Killing process 814 (systemd-journal) with signal SIGKILL.
Feb 17 19:04:39 niselog systemd: systemd-journald.service: Killing process 2923856 (journal-offline) with signal SIGKILL.
Feb 17 19:04:39 niselog systemd: systemd-journald@netdata.service: State 'stop-watchdog' timed out. Killing.
Feb 17 19:04:39 niselog systemd: systemd-journald@netdata.service: Killing process 1567 (systemd-journal) with signal SIGKILL.
Feb 17 19:04:39 niselog kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
Feb 17 19:04:39 niselog kernel: nvme nvme0: Disabling device after reset failure: -19
Feb 17 19:04:39 niselog kernel: md: super_written gets error=-5
Feb 17 19:04:39 niselog kernel: md/raid1:md127: nvme0n1p3: rescheduling sector 26697928
Feb 17 19:04:39 niselog kernel: md/raid1:md127: Disk failure on nvme0n1p3, disabling device.#012md/raid1:md127: Operation continuing on 1 devices.
Feb 17 19:04:39 niselog kernel: nvme nvme0: Identify namespace failed (-5)
Feb 17 19:04:39 niselog kernel: XFS (nvme0n1p2): Block device removal (0x20) detected at fs_bdev_mark_dead+0x40/0x60 (fs/xfs/xfs_super.c:1179).  Shutting down filesystem.
Feb 17 19:04:39 niselog kernel: XFS (nvme0n1p2): Please unmount the filesystem and rectify the problem(s)
Feb 17 19:04:39 niselog kernel: md/raid1:md127: redirecting sector 26697928 to other mirror: sda3
Feb 17 19:04:39 niselog systemd: session-49532.scope: Deactivated successfully.
Feb 17 19:04:39 niselog systemd: session-49535.scope: Deactivated successfully.
Feb 17 19:04:39 niselog systemd: session-49534.scope: Deactivated successfully.
Feb 17 19:04:39 niselog systemd: systemd-journald@netdata.service: Main process exited, code=killed, status=9/KILL
Feb 17 19:04:39 niselog systemd: systemd-journald@netdata.service: Failed with result 'watchdog'.
Feb 17 19:04:39 niselog systemd: systemd-journald@netdata.service: Consumed 25.882s CPU time.
Feb 17 19:04:39 niselog systemd: Starting Journal Service for Namespace netdata...
Feb 17 19:04:39 niselog systemd-coredump: Failed to get EXE, ignoring: No such process
Feb 17 19:04:39 niselog systemd: systemd-journald.service: Main process exited, code=killed, status=9/KILL
Feb 17 19:04:39 niselog systemd: systemd-journald.service: Failed with result 'watchdog'.
Feb 17 19:04:39 niselog systemd-coredump: Failed to pread from coredump fd: Unexpected EOF
Feb 17 19:04:39 niselog systemd: systemd-journald.service: Consumed 1min 34.909s CPU time.
Feb 17 19:04:39 niselog systemd: systemd-journald.service: Scheduled restart job, restart counter is at 2.
Feb 17 19:04:39 niselog systemd: Stopped Journal Service.
Feb 17 19:04:39 niselog systemd: systemd-journald.service: Consumed 1min 34.909s CPU time.
Feb 17 19:04:39 niselog systemd: Starting Journal Service...
Feb 17 19:04:39 niselog : Could not parse ELF file, gelf_getehdr() failed: invalid `Elf' handle
Feb 17 19:04:39 niselog systemd-coredump: Process 814 (systemd-journal) of user 0 dumped core.
Feb 17 19:04:39 niselog systemd-coredump: Coredump diverted to /var/lib/systemd/coredump/core.systemd-journal.0.f0f07d48bddb405c8f54476773709261.814.1739786679000000.zst
Feb 17 19:04:39 niselog systemd: Started Journal Service for Namespace netdata.
Feb 17 19:04:39 niselog systemd-journald[2924825]: File /var/log/journal/cd9cc679cd964f349e957629b0d52cb2/system.journal corrupted or uncleanly shut down, renaming and replacing.
Feb 17 19:04:39 niselog systemd-journald[2924825]: Journal started
Feb 17 19:04:39 niselog systemd-journald[2924825]: System Journal (/var/log/journal/cd9cc679cd964f349e957629b0d52cb2) is 350.8M, max 1017.6M, 666.7M free.
Feb 17 19:04:39 niselog systemd[1]: session-49533.scope: Deactivated successfully.
Feb 17 19:04:39 niselog systemd[1]: systemd-journald.service: Watchdog timeout (limit 3min)!
Feb 17 19:04:39 niselog systemd[1]: systemd-journald.service: Killing process 814 (systemd-journal) with signal SIGABRT.
Feb 17 19:04:39 niselog systemd: Started Journal Service.
Feb 17 19:04:39 niselog systemd-journald[2924823]: Failed to open /dev/kmsg, ignoring: Operation not permitted
Feb 17 19:04:39 niselog systemd-journald[2924823]: File /var/log/journal/cd9cc679cd964f349e957629b0d52cb2.netdata/system.journal corrupted or uncleanly shut down, renaming and replacing.
Feb 17 19:04:39 niselog systemd-coredump[2924822]: Process 814 (systemd-journal) of user 0 dumped core.
Feb 17 19:04:39 niselog rsyslogd[1566]: imjournal: journal files changed, reloading...  [v8.2310.0-4.el9 try https://www.rsyslog.com/e/0 ]
Feb 17 19:04:56 niselog mdadm[1151]: Fail event detected on md device /dev/md/pv00, component device /dev/nvme0n1p3



Update 2025/06/19

In the end, within a month the NVMe SSD stopped being detected again. A reboot was not enough; the NVMe SSD would not come back until the machine was powered off and on again.

I have since switched to a configuration with two M.2 SATA SSDs, which has now been running without trouble for a month.

Installing RHEL 9 family OSes with the system disk on a software mirror

When I tried to build a server with AlmaLinux 9, I had no idea how to configure the installer to put the system disk on a software mirror, so I wrote this memo.

The same procedure should apply to RHEL 9, Rocky Linux 9, and Oracle Linux 9.

1. Disk selection

On the disk selection screen, select both local disks, choose "Storage Configuration: Custom", and click "Done".

2. Check the initial state

If existing partitions remain (e.g. from a previous install), delete them.

3. Create a default layout first

Click "Click here to create them automatically" to generate an initial set of partitions.

4. Shrink the volume group

The automatically created volume group grabs all available capacity, so reduce the space it reserves.

First select /home, then click "Modify" under Volume Group, and set:

- only one disk selected
- RAID level: "None"
- size policy: "Fixed" at around 100 GiB

5. Resize /boot/efi

The installer provides no way to control partition ordering, so to cope with the partitions ending up in reverse order, set /boot and /boot/efi to the same size, 1024 MiB.

6. Pin /boot and /boot/efi to the first disk

So that /boot/efi and /boot are created on the first disk, open "Modify" under "Device" and select only the first disk.

Do the same for /boot: select only the first disk.

7. Create /boot2 and /boot2/efi on the second disk with the same sizes

These are placeholders that exist only to reserve matching partitions on the second disk.

First, click "+" and add /boot2 as a new mount point.

Once it is created, open "Modify" for its device and select only the second disk.

Then click "Update Settings" at the lower right and confirm that the device for /boot2 becomes /dev/sdb.

Next, create /boot2/efi.

Again select only the second disk as its device.

Additionally, change its file system to "vfat".

Confirm that /boot and /boot/efi are on the sda side and /boot2 and /boot2/efi are on the sdb side.

8. Switch the volume group to a mirror

To mirror the volume group, click /home, then click "Modify" under Volume Group.

Make the following changes:
- select both devices
- change the RAID level to "RAID1"
- change the size policy to "As large as possible", then round the value down to something reasonably even

Note (2024/12/19): On real hardware I built this with two 512 GB SSDs; when one failed, all the replacements I could find were 500 GB. Using the full capacity makes replacement a problem, so it is better to leave some headroom.

Rename the volume group if you like.

9. Review the layout

Check that everything came out as intended.

Points to verify:
- /boot and /boot/efi are on the first disk
- /boot2 and /boot2/efi are on the second disk
- the partition numbers of /boot2 and /boot2/efi are 1 or 2 (not 3 or later)

If the partition numbers ended up as 3 or later, set the volume group back to RAID level "None" with only one device selected, and redo the procedure.

Add any further partitions you need at this point.

If everything looks good, click "Done".

10. Commit the partition layout

Review the pending changes and confirm the partition modifications.

11. Continue the installation

From here, the installation proceeds as usual.

12. Check the /boot2 and /boot2/efi partitions

After the installation finishes, inspect the placeholder /boot2 and /boot2/efi partitions.

[root@almalinux ~]# df -h
ファイルシス                         サイズ  使用  残り 使用% マウント位置
devtmpfs                               4.0M     0  4.0M    0% /dev
tmpfs                                  2.8G     0  2.8G    0% /dev/shm
tmpfs                                  1.2G  8.8M  1.1G    1% /run
efivarfs                               256K   47K  205K   19% /sys/firmware/efi/efivars
/dev/mapper/almalinux_almalinux-root    70G  2.0G   68G    3% /
/dev/sdb1                              960M   39M  922M    5% /boot2
/dev/sda2                              960M  225M  736M   24% /boot
/dev/sdb2                             1022M  4.0K 1022M    1% /boot2/efi
/dev/sda1                             1022M  7.1M 1015M    1% /boot/efi
/dev/mapper/almalinux_almalinux-home    25G  204M   24G    1% /home
tmpfs                                  567M     0  567M    0% /run/user/0
[root@almalinux ~]# fdisk -l /dev/sd?
ディスク /dev/sda: 256 GiB, 274877906944 バイト, 536870912 セクタ
ディスク型式: Virtual disk
単位: セクタ (1 * 512 = 512 バイト)
セクタサイズ (論理 / 物理): 512 バイト / 512 バイト
I/O サイズ (最小 / 推奨): 512 バイト / 512 バイト
ディスクラベルのタイプ: gpt
ディスク識別子: AD39DBC4-F496-416D-B408-26B7407C1AE3

デバイス   開始位置  終了位置    セクタ サイズ タイプ
/dev/sda1      2048   2099199   2097152     1G EFI システム
/dev/sda2   2099200   4196351   2097152     1G Linux ファイルシステム
/dev/sda3   4196352 536868863 532672512   254G Linux RAID


ディスク /dev/sdb: 256 GiB, 274877906944 バイト, 536870912 セクタ
ディスク型式: Virtual disk
単位: セクタ (1 * 512 = 512 バイト)
セクタサイズ (論理 / 物理): 512 バイト / 512 バイト
I/O サイズ (最小 / 推奨): 512 バイト / 512 バイト
ディスクラベルのタイプ: gpt
ディスク識別子: 77F9F7E9-068F-4F6B-8CA1-F0C23721890B

デバイス   開始位置  終了位置    セクタ サイズ タイプ
/dev/sdb1      2048   2099199   2097152     1G Linux ファイルシステム
/dev/sdb2   2099200   4196351   2097152     1G Microsoft 基本データ
/dev/sdb3   4196352 536868863 532672512   254G Linux RAID
[root@almalinux ~]#

The order of the /boot and /boot/efi partitions ended up reversed between the first and second disks.

13. Unmount /boot2 and /boot2/efi

Remove the /boot2 and /boot2/efi entries from /etc/fstab.

<the /etc/fstab edits themselves are omitted>
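The edit itself just drops the two /boot2 lines. A minimal sketch of the idea, run here against a made-up sample file rather than the real /etc/fstab (the UUIDs below are placeholders, not the ones from this install):

```shell
# Hypothetical fstab sample; real UUIDs are omitted in this article.
cat > /tmp/fstab.sample <<'EOF'
UUID=aaaa-bbbb /boot/efi vfat umask=0077 0 2
UUID=cccc-dddd /boot2 xfs defaults 0 0
UUID=eeee-ffff /boot2/efi vfat umask=0077 0 2
EOF
# Delete every line whose mount point starts with /boot2
# (on the real system: sed -i.bak '/[[:space:]]\/boot2/d' /etc/fstab).
sed '/[[:space:]]\/boot2/d' /tmp/fstab.sample
```

Note the pattern anchors on the whitespace before /boot2, so the /boot/efi line is left alone.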

Then unmount them manually:

[root@almalinux ~]# umount /boot2/efi
[root@almalinux ~]# umount /boot2
[root@almalinux ~]# df -h
ファイルシス                         サイズ  使用  残り 使用% マウント位置
devtmpfs                               4.0M     0  4.0M    0% /dev
tmpfs                                  2.8G     0  2.8G    0% /dev/shm
tmpfs                                  1.2G  8.8M  1.1G    1% /run
efivarfs                               256K   47K  205K   19% /sys/firmware/efi/efivars
/dev/mapper/almalinux_almalinux-root    70G  2.0G   68G    3% /
/dev/sda2                              960M  225M  736M   24% /boot
/dev/sda1                             1022M  7.1M 1015M    1% /boot/efi
/dev/mapper/almalinux_almalinux-home    25G  204M   24G    1% /home
tmpfs                                  567M     0  567M    0% /run/user/0
[root@almalinux ~]#

14. Copy the contents of /boot and /boot/efi wholesale with dd

Ignoring the mismatched partition metadata for now, copy /boot/efi from partition 1 of the first disk onto partition 1 of the second disk.

Likewise, copy /boot from partition 2 of the first disk onto partition 2 of the second disk.

[root@almalinux ~]# dd if=/dev/sda1 of=/dev/sdb1 bs=10240
104857+1 レコード入力
104857+1 レコード出力
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 23.3005 s, 46.1 MB/s
[root@almalinux ~]# dd if=/dev/sda2 of=/dev/sdb2 bs=10240
104857+1 レコード入力
104857+1 レコード出力
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 20.9083 s, 51.4 MB/s
[root@almalinux ~]#
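If you want reassurance that a dd copy came out bit-identical, cmp the source and destination afterwards. A safe sketch of the same mechanism using throwaway files instead of real partitions (on the live system you would run `cmp /dev/sda1 /dev/sdb1`):

```shell
# Demonstrate dd + cmp verification on temporary files, not block devices.
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"      # 1 MiB of test data
dd if="$src" of="$dst" bs=10240 status=none
# cmp -s exits 0 only if the copies are byte-for-byte identical
cmp -s "$src" "$dst" && echo "copies match"
rm -f "$src" "$dst"
```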

Run blkid and confirm that the UUIDs are now identical.

[root@almalinux ~]# blkid /dev/sd*
/dev/sda: PTUUID="ad39dbc4-f496-416d-b408-26b7407c1ae3" PTTYPE="gpt"
/dev/sda1: UUID="D14E-432E" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="00d955a6-b2cc-4659-8bcb-f40bb6484f02"
/dev/sda2: UUID="7353a543-10d3-4ff5-8db1-9405cd38a5fa" TYPE="xfs" PARTUUID="0e4f1623-0552-4e6a-b1ee-8d4dfa88d47b"
/dev/sda3: UUID="14892d7b-9a8d-c7c3-6dc5-26fd1435d076" UUID_SUB="be4ef8ad-56b5-b6be-8b97-7794a5132363" LABEL="almalinux:pv00" TYPE="linux_raid_member" PARTUUID="24e3bb46-6938-48e1-9e5d-20b8d748179f"
/dev/sdb: PTUUID="77f9f7e9-068f-4f6b-8ca1-f0c23721890b" PTTYPE="gpt"
/dev/sdb1: UUID="D14E-432E" TYPE="vfat" PARTUUID="76f36082-17b8-406b-bc80-269057b944a5"
/dev/sdb2: UUID="7353a543-10d3-4ff5-8db1-9405cd38a5fa" TYPE="xfs" PARTUUID="34de2199-82b1-49a2-a09f-d58cd9598563"
/dev/sdb3: UUID="14892d7b-9a8d-c7c3-6dc5-26fd1435d076" UUID_SUB="2103c67d-0ab9-996f-d170-1b51ff955622" LABEL="almalinux:pv00" TYPE="linux_raid_member" PARTUUID="11caecc2-105b-4b50-a5ab-fbbe312867f1"
[root@almalinux ~]#

Strictly speaking, having the same UUID and PARTUUID on two disks is invalid, but handling that properly is a hassle, so I skip it here. (Not exactly best practice!)

As far as I tested, with duplicate UUID/PARTUUID values you cannot predict which copy gets mounted, but the contents only change when a kernel or grub2 update rewrites them, so I judged the impact to be small.

Whenever a kernel or grub2 update lands, rerun the dd commands by hand to re-copy the partitions.
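To spot at a glance which UUIDs ended up duplicated, you can run device/UUID pairs through `sort | uniq -d`. Sketched here on a captured sample; on the live system the input would come from something like `lsblk -rno NAME,UUID`:

```shell
# Feed "device UUID" pairs in, print only UUIDs that appear more than once.
printf '%s\n' \
  'sda1 D14E-432E' \
  'sda2 7353a543-10d3-4ff5-8db1-9405cd38a5fa' \
  'sdb1 D14E-432E' \
  'sdb2 7353a543-10d3-4ff5-8db1-9405cd38a5fa' \
| awk '{print $2}' | sort | uniq -d
```

With the sample above, both the vfat and the xfs UUID are reported as duplicates.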

15. Fix the partition metadata

First, check the current partition information with parted -l. (fdisk -l is not recommended here: it does not show flags, and its view is sometimes stale.)

[root@almalinux ~]# parted -l
モデル: VMware Virtual disk (scsi)
ディスク /dev/sda: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前                  フラグ
 1    1049kB  1075MB  1074MB  fat32             EFI System Partition  boot, esp
 2    1075MB  2149MB  1074MB  xfs
 3    2149MB  275GB   273GB                                           raid


モデル: VMware Virtual disk (scsi)
ディスク /dev/sdb: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前  フラグ
 1    1049kB  1075MB  1074MB  fat32
 2    1075MB  2149MB  1074MB  xfs                     msftdata
 3    2149MB  275GB   273GB                           raid


エラー: /dev/md127: ディスクラベルが認識できません。
モデル: Linux Software RAID Array (md)
ディスク /dev/md127: 273GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: unknown
ディスクフラグ:

[root@almalinux ~]#

The flags on partitions 1 and 2 of the second disk differ from the first disk, so fix them.

First, set the boot and esp flags:

[root@almalinux ~]# parted /dev/sdb
GNU Parted 3.5
/dev/sdb を使用
GNU Parted へようこそ! コマンド一覧を見るには 'help' と入力してください。
(parted) set
パーティション番号? 1
反転するフラグ? boot
新しい状態?  [on]/off?
(parted) print
モデル: VMware Virtual disk (scsi)
ディスク /dev/sdb: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前  フラグ
 1    1049kB  1075MB  1074MB  fat32                   boot, esp
 2    1075MB  2149MB  1074MB  xfs                     msftdata
 3    2149MB  275GB   273GB                           raid

(parted)

Next, clear the msftdata flag on partition 2:

(parted) set
パーティション番号? 2
反転するフラグ? msftdata
新しい状態?  on/[off]?
(parted) print
モデル: VMware Virtual disk (scsi)
ディスク /dev/sdb: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前  フラグ
 1    1049kB  1075MB  1074MB  fat32                   boot, esp
 2    1075MB  2149MB  1074MB  xfs
 3    2149MB  275GB   273GB                           raid

(parted)

If everything looks right, quit with "q" and confirm the changes took effect:

(parted) q
通知: 必要であれば /etc/fstab を更新するのを忘れないようにしてください。

[root@almalinux ~]# parted -l
モデル: VMware Virtual disk (scsi)
ディスク /dev/sda: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前                  フラグ
 1    1049kB  1075MB  1074MB  fat32             EFI System Partition  boot, esp
 2    1075MB  2149MB  1074MB  xfs
 3    2149MB  275GB   273GB                                           raid


モデル: VMware Virtual disk (scsi)
ディスク /dev/sdb: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前  フラグ
 1    1049kB  1075MB  1074MB  fat32                   boot, esp
 2    1075MB  2149MB  1074MB  xfs
 3    2149MB  275GB   273GB                           raid


エラー: /dev/md127: ディスクラベルが認識できません。
モデル: Linux Software RAID Array (md)
ディスク /dev/md127: 273GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: unknown
ディスクフラグ:

[root@almalinux ~]#

16. Speed up the software-mirror resync

The default sync speed limits are far too low by 2024 standards, so the initial sync takes a very long time to complete.

[root@almalinux ~]# cat /proc/sys/dev/raid/speed_limit_max
200000
[root@almalinux ~]# cat /proc/sys/dev/raid/speed_limit_min
1000
[root@almalinux ~]#

You can watch the estimated time to completion with cat /proc/mdstat:

[root@almalinux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb3[1] sda3[0]
      266204160 blocks super 1.2 [2/2] [UU]
      [========>............]  resync = 42.6% (113405440/266204160) finish=25.4min speed=100187K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>
[root@almalinux ~]#
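The finish estimate in that output is simply (remaining 1 KiB blocks) ÷ speed ÷ 60. A small awk check against the resync line above (numbers copied from the output; the 25.4 min estimate reproduces):

```shell
# Recompute mdstat's finish= estimate from the block counts and speed.
echo '[========>............]  resync = 42.6% (113405440/266204160) finish=25.4min speed=100187K/sec' \
| awk '{
    split($5, p, "[(/)]")   # p[2]=blocks done, p[3]=total blocks (1K each)
    split($7, s, "[=K]")    # s[2]=speed in K/sec
    printf "%.1f min\n", (p[3] - p[2]) / s[2] / 60
  }'
```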

To change the limits immediately, write values directly into /proc/sys/dev/raid/speed_limit_max and /proc/sys/dev/raid/speed_limit_min:

[root@almalinux ~]# echo 2000000 > /proc/sys/dev/raid/speed_limit_max
[root@almalinux ~]# echo 2000000 > /proc/sys/dev/raid/speed_limit_min
[root@almalinux ~]# cat /proc/sys/dev/raid/speed_limit_max
2000000
[root@almalinux ~]# cat /proc/sys/dev/raid/speed_limit_min
2000000
[root@almalinux ~]#

Setting max and min to the same value does not seem to cause problems on modern hardware.

To make the change persistent, create /etc/sysctl.d/98-mdadm.conf containing "dev.raid.speed_limit_max = 2000000" and "dev.raid.speed_limit_min = 2000000".

[root@almalinux ~]# vi /etc/sysctl.d/98-mdadm.conf
[root@almalinux ~]# cat /etc/sysctl.d/98-mdadm.conf
dev.raid.speed_limit_max = 2000000
dev.raid.speed_limit_min = 2000000
[root@almalinux ~]#

17. Check the UEFI boot entries

Running "efibootmgr" on Linux shows the boot entries registered in the UEFI firmware:

[root@almalinux ~]# efibootmgr
BootCurrent: 0004
BootOrder: 0004,0000,0001,0002,0003
Boot0000* EFI Virtual disk (0.0)
Boot0001* EFI Virtual disk (1.0)
Boot0002* EFI VMware Virtual SATA CDROM Drive (0.0)
Boot0003* EFI Network
Boot0004* AlmaLinux
[root@almalinux ~]# efibootmgr -v
BootCurrent: 0005
BootOrder: 0004,0000,0001,0002,0003
Boot0000* EFI Virtual disk (0.0)        PciRoot(0x0)/Pci(0x15,0x0)/Pci(0x0,0x0)/SCSI(0,0)
Boot0001* EFI Virtual disk (1.0)        PciRoot(0x0)/Pci(0x15,0x0)/Pci(0x0,0x0)/SCSI(1,0)
Boot0002* EFI VMware Virtual SATA CDROM Drive (0.0)     PciRoot(0x0)/Pci(0x11,0x0)/Pci(0x3,0x0)/Sata(0,0,0)
Boot0003* EFI Network   PciRoot(0x0)/Pci(0x16,0x0)/Pci(0x0,0x0)/MAC(000c29031475,1)
Boot0004* AlmaLinux     HD(1,GPT,00d955a6-b2cc-4659-8bcb-f40bb6484f02,0x800,0x200000)/File(\EFI\almalinux\shimx64.efi)
[root@almalinux ~]#

In this case, the firmware first tries Boot0004, i.e. it loads shimx64.efi from the partition on the first disk.

You can identify which disk an entry refers to with "blkid | grep <UUID>":

[root@almalinux ~]# blkid|grep 00d955a6-b2cc-4659-8bcb-f40bb6484f02
/dev/sda1: UUID="D14E-432E" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="00d955a6-b2cc-4659-8bcb-f40bb6484f02"
[root@almalinux ~]#

Note that on the dd-copied /dev/sdb1, the PARTUUID is a different value:

[root@almalinux ~]# blkid|grep vfat
/dev/sdb1: UUID="D14E-432E" TYPE="vfat" PARTUUID="76f36082-17b8-406b-bc80-269057b944a5"
/dev/sda1: UUID="D14E-432E" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="00d955a6-b2cc-4659-8bcb-f40bb6484f02"
[root@almalinux ~]#
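The boot-entry-to-disk mapping can also be extracted programmatically: the HD(...) element of efibootmgr -v carries the PARTUUID. A sed sketch against the Boot0004 line captured above:

```shell
# Pull the PARTUUID out of an efibootmgr -v boot entry line.
printf '%s\n' 'Boot0004* AlmaLinux     HD(1,GPT,00d955a6-b2cc-4659-8bcb-f40bb6484f02,0x800,0x200000)/File(\EFI\almalinux\shimx64.efi)' \
| sed -n 's/.*HD([0-9]*,GPT,\([0-9a-f-]*\),.*/\1/p'
```

The printed value can then be fed straight into `blkid | grep`.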

Replacing a virtual disk

As an experiment, I recreated sda on the VM above as a fresh disk and booted.

Note on boot behavior

The boot stalled for a while at one point before continuing.

State right after boot

/proc/mdstat after booting:

[root@almalinux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb3[1]
      266204160 blocks super 1.2 [2/1] [_U]
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>
[root@almalinux ~]#

Partition information is shown below. Since the disk was replaced, /dev/sda has none:

[root@almalinux ~]# parted -l
エラー: /dev/sda: ディスクラベルが認識できません。
モデル: VMware Virtual disk (scsi)
ディスク /dev/sda: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: unknown
ディスクフラグ:

モデル: VMware Virtual disk (scsi)
ディスク /dev/sdb: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前  フラグ
 1    1049kB  1075MB  1074MB  fat32                   boot, esp
 2    1075MB  2149MB  1074MB  xfs
 3    2149MB  275GB   273GB                           raid


エラー: /dev/md127: ディスクラベルが認識できません。
モデル: Linux Software RAID Array (md)
ディスク /dev/md127: 273GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: unknown
ディスクフラグ:

[root@almalinux ~]#

AlmaLinux 9 ships sgdisk, so check with that instead:

[root@almalinux ~]# sgdisk --print /dev/sda
Creating new GPT entries in memory.
Disk /dev/sda: 536870912 sectors, 256.0 GiB
Model: Virtual disk
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 17F6C261-2F3C-49AA-8395-E81BC4FA1AAA
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 536870878
Partitions will be aligned on 2048-sector boundaries
Total free space is 536870845 sectors (256.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
[root@almalinux ~]# sgdisk --print /dev/sdb
Disk /dev/sdb: 536870912 sectors, 256.0 GiB
Model: Virtual disk
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 77F9F7E9-068F-4F6B-8CA1-F0C23721890B
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 536870878
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  EF00
   2         2099200         4196351   1024.0 MiB  8300
   3         4196352       536868863   254.0 GiB   FD00
[root@almalinux ~]#

Partitioning the new disk

sgdisk's -R option clones a partition table (UUIDs included), so first run "sgdisk <source disk> -R <destination disk>":

[root@almalinux ~]# sgdisk /dev/sdb -R /dev/sda
The operation has completed successfully.
[root@almalinux ~]# sgdisk --print /dev/sda
Disk /dev/sda: 536870912 sectors, 256.0 GiB
Model: Virtual disk
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 77F9F7E9-068F-4F6B-8CA1-F0C23721890B
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 536870878
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  EF00
   2         2099200         4196351   1024.0 MiB  8300
   3         4196352       536868863   254.0 GiB   FD00
[root@almalinux ~]#

At this point the UUIDs are identical, as shown below:

[root@almalinux ~]# blkid /dev/sd*
/dev/sda: PTUUID="77f9f7e9-068f-4f6b-8ca1-f0c23721890b" PTTYPE="gpt"
/dev/sda1: PARTUUID="76f36082-17b8-406b-bc80-269057b944a5"
/dev/sda2: PARTUUID="34de2199-82b1-49a2-a09f-d58cd9598563"
/dev/sda3: PARTUUID="11caecc2-105b-4b50-a5ab-fbbe312867f1"
/dev/sdb: PTUUID="77f9f7e9-068f-4f6b-8ca1-f0c23721890b" PTTYPE="gpt"
/dev/sdb1: UUID="D14E-432E" TYPE="vfat" PARTUUID="76f36082-17b8-406b-bc80-269057b944a5"
/dev/sdb2: UUID="7353a543-10d3-4ff5-8db1-9405cd38a5fa" TYPE="xfs" PARTUUID="34de2199-82b1-49a2-a09f-d58cd9598563"
/dev/sdb3: UUID="14892d7b-9a8d-c7c3-6dc5-26fd1435d076" UUID_SUB="2103c67d-0ab9-996f-d170-1b51ff955622" LABEL="almalinux:pv00" TYPE="linux_raid_member" PARTUUID="11caecc2-105b-4b50-a5ab-fbbe312867f1"
[root@almalinux ~]#

To give the new disk its own identity, run "sgdisk -G <new disk>" and confirm the UUIDs have been regenerated:

[root@almalinux ~]# sgdisk -G /dev/sda
The operation has completed successfully.
[root@almalinux ~]# blkid /dev/sd*
/dev/sda: PTUUID="290fa63c-919c-488e-a7ca-96e5a6cf6077" PTTYPE="gpt"
/dev/sda1: PARTUUID="bec0b916-9e2d-4f0d-82e5-34d981e4ead6"
/dev/sda2: PARTUUID="eccf0943-1b37-46e1-9697-4e59c92c5cf2"
/dev/sda3: PARTUUID="cebc1012-90d8-4e0a-aecb-49a5e4c5a8ea"
/dev/sdb: PTUUID="77f9f7e9-068f-4f6b-8ca1-f0c23721890b" PTTYPE="gpt"
/dev/sdb1: UUID="D14E-432E" TYPE="vfat" PARTUUID="76f36082-17b8-406b-bc80-269057b944a5"
/dev/sdb2: UUID="7353a543-10d3-4ff5-8db1-9405cd38a5fa" TYPE="xfs" PARTUUID="34de2199-82b1-49a2-a09f-d58cd9598563"
/dev/sdb3: UUID="14892d7b-9a8d-c7c3-6dc5-26fd1435d076" UUID_SUB="2103c67d-0ab9-996f-d170-1b51ff955622" LABEL="almalinux:pv00" TYPE="linux_raid_member" PARTUUID="11caecc2-105b-4b50-a5ab-fbbe312867f1"
[root@almalinux ~]#

Copy the contents of /boot and /boot/efi onto the new disk

Partitions 1 and 2 of the new disk are empty, so copy them wholesale with dd:

[root@almalinux ~]# dd if=/dev/sdb1 of=/dev/sda1 bs=10240
104857+1 レコード入力
104857+1 レコード出力
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 26.0448 s, 41.2 MB/s
[root@almalinux ~]# dd if=/dev/sdb2 of=/dev/sda2 bs=10240
104857+1 レコード入力
104857+1 レコード出力
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 26.7893 s, 40.1 MB/s
[root@almalinux ~]#

Confirm with blkid that filesystem UUIDs and the like now appear on the new disk:

[root@almalinux ~]# blkid /dev/sd*
/dev/sda: PTUUID="290fa63c-919c-488e-a7ca-96e5a6cf6077" PTTYPE="gpt"
/dev/sda1: UUID="D14E-432E" TYPE="vfat" PARTUUID="bec0b916-9e2d-4f0d-82e5-34d981e4ead6"
/dev/sda2: UUID="7353a543-10d3-4ff5-8db1-9405cd38a5fa" TYPE="xfs" PARTUUID="eccf0943-1b37-46e1-9697-4e59c92c5cf2"
/dev/sda3: PARTUUID="cebc1012-90d8-4e0a-aecb-49a5e4c5a8ea"
/dev/sdb: PTUUID="77f9f7e9-068f-4f6b-8ca1-f0c23721890b" PTTYPE="gpt"
/dev/sdb1: UUID="D14E-432E" TYPE="vfat" PARTUUID="76f36082-17b8-406b-bc80-269057b944a5"
/dev/sdb2: UUID="7353a543-10d3-4ff5-8db1-9405cd38a5fa" TYPE="xfs" PARTUUID="34de2199-82b1-49a2-a09f-d58cd9598563"
/dev/sdb3: UUID="14892d7b-9a8d-c7c3-6dc5-26fd1435d076" UUID_SUB="2103c67d-0ab9-996f-d170-1b51ff955622" LABEL="almalinux:pv00" TYPE="linux_raid_member" PARTUUID="11caecc2-105b-4b50-a5ab-fbbe312867f1"
[root@almalinux ~]#

Again, duplicated PARTUUIDs would normally be a problem, but since Linux only mounts one of the duplicates, I treat this as acceptable and ignore it.

Re-enable mirroring

Confirm that partition 3 of the new disk is typed as Linux RAID:

[root@almalinux ~]# fdisk -l /dev/sda
ディスク /dev/sda: 256 GiB, 274877906944 バイト, 536870912 セクタ
ディスク型式: Virtual disk
単位: セクタ (1 * 512 = 512 バイト)
セクタサイズ (論理 / 物理): 512 バイト / 512 バイト
I/O サイズ (最小 / 推奨): 512 バイト / 512 バイト
ディスクラベルのタイプ: gpt
ディスク識別子: 290FA63C-919C-488E-A7CA-96E5A6CF6077

デバイス   開始位置  終了位置    セクタ サイズ タイプ
/dev/sda1      2048   2099199   2097152     1G EFI システム
/dev/sda2   2099200   4196351   2097152     1G Linux ファイルシステム
/dev/sda3   4196352 536868863 532672512   254G Linux RAID
[root@almalinux ~]#

Add the partition to the array with mdadm:

[root@almalinux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb3[1]
      266204160 blocks super 1.2 [2/1] [_U]
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>
[root@almalinux ~]# mdadm /dev/md127 -a /dev/sda3
mdadm: added /dev/sda3
[root@almalinux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda3[2] sdb3[1]
      266204160 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (88640/266204160) finish=100.0min speed=44320K/sec
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>
[root@almalinux ~]#

Confirm that the resync starts as soon as the partition is added.

What happens if you reboot before the sync finishes?

Let's reboot right in the middle of the initial sync and find out.

[root@almalinux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda3[2] sdb3[1]
      266204160 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  1.5% (4163008/266204160) finish=96.7min speed=45128K/sec
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>
[root@almalinux ~]# reboot

Rebooting...

It came up cleanly this time, without the odd wait seen on the first boot.

Check the state:

[root@almalinux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb3[1] sda3[2]
      266204160 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  2.4% (6543616/266204160) finish=105.6min speed=40966K/sec
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>
[root@almalinux ~]# df -h
ファイルシス                         サイズ  使用  残り 使用% マウント位置
devtmpfs                               4.0M     0  4.0M    0% /dev
tmpfs                                  2.8G     0  2.8G    0% /dev/shm
tmpfs                                  1.2G  8.8M  1.1G    1% /run
efivarfs                               256K   48K  204K   19% /sys/firmware/efi/efivars
/dev/mapper/almalinux_almalinux-root    70G  2.1G   68G    3% /
/dev/mapper/almalinux_almalinux-home    25G  204M   24G    1% /home
/dev/sda2                              960M  225M  736M   24% /boot
/dev/sdb1                             1022M  7.1M 1015M    1% /boot/efi
tmpfs                                  567M     0  567M    0% /run/user/0
[root@almalinux ~]# parted -l
モデル: VMware Virtual disk (scsi)
ディスク /dev/sda: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前  フラグ
 1    1049kB  1075MB  1074MB  fat32                   boot, esp
 2    1075MB  2149MB  1074MB  xfs
 3    2149MB  275GB   273GB                           raid


モデル: VMware Virtual disk (scsi)
ディスク /dev/sdb: 275GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: gpt
ディスクフラグ:

番号  開始    終了    サイズ  ファイルシステム  名前  フラグ
 1    1049kB  1075MB  1074MB  fat32                   boot, esp
 2    1075MB  2149MB  1074MB  xfs
 3    2149MB  275GB   273GB                           raid


エラー: /dev/md127: ディスクラベルが認識できません。
モデル: Linux Software RAID Array (md)
ディスク /dev/md127: 273GB
セクタサイズ (論理/物理): 512B/512B
パーティションテーブル: unknown
ディスクフラグ:

[root@almalinux ~]#

The resync resumed without issue.

We end up in the slightly awkward situation where /boot is mounted from sda and /boot/efi from sdb, but I won't worry about it.

The caveat in this state is that whenever a kernel or grub2 update happens, you must copy the data over to the unmounted partitions yourself.

Incidentally, after one more reboot, both /boot and /boot/efi were mounted from sdb:

[root@almalinux ~]# df -h
ファイルシス                         サイズ  使用  残り 使用% マウント位置
devtmpfs                               4.0M     0  4.0M    0% /dev
tmpfs                                  2.8G     0  2.8G    0% /dev/shm
tmpfs                                  1.2G  8.8M  1.1G    1% /run
efivarfs                               256K   48K  204K   19% /sys/firmware/efi/efivars
/dev/mapper/almalinux_almalinux-root    70G  2.1G   68G    3% /
/dev/mapper/almalinux_almalinux-home    25G  204M   24G    1% /home
/dev/sdb2                              960M  225M  736M   24% /boot
/dev/sdb1                             1022M  7.1M 1015M    1% /boot/efi
tmpfs                                  567M     0  567M    0% /run/user/0
[root@almalinux ~]#

Pulling the first disk, booting, then putting it back

This time on real hardware rather than a VM: an ACEMAGIC PC S1 with /dev/nvme0n1 and /dev/sda configured as a mirror.

As a test, I removed the NVMe drive, confirmed the machine still booted, then reinstalled the drive and rebooted.

/proc/mdstat right after the reboot:

[osakanataro@niselog ~]$ cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda3[1]
      497876992 blocks super 1.2 [2/1] [_U]
      bitmap: 2/4 pages [8KB], 65536KB chunk

unused devices: <none>
[osakanataro@niselog ~]$

nvme0n1 had been dropped from the array.

Since its partitions were still intact this time, I re-added the disk by pointing mdadm at the existing partition:

[osakanataro@niselog ~]$ sudo mdadm /dev/md127 -a /dev/nvme0n1p3
mdadm: re-added /dev/nvme0n1p3
[osakanataro@niselog ~]$ cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 nvme0n1p3[0] sda3[1]
      497876992 blocks super 1.2 [2/1] [_U]
      [=====>...............]  recovery = 29.0% (144735232/497876992) finish=9.2min speed=638464K/sec
      bitmap: 2/4 pages [8KB], 65536KB chunk

unused devices: <none>
[osakanataro@niselog ~]$

The contents were mostly unchanged, so the sync finished almost immediately:

[osakanataro@niselog ~]$ cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 nvme0n1p3[0] sda3[1]
      497876992 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

unused devices: <none>
[osakanataro@niselog ~]$
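A quick health check worth dropping into monitoring: grep /proc/mdstat for a degraded pattern like [_U] or [U_]. Sketched on a captured sample so the logic can be verified; on the live host, replace the variable with the output of `cat /proc/mdstat`:

```shell
# Captured sample of a healthy two-way mirror (replace with real mdstat).
mdstat_sample='md127 : active raid1 nvme0n1p3[0] sda3[1]
      497876992 blocks super 1.2 [2/2] [UU]'
# [_U], [U_], [U_U], ... all indicate a missing mirror member.
if printf '%s\n' "$mdstat_sample" | grep -q '\[U*_U*\]'; then
  echo "degraded"
else
  echo "all mirrors healthy"
fi
```

The same pattern matches larger arrays too, since it only looks for an underscore inside the member-status brackets.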