Joining an old ONTAP to a samba AD server after the CVE-2022-38023 hardening

After the CVE-2022-38023-related hardening in July 2023, old ONTAP simulators can no longer join an Active Directory environment built with samba.

I wondered whether changing the samba-side settings to accept connections that are problematic from a security standpoint would be enough, so I gave it a try.

The samba write-up on the CVE-2022-38023 changes, "RC4/HMAC-MD5 NetLogon Secure Channel is weak and should be avoided", describes what was changed from the previous defaults, so I figured doing the reverse should work and set the following:

[global]
        <snip>
        allow nt4 crypto = yes
        reject md5 clients = no
        server reject md5 schannel = no
        server schannel = yes
        server schannel require seal = no
 <snip>

In short, the approach is simply to stop rejecting (set the reject options to no) and to drop the requirements (set the require options to no).

Judging from the "Cryptographic configuration" section of the documentation, this looked about right.

Needless to say, this is only for cases where you have to run old versions; it is not a configuration to use in normal production.

After making these changes, restart samba.
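As a minimal sketch (assuming a source-built samba under /usr/local/samba, which matches the log path used later in this post; adjust the paths and service name to your environment):

# check that the modified smb.conf still parses cleanly
/usr/local/samba/bin/testparm -s

# restart the AD DC; with a distribution package this is typically:
systemctl restart samba-ad-dc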

On the NetApp SVM running CIFS, the vserver cifs security options "-smb2-enabled-for-dc-connections true" and "-is-aes-encryption-enabled true" are set.
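For reference, a sketch of the ONTAP side (assuming an SVM named svm91; the -is-aes-encryption-enabled variant appears verbatim later in this post):

vserver cifs security modify -vserver svm91 -smb2-enabled-for-dc-connections true
vserver cifs security modify -vserver svm91 -is-aes-encryption-enabled true
vserver cifs security show -vserver svm91 -fields smb2-enabled-for-dc-connections,is-aes-encryption-enabled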


Error message collection

ONTAP 9.1 simulator (no patches)

This version does not have the -smb2-enabled-for-dc-connections and -is-aes-encryption-enabled options, so the error messages are quite different.

ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.

Enter the user name: administrator

Enter the password:

Warning: An account by this name already exists in Active Directory at
         CN=SVM91,CN=Computers,DC=adosakana,DC=local.
         If there is an existing DNS entry for the name SVM91, it must be
         removed. Data ONTAP cannot remove such an entry.
         Use an external tool to remove it after this command completes.
         Ok to reuse this account? {y|n}: y

Error: command failed: Failed to create CIFS server SVM91. Reason:
       create_with_lug: RPC: Unable to receive; errno = Connection reset by
       peer; netid=tcp fd=17 TO=600.0s TT=0.119s O=224b I=0b CN=113/3 VSID=-3
       127.0.0.1:766.

ontap91::>

ONTAP 9.1P22 simulator

ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.

Enter the user name: administrator

Enter the password:

Error: Machine account creation procedure failed
  [    56] Loaded the preliminary configuration.
  [    92] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   107] Successfully connected to ip 172.17.44.49, port 389 using
           TCP
  [   110] Unable to start TLS: Connect error
  [   110]   Additional info:
  [   110] Unable to connect to LDAP (Active Directory) service on
           sambaad.ADOSAKANA.LOCAL
**[   110] FAILURE: Unable to make a connection (LDAP (Active
**         Directory):ADOSAKANA.LOCAL), result: 7652

Error: command failed: Failed to create the Active Directory machine account
       "SVM91". Reason: LDAP Error: Cannot establish a connection to the
       server.

ontap91::>
ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.

Enter the user name: administrator

Enter the password:

Error: Machine account creation procedure failed
  [    61] Loaded the preliminary configuration.
  [    99] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   168] Successfully connected to ip 172.17.44.49, port 389 using
           TCP
  [   168] Entry for host-address: 172.17.44.49 not found in the
           current source: FILES. Ignoring and trying next available
           source
  [   172] Source: DNS unavailable. Entry for
           host-address:172.17.44.49 not found in any of the
           available sources
**[   181] FAILURE: Unable to SASL bind to LDAP server using GSSAPI:
**         Local error
  [   181] Additional info: SASL(-1): generic failure: GSSAPI Error:
           Unspecified GSS failure.  Minor code may provide more
           information (Cannot determine realm for numeric host
           address)
  [   181] Unable to connect to LDAP (Active Directory) service on
           sambaad.ADOSAKANA.LOCAL (Error: Local error)
  [   181] Unable to make a connection (LDAP (Active
           Directory):ADOSAKANA.LOCAL), result: 7643

Error: command failed: Failed to create the Active Directory machine account
       "SVM91". Reason: LDAP Error: Local error occurred.

ontap91::>
ontap91::> vserver cifs create -cifs-server svm91 -domain ADOSAKANA.LOCAL

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.

Enter the user name: administrator

Enter the password:

Warning: An account by this name already exists in Active Directory at
         CN=SVM91,CN=Computers,DC=adosakana,DC=local.
         If there is an existing DNS entry for the name SVM91, it must be
         removed. Data ONTAP cannot remove such an entry.
         Use an external tool to remove it after this command completes.
         Ok to reuse this account? {y|n}: y

Error: Machine account creation procedure failed
  [    13] Loaded the preliminary configuration.
  [    92] Created a machine account in the domain
  [    93] SID to name translations of Domain Users and Admins
           completed successfully
  [   100] Modified account 'cn=SVM91,CN=Computers,dc=VM2,dc=ADOSAKANA
           dc=LOCAL'
  [   101] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   113] Successfully connected to ip 172.17.44.49, port 464 using
           TCP
  [   242] Kerberos password set for 'SVM91$@ADOSAKANA.LOCAL' succeeded
  [   242] Set initial account password
  [   277] Successfully connected to ip 172.17.44.49, port 445 using
           TCP
  [   312] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   346] Successfully authenticated with DC
           sambaad.ADOSAKANA.LOCAL
  [   366] Unable to connect to NetLogon service on
           sambaad.ADOSAKANA.LOCAL (Error:
           RESULT_ERROR_GENERAL_FAILURE)
**[   366] FAILURE: Unable to make a connection
**         (NetLogon:ADOSAKANA.LOCAL), result: 3
  [   366] Unable to make a NetLogon connection to
           sambaad.ADOSAKANA.LOCAL using the new machine account

Error: command failed: Failed to create the Active Directory machine account
       "SVM91". Reason: general failure.

ontap91::>

Dealing with the WARNING that appeared after casually creating Ceph storage on Proxmox 8.1.4

I installed Proxmox 8.1.4 on three servers and set up Ceph storage in a fairly casual way, just to try it out.

I basically followed "Deploy Hyper-Converged Ceph Cluster". When creating the CephFS, the procedure says to run "pveceph fs create --pg_num 128 --add-storage", but I wondered what would happen with just "pveceph fs create", and running it that way produced a warning.

Note: as it turned out, the default value when pg_num is not specified is also 128.
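For reference, a sketch of creating it with an explicit, smaller pg_num from the start (whether 32 is an appropriate value depends on the cluster; here it is simply what the autoscaler recommends in the warning below):

pveceph fs create --pg_num 32 --add-storage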

HEALTH_WARN: 1 pools have too many placement groups
Pool storagepool has 128 placement groups, should have 32

Searching around, I found a somewhat old thread (from 2020, the Proxmox 6.3 era) on the Proxmox forum: "CEPH pools have too many placement groups".

Check the current settings with "pveceph pool ls":

root@zstack137:~# ceph -v
ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable)
root@zstack137:~# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve)
root@zstack137:~# pveceph pool ls
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.08950029648258e-06 │    1388544 │
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │ 4.41906962578287e-07 │     198610 │
│ storagepool     │    3 │        2 │    128 │             │             32 │ warn              │                          │                           │ replicated_rule │   0.0184257291257381 │ 8436679796 │
root@zstack137:~#

Next, "ceph osd pool autoscale-status":

root@zstack137:~# ceph osd pool autoscale-status
POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             452.0k                3.0        449.9G  0.0000                                  1.0       1              on         False
cephfs_data          0                 3.0        449.9G  0.0000                                  1.0      32              on         False
cephfs_metadata  66203                 3.0        449.9G  0.0000                                  4.0      32              on         False
storagepool       2681M                3.0        449.9G  0.0175                                  1.0     128              warn       False
root@zstack137:~#

That reminded me that something similar came up when I test-built Ceph long ago; checking, I had left a memo in 2018, "Checking the number of Placement Groups per Ceph OSD".

Running "ceph health" showed that the situation was different this time.

root@zstack137:~# ceph health
HEALTH_WARN 1 pools have too many placement groups

root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
    Pool storagepool has 128 placement groups, should have 32
root@zstack137:~#
root@zstack137:~# ceph -s
  cluster:
    id:     9e085d6a-77f3-41f1-8f6d-71fadc9c011b
    health: HEALTH_WARN
            1 pools have too many placement groups

  services:
    mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 3h)
    mgr: zstack136(active, since 3h), standbys: zstack135
    mds: 1/1 daemons up, 1 standby
    osd: 9 osds: 9 up (since 3h), 9 in (since 3d)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 193 pgs
    objects: 716 objects, 2.7 GiB
    usage:   8.3 GiB used, 442 GiB / 450 GiB avail
    pgs:     193 active+clean

root@zstack137:~#

Even so, I checked whether the following command, which reformats the output of "ceph pg dump", still works.

ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
 /^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
 up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
 printf("\n");
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
   for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'

It ran without problems.

root@zstack137:~# ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
 /^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
 up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
 printf("\n");
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
   for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
dumped all

pool :  3       2       1       4       | SUM
------------------------------------------------
osd.3   4       5       1       13      | 23
osd.8   4       6       0       12      | 22
osd.6   2       4       0       15      | 21
osd.5   6       4       0       16      | 26
osd.2   3       3       0       15      | 21
osd.1   4       3       0       10      | 17
osd.4   1       1       0       16      | 18
osd.0   5       2       0       10      | 17
osd.7   3       4       0       21      | 28
------------------------------------------------
SUM :   32      32      1       128     |
root@zstack137:~#

Is the difference between pools too large?

There is a Chinese page, "ceph使用问题积累", that describes how to deal with both "HEALTH_WARN: pools have too many placement groups" and "HEALTH_WARN: mons are allowing insecure global_id reclaim".

For the former, citing the Proxmox forum thread mentioned above, it says to run "ceph mgr module disable pg_autoscaler" to disable the autoscale feature.

For the latter, it gives "ceph config set mon auth_allow_insecure_global_id_reclaim false".

Before changing any module settings, check the current state with "ceph mgr module ls":

root@zstack137:~# ceph mgr module ls
MODULE
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
iostat             on
nfs                on
restful            on
alerts             -
influx             -
insights           -
localpool          -
mirroring          -
osd_perf_query     -
osd_support        -
prometheus         -
selftest           -
snap_schedule      -
stats              -
telegraf           -
test_orchestrator  -
zabbix             -
root@zstack137:~#

The SUSE Enterprise Storage 7 Documentation, Administration and Operations Guide, "12 Determine the cluster state", lists various commands for checking the cluster state.

root@zstack137:~# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85
TOTAL  450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85

--- POOLS ---
POOL             ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr              1    1  449 KiB        2  1.3 MiB      0    140 GiB
cephfs_data       2   32      0 B        0      0 B      0    140 GiB
cephfs_metadata   3   32   35 KiB       22  194 KiB      0    140 GiB
storagepool       4  128  2.6 GiB      692  7.9 GiB   1.84    140 GiB
root@zstack137:~#  ceph df detail
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85
TOTAL  450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85

--- POOLS ---
POOL             ID  PGS   STORED   (DATA)   (OMAP)  OBJECTS     USED   (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
.mgr              1    1  449 KiB  449 KiB      0 B        2  1.3 MiB  1.3 MiB      0 B      0    140 GiB            N/A          N/A    N/A         0 B          0 B
cephfs_data       2   32      0 B      0 B      0 B        0      0 B      0 B      0 B      0    140 GiB            N/A          N/A    N/A         0 B          0 B
cephfs_metadata   3   32   35 KiB   18 KiB   17 KiB       22  194 KiB  144 KiB   50 KiB      0    140 GiB            N/A          N/A    N/A         0 B          0 B
storagepool       4  128  2.6 GiB  2.6 GiB  3.0 KiB      692  7.9 GiB  7.9 GiB  9.1 KiB   1.84    140 GiB            N/A          N/A    N/A         0 B          0 B
root@zstack137:~# 

It describes the following as the remedy for TOO_MANY_PGS:

TOO_MANY_PGS
The number of PGs in use is above the configurable threshold of mon_pg_warn_max_per_osd PGs per OSD. This can lead to higher memory usage for OSD daemons, slower peering after cluster state changes (for example OSD restarts, additions, or removals), and higher load on the Ceph Managers and Ceph Monitors.

While the pg_num value for existing pools cannot be reduced, the pgp_num value can. This effectively co-locates some PGs on the same sets of OSDs, mitigating some of the negative impacts described above. The pgp_num value can be adjusted with:
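The command the quote is referring to is presumably of this form, which is what I end up running further below:

ceph osd pool set <pool-name> pgp_num <value>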

Looking at the Proxmox "Deploy Hyper-Converged Ceph Cluster" documentation, it seems that PG Autoscale Mode being set to warn is the standard behavior there.

The Ceph automated scaling documentation says you can set the value with "ceph config set global mon_target_pg_per_osd 100", but it does not say how to check the current value.

I worked out that the syntax is "ceph config get <who> <key>", but I could not figure out what to put for who (it was not global).

Running "ceph config dump" showed the settings that had presumably been changed from their defaults, and mon appeared as a who value. So I guessed that the who for mon_target_pg_per_osd would be mon, tried it, and got what looked like the current value.

root@zstack137:~# ceph config dump
WHO  MASK  LEVEL     OPTION                                 VALUE  RO
mon        advanced  auth_allow_insecure_global_id_reclaim  false
root@zstack137:~# ceph config get mon  mon_target_pg_per_osd
100
root@zstack137:~#

As a first step, I tried "ceph mgr module disable pg_autoscaler", but it could not be changed.

root@zstack137:~# ceph mgr module disable pg_autoscaler
Error EINVAL: module 'pg_autoscaler' cannot be disabled (always-on)
root@zstack137:~#

So instead, run "ceph osd pool set storagepool pgp_num 32" to change pgp_num from 128 to 32.

root@zstack137:~# ceph osd pool stats
pool .mgr id 1
  nothing is going on

pool cephfs_data id 2
  nothing is going on

pool cephfs_metadata id 3
  nothing is going on

pool storagepool id 4
  nothing is going on

root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 128
root@zstack137:~# ceph osd pool set storagepool pgp_num 32
set pool 4 pgp_num to 32
root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 125
root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 119
root@zstack137:~#

It appears to be changed gradually.

root@zstack137:~# ceph -s
  cluster:
    id:     9e085d6a-77f3-41f1-8f6d-71fadc9c011b
    health: HEALTH_WARN
            Reduced data availability: 1 pg peering
            1 pools have too many placement groups
            1 pools have pg_num > pgp_num

  services:
    mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 5h)
    mgr: zstack136(active, since 5h), standbys: zstack135
    mds: 1/1 daemons up, 1 standby
    osd: 9 osds: 9 up (since 5h), 9 in (since 3d); 2 remapped pgs

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 193 pgs
    objects: 716 objects, 2.7 GiB
    usage:   8.4 GiB used, 442 GiB / 450 GiB avail
    pgs:     0.518% pgs not active
             16/2148 objects misplaced (0.745%)
             190 active+clean
             2   active+recovering
             1   remapped+peering

  io:
    recovery: 2.0 MiB/s, 0 objects/s

root@zstack137:~# ceph health
HEALTH_WARN Reduced data availability: 1 pg peering; 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
    Pool storagepool has 128 placement groups, should have 32
[WRN] SMALLER_PGP_NUM: 1 pools have pg_num > pgp_num
    pool storagepool pg_num 128 > pgp_num 32
root@zstack137:~#

After some time had passed:

root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
    Pool storagepool has 128 placement groups, should have 32
[WRN] SMALLER_PGP_NUM: 1 pools have pg_num > pgp_num
    pool storagepool pg_num 128 > pgp_num 32
root@zstack137:~# ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
 /^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
 up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
 printf("\n");
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
   for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
dumped all

pool :  3       2       1       4       | SUM
------------------------------------------------
osd.3   4       5       1       15      | 25
osd.8   4       6       0       16      | 26
osd.6   2       4       0       16      | 22
osd.5   6       4       0       4       | 14
osd.2   3       3       0       11      | 17
osd.1   4       3       0       13      | 20
osd.4   1       1       0       17      | 19
osd.0   5       2       0       20      | 27
osd.7   3       4       0       16      | 23
------------------------------------------------
SUM :   32      32      1       128     |
root@zstack137:~# ceph osd pool autoscale-status
POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             452.0k                3.0        449.9G  0.0000                                  1.0       1              on         False
cephfs_data          0                 3.0        449.9G  0.0000                                  1.0      32              on         False
cephfs_metadata  66203                 3.0        449.9G  0.0000                                  4.0      32              on         False
storagepool       2681M                3.0        449.9G  0.0175                                  1.0     128              warn       False
root@zstack137:~# pveceph pool ls
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.09735719383752e-06 │    1388544 │
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │ 4.43030785390874e-07 │     198610 │
│ storagepool     │    3 │        2 │    128 │             │             32 │ warn              │                          │                           │ replicated_rule │    0.018471721559763 │ 8436679796 │
root@zstack137:~#

Can pg_num be reduced?

root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 128
root@zstack137:~# ceph osd pool set storagepool pg_num 32
set pool 4 pg_num to 32
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 128
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 124
root@zstack137:~#

It is decreasing gradually.

The status changed to HEALTH_OK.

root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 119
root@zstack137:~# ceph health detail
HEALTH_OK
root@zstack137:~# pveceph pool ls
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.10063592223742e-06 │    1388544 │
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │ 4.43499772018185e-07 │     198610 │
│ storagepool     │    3 │        2 │    117 │             │             32 │ warn              │                          │                           │ replicated_rule │   0.0184909123927355 │ 8436679796 │
root@zstack137:~#

The PG_NUM reported by "ceph osd pool autoscale-status", on the other hand, reflected the change immediately.

root@zstack137:~# ceph osd pool autoscale-status
POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             452.0k                3.0        449.9G  0.0000                                  1.0       1              on         False
cephfs_data          0                 3.0        449.9G  0.0000                                  1.0      32              on         False
cephfs_metadata  66203                 3.0        449.9G  0.0000                                  4.0      32              on         False
storagepool       2705M                3.0        449.9G  0.0176                                  1.0      32              warn       False
root@zstack137:~#

While this was running it sometimes went to HEALTH_WARN for a while, but it returned to HEALTH_OK fairly quickly.

root@zstack137:~# ceph health detail
HEALTH_WARN Reduced data availability: 2 pgs inactive, 2 pgs peering
[WRN] PG_AVAILABILITY: Reduced data availability: 2 pgs inactive, 2 pgs peering
    pg 4.22 is stuck peering for 2d, current state peering, last acting [6,5,2]
    pg 4.62 is stuck peering for 6h, current state peering, last acting [6,5,2]
root@zstack137:~#

After some time, once the change had finished, I took another look at the state.

root@zstack137:~# ceph health detail
HEALTH_OK
root@zstack137:~# pveceph pool ls
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.13595910483855e-06 │    1388544 │
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │  4.4855224246021e-07 │     198610 │
│ storagepool     │    3 │        2 │     32 │             │             32 │ warn              │                          │                           │ replicated_rule │   0.0186976287513971 │ 8436679796 │
root@zstack137:~# ceph -s
  cluster:
    id:     9e085d6a-77f3-41f1-8f6d-71fadc9c011b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 6h)
    mgr: zstack136(active, since 6h), standbys: zstack135
    mds: 1/1 daemons up, 1 standby
    osd: 9 osds: 9 up (since 6h), 9 in (since 3d)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 716 objects, 2.7 GiB
    usage:   8.6 GiB used, 441 GiB / 450 GiB avail
    pgs:     97 active+clean

root@zstack137:~# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    450 GiB  441 GiB  8.7 GiB   8.7 GiB       1.94
TOTAL  450 GiB  441 GiB  8.7 GiB   8.7 GiB       1.94

--- POOLS ---
POOL             ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr              1    1  449 KiB        2  1.3 MiB      0    137 GiB
cephfs_data       2   32      0 B        0      0 B      0    137 GiB
cephfs_metadata   3   32   35 KiB       22  194 KiB      0    137 GiB
storagepool       4   32  2.7 GiB      692  8.0 GiB   1.89    137 GiB
root@zstack137:~#

So it seems the issue has been dealt with, more or less?

Notes on migrating a file server with Windows Server

When migrating a Windows Server, you need to check the current configuration.

These are notes on the items that are probably worth checking at that point.

1) Network settings

Since the server will presumably join Active Directory, much of this will sort itself out, but...

It is worth checking things like the DNS server order.

"ipconfig /all"
"netsh interface ipv4 show config"

Check whether NIC teaming is configured; if it is, check the teaming mode, the load balancing mode, and whether any standby adapter is set.
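A PowerShell sketch for checking this from the command line (these are the built-in LBFO teaming cmdlets; output property names may vary slightly by OS version):

Get-NetLbfoTeam          # shows TeamingMode and LoadBalancingAlgorithm per team
Get-NetLbfoTeamMember    # shows member NICs and whether each is Active or Standby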

2) Windows Firewall settings

This may also be managed from the Active Directory side, but check it anyway.

Check the rules under "Inbound Rules" that look like they were added.

Using commands:

"netsh advfirewall show allprofiles" to check the profiles
"netsh advfirewall show currentprofile" to check which profile is currently active
"netsh advfirewall firewall show rule name=all" to dump all rules
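A PowerShell sketch of roughly the same checks (assuming you mainly care about enabled inbound rules):

Get-NetFirewallProfile | Format-Table Name,Enabled
Get-NetFirewallRule -Direction Inbound -Enabled True | Format-Table DisplayName,Profile,Action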

3) Share settings

Open Computer Management and check the shares:
 check the contents of [System Tools]-[Shared Folders]-[Shares] in "Computer Management"
 open the properties of each share and check "Share Permissions" and "Security"

"net share" to list the shares
"net share <sharename>" to check the details of each share
"Get-SmbShare" to list the shares (PowerShell)
"Get-SmbShare|Get-SmbShareAccess" to show the access permissions of each share
"Get-SmbShare|Format-List" to show the share list in detail
"Get-SmbShare|Get-SmbShareAccess|Format-List" to show the access permissions in detail

To create a share from the command line:

net share "share"="D:\share"  /GRANT:"BUILTIN\Administrators,FULL" /GRANT:"Everyone,FULL"

4) Quota settings

Note that there are two kinds of quota settings.

Open File Server Resource Manager and check the contents of "Quotas":
 the target directories and the contents of the applied "quota templates"
  (limit value, hard/soft, notification settings)

"dirquota template list" to list the templates
"dirquota template list /list-n" to list the templates including notification settings
"dirquota autoquota list" to check the auto apply quotas

Check the quota settings in the drive properties:
 open the properties of each drive and check the contents of the Quota tab

"fsutil volume list" to check the drive names
"fsutil quota query <drive>" to check the quota of a drive

To configure from the command line:
export the templates on the old server, import them on the new server, and then configure each quota.

dirquota template export /file:<filename>
dirquota template import /file:<filename>
dirquota quota add /Path:"D:\share" /sourcetemplate:"10 GB 制限"

5) Shadow copy settings

Check the shadow copy settings in the drive properties:
 open the properties of each drive and check the contents of the Shadow Copies tab

When using commands, note that the schedule is handled by Task Scheduler, so you will also need to check it with the schtasks command.

"vssadmin list shadows" to check the shadow copies that currently exist
"vssadmin list shadowstorage" to check where the shadow copies are stored
"vssadmin list volumes" to check the volume IDs
"schtasks /query /xml" to dump all tasks as XML and check the ones whose names start with "ShadowCopyVolume"
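To pull out just the shadow copy schedule tasks, a rough sketch:

schtasks /query /fo list | findstr /i "ShadowCopyVolume"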

6) Task Scheduler settings

Open Task Scheduler and check the settings:
 check the list of tasks registered in the Task Scheduler Library
 check the details of each task

"schtasks /query /fo list" to show the task list
"schtasks /query /xml" to show the task list in XML format
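To carry an individual task over to the new server, one option is to export and re-register its XML definition (a sketch; "\MyTask" is a hypothetical task name, and /create may additionally require /ru and /rp for the run-as account):

schtasks /query /tn "\MyTask" /xml > MyTask.xml
schtasks /create /tn "\MyTask" /xml MyTask.xml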

File synchronization with robocopy

robocopy options

"/mir /copyall /R:1 /W:1" works in most cases.

For EFSRAW support, use "/COPYALL /MIR /B /EFSRAW /R:1 /W:1".

If auditing is configured on the Windows server and the destination does not support auditing, maybe "/E /B /copy:DATSO /R:1 /W:1"?

You can also add the "/NP" option to suppress progress output.

One approach is to run with "/B /E" instead of "/MIR" at first, so that a mistake in the destination path does not lead to accidental deletion.

This is because /MIR and /PURGE delete files that do not exist on the source, so if you mistakenly point the destination at a directory that already contains data, that data will be deleted.
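One way to reduce that risk is a dry run first (a sketch; /L is robocopy's list-only mode, so nothing is copied or deleted and you can review the log before the real run):

robocopy \\old-FS\share \\new-FS\share /mir /copyall /R:1 /W:1 /L /LOG:D:\LOGS\share-dryrun.txt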

Example batch file for running robocopy (note: LOGDATE may not come out as expected if this is run between 0:00 and 9:59):

@echo off
set TIME2=%TIME: =0%
set LOGDATE=%DATE:~0,4%%DATE:~5,2%%DATE:~8,2%-%TIME2:~0,2%%TIME2:~3,2%

robocopy \\old-FS\share  \\new-FS\share /mir /copyall /R:0 /W:0 /LOG+:D:\LOGS\share-%LOGDATE%.txt

Typical robocopy error codes and how to deal with them

Error 2 (0x00000002) → the specified folder does not exist
Error 5 (0x00000005) → no access permission; add the /B option to robocopy
Error 31 (0x0000001F) → the destination cannot be given the same NTFS security settings as the source (possibly an OS limitation)
Error 32 (0x00000020) → the file is locked by an application, so close that application
Error 64 (0x00000040) → the specified share cannot be accessed
Error 87 (0x00000057) → the user name/password used to access the specified share is wrong
Error 112 (0x00000070) → not enough free space on the destination

After the CVE-2022-38023 hardening, NetApp cannot join Active Directory

There was already the issue of "connections failing due to the SMB2 Enabled for DC Connections setting", but with the recently discussed "impact on ONTAP file servers of the Active Directory security hardening update (CVE-2022-38023)", an error now occurs when trying to create a new CIFS server on ONTAP against an Active Directory environment from July 2023 onward.

Two settings need to be changed: "Is AES Encryption Enabled" and "AES session key enabled for NetLogon channel".

The default of the former changed in ONTAP 9.12.1 and that of the latter in ONTAP 9.10.1, so recent installations are not affected; but environments that were upgraded from older versions still carry the old values, so care is needed.

Part 1: the Is AES Encryption Enabled setting

In environments that have been running ONTAP for a long time and were upgraded, AES is not used as the encryption type for internal SMB connections, which leads to an error like the following.

netapp9101::> vserver cifs create -vserver svm3 -cifs-server svm3 -domain adosakana.local

In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of a Windows account with
sufficient privileges to add computers to the "CN=Computers" container within the "ADOSAKANA.LOCAL" domain.

Enter the user name: administrator

Enter the password:

Error: Machine account creation procedure failed
  [    47] Loaded the preliminary configuration.
  [   130] Created a machine account in the domain
  [   130] SID to name translations of Domain Users and Admins
           completed successfully
  [   131] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   142] Successfully connected to ip 172.17.44.49, port 464 using
           TCP
  [   233] Kerberos password set for 'SVM3$@ADOSAKANA.LOCAL' succeeded
  [   233] Set initial account password
  [   244] Successfully connected to ip 172.17.44.49, port 445 using
           TCP
  [   276] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   311] Successfully authenticated with DC
           adserver.adosakana.local
  [   324] Unable to connect to NetLogon service on
           adserver.adosakana.local (Error:
           RESULT_ERROR_GENERAL_FAILURE)
**[   324] FAILURE: Unable to make a connection
**         (NetLogon:ADOSAKANA.LOCAL), result: 3
  [   324] Unable to make a NetLogon connection to
           adserver.adosakana.local using the new machine account
  [   346] Deleted existing account
           'CN=SVM3,CN=Computers,DC=adosakana,DC=local'

Error: command failed: Failed to create the Active Directory machine account "SVM3". Reason: general failure.

netapp9101::>

As described in the manual under "Enable or disable AES encryption for Kerberos-based communication", this is resolved by setting "is-aes-encryption-enabled" to true.

netapp9101::> vserver cifs security modify -vserver svm3 -is-aes-encryption-enabled true
netapp9101::> vserver cifs security show -fields is-aes-encryption-enabled
vserver is-aes-encryption-enabled
------- -------------------------
Cluster -
Snapmirror-WAN
        -
netapp9101
        -
netapp9101-01
        -
svm0    true
svm2    false
svm3    true
7 entries were displayed.

netapp9101::>

Part 2: the AES session key enabled for NetLogon channel setting

Even with the above configured, the following error occurred.

netapp9101::> vserver cifs create -vserver svm3 -cifs-server svm3 -domain vm2.adosakana.local

In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of
a Windows account with sufficient privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.

Enter the user name: administrator

Enter the password:

Error: Machine account creation procedure failed
  [    43] Loaded the preliminary configuration.
  [   133] Created a machine account in the domain
  [   133] SID to name translations of Domain Users and Admins
           completed successfully
  [   134] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   144] Successfully connected to ip 172.17.44.49, port 464 using
           TCP
  [   226] Kerberos password set for 'SVM3$@ADOSAKANA.LOCAL' succeeded
  [   226] Set initial account password
  [   253] Successfully connected to ip 172.17.44.49, port 445 using
           TCP
  [   284] Successfully connected to ip 172.17.44.49, port 88 using
           TCP
  [   316] Successfully authenticated with DC
           adserver.adosakana.local
  [   323] Encountered NT error (NT_STATUS_PENDING) for SMB command
           Read
  [   327] Unable to connect to NetLogon service on
           adserver.adosakana.local (Error:
           RESULT_ERROR_GENERAL_FAILURE)
**[   327] FAILURE: Unable to make a connection
**         (NetLogon:ADOSAKANA.LOCAL), result: 3
  [   327] Unable to make a NetLogon connection to
           adserver.adosakana.local using the new machine account
  [   344] Deleted existing account
           'CN=SVM3,CN=Computers,DC=ADOSAKANA,DC=local'

Error: command failed: Failed to create the Active Directory machine account "SVM3". Reason: general failure.

netapp9101::>

The Active Directory server in this environment was built with samba, so I checked /usr/local/samba/var/log.samba and found the following errors.

[2023/10/20 14:48:22.301935,  0] ../../source4/rpc_server/netlogon/dcerpc_netlogon.c:281(dcesrv_netr_ServerAuthenticate3_check_downgrade)
  CVE-2022-38023: client_account[SVM3$] computer_name[SVM3] schannel_type[2] client_negotiate_flags[0x741ff] real_account[SVM3$] NT_STATUS_DOWNGRADE_DETECTED reject_des[0] reject_md5[1]
[2023/10/20 14:48:22.302215,  0] ../../source4/rpc_server/netlogon/dcerpc_netlogon.c:291(dcesrv_netr_ServerAuthenticate3_check_downgrade)
  CVE-2022-38023: Check if option 'server reject md5 schannel:SVM3$ = no' might be needed for a legacy client.
[2023/10/20 14:48:22.304539,  0] ../../source4/rpc_server/netlogon/dcerpc_netlogon.c:281(dcesrv_netr_ServerAuthenticate3_check_downgrade)
  CVE-2022-38023: client_account[SVM3$] computer_name[SVM3] schannel_type[2] client_negotiate_flags[0x701ff] real_account[SVM3$] NT_STATUS_DOWNGRADE_DETECTED reject_des[1] reject_md5[1]
[2023/10/20 14:48:22.304600,  0] ../../source4/rpc_server/netlogon/dcerpc_netlogon.c:291(dcesrv_netr_ServerAuthenticate3_check_downgrade)
  CVE-2022-38023: Check if option 'server reject md5 schannel:SVM3$ = no' might be needed for a legacy client.
[2023/10/20 14:48:22.304638,  0] ../../source4/rpc_server/netlogon/dcerpc_netlogon.c:298(dcesrv_netr_ServerAuthenticate3_check_downgrade)
  CVE-2022-38023: Check if option 'allow nt4 crypto:SVM3$ = yes' might be needed for a legacy client.

Wondering whether it was connecting with NTLM rather than Kerberos, I set lm-compatibility-level to krb, but got the same result.

netapp9101::> vserver cifs security modify -vserver svm3 -lm-compatibility-level krb

netapp9101::> vserver cifs security show -fields lm-compatibility-level
vserver lm-compatibility-level
------- ----------------------
Cluster -
Snapmirror-WAN -
netapp9101 -
netapp9101-01 -
svm0    lm-ntlm-ntlmv2-krb
svm2    lm-ntlm-ntlmv2-krb
svm3    krb
7 entries were displayed.

netapp9101::>

Digging further, "Configure Active Directory domain controller access overview" says that if you want AES to be used for Netlogon, you should set "aes-enabled-for-netlogon-channel" to true.

netapp9101::> vserver cifs security show -fields aes-enabled-for-netlogon-channel
vserver aes-enabled-for-netlogon-channel
------- --------------------------------
Cluster -
Snapmirror-WAN -
netapp9101 -
netapp9101-01 -
svm0    false
svm2    false
svm3    false
7 entries were displayed.

netapp9101::> vserver cifs security modify -vserver svm3 -aes-enabled-for-netlogon-channel true

netapp9101::> vserver cifs security show -fields aes-enabled-for-netlogon-channel
vserver aes-enabled-for-netlogon-channel
------- --------------------------------
Cluster -
Snapmirror-WAN -
netapp9101 -
netapp9101-01 -
svm0    false
svm2    false
svm3    true
7 entries were displayed.

netapp9101::>

After changing the setting and retrying, joining Active Directory succeeded.

netapp9101::> vserver cifs create -vserver svm3 -cifs-server svm3 -domain adosakana.local

In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of
a Windows account with sufficient privileges to add computers to the "CN=Computers" container within the
"ADOSAKANA.LOCAL" domain.

Enter the user name: administrator

Enter the password:

Notice: SMB1 protocol version is obsolete and considered insecure. Therefore it is deprecated and disabled on this
CIFS server. Support for SMB1 might be removed in a future release. If required, use the (privilege: advanced)
"vserver cifs options modify -vserver svm3 -smb1-enabled true" to enable it.

netapp9101::>

As for whether to enable SMB1 as mentioned in the notice above: you would enable it only in cases such as a multifunction printer using the share as a scan destination, a Linux server mounting it via CIFS, or access from a Windows workgroup client.

How to fix .NET Framework 3.5 failing to install on Windows Server 2012 R2

For an experiment I did a fresh install of Windows Server 2012 R2, intending to apply all Windows Updates first and then configure things, and ran into an unexpected problem.

To state the conclusion first: the cause is that the "Security and Quality Rollup for .NET Framework 3.5, 4.6.1, 4.7, 4.7.1, 4.7.2, 4.8" is installed, so:

・if you add the feature before running Windows Update, the problem does not occur
・if the problem appears after Windows Update has completed, uninstall the "Security and Quality Rollup for .NET Framework 3.5, 4.6.1, 4.7, 4.7.1, 4.7.2, 4.8"

Doing either of these allowed the installation to succeed.

However, you cannot uninstall it unless you notice how the .NET Framework patches are structured internally, so this cause does not seem to be widely known.

Investigating the problem

After Windows Update had finished, I tried to add .NET Framework 3.5, which the software I wanted to test requires, and got an error.

I used "Specify an alternate source path" and pointed it at sources\sxs\ on the DVD.

And it failed with an error.
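For reference, the command-line equivalent of what the wizard does is roughly the following (a sketch, assuming the install media is mounted as D:); in this broken state it presumably fails with the same 0x800f0906:

dism /online /enable-feature /featurename:NetFx3 /all /source:D:\sources\sxs /limitaccess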

At that point, the "Setup" log under Windows Logs in Event Viewer shows "Package Microsoft .NET Framework 3.0 update NetFx3 could not be enabled. Status: 0x800f0906."

Searching for "0x800f0906" turns up ".NET Framework 3.5 deployment errors and resolution steps", but that was not it.

"[MS14-046] Description of the security update for the .NET Framework 3.5 for Windows 8.1 and Windows Server 2012 R2 (August 12, 2014)" contains the following:

After you install security update 2966828 for the Microsoft .NET Framework 3.5 (described in Microsoft Security Bulletin MS14-046), the feature may not be installed the first time you try to enable the optional Microsoft .NET Framework 3.5 feature in Windows Features. This can occur if the installation was "staged" before the Microsoft .NET Framework 3.5 feature was added.
To resolve this issue, install update 3005628.
For more information about how to work around this issue, see the following Knowledge Base article:
3002547: You cannot enable the optional Microsoft .NET Framework 3.5 Windows feature after you install security update 2966827 or 2966828 on Windows 8, Windows Server 2012, Windows 8.1, or Windows Server 2012 R2

Reading the description of update 3005628, "Update for the .NET Framework 3.5 on Windows 8, Windows 8.1, Windows Server 2012, and Windows Server 2012 R2", the cause appears to be that .NET Framework 3.5 related updates are installed even though .NET Framework 3.5 itself is not.

However, as of September 2023, applying update 3005628 had no effect.

Looking at the update history, there is a "Security and Quality Rollup for .NET Framework 3.5, 4.6.1, 4.7, 4.7.1, 4.7.2, 4.8".

However, looking at the installed updates, there is no .NET Framework related patch listed, so it looks as if it cannot be uninstalled.

That said, it looked like the wusa command might be able to uninstall it, so I tried "wusa /uninstall /kb:5030184", but it reported that the update is not installed.

Checking "September 12, 2023 - Security and Quality Rollup for .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows Server 2012 R2 (KB5030184)", it turns out this update actually consists of three parts: "Security and Quality Rollup for .NET Framework 3.5 for Windows Server 2012 R2 (KB5029915)", "Security and Quality Rollup for .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2 for Windows Server 2012 R2 (KB5029916)", and "Security and Quality Rollup for .NET Framework 4.8 for Windows Server 2012 R2 (KB5029917)".

None of those three appear in the update history, but I tried "wusa /uninstall /kb:5029915" anyway, and the uninstall succeeded.

Puzzled, I went back and re-checked the list of installed updates from earlier, and KB5029915 was there...

I see... so "Update history" and "Installed updates" show different KB numbers.

Then, thinking this should be enough, I tried adding .NET 3.5 again, and it failed again.

Checking the update list once more, there was one more "Security and Quality Rollup for .NET Framework 3.5, 4.6.1, 4.7, 4.7.1, 4.7.2, 4.8".

That one is "August 8, 2023 - Security and Quality Rollup for .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows Server 2012 R2 (KB5029653)", which consists of "Security and Quality Rollup for .NET Framework 3.5 for Windows Server 2012 R2 (KB5028970)", "Security and Quality Rollup for .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2 for Windows Server 2012 R2 (KB5028962)", and "Security and Quality Rollup for .NET Framework 4.8 for Windows Server 2012 R2 (KB5028957)", so I ran "wusa /uninstall /kb:5028970" to uninstall it.

Alternatively, right-clicking "Update for Microsoft Windows (KB5028970)" in "Installed updates" and choosing "Uninstall" works just as well.

I then tried adding ".NET Framework 3.5" again from Add Roles and Features, and this time it succeeded.

Checking which updates would be newly installed in this state, I confirmed that, besides the .NET 3.5 specific ones, KB5030184, whose .NET 3.5 portion alone I had just uninstalled, would also be installed again.

And indeed, after actually applying them, the update history shows KB5030184 applied twice.