Dealing with the WARNING that appeared after casually setting up Ceph storage on Proxmox 8.1.4


I set up Proxmox 8.1.4 on three servers and, figuring "let's try building some Ceph storage," configured it rather casually.

I basically followed "Deploy Hyper-Converged Ceph Cluster". When creating the CephFS, the procedure says to run "pveceph fs create --pg_num 128 --add-storage", but I wondered what would happen with just "pveceph fs create", so I tried it, and got a warning.
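
For reference, the two invocations side by side:

# as documented in the Proxmox guide
pveceph fs create --pg_num 128 --add-storage

# what I actually ran
pveceph fs create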

Note: while looking into remedies I found that the default when unspecified is also 128.

HEALTH_WARN: 1 pools have too many placement groups
Pool storagepool has 128 placement groups, should have 32

Searching around turned up a somewhat old Proxmox forum thread, "CEPH pools have too many placement groups" (from 2020, the Proxmox 6.3 era).

Check the current settings with "pveceph pool ls":

root@zstack137:~# ceph -v
ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable)
root@zstack137:~# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve)
root@zstack137:~# pveceph pool ls
┌─────────────────┬──────┬──────────┬────────┬─────────────┬────────────────┬───────────────────┬──────────────────────────┬───────────────────────────┬─────────────────┬──────────────────────┬────────────┐
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.08950029648258e-06 │    1388544 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │ 4.41906962578287e-07 │     198610 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ storagepool     │    3 │        2 │    128 │             │             32 │ warn              │                          │                           │ replicated_rule │   0.0184257291257381 │ 8436679796 │
└─────────────────┴──────┴──────────┴────────┴─────────────┴────────────────┴───────────────────┴──────────────────────────┴───────────────────────────┴─────────────────┴──────────────────────┴────────────┘
root@zstack137:~#

And "ceph osd pool autoscale-status":

root@zstack137:~# ceph osd pool autoscale-status
POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             452.0k                3.0        449.9G  0.0000                                  1.0       1              on         False
cephfs_data          0                 3.0        449.9G  0.0000                                  1.0      32              on         False
cephfs_metadata  66203                 3.0        449.9G  0.0000                                  4.0      32              on         False
storagepool       2681M                3.0        449.9G  0.0175                                  1.0     128              warn       False
root@zstack137:~#

Come to think of it, something similar came up when I test-built Ceph ages ago; checking, I found a memo I'd left in 2018, "CephのOSD毎のPlacement Groupの数を確認する" (checking the number of Placement Groups per OSD in Ceph).

Running "ceph health" showed the situation was different this time, though.

root@zstack137:~# ceph health
HEALTH_WARN 1 pools have too many placement groups

root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
    Pool storagepool has 128 placement groups, should have 32
root@zstack137:~#
root@zstack137:~# ceph -s
  cluster:
    id:     9e085d6a-77f3-41f1-8f6d-71fadc9c011b
    health: HEALTH_WARN
            1 pools have too many placement groups

  services:
    mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 3h)
    mgr: zstack136(active, since 3h), standbys: zstack135
    mds: 1/1 daemons up, 1 standby
    osd: 9 osds: 9 up (since 3h), 9 in (since 3d)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 193 pgs
    objects: 716 objects, 2.7 GiB
    usage:   8.3 GiB used, 442 GiB / 450 GiB avail
    pgs:     193 active+clean

root@zstack137:~#

Even so, I checked whether the following command from that memo, which reformats the output of "ceph pg dump", still works.

ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
 # locate the UP column on the header line
 /^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
 # for each PG line (<poolid>.<pgid>), note the pool and count every OSD in its UP set
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
 up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
# print an OSD x pool matrix of PG counts, with row and column totals
END {
 printf("\n");
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
   for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'

It ran fine.

root@zstack137:~# ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
 /^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
 up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
 printf("\n");
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
   for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
dumped all

pool :  3       2       1       4       | SUM
------------------------------------------------
osd.3   4       5       1       13      | 23
osd.8   4       6       0       12      | 22
osd.6   2       4       0       15      | 21
osd.5   6       4       0       16      | 26
osd.2   3       3       0       15      | 21
osd.1   4       3       0       10      | 17
osd.4   1       1       0       16      | 18
osd.0   5       2       0       10      | 17
osd.7   3       4       0       21      | 28
------------------------------------------------
SUM :   32      32      1       128     |
root@zstack137:~#

Quite a gap between the pools?

I found a Chinese page, "ceph使用问题积累" (a collection of Ceph usage problems), which covers remedies for both "HEALTH_WARN: pools have too many placement groups" and "HEALTH_WARN: mons are allowing insecure global_id reclaim".

For the former it cites the Proxmox forum thread mentioned above and says to disable autoscaling by running "ceph mgr module disable pg_autoscaler".

For the latter it gives "ceph config set mon auth_allow_insecure_global_id_reclaim false".

Before changing any module settings, check their status with "ceph mgr module ls":

root@zstack137:~# ceph mgr module ls
MODULE
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
iostat             on
nfs                on
restful            on
alerts             -
influx             -
insights           -
localpool          -
mirroring          -
osd_perf_query     -
osd_support        -
prometheus         -
selftest           -
snap_schedule      -
stats              -
telegraf           -
test_orchestrator  -
zabbix             -
root@zstack137:~#

The SUSE Enterprise Storage 7 Documentation on SUSE's site (Administration and Operations Guide, "12 Determine the cluster state") lists a variety of status-checking commands.

root@zstack137:~# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85
TOTAL  450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85

--- POOLS ---
POOL             ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr              1    1  449 KiB        2  1.3 MiB      0    140 GiB
cephfs_data       2   32      0 B        0      0 B      0    140 GiB
cephfs_metadata   3   32   35 KiB       22  194 KiB      0    140 GiB
storagepool       4  128  2.6 GiB      692  7.9 GiB   1.84    140 GiB
root@zstack137:~#  ceph df detail
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85
TOTAL  450 GiB  442 GiB  8.3 GiB   8.3 GiB       1.85

--- POOLS ---
POOL             ID  PGS   STORED   (DATA)   (OMAP)  OBJECTS     USED   (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
.mgr              1    1  449 KiB  449 KiB      0 B        2  1.3 MiB  1.3 MiB      0 B      0    140 GiB            N/A          N/A    N/A         0 B          0 B
cephfs_data       2   32      0 B      0 B      0 B        0      0 B      0 B      0 B      0    140 GiB            N/A          N/A    N/A         0 B          0 B
cephfs_metadata   3   32   35 KiB   18 KiB   17 KiB       22  194 KiB  144 KiB   50 KiB      0    140 GiB            N/A          N/A    N/A         0 B          0 B
storagepool       4  128  2.6 GiB  2.6 GiB  3.0 KiB      692  7.9 GiB  7.9 GiB  9.1 KiB   1.84    140 GiB            N/A          N/A    N/A         0 B          0 B
root@zstack137:~# 

For TOO_MANY_PGS it says the following:

TOO_MANY_PGS
The number of PGs in use is above the configurable threshold of mon_pg_warn_max_per_osd PGs per OSD. This can lead to higher memory usage for OSD daemons, slower peering after cluster state changes (for example OSD restarts, additions, or removals), and higher load on the Ceph Managers and Ceph Monitors.

While the pg_num value for existing pools cannot be reduced, the pgp_num value can. This effectively co-locates some PGs on the same sets of OSDs, mitigating some of the negative impacts described above. The pgp_num value can be adjusted with:
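
The excerpt breaks off at the colon; judging from the command used later in this post, the adjustment it refers to is:

ceph osd pool set <pool-name> pgp_num <value>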

Looking around Proxmox's "Deploy Hyper-Converged Ceph Cluster" docs, it seems pools get created with PG Autoscale Mode set to warn as the standard setting.

Ceph's documentation on automated scaling mentions setting the value with "ceph config set global mon_target_pg_per_osd 100", but doesn't say how to check the current value.
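
As I understand the autoscaler (my reading of the Ceph docs, so treat this as an assumption rather than anything from the procedures above), each pool's recommendation is roughly its usage ratio times a cluster-wide PG budget of OSD count × mon_target_pg_per_osd ÷ replica size, rounded to a power of two and floored (32 here unless a pg_num_min is set), which is how storagepool lands on 32:

# assumed arithmetic for this 9-OSD, size-3 cluster
echo $(( 9 * 100 / 3 ))    # PG budget: 300
# storagepool ratio 0.0175 -> 0.0175 * 300 = 5.25 -> the floor pushes it up to 32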

I worked out that the form is "ceph config get <who> <key>", but couldn't figure out what goes in the who part. (It was not global.)

Running "ceph config dump" printed what was presumably the one setting changed from its default, and mon appeared as a who. So I guessed the who for mon_target_pg_per_osd would also be mon, tried it, and got what looks like the current value.

root@zstack137:~# ceph config dump
WHO  MASK  LEVEL     OPTION                                 VALUE  RO
mon        advanced  auth_allow_insecure_global_id_reclaim  false
root@zstack137:~# ceph config get mon  mon_target_pg_per_osd
100
root@zstack137:~#
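
Incidentally, "ceph config help <option>" shows an option's description and default, which would probably have answered this more directly (not something I used at the time):

ceph config help mon_target_pg_per_osd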

For now I tried "ceph mgr module disable pg_autoscaler", but it cannot be changed:

root@zstack137:~# ceph mgr module disable pg_autoscaler
Error EINVAL: module 'pg_autoscaler' cannot be disabled (always-on)
root@zstack137:~#
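
Since the module is always-on, an alternative would presumably be per pool: pools have a pg_autoscale_mode property (off / on / warn) settable via "ceph osd pool set". I did not go that route here:

# not run in this post: turn off autoscaling for just this pool
ceph osd pool set storagepool pg_autoscale_mode off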

Fine then, let's run "ceph osd pool set storagepool pgp_num 32" to change pgp_num from 128 down to 32.

root@zstack137:~# ceph osd pool stats
pool .mgr id 1
  nothing is going on

pool cephfs_data id 2
  nothing is going on

pool cephfs_metadata id 3
  nothing is going on

pool storagepool id 4
  nothing is going on

root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 128
root@zstack137:~# ceph osd pool set storagepool pgp_num 32
set pool 4 pgp_num to 32
root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 125
root@zstack137:~# ceph osd pool get storagepool pgp_num
pgp_num: 119
root@zstack137:~#

The change seems to be applied gradually.
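
One way to keep an eye on the convergence (my own helper loop, not part of the original steps):

watch -n 10 'ceph osd pool get storagepool pgp_num; ceph health'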

root@zstack137:~# ceph -s
  cluster:
    id:     9e085d6a-77f3-41f1-8f6d-71fadc9c011b
    health: HEALTH_WARN
            Reduced data availability: 1 pg peering
            1 pools have too many placement groups
            1 pools have pg_num > pgp_num

  services:
    mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 5h)
    mgr: zstack136(active, since 5h), standbys: zstack135
    mds: 1/1 daemons up, 1 standby
    osd: 9 osds: 9 up (since 5h), 9 in (since 3d); 2 remapped pgs

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 193 pgs
    objects: 716 objects, 2.7 GiB
    usage:   8.4 GiB used, 442 GiB / 450 GiB avail
    pgs:     0.518% pgs not active
             16/2148 objects misplaced (0.745%)
             190 active+clean
             2   active+recovering
             1   remapped+peering

  io:
    recovery: 2.0 MiB/s, 0 objects/s

root@zstack137:~# ceph health
HEALTH_WARN Reduced data availability: 1 pg peering; 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
    Pool storagepool has 128 placement groups, should have 32
[WRN] SMALLER_PGP_NUM: 1 pools have pg_num > pgp_num
    pool storagepool pg_num 128 > pgp_num 32
root@zstack137:~#

After some time had passed:

root@zstack137:~# ceph health detail
HEALTH_WARN 1 pools have too many placement groups; 1 pools have pg_num > pgp_num
[WRN] POOL_TOO_MANY_PGS: 1 pools have too many placement groups
    Pool storagepool has 128 placement groups, should have 32
[WRN] SMALLER_PGP_NUM: 1 pools have pg_num > pgp_num
    pool storagepool pg_num 128 > pgp_num 32
root@zstack137:~# ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
 /^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
 up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
 printf("\n");
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
   for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'
dumped all

pool :  3       2       1       4       | SUM
------------------------------------------------
osd.3   4       5       1       15      | 25
osd.8   4       6       0       16      | 26
osd.6   2       4       0       16      | 22
osd.5   6       4       0       4       | 14
osd.2   3       3       0       11      | 17
osd.1   4       3       0       13      | 20
osd.4   1       1       0       17      | 19
osd.0   5       2       0       20      | 27
osd.7   3       4       0       16      | 23
------------------------------------------------
SUM :   32      32      1       128     |
root@zstack137:~# ceph osd pool autoscale-status
POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             452.0k                3.0        449.9G  0.0000                                  1.0       1              on         False
cephfs_data          0                 3.0        449.9G  0.0000                                  1.0      32              on         False
cephfs_metadata  66203                 3.0        449.9G  0.0000                                  4.0      32              on         False
storagepool       2681M                3.0        449.9G  0.0175                                  1.0     128              warn       False
root@zstack137:~# pveceph pool ls
┌─────────────────┬──────┬──────────┬────────┬─────────────┬────────────────┬───────────────────┬──────────────────────────┬───────────────────────────┬─────────────────┬──────────────────────┬────────────┐
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.09735719383752e-06 │    1388544 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │ 4.43030785390874e-07 │     198610 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ storagepool     │    3 │        2 │    128 │             │             32 │ warn              │                          │                           │ replicated_rule │    0.018471721559763 │ 8436679796 │
└─────────────────┴──────┴──────────┴────────┴─────────────┴────────────────┴───────────────────┴──────────────────────────┴───────────────────────────┴─────────────────┴──────────────────────┴────────────┘
root@zstack137:~#

Can pg_num be reduced as well? (The SUSE text quoted above says an existing pool's pg_num cannot be reduced, but as far as I know PG merging since Ceph Nautilus has made decreasing it possible, so let's try.)

root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 128
root@zstack137:~# ceph osd pool set storagepool pg_num 32
set pool 4 pg_num to 32
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 128
root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 124
root@zstack137:~#

It's going down gradually.

The status changed to HEALTH_OK.

root@zstack137:~# ceph osd pool get storagepool pg_num
pg_num: 119
root@zstack137:~# ceph health detail
HEALTH_OK
root@zstack137:~# pveceph pool ls
┌─────────────────┬──────┬──────────┬────────┬─────────────┬────────────────┬───────────────────┬──────────────────────────┬───────────────────────────┬─────────────────┬──────────────────────┬────────────┐
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.10063592223742e-06 │    1388544 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │ 4.43499772018185e-07 │     198610 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ storagepool     │    3 │        2 │    117 │             │             32 │ warn              │                          │                           │ replicated_rule │   0.0184909123927355 │ 8436679796 │
└─────────────────┴──────┴──────────┴────────┴─────────────┴────────────────┴───────────────────┴──────────────────────────┴───────────────────────────┴─────────────────┴──────────────────────┴────────────┘
root@zstack137:~#

The PG_NUM reported by "ceph osd pool autoscale-status", on the other hand, reflected the change immediately:

root@zstack137:~# ceph osd pool autoscale-status
POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             452.0k                3.0        449.9G  0.0000                                  1.0       1              on         False
cephfs_data          0                 3.0        449.9G  0.0000                                  1.0      32              on         False
cephfs_metadata  66203                 3.0        449.9G  0.0000                                  4.0      32              on         False
storagepool       2705M                3.0        449.9G  0.0176                                  1.0      32              warn       False
root@zstack137:~#

Watching for a while, it occasionally went HEALTH_WARN, but returned to HEALTH_OK fairly quickly each time.

root@zstack137:~# ceph health detail
HEALTH_WARN Reduced data availability: 2 pgs inactive, 2 pgs peering
[WRN] PG_AVAILABILITY: Reduced data availability: 2 pgs inactive, 2 pgs peering
    pg 4.22 is stuck peering for 2d, current state peering, last acting [6,5,2]
    pg 4.62 is stuck peering for 6h, current state peering, last acting [6,5,2]
root@zstack137:~#

After enough time had passed for the change to complete, I captured the state again:

root@zstack137:~# ceph health detail
HEALTH_OK
root@zstack137:~# pveceph pool ls
┌─────────────────┬──────┬──────────┬────────┬─────────────┬────────────────┬───────────────────┬──────────────────────────┬───────────────────────────┬─────────────────┬──────────────────────┬────────────┐
│ Name            │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │               %-Used │       Used │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ .mgr            │    3 │        2 │      1 │           1 │              1 │ on                │                          │                           │ replicated_rule │ 3.13595910483855e-06 │    1388544 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_data     │    3 │        2 │     32 │             │             32 │ on                │                          │                           │ replicated_rule │                    0 │          0 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ cephfs_metadata │    3 │        2 │     32 │          16 │             16 │ on                │                          │                           │ replicated_rule │  4.4855224246021e-07 │     198610 │
├─────────────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼──────────────────────┼────────────┤
│ storagepool     │    3 │        2 │     32 │             │             32 │ warn              │                          │                           │ replicated_rule │   0.0186976287513971 │ 8436679796 │
└─────────────────┴──────┴──────────┴────────┴─────────────┴────────────────┴───────────────────┴──────────────────────────┴───────────────────────────┴─────────────────┴──────────────────────┴────────────┘
root@zstack137:~# ceph -s
  cluster:
    id:     9e085d6a-77f3-41f1-8f6d-71fadc9c011b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum zstack136,zstack135,zstack137 (age 6h)
    mgr: zstack136(active, since 6h), standbys: zstack135
    mds: 1/1 daemons up, 1 standby
    osd: 9 osds: 9 up (since 6h), 9 in (since 3d)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 716 objects, 2.7 GiB
    usage:   8.6 GiB used, 441 GiB / 450 GiB avail
    pgs:     97 active+clean

root@zstack137:~# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    450 GiB  441 GiB  8.7 GiB   8.7 GiB       1.94
TOTAL  450 GiB  441 GiB  8.7 GiB   8.7 GiB       1.94

--- POOLS ---
POOL             ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr              1    1  449 KiB        2  1.3 MiB      0    137 GiB
cephfs_data       2   32      0 B        0      0 B      0    137 GiB
cephfs_metadata   3   32   35 KiB       22  194 KiB      0    137 GiB
storagepool       4   32  2.7 GiB      692  8.0 GiB   1.89    137 GiB
root@zstack137:~#

So that seems to have taken care of it?

Connections to Samba AD went wrong after a Windows Update


We have a Samba AD domain built with Samba.

Recently, after creating a new VM and joining it to the domain, running "Test-ComputerSecureChannel" in PowerShell returned "False".

Repairing it (Test-ComputerSecureChannel -Repair) didn't change the situation.

(Meaning: communication between the AD server and that machine is not happening over a secure channel.)

Checking the Samba version in use: 4.18.2.

# /usr/local/samba/sbin/smbd --version
Version 4.18.2
#

Looking at Samba's release history, there are fixes tied to the security updates that also caused problems on NetApp:

CVE-2023-34967, CVE-2022-2127, CVE-2023-34968, CVE-2023-34966 and CVE-2023-3347.

The directly relevant bug seems to be "Bug 15418 – secure channel faulty since Windows 10/11 update 07/2023".

If it's a bug, the only option is to update; so, on to Samba 4.18.5.

Extracted the source, then make; make install; systemctl restart samba-ad-dc.
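
Spelled out, the build went roughly like this (a sketch: the tarball URL and configure step are assumptions, since the post only records make / make install; the prefix matches the /usr/local/samba path seen above):

wget https://download.samba.org/pub/samba/stable/samba-4.18.5.tar.gz
tar xf samba-4.18.5.tar.gz
cd samba-4.18.5
./configure --prefix=/usr/local/samba
make -j$(nproc)
make install
systemctl restart samba-ad-dc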

# /usr/local/samba/sbin/smbd --version
Version 4.18.5
#

And this time, the repair succeeded.

Updating the VisionFive 2's U-Boot/SPL from v2.5.0 to v2.11.5


I hadn't used my VisionFive 2 recently, so it was still on the SDK v2.5.0 U-Boot/SPL, and the OS was the sort that greets you with "Welcome to Armbian 23.05.0.0070 Lunar with bleeding edge Linux 5.15.0-starfive2".

Checking https://github.com/starfive-tech/VisionFive2/releases , the current U-Boot/SPL is SDK v2.11.5.

Also, the Debian distributed at https://debian.starfivetech.com/ now (as of the 202303 build) apparently shows boot messages on an HDMI display.

So, let's bring U-Boot/SPL up to date: I downloaded v2.11.5 and went for it!

osakanataro@visionfive2:~/v2.11.5$ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
osakanataro@visionfive2:~/v2.11.5$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
u-boot-spl.bin.normal.out won't fit into /dev/mtd0!
osakanataro@visionfive2:~/v2.11.5$

..."u-boot-spl.bin.normal.out won't fit into /dev/mtd0!" — what's that about?

Checking "Updating SPL and U-Boot of Flash", it says "Method 2 only supports versions equal to or later than VF2_v2.5.0.", so this should be supported?

Trying v2.6.0 first, it worked, as expected:

osakanataro@visionfive2:~/v2.6.0$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
Erasing blocks: 32/32 (100%)
Writing data: 124k/124k (100%)
Verifying data: 124k/124k (100%)
osakanataro@visionfive2:~/v2.6.0$ sudo flashcp -v visionfive2_fw_payload.img /dev/mtd1
Erasing blocks: 682/682 (100%)
Writing data: 2727k/2727k (100%)
Verifying data: 2727k/2727k (100%)
osakanataro@visionfive2:~/v2.6.0$

Rebooted once, then tried v2.11.5 again:

osakanataro@visionfive2:~/v2.11.5$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
u-boot-spl.bin.normal.out won't fit into /dev/mtd0!
osakanataro@visionfive2:~/v2.11.5$

Still no good, so I applied v2.8.0:

osakanataro@visionfive2:~/v2.8.0$ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
osakanataro@visionfive2:~/v2.8.0$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
Erasing blocks: 32/32 (100%)
Writing data: 127k/127k (100%)
Verifying data: 127k/127k (100%)
osakanataro@visionfive2:~/v2.8.0$ sudo flashcp -v visionfive2_fw_payload.img /dev/mtd1
Erasing blocks: 683/683 (100%)
Writing data: 2731k/2731k (100%)
Verifying data: 2731k/2731k (100%)
osakanataro@visionfive2:~/v2.8.0$

Without rebooting, I then tried applying v2.10.4, which also succeeded:

osakanataro@visionfive2:~/v2.10.4$ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
osakanataro@visionfive2:~/v2.10.4$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
Erasing blocks: 32/32 (100%)
Writing data: 127k/127k (100%)
Verifying data: 127k/127k (100%)
osakanataro@visionfive2:~/v2.10.4$ sudo flashcp -v visionfive2_fw_payload.img /dev/mtd1
Erasing blocks: 684/684 (100%)
Writing data: 2734k/2734k (100%)
Verifying data: 2734k/2734k (100%)
osakanataro@visionfive2:~/v2.10.4$

Maybe v2.11.5 will go in now? I tried again: no.

osakanataro@visionfive2:~/v2.11.5$ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
osakanataro@visionfive2:~/v2.11.5$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
u-boot-spl.bin.normal.out won't fit into /dev/mtd0!
osakanataro@visionfive2:~/v2.11.5$

Rebooting didn't help either:

osakanataro@visionfive2:~/v2.11.5$ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
osakanataro@visionfive2:~/v2.11.5$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
u-boot-spl.bin.normal.out won't fit into /dev/mtd0!
osakanataro@visionfive2:~/v2.11.5$

Does the kernel need updating too, then?

So I let Armbian's apt update/upgrade run for a while...

osakanataro@visionfive2:~/v2.11.5$ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
osakanataro@visionfive2:~/v2.11.5$ sudo flashcp -v u-boot-spl.bin.normal.out /dev/mtd0
u-boot-spl.bin.normal.out won't fit into /dev/mtd0!
osakanataro@visionfive2:~/v2.11.5$ uname -a
Linux visionfive2 5.15.0-starfive2 #1 SMP Mon Feb 20 02:51:35 UTC 2023 riscv64 riscv64 riscv64 GNU/Linux
osakanataro@visionfive2:~/v2.11.5$

Still no luck...
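
At this point the error is worth taking literally (my own reading, not from any of the guides): in the old layout /dev/mtd0 ("spl") is 0x20000 = 128 KiB, the v2.8.0/v2.10.4 SPL binaries were 127k, and the v2.11.5 one is apparently bigger than the partition, which would be why it keeps failing until the layout changes (the post-update layout further down allots 0x40000 = 256 KiB). A quick check:

# compare the SPL binary size with the mtd0 partition size from /proc/mtd
stat -c '%s bytes  %n' u-boot-spl.bin.normal.out
printf '%d bytes  /dev/mtd0\n' 0x20000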

On the RVspace forum, "Flashcp => /dev/mtd0 2.11.5" suggests re-cutting the /dev/mtd partitions might sort it out.

A GitHub issue, "Can not upgrade to firmware v2.11.5 #37", says you can update by booting from the sdimage and such found on the release page.

Let's try the RVspace approach first:

osakanataro@visionfive2:~/v2.11.5$ sudo mtdpart del /dev/mtd 2
mtdpart: error!: Cannot open /dev/mtd
         error 2 (No such file or directory)
osakanataro@visionfive2:~/v2.11.5$ sudo ls -l /dev/mtd*
crw------- 1 root root 90, 0  4月 18 22:29 /dev/mtd0
crw------- 1 root root 90, 1  4月 18 22:29 /dev/mtd0ro
crw------- 1 root root 90, 2  4月 18 22:29 /dev/mtd1
crw------- 1 root root 90, 3  4月 18 22:29 /dev/mtd1ro
crw------- 1 root root 90, 4  4月 18 22:29 /dev/mtd2
crw------- 1 root root 90, 5  4月 18 22:29 /dev/mtd2ro
brw-rw---- 1 root disk 31, 0  4月 18 22:29 /dev/mtdblock0
brw-rw---- 1 root disk 31, 1  4月 18 22:29 /dev/mtdblock1
brw-rw---- 1 root disk 31, 2  4月 18 22:29 /dev/mtdblock2
osakanataro@visionfive2:~/v2.11.5$

There is no /dev/mtd device...

According to "Updating SPL and U-Boot of SD Card and eMMC", with the official Debian image you can also update by writing the new versions directly to the SD card used for booting.

With Armbian the SD card is a single partition, so that method wasn't available.

I also tried booting the Debian 202303 image directly, but it failed to boot at the U-Boot stage.

In the end, writing the v2.11.5 sdcard.img to a microSD and booting from it re-cut the mtd partitions and updated U-Boot/SPL automatically, and console output now appears on the HDMI display too.
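
The write itself is an ordinary image write; as a sketch (the target device name is hypothetical, so check with lsblk before running dd):

# hypothetical device name; verify with lsblk before writing
sudo dd if=sdcard.img of=/dev/sdX bs=4M conv=fdatasync status=progress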

# cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00040000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
#
# ls -l /dev/mtd*
crw-------    1 root     root       90,   0 Jan  1 00:00 /dev/mtd0
crw-------    1 root     root       90,   1 Jan  1 00:00 /dev/mtd0ro
crw-------    1 root     root       90,   2 Jan  1 00:00 /dev/mtd1
crw-------    1 root     root       90,   3 Jan  1 00:00 /dev/mtd1ro
crw-------    1 root     root       90,   4 Jan  1 00:00 /dev/mtd2
crw-------    1 root     root       90,   5 Jan  1 00:00 /dev/mtd2ro
brw-rw----    1 root     disk       31,   0 Jan  1 00:00 /dev/mtdblock0
brw-rw----    1 root     disk       31,   1 Jan  1 00:00 /dev/mtdblock1
brw-rw----    1 root     disk       31,   2 Jan  1 00:00 /dev/mtdblock2
#

But... booting back into Armbian, /proc/mtd reports something different:

osakanataro@visionfive2:~$ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00001000 "spl"
mtd1: 00300000 00001000 "uboot"
mtd2: 00100000 00001000 "data"
osakanataro@visionfive2:~$

Does the U-Boot/SPL bundled into Armbian need updating too??

Is an Orange Pi 5B with WiFi/Bluetooth on the way?


A recent update to Orange Pi's official build scripts is titled "Update for Orange Pi 5 v1.1.2 and support Orange Pi 5B".

Checking the contents, it contains:

options+=("orangepi5"                 "Rockchip  RK3588S octa core 4-16GB RAM GBE USB3 USB-C NvME")
#options+=("orangepi5b"                 "Rockchip  RK3588S octa core 4-16GB RAM GBE USB3 USB-C WiFi/BT")

So the 5B entry is there, albeit still commented out.

Compared with the Orange Pi 5 line, the M.2 slot is gone and WiFi/Bluetooth is onboard.

It looks like the spec shown at the original announcement is finally arriving as the Orange Pi 5B.

I'm curious how much the price will differ.

Running Chromium OS (openFyde) R102 on the Orange Pi 5


2023/05/09 addendum: this page describes the R102 era; for the current state, see the R108 page.


Wondering whether the Orange Pi 5's CPU power could handle Chrome OS, I searched and found an Orange Pi 5 port of openFyde, the open-source edition of the Chromium OS-based FydeOS.

The first code was published a mere five days ago, and the image files only three days ago.

Chromium OS is heavy on disk I/O, so running it from an M.2 NVMe or M.2 SATA SSD rather than booting from microSD is recommended.

So, let's write it to an M.2 SATA SSD:

orangepi@orangepi5:~$ ls
Desktop    Downloads  Orangepi5_1.1.0_ubuntu_jammy_desktop_xfce_linux5.10.110.img  Pictures  Templates
Documents  Music      orangepi5-openfyde-r102.img                                  Public    Videos
orangepi@orangepi5:~$ sudo dd if=orangepi5-openfyde-r102.img of=/dev/sda bs=4M conv=fdatasync status=progress
6933184512 bytes (6.9 GB, 6.5 GiB) copied, 16 s, 433 MB/s
1766+1 records in
1766+1 records out
7407221248 bytes (7.4 GB, 6.9 GiB) copied, 19.8569 s, 373 MB/s
orangepi@orangepi5:~$ sudo fdisk -l /dev/sda
GPT PMBR size mismatch (14467228 != 1000215215) will be corrected by write.
The backup GPT table is not on the end of the device.
Disk /dev/sda: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: Kingchuxing 512G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A2133E6-2262-584F-8E41-66BED45870B0

Device       Start      End Sectors  Size Type
/dev/sda1  6078464 14467180 8388717    4G Linux filesystem
/dev/sda2    69632   135167   65536   32M ChromeOS kernel
/dev/sda3   499712  6078463 5578752  2.7G ChromeOS root fs
/dev/sda4   135168   200703   65536   32M ChromeOS kernel
/dev/sda5   495616   499711    4096    2M ChromeOS root fs
/dev/sda6    65600    65600       1  512B ChromeOS kernel
/dev/sda7    65601    65601       1  512B ChromeOS root fs
/dev/sda8   200704   233471   32768   16M Linux filesystem
/dev/sda9    65602    65602       1  512B ChromeOS reserved
/dev/sda10   65603    65603       1  512B ChromeOS reserved
/dev/sda11      64    65599   65536   32M unknown
/dev/sda12  364544   495615  131072   64M EFI System

Partition table entries are not in disk order.
orangepi@orangepi5:~$

For reference, writing Android gave the partition layout below, so the structures are broadly similar.

orangepi@orangepi5:~$ sudo fdisk -l /dev/sda
Disk /dev/sda: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: Kingchuxing 512G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1B040000-0000-4531-8000-2CDC00004E46

Device       Start        End   Sectors  Size Type
/dev/sda1     8192      16383      8192    4M unknown
/dev/sda2    16384      28671     12288    6M unknown
/dev/sda3    28672      36863      8192    4M unknown
/dev/sda4    36864      45055      8192    4M unknown
/dev/sda5    45056      53247      8192    4M unknown
/dev/sda6    53248      55295      2048    1M unknown
/dev/sda7    55296     137215     81920   40M unknown
/dev/sda8   137216     333823    196608   96M unknown
/dev/sda9   333824    1120255    786432  384M unknown
/dev/sda10 1120256    1906687    786432  384M unknown
/dev/sda11 1906688    1939455     32768   16M unknown
/dev/sda12 1939456    1941503      2048    1M unknown
/dev/sda13 1941504    8314879   6373376    3G unknown
/dev/sda14 8314880 1000215151 991900272  473G unknown
orangepi@orangepi5:~$

Next, since this is an M.2 SATA SSD, set things up so the ssd-sata driver overlay gets loaded.

Except the instructions say to mount /dev/sda1 and create Env.txt under stateful_partition/unencrypted, and that directory doesn't exist?

orangepi@orangepi5:~$ sudo mount /dev/sda1 /mnt
orangepi@orangepi5:~$ ls /mnt
dev_image  lost+found  unencrypted  var_overlay  vmlinuz_hd.vblock
orangepi@orangepi5:~$ ls /mnt/unencrypted/
import_extensions
orangepi@orangepi5:~$ ls /mnt/stateful_partition/unencrypted
ls: cannot access '/mnt/stateful_partition/unencrypted': No such file or directory
orangepi@orangepi5:~$

Let's run the documented command anyway:

orangepi@orangepi5:~$ echo overlays=ssd-sata | sudo tee /mnt/stateful_partition/unencrypted/Env.txt
tee: /mnt/stateful_partition/unencrypted/Env.txt: No such file or directory
overlays=ssd-sata
orangepi@orangepi5:~$ cat /mnt/stateful_partition/unencrypted/Env.txt
cat: /mnt/stateful_partition/unencrypted/Env.txt: No such file or directory
orangepi@orangepi5:~$

Well, of course it errors out.

Guessing that the stateful_partition part isn't needed, I created the file as /mnt/unencrypted/Env.txt instead:

orangepi@orangepi5:~$ echo overlays=ssd-sata | sudo tee /mnt/unencrypted/Env.txt
overlays=ssd-sata
orangepi@orangepi5:~$ cat /mnt/unencrypted/Env.txt
overlays=ssd-sata
orangepi@orangepi5:~$

Also, before rebooting, write the SATA image to mtdblock0:

orangepi@orangepi5:~$ sudo dd if=/usr/share/orangepi5/rkspi_loader_sata.img of=/dev/mtdblock0 status=progress
13894144 bytes (14 MB, 13 MiB) copied, 1 s, 13.9 MB/s
32768+0 records in
32768+0 records out
16777216 bytes (17 MB, 16 MiB) copied, 246.949 s, 67.9 kB/s
orangepi@orangepi5:~$

Then shut down Linux, pull the microSD, power on, and openFyde booted.

You can log in with either a native FydeOS account or a Google account.

With a Google account, setup proceeds just like a regular Chrome OS device.

Once setup finishes it's, well, ordinary ChromeOS.

The Linux environment works too.

Running the WebGL aquarium in the browser, 10,000 fish at 38 fps is pretty decent; that's better than Ubuntu on the same board with the GPU enabled.

YouTube playback also looks better under openFyde.

Audio playback works properly as well.

Trying Japanese input in the browser, I could switch into Japanese input mode, but conversion didn't seem to work...

Actually, no: it was just that Japanese input doesn't work in the browser's URL bar.

Opening the FydeOS App Store, there's an "Android apps" section; the selection of installable apps is, by whatever logic it was chosen, a bit puzzling.