After adding a new disk with label hdd to my filesystem, which was formatted with metadata_target=ssd, some metadata somehow ended up on the new disk (I will report that as a separate issue). See the bcachefs fs usage output below; the new drive is sdl:
Filesystem: b402b3da-8057-4d2f-acdc-4a3de16a7c38
Size: 162 TiB
Used: 153 TiB
Online reserved: 15.5 MiB
Data type Required/total Durability Devices
hdd (device 4): sdb rw
data buckets fragmented
free: 332 GiB 339865
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 10.5 TiB 11096163 33.1 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 10.9 TiB 11444224
hdd (device 3): sdc rw
data buckets fragmented
free: 332 GiB 340002
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 10.5 TiB 11096026 34.6 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 10.9 TiB 11444224
hdd (device 11): sdd rw
data buckets fragmented
free: 352 GiB 360728
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 10.4 TiB 11075300 143 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 10.9 TiB 11444224
hdd (device 6): sde rw
data buckets fragmented
free: 606 GiB 620686
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 19.3 TiB 20352878 77.7 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 20.0 TiB 20981760
hdd (device 7): sdf rw
data buckets fragmented
free: 606 GiB 620547
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 19.3 TiB 20353017 77.2 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 20.0 TiB 20981760
hdd (device 5): sdg rw
data buckets fragmented
free: 605 GiB 619981
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 19.3 TiB 20353583 62.2 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 20.0 TiB 20981760
hdd (device 9): sdh rw
data buckets fragmented
free: 612 GiB 626631
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 19.3 TiB 20346933 110 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 20.0 TiB 20981760
hdd (device 8): sdi rw
data buckets fragmented
free: 606 GiB 620688
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 19.3 TiB 20352876 76.7 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 20.0 TiB 20981760
hdd (device 10): sdj rw
data buckets fragmented
free: 629 GiB 643825
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 0 B 0
user: 19.3 TiB 20329739 111 GiB
cached: 0 B 0
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 20.0 TiB 20981760
hdd (device 12): sdl rw
data buckets fragmented
free: 14.8 TiB 15500007
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 32.8 GiB 116250 80.7 GiB
user: 5.03 TiB 5341519 70.1 GiB
cached: 275 MiB 15788 15.1 GiB
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
unstriped: 0 B 0
capacity: 20.0 TiB 20981760
ssd (device 1): sdk4 rw
data buckets fragmented
free: 207 GiB 211629
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 379 GiB 684132 289 GiB
user: 40.0 GiB 49458 8.30 GiB
cached: 204 GiB 619445 401 GiB
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 4.00 MiB 4
unstriped: 0 B 0
capacity: 1.50 TiB 1572864
ssd (device 0): sdm4 rw
data buckets fragmented
free: 207 GiB 212024
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 379 GiB 683830 289 GiB
user: 40.0 GiB 49469 8.31 GiB
cached: 203 GiB 619340 402 GiB
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 5.00 MiB 5
unstriped: 0 B 0
capacity: 1.50 TiB 1572864
Dumping some snippets from IRC:
23|17:59:35 < stintel> 23|09:17:00 <@py1hon> there ought to be an easy way to move data/metadata to the correct target
23|17:59:40 < stintel> any pointers where to look?
23|17:59:46 <@py1hon> no
23|17:59:55 <@py1hon> "ought to be", not "is" :p
23|17:59:59 < stintel> I see
23|18:00:43 <@py1hon> technically that should be rebalance's job
23|18:00:59 <@py1hon> but rebalance doesn't know anything about moving metadata around, just data
23|18:02:09 <@py1hon> we'd want to add the concept of a rebalance scan for metadata, not data
23|18:02:39 <@py1hon> scans get triggered when you flip a filesystem level or inode option, they're cookies in the rebalance work btree
23|18:02:46 <@py1hon> so add a new cookie type
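To make the idea in the IRC log a bit more concrete, here is a small standalone C model of the suggested approach: scans are represented as cookies in the rebalance work btree, and handling misplaced metadata would mean introducing a new cookie type that the rebalance thread dispatches to a btree-node walk instead of an extent walk. This is not bcachefs code; every identifier below is made up, and it only sketches the control flow under that assumption.

```c
/*
 * Standalone model of the proposal above: rebalance scans are queued as
 * cookie entries, and a new cookie type would tell the rebalance thread
 * to move misplaced btree nodes (metadata) rather than extents (data).
 * All names here are hypothetical, not real bcachefs identifiers.
 */
#include <stdio.h>

enum scan_cookie_type {
	SCAN_DATA,	/* existing behaviour: rewrite misplaced extents */
	SCAN_METADATA,	/* proposed: rewrite btree nodes not on metadata_target */
};

struct scan_cookie {
	enum scan_cookie_type	type;
	unsigned		inum;	/* 0 = whole-filesystem scan */
};

/* Stand-in for the rebalance work btree: a FIFO of pending scan cookies. */
static struct scan_cookie pending[8];
static unsigned nr_pending;

/* Flipping a filesystem-level or inode option would queue a cookie. */
static void set_needs_scan(enum scan_cookie_type type, unsigned inum)
{
	pending[nr_pending++] = (struct scan_cookie) { type, inum };
}

static void do_data_scan(unsigned inum)
{
	printf("scan extents of inode %u, rewrite ones not on their target\n", inum);
}

static void do_metadata_scan(void)
{
	printf("scan btree nodes, rewrite ones not on metadata_target\n");
}

/* Rebalance thread: drain the cookies, dispatching on their type. */
static void rebalance_run(void)
{
	for (unsigned i = 0; i < nr_pending; i++)
		switch (pending[i].type) {
		case SCAN_DATA:
			do_data_scan(pending[i].inum);
			break;
		case SCAN_METADATA:
			do_metadata_scan();
			break;
		}
	nr_pending = 0;
}

int main(void)
{
	/* e.g. changing metadata_target (or hitting this bug) queues a metadata scan */
	set_needs_scan(SCAN_METADATA, 0);
	rebalance_run();
	return 0;
}
```

The point of the sketch is only that the existing scan-cookie mechanism already gives rebalance a trigger and a work queue; the missing piece is a cookie variant whose handler walks metadata instead of data.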