Destroying corrupted dataset results in kernel panic #16935

Open
jmigual opened this issue Jan 8, 2025 · 0 comments
Labels
Type: Defect (Incorrect behavior, e.g. crash, hang)

Comments

jmigual commented Jan 8, 2025

System information

Type                  Version/Name
Distribution Name     Proxmox 8.3.0 with Debian GNU/Linux 12
Distribution Version  8.3.0 and 12
Kernel Version        6.8.12-5-pve
Architecture          x86_64
OpenZFS Version       zfs-2.2.6-pve1

Describe the problem you're observing

I have a pool that started showing lots of errors, which I traced to a bad RAM stick. After running a scrub, the pool shows the following:

> sudo zpool status tank -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 02:58:02 with 3 errors on Wed Jan  8 01:10:14 2025
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          draid3:6d:10c:1s-0                              ONLINE       0     0     0
            ata-WDC_WD2000FYYZ-01UL1B1_WD-WMC1P0030267    ONLINE       0     0     6
            ata-WDC_WD2001FFSX-68JNUN0_WD-WCC5C6HUR0AA    ONLINE       0     0     6
            ata-WDC_WD2002FAEX-007BA0_WD-WMAY01041720     ONLINE       0     0     6
            ata-WDC_WD2002FFSX-68PF8N0_WD-WMC6N0E0LP7J    ONLINE       0     0     6
            ata-WDC_WD2002FFSX-68PF8N0_WD-WMC6N0E85MRA    ONLINE       0     0     6
            ata-WDC_WD2002FFSX-68PF8N0_WD-WMC6N0E8R4PX    ONLINE       0     0     4
            ata-WDC_WD2002FFSX-68PF8N0_WD-WMC6N0E9PPT1    ONLINE       0     0     6
            ata-WDC_WD2003FYYS-02W0B1_WD-WCAY00437685     ONLINE       0     0     6
            ata-WDC_WD2003FYYS-02W0B1_WD-WCAY00540601     ONLINE       0     0     2
            ata-WDC_WD2003FYYS-02W0B1_WD-WCAY00738799     ONLINE       0     0     6
        logs
          mirror-1                                        ONLINE       0     0     0
            scsi-3600605b002fef1c02d29fa6882abfb16-part4  ONLINE       0     0     0
            scsi-3600605b002fef1c02d29fa6882ac4522-part4  ONLINE       0     0     0
        cache
          scsi-3600605b002fef1c02d29fa6882abfb16-part5    ONLINE       0     0     0
          scsi-3600605b002fef1c02d29fa6882ac4522-part5    ONLINE       0     0     0
        spares
          draid3-0-0                                      AVAIL

errors: Permanent errors have been detected in the following files:

        tank/vm-102-disk-1:<0x1>

In this pool I can read and write all the files in the tank dataset without problems; the only dataset giving trouble is vm-102-disk-1.
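
If I read the error listing correctly, the <0x1> after the dataset name is the object number inside vm-102-disk-1 (object 1, which I believe is the zvol's data object). Purely as a sketch of what could be dumped for debugging, something like the zdb invocation below should print that object's dnode and block pointers read-only; I have not analysed its output in depth:

# object 1 is the id reported as <0x1> by "zpool status -v"
> sudo zdb -dddd tank/vm-102-disk-1 1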

Now, when I run sudo zfs destroy tank/vm-102-disk-1, the command either appears to succeed or just hangs forever. In both cases the dataset is never actually destroyed, and a kernel panic is printed in the kernel log.
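
Since, as far as I understand, destroying a zvol is completed asynchronously by the txg sync thread, I am also noting (purely as a sketch) the commands one could use to see whether the deferred free ever makes progress:

# space still queued for asynchronous freeing after a destroy
> sudo zpool get freeing tank

# block until background freeing finishes (never returns if it is stuck)
> sudo zpool wait -t free tank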

For now, my solution has been to create a new pool and copy all the data from the corrupted pool over to it. I've also replaced my RAM with ECC RAM (lesson learned). Soon I plan to destroy the old pool entirely, but I wanted to file this issue first just in case.
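
In case it helps anyone in the same situation, the migration can be done roughly as follows; newpool and tank/some-dataset are placeholder names here, each healthy dataset is sent individually, and the corrupted zvol is simply skipped:

# snapshot everything, then replicate the healthy datasets one by one
# (newpool is a placeholder; tank/vm-102-disk-1 is not sent)
> sudo zfs snapshot -r tank@migrate
> sudo zfs send tank/some-dataset@migrate | sudo zfs recv newpool/some-dataset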

Describe how to reproduce the problem

I'm honestly not sure... :/ The corruption was caused by faulty RAM, so I don't know of a reliable way to trigger this state on purpose.

Include any warning/errors/backtraces from the system logs

Kernel Log
Jan 08 09:38:08 pve2 sudo[2560337]:  jmigual : TTY=pts/1 ; PWD=/home/jmigual ; USER=root ; COMMAND=/usr/sbin/zfs destroy tank/vm-102-disk-1
Jan 08 09:38:08 pve2 sudo[2560337]: pam_unix(sudo:session): session opened for user root(uid=0) by jmigual(uid=1000)
Jan 08 09:38:10 pve2 sudo[2560337]: pam_unix(sudo:session): session closed for user root
Jan 08 09:38:10 pve2 kernel: PANIC: zfs: adding existent segment to range tree (offset=1021d795a000 size=9000)
Jan 08 09:38:10 pve2 kernel: Showing stack for process 6251
Jan 08 09:38:10 pve2 kernel: CPU: 1 PID: 6251 Comm: txg_sync Tainted: P          IO       6.8.12-5-pve #1
Jan 08 09:38:10 pve2 kernel: Hardware name: Supermicro X8DTH-i/6/iF/6F/X8DTH, BIOS 2.1b       05/04/12  
Jan 08 09:38:10 pve2 kernel: Call Trace:
Jan 08 09:38:10 pve2 kernel:  <TASK>
Jan 08 09:38:10 pve2 kernel:  dump_stack_lvl+0x76/0xa0
Jan 08 09:38:10 pve2 kernel:  dump_stack+0x10/0x20
Jan 08 09:38:10 pve2 kernel:  vcmn_err+0xdb/0x130 [spl]
Jan 08 09:38:10 pve2 kernel:  zfs_panic_recover+0x75/0xa0 [zfs]
Jan 08 09:38:10 pve2 kernel:  range_tree_add_impl+0x27f/0x11c0 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? abd_iter_advance+0x43/0x80 [zfs]
Jan 08 09:38:10 pve2 kernel:  range_tree_add+0x11/0x20 [zfs]
Jan 08 09:38:10 pve2 kernel:  metaslab_free_concrete+0x154/0x290 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? _raw_spin_lock+0x17/0x60
Jan 08 09:38:10 pve2 kernel:  metaslab_free_impl+0xc1/0x110 [zfs]
Jan 08 09:38:10 pve2 kernel:  metaslab_free_dva+0x61/0x90 [zfs]
Jan 08 09:38:10 pve2 kernel:  metaslab_free+0x11e/0x1b0 [zfs]
Jan 08 09:38:10 pve2 kernel:  zio_free_sync+0x11d/0x130 [zfs]
Jan 08 09:38:10 pve2 kernel:  dsl_scan_free_block_cb+0x6a/0x1c0 [zfs]
Jan 08 09:38:10 pve2 kernel:  bptree_visit_cb+0x49/0x160 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x3f5/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? resume_skip_check+0x26/0x80 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_dnode+0xd9/0x210 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x70c/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_dnode+0xd9/0x210 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_visitbp+0x948/0xae0 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_impl+0x1da/0x4a0 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? __pfx_bptree_visit_cb+0x10/0x10 [zfs]
Jan 08 09:38:10 pve2 kernel:  traverse_dataset_destroyed+0x27/0x40 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? __pfx_bptree_visit_cb+0x10/0x10 [zfs]
Jan 08 09:38:10 pve2 kernel:  bptree_iterate+0x1ea/0x390 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? __pfx_dsl_scan_free_block_cb+0x10/0x10 [zfs]
Jan 08 09:38:10 pve2 kernel:  dsl_scan_sync+0x626/0x14a0 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? zio_destroy+0x9a/0xe0 [zfs]
Jan 08 09:38:10 pve2 kernel:  spa_sync+0x5f1/0x1050 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? spa_txg_history_init_io+0x120/0x130 [zfs]
Jan 08 09:38:10 pve2 kernel:  txg_sync_thread+0x207/0x3a0 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
Jan 08 09:38:10 pve2 kernel:  ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
Jan 08 09:38:10 pve2 kernel:  thread_generic_wrapper+0x5f/0x70 [spl]
Jan 08 09:38:10 pve2 kernel:  kthread+0xf2/0x120
Jan 08 09:38:10 pve2 kernel:  ? __pfx_kthread+0x10/0x10
Jan 08 09:38:10 pve2 kernel:  ret_from_fork+0x47/0x70
Jan 08 09:38:10 pve2 kernel:  ? __pfx_kthread+0x10/0x10
Jan 08 09:38:10 pve2 kernel:  ret_from_fork_asm+0x1b/0x30
Jan 08 09:38:10 pve2 kernel:  </TASK>
Jan 08 09:40:37 pve2 kernel: INFO: task txg_sync:6251 blocked for more than 122 seconds.
Jan 08 09:40:37 pve2 kernel:       Tainted: P          IO       6.8.12-5-pve #1
Jan 08 09:40:37 pve2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 08 09:40:37 pve2 kernel: task:txg_sync        state:D stack:0     pid:6251  tgid:6251  ppid:2      flags:0x00004000
Jan 08 09:40:37 pve2 kernel: Call Trace:
Jan 08 09:40:37 pve2 kernel:  <TASK>
Jan 08 09:40:37 pve2 kernel:  __schedule+0x401/0x15e0
Jan 08 09:40:37 pve2 kernel:  schedule+0x33/0x110
Jan 08 09:40:37 pve2 kernel:  vcmn_err+0xe8/0x130 [spl]
Jan 08 09:40:37 pve2 kernel:  zfs_panic_recover+0x75/0xa0 [zfs]
Jan 08 09:40:37 pve2 kernel:  range_tree_add_impl+0x27f/0x11c0 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? abd_iter_advance+0x43/0x80 [zfs]
Jan 08 09:40:37 pve2 kernel:  range_tree_add+0x11/0x20 [zfs]
Jan 08 09:40:37 pve2 kernel:  metaslab_free_concrete+0x154/0x290 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? _raw_spin_lock+0x17/0x60
Jan 08 09:40:37 pve2 kernel:  metaslab_free_impl+0xc1/0x110 [zfs]
Jan 08 09:40:37 pve2 kernel:  metaslab_free_dva+0x61/0x90 [zfs]
Jan 08 09:40:37 pve2 kernel:  metaslab_free+0x11e/0x1b0 [zfs]
Jan 08 09:40:37 pve2 kernel:  zio_free_sync+0x11d/0x130 [zfs]
Jan 08 09:40:37 pve2 kernel:  dsl_scan_free_block_cb+0x6a/0x1c0 [zfs]
Jan 08 09:40:37 pve2 kernel:  bptree_visit_cb+0x49/0x160 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x3f5/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? resume_skip_check+0x26/0x80 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_dnode+0xd9/0x210 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x70c/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x306/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_dnode+0xd9/0x210 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_visitbp+0x948/0xae0 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_impl+0x1da/0x4a0 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? __pfx_bptree_visit_cb+0x10/0x10 [zfs]
Jan 08 09:40:37 pve2 kernel:  traverse_dataset_destroyed+0x27/0x40 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? __pfx_bptree_visit_cb+0x10/0x10 [zfs]
Jan 08 09:40:37 pve2 kernel:  bptree_iterate+0x1ea/0x390 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? __pfx_dsl_scan_free_block_cb+0x10/0x10 [zfs]
Jan 08 09:40:37 pve2 kernel:  dsl_scan_sync+0x626/0x14a0 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? zio_destroy+0x9a/0xe0 [zfs]
Jan 08 09:40:37 pve2 kernel:  spa_sync+0x5f1/0x1050 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? spa_txg_history_init_io+0x120/0x130 [zfs]
Jan 08 09:40:37 pve2 kernel:  txg_sync_thread+0x207/0x3a0 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
Jan 08 09:40:37 pve2 kernel:  ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
Jan 08 09:40:37 pve2 kernel:  thread_generic_wrapper+0x5f/0x70 [spl]
Jan 08 09:40:37 pve2 kernel:  kthread+0xf2/0x120
Jan 08 09:40:37 pve2 kernel:  ? __pfx_kthread+0x10/0x10
Jan 08 09:40:37 pve2 kernel:  ret_from_fork+0x47/0x70
Jan 08 09:40:37 pve2 kernel:  ? __pfx_kthread+0x10/0x10
Jan 08 09:40:37 pve2 kernel:  ret_from_fork_asm+0x1b/0x30
Jan 08 09:40:37 pve2 kernel:  </TASK>
Jan 08 09:42:40 pve2 kernel: INFO: task txg_sync:6251 blocked for more than 245 seconds.
Jan 08 09:44:43 pve2 kernel: INFO: task txg_sync:6251 blocked for more than 368 seconds.
[both hung-task warnings show the same txg_sync call trace as the 09:40:37 one above]
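
One more note in case it is useful for debugging: if I read the trace correctly, the message comes from zfs_panic_recover(), which I believe honours the zfs_recover module parameter, so on a test system the panic could presumably be turned into a plain warning while investigating (I have not tried this on the affected pool):

# 0 = panic on "adding existent segment to range tree" (default)
> cat /sys/module/zfs/parameters/zfs_recover

# let zfs_panic_recover() log and continue instead of panicking
# (debugging aid only, use with care)
> echo 1 | sudo tee /sys/module/zfs/parameters/zfs_recover
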
jmigual added the Type: Defect (Incorrect behavior, e.g. crash, hang) label on Jan 8, 2025