It’s been quite a while since I last posted something here. It’s not that I haven’t come across many issues in tech; I just haven’t had the time to write anything up.

So I’m going to share how to easily clear the ZFS metadata on a disk previously used in a ZFS pool. Here is an example - I have an external drive that is detected as /dev/da0 when plugged into a FreeBSD server. Though I don’t remember ever having used it as a member of a ZFS pool called tank, the following output shows that it was:

root@nas:~ # zpool import
   pool: tank
     id: 8517408286460165080
  state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
 config:

        tank                      FAULTED  corrupted data
          mirror-1                DEGRADED
            diskid/DISK-Z1E5PFCH  UNAVAIL  cannot open
            da0                   UNAVAIL  corrupted data
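
Before doing anything destructive, it’s worth peeking at what is actually stored in the on-disk label. zdb(8) can dump it straight from the device (just a sanity check; with the stale label still present, it should print the old pool name, GUIDs, and vdev layout of tank):

root@nas:~ # zdb -l /dev/da0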

This disk has since been repurposed as an external drive for off-site backup. It’s encrypted with geli(8) and contains a ZFS file system, so it gets attached to a FreeBSD server and then imported as a ZFS pool (which is not called tank). Everything works fine, but the above message kind of bothers me, so I want to get rid of it.
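
For context, the normal workflow for that backup setup looks roughly like this. The pool name backup and the key file path below are placeholders (I’m not naming the real pool), and depending on how the geli provider was initialized you may be prompted for a passphrase instead of, or in addition to, supplying a key file:

root@nas:~ # geli attach -k /root/backup.key /dev/da0
root@nas:~ # zpool import backup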

I spent about 10 minutes searching for a way to clear this disk’s metadata without causing any data loss on it. Finally I found a working solution - just clear the ZFS label by running zpool labelclear -f DEVICE_NAME. On my system, the disk drive is detected as /dev/da0.

root@nas:~ # zpool labelclear -f da0
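
As an extra device-level check, zdb -l should no longer find a valid label on the disk afterwards (again just a sanity check; it reads the same label areas that labelclear wipes):

root@nas:~ # zdb -l /dev/da0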

I can verify that there is no longer an error message complaining about the tank pool being in a FAULTED state.

root@nas:~ # zpool import

Nice! Thanks to this blog post. Note that zpool labelclear should work on Linux (under OpenZFS) as well, though for me this disk is of no use on a Linux system since it’s geli(8)-encrypted.
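
If you ever need to do the same on Linux, the command should be identical under OpenZFS; only the device naming differs (the device name below is just an example - double-check yours before clearing anything):

# zpool labelclear -f /dev/sdb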
