Tag Archives: zfs

Zpool Not Automatically Mounted on Boot on Fedora

I love ZFS, but there is something that has annoyed me for quite some time: the zpool doesn’t get automatically mounted on boot. To work around this, I had to log in as root and run:

# zpool import tank

to import the tank pool before logging in with my normal user account.

Well, I finally found a solution. This is what’s written on the zfsonlinux/zfs wiki about Fedora specifically:

Systemd Update:

When upgrading to the zfs-0.6.5.8 release it’s recommended that users manually reset the zfs systemd presets. Failure to do so can result in the pool not automatically importing when the system is rebooted.

systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share \
zfs-zed zfs.target

Okay, here’s how I fixed my issue. First, make sure the pool you want automatically mounted is imported manually. Then just run the lengthy command shown above:

# zpool import tank
# systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target
Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-import-cache.service → /usr/lib/systemd/system/zfs-import-cache.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-import-cache.service → /usr/lib/systemd/system/zfs-import-cache.service.
Created symlink /etc/systemd/system/zfs-share.service.wants/zfs-mount.service → /usr/lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service → /usr/lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service → /usr/lib/systemd/system/zfs-share.service.
Created symlink /etc/systemd/system/zed.service → /usr/lib/systemd/system/zfs-zed.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-zed.service → /usr/lib/systemd/system/zfs-zed.service.
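
After the next reboot, a quick way to confirm the preset stuck and the pool comes back on its own (a generic check, not part of the wiki instructions):

# systemctl is-enabled zfs-import-cache zfs-mount zfs.target
# zpool list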

Now, that makes me a much happier ZFS user!

Ref: zfsonlinux/zfs

Install ZFS on Fedora 25 with kernel 4.8.13-300

After I upgraded my desktop from Fedora 24 to 25, ZFS no longer worked. I tried removing and reinstalling the zfs package; it didn’t help.

Here’s how I got it working after trying many things. There could be a different way to fix it, though.

Let’s check the version of the kernel:

[root@sangkae ~]# uname -r
4.8.13-300.fc25.x86_64

Check whether spl and zfs are already installed:

[root@sangkae ~]# dnf info spl
Last metadata expiration check: 1:01:18 ago on Thu Dec 15 22:42:25 2016.
Installed Packages
Name        : spl
Arch        : x86_64
Epoch       : 0
Version     : 0.6.5.8
Release     : 1.fc25
Size        : 48 k
Repo        : @System
From repo   : zfs
Summary     : Commands to control the kernel modules
URL         : http://zfsonlinux.org/
License     : GPLv2+
Description : This package contains the commands to verify the SPL
            : kernel modules are functioning properly.

[root@sangkae ~]# dnf info zfs
Last metadata expiration check: 1:01:26 ago on Thu Dec 15 22:42:25 2016.
Installed Packages
Name        : zfs
Arch        : x86_64
Epoch       : 0
Version     : 0.6.5.8
Release     : 1.fc25
Size        : 808 k
Repo        : @System
From repo   : zfs
Summary     : Commands to control the kernel modules and libraries
URL         : http://zfsonlinux.org/
License     : CDDL
Description : This package contains the ZFS command line utilities.
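
Both packages were installed, so the next suspect was the kernel modules themselves, which need to be (re)built for the new kernel. dkms status shows which modules are built for which kernels (a check worth running here; the exact output format varies by version):

# dkms status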

Manually build and install the spl and zfs modules with dkms:

# dkms install spl/0.6.5.8
# dkms install zfs/0.6.5.8

Load zfs module:

# modprobe zfs
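
To confirm the module is actually loaded (a generic check, not strictly part of the fix):

# lsmod | grep zfs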

Voilà, I got ZFS working again.

FreeBSD upgrade pool ‘zroot’

Today I successfully upgraded my FreeBSD home NAS server from 10.3 to 11.0. This is the final release of 11.0, though the official announcement is expected on September 28.

After the system upgrade, I also needed to upgrade the two zpools (tank and zroot) so they could use the new features. Upgrading tank was easy; all I needed to do was run this command:

# zpool upgrade tank
This system supports ZFS pool feature flags.

Enabled the following features on 'tank':
  sha512
  skein

For zroot, in addition to running the above command (replacing tank with zroot), I also needed to update the boot code.

root@nas:~ # zpool upgrade zroot
This system supports ZFS pool feature flags.

Enabled the following features on 'zroot':
  sha512
  skein

If you boot from pool 'zroot', don't forget to update boot code.
Assuming you use GPT partitioning and da0 is your boot disk
the following command will do it:

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

What’s the boot code? Good question. I’m not sure what it is; I’ll find out later by reading the FreeBSD documentation.

The boot disk in my server is not da0. How do we find out what it is?

root@nas:~ # gpart show
=>       34  125045357  ada4  GPT  (60G)
         34       1024     1  freebsd-boot  (512K)
       1058    4194304     2  freebsd-swap  (2.0G)
    4195362  120850029     3  freebsd-zfs  (58G)

In my case, it’s ada4, and the partition the boot code sits on is ada4p1.
So, I can now proceed to update the boot code:

root@nas:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4
partcode written to ada4p1
bootcode written to ada4

Reboot the machine, and voilà, it’s “still” working.

Losing ZFS storage for Docker

I use ZFS as the storage driver for the Docker engine running on my machine. Today, after my machine rebooted from a crash (yes, Linux systems crash too), I noticed that all my Docker images and containers had disappeared.

~ ❯❯❯ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
~ ❯❯❯ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

The first thing that came to my mind was “did I accidentally destroy the Docker ZFS dataset last night?”

# zfs list
NAME                                                                                USED  AVAIL  REFER  MOUNTPOINT
tank                                                                               1.52T   239G   120K  /tank
tank/docker                                                                         999M   239G  73.9M  /var/lib/docker

It was still there. At that point I suspected that Docker might no longer be using ZFS as its storage driver.

~ ❯❯❯ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.1
Storage Driver: devicemapper
 Pool Name: docker-253:0-2753561-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 11.8 MB
 Data Space Total: 107.4 GB
 Data Space Available: 31.79 GB
 Metadata Space Used: 581.6 kB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.147 GB
...

The output of the docker info command confirmed my suspicion. But how do we switch the storage driver back to ZFS from devicemapper? According to ZFS storage in practice, the only prerequisite for using ZFS as the storage driver is that /var/lib/docker be a ZFS dataset.
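
For reference, creating such a dataset from scratch would look something like this (using my tank/docker name; the mountpoint property is what puts it at /var/lib/docker):

# zfs create -o mountpoint=/var/lib/docker tank/docker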

I was under the impression that tank/docker was mounted at /var/lib/docker. In reality, the /var/lib/docker directory lived on my local LVM filesystem (hence devicemapper was picked as the storage driver).
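
A quick way to see which filesystem actually backs a directory, which would have caught the mix-up right away:

# df -hT /var/lib/docker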

To fix this, I stopped the docker service, cleared out /var/lib/docker, and re-mounted the tank/docker dataset.

# systemctl stop docker
# rm -rf /var/lib/docker/*
# zfs mount tank/docker
# systemctl start docker

Let’s see if it’s working again.

# docker info
Containers: 6
 Running: 0
 Paused: 0
 Stopped: 6
Images: 18
Server Version: 1.12.1
Storage Driver: zfs
 Zpool: tank
 Zpool Health: ONLINE
 Parent Dataset: tank/docker
 Space Used By Parent: 1047826432
 Space Available: 263167705088
 Parent Quota: no
 Compression: off

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
abiosoft/caddy      latest              af55a59400be        2 days ago          40.69 MB
...

Everything seemed to be back to normal. I’m still not sure why tank/docker wasn’t mounted on boot, but I’ll leave that for another day. For now, I’m quite happy.
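
If you hit the same thing, two quick checks worth trying (suggestions only; I haven’t confirmed this was my root cause) are whether the dataset is set to mount automatically and whether the zfs-mount unit ran at boot:

# zfs get canmount,mountpoint tank/docker
# systemctl status zfs-mount.service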

ZFS zpool upgrade

I’m running ZFS on Fedora 23, and I noticed that there are new features that can be enabled on the existing pool.

# zpool status
  pool: tank
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors

Get the ZFS version of the tank pool:

# zpool get version tank
NAME  PROPERTY  VALUE    SOURCE
tank  version   -        default

Huh? I’m not sure why there’s no version value set on ZFS on Linux.

# zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.

POOL  FEATURE
---------------
tank
      filesystem_limits
      large_blocks

Well, let’s upgrade the pool:

# zpool upgrade -a
This system supports ZFS pool feature flags.

Enabled the following features on 'tank':
  filesystem_limits
  large_blocks

It seems to have been successfully upgraded. Let’s verify:

 # zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors

# zpool get version tank
NAME  PROPERTY  VALUE    SOURCE
tank  version   -        default
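
The version property shows ‘-’ because pools formatted with feature flags no longer carry a legacy version number. To confirm the new features really are enabled, the feature properties can be queried directly (not part of the original post):

# zpool get all tank | grep feature@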

Ref: http://freebsd.pro/topic/12/

ZFS on CentOS

First, we need to add the ZFS on Linux repository to our system by installing the zfs-release package, as shown below:

$ sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release-1-3.el6.noarch.rpm
$ sudo yum install zfs 

Once zfs has been installed, we can create a storage pool. There are many RAID configurations to choose from, and I’m not going to get into them here. However, if you want to learn more, this ZFS Administration article explains ZFS RAID(Z) in great detail.

In my case, I have three 1TB disks and picked RAIDZ-1.

# zpool create tank raidz1 sda sdb sdc
# zpool status tank
  pool: tank
 state: ONLINE   
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
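
From here, carving a filesystem out of the new pool is a one-liner (tank/data is just an example name):

# zfs create tank/data
# zfs list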


Changing a ZFS pool to use disk IDs instead of device names

This is what I have:

# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

To convert the pool to use disk IDs instead of device files such as /dev/sda, we first need to export the storage pool.

# zpool export tank
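
Before reimporting, you can check which by-id names exist for the disks (a generic aside, not part of the original steps):

# ls -l /dev/disk/by-id/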

After the export, the pool tank is no longer visible on the system. Now, we’re ready to reimport the pool, this time pointing zpool at the disk IDs:

# zpool import -d /dev/disk/by-id tank
# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  29.8G   192M  29.6G     0%  1.00x  ONLINE  -
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        tank                                       ONLINE       0     0     0
          raidz1-0                                 ONLINE       0     0     0
            ata-VBOX_HARDDISK_VB5844dcdb-0b3799a3  ONLINE       0     0     0
            ata-VBOX_HARDDISK_VBd864f75d-f80a1f7b  ONLINE       0     0     0
            ata-VBOX_HARDDISK_VBb978bce3-baec4252  ONLINE       0     0     0
