ZFS


https://github.com/zfsonlinux/zfs/wiki/Debian

For Debian Jessie, ZFS packages are provided by backports.

Add the jessie-backports repository (the ZFS packages are in the contrib area):

echo "deb http://ftp.debian.org/debian jessie-backports main contrib" >> /etc/apt/sources.list.d/backports.list
apt update

Install kernel headers:

apt install linux-headers-$(uname -r)

Install zfs packages:

apt-get install -t jessie-backports zfs-dkms

After the server has been rebooted, check that ZFS on Linux is installed and running properly.

dpkg -l | grep zfs
ii  debian-zfs                     7~jessie                    amd64        Native ZFS filesystem metapackage for Debian.
ii  libzfs2                        0.6.5.2-2                   amd64        Native ZFS filesystem library for Linux
ii  zfs-dkms                       0.6.5.2-2                   all          Native ZFS filesystem kernel modules for Linux
ii  zfsonlinux                     6                           all          archive.zfsonlinux.org trust package
ii  zfsutils                       0.6.5.2-2                   amd64        command-line tools to manage ZFS filesystems

The above result shows that ZFS on Linux is installed, so we can go on to creating the first pool.
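You can also confirm that the kernel module built by DKMS is actually loaded (and load it manually if it is not):

lsmod | grep zfs
modprobe zfs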

zpool list
no pools available

The drives must not have any existing partitions on them.

Always use the long /dev/disk/by-id/* aliases with ZFS. Using the /dev/sd* device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.

ls -la /dev/disk/by-id

will list the aliases.
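If a pool was originally created with /dev/sd* names, you can switch it over to the by-id aliases by exporting and re-importing it (shown here with vissiepool; use your own pool name):

zpool export vissiepool
zpool import -d /dev/disk/by-id vissiepool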

Create a Mirrored pool

zpool create -o ashift=12 -m /export/zfs vissiepool mirror c1t1d0 c1t2d0
zpool list

(Create options such as -o and -m must come before the pool name; replace c1t1d0 and c1t2d0 with your /dev/disk/by-id/* aliases.)

Adding Devices to a Storage Pool

You can dynamically add disk space to a pool by adding a new top-level virtual device. This disk space is immediately available to all datasets in the pool. To add a new virtual device to a pool, use the zpool add command. For example:

zpool add zeepool mirror /dev/disk/by-id/scsi-SATA_disk2 /dev/disk/by-id/scsi-SATA_disk3

The command also supports the -n option so that you can perform a dry run. For example:

zpool add -n zeepool mirror /dev/disk/by-id/scsi-SATA_disk3 /dev/disk/by-id/scsi-SATA_disk4
would update 'zeepool' to the following configuration:
     zeepool
       mirror
           c1t0d0
           c1t1d0
       mirror
           c2t1d0
           c2t2d0
       mirror
           c3t1d0
           c3t2d0


Now you can see that the pool of three storage devices has created 57.4 GB of redundant storage. However, instead of creating regular directories inside swapnil0, I will create datasets. Datasets have many advantages over directories, the biggest being the ability to take snapshots.

Next, I'll create a dataset called tux inside swapnil0, along with two more:

zfs create swapnil0/tux
zfs create swapnil0/images
zfs create swapnil0/videos
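Since snapshots were the motivation for using datasets, here is a quick sketch (the snapshot name after the @ is arbitrary):

zfs snapshot swapnil0/tux@before-changes
zfs list -t snapshot
zfs rollback swapnil0/tux@before-changes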

Because in Debian you perform everything as root, you will need to change the owner to your user so that you can write files to these datasets; see Drive security for details.
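For example (assuming the datasets are mounted under /swapnil0 and your user is vissie; substitute your own username):

chown -R vissie:vissie /swapnil0/tux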

To add a storage pool that simply lumps all the drive space together into one big volume (I use this as a temp drive), do:

zpool create tempool /dev/disk/by-id/ata-ST3320820AS_9QF3Z3F4 /dev/disk/by-id/ata-ST3500413AS_9VMYTM26

If you get an error and need to force it, use -f:

zpool create -f tempool /dev/disk/by-id/ata-ST3320820AS_9QF3Z3F4 /dev/disk/by-id/ata-ST3500413AS_9VMYTM26 

Notes!!

If, after a reboot, your pool is not there, just do an import:

zpool import vissiepool
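If you don't remember the pool name, running zpool import with no arguments scans the disks and lists any pools that are available for import:

zpool import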

To experiment with the configurations, you can use plain files instead of real disks.

dd if=/dev/urandom of=disk4 bs=1MB count=128
zpool detach pool1 /root/diskerror
zpool attach pool1 /root/diskworking /root/disknew
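A fuller sketch of this technique, building a throwaway mirror out of files under /root (all of the file and pool names here are made up):

dd if=/dev/urandom of=/root/disk1 bs=1MB count=128
dd if=/dev/urandom of=/root/disk2 bs=1MB count=128
dd if=/dev/urandom of=/root/disk3 bs=1MB count=128
zpool create playpool mirror /root/disk1 /root/disk2
zpool status playpool
zpool attach playpool /root/disk1 /root/disk3
zpool detach playpool /root/disk2
zpool destroy playpool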

-o ashift=12 – not required, but it aligns the pool to 4 KiB sectors (2^12 bytes), which helps performance on Advanced Format drives that report 512-byte sectors. It can only be set when the pool (or vdev) is created.
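To check which ashift an existing pool actually got (zdb reads it from the pool labels; shown here with tempool):

zdb -C tempool | grep ashift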

Compression is set per dataset. To enable LZ4 compression and then see how well it is compressing:

zfs set compression=lz4 mypool/fs1
zfs get compressratio mypool/fs1

The following are the valid compression values (the example above uses lz4):

on
off
lzjb
lz4
gzip
gzip-[1-9]
zle
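For example, to switch the same dataset to the strongest gzip level and inspect both the setting and the resulting ratio:

zfs set compression=gzip-9 mypool/fs1
zfs get compression,compressratio mypool/fs1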

Note that zpool list shows raw capacity, not usable capacity: a raidz1 pool built from three 1 TB drives reports the full 3 TB even though 1 TB of that goes to parity. To see the usable capacity, use the zfs command, which queries filesystems rather than zpools (or use standard system tools like df):

zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
ars    709K  1.96T   181K  /ars
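Or with a standard tool (using the ars pool from this example):

df -h /ars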

If you had errors and everything is fixed now, you can clear the pool's error counters with:

zpool clear z-mirror


Some nice commands

sudo zpool iostat -v
                                                       capacity     operations    bandwidth
pool                                                 alloc   free   read  write   read  write
---------------------------------------------------  -----  -----  -----  -----  -----  -----
tempool                                              6.39G   292G      0      0     26     65
  mirror                                             6.39G   292G      0      0     26     65
    ata-ST3320820AS_9QF3Z3F4                             -      -      0      0    161    158
    ata-ST3500413AS_9VMYTM26                             -      -      0      0    124    158
---------------------------------------------------  -----  -----  -----  -----  -----  -----
vissiepool                                           1.54T  2.08T    379     16  46.5M   158K
  mirror                                             1.54T   279G    378      7  46.5M  53.5K
    ata-ST2000LM007-1R8174_WDZ10R68                      -      -    377      1  46.5M  38.5K
    ata-ST2000LM007-1R8174_WDZ117D5                      -      -      0    374  25.2K  46.2M
  mirror                                              876M  1.81T      0      8    691   105K
    ata-TOSHIBA_MQ01UBB200_44FAT3INT                     -      -      0      3    547   105K
    usb-TOSHIBA_External_USB_3.0_20130618015770-0:0      -      -      0      3    618   105K
---------------------------------------------------  -----  -----  -----  -----  -----  -----
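zpool iostat also accepts an interval in seconds (and an optional count), which is handy for watching activity live; Ctrl-C stops it:

sudo zpool iostat -v 5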

Gathering ZFS Storage Pool Status Information

You can use the zpool status interval and count options to gather statistics over a period of time. In addition, you can display a time stamp by using the -T option. For example:

zpool status -T d 3 2
Tue Nov  2 10:38:18 MDT 2010
  pool: pool
 state: ONLINE
 scan: none requested
config:
       NAME        STATE     READ WRITE CKSUM
       pool        ONLINE       0     0     0
         c3t3d0    ONLINE       0     0     0
errors: No known data errors
 pool: rpool
state: ONLINE
scan: resilvered 12.2G in 0h14m with 0 errors on Thu Oct 28 14:55:57 2010
config:
       NAME          STATE     READ WRITE CKSUM
       rpool         ONLINE       0     0     0
         mirror-0    ONLINE       0     0     0
           c3t0d0s0  ONLINE       0     0     0
           c3t2d0s0  ONLINE       0     0     0
errors: No known data errors
Tue Nov  2 10:38:21 MDT 2010
 pool: pool
state: ONLINE
scan: none requested
config:
       NAME        STATE     READ WRITE CKSUM
       pool        ONLINE       0     0     0
         c3t3d0    ONLINE       0     0     0
errors: No known data errors
 pool: rpool
state: ONLINE
scan: resilvered 12.2G in 0h14m with 0 errors on Thu Oct 28 14:55:57 2010
config:
       NAME          STATE     READ WRITE CKSUM
       rpool         ONLINE       0     0     0
         mirror-0    ONLINE       0     0     0
           c3t0d0s0  ONLINE       0     0     0
           c3t2d0s0  ONLINE       0     0     0
errors: No known data errors

My cron jobs

sudo crontab -e
# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |   .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |   |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue...
# |  |  |   |  |
# *  *  *   *  * command to be executed
##### All my zfs crons #####
# Run every day at 09h00
 0 09 *     *  *   /home/vissie/scripts/system/zfs/checkZFSstatus.sh 
# Scrub the pools on the first and third Sunday of the month (tempool at 22h00, vissiepool at 23h00).
# Note: cron runs via /bin/sh, so use the POSIX "=" comparison, not "==".
 0 22 1-7   *  *   [ "$(date '+\%a')" = "Sun" ] && zpool scrub tempool
 0 22 15-22 *  *   [ "$(date '+\%a')" = "Sun" ] && zpool scrub tempool
 0 23 1-7   *  *   [ "$(date '+\%a')" = "Sun" ] && zpool scrub vissiepool
 0 23 15-22 *  *   [ "$(date '+\%a')" = "Sun" ] && zpool scrub vissiepool
##### All my S.M.A.R.T crons #####
# Run every Friday at 04h00 and just past it.
 0 04 *     *  FRI /home/vissie/scripts/system/zfs/scanzfs.sh
 4 04 *     *  FRI /home/vissie/scripts/system/zfs/smartscan.sh
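The scripts referenced above are not reproduced here, but a minimal sketch of what checkZFSstatus.sh might look like (hypothetical; zpool status -x prints "all pools are healthy" when nothing is wrong):

#!/bin/sh
# Hypothetical sketch of checkZFSstatus.sh: mail root if any pool reports a problem.
status="$(zpool status -x)"
if [ "$status" != "all pools are healthy" ]; then
    echo "$status" | mail -s "ZFS problem on $(hostname)" root
fi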