ZFS and Ubuntu Home Server howto

A while ago I bought myself a HP Microserver – a cheap, low-power box with four 3.5″ drive bays and an optical drive bay. I bought it to run as a home server which would back up all my data as well as serve up video, music and photos around the house. I had decided before buying that I wanted to store my data on the ZFS filesystem, since ZFS was the only filesystem at the time which offered guaranteed data integrity. (It is still the only filesystem of release quality which offers this, although BTRFS is catching up.) I have become almost obsessed with ZFS because of the overwhelming benefits it offers, but I won’t go into them here. Instead I recommend watching this talk by the creators of ZFS (part 1, part 2, part 3) or reading through the accompanying slides [PDF].

[Image: HP Microserver with the case open]

I meant at the time to write about how I set up my system but never did get around to it, so here is what I did in the end. The server arrived with 2GB of ECC RAM and a 250GB hard disk. I eventually upgraded this to 8GB of RAM and added two 2TB hard disks, although I started with one 2TB disk and added the second as a mirror when finances allowed. ZFS checks the integrity of the stored data through checksums, so it can always tell you when there is data corruption, but it can only silently heal the problem if it has redundancy from either a mirror or a RAID-Z/Z2 (equivalent to RAID 5 or 6).

ZFS is available as part of FreeNAS, FreeBSD, Solaris, and a number of Solaris derivatives. I initially installed FreeNAS 8. FreeNAS runs from a USB stick, which I put in the handy internal USB socket, but while that was great for storing and sharing files it was not so good for running BitTorrent or for connecting over SSH from outside the house. I also tried Solaris but I ended up going back to what I know and using Ubuntu Linux 12.04 LTS. Although licensing prevents ZFS from being included with Linux, it is trivial to add it yourself.

I have assumed a certain level of knowledge on the reader’s part. If it doesn’t make much sense to you then you might be better off with FreeNAS or an off-the-shelf NAS box.

After installing Ubuntu and fully updating it I did the following:

sudo add-apt-repository ppa:zfs-native/stable

sudo apt-get update

sudo apt-get install ubuntu-zfs

…and that was it. It is a lot more complicated to use ZFS as your root filesystem on Linux, so I don’t.

Update: as of Ubuntu 16.04 ZFS will be supported directly. You will be able to install ZFS with the following rather than adding a third-party repository:

sudo apt-get install zfsutils-linux
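
Whichever route you take, it is worth a quick sanity check that the kernel module is installed and that the tools can talk to it before going any further. Something like the following should do (the exact output will vary by version):

modinfo zfs | head -n 3

sudo zpool status

If everything is in place the second command should report something along the lines of “no pools available” rather than an error about missing ZFS modules.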

Next, I had to set up the ZFS storage pool. The creators of ZFS on Linux recommend that you use disk names starting with /dev/disk/by-id/ rather than /dev/sda, /dev/sdb and so on, as they are more consistent (particularly the wwn identifiers), so look in that folder to see which disk names you have.

ls -l /dev/disk/by-id/
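
You will see several symlinks for each disk (ata-, wwn- and so on), plus extra entries for each partition. If the listing is cluttered you can hide the partition entries, which all contain -part in their names:

ls -l /dev/disk/by-id/ | grep -v -- -part

Each symlink points back at the underlying /dev/sdX device, which is handy for double-checking that you are about to hand ZFS the disk you think you are.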

The example pool name given is tank but I strongly recommend that you use something else. To create a single disk storage pool with no mirror:

sudo zpool create tank /dev/disk/by-id/wwn-0x5000c5004f14aa06

To add a mirror to that later you would type:

sudo zpool attach tank /dev/disk/by-id/wwn-0x5000c5004f14aa06 /dev/disk/by-id/wwn-0x5000c500400303dd

Or if starting with two disks to put in a mirror, your initial command would be:

sudo zpool create tank mirror /dev/disk/by-id/wwn-0x5000c5004f14aa06 /dev/disk/by-id/wwn-0x5000c500400303dd

I prefer to use mirrors as they are generally faster; however, if you want a RAID 5-type setup, use:

sudo zpool create tank raidz1 … … … (3 or more disk identifiers)
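
For double parity (a RAID 6-type setup) the command is the same but with raidz2 and more disks; the names below are just placeholders for your own /dev/disk/by-id/ identifiers:

sudo zpool create tank raidz2 disk1 disk2 disk3 disk4

Bear in mind that you cannot grow a raidz or raidz2 vdev by adding single disks later, so it is worth settling on the layout before you create the pool.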

The system will create your storage pool, create a filesystem of the same name and automatically mount it, in this case under /tank.
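
You can confirm this with the usual tools, or ask ZFS itself which filesystems it has mounted:

df -h /tank

sudo zfs mount

The second command, run with no arguments, simply lists every mounted ZFS filesystem.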

“sudo zpool list” will show you that the pool has been created, along with the raw size of the pool and the space available.

“sudo zpool status” will show you the disks that make up the pool.
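
If you only want a one-line health summary rather than the full layout, there is also:

sudo zpool status -x

which should tell you that all pools are healthy once everything is set up correctly.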

[Screenshot: output of the zpool list and zpool status commands]

While you can just start storing data in your newly created filesystem (/tank in our example), that isn’t the best way to use ZFS. Instead you should create additional filesystems within your storage pool to hold different types of data. This will allow you to do things like set compression, deduplication, quotas and snapshots differently for each set of data, or back up an individual filesystem with zfs send. You use the zfs command to create your filesystems. Some examples:

sudo zfs create tank/music

sudo zfs create tank/videos

sudo zfs create tank/backups

The above examples will create filesystems in the pool and will automatically mount them as subfolders of the main filesystem. Note that the name is given in the format pool/filesystem and that there is no leading slash on the pool name.

Check that your filesystems have been created:

sudo zfs list
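
This is also the point where the per-filesystem settings mentioned above come into play. As an example (the values here are only suggestions, and lz4 compression needs a reasonably recent version of ZFS on Linux):

sudo zfs set compression=lz4 tank/backups

sudo zfs set quota=500G tank/backups

sudo zfs get compression,quota tank/backups

Compression costs very little CPU and can save a surprising amount of space on documents and backups, while already-compressed music and video gain almost nothing from it.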

[Screenshot: output of the zfs list command]

Now we need to share the data, otherwise it’s not much of a server. ZFS will automatically manage sharing through NFS (Unix/Linux) or SMB (Windows), but you must first install the server software. For sharing to Windows clients use:

sudo apt-get install samba

To add NFS use:

sudo apt-get install nfs-kernel-server

You don’t need to configure much because ZFS handles most settings for you, but you might wish to change the workgroup name for Samba in /etc/samba/smb.conf.
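
For example, if your Windows machines use a workgroup called HOME rather than the default WORKGROUP (HOME is just an example name here), change this line in the [global] section of /etc/samba/smb.conf:

workgroup = HOME

then restart Samba:

sudo service smbd restart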

To share a ZFS filesystem you change a property using the zfs command. For Windows clients:

sudo zfs set sharesmb=on tank/music

sudo zfs set sharesmb=on tank/videos

For Unix / Linux clients:

sudo zfs set sharenfs=on tank/backups

Or you can share the whole lot at once by sharing the main pool. The sub-filesystems will inherit the sharing property unless you turn it off for them individually:

sudo zfs set sharesmb=on tank

sudo zfs set sharesmb=off tank/music

You can check whether your filesystems are shared or not:

sudo zfs get sharesmb,sharenfs

At this point you should be able to see your shares from other computers on the network, but you probably won’t have permission to access them. You will need to ensure that the file permissions and owners are set correctly, and you will also have to enable your user account for Samba and set a password for use when connecting through it. If your username is ella then use:

sudo smbpasswd -a ella

to set your Samba password, and make sure that ella has permission to access all the files in your shared folders:

sudo chown -R ella:ella /tank/videos
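
If several people in the house need to write to the same share, one simple (if blunt) approach is to put them all in a common group and open the directories up to that group. Treat the group name and permissions below as an example rather than a recommendation:

sudo addgroup family

sudo adduser ella family

sudo chgrp -R family /tank/music

sudo chmod -R g+rwX /tank/music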

Other useful features of ZFS that you should look up include snapshots and zfs send/receive; there is a quick sketch of those below. I hope this short guide has been helpful if you are trying to set up a ZFS server. Let me know in the comments.
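
As a small taste of those features, here is a minimal sketch. The snapshot name and the second pool called backup-pool are made up for the example:

sudo zfs snapshot tank/backups@2016-02-29

sudo zfs list -t snapshot

sudo zfs send tank/backups@2016-02-29 | sudo zfs receive backup-pool/backups

The first command takes an instant, read-only snapshot, the second lists your snapshots, and the third copies the snapshot into another pool, which could just as easily be on another machine at the far end of an SSH connection.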

Updated 29/02/2016 to remove some personal details, add information about ZFS support in Ubuntu and add some explanations noted in the comments.

Author: Latentexistence


17 thoughts on “ZFS and Ubuntu Home Server howto”

  1. Great guide. I’ve got the HP Proliant N54L ordered (currently waiting for it to come back in stock) and trying to work out how I’ll set it up. This is a great guide for me, as I want to run Linux + ZFS.

    I see you’re running Ubuntu Linux 12.04 LTS. Are you running the desktop version with the GUI or just the server version?

    I’ll let you know if I get it working once it arrives. Thanks again.

    Vince.

    1. I am running the desktop version, mostly so that I can occasionally run a browser on it when my main system is busy. I don’t think the resources saved by running server would make any difference that I would notice, but it might improve security slightly to not have the extra desktop stuff.

  2. Great writeup on zfs, thanks. I also have the same server running a Linux (MD) raid 1 array (ext4) and backed up with a crashplan account. Ubuntu 12.04 (non-gui) has been rock solid for me so far. I also use Powernap which is a great solution for a home server. Will look to use zfs or brfs in the near future to replace ext4.

  3. Thank you so much for this simple guide. Of all the things I’ve had to do on my server, this has been by far the easiest and it is most definitely due to how well you’ve constructed this guide.

    I opted for a raidz configuration on 3 old drives and I’m still shocked at how simple it is to use zfs. Yay!

  4. I’ve zfs installed on my headless ubuntu server 11.04. My OS is currently running on a partition on a disk that is different from the zfs disks. Since my ubuntu version is no longer supported, I wanted to upgrade to ubuntu server 12.04 LTS. The only way to upgrade is to re-install the OS using a burnt ISO on CD. My question is by reinstalling the OS, will my zfs partitions be destroyed? I think not but I just want to be sure. After reinstalling, I just need to reinstall the zfs SW from the ppa and I’m good to go right? Thanks for your help.

    1. Just make sure you don’t touch the zfs disks during the install and they should import just fine after the install is finished and ZOL is installed and working again. You might even want to disconnect the zfs disks until the server upgrade is done just to be *very* sure the disks don’t get clobbered.

  5. Hi, great guide. I just found it after getting an N54L at a discount. My question is: do you still run Linux from a USB stick? The ports on the N54L are USB 2, so I’m a little worried that my NAS will occasionally time out if I run the OS from a USB stick. The ideal solution for me would be to run Debian from a USB stick and create a raidz2 pool with five 2TB drives, in order to get better redundancy than raidz1 provides. What do you think about this?

  6. Great guide! I used it to build my first zfs array. I have a dual quad-core HP workstation with three 3TB WD Reds and 16GB of RAM. Running beautifully, thanks x

  7. I know this is an old thread but hopefully someone in the know (perhaps the article’s original author) is still reading from time to time. Can anyone explain why “..instead you should create additional filesystems within your storage pool to hold different types of data”, rather than simply create a share in the main filesystem, with sub-folders? Sharing multiple filesystems using SMB looks inelegant in Windows Explorer, as each filesystem is shown as a top level folder. It also makes drive mapping a PITA.

    1. The reason is because zpools (what was referred to as “file systems”) can actually act like separate file systems, meaning you can set them to behave differently as-if they were separate partitions or disks, even though they aren’t. For example, I might make a /media zpool and run it with minimal compression and no deduplication because large media files don’t gain much from those things (and would be a performance disaster). But then I might make a /documents zpool that runs high compression and deduplication on it because I am using it to back up millions of text documents that would highly benefit from both of those settings.

      There are other things besides just compression and deduplication that can be set on a zpool level, but those are just examples. My home server is set up similar to what I described (but I don’t use deduplication because it is expensive and not needed at home) and I map them as separate network drives for Windows devices in my home.

      1. Excellent, thanks for the explanation. So it might be better to say “you can set up one or more zpools. Whether you use one or several will be dependent on what you plan to store and how you want it to behave” (and then cite your examples). Thanks again.

        1. Well, the way he said it isn’t totally wrong, but it could have used more explanation I suppose. It’s a good habit with ZFS to use zpools even if you aren’t immediately under the need to do things like I described. At the same time, you want to use them right as using them incorrectly can cause some headaches.

          The reason it’s a good habit is that the settings can be changed over time, and it’s much easier to already have your data in a zpool that can be updated rather than having to create one and move all of your data there. The reason you have to be smart about how they are used is that they essentially behave as totally separate block devices, meaning that moving/copying files across zpools has all of the same overhead as moving across two totally separate physical drives on the same controller. So, for example, I actually think the OP’s suggestion of making separate zpools for “Video”, “Music”, and “Photos” is an OK idea, but any further granularity than that can be a nuisance. If I were to break my photos down into “Archive” and “Album” zpools, it would get annoying when I wanted to move files from my archive to my albums or vice-versa, because I’m making the SATA controller and hard drive do double duty, and it’s noticeably slower. Hope that makes sense 🙂

          1. Yes, all makes sense, thanks. My setup/requirements are reasonably straightforward: music, videos (for streaming via Plex) and backups from family laptops, onto a 4 x 4TB RAID-Z1 Ubuntu server. (Yes, I know 4 x 4 isn’t optimal, but… 🙂) So I probably could have gone with one zpool but instead set it up as outlined in the original article. I’m using Samba to share it, and even though I’ve got Linux clients here too I’m just using SMB because, well, I think because I’m lazy! Backup folders are mapped in Windows clients just because it makes it easier for the backup software (the very excellent Bvckup 2) and non-techy users.

          2. Nice, thanks for sharing all of that. Right now I have an old Dell Pentium D machine running Nas4Free, which is where I learned all of my ZFS stuff. Now I’m thinking of finally upgrading and I want to use a Linux server because I work in it every day and feel more comfortable, plus way more software available. Sounds like ZFS on Linux is working out pretty darn well for a lot of you. I think I’ll try that.

            Honestly, I use Samba over NFS most of the time as well (not at work, but at home) just because I like that you get a little bit of security with Samba versus basically nothing with NFS lol.

  8. Good lookin’ out! Just used this guide to successfully migrate my Ubuntu Server 14.04 LTS home media (Plex) server to ZFS!
