ZFS support in loader(8) being continually added/removed

FreeBSD users should be aware of the massive rash of commits which have occurred over the past few weeks with regard to LOADER_ZFS_SUPPORT functionality. This functionality has been added, removed, tinkered with, re-added, and removed again numerous times. Proof is provided below. As of this writing, LOADER_ZFS_SUPPORT has been disabled entirely. Please see this commit:

This affects both i386 and amd64, despite the pathname implying otherwise.

FreeBSD users should be outraged by this, and should question why such changes are not being fully tested before being committed. I’ll take this opportunity as further confirmation that all administrators should be paying VERY close attention to commits to src-all in the RELEASE and STABLE branches.

I consider this evidence further justification for keeping one’s root filesystem as UFS.


FreeBSD and ZFS — horrible raidz1 speed — finale

A follow-up to the following two posts of mine:

The problem I described has not recurred since enabling prefetch. So it seems whatever performance-related problems we had with prefetch when ZFS was first committed to FreeBSD “back in the day” have since been addressed. I wish I could pinpoint where/when/how this was fixed, but the beast is complex…

I’ve since re-enabled prefetch on my co-located production servers (Intel ICH7-based) and I’m seeing great improvements there too. Those are single-disk systems (i.e. no raidz1 in use).

I’d recommend that users who previously disabled the ZFS prefetch mechanism on FreeBSD re-enable it and reboot. :-)
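For reference, a sketch of the relevant tunable in /boot/loader.conf — vfs.zfs.prefetch_disable is the standard FreeBSD loader tunable; setting it to 0 (or simply removing the line) re-enables prefetch on the next boot:

```
# /boot/loader.conf
# 1 disables ZFS file-level prefetch; 0 (or omitting the line) enables it.
vfs.zfs.prefetch_disable="0"
```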

FreeBSD and ZFS — horrible raidz1 read speed

I’ve been noticing what appear to be absolutely horrible read speeds from a ZFS raidz1 pool, but only in some circumstances — specifically, when mutt with header caching (for Maildir) is used.

The setup is as follows:

ad4: 190782MB <WDC WD2000JD-00HBB0 08.02D08> at ata2-master SATA150
ad8: 715404MB <WDC WD7501AALS-00J7B0 05.00K05> at ata4-master SATA300
ad10: 715404MB <WDC WD7501AALS-00J7B0 05.00K05> at ata5-master SATA300
ad12: 715404MB <WDC WD7500AACS-00D6B0 01.01A01> at ata6-master SATA300
ad14: 715404MB <WDC WD7500AACS-00D6B0 01.01A01> at ata7-master SATA300

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0

Filesystem   1024-blocks      Used      Avail Capacity  Mounted on
storage/home    67108864     95232   67013632     0%    /home
storage       2149507840 227146240 1922361600    11%    /storage
/dev/ad4s1e      8122126     52450    7419906     1%    /tmp

This shows that /home (e.g. /home/jdc/Maildir) lives on the ZFS pool storage, which consists of four (4) SATA300 disks. The UFS2-based filesystems (OS disk, /tmp, etc.) live on the single SATA150 disk (ad4).
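For anyone wanting to do a quick-and-dirty sequential-throughput comparison between the two, a rough dd-based sketch (the TARGET path is an assumption — point it at a file under /home to hit the raidz1 pool, or under /tmp to hit the UFS disk):

```shell
#!/bin/sh
# Rough sequential throughput check with dd.
# TARGET is an assumption -- set it to a file on the filesystem under
# test (e.g. under /home for the raidz1 pool, /tmp for the UFS disk).
TARGET="${TARGET:-$(mktemp)}"

# Write 64 MB, then read it back; dd reports the transfer rate on stderr.
dd if=/dev/zero of="$TARGET" bs=1024k count=64
dd if="$TARGET" of=/dev/null bs=1024k

rm -f "$TARGET"
```

Note the read-back will largely measure cached reads unless the file is bigger than RAM (or caches are otherwise flushed), so treat this only as a rough comparison, not a benchmark.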


FreeBSD ZFS v13 MFC — force unmount is experimental

As mentioned in a previous blog post, ZFS version 13 has been MFC’d to RELENG_7. Things looked great even after upgrading the zpool and zfs metadata… until I shut down the machine (reboot, shutdown -p now, etc.). A message in the following format is spit out on the VGA/serial console, once per ZFS pool/filesystem listed:

Force unmount is experimental - report any problems.

The code responsible for the message is here:

Further investigative efforts turned up this, which indicates this may be “by design”…

…but I’d like to know why FreeBSD is setting MS_FORCE in fflag when calling zfs_umount(), especially when the machine is being shut down.

Someone did report this problem back in November 2008 (when the code was slightly different), but no one responded. However, please note that the message in question is output once per ZFS pool/filesystem listed, and has nothing to do with whether or not ZFS is the root filesystem:

I’ve mailed Kip about this, so we’ll see what transpires…

ZFS version 13 committed to RELENG_7

Looks like Kip Macy just MFC’d ZFS version 13 to RELENG_7. The src/UPDATING message is:

        Update ZFS to version 13. ZFS users will need to re-build
        kernel and world. Existing pools will continue to work
        without upgrade. If a pool is upgraded it will no longer be
        usable by older kernel revs. ZFS send / recv between
        pool version 6 and pool version 13 is not supported.
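Given the one-way nature of the upgrade described above, the cautious sequence is to check before touching anything. A sketch using the standard zpool subcommands (the pool name storage is illustrative):

```
# Show the ZFS version the kernel supports, and list pools running
# an older (but still usable) on-disk version.
zpool upgrade

# One-way: once upgraded, the pool can no longer be imported by
# older kernel revs (e.g. anything limited to pool version 6).
zpool upgrade storage
```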

I’m not going to list all of the commits, but below is the commit message.
