A ZFS developer’s analysis of the good and bad in Apple’s new APFS file system
Encryption options are great, but Apple's attitude on checksums is still funky.

Apple announced a new file system that will make its way into all of its OS variants (macOS, tvOS, iOS, watchOS) in the coming years. Media coverage to this point has been mostly breathless elongations of Apple's developer documentation. With a dearth of detail I decided to attend the presentation and Q&A with the APFS team at WWDC. Dominic Giampaolo and Eric Tamura, two members of the APFS team, gave an overview to a packed room; along with other members of the team, they patiently answered questions later in the day. With those data points and some first-hand usage I wanted to provide an overview and analysis both as a user of Apple-ecosystem products and as a long-time operating system and file system developer.
The overview is divided into several sections. I'd encourage you to jump around to topics of interest or skip right to the conclusion (or to the tweet summary). Highest praise goes to encryption; ire to data integrity.

Basics
APFS, the Apple File System, was itself started in 2014 with Giampaolo as its lead engineer. It's a stand-alone, from-scratch implementation (an earlier version of this post noted a dependency on Core Storage, but Giampaolo set me straight in this comment). I asked him about looking for inspiration in other modern file systems such as BSD's HAMMER, Linux's btrfs, or OpenZFS (Solaris, illumos, FreeBSD, Mac OS X, Ubuntu Linux, etc.), all of which have features similar to what APFS intends to deliver. (And note that Apple built a fairly complete port of ZFS, though Giampaolo was not apparently part of the group advocating for it.) Giampaolo explained that he was aware of them as a self-described file system guy (he built the file system in BeOS, unfairly relegated to obscurity when Apple opted to purchase NeXTSTEP instead), but didn't delve too deeply for fear, he said, of tainting himself.
Giampaolo praised the APFS testing team as being exemplary. This is absolutely critical. A common adage is that it takes a decade to mature a file system, and my experience with ZFS more or less confirms this. Apple will be delivering APFS broadly with only 3-4 years of development, so it will need to accelerate quickly to maturity.

Paying down debt
HFS was introduced in 1985 when the Mac 512K (of memory! Holy smokes!) was Apple's flagship. HFS+, a significant iteration, shipped in 1998 on the G3 PowerMacs with 4GB hard drives. The typical storage capacity of a home computer has increased by a factor of over 1,000 since 1998 (and let’s not even talk about 1985). HFS+ has been pulled in a bunch of competing directions with different forks for different devices (e.g. I've been told by inside sources that the iOS team created their own HFS variant, working so covertly that not even the Mac OS team knew) and different features (e.g. journaling, case sensitivity). It's old. It's a mess. And, critically, it's missing a bunch of features that are really considered basic costs of doing business for most operating systems. Wikipedia lists nanosecond timestamps, checksums, snapshots, and sparse file support among those missing features. Add to that the obvious gap of large device support and you've got a big chunk of the APFS feature list.
APFS first and foremost pays down the unsustainable technical debt that Apple has been carrying in HFS+. (In 2001 ZFS grew from a similar need where UFS had been evolved since 1977.) It unifies the multifarious forks. It introduces the expected features. In general it first brings the derelict building up to code.
Compression is an obvious common feature that's missing in the APFS feature list. It's conceptually quite easy, I told the development team (we had it in ZFS from the outset), so why not include it? To appeal to Giampaolo's BeOS nostalgia I even recalled my job interview with Be in 2000 when they talked about how compression actually improved overall performance since data I/O is far more expensive than computation (obvious now, but novel then). The Apple folks agreed, and—in typical Apple fashion—neither confirmed nor denied while strongly implying that it's definitely a feature we can expect in APFS. I'll be surprised if compression isn't included in its public launch.

Encryption
Encryption is clearly a core feature of APFS. This comes from diverse requirements from the various devices; for example, multiple keys within file systems on the iPhone, or per-user keys on laptops. I heard the term "innovative" quite a bit at WWDC, but here the term is aptly applied to APFS. It supports several different encryption choices for a file system:
    •    Unencrypted
    •    Single-key for metadata and user data
    •    Multi-key with different choices for metadata, files, and even sections of a file ("extents")
Multi-key encryption is particularly relevant for portables where all data might be encrypted, but unlocking your phone provides access to an additional key and therefore additional data. Unfortunately this doesn't seem to be working in the first beta of macOS Sierra (specifying fileEncryption when creating a new volume with diskutil results in a file system that reports "Is Encrypted" as "No").
Related to encryption, I noticed an undocumented feature while playing around with diskutil (which prompts you for interactive confirmation of the destructive power of APFS unless this is added to the command-line: -IHaveBeenWarnedThatAPFSIsPreReleaseAndThatIMayLoseData; I'm not making this up). APFS (apparently) supports the ability to securely and instantaneously erase a file system with the "effaceable" option when creating a new volume in diskutil. This presumably builds a secret key that cannot be extracted from APFS and encrypts the file system with it. A secure erase then need only delete the key rather than needing to scramble and re-scramble the full disk to ensure total eradication. Various iOS docs refer to this capability requiring some specialized hardware; it will be interesting to see what the option means on macOS. Either way, let's not mention this to the FBI or NSA, agreed?
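To make the "delete the key" idea concrete, here's a minimal sketch in plain C using macOS's CommonCrypto—my own illustration, not anything Apple has documented about APFS internals. If every byte that reaches the media is encrypted under a volume key, then destroying that single key renders the entire volume unreadable in constant time, no scrubbing required.

    /* crypto_erase.c -- illustration only; not how APFS is actually built.
     * Build on macOS: cc crypto_erase.c -o crypto_erase */
    #include <CommonCrypto/CommonCryptor.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* A random "volume key"; a real design would keep this in an
         * effaceable store or wrap it with hardware-held keys. */
        uint8_t key[kCCKeySizeAES256];
        uint8_t iv[kCCBlockSizeAES128];
        arc4random_buf(key, sizeof(key));
        arc4random_buf(iv, sizeof(iv));

        const char *data = "user data headed for the flash";
        uint8_t ciphertext[128];
        size_t clen = 0;

        /* Everything written to the device is encrypted under the volume key. */
        if (CCCrypt(kCCEncrypt, kCCAlgorithmAES, kCCOptionPKCS7Padding,
                    key, sizeof(key), iv,
                    data, strlen(data),
                    ciphertext, sizeof(ciphertext), &clen) != kCCSuccess)
            return 1;
        printf("wrote %zu encrypted bytes to \"disk\"\n", clen);

        /* "Secure erase": destroy the key. The ciphertext still sits on the
         * media, but without the key it is indistinguishable from noise.
         * (A real implementation would use a wipe the compiler can't elide.) */
        memset(key, 0, sizeof(key));
        printf("key destroyed; those %zu bytes are now unrecoverable\n", clen);
        return 0;
    }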

Snapshots and backup
APFS brings a much-desired file system feature: snapshots. A snapshot lets you freeze the state of a file system at a particular moment and continue to use and modify that file system while preserving the old data. It does so in a space-efficient fashion where, effectively, changes are tracked and only new data takes up additional space. This has the potential to be extremely valuable for backup by efficiently tracking the data that has changed since the last backup.

ZFS includes snapshots and serialization mechanisms that make it efficient to back up file systems or transfer file systems to a remote location. Will APFS work like that? Probably not, answered Giampaolo. ZFS sends all changed data, while Time Machine can have exclusion lists and the like. That seems surmountable, but we'll see what Apple does. APFS right now is incompatible with Time Machine due to the lack of directory hard links, a fairly disgusting implementation that likely contributes to Time Machine's questionable reliability. Hopefully APFS will create some efficient serialization for Time Machine backup.
While APFS dev manager Eric Tamura demonstrated snapshots at WWDC, the required utilities aren't included in the macOS Sierra beta. I used DTrace (technology I'm increasingly amazed that Apple ported from OpenSolaris) to find a tantalizingly named new system call fs_snapshot; I'll leave it to others to reverse engineer its proper use.

Management
APFS brings another new feature known as "space sharing." A single APFS "container" that spans a device can have multiple "volumes" (file systems) within it. Apple contrasts this with the static allocation of disk space to support multiple HFS+ instances, which seems both specious and an uncommon use case. Both ZFS and btrfs have a similar concept of a shared pool of storage with nested file systems for administration and management.
Speaking with Giampaolo and other members of the APFS team, we discussed how volumes are the unit by which users can control things like snapshots and encryption. You'd want multiple volumes to correspond with different policies around those settings. For example, while you might want to snapshot and back up your system each day, the massive /private/var/vm/sleepimage (for saving memory when hibernating) should live on its own and not be backed up.
Space sharing is more an operational detail than a game-changing feature. You can think of it like special folders with snapshot and encryption controls—which is probably why Apple's marketing department has yet to make me a job offer. Adding new volumes can fail with an opaque error (does -69625 mean anything to you?), but using a larger disk image resolved the problem.


Space efficiency
A modern trend in file systems has been to store data more efficiently to effectively increase the size of your device. Common approaches to this include compression (which, as noted above, is very very likely coming) and deduplication. Dedup finds common blocks and avoids storing them more than once. This is potentially highly beneficial for file servers where many users or many virtual machines might have copies of the same file; it's probably not useful for the single-user or few-user environments that Apple cares about. (Yes, they have server-ish offerings but their heart clearly isn't in it.) It's also furiously hard to do well, as I painfully learned while supporting ZFS.

Apple's sort-of-unique contribution to space efficiency is constant-time cloning of files and directories. As a quick aside, "files" in macOS are often really directories; it's a convenient lie they tell to allow logically related collections of files to be treated as indivisible units. Right-click an application and select "Show Package Contents" to see what I mean. Accordingly, I'm going to use the term "file" rather than "file or directory" in sympathy for the patient readers who have made it this far.
With APFS, if you copy a file within the same file system (or possibly the same container; more on this later), no data is actually duplicated. Instead, a constant amount of metadata is updated and the on-disk data is shared. Changes to either copy cause new space to be allocated (this is called "copy on write," or COW). btrfs also supports this and calls the feature "reflinks"—link by reference.
I haven't seen this offered in other file systems (btrfs excepted), and it clearly makes for a good demo, but it got me wondering about the use case. Copying files between devices (e.g. to a USB stick for sharing) still takes time proportional to the amount of data copied of course. Why would I want to copy a file locally? The common case I could think of is the layman's version control: "thesis," "thesis-backup," "thesis-old," "thesis-saving because I'm making edits while drunk."
There are basically three categories of files:
    •    Files that are fully overwritten each time: images, MS Office docs, videos, etc.
    •    Files that are appended to, like log files
    •    Files with a record-based structure, such as database files
For the average user, most files fall into that first category. So with APFS I can make a copy of my document and get the benefits of space sharing, but those benefits will be eradicated as soon as I save the new revision. Perhaps users of larger files have a greater need for this and have a better idea of how it might be used.
Personally, my only use case is taking a file—say time-shifted Game of Thrones episodes falling into the "fair use" section of copyright law—and sticking it in Dropbox. Currently I need to choose to make a copy or permanently move the file to my Dropbox folder. Clones would let me do this more easily. But then, so would hard links (a nearly ubiquitous file system feature that lets a file appear in multiple directories).
Clones open the door for potential confusion. While copying a file may take up no space, so too deleting a file may free no space. Imagine trying to free space on your system, and needing to hunt down the last clone of a large file to actually get your space back.

APFS engineers don't seem to have many use cases in mind; at WWDC they asked for suggestions from the assembled developers (the best I've heard is for copied virtual machines, which is not exactly a mass-market problem). If the focus is generic revision control, I'm surprised that Apple didn't shoot for a more elegant solution. One could imagine functionality with APFS that allows a user to enable per-file Time Machine—change tracking for any file. This would create a new type of file where each version is recorded transparently and automatically. You could navigate to previous versions, prune the history, or delete the whole pile of versions at once (with no stray clones to hunt down). In fact, Apple introduced something related 5 years ago, but I've literally never seen or heard of it until researching this post (show of hands if you've clicked "Browse All Versions…"). APFS could clean up its implementation, simplify its use, and bring generic support for all applications. None of this solves my Game of Thrones storage problem, but I'm not even sure it's much of a problem…
Side note: Finder copy creates space-efficient clones, but cp from the command line does not.
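For those who want clones from their own code, here's a minimal sketch of asking for one explicitly; it assumes the clonefile(2) system call exposed in the Sierra SDK. On HFS+ (or across file system boundaries) the call fails and a caller would fall back to an ordinary copy—presumably the Finder's Duplicate does something along these lines, and cp clearly doesn't yet.

    /* clone_demo.c -- minimal sketch assuming the clonefile(2) interface in
     * the macOS Sierra SDK; only succeeds on an APFS volume. */
    #include <sys/clonefile.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[]) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s src dst\n", argv[0]);
            return 2;
        }
        /* Constant-time "copy": only metadata is written; the data blocks are
         * shared until either side is modified (copy on write). */
        if (clonefile(argv[1], argv[2], 0) != 0) {
            fprintf(stderr, "clonefile failed: %s\n", strerror(errno));
            return 1;
        }
        printf("cloned %s -> %s without duplicating any data\n",
               argv[1], argv[2]);
        return 0;
    }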
Performance
APFS claims to be optimized for flash. Flash memory (NAND) is the stuff in your speedy SSD. Apple changed the computing industry when it put flash into the iPod and iPhone, volumes for which fundamentally changed the economics of flash. This consumer change impacted the enterprise (as it often does), giving rise to hybrid and all-flash arrays. Ten years ago flash cost as much as DRAM; now it's challenging the economics of hard disks.

SSDs mimic the block interface of conventional hard drives, but the underlying technology is completely different. In particular, while magnetic media can read or write sectors arbitrarily, flash erases large chunks (blocks) and reads and writes smaller chunks (pages). The management is done by what's called the flash translation layer (FTL), software that makes blocks and pages appear more like a hard drive. (Editor's note: we've got a huge, 10,000 word breakdown on how SSDs work if you'd like to know more). An FTL is very similar to a file system, creating a virtual mapping (a translation) between block addresses and locations within the media. Apple controls the full stack, including the SSD, FTL, and file system; they could have built something differentiated, optimizing these components to work together. What APFS does, however, is simply write in patterns known to be more easily handled by NAND. It's a file system with flash-aware characteristics rather than one written explicitly for the native flash interfaces—more or less what you'd expect in 2016.
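For a rough feel for what an FTL does, here's a toy model of my own (nothing to do with Apple's actual firmware): a mapping table turns logical "overwrites" into writes to fresh pages, which is also why drives advertise less space than they contain and why the TRIM hint discussed next exists at all.

    /* ftl_sketch.c -- toy model of a flash translation layer: present a
     * rewritable block device on top of media that is written in pages and
     * erased in large blocks. Purely illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGES    64          /* physical flash pages on the device */
    #define LBAS     48          /* logical sectors advertised to the host */
    #define UNMAPPED UINT32_MAX

    static uint32_t map[LBAS];   /* logical address -> physical page */
    static int next_free = 0;    /* next never-written page (no GC here) */

    /* Overwrites never touch the old page: the FTL writes a fresh page and
     * re-points the map, leaving the stale page to be erased later. */
    static void ftl_write(uint32_t lba) {
        map[lba] = (uint32_t)next_free++;
    }

    /* TRIM is just the host telling the FTL that a logical address no
     * longer matters, so its old page is reclaimable garbage. */
    static void ftl_trim(uint32_t lba) {
        map[lba] = UNMAPPED;
    }

    int main(void) {
        for (int i = 0; i < LBAS; i++) map[i] = UNMAPPED;
        ftl_write(7);            /* first write of logical sector 7 */
        ftl_write(7);            /* "overwrite" lands in a new physical page */
        printf("sector 7 now lives in physical page %u\n", map[7]);
        ftl_trim(7);
        printf("%d of %d physical pages written so far, all now stale\n",
               next_free, PAGES);
        return 0;
    }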

Also on the topic of flash, APFS includes TRIM support. TRIM is a command in the ATA protocol that allows a file system to indicate to an SSD (specifically to its FTL) that some space has been freed. SSDs require significant free space and perform better when there's more of it; they include more physical space than they advertise. For example, my 1TB SSD includes 1TB (2^40 = 1024^4) bytes of flash but only reports 931GB of available space, sneakily matching the storage industry's self-serving definition of 1TB (1000^4 = 1 trillion bytes). With more free space, FTLs can trade off space efficiency for performance and longevity. TRIM has become expected of file systems; it's unsurprising that APFS supports it. The problem with TRIM is that it's only useful when there's free space: it's something of a benchmark special. Once your disk is mostly full (as mine are in my laptop and phone basically at all times) TRIM doesn't do anything for you. I doubt that TRIM will bring any discernible benefit for APFS users beyond the placebo effect of feature parity.


APFS also focuses on latency: Apple's number one goal is to avoid the beach ball of doom. APFS addresses this with I/O QoS (quality of service) to prioritize accesses that are immediately visible to the user over background activity that doesn't have the same time-constraints. This is inarguably a benefit to users and a sophisticated file system capability.
Data Integrity
Arguably the most important job of a file system is preserving data integrity. Here's my data, don't lose it, don't change it. If file systems could be trusted absolutely then the "only" reason for backup would be the idiot operators (i.e. you and me). There are a few mechanisms that file systems employ to keep data safe.
Redundancy


APFS makes no claims with regard to data redundancy. As Apple's Eric Tamura noted at WWDC, most Apple devices have a single storage device (i.e. one logical SSD) making RAID, for example, moot. Instead, redundancy comes from lower layers such as Apple RAID (apparently a thing), hardware RAID controllers, SANs, or even the "single" storage devices themselves.
As an aside, note that SSDs in most Apple products where APFS will run include multiple more-or-less independent NAND chips. High-end SSDs do implement data redundancy within the device, but it comes at the price of reduced capacity and performance. As noted above, the "flash-optimization" of APFS doesn't actually extend much below the surface of the standard block device interface, but the raw materials for innovation are there.
Also, APFS removes the most common way for a user to achieve local data redundancy: copying files. A copied file in APFS actually creates a lightweight clone with no duplicated data. Corruption of the underlying device would mean that both "copies" were damaged, whereas with full copies localized data corruption would affect just one.

Crash Consistency
Computer systems can fail at any time—crashes, bugs, power outages, whatever—so file systems need to anticipate and recover from these scenarios. The old-old-old school method is to plod along and then have a special utility to check and repair the file system during boot (fsck, short for file system check). More modern systems labor to achieve an always-consistent format, or only narrow windows of inconsistency, obviating the need for the full, expensive fsck. ZFS, for example, builds up new state on disk and then atomically transitions from the previous state to the new one with a single atomic operation.
Overwriting data creates the most obvious opening for inconsistency. If the file system needs to overwrite several regions there is a window where some regions represent the new state and some represent the former state. Copy-on-write (COW) is a method to avoid this by always allocating new regions and then releasing old ones for reuse, rather than modifying data in-place. APFS claims to implement a "novel copy-on-write metadata scheme"; APFS lead developer Dominic Giampaolo emphasized the novelty of this approach without delving into the details. In conversation later, he made it clear that APFS does not employ the ZFS mechanism of copying all metadata above changed user data which allows for a single, atomic update of the file system structure.
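For readers who haven't seen the pattern, here's a deliberately tiny sketch of the general idea as ZFS-style file systems apply it: write the new blocks off to the side, then make them live with a single atomic update of the root pointer. It illustrates the technique only—it is not APFS's scheme, whose details Apple hasn't shared.

    /* cow_sketch.c -- toy illustration of copy-on-write with an atomic root
     * switch; nothing like APFS's actual on-disk format. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct node {                /* stand-in for an on-disk metadata block */
        char payload[32];
        struct node *child;
    };

    static _Atomic(struct node *) root;  /* stand-in for the uberblock pointer */

    /* Write a new leaf plus a new copy of every block on the path to it
     * (here just the root), then publish the new tree with one atomic pointer
     * switch. A crash before the switch leaves the old, consistent tree in
     * place; the orphaned new blocks are merely space to reclaim. */
    static void cow_update(const char *new_payload) {
        struct node *old_root = atomic_load(&root);

        struct node *new_leaf = calloc(1, sizeof(*new_leaf));
        snprintf(new_leaf->payload, sizeof(new_leaf->payload), "%s", new_payload);

        struct node *new_root = malloc(sizeof(*new_root));
        *new_root = *old_root;           /* copy, never modify in place */
        new_root->child = new_leaf;

        atomic_store(&root, new_root);   /* the single atomic transition */
        /* old_root and the blocks it referenced can now be freed/reused */
    }

    int main(void) {
        struct node *initial = calloc(1, sizeof(*initial));
        snprintf(initial->payload, sizeof(initial->payload), "v1");
        atomic_store(&root, initial);

        cow_update("v2");
        printf("live data: %s\n", atomic_load(&root)->child->payload);
        return 0;
    }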
It's surprising to see that APFS includes fsck_apfs—even after asking Giampaolo I'm not sure why it would be necessary. For comparison, I don't believe there's been an instance where fsck for ZFS would have found a problem that the file system itself didn't already know how to detect. But Giampaolo was just as confused about why ZFS would forego fsck, so perhaps it's just a matter of opinion.

Checksums
Notably absent from the APFS intro talk was any mention of checksums. A checksum is a digest or summary of data used to detect (and correct) data errors. The story here is surprisingly nuanced. APFS checksums its own metadata, but not user data. The justification for checksumming metadata is strong: there's not much of it relative to user data (so the checksums don't consume much storage) and losing metadata can cast a potentially huge shadow of data loss. If, for example, metadata for a top-level directory is corrupted, then potentially all data on the disk could be rendered inaccessible. ZFS duplicates metadata (and triple-duplicates top-level metadata) for exactly this reason.
Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both NAND flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The Apple engineers contend that Apple devices basically don't return bogus data. NAND uses extra data, e.g. 128 bytes per 4KB page, so that errors can be corrected and detected. (For reference, ZFS uses a fixed size 32 byte checksum for blocks ranging from 512 bytes to megabytes. That's small by comparison, but bear in mind that the SSD's ECC is required for the expected analog variances within the media.) The devices have a bit error rate that's low enough to expect no errors over the device's lifetime. In addition there are other sources of device errors where a file system's redundant check could be invaluable. SSDs have a multitude of components, and in volume consumer products they rarely contain end-to-end ECC protection, leaving the possibility of data being corrupted in transit. Further, their complex firmware can (does) contain bugs that can result in data loss.
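To make the ZFS comparison concrete: those 32 bytes are, by default, a Fletcher-4 checksum—four 64-bit running sums over the block's 32-bit words. Here's a simplified sketch of the computation (my own illustration, not code lifted from ZFS, and certainly not from APFS):

    /* fletcher4_sketch.c -- simplified take on the Fletcher-4 style checksum
     * ZFS uses by default: four 64-bit running sums over 32-bit words,
     * yielding a 32-byte (256-bit) digest per block. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct cksum { uint64_t a, b, c, d; };

    static struct cksum fletcher4(const uint32_t *words, size_t nwords) {
        struct cksum k = {0, 0, 0, 0};
        for (size_t i = 0; i < nwords; i++) {
            k.a += words[i];
            k.b += k.a;
            k.c += k.b;
            k.d += k.c;
        }
        return k;
    }

    int main(void) {
        uint32_t block[4096 / sizeof(uint32_t)] = {0};  /* a 4KB data block */
        memcpy(block, "hello, world", 12);

        struct cksum k = fletcher4(block, sizeof(block) / sizeof(block[0]));
        printf("%016llx %016llx %016llx %016llx\n",
               (unsigned long long)k.a, (unsigned long long)k.b,
               (unsigned long long)k.c, (unsigned long long)k.d);
        return 0;
    }

A digest like this stored alongside every block is what lets a file system say definitively whether the data it read back is the data it originally wrote.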
The Apple folks were quite interested in my experience with regard to bit rot (aging data silently losing integrity) and other device errors. I've seen many instances where devices raised no error but ZFS (correctly) detected corrupted data. Apple has some of the most stringent device qualification tests for its vendors; I trust that they really do procure the best components. Apple engineers I spoke with claimed that bit rot was not a problem for users of their devices, but if your software can't detect errors then you have no idea how your devices really perform in the field. ZFS has found data corruption on multi-million dollar storage arrays; I would be surprised if it didn't find errors coming from TLC (i.e. the cheapest) NAND chips in some of Apple's devices. Recall the (fairly) recent brouhaha regarding storage problems in the high-capacity iPhone 6. At least some of Apple's devices have been imperfect.
As someone who has data he cares about on a Mac, who has seen data lost from HFS, and who knows that even expensive, enterprise-grade equipment can lose data, I would gladly sacrifice 16 bytes per 4KB—less than 1% of my device's size.
Scrub
As data ages you might occasionally want to check for bit rot. Likely fsck_apfs can accomplish this, though as noted there's no data redundancy and no checksums for user data, so scrub would only help to find problems and likely wouldn't help to correct them. And if it makes it any easier for Apple to reverse course, let's say it's for the el cheap-o drive I bought from Fry's, not for the gold-plated device I got from Apple.

Summing Up
I'm not sure Apple absolutely had to replace HFS+, but likely they had passed an inflection point where continuing to maintain and evolve the 30+ year old software was more expensive than building something new. APFS is a product born of that assessment.
Based on what Apple has shown I'd surmise that its core design goals were:
    •    satisfying all consumers (laptop, phone, watch, etc.)
    •    encryption as a first-class citizen
    •    snapshots for modernized backup
Those are great goals that will benefit all Apple users, and based on the WWDC demos APFS seems to be on track (though the macOS Sierra beta isn't quite as far along).

In the process of implementing a new file system the APFS team has added some expected features. HFS was built when 400KB floppies ruled the Earth (recognized now only as the ubiquitous and anachronistic save icon). Any file system started in 2014 should of course consider huge devices, and SSDs—check and check. Copy-on-write (COW) snapshots are the norm; making the Duplicate command in the Finder faster wasn't much of a detour. The use case is unclear—it's a classic garbage can theory solution in search of a problem—but it doesn't hurt and it makes for a fun demo. The beach ball of doom earned its nickname; APFS was naturally built to avoid it.
There are some seemingly absent or ancillary design goals: performance, openness, and data integrity. Squeezing the most IOPS or throughput out of a device probably isn't critical on watchOS, and it's relevant only to a small percentage of macOS users. It will be interesting to see how APFS performs once it ships (measuring any earlier would only misinform the public and insult the APFS team).
The APFS development docs have a bullet on open source: "An open source implementation is not available at this time." I don't expect APFS to be open source at this time or any other, but prove me wrong, Apple. If APFS becomes world-class I'd love to see it in Linux and FreeBSD—maybe Microsoft would even jettison their ReFS experiment. My experience with OpenZFS has shown that open source accelerates that path to excellence. It's a shame that APFS lacks checksums for user data and doesn't provide for data redundancy. Data integrity should be job one for a file system, and I believe that's true for a watch or phone as much as it is for a server.
APFS will be an improvement in stability for Apple users of all kinds, on every device. There are some clear wins and some missed opportunities. Now that APFS has been shared with the world, the development team is probably listening. While Apple is clearly years past the decision to build from scratch rather than adopting existing modern technology, there's time to raise the priority of data integrity and openness. I'm impressed by Apple's goal of using APFS by default within 18 months. Regardless of how it goes, it will be an exciting transition.
