nix nfs
May 7, 2026 - ⧖ 14 min
A short and sweet post today.
I run a server at my home with ~48TiB in RAIDZ2, and it's full of fun stuff like legally acquired media and service application data. To make this content easily available on my other machines, I use the Network File System (NFS)1.
The biggest hurdle to using NFS is that the UIDs/GIDs on your client and server must match up precisely. Failure to do so can cause service interruptions, or worse, security vulnerabilities. Fortunately, by setting them explicitly in our configs, Nix affords us an easy way to ensure that our UIDs/GIDs are consistent across all our machines.
Overview
- Server Setup
- Client Setup
- File Permissions
Network File System
NFS works on a client-server model: one machine, in this case my archive server (which hosts the 48TiB storage pool), allows client machines to connect and read and write to these disks over the network.
Server Setup
You can set up NFS through Nix's services.nfs attribute set2, which is mercifully short.
{ config, ... }:
let
  # Using the Wireguard flake from my other posts, we can easily set the
  # correct subnet in our exports
  network = config.networking.wgqt.interfaces.<net>.cidr;
  # This machine's address on the trusted interface
  address = config.networking.wgqt.interfaces.<net>.address;
  # The Wireguard address of the machine which runs Sharkey
  sharkey = config.networking.wgqt.interfaces.<net>.peers.<sharkey-machine>.address;
  # Path to the large ZFS pool
  zpool = "/zpools/hdd";
in
{
  # Enable sharing of the zpools over NFS
  services.nfs = {
    server = {
      enable = true;
      # ro     - read only
      # rw     - read-write
      # fsid   - file system ID
      # async  - allow asynchronous writes (quicker)
      # wdelay - batch writes to disk if another write command is inbound
      # root_squash      - remote root users have their UIDs squashed to 'nobody'
      # no_subtree_check - needed for ZFS file system boundary file handles
      exports = ''
        # FILE SYSTEM ROOT
        ${zpool} ${network}(rw,fsid=0,no_subtree_check)
        # WORKING DIRECTORIES
        ${zpool}/apps ${network}(ro,async,wdelay,root_squash,no_subtree_check)
        # Only machines which run the matching program can connect to the dataset
        ${zpool}/apps/sharkey ${sharkey}(rw,async,wdelay,root_squash,no_subtree_check)
        # ... Other App Datasets
        # READ ONLY
        ${zpool}/media ${network}(ro,root_squash,no_subtree_check)
        ${zpool}/media/music ${network}(ro,sync,root_squash,no_subtree_check)
        ${zpool}/media/images ${network}(ro,sync,root_squash,no_subtree_check)
        ${zpool}/media/videos ${network}(ro,sync,root_squash,no_subtree_check)
        ${zpool}/media/series ${network}(ro,sync,root_squash,no_subtree_check)
        ${zpool}/media/movies ${network}(ro,sync,root_squash,no_subtree_check)
      '';
      # Only allow the latest version of NFS
      extraNfsdConfig = ''
        vers2=off
        vers3=off
        vers4=on
        vers4.0=off
        vers4.1=off
        vers4.2=on
      '';
      # Only allow connections on this interface/address
      hostName = address;
      # Number of threads to use
      nproc = 8;
    };
  };
  # Ensure the port is open (only TCP for NFS >= v4)
  networking.firewall.allowedTCPPorts = [ 2049 ];
}
Networking
The exports specify a subnet of addresses which are allowed to access the share. I pick a subnet owned by one of the Wireguard interfaces I set up using my wgqt-flake [ source ] [ series ]. This encrypts all of our data in transit and adds another layer of security: any client connecting to the server must first be able to join the VPN, which, by virtue of how Wireguard works, requires pre-agreement between both machines.
hostName also makes the NFS daemon listen only on an address on the same secured interface.
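If you don't have a flake handing you these values, nothing stops you from hard-coding them. The sketch below uses made-up placeholder addresses, but the shape of the export is the same:

{ ... }:
let
  # Hypothetical values - substitute your own VPN subnet and address
  network = "10.100.0.0/24";
  address = "10.100.0.1";
  zpool = "/zpools/hdd";
in
{
  services.nfs.server = {
    enable = true;
    exports = ''
      ${zpool} ${network}(rw,fsid=0,no_subtree_check)
    '';
    hostName = address;
  };
  networking.firewall.allowedTCPPorts = [ 2049 ];
}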
Exports
I export each dataset under apps/ as read-write to the machine that runs the matching application, because it's the backing store for apps which might be running remotely. For example, my Sharkey3 server runs on a remote machine, but all the data in my users' drives is securely backed up to my ZPool in my armoire.
I export my media/** directories as read-only, since only the local machine should be managing those files; my clients should only be consuming them. I also need to export the media/ directory itself because each of the sub-'directories' is actually a child dataset and therefore another filesystem; exporting the parent lets NFS calculate a path into the child datasets (e.g. media/movies). Alternatively, I could have added the crossmnt flag to the root export to automatically traverse filesystem boundaries, but I prefer the explicit approach.
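For completeness, a sketch of what the crossmnt variant could look like (not my actual config): the child datasets no longer need their own export lines, at the cost of giving up per-dataset flags.

exports = ''
  # Sketch: crossmnt on the root export lets clients descend into child
  # datasets without a separate export line for each one
  ${zpool} ${network}(rw,fsid=0,crossmnt,no_subtree_check)
'';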
Flags
The flags are briefly explained in the code snippet above, but I want to expound upon a few of the more obtuse ones:
fsid=0
By exporting the top level directory and specifying fsid=0, I make it the root of the NFS pseudo-filesystem, so all other exports are addressed relative to this path. This allows clients to abbreviate their mounts (e.g. they can import /apps instead of /zpools/hdd/apps), and prevents them from addressing paths outside this root by mistake or malevolence.
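To make the difference concrete, here's a sketch of the same client mount with and without an fsid=0 root on the server (the address is a placeholder):

{
  # With fsid=0 on the server: paths are relative to the export root
  fileSystems."/imports/music".device = "10.100.0.1:/media/music";
  # Without it, the client would have to spell out the full server-side path:
  # fileSystems."/imports/music".device = "10.100.0.1:/zpools/hdd/media/music";
}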
no_subtree_check
If you export a subdirectory of a larger filesystem, e.g. my zpools and ZFS datasets, then NFS can prevent clients from accessing files outside of that subtree. This stops users from guessing inodes outside the subtree, or hardlinking/renaming files out of it. It also slows down the server, because it must check every access by recreating the path to ensure the file hasn't been moved outside the subtree. Since I implicitly trust the connecting machines, and it's okay for them to access any file inside the share, I forgo this check. In addition, ZFS datasets mount themselves within the server's host filesystem (since I don't boot from ZFS), creating boundary points which can confuse the check and produce stale file handles. I don't want to deal with obscure stale file handle bugs, so I accept this risk instead.
root_squash
Now we get to NFS's biggest disadvantage - its primitive permission structure. What stops a client from running as root on their own machine and overriding all the permissions in a directory? With root_squash, the NFS server takes any file operation running with root permissions (UID = 0) and changes its UID to a special unprivileged UID (typically the nobody UID, 65534).
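If I wanted to go further and distrust the non-root users on a client too, NFS also offers all_squash, which maps every remote UID/GID to the anonymous user. A hedged sketch of what such an export line could look like (not one I actually use):

# Map *all* remote users to a single unprivileged identity
${zpool}/media ${network}(ro,all_squash,anonuid=65534,anongid=65534,no_subtree_check)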
extraNfsdConfig
This ensures that clients can only connect using NFS version 4.2. NFSv4 has many desirable performance and security improvements - by enforcing this on the server side, I ensure I never forgo them by forgetting to specify the version on the client side.
Client Setup
Client setup is pretty easy: just declare an NFS filesystem and mount it where you want. In my case, I mount all my NFS shares under a top-level /imports directory. You'll notice that we don't have to specify the full path, e.g. /zpools/hdd/media/music; this is because we set the root of the NFS filesystem on the server to /zpools/hdd by giving that export a file system ID of 0 (fsid=0).
{ config, ... }:
let
  address = config.networking.wgqt.interfaces.<net>.peers.<nfs-server>.address;
  # Flags common to R/W and RO mounts
  common = [
    # Retry operations indefinitely on server failure
    "hard"
    # Use NFS v4.2
    "nfsvers=4.2"
    # When to mount/unmount:
    # Don't automatically mount at boot time
    "noauto"
    # Mount when first accessed
    "x-systemd.automount"
    # Unmount after 10 minutes of inactivity
    "x-systemd.idle-timeout=600"
  ];
  # Flags for read only mounts
  options = [
    # Read-only mount
    "ro"
    # Cache the files locally using cachefilesd
    "fsc"
    # Mark this as a network filesystem, so systemd waits for the network
    # before trying to mount it
    "_netdev"
    # Don't update access times in the file attributes
    "noatime"
    "nodiratime"
  ] ++ common;
in
{
  # Cache read only files locally for faster access
  services.cachefilesd = {
    enable = true;
    extraConfig = ''
      brun 10%
      bcull 7%
      bstop 3%
      frun 10%
      fcull 7%
      fstop 3%
    '';
  };
  #########################
  # NETWORKED FILESYSTEMS #
  #########################
  fileSystems = {
    # R/W application data
    "/imports/apps" = {
      device = "${address}:/apps";
      fsType = "nfs";
      options = common;
    };
    # Music Library
    "/imports/music" = {
      device = "${address}:/media/music";
      fsType = "nfs";
      inherit options;
    };
    # ... additional mounts as necessary
  };
}
Flags
They're explained briefly above, but I thought I would go into more depth on some of the more obtuse ones. If you want to go really in-depth, check out nfs(5)4.
hard
This controls what to do when the server isn't reachable. If set to soft, file operations time out and return an I/O error. If set to hard, the NFS client will keep retrying. Set it based on your application needs. I prefer hard: when I rebuild my server, the NFS daemon might restart, and I would rather my applications hang for a second and then resume as if nothing happened than deal with I/O errors and possible crashes.
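If you'd rather have operations fail fast than hang, the soft-mount counterpart would look something like the sketch below; the values are illustrative, see nfs(5) for the exact semantics of timeo and retrans.

# Alternative: fail with an I/O error after ~3 retries of 10 seconds each
common = [
  "soft"
  "timeo=100"  # retransmission timeout, in tenths of a second
  "retrans=3"  # number of retries before reporting an error
  "nfsvers=4.2"
];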
noatime/nodiratime
I can improve performance by not updating the last time a file or directory was accessed by an NFS client. I personally prefer this, as I like the NFS server to do all the file management, and I only want access times to reflect when the server touches a file, not when a client reads it. For application data, the last accessed time might matter to the functioning of a program, so these flags are only applied to the read-only options set.
sec=sys
We trust that the UIDs/GIDs are not falsified for the purposes of accessing data. This is the default security flavour, which is why it doesn't appear in the options above. We use NixOS to keep our UIDs/GIDs in sync, and even if an attacker compromised one of our clients, it could still only mess with its own application data.
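If you did need cryptographic proof of identity rather than trusting client UIDs, NFSv4 supports Kerberos security flavours. The client-side option would look roughly like the line below, though it requires a full Kerberos setup on both ends that I'm not going to cover here:

# Hypothetical: require Kerberos authentication, integrity, and encryption
options = [ "sec=krb5p" "nfsvers=4.2" "hard" ];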
Caching Files
We enable cachefilesd5 to allow local caching of read-only files. This should improve media playback on clients by allowing them to read ahead and cache the file they're playing back.
Is this worthwhile? Debatable.6 Over fast network connections it will likely not show a performance improvement, and it may even degrade performance by doubling the I/O contention: Linux already pre-reads the file and stores the pages in RAM, which is fast (unless your network is slow), and cachefilesd additionally writes that data out to disk and reads it back.
I choose to use cachefilesd because I want consistent performance. If my connection is spotty, this ameliorates the issue through amortization. In addition, I often listen to the same songs on repeat, and if I'm editing video I want that footage cached locally so I can scrub through it without redownloading it. Also, my backing store is slow (HDD disk go spinny slow) but my clients have fast NVMe drives, so I'm not worried about write contention issues.
extraConfig
This informs cachefilesd how much data to cache, when to start and stop culling data from the cache, and what to do in emergency low space situations.
Quick Guide
F -> Files (inodes)
B -> Blocks (disk space)
# When free space/files on the device fall below this amount, start evicting
# files from the cache
Cull -> Start Culling
# When free space/files on the device rise above this amount, stop evicting
# files and resume normal cache operation
Run -> Stop Culling
# When free space/files on the device fall below this amount, refuse all new
# writes until the situation improves
Stop -> Hard Limit
In our config, we start culling at 7% free space remaining and continue until we have at least 10% free space left. If we ever find ourselves with <=3% free space on the device, we shut the cache down until the situation resolves.
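As a worked example, on a hypothetical 100GiB cache partition these percentages mean culling kicks in once free space drops below 7GiB, keeps evicting until 10GiB is free again, and the cache refuses any new writes entirely once free space falls below 3GiB.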
Permissions
Using Nix we can explicitly set all of our UIDs/GIDs to ensure that our users and services can only access what they're supposed to. Since we only allow other machines on the VPN to access the NFS share, we trust they won't intentionally circumvent these controls; however, it is still good practice to adhere to the principle of least privilege7. I'm more concerned that I will accidentally clobber my own work than that an attacker will infiltrate it, and tight user permissions mean one more hurdle I must force myself over before shooting my foot off.
File Permissions
On the server, we can use systemd tmpfiles rules to ensure that our exported directories exist, have the correct permissions, and that any new files created in them have the correct ownership and permissions.
{ config, sharkey, ... }:
let
  # ...
  # Get the Sharkey UID from its flake
  sharkeyUid = toString sharkey.config.uid;
  # Path to the large ZFS pool
  zpool = "/zpools/hdd";
in
{
  # Ensure that the exported directories exist with the right ownership and permissions
  systemd.tmpfiles.rules = [
    # d   -> directory
    # 2   -> setgid directory - new files inherit the group
    # 774 -> RWX for owner + group; everyone else can read
    # -   -> these directories never expire (don't clean them up lol)
    # Top level is owned by root alone
    "d ${zpool} 2774 root root -"
    # Application data is owned by root alone
    "d ${zpool}/apps 2775 root root -"
    # Each application dataset is only accessible to its daemon
    "d ${zpool}/apps/sharkey 2774 ${sharkeyUid} ${sharkeyUid} -"
    # Other App Datasets...
    # Media files owned by root and media-players
    # (they manage the files on the server)
    "d ${zpool}/media 2775 root media-players -"
    "d ${zpool}/media/music 2775 root media-players -"
    "d ${zpool}/media/images 2775 root media-players -"
    "d ${zpool}/media/videos 2775 root media-players -"
    "d ${zpool}/media/series 2775 root media-players -"
    "d ${zpool}/media/movies 2775 root media-players -"
  ];
  # ...
}
I'm not going to write a one-shot systemd service which converts each file to the correct permissions - it's way faster to sudo chown -R root /zpools/hdd/media, sudo chgrp -R media-players /zpools/hdd/media, and sudo chmod -R 2774 /zpools/hdd/media etc. once, and then I won't have to wait for 48TiB of files to be checked every time I boot. Going forward, all new files will have the correct permissions.
User IDs
I create my users via my identities-flake. It sets mutableUsers to false (preventing attackers and myself from adding users non-declaratively) and explicitly sets each user's UID so I can rely on it in other places. You can easily do the same with a single line of Nix config.
{
  users.users.<username> = {
    uid = 1000;
    # ...
  };
}
Service IDs
Similar to user IDs, I set each service's UID explicitly, so the server knows which UID/GID to assign to which directory. My Sharkey flake explicitly sets the Sharkey daemon's UID, and my NFS server can use that same flake's information to use the same UID in its tmpfiles rules.
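If you don't want to pull the UID out of a flake, the same effect falls out of plain NixOS options. The sketch below pins a hypothetical UID (994 is made up) for a sharkey system user and then reuses it in the tmpfiles rule, so the number is only defined in one place:

{ config, ... }:
{
  # Pin the daemon's UID/GID explicitly
  users.users.sharkey = {
    isSystemUser = true;
    uid = 994;
    group = "sharkey";
  };
  users.groups.sharkey.gid = 994;
  # Reuse the same value elsewhere instead of repeating the number
  systemd.tmpfiles.rules = [
    "d /zpools/hdd/apps/sharkey 2774 ${toString config.users.users.sharkey.uid} sharkey -"
  ];
}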
Fin
I really should wrap this in a flake with some custom options so that my shares exported always match my shares imported, but this will have to do for now. Forgive me.