Why we built our own NFS client in Golang
The first step in building a metadata index, as you might imagine, is actually getting the file metadata so that we can index it. This is easy — all we have to do is mount the NFS server:
sudo mount nfs.host.com:/nfs/mount/path localmount
and then write code to walk the localmount directory, collecting metadata.
Ah, if only it had been that easy, we would have saved many late nights and I could end this article here. The problem with reading from a mounted volume is that you’re limited by what the kernel’s NFS client can do. And the Linux kernel’s NFS client implementation is not exactly built for high performance. We can figure out what the NFS client is doing by inspecting the network traffic. Here’s a run of Wireshark as we walk the mounted directory:
Okay, so there are a few interesting things to note here. First, looking through the calls, we appear to be issuing, for every directory we want to explore, a GETATTR call (which returns file attributes), an ACCESS call (which returns allowed access rights), and a READDIRPLUS call. READDIRPLUS is the NFS operation that gets us directory names and metadata, so what are the other two operations there for?
Well, as it turns out, this is the client being extremely polite. For every file or directory that it encounters, the client first knocks on the door to check its attributes (mode bits, owner, etc) and access rights (READ, LOOKUP, DELETE). Only if the client concludes that it has permission to come in and that the read will succeed does it issue a READDIRPLUS. While knocking first may be polite in social settings, it’s entirely unnecessary here — it would be much faster to barge through every door we see, and only worry about the consequences if we encounter a locked door later on.
Another issue becomes apparent as we look at the timestamps — there is only one request outstanding at a time! The NFS client assumes you’re only performing one operation at a time and so creates a single connection capable of serially handling requests. This makes sense for manually browsing, but for automatic scanning, this strategy is about as slow as trying to suck up a lake through a crazy straw. To go faster, we need to perform more than one operation at a time (make the straw wider), use multiple connections (add more straws), and optimize the performance of each operation (make the straws less winding) — all of which require building our own NFS client.
The Igneous NFS Client
The core of the Igneous NFS client is an RPC (remote procedure call) connection. This connection is built on top of the connection infrastructure provided by Golang’s net package and supports several outbound requests at once. A group of these connections comprises an RPC connection pool, and requests are distributed across this pool.
All reads and writes from this connection are streamed to or from our purpose-built encoding library that encodes/decodes into XDR — the NFS RPC format. XDR encoding was an area where we were able to make significant performance improvements over time — read this post by Igneous’ cofounder, Byron Rakitzis, for a deeper dive on that.
We can compare our NFS client to a mounting solution to measure performance side-by-side. To do so, we set up a simple test that connects to a small NetApp server with typical latency characteristics. Then, we run a parallel walker using the two NFS client implementations — a reader for a locally mounted directory, and our user-space NFS client.
kernel mount: walked 1923 files in 15.22401775s
userspace client: walked 1923 files in 308.955766ms
These results are consistent across multiple runs, confirming that the difference is not due to caching effects. With our NFS client, we’re able to crawl this file system about 50 times faster!
Clearly, it was worth it to write our own NFS client. For similar reasons, and with similar methods and results, we wrote our own SMB client, and as a result, can pull metadata off of any filer very quickly.