Recently, I needed to marshal a Go struct to JSON and BSON (binary JSON, a serialization format developed by MongoDB), but one of the fields in my struct was an interface that needed special handling.
Typically, it’s easy enough to provide custom marshaling functions: when MyStruct needs to be marshaled, the marshaler will call our implementation of the marshal functions to marshal the value for myStruct.Custom. All we need to do is provide MarshalJSON and MarshalBSON functions for every implementation of the interface.
Unfortunately, in my case, this was not an option…
As a Go developer, you may sometimes find yourself needing two versions of the same struct. Often, this is evidence of a code smell, but at times it’s unavoidable.
Maybe you have two versions of a struct, one for the API-facing layer and one for the database layer, and need to ensure that whenever you or someone on your team adds a field to struct A, they do the same to struct B. Or maybe you have an auxiliary struct to enable a custom JSON marshal, and need to ensure the auxiliary struct always matches the original.
If so, an…
In previous posts of this series, we’ve been focusing mostly on the algorithms and strategies that we used to build Data Discover, from reading to transporting and finally indexing and querying the data. However, even with carefully chosen data structures and algorithms, every part of the process will quickly run into the limits of what a single machine can do. The only way to get beyond this point is to split the work amongst multiple computers. Therefore, it’s critical…
We’ve come a long way since we first started this series! First, we pulled data from your storage device onto a VM in your datacenter. Then, the VM packaged this data and wrote it into a sorted list in the cloud. Now, finally, we are ready to use this data to build a search index, a metadata structure that can be used to process queries and return results.
Let’s suppose our entire dataset — all the files on the customer…
In the previous post, we talked about how we could efficiently pull metadata from billions of files in a file system. But this metadata alone is not enough to quickly answer queries. Consider, for example, the sample question we posed at the beginning: “What is the total size of all .mp3 files in our datacenter?”. …
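To see why raw metadata falls short, here is what answering that question looks like without an index: a full linear scan over every record. The Record type and the data are made up for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// Record is one file's raw metadata as pulled from the filesystem.
type Record struct {
	Path string
	Size int64
}

// totalMP3Size answers the sample query with a full linear scan,
// which is fine for a handful of records and hopeless for billions.
func totalMP3Size(records []Record) int64 {
	var total int64
	for _, r := range records {
		if strings.HasSuffix(r.Path, ".mp3") {
			total += r.Size
		}
	}
	return total
}

func main() {
	records := []Record{
		{"/a/song.mp3", 5_000_000},
		{"/b/doc.txt", 1_000},
		{"/c/mix.mp3", 7_000_000},
	}
	fmt.Println(totalMP3Size(records)) // 12000000
}
```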
The first step in building a metadata index, as you might imagine, is actually getting the file metadata so that we can index it. This is easy — all we have to do is mount the NFS server:
sudo mount nfs.host.com:/nfs/mount/path localmount
and then write code to walk the localmount directory, collecting metadata.
Ah, if only it had been that easy, we would have saved many late nights and I could end this article here. The problem with reading from…
How many files do you have on your computer? How much total space do they take up? Can you, armed with only the file name, find a specific file that you worked on 3 years ago? If your disk is getting full, which files would you delete or move first, in order to free up space?
Chances are that you know where to go to answer these questions, especially if you have file management software. These tools usually build a metadata index, a data structure that maps file names to metadata like access time, modified time, file size etc., and…
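A toy version of such an index, just to fix the idea (the file names and metadata here are invented):

```go
package main

import "fmt"

// Meta is a toy record of the metadata such an index stores.
type Meta struct {
	Path    string
	Size    int64
	ModTime int64 // Unix seconds
}

func main() {
	// index maps file names to their metadata (made-up entries).
	index := map[string]Meta{
		"thesis-draft.doc": {Path: "/backups/old/thesis-draft.doc", Size: 120_000, ModTime: 1_500_000_000},
		"mix.mp3":          {Path: "/music/mix.mp3", Size: 7_000_000, ModTime: 1_600_000_000},
	}

	// Finding a file you worked on years ago, armed only with its
	// name, becomes a map lookup instead of a scan of the disk.
	if m, ok := index["thesis-draft.doc"]; ok {
		fmt.Println(m.Path, m.Size)
	}
}
```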