Speeding up fdupes

tl;dr: use jdupes

I was merging some fileserver content, and realised I would inevitably end up with duplicates. “Aha,” I thought, “time to use good old fdupes.” Well, yes, except a few hours later fdupes was still only a few percent done. It turns out that running it on a merged mélange of files several terabytes in size is not a speedy process.

Enter jdupes, Jody Bruchon’s fork of fdupes. It’s reportedly many times faster than the original, but that’s only half the story. The key, as with things like Project Euler, is to figure out the smart way of doing things; in this case, the smart way is to look for duplicates in only a subset of files. That might be between photo directories, if you think you might have imported duplicates.
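For instance, if the goal were to check a freshly imported directory against an existing photo library, a minimal run might look like the sketch below (the paths are hypothetical; -r recurses into subdirectories and -S prints the sizes of duplicate sets):

jdupes -r -S ~/photos/2016-import ~/photos/library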

In my case, I care about disk space (still haven’t got that LTO drive), so restricting the search to files over, say, 50 megabytes seemed reasonable. I could probably have gone higher. Even so, it finished in minutes rather than interminable hours.

jdupes -S -Z -Q -X size-:50M -r ~/storage/
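For reference: -r recurses into subdirectories, -S prints the size of each set of duplicates, -Z (soft abort) makes a Ctrl-C act on the matches found so far rather than throwing them away, -Q skips the byte-for-byte confirmation (more on that below), and the -X size filter is what restricts the search to files above the 50-megabyte threshold.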

NB: Jody Bruchon makes an excellent point below about the use of -Q. From the documentation:

-Q --quick skip byte-for-byte confirmation for quick matching
WARNING: -Q can result in data loss! Be very careful!

As I was going to manually review (± delete) the duplicates myself, potential hash collisions are not a huge issue. I would not recommend -Q if data loss is a concern, or if you are using the automated removal option.
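If in doubt, the safer course is simply to drop -Q and let jdupes do its full byte-for-byte verification; everything else about the command above stays the same, it just takes longer:

jdupes -S -Z -X size-:50M -r ~/storage/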

jdupes is in the Arch AUR and in some Debian repositories, but the source code is easy to compile in any case.

Wanted: One LTO-4/5/6/7 Drive!

I am something of a digital hoarder. I have files dating back to one of the earliest computers that anyone in my family owned. I think I even still have diskettes for an older word processor, the name of which escapes me at the moment. As such, I have slightly above-average storage requirements.

At present I handle these requirements with a Linux fileserver, using 3TB drives RAID6’d via mdadm. On top of that I use LVM to serve up some volumes for Xen, but that’s not strictly relevant here.
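For the curious, that kind of stack can be sketched roughly as follows; the device names, drive count and volume size here are made up for illustration, not what I actually run:

# Hypothetical example: six drives in a RAID6 array
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
# LVM on top of the array, with a logical volume carved out for a Xen guest
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -L 100G -n xen-guest1 storage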

Looking at the capacities of LTO makes me quite covetous. LTO tapes are small, capacious and reliable; with a few tapes, I could archive a fair amount of data. I could also move the tapes outside my house, and lo: offline, offsite backups!

Sadly, drives are expensive, unless you’re stepping back to relatively small-capacity* LTO-2 drives.

At present, given the cost of drives, some back-of-the-envelope calculations show that for any reasonable** dataset, simply buying hard drives (at the time of writing, 3TB drives are cheapest per GB) is the most cost-effective means of archiving. Given that hard drives are where the focus of development is, I don’t think this is likely to change soon.
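To make that concrete with purely illustrative numbers (not real quotes): suppose a second-hand LTO-5 drive cost $600 and 1.5TB tapes $20 each. Archiving 12TB would then be $600 + 8 × $20 = $760, before counting the HBA needed to connect the drive; four 3TB hard drives at roughly $90 apiece come to about $360, with no extra hardware needed. The drive cost dominates until the dataset gets very large.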

I’ll just have to wait for a going-out-of-business auction, and hope the liquidators overlook the value of the backup system…