[solved] js52: /usr/lib/libmozjs-52.so.0 exists in filesystem

Living on the edge in Arch Linux land is a fun activity everyone should try (at least once). However, a full system package upgrade caused the following today:

# pacman -Syyu
(...)
error: failed to commit transaction (conflicting files) 
js52: /usr/lib/libmozjs-52.so.0 exists in filesystem

I’m not the only one to have hit this issue. It seems the official way of getting past it is to rename the file, at least per this bug report.

Update: There’s an Arch news post that adds a modicum more information:

Due to the SONAME of /usr/lib/libmozjs-52.so not matching its file name, ldconfig created an untracked file /usr/lib/libmozjs-52.so.0. This is now fixed and both files are present in the package.

To pass the upgrade, remove /usr/lib/libmozjs-52.so.0 prior to upgrading.
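
In practice, that boils down to (as root):

# rm /usr/lib/libmozjs-52.so.0
# pacman -Syu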

I think this is the first time I’ve needed to perform a manual intervention for a package upgrade in all the time I’ve been running Arch; so, all in all, not bad.

Better Backups: Pick a System

You have backups, right?

— SuperUser’s chat room motto


This started out as an intro to bup. Somewhere along the way it underwent a philosophical metamorphosis.

I’m certainly not the first person to say that backups are like insurance: it’s a bit of a hassle to figure out which one will work best, you set it up and forget about it, and hopefully you won’t need it*.

Many moons ago, I had backups taken care of by a simple shell script. Later, this got promoted to a Python script which handled hourly, daily, weekly and monthly rotation, and saved space by using hard links (cp -al ...). It even differentiated between local and remote backups. That was probably my backup zenith, at least when time and effort are factored in.
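
For the curious, the hard-link trick looks roughly like this (a minimal sketch with illustrative paths, not the original script):

#!/bin/bash
# drop the oldest snapshot and shuffle the rest along
rm -rf /backup/daily.3
mv /backup/daily.2 /backup/daily.3
mv /backup/daily.1 /backup/daily.2
# hard-link copy of the newest snapshot: unchanged files take no extra space
cp -al /backup/daily.0 /backup/daily.1
# sync live data into the newest snapshot
rsync -a --delete /home/ /backup/daily.0/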

Really, the more sensible approach is to use an existing tried-and-tested solution rather than reinvent the wheel. So I moved to rdiff-backup and it was good; its simplicity meant I could set up ‘fire-and-forget’ backups via cron. I was able to restore files from backups that I had set up and then forgotten about.
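
A typical ‘fire-and-forget’ crontab entry might look like this (hypothetical paths and hostname, not my actual setup):

# m h dom mon dow  command
30 2 * * * rdiff-backup /home backuphost::/srv/backups/home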

With the recent expansion of the fileserver ongoing, now’s a good time to take stock and re-evaluate options. I have created a Xen DomU dedicated to backups (aptly named pandora) with its own dedicated logical volume. From here, I need to decide:

1) whether to keep going with rdiff-backup, or switch to e.g. bup or borg;
2) whether different machines should use different schedules or approaches (answer: probably); and if so, what those would be (answer: …)

I don’t want to spend too long on this — premature optimisation being the root of all evil — but the aim is to create a backup system which is:

  • robust
  • reliable
  • maintenance-minimal

*: If you *do* use your backups or insurance a lot, it’s probably a sign that something is going wrong somewhere

File creation time on ext4 (Linux)

tl;dr: since coreutils stat does not show file ‘birth’ time, use debugfs -R 'stat <inode>' <device> instead

I was curious as to when I wrote a particular time-saving script, so I figured I would look up the file creation time:

$ stat ~/scripts/goprofootage.sh
 File: /home/robert/scripts/goprofootage.sh
  Size: 1001            Blocks: 8          IO Block: 4096   regular file
Device: fe01h/65025d    Inode: 792618      Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1000/  robert)   Gid: ( 1000/  robert)
Access: 2018-01-07 08:23:04.816666962 +0000
Modify: 2017-05-13 19:09:30.760094062 +0100
Change: 2017-05-13 19:09:30.760094062 +0100                        
 Birth: -

Err, well. No birth date? ext4 does support file creation timestamps, so it’s just a simple matter of getting at them.

Enter debugfs, part of e2fsprogs (at least on this Arch install). We can stat an inode to get a creation time:


$ stat -c %i ~/scripts/goprofootage.sh
792618
# debugfs -R 'stat <792618>'  /dev/mapper/840ssd-home
debugfs 1.43.7 (16-Oct-2017)
Inode: 792618   Type: regular    Mode:  0755   Flags: 0x80000
Generation: 3863725318    Version: 0x00000000:00000001
User:  1000   Group:  1000   Project:     0   Size: 1001
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 8
Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x59174bda:b53875b8 -- Sat May 13 19:09:30 2017
 atime: 0x5a51d8e8:c2b56548 -- Sun Jan  7 08:23:04 2018
 mtime: 0x59174bda:b53875b8 -- Sat May 13 19:09:30 2017
crtime: 0x58efaf27:2234f628 -- Thu Apr 13 18:02:31 2017
Size of extra inode fields: 32
EXTENTS:
(0):5096134

Or, if you’d rather combine the above into a one-liner (NB needs root):

 # debugfs -R "stat <$(stat -c %i ~/scripts/goprofootage.sh)>"  /dev/mapper/840ssd-home 2>/dev/null  | grep crtime | cut -d ' ' -f4-9
Thu Apr 13 18:02:31 2017
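
If you find yourself doing this often, you could wrap it in a small shell function (a hypothetical helper, not part of e2fsprogs; df --output=source looks up the backing device, and debugfs still needs root):

crtime() {
        local inode fs
        inode=$(stat -c %i "$1")
        fs=$(df --output=source "$1" | tail -n 1)
        debugfs -R "stat <${inode}>" "${fs}" 2>/dev/null | grep crtime
}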

Quick beets import tip

Starting with beets to manage and organise your music library? Read the ‘getting started’ guide? An additional quick tip:

Import your first few albums individually using

beet import -t $album_directory

The -t flag is for timid(ly).

Why? If you’re like me, you might not be in 100% agreement with how MusicBrainz represents the match metadata; and -t will ask for confirmation which you can either accept ([Enter] or A) or reject (U for ‘Use as-is’).

If the first few matches are fine, you can drop the flag; if not, you can figure out how to finesse it to import files to your liking via beets’s excellent plugins.

[Fixed] MySQL: Table is marked as crashed and last (automatic?) repair failed (+ WordPress)

tl;dr: run myisamchk on the problematic table

I’ve run into the following error in my Apache error.log recently:

Table 'database.tablename' is marked as crashed and last (automatic?) repair failed

Fortunately the fix is simple: run myisamchk on the table which is marked as crashed:


$ sudo su
# service mysql stop
# cd /var/lib/mysql/databasename
# myisamchk -r tablename
MyISAM-table 'tablename' is not fixed because of errors
Try fixing it by using the --safe-recover (-o), the --force (-f)
 option or by not using the --quick (-q) flag
# myisamchk -r -o -f tablename
Data records: 107435
Found block that points outside data file at 16166832
# service mysql start

I’ve run into these errors before due to running out of disk space on the (admittedly tiny) VPS I had.
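
Relatedly, if you’re ever unsure which table is the problem, myisamchk can check everything in one go (a sketch of the usual recipe, assuming the default /var/lib/mysql data directory, with MySQL stopped):

# cd /var/lib/mysql
# myisamchk --silent --fast */*.MYI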

I also had this problem with a WordPress database table, causing the often-seen and unhelpfully terse:

Error establishing a database connection

Interestingly, this wasn’t getting bounced to error.log, and I had to use the WordPress database repair screen to track down which table needed the fix (which was the same myisamchk). If you haven’t met that screen before: it lives at wp-admin/maint/repair.php, and needs WP_ALLOW_REPAIR defined in wp-config.php first.

All sorted now!

Speeding up fdupes

tl;dr: use jdupes

I was merging some fileserver content, and realised I would inevitably end up with duplicates. “Aha,” I thought, “time to use good old fdupes.” Well yes, except a few hours later, fdupes was still only at a few percent. Turns out running it on a merged mélange of files several terabytes in size is not a speedy process.

Enter jdupes, Jody Bruchon’s fork of fdupes. It’s reportedly many times faster than the original, but that’s only half the story. The key, as with things like Project Euler, is to figure out the smart way of doing things; in this case, the smart way is to find duplicates on a subset of files. That might be between photo directories, if you think you might have imported duplicates.

In my case, I care about disk space (still haven’t got that LTO drive), so restricting the search to files over, say, 50 megabytes seemed reasonable. I could probably have gone higher. Even so, it finished in minutes rather than interminable hours.

$ jdupes -S -Z -Q -X size-:50M -r ~/storage/

NB: Jody Bruchon makes an excellent point below about the use of -Q. From the documentation:

-Q --quick skip byte-for-byte confirmation for quick matching
WARNING: -Q can result in data loss! Be very careful!

As I was going to manually review (± delete) the duplicates myself, potential collisions are not a huge issue. I would not recommend using it if data loss is a concern, or if using the automated removal option.

jdupes is in Arch AUR and some repos for Debian, but the source code is easy to compile in any case.
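
If you do go the source route, it’s the usual make dance (repository URL as I recall it; check the project page to be sure):

$ git clone https://github.com/jbruchon/jdupes.git
$ cd jdupes
$ make
# make install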

Compressing Teamspeak 3 Recordings Using sox

tl;dr: Loop through the files in bash, sox them to FLAC

Success!

I’ve been combining fileserver contents recently, and I came across a little archive of Teamspeak 3 recordings:

$ du -sh .
483G /home/robert/storage/media/ts_recordings/

Eep.

I wrote a quick-and-dirty script to convert the files:


#!/bin/bash

n=0
total=$(ls *.wav | wc -l)
ls *.wav | while read file; do
        sox -q "${file}" "${file%.*}.flac"
        if [ -e "${file%.*}.flac" ]; then
                if [ -s "${file%.*}.flac" ]; then
                        # FLAC exists and is non-empty: safe to drop the WAV
                        rm "${file}"
                else
                        echo "${file%.*}.flac is zero-length!"
                fi
        else
                echo "Failed on ${file}"
        fi

        ((n++))
        if ! ((n % 10)); then
                echo "${n} of ${total}"
        fi
done

The script checks that the FLACs replacing the WAVs exist and are not zero-length before removing the original.

This was fine, but after it finished, I was still left with a bunch of uncompressed files in RF64 format, which sox unfortunately errored on.

It turns out sox 14.4.2 added RF64 read support, so I grabbed that on my Arch machine and converted the few remaining files (substituting wav → rf64 twice in the script above).
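
That is, roughly:

ls *.rf64 | while read file; do
        sox -q "${file}" "${file%.*}.flac"
        # plus the same existence/zero-length checks and progress counter as above
done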

The final result?

$ du -sh .
64G /home/robert/storage/raid6/media/ts_recordings/

400 gigs less space and still lossless? Ahh, much better.

[Solved] “Logical volume is used by another device”

tl;dr: use dmsetup remove before trying lvremove

Note: Volume group and logical volume names have been substituted here. I’m not entirely sure it’s necessary, but better safe than sorry. If following this, please use the names of your own volume group[s] and logical volume[s].

I am in the process of combining fileserver content, and so I have been touching parts of the system not usually looked at in the normal run of day-to-day operations. For some reason, on one of my logical volumes I had created a partition table and added a partition. Of course, it worked normally, so there was no reason to be aware of this (clearly I had blanked the fact that I’d done it at all not long after doing so) until recently.

The Problem

Logical volume vg/lv-old is used by another device.

After copying the data over to a new logical volume, I wanted to remove the now-unnecessary original logical volume that contained the partition. Easy, right?


# lvremove -v /dev/vg/lv-old
    DEGRADED MODE. Incomplete RAID LVs will be processed.
    Using logical volume(s) on command line
  Logical volume vg/lv-old is used by another device.

Okay, what’s using it? cat /proc/mounts reports that it isn’t mounted. lsof and fuser return nothing. Maybe retrying the command will work*… nope.
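
For reference, those checks were along these lines:

# grep lv-old /proc/mounts
# lsof /dev/vg/lv-old
# fuser -v /dev/vg/lv-old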

There are a bunch of posts around this, mostly saying “make sure it is unmounted first”, or “try using -f with lvremove”. And the old favourite: “a reboot fixed it”.

Find Out device-mapper’s Mapping

Well, the culprit in this case seemed to be device-mapper creating a mapping which counted as ‘in-use’. Check for the mapping via:


# dmsetup info -c | grep old
vg-lv--old       253   9 L--w    1    2      1 LVM-6O3jLvI6ZR3fg6ZpMgTlkqAudvgkfphCyPcP8AwpU2H57VjVBNmFBpLTis8ia0NE

Find Out Mapped Device

Then use that to find out what is holding it:


$ ls -la /sys/dev/block/253\:9/holders

drwxr-xr-x 2 root root 0 Dec 12 01:07 .
drwxr-xr-x 8 root root 0 Dec 12 01:07 ..
lrwxrwxrwx 1 root root 0 Dec 12 01:07 dm-18 -> ../../dm-18

Remove Device (via `dmsetup remove`)

Then do a dmsetup remove on that device-mapper device:


# dmsetup remove /dev/dm-18

Retry `lvremove`

And you’re good to go with lvremove:


# lvremove -v /dev/vg/lv-old
    DEGRADED MODE. Incomplete RAID LVs will be processed.
    Using logical volume(s) on command line
Do you really want to remove active logical volume lv-old? [y/n]: y
    Archiving volume group "vg" metadata (seqno 35).
    Removing vg-lv--old (253:9)
    Releasing logical volume "lv-old"
    Creating volume group backup "/etc/lvm/backup/vg" (seqno 36).
  Logical volume "lv-old" successfully removed

Bish bash bosh!

Addendum

*: I’m not sure of the thought process behind “just try it again”.

I’m reminded of a short bit of Darrell Hammond’s stand-up (paraphrased):

“You know that message you get when you dial the wrong number that tells you to ‘check you have the right number and dial again’? Well, women will check the number and try again. Men will try the same number, but this time we’ll push the buttons a ******** harder…”

[Solved] “Filesystem is already n blocks long. Nothing to do!”

tl;dr: if you’re sure you did everything right, use lsblk or parted (etc) to see if a partition table is present on your logical volume.

So I am in the process of merging the content of two fileservers, and needed to extend a logical volume to accommodate some additional data. No problem; that’s one of the benefits of using LVM!

Except after resizing, I ran into a problem:


$ lvextend -L +150G /dev/vg/lv
$ resize2fs /dev/vg/lv
> The filesystem is already 268435200 (4k) blocks long. Nothing to do!

Wait, what? Aside from the fact I could have combined the commands by including the --resizefs option to lvextend, why was resize2fs complaining that there was “Nothing to do!”?

Fortunately SE Arqade user ToxicFrog had the answer:

@bertieb parted reports the partition size, not the filesystem size
If it’s a partitioned LV you need to resize the partition after expanding the VL
*LV

Ah, whoops! I’m not sure why I partitioned the LV (it only had one partition) but I must have done so.

lsblk confirmed the partition:


sdh                                8:112  0   2.7T  0 disk
└─sdh1                             8:113  0   2.7T  0 part
  └─md1                            9:1    0  10.9T  0 raid6
  (...)
    ├─vg-lv                      253:9    0   1.2T  0 lvm
    │ └─vg-lv1                   253:18   0   1.1T  0 part

So, then what? Well, I used dd to copy the filesystem to a new logical volume, then extended that, and finally removed the original:


# dd if=/dev/dm-18 bs=1M | pv -s 1T |  dd of=/dev/vg/lv-new bs=1M
# lvextend --resizefs -L 1.15T /dev/vg/lv-new
# lvremove /dev/vg/lv
# lvrename vg lv-new lv

(pv was included to give a nice progress indicator, rather than faffing around with SIGUSR1)
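
An alternative I didn’t take: in principle, the partition could have been grown in place after extending the LV, something along these lines (untested by me; device names as per the lsblk output above):

# parted /dev/vg/lv resizepart 1 100%
# resize2fs /dev/mapper/vg-lv1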

And that was that. There was a slight problem with removing the original logical volume, but more on that later…

Browsing MySQL Backups

tl;dr: Seems the quickest way of doing this was to fire up a VM, install mysql-server and mysql-client and browse that way.

I have backups of things. This is important, because as the old adage goes: running without backups is data loss waiting to happen. I’m not sure if that’s the adage, but it’s something resembling what I say to people. I’m a real hit at parties.

I wanted to check the backups of the database powering this blog, as there was a post that I could swear I remembered referring to (iterating over files in bash) but couldn’t find. I had a gzipped dump of the MySQL database, and wanted to check that.

zgrep bash mysql.sql.gz | less was my first thought, but that gave me a huge amount of irrelevant stuff.

A few iterations later I was at zgrep bash mysql.sql.gz | grep -i iterate | grep -i files | grep -v comments and none the wiser. I had hoped there was some tool to perform arbitrary queries on dump files, rather than going through a proper database server (that’s basically sqlite), but to my limited searching such a thing didn’t seem to exist for MySQL.

What I ended up doing was firing up a VM, installing mysql-server and mysql-client and dumping the dump into that server via zcat:

zcat mysql.sql.gz | mysql -u 'root' -p database

And then querying the database: select post_title, post_date from wp_posts where post_title like '%bash%' followed by select post_content from wp_posts where post_title like '%terate%';

And the post is back!