Quick Hacks: A Script to Extract a Single Image/Frame From Video

Long ago, I posted the simple way to get a frame of a video using ffmpeg. I’ve been using that technique for a long time.

It can be a bit unwieldy for iteratively finding a specific frame, since in a terminal you have to move the cursor back into the middle of the command to change the time specification. So I wrote a very small wrapper script that puts the time part at (or towards) the end:


#!/bin/bash
# f.sh - single frame

USAGE="f.sh infile timecode [outfile]"

if [ "$#" -eq 0 ]; then
        echo "$USAGE"
        exit 1
fi

if [ -e "$1" ]; then
        video="$1"
else
        echo "file not found: $1"
        exit 1
fi

if [ ! -z "$2" ]; then
        time="$2"
else
        echo "Need timecode!"
        exit 1
fi

# if we have a filename write to that, else imagemagick display

if [ ! -z "$3" ]; then
        echo "ffmpeg -ss $time -i \"$video\" -vframes 1 -f image2 \"$3\""
        ffmpeg -loglevel quiet -hide_banner -ss "$time" -i "$video" -vframes 1 -f image2 "$3"
else
        echo "ffmpeg -ss $time -i \"$video\" -vframes 1 -f image2 - | display"
        ffmpeg -hide_banner -loglevel quiet -ss "$time" -i "$video" -vframes 1 -f image2 - | display
fi

Most of that is argument checking, but broadly it has two modes:

  • display an image (f.sh video time)
  • write an image (f.sh video time image)

It’s more convenient to use it, hit the up arrow and amend the time than to move the cursor into the depths of an ffmpeg command.

Quick Hacks: A script to import photos to month-based directories (like Lightroom)

tl;dr: A bash script written in 15 minutes imports files as expected!

I was clearing photos off an SD card so that I have space to photograph a friend’s event this evening. Back on Windows, I would let Lightroom handle imports. Darktable is my photo management software of choice, but it leaves files where they are during import:

Importing a folder does not mean that darktable copies your images into another folder. It just means that the images are visible in lighttable and thus can be developed.

I had photos ranging from July last year until this month, so I needed to put them in directories from 2017/07 to 2018/02. But looking up metadata and copying by hand seemed like a tedious misuse of my time*, so I wrote a little script to do it. It is not robust, due to some assumptions (eg that the ‘year’ directory already exists), but it got the job done.

#!/bin/bash
# importcanon.sh - import from (mounted) sd card to directories based on date

CARD_BASEDIR="/tmp/canon"
PHOTO_PATH="DCIM/100CANON/"

TARGET_BASEDIR="/home/robert/mounts/storage/photos"

function copy_file_to_dir() {
    if [ ! -d "$2" ]; then
        echo "$2 does not exist!"
        mkdir "$2"
    fi
    cp "$1" "$2"
}

function determine_import_year_month() {
    #echo "exiftool -d "%Y-%m-%d" -S -s -DateTimeOriginal $1"
    yearmonth=$(exiftool -d "%Y/%m/" -S -s -DateTimeOriginal "$1")
    echo $yearmonth
}

printf "%s%s\n" "$CARD_BASEDIR" "$PHOTO_PATH"

i=0
find "$CARD_BASEDIR/$PHOTO_PATH" -type f | while read -r file
do
    ym=$(determine_import_year_month "$file")
    copy_file_to_dir "$file" "$TARGET_BASEDIR/$ym"
    if (( i % 10 == 0 )); then
        echo "Processed file $i ($file)"
    fi
    let i++

done

This uses exiftool to extract the year and month (in the form YYYY/MM), and that is used to give a target to cp.

The enclosing function has a check to see if the directory exists ([ ! -d "$2" ]) before copying. Using rsync would have achieved the effect of auto-creating a directory if needed, but that i) involves another tool ii) probably slows things down slightly due to invocation time iii) writing it this way let me remind myself of how to check for directory existence.
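As a minimal, self-contained sketch of that existence check – the /tmp paths below are invented for the demo:

```shell
#!/bin/bash
# Demo of the [ ! -d ] existence check; all paths here are illustrative.
src="/tmp/import_demo/source.jpg"
dest="/tmp/import_demo/2017/07"

mkdir -p "$(dirname "$src")"
echo "fake photo data" > "$src"

if [ ! -d "$dest" ]; then
        echo "$dest does not exist!"
        mkdir -p "$dest"    # -p creates the intermediate 'year' level too
fi
cp "$src" "$dest"
```

Using mkdir -p in the real script would also drop the assumption that the year directory already exists.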

I still occasionally glance at how to iterate over files in bash, even though there are other ways of doing so!
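For reference, two of the usual patterns – the sandbox directory and file names below are invented for the demo:

```shell
#!/bin/bash
# Two common ways to iterate over files; /tmp/iter_demo is just a sandbox.
mkdir -p /tmp/iter_demo
touch "/tmp/iter_demo/a.txt" "/tmp/iter_demo/b with space.txt"

# 1) a glob – simplest, and safe with spaces in names
for f in /tmp/iter_demo/*.txt; do
        echo "glob saw: $f"
done

# 2) find with null-delimited output – safe with any filename, and recurses
find /tmp/iter_demo -type f -print0 | while IFS= read -r -d '' f; do
        echo "find saw: $f"
done
```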

There is also a little use of modulo in there to print some status output.

Not pretty, glamorous or robust but it got the job done!


*: Golden rule: leave computers to do things that they are good at

File creation time on ext4 (Linux)

tl;dr: since coreutils stat does not show file ‘birth’ time, use debugfs -R 'stat <inode>' FS

I was curious as to when I wrote a particular time-saving script, so I figured I would look up the file creation time:

$ stat ~/scripts/goprofootage.sh
 File: /home/robert/scripts/goprofootage.sh
  Size: 1001            Blocks: 8          IO Block: 4096   regular file
Device: fe01h/65025d    Inode: 792618      Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1000/  robert)   Gid: ( 1000/  robert)
Access: 2018-01-07 08:23:04.816666962 +0000
Modify: 2017-05-13 19:09:30.760094062 +0100
Change: 2017-05-13 19:09:30.760094062 +0100                        
 Birth: -

Err, well. No birth date? ext4 does support file creation timestamps, so it’s just a simple matter of getting at them.

Enter debugfs, part of e2fsprogs (at least on this Arch install). We can stat an inode to get a creation time:


$ stat -c %i ~/scripts/goprofootage.sh
792618
# debugfs -R 'stat <792618>'  /dev/mapper/840ssd-home
debugfs 1.43.7 (16-Oct-2017)
Inode: 792618   Type: regular    Mode:  0755   Flags: 0x80000
Generation: 3863725318    Version: 0x00000000:00000001
User:  1000   Group:  1000   Project:     0   Size: 1001
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 8
Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x59174bda:b53875b8 -- Sat May 13 19:09:30 2017
 atime: 0x5a51d8e8:c2b56548 -- Sun Jan  7 08:23:04 2018
 mtime: 0x59174bda:b53875b8 -- Sat May 13 19:09:30 2017
crtime: 0x58efaf27:2234f628 -- Thu Apr 13 18:02:31 2017
Size of extra inode fields: 32
EXTENTS:
(0):5096134

Or, if you’d rather combine the above into a one-liner (NB needs root):

 # debugfs -R "stat <$(stat -c %i ~/scripts/goprofootage.sh)>"  /dev/mapper/840ssd-home 2>/dev/null  | grep crtime | cut -d ' ' -f4-9
Thu Apr 13 18:02:31 2017
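As an aside, if I understand the coreutils changelog correctly, newer GNU stat (8.31+, on a statx-capable kernel, 4.11+) can report birth time directly via the %w format, making the debugfs detour unnecessary:

```shell
# %w prints the birth time, or '-' if the kernel/filesystem cannot provide it.
# /tmp/birth_demo is a throwaway file for the demo.
touch /tmp/birth_demo
stat -c %w /tmp/birth_demo
```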

Compressing Teamspeak 3 Recordings Using sox

tl;dr: Loop through the files in bash, sox them to FLAC

Success!

I’ve been combining fileserver contents recently, and I came across a little archive of Teamspeak 3 recordings:

$ du -sh .
483G /home/robert/storage/media/ts_recordings/

Eep.

I wrote a quick-and-dirty script to convert the files:


#!/bin/bash

n=0
total=$(ls *.wav | wc -l)
ls *.wav | while read -r file; do
        sox -q "${file}" "${file%.*}.flac"
        if [ -e "${file%.*}.flac" ]; then
                if [ -s "${file%.*}.flac" ]; then
                        rm "${file}"
                else
                        echo "${file%.*}.flac is zero-length!"
                fi
        else
                echo "Failed on ${file}"
        fi
        else
                echo "Failed on ${file}"
        fi

        ((n++))
        if  ! ((n % 10 )); then
                echo "${n} of ${total}"
        fi
done

The script checks that the FLACs replacing the WAVs exist and are not zero-length before removing the original.
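For clarity, the two tests do different jobs: [ -e ] is true if the file merely exists, while [ -s ] requires it to exist and be non-empty. A tiny demo (the /tmp files are invented):

```shell
#!/bin/bash
# Demo of -e (exists) vs -s (exists and non-empty); files are illustrative.
mkdir -p /tmp/sox_demo
: > /tmp/sox_demo/empty.flac           # zero-length file
echo "data" > /tmp/sox_demo/full.flac  # non-empty file

for f in /tmp/sox_demo/empty.flac /tmp/sox_demo/full.flac; do
        [ -e "$f" ] && echo "$f exists"
        [ -s "$f" ] && echo "$f is non-empty"
done
```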

This was fine but, after finishing, I was still left with a bunch of uncompressed files in RF64 format, which sox unfortunately errored on.

It turns out sox 14.4.2 added RF64 read support, so I grabbed that version on my Arch machine and converted the few remaining files (substituting rf64 for wav twice in the script above).

The final result?

$ du -sh .
64G /home/robert/storage/raid6/media/ts_recordings/

400 gigs less space and still lossless? Ahh, much better.

Timesaver: import and combine GoPro Footage with FFmpeg

I’ve been taking my GoPro to Sunday Morning Football (as it is known) for a while now, so I figured I’d automate the process of importing the footage (moving it from microSD) and combining it into one file (GoPro splits recordings by default).

So I have the following script:


#!/bin/bash

GOPRO="/tmp/gopro"
DATE="$(date +%Y-%m-%d)"
VIDEO_BASE="/home/robert/mounts/storage/video/unsorted"
VIDEO_DEST="$VIDEO_BASE/$DATE"

if [ -e $GOPRO ]; then
        echo "Copying..."
        rsync -aP --info=progress2 --remove-source-files --include='*.MP4' --exclude='*' $GOPRO/DCIM/100GOPRO/ $VIDEO_DEST/
        echo "Joining..."
        cd $VIDEO_DEST
        #cd $GOPRO/DCIM/100GOPRO/
        > stitch.txt
        for file in *.MP4; do echo "file '$file'" >> stitch.txt; done
        #RECORD_DATE="$(ffprobe -v quiet `ls *.MP4 | head -n1` -show_entries stream=index,codec_type:stream_tags=creation_time:format_tags=creation_time | grep creation_time | head -n1| cut -d '=' -f 2| cut -d ' ' -f1)"
        # new format:
        RECORD_DATE="$(ffprobe -v quiet `ls *.MP4 | head -n1` -show_entries stream=index,codec_type:stream_tags=creation_time:format_tags=creation_time | grep creation_time | head -n1| cut -d '=' -f 2| cut -d ' ' -f1| cut -d 'T' -f1)"
        #echo "$RECORD_DATE"
        ffmpeg -y -f concat -i stitch.txt -c copy $RECORD_DATE.mp4
else
        echo "GoPro microSD not mounted?"
fi

Assumptions:

  • the microSD is already mounted before running (under /tmp/gopro) – I had considered automating this, but I figured running a script in response to insertion of removable media was a bad idea; I could add the mkdir and mount commands here, but since the latter requires root privileges I’d rather not, and it is quickly recalled from bash history in any case
  • the $VIDEO_BASE directory is mounted and created (this is pretty stable)
  • the GoPro won’t number directories higher than 100GOPRO (eg 101GOPRO) – it possibly would if dealing with eg timelapses, but I am not covering that case
  • the GoPro will set creation time correctly; so far it has reset to the default date a few times, probably related to the battery
  • I want to keep the split source files around after the joined file is created (the script could remove them)

Given the above the script may seem a bit fragile – and it is definitely tightly coupled to my assumptions – but it’s done the job for a few weeks at least, and the commands it was based on have been pretty stable since I started recording football last year.
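For reference, the concat demuxer just wants one file 'NAME' line per clip in stitch.txt, and a glob builds the list safely even if names contain spaces. A standalone sketch (the directory and GOPR* names are invented):

```shell
#!/bin/bash
# Build a concat list for ffmpeg's concat demuxer; demo names are invented.
mkdir -p /tmp/stitch_demo && cd /tmp/stitch_demo
touch GOPR0001.MP4 GP010001.MP4

> stitch.txt    # start fresh so a re-run doesn't append duplicate entries
for f in *.MP4; do
        printf "file '%s'\n" "$f" >> stitch.txt
done
cat stitch.txt
```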

Count Arguments In A Bash Script

Another useful tip I’m sure most people will be familiar with: in bash scripts, $# stores the number of arguments passed to the script. Combine it with $@ (all arguments) for batch processing (which is what I used it for):


for arg in "$@"; do
        # [stuff]
        # [compare with $# to tell how many items remain]
done

Very basic stuff, but it was new to me yesterday, and it might save someone a bit of time searching.
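A runnable sketch of that pattern (the function name and sample arguments are made up):

```shell
#!/bin/bash
# Demo of $# (argument count) and "$@" (all arguments).
process_args() {
        local remaining=$#
        for arg in "$@"; do
                echo "processing $arg ($remaining remaining)"
                (( remaining-- ))
        done
}

process_args one two three
```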