Categories
automation coding python video

Generating Text Captions for Shotcut

Making the video editing workload much lighter

Shotcut is a Free (GPLv3) cross-platform video editor. I’ve used it a few times lately to put some simple clips together (like sorting out the Take 2 copyright claim on the GTA Online video).

I figured I’d use it to take a clip of my friends and me getting schooled by someone with a bomb lance in Hunt: Showdown.

Actually, my first thought was to write a script to put a clip together using MELT — based on JSON, of course — but on reflection, for these I wanted something a bit more refined.

So, enter Shotcut. One of the things I was keen to include was text-based captions. I’ve been including these in gifs (example) for a while now, and I think they work really well for video. They can be informative, and sometimes funny!

Text in Shotcut is doable natively via filters: text, HTML etc. But this felt awkward to me; I’d rather have something directly visible in the timeline which is easy to manipulate, and which I can add filters to if it comes to that.

So I decided… to write a script to generate images with these captions, based on — yup! — JSON. I quickly threw together a JSON file for the dialogue in the clip I wanted to caption:

{ "captions": [
        [0, "close by here"],
        [0, "other side of this wall"],
        [1, "yep yep yep"],
        [2, "That was a Sparks! :o"],
        [0, "ohhhh fudge"],
        [0, "I die to this"],
        [0, "GADDAMMITTT"],
        [1, "what was that?"],
        [0, "bomblance :("],
        [1, "where?"],
        [2, "he's with me"],
        [2, ":("],
        [0, "you've got one bullet left"],
        [0, "maybe on top if he's got a bomblance?"],
        [1, "good idea"],
        [0, "is that not him at the gate?"],
        [1, "dunno where he is"],
        [2, "he's on our bodies"],
        [1, "I know..."],
        [1, "WHAT?! *panicflee*"],
        [1, "this is a bit difficult"],
        [1, "fuq! :("],
        [1, "I should have run again"],
        [1, "oh well"],
        [0, "gg wp Flakel, you beat us o7"]
]
}

Simple! The numbers refer to speakers: 0 is the first, 1 the second, 2 the third. I didn’t actually need to zero-index the speakers, and in fact the script can take text strings to denote who is speaking, but writing numbers is quicker when there are twenty-five captions to do.

The script, which I will throw up on GitHub, goes through this and generates a caption image for each item in the list, with an assigned colour for each ‘speaker’.
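
A minimal sketch of that loop, assuming a captions.json as above; the colours are placeholders (the post doesn’t list the real ones) and generate_caption() is a hypothetical helper standing in for the real script:

import json

# Placeholder palette; the actual per-speaker colours aren't given in the post
SPEAKER_COLOURS = ["white", "yellow", "cyan"]

with open("captions.json") as f:
    captions = json.load(f)["captions"]

for i, (speaker, text) in enumerate(captions):
    # generate_caption() is a hypothetical helper, sketched below
    generate_caption(text, SPEAKER_COLOURS[speaker], f"caption_{i:02d}.png")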

Due to familiarity, I was going to use imagemagick, but I initially used Pillow instead, as I wanted to [re]gain a bit of familiarity with it. Once I had [re]acquainted myself with the few bits I needed, it was relatively straightforward to generate a cropped image with the text appropriately sized, coloured and stroked. But I found myself wanting a full 1920×1080 frame, as this made the Shotcut workflow much quicker: if the image is the same size as the source video, there’s no need to set its position.
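
The Pillow side looked roughly like this — a sketch rather than the script itself, with the font and sizing assumed (textbbox and the stroke parameters need a reasonably recent Pillow):

from PIL import Image, ImageDraw, ImageFont

def generate_caption(text, colour, outfile, size=72):
    # Transparent full-size frame: no repositioning needed in Shotcut
    img = Image.new("RGBA", (1920, 1080), (0, 0, 0, 0))
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("DejaVuSans-Bold.ttf", size)  # assumed font
    # Centre the text horizontally, near the bottom of the frame
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font, stroke_width=3)
    x = (1920 - (right - left)) // 2
    y = 1080 - (bottom - top) - 100
    draw.text((x, y), text, font=font, fill=colour,
              stroke_width=3, stroke_fill="black")
    img.save(outfile)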

So I changed Pillow/PIL out for imagemagick and subprocess and redid the whole thing in a few minutes. The imagemagick version is significantly slower, but not so slow as to be intolerable, even when I wanted to tweak a couple of the captions.
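
The imagemagick version shells out via subprocess; something along these lines, though the exact options here are my guess rather than the script’s:

import subprocess

def generate_caption(text, colour, outfile):
    # convert draws the text onto a transparent 1920x1080 canvas
    subprocess.run([
        "convert", "-size", "1920x1080", "xc:none",
        "-gravity", "south",                 # anchor text at the bottom
        "-pointsize", "72",
        "-fill", colour, "-stroke", "black", "-strokewidth", "2",
        "-annotate", "+0+100", text,         # nudge up from the bottom edge
        outfile,
    ], check=True)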

I’m quite happy with how it turned out:

The ‘automatic’ text sizing could use a little tweak!

Lessons learned:

  • using something you’re familiar with is often easier than learning something new
  • PIL is faster than imagemagick for generating simple text on a transparent background
  • bomb lancers can be pretty deadly
Categories
automation computer vision programming python

Automating YouTube Uploads With OCR Part 8: Output

Nearly a working tool!

We’ve been using python and tesseract to OCR frames from video footage of Deep Rock Galactic, extracting metadata we can use when putting the videos on YouTube.

Mutators

Nearly all of the elements are captured now; just the mutators are left: warnings and anomalies. These appear in text form on the starting screen, on either side of the mission block:

Here we have a Cave Leech Cluster and a Rich Atmosphere.

Since the text of each mutator comes from a known list of ten or fewer possibilities, we can grab it using a wide box, then snap the OCR output to whichever candidate it has the smallest Levenshtein distance to.
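
A sketch of that matching, with a plain-Python edit distance; the candidate lists here only contain the two mutators named below, the real ones have more entries:

def levenshtein(a, b):
    # Classic dynamic-programming edit distance, two rows at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

WARNINGS = ["Cave Leech Cluster"]   # ...plus the rest of the known warnings
ANOMALIES = ["Rich Atmosphere"]     # ...plus the rest of the known anomalies

def snap_to_known(ocr_text, candidates):
    # "Hard-cast" the OCR output to the nearest known mutator
    return min(candidates, key=lambda c: levenshtein(ocr_text.lower(), c.lower()))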

Tie-Breaking Frames

The loading/ending frame detection works well for most videos, but on the odd one or two it suffers. It’s best to ignore frames which are completely or mostly dark (i.e. a transition or fade-in), and ones that are very bright (e.g. a light flash), as both hurt contrast and so hurt OCR.

Using ImageStat from PIL we can grab the frame mean (averaged across the RGB values), then normalise it and add it to our frame-scoring function in the detection routine.

We want to normalise between 0 and 1, which is easy to do if you want to scale linearly between 0 and 255 (the RGB max value): just divide the average by 255. But we don’t want that. Manually looking at a few good, contrasty frames, a mean value of around 75 seemed best; even by 150 the frame was looking quite washed out. So we want a score of 0 at mean pixel values of 0 and 150, and a score of 1 at a mean pixel value of 75:

# Tie break score graph should look something like:
# (tb_val)          
# |    /\            
# |   /  \           
# |  /    \          
# |_/      \_ (x)                
# 0    75    150                
#                   
# For sake of argument using 75 as goldilocks value
# ie not too dark, not too bright

75 is thus our ‘goldilocks’ value: not too dark, not too light. So our tiebreak value is:

tb_val = (goldilocks - (abs(goldilocks - frame_mean)))/goldilocks
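
Putting the two together — ImageStat for the mean, then the tiebreak formula — looks something like this sketch (the clamp to zero for very bright frames is my addition):

from PIL import Image, ImageStat

GOLDILOCKS = 75  # ideal mean pixel value

def tiebreak_score(frame):
    # Mean pixel value, averaged across the R, G and B bands
    frame_mean = sum(ImageStat.Stat(frame.convert("RGB")).mean) / 3
    tb_val = (GOLDILOCKS - abs(GOLDILOCKS - frame_mean)) / GOLDILOCKS
    return max(0.0, tb_val)  # don't let very bright frames go negative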

Output

Since we’ve gotten detection of the various elements to where we want them, we can start generating output. Our automated YT uploader works with JSON, and looks for the following fields: filename, title, description, tags, playlists, game, thumb (with its own optional time, title and additional fields), and scheduled.

Thumb time and additional we can safely ignore. Title is easy, as I use mission_type: mission_name. All of my Deep Rock Galactic uploads go into the one playlist. Tags are a bunch of things like hazard level, minerals, biome and some other common-to-all ones like “Deep Rock Galactic” (for game auto detection). The fun ones are description and scheduled.
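
Before the fun ones, the easy fields come together roughly like so — a sketch, with the variable names pieced together from the example output below:

def easy_fields(mission_type, mission_name, enemy, biome, hazard, minerals):
    # Common-to-all tags first, then the mission-specific ones
    tags = (["Deep Rock Galactic", "DRG", "PC", "Co-op", "Gaming"]
            + [mission_type, enemy, biome, f"Hazard {hazard}"] + minerals)
    return {"title": f"{mission_type}: {mission_name}",
            "playlists": "Deep Rock Galactic",
            "tags": tags}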

Funnily enough, one of my earliest forays into javascript was a mad-libs style page which took the phrases via prompt() and put them in some text.

This was back in the days of IE4, and javascript wasn’t quite what it is today…

For the description, I took a bit of a “mad libs” style approach: use the various bits and pieces we’ve captured with a variety of linking verbs and phrases to give non-repetitive output. This mostly comes down to writing the phrases, sticking them in a bunch of lists and using random.choice() to pick one of them.
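
A cut-down sketch of that approach — the phrase lists here are invented, but the shape matches the example description in the output below:

import random

# Illustrative phrase lists; the real script has a lot more variety
INTROS = ["get their orders from Mission Control and get dropped in to",
          "descend into"]
VERBS = ["take on", "face off against"]

def describe(players, biome, enemy, mission, mission_type, date):
    return (f"{players} {random.choice(INTROS)} the {biome} "
            f"to {random.choice(VERBS)} the mighty {enemy} "
            f"in {mission} ({mission_type})\n\nRecorded on {date}")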

For obvious reasons, I don’t want to publish fifty-odd videos at once, but rather spread them out over a period. I publish a couple of DRG videos on Mondays, Wednesdays, Fridays and at the weekend. To do this in python, I decided to use a generator, calling next() on it every time we need to populate the scheduled field. The function itself is fairly simple: if scheduled_date is at the earlier of my two publish times, move to the later one and return the full date; if it’s at the later time, increment by two days (from a Monday or Wednesday) or one day (otherwise), and set the time back to the earlier one.
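
A sketch of that generator, assuming 17:00 and 18:00 as the two publish times (the example output below uses 18:00; the real times may differ):

from datetime import datetime, timedelta

def publish_slots(start):
    # Yields two publish datetimes per publish day (Mon/Wed/Fri/Sat/Sun)
    current = start
    while True:
        yield current
        if current.hour == 17:      # earlier slot -> later slot, same day
            current = current.replace(hour=18)
        else:                       # later slot -> next publish day, earlier slot
            step = 2 if current.weekday() in (0, 2) else 1  # Mon/Wed skip a day
            current = (current + timedelta(days=step)).replace(hour=17)

slots = publish_slots(datetime(2019, 11, 18, 17, 0))
next(slots)  # 2019-11-18 17:00, then 18:00, then Wednesday 17:00, ...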

We run this through json.dumps() and we have output! For example:

{
  "filename": "2019-10-17 19-41-38.mkv",
  "title": "Elimination: Illuminated Pocket",
  "description": "BertieB, Costello and graham get their orders from Mission Control and get dropped in to the Fungus Bogs to take on the mighty Dreadnoughts in Illuminated Pocket (Elimination)\n\nRecorded on 2019-10-17",
  "tags": [
    "Deep Rock Galactic",
    "DRG",
    "PC",
    "Co-op",
    "Gaming",
    "Elimination",
    "Dreadnought",
    "Fungus Bogs",
    "Hazard 4",
    "Magnite",
    "Enor Pearl"
  ],
  "playlists": "Deep Rock Galactic",
  "game": "drg",
  "thumb": {
    "title": "Pocket Elimination"
  },
  "scheduled": "2019-11-18 18:00"
}

Looks good!