Categories
backups wisdom

Monitor Your Backups!

In which I find out things have gone awry

Hey you! Yes, you! The one reading this. You have backups, right? Go check that they i) actually exist, ii) are backing up at the right frequency, and iii) work. This is important; I’ll wait.


borg: Great for Backing Up

I’ve been using borg for backups for a couple of years now. It’s great: it does deduplication (saving tons of space!), only backs up what has changed (efficient! incremental!), and is somehow fun to use while doing so.

I wrote a script to take the backups, run hourly as a systemd service (triggered by a timer). All was well: it did error detection and emailed me when a backup failed.

But I had the occasion to check on the backups a couple days ago, and the latest one was from January. My first thought was disk space, but there was enough (albeit getting close to the limit). So I then checked the systemd output:

$ systemctl status periodic-backup
● periodic-backup.service - Take a periodic backup of directories
     Loaded: loaded (/usr/lib/systemd/system/periodic-backup.service; enabled; vendor preset: disabled)
     Active: inactive (dead) since Wed 2020-02-12 12:03:06 GMT; 45min ago
TriggeredBy: ● periodic-backup.timer
   Main PID: 1168530 (code=exited, status=0/SUCCESS)

Feb 12 12:03:02 zeus systemd[1]: Started Take a periodic backup of directories.
Feb 12 12:03:06 zeus systemd[1]: periodic-backup.service: Succeeded.

So the job was running and… succeeding, but not backing up?

The next step in diagnosis was to run the script manually and make sure it still worked. The script didn’t error, but it took a long time to complete; longer than a straightforward case of “large increment to backup since January”.

So I broke it down even further, and ran the borg command as written in the script. I got a prompt:

Warning: The repository at location ssh://bertieb@pandora/home/bertieb/backups/borg/zeus was previously located at ssh://pandora/~/backups/borg/zeus

Aha! It was waiting on input to proceed. One form is how the script accesses the repo, the other is how it is accessed from the command line. It’s a bit strange as the repo clearly didn’t move, and I’m not sure why it started treating the two differently.

Fortunately, borg has an environment var for just such an occasion: BORG_RELOCATED_REPO_ACCESS_IS_OK=yes
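
For reference, this can be baked into the wrapper so the prompt never blocks a non-interactive run. A minimal sketch in Python, assuming a subprocess-style backup script; the repo path is the one from the warning above, and the source path is illustrative:

import os
import subprocess

# Tell borg that accessing a 'relocated' repository is expected and fine,
# so it doesn't sit waiting for interactive confirmation under systemd.
env = dict(os.environ, BORG_RELOCATED_REPO_ACCESS_IS_OK="yes")

subprocess.run(
    ["borg", "create",
     "ssh://bertieb@pandora/home/bertieb/backups/borg/zeus::{now}",
     "/home"],  # source path is illustrative
    env=env,
    check=True,
)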

Monitoring

I asked in #borgbackup on Freenode about the issue, and folks said they had used a few things for independently monitoring backups:

  • Prometheus
  • Zabbix
  • Healthchecks

I am indebted to Armageddon for mentioning the last one. While full-on monitoring with Prometheus looks interesting (especially in conjunction with grafana), it’s way overkill for my needs. Ditto Zabbix.

Healthchecks is a relatively simple tool which implements the concept, “we expect a ping/health-check at <such-and-such> a frequency; if we don’t get it then alert”.

Armageddon/Lazkani’s blog has a worked example of setting up Healthchecks to work with borgmatic (a tool to simplify borg backups). The official borgmatic ‘getting started’ guide is pretty good too.

The env vars in the Healthchecks docker image are used on creation; afterwards they can be changed in local_settings.py

I set up Healthchecks using the linuxserver Docker image (big note: the env vars listed there are used on creation; after that they can be changed in local_settings.py under the data volume/directory, which held me up for a bit when I was trying to sort out email integration) and have added both my pre-existing scripts and some new borgmatic backups.
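
On the script side, the integration amounts to hitting a per-check URL when a backup run succeeds; if the pings stop arriving at the expected frequency, Healthchecks alerts. A sketch, with a placeholder check UUID (and hc-ping.com would be your own instance’s URL if self-hosting):

import urllib.request

# Ping Healthchecks at the end of a successful backup run. If this stops
# arriving on schedule, an alert goes out. The UUID is a placeholder.
urllib.request.urlopen(
    "https://hc-ping.com/00000000-0000-0000-0000-000000000000",
    timeout=10,
)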

Looking good!

If you use the helpful ‘crontab’ format for the period, make sure to match the timezone, or you’ll get periodic emails saying the backup has failed. Ask me how I know…

Categories
automation coding python video

Generating Text Captions for Shotcut

Making the video editing workload much lighter

Shotcut is a Free (GPLv3) cross-platform video editor. I’ve been using it a couple of times lately to put some simple clips together (like sorting the Take 2 copyright claim GTA Online video).

I figured I’d use it to take a clip of my friends and I getting schooled by someone with a bomb lance in Hunt: Showdown.

Actually, my first thought was to write a script to put a clip together using MELT — based on JSON, of course — but on reflection for these I wanted something a bit more refined.

So, enter Shotcut. One of the things I was keen to include were text-based captions. I’ve been including these in gifs (example) for a while now, and I think they work really well for video. They can be informative, and sometimes funny!

Text in Shotcut is doable natively via filters: text, HTML etc. But this felt awkward to me; I’d rather have something directly visible in the timeline which is easy to manipulate, and which can have filters added to it if it comes to that.

So I decided… to write a script to generate images with these captions, based on — yup! — JSON. I quickly threw together a JSON file for the dialogue in the clip I wanted to caption:

{ "captions": [
        [0, "close by here"],
        [0, "other side of this wall"],
        [1, "yep yep yep"],
        [2, "That was a Sparks! :o"],
        [0, "ohhhh fudge"],
        [0, "I die to this"],
        [0, "GADDAMMITTT"],
        [1, "what was that?"],
        [0, "bomblance :("],
        [1, "where?"],
        [2, "he's with me"],
        [2, ":("],
        [0, "you've got one bullet left"],
        [0, "maybe on top if he's got a bomblance?"],
        [1, "good idea"],
        [0, "is that not him at the gate?"],
        [1, "dunno where he is"],
        [2, "he's on our bodies"],
        [1, "I know..."],
        [1, "WHAT?! *panicflee*"],
        [1, "this is a bit difficult"],
        [1, "fuq! :("],
        [1, "I should have run again"],
        [1, "oh well"],
        [0, "gg wp Flakel, you beat us o7"]
] }

Simple! The numbers refer to speakers; 0 is the first, 1 the second, 2 the third. I didn’t actually need to zero-index speakers, and in fact I can use text strings to denote who is speaking, but writing numbers is quicker when there are twenty-five captions to do.

The script, which I will throw up on GitHub, goes through this and generates a caption image for each item in the list, with an assigned colour for each ‘speaker’.

Out of familiarity, I was going to use imagemagick, but I started with Pillow instead, as I wanted to [re]gain a bit of familiarity with it. Once I had [re]acquainted myself with the few bits I needed, it was relatively straightforward to generate a cropped image with the text appropriately sized, coloured and stroked. But I found myself wanting a full 1920×1080 frame, as this made the Shotcut workflow much quicker: there was no need to set position if the image was the same size as the source video.

So I swapped Pillow/PIL out for imagemagick and subprocess and redid the whole thing in a few minutes. The imagemagick version is significantly slower, but not so slow as to be intolerable, even when wanting to tweak a couple of the captions.
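
The core of the imagemagick version is one convert invocation per caption; roughly speaking, something like this (the colours, sizing and filenames are illustrative rather than my exact values):

import subprocess

SPEAKER_COLOURS = {0: "white", 1: "yellow", 2: "cyan"}  # illustrative mapping

def render_caption(index, speaker, text):
    # Full 1920x1080 transparent frame with the caption near the bottom,
    # so no repositioning is needed once it's dropped into Shotcut.
    subprocess.run([
        "convert", "-size", "1920x1080", "xc:transparent",
        "-fill", SPEAKER_COLOURS[speaker],
        "-stroke", "black", "-strokewidth", "3",
        "-pointsize", "72", "-gravity", "south",
        "-annotate", "+0+60", text,
        f"caption-{index:02d}.png",
    ], check=True)

render_caption(0, 0, "close by here")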

I’m quite happy with how it turned out:

The ‘automatic’ text sizing could use a little tweak!

Lessons learned:

  • using something you’re familiar with is often easier than learning something new
  • PIL is faster than imagemagick for generating simple text on a transparent background
  • bomb lancers can be pretty deadly
Categories
timesavers troubleshooting

Recovering the Config of a Running Xen DomU

For those “oh poop” moments

I was in a situation where I had a running Xen guest, but the config file that defined the DomU was missing.

Fortunately, the listing command (xl list) has a long option, xl list -l, which prints out domain information in JSON format. This includes config information, from which the DomU configuration can be rebuilt.
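
A sketch of pulling the essentials back out with Python; the field names here are assumptions based on libxl’s JSON output and vary between Xen versions, so verify them against your own xl list -l:

import json
import subprocess

# Dump domain info for all running guests as JSON.
out = subprocess.run(["xl", "list", "-l"],
                     capture_output=True, text=True, check=True)

for dom in json.loads(out.stdout):
    cfg = dom["config"]  # assumed structure; check your own output
    print(cfg["c_info"]["name"],
          cfg["b_info"]["max_memkb"],
          cfg["b_info"]["max_vcpus"])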

Categories
automation python

Including Contemporaneous Info in my YouTube Workflow

From the Department of Wordy Titles

I have a set of tools that I have written to make interacting with YouTube simpler and more straightforward, simplifying my workflow.

In its current state, the workflow roughly looks like:

  1. record a bunch of videos
  2. upload the files and leave them in place
  3. run genjson on them to create a JSON template, including a reasonably-spaced publish schedule
  4. run get_ids to associate the JSON entries with the video’s YT videoId
  5. go through the videos, rewatch to decide on title, description and thumbnail frame and include this in the JSON entry
  6. run uploadytfootage to update the metadata

Most of the above is highly automated; even step 2 could be done away with if the default YouTube API quota didn’t limit one to roughly six videos per day.

The most labour-intensive part of the process is step 5. Because of the batch nature of the job, sometimes quite a few videos can pile up. For example, at time of writing I have 45 Hunt: Showdown videos from the past ten days to do.

Getting a short, catchy yet descriptive title and description for each of those will involve reacquainting myself with what those round[s] entailed. So I decided recently that I would try to do some of that work as I go: between rounds of Hunt, write out a putative title and description associated with a video file to another JSON file.

I also capture a short snippet or potential title on a notepad on my desk:

Between those hopefully the process will be a bit easier.

I also cooked up a short script to merge together the two JSON files. The crux of it is the filter that selects from the ‘contemporaneous note’ if it has an associated entry for a file in the generated JSON template list.

We are working with a list of dicts, so a generator expression with next() is handy. We want to select from the list of dicts the entire dict that matches the filename of the video. Roughly speaking:

next(item for item in json_c if item["file"] == filename)

Docs: list comprehension, next()
SO example: Python list of dictionaries search
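
Fleshed out a little, the whole merge might look like this; the filenames and field names are illustrative rather than the script’s exact ones:

import json

# Hypothetical filenames: the generated template and my contemporaneous notes.
with open("template.json") as f:
    json_t = json.load(f)  # list of dicts, one per video
with open("notes.json") as f:
    json_c = json.load(f)  # list of dicts with putative titles/descriptions

for entry in json_t:
    # Take the note whose "file" matches this template entry, if there is one.
    note = next((item for item in json_c if item["file"] == entry["file"]), None)
    if note:
        entry.update(note)

with open("merged.json", "w") as f:
    json.dump(json_t, f, indent=4)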

If I am able to keep on top of titles and descriptions as I go, the only thing needed will be to find a good thumbnail frame! (though that’s kinda time consuming in itself, perhaps ML could be applied to that…)

Edit: Yes! Deep neural net thumbnails and convolutional neural nets (PDF)

Categories
automation python timesavers video

Rescheduling YouTube Videos using Python

More ‘exactly what it says on the tin’

A couple weeks ago, I had to renumber some Hunt: Showdown videos in a playlist:

Well, now I have another issue. When we started playing Hunt: Showdown, I was publishing the videos a couple a day on Mondays, Wednesdays and Fridays. Putting them all out at once is a bit of a crass move as it floods subscribers with notifications, so spreading them out is the Done Thing.1

However, we’re now above 150 videos, and even after adding weekends to the schedule that still takes us up to, umm, May.

What I’d like to do is go back and redo the schedule so that all pending videos use Saturdays and Sundays, and maybe think about doing three or four per day, which would bring us down to about eight or six weeks’ worth respectively. That is still a lot, quite frankly, but pushing up the frequency further would be detrimental.

Changing the scheduled publish date would be even more painful than renumbering because it requires more clicks, I’d have to keep track and figure out when the next one was supposed to go out, and there are more to do (120-odd).

So back to python! I have already written a schedule-determiner for automating the generation of the pre-upload json template, so I can reuse — read: from genjson import next_scheduled_date — that for this task.

The filtering logic is straightforward: ignore anything not a Hunt video, skip anything before a defined start date (ie videos already published). From there, replace each video’s current ‘scheduled’ date with the next one from the new schedule, as sketched below.
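
In sketch form it’s something like this; the field names and the cutoff date are illustrative, and next_scheduled_date’s exact signature differs from this simplification:

import json
from genjson import next_scheduled_date  # the existing schedule-determiner

START_DATE = "2020-02-24T00:00"  # illustrative: anything earlier is already out

with open("videos.json") as f:
    videos = json.load(f)

scheduled = START_DATE
for video in videos:
    if "Hunt" not in video["title"]:
        continue  # ignore anything not a Hunt video
    if video["scheduled"] < START_DATE:  # ISO dates sort lexicographically
        continue  # skip videos already published
    scheduled = next_scheduled_date(scheduled)  # next slot in the new schedule
    video["scheduled"] = scheduled

with open("videos.json", "w") as f:
    json.dump(videos, f, indent=4)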

For the current set of scheduled videos that are not already published, the schedule of 3 videos on each of 5 days (15 per week) gives:

Current date: 2020-04-06 17:30
New date : 2020-03-09 20:00

So we’ve saved a month! Plus the pending videos (~40) will be done in two and a half weeks instead of four.

From here it’s straightforward to rewrite the scheduled field and use shoogle as before to change the dates, this time setting publishAt under status. Note that privacyStatus needs to be explicitly set to private, even if it is already set! This avoids a “400 The request metadata specifies an invalid scheduled publishing time” error.
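
The json handed to shoogle this time carries a status rather than a snippet; something like the following (the video ID and timestamp are placeholders), built here as a Python dict and dumped:

import json

body = {
    "body": {
        "id": "VIDEO_ID_HERE",  # placeholder
        "status": {
            "publishAt": "2020-03-09T20:00:00.000Z",
            # Must be set explicitly even if the video is already private,
            # otherwise the API rejects the scheduled publishing time:
            "privacyStatus": "private",
        },
    },
    "part": "status",
}

print(json.dumps(body))  # this is what gets passed along to shoogle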

Another thing done quickly with python!


1: On the note of ‘Done Things’, the thing to do would be to upload fewer videos in the first place.

I’ve considered that, and if a video is truly mundane and missable, I will omit it. But as well as being fun/interesting videos of individual rounds, the playlist should serve as a demonstration of our progress as players. The Dead by Daylight playlist does this: we start with no idea what’s going on or how to play properly, and by the final video — somewhere north of 300 — we are pretty competent.

Categories
cool programming

AI Dungeon 2 Is Fun Nonsense

Apparently, there’s something about Mary

It’s night. I’m a private detective from Chicago named Joseph, on the hunt for someone named Jim, and I have a gun and a badge. I’m in the woods, and I hear some noise from behind the trees. Suddenly an old man shoots an arrow from a bow at a hitherto-unseen target. He runs off, but I catch up with him and ask his name. It turns out that he’s also a detective from Chicago named John, and he’s hot on the trail of Jim too.

I ask “How did you know my name?” and he replies, succinctly: “Because we’re both detectives.” I try to discuss the case with him, but he refuses to be drawn on it, preferring to cryptically state “I’m sure we’ll have some clues soon enough”.

We come across a small house in the woods, and I venture inside. A woman sits, reading quietly. I ask her about Jim, but she only says that he left long ago. I make a note of the house and return the next day without John. I look around and find some white socks and black pants. Ah-ha! These are crucial to the case. I put them on immediately. Surely it’s now only a matter of time before I find Jim.

I go back outside, and see John, the other detective watching me cautiously. Clearly he’s jealous of my new socks and pants. He disappears into the woods. I run after him but find only a shack, in which a single light bulb illuminates a strange assortment of books and papers with diagrams. I picture Jim with this:

Combing through the strange lot of papers, I find one that might help my case! It’s a drawing. A drawing of a man in front of a tree. He has a hat, and the hat has horns. His eyes are wide open and staring at me.

This is Jim!

I find the tree in the drawing. It’s odd. It isn’t right. It seems to be made of wood, but it has cracks all over and seems as if it was never alive in the first place. Maybe it has Jim inside it? In any case it isn’t right. It has to go.

I break the tree apart, fling a piece at a nearby wall, which thuds, then silence.

The next day, I come home and see that everything is gone.


The above is how my first dabble with AI Dungeon 2 started. I was linked to it without context, so had no preconceptions going in. It all started off somewhat normally; I wondered if it was some kind of randomly-generated MUD (an old text-based system predating popular MMORPGs that let users create text-based worlds and interact with one another). But as things got slowly more odd it seemed like it was something else. It had the slightly weird, funny cadence that computer-generated text has.


I had come close to finding Jim. The house, the pants, the drawing in the shack, and the tree. They all fitted together, and I knew I must be close. I returned to the woods.

Thereupon I chanced on a woman sitting on a rock, crying. She explained that her sister Mary had gone missing only the night before. Perhaps Jim had a hand in this. I tried to explain the situation as best I could, but this only upset her more. So instead, I gave her a hug. This calmed her down, perhaps too much. She fell to the ground. She needed to be somewhere safe, but where? Ah! The shack! I carry her there.

Going in, I find a man dressed in an old coat and wearing glasses. He has long white hair that hangs down to his shoulders. His eyes are closed and he seems very tired looking. What the heck is he doing there? I demand to know his name.

“My name is James, but everyone calls me Jack.” Joseph, John, Jim, James, Jack… Wait! James? As in the unshortened form of Jim..? I have to think on my feet, and decide to act quickly.

“Where is Mary?”

I’ve got him now. Or so I think. But the man just sighs and shakes his head. He thinks he’s won. But I’m Joseph, a detective from Chicago. And Chicago detectives know how to roll with the punches, literally and figuratively. I decide to roll with this one and throw him off balance. I drop my voice, lean in close and growl:

“Where is Jim?”

“Mary..? She left with another guy named John.”

He yawns and rubs his eyes. He looks tired too. But he knows I’ve got him. “Mary… Jim… Where is Mary?” He’s trying to throw me, but he didn’t reckon with my Windy City credentials. He coughs and then speaks. “She left with another guy named John.”

Damn.

The one thing I wasn’t expecting. The one man I didn’t suspect.

Time for action. Mary and John can wait, but Jim’s my case and he has questions to answer. I grab Jim by the collar and pull him from behind the desk. He puts up a brief resistance, but he isn’t strong enough to break free. Up against the wall he goes, and I cuff his hands together behind his back. Time to take him downtown.


I’ve long enjoyed the output of Markov chains. They are some relatively simple procedures for generating sequences based on previous values and frequencies. You can apply this to text, and generate new text based on frequencies of letters, or words.
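
A word-level version fits in a couple of short functions; a minimal sketch (alice.txt standing in for any source text):

import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=1, length=20):
    out = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        # The list holds duplicates, so this pick is frequency-weighted.
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain(open("alice.txt").read())
print(generate(chain))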

The old resources I used to learn about Markov Chains way back when have somewhat stuck in my head. I recall a reference to ‘Alice in Elsinore’; and that can be found at a page called ‘Fun with Markov Chains‘. There’s another bit which went into the varying lengths, how short lengths — say, one to three characters — produced gibberish that kinda almost looked like it might have been English once; and longer lengths gradually come closer and closer to the original text[s]. That seems to have been part of Programming Pearls, which used to be available to read online; I only managed to find part of that section archived on Jeff Atwood’s blog by use of some judicious Google search tools.

You can create some fun things with Markov chains. The examples given above included a generated Alice in Elsinore and the Revelation of Alice. I implemented Markov chain text generation as a command for an IRC bot that I wrote, which could talk in the ‘voice’ of my friends that hung out on there; that command was definitely my favourite.

Latterly, we’ve seen a resurgence in this with the rise of ‘AI’, such as this ‘AI-written Harry Potter fanfiction’:

Harry Potter and the Portrait of What Looked Like a Large Pile of Ash
Hungry indeed

or less child-friendly things, like Trump speeches:

But calling any of this ‘AI’ is a stretch. It’s picking things based on random chance and frequency. If I have a sock drawer with thirty red socks, six green and two blue I’d be… a bit boring. But if I closed my eyes and picked socks from there, it would be a bit misleading to write an article saying “I got an AI to choose my clothes for the week and these are the results”.

But I digress.


Having brought in Jim, my attention must turn to Mary. Her sister was counting on me. I trusted my Chicago detective instincts and followed up on a lead that Jim spilled during his interrogation.

I went to the park. There I met two men, Mikey and Brenda. Apparently, they didn’t get along. I knew Mikey was hiding something, and decided to find out what it was. I dragged him into an alleyway, shoved my knee into his back, and started punching him.

Good Cop time was over, now it’s Bad Cop’s shift.

Mikey pleaded with me for mercy, this was all a misunderstanding, help would be forthcoming, he didn’t want to die, etc. I told him to shut up.

“Where is Jim?” I asked in the same voice I used on Jim earlier… Wait, wait. Wasn’t Jim at the police station? “Oh, that’s right,” Mikey says. “He went home for the day.” I was confused, but went along with it. “Oh, good”. But then Mikey had a surprise for me. He grabbed me, threatened me and apologised. I sensed that Jim was a touchy subject best left alone, so asked about Mary.

“Mary?” Mikey asks. “Who’s Mary?” I explained about the woman’s missing sister. “What about her?” Mikey enquires further. But at that point we spot Mary coming out of a store. I approach Mary, and she looks surprised to see me.

“Hey, you’re not my brother anymore,” Mary says. “Are…are you?”

Apparently she recognised me. I ask about her sister and Mary explains she’s at work.

At this point I realise something weird is going on. Sounds seem muffled, colours aren’t quite right, and time and place seem strangely elastic.


I thought perhaps AI Dungeon 2 was a bit like Sleep Is Death (Geisterfahrer) by Jason Rohrer, where the stories are written by players; or Cleverbot, where responses given by people are saved and can be reused.

But AI Dungeon 2 instead uses deep learning techniques to keep generating content, no matter what is thrown at it. It does have limitations, but it’s an interesting concept sprung from a Hackathon.

Best bit? It’s Free Software, MIT licensed! Check out its Github!


Things were getting weird. I tried to dance with Mary, which seemed like the thing to do at the time. She stared at me, but not in an uncomfortable way. I tried a backflip, and it ended with us falling asleep together1. Then I had to run away, far away; away from the voices shouting that we’re not sisters.

A group of men accosted me. They looked like they had been drinking heavily. I had to keep the initiative; my detective instincts took over and I slapped one of the men. It surprised the group. I slapped another one and it surprised them identically. But they started to beat me, which I guess was inevitable.

I tried everything to distract them. The harmonica, juggling, telling a joke. Fortunately, the last one worked. Unfortunately, at that moment a helicopter landed and I was kidnapped. Mary tried to rescue me, but the jailer was having none of her pleas for mercy or bribes. Eventually, he tired of the conversation and wandered off into the woods, and Mary went all Bastille Day on the prisoners.


The narrative was based on my first interaction with AI Dungeon 2, which can be read in full.

1:

Categories
linux

Protip: Don’t dd Your Root Partition

In which our hero makes the titular mistake.

I was in the process of creating a new DomU, a virtual machine guest under Xen, and had just completed a basic Arch install.

At this point I thought “Oh, it would be handy to have a bare-bones Arch image ready to go, I should make that happen”. So I took an LVM snapshot of the logical volume in one terminal window, and continued with post-install setup in another.

I went to copy the logical volume using dd and tab completed:

$ dd if=/dev/vg/newdomudisk of=/dev/vg/a<TAB>
$ dd if=/dev/vg/newdomudisk of=/dev/vg/archroot

Because it’s an Arch install, I had probably named it ‘archsomething’, right? Well, no.

I had named the intended LV ‘basearch’ because it’s a base Arch install. While I continued customising the guest, I had a nagging feeling that something wasn’t right.

$ ls /etc
  Segmentation fault

Side note: this is almost the same point as Mario Wolczko in the [in]famous recovery story as told to alt.folklore.computers, archived in a bunch of places (mirror here). Only his error was “ls: not found.” The story is well worth a read for the creativity shown in recovery.


My reaction was ‘Oh poop‘. I stopped the dd. Unfortunately it had written a good couple of gigabytes by that point. The ssh connection stayed up for a while, letting me see that most things had been nuked. Then the connection hung, and the guests stopped responding.

I was caught out in this situation by a couple of things. My other server running the Xen hypervisor uses Debian as a base, so it didn’t cross my mind that an Arch logical volume would be the one with the hypervisor. I was also multitasking, and didn’t double-check the target (LV) before dd-ing.

So: make names obvious. Make them blindingly obvious. I’ve named the new LV containing root for the Xen hypervisor xenroot, and you can bet I’ll be double- and triple-checking dd for a good while, at least!

Categories
automation python timesavers video

Renumbering Ordered Videos in a YouTube Playlist with Python

Doing exactly what it says on the tin

I’ve been playing Hunt: Showdown with friends recently. With these kinds of things I like to stream and record the footage of us playing so that others can share our enjoyment — highs and lows! — and so we can watch them back later.

The videos are compiled in a playlist on YouTube, in the order recorded. The tools that I’ve written to help automate the process of getting the videos from a file on a hard drive to a proper YouTube video include numbering.

I realised that I had missed out three videos, which would throw off the numbering. The easy options would be to:

  • add them to the end of the playlist; downside: the video number wouldn’t reflect the order and progression
  • insert them in the right place manually; downside: it would take a long time to manually renumber subsequent videos (~60)
  • write a script to do this for me

Guess which one I picked?

Interacting with YouTube programmatically comes in two main forms: APIs or a wrapper like shoogle. The latter is what I am familiar with, and has the benefit o’ being a braw Scottish word to boot!

The list of video files I’ve uploaded is in json format, which makes interaction a cinch. The list is loaded, anything not a Hunt: Showdown video is skipped, a regex matches the video number, and if the number is over a threshold (59 in this case) the number in the title is increased by 4 (I also had a duplicate number in the list!).
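
The renumbering core is small; a sketch with the threshold and offset from above, and the title format assumed from my playlist:

import re

def renumber(title, threshold=59, offset=4):
    """Bump the video number in e.g. '... (Hunt: Showdown #103)'."""
    match = re.search(r"#(\d+)", title)
    if not match:
        return title
    number = int(match.group(1))
    if number <= threshold:
        return title
    return title.replace(f"#{number}", f"#{number + offset}")

print(renumber("Golden Battle (Hunt: Showdown #103)"))  # -> ... #107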

This title is then set using shoogle. The API has certain things it expects, so I had to ‘update’ both the title and the categoryId, though the latter remained the same. You also have to tell the API which parts you are updating, which in this case is the snippet.

As an example, the json passed to shoogle might look like:

{ "body": {
    "id": <ID>,
    "snippet": {
        "title": "Golden Battle (Hunt: Showdown #103)",
        "categoryId": "20"
        }
    },
 "part": "snippet"
}

From here it’s a simple matter to invoke shoogle (I use subprocess) to update the video title on YouTube.

The one caveat I would mention is that you only get 10 000 API credits per day by default. Updating the video costs 50 units per update, plus the cost of the resource (for snippet this is 2), which works out to 192 videos per day, max.

Once the list has been updated, I dump out the new list.

Much quicker than doing it manually, and the videos all have the right number!

Categories
all posts games

Take 2 Claims ‘WZLJHRS’

Jack Howitzer as Jack Howitzer in ‘Jack Howitzer’

I played some GTA V: Online the other night — my three word review: ‘fun but clunky’ — and uploaded the footage of it as I usually do, leaving it as a draft to be later updated with my automation tools.

Later on I saw I had a notification on YouTube and thought “Ah! Someone’s subscribed, or commented, or similar”. Actually, I had a copyright claim from Take 2 Interactive for ‘WZLJHRS’. What?

“There are some visibility restrictions on your video. However, your channel isn’t affected. No one can view this video due to one or more of the Content ID claims below. WZLJHRS: Video cannot be seen or monetized; Blocked in all territories by Take 2 Interactive”

The just-under-two-minute segment in question was a GTA teevee programme (‘Jack Howitzer’, a documentary/mockumentary about a washed-up action movie actor) I watched while waiting for my friend to arrive at my office. It had some funny moments.

I am mindful of YouTube’s content ID system, and I mute game music pre-emptively, having been bitten by it in the past. I didn’t suspect for a second that a fake TV show in a game would result in an entire video being blocked.

I will have to amend the video and reupload.

PS: WZLJHRS: WZL → Weazel News network || JHRS → Jack Howitzer show?

Categories
automation python timesavers video

Improving Generated JSON Template for YouTube Uploads

Further automation automation

On a few of my Europa Universalis series, I’ve used a quick little python script to take care of some of the predictable elements of the series — tags, title and video number — and work out a schedule.

Having gone through the process of uploading a lot of Dead by Daylight videos in the past, and with a large and growing set of Hunt: Showdown videos building up, it seems like a good time to start adapting that script.

There is a significant hidden assumption here: my video file names are in ISO 8601 format, so we can sort based on filename.

As the previous uses had been EUIV videos the parameters were coded in as variables. This is obviously undesirable for a general-purpose script, so we need some way of passing in the things we want. And since we’re outputting JSON, why not use JSON formatting for the parameters file too?

We look for a supplied directory and file pattern, and pass those to glob.glob to be os.path.join-ed to build the file list. We then use a sorted() copy of the list which will have the videos in the correct — see assumption — order for the playlist.
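
That part is only a couple of lines; a sketch with illustrative values (the directory and pattern really come from the JSON parameters file):

import glob
import os

directory = "recordings"  # illustrative; supplied via the parameters file
pattern = "*.mkv"

# ISO 8601 filenames mean a plain sort is also a chronological sort.
files = sorted(glob.glob(os.path.join(directory, pattern)))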

Iterating through this sorted list, we can set the basics that uploadytfootage expects.

The only ‘fancy’ work here is in figuring out the schedule dates. Quoting my own docstring:

"""Based on:
    - the current scheduled date
    - valid days [M,Tu,W,Th,F,Sa,Su]
    - valid times (eg [1600, 1745, 2100])
    return the next scheduled date"""

I debated whether to make this a generator; and in the end I avoided it for reasons I can’t quite remember.

First we look at hours: if there’s a valid time later in the current day, use that. If not, we set the new hours part to the earliest of the valid times.

Next, days: if there’s a valid day of the week in the current week, set it to the next one. If not, take the difference of the current day and the earliest valid day away from 7 and add that to get the new day. That one might need a bit of explaining:

Valid: Monday (1) || Current: Friday (5):
7 – (5 – 1) = 3.

Using 3 for the days component of the timedelta gives us the Monday following the current Friday. We can also set the hours and minutes component of the time in that timedelta object.

Then it’s simply a matter of returning the value of the current scheduled date plus the timedelta!
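
Putting the pieces together, here is a reconstruction of how such a function can look. This is a sketch from the description above rather than my exact code, and it uses Python’s Monday=0 weekday numbering rather than the 1-indexed days in the worked example:

from datetime import datetime, timedelta

def next_scheduled_date(current, valid_days=(0,), valid_times=(1600, 1745, 2100)):
    """Return the next scheduled datetime after `current`.
    valid_days uses Monday=0; valid_times are HHMM integers."""
    hhmm = current.hour * 100 + current.minute
    later_today = [t for t in valid_times if t > hhmm]
    if later_today and current.weekday() in valid_days:
        t = later_today[0]  # a valid time later in the current day
        return current.replace(hour=t // 100, minute=t % 100)
    # Otherwise: the earliest valid time on the next valid day.
    t = min(valid_times)
    days_ahead = min((d - current.weekday() - 1) % 7 + 1 for d in valid_days)
    nxt = current + timedelta(days=days_ahead)
    return nxt.replace(hour=t // 100, minute=t % 100)

# Friday 18:00 with Monday-only valid days -> the following Monday at 16:00
print(next_scheduled_date(datetime(2020, 2, 14, 18, 0)))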

In addition, I skip changing the scheduled date for any video that has “part” in the filename; on the basis that if it’s just been split for length — such as a three hour EUIV video split into hour segments — the different parts should all go out on the same day.

Having all the dates in the schedule figured out and set automatically is a huge timesaver.

The JSON provided by genjson is valid as far as uploadytfootage is concerned; the only things that really need done are setting a title (if the videos in the series have different titles; EUIV playlists tend not to, Hunt ones do), a description, a thumbnail title and a thumbnail frame time.

Doing those few things is much quicker than redoing the metadata for each and every video.