Playing Fate RPG Online: Resources

Muddling through, one aspect at a time…

I was very kindly gifted a copy of Fate Core not too long ago

I had read about Fate before and done some preliminary exploring of the system. I like the concept of shared storytelling via the interweaving of player characters, and having players invested in worldbuilding as well.

Side note: there was a recent question on playing online over on RPG SE which is worth a quick read.

The coronavirus lockdown makes playing Fate simultaneously easier and more difficult. People generally have more time, and familiarity with teleconferencing is now the norm; but playing RPGs online loses some of the immediacy of in-person play.

So trying out Fate is one of my goals. There are a bunch of resources on the practicalities of playing Fate online which may be of use for others who want to do the same thing.


The Fate Roleplaying Game SRD (system reference document) is available online. The book I have is for Fate Core, but since all of us who will be playing it are quite new we will probably use Fate Accelerated rules.

It also has a page specifically put together for playing Fate online, based on a community request Reddit post.

That page was put together by Randy Oest, who has his own page on managing a Fate RPG campaign online. Many of the tips there are worth borrowing, and borrow them I will!

There is also a PDF of how to play Fate online posted on Twitter by @PG_YYZ in March. Some of it goes into setting up Discord and finding/collecting players on Reddit, but the rest revolves mainly around Roll20, which I’ve mentioned here previously.

Speaking of Roll20, Nathan Hare has a useful guide on his blog about playing Fate on Roll20 (with associated image album) from 2017. It goes into detail about layouts, tokens and macros; and between that and the PDF you should have a good idea of how to get a really good looking, slick setup on Roll20.

As an aside, while there’s plenty of customisation that can be done, I’ll be sticking with a somewhat bare-bones approach for the first session at least. None of us know if we’ll be continuing the campaign, or if we’ll use the first session or three to get familiar with Fate before starting afresh.

There’s another Reddit post from a couple of years ago with a few suggestions, such as keeping things simple with Google Slides/Writer/Draw.

My Plan

Having not yet tried Fate online, I obviously cannot say what works best. That being said, my plan is roughly:

  • Roll20 as a VTT/virtual space to host the game itself, with token images from either Nathan Hare or the second Reddit post
  • Zoom or similar for group video; I want to see the people I am playing with, to read reactions and see who is about to speak
  • Wekan as a self-hosted Trello/card alternative
  • Bookstack as a wiki for documenting story elements if we keep going

I’ll document how I set up the last two, and revisit this once I have a setup I like.

rpg

Using Roll20 to run One-Shots

Playing roles

With the lockdown in full swing, my friends and I have returned to playing RPGs. We have done mash-ups over the years, where everyone suggests a couple of genres and a location; and we randomly choose two genres and run that combination in a randomly-chosen location. It makes for some interesting combinations — gothic horror exploration of a haunted asteroid? eighties coming-of-age action on a Viking ship? — and some fun gameplay. But I’m not here to tell you how to do that.

We decided to use Roll20 to host the experience. It’s a powerful VTT (virtual tabletop) with lots of features for running games way more complex than our simple one-shot mash-ups. But it takes some getting used to, so here are a few things we’ve done for running our game.

In the Settings tab (tabs are in the top-right), turn off “Enabled background chat beep”. We also disabled all the Video and Audio chat as we are using Mumble for voice comms. We may use it for video, or we may join the entire rest of the world, who are using Zoom en masse during lockdown.

For Macros, we set up a few. Our system uses “roll under for success” with d100s. So for example, my character’s Mental roll macro would look like:

/roll d100 - @{Eric "Red Beard" O'Zoltan|Mental}

This Mental roll did not go so well…

There is a way to automatically use the currently selected token, using selected instead of the character name, but we couldn’t get that to work. The macro will at least pull in the stat from the character sheet (in this case, Mental) and show how far under the target it was rolled. This is needed in our system, but if you just want pass/fail, you can use the <? modifier just outside the curly braces.

You can get fancier and add decorations, more explanatory text and so on, but we set up the macros as we played, so they were done for expediency. We also set one up where a roll modifier can be assigned for rolls which are made more difficult/easy:

/roll d100 - (@{Eric "Red Beard" O'Zoltan|Mental} + ?{Modifier|0})

which prompts for the value.

Since we have three stats, my macros ended up looking like this.

Another small thing we found helps people remember who everyone is: set a display name, and change the (Send) As: in chat to your player character. This is a slight corruption of the display name feature, the setting help text for which says it is for your out-of-character name. But we all know each other’s ‘real’ names, so a reminder of characters is welcome. You can see me as Eric in the lower-right corner of the following:

The game area is sketched in purple, with player and NPC tokens in the lower corner. We used the 3d dice too as it feels more like rolling real dice! I gave my token a small aura so I could quickly visually identify who I am.

Clicking on the token gives options to set values (which can be used with macros), which I used to track wounds. Setting token orientation, and role-playing which way characters would face in reaction to events happening outside of a battle context, helped with immersion! I also used the small indicators to show what weapon/item I was wielding.

Lastly, even though it was a one-shot session, we wrote short bios for our characters, mostly for each other’s amusement — at least that’s how I wrote mine! — but it did help with role-playing and giving hooks we could use in the story.

We had lots of fun with this, hope y’all do too!

backups wisdom

Monitor Your Backups!

In which I find out things have gone awry

Hey you! Yes, you! The one reading this. You have backups, right? Go check that they i) actually exist ii) are backing up at the right frequency iii) work. This is important, I’ll wait.

borg: Great for Backing Up

I’ve been using borg for backups for a couple of years now. It’s great: it does deduplication (saving tons of space!), only backs up what has changed (efficient! incremental!), and is somehow fun to use while doing so.

I wrote a script to take the backups, run as a systemd service each hour. All was well: it did error detection and emailed me when a backup failed.

But I had occasion to check on the backups a couple of days ago, and the latest one was from January. My first thought was disk space, but there was enough (albeit getting close to the limit). So I then checked the systemd output:

$ systemctl status periodic-backup
● periodic-backup.service - Take a periodic backup of directories
     Loaded: loaded (/usr/lib/systemd/system/periodic-backup.service; enabled; vendor preset: disabled)
     Active: inactive (dead) since Wed 2020-02-12 12:03:06 GMT; 45min ago
TriggeredBy: ● periodic-backup.timer
   Main PID: 1168530 (code=exited, status=0/SUCCESS)

Feb 12 12:03:02 zeus systemd[1]: Started Take a periodic backup of directories.
Feb 12 12:03:06 zeus systemd[1]: periodic-backup.service: Succeeded.

So the job was running and… succeeding, but not backing up?

The next step in diagnosis was to run the script manually and make sure it still worked. The script didn’t error, but it took a long time to complete, longer than a straightforward case of “large increment to backup since January”.

So I broke it down even further, and ran the borg command as written in the script. I got a prompt:

Warning: The repository at location ssh://bertieb@pandora/home/bertieb/backups/borg/zeus was previously located at ssh://pandora/~/backups/borg/zeus

Aha! It was waiting on input to proceed. One form is how the script accesses the repo, the other is how it is accessed from the command line. It’s a bit strange, as the repo clearly didn’t move, and I’m not sure why it started treating the two differently.

Fortunately, borg has an environment var for just such an occasion: BORG_RELOCATED_REPO_ACCESS_IS_OK=yes
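So the fix was to export that in the script. A sketch of the same idea in Python (my actual script is shell, so this is illustrative; the repo path and archive naming below are placeholder choices):

```python
import os
import subprocess

def borg_env():
    """Environment for unattended borg runs: pre-acknowledge the
    relocated-repo prompt so borg never blocks waiting for input."""
    env = dict(os.environ)
    env["BORG_RELOCATED_REPO_ACCESS_IS_OK"] = "yes"
    return env

def backup(repo, paths):
    # {hostname}-{now} are borg's own archive-name placeholders
    cmd = ["borg", "create", repo + "::{hostname}-{now}"] + list(paths)
    return subprocess.run(cmd, env=borg_env()).returncode
```

That would have kept the hourly service from silently stalling on a y/n question nobody was around to answer.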


I asked in #borgbackup on Freenode about the issue, and folks said they had used a few things for independently monitoring backups:

  • Prometheus
  • Zabbix
  • Healthchecks

I am indebted to Armageddon for mentioning the last one. While full-on monitoring with Prometheus looks interesting (especially in conjunction with grafana), it’s way overkill for my needs. Ditto Zabbix.

Healthchecks is a relatively simple tool which implements the concept, “we expect a ping/health-check at <such-and-such> a frequency; if we don’t get it then alert”.
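A backup script can use that convention like so (a sketch; the check UUID is a placeholder, and the /fail suffix is Healthchecks’ way of signalling failure explicitly rather than waiting for the deadline to pass):

```python
import subprocess
import urllib.request

# Placeholder -- substitute the UUID of your own check.
PING_BASE = "https://hc-ping.com/your-check-uuid"

def ping_target(base, returncode):
    """Success pings the bare URL; failure appends /fail so the
    check alerts immediately instead of at the next deadline."""
    return base if returncode == 0 else base + "/fail"

def backup_and_ping(cmd, base=PING_BASE):
    """Run the backup command, then tell Healthchecks how it went."""
    rc = subprocess.run(cmd).returncode
    urllib.request.urlopen(ping_target(base, rc), timeout=10)
    return rc
```

The key point is that the alert fires even if the machine never pings at all, which is exactly the failure mode my email-on-error script missed.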

Armageddon/Lazkani’s blog has a worked example of setting up Healthchecks to work with borgmatic (a tool to simplify borg backups). The official borgmatic ‘getting started’ guide is pretty good too.

I set up Healthchecks using the linuxserver Docker image, and have added both my pre-existing scripts and some new borgmatic backups. Big note: the env vars listed there are used on creation; after that, they can be changed in the data volume / directory. That one held me up for a bit when I was trying to sort out email integration.

Looking good!

If you use the helpful ‘crontab’ format for the period, make sure to match the timezone, or you’ll get periodic emails saying the backup has failed. Ask me how I know…

automation coding python video

Generating Text Captions for Shotcut

Making the video editing workload much lighter

Shotcut is a Free (GPLv3) cross-platform video editor. I’ve used it a couple of times lately to put some simple clips together (like sorting the Take 2 copyright claim GTA Online video).

I figured I’d use it to take a clip of my friends and I getting schooled by someone with a bomb lance in Hunt: Showdown.

Actually, my first thought was to write a script to put a clip together using MELT — based on JSON, of course — but on reflection for these I wanted something a bit more refined.

So, enter Shotcut. One of the things I was keen to include were text-based captions. I’ve been including these in gifs (example) for a while now, and I think they work really well for video. They can be informative, and sometimes funny!

Text in Shotcut is doable natively via filters: text, HTML etc. But this felt awkward to me: I’d rather have something directly visible in the timeline which is easy to manipulate, and which can have filters added to it if it comes to that.

So I decided… to write a script to generate images with these captions, based on — yup! — JSON. I quickly threw together a JSON file for the dialogue in the clip I wanted to caption:

{ "captions": [
    [0, "close by here"],
    [0, "other side of this wall"],
    [1, "yep yep yep"],
    [2, "That was a Sparks! :o"],
    [0, "ohhhh fudge"],
    [0, "I die to this"],
    [0, "GADDAMMITTT"],
    [1, "what was that?"],
    [0, "bomblance :("],
    [1, "where?"],
    [2, "he's with me"],
    [2, ":("],
    [0, "you've got one bullet left"],
    [0, "maybe on top if he's got a bomblance?"],
    [1, "good idea"],
    [0, "is that not him at the gate?"],
    [1, "dunno where he is"],
    [2, "he's on our bodies"],
    [1, "I know..."],
    [1, "WHAT?! *panicflee*"],
    [1, "this is a bit difficult"],
    [1, "fuq! :("],
    [1, "I should have run again"],
    [1, "oh well"],
    [0, "gg wp Flakel, you beat us o7"]
] }

Simple! The numbers refer to speakers: 0 is the first, 1 the second, 2 the third. I didn’t actually need to zero-index speakers, and in fact I could use text strings to denote who is speaking, but writing numbers is quicker when there are twenty-five captions to do.

The script, which I will throw up on GitHub, goes through this and generates the caption for each item in the list. It has assigned colours for each ‘speaker’.
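That part can be sketched like so (not the actual script, which isn’t up yet; the palette, font size and convert flags here are placeholder choices of mine):

```python
import json

# Placeholder palette: one colour per speaker index.
SPEAKER_COLOURS = ["white", "yellow", "cyan"]

def caption_command(speaker, text, outfile, size=(1920, 1080)):
    """Build an imagemagick `convert` invocation that draws a stroked
    caption near the bottom of a transparent full-frame canvas."""
    colour = SPEAKER_COLOURS[speaker % len(SPEAKER_COLOURS)]
    return ["convert", "-size", "%dx%d" % size, "xc:transparent",
            "-gravity", "south", "-fill", colour, "-stroke", "black",
            "-pointsize", "48", "-annotate", "+0+40", text, outfile]

def commands_for(captions_json):
    """One convert command per caption entry in the JSON file."""
    data = json.loads(captions_json)
    return [caption_command(s, t, "caption_%03d.png" % i)
            for i, (s, t) in enumerate(data["captions"])]
```

Each command can then be handed to subprocess.run, producing one full-frame PNG per caption ready to drop onto the Shotcut timeline.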

Due to familiarity, I was going to use imagemagick, but I originally used Pillow as I wanted to [re]gain a bit of familiarity with that. Once I had [re]acquainted myself with the few bits I needed, it was relatively straightforward to generate a cropped image with the text appropriately sized, coloured and stroked. But I found myself wanting a full 1920×1080 frame, as this made the Shotcut workflow much quicker: there was no need to set position if the image was the same size as the source video.

So I changed Pillow/PIL out for imagemagick and subprocess and redid the whole thing in a few minutes. The imagemagick version is significantly slower, but not so slow as to be intolerable even when wanting to tweak a couple of the captions.

I’m quite happy with how it turned out:

The ‘automatic’ text sizing could use a little tweak!

Lessons learned:

  • using something you’re familiar with is often easier than learning something new
  • PIL is faster than imagemagick for generating simple text on a transparent background
  • bomb lancers can be pretty deadly
timesavers troubleshooting

Recovering the Config of a Running Xen DomU

For those “oh poop” moments

I was in a situation where I had a running Xen guest, but the config file that defined the DomU was missing.

Fortunately, the listing command (xl list) has a long option, xl list -l, which prints out domain information in JSON format. This includes config information, from which the DomU configuration can be rebuilt.
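As an illustration, something like the following can pull the essentials back out of that JSON (the key paths below matched my output, but check them against your own, as the layout varies between Xen/libxl versions):

```python
import json

def extract_config(xl_json, name):
    """Find one DomU in `xl list -l` output and return the basics
    needed to rewrite a minimal config file."""
    for dom in json.loads(xl_json):
        cfg = dom.get("config", {})
        if cfg.get("c_info", {}).get("name") == name:
            b_info = cfg.get("b_info", {})
            return {
                "name": name,
                # libxl reports memory in KiB; config files use MiB
                "memory": b_info.get("target_memkb", 0) // 1024,
                "vcpus": b_info.get("max_vcpus"),
            }
    return None
```

Disks and vifs live under the same config object and can be recovered the same way.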

automation python

Including Contemporaneous Info in my YouTube Workflow

From the Department of Wordy Titles

I have a set of tools that I have written to make interacting with YouTube simpler and my workflow more streamlined.

In its current state, the process roughly looks like:

  1. record a bunch of videos
  2. upload the files and leave them in place
  3. run genjson on them to create a JSON template, including a reasonably-spaced publish schedule
  4. run get_ids to associate the JSON entries with the video’s YT videoId
  5. go through the videos, rewatch to decide on title, description and thumbnail frame and include this in the JSON entry
  6. run uploadytfootage to update the metadata

Most of the above is highly automated; even step 2 could be done away with if the default YouTube API quota didn’t limit one to roughly six videos per day.

The most labour-intensive part of the process is step 5. Because of the batch nature of the job, sometimes quite a few videos can pile up. For example, at time of writing I have 45 Hunt: Showdown videos from the past ten days to do.

Getting a short, catchy yet descriptive title and description for each of those will involve reacquainting myself with what those round[s] entailed. So I decided recently that I would try to do some of that work as I go: between rounds of Hunt, write out a putative title and description associated with a video file to another JSON file.

I also capture a short snippet or potential title on a notepad on my desk:

Between those hopefully the process will be a bit easier.

I also cooked up a short script to merge together the two JSON files. The crux of it is the filter that selects from the ‘contemporaneous note’ if it has an associated entry for a file in the generated JSON template list.

We are working with a list of dicts, so a list comprehension is handy. We want to select from the list of dicts an entire dict that matches the filename of the video. Roughly speaking:

next(item for item in json_c if item["file"] == filename)

Docs: list comprehension, next()
SO example: Python list of dictionaries search
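Putting the filter to work, the merge might look like this (a sketch; any field name other than "file" is illustrative):

```python
def merge_notes(template, notes):
    """Overlay each contemporaneous note onto the matching generated
    template entry, matching on the video's "file" field."""
    merged = []
    for entry in template:
        note = next((n for n in notes if n["file"] == entry["file"]), None)
        # dict unpacking: the note's values win where both define a key
        merged.append({**entry, **note} if note else dict(entry))
    return merged
```

Passing a default of None to next() avoids a StopIteration when a video has no note, which is the common case for quieter rounds.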

If I am able to keep on top of titles and descriptions as I go, the only thing needed will be to find a good thumbnail frame! (though that’s kinda time consuming in itself, perhaps ML could be applied to that…)

Edit: Yes! Deep neural net thumbnails and convolutional neural nets (PDF)

automation python timesavers video

Rescheduling YouTube Videos using Python

More ‘exactly what it says on the tin’

A couple weeks ago, I had to renumber some Hunt: Showdown videos in a playlist:

Well, now I have another issue. When we started playing Hunt: Showdown, I was publishing the videos a couple a day on Mondays, Wednesdays and Fridays. Putting them all out at once is a bit of a crass move as it floods subscribers with notifications, so spreading them out is the Done Thing.1

However, we’re now above 150 videos, and even after adding weekends to the schedule that still takes us up to, umm, May.

What I’d like to do is go back and redo the schedule so that all pending videos use Saturdays and Sundays, and maybe think about doing three or four per day, which would bring us down to about 8/6 weeks’ worth. That is still a lot, quite frankly, but pushing up the frequency further would be detrimental.

Changing the scheduled publish date would be even more painful than renumbering because it requires more clicks, I’d have to keep track and figure out when the next one was supposed to go out, and there are more to do (120-odd).

So back to python! I have already written a schedule-determiner for automating the generation of the pre-upload json template, so I can reuse — read: from genjson import next_scheduled_date — that for this task.

The filtering logic is straightforward: ignore anything that is not a Hunt video, and skip anything before a defined start date (i.e. videos already published). From there, replace the current ‘scheduled’ date with the next one from the new schedule.
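The new schedule itself is simple to generate; a hypothetical reimplementation (my actual slot times and days live in genjson) might look like:

```python
from datetime import datetime, timedelta

# Assumed publishing days/slots: Mon, Wed, Fri plus the weekend,
# three videos per publishing day.
PUBLISH_DAYS = {0, 2, 4, 5, 6}   # Mon=0 ... Sun=6
SLOT_HOURS = (12, 16, 20)

def schedule_dates(start, count):
    """Return `count` publish datetimes from `start` onwards,
    filling every slot on each publishing day before moving on."""
    day = start
    dates = []
    while len(dates) < count:
        if day.weekday() in PUBLISH_DAYS:
            for hour in SLOT_HOURS:
                dates.append(day.replace(hour=hour, minute=0))
                if len(dates) == count:
                    break
        day += timedelta(days=1)
    return dates
```

Zipping the resulting dates against the filtered video list gives each pending video its new slot.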

For the current set of scheduled videos that are not already published, the schedule of 3 videos each 5 days (15 per week) gives:

Current date: 2020-04-06 17:30
New date : 2020-03-09 20:00

So we’ve saved a month! Plus the pending videos (~40) will be done in two and a half weeks instead of four.

From here it’s straightforward to rewrite the scheduled field and use shoogle as before to change the dates, this time setting publishAt under status. Note that privacyStatus needs to be explicitly set to private, even if it is already set! This avoids a “400 The request metadata specifies an invalid scheduled publishing time” error.
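As a sketch, the payload for each video ends up shaped like this (the helper name is mine):

```python
def reschedule_body(video_id, publish_at_iso):
    """Payload for the videos.update call: privacyStatus must be
    restated as private alongside the new publishAt, or the API
    rejects the scheduled publishing time."""
    return {
        "id": video_id,
        "status": {
            "privacyStatus": "private",
            "publishAt": publish_at_iso,
        },
    }
```

One such body per video, fed through shoogle, and the whole backlog is rescheduled.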

Another thing done quickly with python!

1: On the note of ‘Done Things’, the thing to do would be to upload fewer videos in the first place.

I’ve considered that, and if a video is truly mundane and missable, I will omit it. But as well as being fun/interesting videos of individual rounds, the playlist should serve as a demonstration of our progress as players. The Dead by Daylight playlist does this: we start with no idea what’s going on or how to play properly, and by the final video — somewhere north of 300 — we are pretty competent.

cool programming

AI Dungeon 2 Is Fun Nonsense

Apparently, there’s something about Mary

It’s night. I’m a private detective from Chicago named Joseph, on the hunt for someone named Jim, and I have a gun and a badge. I’m in the woods, and I hear some noise from behind the trees. Suddenly an old man shoots an arrow from a bow at a hitherto-unseen target. He runs off, but I catch up with him and ask his name. It turns out that he’s also a detective from Chicago named John, and he’s also hot on the trail of Jim too.

I ask “How did you know my name?” and he replies, succinctly: “Because we’re both detectives.” I try to discuss the case with him, but he refuses to be drawn on it, preferring to cryptically state “I’m sure we’ll have some clues soon enough”.

We come across a small house in the woods, and I venture inside. A woman sits, reading quietly. I ask her about Jim, but she only says that he left long ago. I make a note of the house and return the next day without John. I look around and find some white socks and black pants. Ah-ha! These are crucial to the case. I put them on immediately. Surely it’s now only a matter of time before I find Jim.

I go back outside, and see John, the other detective watching me cautiously. Clearly he’s jealous of my new socks and pants. He disappears into the woods. I run after him but find only a shack, in which a single light bulb illuminates a strange assortment of books and papers with diagrams. I picture Jim with this:

Combing through the strange lot of papers, I find one that might help my case! It’s a drawing. A drawing of a man in front of a tree. He has a hat, and the hat has horns. His eyes are wide open and staring at me.

This is Jim!

I find the tree in the drawing. It’s odd. It isn’t right. It seems to be made of wood, but it has cracks all over and seems as if it was never alive in the first place. Maybe it has Jim inside it? In any case it isn’t right. It has to go.

I break the tree apart, fling a piece at a nearby wall, which thuds, then silence.

The next day, I come home and see that everything is gone.

The above is how my first dabble with AI Dungeon 2 started. I was linked to it without context, so had no preconceptions going in. It all started off somewhat normally; I wondered if it was some kind of randomly-generated MUD (an old text-based system predating popular MMORPGs that let users create text-based worlds and interact with one another). But as things got slowly more odd, it seemed like it was something else. It had the slightly weird, funny cadence that computer-generated text has.

I had come close to finding Jim. The house, the pants, the drawing in the shack, and the tree. They all fitted together, and I knew I must be close. I returned to the woods.

Thereupon I chanced on a woman sitting on a rock, crying. She explained that her sister Mary had gone missing only the night before. Perhaps Jim had a hand in this. I tried to explain the situation as best I could, but this only upset her more. So instead, I gave her a hug. This calmed her down, perhaps too much. She fell to the ground. She needed to be somewhere safe, but where? Ah! The shack! I carry her there.

Going in, I find a man dressed in an old coat and wearing glasses. He has long white hair that hangs down to his shoulders. His eyes are closed and he seems very tired looking. What the heck is he doing there? I demand to know his name.

“My name is James, but everyone calls me Jack.” Joseph, John, Jim, James, Jack… Wait! James? As in the unshortened form of Jim..? I have to think on my feet, and decide to act quickly.

“Where is Mary?”

I’ve got him now. Or so I think. But the man just sighs and shakes his head. He thinks he’s won. But I’m Joseph, a detective from Chicago. And Chicago detectives know how to roll with the punches, literally and figuratively. I decide to roll with this one and throw him off balance. I drop my voice, lean in close and growl:

“Where is Jim?”

“Mary..? She left with another guy named John.”

He yawns and rubs his eyes. He looks tired too. But he knows I’ve got him. “Mary… Jim… Where is Mary?” He’s trying to throw me, but he didn’t reckon with my Windy City credentials. He coughs and then speaks. “She left with another guy named John.”


The one thing I wasn’t expecting. The one man I didn’t suspect.

Time for action. Mary and John can wait, but Jim’s my case and he has questions to answer. I grab Jim by the collar and pull him from behind the desk. He puts up a brief resistance, but he isn’t strong enough to break free. Up against the wall he goes, and I cuff his hands together behind his back. Time to take him downtown.

I’ve long enjoyed the output of Markov chains. They are some relatively simple procedures for generating sequences based on previous values and frequencies. You can apply this to text, and generate new text based on frequencies of letters, or words.
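As a quick illustration, an order-1 word-level chain takes only a few lines of Python (a sketch, not the bot’s actual code):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word tuple to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random starting key; duplicates in the
    follower lists make each pick frequency-weighted for free."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    order = len(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Raising the order makes the output drift ever closer to verbatim chunks of the source text, which is exactly the short-length/long-length effect described below.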

The old resources I used to learn about Markov Chains way back when have somewhat stuck in my head. I recall a reference to ‘Alice in Elsinore’; and that can be found at a page called ‘Fun with Markov Chains‘. There’s another bit which went into the varying lengths, how short lengths — say, one to three characters — produced gibberish that kinda almost looked like it might have been English once; and longer lengths gradually come closer and closer to the original text[s]. That seems to have been part of Programming Pearls, which used to be available to read online; I only managed to find part of that section archived on Jeff Atwood’s blog by use of some judicious Google search tools.

You can create some fun things with Markov chains. The examples given above included a generated Alice in Elsinore and the Revelation of Alice. I implemented Markov chain text generation as a command for an IRC bot that I wrote, which could talk in the ‘voice’ of my friends that hung out on there; that command was definitely my favourite.

Latterly, we’ve seen a resurgence of this with the rise of ‘AI’, such as this ‘AI-written Harry Potter fanfiction’:

Harry Potter and the Portrait of What Looked Like a Large Pile of Ash
Hungry indeed

or less child-friendly things, like Trump speeches:

But calling any of this ‘AI’ is a stretch. It’s picking things based on random chance and frequency. If I have a sock drawer with thirty red socks, six green and two blue I’d be… a bit boring. But if I closed my eyes and picked socks from there, it would be a bit misleading to write an article saying “I got an AI to choose my clothes for the week and these are the results”.

But I digress.

Having brought in Jim, my attention must turn to Mary. Her sister was counting on me. I trusted my Chicago detective instincts and followed up on a lead that Jim spilled during his interrogation.

I went to the park. There I met two men, Mikey and Brenda. Apparently, they didn’t get along. I knew Mikey was hiding something, and decided to find out what it was. I dragged him into an alleyway, shoved my knee into his back, and started punching him.

Good Cop time was over, now it’s Bad Cop’s shift.

Mikey pleaded with me for mercy, this was all a misunderstanding, help would be forthcoming, he didn’t want to die, etc. I told him to shut up.

“Where is Jim?” I asked in the same voice I used on Jim earlier… Wait, wait. Wasn’t Jim at the police station? “Oh, that’s right,” Mikey says. “He went home for the day.” I was confused, but went along with it. “Oh, good”. But then Mikey had a surprise for me. He grabbed me, threatened me and apologised. I sensed that Jim was a touchy subject best left alone, so asked about Mary.

“Mary?” Mikey asks. “Who’s Mary?” I explained about the woman’s missing sister. “What about her?” Mikey enquires further. But at that point we spot Mary coming out of a store. I approach Mary, and she looks surprised to see me.

“Hey, you’re not my brother anymore,” Mary says. “Are…are you?”

Apparently she recognised me. I ask about her sister and Mary explains she’s at work.

At this point I realise something weird is going on. Sounds seem muffled, colours aren’t quite right, and time and place seem strangely elastic.

I thought perhaps AI Dungeon 2 was a bit like Sleep Is Death (Geisterfahrer) by Jason Rohrer, where the stories are written by players; or Cleverbot, where responses given by people are saved and can be reused.

But AI Dungeon 2 instead uses deep learning techniques to keep generating content, no matter what is thrown at it. It does have limitations, but it’s an interesting concept sprung from a Hackathon.

Best bit? It’s Free Software, MIT licensed! Check out its Github!

Things were getting weird. I tried to dance with Mary, which seemed like the thing to do at the time. She stared at me, but not in an uncomfortable way. I tried a backflip, and it ended with us falling asleep together1. Then I had to run away, far away; away from the voices shouting that we’re not sisters.

A group of men accosted me. They looked like they had been drinking heavily. I had to keep the initiative; my detective instincts took over and I slapped one of the men. It surprised the group. I slapped another one and it surprised them identically. But they started to beat me, which I guess was inevitable.

I tried everything to distract them. The harmonica, juggling, telling a joke. Fortunately, the last one worked. Unfortunately, at that moment a helicopter landed and I was kidnapped. Mary tried to rescue me, but the jailer was having none of her pleas for mercy or bribes. Eventually, he tired of the conversation and wandered off into the woods, and Mary went all Bastille Day on the prisoners.

The narrative was based on my first interaction with AI Dungeon 2, which can be read in full.



Protip: Don’t dd Your Root Partition

In which our hero makes the titular mistake.

I was in the process of creating a new DomU, a virtual machine guest under Xen, and had just completed a basic Arch install.

At this point I thought “Oh, it would be handy to have a bare-bones Arch image ready to go, I should make that happen”. So I took an LVM snapshot of the logical volume in one terminal window, and continued with post-install setup in another.

I went to copy the logical volume using dd and tab completed:

$ dd if=/dev/vg/newdomudisk of=/dev/vg/a<TAB>
$ dd if=/dev/vg/newdomudisk of=/dev/vg/archroot

Because it’s an Arch install, I had probably named it ‘archsomething’, right? Well, no.

I had named the intended LV ‘basearch’ because it’s a base Arch install. While I continued customising the guest, I had a nagging feeling that something wasn’t right.

$ ls /etc
  Segmentation fault

Side note: this is almost the same point as Mario Wolczko in the [in]famous recovery story as told to alt.folklore.computers, archived in a bunch of places (mirror here). Only his error was “ls: not found.” The story is well worth a read for the creativity shown in recovery.

My reaction was ‘Oh poop’. I stopped the dd. Unfortunately it had written a good couple of gigabytes by that point. The ssh connection stayed up for a while, letting me see that most things had been nuked. Then the connection hung, and the guests stopped responding.

I was caught out in this situation by a couple of things. My other server running the Xen hypervisor uses Debian as a base, so it didn’t cross my mind that an Arch logical volume would be the one with the hypervisor. I was also multitasking, and didn’t double-check the target (LV) before dd-ing.

So: make names obvious. Make them blindingly obvious. I’ve named the new LV containing the root for the Xen hypervisor ‘xenroot’, and you can bet I’ll be double- and triple-checking dd for a good while, at least!

automation python timesavers video

Renumbering Ordered Videos in a YouTube Playlist with Python

Doing exactly what it says on the tin

I’ve been playing Hunt: Showdown with friends recently. With these kinds of things I like to stream and record the footage of us playing so that others can share our enjoyment — highs and lows! — and so we can watch them back later.

The videos are compiled in a playlist on YouTube, in the order recorded. The tools that I’ve written to help automate the process of getting the videos from a file on a hard drive to a proper YouTube video include numbering.

I realised that I had missed out three videos, which would throw off the numbering. The easy options would be to:

  • add them to the end of the playlist; downside: the video number wouldn’t reflect the order and progression
  • insert them in the right place manually; downside: it would take a long time to manually renumber subsequent videos (around 60)
  • write a script to do this for me

Guess which one I picked?

Interacting with YouTube programmatically comes in two main forms: APIs or a wrapper like shoogle. The latter is what I am familiar with, and has the benefit o’ being a braw Scottish word to boot!

The list of video files I’ve uploaded is in JSON format, which makes interaction a cinch. The list is loaded, anything that isn’t a Hunt: Showdown video is skipped*, and a regex matches the video number in the title; if that number is over a threshold (59 in this case), it is increased by 4 (I also had a duplicate number in the list!).
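The renumbering step can be sketched roughly like so (the field names and example titles here are assumptions for illustration; my real list uses its own schema):

```python
import re

# Hypothetical entries standing in for the real JSON list
videos = [
    {"title": "Golden Battle (Hunt: Showdown #59)"},
    {"title": "Quiet Swamp (Hunt: Showdown #60)"},
    {"title": "Unrelated Video"},  # not Hunt: Showdown, skipped
]

pattern = re.compile(r"\(Hunt: Showdown #(\d+)\)")

for video in videos:
    match = pattern.search(video["title"])
    if match is None:
        continue  # skip anything that isn't a Hunt: Showdown video
    number = int(match.group(1))
    if number > 59:
        # Shift the number up by 4 to make room for the missing videos
        video["title"] = pattern.sub(
            f"(Hunt: Showdown #{number + 4})", video["title"]
        )
```

Everything at or below the threshold stays put; everything above it shifts to open up the gap where the missed videos belong.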

This title is then set using shoogle. The API has certain things it expects, so I had to ‘update’ both the title and the categoryId, though the latter remained the same. You also have to tell the API which parts you are updating, which in this case is the snippet.

As an example, the json passed to shoogle might look like:

{
    "body": {
        "id": <ID>,
        "snippet": {
            "title": "Golden Battle (Hunt: Showdown #103)",
            "categoryId": "20"
        }
    },
    "part": "snippet"
}

From here it’s a simple matter to invoke shoogle (I use subprocess) to update the video title on YouTube.
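A minimal sketch of that invocation, assuming shoogle is installed and already authorised (the command follows shoogle’s documented `execute SERVICE.RESOURCE.METHOD FILE` form; credential flags are omitted):

```python
import json
import subprocess
import tempfile


def build_update_request(video_id, new_title):
    """Build the request body for videos.update.

    The API wants categoryId sent alongside title when
    updating the snippet, even if it doesn't change.
    """
    return {
        "body": {
            "id": video_id,
            "snippet": {"title": new_title, "categoryId": "20"},
        },
        "part": "snippet",
    }


def update_title(video_id, new_title):
    """Write the request to a temp file and hand it to shoogle."""
    with tempfile.NamedTemporaryFile(
        "w", suffix=".json", delete=False
    ) as f:
        json.dump(build_update_request(video_id, new_title), f)
        request_file = f.name
    subprocess.run(
        ["shoogle", "execute", "youtube:v3.videos.update", request_file],
        check=True,
    )
```

Each call costs API quota, so it pays to batch sensibly rather than re-running over the whole list.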

The one caveat I would mention is that you only get 10 000 API credits per day by default. Updating the video costs 50 units per update, plus the cost of the resource (for snippet this is 2), which works out to 192 videos per day, max.
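That ceiling is easy to sanity-check:

```python
DAILY_QUOTA = 10_000          # default daily allowance in units
COST_PER_UPDATE = 50 + 2      # update cost plus the snippet resource cost

max_updates_per_day = DAILY_QUOTA // COST_PER_UPDATE
print(max_updates_per_day)    # 192
```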

Once the list has been updated, I dump out the new list.

Much quicker than doing it manually, and the videos all have the right number!