I played some GTA V: Online the other night — my three-word review: ‘fun but clunky’ — and uploaded the footage as I usually do, leaving it as a draft to be updated later with my automation tools.
Later on I saw I had a notification on YouTube and thought “Ah! Someone’s subscribed, or commented, or similar”. Actually, I had a copyright claim from Take-Two Interactive for ‘WZLJHRS’. What?
The segment in question, just under two minutes long, was of a GTA teevee programme (‘Jack Howitzer’, a documentary/mockumentary about a washed-up action-movie actor) I watched while waiting for my friend to arrive at my office. It had some funny moments.
I am mindful of YouTube’s Content ID system, and I pre-emptively mute game music, having been bitten by it in the past. I didn’t suspect for a second that a fake TV show within a game would result in an entire video being blocked.
On a few of my Europa Universalis series, I’ve used a quick little python script to take care of some of the predictable elements of the series — tags, title and video number — and to work out a schedule.
Having gone through the process of uploading a lot of Dead by Daylight videos in the past, and with a large and growing set of Hunt: Showdown videos building up, it seemed like a good time to start adapting that script.
As the previous uses had been EUIV videos, the parameters were hard-coded as variables. This is obviously undesirable for a general-purpose script, so we need some way of passing in the things we want. And since we’re outputting JSON, why not use JSON for the parameters file too?
We look for a supplied directory and file pattern, and pass those to glob.glob to be os.path.join-ed to build the file list. We then use a sorted() copy of the list which will have the videos in the correct — see assumption — order for the playlist.
Iterating through this sorted list, we can set the basics that uploadytfootage expects.
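To give a flavour, here is a minimal sketch of that flow (the parameter keys, like directory, pattern and series, are my assumptions rather than the script’s actual names):

import glob
import json
import os

# Load the JSON parameters file (keys assumed for illustration)
with open("params.json") as f:
    params = json.load(f)

# Build the file list, sorted into (assumed) playlist order
pattern = os.path.join(params["directory"], params["pattern"])
files = sorted(glob.glob(pattern))

# Set the basics uploadytfootage expects for each video
entries = []
for number, path in enumerate(files, start=1):
    entries.append({
        "filename": os.path.basename(path),
        "title": f"{params['series']} #{number}",  # assumed title scheme
        "tags": params["tags"],
        "playlists": params["playlist"],
    })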
The only ‘fancy’ work here is in figuring out the schedule dates. Quoting my own docstring:
"""Given:
- the current scheduled date
- valid days [M,Tu,W,Th,F,Sa,Su]
- valid times (eg [1600, 1745, 2100])
return the next scheduled date"""
I debated whether to make this a generator, and in the end I avoided it for reasons I can’t quite remember.
First we look at hours: if there’s a valid time later in the current day, use that. If not, we set the new hours part to the earliest of the valid times.
Next, days: if there’s a valid day later in the current week, move to that one. If not, subtract the difference between the current day and the earliest valid day from 7, and add that many days. That one might need a bit of explaining:
Using 3 for the days component of the timedelta gives us the Monday following the current Friday. We can also set the hours and minutes component of the time in that timedelta object.
Then it’s simply a matter of returning the value of the current scheduled date plus the timedelta!
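Putting that together, a minimal sketch of the whole function (assuming valid_days are sorted weekday numbers with Monday as 0, and valid_times are sorted (hour, minute) tuples):

from datetime import timedelta

def next_scheduled_date(current, valid_days, valid_times):
    """Return the next scheduled date after `current`."""
    # Hours first: is there a valid time later in the current day?
    later_today = [t for t in valid_times if t > (current.hour, current.minute)]
    if later_today:
        hour, minute = later_today[0]
        return current.replace(hour=hour, minute=minute)
    # Otherwise take the earliest valid time...
    hour, minute = valid_times[0]
    # ...on the next valid day, wrapping around the week if needed
    later_this_week = [d for d in valid_days if d > current.weekday()]
    if later_this_week:
        days_ahead = later_this_week[0] - current.weekday()
    else:
        # e.g. Friday (4) to Monday (0): 7 - (4 - 0) = 3 days
        days_ahead = 7 - (current.weekday() - valid_days[0])
    return current.replace(hour=hour, minute=minute) + timedelta(days=days_ahead)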
In addition, I skip changing the scheduled date for any video with “part” in the filename, on the basis that if a video has just been split for length — such as a three-hour EUIV video split into hour-long segments — the different parts should all go out on the same day.
Having all the dates in the schedule figured out and set automatically is a huge timesaver.
The JSON provided by genjson is valid as far as uploadytfootage is concerned; the only things that really need doing are setting a title (if the videos in the series have different titles; EUIV playlists tend not to, Hunt ones do), a description, a thumbnail title and a thumbnail frame time.
Doing those few things is much quicker than redoing the metadata for each and every video.
(The traefik labels make the service available via http://maubot.bertieb.org)
Setting up a Bot
This gets you a management interface, but the bot itself needs to be set up. It’s not entirely obvious, though the instructions are present elsewhere in the wiki.
I manually created a user using the Riot web interface, though there are instructions on how to do it via CLI using mbc. If you go the manual route, the access token that maubot asks for can be found by clicking your avatar/username dropdown in the top left to access ‘Settings’ -> ‘Help & About’ -> ‘Access Token’:
Once done, you should have a bot:
But it won’t do anything just yet; you need to add plugins!
As an aside, I managed to run into a permissions issue at this point, where the maubot interface wasn’t responding via HTTP, and docker logs was complaining:
[2019-11-29 19:18:15,617] [INFO@maubot.init] Initializing maubot 0.1.0.dev28
[2019-11-29 19:18:15,618] [DEBUG@maubot.loader.zip] Preloading plugins...
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
File "/opt/maubot/maubot/__main__.py", line 58, in <module>
File "/opt/maubot/maubot/loader/zip.py", line 270, in init
File "/opt/maubot/maubot/loader/zip.py", line 253, in load_all
for file in os.listdir(directory):
PermissionError: [Errno 13] Permission denied: '/data/plugins'
It’s simplest to upload the desired plugins from the web interface itself. As the wiki points out elsewhere:
To add a plugin, upload a zip file containing the maubot.yaml and relevant files at the top level. GitHub releases of plugins have those premade (see e.g. https://github.com/TomCasavant/PollMaubot/releases – file casavant.tom.poll-v1.0.0.mbp) – mau.dev/maubot has a CI that makes those. Also, mbc build will make those with the relevant files.
Alternatively, you can compile the plugins to a .mbp yourself: clone the main maubot repo, run setup.py to get the dependencies, then clone the plugin[s] in maubot’s directory, and finally run mbc build <plugin> for the plugin you just cloned (per mChron’s comment).
Once you have the plugins you want, create an instance for them and assign them to the bot client. That interface will also show you configuration options and let you view logs, if needed. Then you’re good to go!
We set out to use OCR to extract metadata from frames of the loading and ending screens of Deep Rock Galactic, and to use that metadata to fill in the details of videos destined for YouTube.
In other words we went from:
It’s always good to reflect when you’ve done something. Did it go well, or not as well as expected? What did you hope to achieve? Did you achieve that? What has it changed? There are as many ways to reflect as there are things to reflect on.
In this project I wanted to achieve a greater degree of automation with my video creation workflow. Partly because it would save me time:
The other reason is that copying text is no longer the province of monks in a scriptorium; it’s a repetitive, uncreative task. I enjoy spending time playing games with my friends, and those videos are there so that they and others can relive and enjoy them too; spending time copying text is not a good use of my time.
However, there’s a more pertinent image for this sort of task:
There were 47 videos in the test batch. Let’s say that I would have spent five minutes per video copying across the title, writing a description, figuring out the tags and such; doing that manually would have taken 235 minutes, or nearly four hours. That might sound like a lot, but it’s certainly less than the time I spent on the automation.
The automatic OCR will have ongoing benefits – there are more videos to process.
But the best part is that I learned. I learned about tesseract and OCR, a bit about OpenCV, and honed my python programming skills.
OCR is good enough to extract text from video stills. I assumed this, but it is good to have it confirmed.
Cleaning up images makes a huge difference to OCR accuracy. Had I done the cleanup earlier in the process, I could probably have improved detection on the opening image enough to rely on it alone; but using both loading and ending images gives more metadata, so it worked out okay.
It’s really easy to leak file descriptors. Late on, when I went to test with a wider variety of videos, I ran into “OSError: [Errno 24] Too many open files”. The culprit was tempfile.mkstemp, which hands back an open file descriptor that the caller must close; switching to tempfile.NamedTemporaryFile fixed it. That one took a bit of hunting down, as it looked like pytesseract was failing; coincidentally, previous versions of pytesseract had a couple of issues caused by the same thing (mkstemp vs NamedTemporaryFile)! Most confusing.
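For the record, a minimal sketch of the difference (the filenames are illustrative):

import os
import tempfile

# Leaky pattern: mkstemp returns an open OS-level file descriptor
# that the caller is responsible for closing
fd, path = tempfile.mkstemp(suffix=".png")
os.close(fd)  # forget this and every call leaks a descriptor
os.remove(path)

# Safer: NamedTemporaryFile closes (and deletes) itself on context exit
with tempfile.NamedTemporaryFile(suffix=".png") as tmp:
    tmp.write(b"fake image bytes")  # tmp.name gives the path if needed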
What Would I Do Differently?
Implement automated testing. This would have hugely helped in the refinements stage, where regressions in detection accuracy occurred as I refined. There were a couple of reasons that put me off at the time, but they were more excuses than reasons:
this was a “quick and dirty” attempt to get a tool working, refinements to it can come later
This is an old, old excuse, proved false time and again. It’s sometimes phrased as “This is just a temporary fix, will do it properly later” and other variants. What it boils down to is “We’re going to do this the ‘wrong’ way for now, and change it later”.
It sounds fine, if you actually sort it later, but invariably that doesn’t happen. Time and effort have to be focused somewhere, and it’s a harder sell to redo something that “works” (however hackily) than to implement a new feature, or get a product out the door.
Here it was even worse: doing that work may well have improved the “quick and dirty” process.
the frame extraction + OCR processes aren’t quick, and tests should be quick to run; it’s also hard to break apart the pipeline
This excuse is on slightly firmer ground, but not by much! It’s true that these things take time, but they can be broken down into components and tested individually using sample images (see the sketch below).
It might not provide the coverage of a real life full data set, but it’ll catch the worst of regressions.
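For instance, a minimal pytest sketch along those lines (the drgocr module and ocr_field helper are hypothetical names, and the sample images would be pre-cropped regions):

from pathlib import Path

import pytest

SAMPLES = Path("tests/samples")

@pytest.mark.parametrize("image,expected", [
    ("loading_mission_type.png", "Elimination"),
    ("ending_mission_name.png", "Illuminated Pocket"),
])
def test_ocr_field(image, expected):
    from drgocr import ocr_field  # hypothetical module under test
    assert ocr_field(SAMPLES / image) == expected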
Future Improvements + Directions
Use only a start or end frame if one is missing. At the moment a video is skipped if either the start or end frame is not detected. That leaves the video to be done entirely manually; we could get at least some of the metadata from one frame without the other.
Detect in-game menu screen. For times when I hit the record button too late (or OBS takes too long to spin up), I could go into the menu which has a couple of bits of metadata. I would need to remember to do this, but I usually realise I’ve hit record too late. Combined with the above improvement, we could increase video coverage.
Expand OCR to other games. This is non-trivial but an obvious way to go. Killing Floor 2 is the likeliest next candidate as at the moment it’s the one we play the most and also has metadata to capture.
Consider a further automated pipeline. As it stands, I have to run the program against videos manually; not a big deal. But a tool that detected new videos, automatically ran the OCR tool against them and put them and the JSON output in a convenient place (± automatically uploading them to YouTube) would make the process more streamlined. This may be beyond my own need or indeed tolerance; I could see it being potentially frustrating if I wanted to handle a video differently by hand.
Overall though, I am happy with how the tool turned out.
Traefik grabs the first port it sees, which on the dev image is 1080; we want port 9292. Use --label=traefik.http.services.discourse-dev.loadbalancer.server.port=9292 (in traefik v2 the port is set on the service, not the router)
You need to set a dev host using an env var in the container: -e DISCOURSE_DEV_HOSTS=your_dev_hostname \
With the dev version of Discourse working, I wanted to let its connectivity be managed by the traefik proxy. But whichever way I sliced it, I would get a Bad Gateway error. The usual suspect for this is not setting a port, or having the service on a different network from traefik itself. However, this issue persisted for me.
I had to add the following to (discourse_source_root)/bin/docker/boot_dev, in the docker run ... section:
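Something like this, reconstructed from the tl;dr above (the surrounding flags are elided, and the exact placement is my assumption):

docker run ... \
    -e DISCOURSE_DEV_HOSTS=your_dev_hostname \
    --label=traefik.http.services.discourse-dev.loadbalancer.server.port=9292 \
    ...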
Unfortunately, when I followed the instructions to set up the dev instance, I was greeted with an ‘Unable to connect’ screen (ERR_FAILED). Even using telnet from the same host failed:
bertieb@ubunutu-vm:~/discourse$ telnet 127.0.0.1 9292
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Dang. I tried this across fresh Arch and Ubuntu Server (19.10 + 18.04.3 LTS) installs and got the same thing.
Installing the non-Docker version worked but only for localhost; then a comment on that guide’s topic pointed me at a recent change to interface binding. Checking out the commit before that change let me connect from other hosts in both the Docker and non-Docker versions.
As of 2019-11-04, a later commit sorted this issue and added a specific flag (-b) for permitting connections from other hosts.
We’ve been finding a way to automate YouTube uploads, using tesseract to OCR frames from Deep Rock Galactic videos and extract metadata for each video’s YouTube listing.
We got to the stage where we have good, useful JSON output that our automated upload tool can work on. Job done? Well, yes: I could point the tool at it and let it work on that, but it would take quite a while. You see, to give a broad test base and plenty of ‘live-fire’ ammunition, I let a backlog of a month’s videos build up.
Automating Metadata Updates
Why is that an issue for an automated tool? The YouTube API by default permits 10 000 units of access per day, and uploading a video costs 1600 units. That limits us to six videos per day at most, or five once the costs of other API calls are factored in. So I’d rather upload the videos in the background via the web interface, and let our automated tool set the metadata.
For that we need the videoIds reported by the API. My tool of choice to obtain those was shoogle. I wrapped it in a python script to get the playlistId of the uploads playlist, then grabbed the videoIds of the 100 latest videos, got the fileDetails of those to get the uploaded fileName… and matched that list against the filenames in the JSON entries.
So far so good.
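The matching step itself is simple once the API responses are in hand. A sketch, assuming api_videos holds the parsed items from a videos.list call with part=fileDetails:

def match_video_ids(entries, api_videos):
    """Attach YouTube videoIds to our JSON entries by uploaded filename."""
    by_filename = {v["fileDetails"]["fileName"]: v["id"] for v in api_videos}
    for entry in entries:
        entry["videoId"] = by_filename.get(entry["filename"])
    return entries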
But one of the personal touches that I like to do, and that will likely not be automated away, is picking a frame from the video for the thumbnail. So I need a way to quickly go through the videos, find a frame that would make a good thumbnail, and add its time as a field under thumb for the correct video entry. I’ve used xdotool in the past to speed up some of the more repetitive parts of data entry (if you’ve used AutoHotKey on Windows, it’s similar in some ways).
I threw together a quick script to switch to the terminal with vim, go to the entry for the video currently playing in VLC (VLC can expose a JSON interface with current video metadata; the fields I’m interested in are the filename and the current seek position), create a thumb → time entry with the current time, and then switch back to VLC. That script can be assigned a key combo in Openbox, so the process is: find frame, hit hotkey, find frame in next video, hotkey, repeat.
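For the curious, grabbing those two values from VLC looks something like this (assuming the HTTP interface is enabled on the default port with a password set; VLC expects a blank username):

import requests

def current_vlc_position(password):
    """Return (filename, seek position in seconds) for the playing video."""
    status = requests.get("http://localhost:8080/requests/status.json",
                          auth=("", password)).json()
    filename = status["information"]["category"]["meta"]["filename"]
    return filename, status["time"]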
Though the process is streamlined, finding a good frame in 47 videos isn’t the quickest! But the final result is worth it:
We have videos with full metadata, thumbnail and scheduled date/time set.
I included a video that failed OCR due to a missing loading screen (I hit record too late). There’s a handful of those; I found five while doing the thumbnails. I could do a bit of further work and get partial output from the loading/ending screen alone; or I could bite the bullet and do those ones manually, using it as a reminder to hit the record button at the right time!
We’ve been using python and tesseract to OCR frames from video footage of Deep Rock Galactic to extract metadata which we can use for putting the videos on YouTube.
Nearly all of the elements are captured; there are just the mutators left to capture: warnings and anomalies. These appear in text form on the starting screen, on either side of the mission block:
Here we have a Cave Leech Cluster and a Rich Atmosphere.
Since the text of these mutators comes from a known list (ten or fewer entries for each), we can detect them using a wide box, then hard-cast the result to whichever known entry it has the smallest Levenshtein distance to.
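A sketch of that snapping (the candidate list here is illustrative, not exhaustive):

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        row = [i]
        for j, cb in enumerate(b, start=1):
            row.append(min(prev[j] + 1,                # deletion
                           row[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = row
    return prev[-1]

WARNINGS = ["Cave Leech Cluster", "Low Oxygen", "Mactera Plague"]

def snap_to_known(ocr_text, candidates=WARNINGS):
    # Hard-cast OCR output to the nearest known mutator
    return min(candidates, key=lambda c: levenshtein(ocr_text, c))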
The loading/ending frame detection works well for most videos, but it suffers on the odd one or two. It’s best to ignore frames which are completely or mostly dark (ie either transition or fade-in), and ones that are very bright (eg a light flash), as both hurt contrast and therefore OCR.
Using ImageStat from PIL we can grab the frame mean (averaged across RGB values), then normalise it to add to our frame scoring function in the detection routine.
We want to normalise between 0 and 1, which is easy to do if you want to scale linearly between 0 and 255 (the RGB max value): just divide the mean by 255. But we don’t want that. Manually looking at a few good, contrasty frames, it seemed that a value of 75 was best; even by 150 the frame was looking quite washed out. So we want a score of 0 at mean pixel values of 0 and 150, and a score of 1 at a mean pixel value of 75:
# Tie break score graph should look something like:
#
#   |    /\
#   |   /  \
#   |  /    \
#   |_/      \_  (x)
#     0  75  150
#
# For sake of argument using 75 as goldilocks value
# ie not too dark, not too bright
75 is thus our ‘goldilocks’ value: not too dark, not too light. So our tiebreak value is:
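In code, that works out as (assuming an RGB frame loaded via PIL):

from PIL import ImageStat

GOLDILOCKS = 75  # not too dark, not too bright

def brightness_tiebreak(frame):
    """Score mean brightness: 1.0 at 75, falling to 0.0 at 0 and 150."""
    mean = sum(ImageStat.Stat(frame).mean) / 3  # average of the RGB bands
    return max(0.0, 1.0 - abs(mean - GOLDILOCKS) / GOLDILOCKS)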
Since we’ve gotten detection of the various elements to where we want it, we can start generating output. Our automated YT uploader works with JSON, and looks for the following fields: filename, title, description, tags, playlists, game, thumb (→ time, title, additional), and scheduled.
Thumb time and additional we can safely ignore. Title is easy, as I use mission_type: mission_name. All of my Deep Rock Galactic uploads go into the one playlist. Tags are a bunch of things like hazard level, minerals, biome and some other common-to-all ones like “Deep Rock Galactic” (for game auto detection). The fun ones are description and scheduled.
For the description, I took a bit of a “mad libs” style approach: use the various bits and pieces we’ve captured with a variety of linking verbs and phrases to give non-repetitive output. This mostly comes down to writing the phrases, sticking them in a bunch of lists and using random.choice() to pick one of them.
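A flavour of that approach, with made-up phrase lists much shorter than the real ones:

import random

OPENINGS = ["{players} get their orders from Mission Control",
            "{players} gear up and head out"]
DROPS = ["and get dropped in to the {biome}",
         "and take the drop pod down to the {biome}"]

def make_description(players, biome):
    template = f"{random.choice(OPENINGS)} {random.choice(DROPS)}"
    return template.format(players=players, biome=biome)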
For obvious reasons, I don’t want to publish fifty-odd videos at once, rather spread them out over a period. I publish a couple of DRG videos on a Monday, Wednesday, Friday and at the weekend. To do this in python, I decided to use a generator, and call next() on it every time we need to populate the scheduled field. The function itself is fairly simple: if the time of scheduled_date is the earlier of the times at which I publish, go to the later one and return the full date; if it’s at the later time, increment by two days (if Monday/Wednesday), or one day and set the time to the earlier one.
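A sketch of that generator, assuming publish times of 18:00 and 21:00 (the real times may differ):

from datetime import datetime, timedelta

EARLY, LATE = (18, 0), (21, 0)  # assumed publish times

def schedule_dates(start):
    """Yield successive publish datetimes from `start` onwards."""
    current = start
    while True:
        yield current
        if (current.hour, current.minute) == EARLY:
            current = current.replace(hour=LATE[0], minute=LATE[1])
        else:
            # Monday (0) and Wednesday (2) skip a day; otherwise next day
            days = 2 if current.weekday() in (0, 2) else 1
            current = (current + timedelta(days=days)).replace(
                hour=EARLY[0], minute=EARLY[1])

scheduler = schedule_dates(datetime(2019, 11, 18, 18, 0))
# each video then does: entry["scheduled"] = next(scheduler)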
We run this through json.dumps() and we have output! For example:
"filename": "2019-10-17 19-41-38.mkv",
"title": "Elimination: Illuminated Pocket",
"description": "BertieB, Costello and graham get their orders from Mission Control and get dropped in to the Fungus Bogs to take on the mighty Dreadnoughts in Illuminated Pocket (Elimination)\n\nRecorded on 2019-10-17",
"Deep Rock Galactic",
"playlists": "Deep Rock Galactic",
"title": "Pocket Elimination"
"scheduled": "2019-11-18 18:00"