General discussion about increasing the prevalence of SUBTITLES / CAPTIONS in fanedits

VarsityEditor

Overview:

I think a good long-term goal is to steer towards more edits including subtitles. Optional subtitles/captions are near-universal on commercial releases, but they seem to be mostly ignored for fanedits, treated as a luxury item which usually isn’t worth bothering about.
The purpose of this thread is to try to gather experiences, tips, ideas, advice, dos & don’ts, pitfalls, and best practice together, with the general aim of eventually being able to produce a “how-to” guide.​
I have no expectations that five years from now, 100% of edits will have captions included, or that FE.org will mandate inclusion of accurate subs as part of the requirements of a completed edit — I just think that it’s a good direction to move towards, and that the first step is to start to centralise knowledge to establish and spread best practice. Maybe in a year or two there will be a pretty straightforward guide available that new (or old) editors can be pointed towards to streamline the process of including subtitles. Maybe in five years, 25% of new edits on the site will have captions rather than 10% (I’m just making up numbers but you get what I mean).​


Terminology:

As I understand it, captions typically contain more than just the words that characters are speaking – they also include descriptions of audio ("door slams", "dog howls" etc) which help the viewer get a complete picture of what is happening if they can’t hear, while subtitles generally just give the words spoken, the main purpose being to provide a translation for the viewer, but otherwise assuming that the viewer can hear music and SFX.​
This thread is just meant for general hashing out of ideas about the editing process, and I’m using “captions/subtitles/subs” interchangeably.
I’m also talking about non-hardcoded subs here. As in, the primary viewing experience is without subs, but the viewer has the option to switch on “English Subtitles”.​


My personal view:

When I started editing I had no intention whatsoever of including subs. Fanedits are already a niche interest, and adding subs seemed like a completely unnecessary niche of a niche. Editing is enough of a detail oriented, time-consuming task as it is, and the cost-benefit analysis for adding subs just didn’t seem to make sense.​
Now that I’ve done a lot of editing and am in the process of adding subs to everything, I wish that I had just included them from the start! It would have been so quick and easy, and is so handy while editing.​
While we typically think of subs as something that benefits the viewer but offers little to the editor, I have found editing with captions in place to be a massive bonus. It’s ten times easier to scrub through the timeline looking for a particular moment in a scene when the words being spoken appear in the viewer. While I’ve found it to be a massive positive difference for the editing process, the main motivation for doing this is to make the final edit a more professional, complete product, which gives the viewer the best experience, just as we care about having smooth scene transitions or even audio levels.




Here’s an overview of the various areas of relevance. I’ve added a few notes, but this is mostly just meant as a starting point.

I have my personal experience with adding subs to edits, but of course it’s limited to the platform/software which I use. This is something which will vary for others. I’d be grateful for input from anyone who has experience in doing this.


A— USE CASES:

A1. Including subs from the beginning of a new editing project.
For me, this is the ideal situation. If the standard captions are added to the unedited movie(s) before you start chopping it up, then you can basically just do all editing as normal from that point onward with effectively zero added work, and have perfect captions at the end.​
A2. Adding subs to a current/in progress editing project
This is what I’m currently doing a lot of. It has consisted of downloading available SRTs online to fit the source video, importing them into my NLE project, and moving them around as necessary to fit the edit. Nowhere near as tedious or time consuming as I expected: a one-time operation of roughly 30-60 mins for a 2+ hour project, depending on complexity.
A3. Adding subs to a complete edit/video
This might be done if somebody wants to add subs to an existing video – perhaps something they no longer have the project files for, or even something produced by someone else.​
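As a concrete sketch of the A2/B2 workflow, here's how an external SRT file could be read into plain cues before moving them around in an edit. This is a hypothetical stdlib-only Python helper, not any particular tool's API; representing each cue as a `(start_ms, end_ms, text)` tuple in milliseconds is my own assumed convention.

```python
import re

# SRT timing line: "HH:MM:SS,mmm --> HH:MM:SS,mmm"
TIMING = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3})\s*-->\s*(\d{2}):(\d{2}):(\d{2}),(\d{3})"
)

def parse_srt(text: str):
    """Parse SRT text into a list of (start_ms, end_ms, text) cues."""
    def ms(h, m, s, frac):
        return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(frac)

    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        if len(lines) < 2:
            continue
        match = TIMING.match(lines[1])  # lines[0] is the cue index
        if not match:
            continue
        g = match.groups()
        cues.append((ms(*g[:4]), ms(*g[4:]), "\n".join(lines[2:])))
    return cues
```

Once the cues are plain timestamped tuples, a scene's worth of them can be shifted as a block to line up with the edited timeline.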



B— SOURCES OF THE CAPTIONS:

B1. Sourced from the physical media rip
I wish I had just done this when I made my video rips in the first place, but no bother; just use option B2.
B2. Sourcing external SRT text files from websites
This is what I’m doing, and where available, it’s much better than the following options.
B3. Manually
The old fashioned way. Ears listening and fingers typing. Insanely time-consuming, and an option in name alone!​
B4. Using AI speech recognition tools to automatically create captions
I started off trying to use this approach. In my experience, while it takes 99% of the work out of option 3, the results are nowhere near usable. Many captions will be mistimed, and many have mistakes (wrong words, spelling etc) which need to be manually corrected. Even when only 5% need correcting, it's still very time consuming compared to copy-pasting available SRT files.​



C— NLE SOFTWARE:

C1. Final Cut Pro X (Mac)
This is what I use. Adding hundreds of captions to the timeline will slow things down and have a noticeable performance hit, but that’s only when the captions are shown in the timeline. When switching off their visibility (in the timeline, not on the screen), there is no loss in performance.​
C2. DaVinci Resolve (free/pro)
I don’t normally use this, but dabbled around testing the AI captioning capabilities (pro version only). They are pretty good, but still require lots of manual correction.​
C3. Sony Vegas
I don’t know anything about this program, I just include it here as I’ve noticed many people mentioning it on these forums.​
C4. Adobe Premiere/Others…



D— DEDICATED CAPTIONING SOFTWARE:

D1. Subtitle Edit
I haven’t used this, but I have seen others here talking about it; apparently it does a good job when you need to manually edit the subs.
D2. Others…
There are many tools/plugins/apps around now — mainly marketed toward the TikTok/Instagram influencer sphere — to create real-time captions from an audio/video file. I’ve tried a couple, and found them worse than the DaVinci Resolve version. Maybe good for a 60-second TikTok, but not fit for purpose for an hours-long movie (see point B4).



E— SHARING CAPTIONS:

E1. Hardcoding/burned-in to the video
Obviously not recommended as a general approach (except for non-English/non-primary language dialogue)​
E2. Packaged with video in container format (eg MKV instead of MP4)
The subs are contained within the single video file, and can be switched on in the user's video player.​
E3. Separate file
Eg .SRT text file (other formats available). A small text file which can be shared along with the video. Alternatively, they can be easily uploaded and hosted on the many websites for hosting/cataloguing subtitle files (same sites as used for source in B2), meaning they are accessible to viewers.​
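To illustrate E3, here's a minimal sketch of writing cues back out as a standalone .srt file. The `(start_ms, end_ms, text)` millisecond-tuple representation is my own assumption, not a standard API; the SRT output shape (1-based indices, `-->` timing lines, blank-line separators) follows the common SubRip convention.

```python
def cues_to_srt(cues):
    """Render (start_ms, end_ms, text) cues as SRT text.
    Cue indices are 1-based; entries are separated by blank lines."""
    def ts(ms):
        h, rem = divmod(ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    blocks = [
        f"{i}\n{ts(start)} --> {ts(end)}\n{text}"
        for i, (start, end, text) in enumerate(cues, 1)
    ]
    return "\n\n".join(blocks) + "\n"
```

The resulting text file can be shared alongside the video, uploaded to a subtitle-hosting site, or muxed into an MKV later without re-encoding anything.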



Any input that anyone can give on any of these areas would be gratefully received! I will add more as time and experience allows...
 
The only subtitles I do are in my own language, which is Polish. Sometimes I also include a Polish audio track if it can be done without much trouble (as in the Witcher, JL Remixed or Last Duel edits).
Doing subtitles is a lot of work, so there is zero chance I would spend a lot of time on something I don't need (such as English subtitles or captions, which are useless to me); I am barely able to find time for doing fanedits as it is.
 
I have had occasional requests for subs before, so have taken to doing them for my more recently started edits: while this does add an extra bit of work, it forces me at least to pay close attention when reviewing my work, so it's not entirely unhelpful! It also leads to wonderment at the mistakes they contain: one managed to hear the word "Buxton" instead of "Folkestone"!

On a couple of earlier projects with non-English audio I had them burned into the master video file, as I didn't then understand how to manipulate them separately in something like Subtitle Edit, but that had its own problems! I think I have it sorted now; the main issue is trying to optimise the character recognition to minimise mistakes.

Find it much easier to provide independent .srt files, which the user can then add with MKVToolNix if they want. Have had occasions where I uploaded a 10 GB video and realised that there was a mistake in the subs, much more of a hassle if the files are tied together!
 
I always do subtitles for my edits now, and thanks to @CAL0901 , nearly all of my older edits have subtitles too. I find them really valuable, because I have a family member with ear trouble. I also just like the option for if I struggle to hear the dialogue due to a poor film mix, or older audio.

Vegas is not good for editing subtitles as you go though. I use subtitle edit and tweak the original sub file while I proof my final final file. Hit two birds with one stone.
 
A2. Adding subs to a current/in progress editing project
This is what I’m currently doing a lot of. It has consisted of downloading available SRTs online to fit the source video, importing them to my NLE project, and moving them around as necessary to fit the edit. Nowhere near as tedious or time consuming as I expected, a one-time operation of roughly 30-60 mins for a 2+hour project, depending on complexity.

Just speaking for myself, this approach is torture even for an edit with 20 cuts, because you have to sit there and guess at the alignment. I've mentioned it before but if you use Premiere I wrote a script that will do most of the work for you, and allow you to also do foreign subs without understanding the language if they have matching timecodes. It's a lifesaver and I use it sometimes for my own subtitles if I start a project and forget to import subs before I start cutting - because I can generate the file and then pull it into the edited project in the NLE and everything just lines up, I don't have to run it every time.

The script is fairly straightforward, it just does the math for you, calculating based on the XML output where the subs should be, but it warns you if it's unsure about partially overlapping subs, giving you timestamps so you can delete them if these questionable edge cases are unwanted. And based on the subs you delete you'd need to do the same with the matching timecodes in other languages that aren't your native tongue.

The only other bit of difficulty is that you have to run the XML through my other clean-up script that I use to generate XML files for release, because the XML Premiere outputs is actually incredibly buggy with miscalculated math (at some framerates), etc and I was lazy and didn't want to copy all that code into the subtitle script. So you dump it in the other script and it'll process the XML into a clean 24fps file for the subtitle script. If anyone needs help setting it up just send me a message, it should be fairly straightforward but I'm happy to help if you have questions.
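The core arithmetic described here — mapping source-timed subs onto an edit built from kept segments, and flagging partially overlapping cues for human review — can be sketched in a few lines of Python. This is not @unfair's actual script, just a minimal illustration under the assumption that the edit is described as a list of kept source spans in edit order, with all times in milliseconds.

```python
def retime_cues(cues, segments):
    """Map source-timed cues onto an edit built from kept segments.

    cues:     [(start_ms, end_ms, text), ...] timed against the source.
    segments: [(src_in_ms, src_out_ms), ...] kept spans, in edit order.
    Returns (retimed, flagged): flagged holds cues that only partially
    overlap a kept segment and so need a human decision.
    """
    retimed, flagged = [], []
    offset = 0  # where the current segment starts in the edit timeline
    for seg_in, seg_out in segments:
        for start, end, text in cues:
            if end <= seg_in or start >= seg_out:
                continue  # cue lies entirely outside this segment
            if start < seg_in or end > seg_out:
                flagged.append((start, end, text))  # straddles a cut
                continue
            shift = offset - seg_in
            retimed.append((start + shift, end + shift, text))
        offset += seg_out - seg_in
    return retimed, flagged
```

For example, a cue at 8.0-9.0s in a source whose 5.0-10.0s span is the edit's second kept segment (after a 3-second first segment) lands at 6.0-7.0s in the edit, while a cue straddling the 5.0s cut point gets flagged rather than silently retimed.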
 
Just speaking for myself, this approach is torture even for an edit with 20 cuts, because you have to sit there and guess at the alignment.
Thanks, this is the kind of info I was looking for. I'm not sure what you mean when you say "guess at the alignment".

For me, I'm importing the SRT into my project where each line of text appears as an individual floating object on the timeline. Then you can select a scene's worth of the floating captions and just drag it over to the relevant scene. Alignment is most easily done by just lining the caption object up with the peaks in the audio waveform. I expected it to be a nightmare doing it this way, but it's fairly quick and trouble free (a single project being over two hours and composed of parts of between 4-10 TV episodes, each with its own SRT).

Is caption placement handled differently to this in Premiere?
 
It's the same process, but aligning it is a guess. You're trying to line it up with speech, and the timing isn't always the same for every subtitle, even throughout the same video. When I've done it manually I generally do an early part of speech first, and then find a dense section of back to back subtitles to see if those also align properly and adjust if they don't. But at the end of the day you could still be a few frames off from the original.

I usually do it by previewing the playback, to see if the timing seems natural, since you need it to appear a fraction of a second before they start speaking, but not so long beforehand that the eye has time to read it first. It's a delicate balance that I wouldn't be able to judge via waveform. Easier just to go with the timing the creators already tweaked if I can, which is why I made the script: it takes the subjectivity out of the alignment.
 
I always do subtitles for my edits now, and thanks to @CAL0901 , nearly all of my older edits have subtitles too. I find them really valuable, because I have a family member with ear trouble. I also just like the option for if I struggle to hear the dialogue due to a poor film mix, or older audio.

Vegas is not good for editing subtitles as you go though. I use subtitle edit and tweak the original sub file while I proof my final final file. Hit two birds with one stone.
Thank you for the mention.

As @unfair stated, there are more ways than one to create new/modify existing subtitles. His way sounds impressive. I wonder if that is how M4 made his for his Hobbit edit, but I digress.

For me, it's a hobby, something to help me pass time constructively. While I am behind on a couple of edits -- behind according to my "schedule" -- I am always happy and willing to be of help. My only caveat is I won't subtitle profanity; the occasional "damn" is okay, but not much else. [It's personal].

As for my format, suffice it to say @CatBus of Project Threepio is my mentor and I generally follow his formatting style and advice, with the exception of using a hybridized version of the SDH format as a norm, unless otherwise requested.

In many cases, I can provide a Spanish subtitle. I state many cases, because I prefer Castilian, rather than the so-called American/Latin American dialect, which oft times can make for considerable revision. But I'm not above providing the latter, if requested.

Admittedly, for the purpose of saving time, I will pull a subtitle (or a few) from opensubtitles.org, use it as my framework, and edit accordingly.

Please, feel free to PM me.
 
I'm guilty of being very lazy with subtitles.
I use them myself, so when I started fanediting, I said I'd make subs for all my edits, but only one has complete subs so far.

I usually use SubtitleEdit with WhisperAI to make a basic file and then go through it adjusting timings and correcting mistakes.
WhisperAI is very impressive, but it still requires almost every line to be adjusted, and then I'd inevitably make a change to the edit and have to change the timings on every line afterwards which got boring very quickly so I kind of gave up.

Going back and doing all of my subtitles is definitely on my to-do list, but it's a bit of a chore. On the bright side, you can use Handbrake to encode subtitles into a video file (either burned on or optional) so you don't have to worry about a bunch of files floating around. It's very handy.
 
then I'd inevitably make a change to the edit and have to change the timings on every line afterwards which got boring very quickly so I kind of gave up.

If you only made one change to the edit, you should only have to push one button to make the rest of your subs conform to how that affected the sub file.
 
If you only made one change to the edit, you should only have to push one button to make the rest of your subs conform to how that affected the sub file.

Depends on the NLE - with Premiere you'd be doing a ripple delete or using the B hotkey to add space/footage, which is how I work anyway, but if someone forgot they'd be out of sync and need manual adjustment.
 
I usually use SubtitleEdit with WhisperAI to make a basic file and then go through it adjusting timings and correcting mistakes.
WhisperAI is very impressive, but it still requires almost every line to be adjusted...
Yeah this is what I found. While it gets 95% of the dialogue correct, correcting the rest is still a massive chore. That's why I switched to just importing the original subs.
...then I'd inevitably make a change to the edit and have to change the timings on every line afterwards which got boring very quickly so I kind of gave up.
Ah I see. So this is the big difference between having the captions be part of the timeline in the NLE and doing it as an external task with SubtitleEdit. When you make a change, the captions just stick with the portion of the vid they are assigned to, so you never have this problem.

On the bright side, you can use Handbrake to encode subtitles into a video file (either burned on or optional) so you don't have to worry about a bunch of files floating around. It's very handy.
Yeah, it's one option, but burned-in subs are certainly not desirable as the default for most viewers.
 
Depends on the NLE - with Premiere you'd be doing a ripple delete or using the B hotkey to add space/footage, which is how I work anyway, but if someone forgot they'd be out of sync and need manual adjustment.

They said they were using subtitle edit so my answer is based on that.
 
Depends on the NLE - with Premiere you'd be doing a ripple delete or using the B hotkey to add space/footage, which is how I work anyway, but if someone forgot they'd be out of sync and need manual adjustment.
These are the kind of differences between software that I'm hoping to find out more about in this thread. In FCP, everything is a ripple edit by default, so you'd have to go out of your way to de-sync the subs once they're in place. Overall, I'd hope people come to find that the earlier you put subs into your NLE, the easier it becomes.
 
If you only made one change to the edit, you should only have to push one button to make the rest of your subs conform to how that affected the sub file.
I actually didn't know until recently that you can put subtitle files into an NLE. I was changing every line manually in SubtitleEdit. 😅
 
Is there a way to shift every line over in SE?

So whenever you change a subtitle, while it's highlighted you click "start set and offset rest". Every subtitle after that point will move with that subtitle. So once you've done that, every subtitle will be in time until the next point at which you've made a cut.
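The behaviour described here — re-anchoring one cue and rippling the same offset through every cue after it — can be sketched in a few lines of Python. This is an illustration of the idea, not Subtitle Edit's code; cues are assumed to be `(start_ms, end_ms, text)` tuples.

```python
def set_start_and_offset_rest(cues, idx, new_start_ms):
    """Move cue idx so it starts at new_start_ms, and shift it and
    every later cue by the same delta. Mirrors the 'set start and
    offset the rest' idea: one correction per cut point, not per line."""
    delta = new_start_ms - cues[idx][0]
    return [
        (s + delta, e + delta, t) if i >= idx else (s, e, t)
        for i, (s, e, t) in enumerate(cues)
    ]
```

So with one correction per cut in the edit, the whole tail of the file stays in sync, which is why the workload scales with the number of cuts rather than the number of subtitle lines.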

I might have to do a short tutorial or something as this keeps coming up and I wonder if people think subtitle creation is much more work than it needs to be. You absolutely do not have to create and/or edit every individual sub. The amount of work depends on the number of cuts in your edit, but you can even playback the edit at 2x speed while you're working and just skip ahead to each time there's a piece of dialogue.

I actually didn't know until recently that you can put subtitle files into an NLE. I was changing every line manually in SubtitleEdit

Not all NLEs allow for this. Vegas doesn't.
 
So whenever you change a subtitle, while it's highlighted you click "start set and offset rest". Every subtitle after that point will move with that subtitle. So once you've done that, every subtitle will be in time until the next point at which you've made a cut.

Synchronization > Adjust all times (show earlier/later)...

From there you can move every line, selected lines, or the selected line onwards.

So many hours of work...
 