
Captain Khajiit's Basic Guide to Decoding Video and Audio

TV's Frink

The resize filter in vdub per Throw's guide.
 

Captain Khajiit

Use Avisynth to resize if you want to convert to 720p. There are a number of built-in resizing filters.

Code:
Spline36Resize(1280,720)

I actually use ResampleHQ() to downscale. It is considerably slower, but the results are better. You will need to put the .dll in your Avisynth plugins directory, as usual.

Code:
ResampleHQ(1280,720)
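
Putting either of these into a complete script looks something like the following. This is only a rough sketch: the source filter and filename are placeholders for whatever fits your workflow.

Code:
# rough sketch only; swap in your own source filter and filename
AviSource("C:\my_1080p_source.avi")
Spline36Resize(1280,720)
# or, if you have the plugin installed:
# ResampleHQ(1280,720)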

Then select fast recompress and render your AVI.
 

TV's Frink

"Error reading source frame 0: Avisynth read error: Unknown ResampleHQ internal error."

Here's my current script:

Code:
Loadplugin("C:\Editing Tools\AviSynth 2.6\plugins\resamplehq-x86.dll")
DirectShowSource("C:APIMoM.grf", fps=23.976, audio=false)
Assumefps(24000,1001)
ConverttoRGB(matrix="Rec709")
ResampleHQ(1280,720)
 

theslime

Captain Khajiit said:
3c) Decoding progressive AVC with ffmpegsource2

Make sure that you have demuxed your video to MKV (not an elementary stream) with Eac3to. Download the latest version of ffms2. Inside the folder, you will find ffms2.dll and FFMS2.avs. Copy both to your Avisynth plugins directory. Make sure that you have the Haali Media Splitter installed. Use the following script, adjusting it to fit your directory structure.

Loadplugin("C:\Program Files (x86)\Haali\MatroskaSplitter\avss.dll")
FFmpegSource2("wherever\whatever.mkv")
Assumefps(24000,1001) # assumes a 23.976fps source

FFMS2 is frame accurate, but it occasionally gets the frame-rate wrong. The last line corrects it. It assumes a 23.976fps source, which your source probably is if it is progressive AVC. You might have a rare, pure-24fps BD, in which case use Assumefps(24,1).

Open your script in Virtualdub to preview it.

N.B. FFMS2 works by making an index. Sometimes this takes a while, so be patient.

I'm taking a stab at editing again, and I've used this guide religiously so far. Thanks so much!

However, this particular step really gave me some headaches, since the video was suddenly six minutes longer than the source file (while the sound was correct) and thus wildly out of sync even a couple of minutes into the film. After a lot of head-scratching and trial and error, it seems the solution was, bizarrely, to remove the AssumeFPS line, even though MediaInfo says the source file is indeed 23.976fps, so the line should have been correct and harmless. Removing it made the script frame-accurate, the right length, and in sync with the audio. Could it possibly be a bug in the FFMS2 code?

Also, I used FFvideoSource() instead of FFmpegSource2() since that's what their documentation says you should use for video-only (and considering my sound is six mono wavs, I went with that). More here.
 

Captain Khajiit

Thanks for your post, and I'm sorry about your headache.

theslime said:
Could it possibly be a bug in the FFMS2 code?

I don't see how, because AssumeFPS() is internal to AviSynth and has nothing to do with FFMS2. If removing it solved your problem, then logically the problem should lie with AviSynth. Your observation does puzzle me, because that line should have no effect on a video that is already 23.976fps.
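
One quick way to see what frame rate FFMS2 is actually reporting (before any AssumeFPS() call) is to preview the clip with the built-in Info() filter. A sketch, with the path as a placeholder:

Code:
FFmpegSource2("wherever\whatever.mkv")
Info() # overlays frame rate, frame count, colorspace, etc.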

theslime said:
Also, I used FFvideoSource() instead of FFmpegSource2() since that's what their documentation says you should use for video-only

FFmpegSource2() is essentially a wrapper for FFVideoSource() and FFAudioSource(), so I don't see why it would make any difference, but thanks for the link. I wrote the guide for others to use because I use DGIndexNV almost all the time and very rarely fall back on other methods. My reasoning was that if people did have trouble with their audio and had to use FFAudioSource(), it would be easier if they were already using FFMS2 and simply had to point it to the audio track. But I'll have a look at that section again and consider revising the guide. :)
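
For reference, the long-hand route that FFmpegSource2() wraps looks roughly like this (a sketch with a placeholder path); using FFVideoSource() alone simply skips the audio half.

Code:
vid=FFVideoSource("wherever\whatever.mkv")
aud=FFAudioSource("wherever\whatever.mkv")
AudioDub(vid,aud)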
 

theslime

Thanks again for your wonderful guide, I can't praise it enough!

Since audio is also a use case for your tutorial, you should keep the guide as is when it comes to FFmpegSource2. One function is easier to remember than two. :)

On the fps front, I blamed FFMS2 since it's been known to be slightly wonky compared to the DGIndex versions, but yeah, I see what you mean that Avisynth itself should be the culprit. But it's really weird, isn't it? Why would it even matter when the assumed fps matches the original fps? And yet it does for me. Googling didn't tell me much either, so it doesn't seem to be a common bug. I'd be interested in trying other 23.976fps files later on to see if I can replicate the issue. Oh well. It works now.

Now the fun begins - actually editing a movie with Avisynth. Yes, I'm a masochist. ;)
 

Captain Khajiit

theslime said:
Now the fun begins - actually editing a movie with Avisynth. Yes, I'm a masochist. ;)

As am I! I've made a few custom versions of films in which all the editing, video and audio, has been performed in AviSynth. In the process, I came across a few functions that helped me greatly. I'll have a look through my files and shoot you a PM detailing the functions and my usual method in the next day or so.
 

theslime

Cool! I'm really interested in stealing more of your workflow. :)

Also, more specifically, I'm really interested in good Avisynth colour correction tutorials. I've found a couple, but no good ones. Or is this one area where a visual tool like VirtualDub is better? I'm currently using Avisynth only, with a combination of AvsP and MPC for previewing. I remember having a little bit of success with Avanti some years ago, but I haven't tried it since.
 

theslime

I'm experimenting now with colour grading an over-yellow and light (and slightly desaturated) clip to match the rest of the footage, and I found that a combination of Channel Mixer and Tweak did the trick. There are probably simpler solutions, but it worked for me.

Code:
FFvideoSource("C:\VIDEO\video.mkv").ConvertToRGB24().ChannelMixer(100,-9,-14,5,86,-8,7,0,114).ConvertToYV12().Tweak(0,1.1,-10,1)

This dials the yellow down a notch and then slightly ups the saturation and lowers the brightness. The only wonky part of it is that the plugins work in different colour spaces.

I'll stop derailing the thread now. :)
 

Captain Khajiit

theslime said:
The only wonky part of it is that the plugins work in different colour spaces.

That's all right as long as you avoid multiple conversions by doing all the work that needs to be done in RGB in one go, which you have done. If your video is HD, you need to specify the correct matrix when converting to and from RGB, but I'm sure you know this. (The FAQ has the details.)
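
For an HD source, that would mean something like this (a sketch of your one-liner with the matrix specified on both conversions):

Code:
FFvideoSource("C:\VIDEO\video.mkv") \
.ConvertToRGB24(matrix="Rec709") \
.ChannelMixer(100,-9,-14,5,86,-8,7,0,114) \
.ConvertToYV12(matrix="Rec709") \
.Tweak(0,1.1,-10,1)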

You might consider merging the chroma of your corrected clip back into the original.

Code:
vid=FFvideoSource("C:\VIDEO\video.mkv")

cc=vid \
.ConvertToRGB24() \
.ChannelMixer(100,-9,-14,5,86,-8,7,0,114) \
.ConvertToYV12() \
.Tweak(0,1.1,-10,1)

last=MergeChroma(vid,cc)

return last

The result can look subtly different from the corrected clip, but I find that keeping the luma of the original clip intact outweighs this.

EDIT: I notice that you adjusted the brightness with Tweak(), so the result of MergeChroma() would certainly look different. You could get round this by adjusting the brightness of your clip after merging the chroma.
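
In other words, something like this (a sketch, with the brightness taken out of the Tweak() on cc and applied after the merge instead):

Code:
vid=FFvideoSource("C:\VIDEO\video.mkv")

cc=vid \
.ConvertToRGB24() \
.ChannelMixer(100,-9,-14,5,86,-8,7,0,114) \
.ConvertToYV12() \
.Tweak(0,1.1,0,1) # saturation only; the brightness is applied below

last=MergeChroma(vid,cc) \
.Tweak(bright=-10) # adjust the brightness after the merge

return last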
 

Captain Khajiit

Captain Khajiit said:
I'll have a look through my files and shoot you a PM detailing the functions and my usual method in the next day or so.

The PM was getting a bit long, so I'll post it here instead.

I recommend rendering a lossless AVI from which to work: it will make scrubbing through the video quicker. One advantage of using AviSynth is that you can save hard-drive space by making your lossless AVI a low-resolution video. A quick way to do this is to crop the borders and use Reduceby2().

Code:
WhateverSource("my_source_file")
Crop()
Reduceby2()

Keeping it YV12 will result in a smaller file. You can convert to RGB and back in your editing script, which you point to your rendered AVI.

Code:
vid=AviSource()
vid=vid.ConvertToRGB() # if necessary
#all
#my
#editing
vid=vid.ConvertToYV12() # if necessary
return vid

When you have finished, change the first line in your script to point to your source file (as shown below) and send the script to the encoder.

Code:
vid=WhateverSource("my_source_file")
vid=vid.ConvertToRGB() # if necessary
#all
#my
#editing
vid=vid.ConvertToYV12() # if necessary
return vid

To preview, I use VirtualDub. Using ShowFrameNumber() before any calls to Trim() anchors the numbers that you see to the frames of the original video, which helps when your script starts to become complicated. I find that putting each call to a function – especially to Trim() – on its own line makes the script easier to read.

Code:
vid=AviSource() \
.ShowFrameNumber(scroll=true)
videdit= \
Trim(vid,    0,        3000) ++ \
Trim(vid,    4100,        0)
return videdit

Stickboy's RemapFrames comes in useful if you want to replace frames or apply the same set of filters to certain sections of a video.
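
As a taste of what it can do, ReplaceFramesSimple() splices a filtered clip back into the original for only the frames you list. A sketch with made-up frame numbers and a filter chosen purely for illustration (the plugin auto-loads from your plugins directory):

Code:
vid=AviSource("my_lossless.avi")
fixed=vid.Tweak(sat=1.2) # whatever correction those frames need
ReplaceFramesSimple(vid, fixed, mappings="100 [250 300]") # frame 100 and frames 250-300 come from the filtered clip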


Audio

Start by dubbing the audio to the video.

Code:
vid=AviSource()
aud=WavSource()
dub=AudioDub(vid,aud)

I recommend downmixing your audio to stereo and converting it to a 16-bit WAV so you can hear it in VirtualDub and use the waveform monitor (View → Audio display). This can be done in BeHappy or eac3to. If you want surround sound, you can point the script to multi-channel source files when it is finished.
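
If you would rather stay inside AviSynth for a quick working mix, something like the following will do it. This is a rough sketch only: it assumes a single 6-channel WAV in the usual L R C LFE SL SR order, and the mixing factors are just the standard starting point, so BeHappy or eac3to will still give you a more careful downmix. AudioDub() the result onto your video as above.

Code:
a=WavSource("audio_5.1.wav").ConvertAudioToFloat()
fl=GetChannel(a,1)
fr=GetChannel(a,2)
fc=GetChannel(a,3)
sl=GetChannel(a,5)
sr=GetChannel(a,6)
l=MixAudio(fl,fc,1.0,0.707).MixAudio(sl,1.0,0.707)
r=MixAudio(fr,fc,1.0,0.707).MixAudio(sr,1.0,0.707)
MergeChannels(l,r).Normalize().ConvertAudioTo16bit() # Normalize() avoids clipping but scans the whole track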

When it comes to crossfading audio, the built-in functions are limited. Dissolve() gives you a straight linear dissolve, which is rarely satisfactory. Gavino's ParabolicCrossfade() is more useful. Copy the following into Notepad and save it in your "Plugins" directory as ParabolicCrossFade.avsi so it loads automatically.

Code:
function ParabolicCrossFade(clip c1, clip c2, int n, float "factor") {
  factor = Default(factor, 2.0)
  d = Dissolve(c1, c2, n)
  df = Dissolve(c1.FadeOut0(n), c2.FadeIn0(n), n)
  audio = MixAudio(d, df, factor)
  AudioDub(d, audio)
}

Like Dissolve, ParabolicCrossFade() affects the video and the audio, so you can see where the crossfade begins and ends by looking at the video. To prevent clipping, you can change the mixing factor from the default of 2.0 all the way down to 1.0, which gives you a linear crossfade. I usually start with 1.2 and go from there.

If you do not want one of your audio crossfades to be accompanied by a visual dissolve, the trick is to edit the video and audio separately, a bit like having different tracks in an NLE. This way, you can have a straight cut in the video “track” and a dissolve in the audio “track”. To ensure that your video and audio do not go out of sync, use StackVertical(), making sure that the audio “track” is the one on top i.e. the first one specified.


Code:
vid=AviSource() \
.ShowFrameNumber(scroll=true)

aud=WavSource()

dub=AudioDub(vid,aud)

#Audio track

audedit= \
ParabolicCrossFade( \
Trim(dub,    0,    3000), \
Trim(dub,    4000,    0), 100, 1.2 \
)
#100 = no. of frames
#1.2 = the mixing factor

#Video track

videdit= \
Trim(vid,    0,        3000) ++ \
Trim(vid,    4100,        0)

stack=stackvertical(audedit, videdit)

return stack

I use SoundOut() to render to FLAC, especially if I'm editing multi-channel audio.
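
Assuming the SoundOut .dll is in your plugins directory, using it is just a matter of tacking it onto the end of the audio script, e.g.:

Code:
audedit
SoundOut() # brings up its own export dialogue when the script is opened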




Re: Color Correction

The answer to your question is that I would use VirtualDub if I were to adjust the colors of a video manually. It's possible to load VirtualDub plugins in Avisynth, so you can incorporate your work in VirtualDub into your script without having to render clips. (Conversion to RGB is essential here.)

To do proper color correction though, you need a calibrated broadcast monitor. Otherwise, what you see is not what you get. If you are trying to match two different versions of a film, you can sort of avoid this problem by settling for making one version match the other or by making them meet in the middle.

You can also look at the colors on a vectorscope; clrtools, which can be loaded in AviSynth, provides one. Make the following another .avsi.

Code:
# Arguments:
#   mode:  0=Histograms, 1=Show Hot Pixels, 2=Video Channels, 
#          3=Waveform Monitor, 4=Vectorscope
#
#   (input Characteristic args mainly used for luma calcs)
#   NTSC vs PAL
#   stdDef (601) vs hiDef (709)
#   rng255 vs rng219 (Pick rng255=false if you want luma black=0)
#
#   (show channels used in modes 0,2,3)
#   luma,red,green,blue
#
#   hotFixMode: 0=turn hot pixels black, 1=lower intensity, 2=lower saturation
#   showWFMgrid: turn it off to see better.


function VD_clrtools(clip clip, int "mode",
            \ bool "NTSC", bool "stdDef", bool "rng255",
            \ bool "luma", bool "red", bool "green", bool "blue",
            \ int "hotFixMode", bool "showWFMgrid")
{
  LoadVirtualdubPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\clrtools.vdf", "_VD_clrtools")

  return clip._VD_clrtools(default(mode,0),0,0,
            \ default(NTSC,true)?1:0,0, 
            \ default(stdDef,true)?1:0, 
            \ default(rng255,true)?1:0, 
            \ default(luma,true)?1:0, 
            \ default(red,true)?1:0, 
            \ default(green,true)?1:0, 
            \ default(blue,true)?1:0, 
            \ default(hotFixMode,0), 
            \ default(showWFMgrid,true)?1:0)
}

__END__

#Example Usage:

#avisource("my.avi", pixel_type="RGB32")

#Luma Histogram Only
#VD_clrtools(mode=0,red=false,green=false,blue=false) 

#Luma Waveform Monitor
#VD_clrtools(mode=3,red=false,green=false,blue=false) 

#Vectorscope
#VD_clrtools(mode=4)

This will start you off.

Code:
AviSource().VD_clrtools(mode=4)

It returns a 632x632 video showing a vectorscope. If you want to see your video and the scope, you will have to resize and stack accordingly.

Code:
#for a 720x480 NTSC video
vid=AviSource()

scope=vid \
.VD_clrtools(mode=4) \
.BilinearResize(480,480)

last=StackHorizontal(vid,scope)

return last

If you have any interest in using frame interpolation to help with syncing two different transfers of a film, try jagabo's InsertFramesMC(). It uses MVTools.

Code:
function InsertFramesMC(clip Source, int N, int X)
{
  # inserts missing frames using motion interpolation
  # N is the frame number before which the sequence will be inserted
  # X is number of frames to insert
  # the video length is increased by X frames
  # won't work for N=0, N>last
  #
  # e.g. InsertFramesMC(101, 5) would
  # keep the source's frames from 0 to 100
  # create and insert 5 motion interpolated frames (based on source frames 100 and 101)
  # append the source's frames 101 to the end
  # audio is silent during the inserted frames

  start=Source.trim(N-1,-1) # frame before N, used for interpolation starting point
  end=Source.trim(N,-1) # frame at N, used for interpolation ending point
  start+end # join them into a two frame video
  AssumeFPS(1) # temporarily FPS=1 to use mflowfps

  super = MSuper()
  backward_vec = MAnalyse(super, isb = true)
  forward_vec = MAnalyse(super, isb = false)
  MFlowFps(super, backward_vec, forward_vec, blend=false, num=(X+1), den=1)
  Trim(1, X) # trim ends, leaving only the frames for insertion
  AssumeFPS(FrameRate(Source)) # return to source framerate for joining

  Source.trim(0,-N) ++ last ++ Source.trim(N,0) # join, before, inserted, after
}
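
A usage sketch, with made-up frame numbers and assuming mvtools2.dll is in your plugins directory:

Code:
AviSource("my_shorter_transfer.avi")
InsertFramesMC(101, 5) # insert five interpolated frames before frame 101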
 

theslime

Looks like I have some reading to do! :)
Thanks so much for sharing your workflow. It looks massively useful!
 

ssj

anybody else find this thread an awesome reminder of how computer illiterate you are? ;-)
 

Captain Khajiit

You sent me a PM just now with the same message. Don't post it in the thread where it is out of context and likely to confuse other people. I have been helping you with your deleted scenes and have already shown you how to structure the command-line in question and linked you to a post showing you how to set it out.
 

Captain Khajiit

theslime

By the way, with ParabolicCrossFade() and Dissolve(), both audio clips fade into each other at the same rate. If you want one clip to fade out quickly but the other to fade in slowly, or vice versa, you can accomplish this by crossfading the first into a blank clip and the second into that.

Code:
paraboliccrossfade( \
trim(first_clip,        20000,    20165), \
BlankClip(first_clip, length=80, color=$000000), 10, 1.4) \
.paraboliccrossfade( \
trim(second_clip,    20475,    22000), 100, 1.2 \
)
#Dissolve() can be called in the same fashion.

Here, the first clip drops out quickly, but the second one starts to come in before it does so. With a bit of tinkering, you can achieve the kind of crossfade that you find in an NLE.

EDIT: Of course, one can use FadeIn0()/FadeOut0() on one of the two clips to achieve a similar effect, but the advantage of using BlankClip() is that it provides a buffer between the two, allowing you to draw out the transition.
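
For example (a sketch, reusing the made-up frame numbers from above; the 30-frame fade is arbitrary):

Code:
Dissolve( \
trim(first_clip,    20000,    20165).FadeOut0(30), \
trim(second_clip,    20475,    22000), 100 \
)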
 

addiesin

Captain,

Could you possibly assist me with an error? I'm following this guide and, on the last part, I have copied my DVD, demuxed it, and I'm now trying to use DGIndex to decode. However, I got an error after hitting "Start". It says "Too many pictures per GOP (>= 500). DGIndex will terminate."

It's still creating a file, which you can see in the screenshot, but it seems to be empty. The DVD is the American release of The Doctor Who Revisited, if that makes any difference. If you could help point me in the right direction, I would greatly appreciate it.

Screenshot of the error:
 

Captain Khajiit

I suspect a bad rip. If you didn't rip with AnyDVD, try that. (The free trial should be fine.)
 