Understanding Preprocessor Settings
Generally, the type of source footage determines the preprocessor settings. You can modify the settings and preview the results to make the source footage look as good as possible before encoding. Different settings are used depending on the nature and quality of the incoming video. See also: Previewing Preprocessor Clips.
A Preprocessor Profile allows you to adjust the following settings:
Common (Preprocessor)
Figure 6-3 shows the Common settings. Table 6-1 describes the settings.
Figure 6-3 Preprocessor Profile: Common Settings
Table 6-1 Preprocessor Profile: Common Settings and Descriptions
Profile Enabled |
Check the box to enable this profile for job processing. |
Task Mode |
This is a required setting and cannot be changed.
- standard: The Cisco MXE 3500 generates an intermediate uncompressed AVI file as the output of the preprocessing step.
- fast start: The fast start option is only useful when checking the Separate Capture from Preprocess box below. In this case, the Cisco MXE 3500 prefilter runs two passes: first it captures from SDI 'raw' without any filtering, then it preprocesses the capture file according to the preprocessor settings in a second pass. If fast-start is enabled, the second pass will run in fast-start mode.
|
Separate Capture from Preprocess |
Defines whether or not the preprocessing occurs simultaneously with the capture. |
MXF Capture Bit Rate |
The bit rate for the intermediate MXF file produced during Live captures from HD sources when Separate Capture from Preprocess is enabled. |
Audio Passthrough |
Normally, the source audio tracks are down-converted to 16 bits before entering the preprocessor audio pipeline. In Audio Passthrough mode, the original audio is preserved during preprocessing. This may be necessary when encoding into formats with 24-bit or 20-bit audio or when passing through compressed audio tracks (Dolby E, for example). The only audio preprocessing that is still applied in this case is the mapping specified in the Audio Mapping section. |
Video (Preprocessor)
Figure 6-4 shows the Video settings. Table 6-2 describes the settings.
Figure 6-4 Preprocessor Profile: Video Settings
Table 6-2 Preprocessor Profile: Video Settings and Descriptions
CPU Usage |
Determines the resources available for preprocessing.
Optimized for Quality
- Hardware-based captures: The capture card hardware capability and compute-intensive software preprocessing capabilities are used. This results in the highest quality output and is recommended for video-on-demand encoding.
- IP captures: Compute-intensive software preprocessing capabilities are used. This results in the highest quality output.
Optimized for Speed
- Hardware-based captures: The capture card hardware capability and simplified software preprocessing capabilities are used, leaving the maximum amount of resources available for encoding and distribution. This results in the fastest preprocessing and is recommended for Live Webcasting.
- IP captures: Simplified software preprocessing capabilities are used. This results in the fastest preprocessing, but most of the Signal Processing features will not be available. It also greatly improves overall IP capture stability when the source stream is prone to outages and/or missing or corrupted packets. The only supported features in this mode are Graphic Overlays, Video Downscaling, Video Frame Rate Conversion, and Closed Captions Burn-in.
Note This mode cannot be used when the output video dimensions are higher than the source dimensions or when the output frame rate is higher than the source frame rate. When used in combination with H.264 IP streaming, setting the encoded width and height to 0 in the H.264 profile enables the Smart Ingest feature: the output dimensions will match the source dimensions, which allows the same profile to be used for different source dimensions.
|
Field Order |
Specifies which field will be used as the top field during de-interlacing.
- Automatic: The top field will be automatically detected. This is the recommended setting.
- First on Top: The first field will be used as the top field.
- Second on Top: The second field will be used as the top field.
- Frame: Footage does not require de-interlacing.
If you have selected an incorrect field order, it will be evident in the quality of the output. Some lesser-used formats will incorrectly report field order. Also, AVI and other formats may not specify the field order. If setting Field Order to Automatic yields poor results, specify First on Top or Second on Top. |
Single Field |
Specifies the method used to de-interlace interlaced video before it is encoded.
- Single Field Only: Only the top field is used and interpolated to full frame height. Half of the temporal information will be omitted because only information from the first field is used. Recommended for fast-motion video.
- Two Fields Blend: Both fields are blended into a single progressive frame. All temporal information will be maintained. Recommended for slow-moving or stationary video images.
|
Motion Compensation |
This setting is not available on the Cisco MXE 3500. |
Vertical Shift |
The number of horizontal lines the video will be moved. The preprocessor shifts the entire video in the vertical plane by the amount specified. For example, if the video is shifted by five pixels, each frame is moved up five lines and the first five lines are lost. |
Inverse Telecine |
This setting reverses the frame insertion performed by the telecine process when film is converted to video; the redundant inserted frames are removed. The inverse telecine algorithm tracks the 3:2 pull-down cadence even in portions of the media where, due to a lack of motion, the cadence is difficult to detect. The chance of a telecine phase change is 80% at every edit point. The "perfect" mode can be used when the media is known to have an unchanging telecine phase.
Note Inverse Telecine is not compatible with Temporal Smoothing. If Temporal Smoothing is turned on (set greater than 1), Inverse Telecine cannot be used. Requesting both results in a warning message, and Inverse Telecine is disabled.
- Off: Processes video with frames as they are. Telecine frames will be retained, if they are present.
- Adaptive: The Cisco MXE 3500 will try to detect the telecine pattern and recreate the original frames. It constantly analyzes and adjusts to discontinuities (due to an edit, for example) in the telecine pattern. This is the most commonly used mode.
- Perfect 3:2: The Cisco MXE 3500 will analyze the footage and then adhere to a pattern without dynamically adjusting it. This mode should be used on unedited footage that was created using a 3:2 pull-down process.
Note Perfect 3:2 does not work when Audio Drift Compensation is enabled in Audio Preferences. |
Add/Remove VBI |
Note Only use this control when vertical cropping is turned off. This setting helps maintain proper aspect ratios when converting between media types that do not both require a VBI (Vertical Blanking Interval). For example, if a broadcast format is being converted to a web format, the VBI is stripped from the video before the image size is adjusted, preserving the overall aspect ratio of the media.
- Yes: VBI will be stripped from VBI sources and added to non-VBI sources.
- No: No action is taken.
- Auto: If the incoming source contains a VBI and the output media does not, the VBI is removed. If the input media has no VBI and the output is an analog broadcast format, the VBI is added.
You can use this feature to strip or add the VBI so that the aspect ratio is maintained when you go from one format to another. We suggest leaving this feature set to the default setting. Examples:
1. When set to Auto, if the input height is 480 (or 486) and the output height is 512, or if the input height is 576 and the output height is 608, 32 (or 26) blank VBI lines will be added at the end of the preprocessing stage.
2. If the input is 512 and:
– VBI is set to Yes, the 32 top lines will be cropped off (similar to setting the vertical cropping to 32).
– VBI is set to Auto and the output is 480 (or 486), 32 (or 26) lines will be removed before preprocessing.
3. If the input is 608 and:
– VBI is set to Yes, the 32 top lines will be cropped off (similar to setting the vertical cropping to 32).
– VBI is set to Auto and the output is 576, 32 lines will be removed before preprocessing. |
In Point |
Marks the point in time, relative to the beginning of the clip, to start encoding. In points and out points are used when only a section of a larger file will be encoded. In points are marked in hh:mm:ss:mmm, where the last section marks milliseconds. |
Out Point |
Marks the point in time, relative to the beginning of the clip, to stop encoding. Out points are marked in hh:mm:ss:mmm, where the last section marks milliseconds. Note In points and out points are not related to video timecodes. They are measured strictly in time elapsed from the start of the clip. Technically, they are not frame accurate, but allow frame accurate capture because they measure to the millisecond. |
Fade In |
Determines the number of seconds to fade in from black to full brightness at the beginning of the video clip. Values range from 0 to 10 seconds. The Fade In time is applied at the absolute beginning of the preprocessed file, including any bumpers that may be added. The default value is 0. |
Fade Out |
Determines the number of seconds to fade out from full brightness to black at the end of the video clip. Values range from 0 to 10 seconds, with 0 seconds the default. The Fade Out time is applied at the absolute end of the preprocessed file, including any trailers that may be added. |
Telecine (Preprocessor)
Forward telecine converts 24 fps to 30 fps (or 23.98 fps to 29.97 fps) by creating a 2:3 pull-down cadence, in which each group of four film frames is expanded to five video frames (ten fields) by repeating fields. Figure 6-5 shows Forward Telecine settings. Table 6-3 describes the settings.
Figure 6-5 Preprocessor Profile: Forward Telecine Settings
Table 6-3 Preprocessor Profile: Forward Telecine Settings and Descriptions
Enabled |
Turns forward telecine on or off. |
Field Dominance |
Sets the field dominance for the telecine algorithm, which is important because telecine sometimes mixes two input frames to produce an output frame.
- Upper: Upper dominance places the earlier frame on the upper field (the one contributing the uppermost line in the frame). This is the default setting.
- Lower: Lower dominance places the earlier frame on the lower field.
Because encoders independently set the field dominance, you need to ensure that the telecine dominance matches the encoder dominance. The preprocessor does not know the dominance being created by the encoder; in fact, it is possible to have multiple encoders creating conflicting dominances.
|
Cadence |
Sets the cadence to 2:3 or 3:2. The default setting is 2:3. |
Cadence Origin Timecode |
Defines the start of the cadence. |
Crop (Preprocessor)
Crop settings are used to trim unwanted material from the outer edges of the incoming video image. All crop settings are expressed in source video pixels.
Crop settings do not change the frame size of the finished output. Non-uniform crop will result in changes to the aspect ratio of the image in the output file. For film-based input that requires a non-uniform crop, it is important to match the encoder output size to the cropped input size manually to avoid distorting the image.
Figure 6-6 shows Crop settings. Table 6-4 describes the settings.
Figure 6-6 Preprocessor Profile: Crop Settings
Table 6-4 Preprocessor Profile: Crop Settings and Descriptions
Crop Top |
Determines the number of pixels to trim from the top of the incoming video image. |
Crop Left |
Determines the number of pixels to trim from the left side of the incoming video image. |
Crop Right |
Determines the number of pixels to trim from the right side of the incoming video image. |
Crop Bottom |
Determines the number of pixels to trim from the bottom of the incoming video image. |
Bumpers and Trailers (Preprocessor)
Figure 6-7 shows Bumper and Trailer settings. Table 6-5 describes the settings.
Figure 6-7 Preprocessor Profile: Bumper and Trailer Settings
Table 6-5 Preprocessor Profile: Bumpers and Trailers Settings and Descriptions
Bumper File |
Specifies the file to be used as a bumper at the introduction of the encoded clip. Movie files of any Cisco-supported format or still files saved with a .mov file extension can be used as bumpers. |
Trailer File |
Specifies the file to be used as a trailer to follow the encoded clip. Movie files of any Cisco-supported format or still files saved with a .mov file extension can be used as trailers. |
Preprocess Bumper / Trailer |
Specifies whether to apply preprocessing settings to the bumper and/or trailer file.
- Checked: Preprocessing settings are applied to the bumper/trailer clip. Use this setting for video clips that have similar requirements to those of the source footage.
- Unchecked: Preprocessing settings are not applied to the bumper/trailer clip; the clip is appended to the source footage as is. Use this setting for animated GIFs or other bumper/trailer files that do not require the same preprocessing as the source footage.
|
Separate Capture from Preprocess |
Instructs the Cisco MXE 3500 to separate the real-time audio and video capture step from the preprocessing step. As a result, the Cisco MXE 3500 will not apply the preprocessor setting until the media acquisition is entirely completed. This mode is recommended for encoding Live jobs with non-standard frame sizes such as 400x300 and/or with heavy preprocessor settings such as higher level of blur or noise reduction. Separating the preprocessing from the capture step guarantees that the preprocessing can be performed even while using the capture card as the input device.
- Checked: Specifies that the processing will occur in two passes. The first pass captures the input completely, and the second pass applies the preprocessing.
- Unchecked: Specifies that the preprocessing will occur normally, that is, capture and preprocessing together in the same pass.
|
MXF Capture Bit Rate |
Use this setting for higher quality encodes that require scaling and other preprocessing features. In this mode, two-stage preprocessing is employed. In the first stage, the incoming video is encoded into a high-bitrate, I-frame-only MPEG-2 MXF format. The actual MXF bit rate is set in the Preprocessor Profile > MXF Capture Bit Rate field. The valid bit rate range is 50 to 300 Mbps. In the second stage, regular file-based preprocessing is executed on that MXF file. |
Color (Preprocessor)
Figure 6-8 shows Color settings. Table 6-6 describes the settings.
Figure 6-8 Preprocessor Profile: Color Settings
Table 6-6 Preprocessor Profile: Color Settings and Descriptions
Brightness |
Adjusts luminance as measured against the source video. Values range from 50% (half as bright) to 150% (one and a half times as bright). The total value range is from 0 to 200%. Default value is 100%, which leaves brightness unchanged. |
Contrast |
Adjusts separation between the blackest black and the whitest white. Values range from 50% to 150%. The total valid range is 0 to 200%. The default value is 100%, which leaves contrast unchanged. |
Hue |
Adjusts hue of colors in the video from red (decrease) to green (increase). Values range from -10° to +10°. The total value range is -180° to +180°. The default value is 0°. |
Saturation |
Adjusts the amount of color in the video image expressed as a percentage of source video color. Values range from 50% to 150%. The total valid value range is 0 (remove all color) to 200 (double the color). The default value is 100%. |
Gamma |
Adjusts the mid-range (gray) luminance values of the video, leaving black and white values unchanged. The mapping is applied in RGB space, and each color channel independently receives the correction. Values range from 0 to 4.0. The total valid value range is 0 to 255. The default value is 1.0. |
Black Point |
Defines the threshold for 100% black. Any pixel below the number entered here will be converted to black. Values range from 0 to 40. The total valid value range is 0 to 255. The default value is 0. Setting the black point higher will reduce detail in the dark areas of the video, improving compression efficiency. |
Black Point Transition |
Sets the amount of smoothing between black and surrounding colors. Black Point affects only pixels below the threshold set. Lower the value to maintain the sharpest transition, or increase the value for smoother transition. Values are 0 to 255. The default value is 15. |
White Point |
Defines the threshold for 100% white. All pixels above the number entered will be converted to white. Values range from 0 to 255. The default value is 255. Setting the white point lower will reduce detail in the light areas of the video, improving compression efficiency. |
White Point Transition |
Sets the amount of smoothing between white and surrounding colors. Lower the value to maintain the sharpest transition, or increase the value for a smoother transition. Values are 0 to 255. The default value is 15. |
Color Rescale |
Determines whether color will be expanded from video levels (16-235) to computer levels (0-255). The default value is Yes. Most video formats set 100% black (7.5 IRE) to 16 when mapped to 8 bit sampling and 100% white (100 IRE) to 235. Most computers set 100% black to 0 and 100% white to 255. Color rescale expands the range by mapping 16 to 0 and 235 to 255 to ensure that the color range is optimized for computer display.
- On: Luminance and color levels will be expanded from video levels (16-235) to computer levels (0-255). This is the default value.
- Off : Luminance and color levels will be unchanged from video levels (16-235).
If encoded video looks murky, with no true blacks or true whites, Color Rescale may be Off when it should be On. If encoded video has too much black and white, one possible cause may be that Color Rescale is On when it should be Off. |
601-709 Color Space |
Determines how color will be adjusted during conversion from HD to SD or SD to HD.
- 601(SD) – 709(HD)
- 709(HD) – 601(SD)
|
Noise Reduction (Preprocessor)
Figure 6-9 shows Noise Reduction Settings. Table 6-7 describes the settings.
Figure 6-9 Preprocessor Profile: Noise Reduction Settings
Table 6-7 Preprocessor Profile: Noise Reduction Settings and Descriptions
Temporal Smoothing |
Defines how frames are combined for interframe smoothing. This specifies the number of input frames to average when constructing an output frame. Values range from 1 to 4 frames in terms of the input frame rate from the source. The default value is 1, which results in no smoothing (a frame compared to itself will be an exact match). |
Blur |
Specifies how much to blur the source footage. Values range from 0 to 4.0. The total valid value range is 0 to 10.0. Blur is generally used at lower bit rates to reduce image detail, which improves the overall appearance of the finished clip at high compression rates. Blurring degrades the image but enables better compression. |
Noise Reduce |
Used to remove small, irregular detail from the source video. The range of values refers to the size of the detail to be removed. Recommended range is from 0 to 3.0. Complete range is from 0 to 6.0. The default value is 0. |
Unsharp Mask Enabled |
Used to enhance edge detail in the image without enhancing other detail. If checked the Unsharp Radius and Unsharp Strength sliders are activated.
- Checked: Indicates that Unsharp Mask smoothing will be used. This reduces compression efficiency, but can improve perceived clarity of the image.
- Unchecked: Indicates that Unsharp Mask smoothing will not be used. This is the default value.
Unsharp Mask reduces compression efficiency, but can improve the perceived quality of the image. This is recommended for some video formats, such as VHS, and for multigenerational images where a sharper image is desired. |
Unsharp Radius |
Used only when Unsharp Mask Enabled is checked. Increase the value to increase sharpening on larger objects within the image. Values range from 0 to 8.0. Default is 0. |
Unsharp Strength |
Used only when Unsharp Mask Enabled is checked. Increase the value to increase the strength of the sharpening effect. Values range from 0 to 200. The default value is 100. |
Manage Input Extensions (Preprocessor)
Figure 6-10 shows the Manage Input Extensions settings. Table 6-8 describes the settings.
Figure 6-10 Preprocessor Profile: Manage Input Extensions Settings
Table 6-8 Preprocessor Profile: Input Extensions Settings and Descriptions
Manage Input Extensions |
This option allows you to handle file extensions based on a configuration file. First, follow these instructions:
1. Create and save a file that matches the XML format in the following example (Proprietary File Handling XML):
<extension input="ts" treat-as="mpg" />
<extension input="" treat-as="gxf" type="directshow" />
<extension input="mp4" type="directshow" />
<extension input="avi" type="quicktime" />
In the example:
– The first entry tells the Cisco MXE 3500 to treat .ts extensions as .mpg extensions and to decode them using the default pipeline.
– The second entry tells the Cisco MXE 3500 to treat files without an extension as .gxf (Grass Valley) files and to decode them using DirectShow.
– The third entry tells the Cisco MXE 3500 to use DirectShow to decode .mp4 files.
– The fourth entry tells the Cisco MXE 3500 to use QuickTime to decode .avi files.
2. On the Preprocessor Profile, Manage Input Extensions section, check the Enabled box.
3. Next to Configuration File, click the Browse button, and navigate to the new XML file (created in Step 1).
Note Currently, the "treat-as" option cannot be combined with type="quicktime". |
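A complete configuration file wraps the entries shown above in a well-formed XML document. The following is a minimal sketch; the XML declaration and root element name are illustrative assumptions, since only the <extension> entries appear in the example above:
<?xml version="1.0" encoding="utf-8"?>
<!-- Root element name below is illustrative; only the <extension> entries are documented above. -->
<extensions>
  <extension input="ts" treat-as="mpg" />
  <extension input="" treat-as="gxf" type="directshow" />
  <extension input="mp4" type="directshow" />
  <extension input="avi" type="quicktime" />
</extensions>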
Line21/VANC Data (Preprocessor)
Figure 6-11 shows Line21/VANC settings. You can add Line 21/VANC data to the output by selecting the source from the options described in Table 6-9.
See also: Extracting VBI Data from SD Pinnacle Sources (Line 21/VANC Data).
Figure 6-11 Preprocessor Profile: Line21/VANC Settings
Table 6-9 Preprocessor Profile: Source Settings and Descriptions
None |
This setting indicates that no Line 21 data will be added to the output. |
VBI (Line 21) |
The Cisco MXE 3500 passes the Line 21 information found in the Vertical Blanking Interval (VBI) of the source media to the encoded output. The output encoding differs depending upon the selected option. See also: Extracting VBI Data from SD Pinnacle Sources (Line 21/VANC Data).
- CC passthrough to VBI (Seachange, Pinnacle and GXF)
- CC passthrough to MPEG user data (Omneon, VOD)
|
Embedded (Line 21 /VANC) |
The Cisco MXE 3500 passes the closed captioning information found in the MPEG user data of the source media (currently only in MPEG-2-based .mov and Intermediate .ref files) and in an embedded VANC track (currently only in Avid DNxHD .mov files) to the encoded output. The output encoding differs depending upon the selected option.
- CC passthrough to VBI (Seachange, Pinnacle and GXF)
- CC passthrough to MPEG user data (Omneon, VOD)
|
Submission (CC File) |
The Cisco MXE 3500 will embed the data found in a Scenarist Caption file (.scc), Cheetah Caption file (.cap), NCI Caption file (.cap), or NCI Timed Roll-up file (.flc) into the encoded output. The output encoding differs depending upon the selected option.
- CC passthrough to VBI (Seachange, Pinnacle and GXF)
- CC passthrough to MPEG user data (Omneon, VOD)
File Name: If you enable Closed Captioning from a file source, you must specify the file location on the File Job submission page > Advanced section > Closed Captioning File at the time of submission. |
Extracting VBI Data from SD Pinnacle Sources (Line 21/VANC Data)
The Cisco MXE 3500 supports VBI data extraction from Standard Definition (SD) Pinnacle sources. You can extract the Line 21/VANC data from the VBI when ingesting SD Pinnacle sources. The Cisco MXE 3500 reconstructs the VBI data found in the MPEG user data fields before it enters the signal processing pipeline in the preprocessor.
Procedure
Step 1
On the Preprocessor Profile page, scroll down to the Line 21/VANC Data section.
Step 2
From the Source drop-down, select VBI (Line 21). See Figure 6-12.
Figure 6-12 Selecting VBI Source for Line 21/VANC Data
Note
For the Cisco MXE 3500 to identify a source file as being Pinnacle-based, the media file must have the .std extension or the file name itself must be std. The preprocessor will also read the supporting files (if present).
If the media (MPEG) file is named with an .std extension, the supporting file names must contain the .ft and .header extensions. If the media file is named std, the supporting files must be named ft and header. The supporting files must reside in the same directory as the std file.
Closed Captioning (Preprocessor)
Figure 6-13 shows the Closed Captioning settings.
Figure 6-13 Preprocessor Profile: Closed Captioning Settings
Checking the Burn-in box allows you to render closed captions graphically on the screen. The graphic is white or colored characters on a black rectangle. The 'burned-in' captions will appear on the intermediate preprocessor .avi file as well as on the encoded outputs.
Note
If the Burn-In box is checked and Line 21/VANC Data Source is set to Submission, then a caption file must be specified in the File Submission profile. If Embedded or VBI is selected, no caption file specification is needed.
Aspect Ratio Conversion (Preprocessor)
The Aspect Ratio Conversion tools provide several methods for scaling media between various formats. For example, an image with a 4:3 aspect can be converted to a 16:9 aspect, or vice-versa.
The Cisco MXE 3500 makes use of pixel aspect ratio information in the conversions. The Cisco MXE 3500 uses default assumptions about the pixel aspect ratio based on the pixel dimensions of an image. For example, an image size of 720x480 or 720x486 is assumed to be SD NTSC, and is assigned the NTSC pixel aspect ratio of 0.9. For complete control, the user may explicitly set both the input media pixel aspect ratio and the pixel aspect ratio for the preprocessor output image.
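For example, using the default assumption of a 0.9 pixel aspect, a 720x480 source has an overall display aspect of (720 × 0.9) / 480 ≈ 1.35, which corresponds closely to the standard 4:3 (approximately 1.33) display ratio.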
The input dimensions are read from the input media. The preprocessor output dimensions are set by the encoder which receives the preprocessed video. Remember that in the case where the preprocessor is supplying data for more than one encode, it produces the largest of the requested dimensions. The Aspect Ratio Conversion tools specify how to convert the input media to the preprocessor output.
Note
Pixel aspects are ignored in the Stretch to fit mode. For other modes, understanding the pixel aspects of both the input and output formats is important for preserving the appearance of the media and avoiding squashed or stretched images. Changing the pixel aspect will affect the size, stretching, and cropping of the encoded image.
Figure 6-14 shows Aspect Ratio Conversion settings. Table 6-10 describes the settings.
Figure 6-14 Preprocessor Profile: Aspect Ratio Conversion
Table 6-10 Preprocessor Profile: Aspect Ratio Conversion Settings and Descriptions
Mode |
- Stretch to fit: Stretches or shrinks the source media to the dimensions of the preprocessor output. There is no adjustment to preserve the original aspect ratio of the image, and the pixel aspect ratio settings are not used.
- Cropping: Changes the image size without stretching the image. The Cisco MXE 3500 scales the image linearly so that the output image is completely covered. The input and output image edges match in either the horizontal or vertical direction; some of the image is lost to cropping in the other direction. The cropping is done equally from top and bottom or from left and right. Cropping mode uses the supplied pixel aspect ratio information.
- Letterbox, Curtains: Linearly scales the image until it is completely contained within the boundaries of the output dimensions. Unused space in the vertical direction introduces black bars (letterboxing) equally on the top and bottom of the output image. Alternately, if there is unused space horizontally, black bars (curtains) appear on the left and right sides of the output image. Letterbox/Curtains mode uses the supplied pixel aspect ratio information.
- Non-linear Stretch: Stretches the image more at the edges and not at all in the center. The non-linear stretching is in the horizontal direction; the vertical scaling is linear. This option can, for example, provide a full 16:9 output image from a 4:3 source with no distortion near the image center. Non-linear Stretch mode uses the supplied pixel aspect ratio information.
- Anamorphic: Anamorphic source video is a 16:9 widescreen format that has been compressed horizontally to fit a narrower, standard-size image, such as 720x480. This means each pixel is wider than tall on the displayed image, with a pixel aspect ratio greater than 1.0. To tell the Cisco MXE 3500 that your source material is anamorphic, select one of the anamorphic choices from the Input Pixel drop-down. Alternately, if you know the precise pixel aspect ratio, set Input Pixel to Custom and set the Pixel Aspect value manually. |
Input Pixel / Input Pixel Aspect |
This defines the pixel aspect ratio of the input media. In general, media presented to the Cisco MXE 3500 for ingest may arrive without specification of their video format. Pixel aspect ratio (or simply pixel aspect) is part of this format information and describes the shape of the image element represented by each pixel. Pixels can be square or rectangular, depending on the format. Pixel aspect is pixel width divided by pixel height.
The default setting tells the Cisco MXE 3500 to make certain industry-standard assumptions for the pixel aspect value based on the input image dimensions. Other standards may be selected from the drop-down list to override the default. For complete flexibility, there is a custom option that allows the pixel aspect to be set explicitly to any numerical value. This value is entered in the Input Pixel Aspect box, which is enabled only for the custom setting.
The MXE 3500 provides 'Smart Ingest' functionality, enabling users to automatically apply aspect ratio conversion algorithms (letterboxing/curtaining) to source footage without knowing the source/destination pixel aspect ratios. When 'Auto' is selected for the input pixel settings, the preprocessor will attempt to automatically determine the aspect ratio of the source footage. In this mode, a single preprocessor profile provides proper results for sources with different aspect ratios. |
Output Pixel / Output Pixel Aspect |
This defines the pixel aspect ratio of the preprocessor output. Note that this is the media presented as input to the Cisco MXE 3500 encoders. For single-encode jobs, the preprocessor produces media sized to match the encoded output dimensions. However, a Cisco MXE 3500 job may produce multiple encoded formats, in which case the preprocessor produces an intermediate media format matching the largest of the requested encode dimensions.
The default setting tells the Cisco MXE 3500 to make certain industry-standard assumptions for the pixel aspect value based on the output image dimensions. Other standards may be selected from the drop-down list to override the default. For complete flexibility, there is a custom option that allows the pixel aspect to be set explicitly to any numerical value. This value is entered in the Output Pixel Aspect box, which is enabled only for the custom setting.
The MXE 3500 provides 'Smart Ingest' functionality, enabling users to automatically apply aspect ratio conversion algorithms (letterboxing/curtaining) to source footage without knowing the source/destination pixel aspect ratios. When 'Auto' is selected for the output pixel settings, a single preprocessor profile provides proper results for sources with different aspect ratios. |
Aspect Ratio Conversion Examples
Figure 6-15 shows Aspect Ratio Conversion examples.
Figure 6-15 Aspect Ratio Conversion Examples
Timecode (Preprocessor)
The Cisco MXE 3500 preprocessor prepares timecodes for the output media in various ways depending on the Source selection. These timecodes are metadata items passed on to the encoders for possible embedding. Not all encoders make use of timecodes. The Cisco MXE 3500 adds a timecode track to output media that support it.
Figure 6-16 shows Timecode settings. Table 6-11 describes the settings.
Figure 6-16 Preprocessor Profile: Timecode Settings
Table 6-11 Preprocessor Profile: Timecode Settings and Descriptions
Source |
Select one of the following:
– For File Jobs, the timecode is offset by the Start Timecode field set on the File Job Submission page. This value is provided at the time of job submission; it is not stored in the profile. – For Live Jobs, the timecode is assumed to start at 0.
- VBI: VITC timecode will be stripped from the incoming VBI and added to the appropriate location in the output media. See also: Extracting VBI Data from SD Pinnacle Sources (Timecode).
- Embedded: Timecode will be obtained from the source file metadata (for instance, from the GXF wrapper or from the Timecode track of a QuickTime file) and added to the appropriate location in the output media.
- Profile Specified: Timecodes are offset from the Start Timecode entry below the Source setting. This value is stored in the Preprocessor Profile.
|
Start Timecode |
Enter the timecode that will appear on the first encoded frame. You can match the source file timecode or start the timecode at 00:00:00:00. Indicate drop-frame (semi-colon separated, hh;mm;ss;ff) or non-drop frame (colon separated, hh:mm:ss:ff). |
Burn In |
When enabled, this feature takes the timecode read from the input and burns it into the output image on every frame. If this feature is enabled, you must specify the font height and location. |
Font Height (%) |
Specifies the size of the timecode. |
Horizontal/Vertical |
Specifies the location of the timecode. |
Extracting VBI Data from SD Pinnacle Sources (Timecode)
The Cisco MXE 3500 supports VBI data extraction from Standard Definition (SD) Pinnacle sources. You can extract the timecode from the VBI when ingesting SD Pinnacle sources. The Cisco MXE 3500 reconstructs the VBI data found in the MPEG user data fields before it enters the signal processing pipeline in the preprocessor.
Procedure
Step 1
On the Preprocessor Profile page, scroll down to the Timecode section.
Step 2
From the Source drop-down, select VBI. See Figure 6-17.
Figure 6-17 Selecting VBI Source for Timecode
Note
For the Cisco MXE 3500 to identify a source file as being Pinnacle-based, the media file must have the .std extension or the file name itself must be std. The preprocessor will also read the supporting files (if present).
If the media (MPEG) file is named with an .std extension, the supporting file names must contain the .ft and .header extensions. If the media file is named std, the supporting files must be named ft and header. The supporting files must reside in the same directory as the std file.
Watermarking (Preprocessor)
The Watermarking section allows you to select a file to be used as a graphic watermark (sometimes called a “bug”) that normally appears as an overlay in the lower corner of the screen.
Figure 6-18 shows Watermarking settings. Table 6-12 describes the settings.
Figure 6-18 Preprocessor Profile: Watermarking Settings
Table 6-12 Preprocessor Profile: Watermarking Settings and Descriptions
Image |
Determines which image file will be used as a watermark. The format of the watermark file must be .psd, .tga, .pct, or .bmp. |
Origin |
Identifies the reference point from which X Distance and Y Distance will be measured.
- Bottom-right: Watermark placement will be relative to the lower right corner of the source image.
- Bottom-left: Watermark placement will be relative to the lower left corner of the source image.
- Top-right: Watermark placement will be relative to the upper right corner of the source image.
- Top-left: Watermark placement will be relative to the upper left corner of the source image.
The watermark placement is expressed in terms of the input stream for ease of use. The Cisco MXE 3500 resizes the watermark accordingly and places it on the encoded output. This is important because the watermark is unaffected by other Preprocessor settings (except fade). If Crop settings are applied, watermark placement will be measured from the new edges defined by the Crop settings. |
Mode |
Determines the display mode for the watermark image.
- Composite: Straight composite of the watermark onto the source video. If an alpha channel is present, it is used in the compositing.
- Luminance: The luminance and hue of the image is altered according to the luminance and hue of the watermark.
|
Units |
Selects the units used for watermark placement and size. There are two options: pixels (default) and percent. If the Units drop-down list is set to pixels:
- The X distance and Y distance controls will support pixel values -768 to 768.
- The Width and Height controls are enabled.
- The Coverage area control (see below) is disabled.
|
X Distance |
Changes the location of the watermark image on the finished output file. This setting changes the placement of the watermark along the horizontal axis of the image. X-distance is expressed in pixels of the source image x coordinate. Values range from -768 to +768. The default value is 0, which places the image at the selected Origin. |
Y Distance |
Changes the location of the watermark image on the finished output file. This setting changes the placement of the watermark along the vertical axis of the image. Values range from -768 to +768. The default value is 0, which results in no change in the placement of the image. |
Width |
Determines the width of the watermark in terms of pixels of the source image. Values range from 1 to 768. The default value is 200. |
Height |
Determines the height of the watermark in terms of pixels of the source image. Values range from 1 to 576. The default value is 100. |
Coverage Area |
Determines the area of the source video that the watermark will cover. Units are in percent of the video image. Coverage area is a numeric control that selects values from 1.0 to 100.0 percent. This control is enabled only if the Units selector (see above) is set to percent. |
Opacity |
Determines how opaque or transparent the watermark image will be. The watermark can be made more or less noticeable by adjusting the opacity. Values are 0-200%. Default value is 100%. In Composite mode this is effectively an 'alpha' value, where 100% means full opacity. In Luminance mode this parameter effectively adjusts the strength of the watermark. |
Start Timecode |
This entry specifies the time when the watermark will appear, measured from the beginning of the clip. The format is HH:MM:SS.mmm, where the mmm are milliseconds. |
Duration |
This entry specifies the length of time in seconds that the watermark will be applied. Enter 0 to have the watermark display for the entire length of the clip. |
Fade Time |
This entry specifies the length of time in seconds it takes for the watermark to fade in and fade out. Fades happen within the duration time of the watermark, so a fade-in begins at the start time, and a fade-out finishes when the duration has expired. |
Audio (Preprocessor)
The Audio section of the Preprocessor Profile is used to modify settings after mixing and mapping audio channels and before encoding.
See also: Dolby DP 600 Program Optimizer.
Figure 6-19 shows the Audio settings. Table 6-13 describes the settings.
Figure 6-19 Preprocessor Profile: Audio Section
Table 6-13 Preprocessor Profile: Audio Settings and Descriptions
Audio Passthrough |
Passes the input audio through to the output with no preprocessing applied. |
Fade In |
Amount of time allotted for a linear fade-in from silence at the beginning of the clip. Defined in seconds. Values range from 0 to 10 seconds, with 0 seconds as the default. |
Fade Out |
Amount of time allotted for linear fade-out to silence at the end of clip. Defined in seconds. Values range from 0 to 10 seconds with 0 seconds the default. |
Add Silent Audio Track |
When checked, this option inserts a silent audio track into the decoded output of the Preprocessor. This insertion only occurs if the source file does not contain any audio tracks. If the source file contains audio tracks, this option is ignored. If an Encoder Profile is set up to encode audio but the source file does not contain audio, the encoder will fail. A silent audio track can be inserted to provide an audio source to any encoders that expect/require audio. |
Audio Filters (Preprocessor)
Figure 6-20 shows Audio Filters settings. Table 6-14 describes the settings.
Figure 6-20 Preprocessor Profile: Audio Filters
Table 6-14 Preprocessor Profile: Audio Filter Settings and Descriptions
Low Pass |
Suppresses samples above the frequency assigned. Expressed in kilohertz. Values are 0 to 24. The default value is 0, which disables the filter. The term Low Pass indicates that lower frequencies are allowed to pass. Audio compression codecs work more efficiently when higher frequencies are suppressed. |
High Pass |
Suppresses frequencies below the set value. Expressed in hertz (Hz). Values are 0 to 200. The default value is 0. The term High Pass indicates that high frequencies are allowed to pass. Some types of noise or hum may be present at lower frequencies; suppressing this noise can improve compression efficiency. |
Volume Filter Type |
Defines how the loudness of the audio is controlled. Specific Filter Type choices can activate controls in the lower part of the window.
- None: No adjustment is made.
- Adjust: Specifies the percentage by which the volume will be amplified or attenuated. The units are linear (waveform) units.
- Normalize: Specifies the percentage of the full scale that the typical volume should match. The Normalize setting is single-pass: it does not look at the entire audio clip. Instead, it uses a measure of the volume obtained in a fading window of approximately 10 seconds duration. This can be useful for Live capture. Values are 0 (silent) to 100 (maximum volume).
- 2-Pass Normalize: The entire clip is scaled so that the maximum sample in the clip is normalized to the given value. The 2-pass normalization is valid only with file-based media. Normalization values range from 0 (silent) to 100 (peak sample set to full scale).
- 1770 2-pass norm: This option enables audio normalization as defined in the international standard ITU-R BS.1770. The processing is two-pass, meaning that the audio content is scanned once by the Cisco MXE 3500 to measure the loudness, and scanned again to normalize the loudness. ITU-R BS.1770 is commonly used for normalizing 5.1 channel surround-sound media. It may also be used with stereo.
– Selecting 1770 2-pass norm displays the Target Volume box. Enter the desired normalization value here in LKFS units, as defined in the standard. These units are similar to dB full-scale units, and are negative. Commonly used values are in the range -17 to -25 LKFS. |
Volume Adjust |
For the Adjust option, this value specifies the scaling of the output audio. The units are linear (waveform) units as a percentage of the input level. Values are 0% (silent) to 200%, with 50% as the default. |
Volume Normalize |
For the Normalize option, this value specifies the volume of the output audio. The value is in linear (waveform) units and is a percentage of full scale. Values are 0% (silent) to 100%, with 25% as the default. For the 2-pass Normalize option, this value specifies the amplitude of the maximum sample in the audio clip. The value is in linear (waveform) units and is a percentage of full scale. Values are 0% to 100%, with 25% as the default. |
Compressor Threshold |
This is a single-pass dynamic range compressor with no look-ahead. It can be useful for controlling the volume in a Live capture situation. It is not recommended for use with file-based encoding. (A professional-quality two-pass compressor is available from Cisco. Contact your Sales representative.) The compressor maintains an RMS estimate of the typical audio level with a fading memory time constant of many seconds, and compresses relative to this empirically measured level. The Compressor value is the compression threshold level relative to the typical level measured in decibels of audio power. When the threshold is exceeded, audio loudness is attenuated by the Compressor Ratio. Therefore, lower Compressor values provide more compression. Values are –40 dB to +6 dB. |
Compressor Ratio |
Determines the amount of attenuation that will occur beyond the point defined in the Compressor threshold field. Values for ratio are 1 (no compression) to 20 (20:1 approaching limit). |
Input/Output Audio Channel Mapping (Preprocessor)
This feature is not available on the Cisco MXE 3500.
Thomson Nextamp Forensic Watermarking
Thomson Nextamp Forensic Watermarking is not available on the Cisco MXE 3500.
Graphics Overlay (Preprocessor)
To use this feature, you must purchase and install the Graphics Overlay feature license on the standalone Cisco MXE 3500 or the Resource Manager device. See the Deployment and Administration Guide for Cisco MXE 3500 for more information.
Cisco MXE 3500 synchronizes video and metadata with graphic templates during transcoding to produce dynamic multilayered titles, branded graphics, cross promotions, subtitles, captions and animations. Overlays are suitable for both small screen and large screen applications. Graphic templates are produced with Adobe authoring software used by most creative and design professionals. With Cisco MXE 3500 Graphics, editors incorporate built-in scene changes, animations, 8-bit alpha blending, and transitions – all with runtime metadata triggers. Adding graphic overlays to Cisco MXE 3500 output requires the following two additional inputs:
- A Flash .swf template that defines the attributes of graphical elements, including placement, color, and size. For example, text fields in the template are dynamic variables that are defined at run time.
- An XML metadata description that defines the specific values for the graphical elements to be applied at encoding to the overlay. For example, titling text is supplied so that the same template can be reused on any video clip.
Graphic overlays (geometrical objects, text, metadata text, images, and/or movies) are applied to any Cisco MXE 3500-supported output format. The overlay may be applied to main content, bumpers, and/or trailers. The overlay is applied over media near the end of the preprocessing. The only video preprocessing operation that follows the overlays is forensic watermarking.
You can use any application that produces a Flash 7 .swf file with ActionScript™ 2.0, including Adobe Flash Pro 8, Flash Creative Suite 3, Photoshop, and After Effects, to produce the graphic overlay template. You then create XML metadata control files in a text editor or a custom application. Using the Cisco MXE 3500 User Interface, the graphic overlay template (.swf file) and the metadata (XML) may be applied independently to each segment. The metadata can be applied as a time-referenced XML file (for file jobs) or can be read from an XML file in real time (for live jobs).
In addition, the Cisco MXE 3500 supports the following file reference methods:
- Path name
- UNC path name
- URL
Understanding Graphics Overlay
Spatial Considerations
The overlays are always rectangular. They are resized according to the preprocessor output dimensions. Overlays are not stretched. If the shape of the overlay and the preprocessor output media do not match, the overlay will be sized as large as possible without cropping, meaning that it may not cover all of the output media area. The overlay is centered, so there may be strips on the left and right, or on the top and bottom, not covered by the overlay. Overlay sizing may be understood by measuring widths and heights in pixel units. If your preprocessor output has an implied pixel aspect ratio, it is not considered.
Temporal Considerations
User-supplied overlay.swf files have a specific playback frame rate. This may or may not match the frame rate of the preprocessor output media. In case of a mismatch, the overlay may be temporally stretched or compressed by the preprocessor to better match the output frame rate. The frame rate change is done by dropping or replicating overlay frames. Such frame rate changes are not always done by the exact ratio of frame rates; a new rate is chosen for the overlay that preserves smooth motion.
End of .swf Movie
At the end of an .swf movie, the last frame will continue to be overlaid by default until the end of the preprocessed output. Other behaviors may be programmed into the .swf file, if needed. For example, an .swf movie can jump back to the beginning and repeat.
Rendered Metadata
It is possible to change rendered metadata text on the overlay during the preprocessing. This is controlled by a metadata file that specifies lines of text to embed in the overlay at particular times.
Other Metadata
Metadata can be used to control the Flash overlay movie. For example, it is possible to jump to a different part of the Flash movie. This is set up in the.swf file during the Flash authoring process. A variable is assigned different values to indicate different locations in the.swf movie.
Bumpers and Trailers
Overlays may also be placed on bumpers and trailers, but they are handled completely independently from the main clip: the information that controls the overlays is specified separately for bumpers and trailers.
Note
Check the Preprocess Bumper and/or Preprocess Trailer box in the Preprocessor Profile to place overlays on bumpers and/or trailers.
Content/Bumper/Trailer Settings
Figure 6-21 shows the Content/Bumper/Trailer settings. Table 6-15 describes the settings.
Figure 6-21 Content/Bumper/Trailer Settings
Table 6-15 Content/Bumper/Trailer Settings
Enabled |
Check this box to enable the graphic overlay. |
Template File |
Click Browse to locate an .swf template file. |
Meta-Data File/URL |
If the .swf template requires it, specify an .xml metadata file in this field. The metadata descriptions correspond to database items in the "statisticsType" table of the Cisco MXE 3500 DCS database. You can view the user-defined metadata items in the prefilter section of the Job XML.
|
Creating an Overlay Metadata File
The metadata XML file holds metadata items that are transmitted to the Cisco MXE 3500 Graphics Overlay Flash Player at particular times in the preprocessed clip. These metadata items must have names that correspond to variables in the .swf template file. Use a text editor program to create the XML file. The format of the metadata XML file is defined in the "Flash Overlay Metadata XML—Overlay Control Commands" section.
Setting .SWF File Metadata Variables
This XML is used to communicate metadata and other commands affecting the Flash Overlay. It is sent via a text file, and may be changed in real time during the processing.
Note
Overlay Metadata XML is a sequence of events, each surrounded by an <event> tag. The metadata in each <event> is transmitted to the Flash Player at the event time. The events need not be listed in temporal order. The Flash Player may not respond instantly to metadata changes.
Example 6-1 shows overlay metadata XML. Table 6-16 describes the XML tags used in the example.
Example 6-1 Overlay Metadata XML
<eventList>
  <event>
    <!-- The time and variable name below are illustrative; use values that match your .swf template. -->
    <time>5.0</time>
    <data>
      <var>
        <name>speakerName</name>
        <value>John Smith</value>
      </var>
    </data>
  </event>
</eventList>
Table 6-16 Metadata XML Tags and Descriptions
<eventList> |
This tag encloses all the XML for the Flash Overlay metadata. |
<event> |
This tag encloses metadata to be used at a particular time. Multiple <event> children are allowed for <eventList>. |
<time> |
This tag encloses the time (floating point, seconds since the start of the clip) for the metadata. |
<data> |
This tag encloses the metadata to be sent to the Flash Player at the specified time. |
<var> |
This tag encloses a .swf variable name and value. Multiple <var> children are allowed for <event>. |
<name> |
This tag encloses the name of a variable in the Flash .swf file. |
<value> |
This tag encloses a value for the variable in the Flash .swf file. |
Flash Overlay Metadata XML—Overlay Control Commands
Several commands can be embedded in the metadata XML file to control the appearance of the overlay, and can introduce certain types of animation. These commands are not metadata in the same sense as the <name><value> pairs. They are provided as a more convenient alternative to re-authoring the template file.
The commands control when the overlay appears and disappears. You may also create fades, wipes, and slides.
Animation Controls
Graphic overlays (in addition to their related ActionScript) are usually created with software such as Adobe Flash (Pro 8 to CS5 or later) or Adobe After Effects, or any program that outputs an .swf file.
The Cisco MXE 3500 offers animation controls that allow certain changes to the appearance of the overlay, via metadata XML tags, without the need to produce another .swf file. Examples of what the Cisco MXE 3500 animation controls allow you to do are the following:
- Easily create fade-in and fade-out, wipes and slides.
- Use a single .swf file for different media clips, changing only the timing of the overlay appearance.
- Use an .swf file to create a semi-transparent "bug" logo that appears periodically over the video.
To create and adjust graphic overlays:
1. Create the .swf file, which may include ActionScript.
2. Use a text editor to insert animation XML into the metadata XML file.
Graphic Overlay XML
Basic Structure of the XML File
Animation controls go in the Flash Overlay Metadata XML file, which has the following basic structure (the event contents are filled in as described below):
<eventList>
  <event>
    ...
  </event>
  <event>
    ...
  </event>
</eventList>
See also: Flash Overlay Metadata XML—Overlay Control Commands.
- The <event> tags may contain metadata items, timing information, and animation controls. Events start at particular times during the video. An event may specify an action that takes place over an extended period of time, not just at one instant.
- Event tags may not be nested inside other event tags.
- The file is read and parsed whenever the file is modified or saved. The overlay algorithm reads and acts on all of the events that precede the current time. For example, one event may define the timing of an overlay, while another event specifies a metadata value that affects the overlay via Flash ActionScript.
- While you can use multiple events, they should not overlap temporally if there is a conflict of functionality. If such events overlap, the result is undefined and may not give the desired effect.
Structure of an Event
An event tag may contain commands to control:
- Metadata definitions: These may be mixed into any event. They are applied at the beginning of the event and "stick," that is, the metadata values are communicated to the Cisco MXE 3500 embedded Flash Player, where they are permanent until changed.
- Animation controls: any of the following tags.
These control how and when the overlay appears and disappears, how the Flash movie plays, and how it is positioned on the video. Every event is required to have a <time>, <starttime>, or <stoptime> tag. Times are referenced to the beginning of the clip.
Times and Timecodes
All tags that refer to time may have values given either in seconds (floating point) or as timecodes. Timecodes simply measure a length of time in HH:MM:SS:ff format instead of seconds; they do not reference any timecode embedded in the media. For example, the "duration" tag may hold a timecode that simply specifies the length of time in HH:MM:SS:ff format. The semicolon notation HH;MM;SS;ff may also be used with the standard meaning (two frames dropped every minute except for every 10th minute). Timecode values should only be used with PAL or NTSC output rates.
For example, <starttime>21.333</starttime> is equivalent to <starttime>00:00:21:10</starttime> (with an NTSC output rate).
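That is, 10 frames at the NTSC rate of approximately 30 frames per second is about 0.333 seconds, so 21 seconds and 10 frames corresponds to 21.333 seconds.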
The following tags will accept either timecodes or seconds (* means wildcard):
<time>, <starttime>, <stoptime>, <duration>, <fade>, <wipe-*>, <slide-*>, <repeat-period>, <repeat-duration>, <repeat-stoptime>
Event Time and Duration
Table 6-17 lists and describes the Event Time and Duration tags.
Table 6-17 Event Time and Duration Tags and Descriptions
Tag | Description
<starttime> or <time> | The start time of the event, in seconds, measured from the beginning of the clip. <time> may be used as shorthand for <starttime>.
<duration> | The duration of the event in seconds. By default, the duration is infinite (but “live” events have 0 duration by default). By default, overlays are removed at the end of the duration, although the details are controlled by the <off-transition> tag.
<stoptime> | May be used instead of <duration>. The duration is the difference between <stoptime> and <starttime>. If the <duration> tag also appears, the shorter time is used.
<starttime-from-end> | The start time of the event, in seconds, measured from the end of the clip. Used only for file-based clips.
<stoptime-from-end> | The stop time of the event, in seconds, measured from the end of the clip. Used only for file-based clips.
The Live Event
<live/>
- This special tag indicates that the commands enclosed in this event tag are executed immediately. The intent is that the XML in a live event can be changed in real time during a live encoding job. Metadata definitions are sent to the Flash player renderer right away for inclusion in the overlay. The <live/> tag takes precedence over any <starttime> or <time> tag in the same event. When the metadata file is saved, the Cisco MXE 3500 detects this and reads the <eventList>. The <live/> event is assigned a start time equal to the current time.
- The XML file with a <live/> tag should have only one event. If there are multiple live events, only the last one in the file will be used.
- The <live/> event is reinitialized every time the metadata file is written or saved, so if the metadata file is written while the live event is active, that event may be restarted.
- You can use the <duration> tag or <stoptime> tag to define the duration of the live event.
- You can use transitions, <on-transition> or <off-transition>, to make the overlay appear or disappear. Note that in the live case, all transitions are of the <lag/> variety; the <lead/> and <center/> tags have no effect. A sketch of a live event follows this list.
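The listing below sketches a live event. The metadata name and value are hypothetical, and the duration and fade values are illustrative; the tags themselves are the ones documented above.

<eventList>
   <event>
      <live/>
      <data> <name>score</name> <value>3-1</value> </data>
      <duration>10.0</duration>
      <on-transition> <fade>0.5</fade> </on-transition>
      <off-transition> <fade>0.5</fade> </off-transition>
   </event>
</eventList>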
Opacity
<opacity-percent>
This tag defines the maximum opacity for an event. 100 means total opacity, which is the default. You can set this number lower, for example to 50%, to get a semi-transparent overlay for the duration of the event. The event opacity is multiplied by any partial opacity produced by a fade-in or fade-out.
Transition Control
Table 6-18 lists and describes the Transition Control tags.
It is legal to combine a fade with a wipe or a slide transition, as long as the transition times match. If they don't, the fade time is discarded and the wipe or slide time is used for the fading as well.
Table 6-18 Transition Control Tags and Descriptions
Tag | Description
<on-transition> and <off-transition> | These tags enclose the details of how the transitions happen. By default, the overlay is applied at the start time (this is the on-transition) and removed at the end of the event duration (off-transition). However, each of these tags may contain a block of XML specifying the details of the transition using the child tags below.
<fade> | This child tag specifies a fade time in seconds, either fade-in or fade-out, depending on whether the parent is an on-transition or an off-transition.
<wipe-right> | This child tag specifies a wipe time in seconds. The wipe travels from left to right.
<wipe-left> | This child tag specifies a wipe time in seconds. The wipe travels from right to left.
<wipe-up> | This child tag specifies a wipe time in seconds. The wipe travels from bottom to top.
<wipe-down> | This child tag specifies a wipe time in seconds. The wipe travels from top to bottom.
<slide-right> | This child tag specifies a slide time in seconds. The slide travels to the right, from the left.
<slide-left> | This child tag specifies a slide time in seconds. The slide travels to the left, from the right.
<slide-up> | This child tag specifies a slide time in seconds. The slide travels up from the bottom.
<slide-down> | This child tag specifies a slide time in seconds. The slide travels down from the top.
<lag/> | This child tag specifies that the transition lags the event time; that is, the transition begins at the event time. This is the default behavior, unless the <lead/> or <center/> tag appears.
<lead/> | This child tag specifies that the transition leads the event time; that is, the transition starts early and completes at the event time.
<center/> | This child tag specifies that the transition is centered around the event time; that is, it starts before the event time and finishes after it.
<nonlinear> | This changes the animation of a transition, making it go faster at one end and slower at the other. It affects fades, wipes, and slides. A value of 1 corresponds to the linear transitions that are used by default. Higher values slow the animation close to the time when the overlay is fully "on" and accelerate it close to the time when the overlay is fully "off". Good values to use are 2.0 to 3.0. Slides in particular benefit greatly from nonlinear motion.
<delay> | The transition is delayed from the usual time (start time or stop time) by a given number of seconds. This can be useful when dealing with rendering delays in the Flash player / .swf file.
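For instance, a hypothetical event could slide and fade in ahead of its start time and wipe out after its stop time. All values below are illustrative; note that the fade and slide times match, as required when the two are combined.

<event>
   <starttime>5.0</starttime>
   <duration>10.0</duration>
   <on-transition>
      <slide-up>1.0</slide-up>
      <fade>1.0</fade>
      <nonlinear>2.5</nonlinear>
      <lead/>
   </on-transition>
   <off-transition>
      <wipe-down>1.0</wipe-down>
   </off-transition>
</event>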
Automatic Repetitions
Table 6-19 lists and describes the Automatic Repetition tags.
Table 6-19 Automatic Repetition Tags and Descriptions
Tag | Description
<repeat-period> | This specifies that the event will automatically repeat with a period given in seconds. Repeating goes on forever, unless constrained with one of the tags below.
<repeat-count> | This specifies the number of times the event will occur. It is infinite by default. A value of 1 means the event happens one time (as if there were no <repeat-period> tag). A value of 0 turns off the event.
<repeat-duration> | This specifies that the event will repeat within a certain period of time, given in seconds. The number of repetitions will be the largest integer multiple of the repeat period that fits within the repeat duration.
<repeat-stoptime> | This specifies that the event will repeat until the video time exceeds a value given in seconds. The number of repetitions will be the largest integer multiple of the repeat period that fits before the stop time.
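As an illustration, a semi-transparent logo could be made to appear for five seconds every thirty seconds during the first ten minutes of a clip. The sketch below uses only documented tags; all values are illustrative.

<event>
   <starttime>10.0</starttime>
   <duration>5.0</duration>
   <repeat-period>30.0</repeat-period>
   <repeat-stoptime>600.0</repeat-stoptime>
   <opacity-percent>50</opacity-percent>
   <on-transition> <fade>0.5</fade> </on-transition>
   <off-transition> <fade>0.5</fade> </off-transition>
</event>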
Flash Movie Control
Pausing the Flash player is independent of the overlay process. If the Flash movie is paused, the last Flash frame continues to be used for overlay. By synchronizing <pause> and <resume> with overlay transitions, it is possible to make the movie resume the playback from the same point where the movie stopped when the overlay was removed.
These are "sticky" states, meaning that once the movie is paused, it remains paused until a resume event occurs, regardless of the presence of other events. Events that do only pause or resume may overlap other events.
Table 6-20 lists and describes the Flash Movie Control tags.
Table 6-20 Flash Movie Control Tags and Descriptions
Tag | Description
<pause/> | Stop the Flash player rendering.
<resume/> | Start the Flash player running from the point at which it was paused.
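For example, the Flash movie could be frozen for a stretch of the clip and then resumed. The times below are illustrative.

<event>
   <time>30.0</time>
   <pause/>
</event>
<event>
   <time>45.0</time>
   <resume/>
</event>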
Shortcut Controls
Two commands, <apply> and <clear>, may be used as convenient abbreviations to control the overlay in a simple way.
<starttime>20</starttime>
The example above will begin turning the overlay on at 20 seconds, with a fade-in time of 5 seconds. Note that this eliminates the need for the <on-transition> block.
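Written out in full, such an event might look like the following sketch, where the <apply> value supplies the 5-second fade-in.

<event>
   <starttime>20</starttime>
   <apply>5</apply>
</event>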
Table 6-21 lists and describes the Shortcut Control tags.
Table 6-21 Shortcut Control Tags and Descriptions
Tag | Description
<apply/> | Start the overlay. The given value will be the fade-in time in seconds.
<clear/> | Remove the overlay. The given value will be the fade-out time in seconds.
Overlay Positioning
Table 6-22 lists and describes the Overlay Positioning tags.
Table 6-22 Overlay Positioning Tags and Descriptions
Tag | Description
<offset-right-pixels> | Offsets the overlay horizontally by a given number of pixels. Default 0.
<offset-left-pixels> | Offsets the overlay horizontally by a given number of pixels. Default 0.
<offset-up-pixels> | Offsets the overlay vertically by a given number of pixels. Default 0.
<offset-down-pixels> | Offsets the overlay vertically by a given number of pixels. Default 0.
<offset-x-pixels> | Same as <offset-right-pixels>.
<offset-y-pixels> | Same as <offset-up-pixels>.
<offset-right-percent> | Offsets the overlay horizontally by a percent of image width. Default 0.
<offset-left-percent> | Offsets the overlay horizontally by a percent of image width. Default 0.
<offset-up-percent> | Offsets the overlay vertically by a percent of image height. Default 0.
<offset-down-percent> | Offsets the overlay vertically by a percent of image height. Default 0.
<offset-x-percent> | Same as <offset-right-percent>.
<offset-y-percent> | Same as <offset-up-percent>.
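For example, a logo overlay could be nudged toward the upper-right area of the frame using percentage offsets. The values below are illustrative.

<event>
   <starttime>0</starttime>
   <offset-right-percent>5</offset-right-percent>
   <offset-up-percent>8</offset-up-percent>
</event>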
Debugging
<debug/>
This tag may be inserted as a child of <eventList>. It generates a local text file named “GraphicOverlayDebug.txt” that contains timing information about the overlay events. This information may be useful in debugging the animation XML.
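For example, a minimal file with debugging enabled might look like the following sketch; the event content is illustrative.

<eventList>
   <debug/>
   <event> <time>0</time> <apply>1.0</apply> </event>
</eventList>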
Examples
This section includes the following examples:
Basic Overlay Event
This overlay starts at 2 seconds and ends at 2+8=10 seconds, with a 1.5 second fade-in at 2 seconds and a 1.5 second fade-out beginning at 10 seconds. The overlay is completely removed at 11.5 seconds.
<starttime>00:00:02:00</starttime>
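A complete event matching this description might look like the following sketch; the duration and fade values are taken from the prose above, and the default lag transitions are assumed.

<event>
   <starttime>00:00:02:00</starttime>
   <duration>8.0</duration>
   <on-transition> <fade>1.5</fade> </on-transition>
   <off-transition> <fade>1.5</fade> </off-transition>
</event>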
Centering the Transitions
<starttime>00:00:02:00</starttime>
<wipe-right>1.5</wipe-right>
<wipe-left>1.5</wipe-left>
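A plausible form of the full event is sketched below; the <center/> tags reflect the heading, and the duration value is illustrative.

<event>
   <starttime>00:00:02:00</starttime>
   <duration>8.0</duration>
   <on-transition> <wipe-right>1.5</wipe-right> <center/> </on-transition>
   <off-transition> <wipe-left>1.5</wipe-left> <center/> </off-transition>
</event>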
Spanning Events
It is possible to use one event to turn on the overlay and use a separate event to turn off the overlay, as in this example. Spanning is useful because it allows events to be inserted in-between that can, for example, send new metadata to the Flash player to update the appearance of the overlay. In this example the opacity-percent tag is used, and the value "70" must appear in both events or there will be a discontinuous opacity change at 5.0 seconds.
<opacity-percent>70</opacity-percent>
<starttime>1.0</starttime>
<opacity-percent>70</opacity-percent>
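The two spanning events might be written as in the sketch below; the <apply> and <clear> values (fade times) are illustrative, while the start times and opacity follow the description above.

<event>
   <starttime>1.0</starttime>
   <opacity-percent>70</opacity-percent>
   <apply>0.5</apply>
</event>
<event>
   <starttime>5.0</starttime>
   <opacity-percent>70</opacity-percent>
   <clear>0.5</clear>
</event>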
Flash Rendering Delays
The details of the Flash movie may impact the timing of overlay animation.
Some Flash .swf files do not update the metadata values on every rendered frame, so there may be a significant delay between the time a packet of metadata (<data>...</data>) is sent and the time its effect appears in the overlay. One way to deal with this problem is to set up an event to transmit the metadata before it is needed. The example below shows how to send the metadata at time 0 and apply the overlay at time 1, ensuring that the Flash movie is updated before it is overlaid.
<data> <name>scene</name> <value>R</value> </data>
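Written out, the pair of events might look like the sketch below; the <apply> value (fade-in time) is illustrative.

<event>
   <time>0</time>
   <data> <name>scene</name> <value>R</value> </data>
</event>
<event>
   <starttime>1.0</starttime>
   <apply>0.5</apply>
</event>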
Another approach involves the <delay> control. This approach is required with <live/> events, since only one event is allowed. The delay holds back the beginning of the overlay for 1.5 seconds while the Flash renderer reacts to the new data.
<value>Red Sox Win Again</value>
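A live event of this kind might be written as in the sketch below; the metadata name and the fade value are hypothetical, while the 1.5-second delay matches the description above.

<event>
   <live/>
   <data> <name>headline</name> <value>Red Sox Win Again</value> </data>
   <on-transition>
      <fade>0.5</fade>
      <delay>1.5</delay>
   </on-transition>
</event>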
Complex Repeating Event
<event>
   <!-- An event normally carries a <starttime> or <time> tag; no value is shown in this example. -->
   <var> <name>name</name> <value>Transition Test 1</value> </var>
   <var> <name>name2</name> <value>Transition Test 2</value> </var>
   <var> <name>title</name> <value>Graphic Overlay 1</value> </var>
   <var> <name>title2</name> <value>Graphic Overlay 2</value> </var>
   <duration>00:00:02:25</duration>
   <repeat-period>00:00:04:10</repeat-period>
   <repeat-duration>00:00:20:00</repeat-duration>
   <offset-down-percent>8</offset-down-percent>
   <offset-right>10.0</offset-right>
   <!-- The enclosing <event> tag and the transition wrappers below are assumed;
        grouping the slide with the on-transition and the wipe with the off-transition
        is one plausible arrangement of these tags. -->
   <on-transition>
      <slide-down>00:00:01:00</slide-down>
      <nonlinear>2.0</nonlinear>
   </on-transition>
   <off-transition>
      <wipe-left>00:00:01:00</wipe-left>
   </off-transition>
</event>
Subtitles
This feature is not available on the Cisco MXE 3500.
Previewing Preprocessor Clips
The Preview window allows you to see frame-by-frame results of settings such as cropping, color, noise reduction, and watermark options selected in the Preprocessor Profile. The image displayed in the Preview window shows a Before/After Split where the left side is the unprocessed image and the right side is the same image with the currently selected preprocessor options applied.
The Preview Window allows you to preview the following types of input media:
- File-based media: Allows you to preview the source file, view video before and after preprocessor settings have been applied, and set in and out points.
This section includes the following topics:
Opening the Preview Window
The Preview Window is a Cisco MXE 3500 application and works interactively with the Cisco MXE 3500 Web UI.
Note
Depending on your Windows theme setting, your Cisco MXE 3500 Tools frame may display in a different color.
Procedure
Step 1
Click Start > All Programs > Cisco > Media Experience Engine > Media Experience Engine Tools. Make sure the Preview tab is highlighted. See Figure 6-22.
Figure 6-22 Preview Window
Using the Preview Window
The Preview Window is used to view and fine tune preprocessor settings.
Note
Some, but not all, preprocessor parameters are sent to the Preview Window. For example, graphic overlays are not visible in the Preview Window, but they will appear in the encoded clip and in the preprocessed .avi intermediate file.
Before You Begin
To link the preview features to the clip and Preprocessor Profile you are currently working with, verify that the ECS Server Name (click the Cisco icon, then Options) matches the Server shown in the top right corner of the Cisco MXE 3500 User Interface.
Procedure
Step 1
Open the Preprocessor Profile for the current job.
Step 2
Open the Preview Window.
Step 3
Click the Cisco icon in the upper left corner, and click Open Clip. See Figure 6-23.
Figure 6-23 Opening a Clip to Preview
Step 4
Navigate to the clip's location, select it, and click Open.
Step 5
Click the Play button. The clip displays in the Preview Window. Use the controls to manipulate the clip. See also: Preview Window Controls.
Step 6
Make any necessary adjustments to the Preprocessor Profile settings, and view the results in the Preview Window. Continue to fine tune the settings.
Preview Window Controls
- Before/After Split Slider: Slide the indicator to the left or right to adjust the amount of the image displayed unprocessed and the amount displayed with preprocessing options applied.
- Preview Pane: Displays a frame-by-frame view of the input video.
- In and Out Points: The full bar (base color white) represents the entire clip. To use the timeline:
– Slide the green and red brackets left or right to define the in and out points of the clip (or press the i and o keys on your keyboard). The In Point and Out Point counters reflect the bracket positions. The blue section is the portion of the clip that will be encoded.
– Drag the white tab (below the timeline) to the right or left to view the clip.
– Slide the gray zoom bar to the right to zoom in on a specific frame. The zoom status bar to the right displays the position of the zoom control relative to the entire clip.
- Refresh Profile: Make any desired changes to the Preprocessor Profile, save the profile, and click the Refresh Profile button to see the results in the After side of the Preview Window.
- Preview Size: Enter new dimensions, if needed, and click Ok. The clip will display in the new size.
- Thumbnails: Click the Capture Thumbnail button to save a thumbnail of the currently displayed frame using the default path, name, and image properties as defined at the time of system setup. You may also choose to change the size, format, quality, or output location of the thumbnail. The thumbnail image will be captured after the preprocessing is applied.
- Clip Details: Displays input and output clip properties such as width, height, and FPS.
Choosing Where to Set In and Out Points
Both the Preview Window and the Preprocessor Profile of a Job Profile allow you to define In Points and Out Points for file-based clips. The overlap is designed to allow users flexibility in determining whether these settings should be included as part of the Job Profile or whether they should be applied on a job-by-job basis.
Assign In Points and Out Points in a Job Profile when clips encoded with the profile have consistent information at the beginning or end that always needs to be trimmed. For example:
- If clips from a particular source always begin or end with color bars.
- If clips from a particular source are a uniform length and are preceded by or followed by superfluous footage.
- If the desired goal of the encoding is a uniform sample of how a profile will work with a variety of source material. For example, if a profile needs to be tested, encoding twenty seconds in the same section of multiple types of source material can give excellent results demonstrating what to expect when the profile is in production.
Assign the In Points and Out Points in the Preview Window whenever the In Point and Out Point are unique to the clip. For example:
- If the footage is unfamiliar, the In Point and Out Point will need to be set by someone visually reviewing the clip. The Preview Window allows the interaction required when the In Point and Out Point are unknown or not uniform across a set of clips.
- If clips are preceded or followed by unwanted material, but the amount that each clip needs to be trimmed is not uniform, setting the In Point and Out Point in the profile will provide a uniform trim. Additional fine tuning of the material to be encoded can be achieved by adjusting the In Point and Out Point in the Preview window.
The type of trim required by the media being encoded will determine the best option for setting In Points and Out Points.