StreamAware On-Demand is designed for video service providers to streamline quality assurance and control. The software ensures high-quality video by providing complete visibility across file-based workflows, while also enabling automated quality checks to verify that videos meet delivery requirements. It uses the Emmy® award-winning IMAX VisionScience™ technology to provide a single, objective metric for monitoring quality across the entire media supply chain, with clear insights into its performance. The software also includes industry-standard and customizable quality checks to automate video, audio, and metadata compliance. The result is complete visibility of quality across file-based workflows and the ability to automate tasks that typically require the human eye, improving efficiency and reducing the margin for human error so that video consistently meets the highest standards.
Key Features and Benefits
All-in-one solution - StreamAware On-Demand gives you powerful video quality measurement and quality control in one software platform. Easily meet content delivery requirements by verifying video quality and ensuring must-have elements including video, audio, and metadata compliance.
End-to-end visibility - Provides a clear view of all your video at any point in the media supply chain to monitor the video operation status and identify potential issues before they become problematic.
Trusted video quality metric - StreamAware On-Demand uses patented VisionScience to provide a ViewerScore™ that consistently measures quality across your workflow. You can use this score to accept, reject, or flag assets based on the quality you want to deliver. With 50 patents and 100,000+ citations worldwide, our proprietary VisionScience is scientifically proven to be the most accurate and complete measure of video quality in the industry.
Efficient automation - Say goodbye to manual video quality checks that strain human resources. StreamAware On-Demand is driven by automation, allowing you to automate tasks that typically require the human eye. This automation not only enhances efficiency but also reduces the margin for human error.
Auto scaling - StreamAware On-Demand offers a true auto-scaling solution that dynamically adapts to your workload. It automatically scales up resources to execute jobs as they arrive, ensuring you’re never constrained by infrastructure limitations.
Pinpoints video quality issues - StreamAware On-Demand is designed to pinpoint instances where video quality is compromised, using the same criteria as if evaluated by a team of ‘golden eyes.’ This industry-trusted solution helps you define the level of video quality you expect from your vendors and workflows.
Versatile environment - StreamAware On-Demand works seamlessly in any environment, whether it’s on-premises, private, or public cloud infrastructure. It’s a containerized solution that integrates effortlessly with leading container orchestration tools on cloud platforms.
Configurable content analysis - Enables customized media analysis templates to ensure content meets requirements for contribution and distribution.
Full HDR support - Supports multi-standard HDR workflows with confidence, including Gamut and Luminance Measurement, Dolby Vision Metadata validation, MaxCLL & MaxFALL cross-validation, and more.
Technical Specifications
Containers, Codecs, Format Support | |
---|---|
Containers | IMF, MXF, MOV, MP4, MPEG-TS, AVI, M4V, MPG, V210, WebM, YUV |
Video | Raw, Image Sequence (EXR, DPX, Targa, TIFF, PNG…), ProRes, JPEG 2000, DNxHD, H.264/AVC, H.265/HEVC, VP9, AV1, MPEG-2, EVC, LCEVC |
Audio | PCM, MPEG-2, MP3, Dolby Atmos, Dolby Digital (AC-3), Dolby Digital Plus (E-AC-3), AAC, HE-AAC |
Closed Captions | CEA-608/CEA-708 in Line 21 video |
HDR Formats | HDR10, HLG, HDR10+, Dolby Vision |
Perceptual Video Quality & Video Analysis | |
---|---|
IMAX ViewerScore (XVS) | No-reference: IMAX ViewerScore No-reference (XVS™ NR) evaluates video quality without requiring a reference. It employs deep neural networks (DNN) for assessment, making it a reliable no-reference metric. Full-reference: IMAX ViewerScore Full-reference (XVS™ FR) evaluates the perceptual quality of an output in relation to a reference. It considers both source quality (XVS™ NR) and IMAX Encoder Performance (XEPS) for the output. |
IMAX Encoder Performance Score (XEPS) | Evaluates the encoder performance by comparing its output to a reference source, focusing on encoding-related degradation. |
IMAX Banding Score (XBS) | A measurement of the amount of color banding that is present in a video. |
Viewer Modes | XVS and XEPS metrics can be adapted to the type of viewer: typical viewer, expert viewer, studio viewer |
Color Volume Difference (CVD) | Provides a metric for color quality assessment that supports both HDR and SDR content. The metric provides a visibility map that helps to quickly identify areas with significant perceptual color differences. |
Content Complexity Score (CCS) | Provides a measure of how difficult an asset is to encode. Content Complexity is one of the most important metrics when making decisions about encoding configurations such as bitrate allocation and resolution. |
Physical Noise | Physical Noise measures the standard deviation of camera/sensor noise. |
Visual Noise | Measures the standard deviation of noise through the contrast masking behavior of the content. |
Color Gamut | Analyzes pixel values and determines the gamut coverage of each frame. For HDR/WCG content, this helps content providers identify the portion of content that adheres to Rec 709, P3, or Rec 2020 color standards. |
Luminance | Assesses luminance values for each frame (max pixel luminance and overall frame luminance), and MaxFALL/MaxCLL for the asset, to help content providers maintain creative intent despite display limitations. |
Frame Viewer | The Frame Viewer allows users to drill into perceptual quality issues by loading and viewing frames and maps, which are visualizations that show the where and why behind the XVS, XEPS and XBS metrics. |
Quality Control Features | |
---|---|
File and Video Properties | Frame Rate, Scan Type, Resolution, Resolution Name, Codec, Codec Profile, Chroma Subsampling, Dynamic Range, Dynamic Range Format, Pixel Format, Storage Aspect Ratio, Time Base, Timecode, Bit Depth, Horizontal Resolution, Vertical Resolution, Stream Index, Avg Bitrate (Mbps), Min Bitrate (Mbps), Max Bitrate (Mbps), Color Primaries, Color Space, Color Transfer Characteristics, MXF/QuickTime/MP4/3GPP related metadata |
Container Compliance Checks | MXF (Op1a, AS10, AS11), QuickTime/MP4/3GPP |
Freeze Frame | Detects repeated frames. |
Black/Solid Color Frames | Detects repeated black or solid color frames. |
Color Bars | Detects SMPTE and EBU color bars for both SDR and HDR content. |
Time Modes | Time series data and detection start/end times can be viewed in Media Time or SMPTE Timecode. |
Score Checks | User-defined threshold and duration quality checks for: IMAX Viewer Score (XVS), IMAX Encoding Performance Score (XEPS), IMAX Banding Score (XBS), Color Volume Difference (CVD), Luminance. |
Content Layout Detection | Analyzes the content and identifies segments matching select criteria: black frames, freeze frames, solid color, color bars (as defined by SMPTE and EBU, for SDR and HDR), and audio silence. |
Active A/V Layout Detection | Identifies content segments with user-defined audio and video conditions, aiding in accurate subtitle alignment validation by highlighting moments of concurrent audio and video activity within the content. |
HDR Metadata | Color primaries, color space, color transfer characteristics, MaxFALL, MaxCLL. |
IMF Master Display Metadata | Mastering display maximum luminance, minimum luminance, mastering display primaries, mastering display whitepoint. |
Missing Closed Captions Check | CEA-608 & CEA-708 |
Cadence | Analyzes the frame structure and sequence in a video stream. It detects cadence patterns, fields, frames, and is essential for processes like deinterlacing, frame rate conversion, and quality control to ensure accurate video playback and quality preservation. |
Cadence Validation | Detects cadence patterns that fall outside the user-defined acceptable patterns or sequences. |
Broken Cadence Detection | Detection of frames that are out of order or missing within the expected cadence sequence. |
PSF Validation | Progressive segmented frame detection. |
Framerate Validation | Verifies that the frame rate falls within the user-defined acceptable range of frame rates. |
Framerate Cross-Validation | Checks that the frame rate information derived from both the container and the bitstream matches. |
Photosensitive Epilepsy Test | Performs Harding Test to detect flashes and intense patterns that could trigger photosensitive epilepsy, supports: SDR, HDR10, HLG, Dolby Vision (Master Display) |
Dolby Vision Metadata Validation | Supported formats: Embedded metadata (MXF (JPEG2K) files with interleaved metadata, IMF CPLs referencing MXF files with embedded metadata), sidecar metadata (Dolby Vision XML metadata file) |
MaxFALL & MaxCLL Cross-Validation | Checks that the MaxFALL and MaxCLL information obtained from the container, IMF/CPL, and Dolby Vision Metadata corresponds accurately to the measured values in the actual content. |
Audio Silence Detection | Applicable to all channels except LFE. |
Audio Loudness | Provides the following loudness measurements as defined by the ITU-R BS.1770-1/-2/-3/-4 standards: True Peak Level, Integrated Loudness (Program Loudness), Momentary Loudness, Short-Term Loudness, Loudness Range. |
Audio Clipping | Identifies when audio signal surpasses the maximum amplitude threshold of a recording or playback system, causing distortion and a harsh auditory experience. This distortion results from the flattening of the audio signal’s waveform at its peak points. |
Audio Clicks & Pops | Identifies and flags audible clicks and pops in audio tracks. These unwanted artifacts often occur due to discontinuities or irregularities in the audio signal and can be highly distracting to listeners. |
Audio Sidecar Support | Support of audio analysis and QC checks on audio sidecars. |
Audio Phase Mismatch | Identifies and flags discrepancies in the phase alignment of audio signals: Front Left/Front Right, Side Left/Side Right, Back Left/Back Right. Phase mismatch occurs when audio channels or components are not synchronized correctly, leading to phase cancellation and altered audio quality. |
User-Defined Logical Audio Sound-Field Groups | Allow users to define logical audio soundfield groups via a flexible schema for loudness measurements and audio quality checks. |
Job Analysis Templates | Customizable templates for automated submission of job analyses. |
QC Dashboard and Reports | QC status page, customizable dashboards, report export (PDF, CSV, JSON and others). |
StreamSmart™ On-Demand is an API-based software platform that integrates with existing encoding workflows and uses an AI-based decisioning engine to optimize bitrate. The software uses IMAX ViewerScore™ (XVS™), a perceptual quality metric that measures video quality based on human vision, to simultaneously protect video quality and reduce file sizes, ensuring bitrate reductions only occur when they are visually imperceptible. This results in the same viewer experience at an average 15%-25% savings in bandwidth, translating to millions in reduced distribution costs.
Key Features and Benefits
Saves more bits: StreamSmart On-Demand uses IMAX VisionScience™ technology to measure quality and reduce bits that are imperceptible to the human eye. The result is 15%-25% savings on distribution costs without compromising viewer experience.
Protects quality first: StreamSmart On-Demand leverages the IMAX ViewerScore™ (XVS™), a perceptual quality metric that measures video quality based on human vision and will only lower bitrate when it can guarantee that reductions are visually imperceptible.
Integrates seamlessly: Operates seamlessly in any environment, be it on-premises, private, or public cloud infrastructure. API-based software integrates into existing encoding workflows and uses a manifest manipulation approach to optimize bitrate, minimizing disruption to infrastructure and tech stack.
Enhances other optimization efforts: Whichever rate control or optimization method is already being used, StreamSmart can further improve bitrate reduction, on average by 15%-25%.
Technical Specifications
Third-Party Transcoder Support | |
---|---|
FFmpeg | Single-pass and two-pass VBR, CRF |
Elemental MediaConvert | QVBR (including auto-QVBR) |
Dolby Hybrik | Planned (2025) |
Input Containers, Codecs, Format Support | |
---|---|
Containers | IMF, MXF, MOV, MP4, MPEG-TS, ProRes, DNxHD (VC3) |
Video | AVC-I, H.264/AVC, H.265/HEVC, MPEG-2, JPEG 2000 |
Chroma Format & Bit Depth | YUV4:2:0, YUV4:2:2, 8 bit, 10 bit |
HDR Formats | HDR10, HLG, Dolby Vision |
Output Containers, Codecs, Format Support | |
---|---|
Containers | FFmpeg: MP4, AWS MediaConvert: MP4, HLS, DASH, CMAF |
Video | H.264/AVC, H.265/HEVC, AV1 |
HDR Formats | HDR10, HLG, Dolby Vision |
Manifests | .m3u8, .mpd |
The diagram below shows the system architecture of the IMAX Stream On-Demand Platform, including the components of the system, the interactions between them, and where and how a user or an automated system interacts with the solution.
- The IMAX Stream On-Demand Platform is the customer-deployed portion of the system and is designed as a set of cooperating software (Docker) containers that are orchestrated by Kubernetes.
- Stream On-Demand API & Services is a set of always-running containers that provide the following:
- A REST API known as the Stream On-Demand Platform REST API that can be used to:
- submit new StreamAware On-Demand analysis (QC) requests,
- submit new StreamSmart On-Demand encoding optimization requests,
- visually inspect the results by extracting frames and a number of VisionScience maps (i.e. quality maps, banding maps, color volume difference maps) and
- query and configure the system.
Important: The Stream On-Demand Platform REST API provides an OpenAPI Specification (OAS) that can be used to generate language-specific clients for those interested in jump-starting an integration into their production workflow. Both Swagger Codegen and OpenAPITools openapi-generator are examples of tools that can be used for this purpose. Additionally, any language that has support for sending and receiving HTTP requests/responses in JSON format can also be used with minimal effort.
Please contact your IMAX representative if you require any guidance.
- Services for interacting with Kubernetes to take advantage of its various deployment, scaling, scheduling and management features
- Services for interacting with various cloud API services (e.g. AWS S3)
For fixed scale deployments of the IMAX Stream On-Demand Platform, there is only ever one running instance of each container as the scalability needs here are expected to be relatively low. Elastic scale deployments of the cluster, by contrast, will elastically scale these containers to meet demand.
- The Video Processors are Docker containers that perform the video processing (analysis in the case of StreamAware; encoding, or interfacing with encoders, and optimization in the case of StreamSmart), generating the metadata and metrics that are securely streamed to Insights. Video Processors are configured, deployed and managed by the Stream On-Demand Platform REST API in response to POST requests that invoke StreamAware or StreamSmart.
- Each Video Processor container is configured to request the compute resources (CPU/RAM) it needs in order to successfully perform its analysis, given the properties of the assets being processed and the job that has been requested.
- Video Processor containers never stream any part of the video to Insights and, as such, no content (i.e. video, frames or images) ever leaves the customer’s environment/control.
Note: Upon request, IMAX will provide examples of the metrics and metadata streamed to Insights.
- For fixed scale deployments of the IMAX Stream On-Demand Platform, the number of concurrently running Video Processors will scale but only to a fixed number, as the cluster itself is constrained by the underlying properties of the virtual machine (i.e. OVA/QEMU) or EC2 instance (i.e. AMI). Elastic scale deployments of the cluster, by contrast, can elastically scale without an upper bound, if so desired, in order to meet even the most demanding workloads.
- For StreamSmart jobs, the Video Processor performs the encoding when FFmpeg is used, and controls external encoding services (e.g. AWS Elemental MediaConvert) when they are used.
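Because the platform is orchestrated by Kubernetes, one way to observe this elastic behavior (a sketch that assumes kubectl access to the cluster; the namespace name is hypothetical and deployment-specific) is to watch Video Processor pods appear and disappear as jobs are submitted and complete:

# Watch pods being created and destroyed as jobs arrive and finish.
# imax-ondemand is a hypothetical namespace; substitute your deployment's.
kubectl get pods --namespace imax-ondemand --watch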
- Insights is IMAX’s multi-tenant cloud-based data platform that leverages the power of our own VisionScience algorithms, along with leading business intelligence services, to generate, store, and present meaningful representations of the analysis, encoding and optimization results. Some of the key features of Insights include:
- Performant, secure, and highly-available data storage that scales automatically
- Insights Stream On-Demand Web UI which can be used to:
- submit StreamAware and StreamSmart jobs into the Stream On-Demand platform
- view and interact with a summary/status of all pending, running and completed analyses
- review results
- perform frame-level inspections of your content, including image and map comparisons
- Insights REST API which can be used to automate job status and progress tracking, and retrieve results, including:
- metrics and measurements across a variety of time granularities including: per-frame, per-second (1s, 2s, 5s, etc.), per-asset
- StreamAware QC overall result and specific failures
- StreamSmart perceptual quality scores and bitrate savings
- Insights dashboards and reports which are customizable content displays of analysis results that can be used interactively in the Web UI and exported in various archival formats such as PDF, PNG/JPG, Excel, JSON, etc.
- The computer/laptop icon above illustrates that the full StreamSmart system is designed to be used in a headless manner (i.e. through API/programmatic access) and that all key features are exposed through their respective REST APIs.
- The user/human icon and its connections above illustrate that:
- to support those integrating StreamAware and StreamSmart into their production workflows, IMAX has provided Postman collections with numerous REST API examples that can be imported into the Postman UI in order to become familiarized with the API request/response structures and
- the Insights Stream On-Demand Web UI can equally be used to interact graphically with the system by submitting the underlying API requests on the user’s behalf.
The IMAX Stream On-Demand Platform can be deployed in the following ways:
Fixed Scale
Description | System Requirements |
---|---|
As an EC2 instance on AWS using the IMAX Stream On-Demand Platform AMI | c5.24xlarge or c5a.24xlarge instance type and a 2.2TB root EBS volume (General Purpose SSD - gp3) |
As a virtual machine instance using an OVA or QEMU image | 24GB RAM (32GB RAM preferred), 24 CPU cores (32 cores preferred), 2.2TB of free disk space |
Elastic/Dynamic Scale
Description | System Requirements |
---|---|
On AWS EKS as a hosted SaaS instance (deployed in an IMAX account by IMAX) | |
On AWS EKS as a managed service (deployed in the customer account by IMAX) | An EKS cluster with at least 2 dynamically scalable node groups: minimum 2 c5.xlarge for control services, and c5.4xlarge for data/processing services (scales down to 0 when not in use, scales up to a configurable limit) |
For details, refer to the Deployment Guides section.
The IMAX Stream On-Demand Platform is designed to be operated by human users through a web-based user interface, or programmatically through REST APIs. As described in the System Architecture section, the web UI is built on the REST APIs, so the same functionality and options are available through both approaches.
The “On-Demand” web application for the IMAX Stream On-Demand Platform can be found at https://insights.sct.imax.com/extensions/ondemand::app, or within Insights, by selecting On-Demand under the Applications menu item.
If you don’t already have an Insights account, contact your IMAX Customer Success representative.
Navigation within the On-Demand UI looks and functions as follows:
- Home - Loads the default page of the UI (Status), which is highlighted in the image.
- Analyze - Configure and submit a StreamAware On-Demand job.
- Optimize - Configure and submit a StreamSmart On-Demand job.
- Status - View the status, progress, and a summary of the results (once completed) of both StreamAware and StreamSmart jobs.
- Results - View the detailed results of StreamAware and StreamSmart jobs. Typically navigated to by drilling into a job from the Status page, but can also be navigated to directly, in which case job selection will be required.
- Host selection - This drop-down menu is used to select the Stream On-Demand Platform instance to connect to and control. The colored indicator displays the connection status: green if the app is connected to and can communicate with the host, red if it cannot.
- Configuration - Configuration for Analysis and Optimization Templates (more details below) and host management (i.e. adding and deleting Stream On-Demand Platform instances).
The Insights menu can be accessed from the top left (using the “hamburger”), and user account settings can be accessed at the top right.
The majority of the functionality of the UI requires a connection to the selected host. A dialog will appear to assist you in establishing the connection when required. Things to keep in mind:
- The system on which the browser/web UI is running requires network connectivity to TCP port 443 of the selected host. Depending on how and where the instance is deployed, this may require a connection to a VPN or access from a specific allow-listed source IP address.
- If the Stream On-Demand Platform instance is deployed with a self-signed TLS certificate, you will need to accept the self-signed certificate before a connection can be established.
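As a quick reachability check (a sketch; <host> is a placeholder for your instance’s address), you can confirm from the machine running the browser that TCP port 443 is accessible and inspect the certificate that is presented:

# -v prints the TLS handshake details; -k tolerates a self-signed certificate.
curl -vk --max-time 10 'https://<host>/'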
When using the UI for the first time, you will be prompted to add your host(s) if one has not already been configured for you.
To analyze one or more video files with StreamAware, navigate to the Analyze page:
The Analyze page allows you to configure and submit a StreamAware analysis, as defined by the Stream On-Demand REST API, allowing you to unlock the benefits of the features described in Terminology and Features. The layout of the page is as follows:
- Title - A title must be specified for each analysis. You can use the name of the asset that is being analyzed (e.g. “Big Buck Bunny”) or a description of the analysis that is being performed (e.g. “Perceptual quality analysis of Big Buck Bunny” or “QC of Big Buck Bunny”).
- Template - An analysis can be configured by loading the configuration from a saved template. Refer to Analysis and Optimization Templates for more details.
- Configuration - Expanding this section allows you to configure:
- Frames To Process - The number of frames of video to process (the default is to process all frames)
- Viewing Environment - Configures the viewing environment (viewer-type and device) for the device and viewer-type adaptive metrics like the IMAX ViewerScore and IMAX Encoder Performance Score.
- Features - Specific metrics and features, such as banding detection, can be enabled and disabled in this section.
- Content Layout Detection - Features related to content layout detection can be enabled and disabled in this section.
- Quality Checks - QC features like freeze frame detection and color bar detection can be enabled and configured in this section. The checks configured here apply to all assets in the analysis. Asset-specific Quality Checks are configured in the Assets section.
- Assets - This is where the content that is to be analyzed is specified. An analysis must have at least one “test” file. Optionally, a reference file and additional test files can be added, using the Add Reference Asset and Add Test Asset buttons. If no reference file is specified, a no-reference analysis will be performed. If a reference file is specified, a full-reference analysis will be performed for each of the test files specified. For each asset, the following options can be specified:
- Asset Format - The asset format (e.g. video file, image sequence etc.) must be selected. Some additional configuration must be provided for certain asset formats.
- Start Frame - This setting can be used to skip a specific number of frames at the beginning of the asset.
- Dynamic Range - StreamAware can automatically detect the dynamic range for most assets, which it will attempt to do when this is set to Auto Detect. To force the analysis into a specific mode, choose SDR or HDR.
- Asset Location - Specifies where the asset is located (AWS S3 or Kubernetes PVC), the URI/path, and how to authenticate (if applicable). S3 buckets can be browsed using the Choose button.
- Region of Interest - Limits the analysis to a particular region of interest. Disabled by default (entire frames are analyzed).
- Audio - Enables/disables audio loudness measurements, and is used to enable and configure audio Quality Checks, such as audio silence detection, clipping detection, and audio phase mismatch.
- Sidecar - Add audio and Dolby Vision Metadata sidecar files to the analysis.
- Asset Quality Checks - Enable and configure asset-specific Quality Checks.
Once an analysis has been fully configured, use the Submit Analysis button to submit the job to the host that is selected in the host selection dropdown. If the job is accepted by the host, the Analysis Submission Result page will load. It provides a summary of the analysis configuration, and most importantly, shows the Analysis ID and presents a Status button which can be used to navigate to the Status page filtered on the newly submitted analysis. Refer to Monitoring Progress for more details.
To optimize an encode or an encoding ladder with StreamSmart, navigate to the Optimize page:
The Optimize page allows you to configure and submit a StreamSmart optimization, as defined by the Stream On-Demand REST API. The layout of the top of the page is as follows:
- Title - A title must be specified for each optimization. You can use the name of the asset that is being analyzed (e.g. “Big Buck Bunny”) or a description of the encode that is being performed (e.g. “Experiment with higher CRF values”).
- Template - An optimization can be configured by loading the configuration from a saved template. Refer to Analysis and Optimization Templates for more details.
- Encoder - The underlying encoder must be specified. The encoder that is selected determines how the encoder configuration is provided further down the page (see below).
- Input File - Selects the storage type and URI/path of the input/source file to encode. The storage types that are supported depend on the encoder that is selected (e.g. if MediaConvert is selected, only S3 can be specified).
- Output Location - Specifies the storage type and path/folder where the output file(s) are placed. The actual names of the output files are specified later.
- Prune redundant renditions - Encoding optimization can sometimes eliminate the need for certain renditions, when optimization places them too close together. Enabling this feature will prune these redundant renditions out of the result, producing not only optimized encodes, but an optimized ladder.
How the encoder, ladder, and output file configuration is specified depends on the encoder that is selected.
FFmpeg
FFmpeg optimizations are configured by adding renditions, and specifying the FFmpeg command and output file suffix and extension for each one.
In this example, two renditions have been configured, optimization has been enabled for both of them, and the FFmpeg commands for each have been specified.
Note that the input and output files in the FFmpeg commands have been replaced by `{INPUT_LOCATION}` and `{OUTPUT_LOCATION}`. This is a requirement, and allows templates to be created and used in a way that requires only the input file and output location to be specified before submitting (the FFmpeg command does not need to be modified).
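For illustration, a CRF-based rendition command using these placeholders might look like the following (the encoder settings are examples only, not recommendations):

ffmpeg -i {INPUT_LOCATION} -c:v libx264 -crf 23 -preset medium -an {OUTPUT_LOCATION}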
When using the Web UI, output files are named by appending the suffix that is configured in the Output File Name field to the input file name. An extension is also required (.mp4 in the example) and will be used to determine the type of output file that is produced.
To add renditions, click on the Add Rendition button. Once the configuration has been completed, click on the Submit Optimization button to submit the job.
AWS Elemental MediaConvert
MediaConvert optimizations are configured by loading your MediaConvert job configuration in JSON format into the Encoder Configuration field. The list of renditions is derived from this configuration, and optimization can be enabled/disabled for each one in the list that appears below.
In this example, the MediaConvert job configuration defines a single rendition, for which optimization is enabled.
Once the configuration has been completed, click on the Submit Optimization button to submit the job. If the job is accepted by the host, the Submission Result page will load. It provides a summary of the optimization configuration, and most importantly, shows the Analysis ID and presents a Status button which can be used to navigate to the Status page filtered on the newly submitted analysis. Refer to Monitoring Progress for more details.
To view the status of a job and monitor progress, click on the Status button on the job summary after submitting a new job, or navigate to the Status page:
The first element on the page is the job filter. If you navigated to the page from the job submission summary, the filter will be configured with your analysis ID. If you navigated directly to the page, the filter will be empty and all jobs will be displayed, sorted in descending order by job submission time (most recently submitted first). Filter elements can be deleted and added, including filtering by submission date, job title, analysis type (StreamAware and StreamSmart), and analysis ID.
Below the filter are some tiles that summarize the results. Clicking on the tiles will further filter the job list (e.g. clicking on the IN PROGRESS tile will filter the results down to just the currently in-progress jobs). Clicking again removes the additional filter. The following controls can be used to manage the filter and what is displayed on the page:
- Shows/hides the filter section of the page
- Shows/hides the summary tiles
- Clears all elements from the filter field
- Additional configuration options:
- Save filter - Filters can be saved and loaded at a later time
- Manage filters - Load a filter, specify your default filter, delete a filter
- Share URL - copies a URL to your clipboard that can be shared and will load the Status page with the current filter
- The slider at the bottom of the menu adjusts how often the results auto-refresh
- Click here to add an additional filter element
- Run the query or manually refresh the results
The jobs that match the filters are displayed at the bottom of the page. This example shows the results of filtering on a specific analysis ID:
The overall job status and progress is displayed in the STATUS column. Each row in the job list can be expanded to see the status for each file in the job:
Once a job has completed, the Status page will display a summary of the results for a job when the entry for the job is expanded. Here is an example of the summary for a StreamAware job:
- Clicking almost anywhere in the empty space of the main section of a job will load the detailed results for the job in the Results page (see below for details). Clicking on the analysis ID will copy it to the clipboard.
- Dashboards are accessible via this drop-down menu. Some dashboards are provided by default, but custom dashboards specific to your use-case and needs can easily be created and linked to your account.
- Expand/collapse the file view.
- Launch the frame and map viewer for this file. This can be used to see and compare frames of video and quality maps at specific points in time (see below for details).
- Click here to look at a detailed log for the analysis of the file.
- There are options under this drop-down menu for cancelling (if the job is still in progress), deleting, re-submitting, and editing and re-submitting a job.
The scores displayed for the files are asset scores; refer to VisionScience Asset Scores for details.
The Test IDs for StreamAware jobs follow this convention:
- No-Reference Analysis - the files have single-digit Test IDs, starting at 1.
- Full-Reference Analysis - the reference file has Test ID = 1, and the test files (the files being compared to the reference) have two-digit (separated by a dash) Test IDs, where the first digit is the reference ID, and the second digit represents the test (e.g. 1-1, 1-2). In the example above, it is a reference-based analysis, 1 is the reference, and 1-1, 1-2, 1-3, and 1-4 are the files being tested.
Here is an example of the summary for a StreamSmart job:
These summarized StreamSmart results are similar to the results for StreamAware jobs except for:
- The analysis type is StreamSmart instead of StreamAware
- The device column has been removed.
- A savings column has been added, which displays the overall bitrate savings of the optimized encode when compared to the anchor (original unoptimized encode).
- The Test IDs are different:
- Input - The input/source file.
- Anchor <N> - The original unoptimized encode for the Nth rendition. This is the encode you would get if you ran the FFmpeg command or MediaConvert config directly without StreamSmart. In the example above, Anchor 1 is the unoptimized encode for the first rendition, and Anchor 2 is the unoptimized encode for the second rendition.
- Optimized <N> - The optimized encode for the Nth rendition. In the example above, Optimized 1 is the optimized encode for the first rendition, and Optimized 2 is the optimized encode for the second rendition.
As noted above, clicking in the whitespace of the overall section of the job loads the Results page, which shows a much more detailed view of the results, including time-series views of all of the metrics (including IMAX ViewerScore, Encoder Performance Score, Banding Score, and bitrate), and QC results if Quality Checks were enabled.
- The overall result - if any Quality Checks were enabled and one or more failed, the overall result will be Fail. If Quality Checks were not enabled, or if they all passed, the overall result will be Pass.
- If the job was configured with multiple viewer types and/or devices, the pair that determines which results are displayed can be selected here.
- The filter icon will pop up a dialog that can be used to search for and load the results of a different job.
- The gear icon allows you to configure some settings related to how the time-series data is displayed.
- The summary section, similar to the Status page, with additional details found by expanding the individual files:
- Quality Checks - A list of Quality Check failures for the file
- Video Metrics - Additional video metrics, such as Color Gamut and Color Volume Difference (if enabled).
- Video Metadata - Information about the video file, such as the frame rate, pixel format, and scan type.
- Audio - The list of audio streams, including metadata, and audio loudness measurements (if enabled).
- This section is displayed if there were any Quality Check failures. It shows the failures, and can be toggled between two modes:
- Summary - This view summarizes the Quality Check results by file. Each file can be expanded to show a summary of the checks that failed for that file, including the total number of failures, affected frames and duration. When a file is selected, the time-series view shows all of the segments in the file that have failures. When a specific type of failure is selected, the graph shows the sections that are affected by that specific type. Clicking on the highlighted section of the graph loads the Viewer for the first frame of the selected range.
- Detailed - This view shows the full list of all Quality Check failures, including the start and end time for each one. When a failure is selected, the graph highlights the section of video affected by the failure, and plots the associated metric (if applicable). Clicking on the highlighted section of the graph loads the Viewer for the first frame of the selected range.
- This section displays time-series views of the metrics that were collected by the job.
- Each file is represented by a different color, and the data for files can be hidden by unselecting the checkbox on the left side of the summary section (5).
- Different metrics can be selected by choosing one of the tabs at the top of this section, and tabs can be hidden by clicking on the gear icon.
- You can switch between a tabbed view and a stacked view by clicking on the expand/collapse icon.
- Zoom into a section of a graph by clicking and dragging across the area of interest.
- For StreamAware jobs, you can launch into the Viewer by clicking on a point on a graph. The Viewer allows you to view and compare frames and quality maps (see below for details).
As noted above, in most cases, clicking on a point on time-series graph will load the Viewer, which can be used to display and compare frames and quality maps.
In the example above, clicking on this point on the IMAX Encoder Performance Score graph would load frames for that point in time (which looks like a problem area that needs to be investigated further) into the Viewer:
- Selects the file that is displayed on the left of the hairline. If the same file is selected for the left side and the right side (2), the hairline will not be displayed.
- Selects the file that is displayed on the right of the hairline. If the same file is selected for the left side (1) and the right side, the hairline will not be displayed.
- Downloads the frame or map that is currently displayed for the file selected on the left (1).
- Downloads the frame or map that is currently displayed for the file selected on the right (2).
- Selects the image that is displayed for the file selected on the left: frame, quality map, banding map, color volume difference map.
- Selects the image that is displayed for the file selected on the right: frame, quality map, banding map, color volume difference map.
- When different files or images are displayed for the left and right sides, the hairline can be used and moved to compare them.
- Displays the current frame number and can be used to move backwards and forwards through the frames.
- The page can be exported as a self-contained HTML page in a zip file that can be shared with people who do not have access to Insights. The exported page will display what is currently visible with a functional hairline.
- Toggles the display of the metric graphs, which can be used to see where the currently visible frame/map sits within the asset. Clicking on a new point on the graph will move to that point in time.
- Time mode settings.
Analysis and Optimization Templates can be used to streamline the submission of StreamAware and StreamSmart jobs through the web UI. With templates, you can save commonly used configurations, and then load them on the Analyze and Optimize pages. The most commonly used configurations can be saved as defaults and will load automatically each time you start a new configuration.
New templates can be created in one of two ways:
- Templates can be managed by clicking on Analysis Templates or Optimization Templates after clicking on the gear icon on the right side of the top menu:
- New templates can be created by clicking on the New Template button at the top right of the Analyze and Optimize pages:
In the following Optimization Template example:
- FFmpeg is selected as the encoder.
- The Input File is left blank, which means it will not be filled in when the template is loaded.
- The Output Location is specified.
- Two renditions are configured and the output file suffixes are specified.
When this template is loaded, the only thing you would need to configure before submitting the job would be input file information.
A template can be set as the default template, which will load it by default when a new job is configured. To make a template the default, navigate to Analysis Templates or Optimization Templates under the gear menu, click on the three dots next to the template you want to make the default, and choose Set as default:
To load a template, select it from the Template dropdown on the Analyze or Optimize page:
Recall from the system architecture that both the IMAX Stream On-Demand Platform and Insights support being used in a headless/programmatic manner via the Stream On-Demand Platform REST API and Insights REST API, respectively.
The Stream On-Demand Platform REST API can be used to:
- submit new StreamAware On-Demand analysis (QC) requests,
- submit new StreamSmart On-Demand encoding optimization requests,
- visually inspect the video assets by extracting frames and a number of VisionScience maps (i.e. quality maps, banding maps, color volume difference maps),
- query and configure the system.
The Insights REST API can be used to:
- query analysis and optimization progress,
- fetch metrics and measurements across a variety of time granularities including: per-frame, per-second (1s, 2s, 5s, etc.), per-asset,
- fetch the StreamAware QC results and specific failures for an analysis,
- fetch the StreamSmart perceptual quality scores and bitrate savings for an optimization.
This section will treat each API separately and illustrate the most common endpoints and how to use them through examples.
The sections that follow are focused on the most common use case of how to submit StreamAware analysis and StreamSmart optimization jobs. The Stream On-Demand Platform REST API provides additional endpoints for:
- visually inspecting the video assets by extracting frames and a number of VisionScience maps (i.e. quality maps, banding maps, color volume difference maps),
- cancelling and deleting analyses and/or optimizations and
- querying and configuring the system.
You are strongly encouraged to consult the Stream On-Demand Platform REST API specification and the Using Postman section below for additional details and examples on these endpoints, along with instructions on how to use the Postman UI with the API.
The Stream On-Demand Platform REST API provides an OpenAPI Specification (OAS) that can be used to generate language-specific clients for those interested in jump-starting an integration into their production workflow. Both Swagger Codegen and OpenAPITools openapi-generator are examples of tools that can be used for this purpose. Additionally, any language that has support for sending and receiving HTTP requests/responses in JSON format can also be used with minimal effort.
Please contact your IMAX representative if you require any guidance.
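For example, assuming the OAS document has been saved locally as ondemand-oas.json (a hypothetical file name), openapi-generator can produce a Python client in a single command:

# Generate a Python client from the platform's OAS document.
# ondemand-oas.json and ./ondemand-client are hypothetical names.
openapi-generator-cli generate -i ondemand-oas.json -g python -o ./ondemand-client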
Let’s walk through an example of using the Stream On-Demand Platform API to submit a new analysis to the system’s StreamAware component, along with the related JSON requests and responses.
In this example, we are going to use the StreamAware /analyses endpoint to perform a no-reference analysis of a single video asset.
{
"content": {
"title": "NR Analysis With XVS Score Check"
},
"subjectAssets": [
{
"name": "dog_running.mp4",
"path": "royalty_free/dog_running/source",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 85,
"durationSeconds": 5,
"viewingEnvironmentIndex": 0
}
]
}
}
],
"analyzerConfig": {
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
}
]
}
}
Notice in the JSON request body above that:
- our asset lives in an S3 bucket and we are relying on IAM roles to provide read-only access to the bucket,
- we are configuring the On-Demand Analyzer to target the LG OLED65C9PUA device from an expert viewer’s perspective and
- we are configuring a score check on our asset that will consider the video to have failed if there is any period of 5 or more seconds where the IMAX ViewerScore (XVS) for our selected device falls below 85.
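Assuming the request body above has been saved as analysis.json and your platform instance is reachable at <host> (both placeholders; consult the REST API specification for the exact base path and any authentication requirements), the submission itself is a single POST:

curl -X POST \
  'https://<host>/analyses' \
  -H 'Content-Type: application/json' \
  -d @analysis.json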
A successful response from the Stream On-Demand Platform API would look something like the following:
{
"submittedAnalyses": [
{
"analyzerConfig": {
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
}
]
},
"id": "13afcc15-6808-4819-9cd3-fde950dda0f2",
"subjectAsset": {
"content": {
"title": "NR Analysis With XVS Score Check"
},
"hdr": false,
"name": "dog_running.mp4",
"path": "royalty_free/dog_running/source",
"qualityCheckConfig": {
"scoreChecks": [
{
"durationSeconds": 5,
"metric": "SVS",
"skipEnd": 0,
"skipStart": 0,
"threshold": 85,
"viewingEnvironmentIndex": 0
}
]
},
"storageLocation": {
"name": "video-files",
"type": "S3"
}
},
"submissionTimestamp": "2023-08-08T17:53:40.969Z",
"testId": "1"
}
]
}
Notice in the response above the line `"id": "13afcc15-6808-4819-9cd3-fde950dda0f2"`, which reports back the analysis UUID, a required key for fetching the associated results from Insights.
Referring back to the system architecture, at this point the Stream On-Demand API & Services have:
- validated the POST Analysis Request,
- ensured that the asset is present and accessible,
- inspected the properties of the underlying asset to determine the compute resources needed to analyze the asset,
- scheduled a new Analyzer container to process the video file and
- prepared and returned a response to the caller.
Once the Analyzer container starts:
- the Analyzer analyzes the video asset and streams the IMAX VisionScience measurements and metrics to Insights (Note: no part of the video itself is streamed to Insights),
- Insights applies additional viewer intelligence algorithms to the incoming streams before storing the data and,
- upon completion of the analysis, acknowledgements are exchanged by both sides and the Analyzer container is destroyed (i.e. elastic scaling) in order to release the compute resources being used by the IMAX Stream On-Demand Platform.
Let’s walk through an example of using the Stream On-Demand Platform API to submit a new optimization to the system’s StreamSmart component, along with the related JSON requests and responses.
In this example, we are going to use the StreamSmart /optimizations endpoint to perform an optimization of a video asset using the FFmpeg encoder.
{
"content": {
"title": "Example Title"
},
"input": {
"assetUri": "s3://videos/examples/example_source_file.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -b:v 4500k -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/encoded_video.mp4"
}
}
]
}
}
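As with analyses, the request can be submitted with a single POST to the /optimizations endpoint (a sketch; optimization.json and <host> are placeholders):

curl -X POST \
  'https://<host>/optimizations' \
  -H 'Content-Type: application/json' \
  -d @optimization.json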
A successful response from the Stream On-Demand Platform API would look something like the following:
{
"uuid": "5fb38092-f787-4fc9-9789-6b0834cba832",
"organization": "IMAX",
"site": "StreamSmart",
"status": "Created",
"config": {
"content": {
"title": "Example Title"
},
"encoderConfig": {
"encodes": [
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -c:v libx264 -x264-params ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0 -profile:v high -level:v 4.1 -preset slow -b:v 4500k -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"name": "examples/output/encoded_video.mp4",
"storageLocation": {
"name": "videos",
"type": "S3"
}
}
}
],
"type": "FFmpegConfig"
},
"input": {
"name": "examples/example_source_file.mp4",
"storageLocation": {
"name": "videos",
"type": "S3"
}
}
}
}
Notice in the response above the line `"uuid": "5fb38092-f787-4fc9-9789-6b0834cba832"`, which reports back the optimization UUID, a required key for fetching the associated results from Insights.
Alternatively, you can submit an optimization that uses AWS Elemental MediaConvert (EMC) using an example like the following:
{
"content": {
"title": "Example Title"
},
"encoderConfig": {
"type": "EMCConfig",
"config": {
"JobTemplate": "",
"Queue": "arn:aws:mediaconvert:us-east-1:315835334412:queues/Default",
"UserMetadata": {},
"Role": "arn:aws:iam::315835334412:role/mediaconvert-optimizer",
"Settings": {
"OutputGroups": [
{
"CustomName": "top-profile-encode",
"Name": "CMAF",
"Outputs": [
{
"ContainerSettings": {
"Container": "CMFC"
},
"VideoDescription": {
"Width": 1920,
"ScalingBehavior": "STRETCH_TO_OUTPUT",
"Height": 1080,
"TimecodeInsertion": "DISABLED",
"AntiAlias": "ENABLED",
"Sharpness": 50,
"CodecSettings": {
"Codec": "H_264",
"H264Settings": {
"InterlaceMode": "PROGRESSIVE",
"NumberReferenceFrames": 3,
"Syntax": "DEFAULT",
"Softness": 0,
"GopClosedCadence": 1,
"GopSize": 2,
"Slices": 1,
"GopBReference": "ENABLED",
"HrdBufferSize": 16000000,
"MaxBitrate": 8000000,
"EntropyEncoding": "CABAC",
"RateControlMode": "QVBR",
"QvbrSettings": {
"QvbrQualityLevel": 9
},
"CodecProfile": "HIGH",
"MinIInterval": 0,
"AdaptiveQuantization": "AUTO",
"CodecLevel": "AUTO",
"SceneChangeDetect": "ENABLED",
"QualityTuningLevel": "SINGLE_PASS",
"UnregisteredSeiTimecode": "DISABLED",
"GopSizeUnits": "SECONDS",
"ParControl": "INITIALIZE_FROM_SOURCE",
"NumberBFramesBetweenReferenceFrames": 3,
"RepeatPps": "DISABLED",
"DynamicSubGop": "ADAPTIVE"
}
}
},
"NameModifier": "_8Mbps"
}
],
"OutputGroupSettings": {
"Type": "CMAF_GROUP_SETTINGS",
"CmafGroupSettings": {
"TargetDurationCompatibilityMode": "SPEC_COMPLIANT",
"WriteHlsManifest": "ENABLED",
"WriteDashManifest": "ENABLED",
"SegmentLength": 4,
"Destination": "s3://s3-bucket/destination/path/",
"FragmentLength": 2,
"SegmentControl": "SEGMENTED_FILES",
"WriteSegmentTimelineInRepresentation": "ENABLED",
"ManifestDurationFormat": "FLOATING_POINT",
"StreamInfResolution": "INCLUDE"
}
}
}
],
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT"
}
},
"VideoSelector": {
"ColorSpace": "FOLLOW",
"Rotate": "DEGREE_0",
"AlphaBehavior": "DISCARD"
},
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"TimecodeSource": "ZEROBASED",
"FileInput": "s3://s3-bucket/sources/source.mov"
}
]
},
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_15",
"Priority": 0,
"HopDestinations": []
},
"region": "us-east-1",
"endpointURL": "https://vasjpylpa.mediaconvert.us-east-1.amazonaws.com"
}
}
Referring back to the system architecture, at this point the Stream On-Demand API & Services have:
- validated the POST Optimization Request,
- ensured that the asset is present and accessible,
- inspected the properties of the underlying asset to determine the compute resources needed to optimize the asset,
- scheduled a new Optimizer container to process the video file and
- prepared and returned a response to the caller.
Once the Optimizer container starts:
- the Optimizer uses the chosen encoder, along with the specified configuration, to encode, optimize and analyze the video, streaming the IMAX VisionScience measurements and metrics to Insights (Note: no part of the video itself is streamed to Insights),
- Insights applies additional viewer intelligence algorithms to the incoming streams before storing the data and,
- upon completion of the optimization process, acknowledgements are exchanged by both sides and the Optimizer container is destroyed (i.e. elastic scaling) in order to release the compute resources being used by the IMAX Stream On-Demand Platform.
The Insights REST API can be used to:
- query analysis and optimization progress,
- fetch metrics and measurements across a variety of time granularities including: per-frame, per-second (1s, 2s, 5s, etc.), per-asset,
- fetch the StreamAware QC results and specific failures for an analysis,
- fetch the StreamSmart perceptual quality scores and bitrate savings for an optimization.
In order to use the Insights REST API, you will need an Insights account along with an API `clientId` and `clientSecret`. If you have not done so already, please use the IMAX Help Center to create a ticket specifying your organization and the email address you wish to use as your account login. Alternatively, you can send these values to your IMAX representative using Slack/email. In short order, you should receive an invitation email with further instructions on how to finish setting up your Insights account. Your support ticket will also include your `clientId` and `clientSecret`.
Your API `clientId` and `clientSecret` are unique to your account and should be kept private. Please let your IMAX representative know if you need new values generated, in the case where they are lost or compromised.
The Insights REST API is secured using OAuth 2.0 tokens. As such, you are required to submit a POST request to the `login` endpoint in order to receive an OAuth2 token, as shown below:
curl -X POST \
https://insights.sct.imax.com/api/4.0/login \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'client_id=<clientID>&client_secret=<clientSecret>'
where `<clientID>` and `<clientSecret>` are the values you received when setting up your Insights account.
The alphanumeric OAuth2 token will be included in the payload of the response and labelled as `access_token`:
{
"access_token": "4RtxmBmskRFfcnvJXjVdXHZbWDbwCdrHHJH8qJNp",
"token_type": "Bearer",
"expires_in": 3600
}
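In a scripted workflow, you might capture the token directly into a shell variable (a sketch that assumes the jq utility is installed):

# Log in and extract the access_token field from the JSON response.
TOKEN=$(curl -s -X POST \
  https://insights.sct.imax.com/api/4.0/login \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'client_id=<clientID>&client_secret=<clientSecret>' \
  | jq -r '.access_token')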
The OAuth2 token must be included as a Bearer token in the Authorization header for all requests sent to an Insights REST API endpoint, as shown below:
curl -X POST \
https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false \
-H 'Authorization: Bearer 4RtxmBmskRFfcnvJXjVdXHZbWDbwCdrHHJH8qJNp'
.
.
.
Each OAuth2 token expires after 1 hour, regardless of activity. To obtain a new token, you will need to resubmit the POST request to the `login` endpoint.
The examples in this section use cURL to illustrate sending requests to the Insights REST API. The Bearer tokens used in the headers are for illustrative purposes only and should be replaced by the `access_token` you receive from the `/login` endpoint, as presented above.
The Insights REST API exposes a single endpoint for all requests, `/api/4.0/queries/run/json?cache=false&limit=-1`, and each request must be sent using HTTP POST semantics.
The Insights platform supports caching and response size limits for improved performance, especially from within the Web UI. When using the REST API, however, it is recommended that you bypass the cache and remove the result governor by adding the `cache=false` and `limit=-1` query parameters, respectively, to all your requests, as shown above. In this manner, you will be guaranteed to see all of the latest data.
Let’s examine the body of a typical Insights POST request shown below:
{
"model": "on_demand",
"view": "vod_analysis_results",
"fields": [
"vod_analysis_results.test_id",
"vod_analysis_results.test_video_time_1s",
"vod_analysis_results.xvs"
.
.
],
"filters": {
"vod_analysis_results.analysis_uuid": "59508525-070d-428b-8857-d5482bf7c690",
.
.
},
"sorts": [
"vod_analysis_results.test_video_time_1s",
"vod_analysis_results.test_id",
.
.
]
}
- Every Insights request must specify the `model` attribute. For both the StreamAware and StreamSmart On-Demand products, the value should always be `on_demand`.
- Every Insights request must specify a `view` attribute (see Supported Views below).
- Every Insights request should specify a list of view `fields` representing the data that will be included in the response. A field is specified using the `<view>.<field_name>` notation.
- An Insights request may specify a list of `filters`, which are view fields that are constrained to various values and/or ranges in order to limit/control the data retrieved. The most common filter to use here is `<view>.analysis_uuid`, which constrains the results to a single analysis or optimization UUID (see example above). Filters become invaluable when your Insights responses return large datasets, as is the case when fetching frame results.
- An Insights request may specify a list of `sorts`, which are view fields that control the sort order applied to the data in the response. Sorting is particularly useful when retrieving results in a time series.
The examples provided in the sections below use cURL to send requests to the Insights REST API. When learning or experimenting with the Insights REST API, you may find it preferable to use a GUI tool like Postman to submit requests and view their responses. The IMAX team has created a Postman API collection and environment which can be imported directly into Postman to aid in this effort. The structure of the Postman collection is meant to be a representative sampling of the examples presented in the sections below.
Supported views
Insights supports the following views:
View | Details |
---|---|
vod_analysis_results | This view exposes fields to capture the various VisionScience measurements and metrics (e.g. XVS, XEPS, XBS, CVD, CCM, physical and visual noise, HDR color gamut and luminance) for an analysis or optimization over multiple measurement periods (1s, 2s, 5s, 10s, 30s, 60s) and aggregate calculations (min, max, avg), thus enabling one to create a comprehensive time-series view of any supported measurement or metric. Additionally, this view exposes a subview, vod_video_asset_score, that exposes fields to capture the various asset scores (e.g. XVS, XEPS, XBS and CCS) for an analysis or optimization. |
vod_analysis_frame_results | This view exposes fields to capture the various VisionScience measurements and metrics (e.g. XVS, XEPS, XBS, CVD, CCM, physical and visual noise, HDR color gamut and luminance) for each frame of the assets included in an analysis or optimization. |
vod_quality_checks | This view exposes fields to capture the various video quality checks, asset audio quality checks and asset score checks supported by StreamAware. |
vod_audio_loudness | This view exposes fields to capture various audio loudness measurements and metrics (e.g. integrated loudness, loudness range, momentary loudness, short-term loudness, true peak level) for the assets included in an analysis. |
vod_closed_captions_metadata | This view exposes fields to capture the periods of missing closed captions metadata for the assets included in an analysis. |
vod_video_metadata | This view exposes fields to capture the video metadata (e.g. title, resolution, dynamic range, encoder, frame rate, time base, aspect ratio, frame count, color primaries and space, color transfer characteristics and content/frame light levels) for the assets included in an analysis. |
vod_audio_metadata | This view exposes fields to capture the various audio metadata (e.g. stream index/pid, channel count, channel layout, channel assignment, codec, coding mode, language, sample rate, Dolby Atmos bed channels, Dolby Atmos dynamic objects, IMF MCA audio content/element, quantization bits, etc.) for the assets included in an analysis. |
vod_cadence_pattern | This view exposes fields to capture metadata about the various video cadence patterns detected (e.g. start/end time, duration, pattern, pattern offset, etc.) for the assets included in an analysis. |
vod_status_analysis | This view exposes fields to capture a summary of the status of an analysis (e.g. percent done, test count, quality check failure count). |
vod_status_test | This view exposes fields to capture the current status of an analysis (e.g. percent done, current step/state). |
vod_status_updates | This view exposes fields to capture additional details associated with the current status of an analysis (e.g. state details, error details). |
The views above, and the fields they support, are presented in more detail, with examples, in the sections below.
Common view fields
Most Insights views for the StreamAware and StreamSmart products expose several common fields which can be used to add detail and context to your Insights responses. The following table lists these common fields:
Field Name | Details |
---|---|
organization | The top level of the data topology. |
site | The second level of the data topology. |
analysis_uuid | The analysis ID. |
title | The name/title of the video. |
test_id | The test ID of the asset (e.g. 1, 1-1). |
reference_file | The path of the reference video (applies to FR analyses only). |
reference_video | Concatenation of the reference video path and identifier (applies to FR analyses only). |
reference_video_no_path | Concatenation of the reference video file (without path) and identifier (applies to FR analyses only). |
reference_video_identifier | The reference video identifier (applies to FR analyses only). |
reference_video_json | The JSON representation of the reference video, exactly as submitted in the API request (applies to FR analyses only). |
test_file | The path of the subject/test video. |
test_video | Concatenation of the subject/test video path and identifier. |
test_video_no_path | Concatenation of the subject/test video file (without path) and identifier. |
test_video_identifier | The subject/test video identifier. |
test_video_json | The JSON representation of the subject/test video, exactly as submitted in the API request. |
stream_index | The index used to uniquely identify the stream (video/audio) in the test video. |
viewer_type | The viewer type for which scores were calculated. |
device.device_name | The device name. |
In the rare case where a view doesn’t happen to support one of the common fields listed above, Insights will simply ignore the field and omit it from the response.
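For example, a request’s fields list may freely mix the common fields above with view-specific fields. The following fragment is a sketch only; the particular combination of fields shown is illustrative, not prescriptive:
"fields": [
  "vod_analysis_results.title",
  "vod_analysis_results.test_id",
  "vod_analysis_results.test_video_no_path",
  "device.device_name",
  "vod_analysis_results.xvs"
]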
Whether you are using StreamAware to analyze video assets or StreamSmart to produce optimized encodings, the work done by the system can take some time to complete. Both StreamAware analyses and StreamSmart optimizations go through various lifecycle states, and the system sends a notification for each state transition and event update to the Insights platform. Please refer to the following table for a list of all the possible states and their meanings:
STATE | DESCRIPTION |
---|---|
Queued | Indicates that the system has successfully received a StreamAware analysis or StreamSmart optimization request and it has been scheduled for processing according to resource availability. |
Estimating | Indicates that the system has scheduled, is in the process of completing, or has successfully completed the task to estimate the amount of computing resources (RAM, CPU) needed to complete the analysis. Once the estimation is complete, the status message will include the computing resources required. This state applies only to StreamAware analyses. |
Initializing | Indicates that the On-Demand Analyzer has been successfully started, in the case of a StreamAware analysis, or that the StreamSmart workflow engine has been successfully started, in the case of a StreamSmart optimization. For a StreamAware analysis, the status message captures details on how the On-Demand Analyzer was started and configured, and the version in use. |
Aligning | Indicates that the On-Demand Analyzer is in the process of completing or has completed the temporal alignment of the video assets. While in progress, the status message will include the percentage completed. Once complete, the status message will include the offsets being used in both the reference and subject/test assets. This state applies only to full-reference StreamAware analyses and requires that temporal alignment is enabled (the default). |
Pre-processing | Indicates that the video assets are being transferred (if applicable) and preprocessed in preparation for encoding. This state applies only to StreamSmart optimizations. |
Encoding | Indicates that video assets are being encoded using the encoder and encoder settings specified in the optimization request. While in progress, the status message will include the percentage completed if supported by the underlying encoder. This state applies only to StreamSmart optimizations. |
Analyzing | Indicates that the On-Demand Analyzer is in the process of completing or has completed analyzing the video asset(s). While in progress, the status message will include the percentage completed, along with the total number of frames. Once completed, the requested measurements and metrics are ready for post-processing by the Insights data platform. |
Post-processing | Indicates that segment selection and the creation of optimized results is in progress. The transfer of the optimized results to the target output location is also done during this state. This state applies only to StreamSmart optimizations. |
Results Ready | Indicates that the Insights data platform has completed all post-processing of the StreamAware analysis or StreamSmart optimization results and that they are ready to be retrieved/used. This is a terminal state. |
Failed | Indicates that the system was unable to process one or more of the asset(s) in the StreamAware analysis or StreamSmart optimization request. The message associated with this status should indicate the nature of the failure. For StreamAware analyses, a failure to estimate is generally not considered critical, and the system should make every effort to analyze the asset(s) using default values for the computing resources. Other failures, such as an inability to achieve temporal alignment with the chosen configuration values, may require remedial action in order to achieve success. For StreamSmart optimizations, failures encountered during the pre-processing, encoding or post-processing states are generally considered fatal, as they usually indicate a problem reading the asset, encoding it with the specified encoder configuration, or writing the results to the desired output location. This is a terminal state. |
The IMAX Stream On-Demand Platform is designed to process large volumes of assets, of varying complexity and duration, concurrently. Because the order of this processing is non-deterministic, it is important for users of the system to be able to see the status of their StreamAware analyses and/or StreamSmart optimizations. The Insights REST API supports a pull model for retrieving status information and exposes 3 views for this purpose:
View | Details |
---|---|
vod_status_analysis | This view exposes fields to capture a summary of the status of an analysis or optimization (e.g. percent done, test count, quality check failure count). |
vod_status_test | This view exposes fields to capture the current status of an analysis or optimization (e.g. percent done, current step/state). |
vod_status_updates | This view exposes fields to capture additional details associated with the current status of an analysis or optimization (e.g. state details, error details). |
Use the vod_status_analysis view and an analysis/optimization UUID filter to see a summary or roll-up status of a given analysis/optimization, as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 7FYtb3FB39ngTgDXF4WsCXb2NMvR3PRvhJGgypVx' \
--data '{
"model": "on_demand",
"view": "vod_status_analysis",
"fields": [
"vod_status_analysis.analysis_uuid",
"vod_status_analysis.time",
"vod_status_analysis.min_time",
"vod_status_analysis.stage",
"vod_status_analysis.percent_done",
"vod_status_analysis.test_count",
"vod_quality_checks.count"
],
"filters": {
"vod_status_analysis.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b"
}
}'
Consider the following JSON sample response:
[
{
"vod_status_analysis.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_analysis.time": "2023-08-30 16:41:00",
"vod_status_analysis.min_time": "2023-08-30 16:40:14",
"vod_status_analysis.stage": "Results Ready",
"vod_status_analysis.percent_done": 100,
"vod_status_analysis.test_count": 3,
"vod_quality_checks.count": 4
}
]
Note in the example response above:
- the test_count value indicates that there are 3 separate NR and/or FR analyses being performed, and
- the count value indicates that there were 4 quality check failures found.
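Because results arrive asynchronously, a common pattern is to poll this summary view until the analysis reaches a terminal state. The following is a minimal shell sketch of that pattern; it assumes the jq utility is available and that the TOKEN variable holds a valid OAuth2 token, and the 30-second interval and UUID are illustrative:
# Poll the vod_status_analysis view until the analysis reaches a terminal state (sketch).
while true; do
  STAGE=$(curl -s 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer ${TOKEN}" \
    --data '{
      "model": "on_demand",
      "view": "vod_status_analysis",
      "fields": ["vod_status_analysis.stage"],
      "filters": {"vod_status_analysis.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b"}
    }' | jq -r '.[0]."vod_status_analysis.stage"')
  echo "Current stage: ${STAGE}"
  if [ "${STAGE}" = "Results Ready" ] || [ "${STAGE}" = "Failed" ]; then
    break
  fi
  sleep 30
done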
To fetch the status for a given analysis/optimization and to see the details of all no-reference and/or full-reference analyses being performed, use the vod_status_test view and filter on the analysis UUID, as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mFs5sKXwcVnCS4FC6kKbdFYqzcQYqVmHCKnKsNk5' \
--data '{
"model": "on_demand",
"view": "vod_status_test",
"fields": [
"vod_status_test.analysis_uuid",
"vod_status_test.time",
"vod_status_test.test_id",
"vod_status_test.test_video",
"vod_status_test.reference_video",
"vod_status_test.stage",
"vod_status_test.percent_done"
],
"filters": {
"vod_status_test.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b"
},
"sorts": [
"vod_status_test.test_id"
]
}'
Assuming that 2e4db929-d00b-493a-98fb-77e321f3219b represents a full-reference analysis, you may see a response like the following if you query for the status immediately after submitting via the Stream On-Demand Platform REST API:
[
{
"vod_status_test.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_test.time": "2023-08-30 16:40:36",
"vod_status_test.test_id": "1",
"vod_status_test.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_test.reference_video": null,
"vod_status_test.stage": "Analyzing",
"vod_status_test.percent_done": 33.33
},
{
"vod_status_test.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_test.time": "2023-08-30 16:40:37",
"vod_status_test.test_id": "1-1",
"vod_status_test.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_test.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_test.stage": "Analyzing",
"vod_status_test.percent_done": 20.12
},
{
"vod_status_test.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_test.time": "2023-08-30 16:40:37",
"vod_status_test.test_id": "1-2",
"vod_status_test.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_41.mp4",
"vod_status_test.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_test.stage": "Aligning",
"vod_status_test.percent_done": 88.75
}
]
Notice the following in the response above:
- The NR analysis of 1 (dog_running.mp4) is 33% complete.
- The FR analysis of 1-1 (dog_running_1080_h264_qp_31.mp4) has been temporally aligned with the reference (i.e. the Aligning state is finished) and is 20% of the way through its analysis.
- The 1-2 asset (dog_running_1080_h264_qp_41.mp4) is still being temporally aligned with the reference and, as such, has not yet started its FR analysis.
Repeating the request above a few minutes later results in the following response:
[
{
"vod_status_test.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_test.time": "2023-08-30 16:40:55",
"vod_status_test.test_id": "1",
"vod_status_test.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_test.reference_video": null,
"vod_status_test.stage": "Results Ready",
"vod_status_test.percent_done": 100
},
{
"vod_status_test.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_test.time": "2023-08-30 16:41:00",
"vod_status_test.test_id": "1-1",
"vod_status_test.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_test.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_test.stage": "Results Ready",
"vod_status_test.percent_done": 100
},
{
"vod_status_test.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_test.time": "2023-08-30 16:40:59",
"vod_status_test.test_id": "1-2",
"vod_status_test.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_41.mp4",
"vod_status_test.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_test.stage": "Results Ready",
"vod_status_test.percent_done": 100
}
]
In the result above, all analyses are complete and their results are ready to be used.
If you encounter an error or unexpected behavior with a given analysis, you can fetch a log of all the status messages for the analysis using the vod_status_updates view and filters on vod_status_updates.analysis_uuid and vod_status_updates.test_id, as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer rjV3dmFFrWfNWKPMmtcRtQdrPtjhYczCBFHPbn5z' \
--data '{
"model": "on_demand",
"view": "vod_status_updates",
"fields": [
"vod_status_updates.analysis_uuid",
"vod_status_updates.time",
"vod_status_updates.test_id",
"vod_status_updates.test_video",
"vod_status_updates.reference_video",
"vod_status_updates.stage",
"vod_status_updates.percent_done",
"vod_status_updates.details",
"vod_status_updates.error_details"
],
"filters": {
"vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_updates.test_id": "1-1"
},
"sorts": [
"vod_status_updates.test_id",
"vod_status_updates.time desc"
]
}'
Fetch the vod_status_updates.details and vod_status_updates.error_details fields to see detailed messages and errors.
Since the volume of status updates can be extremely large, you are strongly encouraged to filter on both vod_status_updates.test_id and vod_status_updates.stage in order to reduce the volume of the response and focus on the asset of interest.
Sort on the vod_status_updates.time field in descending order to see the most recent messages first.
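For example, the filters object in the request above could be narrowed to a single stage as follows (a sketch; the stage value shown is illustrative):
"filters": {
  "vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
  "vod_status_updates.test_id": "1-1",
  "vod_status_updates.stage": "Analyzing"
}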
An abbreviated sample JSON response to the original request (without the stage filter) would be as follows:
[
{
"vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_updates.time": "2023-08-30 16:41:01.176000",
"vod_status_updates.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_updates.test_id": "1-1",
"vod_status_updates.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_updates.stage": "Analyzing",
"vod_status_updates.details": "{\"result\":\"pass\",\"total\":492}",
"vod_status_updates.error_details": null,
"vod_status_updates.percent_done": 1
},
{
"vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_updates.time": "2023-08-30 16:41:01.122000",
"vod_status_updates.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_updates.test_id": "1-1",
"vod_status_updates.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_updates.stage": "Analyzing",
"vod_status_updates.details": "{\"progress\":{\"current\":492,\"total\":492}}",
"vod_status_updates.error_details": null,
"vod_status_updates.percent_done": 1
},
{
"vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_updates.time": "2023-08-30 16:41:00.937000",
"vod_status_updates.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_updates.test_id": "1-1",
"vod_status_updates.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_updates.stage": "Analyzing",
"vod_status_updates.details": null,
"vod_status_updates.error_details": null,
"vod_status_updates.percent_done": 1
},
{
"vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_updates.time": "2023-08-30 16:41:00.937000",
"vod_status_updates.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_updates.test_id": "1-1",
"vod_status_updates.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_updates.stage": "Results Ready",
"vod_status_updates.details": null,
"vod_status_updates.error_details": null,
"vod_status_updates.percent_done": 1
},
{
"vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_updates.time": "2023-08-30 16:41:00.615000",
"vod_status_updates.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_updates.test_id": "1-1",
"vod_status_updates.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_updates.stage": "Analyzing",
"vod_status_updates.details": "{\"progress\":{\"current\":485,\"total\":492}}",
"vod_status_updates.error_details": null,
"vod_status_updates.percent_done": 0.9858
},
{
"vod_status_updates.analysis_uuid": "2e4db929-d00b-493a-98fb-77e321f3219b",
"vod_status_updates.time": "2023-08-30 16:40:59.613000",
"vod_status_updates.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_status_updates.test_id": "1-1",
"vod_status_updates.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_status_updates.stage": "Analyzing",
"vod_status_updates.details": "{\"progress\":{\"current\":466,\"total\":492}}",
"vod_status_updates.error_details": null,
"vod_status_updates.percent_done": 0.9472
},
.
.
.
]
If the system encounters an error while processing an asset, it will report the content of the error in the vod_status_updates.error_details field for inspection. Many errors contain sufficient information for a remedy to be readily applied and the analysis rerun with minimal effort (e.g. fixing an invalid asset path or filename). Some errors are more serious and may require input from a video analyst and/or manipulation of the assets themselves. A failure to achieve automatic alignment between the reference and subject assets, for example, will result in an alignment error, and manual actions must be taken to apply fixes before resubmitting the analysis. If an error occurs for which the cause or solution is unknown, please post a trouble ticket using the IMAX Help Center.
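When triaging a large status log, it can help to isolate just the updates that carry error details. The following is a minimal sketch, assuming jq is installed and that the response from the request above has been saved to status_updates.json (a hypothetical filename):
# Keep only the status updates whose error_details field is populated (sketch).
jq '[ .[] | select(."vod_status_updates.error_details" != null) ]' status_updates.json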
The following tables list the various fields of interest on the vod_status_analysis, vod_status_test and vod_status_updates views for extracting data on the status of an analysis. You can use these fields in the body of any request sent to the Insights REST API.
Metric/Measurement Field | Details |
---|---|
vod_status_analysis.min_time | The time that the analysis started |
vod_status_analysis.time | The time the status was updated |
vod_status_analysis.step | The current step of the analysis |
vod_status_analysis.percent_done | The completion percentage of the current stage |
vod_status_analysis.test_count | The number of files analyzed |
vod_quality_checks.count | The number of quality check failures |
Metric/Measurement Field | Details |
---|---|
vod_status_test.min_time | The time that the analysis started |
vod_status_test.time | The time the status was updated |
vod_status_test.step | The current step of the analysis |
vod_status_test.percent_done | The completion percentage of the current stage |
vod_quality_checks.count | The number of quality check failures |
vod_status_test.asset_xvs | The asset IMAX ViewerScore |
Metric/Measurement Field | Details |
---|---|
vod_status_updates.time | The time the event occurred |
vod_status_updates.max_event_time | The latest time associated with the event |
vod_status_updates.min_event_time | The time that the event started |
vod_status_updates.stage | The current stage of the analysis |
vod_status_updates.percent_done | The completion percentage of the current stage |
vod_status_updates.details | Additional details related to the event |
vod_status_updates.error_details | The error description extracted from status updates that indicate that there was an error |
The Insights REST API is used to:
- fetch metrics and measurements across a variety of time granularities, including per-frame, per-second (1s, 2s, 5s, etc.) and per-asset,
- fetch the StreamAware QC overall result and specific failures for an analysis, and
- fetch the StreamSmart perceptual quality scores and bitrate savings for an optimization.
As with all things in Insights, the data above is modeled and accessed through a collection of views and fields on those views. The sections below provide details on each of these views and examples of how to use them to fetch the results of your StreamAware analyses and StreamSmart optimizations.
The focus of this section is on the options for retrieving video and audio VisionScience measurements and metrics from a given analysis or optimization. To that end, the discussion will center largely around the following Insights views:
View | Details |
---|---|
vod_analysis_results | This view exposes fields to capture the various VisionScience measurements and metrics (e.g. XVS, XEPS, XBS, CVD, CCM, physical and visual noise, HDR color gamut and luminance) for an analysis or optimization over multiple measurement periods (1s, 2s, 5s, 10s, 30s, 60s) and aggregate calculations (min, max, avg), thus enabling one to create a comprehensive time-series view of any supported measurement or metric. Additionally, this view exposes a subview, vod_video_asset_score, that exposes fields to capture the various asset scores (e.g. XVS, XEPS, XBS and CCS) for an analysis or optimization. |
vod_analysis_frame_results | This view exposes fields to capture the various VisionScience measurements and metrics (e.g. XVS, XEPS, XBS, CVD, CCM, physical and visual noise, HDR color gamut and luminance) for each frame of the assets included in an analysis or optimization. |
vod_audio_loudness | This view exposes fields to capture various audio loudness measurements and metrics (e.g. integrated loudness, loudness range, momentary loudness, short-term loudness, true peak level) for the assets included in an analysis. |
vod_cadence_pattern | This view exposes fields to capture metadata about the various video cadence patterns detected (e.g. start/end time, duration, pattern, pattern offset, etc.) for the assets included in an analysis. |
An Asset Score is a summary metric that provides a single value for an entire asset and is more accurate than taking the mathematical average of the underlying frame scores. StreamAware and StreamSmart provide Asset Scores for the following metrics:
- IMAX ViewerScore (XVS),
- IMAX Encoder Performance Score (XEPS),
- IMAX Banding Score (XBS) and
- IMAX Content Complexity Score (XCCS).
Using the Stream On-Demand Platform REST API, we can submit a full-reference analysis that configures the On-Demand Analyzer to capture XBS and CCS, in addition to the XVS and XEPS that are captured by default, as shown below:
curl --location 'https://example.sct.imax.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
"content": {
"title": "Dog Running FR - Asset Scores"
},
"referenceAssets": [
{
"name": "dog_running.mp4",
"path": "royalty_free/dog_running/source",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"subjectAssets": [
{
"name": "dog_running_1080_h264_qp_31.mp4",
"path": "royalty_free/dog_running/outputs",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
},
{
"name": "dog_running_1080_h264_qp_41.mp4",
"path": "royalty_free/dog_running/outputs",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"enableComplexityAnalysis": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
]
}
}'
Asset XVS, XEPS, XBS and CCS are exposed as fields on the vod_video_asset_score view, which is a subview of the vod_analysis_results view, and can be fetched from Insights as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer NPp7SWngHvMXYJsyPRNQXc6nKGgZ4Dx7FFWsb9S5' \
--data '{
"model": "on_demand",
"view": "vod_analysis_results",
"fields": [
"vod_analysis_results.test_id",
"device.device_name",
"vod_analysis_results.reference_video",
"vod_analysis_results.test_video",
"vod_video_asset_score.asset_xvs",
"vod_video_asset_score.asset_xeps",
"vod_video_asset_score.asset_xbs",
"vod_video_asset_score.asset_content_complexity"
],
"filters": {
"vod_analysis_results.analysis_uuid": "de66d3ab-7469-476b-9d15-965e173811ce"
},
"sorts": [
"vod_analysis_results.test_id"
]
}'
The JSON response payload is as follows:
[
{
"vod_analysis_results.analysis_uuid": "de66d3ab-7469-476b-9d15-965e173811ce",
"vod_analysis_results.test_id": "1",
"device.device_name": "OLED65C9PUA",
"vod_analysis_results.reference_video": null,
"vod_analysis_results.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_video_asset_score.asset_xvs": 95.375,
"vod_video_asset_score.asset_xeps": null,
"vod_video_asset_score.asset_xbs": 0,
"vod_analysis_results.asset_content_complexity": 37.4
},
{
"vod_analysis_results.analysis_uuid": "de66d3ab-7469-476b-9d15-965e173811ce",
"vod_analysis_results.test_id": "1-1",
"device.device_name": "OLED65C9PUA",
"vod_analysis_results.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_analysis_results.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_video_asset_score.asset_xvs": 83.416,
"vod_video_asset_score.asset_xeps": 88.02,
"vod_video_asset_score.asset_xbs": 18.2,
"vod_analysis_results.asset_content_complexity": null
},
{
"vod_analysis_results.analysis_uuid": "de66d3ab-7469-476b-9d15-965e173811ce",
"vod_analysis_results.test_id": "1-2",
"device.device_name": "OLED65C9PUA",
"vod_analysis_results.reference_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/source/dog_running.mp4",
"vod_analysis_results.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_41.mp4",
"vod_video_asset_score.asset_xvs": 60.048,
"vod_video_asset_score.asset_xeps": 60.057,
"vod_video_asset_score.asset_xbs": 42.6,
"vod_analysis_results.asset_content_complexity": null
}
]
We can draw the following conclusions from the values above:
- The reference asset (dog_running.mp4) is a reasonably high quality asset, as its Asset XVS of 95 and Asset XBS of 0 place it in the excellent category, where impairments, including color banding, are imperceptible to the typical viewer.
- The 1-1 subject/test asset (dog_running_1080_h264_qp_31.mp4) qualifies as an excellent encode, as its Asset XVS is above 80 and, although it has some banding, its Asset XBS of 18.2 falls into the imperceptible category. The Asset XEPS of 88 indicates that the encoder did an excellent job of maintaining the reference quality.
- The 1-2 subject/test asset (dog_running_1080_h264_qp_41.mp4) qualifies as a fair encode, as its Asset XVS is only 60 and its Asset XBS of 42 means that the color banding is creeping into the slightly annoying category. The Asset XEPS of 60 indicates that the encoder did not do a very good job of maintaining the reference quality, especially considering that the content complexity of the reference (Asset CCS) is reasonably low at only 37.
- The example above is purposefully terse in order to illustrate the fetching of Asset Scores. The vod_analysis_results view has many additional fields that may be of interest to include in your response. Please refer to Supported view fields for details.
For full-reference analyses, the Asset CCS is computed only on the (SDR) reference asset.
When you’re interested in a more granular treatment of the VisionScience scores for a given asset, you’ll want to use the measurement period fields provided on the vod_analysis_results view in order to build a time series view of your asset’s scores.
Using the Stream On-Demand Platform REST API, let’s submit a no-reference analysis for an HDR asset that configures the On-Demand Analyzer to capture color information and statistics, in addition to banding, as shown below:
curl --location 'https://example.sct.imax.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
"content": {
"title": "HDR NR Test"
},
"subjectAssets": [
{
"name": "samsung-demo-reel-version-a.mp4",
"path": "content-similarity-assets/UHD-HDR-MP4-samsung-demo-reel",
"storageLocation": {
"type": "PVC",
"name": "on-201"
},
"hdr": true
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"enableColorInformation": true,
"enableColorStatsCollection": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
]
}
}'
Next, let’s fetch our analysis results from Insights, including color gamut and luminance data, in a time series view using a measurement period of 2s:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 9tVkH455J5Z8X5jGVDBpRbSwBJz5f7vcHTftRv7K' \
--data '{
"model": "on_demand",
"view": "vod_analysis_results",
"fields": [
"vod_analysis_results.analysis_uuid",
"vod_analysis_results.test_video_time_2s",
"vod_analysis_results.test_video",
"vod_analysis_results.xvs",
"vod_analysis_results.max_xvs",
"vod_analysis_results.min_xvs",
"vod_analysis_results.avg_xbs",
"vod_analysis_results.max_xbs",
"vod_analysis_results.min_xbs",
"vod_analysis_results.color_primaries",
"vod_analysis_results.color_space",
"vod_analysis_results.color_transfer_characteristics",
"vod_analysis_results.max_content_light_level",
"vod_analysis_results.max_frame_average_light_level",
"vod_analysis_results.color_gamut",
"vod_analysis_results.color_gamut_rec709",
"vod_analysis_results.color_gamut_p3d65",
"vod_analysis_results.color_gamut_rec2020",
"vod_analysis_results.color_gamut_unknown",
"vod_analysis_results.max_pixel_luminance",
"vod_analysis_results.min_pixel_luminance",
"vod_analysis_results.avg_frame_luminance",
"vod_analysis_results.max_frame_luminance",
"vod_analysis_results.min_frame_luminance",
"vod_analysis_results.frame_count"
],
"filters": {
"vod_analysis_results.analysis_uuid": "b9b437a7-f966-4e2b-8bb4-02d9cd1aa2a8"
},
"sorts": [
"vod_analysis_results.test_video_time_2s"
]
}'
The abbreviated JSON response payload is as follows:
[
{
"vod_analysis_results.analysis_uuid": "b9b437a7-f966-4e2b-8bb4-02d9cd1aa2a8",
"vod_analysis_results.test_video_time_2s": "00:00:02",
"vod_analysis_results.test_video": "file:///mnt/on-201/content-similarity-assets/UHD-HDR-MP4-samsung-demo-reel/samsung-demo-reel-version-a.mp4",
"vod_analysis_results.color_primaries": "bt2020",
"vod_analysis_results.color_space": "bt2020nc",
"vod_analysis_results.color_transfer_characteristics": "smpte2084",
"vod_analysis_results.max_content_light_level": 1000,
"vod_analysis_results.max_frame_average_light_level": 1000,
"vod_analysis_results.xvs": 97.9375,
"vod_analysis_results.max_xvs": 98,
"vod_analysis_results.min_xvs": 97.875,
"vod_analysis_results.avg_xbs": 53.6875,
"vod_analysis_results.max_xbs": 56,
"vod_analysis_results.min_xbs": 50,
"vod_analysis_results.color_gamut": "P3D65",
"vod_analysis_results.color_gamut_rec709": 0,
"vod_analysis_results.color_gamut_p3d65": 48,
"vod_analysis_results.color_gamut_rec2020": 0,
"vod_analysis_results.color_gamut_unknown": 0,
"vod_analysis_results.max_pixel_luminance": 10000,
"vod_analysis_results.min_pixel_luminance": 0,
"vod_analysis_results.avg_frame_luminance": 281.4792,
"vod_analysis_results.max_frame_luminance": 283,
"vod_analysis_results.min_frame_luminance": 280,
"vod_analysis_results.frame_count": 48
},
{
"vod_analysis_results.analysis_uuid": "b9b437a7-f966-4e2b-8bb4-02d9cd1aa2a8",
"vod_analysis_results.test_video_time_2s": "00:00:04",
"vod_analysis_results.test_video": "file:///mnt/on-201/content-similarity-assets/UHD-HDR-MP4-samsung-demo-reel/samsung-demo-reel-version-a.mp4",
"vod_analysis_results.color_primaries": "bt2020",
"vod_analysis_results.color_space": "bt2020nc",
"vod_analysis_results.color_transfer_characteristics": "smpte2084",
"vod_analysis_results.max_content_light_level": 1000,
"vod_analysis_results.max_frame_average_light_level": 1000,
"vod_analysis_results.xvs": 97.9165,
"vod_analysis_results.max_xvs": 98,
"vod_analysis_results.min_xvs": 97.833,
"vod_analysis_results.avg_xbs": 55.7083,
"vod_analysis_results.max_xbs": 61,
"vod_analysis_results.min_xbs": 52,
"vod_analysis_results.color_gamut": "P3D65",
"vod_analysis_results.color_gamut_rec709": 0,
"vod_analysis_results.color_gamut_p3d65": 48,
"vod_analysis_results.color_gamut_rec2020": 0,
"vod_analysis_results.color_gamut_unknown": 0,
"vod_analysis_results.max_pixel_luminance": 9043,
"vod_analysis_results.min_pixel_luminance": 0,
"vod_analysis_results.avg_frame_luminance": 280.0417,
"vod_analysis_results.max_frame_luminance": 281,
"vod_analysis_results.min_frame_luminance": 279,
"vod_analysis_results.frame_count": 48
},
{
"vod_analysis_results.analysis_uuid": "b9b437a7-f966-4e2b-8bb4-02d9cd1aa2a8",
"vod_analysis_results.test_video_time_2s": "00:00:06",
"vod_analysis_results.test_video": "file:///mnt/on-201/content-similarity-assets/UHD-HDR-MP4-samsung-demo-reel/samsung-demo-reel-version-a.mp4",
"vod_analysis_results.color_primaries": "bt2020",
"vod_analysis_results.color_space": "bt2020nc",
"vod_analysis_results.color_transfer_characteristics": "smpte2084",
"vod_analysis_results.max_content_light_level": 1000,
"vod_analysis_results.max_frame_average_light_level": 1000,
"vod_analysis_results.xvs": 96.5925,
"vod_analysis_results.max_xvs": 98,
"vod_analysis_results.min_xvs": 95.185,
"vod_analysis_results.avg_xbs": 45.3542,
"vod_analysis_results.max_xbs": 60,
"vod_analysis_results.min_xbs": 28,
"vod_analysis_results.color_gamut": "P3D65",
"vod_analysis_results.color_gamut_rec709": 0,
"vod_analysis_results.color_gamut_p3d65": 48,
"vod_analysis_results.color_gamut_rec2020": 0,
"vod_analysis_results.color_gamut_unknown": 0,
"vod_analysis_results.max_pixel_luminance": 8835,
"vod_analysis_results.min_pixel_luminance": 0,
"vod_analysis_results.avg_frame_luminance": 236.8125,
"vod_analysis_results.max_frame_luminance": 282,
"vod_analysis_results.min_frame_luminance": 184,
"vod_analysis_results.frame_count": 48
},
.
.
.
]
Notice in the results above:
- We have one result in the response for each 2s measurement period (i.e. 2s, 4s, 6s, etc.).
- Insights provides the mathematical average, minimum and maximum for the various metrics and measurements requested. These values are calculated using the values of all the underlying frames that make up the measurement period (i.e. a frame count of 48 frames in the example above).
- We chose to sort the results based on the chosen measurement period (i.e. vod_analysis_results.test_video_time_2s).
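To chart such a series, the response can be reduced to (time, score) pairs. The following is a minimal sketch, assuming jq is installed and that the response above has been saved to results_2s.json (a hypothetical filename):
# Emit a CSV time series of 2s XVS values from a saved response (sketch).
jq -r '.[] | [."vod_analysis_results.test_video_time_2s", ."vod_analysis_results.xvs"] | @csv' results_2s.json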
When fetching scores for a full-reference (FR) time series view, you are strongly encouraged to use the reference video time fields (e.g. vod_analysis_results.reference_video_time_1s) as your only time-based sort field, as the reference time should always be the x-axis. You are also cautioned not to mix different measurement periods between the reference and test.
When fetching scores for a no-reference (NR) time series view, you are strongly encouraged to use the test video time fields (e.g. vod_analysis_results.test_video_time_1s) as your only time-based sort field.
The vod_analysis_results view exposes fields and subviews (e.g. vod_video_asset_score) to capture the various VisionScience measurements and metrics for an analysis or optimization over an entire asset (i.e. Asset Scores) and/or various measurement periods (time series). The following subsections list the various fields of interest on the vod_analysis_results view (and its subviews) that can be used in the body of any request sent to the Insights REST API.
Metrics and measurements
The Insights vod_analysis_results view exposes the following fields:
Metric/Measurement Field | Details |
---|---|
vod_analysis_results.end_time | The time when the analysis completed |
vod_analysis_results.avg_bitrate | The average bitrate of the video |
vod_analysis_results.max_bitrate | The maximum bitrate of the video |
vod_analysis_results.min_bitrate | The minimum bitrate of the video |
vod_analysis_results.xvs | The IMAX ViewerScore over the aggregated number of frames |
vod_analysis_results.max_xvs | The maximum per-frame IMAX ViewerScore over the aggregated number of frames |
vod_analysis_results.min_xvs | The minimum IMAX ViewerScore over the aggregated number of frames |
vod_analysis_results.xeps | The IMAX Encoder Performance Score over the aggregated number of frames |
vod_analysis_results.min_xeps | The minimum IMAX Encoder Performance Score over the aggregated number of frames |
vod_analysis_results.max_xeps | The maximum IMAX Encoder Performance Score over the aggregated number of frames |
vod_analysis_results.avg_xbs | The average amount of banding during the aggregated number of frames |
vod_analysis_results.max_xbs | The maximum measure of banding during the aggregated number of frames |
vod_analysis_results.min_xbs | The minimum measure of banding during the aggregated number of frames |
vod_analysis_results.avg_content_complexity | The average content complexity over the aggregated number of frames |
vod_analysis_results.min_content_complexity | The minimum content complexity over the aggregated number of frames |
vod_analysis_results.max_content_complexity | The maximum content complexity over the aggregated number of frames |
vod_analysis_results.avg_psnr | The average PSNR value measured during the aggregated number of frames |
vod_analysis_results.max_psnr | The maximum PSNR value measured during the aggregated number of frames |
vod_analysis_results.min_psnr | The minimum PSNR value measured during the aggregated number of frames |
vod_analysis_results.avg_vmaf | The average VMAF value measured during the aggregated number of frames |
vod_analysis_results.max_vmaf | The maximum VMAF value measured during the aggregated number of frames |
vod_analysis_results.min_vmaf | The minimum VMAF value measured during the aggregated number of frames |
vod_analysis_results.color_transfer_characteristics | The transfer characteristics (e.g. smpte2084) extracted from the video metadata |
vod_analysis_results.avg_color_volume_difference | The average BT.2124 color difference value measured during the aggregated number of frames |
vod_analysis_results.max_color_volume_difference | The maximum BT.2124 color difference value measured during the aggregated number of frames |
vod_analysis_results.min_color_volume_difference | The minimum BT.2124 color difference value measured during the aggregated number of frames |
vod_analysis_results.color_gamut | The overall detected color gamut for the frame, set of frames, or asset |
vod_analysis_results.color_gamut_rec709 | The number of frames where the Rec.709 color space was detected |
vod_analysis_results.color_gamut_p3d65 | The number of frames where the P3D65 color space was detected |
vod_analysis_results.color_gamut_rec2020 | The number of frames where the Rec.2020 color space was detected |
vod_analysis_results.color_gamut_unknown | The number of frames where a known color gamut could not be detected |
vod_analysis_results.min_pixel_luminance | The lowest measured pixel light level (in nits) in all of the aggregated frames |
vod_analysis_results.max_pixel_luminance | The highest measured pixel light level (in nits) in all of the aggregated frames. MaxCLL (Maximum Content Light Level) is the value of this field across an entire asset. |
vod_analysis_results.avg_frame_luminance | The average frame light level (average of the frame’s pixel light levels, in nits) of the aggregated frames |
vod_analysis_results.min_frame_luminance | The lowest measured frame light level (average of the frame’s pixel light levels, in nits) in all of the aggregated frames |
vod_analysis_results.max_frame_luminance | The highest measured frame light level (average of the frame’s pixel light levels, in nits) in all of the aggregated frames. MaxFALL (Maximum Frame Average Light Level) is the value of this field across an entire asset. |
vod_analysis_results.avg_visual_noise | The average visual noise value measured during the aggregated number of frames |
vod_analysis_results.min_visual_noise | The minimum visual noise value measured during the aggregated number of frames |
vod_analysis_results.max_visual_noise | The maximum visual noise value measured during the aggregated number of frames |
vod_analysis_results.avg_physical_noise | The average physical noise value measured during the aggregated number of frames |
vod_analysis_results.min_physical_noise | The minimum physical noise value measured during the aggregated number of frames |
vod_analysis_results.max_physical_noise | The maximum physical noise value measured during the aggregated number of frames |
And the following filter-only fields:
Metric/Measurement Field | Details |
---|---|
vod_analysis_results.use_smpte_timecode | Apply the SMPTE start timecode to time value fields |
Measurement periods
Insights supports the following measurement period fields:
Measurement Period Field | Details |
---|---|
vod_analysis_results.reference_video_time_1s | The reference video time in 1 second intervals |
vod_analysis_results.test_video_time_1s | The test video time in 1 second intervals |
vod_analysis_results.reference_video_time_2s | The reference video time in 2 second intervals |
vod_analysis_results.test_video_time_2s | The test video time in 2 second intervals |
vod_analysis_results.reference_video_time_5s | The reference video time in 5 second intervals |
vod_analysis_results.test_video_time_5s | The test video time in 5 second intervals |
vod_analysis_results.reference_video_time_10s | The reference video time in 10 second intervals |
vod_analysis_results.test_video_time_10s | The test video time in 10 second intervals |
vod_analysis_results.reference_video_time_30s | The reference video time in 30 second intervals |
vod_analysis_results.test_video_time_30s | The test video time in 30 second intervals |
vod_analysis_results.reference_video_time_min | The reference video time in 1 minute intervals |
vod_analysis_results.test_video_time_min | The test video time in 1 minute intervals |
For some use cases, it may prove desirable to fetch the requested VisionScience measurements and metrics for each frame in your analyzed assets. Frame-level data is provided using the vod_analysis_frame_results view.
Using the Stream On-Demand Platform REST API, we can submit a full-reference analysis for HDR assets that configures the On-Demand Analyzer to capture color information and statistics, in addition to XBS (color banding), XEPS (encoder performance) and CVD (color volume difference), as shown below:
curl --location 'https://example.sct.imax.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
"content": {
"title": "HDR FR Demo"
},
"referenceAssets": [
{
"hdr": true,
"name": "demo1_hdr10_ref.mp4",
"storageLocation": {
"type": "PVC",
"name": "on-201"
},
"path": "demo/encode_validation/HDR_demo"
}
],
"subjectAssets": [
{
"hdr": true,
"name": "demo1_hdr10_test_cbr_1920x1080_5400kbps.mp4",
"storageLocation": {
"type": "PVC",
"name": "on-201"
},
"path": "demo/encode_validation/HDR_demo"
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"enableColorInformation": true,
"enableColorStatsCollection": true,
"enableColorVolumeDifference": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
},
{
"device": {
"name": "ssimpluscore"
},
"viewerType": "TYPICAL"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
}
}'
The following Insights request fetches the XVS, XEPS, XBS and CVD scores along with the color gamut and pixel and frame luminance values for every frame:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 29RJCp5vn6WVt68rj4XMqh3dmyHcFbBxzD2WsM9J' \
--data '{
"model": "on_demand",
"view": "vod_analysis_frame_results",
"fields": [
"vod_analysis_frame_results.analysis_uuid",
"vod_analysis_frame_results.reference_video",
"vod_analysis_frame_results.test_video",
"vod_analysis_frame_results.test_id",
"vod_analysis_frame_results.reference_video_time_1frame",
"vod_analysis_frame_results.test_video_time_1frame",
"vod_analysis_frame_results.avg_xeps",
"vod_analysis_frame_results.avg_xvs",
"vod_analysis_frame_results.avg_xbs",
"vod_analysis_frame_results.color_primaries",
"vod_analysis_frame_results.color_space",
"vod_analysis_frame_results.color_transfer_characteristics",
"vod_analysis_frame_results.color_gamut",
"vod_analysis_frame_results.avg_color_volume_difference",
"vod_analysis_frame_results.avg_pixel_luminance",
"vod_analysis_frame_results.min_pixel_luminance",
"vod_analysis_frame_results.min_pixel_luminance_x",
"vod_analysis_frame_results.min_pixel_luminance_y",
"vod_analysis_frame_results.max_pixel_luminance",
"vod_analysis_frame_results.max_pixel_luminance_x",
"vod_analysis_frame_results.max_pixel_luminance_y",
"vod_analysis_frame_results.avg_frame_luminance",
"device.device_name"
],
"filters": {
"vod_analysis_frame_results.analysis_uuid": "de66d3ab-7469-476b-9d15-965e173811ce"
},
"sorts": [
"vod_analysis_frame_results.reference_video_time_1frame",
"vod_analysis_frame_results.test_id"
]
}'
The snippet below represents a small sampling of the response output:
[
.
.
{
"vod_analysis_frame_results.analysis_uuid": "0e37bc9b-34ca-45ea-aaf5-da713fada956",
"vod_analysis_frame_results.reference_video": null,
"vod_analysis_frame_results.test_video": "file:///mnt/on-201/demo/encode_validation/HDR_demo/demo1_hdr10_ref.mp4",
"vod_analysis_frame_results.test_id": "1",
"vod_analysis_frame_results.reference_video_time_1frame": "00:00:04:23",
"vod_analysis_frame_results.test_video_time_1frame": "00:00:04:23",
"vod_analysis_frame_results.color_primaries": "bt2020",
"vod_analysis_frame_results.color_space": "bt2020nc",
"vod_analysis_frame_results.color_transfer_characteristics": "smpte2084",
"device.device_name": "OLED65C9PUA",
"vod_analysis_frame_results.avg_xeps": null,
"vod_analysis_frame_results.avg_xvs": 100,
"vod_analysis_frame_results.avg_xbs": 50,
"vod_analysis_frame_results.color_gamut": "P3D65",
"vod_analysis_frame_results.avg_color_volume_difference": null,
"vod_analysis_frame_results.min_pixel_luminance": 0,
"vod_analysis_frame_results.min_pixel_luminance_x": 1170,
"vod_analysis_frame_results.min_pixel_luminance_y": 0,
"vod_analysis_frame_results.max_pixel_luminance": 749,
"vod_analysis_frame_results.max_pixel_luminance_x": 2490,
"vod_analysis_frame_results.max_pixel_luminance_y": 1028,
"vod_analysis_frame_results.avg_frame_luminance": 37
},
{
"vod_analysis_frame_results.analysis_uuid": "0e37bc9b-34ca-45ea-aaf5-da713fada956",
"vod_analysis_frame_results.reference_video": "file:///mnt/on-201/demo/encode_validation/HDR_demo/demo1_hdr10_ref.mp4",
"vod_analysis_frame_results.test_video": "file:///mnt/on-201/demo/encode_validation/HDR_demo/demo1_hdr10_test_cbr_1920x1080_5400kbps.mp4",
"vod_analysis_frame_results.test_id": "1-1",
"vod_analysis_frame_results.reference_video_time_1frame": "00:00:04:23",
"vod_analysis_frame_results.test_video_time_1frame": "00:00:04:23",
"vod_analysis_frame_results.color_primaries": "bt2020",
"vod_analysis_frame_results.color_space": "bt2020nc",
"vod_analysis_frame_results.color_transfer_characteristics": "smpte2084",
"device.device_name": "OLED65C9PUA",
"vod_analysis_frame_results.avg_xeps": 92,
"vod_analysis_frame_results.avg_xvs": 89,
"vod_analysis_frame_results.avg_xbs": 23,
"vod_analysis_frame_results.color_gamut": "P3D65",
"vod_analysis_frame_results.avg_color_volume_difference": 8.52735,
"vod_analysis_frame_results.min_pixel_luminance": 0,
"vod_analysis_frame_results.min_pixel_luminance_x": 1198,
"vod_analysis_frame_results.min_pixel_luminance_y": 0,
"vod_analysis_frame_results.max_pixel_luminance": 1237,
"vod_analysis_frame_results.max_pixel_luminance_x": 300,
"vod_analysis_frame_results.max_pixel_luminance_y": 674,
"vod_analysis_frame_results.avg_frame_luminance": 37
},
.
.
]
Please note the following about the Insights request and response above:
- While the vod_analysis_frame_results view exposes fields for the average, minimum and maximum values for most of the VisionScience measurements and metrics, one need only request the average (e.g. avg_xvs), since the granularity of this view is already at the lowest (i.e. frame) level. Requesting the minimum and maximum fields will return values identical to the average.
- Where per-pixel values are provided, however, average, minimum and maximum have proper meaning and different results (see avg/min/max_pixel_luminance above).
- The vod_analysis_frame_results view exposes two fields, reference_video_time_1frame and test_video_time_1frame, to capture the respective time within the reference and subject/test videos. These time fields are of the format HH:MM:SS:FF, where FF is the frame number within the given second and the possible values depend on the framerate of the underlying asset.
While there is no limit on the size of a given response from the Insights REST API, be aware that fetching all frame scores for an analysis may result in extremely large responses, running into the megabytes, depending on the number of assets in your analysis, the length of the assets, the framerate of the assets and the number of devices scored.
As with all other queries submitted to the Insights REST API, you are encouraged to use filters to help reduce the size of your results and focus on specific parts of your data set. Using the query above, we can add a subject/test asset filter (i.e. vod_analysis_frame_results.test_id) and a device filter (i.e. device.device_name) to return all the frame scores for just the subject/test asset identified by 1-1 and the OLED65C9PUA device:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 29RJCp5vn6WVt68rj4XMqh3dmyHcFbBxzD2WsM9J' \
--data '{
"model": "on_demand",
"view": "vod_analysis_frame_results",
"fields": [
"vod_analysis_frame_results.analysis_uuid",
"vod_analysis_frame_results.reference_video",
"vod_analysis_frame_results.test_video",
"vod_analysis_frame_results.test_id",
"vod_analysis_frame_results.reference_video_time_1frame",
"vod_analysis_frame_results.test_video_time_1frame",
"vod_analysis_frame_results.avg_xeps",
"vod_analysis_frame_results.avg_xvs",
"vod_analysis_frame_results.avg_xbs",
"vod_analysis_frame_results.color_primaries",
"vod_analysis_frame_results.color_space",
"vod_analysis_frame_results.color_transfer_characteristics",
"vod_analysis_frame_results.color_gamut",
"vod_analysis_frame_results.avg_color_volume_difference",
"vod_analysis_frame_results.avg_pixel_luminance",
"vod_analysis_frame_results.min_pixel_luminance",
"vod_analysis_frame_results.min_pixel_luminance_x",
"vod_analysis_frame_results.min_pixel_luminance_y",
"vod_analysis_frame_results.max_pixel_luminance",
"vod_analysis_frame_results.max_pixel_luminance_x",
"vod_analysis_frame_results.max_pixel_luminance_y",
"vod_analysis_frame_results.avg_frame_luminance",
"device.device_name"
],
"filters": {
"vod_analysis_frame_results.analysis_uuid": "de66d3ab-7469-476b-9d15-965e173811ce",
"vod_analysis_frame_results.test_id": "1-1",
"device.device_name": "OLED65C9PUA"
},
"sorts": [
"vod_analysis_frame_results.reference_video_time_1frame",
"vod_analysis_frame_results.test_id"
]
}'
Supported view fields
The vod_analysis_frame_results view exposes fields to capture the various VisionScience measurements and metrics (e.g. XVS, XEPS, XBS, CVD, CCM, physical and visual noise, HDR color gamut and luminance) for each frame of the assets included in an analysis. The following section lists the various fields of interest on the vod_analysis_frame_results view that can be used in the body of any request sent to the Insights REST API.
Metric/Measurement Field | Details |
---|---|
vod_analysis_frame_results.reference_video_time_1frame |
The reference video time with frame number within each second |
vod_analysis_frame_results.test_video_time_1frame |
The test video time with frame number within each second |
vod_analysis_frame_results.end_time |
The time when the analysis completed |
vod_analysis_frame_results.color_gamut |
The overall detected color gamnut for the frame, set of frames, or asset |
vod_analysis_frame_results.color_gamut_rec709 |
The number of frames where the Rec.709 color space was detected |
vod_analysis_frame_results.color_gamut_p3d65 |
The number of frames where the P3D65 color space was detected |
vod_analysis_frame_results.color_gamut_rec2020 |
The number of frames where the Rec.2020 color space was detected |
vod_analysis_frame_results.color_gamut_unknown |
The number of frames where a known color gamut could not be detected |
vod_analysis_frame_results.avg_color_volume_difference |
The average BT.2124 color difference value measured during the aggregated number of frames |
vod_analysis_frame_results.max_color_volume_difference |
The maximum BT.2124 color difference value measured during the aggregated number of frames |
vod_analysis_frame_results.min_color_volume_difference |
The minimum BT.2124 color difference value measured during the aggregated number of frames |
vod_analysis_frame_results.avg_visual_noise |
The average visual noise value measured during the aggregated number of frames |
vod_analysis_frame_results.max_visual_noise |
The maximum visual noise value measured during the aggregated number of frames |
vod_analysis_frame_results.min_visual_noise |
The minimum visual noise value measured during the aggregated number of frames |
vod_analysis_frame_results.avg_physical_noise |
The average physical noise value measured during the aggregated number of frames |
vod_analysis_frame_results.max_physical_noise |
The maximum physical noise value measured during the aggregated number of frames |
vod_analysis_frame_results.min_physical_noise |
The minimum physical noise value measured during the aggregated number of frames |
vod_analysis_frame_results.max_pixel_luminance |
The highest measured pixel light level (in nits) in all of the aggregated frames. MaxCLL (Maximum Content Light Level) is the value of this field across an entire asset. |
vod_analysis_frame_results.max_pixel_luminance_x |
The image X coordinate of the highest measured pixel light level in the frame. |
vod_analysis_frame_results.max_pixel_luminance_y |
The image Y coordinate of the highest measured pixel light level in the frame. |
vod_analysis_frame_results.min_pixel_luminance |
The lowest measured pixel light level (in nits) in all of the aggregated frames |
vod_analysis_frame_results.min_pixel_luminance_x |
The image X coordinate of the lowest measured pixel light level in the frame. |
vod_analysis_frame_results.min_pixel_luminance_y |
The image Y coordinate of the lowest measured pixel light level in the frame. |
vod_analysis_frame_results.avg_frame_luminance |
The average frame light level (average of the frame’s pixel light levels, in nits) of the aggregated frames. |
vod_analysis_frame_results.max_frame_luminance |
The highest measured frame light level (average of the frame’s pixel light levels, in nits) in all of the aggregated frames. MaxFALL (Maximum Frame Average Light Level) is the value of this field across an entire asset. |
vod_analysis_frame_results.min_frame_luminance |
The lowest measured frame light level (average of the frame’s pixel light levels, in nits) in all of the aggregated frames |
vod_analysis_frame_results.avg_xeps |
The average IMAX Encoder Performance Score over the aggregated number of frames. |
vod_analysis_frame_results.max_xeps |
The maximum IMAX Encoder Performance Score over the aggregated number of frames. |
vod_analysis_frame_results.min_xeps |
The minimum IMAX Encoder Performance Score over the aggregated number of frames. |
vod_analysis_frame_results.avg_psnr |
The average PSNR value measured during the aggregated number of frames |
vod_analysis_frame_results.max_psnr |
The maximum PSNR value measured during the aggregated number of frames |
vod_analysis_frame_results.min_psnr |
The minimum PSNR value measured during the aggregated number of frames |
vod_analysis_frame_results.avg_bytes |
The average number of bytes in a frame |
vod_analysis_frame_results.max_bytes |
The maximum number of bytes in a frame |
vod_analysis_frame_results.min_bytes |
The minimum number of bytes in a frame |
vod_analysis_frame_results.avg_xbs |
The average amount of banding during the aggregated number of frames |
vod_analysis_frame_results.max_xbs |
The maximum measure of banding during the aggregated number of frames |
vod_analysis_frame_results.min_xbs |
The minimum measure of banding during the aggregated number of frames |
vod_analysis_frame_results.avg_vmaf |
The average VMAF value measured during the aggregated number of frames |
vod_analysis_frame_results.max_vmaf |
The maximum VMAF value measured during the aggregated number of frames |
vod_analysis_frame_results.min_vmaf |
The minimum VMAF value measured during the aggregated number of frames |
vod_analysis_frame_results.avg_xvs |
The average IMAX ViewerScore over the aggregated number of frames. |
vod_analysis_frame_results.max_xvs |
The maximum IMAX ViewerScore over the aggregated number of frames. |
vod_analysis_frame_results.min_xvs |
The minimum IMAX ViewerScore over the aggregated number of frames. |
And the following filter-only fields:
Metric/Measurement Field | Details |
---|---|
vod_analysis_results.use_smpte_timecode |
Apply the SMPTE start timecode to time value fields |
vod_analysis_frame_results.time_mode |
Display real time or SMPTE timecode |
vod_analysis_frame_results.use_drop_frame |
Enables/disables drop frames |
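These filter-only fields are supplied in the same filters object as ordinary filters. The following is a minimal Python sketch of such a filters object; the value labels shown are assumptions, not confirmed API values, so confirm the accepted values for your deployment:
# Filter-only fields ride along in the same "filters" object.
# NOTE: the value labels below are assumptions, not confirmed API values.
filters = {
    "vod_analysis_frame_results.analysis_uuid": "de66d3ab-7469-476b-9d15-965e173811ce",
    "vod_analysis_results.use_smpte_timecode": "yes",   # assumed value label
    "vod_analysis_frame_results.time_mode": "SMPTE",    # assumed value label
    "vod_analysis_frame_results.use_drop_frame": "no",  # assumed value label
}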
When audio measurement is enabled, each physical audio track within a video asset will be measured using the measurement standard configured within the audio object of the analysis submission. Using the Stream On-Demand Platform REST API, we can submit a new analysis choosing to include an audio configuration object with any of the associated assets, an example of which is shown below:
{
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 8
},
{
"checkType": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 30
},
{
"checkType": "MAX_INTEGRATED_LOUDNESS",
"enabled": true,
"threshold": -2
},
{
"checkType": "MAX_MOMENTARY_LOUDNESS",
"enabled": true,
"duration": 2,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_BS_1770_3",
"enabled": true
}
}
]
}
}
In the current release only a single group is permitted. All configuration and quality checks are applied to all discovered physical audio tracks within the video asset. Future releases will add the following:
- performing measurements and quality checks on selectable tracks and
- mixing of uncompressed audio channels from physical audio tracks to create a logical audio track for measurement and quality checks.
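For reference, the following is a minimal Python sketch of an analysis submission that attaches an audio configuration like the one above to a subject asset, assuming the third-party requests library; the host, asset name, path and storage location are placeholders reused from other examples in this guide, and the per-asset placement of the audio object follows the examples later in this section:
import requests

analysis = {
    "content": {"title": "Audio Loudness Measurement"},
    "subjectAssets": [
        {
            "name": "meridian_short_2020.mp4",  # placeholder asset
            "path": "demo",
            "storageLocation": {"type": "PVC", "name": "video-files-pvc"},
            "audio": {
                "enabled": True,
                "groups": [
                    {
                        "loudnessMeasurements": {
                            "enabled": True,
                            "algorithm": "ITU_BS_1770_3",
                        },
                        "qualityCheckConfig": {
                            "loudnessChecks": [
                                {
                                    "checkType": "MAX_INTEGRATED_LOUDNESS",
                                    "enabled": True,
                                    "threshold": -2,
                                }
                            ]
                        },
                    }
                ],
            },
        }
    ],
}

resp = requests.post("https://example.sct.imax.lan/api/vod/v1/analyses",
                     json=analysis, headers={"Accept": "application/json"})
resp.raise_for_status()
print(resp.json())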
The audio loudness results can be fetched from Insights using the vod_audio_loudness
view, as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 9tVkH455J5Z8X5jGVDBpRbSwBJz5f7vcHTftRv7K' \
--data '{
"model": "on_demand",
"view": "vod_audio_loudness",
"fields": [
"vod_audio_loudness.analysis_uuid",
"vod_audio_loudness.test_video_time",
"vod_audio_loudness.integrated_loudness",
"vod_audio_loudness.loudness_range",
"vod_audio_loudness.high_lra",
"vod_audio_loudness.low_lra",
"vod_audio_loudness.max_momentary_loudness",
"vod_audio_loudness.min_momentary_loudness",
"vod_audio_loudness.max_short_term_loudness",
"vod_audio_loudness.min_short_term_loudness",
"vod_audio_loudness.max_true_peak_channel_ch1",
"vod_audio_loudness.min_true_peak_channel_ch1",
"vod_audio_loudness.max_true_peak_channel_ch2",
"vod_audio_loudness.min_true_peak_channel_ch2",
"vod_audio_loudness.max_true_peak",
"vod_audio_loudness.min_true_peak",
"vod_audio_loudness.stream_index"
],
"filters": {
"vod_audio_loudness.analysis_uuid": "b8b634d3-11f5-4598-b842-9cde5d58c713"
},
"sorts": [
"vod_audio_loudness.test_video_time desc"
]
}'
The abbreviated JSON response payload is as follows:
[
{
"vod_audio_loudness.analysis_uuid": "b8b634d3-11f5-4598-b842-9cde5d58c713",
"vod_audio_loudness.test_video_time": "00:03:00",
"vod_audio_loudness.stream_index": 1,
"vod_audio_loudness.integrated_loudness": -23.53,
"vod_audio_loudness.loudness_range": 4.14,
"vod_audio_loudness.high_lra": -21.86,
"vod_audio_loudness.low_lra": -26,
"vod_audio_loudness.max_momentary_loudness": -22.52,
"vod_audio_loudness.min_momentary_loudness": -25.05,
"vod_audio_loudness.max_short_term_loudness": -22.87,
"vod_audio_loudness.min_short_term_loudness": -23.27,
"vod_audio_loudness.max_true_peak_channel_ch1": -19.58,
"vod_audio_loudness.min_true_peak_channel_ch1": -31.7,
"vod_audio_loudness.max_true_peak_channel_ch2": -19.66,
"vod_audio_loudness.min_true_peak_channel_ch2": -31.7,
"vod_audio_loudness.max_true_peak": -10.52,
"vod_audio_loudness.min_true_peak": -120
},
{
"vod_audio_loudness.analysis_uuid": "b8b634d3-11f5-4598-b842-9cde5d58c713",
"vod_audio_loudness.test_video_time": "00:02:59",
"vod_audio_loudness.stream_index": 1,
"vod_audio_loudness.integrated_loudness": null,
"vod_audio_loudness.loudness_range": null,
"vod_audio_loudness.high_lra": null,
"vod_audio_loudness.low_lra": null,
"vod_audio_loudness.max_momentary_loudness": -20.88,
"vod_audio_loudness.min_momentary_loudness": -24.08,
"vod_audio_loudness.max_short_term_loudness": -23.24,
"vod_audio_loudness.min_short_term_loudness": -23.55,
"vod_audio_loudness.max_true_peak_channel_ch1": -18.56,
"vod_audio_loudness.min_true_peak_channel_ch1": -24.58,
"vod_audio_loudness.max_true_peak_channel_ch2": -18.56,
"vod_audio_loudness.min_true_peak_channel_ch2": -24.44,
"vod_audio_loudness.max_true_peak": -9.4,
"vod_audio_loudness.min_true_peak": -120
},
{
"vod_audio_loudness.analysis_uuid": "b8b634d3-11f5-4598-b842-9cde5d58c713",
"vod_audio_loudness.test_video_time": "00:02:58",
"vod_audio_loudness.stream_index": 1,
"vod_audio_loudness.integrated_loudness": null,
"vod_audio_loudness.loudness_range": null,
"vod_audio_loudness.high_lra": null,
"vod_audio_loudness.low_lra": null,
"vod_audio_loudness.max_momentary_loudness": -22.14,
"vod_audio_loudness.min_momentary_loudness": -24.72,
"vod_audio_loudness.max_short_term_loudness": -23.49,
"vod_audio_loudness.min_short_term_loudness": -23.88,
"vod_audio_loudness.max_true_peak_channel_ch1": -19.58,
"vod_audio_loudness.min_true_peak_channel_ch1": -27.74,
"vod_audio_loudness.max_true_peak_channel_ch2": -19.49,
"vod_audio_loudness.min_true_peak_channel_ch2": -27.54,
"vod_audio_loudness.max_true_peak": -10.46,
"vod_audio_loudness.min_true_peak": -120
},
.
.
.
]
We can make the following conclusions based on the above results:
- The asset is 3 minutes in length
- The stream index in the response indicates the first physical audio track that is scored
- Audio track enumeration can be correlated with the vod_audio_metadata view
- Each result object contains 1s worth of measurements
- This means that the min/max True Peak, Short-Term Loudness and Momentary Loudness values apply only to that reporting period and do not represent measurements for the entire asset
- Integrated Loudness and Loudness Range are returned only on the last data point ("vod_audio_loudness.test_video_time": "00:03:00" in the example above) as these measures are based on the entire audio track
- There are at least 2 audio channels monitored and their individual True Peak measurements are returned
- The mapping of audio channel indexes (ch1, ch2) is discussed in the following section
Details regarding units for these measures can be found in the Audio section.
Please note that audio loudness measurements are reported in terms of when they occur during the presentation of video.
Audio frames that exist in an asset or sidecar will not be processed if the presentation time for those frames is prior to the presentation time of the first frame of video in the analysis. Processing of audio frames will begin at the time of the first video frame, such that audio loudness and quality checks are always reported in terms of a video time greater than or equal to 0 seconds.
In the event that audio presentation continues after the last frame of video, audio frames will continue to be processed. Audio loudness measurements and quality checks will continue to be reported in terms of video time, as if the presentation of video had continued until the end of playback for all associated audio tracks.
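Since Integrated Loudness and Loudness Range are populated only on the final data point, a small amount of client-side logic is needed to extract the asset-wide figures. The following is a minimal Python sketch, using two rows abbreviated from the example response above:
# Abbreviated rows from the example vod_audio_loudness response above.
rows = [
    {"vod_audio_loudness.test_video_time": "00:03:00",
     "vod_audio_loudness.integrated_loudness": -23.53,
     "vod_audio_loudness.loudness_range": 4.14},
    {"vod_audio_loudness.test_video_time": "00:02:59",
     "vod_audio_loudness.integrated_loudness": None,
     "vod_audio_loudness.loudness_range": None},
]

# The asset-wide measures live on the single row where they are non-null.
final = next(r for r in rows
             if r["vod_audio_loudness.integrated_loudness"] is not None)
print("Integrated Loudness:", final["vod_audio_loudness.integrated_loudness"])
print("Loudness Range:", final["vod_audio_loudness.loudness_range"])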
True Peak and Channel Ordering
The vod_audio_loudness
view enumerates the individual channels within the track when reporting True Peak levels, from channels 1 to 32, based on the channel layout. This is based on the information in the Audio section titled Number of Channels to Channel Layout Mapping. Note that this is not an exhaustive list and other channel layouts may differ.
Channel Layout | Channel Mapping |
---|---|
Mono | 1-FC |
Stereo | 1-FL 2-FR |
2.1 | 1-FL 2-FR 3-LFE |
4.0 | 1-FL 2-FR 3-FC 4-BC |
5.0 (back) | 1-FL 2-FR 3-FC 4-BL 5-BR |
5.1 (back) | 1-FL 2-FR 3-FC 4-LFE 5-BL 6-BR |
6.1 | 1-FL 2-FR 3-FC 4-LFE 5-BC 6-SL 7-SR |
7.1 | 1-FL 2-FR 3-FC 4-LFE 5-BL 6-BR 7-SL 8-SR |
Hexadecagonal | 1-FL 2-FR 3-FC 4-BL 5-BR 6-BC 7-SL 8-SR 9-TFL 10-TFC 11-TFR 12-TBL 13-TBC 14-TBR 15-WL 16-WR |
22.2 | 1-FL 2-FR 3-FC 4-LFE 5-BL 6-BR 7-FLC 8-FRC 9-BC 10-SL 11-SR 12-TC 13-TFL 14-TFC 15-TFR 16-TBL 17-TBC 18-TBR 19-LFE2 20-TSL 21-TSR 22-BFC 23-BFL 24-BFR |
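Because the per-channel True Peak fields are indexed only by channel number, it can be convenient to translate them into speaker labels client-side. The following is a minimal Python sketch; the CHANNEL_LAYOUTS dictionary is a hypothetical helper populated from the table above, not part of the API:
# Hypothetical helper: channel layouts taken from the table above.
CHANNEL_LAYOUTS = {
    "Stereo": ["FL", "FR"],
    "5.1 (back)": ["FL", "FR", "FC", "LFE", "BL", "BR"],
}

def true_peak_labels(layout):
    """Map each max_true_peak_channel_chN field name to its speaker label."""
    return {
        f"vod_audio_loudness.max_true_peak_channel_ch{i}": label
        for i, label in enumerate(CHANNEL_LAYOUTS[layout], start=1)
    }

print(true_peak_labels("5.1 (back)"))
# ch1 -> FL, ch2 -> FR, ch3 -> FC, ch4 -> LFE, ch5 -> BL, ch6 -> BR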
Supported view fields
The vod_audio_loudness
view exposes fields to capture various audio loudness measurements and metrics (e.g. integrated loudness, loudness range, momentary loudness, short-term loudness, true peak level) for the assets included in an analysis. The following section lists the various fields of interest on the vod_audio_loudness
view that can be used in the body of any request sent to the Insights REST API.
Metric/Measurement Field | Details |
---|---|
vod_audio_loudness.loudness_measurement_standard |
The loudness measurement standard used to measure the audio tracks, e.g. ITU_BS_1770_3 |
vod_audio_loudness.test_video_time |
The test video time with the interval automatically selected based on the length of the audio track |
vod_audio_loudness.test_video_time_1s |
The test video time in 1 second intervals |
vod_audio_loudness.integrated_loudness |
The Integrated Loudness measured over the entire audio track |
vod_audio_loudness.loudness_range |
The Loudness Range measured over the entire audio track |
vod_audio_loudness.high_lra |
The upper loudness value of the Loudness Range distribution |
vod_audio_loudness.low_lra |
The lower loudness value of the Loudness Range distribution |
vod_audio_loudness.max_momentary_loudness |
The maximum Momentary Loudness over the aggregation interval. |
vod_audio_loudness.max_short_term_loudness |
The maximum Short-Term Loudness over the aggregation interval. |
vod_audio_loudness.max_true_peak |
The maximum True Peak Level of all channels. |
vod_audio_loudness.max_true_peak_channel_ch1 |
The maximum True Peak Level for channel 1. |
vod_audio_loudness.max_true_peak_channel_ch2 |
The maximum True Peak Level for channel 2. |
vod_audio_loudness.max_true_peak_channel_ch3 |
The maximum True Peak Level for channel 3. |
vod_audio_loudness.max_true_peak_channel_ch4 |
The maximum True Peak Level for channel 4. |
vod_audio_loudness.max_true_peak_channel_ch5 |
The maximum True Peak Level for channel 5. |
vod_audio_loudness.max_true_peak_channel_ch6 |
The maximum True Peak Level for channel 6. |
vod_audio_loudness.max_true_peak_channel_ch7 |
The maximum True Peak Level for channel 7. |
vod_audio_loudness.max_true_peak_channel_ch8 |
The maximum True Peak Level for channel 8. |
vod_audio_loudness.max_true_peak_channel_ch9 |
The maximum True Peak Level for channel 9. |
vod_audio_loudness.max_true_peak_channel_ch10 |
The maximum True Peak Level for channel 10. |
vod_audio_loudness.max_true_peak_channel_ch11 |
The maximum True Peak Level for channel 11. |
vod_audio_loudness.max_true_peak_channel_ch12 |
The maximum True Peak Level for channel 12. |
vod_audio_loudness.max_true_peak_channel_ch13 |
The maximum True Peak Level for channel 13. |
vod_audio_loudness.max_true_peak_channel_ch14 |
The maximum True Peak Level for channel 14. |
vod_audio_loudness.max_true_peak_channel_ch15 |
The maximum True Peak Level for channel 15. |
vod_audio_loudness.max_true_peak_channel_ch16 |
The maximum True Peak Level for channel 16. |
vod_audio_loudness.max_true_peak_channel_ch17 |
The maximum True Peak Level for channel 17. |
vod_audio_loudness.max_true_peak_channel_ch18 |
The maximum True Peak Level for channel 18. |
vod_audio_loudness.max_true_peak_channel_ch19 |
The maximum True Peak Level for channel 19. |
vod_audio_loudness.max_true_peak_channel_ch20 |
The maximum True Peak Level for channel 20. |
vod_audio_loudness.max_true_peak_channel_ch21 |
The maximum True Peak Level for channel 21. |
vod_audio_loudness.max_true_peak_channel_ch22 |
The maximum True Peak Level for channel 22. |
vod_audio_loudness.max_true_peak_channel_ch23 |
The maximum True Peak Level for channel 23. |
vod_audio_loudness.max_true_peak_channel_ch24 |
The maximum True Peak Level for channel 24. |
vod_audio_loudness.max_true_peak_channel_ch25 |
The maximum True Peak Level for channel 25. |
vod_audio_loudness.max_true_peak_channel_ch26 |
The maximum True Peak Level for channel 26. |
vod_audio_loudness.max_true_peak_channel_ch27 |
The maximum True Peak Level for channel 27. |
vod_audio_loudness.max_true_peak_channel_ch28 |
The maximum True Peak Level for channel 28. |
vod_audio_loudness.max_true_peak_channel_ch29 |
The maximum True Peak Level for channel 29. |
vod_audio_loudness.max_true_peak_channel_ch30 |
The maximum True Peak Level for channel 30. |
vod_audio_loudness.max_true_peak_channel_ch31 |
The maximum True Peak Level for channel 31. |
vod_audio_loudness.max_true_peak_channel_ch32 |
The maximum True Peak Level for channel 32. |
vod_audio_loudness.min_momentary_loudness |
The minimum Momentary Loudness over the aggregation interval. |
vod_audio_loudness.min_short_term_loudness |
The minimum Short-Term Loudness over the aggregation interval. |
vod_audio_loudness.min_true_peak |
The minimum True Peak Level of all channels. |
vod_audio_loudness.min_true_peak_channel_ch1 |
The minimum True Peak Level for channel 1. |
vod_audio_loudness.min_true_peak_channel_ch2 |
The minimum True Peak Level for channel 2. |
vod_audio_loudness.min_true_peak_channel_ch3 |
The minimum True Peak Level for channel 3. |
vod_audio_loudness.min_true_peak_channel_ch4 |
The minimum True Peak Level for channel 4. |
vod_audio_loudness.min_true_peak_channel_ch5 |
The minimum True Peak Level for channel 5. |
vod_audio_loudness.min_true_peak_channel_ch6 |
The minimum True Peak Level for channel 6. |
vod_audio_loudness.min_true_peak_channel_ch7 |
The minimum True Peak Level for channel 7. |
vod_audio_loudness.min_true_peak_channel_ch8 |
The minimum True Peak Level for channel 8. |
vod_audio_loudness.min_true_peak_channel_ch9 |
The minimum True Peak Level for channel 9. |
vod_audio_loudness.min_true_peak_channel_ch10 |
The minimum True Peak Level for channel 10. |
vod_audio_loudness.min_true_peak_channel_ch11 |
The minimum True Peak Level for channel 11. |
vod_audio_loudness.min_true_peak_channel_ch12 |
The minimum True Peak Level for channel 12. |
vod_audio_loudness.min_true_peak_channel_ch13 |
The minimum True Peak Level for channel 13. |
vod_audio_loudness.min_true_peak_channel_ch14 |
The minimum True Peak Level for channel 14. |
vod_audio_loudness.min_true_peak_channel_ch15 |
The minimum True Peak Level for channel 15. |
vod_audio_loudness.min_true_peak_channel_ch16 |
The minimum True Peak Level for channel 16. |
vod_audio_loudness.min_true_peak_channel_ch17 |
The minimum True Peak Level for channel 17. |
vod_audio_loudness.min_true_peak_channel_ch18 |
The minimum True Peak Level for channel 18. |
vod_audio_loudness.min_true_peak_channel_ch19 |
The minimum True Peak Level for channel 19. |
vod_audio_loudness.min_true_peak_channel_ch20 |
The minimum True Peak Level for channel 20. |
vod_audio_loudness.min_true_peak_channel_ch21 |
The minimum True Peak Level for channel 21. |
vod_audio_loudness.min_true_peak_channel_ch22 |
The minimum True Peak Level for channel 22. |
vod_audio_loudness.min_true_peak_channel_ch23 |
The minimum True Peak Level for channel 23. |
vod_audio_loudness.min_true_peak_channel_ch24 |
The minimum True Peak Level for channel 24. |
vod_audio_loudness.min_true_peak_channel_ch25 |
The minimum True Peak Level for channel 25. |
vod_audio_loudness.min_true_peak_channel_ch26 |
The minimum True Peak Level for channel 26. |
vod_audio_loudness.min_true_peak_channel_ch27 |
The minimum True Peak Level for channel 27. |
vod_audio_loudness.min_true_peak_channel_ch28 |
The minimum True Peak Level for channel 28. |
vod_audio_loudness.min_true_peak_channel_ch29 |
The minimum True Peak Level for channel 29. |
vod_audio_loudness.min_true_peak_channel_ch30 |
The minimum True Peak Level for channel 30. |
vod_audio_loudness.min_true_peak_channel_ch31 |
The minimum True Peak Level for channel 31. |
vod_audio_loudness.min_true_peak_channel_ch32 |
The minimum True Peak Level for channel 32. |
StreamAware can be used to detect and report the cadence patterns for the videos in a given analysis. To enable cadence pattern detection, use the Stream On-Demand Platform REST API to submit an analyzerConfig
object with the enableCadencePatternDetection
set to true
as shown below:
"analyzerConfig": {
"enableCadencePatternDetection": true,
"enableBandingDetection": true,
"enableColorVolumeDifference": true,
.
.
.
}
The cadence pattern results can be fetched from Insights using the vod_cadence_pattern
view as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 9tVkH455J5Z8X5jGVDBpRbSwBJz5f7vcHTftRv7K' \
--data '{
"model": "on_demand",
"view": "vod_cadence_pattern",
"fields": [
"vod_cadence_pattern.cadence_pattern",
"vod_cadence_pattern.details",
"vod_cadence_pattern.pattern_start_frame",
"vod_cadence_pattern.pattern_end_frame",
"vod_cadence_pattern.cadence_pattern_offset",
"vod_cadence_pattern.pattern_start_time_timecode",
"vod_cadence_pattern.pattern_end_time_timecode",
"vod_cadence_pattern.test_id"
],
"filters": {
"vod_cadence_pattern.analysis_uuid": "28fe7faa-e396-4b7a-8fc0-9b9f34915ccf"
},
"sorts": [
"vod_cadence_pattern.pattern_start_frame"
]
}'
A few things to note:
- The vod_cadence_pattern.details field contains additional information, such as a confidence threshold that can be used to filter out results
- The JSON object is in this form: "{\"confidence\":66}"
- The vod_cadence_pattern.cadence_pattern will be one of the cadences described in the Cadence Pattern Detection section, or Unknown (Low Motion), or 1:1 (No Pattern)
The above request will give the following JSON result:
[
{
"vod_cadence_pattern.pattern_start_frame": 1,
"vod_cadence_pattern.pattern_end_frame": 90,
"vod_cadence_pattern.test_id": "1",
"vod_cadence_pattern.cadence_pattern": "Unknown (Low Motion)",
"vod_cadence_pattern.details": "{\"confidence\":66}",
"vod_cadence_pattern.cadence_pattern_offset": 0,
"vod_cadence_pattern.pattern_start_time_timecode": "00:00:00:01",
"vod_cadence_pattern.pattern_end_time_timecode": "00:00:03:00"
},
{
"vod_cadence_pattern.pattern_start_frame": 2,
"vod_cadence_pattern.pattern_end_frame": 101,
"vod_cadence_pattern.test_id": "1-1",
"vod_cadence_pattern.cadence_pattern": "Unknown (Low Motion)",
"vod_cadence_pattern.details": "{\"confidence\":73}",
"vod_cadence_pattern.cadence_pattern_offset": 0,
"vod_cadence_pattern.pattern_start_time_timecode": "00:00:00:02",
"vod_cadence_pattern.pattern_end_time_timecode": "00:00:03:11"
},
{
"vod_cadence_pattern.pattern_start_frame": 91,
"vod_cadence_pattern.pattern_end_frame": 1800,
"vod_cadence_pattern.test_id": "1",
"vod_cadence_pattern.cadence_pattern": "7:8",
"vod_cadence_pattern.details": "{\"confidence\":100}",
"vod_cadence_pattern.cadence_pattern_offset": 8,
"vod_cadence_pattern.pattern_start_time_timecode": "00:00:03:01",
"vod_cadence_pattern.pattern_end_time_timecode": "00:01:00:00"
},
{
"vod_cadence_pattern.pattern_start_frame": 102,
"vod_cadence_pattern.pattern_end_frame": 1800,
"vod_cadence_pattern.test_id": "1-1",
"vod_cadence_pattern.cadence_pattern": "2:2:3:3",
"vod_cadence_pattern.details": "{\"confidence\":100}",
"vod_cadence_pattern.cadence_pattern_offset": 8,
"vod_cadence_pattern.pattern_start_time_timecode": "00:00:03:12",
"vod_cadence_pattern.pattern_end_time_timecode": "00:01:00:00"
}
]
We can make the following conclusions based on the results above:
- There are two assets in the analysis, one with test ID
1
and one with test ID1-1
- For asset with test ID
1
- There were 2 observed cadences
- One cadence at the beginning of the sequence had low motion and thus a cadence could not be detected
- Reported as "Unknown (Low Motion)" and observed from frame 1 to frame 90
- There was another observed cadence from frame 91 to the end of the asset (frame 1800)
- This cadence was reported as "7:8" with 100% confidence
- For asset with test ID
1-1
- There were 2 observed cadences
- One cadence at the beginning of the sequence had low motion and thus a cadence could not be detected
- Reported as "Unknown (Low Motion)" and observed from frame 2 to frame 101
- There was another observed cadence from frame 102 to the end of the asset (frame 1800)
- This cadence was reported as "2:2:3:3" with 100% confidence
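Because the confidence value is embedded in the details JSON string, filtering on it is easiest client-side. The following is a minimal Python sketch using two rows abbreviated from the response above:
import json

# Abbreviated rows from the example vod_cadence_pattern response above.
rows = [
    {"vod_cadence_pattern.cadence_pattern": "Unknown (Low Motion)",
     "vod_cadence_pattern.details": "{\"confidence\":66}"},
    {"vod_cadence_pattern.cadence_pattern": "7:8",
     "vod_cadence_pattern.details": "{\"confidence\":100}"},
]

# The details field is a JSON string, so parse it before thresholding.
confident = [
    r for r in rows
    if json.loads(r["vod_cadence_pattern.details"]).get("confidence", 0) >= 90
]
print(confident)  # keeps only the 7:8 pattern detected at 100% confidence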
Supported view fields
The vod_cadence_pattern
view exposes fields to capture metadata about the various video cadence patterns detected (e.g. start/end time, duration, pattern, pattern offset, etc.) for the assets included in an analysis. The following section lists the various fields of interest on the vod_cadence_pattern
view that can be used in the body of any request sent to the Insights REST API.
Metric/Measurement Field | Details |
---|---|
vod_cadence_pattern.cadence_pattern |
The cadence pattern, e.g. 3:2 |
vod_cadence_pattern.pattern_start_frame |
The start time of the detected cadence pattern in frames |
vod_cadence_pattern.pattern_end_frame |
The end time of the detected cadence pattern in frames |
vod_cadence_pattern.pattern_start_time_timecode |
The start time of the detected cadence pattern as a SMPTE Timecode |
vod_cadence_pattern.pattern_start_time_full_label |
The start time of the detected cadence pattern in HH:MM:SS.mmm format (hours, minutes, seconds, milliseconds) |
vod_cadence_pattern.pattern_end_time_timecode |
The end time of the detected cadence pattern as a SMPTE Timecode |
vod_cadence_pattern.pattern_end_time_full_label |
The end time of the detected cadence pattern in HH:MM:SS.mmm format (hours, minutes, seconds, milliseconds) |
vod_cadence_pattern.pattern_duration_frames |
The duration of the detected cadence pattern in frames |
vod_cadence_pattern.pattern_duration_percentage |
The duration of the detected cadence pattern as a percentage of asset duration |
vod_cadence_pattern.cadence_pattern_offset |
The offset of the pattern (number of frames shifted/missing from the start of the pattern) |
vod_cadence_pattern.details |
Additional information from the cadence detection algorithm |
And the following filter-only fields:
Metric/Measurement Field | Details |
---|---|
vod_analysis_results.use_smpte_timecode |
Apply the SMPTE start timecode to time value fields |
vod_analysis_frame_results.time_mode |
Display real time or SMPTE timecode |
vod_analysis_frame_results.use_drop_frame |
Enables/disables drop frames |
StreamAware supports various quality checks which can be used to identify assets with unacceptable quality. The focus of this section is on the options for retrieving quality check data from a given analysis. To that end, the discussion will center largely around the following Insights views:
View | Details |
---|---|
vod_quality_checks |
This view exposes fields to capture the various video quality checks, asset audio quality checks and asset score checks supported by StreamAware. |
vod_closed_captions_metadata |
This view exposes fields to capture the periods of missing closed captions metadata for the assets included in an analysis. |
StreamAware can be used to perform a number of video quality checks. A video quality check focuses on conditions outside of perceptual quality that would result in unacceptable quality, such as freeze frames, black frames, solid color frames, FPS mismatches, unwanted cadence changes and incorrect and/or mismatched metadata.
Using the Stream On-Demand Platform REST API, you can submit a no-reference analysis to find all known video quality problems, as shown below:
curl --location 'https://example.sct.imax.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
"content": {
"title": "Video Quality Check Failures"
},
"subjectAssets": [
{
"name": "meridian_short_2020.mp4",
"path": "demo",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"analyzerConfig": {
"qualityCheckConfig": {
"enabled": true
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
]
}
}'
When you enable the qualityCheckConfig
as shown above, the On-Demand Analyzer will perform all the video quality checks allowed by your feature license that are configurable at the per-analysis level, using the default configuration values described here. However, in our example, we want to override the defaults and find video quality check failures at more granular durations while also relaxing the sensitivity of the freeze frame detection, as shown below:
curl --location 'https://example.sct.imax.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
"content": {
"title": "Video Quality Check Failures"
},
"subjectAssets": [
{
"name": "meridian_short_2020.mp4",
"path": "demo",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"analyzerConfig": {
"qualityCheckConfig": {
"enabled": true,
"duration": 1,
"freezeFrame": {
"sensitivity": 25
}
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
]
}
}'
Let’s check for video quality check failures by querying Insights using the fields on the vod_quality_checks
view, as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mF8hbXKbJZxfCTxbPstKjchG8b72g22rMszRQDk6' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.analysis_uuid",
"vod_quality_checks.time",
"vod_quality_checks.title",
"vod_quality_checks.test_id",
"vod_quality_checks.test_video",
"vod_quality_checks.category",
"vod_quality_checks.type",
"vod_quality_checks.interval_start_time",
"vod_quality_checks.interval_start_time_seconds",
"vod_quality_checks.interval_end_time",
"vod_quality_checks.interval_end_time_seconds",
"vod_quality_checks.check_duration_seconds",
"vod_quality_checks.check_duration_percentage",
"vod_quality_checks.test_video_duration",
"vod_quality_checks.test_video_duration_seconds",
"vod_quality_checks.trimmed_test_video_duration",
"vod_quality_checks.trimmed_test_video_duration_seconds",
"vod_quality_checks.details"
],
"filters": {
"vod_quality_checks.analysis_uuid": "53702b37-b30c-4c79-9c31-409a907a11ac"
},
"sorts": [
"vod_quality_checks.interval_start_time"
]
}'
You are encouraged to sort by vod_quality_checks.interval_start_time
so that the quality check failures are reported in the same order in which they appear in the asset.
Quality check events pertaining to the entire asset will not have a start time, end time or duration associated with them.
The abbreviated JSON response payload is as follows:
[
{
"vod_quality_checks.analysis_uuid": "af2889e7-26f6-46b0-99e8-41848f0e2b00",
"vod_quality_checks.time": "2023-04-14 11:39:48.244000",
"vod_quality_checks.title": "UHD HDR Source Validation",
"vod_quality_checks.test_id": "1",
"vod_quality_checks.test_video": "file:///mnt/on-201/demo/meridian_short_2020.mp4",
"vod_quality_checks.category": "Video",
"vod_quality_checks.type": "Freeze Frame",
"vod_quality_checks.interval_start_time": "00:01:05.549",
"vod_quality_checks.interval_start_time_seconds": 65.549,
"vod_quality_checks.interval_end_time": "00:01:09.102",
"vod_quality_checks.interval_end_time_seconds": 69.102,
"vod_quality_checks.test_video_duration": "00:02:52",
"vod_quality_checks.test_video_duration_seconds": 172.088,
"vod_quality_checks.trimmed_test_video_duration": "00:02:52",
"vod_quality_checks.trimmed_test_video_duration_seconds": 172.088,
"vod_quality_checks.details": null,
"vod_quality_checks.check_duration_seconds": 3.553,
"vod_quality_checks.check_duration_percentage": 0.020646413463
},
{
"vod_quality_checks.analysis_uuid": "af2889e7-26f6-46b0-99e8-41848f0e2b00",
"vod_quality_checks.time": "2023-04-14 11:41:30.477000",
"vod_quality_checks.title": "UHD HDR Source Validation",
"vod_quality_checks.test_id": "1",
"vod_quality_checks.test_video": "file:///mnt/on-201/demo/meridian_short_2020.mp4",
"vod_quality_checks.category": "Video",
"vod_quality_checks.type": "Black Frame",
"vod_quality_checks.interval_start_time": "00:01:18.495",
"vod_quality_checks.interval_start_time_seconds": 78.495,
"vod_quality_checks.interval_end_time": "00:01:23.483",
"vod_quality_checks.interval_end_time_seconds": 83.483,
"vod_quality_checks.test_video_duration": "00:02:52",
"vod_quality_checks.test_video_duration_seconds": 172.088,
"vod_quality_checks.trimmed_test_video_duration": "00:02:52",
"vod_quality_checks.trimmed_test_video_duration_seconds": 172.088,
"vod_quality_checks.details": null,
"vod_quality_checks.check_duration_seconds": 4.988,
"vod_quality_checks.check_duration_percentage": 0.028985170378
},
.
.
.
]
Notice in the results above:
- A freeze frame condition was detected at approximately 1:05 and lasted 3.5s.
- A black frame condition was detected at approximately 1:18 and lasted almost 5s.
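A common follow-up is to tally the failures by type to triage an asset at a glance. The following is a minimal Python sketch using two rows abbreviated from the response above:
from collections import defaultdict

# Abbreviated rows from the example vod_quality_checks response above.
rows = [
    {"vod_quality_checks.type": "Freeze Frame",
     "vod_quality_checks.check_duration_seconds": 3.553},
    {"vod_quality_checks.type": "Black Frame",
     "vod_quality_checks.check_duration_seconds": 4.988},
]

# Sum failed duration per quality check type.
totals = defaultdict(float)
for r in rows:
    totals[r["vod_quality_checks.type"]] += r["vod_quality_checks.check_duration_seconds"]

for check_type, seconds in totals.items():
    print(f"{check_type}: {seconds:.3f}s")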
Cadence Pattern Quality Checks
There are three types of cadence pattern quality checks that can be made:
- Multiple Cadences - a quality check is raised if multiple cadence patterns are observed in the test asset
- Broken Cadence - a quality check is raised if a broken cadence pattern is detected, where a broken cadence pattern means a dropped field/frame within the pattern
- Allowed Cadences - a quality check is raised if a cadence pattern is observed that is not within the list of allowed cadences
Using the Stream On-Demand Platform REST API, we can configure the On-Demand Analyzer as shown below:
"analyzerConfig": {
"enableCadencePatternDetection": true,
"qualityCheckConfig": {
"multipleCadences": true,
"brokenCadence": true,
"allowedCadences": ["2:2"]
}
.
.
.
}
In the example above, we do the following:
- enable measurement of Cadence Pattern Detection,
- raise a quality check if multiple cadence patterns are detected,
- raise a quality check if a broken cadence pattern is detected and
- raise a quality check if any cadence pattern that is detected is not “2:2”.
To check if this is the case, the following query to Insights can be made:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mF8hbXKbJZxfCTxbPstKjchG8b72g22rMszRQDk6' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.test_id",
"vod_quality_checks.interval_start_time",
"vod_quality_checks.interval_end_time",
"vod_quality_checks.type",
"vod_quality_checks.details"
],
"filters": {
"vod_quality_checks.analysis_uuid": "810e1040-7825-49b2-96a1-0716cc7e1e66"
},
"sorts": [
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.type"
]
}'
When one of the conditions for the quality check is triggered, the response will look similar to the following:
[
{
"vod_quality_checks.test_id": "1",
"vod_quality_checks.interval_start_time": "00:00:00.000",
"vod_quality_checks.interval_end_time": "00:00:30.000",
"vod_quality_checks.type": "Cadence Not Allowed",
"vod_quality_checks.details": "Detected cadence 1:1:1:2"
}
]
We can make the following conclusions based on the above results:
- there is a quality check failure for the asset with test ID 1,
- the quality check was raised over the interval from 0 to 30 seconds,
- the quality check type is "Cadence Not Allowed" and
- the detected cadence pattern is "1:1:1:2", which is not in the allowed cadence pattern list ("2:2") from the quality check config.
StreamAware can be used to perform a number of audio quality checks.
Audio Loudness Quality Checks
An audio loudness quality check performs level-based and range-based assessments of the audio tracks to ensure that standards of acceptability are being met.
There are two types of audio loudness quality checks supported:
- Level Based Quality Checks
- Range Based Quality Checks
Level Based Quality Checks
Level Based Quality Checks have only a single threshold that will trigger a quality check failure. This includes:
- MAX_MOMENTARY_LOUDNESS
- MAX_SHORT_TERM_LOUDNESS
Once the threshold is exceeded for the configured duration, a quality check failure is raised.
An example "audio"
section for an asset within an analysis submission for a MAX_SHORT_TERM_LOUDNESS
quality check can be found below.
{
.
.
.
"audio": {
"groups": [
{
"loudnessMeasurements": {
"enabled": true,
"algorithm": "ITU_BS_1770_3"
},
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MAX_SHORT_TERM_LOUDNESS",
"enabled": true,
"threshold": -130.7,
"duration": 8
}
]
}
}
],
"enabled": true
}
.
.
.
}
The above quality check will fail if the max Short-Term Loudness exceeds -130.7 LUFS for at least 8s.
To check if this is the case, the following query to Insights can be made.
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mF8hbXKbJZxfCTxbPstKjchG8b72g22rMszRQDk6' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.test_id",
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.interval_end_time_full_label",
"vod_quality_checks.type",
"vod_quality_checks.check_duration_seconds",
"vod_quality_checks.details",
"vod_quality_checks.stream_index"
],
"filters": {
"vod_quality_checks.analysis_uuid": "a650491d-5365-41e9-9c86-8d63d9859472"
},
"sorts": [
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.type"
]
}'
When the quality check is triggered, the JSON response will look like this:
[
{
"vod_quality_checks.test_id": "1",
"vod_quality_checks.interval_start_time_full_label": "00:00:00.000",
"vod_quality_checks.interval_end_time_full_label": "00:02:59.800",
"vod_quality_checks.type": "High Short-Term Loudness",
"vod_quality_checks.details": "Short-Term Loudness was above the configured threshold of -130.7 LUFS for longer than 8 seconds",
"vod_quality_checks.stream_index": 1,
"vod_quality_checks.check_duration_seconds": 179.8
}
]
We can make the following conclusions based on the above results:
- There was an exception raised on test_id 1
- A stream index of 1 means that this quality check failure occurred on the first observed audio track
- See the vod_audio_metadata view for the ability to link more metadata to this result
- The analyzer measured a Short-Term Loudness above -130.7 LUFS for greater than 8 seconds
- The period of high Short-Term Loudness occurred during video playback between 00:00:00.000 and 00:02:59.800
Range Based Quality Checks
Often a single threshold is insufficient and instead a value should remain within a specific range. This type of quality check is supported for the following check types:
- MAX_TRUE_PEAK_LEVEL
- MIN_TRUE_PEAK_LEVEL
- MAX_LOUDNESS_RANGE
- MIN_LOUDNESS_RANGE
- MAX_INTEGRATED_LOUDNESS
- MIN_INTEGRATED_LOUDNESS
Range Based Quality Checks can also be used as Level Based Quality Checks by omitting either the MAX or MIN check when defining the loudnessChecks object in the job submission.
An example "audio"
section for an asset within an analysis submission can be found below.
{
.
.
.
"audio": {
"groups": [
{
"loudnessMeasurements": {
"enabled": true,
"algorithm": "ITU_BS_1770_3"
},
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 10.1
},
{
"checkType": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 15.1
}
]
}
}
],
"enabled": true
}
.
.
.
}
The above quality check will fail if the Loudness Range is outside the range of 10.1 LU to 15.1 LU.
To check if this is the case, the following query to Insights can be made.
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mF8hbXKbJZxfCTxbPstKjchG8b72g22rMszRQDk6' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.test_id",
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.interval_end_time_full_label",
"vod_quality_checks.type",
"vod_quality_checks.details",
"vod_quality_checks.stream_index"
],
"filters": {
"vod_quality_checks.analysis_uuid": "a650491d-5365-41e9-9c86-8d63d9859472"
},
"sorts": [
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.type"
]
}'
When the quality check is triggered, the response will look like this:
[
{
"vod_quality_checks.test_id": "1",
"vod_quality_checks.interval_start_time_full_label": null,
"vod_quality_checks.interval_end_time_full_label": null,
"vod_quality_checks.type": "Low Loudness Range",
"vod_quality_checks.details": "Loudness Range of 4.14 LU was below the configured threshold of 10.1 LU",
"vod_quality_checks.stream_index": 1
}
]
We can make the following conclusions based on the above results:
- There was an exception raised on test_id 1
- A stream index of 1 means that this quality check failure occurred on the first observed audio track
- See the vod_audio_metadata view for the ability to link more metadata to this result
- The analyzer measured a Loudness Range of 4.14 LU upon completion of the analysis
- Loudness Range applies to the entire asset and therefore
interval_start_time_full_label
andinterval_end_time_full_label
are null
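As a sanity check, the same conclusion can be reproduced from the vod_audio_loudness view. The following is a minimal Python sketch comparing the asset-wide Loudness Range (4.14 LU, from the final data point of the loudness query earlier) against the range configured in the quality check above:
# Configured range from the quality check above.
MIN_LRA, MAX_LRA = 10.1, 15.1

# Asset-wide Loudness Range from the final vod_audio_loudness data point.
loudness_range = 4.14

if not (MIN_LRA <= loudness_range <= MAX_LRA):
    print(f"Loudness Range {loudness_range} LU is outside {MIN_LRA}-{MAX_LRA} LU")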
Audio Artifact Quality Checks
An audio artifact quality check performs level-based and signal-based assessments of the audio tracks to ensure that unwanted noise (e.g. clicks, pops and clipping distortion) does not occur during playback.
Clicks and Pops Quality Checks
The On-Demand Analyzer can be configured to determine if clicks and pops are audible in audio tracks belonging to an asset.
An example "audio"
section with this quality check enabled for an asset within an analysis submission can be found below.
{
.
.
.
"audio": {
"groups": [
{
"loudnessMeasurements": {
"enabled": true,
"algorithm": "ITU_BS_1770_3"
},
"qualityCheckConfig": {
"clicksAndPopsCheck": [
{
"sensitivity": 50,
"enabled": true,
"skipStart": 1.25,
"skipEnd": 1.25
}
]
}
}
],
"enabled": true
}
.
.
.
}
An optional sensitivity parameter is provided to tune the detection algorithm, where a higher value leads to more detections but may result in false positives.
The above quality check event will be triggered if any clicks or pops are detected within an audio track.
To check if this is the case, the following query to Insights can be made.
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mF8hbXKbJZxfCTxbPstKjchG8b72g22rMszRQDk6' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.test_id",
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.interval_end_time_full_label",
"vod_quality_checks.type",
"vod_quality_checks.interval_start_time_seconds",
"vod_quality_checks.interval_end_time_seconds",
"vod_quality_checks.check_duration_seconds",
"vod_quality_checks.details",
"vod_quality_checks.stream_index"
],
"filters": {
"vod_quality_checks.analysis_uuid": "a650491d-5365-41e9-9c86-8d63d9859472"
},
"sorts": [
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.type"
]
}'
When the quality check is triggered, the response will look like this:
[
{
"vod_quality_checks.test_id": "1",
"vod_quality_checks.interval_start_time_full_label": null,
"vod_quality_checks.interval_end_time_full_label": null,
"vod_quality_checks.type": "Clicks/Pops On Front Center Channel",
"vod_quality_checks.interval_start_time_seconds": "00:00:04:500",
"vod_quality_checks.interval_end_time_seconds": "00:00:04:550",
"vod_quality_checks.check_duration_seconds": 0.05,
"vod_quality_checks.details": "Click/Pops were detected for the front center channel",
"vod_quality_checks.stream_index": 1
}
]
We can make the following conclusions based on the above results:
- There was an exception raised on test_id 1
- A stream index of 1 means that this quality check failure occurred on the first observed audio track
- See the vod_audio_metadata view for the ability to link more metadata to this result
- Clicks and/or pops were detected on the front center channel for 0.05 seconds, between the 4.50 and 4.55 second mark of video playback
Clipping Quality Check
The On-Demand Analyzer can be configured to determine if clipping is present in audio tracks belonging to an asset.
An example "audio"
section with this quality check enabled for an asset within an analysis submission can be found below.
{
.
.
.
"audio": {
"groups": [
{
"loudnessMeasurements": {
"enabled": true,
"algorithm": "ITU_BS_1770_3"
},
"qualityCheckConfig": {
"clippingCheck": [
{
"sensitivity": 50,
"enabled": true,
"duration": 1.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
]
}
}
],
"enabled": true
}
.
.
.
}
An optional sensitivity parameter is provided to tune the detection algorithm, where a higher value leads to more detections but may result in false positives.
The above quality check event will be triggered if any periods of clipping of at least 1.5 seconds are detected within an audio track.
To check if this is the case, the following query to Insights can be made.
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mF8hbXKbJZxfCTxbPstKjchG8b72g22rMszRQDk6' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.test_id",
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.interval_end_time_full_label",
"vod_quality_checks.type",
"vod_quality_checks.interval_start_time_seconds",
"vod_quality_checks.interval_end_time_seconds",
"vod_quality_checks.check_duration_seconds",
"vod_quality_checks.details",
"vod_quality_checks.stream_index"
],
"filters": {
"vod_quality_checks.analysis_uuid": "a650491d-5365-41e9-9c86-8d63d9859472"
},
"sorts": [
"vod_quality_checks.interval_start_time_full_label",
"vod_quality_checks.type"
]
}'
When the quality check is triggered, the response will look like this:
[
{
"vod_quality_checks.test_id": "1",
"vod_quality_checks.interval_start_time_full_label": null,
"vod_quality_checks.interval_end_time_full_label": null,
"vod_quality_checks.type": "Clipping On Front Left Channel",
"vod_quality_checks.interval_start_time_seconds": "00:00:10:500",
"vod_quality_checks.interval_end_time_seconds": "00:00:12:500",
"vod_quality_checks.check_duration_seconds": 2.000,
"vod_quality_checks.details": "Clipping was detected for the front left channel for longer than 1.5 seconds",
"vod_quality_checks.stream_index": 1
}
]
We can make the following conclusions based on the above results:
- There was an exception raised on test_id 1
- A stream index of 1 means that this quality check failure occurred on the first observed audio track
- See the vod_audio_metadata view for the ability to link more metadata to this result
- Clipping was detected on the front left channel for 2 seconds, between the 10.5 and 12.5 second mark of video playback
StreamAware can be used to perform a number of score checks. A score check allows one to apply numeric thresholds to various VisionScience measurements and metrics and reject assets that don’t meet the threshold.
Using the Stream On-Demand Platform REST API, let’s submit a full-reference analysis for a couple of encoded assets with suspected quality issues, as shown below:
curl --location 'https://example.sct.imax.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
"content": {
"title": "Dog Running FR"
},
"referenceAssets": [
{
"name": "dog_running.mp4",
"path": "royalty_free/dog_running/source",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"subjectAssets": [
{
"name": "dog_running_1080_h264_qp_31.mp4",
"path": "royalty_free/dog_running/outputs",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
},
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 85,
"durationSeconds": 1,
"viewingEnvironmentIndex": 0
},
{
"metric": "SBS",
"threshold": 40,
"durationSeconds": 1,
"viewingEnvironmentIndex": 0
}
]
}
},
{
"name": "dog_running_1080_h264_qp_41.mp4",
"path": "royalty_free/dog_running/outputs",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
},
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 85,
"durationSeconds": 1,
"viewingEnvironmentIndex": 0
},
{
"metric": "SBS",
"threshold": 40,
"durationSeconds": 1,
"viewingEnvironmentIndex": 0
}
]
}
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
]
}
}'
Notice in the request above:
- We have configured an XVS (formerly SVS and still referenced this way in the API request) score check on both subject/test assets, where we consider it to be unacceptable to have any period of more than 1s where the IMAX ViewerScore is less than 85.
- We have configured an XBS (formerly SBS and still referenced this way in the API request) score check on both subject/test assets, where we consider it to be unacceptable to have any period of more than 1s where the IMAX BandingScore is greater than 40.
The Stream On-Demand Platform REST API requires the inclusion of a valid ViewingEnvironment (device + viewer) when configuring score checks for XVS, XEPS and XBS.
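When the same thresholds apply to many subject assets, it can help to build the qualityCheckConfig programmatically rather than repeating JSON. The following is a minimal Python sketch; the score_checks helper is illustrative, not part of the API:
def score_checks(thresholds, duration_seconds=1, viewing_environment_index=0):
    """Build a per-asset qualityCheckConfig from a metric -> threshold map."""
    return {
        "scoreChecks": [
            {"metric": metric,
             "threshold": threshold,
             "durationSeconds": duration_seconds,
             "viewingEnvironmentIndex": viewing_environment_index}
            for metric, threshold in thresholds.items()
        ]
    }

# Same thresholds as the request above; attach as "qualityCheckConfig"
# on each subject asset in the submission.
quality_check_config = score_checks({"SVS": 85, "SBS": 40})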
As with the other quality checks, let’s use the fields on the vod_quality_checks
view to find any score check failures, as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 22srnScNTRYKNTpPpBtMBfTYjHDJqTPkfhQX6TQK' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.analysis_uuid",
"vod_quality_checks.time",
"vod_quality_checks.title",
"vod_quality_checks.test_id",
"vod_quality_checks.test_video",
"vod_quality_checks.category",
"vod_quality_checks.type",
"vod_quality_checks.interval_start_time",
"vod_quality_checks.interval_start_time_seconds",
"vod_quality_checks.interval_end_time",
"vod_quality_checks.interval_end_time_seconds",
"vod_quality_checks.check_duration_seconds",
"vod_quality_checks.check_duration_percentage",
"vod_quality_checks.test_video_duration",
"vod_quality_checks.test_video_duration_seconds",
"vod_quality_checks.trimmed_test_video_duration",
"vod_quality_checks.configuration_threshold",
"vod_quality_checks.trimmed_test_video_duration_seconds"
],
"filters": {
"vod_quality_checks.analysis_uuid": "5473a508-4dc8-47e2-9c89-792a4fafb061"
},
"sorts": [
"vod_quality_checks.interval_start_time"
]
}'
The abbreviated JSON response payload is as follows:
[
{
"vod_quality_checks.analysis_uuid": "5473a508-4dc8-47e2-9c89-792a4fafb061",
"vod_quality_checks.time": "2023-08-29 23:22:40.488000",
"vod_quality_checks.title": "Dog Running FR",
"vod_quality_checks.test_id": "1-2",
"vod_quality_checks.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_41.mp4",
"vod_quality_checks.category": "Video",
"vod_quality_checks.type": "SSIMPLUS Viewer Score",
"vod_quality_checks.interval_start_time": "00:00:00.000",
"vod_quality_checks.interval_start_time_seconds": 0,
"vod_quality_checks.interval_end_time": "00:00:19.160",
"vod_quality_checks.interval_end_time_seconds": 19.16,
"vod_quality_checks.test_video_duration": "00:00:20",
"vod_quality_checks.test_video_duration_seconds": 19.68,
"vod_quality_checks.trimmed_test_video_duration": "00:00:20",
"vod_quality_checks.configuration_threshold": "85",
"vod_quality_checks.trimmed_test_video_duration_seconds": 19.68,
"vod_quality_checks.check_duration_seconds": 19.16,
"vod_quality_checks.check_duration_percentage": 0.973577235772
},
{
"vod_quality_checks.analysis_uuid": "5473a508-4dc8-47e2-9c89-792a4fafb061",
"vod_quality_checks.time": "2023-08-29 23:22:24.839000",
"vod_quality_checks.title": "Dog Running FR",
"vod_quality_checks.test_id": "1-1",
"vod_quality_checks.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_quality_checks.category": "Video",
"vod_quality_checks.type": "SSIMPLUS Viewer Score",
"vod_quality_checks.interval_start_time": "00:00:00.000",
"vod_quality_checks.interval_start_time_seconds": 0,
"vod_quality_checks.interval_end_time": "00:00:05.280",
"vod_quality_checks.interval_end_time_seconds": 5.28,
"vod_quality_checks.test_video_duration": "00:00:20",
"vod_quality_checks.test_video_duration_seconds": 19.68,
"vod_quality_checks.trimmed_test_video_duration": "00:00:20",
"vod_quality_checks.configuration_threshold": "85",
"vod_quality_checks.trimmed_test_video_duration_seconds": 19.68,
"vod_quality_checks.check_duration_seconds": 5.28,
"vod_quality_checks.check_duration_percentage": 0.268292682927
},
{
"vod_quality_checks.analysis_uuid": "5473a508-4dc8-47e2-9c89-792a4fafb061",
"vod_quality_checks.time": "2023-08-29 23:22:30.009000",
"vod_quality_checks.title": "Dog Running FR",
"vod_quality_checks.test_id": "1-2",
"vod_quality_checks.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_41.mp4",
"vod_quality_checks.category": "Video",
"vod_quality_checks.type": "SSIMPLUS Banding Score",
"vod_quality_checks.interval_start_time": "00:00:09.560",
"vod_quality_checks.interval_start_time_seconds": 9.56,
"vod_quality_checks.interval_end_time": "00:00:11.240",
"vod_quality_checks.interval_end_time_seconds": 11.24,
"vod_quality_checks.test_video_duration": "00:00:20",
"vod_quality_checks.test_video_duration_seconds": 19.68,
"vod_quality_checks.trimmed_test_video_duration": "00:00:20",
"vod_quality_checks.configuration_threshold": "40",
"vod_quality_checks.trimmed_test_video_duration_seconds": 19.68,
"vod_quality_checks.check_duration_seconds": 1.68,
"vod_quality_checks.check_duration_percentage": 0.085365853659
},
{
"vod_quality_checks.analysis_uuid": "5473a508-4dc8-47e2-9c89-792a4fafb061",
"vod_quality_checks.time": "2023-08-29 23:22:41.502000",
"vod_quality_checks.title": "Dog Running FR",
"vod_quality_checks.test_id": "1-1",
"vod_quality_checks.test_video": "file:///mnt/video-files-pvc/royalty_free/dog_running/outputs/dog_running_1080_h264_qp_31.mp4",
"vod_quality_checks.category": "Video",
"vod_quality_checks.type": "SSIMPLUS Viewer Score",
"vod_quality_checks.interval_start_time": "00:00:16.480",
"vod_quality_checks.interval_start_time_seconds": 16.48,
"vod_quality_checks.interval_end_time": "00:00:19.160",
"vod_quality_checks.interval_end_time_seconds": 19.16,
"vod_quality_checks.test_video_duration": "00:00:20",
"vod_quality_checks.test_video_duration_seconds": 19.68,
"vod_quality_checks.trimmed_test_video_duration": "00:00:20",
"vod_quality_checks.configuration_threshold": "85",
"vod_quality_checks.trimmed_test_video_duration_seconds": 19.68,
"vod_quality_checks.check_duration_seconds": 2.68,
"vod_quality_checks.check_duration_percentage": 0.136178861789
}
]
Notice in the results above:
- The dog_running_1080_h264_qp_31.mp4 asset failed the XVS (SVS) score check at two different times for a total of approximately 8s. However, this asset passed the XBS score check, which indicates that the level of color banding is acceptable. To know exactly what the XBS (SBS) values were, you would need to consult the analysis_results view (see here).
- The dog_running_1080_h264_qp_41.mp4 asset failed both the XVS and XBS score checks, but its low XVS scores are clearly the main problem as 97% of the asset’s duration was of unacceptable quality, while only 1.68s or 8% of the asset had unacceptable levels of color banding.
Duration filtering and On-Demand Analyzer Configuration
Most quality checks have a duration associated with them, specified either in seconds or frames, after which the triggered condition will cause a warning or failure event to be raised. Refer to the Stream On-Demand Platform REST API for details on configuring quality checks. Once an analysis has been submitted, changes to any associated quality check would require a new analysis. However, as you’ve already learned, the Insights REST API allows you to filter on any view field, including `vod_quality_checks.check_duration_seconds`. The implication here is that you can effectively alter the duration of any quality check post-analysis, thereby changing the results of the check, by applying modifiers to a `vod_quality_checks.check_duration_seconds` filter, as shown in the example below:
...
"filters": {
    "vod_quality_checks.analysis_uuid": "{{last_analysis_uuid}}",
    "vod_quality_checks.check_duration_seconds": "<5"
}
...
The caveat here is that you cannot choose a smaller value for the `vod_quality_checks.check_duration_seconds` field than what was used to configure the On-Demand Analyzer via the REST API.
"analyzerConfig": {
"qualityCheckConfig": {
"enabled": true,
"duration": 1
}
}
In the example above, we configured the On-Demand Analyzer to apply a quality check with a duration of 1s, which is less than the 5s we used in the Insights filter above.
This raises the question of why we don’t always configure all quality checks with the lowest possible duration. While this would provide the most flexibility for post-analysis filtering using Insights, it may also place a significant amount of additional computational load on the On-Demand Analyzer itself, thereby slowing down your analyses. The Stream On-Demand Platform REST API picks reasonable defaults for all applicable quality checks, but there may be times where you want to lower these in order to gain additional flexibility with your results.
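To make this concrete, the following is a minimal sketch of a complete duration-filtering query; the bearer token is a placeholder, the field list is trimmed for brevity, and the `>=5` modifier keeps only events lasting 5s or more:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <insights_token>' \
--data '{
    "model": "on_demand",
    "view": "vod_quality_checks",
    "fields": [
        "vod_quality_checks.analysis_uuid",
        "vod_quality_checks.type",
        "vod_quality_checks.interval_start_time",
        "vod_quality_checks.interval_end_time",
        "vod_quality_checks.check_duration_seconds"
    ],
    "filters": {
        "vod_quality_checks.analysis_uuid": "{{last_analysis_uuid}}",
        "vod_quality_checks.check_duration_seconds": ">=5"
    }
}'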
Filtering results based on percentages
In some production scenarios, it may prove difficult to determine, in advance of submitting the analysis, how much of the start and end of the asset, in seconds, you may wish to exempt from quality check analysis. It may be easier to specify the amounts of the asset to skip at either end as percentages of the overall length of the asset. Consider, for example, feature-length films where it is understood that the first and last 5% of the asset are typically studio logos and credits. Similarly, one may find it useful to define a quality check failure in terms of its duration with respect to the (trimmed) asset length, instead of in absolute seconds. A freeze frame, for example, that lasts 5% of a 30s commercial may not be considered a quality check failure worthy of asset rejection, yet a 5% freeze frame in a 120m feature-length film surely would. For these types of situations, you are encouraged to filter your quality check results using the following percentage-based fields:
- `vod_quality_checks.skip_start_percentage`
- `vod_quality_checks.skip_end_percentage`
- `vod_quality_checks.check_duration_percentage`
The following is an example query where we’ve skipped the first and last 5% of the asset and told the system to only find video quality check failures that exceed 4.5% of the trimmed/shortened asset length:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer mF8hbXKbJZxfCTxbPstKjchG8b72g22rMszRQDk6' \
--data '{
"model": "on_demand",
"view": "vod_quality_checks",
"fields": [
"vod_quality_checks.analysis_uuid",
"vod_quality_checks.time",
"vod_quality_checks.comparison_analysis_uuid",
"vod_quality_checks.title",
"vod_quality_checks.organization",
"vod_quality_checks.site",
"vod_quality_checks.test_id",
"vod_quality_checks.test_video",
"vod_quality_checks.stream_index",
"vod_quality_checks.category",
"vod_quality_checks.type",
"vod_quality_checks.interval_start_time",
"vod_quality_checks.interval_start_time_seconds",
"vod_quality_checks.interval_end_time",
"vod_quality_checks.interval_end_time_seconds",
"vod_quality_checks.check_duration_seconds",
"vod_quality_checks.check_duration_percentage",
"vod_quality_checks.test_video_duration",
"vod_quality_checks.test_video_duration_seconds",
"vod_quality_checks.trimmed_test_video_duration",
"vod_quality_checks.trimmed_test_video_duration_seconds",
"vod_quality_checks.details"
],
"filters": {
"vod_quality_checks.analysis_uuid": "9bac5d71-83ad-43f5-b3c5-f5a076dd7be7",
"vod_quality_checks.check_duration_percentage": ">=4.5",
"vod_quality_checks.skip_start_percentage": "5",
"vod_quality_checks.skip_end_percentage": "5",
"vod_quality_checks.category": "video",
},
"sorts": [
"vod_quality_checks.interval_start_time"
]
}'
The following is an example response for the request above:
[
{
"vod_quality_checks.analysis_uuid": "9bac5d71-83ad-43f5-b3c5-f5a076dd7be7",
"vod_quality_checks.time": "2021-02-24 17:34:13.998000",
"vod_quality_checks.comparison_analysis_uuid": null,
"vod_quality_checks.title": "Curated Big Buck Bunny with Quality Check Failures",
"vod_quality_checks.organization": "IMAX",
"vod_quality_checks.site": "VOD Testing",
"vod_quality_checks.test_id": "1",
"vod_quality_checks.test_video": "/mnt/videos/test/big_buck_bunny_problematic.mp4",
"vod_quality_checks.stream_index": 1,
"vod_quality_checks.category": "video",
"vod_quality_checks.type": "Solid Color Frame",
"vod_quality_checks.interval_start_time": "00:00:20.062",
"vod_quality_checks.interval_start_time_seconds": 20.062,
"vod_quality_checks.interval_end_time": "00:00:25.025",
"vod_quality_checks.interval_end_time_seconds": 25.025,
"vod_quality_checks.check_duration_seconds": 4.963,
"vod_quality_checks.check_duration_percentage": 4.58917498,
"vod_quality_checks.test_video_duration": "00:02:00",
"vod_quality_checks.test_video_duration_seconds": 120.162,
"vod_quality_checks.trimmed_test_video_duration": "00:01:48",
"vod_quality_checks.trimmed_test_video_duration_seconds": 108.1458,
"vod_quality_checks.details": null
},
{
"vod_quality_checks.analysis_uuid": "9bac5d71-83ad-43f5-b3c5-f5a076dd7be7",
"vod_quality_checks.time": "2021-02-24 17:35:24.607000",
"vod_quality_checks.comparison_analysis_uuid": null,
"vod_quality_checks.title": "Curated Big Buck Bunny with Quality Check Failures",
"vod_quality_checks.organization": "IMAX",
"vod_quality_checks.site": "VOD Testing",
"vod_quality_checks.test_id": "1",
"vod_quality_checks.test_video": "/mnt/videos/test/big_buck_bunny_problematic.mp4",
"vod_quality_checks.stream_index": 1,
"vod_quality_checks.category": "video",
"vod_quality_checks.type": "Freeze Frame",
"vod_quality_checks.interval_start_time": "00:01:20.080",
"vod_quality_checks.interval_start_time_seconds": 80.08,
"vod_quality_checks.interval_end_time": "00:01:25.085",
"vod_quality_checks.interval_end_time_seconds": 85.085,
"vod_quality_checks.check_duration_seconds": 5.005,
"vod_quality_checks.check_duration_percentage": 4.62801144,
"vod_quality_checks.test_video_duration": "00:02:00",
"vod_quality_checks.test_video_duration_seconds": 120.162,
"vod_quality_checks.trimmed_test_video_duration": "00:01:48",
"vod_quality_checks.trimmed_test_video_duration_seconds": 108.1458,
"vod_quality_checks.details": null
},
{
"vod_quality_checks.analysis_uuid": "9bac5d71-83ad-43f5-b3c5-f5a076dd7be7",
"vod_quality_checks.time": "2021-02-24 17:35:34.671000",
"vod_quality_checks.comparison_analysis_uuid": null,
"vod_quality_checks.title": "Curated Big Buck Bunny with Quality Check Failures",
"vod_quality_checks.organization": "IMAX",
"vod_quality_checks.site": "VOD Testing",
"vod_quality_checks.test_id": "1",
"vod_quality_checks.test_video": "/mnt/videos/test/big_buck_bunny_problematic.mp4",
"vod_quality_checks.stream_index": 1,
"vod_quality_checks.category": "video",
"vod_quality_checks.type": "Black Frame",
"vod_quality_checks.interval_start_time": "00:01:30.132",
"vod_quality_checks.interval_start_time_seconds": 90.132,
"vod_quality_checks.interval_end_time": "00:01:35.095",
"vod_quality_checks.interval_end_time_seconds": 95.095,
"vod_quality_checks.check_duration_seconds": 4.963,
"vod_quality_checks.check_duration_percentage": 4.58917498,
"vod_quality_checks.test_video_duration": "00:02:00",
"vod_quality_checks.test_video_duration_seconds": 120.162,
"vod_quality_checks.trimmed_test_video_duration": "00:01:48",
"vod_quality_checks.trimmed_test_video_duration_seconds": 108.1458,
"vod_quality_checks.details": null
}
]
Notice that the `vod_quality_checks.check_duration_percentage` value reported for each video quality check failure exceeds 4.5% of the length of the asset, after skipping the first and last 5% (i.e. the trimmed asset length).
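As a quick sanity check on these numbers: the trimmed duration is the full duration less the 5% skipped at each end, and each event’s percentage is its duration relative to that trimmed length. For the first event above:
trimmed_test_video_duration_seconds = 120.162 * (1 - 0.05 - 0.05) = 108.1458
check_duration_percentage = 4.963 / 108.1458 * 100 ≈ 4.58917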
The `vod_quality_checks` view exposes fields to capture the various video quality checks, asset audio quality checks and asset score checks supported by StreamAware. The following subsections list the various fields of interest on the `vod_quality_checks` view that can be used in the body of any request sent to the Insights REST API.
Metric/Measurement Field | Details |
---|---|
vod_quality_checks.stream_id | Identifies the stream type (e.g. video/audio) and the stream number |
vod_quality_checks.configuration_metric | For measurement and score-based quality checks, values of this metric type are used for comparison against the threshold values |
vod_quality_checks.configuration_threshold | For score-based quality checks, the threshold value that must be met to trigger a quality check event |
vod_quality_checks.configuration_duration_value | The configured minimum duration of the triggered quality check event. This value has units corresponding to ‘Duration Units’ |
vod_quality_checks.configuration_duration_units | The unit associated with the configured minimum duration of the triggered quality check event |
vod_quality_checks.configuration_skip_end | The number of seconds at the end of the asset for which quality check events are ignored |
vod_quality_checks.configuration_skip_start | The number of seconds at the start of the asset for which quality check events are ignored |
vod_quality_checks.viewer_type | For score-based quality checks, the scores of this viewer type are used for comparison against the threshold values |
vod_quality_checks.category | The category of the quality check event (e.g. audio, video) |
vod_quality_checks.type | The type of quality check |
vod_quality_checks.details | The summary of the details, with more detail displayed by hovering |
vod_quality_checks.metadata_validations_dolby_vision_source | Dolby Vision Metadata Validation Source |
vod_quality_checks.metadata_validations_dolby_vision_result | Dolby Vision Metadata Validation Result |
vod_quality_checks.metadata_validations_dolby_vision_summary | Dolby Vision Metadata Validation Summary |
vod_quality_checks.interval_start_time | The start time of the quality check event interval, displayed based on user preferences |
vod_quality_checks.interval_start_time_frames | The start time of the quality check event interval in frames |
vod_quality_checks.interval_start_time_seconds | The start time of the quality check event interval in seconds |
vod_quality_checks.interval_start_time_timecode | The start time of the quality check event interval as a SMPTE Timecode |
vod_quality_checks.interval_end_time | The end time of the quality check event interval, displayed based on user preferences |
vod_quality_checks.interval_end_time_frames | The end time of the quality check event interval in frames |
vod_quality_checks.interval_end_time_seconds | The end time of the quality check event interval in seconds |
vod_quality_checks.interval_end_time_timecode | The end time of the quality check event interval as a SMPTE Timecode |
vod_quality_checks.qc_result | Overall QC result for the analysis/test |
vod_quality_checks.count | The number of quality check events |
vod_quality_checks.check_duration_frames | The duration of the quality check failure in frames |
vod_quality_checks.check_duration_seconds | The duration of the quality check failure in seconds |
vod_quality_checks.check_duration_percentage | The duration of the quality check failure as a percentage of the trimmed asset duration |
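For a quick pass/fail summary, a minimal sketch like the following fetches only the overall QC result and the number of quality check events for an analysis (the bearer token is a placeholder):
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <insights_token>' \
--data '{
    "model": "on_demand",
    "view": "vod_quality_checks",
    "fields": [
        "vod_quality_checks.qc_result",
        "vod_quality_checks.count"
    ],
    "filters": {
        "vod_quality_checks.analysis_uuid": "{{last_analysis_uuid}}"
    }
}'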
StreamAware and StreamSmart automatically capture both video and audio metadata associated with the assets being analyzed or optimized. The focus of this section is on the options for retrieving this metadata from a given analysis. To that end, the discussion will center largely around the following Insights views:
View | Details |
---|---|
vod_video_metadata | This view exposes fields to capture the video metadata (e.g. title, resolution, dynamic range, encoder, frame rate, time base, aspect ratio, frame count, color primaries and space, color transfer characteristics and content/frame light levels) for the assets included in an analysis. |
vod_audio_metadata | This view exposes fields to capture the various audio metadata (e.g. stream index/pid, channel count, channel layout, channel assignment, codec, coding mode, language, sample rate, Dolby Atmos bed channels, Dolby Atmos dynamic objects, IMF MCA audio content/element, quantization bits etc.) for the assets included in an analysis. |
To fetch the video metadata associated with an analysis or optimization, we use the `vod_video_metadata` view, as shown below:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer btPfpMGhxZTZYmQ7gH5tfGCFMRM93FFqC7kZsPj8' \
--data '{
"model": "on_demand",
"view": "vod_video_metadata",
"fields": [
"vod_video_metadata.analysis_uuid",
"vod_video_metadata.title",
"vod_video_metadata.organization",
"vod_video_metadata.site",
"vod_video_metadata.test_id",
"vod_video_metadata.test_video",
"vod_video_metadata.stream_index",
"vod_video_metadata.resolution",
"vod_video_metadata.resolution_name",
"vod_video_metadata.vertical_resolution_pixels",
"vod_video_metadata.horizontal_resolution_pixels",
"vod_video_metadata.dynamic_range",
"vod_video_metadata.dynamic_range_format",
"vod_video_metadata.encoder_decoder",
"vod_video_metadata.encoder_decoder_profile",
"vod_video_metadata.frame_rate",
"vod_video_metadata.bit_depth",
"vod_video_metadata.time_base",
"vod_video_metadata.time_base_numerator",
"vod_video_metadata.time_base_denominator",
"vod_video_metadata.storage_aspect_ratio",
"vod_video_metadata.storage_aspect_ratio_numerator",
"vod_video_metadata.storage_aspect_ratio_denominator",
"vod_video_metadata.frame_count",
"vod_video_metadata.color_primaries",
"vod_video_metadata.color_space",
"vod_video_metadata.color_transfer_characteristics",
"vod_video_metadata.max_content_light_level",
"vod_video_metadata.max_frame_average_light_level",
"vod_video_metadata.start_presentation_timestamp"
],
"filters": {
"vod_video_metadata.analysis_uuid": "1486832c-a022-43a2-a4d7-4f75a1a3cc52"
}
}'
The following is an example JSON response for the request above:
[
{
"vod_video_metadata.analysis_uuid": "1486832c-a022-43a2-a4d7-4f75a1a3cc52",
"vod_video_metadata.title": "HDR NR Test",
"vod_video_metadata.organization": "IMAX",
"vod_video_metadata.site": "VOD Demo",
"vod_video_metadata.test_id": "1",
"vod_video_metadata.test_video": "file:///mnt/on-201/UHD-HDR-MP4-samsung-demo-reel/samsung-demo-reel-version-a.mp4",
"vod_video_metadata.stream_index": 0,
"vod_video_metadata.resolution": "3840 x 2160",
"vod_video_metadata.resolution_name": "4K UHD",
"vod_video_metadata.vertical_resolution_pixels": 2160,
"vod_video_metadata.horizontal_resolution_pixels": 3840,
"vod_video_metadata.dynamic_range": "HDR",
"vod_video_metadata.dynamic_range_format": "HDR10",
"vod_video_metadata.encoder_decoder": "HEVC",
"vod_video_metadata.encoder_decoder_profile": "Main 10",
"vod_video_metadata.frame_rate": 24,
"vod_video_metadata.bit_depth": 10,
"vod_video_metadata.time_base": "1 / 12288",
"vod_video_metadata.time_base_numerator": 1,
"vod_video_metadata.time_base_denominator": 12288,
"vod_video_metadata.storage_aspect_ratio": "1:1",
"vod_video_metadata.storage_aspect_ratio_numerator": 1,
"vod_video_metadata.storage_aspect_ratio_denominator": 1,
"vod_video_metadata.frame_count": 241,
"vod_video_metadata.color_primaries": "bt2020",
"vod_video_metadata.color_space": "bt2020nc",
"vod_video_metadata.color_transfer_characteristics": "smpte2084",
"vod_video_metadata.max_content_light_level": 1000,
"vod_video_metadata.max_frame_average_light_level": 1000,
"vod_video_metadata.start_presentation_timestamp": 6144
}
]
Supported view fields
The `vod_video_metadata` view exposes fields to capture the video metadata (e.g. title, resolution, dynamic range, encoder, frame rate, time base, aspect ratio, frame count, color primaries and space, color transfer characteristics and content/frame light levels) for the assets included in an analysis. The following section lists the various fields of interest on the `vod_video_metadata` view that can be used in the body of any request sent to the Insights REST API.
Metric/Measurement Field | Details |
---|---|
vod_video_metadata.encoder_decoder | The codec used to encode the test video |
vod_video_metadata.encoder_decoder_profile | The codec profile used to encode the test video |
vod_video_metadata.frame_rate | The frame rate (in frames per second) of the test video |
vod_video_metadata.reference_frame_rate | The frame rate (in frames per second) of the reference video |
vod_video_metadata.bit_depth | The bit depth of the test video |
vod_video_metadata.dynamic_range | The dynamic range of the test video (e.g. SDR or HDR) |
vod_video_metadata.dynamic_range_format | The dynamic range format of the test video (e.g. SDR, HLG, HDR10, Dolby Vision) |
vod_video_metadata.dolby_vision_level1_algorithm_version | The Dolby Vision™ level1 metadata algorithm version of the test video |
vod_video_metadata.dolby_vision_level2_algorithm_version | The Dolby Vision™ level2 metadata algorithm version of the test video |
vod_video_metadata.dolby_vision_metadata_version | The Dolby Vision™ metadata version of the test video |
vod_video_metadata.dolby_vision_profile | The Dolby Vision™ profile of the test video (e.g. 4, 5, 7, 8.1, 9) |
vod_video_metadata.resolution | The resolution of the test video in the format “horizontal x vertical” |
vod_video_metadata.resolution_name | The resolution name of the test video (e.g. SD, HD, 4K UHD etc.) |
vod_video_metadata.horizontal_resolution_pixels | The horizontal resolution of the test video in pixels |
vod_video_metadata.vertical_resolution_pixels | The vertical resolution of the test video in pixels |
vod_video_metadata.canvas_aspect_ratio | The canvas aspect ratio |
vod_video_metadata.canvas_aspect_ratio_denominator | The canvas aspect ratio denominator |
vod_video_metadata.canvas_aspect_ratio_numerator | The canvas aspect ratio numerator |
vod_video_metadata.chroma_subsampling | The chroma subsampling |
vod_video_metadata.color_primaries | The color primaries (e.g. bt2020) extracted from the video metadata |
vod_video_metadata.color_range | The color range extracted from the video metadata |
vod_video_metadata.color_space | The color space (e.g. bt2020nc) extracted from the video metadata |
vod_video_metadata.color_transfer_characteristics | The transfer characteristics (e.g. smpte2084) extracted from the video metadata |
vod_video_metadata.container_format | The container format |
vod_video_metadata.content_creator | The content creator |
vod_video_metadata.metadata_validations_dolby_vision_source | The Dolby Vision™ Metadata Validation Source |
vod_video_metadata.metadata_validations_dolby_vision_result | The Dolby Vision™ Metadata Validation Result |
vod_video_metadata.metadata_validations_dolby_vision_summary | The Dolby Vision™ Metadata Validation Summary |
vod_video_metadata.frame_layout | Specifies the frame layout (e.g. interlaced, single frame, full frame, etc.) |
vod_video_metadata.image_aspect_ratio | The image/display aspect ratio |
vod_video_metadata.image_aspect_ratio_denominator | The image/display aspect ratio denominator |
vod_video_metadata.image_aspect_ratio_numerator | The image/display aspect ratio numerator |
vod_video_metadata.mastering_display_maximum_luminance | Maximum Display Mastering Luminance metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_minimum_luminance | Minimum Display Mastering Luminance metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_primaries_blue_x | Blue X value of Display Primary metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_primaries_blue_y | Blue Y value of Display Primary metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_primaries_green_x | Green X value of Display Primary metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_primaries_green_y | Green Y value of Display Primary metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_primaries_red_x | Red X value of Display Primary metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_primaries_red_y | Red Y value of Display Primary metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_white_point_chromaticity_name | Name of the Chromaticity of White Point metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_white_point_chromaticity_x | X value of the Chromaticity of White Point metadata as specified in ST 2086 |
vod_video_metadata.mastering_display_white_point_chromaticity_y | Y value of the Chromaticity of White Point metadata as specified in ST 2086 |
vod_video_metadata.max_content_light_level | The maximum content light level (MaxCLL) extracted from the video metadata |
vod_video_metadata.max_frame_average_light_level | The maximum frame average light level (MaxFALL) extracted from the video metadata |
vod_video_metadata.picture_compression | Specifies the compression scheme used |
vod_video_metadata.pixel_format | The pixel format |
vod_video_metadata.scan_type | The scan type (e.g. Interlaced or Progressive) |
vod_video_metadata.shot_count | Specifies the number of shots the video is composed of |
vod_video_metadata.storage_aspect_ratio | The storage aspect ratio |
vod_video_metadata.storage_aspect_ratio_denominator | The storage aspect ratio denominator |
vod_video_metadata.storage_aspect_ratio_numerator | The storage aspect ratio numerator |
vod_video_metadata.time_base | The time base (e.g. 1/25) |
vod_video_metadata.time_base_denominator | The time base denominator |
vod_video_metadata.time_base_numerator | The time base numerator |
vod_video_metadata.timecode | The SMPTE start timecode |
vod_video_metadata.reference_timecode | The SMPTE start timecode of the reference video |
vod_video_metadata.reference_video_metadata_json | All of the reference video metadata in JSON form |
vod_video_metadata.start_presentation_timestamp | The raw presentation timestamp (PTS) of the first video frame |
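As with the quality check views, any of these fields can also be used as filters. For example, the following minimal sketch (bearer token and UUID are placeholders) restricts results to HDR10 assets and fetches their light-level metadata, which can be useful when validating embedded HDR metadata:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <insights_token>' \
--data '{
    "model": "on_demand",
    "view": "vod_video_metadata",
    "fields": [
        "vod_video_metadata.test_video",
        "vod_video_metadata.max_content_light_level",
        "vod_video_metadata.max_frame_average_light_level"
    ],
    "filters": {
        "vod_video_metadata.analysis_uuid": "{{last_analysis_uuid}}",
        "vod_video_metadata.dynamic_range_format": "HDR10"
    }
}'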
To fetch the audio metadata associated with an analysis or optimization, we use the `vod_audio_metadata` view, the analysis/optimization UUID and the identifier associated with the desired audio stream, as follows:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer btPfpMGhxZTZYmQ7gH5tfGCFMRM93FFqC7kZsPj8' \
--data '{
"model": "on_demand",
"view": "vod_audio_metadata",
"fields": [
"vod_audio_metadata.analysis_uuid",
"vod_audio_metadata.title",
"vod_audio_metadata.organization",
"vod_audio_metadata.site",
"vod_audio_metadata.test_video",
"vod_audio_metadata.test_id",
"vod_audio_metadata.stream_index",
"vod_audio_metadata.stream_pid",
"vod_audio_metadata.hls_variant_bandwidth",
"vod_audio_metadata.channel_count",
"vod_audio_metadata.channel_layout",
"vod_audio_metadata.codec",
"vod_audio_metadata.codec_profile",
"vod_audio_metadata.coding_mode",
"vod_audio_metadata.language",
"vod_audio_metadata.sample_rate",
"vod_audio_metadata.bitrate",
"vod_audio_metadata.time_base",
"vod_audio_metadata.start_presentation_timestamp",
"vod_audio_metadata.normalized_start_presentation_timestamp"
],
"filters": {
"vod_audio_metadata.analysis_uuid": "cdfe6d85-f003-401f-8855-1ec90ab988ba"
}
}'
Note that we are fetching all available fields and filtering on only a few for illustrative purposes. Any subset of the fields listed above can be used in the `fields` and `filters` sections.
The following is an example response for the request above:
[
{
"vod_audio_metadata.analysis_uuid": "cdfe6d85-f003-401f-8855-1ec90ab988ba",
"vod_audio_metadata.title": "Sample content",
"vod_audio_metadata.organization": "IMAX",
"vod_audio_metadata.site": "jon-eagles",
"vod_audio_metadata.test_video": "/videos/test-data/llama-NP-FF-CF-BF-NP.mp4",
"vod_audio_metadata.test_id": "1",
"vod_audio_metadata.stream_index": 1,
"vod_audio_metadata.stream_pid": null,
"vod_audio_metadata.hls_variant_bandwidth": null,
"vod_audio_metadata.channel_count": 2,
"vod_audio_metadata.channel_layout": "stereo",
"vod_audio_metadata.codec": "AAC",
"vod_audio_metadata.codec_profile": "LC",
"vod_audio_metadata.coding_mode": null,
"vod_audio_metadata.language": "eng",
"vod_audio_metadata.sample_rate": 48000,
"vod_audio_metadata.bitrate": 50368
"vod_audio_metadata.time_base": "1/48000",
"vod_audio_metadata.start_presentation_timestamp": 24000,
"vod_audio_metadata.normalized_start_presentation_timestamp": 0
}
]
Supported view fields
The `vod_audio_metadata` view exposes fields to capture the various audio metadata (e.g. stream index/pid, channel count, channel layout, channel assignment, codec, coding mode, language, sample rate, Dolby Atmos bed channels, Dolby Atmos dynamic objects, IMF MCA audio content/element, quantization bits etc.) for the assets included in an analysis. The following section lists the various fields of interest on the `vod_audio_metadata` view that can be used in the body of any request sent to the Insights REST API.
Metric/Measurement Field | Details |
---|---|
vod_audio_metadata.stream_pid | Stream program identifier (if available) |
vod_audio_metadata.datasource | The source the metadata is based on (e.g. playlist, stream, etc.) |
vod_audio_metadata.bit_stream_mode | The bit stream mode (e.g. Complete Main, Commentary, Emergency, etc.) |
vod_audio_metadata.channel_assignment | The channel assignment in use (e.g. SMPTE 320M-A) |
vod_audio_metadata.channel_count | Number of channels in the stream |
vod_audio_metadata.channel_layout | The audio channel layout (e.g. stereo) |
vod_audio_metadata.codec_profile | The profile of the audio codec |
vod_audio_metadata.hls_variant_bandwidth | The bandwidth of the analyzed variant stream (only available for HLS) |
vod_audio_metadata.language | The language of the audio |
vod_audio_metadata.coding_mode | |
vod_audio_metadata.codec | The audio codec |
vod_audio_metadata.dolby_lfe | |
vod_audio_metadata.dolby_dialnorm | |
vod_audio_metadata.dolby_dialnorm2 | |
vod_audio_metadata.dolby_atmos_bed_channels | Layout of Atmos bed (fixed) channels as a comma-separated list of short channel names (L,R,C,LFE,Ls,Rs,Lrs,Rrs) |
vod_audio_metadata.dolby_atmos_dynamic_objects | The number of Atmos dynamic objects |
vod_audio_metadata.mca_content_kind | The kind of content contained in the audio essence as described in SMPTE RP 428-4 |
vod_audio_metadata.mca_element_kind | The kind of element contained in the audio essence as described in SMPTE RP 428-4 |
vod_audio_metadata.quantization_bits | The maximum number of significant bits for the value without compression |
vod_audio_metadata.sample_rate | The sample rate of the audio stream in Hz |
vod_audio_metadata.bitrate | The expected bitrate of the audio stream in bits/s |
vod_audio_metadata.mca_title | Name of the overall program to which the audio essence track belongs |
vod_audio_metadata.mca_title_version | Version of the program to which the audio belongs, such as a specific cut of a movie |
vod_audio_metadata.time_base | The timebase of the audio stream (e.g. 1/48000) |
vod_audio_metadata.start_presentation_timestamp | The raw presentation timestamp (PTS) of the first audio frame |
vod_audio_metadata.normalized_start_presentation_timestamp | The presentation timestamp (PTS) of the first audio frame, normalized in terms of video time (in milliseconds) |
`vod_audio_metadata.normalized_start_presentation_timestamp` will return a negative value when the presentation time of the first audio frame occurs before the presentation time of the first video frame.
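A minimal sketch of how you might surface such assets (placeholders as before): filter the view for a negative normalized start PTS, which flags streams whose audio leads the video:
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <insights_token>' \
--data '{
    "model": "on_demand",
    "view": "vod_audio_metadata",
    "fields": [
        "vod_audio_metadata.test_video",
        "vod_audio_metadata.stream_index",
        "vod_audio_metadata.normalized_start_presentation_timestamp"
    ],
    "filters": {
        "vod_audio_metadata.analysis_uuid": "{{last_analysis_uuid}}",
        "vod_audio_metadata.normalized_start_presentation_timestamp": "<0"
    }
}'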
Installation and configuration
Both the IMAX Stream On-Demand Platform and Insights support being used in a headless/programmatic manner via the Stream On-Demand Platform REST API and Insights REST API, respectively. To familiarize yourself with the API experience, IMAX offers Postman collection and environment files, which can be imported and configured by following the instructions below:
1. Download, install and launch the Postman app.

2. Under Settings->General, make sure you set the Max response size to 0 MB so no responses from the Insights REST API are truncated. This is especially useful if you plan to fetch results at per-frame granularity.

3. Under Settings->General, make sure that SSL certificate verification is set to Off.

4. (Recommended) Create a new workspace within Postman for isolation and ease of organization.

   Tip: Use a workspace name that indicates the product and version, such as IMAX Stream On-Demand vX.YY.

5. Import the Stream On-Demand Postman collection and environment files (File->Import).

6. From the Collections view (left sidebar), you should see a folder called IMAX Stream On-Demand with a set of subfolders and requests therein.

7. Make sure Postman is using the IMAX Stream On-Demand environment you imported above by selecting it from the Environment drop-down near the top right corner of the Postman GUI.

8. Edit the IMAX Stream On-Demand environment by clicking on the eyeball button (Environment Quick Look) to the right of the drop-down and then the Edit link in the top right corner of the popup. Postman should launch a new tab showing an editable table of the environment variables.

   Note: Modern versions of Postman also have an Environments button/view that can be selected from the sidebar on the left side of the UI.

9. In the table presented, modify the CURRENT VALUE column for the following key/value pairs:

   VARIABLE | CURRENT VALUE |
   ---|---|
   stream_host | <stream_host> |
   insights_client_id | <insights_client_id> |
   insights_client_secret | <insights_client_secret> |

   where <stream_host> is the fully-qualified domain name (FQDN) or IP address of your IMAX Stream On-Demand Platform deployment, and <insights_client_id> and <insights_client_secret> are the API values sent to you by your IMAX representative after completing the Getting Started section of the Insights REST API.

   Depending on the details of your deployment, the value for <stream_host> will differ:
   - For fixed scale deployments using an AMI deployed to an EC2 instance, <stream_host> will be the public DNS entry of your EC2 instance.
   - For fixed scale deployments on bare metal using a virtual machine (i.e. OVA/QEMU), <stream_host> will be the IP address/FQDN of your virtual machine instance. (Don’t forget to include the port if you used NAT.)
   - For production VPC EKS deployments, <stream_host> will be the hostname you chose in step 15.
   - For production VPC bare-metal/other deployments, <stream_host> will be the FQDN of the Ingress (i.e. <ingress_hostname>) you chose in step 5.
Testing installation and configuration
To verify that the IMAX Stream On-Demand collection and environment have been successfully imported, configured and selected:

1. Navigate to the IMAX Stream On-Demand->Stream On-Demand Platform REST API->System Services folder and select the Version GET request.

2. Click the Send button.

3. You should see a response with details about your product version in the Response container of the Postman GUI (to the right or bottom of the UI).

Troubleshooting Tip: If you don’t see the version details or receive an error, make sure you have selected and configured the IMAX Stream On-Demand environment (see steps 7 & 9 above). Next, make sure that your IMAX Stream On-Demand Platform is deployed and running by loading https://<stream_host>/api/v1/status into your browser.
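Equivalently, you can run the same check from a terminal. A minimal sketch (the -k flag skips SSL certificate verification, mirroring the Postman setting above; replace <stream_host> with your deployment’s host):
curl -k https://<stream_host>/api/v1/status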
Submitting requests and viewing responses
At this point, you should be able to start submitting requests in the IMAX Stream On-Demand collection. In general, you will want to repeat the following process to submit new analyses or optimizations and view their results via the REST APIs:
1. Submit a new analysis by choosing a no-reference (NR) or full-reference (FR) request example of your choice (listed under the Stream On-Demand Platform REST API->StreamAware->Analyses folder). Submit a new optimization by choosing the New optimization (FFmpeg - VBR) request example (listed under the Stream On-Demand Platform REST API->StreamSmart folder).

2. Select the Body tab from the main view and edit the contents of your POST request to suit your needs. See NewAnalysis for analyses and Optimization for optimizations.

   Consider the analysis example below, where the following NR request modifications indicate that:
   - our asset lives in an S3 bucket and we are relying on IAM roles to provide read-only access to the bucket,
   - we are configuring the On-Demand Analyzer to target the LG OLED65C9PUA device from an expert viewer’s perspective,
   - we want to capture IMAX Banding Score (XBS) values and
   - we are using a score check on our asset that will consider the video to have failed if there is any period of 5 or more seconds where the IMAX ViewerScore (XVS) for our selected device falls below 80.

   {
       "content": {
           "title": "NR Analysis With Score-Based Quality Checks"
       },
       "subjectAssets": [
           {
               "name": "Big_Buck_Bunny_h264_qp_21.ts",
               "path": "royalty_free/big_buck_bunny/outputs",
               "storageLocation": {
                   "type": "S3",
                   "name": "video-files",
                   "credentials": {
                       "useAssumedIAMRole": true
                   }
               },
               "qualityCheckConfig": {
                   "scoreChecks": [
                       {
                           "metric": "SVS",
                           "threshold": 80,
                           "durationSeconds": 5,
                           "viewingEnvironmentIndex": 0
                       }
                   ]
               }
           }
       ],
       "analyzerConfig": {
           "viewingEnvironments": [
               {
                   "device": {
                       "name": "oled65c9pua"
                   },
                   "viewerType": "EXPERT"
               }
           ]
       }
   }

   Consider the optimization example below, where the following request modifications indicate that:
   - our source asset lives in an S3 bucket,
   - we want to optimize the FFmpeg encode specified by the provided FFmpeg command and
   - the output should be written to a different S3 bucket.

   {
       "title": "My First Optimization",
       "config": {
           "encodes": [
               {
                   "commands": [
                       "ffmpeg -r 24 -i {INPUT_LOCATION} -pix_fmt yuv420p -color_primaries bt709 -color_trc bt709 -colorspace bt709 -color_range mpeg -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=50:keyint_min=50:scenecut=0:stitchable=1\" -profile:v high -level:v 4.1 -b:v 5000k -maxrate 6250k -bufsize 10000k -r 24 -vf scale=1920x1080 -an {OUTPUT_LOCATION}"
                   ],
                   "outputLocation": {
                       "path": "tests/test1.ts",
                       "storageLocation": {
                           "type": "S3",
                           "name": "encoding-test-assets"
                       }
                   }
               }
           ],
           "inputLocation": {
               "key": "tests/test1-result.mov",
               "storageLocation": {
                   "type": "S3",
                   "name": "encoding-test-outputs"
               }
           }
       }
   }

3. Click the Send button.

4. Your response will be displayed in the Response container of the Postman GUI.

5. While the analysis or optimization is running, run the Insights REST API->Analysis/Optimization Status->Status Summary query to see the status and progress.

   Important: The Postman collection and environment work together to remember the UUID of the last successfully submitted analysis or optimization. As such, any request run from the Stream On-Demand Platform REST API folder will have its resulting analysis or optimization UUID value automatically stored in Postman and fed into any subsequently executed query from the Insights REST API folder, to save you the bother of repeatedly copying and pasting the UUID. Additionally, all queries in the Insights REST API folder will automatically fetch and submit the OAuth tokens needed for secure interaction with the Insights REST API.

   If you wish to fetch results for a prior/different analysis, simply edit the environment and paste the UUID of the desired analysis into the current value for `last_analysis_uuid`.

6. (Optional) Select the Body tab from the main view and alter the details of your POST request to match your use case (see Insights REST API for examples and details).

   Note: Many of the requests under the Insights REST API folder do not require editing in order to function unless a specific filter and/or sorting field/order is desired.

7. Click the Send button.

8. Your responses will be displayed in the Response container of the Postman GUI.

   Note: Postman also supports saving/exporting results to a file.
The IMAX Stream On-Demand Platform can be configured to produce the following video measurements:
- No-Reference IMAX ViewerScore™ (NR XVS™)
- Full-Reference IMAX ViewerScore™ (FR XVS™)
- IMAX Encoder Performance Score (XEPS)
- IMAX Banding Score (XBS)
- IMAX Content Complexity Score (XCCS)
- Perceptual Color Volume Difference Score (CVD)
- Macroblocking
- Noise Estimation
- HDR-specific metrics such as MaxFALL, MaxCLL and color gamut detection
No-reference IMAX ViewerScore (NR XVS)
The no-reference IMAX ViewerScore (also known as the “Source IMAX ViewerScore” or “NR XVS”) is a metric that assesses the quality of a source file or stream in a no-reference (NR) manner. This means that it does not require any additional information or reference material in order to evaluate the quality of the video content.
The NR XVS metric is highly accurate, with a strong correlation to both the reference-based score and the mean opinion score (MOS), and it uses deep neural networks (DNN) to make its evaluations. It is also computationally efficient, making it suitable for use at any stage of the content processing and delivery chain. The NR XVS metric supports all commonly used codecs, including ProRes, J2K, AVC, HEVC, MPEG2, and VP9, as well as all commonly used content attributes, including HDR content.
Full-reference IMAX ViewerScore (FR XVS)
Sometimes referred to as “FR XVS” or the “IMAX Output ViewerScore”, this metric assesses the perceptual quality of an (encoded or processed) output in reference to a degraded (or pristine) source.
The quality of the output (FR XVS) is determined by both the quality of the source (NR XVS) and the performance of the encoder (XEPS). If the encoder is working well but the source material is poor, the output quality will also be poor. This is because encoding always results in some loss of information. Thus, the output quality is a combination of the source quality, the encoder performance (XEPS), and the psychovisual effects included in the model.
The metric supports all impairments, content attributes, and content types, as well as cross-resolution and cross-frame rate analysis. IMAX is currently developing support for cross-dynamic range analysis and tone-mapped quality assessment. IMAX VisionScience correlates linearly with perceived quality and covers the full quality range.
The score ranges from 0 to 100, where 0 = totally unwatchable and 100 = pristine (without visual impairments).
The metric supports all commonly used codecs for source degradations, including ProRes, J2K, AVC, HEVC, MPEG2, and VP9, and it adapts to the display device and viewing conditions.
IMAX Encoder Performance Score (XEPS)
The IMAX Encoder Performance Score assesses the performance of an encoder or transcoder by comparing its output with the source. The metric assumes that the source is pristine or nearly pristine, and focuses solely on measuring the degradation introduced by encoding impairments. XEPS supports all impairments, content attributes, content types, cross-resolution, and cross-frame rate analysis. We are currently developing support for cross-dynamic range analysis. The metric correlates linearly with perceived quality and covers the full quality range. It also automatically aligns a source with output across resolutions and frame rates. It computes quality and saliency maps to normalize across all content types. XEPS is adaptive to the device display and viewing conditions.
[Figure: side-by-side comparison of a Reference/Source frame and the corresponding Output frame]
The quality map is generated as part of the IMAX Encoder Performance Score computation.
Encoding Examples
- Example 1: The source video is high quality, and the encoder or transcoder is performing well.
- Example 2: The source video is high quality, but the encoder or transcoder is performing poorly.
- Example 3: The source video is poor quality, but the encoder or transcoder is performing well.
- Example 4: The source video is poor quality, and the encoder or transcoder is performing poorly.
IMAX Banding Score (XBS)
The IMAX Banding Score (XBS) is a measurement of the amount of color banding present within the asset. Banding, otherwise known as color contouring, is a common artifact of the video compression process and is due to the quantization introduced by the video codec. It is most obvious in areas where the content calls for the rendering of smooth gradient regions, such as the sky, water or ground. In these regions, pixel values that are meant to be represented uniquely so as to provide a gentle transition from one shade or color to another are, instead, aggregated into a much smaller set of values, creating discrete bands of colors.
The algorithm analyzes each frame of your asset for the presence of banding and records a score indicating the severity of the banding, according to the following scale:
- [ 0- 20] Imperceptible
- [21- 40] Perceptible but not annoying
- [41- 60] Slightly annoying
- [61- 80] Annoying
- [81-100] Very annoying
The following frame is an example of an XBS of 0, meaning that there is no discernible banding present:
This next frame is an example of an XBS of 68, which would qualify as annoying. Notice the discrete bands of colors both in the sky and the asphalt track.
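If you are retrieving results programmatically, banding events can be isolated in the `vod_quality_checks` view by filtering on the check type, as in this minimal sketch (bearer token and UUID are placeholders; the type value matches the one shown in the results earlier):
curl --location 'https://insights.sct.imax.com/api/4.0/queries/run/json?cache=false' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <insights_token>' \
--data '{
    "model": "on_demand",
    "view": "vod_quality_checks",
    "fields": [
        "vod_quality_checks.test_video",
        "vod_quality_checks.interval_start_time",
        "vod_quality_checks.interval_end_time",
        "vod_quality_checks.check_duration_seconds"
    ],
    "filters": {
        "vod_quality_checks.analysis_uuid": "{{last_analysis_uuid}}",
        "vod_quality_checks.type": "SSIMPLUS Banding Score"
    }
}'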
Perceptual Color Volume Difference Score (CVD)
The IMAX ViewerScore and Encoder Performance Score take into account color quality. The perceptual color difference is a dedicated metric for color quality assessment that supports both HDR and SDR content. The metric provides a visibility map that helps to quickly identify areas with significant perceptual color differences. The score ranges from 0 to 100, where 0 represents no deviation, a color difference below 15 is perceptible, and 100 denotes opposite colors.
[Figure: Reference/Source and Output frames with the corresponding color volume difference map]
IMAX Content Complexity Score (XCCS)
The primary challenge for a video processing/encoding system operating at the optimal level is the vast variety of video characteristics, also called video complexity. Different content types (such as sports, news, nature, movies, and documentaries) require the video system to run in different configurations.
Traditional measures, like spatial and temporal information, don’t describe video complexity very well. For example, a frame with high spatial information isn’t necessarily difficult to encode.
The IMAX Content Complexity Score (XCCS) is a measure of how difficult it is to encode an asset or any piece of content. Content complexity is one of the most important metrics when making decisions about encoding configurations like bitrate allocation, resolution, etc. Complexity measurements can be used to make more intelligent decisions regarding the type of encoder and/or profile settings to apply when encoding an asset for downstream packaging or consumption.
The XCCS can deliver significant value by:
- saving bandwidth for relatively easy content while maintaining the target quality (as measured by IMAX ViewerScore),
- maintaining the target quality even for complex content by redistributing the resources wisely across content with different requirements,
- reducing the quality variation of the asset library,
- optimizing encoders for asset libraries with a wide range of content,
- saving time and encoding power and
- optimizing transmission costs
Macroblocking
Macroblocking artifacts are a type of visual distortion that can occur during video transmission when packets of data are lost. The severity and appearance of these artifacts can vary, depending on the content of the video, the encoding algorithm used, the packetization strategy, and how the decoder handles errors. Macroblocking detection is a method of analyzing a video frame to determine whether or not it contains macroblocking artifacts. While some no-reference quality metrics may be unable to accurately identify the presence of these artifacts, the IMAX metrics are specifically designed to do so. This is important because these artifacts can significantly impact the perceived quality of the video.
The IMAX ViewerScore (XVS) takes macroblocking artifacts into consideration once macroblocking detection is enabled.
Noise estimation
Noise estimation is critical to measure how much noise exists in the content so that the quality assessment metric can be adapted to generate an objective quality score that is consistent with human opinion.
Noise and film grain are different. Noise needs to be suppressed, while film grain needs to be preserved as it is part of the creative intent. Rather than the structural fidelity of film grain, the human visual system (HVS) is sensitive to its statistical similarity, which hasn’t been captured by existing quality assessment methodologies.
IMAX VisionScience provides two noise metrics for content analysis:
- Physical Noise

  Physical Noise measures the standard deviation of camera/sensor noise when the statistical behavior of the noise is random with a Gaussian (or similar) distribution. If noisy content is processed multiple times, the noise can become correlated with the content and lose its randomness, which reduces the effectiveness of the metric due to the deviation of the appearance of the noise from “true noise.”

- Visual Noise

  Visual Noise measures the standard deviation of noise considering the contrast masking behavior of the underlying content. For example, noise is much more visible in plain regions than in areas with textures. As with Physical Noise measurement, if noisy content is processed multiple times, the noise can become correlated with the content and lose its randomness, which reduces the effectiveness of the metric due to the deviation of the appearance of the noise from “true noise.”
HDR
HDR displays are typically limited in the luminance they can emit, so they must perform tone mapping or luminance shifting/clipping to render HDR content.
Understanding minimum, maximum, and average luminance values for each frame, as well as the maximum frame average light level (MaxFALL) and maximum content light level (MaxCLL), is very helpful for content providers to ensure the creative intent does not get compromised because of display capabilities. Embedded HDR10 metadata is not reliable and often gets ignored by HDR displays. VisionScience analyzes pixels and calculates per-frame min, max and average luminance values as well as MaxFALL and MaxCLL. This feature is used for Quality Control of HDR productions or to validate embedded metadata.
Color gamut detection
The beauty of HDR videos is not solely due to the brighter and darker pixels. Accommodating wider color gamuts (WCG) brings richer color tones to the content.
Grading HDR is typically done using a P3 color gamut. HDR delivery pipelines always use Rec 2020 primaries and so HDR metadata does not provide useful insight into the gamut coverage of the content.
VisionScience is able to look at pixel values and determine the gamut coverage of each frame. When receiving HDR/WCG content, content providers will know how much of the content is actually in Rec 709, P3 or Rec 2020.
*StreamAware supports HLG, HDR10 and Dolby Vision analysis.
The IMAX ViewerScore and Encoder Performance Score assess perceived quality for any target display device and viewing environment:
- Device adaptive

  Scores are device adaptive; this means that the metrics consider the device display attributes and viewing conditions when computing the score and take into account content scaling performed to match device attributes. Impairment perceptibility varies significantly across device displays and viewing conditions. VisionScience takes into account content quality and playback behavior to accurately reflect viewer experience in real time.

  The score can be used to assess degradation in the Viewer Experience introduced by the performance of content delivery and playback, adapted to the display device used by each viewer. It correlates linearly with perceived quality and covers the full quality range.

  The scores support displays of any size, resolution, frame rate, and brightness. The scores also adapt according to viewing distance, angle, and other important viewing conditions.

  StreamAware supports more than 100 device models from vendors like Samsung, LG, Sony, Canon, Apple, etc.

- Viewer adaptive

  Scores are viewer adaptive; IMAX ViewerScores and Encoder Performance Scores adapt according to the viewer type, based on psychovisual behavior. Typical, Expert, and Studio viewers differ in the following ways:

  - Typical Viewer

    The Typical Viewer is primarily concerned with the content of the video rather than the overall viewing experience. They tend to be more critical of quality drops and less likely to notice or appreciate improvements in quality. They also recognize that not all regions of the content are equally important, and their overall evaluation of the content is influenced by this fact.

  - Expert Viewer

    The Expert Viewer, on the other hand, is primarily focused on the quality of the video rather than the content itself. They are aware that different regions of the content may be more or less important, and their overall evaluation of the video is largely dependent on the worst segments of the content.

  - Studio Viewer

    The Studio Viewer is primarily concerned with the preservation of the creative intent of the video, and they place particular importance on regions of the content that are impaired or degraded in some way. They are less concerned with the overall quality of the video and more focused on ensuring that the content is being presented in a way that is consistent with the original creative vision.

  (A configuration sketch showing how device codes and viewer types are selected in an analysis request appears after the Supported Devices table below.)
Supported Devices
Name | Code | Manufacturer | Category | HDR | XBS Compatible |
---|---|---|---|---|---|
SSIMPLUSCore | ssimpluscore | SSIMPLUS | SSIMPLUS | Yes | No |
55U8G | 55u8g | Hisense | TV | Yes | No |
65H9G | 65h9g | Hisense | TV | Yes | No |
65SM9500PUA | 65sm9500pua | LG | TV | Yes | No |
EA9800 | ea9800 | LG | TV | No | No |
OLED55C7P | oled55c7p | LG | TV | Yes | Yes |
OLED55C8PUA | oled55c8pua | LG | TV | Yes | Yes |
OLED55C9PUA | oled55c9pua | LG | TV | Yes | Yes |
OLED55E7N | oled55e7n | LG | TV | Yes | Yes |
OLED65C2PUA | oled65c2pua | LG | TV | Yes | Yes |
OLED65C7P | oled65c7p | LG | TV | Yes | Yes |
OLED65C9PUA | oled65c9pua | LG | TV | Yes | Yes |
OLED65G6P | oled65g6p | LG | TV | Yes | Yes |
OLED75C9PUA | oled75c9pua | LG | TV | Yes | Yes |
AS600 | as600 | Panasonic | TV | No | No |
TX-40CX680B | tx-40cx680b | Panasonic | TV | No | No |
TX65JZ2000B | tx65jz2000b | Panasonic | TV | Yes | Yes |
VT60 | vt60 | Panasonic | TV | No | No |
50PUT6400 | 50put6400 | Philips | TV | No | No |
F8500 | f8500 | Samsung | TV | No | No |
H7150 | h7150 | Samsung | TV | No | No |
HU9000 | hu9000 | Samsung | TV | No | No |
QN55Q8FNBFXZA | qn55q8fnbfxza | Samsung | TV | Yes | No |
QN55QN90A | qn55qn90a | Samsung | TV | Yes | No |
QN65QN900A | qn65qn900a | Samsung | TV | Yes | No |
QN75QN900A | qn75qn900a | Samsung | TV | Yes | No |
QN85QN900A | qn85qn900a | Samsung | TV | Yes | No |
UE40JU6400 | ue40ju6400 | Samsung | TV | No | No |
UE55JS9000T | ue55js9000t | Samsung | TV | Yes | No |
KD-55X8509C | kd-55x8509c | Sony | TV | No | No |
PVMX550 | pvmx550 | Sony | TV | Yes | No |
X9 | x9 | Sony | TV | No | No |
XBR-55A8F | xbr-55a8f | Sony | TV | Yes | No |
XBR75Z9F | xbr75z9f | Sony | TV | Yes | No |
XBRX950G | xbrx950g | Sony | TV | Yes | No |
55R646 | 55r646 | TCL | TV | Yes | No |
65Q825 | 65q825 | TCL | TV | Yes | No |
PQ9 | pq9 | Vizio | TV | Yes | No |
B296CL | b296cl | Acer | Monitor | No | No |
Pro Display XDR | prodisplayxdr | Apple | Monitor | Yes | No |
iMac 21.5 4K | imac2154k | Apple | Monitor | No | No |
iMac 27 5K | imac275k | Apple | Monitor | No | No |
iMac 27inch | imac27inch | Apple | Monitor | Yes | No |
VG27HE | vg27he | Asus | Monitor | No | No |
XL2420T | xl2420t | BenQ | Monitor | No | No |
DP-V2421 | dpv2421 | Canon | Monitor | Yes | No |
DP-V3120 | dpv3120 | Canon | Monitor | Yes | No |
AW2721D | aw2721d | Dell | Monitor | Yes | No |
U2713HM | u2713hm | Dell | Monitor | No | No |
UP3216Q | up3216q | Dell | Monitor | No | No |
27MP35HQ | 27mp35hq | LG | Monitor | No | No |
38GN50TB | 38gn50tb | LG | Monitor | Yes | No |
38GN950B | 38gn950b | LG | Monitor | Yes | No |
LT3053 | lt3053 | Lenovo | Monitor | No | No |
PA242W | pa242w | NEC | Monitor | No | No |
436M | 436m | Philips | Monitor | Yes | No |
BVM-X300 | bvmx300 | Sony | Monitor | Yes | Yes |
Aspire S7 | aspires7 | Acer | Laptop | No | No |
Macbook Air 13inch | macbookair13inch | Apple | Laptop | No | No |
Macbook Pro | macbookpro | Apple | Laptop | No | No |
Macbook Pro 14inch | macbookpro14inch | Apple | Laptop | Yes | No |
Macbook Pro 16.2inch | macbookpro16.2inch | Apple | Laptop | Yes | No |
XPS 13 | xps13 | Dell | Laptop | Yes | No |
XPS 15 | xps15 | Dell | Laptop | No | No |
ThinkPad W540 | thinkpadw540 | Lenovo | Laptop | No | No |
iPhone 13 | iphone13 | Apple | Phone | Yes | No |
iPhone 13 Mini | iphone13mini | Apple | Phone | Yes | No |
iPhone 13 Pro | iphone13pro | Apple | Phone | Yes | No |
iPhone 13 Pro Max | iphone13promax | Apple | Phone | Yes | No |
iPhone 5S | iphone5s | Apple | Phone | No | No |
iPhone 6 | iphone6 | Apple | Phone | No | No |
iPhone 6 Plus | iphone6plus | Apple | Phone | No | No |
iPhone X | iphonex | Apple | Phone | Yes | No |
One (M8) | onem8 | HTC | Phone | No | No |
Nexus 6 | nexus6 | Motorola | Phone | No | No |
OnePlus 9 | oneplus9 | OnePlus | Phone | Yes | No |
OnePlus 9 Pro | oneplus9pro | OnePlus | Phone | Yes | No |
Galaxy Note 4 | galaxynote4 | Samsung | Phone | No | No |
Galaxy S21 | galaxys21 | Samsung | Phone | Yes | No |
Galaxy S21 Plus | galaxys21plus | Samsung | Phone | Yes | No |
Galaxy S21 Ultra | galaxys21ultra | Samsung | Phone | Yes | No |
Galaxy S5 | galaxys5 | Samsung | Phone | Yes | No |
Galaxy S6 Edge | galaxys6edge | Samsung | Phone | No | No |
iPad 2017 | ipad2017 | Apple | Tablet | Yes | No |
iPad 2021 | ipad2021 | Apple | Tablet | No | No |
iPad Air 2 | ipadair2 | Apple | Tablet | No | No |
iPad Mini 2 | ipadmini2 | Apple | Tablet | No | No |
iPad Mini 2021 | ipad2021mini | Apple | Tablet | No | No |
iPad Pro | ipadpro | Apple | Tablet | Yes | No |
iPad Pro 2021 12.9inch | ipad2021pro12.9 | Apple | Tablet | Yes | No |
Nexus 7 | nexus7 | Asus | Tablet | No | No |
Nexus 9 | nexus9 | HTC | Tablet | No | No |
Surface | surface | Microsoft | Tablet | No | No |
Surface Pro 8 | surfacepro8 | Microsoft | Tablet | Yes | No |
Surface Studio 2 | surfacestudio2 | Microsoft | Tablet | No | No |
Galaxy Tab S | galaxytabs | Samsung | Tablet | No | No |
Business Class FHD M1 | businessclassfhdm1 | Panasonic | IFE | No | No |
Business Class FHD M2 | businessclassfhdm2 | Panasonic | IFE | No | No |
Business Class UHD M3 | businessclassuhdm3 | Panasonic | IFE | No | No |
Economy Class FHD S2 | economyclassfhds2 | Panasonic | IFE | No | No |
Economy Class FHD S3 | economyclassfhds3 | Panasonic | IFE | No | No |
Economy Class HD S1 | economyclasshds1 | Panasonic | IFE | No | No |
First Class FHD L1 | firstclassfhdl1 | Panasonic | IFE | No | No |
First Class UHD L2 | firstclassuhdl2 | Panasonic | IFE | No | No |
SmallScreen | smallscreen | SSIMPLUS | SmallScreen | No | No |
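When configuring an analysis via the Stream On-Demand Platform REST API, the Code column above is the value to pass as the device name, and the viewer type is chosen per viewing environment. The following is a minimal analyzerConfig sketch modeled on the NR analysis example earlier; the qn55qn90a and iphone13 codes come from the table above, EXPERT is the viewer type shown in that example, and TYPICAL (like STUDIO) is assumed here to follow the same uppercase convention:
"analyzerConfig": {
    "viewingEnvironments": [
        { "device": { "name": "qn55qn90a" }, "viewerType": "EXPERT" },
        { "device": { "name": "iphone13" }, "viewerType": "TYPICAL" }
    ]
}
In the score check example earlier, viewingEnvironmentIndex presumably selects an entry from this array by position (0 being the first).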
The On-Demand Analyzer, used in the StreamAware and StreamSmart products, streams scores for every frame of every asset analyzed to the Insights data platform. There, our processing aggregates these frame scores over time intervals called measurement periods, and these values are used for various analytics, including many of the pre-configured views presented in the sections below. When it comes to generating a single score (XVS, XEPS, XBS or XCCS) for all frames over a given measurement period, there are three value types available: average, minimum and maximum. As you would expect, these values are the mathematical average, minimum and maximum calculated across the population of frame scores over the given measurement/time period.
The following screen capture provides a useful illustration for understanding measurement periods:
Here we can see that the graph’s reference time is delineated into 2s intervals which we will use as the measurement period in this example. Using this approach, we’ve augmented the graph with labels for a selection of measurement periods showing the approximate average, minimum and maximum scores.
When you use the Insights REST API to fetch analysis results, you can customize the measurement period and, in some cases, the measurement type (i.e. average, minimum or maximum). Generally speaking, you can choose from the following measurement periods when aggregating the frame scores:
1s
2s
5s
10s
30s
60s
Unless requested otherwise, the default measurement period is 1s and the type used is average. You may, at any time, request scores down to the frame level, should your use case require that level of granularity.
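To illustrate what such a request might look like, the sketch below fetches aggregated XVS scores for a single asset. The endpoint path, query parameter names and placeholder values are illustrative assumptions only; please consult the Insights REST API reference for the exact contract.
```
# Hypothetical sketch: fetch XVS scores aggregated over 10s measurement
# periods using the minimum value type. The path and parameter names
# (measurementPeriod, valueType) are assumptions, not the documented API.
curl -s "https://<insights_host>/api/v1/assets/<asset_id>/scores?metric=XVS&measurementPeriod=10s&valueType=minimum" \
  -H "Authorization: Bearer <api_token>"
```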
In addition to extracting scores based on measurement periods, Insights provides Asset Scores, which employ additional intelligence beyond simple mathematical averages of individual frame scores or measurement periods to arrive at a single score for an entire asset.
An Asset Score is a summary metric that assesses the overall experience of the viewer after watching a full asset or program, or just a part of one. It is designed to accurately model the memory behavior of the human visual system and to take into account its response to variations in the viewer experience. For example, in typical scenarios, video quality changes significantly over time for a number of reasons, and a viewer’s opinion of video quality depends heavily on the amount, duration, and direction of such changes. VisionScience uses Expectation Confirmation Theory to model this asymmetric behavior of the human visual system.
The StreamAware and StreamSmart products provide both the average and Asset Scores for the following metrics:
- IMAX ViewerScore (XVS),
- IMAX Encoder Performance Score (XEPS),
- IMAX Banding Score (XBS)
It is generally recommended to use the Asset Score rather than the average score for content lengths that are longer than one minute in duration.
The Asset XVS, for example, provides a measurement that can be used to judge the overall quality of the entire asset, as perceived by the human visual system, taking into account the lingering effect of poor quality segments. The Asset XVS is particularly useful when your goal is to perform source selection on a population of assets by first removing all those with unacceptably low overall quality. Similarly, the Asset Encoder Performance Score (XEPS) can provide a single judgement of the encoder’s performance.
The following scale provides the guide for how to interpret IMAX XVS and XEPS scores:
- Excellent = Impairments are imperceptible
- Good = Impairments are perceptible but not annoying
- Fair = Impairments are slightly annoying
- Poor = Impairments are annoying
- Bad = Impairments are very annoying
Cadence is defined as the repetition pattern of frames when content that was originally produced at a lower frame rate is presented at a higher frame rate as a result of frame rate conversion.
Since content may be transcoded multiple times between the original and the analyzed video, and different transcoders apply different cadence patterns when performing frame rate conversion and deinterlacing, StreamAware applies heuristics to determine which cadence pattern is observed. Additionally, since some long-form content may be a combination of different sources stitched together, StreamAware can detect several cadences throughout the asset and report the detected cadence patterns, along with the times observed, to Insights.
If there is not enough differentiation between frames within the content due to low motion, then StreamAware will not report a cadence but will instead report "Unknown (Low Motion)", along with a confidence threshold for all detected cadence patterns.
The cadence patterns StreamAware can detect, and their typical use cases, are:
Pattern | Typical Use Case |
---|---|
1:1 (No Pattern) | The content has unique frames throughout and likely has the same framerate as the original |
2:2 | Every frame is repeated exactly once. May be Progressive Segmented Frames or 25fps (frames per second) content encoded at 50fps |
2:3 / 3:2 | This is the classic telecine cadence pattern, where the first frame is repeated once and the second frame is repeated twice. This is often seen when 24fps is converted to 60fps |
1:1:1:2 | Every 4th frame is repeated once. This is often seen when 24fps is converted to 30fps |
2:3:3:2 / 2:2:3:3 | An “improved” method of applying a 2:3 or 3:2 cadence where the middle frame in the sequence does not contain a mix of consecutive fields |
2:2:2:4 | An alternative method of converting from 24fps to 60fps where the 4th frame is repeated 4 times |
2:2:3:2:3 | Used when converting 25fps to 30fps content |
2:2:2:2:2:2:2:2:2:2:2:3 | Also known as 12:1 or 24:1; used to convert 24fps to 25fps |
5:5 | Used to convert 12fps (typically animation) to 30Hz |
6:4 | Another way to convert 12fps (typically animation) to 30Hz |
7:8 / 8:7 | Used to convert 8fps (typically animation) to 30Hz |
Any detected cadence that isn’t 1:1 (No Pattern) and isn’t in the list above will be returned as Unknown (Low Motion).
Content similarity mode can detect content differences arising from frame insertions and deletions between two versions of the same title. If a difference in content is detected, a quality check event will be raised indicating the first frame at which the difference occurred.
In order to tune the level at which a content difference is considered significant enough to trigger a quality check event, a configurable sensitivity parameter is provided. A higher sensitivity results in a check that will be triggered for minor content impairments and differences that occur over a short period of time. Meanwhile, a lower sensitivity results in a check that will be triggered only for obvious content differences that persist for many frames.
When operating in this mode, exactly one test and one reference asset must be provided for the analysis.
Additionally, the usual metrics will not be generated; instead, both the reference and test will be evaluated in a no-reference mode. Thus, full-reference metrics such as XEPS, PSNR and CVD cannot be enabled, nor can full-reference metrics be used as the basis for score-based quality checks.
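As a rough sketch of how a content similarity analysis might be submitted through the Stream On-Demand Platform REST API, consider the following; the endpoint path and all field names (mode, sensitivity, etc.) are hypothetical placeholders rather than the documented schema.
```
# Hypothetical sketch: submit a content similarity analysis with exactly one
# reference and one test asset; "sensitivity" is an assumed field name.
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "CONTENT_SIMILARITY",
    "reference": "s3://my-bucket/title_v1.mxf",
    "test": "s3://my-bucket/title_v2.mxf",
    "sensitivity": "high"
  }'
```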
- Audio quality checks and loudness measurements are independently licensed features
- The loudness measurement standard used is a configuration option
- The default measurement standard is ITU-R BS.1770-4
- Silence quality checks are impacted by this choice, as they are based on the True Peak Level measurement, which varies slightly by measurement standard
- All physical audio tracks within a video asset or sidecar will be measured when the feature is enabled
- Audio streams containing uncompressed PCM audio with container level channel layout metadata are supported
- Audio streams containing uncompressed PCM audio that do not have any channel layout metadata are limited to the following channel counts:
- 1, 2, 3, 4, 5, 6, 7, 8, 16, 24
Loudness Metrics
StreamAware is capable of performing several loudness measurements that follow the ITU-R BS.1770, EBU Tech 3341 and EBU Tech 3342 measurement standards. These measurements can have Quality Checks applied to them.
- True Peak Level
- Integrated Loudness
- Momentary Loudness
- Short-Term Loudness
- Loudness Range
True Peak Level
True Peak Level is defined within the ITU-R BS.1770 specification.
The True Peak Level measurement performs 4x oversampling of the audio samples of the track to determine the true peak, as opposed to the sample peak.
The unit of measurement for True Peak Level is dBTP or decibels relative to full scale, true-peak.
Integrated Loudness
Integrated Loudness (also known as Programme Loudness) is defined within the ITU-R BS.1770 specification.
Integrated Loudness is a loudness measurement that weights the individual channels to determine a perceived loudness.
Integrated Loudness is measured over the entire asset.
The unit of measurement for Integrated Loudness is LKFS, or Loudness, K-weighted, relative to Full Scale. LKFS is equivalent to a decibel: an increase in the level of a signal by 1 dB will cause the loudness reading to increase by 1 LKFS.
Momentary Loudness
Momentary Loudness is defined within the EBU Tech 3341 specification.
Momentary Loudness is an ungated loudness measurement derived from Integrated Loudness that uses a sliding rectangular time window of length 400ms.
The unit of measurement for Momentary Loudness is LUFS, or Loudness Units relative to Full Scale. LUFS is equivalent to LKFS, but LUFS is the preferred term in EBU documentation.
Short-Term Loudness
Short-Term Loudness is defined within the EBU Tech 3341 specification.
Short-Term Loudness is an ungated loudness measurement derived from Integrated Loudness that uses a sliding rectangular time window of length 3s.
The unit of measurement for Short-Term Loudness is LUFS.
Loudness Range
Loudness Range is defined within the EBU Tech 3342 specification.
Loudness Range quantifies the variation in a gated loudness measurement over a 3s period.
The unit of measurement for Loudness Range is LU or Loudness Units.
The unit of measurement for Loudness Range High Level and Loudness Range Low Level is LUFS.
Standards
The StreamAware product allows for configuration of the ITU-R BS.1770 measurement standard when performing loudness measurement and loudness based quality checks. A description of the standards can be found in the table below.
Standard | Description |
---|---|
ITU-R BS.1770-1 | Initial specification describing techniques used to measure programme loudness and true peaks. Integrated Loudness measured with ITU-R BS.1770-1 will be ungated. Only tracks with channel layouts up to 5.1 surround sound are supported. |
ITU-R BS.1770-2 | Integrated Loudness measurements are now gated at -70 dB. |
ITU-R BS.1770-3 | A FIR interpolating filter is introduced in the True Peak Level measurement. |
ITU-R BS.1770-4 | Tracks with channel layouts greater than 5.1 surround sound (e.g. 7.1) are supported. |
The default measurement standard is ITU-R BS.1770-4.
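As an illustration only, selecting an older standard for an analysis might look like the sketch below; the endpoint path and the loudnessStandard field name are assumptions, not the documented configuration keys.
```
# Hypothetical sketch: request ITU-R BS.1770-2 instead of the default -4
# (endpoint and field names are illustrative placeholders).
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{"audio": {"loudnessStandard": "ITU-R BS.1770-2"}}'
```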
Configuration and Supported Formats
StreamAware audio loudness measurements and quality checks support a variety of codecs and formats. This includes but is not limited to:
- Dolby Digital AC-3
- Dolby Digital Plus E-AC-3
- Dolby Digital Plus with Dolby Atmos E-AC-3
- AAC
- MPEG-2 Audio
- MP3 Audio
- PCM Audio
If a desired codec is not listed above, customers should contact their IMAX representative for more information.
When performing audio loudness measurements, all physical audio tracks within an asset or sidecar will have their loudness measured according to the metrics listed in the Loudness Metrics section.
Audio Sidecars
StreamAware is capable of performing audio loudness measurements and quality checks on physical audio tracks supplied from up to 32 sidecar files associated with an asset. Audio sidecars are commonly used to provide alternate tracks containing different language options, audio descriptions, and channel layout configurations. Sidecar processing occurs in parallel with the audio and video tracks from the main asset, with minimal impact on the overall analysis time.
Audio sidecars carry tracks that are intended for simultaneous playback with the video asset belonging to an analysis. It is assumed that the presentation start time of both the audio sidecars and audio/video within the main asset are synchronized to begin at the same point in time.
The following container formats are supported for audio sidecars:
- MXF
- BWF
- WAV
- MP4
Audio sidecar files in MXF or MP4 format can only contain audio tracks. The presence of other types of tracks such as video or subtitles will result in a failure when submitting an analysis.
The following encoded formats are supported for audio sidecars:
- LPCM
- PCM
- AAC
- AC3
- E-AC3
For uncompressed formats, the number of channels within the sidecar file must match one of the channel counts supported by StreamAware so that a proper channel layout can be assumed. In this case, all channels from the sidecar file will be combined into a single audio group with the appropriate layout. The mapping between the number of channels and the layout is specified in the “PCM Audio Support” section below.
For compressed formats, the sidecar file may contain one or more physical tracks which will have a one-to-one mapping with audio groups reported by StreamAware. In this case, channel layout is defined by the metadata associated with the encoded format.
PCM Audio Support
Certain container formats (e.g. MXF) carry additional metadata indicating the channel layout of PCM audio and are fully supported. However, for PCM audio tracks that do not have an explicit channel layout, support is currently limited in StreamAware: the number of channels within the track will default to a specific channel layout. Two tables below describe how PCM audio tracks without channel layout metadata will be processed by the analyzer: a mapping of channel acronyms to channel names, and a mapping of channel counts to the channel layouts supported in StreamAware.
Channel Acronym to Channel Name Mapping
The order in which the channels appear in the table below indicates the order they will appear in a layout.
e.g. Front Left will always be the first channel (if present) in the layout.
e.g. Back Center will never be after Side Left in a layout.
Channel Acronym | Channel Name |
---|---|
FL | Front Left |
FR | Front Right |
FC | Front Center |
LFE | Low Frequency Effects |
BL | Back Left |
BR | Back Right |
FLC | Front Left of Center |
FRC | Front Right of Center |
BC | Back Center |
SL | Side Left |
SR | Side Right |
TC | Top Center |
TFL | Top Front Left |
TFC | Top Front Center |
TFR | Top Front Right |
TBL | Top Back Left |
TBC | Top Back Center |
TBR | Top Back Right |
WL | Wide Left |
WR | Wide Right |
LFE2 | Low Frequency Effects 2 |
TSL | Top Side Left |
TSR | Top Side Right |
BFC | Bottom Front Center |
BFL | Bottom Front Left |
BFR | Bottom Front Right |
Number of Channels to Channel Layout Mapping
Number of Channels | Channel Layout | Channel Mapping |
---|---|---|
1 | Mono | 1-FC |
2 | Stereo | 1-FL 2-FR |
3 | 2.1 | 1-FL 2-FR 3-LFE |
4 | 4.0 | 1-FL 2-FR 3-FC 4-BC |
5 | 5.0 (back) | 1-FL 2-FR 3-FC 4-BL 5-BR |
6 | 5.1 (back) | 1-FL 2-FR 3-FC 4-LFE 5-BL 6-BR |
7 | 6.1 | 1-FL 2-FR 3-FC 4-LFE 5-BC 6-SL 7-SR |
8 | 7.1 | 1-FL 2-FR 3-FC 4-LFE 5-BL 6-BR 7-SL 8-SR |
16 | Hexadecagonal | 1-FL 2-FR 3-FC 4-BL 5-BR 6-BC 7-SL 8-SR 9-TFL 10-TFC 11-TFR 12-TBL 13-TBC 14-TBR 15-WL 16-WR |
24 | 22.2 | 1-FL 2-FR 3-FC 4-LFE 5-BL 6-BR 7-FLC 8-FRC 9-BC 10-SL 11-SR 12-TC 13-TFL 14-TFC 15-TFR 16-TBL 17-TBC 18-TBR 19-LFE2 20-TSL 21-TSR 22-BFC 23-BFL 24-BFR |
If the number of channels within the audio track is not within the set of supported channel counts, it is recommended that customers disable audio loudness measurement before running their analysis.
Clicks and Pops
StreamAware provides a quality check for clicks and pops that may occur within an audio track. Clicks and pops are caused by a variety of factors, including a poor recording environment, bad equipment, or a misaligned recording.
Clipping
StreamAware provides a quality check for clipping that may occur within an audio track. Clipping occurs when an audio signal exceeds the maximum limit of a recording or playback system. It typically happens when the volume level of the audio reaches or exceeds the maximum level that can be accurately reproduced, resulting in the waveform being “clipped” or truncated. This introduces unwanted artifacts into the audio signal, leading to a harsh, distorted sound.
Phase Mismatch
StreamAware provides a quality check for audio phase mismatch that may occur between left and right channel pairs within an audio track. This occurs when the signals of the channel pair are misaligned or out of phase, creating interference that weakens the sound wave and results in thin, distorted output.
The Phase Mismatch quality check can be performed on a left and right channel pair at the front (e.g. 2.1 stereo), side (e.g. 7.1) or rear (e.g. 5.1) of a channel layout.
The quality check can be configured to perform a phase mismatch comparison in the following ways:
- Duration based, where the absolute difference in phase between two channels is compared against a threshold over a specified duration
- Average based, where the difference between the average phase (taken over the entire duration of the audio track) of two channels is compared against a threshold
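As a sketch of what a duration-based configuration might look like when submitted via the REST API (the endpoint path, check type and field names are illustrative assumptions, not the documented schema):
```
# Hypothetical sketch: a duration-based phase mismatch check that fires when
# the absolute phase difference exceeds the threshold for 5 seconds.
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{
    "qualityChecks": [
      {"type": "PHASE_MISMATCH", "comparison": "duration", "threshold": 0.8, "duration": "5s"}
    ]
  }'
```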
Within StreamAware, the On-Demand Analyzer can check that Dolby Vision metadata belonging to an asset is valid and complies with industry best practices for the purposes of content mapping.
Dolby Vision metadata can be provided either through a sidecar XML file or interleaved with JPEG2000 frames inside of an MXF asset in PHDR (Prototype High Dynamic Range) format.
A sidecar XML file cannot be provided when PHDR formatted metadata exists within the MXF asset. This scenario would present ambiguity when applying content mapping. StreamAware will detect this scenario and prevent the analysis from being submitted.
The validation will fail if a sidecar XML file is provided that does not match the video asset contained within the analysis (e.g. different frame count, frame rate, color encoding, etc.), although the metadata contents will still be examined for violations.
If your license permits this feature, validation will be performed whenever metadata is detected in any of the aforementioned formats.
Validation Checks
The following table outlines the list of conditions that are checked within the metadata, along with the validation result that will be reported when a check does not pass.
Condition | Validation Result |
---|---|
Content mapping algorithm version is valid (Valid versions: 0.0, 2.0-2.5, 4.0-4.2) | Fail |
Mastering display metadata is within valid ranges | Fail |
Target display metadata is within valid ranges | Fail |
Each target display has a unique peak brightness value | Fail |
Colour encoding metadata is valid | Fail |
Shots do not overlap with each other | Fail |
Shots do not have gaps between them | Fail |
Shots do not have negative durations | Fail |
Per-frame offsets are within shot boundaries | Fail |
Level 1 metadata values are present and within the range of 0.0 to 1.0 | Fail |
One set of Level 1 metadata values per shot | Fail |
Level 2 and Level 8 target display IDs are recorded in Level 0 | Fail |
Level 2 and Level 8 metadata values are within -1.0 to 1.0 | Fail |
Level 3 metadata values are present and within -1.0 to 1.0 | Fail |
Level 9 metadata values are present and within -1.0 to 1.0 | Fail |
Level 11 metadata values are within the defined set of content type index values | Fail |
No duplicate trims exist in the same shot for a specific target display ID | Fail |
Number of displays requiring custom Level 10 metadata is less than 5 | Fail |
Predefined Mastering and Target displays are checked against the Dolby CM v2.4 Database for matching color primaries, white point, encoding, peak brightness and minimum brightness | Warning |
Mastering displays contain only standard color primaries | Warning |
Level 1 metadata values should not equal (0,0,0) for a duration longer than 1 second | Warning |
Level 1 metadata values should not be the same for more than 15 consecutive shots | Warning |
Level 2 or Level 8 metadata values should not be the same for more than 15 consecutive shots | Warning |
Shots that contain an HDR trim should also contain a 100-nit trim | Warning |
Shots should always contain trim values | Warning |
Shots should not contain trims with Level 2 or Level 8 lift greater than +0.025 with aspect ratio letterboxed | Warning |
One or more issues may be detected within the metadata during validation. When this occurs, the list of issues along with any associated frame ranges will be indicated in the metadata validation report. Any issues encountered that cause a validation failure must be addressed and resolved for proper content mapping. Meanwhile, warnings should be reviewed to ensure there are no undesired results in the video output.
An analysis can be configured on a per-asset basis to produce a quality check event if there are issues detected within the metadata.
Only failing validation results will trigger a quality check event.
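A per-asset configuration might look like the following sketch; the endpoint path and field names are illustrative assumptions, not the documented schema.
```
# Hypothetical sketch: raise a quality check event for Dolby Vision metadata
# validation failures on one asset (names are illustrative placeholders).
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{
    "assets": [
      {"uri": "s3://my-bucket/feature.mxf", "qualityChecks": [{"type": "DOLBY_VISION_METADATA"}]}
    ]
  }'
```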
StreamAware is capable of extracting metadata and performing container compliance checks on MXF (Material Exchange Format) files.
Essential MXF container and media essence metadata can be extracted and reported with the results of an analysis. Additionally, the container structure can be validated for conformance with industry standards in order to guarantee file integrity and compatibility with standard players.
This feature is currently in alpha for this release. Container metadata will only be reported if the MXF Container Compliance check has been enabled for a particular asset within an analysis.
MXF Container Compliance checks are an optionally licensed feature.
MXF metadata extraction includes (but is not limited to):
- Container info: essence container type, track identifiers, file author information, etc.
- General video info: codec, framerate, resolution, aspect ratio, etc.
- General audio info: codec, sample rate, bit depth, channel count, etc.
- Timecode track info: start timecode, VITC (Vertical Interval) timecode, LTC (Linear) timecode
MXF files are validated to ensure compliance with the following criteria:
- Input file must contain a valid MXF container
- SMPTE 377M headers must be present, with all expected metadata elements
- Timecode track must be present
- File must contain at least one valid essence track (e.g. video, audio, subtitles, etc.)
- All essence tracks must be UUID-based and properly identified
- Header, body and footer partitions must contain valid indexing and length information
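Since container metadata is only reported when the compliance check is enabled per asset, an enablement request might look like the following sketch; the endpoint path, check type and field names are illustrative assumptions (the same shape applies to the MP4 checks in the next section).
```
# Hypothetical sketch: enable the MXF container compliance check for one
# asset in an analysis (names are illustrative placeholders).
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{"assets": [{"uri": "s3://my-bucket/master.mxf", "qualityChecks": [{"type": "MXF_CONTAINER_COMPLIANCE"}]}]}'
```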
StreamAware is capable of extracting metadata and performing container compliance checks on MP4 (QuickTime) files.
Essential MP4 container and media essence metadata can be extracted and reported with the results of an analysis. Additionally, the container structure can be validated for conformance with industry standards in order to guarantee file integrity and compatibility with standard players.
This feature is currently in alpha for this release. Container metadata will only be reported if MP4 Compliance Checks have been enabled for a particular asset within an analysis.
MP4 Compliance Checks are an optionally licensed feature.
MP4 (Quicktime) metadata extraction includes (but is not limited to):
- Container info: profile, progressive download, disabled media tracks, etc.
- General video info: framerate, bitrate, scan type, resolution, aspect ratio, color info, etc.
- General audio info: bitrate, sample rate, language, channel layout
- Timecode track info: track references, start timecodes, drop frame and other flags
- Video codec info: ProRes, MJPEG-2000, XAVC, XDCAMHD, XDCAM50, DV, MPEG2, MPEG4, H264, and DNxHD
- Audio codec info: PCM, AAC, AC3, and E-AC3
MP4 (QuickTime) files are validated to ensure compliance with the following criteria:
- Input file must contain a valid MP4 container
- 3GPP files must match profile spec requirements
- Atom data must be complete and correctly sized
- Index atom entries and other required atom entries must exist
- Unique atoms must not be duplicated
- Time scale and duration information must be valid
- Audio and sound descriptor information must be valid
- Video descriptor information must be valid
- Timecode track information must be valid
StreamAware supports various quality checks which can be used to identify assets with unacceptable quality. Generally speaking, quality checks fall into one of the following categories:
-
Video
A video quality check focuses on conditions outside of perceptual quality that would result in unacceptable quality, such as freeze frames, black frames, solid color frames, FPS mismatches, unwanted cadence changes and incorrect and/or mismatched metadata.
-
Audio
An audio loudness quality check performs level-based and range-based assessments of the audio tracks to ensure that standards of acceptability are being met.
An audio artifact quality check performs level-based and signal-based assessments of the audio tracks to ensure that unwanted noise (e.g. clicks, pops and clipping distortion) does not occur during playback. -
Score
A score check allows one to apply numeric thresholds to various VisionScience measurements and metrics and reject assets that don’t meet the threshold.
Video Quality Checks
StreamAware supports the following video quality checks, along with their respective default duration (beyond which the condition is considered a quality failure) and level (per-analysis or per-asset) to which they apply. A video quality check focuses on conditions outside of perceptual quality that would result in unacceptable quality. For more details on configuration options for the video checks, please consult the Stream On-Demand Platform REST API and/or the Insights StreamAware On-Demand app.
Quality Check | Description | Default Duration | Configuration Level |
---|---|---|---|
Black frames | Consecutive black frames are detected that persist for a customizable duration | 10s | Per-Analysis |
Solid color frames | Consecutive frames of a single color (e.g. green, red) are detected that persist for a customizable duration | 10s | Per-Analysis |
Freeze frames | Consecutive frames of identical content are detected that persist for a customizable duration | 10s | Per-Analysis |
Color bar frames | Consecutive frames containing the common color test/calibration bar pattern are detected that persist for a customizable duration | 10s | Per-Analysis |
Cadence | Finds periods of mixed, broken or disallowed cadence | - | Per-Analysis |
Frame rate and scanning | Finds periods of mismatched frame rate (FPS) and/or scan type between probed values and actual bitstream | - | Per-Analysis |
Photosensitive Epilepsy (PSE) Detection | Finds periods that may violate PSE Harding tests (red/luminance flashes, spatial patterns) | - | Per-Asset |
Dolby Vision metadata | Global, per-shot and per-frame metadata validation errors that may impact the ability of a playback device to apply Dolby Vision tone mapping | - | Per-Asset |
MaxFALL and MaxCLL metadata | Global; identifies discrepancies between MaxFALL/MaxCLL metadata and actual measured light levels | - | Per-Asset |
Missing captions | A period, of customizable duration, where closed captions are detected as missing | 60s | Per-Analysis |
The On-Demand Analyzer uses the following order for video frame checks: black -> solid color -> color bar -> freeze.
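To make the duration customization concrete, a per-analysis configuration might look like the sketch below; the endpoint path, check type and field names are illustrative assumptions only.
```
# Hypothetical sketch: tighten the black frames check from its 10s default
# to 2s (names are illustrative, not the documented schema).
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{"qualityChecks": [{"type": "BLACK_FRAMES", "duration": "2s"}]}'
```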
Audio Quality Checks
StreamAware supports the following audio quality checks, along with their respective default duration (beyond which the condition is considered a quality failure) and level (per-analysis or per-asset) to which they apply. An audio quality check performs level-based and range-based assessments of the audio tracks to ensure that standards of acceptability are being met. For more details on configuration options for the audio checks, please consult the Stream On-Demand Platform REST API and/or the Insights StreamAware On-Demand app.
See the Audio section for more details regarding the measurements used for the Audio quality checks below.
Audio Quality Checks will apply to all audio tracks within an asset.
Quality Check | Description | Default Duration | Configuration Level |
---|---|---|---|
Max Short Term Loudness | A period, of customizable duration with a minimum value of 3 seconds, where the Short-Term Loudness value exceeds a specified threshold | - | Per-Asset |
Max Momentary Loudness | A period, of customizable duration with a minimum value of 400 milliseconds, where the Momentary Loudness value exceeds a specified threshold | - | Per-Asset |
Min True Peak Level | A period, of customizable duration with a minimum value of 100 milliseconds, where the True Peak Level value is less than a specified threshold | - | Per-Asset |
Max True Peak Level | A period, of customizable duration with a minimum value of 100 milliseconds, where the True Peak Level value exceeds a specified threshold | - | Per-Asset |
Min Integrated Loudness | The Integrated Loudness value is less than a specified threshold | - | Per-Asset |
Max Integrated Loudness | The Integrated Loudness value exceeds a specified threshold | - | Per-Asset |
Min Loudness Range | The Loudness Range value is less than a specified threshold | - | Per-Asset |
Max Loudness Range | The Loudness Range value exceeds a specified threshold | - | Per-Asset |
Silence | A period, of customizable duration, where the audio true peak level of a channel is detected as less than the specified threshold | 60s | Per-Asset |
Clicks and Pops | Clicks and/or pops were detected over a period of time within an audio track | - | Per-Asset |
Clipping | A period, of customizable duration, where clipping was detected within an audio track | - | Per-Asset |
Standards
There are several standards related to audio loudness compliance.
Compliance Specification | Region | Recommended Quality Checks |
---|---|---|
ARIB TR-B32 | Japan | Min Integrated Loudness > -25 LKFS Max Integrated Loudness < -23 LKFS Max True Peak Level < -1 dBTP |
ATSC/85 | North America | Min Integrated Loudness > -26 LKFS Max Integrated Loudness < -22 LKFS Max True Peak Level < -2 dBTP |
EBU | Europe | Min Integrated Loudness > -25 LKFS Max Integrated Loudness < -23 LKFS Max Short Term Loudness < -18 LUFS Max True Peak Level < -1 dBTP |
OP-59 | Australia | Min Integrated Loudness > -25 LKFS Max Integrated Loudness < -23 LKFS Max True Peak Level < -2 dBTP |
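For example, the EBU row above could translate into a set of per-asset checks roughly like the following sketch; the endpoint path, check types and field names are illustrative assumptions, with thresholds taken from that row.
```
# Hypothetical sketch: loudness checks approximating the EBU recommendations
# in the table above (names are illustrative placeholders).
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{
    "assets": [{
      "uri": "s3://my-bucket/promo.mov",
      "qualityChecks": [
        {"type": "MIN_INTEGRATED_LOUDNESS", "threshold": -25},
        {"type": "MAX_INTEGRATED_LOUDNESS", "threshold": -23},
        {"type": "MAX_SHORT_TERM_LOUDNESS", "threshold": -18},
        {"type": "MAX_TRUE_PEAK_LEVEL", "threshold": -1}
      ]
    }]
  }'
```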
Score Checks
StreamAware supports the following score checks, along with their respective default threshold, duration and level (per-analysis or per-asset) to which they apply. A score check allows one to apply numeric thresholds to various VisionScience measurements and metrics and reject assets that don’t meet the threshold. For more details on configuration options for the score checks, please consult the Stream On-Demand Platform REST API and/or the Insights StreamAware On-Demand app.
Quality Check | Description | Default Threshold | Default Duration | Configuration Level |
---|---|---|---|---|
XVS (SVS) | IMAX ViewerScore | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
XEPS (EPS) | IMAX Encoder Performance Score | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
XBS (SBS) | IMAX Banding Score | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
CVD | Color Volume Difference Score | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
MIN_PIXEL_LUMINANCE | Minimum Pixel Luminance | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
MAX_PIXEL_LUMINANCE | Maximum Pixel Luminance | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
MIN_FRAME_LUMINANCE | Minimum Frame Luminance | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
MAX_FRAME_LUMINANCE | Maximum Frame Luminance | 80 (Web UI only) | 5 (Web UI only) | Per-Asset |
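As a final sketch, a score check matching the Web UI defaults from the table above might be expressed as follows; the endpoint path and field names are illustrative assumptions.
```
# Hypothetical sketch: flag an asset whose XVS drops below 80 for more than
# 5 seconds (the default threshold/duration shown in the table above).
curl -k -X POST "https://<ec2_public_dns>/api/v1/analyses" \
  -H "Content-Type: application/json" \
  -d '{"assets": [{"uri": "s3://my-bucket/episode.mp4", "qualityChecks": [{"type": "XVS", "threshold": 80, "duration": "5s"}]}]}'
```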
You may find it useful to compare reference and subject assets that differ in their frame rates. StreamAware supports the following cross-frame rate criteria:
- The frame rate of the reference video is the same as the frame rate of the test video.
- The frame rate of the reference video is two times the frame rate of the test video.
- The difference between the frame rates of the reference video and the test video is less than 0.01.
- The difference between the frame rate of the reference video and two times the frame rate of the test video is less than 0.01.
In addition to the general cross-frame rate rules above, the On-Demand Analyzer has been enhanced to support a number of common cross-rate scenarios arising when comparing Drop-Frame (DF) with Non Drop-Frame (NDF) videos, including:
- 23.98 vs 24
- 24 vs 23.98
- 29.97 vs 30
- 30 vs 29.97
- 59.94 vs 60
- 60 vs 59.94
The On-Demand Analyzer requires all reference video assets in a full-reference analysis, and all subject assets used in a no-reference analysis, to be at least 235x235 pixels for successful processing.
StreamAware supports the following video formats:
Category | Supported |
---|---|
Media container formats | AV1, AVI, AVR, AVS, DV, FLV, GXF, H261, H263, H264, HEVC, HLS, IFF, IVR, LVF, LXF, M4V, MJ2, Mjpeg, Mjpeg_2000, MOV, MP4, MPEG-PS, MPEG-TS, MKV, MPV, MVE, MXF, VP9, V210, WebM, YUV, 3G2, 3GP, Y4M |
Video codecs | Apple ProRes, AV1, Google VP9, H.261, H.263 / H.263-1996, H.263+, H.263-1998, H.263 version 2, H.264/AVC, MPEG-4 AVC, MPEG-4 part 10, HEVC, JPEG 2000, JPEG-LS, MPEG-1 video, MPEG-2 video, MPEG-4 part 2, On2 VP3, On2 VP5, On2 VP6, On2 VP7, On2 VP8, QuickTime Animation video, Theora, Windows Media Video 7, Windows Media Video 8, Windows Media Video 9 |
HDR formats | HDR10, HLG, Dolby Vision® |
The following sections present guides for the customer-deployed IMAX Stream On-Demand Platform, which requires read-only access to your video assets and whose purpose is to analyze/score, encode and/or optimize your content and securely stream the resulting measurements and metrics to Insights. Please consult the System Architecture section for additional details.
The guides are divided into the following sections:
-
Fixed scale deployments
These deployments are for those customers that are trialing the StreamAware and/or StreamSmart products or have deployed the system for lab/ad-hoc workloads where production-level features, such as high availability and elastic scalability, are unnecessary.
-
Elastic/Dynamic scale deployments
These deployments are for those customers that are ready to deploy the product into their production workflow, having leveraged the Stream On-Demand Platform REST API to achieve the desired level of integration. The IMAX Stream On-Demand Platform is installed in such a manner as to take full advantage of all features expected of a production system (i.e. high availability, elastic scalability).
Some customers may wish to deploy the IMAX Stream On-Demand Platform into an Amazon Elastic Compute Cloud (EC2) instance. Typically, this deployment strategy is used for trials or lab usage of the product where the customer is also an AWS customer and their video assets are already stored in S3 buckets. To support this usage scenario, IMAX provides an Amazon Machine Image (AMI) which contains a full deployment of the IMAX Stream On-Demand Platform that can be deployed to the customer’s AWS account by following a short set of installation instructions. The caveat to this deployment model is that the system is constrained in its ability to scale by the hardware resources allocated to the given EC2 instance. For trial usage, however, concurrent, high-volume processing of video assets isn’t usually a priority.
It is strongly encouraged that someone with at least a basic understanding of the following AWS concepts and services perform the installation:
- AWS Web console,
- CloudFormation (creating/deleting stacks),
- EC2 (launching, configuring, instance profiles, security groups) and
- S3 buckets.
An AMI deployment has some mandatory costs, as described below:
-
EC2 instance
This is the cost of the Amazon EC2 instance that launches the AMI described herein. The cost here varies depending on the EC2 instance type you choose and whether you are using spot instances or have a savings plan with Amazon. For more information and current prices, please refer to Amazon’s EC2 On-Demand Instance Pricing.
-
StreamAware and/or StreamSmart minute usage fees
These fees are the charges per minute of output video analysis, encoding and/or optimization charged by IMAX. IMAX does support private offerings via AWS Marketplace and access to the AMI outside of AWS Marketplace. Please contact your IMAX representative for details.
The following steps are prerequisites for a successful deployment:
-
Provide your IMAX representative with the AWS account number and region that you will be using to launch the IMAX Stream On-Demand Platform AMI.
IMAX needs your account number and region in order to share the AMI with the appropriate account. Your IMAX representative will let you know once this has been done.
Important: You are required to deploy your EC2 instance in the same region as the S3 bucket(s) holding your video assets. This restriction has the advantage of avoiding any AWS cross-region data transfer costs.
-
The AWS user account or role that you will use for deployment has the AWS::AdministratorAccess permission. -
Your video assets are stored in Amazon S3.
-
Your AWS account allows for the creation of EC2 instances with the following resources:
- a c5.24xlarge or c5a.24xlarge instance type and
- a 2.2TB root EBS volume (General Purpose SSD - gp3).
-
The network ACL associated with your VPC must allow inbound/ingress traffic on ports 443 and 22 and outbound/egress internet access on port 443. -
You’ve created an account to access your results in IMAX Insights.
-
You’ve received a feature license file from your IMAX representative, enabling the various products (StreamAware/StreamSmart) and features, according to your purchase/trial agreement.
The following steps and examples will use the AWS Web Console. Please contact your IMAX representative if you have any questions.
-
Login to your AWS account using the AWS Web Console and pick your target region.
-
From the AWS UI console, choose Services -> Management & Governance -> CloudFormation.
-
Click Create stack (with new resources) to create a new stack.
Select the Choose an existing template and Amazon S3 URL options, as shown below. The IMAX CloudFormation template you want can be found at the following Amazon S3 URL:
-
Name the stack appropriately for your purchased version (e.g. imax-stream-on-demand-platform-v3).
-
Specify your stack details, as shown and explained below:
-
Stream On-Demand Platform instance target availability zone (AZ)
Pick your desired availability zone. -
Stream On-Demand Platform web (HTTPS) access
Provide a comma-separated list (no spaces) of one or more CIDR blocks that, in aggregate, include the public IP addresses of all end users of your IMAX Stream On-Demand Platform instance.
Every user needs access to the HTTPS interface.
Tip: Many corporate LANs are configured to use a single IP address/range for all outbound/external Internet access. You may find it convenient to use this IP address/range here, knowing that this should cover all users on your local network.
-
Stream On-Demand Platform SSH access
Provide a comma-separated list (no spaces) of one or more CIDR blocks that, in aggregate, include the public IP addresses for the administrator/deployer of your IMAX Stream On-Demand Platform instance.
The administrator/deployer needs access to the SSH (port 22) interface on your instance for potential deployment and troubleshooting activities. You are encouraged to use a single IP address here (i.e. x.x.x.x/32) where possible.
Note: If you are unable to isolate a single IP address for this administrator role and/or you would like to exercise a higher level of security, you are encouraged to remove the SSH inbound rule from the resulting EC2 security group after the CloudFormation stack completes and add it back only if/when you have additional deployment or troubleshooting activities to perform.
-
Stream On-Demand Platform instance target VPC
Pick a VPC to use when deploying your IMAX Stream On-Demand Platform instance.
You need to ensure that the network access control list (ACL) associated with your target VPC allows inbound/ingress traffic on port 443 and 22 and outbound/egress internet access on port 443. In most cases, you should be able to use the default (AWS-created) VPC.
-
Stream On-Demand Platform EC2 instance type and EBS volume (in GB)
Pick an EC2 instance type and backing EBS volume.
The (encrypted) EBS storage volume is used to hold the reference video you are optimizing, the encoded outputs and any frame captures, banding, quality and/or color volume difference maps generated as part of inspecting the results. While the default/minimum size here is 2.2TB, if you plan to use StreamSmart, you should adjust this value such that it is sufficient to store 1.5 times the size of the largest reference video you expect to optimize. -
Read-only AWS S3 buckets (StreamAware and/or StreamSmart)
Provide a comma-separated list (no spaces) of S3 bucket(s) to which read-only access will be granted to the IMAX Stream On-Demand Platform instance. StreamAware On-Demand requires read-only access to S3 buckets. StreamSmart On-Demand requires read-only access for S3 buckets containing source/input videos and read-write access to the S3 buckets that will store the anchor and output videos. Use the full ARN for your S3 bucket(s) here. -
Read-write AWS S3 buckets (used only with StreamSmart)
Provide a comma-separated list (no spaces) of S3 bucket(s) to which read-write access will be granted to the IMAX Stream On-Demand Platform instance. StreamSmart On-Demand requires write access only to the S3 bucket(s) that will hold the anchor and output videos. Use the full ARN for your S3 bucket(s) here.
Note: You can specify some, none or all of the same buckets in the read-only and read-write lists.
-
-
Move through the rest of the New Stack wizard accepting all defaults and click on Submit.
-
Verify that the stack creation completes successfully and click on the Outputs tab.
Record the PublicDNS value of your IMAX Stream On-Demand Platform instance and use it as the <ec2_public_dns> value in the steps below. -
Verify that your IMAX Stream On-Demand Platform instance is operational.
Once your EC2 instance has successfully started, open a new tab in your browser and load the URL: https://<ec2_public_dns>/, where <ec2_public_dns> is the public host name of your EC2 instance you recorded in the previous step (e.g. https://ec2-35-172-215-83.compute-1.amazonaws.com/).
Important: It can take several minutes for the virtual machine to initialize and load the IMAX Stream On-Demand Platform. You can verify the operational status of the EC2 instance by visiting EC2->Instances in the AWS console.
The system provides TLS through the use of self-signed certificates and, as such, your browser will likely flag the URL/site as a potential security risk. Please direct your browser to accept the certificate and proceed to the site.
All users of the IMAX Stream On-Demand Platform will need to accept this certificate in their respective browsers.
The page displayed is a system launch and initial configuration page for your IMAX Stream On-Demand Platform instance. You should see something similar to the image below:
At this point, your system will show that it is UNLICENSED and OFFLINE. The system should become ACTIVE and ONLINE simply by loading the feature license file provided to you by your IMAX representative.
Note: The System health indicates that it is OFFLINE because two critical services (i.e. InsightsClientService and InsightsKafkaService) require a license in order to operate.
Load your license file by expanding the License Information container, clicking on the Upload license link and browsing to your feature license file. Feel free to verify the contents of your license once it is loaded and displayed on the screen. If you collapse the License Information section you should see that your system is now ACTIVE and ONLINE.
Note: If you notice that your system still shows OFFLINE, please expand the System health container and hover your mouse over the statuses for additional details. If you are unsure how to remedy the issue(s), please contact your IMAX representative for assistance. -
Click on the View Dashboard button.
From the same initial configuration page for your IMAX Stream On-Demand Platform instance shown above, click on the View Dashboard button. Your browser will create a new tab and load the On-Demand Status page in Insights. You will be required to log in to your Insights account at this time. Once you’ve successfully logged in, you will be prompted with a Host setup dialog that looks similar to the following:
Type your <ec2_public_dns> recorded earlier into the dropdown, then press the <Tab> key to leave the field and enable the Test connection button. Click on the Test connection button. At this point, you should see your On-Demand Status page.
Note: If you see a message in red under the dropdown that says: Cannot connect to host. See details, this is an indication that the browser instance no longer has the system’s self-signed certificate. Click on the See details link, which will launch the system’s /status endpoint in another tab. Please direct your browser to accept the certificate and proceed to the site. Once the page has been successfully loaded, you can return to the previous tab and click the Test connection button again, at which point you should proceed to the On-Demand Status page. -
(StreamSmart - Optional) Configure access to your AWS Elemental MediaConvert (EMC) endpoint.
If you wish to use AWS EMC with StreamSmart, you must tell the system about your EMC endpoint. You can use the /configurations endpoint to create a secret configuration for your EMC endpoint, as shown below:
```
curl -kvi -X POST "https://<ec2_public_dns>/api/v1/configurations" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "SECRET",
    "id": "mediaconvert-config",
    "config": {
      "data": {
        "accesskey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "keyid": "AKIAIOSFODNN7EXAMPLE",
        "region": "us-east-1",
        "url": "https://vasjpylpa.mediaconvert.us-east-1.amazonaws.com"
      }
    }
  }'
```
Important: Please replace the accesskey, keyid and region values above with the details that apply to your specific AWS EMC endpoint.
Be careful to ensure that you use the mediaconvert-config name when submitting your (secret) EMC configuration. -
(Optional) Download the Privacy Enhanced Mail (PEM) file associated with your IMAX Stream On-Demand Platform instance.
Run the following commands from a CLI:
```
keyPairID=$(aws ec2 --region=<region> describe-key-pairs \
  --filters Name=key-name,Values=imax-stream-on-demand-platform \
  --query KeyPairs[*].KeyPairId --output text)
aws ssm --region=<region> get-parameter --name /ec2/keypair/$keyPairID \
  --with-decryption --query Parameter.Value --output text > imax-stream-on-demand-platform.pem
chmod 400 imax-stream-on-demand-platform.pem
```
where <region> is your chosen region (e.g. us-east-1).
Note: The commands above require the AWS CLI.
The commands above will create a PEM file that is used for SSH access to your IMAX Stream On-Demand Platform EC2 instance and only needed by the administrator/deployer role for deployment and/or troubleshooting activities.
Upgrading Versions (~ 30 mins)
Upgrading to a new version of the IMAX Stream On-Demand Platform AMI is simply a matter of terminating/destroying your existing instance and installing the new one by repeating the installation steps above for the new AMI.
- Login to your AWS account using the AWS Web Console.
- From the AWS UI console, choose Services -> Management & Governance -> CloudFormation.
- Select the stack you created when deploying your current IMAX Stream On-Demand Platform instance (e.g. imax-stream-on-demand-platform-v3).
- Click on the Delete button to delete the stack. Deleting the stack should terminate the current IMAX Stream On-Demand Platform EC2 instance and remove any supporting AWS objects.
- Follow the Installation instructions above for the new IMAX Stream On-Demand Platform AMI.
Once the IMAX Stream On-Demand Platform instance is running, you are ready to submit new analyses and/or optimizations and inspect the results.
For the purposes of validating a new deployment or upgrade, you are encouraged to use the Insights Web UI to submit a new analysis or submit a new optimization, depending on whether you’re using StreamAware or StreamSmart.
If you encounter trouble or have any questions about installing/launching your IMAX Stream On-Demand Platform AMI, please create a support ticket by using the IMAX Help Center or feel free to reach out to your IMAX representative directly.
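If you prefer to script a quick health check as part of this validation, you can probe the system’s /status endpoint (the same endpoint referenced in the Host setup step above). A minimal sketch, using -k because the instance presents a self-signed certificate:
```
# Probe the platform's status page; -k accepts the self-signed certificate.
curl -k https://<ec2_public_dns>/status
```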
Enabling the Kubernetes Dashboard
For fixed scale deployments of the IMAX Stream On-Demand Platform in AWS, you can optionally (and temporarily) enable the Kubernetes Dashboard. The dashboard provides a Web UI for interacting with Kubernetes which can be especially useful in troubleshooting sessions with IMAX support. To enable the Kubernetes Dashboard, please follow the steps below:
-
Deploy and configure the Kubernetes Dashboard
From a CLI, run the following command:
ssh -i <key.pem> ubuntu@<ec2_public_dns> "~/startKubernetesDashboard.sh"
where <key.pem> is the key file you created in step 12 of the Installation and <ec2_public_dns> is the public host name of your EC2 instance.
Important: The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu.
If you chose to disable the SSH inbound rule in your security group and/or NACL, you will need to add it back in order to execute the command above (see step 4 of the Installation).
You should see output similar to the following:
```
Starting and configuring the Kubernetes Dashboard...
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
secret "kubernetes-dashboard-certs" deleted
secret/kubernetes-dashboard-certs created
deployment.apps/kubernetes-dashboard patched
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
Deploying Kubernetes Dashboard ingress and cluster-admin role binding...
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
ingress.networking.k8s.io/dashboard-ingress created
Creating Kubernetes Dashboard token...
secret/kubernetes-dashboard-token created
Kubernetes Dashboard login token:
yJhbGciOiJSUzI1NiIsImtpZCI6ImdDaldoR3E4bEowN1JmOGpwM0FLQ2pDVjhJMGNNMGxVRlpiMnlZcjVuNHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi02YzJnYyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjU2MjX2SQBwGnXaRap89koCipxAJarjYZRCJyOLEbXhdzf-6oHxLySXX3S5FwRvzaAbFyE5wdhweYJaHsrhwQ
Finished!
```
-
Login to the Kubernetes Dashboard using the login token.
At this point, your Kubernetes Dashboard should be accessible from a browser at the URL:
https://<ec2_public_dns>/dashboard
Use the Kubernetes Dashboard login token in the command output from the step above to log in to the Kubernetes Dashboard.
Disabling the Kubernetes Dashboard
Although access to the Kubernetes Dashboard is secured through the use of a private token, it is generally recommended that, after you have completed your troubleshooting effort, you disable the Kubernetes Dashboard by removing it from the deployment.
From a CLI, run the following command:
ssh -i <key.pem> ubuntu@<ec2_public_dns> "~/stopKubernetesDashboard.sh"
where <key.pem> is the key file you created in step 12 of the Installation and <ec2_public_dns> is the public host name of your EC2 instance.
The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu.
If you chose to disable the SSH inbound rule in your security group and/or NACL, you will need to add it back in order to execute the command above (see step 4 of the Installation).
You should see output similar to the following:
Stopping the Kubernetes Dashboard...
namespace "kubernetes-dashboard" deleted
serviceaccount "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted
secret "kubernetes-dashboard-certs" deleted
secret "kubernetes-dashboard-csrf" deleted
secret "kubernetes-dashboard-key-holder" deleted
configmap "kubernetes-dashboard-settings" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
deployment.apps "kubernetes-dashboard" deleted
service "dashboard-metrics-scraper" deleted
deployment.apps "dashboard-metrics-scraper" deleted
Removing Kubernetes Dashboard ingress and cluster-admin role binding...
Finished!
This deployment model delivers the IMAX Stream On-Demand Platform as a virtual machine file and is meant for those customers wishing to deploy the product onto a bare metal server for trial and/or lab usage purposes. The solution options here are to use either an Open Virtual Appliance (OVA) or an open-source Quick Emulator (QEMU) container with KVM acceleration, both of which contain a full deployment of the IMAX Stream On-Demand Platform that is configured to work in environments with modest hardware capabilities, where the benefit of elastic scaling available in a cloud environment is unnecessary or unavailable.
The OVA solution has the advantage of working with several of the most common virtualization players and, as such, has good cross-platform support, including the Windows platform. Our OVA solution has been verified to work with both Oracle’s VirtualBox and VMware ESXi.
The QEMU container runs on any Linux-based system and has been verified to work with tools such as VirtManager and oVirt.
It is recommended that someone with basic working knowledge of the following concepts perform the installation:
- Linux CLI basics (SSH) and
- basic networking (bridge and/or NAT adapters, HTTP/S proxy, IP addresses, hostnames).
For OVA deployments, knowledge or experience with VirtualBox is useful.
For QEMU deployments, knowledge of VirtManager is useful.
The following steps are prerequisites for a successful deployment:
-
From your IMAX representative, you should receive or request the following items:
-
IMAX feature license file enabling the various products (StreamAware/StreamSmart) and features, according to your purchase/trial agreement.
-
The OVA/QEMU that has been built for your trial/lab.
The OVA/QEMU will be large in size (multiple gigabytes) and named according to the following pattern: imax-stream_MAJOR.MINOR.PATCH-BUILD_MM-DD-YYYY_HH-MM.ova/qcow2, where MAJOR, MINOR, PATCH and BUILD represent the major, minor, patch and build numbers of the target build, and MM-DD-YYYY_HH-MM is the month, day, year, hour and minute of the virtual machine’s creation.
-
-
The physical/virtual host machine must have the following minimum hardware resources that can be dedicated to the IMAX Stream On-Demand Platform virtual machine:
StreamAware:
- 24G RAM (32G RAM preferred)
- 24 CPU cores (32 cores preferred)
- 25G of free disk space (100G preferred)
StreamSmart (includes StreamAware):
- 24G RAM (32G RAM preferred)
- 24 CPU cores (32 cores preferred)
- 2.2TB of free disk space
Note: You are strongly encouraged to use the preferred values above, when possible.
-
The physical/virtual host machine should, ideally, use a 64-bit Linux operating system.
Please ensure that your host machine has installed the following common networking and filesystem packages:
- openssh-server
- nfs-common
- nfs-utils (may not be available for all Linux distros)
- net-tools
- bridge-utils
Note: You may use an alternative operating system (e.g. Windows/macOS) provided that your virtualization player can host our supported virtual machine container (i.e. OVA or QEMU). Instructions in this guide, however, will assume the preferred Linux-based host operating system.
-
The physical/virtual host machine should have installed a virtualization player that can support your chosen virtual machine container (i.e. Open Virtualization Format (OVF) or QEMU) along with hardware-assisted virtualization (i.e. Intel VT-x or AMD-V).
For OVA files, one of the more popular virtualization players is Oracle’s VirtualBox.
For QEMU files, VirtManager is a common option. See here for instructions on how to install VirtManager and KVM on Ubuntu 20.04.
On a Linux host machine, a positive value for the following command indicates that you have hardware-assisted virtualization enabled:
grep -Eoc '(vmx|svm)' /proc/cpuinfo
-
The video assets you wish to process must be either:
- mounted on the host machine in a folder that can be shared with the IMAX Stream On-Demand Platform virtual machine through the virtualization player,
- mountable as a Linux file system within the IMAX Stream On-Demand Platform virtual machine (you can SSH into the instance and install drivers as needed),
- available via a Network File System (NFS),
- and/or available via AWS S3.
In the case of NFS, the exported folder(s) must allow for read-only access to the assets without needing any specific user/group permissions and/or credentials.
In the case of S3, you must have the following details available (a quick way to verify them follows this list):
- bucket name
- access key id
- access key
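If you have the AWS CLI installed on any machine, you can optionally sanity-check these credentials before installation; this is a minimal sketch using the same placeholder bucket and keys that appear later in this guide:
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
aws s3 ls s3://mybucket
A successful listing confirms that the bucket name, access key id and access key are valid and readable.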
-
Your local network must allow outbound internet access on port 443 to the following hosts:
Note: The IMAX Stream On-Demand Platform virtual machine supports using SOCKS5 and HTTP/S forward proxies, if required. Please see the Installation section below for configuration details.
-
You’ve created an account to access your results in IMAX Insights.
The following steps illustrate installing the IMAX Stream On-Demand Platform as an OVA container named imax-stream_3.1.0_05-27-2024_14-22.ova, using VirtualBox as the virtualization player and a CLI. You should replace imax-stream_3.1.0_05-27-2024_14-22.ova with the image filename for your release. Similar steps should apply to other virtualization player products, many of which have GUIs to achieve the same operations. Please contact your IMAX representative if you have any questions.
-
Import the IMAX Stream On-Demand Platform OVA into your virtualization player.
vboxmanage import imax-stream_3.1.0_05-27-2024_14-22.ova
Please wait 5 minutes for the virtual machine to successfully import and stand up all the services inside.
-
Verify your virtual machine import.
vboxmanage list vms
You should see your VM name in this list (in this case imax-stream_3.1.0_05-27-2024_14-22).
-
Configure your virtual machine network settings.
You will need the IMAX Stream On-Demand Platform virtual machine to be accessible on your local network. Similarly, the virtual machine must be able to connect to the internet (i.e. on port 443 to the Insights hosts discussed in the Insights Prerequisites). There are two documented approaches below: Bridged Adapter and NAT.
Bridged Adapter
If the local network on which your host machine resides has a DHCP server, then the simplest and preferred approach is to configure your virtual machine with a bridged adapter that is bound to your host machine’s physical network interface. This ensures that, once the virtual machine is started, it will get its own IP address from your DHCP server and be directly accessible from any machine on your local network. The caveat here is that, if you don’t want the IP address to change when the host or virtual machine is rebooted, you may need to modify your DHCP server to allow static assignment based on the interface’s MAC address or provide some alternative DNS solution.
vboxmanage modifyvm "imax-stream_3.1.0_05-27-2024_14-22" \ --nic1 bridged --bridgeadapter1 $(route | grep '^default' | grep -o '[^ ]*$' | head -n 1)
Notice in the example above that route | grep '^default' | grep -o '[^ ]*$' | head -n 1 is an automated way of determining the default adapter on your host machine. If you’d prefer, you can substitute that logic with the name of the appropriate adapter (e.g. eth0).
Network Address Translation - NAT
If you don’t have a DHCP server available, you can use Network Address Translation (NAT) and port forwarding to provide access to your virtual machine via your host machine. The following steps can be used with VirtualBox:
vboxmanage modifyvm "imax-stream_3.1.0_05-27-2024_14-22" \ --nic1 nat --natpf1 "streamplatformhttps,tcp,,8443,,443" --natpf1 "streamplatformssh,tcp,,2222,,22"
Here we instruct VirtualBox to forward all packets that come to the host machine on port 8443 to port 443 (HTTPS) on the virtual machine. The same is done for port 2222 and the standard SSH port, 22. Using this approach you can access the IMAX Stream On-Demand Platform REST API using the hostname/IP of your host machine and the 8443 port. The following examples are common URLs used to interact with the IMAX Stream On-Demand Platform REST API, modified to account for NAT and the port forwarding example above:
https://<hostmachine>:8443/api/v1/status (System Status)
https://<hostmachine>:8443/api/v1/analyses (Analysis submission endpoint)
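As a quick sanity check once the virtual machine is running, you can query the System Status endpoint from any machine on your network; the -k flag tells curl to accept the system’s self-signed certificate:
curl -k https://<hostmachine>:8443/api/v1/status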
For more details on VirtualBox networking options, please take a look here.
-
(Optional, OVA and VirtualBox only) Share the host folders containing your video assets with the IMAX Stream On-Demand Platform virtual machine.
vboxmanage sharedfolder add "imax-stream_3.1.0_05-27-2024_14-22" --name videos --hostpath /mnt/videos --readonly --automount
If you have one or more folders mounted on your host that contain video assets you wish to use, you can set them up as shared folders in the IMAX Stream On-Demand Platform virtual machine. In the example above, the folder you wish to share with the virtual machine is mounted on the host at /mnt/videos.
Important: Please ensure that host folder names use only alphanumeric characters, dashes (-) and/or underscores (_) in their names.
-
Start the IMAX Stream On-Demand Platform virtual machine, choosing a headless mode, if available.
vboxmanage startvm "imax-stream_3.1.0_05-27-2024_14-22" --type headless
It may take several minutes for the virtual machine to fully start and all networking services to become available.
-
(Bridged Adapter) If you used the bridged adapter in the network configuration step above, once the virtual machine is up and running, record the IP address it was assigned using the command below:
vboxmanage guestproperty get "imax-stream_3.1.0_05-27-2024_14-22" "/VirtualBox/GuestInfo/Net/0/V4/IP"
If the IP address is not yet available, you will see a message similar to: No value set!. In this case, simply keep retrying.
If you used NAT, this step does not apply to you; you will see an internal address here (most likely on the 10.0.2.x network) which you can ignore.
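If you would rather not retry manually, a small polling loop like the following sketch waits until the guest reports an address (the 10-second interval is arbitrary):
while vboxmanage guestproperty get "imax-stream_3.1.0_05-27-2024_14-22" "/VirtualBox/GuestInfo/Net/0/V4/IP" | grep -q 'No value set!'; do
  sleep 10
done
vboxmanage guestproperty get "imax-stream_3.1.0_05-27-2024_14-22" "/VirtualBox/GuestInfo/Net/0/V4/IP"
-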
(Optional) Make adjustments for outbound HTTP/SOCKS proxies.
If your network requires use of a proxy, please consult the appropriate section below.
Important: The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu. If you are using Network Address Translation (NAT) to access your virtual machine, take care to add the SSH forwarding port to the commands below (i.e. ssh -p 2222 ...).
HTTP Proxy
An HTTP proxy can be configured as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --http-proxy <IP>:<PORT>"
If your HTTP proxy server requires credentials, you can add them as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --http-proxy <IP>:<PORT> --http-proxy-user <USER> --http-proxy-password <PASSWORD>"
where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply).
SOCKS Proxy
If your network requires the use of a SOCKS5 or HTTP forward proxy in order to connect to external hosts, you can configure the IMAX Stream On-Demand Platform virtual machine to use a SOCKS5 proxy server as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --socks5-proxy <IP>:<PORT>"
If your SOCKS5 proxy server requires credentials, you can add them as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --socks5-proxy <IP>:<PORT> --socks5-proxy-user <USER> --socks5-proxy-password <PASSWORD>"
where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply).
-
Verify that your IMAX Stream On-Demand Platform instance is operational.
Once your virtual machine has successfully started, open a new tab in your browser and load the URL https://<imax-stream-platform>/, where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply).
Important: It can take several minutes for the virtual machine to initialize and load the IMAX Stream On-Demand Platform.
The system provides TLS through the use of self-signed certificates and, as such, your browser will likely flag the URL/site as a potential security risk. Please direct your browser to accept the certificate and proceed to the site.
All users of the IMAX Stream On-Demand Platform will need to accept this certificate in their respective browsers.
The page displayed is a system launch and initial configuration page for your IMAX Stream On-Demand Platform instance. You should see something similar to the image below:
At this point, your system will show that it is UNLICENSED and OFFLINE. The system should become ACTIVE and ONLINE simply by loading the feature license file provided to you by your IMAX representative.
Note: The System health indicates that it is OFFLINE because two critical services (i.e. InsightsClientService and InsightsKafkaService) require a license in order to operate.
Load your license file by expanding the License Information container, clicking on the Upload license link and browsing to your feature license file. Feel free to verify the contents of your license once it is loaded and displayed on the screen. If you collapse the License Information section, you should see that your system is now ACTIVE and ONLINE.
Note: If you notice that your system still shows OFFLINE, please expand the System health container and hover your mouse over the statuses for additional details. If you are unsure how to remedy the issue(s), please contact your IMAX representative for assistance.
-
Click on the View Dashboard button.
From the same initial configuration page for your IMAX Stream On-Demand Platform instance shown above, click on the View Dashboard button. Your browser will create a new tab and load the On-Demand Status page in Insights. You will be required to log in to your Insights account at this time. Once you’ve successfully logged in, you will be prompted with a Host setup dialog that looks similar to the following:
- Type in your <imax-stream-platform> into the dropdown, where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply),
- press the <Tab> key to leave the field and enable the Test connection button, and
- click on the Test connection button.
At this point, you should see your On-Demand Status page.
Note: If you see a message in red under the dropdown that says: Cannot connect to host. See details, this is an indication that the browser instance no longer has the system’s self-signed certificate. Click on the See details link, which will launch the system’s /status endpoint in another tab. Please direct your browser to accept the certificate and proceed to the site. Once the page has been successfully loaded, you can return to the previous tab and click the Test connection button again, at which point you should proceed to the On-Demand Status page.
-
(StreamSmart) Provide the platform with access to your S3 bucket(s).
If you are using StreamSmart, you will need to add secrets to your deployment in order to read from the S3 buckets holding your video assets. Each S3 bucket requires a secret with field values for the bucket name, key id and access key. You can use the /s3AccessSecret endpoint on the Stream On-Demand Platform REST API to create a conformant secret, as shown below:
curl -kvi -X PUT "https://<imax-stream-platform>/api/v1/s3AccessSecret" \
-H "Content-Type: application/json" \
-d '{ "bucketName": "mybucket", "clientId": "AKIAIOSFODNN7EXAMPLE", "clientSecret": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" }'
-
(StreamSmart - Optional) Configure access to your AWS Elemental MediaConvert (EMC) endpoint.
If you wish to use AWS EMC with StreamSmart, you must tell the system about your EMC endpoint. You can use the /configurations endpoint to create a secret configuration for it, as shown below:
curl -kvi -X POST "https://<imax-stream-platform>/api/v1/configurations" \
-H "Content-Type: application/json" \
-d '{ "type": "SECRET", "id": "mediaconvert-config", "config": { "data": { "accesskey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "keyid": "AKIAIOSFODNN7EXAMPLE", "region": "us-east-1", "url": "https://vasjpylpa.mediaconvert.us-east-1.amazonaws.com" } } }'
Important: Please substitute the accesskey, keyid, region and url values above with the details that apply to your specific AWS EMC endpoint. Be careful to ensure that you use the mediaconvert-config name when submitting your (secret) EMC configuration.
Upgrading Versions (~ 30 mins)
Upgrading to a new version of the IMAX Stream On-Demand Platform virtual machine is simply a matter of powering off your existing instance and installing the new one by repeating the Installation steps above for the new file.
-
Power off your existing IMAX Stream On-Demand Platform virtual machine
vboxmanage controlvm "imax-stream_3.1.0_05-27-2024_14-22" poweroff
-
(Optional) Unregister and, optionally, remove your existing IMAX Stream On-Demand Platform virtual machine
vboxmanage unregistervm "imax-stream_3.1.0_05-27-2024_14-22" --delete
The --delete switch is optional and will completely remove the OVA from your host system.
-
Follow Installation instructions for your new IMAX Stream On-Demand Platform virtual machine/OVA, using the same choices as before.
Upgrading VM Hardware (~ 5 mins)
Allocating more CPU cores and/or memory to your IMAX Stream On-Demand Platform virtual machine will allow you to process more analyses and/or optimizations concurrently. To add hardware resources, simply power off your virtual machine, alter the CPU/RAM to the desired levels and restart. The following procedure illustrates how to do this using VirtualBox as the virtualization player:
-
Power off your existing IMAX Stream On-Demand Platform virtual machine
vboxmanage controlvm "imax-stream_3.1.0_05-27-2024_14-22" poweroff
-
Modify the CPU and/or RAM allocated to your IMAX Stream On-Demand Platform virtual machine
vboxmanage modifyvm "imax-stream_3.1.0_05-27-2024_14-22" --memory 32000
vboxmanage modifyvm "imax-stream_3.1.0_05-27-2024_14-22" --cpus 32
In the examples above, we set the memory to 32GB and the number of cores/cpus to 32.
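Before restarting, you can optionally confirm the new allocation:
vboxmanage showvminfo "imax-stream_3.1.0_05-27-2024_14-22" | grep -E 'Memory size|Number of CPUs'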
-
Start the IMAX Stream On-Demand Platform virtual machine, choosing a headless mode, if available.
vboxmanage startvm "imax-stream_3.1.0_05-27-2024_14-22" --type headless
Please wait 5 minutes for the virtual machine to successfully start and stand up all the services inside.
-
At this point, your IMAX Stream On-Demand Platform virtual machine should be up and running. Load the URL https://<imax-stream-platform>/ in your browser, where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply). Verify that the system shows as ACTIVE and ONLINE.
Note: It can take several minutes for the virtual machine to initialize and load the IMAX Stream On-Demand Platform.
The system provides TLS through the use of self-signed certificates and, as such, your browser will likely flag the URL/site as a potential security risk. Please direct your browser to accept the certificate and proceed to the site.
The following steps illustrate installing the IMAX Stream On-Demand Platform as a QEMU container named imax-stream_3.1.0_05-27-2024_14-22.qcow2, using VirtManager as the virtualization player and a CLI. You should replace imax-stream_3.1.0_05-27-2024_14-22.qcow2 with the image filename for your release. Similar steps should apply to other virtualization player products, many of which have GUIs to achieve the same operations. Please contact your IMAX representative if you have any questions.
-
If you are not using root, make sure you’ve added your user to the libvirt and kvm groups and then log out and back in.
sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)
logout
-
Configure your virtual machine network settings.
You will need the IMAX Stream On-Demand Platform virtual machine to be accessible on your local network. Similarly, the virtual machine must be able to connect to the internet (i.e. on port 443 to the Insights hosts discussed in the Insights Prerequisites). There are two documented approaches below: Bridged Adapter and NAT.
Bridged Adapter
If the local network on which your host machine resides has a DHCP server, then the simplest and preferred approach is to configure your virtual machine with a bridged adapter that is bound to your host machine’s physical network interface. This ensures that, once the virtual machine is started, it will get its own IP address from your DHCP server and be directly accessible from any machine on your local network. The caveat here is that, if you don’t want the IP address to change when the host or virtual machine is rebooted, you may need to modify your DHCP server to allow static assignment based on the interface’s MAC address or provide some alternative DNS solution.
Assuming your host is Linux and you are using the NetworkManager service to manage your network, you can do the following to establish a bridge network:
sudo nmcli con add ifname br0 type bridge con-name br0
sudo nmcli con add type bridge-slave ifname $(route | grep '^default' | grep -o '[^ ]*$' | head -n 1) master br0
sudo nmcli con modify br0 bridge.stp no
sudo reboot
Notice in the example above that route | grep '^default' | grep -o '[^ ]*$' | head -n 1 is an automated way of determining the default adapter on your host machine. If you’d prefer, you can substitute that logic with the name of the appropriate adapter (e.g. eth0), if you know it.
Note: Depending on the Linux variant on your host machine and what service you are using to manage your networking, your solution to creating a bridge network may be different. Moreover, your local network may not have a DHCP server from which new local IP addresses can be obtained. Consult instructions for your Linux variant and networking policies and make adjustments to the above as necessary.
Next, we add the bridge network to VirtManager by creating a file called bridge.xml somewhere on your host machine with the following contents:
<network>
  <name>bridge</name>
  <forward mode="bridge"/>
  <bridge name="bridge"/>
</network>
Add the network to VirtManager by executing the following:
virsh net-define bridge.xml
virsh net-start bridge
virsh net-autostart bridge
Verify that the bridge network is active by executing the following:
virsh net-list
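If the network was created successfully, the output should look something like the following (the default network may or may not appear on your system):
 Name      State    Autostart   Persistent
--------------------------------------------
 bridge    active   yes         yes
 default   active   yes         yes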
Network Address Translation - NAT
If you don’t have a DHCP server available on your local network and your host machine’s IP/hostname is static, you can use Network Address Translation (NAT) and port forwarding to provide access to your virtual machine via your host machine.
First, let’s modify the default network so that it always assigns the same local IP address to your IMAX Stream On-Demand Platform virtual machine:
virsh net-edit default
Modify the default network’s <ip> element as shown below:
. . . <ip address='192.168.122.1' netmask='255.255.255.0'> <dhcp> <range start='192.168.122.2' end='192.168.122.254'/> <host mac='52:54:00:8f:7f:a5' name='imax-stream-on-demand-platform' ip='192.168.122.2'/> </dhcp> </ip>
Create and edit a QEMU hook file for libvirt as follows:
sudo touch /etc/libvirt/hooks/qemu
sudo chmod a+rx /etc/libvirt/hooks/qemu
Use the following contents for the /etc/libvirt/hooks/qemu file:
#!/bin/bash
# IMPORTANT: Change the "VM NAME" string to match your actual VM Name.
# In order to create rules to other VMs, just duplicate the below block and configure
# it accordingly.
if [ "${1}" = "imax-stream-on-demand-platform" ]; then
  # Update the following variables to fit your setup
  GUEST_IP=192.168.122.2
  GUEST_PORT=443
  HOST_PORT=8443
  if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
    /sbin/iptables -D FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
    /sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
  fi
  if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
    /sbin/iptables -I FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
    /sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
  fi
fi
Restart the default network and the libvirtd service:
virsh net-destroy default
virsh net-start default
sudo systemctl restart libvirtd
The steps above ensure that the private IP address 192.168.122.2 will always be assigned to the IMAX Stream On-Demand Platform virtual machine and that the host machine will provide NAT services on its behalf, mapping port 8443->443.
Install the IMAX Stream On-Demand Platform QEMU into VirtManager.
If you are using the bridged adapter, run the following:
virt-install --virt-type kvm --name imax-stream-on-demand-platform --memory 32768 --vcpus 32 \ --graphics none --os-type Linux --os-variant ubuntu22.04 --disk <qemu_path> \ --import --noautoconsole --autostart --network bridge
If you are using NAT, run the following:
virt-install --virt-type kvm --name imax-stream-on-demand-platform --memory 32768 --vcpus 32 \ --graphics none --os-type Linux --os-variant ubuntu22.04 --disk <qemu_path> --import --noautoconsole --autostart --network=default,model=virtio,mac=52:54:00:8f:7f:a5
where <qemu_path> is the absolute path on disk where you copied the IMAX Stream On-Demand Platform QEMU container (e.g. /home/userX/imax-stream_3.1.0_05-27-2024_14-22.qcow2).
Please wait 5 minutes for the virtual machine to successfully import and stand up all the services inside.
Note: The command above gives the virtual machine 32G of RAM and 32 vCPUs. You may alter these to suit your underlying hardware, but you should not use anything less than the minimum specification presented in the Installation Prerequisites.
-
Verify your virtual machine import.
virsh list --all
You should see your VM name in this list (in this case imax-stream-on-demand-platform) and the state should be running.
-
(Bridged Adapter) Once the virtual machine is up and running, record the IP address it was assigned from the command below:
virsh domifaddr --source agent imax-stream-on-demand-platform 2> /dev/null | grep eth0 | awk -F " " '{print $4}' | cut -d"/" -f1
This IP address should be local to the network on which your host machine sits and will be furnished by the DHCP server on your local network.
Note: If you don’t see an IP address immediately, keep retrying the command. It may take 5 minutes for the virtual machine to successfully import and stand up all the services inside.
If you are using NAT, this step does not apply to you.
-
(Network Address Translation - NAT) Update IMAX Stream On-Demand Platform to use the IP address of the host machine.
In order to support NAT, the IMAX Stream On-Demand Platform instance within the virtual machine needs to know about the local IP address of the host machine so it doesn’t ignore requests that come from that address. Run the following from a host machine CLI:
hostIP=$(ip a | grep ' '$(route | grep '^default' | grep -o '[^ ]*$' | head -n 1) | grep inet | awk -F " " '{print $2}' | cut -d"/" -f1)
ssh ubuntu@192.168.122.2 "sed -i 's=nginx.ingress.kubernetes.io/server-alias: \"=nginx.ingress.kubernetes.io/server-alias: \"'$hostIP',=' ~/imax-stream-ingress.yaml"
ssh ubuntu@192.168.122.2 "kubectl apply -f ~/imax-stream-ingress.yaml"
Important: The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu.
Note that when sending commands via SSH to the IMAX Stream On-Demand Platform virtual machine from the host machine, we use the fixed, private IP 192.168.122.2. Using this IP to reach the virtual machine is only valid from within the host machine and not from anywhere else on your local network.
-
(Optional) Make adjustments for outbound HTTP/SOCKS proxies.
If your network requires use of a proxy, please consult the appropriate section below.
Important: The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu. If you are using Network Address Translation (NAT) to access your virtual machine, take care to add the SSH forwarding port to the commands below (i.e. ssh -p 2222 ...).
HTTP Proxy
An HTTP proxy can be configured as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --http-proxy <IP>:<PORT>"
If your HTTP proxy server requires credentials, you can add them as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --http-proxy <IP>:<PORT> --http-proxy-user <USER> --http-proxy-password <PASSWORD>"
where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply).
SOCKS Proxy
If your network requires the use of a SOCKS5 or HTTP forward proxy in order to connect to external hosts, you can configure the IMAX Stream On-Demand Platform virtual machine to use a SOCKS5 proxy server as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --socks5-proxy <IP>:<PORT>"
If your SOCKS5 proxy server requires credentials, you can add them as follows:
ssh ubuntu@<imax-stream-platform> "~/configureInsightsProxy.sh --socks5-proxy <IP>:<PORT> --socks5-proxy-user <USER> --socks5-proxy-password <PASSWORD>"
where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply).
-
Verify that your IMAX Stream On-Demand Platform instance is operational.
Once your virtual machine has successfully started, open a new tab in your browser and load the URL https://<imax-stream-platform>/, where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply).
Important: It can take several minutes for the virtual machine to initialize and load the IMAX Stream On-Demand Platform.
The system provides TLS through the use of self-signed certificates and, as such, your browser will likely flag the URL/site as a potential security risk. Please direct your browser to accept the certificate and proceed to the site.
All users of the IMAX Stream On-Demand Platform will need to accept this certificate in their respective browsers.
The page displayed is a system launch and initial configuration page for your IMAX Stream On-Demand Platform instance. You should see something similar to the image below:
At this point, your system will show that it is UNLICENSED and OFFLINE. The system should become ACTIVE and ONLINE simply by loading the feature license file provided to you by your IMAX representative.
Note: The System health indicates that it is OFFLINE because two critical services (i.e. InsightsClientService and InsightsKafkaService) require a license in order to operate.
Load your license file by expanding the License Information container, clicking on the Upload license link and browsing to your feature license file. Feel free to verify the contents of your license once it is loaded and displayed on the screen. If you collapse the License Information section, you should see that your system is now ACTIVE and ONLINE.
Note: If you notice that your system still shows OFFLINE, please expand the System health container and hover your mouse over the statuses for additional details. If you are unsure how to remedy the issue(s), please contact your IMAX representative for assistance.
-
Click on the View Dashboard button.
From the same initial configuration page for your IMAX Stream On-Demand Platform instance shown above, click on the View Dashboard button. Your browser will create a new tab and load the On-Demand Status page in Insights. You will be required to log in to your Insights account at this time. Once you’ve successfully logged in, you will be prompted with a Host setup dialog that looks similar to the following:
- Type in your <imax-stream-platform> into the dropdown, where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply),
- press the <Tab> key to leave the field and enable the Test connection button, and
- click on the Test connection button.
At this point, you should see your On-Demand Status page.
Note: If you see a message in red under the dropdown that says: Cannot connect to host. See details, this is an indication that the browser instance no longer has the system’s self-signed certificate. Click on the See details link, which will launch the system’s /status endpoint in another tab. Please direct your browser to accept the certificate and proceed to the site. Once the page has been successfully loaded, you can return to the previous tab and click the Test connection button again, at which point you should proceed to the On-Demand Status page.
-
(StreamSmart) Provide the platform with access to your S3 bucket(s).
If you are using StreamSmart, you will need to add secrets to your deployment in order to read from the S3 buckets holding your video assets. Each S3 bucket requires a secret with field values for the bucket name, key id and access key. You can use the /s3AccessSecret endpoint on the Stream On-Demand Platform REST API to create a conformant secret, as shown below:
curl -kvi -X PUT "https://<imax-stream-platform>/api/v1/s3AccessSecret" \
-H "Content-Type: application/json" \
-d '{ "bucketName": "mybucket", "clientId": "AKIAIOSFODNN7EXAMPLE", "clientSecret": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" }'
-
(StreamSmart - Optional) Configure access to your AWS Elemental MediaConvert (EMC) endpoint.
If you wish to use AWS EMC with StreamSmart, you must tell the system about your EMC endpoint. You can use the /configurations endpoint to create a secret configuration for it, as shown below:
curl -kvi -X POST "https://<imax-stream-platform>/api/v1/configurations" \
-H "Content-Type: application/json" \
-d '{ "type": "SECRET", "id": "mediaconvert-config", "config": { "data": { "accesskey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "keyid": "AKIAIOSFODNN7EXAMPLE", "region": "us-east-1", "url": "https://vasjpylpa.mediaconvert.us-east-1.amazonaws.com" } } }'
Important: Please substitute the accesskey, keyid, region and url values above with the details that apply to your specific AWS EMC endpoint. Be careful to ensure that you use the mediaconvert-config name when submitting your (secret) EMC configuration.
Upgrading Versions (~ 30 mins)
Upgrading to a new version of the IMAX Stream On-Demand Platform virtual machine is simply a matter of powering off your existing instance and installing the new one by repeating the Installation steps above for the new file.
-
Power off your existing IMAX Stream On-Demand Platform virtual machine.
virsh shutdown "imax-stream-on-demand-platform"
-
Undefine and, optionally, remove your existing IMAX Stream On-Demand Platform virtual machine.
virsh undefine "imax-stream-on-demand-platform" --remove-all-storage
The --remove-all-storage switch is optional and will completely remove the QEMU image from your host system.
-
Follow Installation instructions for your new IMAX Stream On-Demand Platform virtual machine/QEMU, using the same choices as before.
Upgrading VM Hardware (~ 5 mins)
Allocating more CPU cores and/or memory to your IMAX Stream On-Demand Platform virtual machine will allow you to process more analyses and/or optimizations concurrently. To add hardware resources, simply power off your virtual machine, alter the CPU/RAM to the desired levels and restart.
-
Power off your existing IMAX Stream On-Demand Platform virtual machine.
virsh shutdown "imax-stream-on-demand-platform"
-
Modify the CPU and/or RAM allocated to your IMAX Stream On-Demand Platform virtual machine.
virsh edit "imax-stream-on-demand-platform"
Use the editor to change your values for memory and vcpu.
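For example, to allocate 32G of RAM and 32 vCPUs, the relevant domain XML elements would look something like the following (memory values are in KiB; 33554432 KiB = 32G):
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<vcpu placement='static'>32</vcpu>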
-
Start the IMAX Stream On-Demand Platform virtual machine.
virsh start "imax-stream-on-demand-platform"
Please wait 5 minutes for the virtual machine to successfully start and stand up all the services inside.
-
At this point, your IMAX Stream On-Demand Platform virtual machine should be up and running. Load the URL https://<imax-stream-platform>/ in your browser, where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply). Verify that the system shows as ACTIVE and ONLINE.
Note: It can take several minutes for the virtual machine to initialize and load the IMAX Stream On-Demand Platform.
The system provides TLS through the use of self-signed certificates and, as such, your browser will likely flag the URL/site as a potential security risk. Please direct your browser to accept the certificate and proceed to the site.
As mentioned in the Installation Prerequisites, the IMAX Stream On-Demand Platform virtual machine can access any video assets that are:
- mounted on the host machine in a folder that is then shared with the IMAX Stream On-Demand Platform virtual machine through the virtualization player,
- mountable as a Linux file system within the IMAX Stream On-Demand Platform virtual machine (you can SSH into the instance and install drivers as needed),
- available via a Network File System (NFS),
- and/or available via AWS S3.
Shared Host Folders (OVA, VirtualBox only)
If you shared one or more folders on the host machine with the IMAX Stream On-Demand Platform virtual machine using VirtualBox, each folder will be automatically mounted as a subfolder under /media and named according to the following pattern: sf_<share_name>, where sf stands for shared folder and <share_name> is the name you chose for your share. To access assets in these folders within the IMAX Stream On-Demand Platform, you will want to use the PVC storage type and the PVC name virtual-box-shares, which will put you at the root of /media, where you can then build the remainder of the path to your asset.
For example, if you shared the folder /mnt/videos from the host machine as videos using the following command:
vboxmanage sharedfolder add "imax-stream_3.1.0_05-27-2024_14-22" --name videos --hostpath /mnt/videos --readonly --automount
your assets would be mounted in the virtual machine under /media/sf_videos and you would use the virtual-box-shares PVC along with the remainder of the path from the /media folder, as shown in the following JSON snippet:
{
"name": "rocket_launch.mp4",
"storageLocation": {
"type": "PVC",
"name": "virtual-box-shares"
},
"path": "sf_videos/royalty_free/rocket_launch/source"
}
The JSON snippet above illustrates accessing the IMAX Stream On-Demand Platform using the REST API. However, the approach and values above apply equally to the Insights Web UI.
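As a rough, non-authoritative sketch only, such a storage location descriptor would be embedded in the asset portion of a request to the /api/v1/analyses endpoint mentioned earlier; the "content" wrapper and field placement shown here are assumptions, and the authoritative request schema is documented in the REST API reference:
curl -k -X POST "https://<imax-stream-platform>/api/v1/analyses" \
-H "Content-Type: application/json" \
-d '{
  "content": {
    "name": "rocket_launch.mp4",
    "storageLocation": { "type": "PVC", "name": "virtual-box-shares" },
    "path": "sf_videos/royalty_free/rocket_launch/source"
  }
}'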
Mounted File System in Virtual Machine
The IMAX Stream On-Demand Platform provides a PVC called local-media which can be used to access any folder in the virtual machine. As such, you can SSH into the virtual machine and mount any file system for which drivers/support are available for Ubuntu 22.04. The local-media PVC is configured to resolve to the root of the virtual machine (i.e. /) and, as such, the paths you supply must be absolute, as demonstrated in the JSON snippet below:
{
"name": "rocket_launch.mp4",
"storageLocation": {
"type": "PVC",
"name": "local-media"
},
"path": "/media/sf_videos/royalty_free/rocket_launch/source"
}
The JSON snippet above illustrates accessing the IMAX Stream On-Demand Platform using the REST API. However, the approach and values above apply equally to the Insights Web UI.
Any mounted file system or folder in the virtual machine that you wish to access using the local-media PVC must support read-only access by the ubuntu user and group (i.e. uid=1000(ubuntu), gid=1000(ubuntu)).
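For instance, assuming a hypothetical second disk exposed to the virtual machine as /dev/vdb1, you could mount it read-only inside the instance and confirm that the ubuntu user can list it:
ssh ubuntu@<imax-stream-platform>
sudo mkdir -p /mnt/assets
sudo mount -o ro /dev/vdb1 /mnt/assets
ls -l /mnt/assets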
NFS
Inside the IMAX Stream On-Demand Platform virtual machine is a small script that facilitates mounting an existing NFS server and exported folder. From a CLI, you can execute the following:
ssh ubuntu@<imax-stream-platform> "~/loadNFS.sh -s <NFS_SERVER> -p <READ_ONLY_FOLDER> -n <PVC_NAME>"
where:
<imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply),
<NFS_SERVER> is the NFS server host/IP (e.g. nas.imax.lan),
<READ_ONLY_FOLDER> is the read-only folder path (e.g. /Public) and
<PVC_NAME> is the prefix for the Kubernetes Persistent Volume Claim (PVC) name to use (e.g. video-files).
The NFS folder path must be accessible without needing any specific user/group permissions and/or credentials.
Before you run the command above, it is important to make sure that you have the exported NFS folder paths correctly specified and that they have been configured to allow access from the IMAX Stream On-Demand Platform virtual machine. On your Linux host machine, you can run the following command to see the list of exported NFS folders (assuming you have NFS utils installed):
showmount -e <NFS_SERVER>
Export list for <NFS_SERVER>:
/mnt/temp_hdd3/videos 172.31.0.0/16
In the example above, /mnt/temp_hdd3/videos is the only exported folder path and it is available to any machine on the 172.31 network. Assuming that is correct for your environment (i.e. your IMAX Stream On-Demand Platform virtual machine has an IP on the 172.31 network), then you would run the following command to create a PVC for the NFS mount:
ssh ubuntu@<imax-stream-platform> "~/loadNFS.sh -s <NFS_SERVER> -p /mnt/temp_hdd3/videos -n video-files"
The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu. If you are using Network Address Translation (NAT) to access your virtual machine, take care to add the SSH forwarding port to the command above (i.e. ssh -p 2222 ...).
Consider the following example:
ssh ubuntu@<imax-stream-platform> "~/loadNFS.sh -s nas.imax.lan -p /Public -n video-files"
where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance (adjusting for any port forwarding that may apply).
The script creates a Kubernetes NFS persistent volume (PV) called video-files-pv, a corresponding persistent volume claim (PVC) called video-files-pvc and a temporary testing pod that mounts the associated PVC and lists the contents at the root to ensure that the configuration is correct.
If successful, you should see output like the following:
Creating NFS PV, PVC and testing pod for /Public on nas.imax.lan...
persistentvolume/video-files-pv created
persistentvolumeclaim/video-files-pvc created
pod/video-files-tester created
Checking for NFS PVC to be BOUND...
Checking for NFS tester pod to be RUNNING...
Checking for NFS tester pod to be RUNNING...
Mounted! Video files listing:
videoFolder1
videoFolder2
.
.
.
pod "video-files-tester" deleted
The name of the Kubernetes PVC (i.e. video-files-pvc) is important and you should take note of it, as it will be required when submitting analyses using assets from the folder. You can use the script above to mount as many NFS folders as you like, so long as you take care to ensure that each one has a unique PVC name.
AWS S3
You may also access AWS S3 buckets from your IMAX Stream On-Demand Platform instance, although if most or all of your assets are in AWS S3, it is strongly recommended that you deploy using AWS EC2 (AMI). You will need to add secrets to your deployment in order to read from the S3 buckets holding your video assets.
Each S3 bucket requires a secret with field values for the bucket name, key id and access key. You can use the /s3AccessSecret endpoint on the Stream On-Demand Platform REST API to create a conformant secret, as shown below:
curl -kvi -X PUT "https://<ec2_public_dns>/api/v1/s3AccessSecret" \
-H "Content-Type: application/json" \
-d '{
"bucketName": "mybucket",
"clientId": "AKIAIOSFODNN7EXAMPLE",
"clientSecret": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}'
Once the IMAX Stream On-Demand Platform instance is running, you are ready to submit new analyses and/or optimizations and inspect the results.
For the purposes of validating a new deployment or upgrade, you are encouraged to use the Insights Web UI to submit a new analysis or submit a new optimization, depending on whether you’re using StreamAware or StreamSmart.
If you encounter trouble or have any questions about installing/launching your IMAX Stream On-Demand Platform instance, please create a support ticket using the IMAX Help Center or feel free to reach out to your IMAX representative directly.
Enabling the Kubernetes Dashboard
For fixed scale deployments of the IMAX Stream On-Demand Platform in AWS, you can optionally (and temporarily) enable the Kubernetes Dashboard. The dashboard provides a Web UI for interacting with Kubernetes which can be especially useful in troubleshooting sessions with IMAX support. To enable the Kubernetes Dashboard, please follow the steps below:
-
Deploy and configure the Kubernetes Dashboard
From a CLI, run the following command:
ssh ubuntu@<imax-stream-platform> "~/startKubernetesDashboard.sh"
where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance.
Important: The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu. If you are using Network Address Translation (NAT) to access your virtual machine, take care to add the SSH forwarding port to the command above (i.e. ssh -p 2222 ...).
You should see output similar to the following:
Starting and configuring the Kubernetes Dashboard...
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
secret "kubernetes-dashboard-certs" deleted
secret/kubernetes-dashboard-certs created
deployment.apps/kubernetes-dashboard patched
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
Deploying Kubernetes Dashboard ingress and cluster-admin role binding...
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
ingress.networking.k8s.io/dashboard-ingress created
Creating Kubernetes Dashboard token...
secret/kubernetes-dashboard-token created
Kubernetes Dashboard login token:
yJhbGciOiJSUzI1NiIsImtpZCI6ImdDaldoR3E4bEowN1JmOGpwM0FLQ2pDVjhJMGNNMGxVRlpiMnlZcjVuNHcifQ.eyJpc3MiOi
JrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm
5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2
hib2FyZC10b2tlbi02YzJnYyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdW
Jlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjU2Mj
X2SQBwGnXaRap89koCipxAJarjYZRCJyOLEbXhdzf-6oHxLySXX3S5FwRvzaAbFyE5wdhweYJaHsrhwQ
Finished!
-
Login to the Kubernetes Dashboard using the login token.
At this point, your Kubernetes Dashboard should be accessible from a browser at the URL:
https://<imax-stream-platform>/dashboard
Important: If you are using Network Address Translation (NAT) to access your IMAX Stream On-Demand Platform virtual machine, take care to add the HTTPS/443 forwarding port to the URL above.
Use the Kubernetes Dashboard login token in the command output from the step above to log in to the Kubernetes Dashboard.
Disabling the Kubernetes Dashboard
Although access to the Kubernetes Dashboard is secured through the use of a private token, it is generally recommended that, after you have completed your troubleshooting effort, you disable the Kubernetes Dashboard by removing it from the deployment.
From a CLI run the following command:
ssh ubuntu@<imax-stream-platform> "~/stopKubernetesDashboard.sh"
where <imax-stream-platform> is the IP address/FQDN of your IMAX Stream On-Demand Platform virtual machine instance.
The IMAX Stream On-Demand Platform virtual machine provides SSH access using the following credentials: user: ubuntu, password: ubuntu. If you are using Network Address Translation (NAT) to access your virtual machine, take care to add the SSH forwarding port to the command above (i.e. ssh -p 2222 ...).
You should see output similar to the following:
Stopping the Kubernetes Dashboard...
namespace "kubernetes-dashboard" deleted
serviceaccount "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted
secret "kubernetes-dashboard-certs" deleted
secret "kubernetes-dashboard-csrf" deleted
secret "kubernetes-dashboard-key-holder" deleted
configmap "kubernetes-dashboard-settings" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
deployment.apps "kubernetes-dashboard" deleted
service "dashboard-metrics-scraper" deleted
deployment.apps "dashboard-metrics-scraper" deleted
Removing Kubernetes Dashboard ingress and cluster-admin role binding...
Finished!
These deployments are for customers who are ready to deploy the product into their production workflow, having leveraged the Stream On-Demand Platform REST API to achieve the desired level of integration. The IMAX Stream On-Demand Platform is installed in such a manner as to take full advantage of all features expected of a production system (i.e. high availability, elastic scalability).
The following deployment guide identifies the actions required to deploy the IMAX Stream On-Demand Platform as a managed service within a customer’s AWS account. As a managed service, IMAX is responsible for deploying, managing, monitoring and upgrading the IMAX Stream On-Demand Platform, and the customer is responsible for providing the AWS account that will act as the infrastructure to support the system. An architecture diagram of the AWS managed service is available below:
To facilitate a smooth deployment experience, IMAX has created instructions that require a minimum of manual steps by relying on a curated set of CloudFormation templates to add and configure the necessary AWS resources and services. As you will read below, to execute these templates, you create a CloudFormation stack for a given template, supply the required template parameters and then execute the stack. These tasks can be done using either the AWS web console or the CLI. The instructions herein walk you through the process using the AWS web console, supplemented where useful with screen captures.
It is strongly recommended that someone with at least a basic understanding of the following concepts and AWS services perform the installation:
- CloudFormation (creating and configuring stacks),
- IAM (policy, roles),
- basic networking (Route 53, CIDR blocks, hostnames, DNS A and CNAME records) and
- Amazon S3 buckets.
-
Create your IMAX Insights account.
If you have not done so already, create your account in IMAX Insights by sending your preferred email address to your IMAX representative using Slack/email or the IMAX Help Center. Feel free at this time to pass along the email addresses of the users you expect to need access to IMAX Stream On-Demand Platform. Each user will be registered and receive a welcome email with instructions.
-
Login to the AWS console and pick your target region.
Important: It is strongly recommended that you use an AWS user/role that has the AdministratorAccess policy so that you have all of the permissions required to execute the CloudFormation scripts in later steps.
Use the AWS region selector in the web console to select the target region for the IMAX Stream On-Demand Platform deployment.
Important: You are required to deploy the system in the same region as the S3 bucket(s) holding your video assets. This restriction has the advantage of avoiding any AWS cross-region data transfer costs.
-
Create the stream-on-demand-platform-deployment-prerequisites CloudFormation stack
From the Search field beside the Services button in the menu bar:
- launch the CloudFormation service,
- click the Create stack button to create a new stack, and
- select the Choose an existing template and Amazon S3 URL options.
The IMAX CloudFormation template for the deployment prerequisites can be found at the S3 URL:
https://imax-sct-public.s3.amazonaws.com/on-demand/cf-templates/stream-on-demand-platform-deployment-prerequisites.yaml
This CloudFormation template will establish the resources required to successfully deploy the IMAX Stream On-Demand Platform cluster in your AWS account. Specifically, the template creates the following resources:
- IAM policies
- stream-on-demand-platform-policy (used to create/manage EKS clusters, load balancers, auto-scalers, etc.; minimum set of permissions)
- AmazonEKS_EFS_CSI_Driver_Policy (used to create/manage EFS file systems; minimum set of permissions)
- IAM role: stream-on-demand-platform-role (used for creating/managing the IMAX Stream On-Demand Platform cluster)
- KMS encryption keys (for the encryption/decryption of secrets and filesystems with the IMAX Stream On-Demand Platform cluster)
- TLS certificate (for encrypted access to the IMAX Stream On-Demand Platform cluster)
Please refer to the deployment architecture for more details.
-
Specify stack details
-
Name the stack stream-on-demand-platform-deployment-prerequisites.
-
The IMAX Stream On-Demand Platform cluster hostname parameter is the hostname for your IMAX Stream On-Demand Platform cluster.
The hostname should be a fully-qualified domain name (FQDN) that you can use to directly access your IMAX Stream On-Demand Platform cluster. Typically, the hostname will be a subdomain of a parent domain that your company already owns and may even be managed as a hosted zone within AWS Route 53.
CompanyX, for example, might own companyx.com and thus a logical hostname for the IMAX Stream On-Demand Platform cluster may be stream-on-demand-platform.companyx.com. Regardless of your choice of hostname, in later steps, you will need permission to add DNS A and CNAME records to the parent domain so traffic can be routed to the IMAX Stream On-Demand Platform cluster.
-
The IMAX Stream On-Demand Platform cluster access (CIDR blocks) parameter is the comma-delimited list of IPs that are allowed access to your cluster.
In order to secure access to your IMAX Stream On-Demand Platform cluster, you are encouraged to identify CIDR blocks that encompass the full range of IP addresses that are allowed access to the system.
-
The IMAX Stream On-Demand Platform cluster availability zone (AZ1/AZ2) parameters are the availability zones to use for the cluster deployment.
The IMAX Stream On-Demand Platform cluster is deployed across multiple availability zones in order to support higher availability. If you don’t have or specify valid preferences here, IMAX will select two on your behalf.
-
The IMAX Stream On-Demand Platform AWS IAM role ARN parameter is provided to you by your IMAX representative through Slack/e-mail or the IMAX Help Center.
This value represents the Amazon resource name (ARN) for the IMAX IAM role that you agree to grant restricted access to your account for the purposes of deploying, configuring, managing and monitoring your IMAX Stream On-Demand Platform cluster. This role is created by IMAX specifically for your managed instance and access to it is tightly controlled and monitored.
-
-
Configure stack options.
Feel free to add a tag here to mark all resources that are created by the CloudFormation script (i.e. stream-on-demand-platform).
You should be able to accept all the default options, but pay particular attention to the IAM role. If the AWS user/role that you are using to build and execute your CloudFormation stack does not have administrative privileges (i.e. the AdministratorAccess policy) in your AWS account, then you must either grant them to your user/role or pick an IAM role name from the dropdown illustrated below that does have the AdministratorAccess policy.
-
Review and submit stack.
The only work to do on the review page is to select the checkbox that acknowledges that the CloudFormation script will be creating IAM resources, as shown below:
-
Create a CNAME record for your TLS certificate.
Once you submit your stack, the AWS console will show you the stack progress as a table of events. At the point where the script creates your TLS certificate (i.e. StreamOnDemandPlatformCertificate), the script will pause and output a DNS record in the Status reason column that looks something like the following:
Content of DNS Record is: {Name: _3ae0bc74e5128840e8c895b5aa3e28d8.vodmonitor.companyx.com.,Type: CNAME,Value: _bddcde8107a82addacbcb8b10dd9cc69.wrpxpmnjfs.acm-validations.aws.}
Important: At this point, you need to create a CNAME record in your hostname’s parent domain using the name and value displayed.
The stack will remain paused until this record has been successfully added and ownership of your chosen hostname can be validated using the TLS certificate.
Please note that it can take a few minutes after adding the CNAME record for it to propagate and be recognized.
If you take too long to create the CNAME record, the stack may time out, in which case you will have to delete the stack and start again from step 3.
If you see any errors in the stack progress, feel free to contact your IMAX representative if you are unsure how to determine and/or fix the root cause. Errors when running the stack can often be attributed to a lack of permissions to perform the actions in the template. Please refer to step 5 above.
-
Contact your IMAX representative upon successful completion.
Once the stack has successfully completed, please inform your IMAX representative that the installation prerequisites have been completed, preferably using the IMAX Help Center. At this point, IMAX has what it needs to install the IMAX Stream On-Demand Platform cluster as a managed service in your account, and you should be notified via your support ticket (or Slack/email) as to the target date for installation completion. If you’re interested, please refer to the Managed Service Installation Details section below for some high-level information on what AWS resources and services are created as part of the installation.
Managed Service Installation Details
The following is a list of the AWS resources and services that are created as part of installing the IMAX Stream On-Demand Platform cluster. Please refer to the deployment architecture diagram for a visual representation:
-
EKS cluster with at least 2 dynamically scalable node groups:
-
stream-platform-control-nodes - This node group hosts the control plane for your IMAX Stream On-Demand Platform cluster and supports the REST APIs and various system services for communicating with Kubernetes, Amazon S3 and the IMAX Insights cloud data platform. The number of nodes elastically scales (i.e. grows and shrinks) to meet demand and both the minimum and preferred node count here is 2 while the maximum is 10. Most usages of the system would rarely require more than 2 nodes running concurrently. By default, each node created in this group is a c5.xlarge on-demand EC2 instance and, due to the minimum count of 2, you can be guaranteed that at least two of the aforementioned EC2 instances will be running at all times (24/7/365). Since at least 2 nodes in this group should be available at all times, it is not recommended to use spot instances for this group.
-
stream-platform-data-nodes - This node group hosts the data plane for your IMAX Stream On-Demand Platform cluster and supports the pods/processes used for the probing, analyzing, encoding, and optimizing of video assets and the streaming of their scores and metadata to IMAX Insights. The number of nodes elastically scales (i.e. grows and shrinks) to meet demand; the minimum and preferred node counts here are both 0, while the maximum defaults to 30. Each node created in this group is a c5.4xlarge EC2 instance, but this can be increased if desired. If your organization uses AWS reserved instance pricing or has a savings plan that you would like to use instead of spot instances, please inform your IMAX representative. Spot instances can be used here, but be aware that, should a node be reclaimed, the IMAX Stream On-Demand Platform cluster will need to (automatically) restart the affected probe, analysis, encode, or optimization.
-
stream-platform-data-nodes-N - Further node groups can be added to the data plane of your IMAX Stream On-Demand Platform cluster. To optimize EC2 resource usage, specific use cases should be evaluated against various EC2 instance types. Your IMAX representative can help you evaluate the optimal EC2 instance sizing with regards to cost, system throughput, and the number of concurrent jobs being performed. (The node groups actually created for your cluster can be inspected from the CLI; see the sketch after this list.)
-
-
Auto-scaler that controls the elastic scaling within the node groups.
-
Amazon EFS file system to store any frames and IMAX VisionScience maps (quality, banding, color volume difference) when detailed inspection is requested.
-
AWS VPC endpoint for secure outbound connection to the IMAX Insights cloud data platform.
-
Various other AWS resources such as a VPC, Subnets, Elastic IPs, EC2 instances, LoadBalancers and logging support.
Note: All application logs for the IMAX Stream On-Demand Platform cluster are configured to stream to IMAX’s Insights platform, where they can be accessed via an ELK stack by IMAX Support.
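As a quick sanity check after installation, the node groups described above can be listed from the CLI. A minimal sketch, assuming the default cluster name of Stream-On-Demand-Platform:
# List the node groups created for the cluster
aws eks list-nodegroups --cluster-name Stream-On-Demand-Platform

# Inspect the scaling configuration (min/desired/max) of the data plane group
aws eks describe-nodegroup \
  --cluster-name Stream-On-Demand-Platform \
  --nodegroup-name stream-platform-data-nodes \
  --query 'nodegroup.scalingConfig'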
The following sections should be completed only after the IMAX Stream On-Demand Platform cluster has been deployed and you have received such notification from your IMAX representative.
DNS Records
After the IMAX Stream On-Demand Platform cluster has been deployed, your IMAX representative will update your support ticket with the following DNS record information (record name and value will differ from example below):
DNS Record Type | Name | Value |
---|---|---|
A | stream-on-demand-platform.companyx.com | a012c3d440fd94c3c8fabfeb1ead3645-66809327.us-east-1.elb.amazonaws.com. |
The A record above associates the hostname you chose for your IMAX Stream On-Demand Platform cluster with the AWS LoadBalancer that manages external HTTPS access to the system.
- At this point, you will need to add this DNS record to your parent domain (i.e. companyx.com in the example above) using the tooling/processes appropriate for your organization. If you use AWS Route 53 for your parent domain, for example, you would navigate to its associated hosted zone and add the record there (or use the CLI; see the sketch after this list).
- Once you’ve added the DNS A record, visit the Host page in your Insights account. You should see that your license is ACTIVE and your system health is ONLINE. If this is not the case, please check that you have added the DNS record above properly and waited 5-10 minutes for the value to propagate. If your system is still showing problems, please reach out to your IMAX representative.
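Because the record value is a load balancer DNS name rather than an IP address, Route 53 users would typically create it as an alias A record. The following is a minimal sketch using the example values above; the parent-domain hosted zone ID is a placeholder, and the alias target’s hosted zone ID must be the canonical hosted zone of the load balancer for your region (available from the AWS documentation or the describe-load-balancers CLI output):
# Create an alias A record pointing the chosen hostname at the AWS LoadBalancer
# (Z0123456789EXAMPLE is a placeholder for the parent domain's hosted zone ID;
#  Z35SXDOTRQ7X7K is the canonical ELB hosted zone for us-east-1 - verify yours)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "stream-on-demand-platform.companyx.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "a012c3d440fd94c3c8fabfeb1ead3645-66809327.us-east-1.elb.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'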
IMAX Stream On-Demand Platform cluster namespace
Along with the DNS record information above, your IMAX representative will update your support ticket with the IMAX Stream On-Demand Platform cluster namespace. This string value represents the Kubernetes namespace used to hold all the Kubernetes pods for the various software components that comprise the IMAX Stream On-Demand Platform cluster. The value will be named according to the organization-site pattern. CompanyX with a site of Stream On-Demand Platform, for example, would use a namespace of companyx-stream-on-demand-platform. For now, you need only be aware that you will need this namespace value when providing access to the S3 buckets holding your video assets below.
Providing access to video assets in S3
Now that the IMAX Stream On-Demand Platform cluster is operational and reachable, you are ready to provide read-only access to the S3 bucket(s) that contain the video assets you wish to analyze. In order to facilitate a frictionless experience, IMAX has again curated a set of CloudFormation templates to create the AWS IAM policies and roles required to provide this read-only access.
-
Create the stream-on-demand-platform-s3-read-only-role CloudFormation stack.
Using the CloudFormation service:
- click the Create stack button to create a new stack,
- and choose the Template is ready and Amazon S3 URL options.
The IMAX CloudFormation template to create the IAM role for read-only access to your S3 bucket(s) can be found at the S3 URL:
https://imax-sct-public.s3.amazonaws.com/on-demand/cf-templates/stream-on-demand-platform-s3-read-only-role.yaml
This CloudFormation template will create an IAM role (stream-on-demand-platform-s3-read-only-role) that will serve as an aggregation point for all the S3 bucket read-only IAM policies that you will create in subsequent steps.
-
Specify stack details.
- Name the stack stream-on-demand-platform-s3-read-only-role.
- The IMAX Stream On-Demand Platform EKS cluster name parameter is the name of your IMAX Stream On-Demand Platform cluster. The default value of Stream-On-Demand-Platform should always be sufficient here but your IMAX representative will inform you as part of the post-installation handoff if it needs to change.
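If you prefer the CLI over the console, steps 1 and 2 can be approximated with a single create-stack call. This is a sketch only: the parameter key shown (EKSClusterName) is an assumption, so check the template’s Parameters section for the actual key before running it:
# Create the read-only role stack from the IMAX-hosted template
# (the parameter key "EKSClusterName" is assumed - confirm it in the template)
aws cloudformation create-stack \
  --stack-name stream-on-demand-platform-s3-read-only-role \
  --template-url https://imax-sct-public.s3.amazonaws.com/on-demand/cf-templates/stream-on-demand-platform-s3-read-only-role.yaml \
  --parameters ParameterKey=EKSClusterName,ParameterValue=Stream-On-Demand-Platform \
  --capabilities CAPABILITY_NAMED_IAM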
-
Proceed to configure the stack options, review and submit the stack and monitor the stack progress.
Follow steps 5 and 6, respectively, from the Installation Prerequisites section above.
Complete/repeat steps 4, 5 and 6 below for each S3 bucket that you wish to use with the IMAX Stream On-Demand Platform cluster.
-
Create the stream-on-demand-platform-add-s3-bucket CloudFormation stack.
Using the CloudFormation service, click the Create stack button to create a new stack. Choose the Template is ready and Amazon S3 URL options.
The IMAX CloudFormation template for adding an S3 bucket can be found at the S3 URL: https://imax-sct-public.s3.amazonaws.com/on-demand/cf-templates/stream-on-demand-platform-add-s3-bucket.yaml
This CloudFormation template will create an IAM policy that will permit read-only access to the S3 bucket in question. The policy will be attached to the IAM role created above (i.e. stream-on-demand-platform-s3-read-only-role), thereby permitting your EKS cluster to read from the S3 bucket.
-
Specify stack details.
- Name the stack in a way that identifies the specific bucket, such as stream-on-demand-platform-s3-<bucket_name>. For a bucket called imax-videos you might use stream-on-demand-platform-s3-imax-videos.
- The S3 bucket name parameter is the name of your S3 bucket. Use only the name here and not the ARN.
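The same CLI approach sketched earlier applies here as well; the bucket-name parameter key (S3BucketName below) is again an assumption to verify against the template:
# Create the per-bucket policy stack for the example bucket imax-videos
# (the parameter key "S3BucketName" is assumed - confirm it in the template)
aws cloudformation create-stack \
  --stack-name stream-on-demand-platform-s3-imax-videos \
  --template-url https://imax-sct-public.s3.amazonaws.com/on-demand/cf-templates/stream-on-demand-platform-add-s3-bucket.yaml \
  --parameters ParameterKey=S3BucketName,ParameterValue=imax-videos \
  --capabilities CAPABILITY_NAMED_IAM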
-
Proceed to configure the stack options, review and submit the stack and monitor the stack progress.
Follow steps 5 and 6, respectively, from the Installation Prerequisites section above.
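Once the stack for a given bucket has completed, one quick way to confirm that the bucket’s read-only policy is in place is to list the policies on the aggregation role created earlier. Depending on how the template attaches the policy (managed vs. inline), it will show up under one of the following:
# Managed policies attached to the role
aws iam list-attached-role-policies \
  --role-name stream-on-demand-platform-s3-read-only-role

# Inline policies embedded in the role
aws iam list-role-policies \
  --role-name stream-on-demand-platform-s3-read-only-role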
-
Submit a test analysis.
At this point, the IMAX Stream On-Demand Platform is ready for use. Please see Next Steps in order to submit your first analysis and/or optimization.
Exercising explicit control of role-based access
As described in the deployment architecture, we use IAM role-based sharing (i.e. customer-management-role -> stream-on-demand-platform-role) in order to allow IMAX support to create and update the IMAX Stream On-Demand Platform cluster in the customer’s VPC. While the permissions for the stream-on-demand-platform-role are the minimum recommended for successful operation, some customers may want to exercise explicit control over when and for how long this role-based sharing is available (i.e. limit it to periods of creation, upgrade, etc.). To disconnect the role-based sharing between the two VPCs, one can use the AWS Web console (or CLI) to update the trust relationship on the stream-on-demand-platform-role to deny the sts:AssumeRole permission for the IMAX role, as shown in the example below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": {
"AWS": "arn:aws:iam::077383753319:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_companyx-vod-production_fa4ee44e6a19a4c1"
},
"Action": "sts:AssumeRole"
}
]
}
To re-enable the role-based sharing, you would change the "Effect": "Deny" line to read "Effect": "Allow".
In the example above, the role ARN arn:aws:iam::077383753319:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_companyx-vod-production_fa4ee44e6a19a4c1 is used for illustrative purposes only. The ARN for your specific installation will be provided as part of the Installation Prerequisites.
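To make this change from the CLI rather than the Web console, the updated trust policy can be applied with update-assume-role-policy. A minimal sketch, assuming the JSON document above has been saved locally as trust-policy.json:
# Replace the trust relationship on the role with the Deny (or Allow) version
aws iam update-assume-role-policy \
  --role-name stream-on-demand-platform-role \
  --policy-document file://trust-policy.json

# Confirm the change took effect
aws iam get-role --role-name stream-on-demand-platform-role \
  --query 'Role.AssumeRolePolicyDocument'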
Next Steps
Now that the IMAX Stream On-Demand Platform cluster has been successfully deployed, you are ready to submit new analyses and/or optimizations and inspect the results.
You are encouraged to use the Insights Web UI to submit a new analysis or submit a new optimization, depending on whether you’re using StreamAware or StreamSmart.
If you encounter trouble or have any questions about the installation prerequisites or post-installation steps, please create a support ticket using the IMAX Help Center or reach out to your IMAX representative directly.