StreamAware On-Demand is designed for video service providers to streamline quality assurance and control. The software ensures high-quality video by providing complete visibility across file-based workflows, while also enabling automated quality checks to verify that videos meet delivery requirements. It uses the Emmy® award-winning IMAX VisionScience™ technology to provide a single, objective metric for monitoring quality across the entire media supply chain and clear insights into its performance. The software also includes industry-standard and customizable quality checks to automate video, audio, and metadata compliance. The result is complete visibility of quality across file-based workflows and the automation of tasks that typically require the human eye, improving efficiency and reducing the margin for human error so that video consistently meets the highest standards.
To submit one or more analyses for processing, send a POST request to /analyses.
An analysis can be either full-reference or no-reference.
Full-Reference
- Used when you want to validate the performance of your encoder or compare one encoder or settings against another
- Can compare any number of outputs, from a single asset to a full encoding ladder and/or HLS playlist
- Compares each subject asset against a pristine reference to provide a scored pixel-by-pixel comparison
A full-reference analysis requires specifying both reference and subject assets and, during its operation, the On-Demand Analyzer will first make a no-reference pass on the reference asset and then will compare each subject asset against the reference. As such, this endpoint will return an Analysis object for the no-reference analysis of the reference in addition to one for each comparison with a subject asset. Let’s consider the following example as an illustration:
Reference Asset(s): GOT_S2_EP1.mov
Subject Asset(s): GOT_S2_EP1_libx264_1920x1080_50-0.mov, GOT_S2_EP1_libx264_1280x720_50-0.mov, GOT_S2_EP1_libx264_960x540_50-0.mov

Results in 1 no-reference analysis on the reference (testId: 1):
- GOT_S2_EP1.mov

Results in 3 full-reference analyses:
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1920x1080_50-0.mov (testId: 1-1)
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1280x720_50-0.mov (testId: 1-2)
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_960x540_50-0.mov (testId: 1-3)
No-Reference
- Used when you want to validate the quality of a source or reference asset (i.e. source validation)
- Analyzes a single asset in isolation to provide a pixel-by-pixel evaluation capable of detecting and scoring the impact of numerous video anomalies
A no-reference analysis requires specifying one or more subject assets, each of which is analyzed in isolation. The following example illustrates a common no-reference analysis:
Subject Asset(s): GOT_S2_EP1_libx264_1920x1080_50-0.mov, GOT_S2_EP1_libx264_1280x720_50-0.mov, GOT_S2_EP1_libx264_960x540_50-0.mov

Results in 3 no-reference analyses:
- GOT_S2_EP1_libx264_1920x1080_50-0.mov (testId: 1)
- GOT_S2_EP1_libx264_1280x720_50-0.mov (testId: 2)
- GOT_S2_EP1_libx264_960x540_50-0.mov (testId: 3)
For more details on how to configure common requests, please consult the endpoint examples below and the NewAnalysis object that forms the request.
The StreamAware On-Demand Analyzer supports full-reference analyses where the assets do not share the same frame rate, albeit with some restrictions. Please refer to the Cross-frame rate support section in the technical documentation for details.
The StreamAware On-Demand Analyzer supports a variety of video file formats. Please refer to the Supported video formats section in the technical documentation for details.
Request body
The NewAnalysis request body is used to submit any combination of reference and subject assets you wish, enabling everything from ad-hoc no-reference analyses to full-reference encoding ladder comparisons. Please consult the description above, the endpoint example and/or the NewAnalysis object for more details.
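For orientation, a minimal full-reference NewAnalysis body might look like the following sketch; the asset names, paths, and storage details here are illustrative, not prescriptive. Omitting referenceAssets entirely would instead yield a no-reference analysis of each subject asset:
{
"content": {
"title": "Minimal FR submission"
},
"referenceAssets": [
{
"name": "source.mov",
"path": "examples/source",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"subjectAssets": [
{
"name": "encode_1920x1080.mp4",
"path": "examples/outputs",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
]
}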
Responses
The newly created analysis, populated with key attribute values.
The response returned from this endpoint indicates only that an analysis has been successfully submitted for processing. It makes no guarantees that the analysis will execute without error, nor does it indicate anything about the content or nature of the results, if available. To discover these details, consult the Insights REST API.
Examples
Create (submit) a no-reference analysis for an asset with all licensed quality checks enabled
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Simple NR Test with Quality Checks - Big Buck Bunny"
},
"subjectAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"qualityCheckConfig": {
"enabled": true,
"duration": 5,
"skipStart": 10.25,
"skipEnd": 10.25,
"freezeFrame": {
"enabled": true,
"duration": 10
}
}
}
}'
HTTP/1.1 201 Created
Content-Type: application/json
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"qualityCheckConfig": {
"blackFrame": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"colorBarFrame": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"duration": 5,
"enabled": true,
"freezeFrame": {
"duration": 10,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"missingCaptions": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"silence": {
"commonParameters": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
}
},
"skipEnd": 10.25,
"skipStart": 10.25,
"solidColorFrame": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
}
},
"viewingEnvironments": []
},
"id": "286703bc-ad9a-4f05-87e4-ffe0cce188dc",
"subjectAsset": {
"content": {
"title": "Simple NR Test with Quality Checks - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-25T22:04:34.601Z",
"testId": "1"
}
]
}
Create (submit) a full-reference analysis with manual temporal alignment using the Asset startFrame property
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"referenceAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"startFrame": 1
}
],
"subjectAssets": [
{
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"startFrame": 1
},
{
"name": "Big_Buck_Bunny_h264_qp_31.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"startFrame": 1
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"enableTemporalAlignment": false
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableTemporalAlignment": false,
"viewingEnvironments": []
},
"id": "9f7c088b-97c2-4576-9cec-da46a3f6a704",
"subjectAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-03-25T22:13:42.096Z",
"testId": "1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"enableTemporalAlignment": false,
"viewingEnvironments": []
},
"id": "9f7c088b-97c2-4576-9cec-da46a3f6a704",
"referenceAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"subjectAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-03-25T22:13:42.096Z",
"testId": "1-1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"enableTemporalAlignment": false,
"viewingEnvironments": []
},
"id": "9f7c088b-97c2-4576-9cec-da46a3f6a704",
"referenceAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"subjectAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_31.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-03-25T22:13:42.096Z",
"testId": "1-2"
}
]
}
Create (submit) a no-reference analysis for a single raw (.yuv) asset
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "HoneyBee - Raw/YUV"
},
"subjectAssets": [
{
"name": "HoneyBee_3840x2160_120fps_420_10bit_YUV.yuv",
"path": "royalty_free/yuv",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc-2"
},
"rawVideoParameters": {
"resolution": {
"width": 3840,
"height": 2160
},
"fps": 24,
"scanType": "P",
"pixelFormat": "YUV420P"
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": []
},
"id": "7575ea3b-8d6d-4768-9227-b57814fec75f",
"subjectAsset": {
"content": {
"title": "HoneyBee - Raw/YUV"
},
"hdr": false,
"name": "HoneyBee_3840x2160_120fps_420_10bit_YUV.yuv",
"path": "royalty_free/yuv",
"rawVideoParameters": {
"fieldOrder": "TFF",
"fps": 24,
"pixelFormat": "YUV420P",
"resolution": {
"height": 2160,
"width": 3840
},
"scanType": "P"
},
"storageLocation": {
"name": "video-files-pvc-2",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-26T14:41:47.169Z",
"testId": "1"
}
]
}
Create (submit) a full-reference analysis for an HLS asset.
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"subjectAssets": [
{
"name": "Soccer.m3u8",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"path": "Soccer_MWC"
}
],
"referenceAssets": [
{
"name": "Soccer_1min.mp4",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
},
"path": "hlsReferences"
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
Content-Type: application/json
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 1713000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 280000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-2"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 19987000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-3"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 582000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-4"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 9561000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-5"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 3042000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-6"
}
]
}
Create (submit) a no-reference analysis for an IMF asset
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Configure analysis with an IMF asset"
},
"subjectAssets": [
{
"name": "CPL_DTC-Master-SDR-ML5-R1-OV.xml",
"path": "/videos/imf/DTC-Master-SDR-ML5-R1-OV",
"storageLocation": {
"name": "videos",
"type": "PVC"
}
}
]
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"id": "756fb026-f4f4-47d8-ae8f-afd239643a55",
"subjectAsset": {
"content": {
"title": "Configure analysis with an IMF asset"
},
"hdr": false,
"name": "CPL_DTC-Master-SDR-ML5-R1-OV.xml",
"path": "/videos/imf/DTC-Master-SDR-ML5-R1-OV",
"storageLocation": {
"name": "videos",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-26T15:54:27.731Z",
"testId": "1"
}
]
}
Create (submit) a no-reference analysis for an Image Sequence asset
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Image Sequence"
},
"subjectAssets": [
{
"name": "frameIndex%d-test.png",
"path": "compressed-videos/image-sequence/png",
"storageLocation": {
"type": "S3",
"name": "imax-compressed-videos"
},
"imageSequenceParameters": {
"fps": 25
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": []
},
"id": "c096a553-ef1a-441c-ab09-bf56c28e7704",
"subjectAsset": {
"content": {
"title": "Image Sequence"
},
"hdr": false,
"imageSequenceParameters": {
"fps": 25
},
"name": "frameIndex%d-test.png",
"path": "compressed-videos/image-sequence/png",
"storageLocation": {
"name": "imax-compressed-videos",
"type": "S3"
}
},
"submissionTimestamp": "2022-03-26T15:54:27.731Z",
"testId": "1"
}
]
}
Create (submit) a full-reference analysis with score-based quality checks
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"subjectAssets": [
{
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SVS",
"threshold": 60,
"durationSeconds": 2,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SBS",
"threshold": 75,
"durationFrames": 48,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
]
}
}
],
"referenceAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
}
],
"analyzerConfig": {
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
},
"id": "944ade76-645a-4826-b500-3267fb1668f1",
"subjectAsset": {
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-06-30T15:48:40.648Z",
"testId": "1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
},
"id": "944ade76-645a-4826-b500-3267fb1668f1",
"referenceAsset": {
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"subjectAsset": {
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"qualityCheckConfig": {
"scoreChecks": [
{
"durationSeconds": 5,
"metric": "SVS",
"reverseThresholdComparison": false,
"skipEnd": 1.25,
"skipStart": 1.25,
"threshold": 80,
"viewingEnvironmentIndex": 0
},
{
"durationSeconds": 2,
"metric": "SVS",
"reverseThresholdComparison": false,
"skipEnd": 1.25,
"skipStart": 1.25,
"threshold": 60,
"viewingEnvironmentIndex": 0
},
{
"durationFrames": 48,
"metric": "SBS",
"reverseThresholdComparison": false,
"skipEnd": 1.25,
"skipStart": 1.25,
"threshold": 75,
"viewingEnvironmentIndex": 0
}
]
},
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-06-30T15:48:40.648Z",
"testId": "1-1"
}
]
}
Create (submit) a no-reference analysis for a Dolby Vision asset with a metadata sidecar
curl -X POST "https://localhost/api/v1/analyses" \
-H "Content-Type: application/json" \
-d '{
"subjectAssets": [
{
"content": {
"title": "Sparks - Dolby Vision"
},
"name": "20161103_1023_SPARKS_4K_P3_PQ_4000nits_DoVi.mxf",
"sidecars": [
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
],
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "PVC",
"name": "videos"
},
"dynamicRange": "HDR"
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
Create (submit) a full-reference analysis with audio-based quality checks
curl --location --request POST 'https://localhost/api/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "FR Analysis With Audio-Based Quality Checks"
},
"referenceAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_ref.mp4",
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"subjectAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_test.mp4",
"name": "Big_Buck_Bunny_1080p@4000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
Content-Type: application/json
{
"submittedAnalyses": [
{
"id": "04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e",
"content": {
"title": "FR Analysis With Audio-Based Quality Checks"
},
"referenceAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_ref.mp4",
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"subjectAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_test.mp4",
"name": "Big_Buck_Bunny_1080p@4000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
},
"submissionError": "",
"submissionTimestamp": "2018-01-01T14:20:22Z",
"testId": "1-1"
}
]
}
To update an existing analysis, send a PATCH request to /analyses/{id}, where id is the UUID of the analysis to update.
Please see the AnalysisPatchRequest schema to understand the options supported by the analysis update operation.
Path variables
The UUID of the analysis to be updated.
Request body
Responses
The analysis was successfully patched/updated
Examples
Cancelling an analysis
curl -X PATCH "https://localhost/api/v1/analyses/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e" \
-H "Content-Type: application/json" \
-d '{
"status": "CANCELLED"
}'
HTTP/1.1 200 OK
To delete an analysis, send a DELETE request to /analyses/{id}, where id is the UUID of the analysis to delete.
Only analyses that have been previously cancelled or completed can be deleted.
Path variables
The UUID of the analysis to be deleted.
Responses
The analysis deletion request was successfully processed
Examples
Deleting an analysis
curl -X DELETE "https://localhost/api/v1/analyses/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e"
HTTP/1.1 200 OK
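Taken together, the two lifecycle operations above allow an in-flight analysis to be removed: first cancel it, then delete it. A sketch combining the documented PATCH and DELETE calls (the UUID is illustrative):
curl -X PATCH "https://localhost/api/v1/analyses/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e" \
-H "Content-Type: application/json" \
-d '{
"status": "CANCELLED"
}'

curl -X DELETE "https://localhost/api/v1/analyses/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e"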
Creates a frame capture for a given asset.
Request body
Responses
A PNG image that represents the frame capture.
Body
Examples
curl -X POST "https://localhost/api/v1/frames" \
-H "Content-Type: application/json" \
-d '{
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
},
"startFrame": {
"type": "PTS",
"value": 400
},
"additionalFrames": 24
}'
Creates the banding map for a frame of a given asset.
Banding Maps measure color banding presence at a pixel level as viewed by an “expert” on an OLED TV using a no-reference approach. The map is generated as part of one of several steps used in computing an IMAX Banding Score (XBS). The banding map is a binary map with white pixels showing banding presence, and does not reflect pixel-level variations in banding impairment visibility.
Request body
Responses
A PNG image that represents the frame’s banding map.
Body
Examples
curl -X POST "https://localhost/api/v1/bandingMaps" \
-H "Content-Type: application/json" \
-d '{
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.m3u8",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
},
"streamIdentifier": {
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
},
"startFrame": {
"type": "FrameIndex",
"value": 1200
},
"additionalFrames": 24
}'
Creates the quality map for a frame of a given asset.
Quality Maps are gray scale presentations of pixel-level perceptual quality that show the spatial distribution of impairments within a frame, providing the reasoning behind the quality score. Dark pixels indicate impairments relative to the reference file, while areas of lower perceptual importance, such as the region around text, may contain more white pixels. Generally, the darker the image, the lower the score.
Request body
Responses
A PNG image that represents the subjectAsset frame’s quality map.
Body
Examples
curl -X POST "https://localhost/api/v1/qualityMaps" \
-H "Content-Type: application/json" \
-d '{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}'
Creates a color volume difference map for a frame of a given asset.
Color volume difference maps are gray scale maps that illustrate pixel-level color and skin tone deviation with respect to the reference file. Brighter pixels correspond to a higher deviation.
Request body
Examples
A JSON body payload for requesting a color difference map between a subject and reference asset.
{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}
Responses
A PNG image that represents the subjectAsset frame’s color difference map.
Body
Examples
curl -X POST "https://localhost/api/v1/colorDifferenceMaps" \
-H "Content-Type: application/json" \
-d '{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}'
Creates (and caches) all the captures (frames and maps) possible for the supplied asset(s). Use this endpoint if you expect to retrieve more than one type of capture (i.e. frame, banding map, quality map) for a given frame and want the system to pre-fetch and cache these images to reduce wait times on subsequent requests to any of the frame capture endpoints.
Request body
Responses
A PNG image that represents the frame capture.
Body
Examples
Creates all captures for the supplied asset and reference and returns the frame (i.e. content) in the response.
curl -X POST "https://localhost/api/v1/captures" \
-H "Content-Type: application/json" \
-d '{
"frameRequest": {
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
},
"requestedCaptureType": "FRAME"
}'
Creates all captures for the supplied asset and returns the banding map in the response.
curl -X POST "https://localhost/api/v1/captures" \
-H "Content-Type: application/json" \
-d '{
"frameRequest": {
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"additionalFrames": 24
},
"requestedCaptureType": "BANDING_MAP"
}'
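Assuming the full-reference /captures request in the first example above has completed, a subsequent call to any individual capture endpoint for the same frame should be served from the cache. For instance, this /qualityMaps request, whose body matches the frameRequest above, would reuse the pre-fetched images rather than re-processing the assets:
curl -X POST "https://localhost/api/v1/qualityMaps" \
-H "Content-Type: application/json" \
-d '{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}'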
Retrieves the frame and map cache details.
Responses
Examples
curl -X GET "https://localhost/api/v1/cache"
HTTP/1.1 200 OK
Content-Type: application/json
{
"caches": [
{
"cacheId": "533db73-0f9a-4805-9651-c5dcd519dc37",
"numberOfFiles": 15182,
"sizeOfFiles": 1073741824,
"humanReadableSizeOfFiles": "1.0 G"
}
]
}
Clear the cache used to store the frames and maps (banding, quality, color volume difference) that have been previously requested (and cached).
Request parameters
The UTC date and time before which cached frames and/or maps will be deleted.
Responses
Examples
Delete all frames/maps up to a given date-time
curl -X DELETE "https://localhost/api/v1/cache?beforeTimestamp=2022-09-15T17:32:28Z"
HTTP/1.1 200 OK
Delete all frames/maps
curl -X DELETE "https://localhost/api/v1/cache"
HTTP/1.1 200 OK
StreamSmart On-Demand overlays on your existing encoding workflow and uses the most accurate measure of video quality, IMAX VisionScience™, to retain the same visual quality of experience while delivering a significant reduction in delivery costs. IMAX’s unique VisionScience perceptual quality measurement technology “sees” video the way humans do and takes advantage of opportunities to decrease bits in ways that humans will not notice and that other methods cannot match. StreamSmart uses this quality-measurement approach to analyze every frame of a video and optimize it for the best picture quality and compression efficiency, guaranteeing an optimal viewer experience while maximizing bitrate savings, typically 15% beyond what top-of-the-line content-aware encoders already deliver.
To create an optimized encoding, send a POST request to the IMAX StreamSmart™ /optimizations endpoint.
This endpoint currently supports creating optimized encodings for the following encoders:
During optimization, IMAX StreamSmart will produce one or more renditions, each of which represents an encoded video using your chosen encoder and all of your desired encoder settings. Each rendition is optimized such that IMAX StreamSmart produces an encoded version of the video with perceived quality indistinguishable from the un-optimized version, using fewer bits.
FFmpeg
IMAX StreamSmart supports a number of FFmpeg encoding strategies including:
- Single-pass constant rate factor (CRF)
- Single-pass variable bitrate (VBR)
- Multi-pass variable bitrate (VBR)
Please see the examples below and FFmpegConfig for details.
AWS Elemental MediaConvert (EMC)
IMAX StreamSmart supports using the verbatim JSON config from an EMC invocation.
Please see the examples below and EMCConfig for details.
Request body
Responses
Examples
Optimizing an FFmpeg encoding (VBR)
curl -X POST "https://localhost/api/v1/optimizations" \
-H "Content-Type: application/json" \
-d '{
"content": {
"title": "Big Buck Bunny"
},
"input": {
"assetUri": "s3://videos/examples/Big_Buck_Bunny.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -b:v 4500k -maxrate 6250k -bufsize 10000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/encoded_video.mp4"
}
}
]
}
}'
Optimizing an FFmpeg encoding (multi-pass VBR)
curl -X POST "https://localhost/api/v1/optimizations" \
-H "Content-Type: application/json" \
-d '{
"content": {
"title": "Big Buck Bunny"
},
"input": {
"assetUri": "s3://videos/examples/Big_Buck_Bunny.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -passlogfile {TEMP_FILE_1} -profile:v high -preset slow -pass 1 -vcodec libx264 -bf 0 -refs 4 -b:v 4500k -maxrate:v 4500k -bufsize:v 6000k -minrate:v 6000k -x264-params \"rc-lookahead=48:keyint=96:stitchable=1:keyint_min:48\" -copyts -start_at_zero -an -f mp4 /dev/null",
"ffmpeg -i {INPUT_LOCATION} -passlogfile {TEMP_FILE_1} -profile:v high -preset slow -pass 2 -vcodec libx264 -bf 0 -refs 4 -b:v 4500k -maxrate:v 4500k -bufsize:v 6000k -minrate:v 6000k -x264-params \"rc-lookahead=48:keyint=96:stitchable=1:keyint_min:48\" -copyts -start_at_zero -an -f mp4 {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/encoded_video.mp4"
}
}
]
}
}'
Optimizing an FFmpeg encoding (VBR, multiple encodes)
curl -X POST "https://localhost/api/v1/optimizations" \
-H "Content-Type: application/json" \
-d '{
"content": {
"title": "Big Buck Bunny"
},
"input": {
"assetUri": "s3://videos/examples/Big_Buck_Bunny.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -b:v 4500k -maxrate 6250k -bufsize 10000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/output1.mp4"
}
},
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -b:v 4000k -maxrate 6000k -bufsize 9000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/output2.mp4"
}
},
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -b:v 3500k -maxrate 5500k -bufsize 8500k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/output3.mp4"
}
}
]
}
}'
Optimizing an FFmpeg encoding (CRF)
curl -X POST "https://localhost/api/v1/optimizations" \
-H "Content-Type: application/json" \
-d '{
"content": {
"title": "Big Buck Bunny"
},
"input": {
"assetUri": "s3://videos/examples/Big_Buck_Bunny.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 23 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/encoded_video.mp4"
}
}
]
}
}'
Optimizing an FFmpeg encoding (CRF) with additional optimization configuration
curl -X POST "https://localhost/api/v1/optimizations" \
-H "Content-Type: application/json" \
-d '{
"content": {
"title": "Big Buck Bunny"
},
"input": {
"assetUri": "s3://videos/examples/Big_Buck_Bunny.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 23 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/encoded_video.mp4"
},
"optimizationConfig": {
"key1": "value1",
"key2": "value2"
}
}
]
}
}'
Optimizing an EMC encoding
curl -X POST "https://localhost/api/v1/optimizations" \
-H "Content-Type: application/json" \
-d '{
"content": {
"title": "Big Buck Bunny"
},
"encoderConfig": {
"type": "EMCConfig",
"config": {
"JobTemplate": "",
"Queue": "arn:aws:mediaconvert:us-east-1:315835334412:queues/Default",
"UserMetadata": {},
"Role": "arn:aws:iam::315835334412:role/mediaconvert-optimizer",
"Settings": {
"OutputGroups": [
{
"CustomName": "top-profile-encode",
"Name": "CMAF",
"Outputs": [
{
"ContainerSettings": {
"Container": "CMFC"
},
"VideoDescription": {
"Width": 1920,
"ScalingBehavior": "STRETCH_TO_OUTPUT",
"Height": 1080,
"TimecodeInsertion": "DISABLED",
"AntiAlias": "ENABLED",
"Sharpness": 50,
"CodecSettings": {
"Codec": "H_264",
"H264Settings": {
"InterlaceMode": "PROGRESSIVE",
"NumberReferenceFrames": 3,
"Syntax": "DEFAULT",
"Softness": 0,
"GopClosedCadence": 1,
"GopSize": 2,
"Slices": 1,
"GopBReference": "ENABLED",
"HrdBufferSize": 16000000,
"MaxBitrate": 8000000,
"EntropyEncoding": "CABAC",
"RateControlMode": "QVBR",
"QvbrSettings": {
"QvbrQualityLevel": 9
},
"CodecProfile": "HIGH",
"MinIInterval": 0,
"AdaptiveQuantization": "AUTO",
"CodecLevel": "AUTO",
"SceneChangeDetect": "ENABLED",
"QualityTuningLevel": "SINGLE_PASS",
"UnregisteredSeiTimecode": "DISABLED",
"GopSizeUnits": "SECONDS",
"ParControl": "INITIALIZE_FROM_SOURCE",
"NumberBFramesBetweenReferenceFrames": 3,
"RepeatPps": "DISABLED",
"DynamicSubGop": "ADAPTIVE"
}
}
},
"NameModifier": "_8Mbps"
}
],
"OutputGroupSettings": {
"Type": "CMAF_GROUP_SETTINGS",
"CmafGroupSettings": {
"TargetDurationCompatibilityMode": "SPEC_COMPLIANT",
"WriteHlsManifest": "ENABLED",
"WriteDashManifest": "ENABLED",
"SegmentLength": 4,
"Destination": "s3://s3-bucket/destination/path/",
"FragmentLength": 2,
"SegmentControl": "SEGMENTED_FILES",
"WriteSegmentTimelineInRepresentation": "ENABLED",
"ManifestDurationFormat": "FLOATING_POINT",
"StreamInfResolution": "INCLUDE"
}
}
}
],
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT"
}
},
"VideoSelector": {
"ColorSpace": "FOLLOW",
"Rotate": "DEGREE_0",
"AlphaBehavior": "DISCARD"
},
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"TimecodeSource": "ZEROBASED",
"FileInput": "s3://s3-bucket/sources/source.mov"
}
]
},
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_15",
"Priority": 0,
"HopDestinations": []
}
}
}'
To update an existing optimization, send a PATCH request to /optimizations/{id}, where id is the UUID of the optimization to update.
Please see the OptimizationPatchRequest schema to understand the options supported by the optimization update operation.
Path variables
Request body
Examples
Cancelling an optimization
curl -X PATCH "https://localhost/api/v1/optimizations/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e" \
-H "Content-Type: application/json" \
-d '{
"status": "CANCELLED"
}'
HTTP/1.1 200 OK
To delete an optimization, send a DELETE request to /optimizations/{id}, where id is the UUID of the optimization to delete.
Only optimizations that have been previously cancelled or completed can be deleted.
Path variables
Responses
The optimization deletion request was successfully processed
The server cannot find the requested resource.
Examples
Deleting an optimization
curl -X DELETE "https://localhost/api/v1/optimizations/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e"
HTTP/1.1 200 OK
Used to query the status and configure the state of the IMAX Stream On-Demand Platform.
Lists system version information for the Stream On-Demand Platform REST API.
Responses
Examples
curl -X GET "https://localhost/api/v1/version"
HTTP/1.1 200 OK
Content-Type: application/json
{
"version": {
"commitBranch": "stream-ondemand/release/3.1.0",
"commitHash": "facc2ef0a3c8ebc10819dc1218748f8d2cbfafd9",
"commitTime": "2022-05-02T18:58:44Z",
"stamped": "true",
"versionString": "3.1.0-12"
}
}
Fetches the system readiness/status by reporting on the individual readiness of all the services that comprise the system.
Responses
Examples
curl -X GET "https://localhost/api/v1/status"
HTTP/1.1 200 OK
Content-Type: application/json
{
"checks": [
{
"deploymentId": "d5b36ce6-d0e2-4dd2-bcf6-8a893b5fa1ef",
"serviceId": "5307bca3-e556-43c7-9c23-b94dc63c23d9",
"serviceName": "AnalysesService",
"status": "READY"
},
{
"deploymentId": "cc34d238-0a51-4f20-b2e7-2e121b94b414",
"serviceId": "92ec6f09-0c7a-4f5d-b493-32ce30fe2207",
"serviceName": "AnalysisLifecycleService",
"status": "READY"
},
{
"deploymentId": "6781c1c5-7862-4930-b176-152d293f087f",
"serviceId": "dbf2c6bf-5bff-4cbc-b17b-74db30729db5",
"serviceName": "AnalysisValidatorService",
"status": "READY"
},
{
"deploymentId": "6b8c7bd0-996f-441a-ad43-5e31a41af6ad",
"serviceId": "e5224e73-40ce-4251-a325-63bddf0f70ba",
"serviceName": "AnalyzerOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "91421026-6c32-4546-99c2-7b89bfad43e2",
"serviceId": "ef6df8de-1491-4d88-b18e-d0dc5b395758",
"serviceName": "AnalyzerResourceEstimatorService",
"status": "READY"
},
{
"deploymentId": "d950eccf-5b5f-429f-800c-8d8e5138b298",
"serviceId": "b0d82272-ebf7-40b0-adfd-e73f67c8f333",
"serviceName": "AssetBrowsingOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "b270a5ec-5d93-4109-a46f-ffef4b2e254e",
"serviceId": "2ecb90f3-6889-4d68-a5a5-bbe3c05477d8",
"serviceName": "AssetProbeService",
"status": "READY"
},
{
"deploymentId": "4da5f68b-93b2-4deb-b13b-663be9b29180",
"serviceId": "87411208-9278-4170-aa18-21c9fcc22776",
"serviceName": "BandingMapsService",
"status": "READY"
},
{
"deploymentId": "4bfc3ea7-d985-4cec-a8e1-9c49dd1851cc",
"serviceId": "04d0fb8c-f703-4865-9886-3e748915c7ed",
"serviceName": "CacheService",
"status": "READY"
},
{
"deploymentId": "8c36eaba-f337-4234-87b1-8dad67f75f46",
"serviceId": "b73e476a-24e7-4326-b7ce-9d0e655d8c9a",
"serviceName": "CapturesService",
"status": "READY"
},
{
"deploymentId": "1e3d1fcc-01c1-4feb-aba7-d22b966dff8f",
"serviceId": "5c9b03d5-c634-47da-9628-31509686031e",
"serviceName": "ColorDifferenceMapsService",
"status": "READY"
},
{
"deploymentId": "26e4558e-9c3c-4ba0-99e1-2e4a869896f5",
"serviceId": "2f0f6ded-da18-4599-91bb-07c472b20af5",
"serviceName": "CreateAnalysisEndpointHandler",
"status": "READY"
},
{
"deploymentId": "d6b6f1af-1789-4309-8c08-3eac11099b1b",
"serviceId": "8ad5de58-da84-41a2-ab2c-68753aee908d",
"serviceName": "CreateBandingMapsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "cbbe9b94-5cb6-4fb8-a4d6-4b8d726f0b38",
"serviceId": "81692007-e614-4cdd-8c28-875bc65a50ca",
"serviceName": "CreateCapturesEndpointHandler",
"status": "READY"
},
{
"deploymentId": "b774a432-5b6c-48dc-8632-fddc2319825f",
"serviceId": "cdf7403c-4dea-41c0-a43a-9d54268ad8d4",
"serviceName": "CreateColorDifferenceMapsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "04194b43-e5c3-419a-8af4-dbf66f94feaa",
"serviceId": "58db90cc-ee41-4214-81b7-3cfa3c128705",
"serviceName": "CreateConfigurationEndpointHandler",
"status": "READY"
},
{
"deploymentId": "2385c368-2e10-4301-9173-3cad70004243",
"serviceId": "9d8634f4-3456-49aa-9282-f10a5799285d",
"serviceName": "CreateFramesEndpointHandler",
"status": "READY"
},
{
"deploymentId": "7757a6b2-e6d4-4804-81bd-d3d6379e9c22",
"serviceId": "83f96b33-f31d-4416-b159-d5207dd9a7e6",
"serviceName": "CreateQualityMapsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "a34f91a5-9a9d-4944-b9da-90a707140613",
"serviceId": "14a4813f-55e8-4979-bacb-9690448414c8",
"serviceName": "DeleteAnalysisEndpointHandler",
"status": "READY"
},
{
"deploymentId": "1188937f-d30e-4a03-8a2a-8a3e01e63ce7",
"serviceId": "930c7627-2091-497f-bd50-e98d96bf1bbe",
"serviceName": "DeleteCacheEndpointHandler",
"status": "READY"
},
{
"deploymentId": "0c8276e8-64fe-43d9-90dc-b7f67e5dabbf",
"serviceId": "e1bb022e-36c3-46d9-bac1-5c18bc54d5ed",
"serviceName": "FileCacheService",
"status": "READY"
},
{
"deploymentId": "48a75f9a-938c-423d-bbc4-82236bca652e",
"serviceId": "2717ec00-a279-475b-8683-a0845af5fc61",
"serviceName": "FilebeatConfigurationService",
"status": "READY"
},
{
"deploymentId": "e7d52f29-21c1-4f87-8a67-1618e2fe5704",
"serviceId": "5466034b-dac5-4d00-963d-f8632b01f05d",
"serviceName": "FrameServicesOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "0ebf9d4f-3b4b-4a1c-ac3d-c5cf89a0b611",
"serviceId": "1cad1120-5b77-45bd-9253-73d63e811a03",
"serviceName": "FramesService",
"status": "READY"
},
{
"deploymentId": "55b9fde8-a858-430b-b3b9-c4ffa12ac5df",
"serviceId": "dda60962-8518-42ea-b60b-6c82961a5681",
"serviceName": "GetBucketLocationEndpointHandler",
"status": "READY"
},
{
"deploymentId": "0d5e9051-a72d-4b15-ab84-9282c4397092",
"serviceId": "fcff906d-8731-47b4-80c0-02eeb5f9118b",
"serviceName": "GetCacheDetailsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "445f26e3-4c66-4c3b-a173-6e1c954f0e9c",
"serviceId": "f9ae2493-232d-4ba2-8b8a-62a703f10aaa",
"serviceName": "GetConfigurationEndpointHandler",
"status": "READY"
},
{
"deploymentId": "3b81bca6-eae0-4b2d-95e6-acbd00365bbe",
"serviceId": "6edd6831-08a8-4ff1-8591-2f9d95001ec3",
"serviceName": "GetFeatureLicenseEndpointHandler",
"status": "READY"
},
{
"deploymentId": "cc555fad-b1f3-4b8c-b7c9-5d01ef291a8a",
"serviceId": "a9e026b0-ebf3-4809-847e-eb74e16d1dd4",
"serviceName": "GetStatusEndpointHandler",
"status": "READY"
},
{
"deploymentId": "fd72af13-3f0e-4e7c-951e-3cc750df1e82",
"serviceId": "3936d21a-5cba-4b1d-8e11-d72456cc50a0",
"serviceName": "GetVersionEndpointHandler",
"status": "READY"
},
{
"deploymentId": "a950d391-0e2f-4cb6-89f5-14db3d7803fd",
"serviceId": "606235a3-2f81-4d68-beb7-789e6dc03a45",
"serviceName": "HeadBucketEndpointHandler",
"status": "READY"
},
{
"deploymentId": "82378f07-0a0e-4cb1-8c92-cb86decb7b8a",
"serviceId": "56938260-3fad-4141-a692-18ee160e7041",
"serviceName": "HeadObjectEndpointHandler",
"status": "READY"
},
{
"deploymentId": "31471811-93ee-44f6-912e-846d6834b8ca",
"serviceId": "69df8859-4800-4746-9a6f-1d6aab0c131c",
"serviceName": "HlsService",
"status": "READY"
},
{
"deploymentId": "5fcee6aa-e2da-4871-b455-01ff893f5d6e",
"serviceId": "633814b5-d998-4ddc-865b-8c4aaa83229b",
"serviceName": "HttpReverseProxyService",
"status": "READY"
},
{
"deploymentId": "a6a72ea8-5bb7-4733-85e1-3ea4d33f7337",
"serviceId": "02480f35-8e4d-4100-993c-73384a72f5c4",
"serviceName": "InsightsClientService",
"status": "READY"
},
{
"deploymentId": "cf387da5-5ffa-4c12-a37d-ad908f8c8e0d",
"serviceId": "d9137973-5f27-46cb-9c10-f599196d986f",
"serviceName": "InsightsKafkaService",
"status": "READY"
},
{
"deploymentId": "e5499065-bd5c-4d2b-aefe-937fbb250f3c",
"serviceId": "80565b3f-41d0-4978-a29a-f23fd0d062dd",
"serviceName": "JobTimeoutService",
"status": "READY"
},
{
"deploymentId": "b6c85faa-3b5b-49c7-aa00-b7b247aa1d2a",
"serviceId": "f933de93-c7d5-4960-90f9-07bd735b8df7",
"serviceName": "KubernetesConfigurationService",
"status": "READY"
},
{
"deploymentId": "4e34d73d-e226-4e4d-9876-e1dc48f2f43d",
"serviceId": "4671d050-113c-490e-bdd3-74d53edaa3a7",
"serviceName": "KubernetesFeatureLicenseService",
"status": "READY"
},
{
"deploymentId": "5e0fcfa0-2acf-4ecf-a221-ae3a79a760d0",
"serviceId": "89954c88-6a9a-41a5-93d3-7efeeacd0792",
"serviceName": "KubernetesJobJanitorService",
"status": "READY"
},
{
"deploymentId": "0e227dea-dd05-4479-bdb1-b2babf8c5deb",
"serviceId": "0fa1032f-28b1-4d16-ab4e-3c96dab82b23",
"serviceName": "KubernetesJobManagementService",
"status": "READY"
},
{
"deploymentId": "8af50b7a-37cb-4f33-9b9f-cdb304543f35",
"serviceId": "41104967-0699-4536-9e00-2e8d274bef26",
"serviceName": "KubernetesPodStatusService",
"status": "READY"
},
{
"deploymentId": "332634fc-6c5d-4de2-a3b5-9574a4c52a34",
"serviceId": "c8cc9133-4e85-462f-85ff-b2f556abeb9a",
"serviceName": "KubernetesServiceConfigurationProvider",
"status": "READY"
},
{
"deploymentId": "2ad26a6d-f6be-4aa2-add1-875aefcb293d",
"serviceId": "590b8e63-1d7b-4d50-bbe7-1640d04d7181",
"serviceName": "KubernetesSupportService",
"status": "READY"
},
{
"deploymentId": "9a0b7776-307f-48a0-a40f-8106561a7fcc",
"serviceId": "7ecdd934-1984-4ac7-b061-854a9b1624e7",
"serviceName": "ListBucketsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "a916a21a-c594-4663-8496-ff70e2c845e8",
"serviceId": "93604b76-92e4-45cf-875c-e22f983acf07",
"serviceName": "ListObjectsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "2e794590-8ccc-4a77-97e2-b0a1127d4c30",
"serviceId": "b872ea4a-5571-4dbc-9b4b-284dca47145d",
"serviceName": "PatchAnalysisEndpointHandler",
"status": "READY"
},
{
"deploymentId": "d367138c-54d5-4d88-bcce-0d3a7a56b5aa",
"serviceId": "18dd8d46-3cd1-4da8-963e-4257423a583f",
"serviceName": "PatchConfigurationEndpointHandler",
"status": "READY"
},
{
"deploymentId": "d0bb455b-ae10-40bc-bd65-3ba219ec7999",
"serviceId": "a4620745-cb76-4a82-aaa8-259e311099b8",
"serviceName": "PutFeatureLicenseEndpointHandler",
"status": "READY"
},
{
"deploymentId": "dc04dfd7-c864-41b7-a914-f5ff99a4bfb6",
"serviceId": "8f6a2535-9c82-477a-830d-2c97a0c0cf63",
"serviceName": "PutS3AccessSecretEndpointHandler",
"status": "READY"
},
{
"deploymentId": "96edac94-88d7-4712-9b1a-f5da04a170a8",
"serviceId": "e7e30de1-a51f-4ab0-84a2-1c998390b272",
"serviceName": "QualityMapsService",
"status": "READY"
},
{
"deploymentId": "c672fc76-3d97-4eee-ab24-7b60bba3ebcb",
"serviceId": "bc264ea9-bf26-4aa4-8faa-3ba21cf43676",
"serviceName": "ResourceEstimateHandlerService",
"status": "READY"
},
{
"deploymentId": "a32a580f-5197-49c0-af16-802888788f7c",
"serviceId": "d8dd8378-bc72-4abd-819d-56a8b23e7532",
"serviceName": "S3Service",
"status": "READY"
},
{
"deploymentId": "1a0e0b1e-5fe9-4158-9da7-566adf6d8c23",
"serviceId": "d9da3ec5-290a-4100-a361-6e7ca0800a99",
"serviceName": "StreamSmartArgoService",
"status": "READY"
},
{
"deploymentId": "a9d4d7c6-0032-4b1c-a898-d700de05c9ff",
"serviceId": "974975a0-f6c3-44c2-95d5-1ecda37eac0e",
"serviceName": "StreamSmartOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "ffcc806f-5b38-4ccd-bb9d-58204b4eabeb",
"serviceId": "cddf9605-7bad-4d61-996a-afcd6c5ba45f",
"serviceName": "StreamSmartWorkflowControllerService",
"status": "READY"
},
{
"deploymentId": "5eba5102-1907-4b27-a9f8-24378377d04b",
"serviceId": "61da4490-8ca4-4b5e-87c6-b55f0068b9d7",
"serviceName": "SystemService",
"status": "READY"
},
{
"deploymentId": "ba0b8092-ee76-4288-93b5-5f80399ed6d1",
"serviceId": "c9113d97-0b98-4202-a4ea-7f5f31a2c7d9",
"serviceName": "SystemServicesOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "1804adc5-fec6-42ff-914a-dc6765383bd1",
"serviceId": "bfde8742-b15c-4729-8678-7725d5b4e643",
"serviceName": "VertxEventBusProxyService",
"status": "READY"
}
],
"outcome": "READY"
}
Retrieves the current feature license.
Responses
Indicates that the feature license was successfully retrieved.
Body
Examples
curl -X GET "https://localhost/api/v1/featureLicense"
HTTP/1.1 200 OK
Content-Type: text/plain
35a82e6900b6d5468073fbd0204e7b07546ec30f5e78f81af9fe4c95c8c88316
{
"bandingDetection": true,
"bandingMaps": true,
"color": true,
"colorDifferenceMaps": true,
"contentAttributes": true,
"contentComplexity": true,
"expiry": "2022-12-31",
"frameCaptures": true,
"hdrSupport": true,
"insights_analysis_url": "",
"insights_cli_overrides": false,
"insights_frame_scores": true,
"insights_password": "test-password",
"insights_qc_config_url": "",
"insights_scene_definitions_url": "",
"insights_servers": [],
"insights_username": "test-user",
"organization": "IMAX",
"otherVideoQualityMetrics": true,
"qualityChecks": {
"blackFrame": true,
"colorBarFrame": true,
"freezeFrame": true,
"missingCaptions": true,
"scoreChecks": true,
"silence": true,
"solidColorFrame": true
},
"qualityMaps": true,
"site": "Test Site"
}
Applies a product feature license.
Request body
Responses
Indicates that the feature license was successfully applied.
Examples
curl -X PUT "https://localhost/api/v1/featureLicense" \
-H "Content-Type: text/plain" \
-d '35a82e6900b6d5468073fbd0204e7b07546ec30f5e78f81af9fe4c95c8c88316
{
"bandingDetection": true,
"bandingMaps": true,
"color": true,
"colorDifferenceMaps": true,
"contentAttributes": true,
"contentComplexity": true,
"expiry": "2022-12-31",
"frameCaptures": true,
"hdrSupport": true,
"insights_analysis_url": "",
"insights_cli_overrides": false,
"insights_frame_scores": true,
"insights_password": "test-password",
"insights_qc_config_url": "",
"insights_scene_definitions_url": "",
"insights_servers": [],
"insights_username": "test-user",
"organization": "IMAX",
"otherVideoQualityMetrics": true,
"qualityChecks": {
"blackFrame": true,
"colorBarFrame": true,
"freezeFrame": true,
"missingCaptions": true,
"scoreChecks": true,
"silence": true,
"solidColorFrame": true
},
"qualityMaps": true,
"site": "Test Site"
}'
Add an AWS IAM access key granting read permissions to an Amazon S3 bucket for use with the system.
Request body
Responses
Indicates that the secret was added.
Examples
Add credentials to access the Amazon S3 bucket named “mybucket”.
curl -X PUT "https://localhost/api/v1/s3AccessSecret" \
-H "Content-Type: application/json" \
-d '{
"bucketName": "mybucket",
"accessKey": {
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"accessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
}'
Creates a system or service configuration identified by its unique type
and id
.
Be careful to use this endpoint only at the direction of deployment/configuration instructions or at the request of your IMAX representative.
You cannot create configurations for internal services (i.e. where type is SERVICE).
Request body
Captures the type of the configuration.
You cannot create configurations for internal services (i.e. where type is SERVICE).
The unique id of the system component or service configuration you wish to create.
The JSON configuration content.
{
"data": {
"extraOption1":"extraOptionValue1",
"extraOption2":"extraOptionValue2"
}
}
Responses
The system or service configuration was successfully created.
Examples
Create extra configuration options for the On-Demand Analyzer.
curl -X POST "http://localhost/api/v1/configurations" \
-H "Content-Type: application/json" \
-d '{
"type": "NONSENSITIVE",
"id": "analyzer-extra-options",
"config": {
"data": {
"extraOption1":"extraOptionValue1",
"extraOption2":"extraOptionValue2"
}
}
}'
{id}
Fetches the system or service configuration identified by its unique id
.
Path variables
The unique id of the system component or service whose configuration you wish to retrieve.
Responses
The JSON configuration for the requested system component or service.
Body
Examples
System configuration for extra options for the On-Demand Analyzer.
{
"data": {
"extraOption1": "extraOptionValue1",
"extraOption2": "extraOptionValue2"
}
}
Examples
Fetch the system configuration for the extra options on the On-Demand Analyzer.
curl -X GET "http://localhost/api/v1/configurations/analyzer-extra-options"
HTTP/1.1 200 OK
Content-Type: application/json
{
"data": {
"extraOption1": "extraOptionValue1",
"extraOption2": "extraOptionValue2"
}
}
{type}
.{id}
Applies an update to a system or service configuration identified by its unique type
and id
.
Be careful to use this endpoint only at the direction of deployment/configuration instructions or at the request of your IMAX representative.
Path variables
Captures the type of the configuration.
The unique id of the system component or service whose configuration you wish to update.
Request body
Examples
System configuration for the AssetProbeService.
{
"extraOption1": "extraOptionValue1",
"extraOption2": "extraOptionValue2"
}
Responses
The system or service configuration was successfully updated.
Examples
Update the configuration for the AssetProbeService.
curl -X PATCH "http://localhost/api/v1/configurations/SERVICE.AssetProbeService" \
-H "Content-Type: application/json" \
-d '{
"extraOption1": "extraOptionValue1",
"extraOption2": "extraOptionValue2"
}'
Configuration options for specifying the conditions under which segments of a content type are considered active. By default, all content types will be considered active when at least one audio channel is active throughout that segment's duration.
{
"motionVideoSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "SILENCE"
},
"blackFrameSegments": {
"canBeActive": false
},
"colorBarFrameSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "ALL_CHANNELS_ACTIVE"
},
"freezeFrameSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "ANY_CHANNEL_ACTIVE"
},
"solidColorFrameSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "ANY_CHANNEL_ACTIVE"
}
}
Active segment definition for motion video segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "SILENCE"
}
Active segment definition for black frame segments
{
"canBeActive": false
}
Active segment definition for color bar frame segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "ALL_CHANNELS_ACTIVE"
}
Active segment definition for freeze frame segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "ANY_CHANNEL_ACTIVE"
}
Active segment definition for solid color frame segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "ALL_CHANNELS_ACTIVE"
}
Captures an analysis of a video asset from which frame scores are produced using the On-Demand Analyzer. A no-reference (NR) analysis is performed on a single video asset only and its results can be used to judge the quality of the asset in isolation. A full-reference (FR) analysis is performed using two video assets: a reference asset against which you will compare a subject asset. Generally, the reference asset is the higher quality video and the subject asset is the resulting video having gone through some kind of transcoding, compression, or general transformation. A full-reference analysis will give frame scores on the absolute quality of each asset as well as the comparative quality, allowing one to ascertain the impact of the transformation process on the overall quality.
Analyses are used as the payload in an AnalysisResponse and contain the attributes necessary to lookup the associated frame score results. For a successfully submitted analysis, the id
will represent a universally unique id (UUID) that can be used as a key to lookup the frame score results. Additionally the submissionTimestamp
will indicate the time at which the analysis was successfully submitted.
For an analysis that fails to be submitted, the id
and submissionTimestamp
attributes will be missing and the submissionError
attribute will contain details indicating the nature of the error. If you are unsure how to interpret the error or how to work around it, please contact your IMAX representative.
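By way of illustration, a failed submission might be represented as follows; the error text below is invented for illustration and the exact wording will vary:
{
  "testId": "1",
  "submissionError": "Unable to access the specified subject asset"
}
A successful submission, by contrast, carries the id and submissionTimestamp attributes, as in the complete example below: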
{
"id": "04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e",
"referenceAsset": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/ref/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"hdr": true,
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"viewingEnvironmentIndex": 1,
"durationSeconds": 5,
"durationFrames": 1,
"skipStart": 1.25,
"skipEnd": 1.25
}
]
}
},
"subjectAsset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"hdr": true,
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"viewingEnvironmentIndex": 1,
"durationSeconds": 5,
"durationFrames": 1,
"skipStart": 1.25,
"skipEnd": 1.25
}
]
}
},
"submissionTimestamp": "2018-01-01T14:20:22Z",
"testId": "1",
"content": {
"title": "Big Buck Bunny"
},
"analyzerConfig": {
"enableBandingDetection": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
},
"viewerType": "TYPICAL"
}
]
}
}
The UUID for the analysis.
A description of the analysis which can be used for reference, categorization and search/filtering.
The reference asset against which you will compare a subject asset. This attribute is ONLY used for full-reference (FR) analyses.
The subject asset which you will use to compare against the reference asset (for full-reference analysis) or the asset against which you will perform a no-reference analysis.
Any error message resulting from the submission of the analysis. Note that the error represented here is meant to cover ONLY the submission of a new analysis for processing within the Kubernetes cluster. It does NOT cover any errors that may be generated when the analysis is either scheduled or executed. These error messages will be available through alternative means (i.e. Kubernetes monitoring software - Prometheus and/or REST APIs available for result processing).
The UTC timestamp (using ISO-8601 representation) recording when the analysis was successfully submitted for analysis. Analyses that fail to submit correctly will not have a value for this attribute.
The test ID used to uniquely identify the asset within the analysis
Metadata about the content being analyzed in this analysis.
{
"title": "Big Buck Bunny"
}
Analyzer configuration options used in this analysis
The request body used when updating an analysis.
Currently, the system supports only the following update operations:
-
Cancelling an existing analysis
Note: Only analyses that are currently in progress (i.e. scheduled, estimating, aligning, analyzing) can be cancelled
{
"status": "CANCELLED"
}
The desired analysis status
Cancels a running analysis
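As a sketch of a cancellation request, assuming the update endpoint follows the /analyses/{id} pattern implied elsewhere in this API (the path and the analysis id below are illustrative, not confirmed by this document):
# Note: the /analyses/{id} path shown here is an assumption based on this API's conventions
curl -X PATCH "https://localhost/api/v1/analyses/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e" \
  -H "Content-Type: application/json" \
  -d '{
  "status": "CANCELLED"
}'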
The response payload of the POST on analyses. This response will contain an Analysis object for each analysis represented in the NewAnalysis request body. A given analysis can either be submitted successfully or not. In both cases, the Analysis will contain attribute values that can be used to either fetch the resulting frame scores when successful, or address the error condition when a failure occurs (please see Analysis for more details).
Specification of configuration options for use by the analyzer at the analysis level. Configuration options for assets can be specified on the Asset object.
{
"enableBandingDetection": true,
"enableColorVolumeDifference": true,
"enableColorStatsCollection": true,
"enableVMAF": true,
"enablePSNR": true,
"qualityCheckConfig": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"missingCaptions": {
"enabled": false,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
],
"temporalAlignment": {
"minSeconds": 5,
"maxSeconds": 120,
"maxSecondsHighFPS": 120
}
}
Detect the level of banding within the assets
Detect cadence patterns within the assets
For full-reference (FR) analyses, run complexity analysis on the reference asset(s). In a no-reference analysis, complexity analysis is run on the subject asset(s) instead.
Enable Colour Volume Difference calculation
Enable Luminance and Color Gamut stats collection
Enable VMAF score calculation for full-reference analyses
Enable PSNR score calculation for full-reference analyses
Controls whether the Analyzer will perform automatic temporal alignment or not. This flag applies only to full-reference analyses, and it is recommended to leave it enabled.
Enable physical noise calculation for the video. Physical Noise measures the standard deviation of camera/sensor noise when the statistical behaviour of the noise is random with a Gaussian (or similar) distribution.
Enable visual noise calculation for the video. Visual Noise measures the standard deviation of noise considering the contrast masking behaviour of the underlying content.
Enable temporal information collection for the video
Enable spatial information collection for the video
Enable color information collection for the video
Configuration options for quality checks.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"missingCaptions": {
"enabled": false,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
Specifications of environments under which the content is viewed
[
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
Number of frames to process. When specified in the context of a full-reference analysis, the value applies to the reference asset.
Configuration options for temporal alignment
{
"minSeconds": 5,
"maxSeconds": 120,
"maxSecondsHighFPS": 120
}
Configuration options for content layout detection.
Additional (undocumented) configuration options for use by the Analyzer and at the direction/suggestion of your IMAX representative.
{
"bandingDetectionThreshold": 40,
"macroBlocking": true
}
Represents a video asset in the system. There are several different supported formats for specifying the asset path. See examples.
{
"assetUri": "s3://videos/example/Big_Buck_Bunny.mp4"
}
{
"name": "example/Big_Buck_Bunny.mp4",
"storageLocation": {
"type": "S3",
"name": "videos"
}
}
{
"content": {
"title": "Big Buck Bunny"
},
"name": "/videos/Big_Buck_Bunny_1080p@5000kbps.mp4",
"storageLocation": {
"type": "S3",
"name": "test-bucket",
"credentials": {
"useAssumedIAMRole": true
}
}
}
{
"content": {
"title": "Sparks - Dolby Vision"
},
"name": "20161103_1023_SPARKS_4K_P3_PQ_4000nits_DoVi.mxf",
"sidecars": [
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
],
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "PVC",
"name": "videos"
},
"dynamicRange": "HDR"
}
Metadata about the content contained in this asset. This can also be set automatically for all assets by including the content field in the NewAnalysis request.
{
"title": "Big Buck Bunny"
}
A URI describing the asset location, of the form
storageLocationType://storageLocationName/path/name
Either this field or all of name
, path
, and storageLocation
must be provided. Additional storageLocation
properties such as credentials
may still be specified alongside this field.
Any special characters, like space or hash, must be percent-encoded. For example, an S3 object with key my video#001.mp4
should be given as s3://my-bucket/mypath/my%20video%23001.mp4
.
To uniquely specify an asset location, either the assetUri
field or all of name
, path
, and storageLocation
must be provided.
The filename that represents the video. Although not required, one is encouraged to keep filenames unique, where possible.
If working with an image sequence asset (a collection of multiple image files at the same base path indexed by sequential numbers), a format string must be included in the file name to specify the position and format of the index.
The format string can take one of two forms:
%d
specifies numbers with no special formatting at the included position. For example,"sintel_%d.dpx"
will match an asset consisting of imagessintel_1.dpx
,sintel_2.dpx
, … ,sintel_10.dpx
, …%0[width]d
specifies numbers that are 0-padded consisting ofwidth
characters total. For example,"sintel_%03d.dpx"
will match an asset consisting of imagessintel_001.dpx
,sintel_002.dpx
,sintel_003.dpx
, …
The literal character %
can be escaped with the string %%
. For example, "big%%20buck%%20bunny%04d.png"
will match an asset consisting of images big%20buck%20bunny0001.png
, big%20buck%20bunny0002.png
, big%20buck%20bunny0003.png
, …
Note that only one format string can be specified in the file name. Additionally, if a format string is included, imageSequenceParameters
must also be provided.
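For example, a minimal sketch of an image sequence asset, reusing the format string and frame rate from the examples in this document (the path and storage location here are illustrative):
{
  "name": "sintel_%03d.dpx",
  "path": "/mnt/nas/sequences",
  "storageLocation": {
    "type": "PVC",
    "name": "videos"
  },
  "imageSequenceParameters": {
    "fps": 25
  }
}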
The sidecar(s) to associate with the asset. If a sidecar does not specify a path, it is assumed to use the path associated with the asset.
[
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
]
To uniquely specify an asset location, either the assetUri
field or all of name
, path
, and storageLocation
must be provided.
The path to the asset’s file (and possibly sidecar) location with the associated storage. The combination of the path and filename of a given asset must be unique.
To uniquely specify an asset location, you must use either the assetUri field, or the name and/or path together with the storageLocation.
The storage location that houses the asset.
{
"type": "S3",
"name": "test-bucket",
"credentials": {
"useAssumedIAMRole": "true"
}
}
DEPRECATED
Only for backwards compatibility; please use dynamicRange instead. This flag will cease to be supported in future versions.
Hint that the video asset should be high dynamic range (HDR). Note that if the asset cannot be HDR due to low bit depth, the analysis will fail.
Used to specify the dynamic range of the asset. The recommended option is the default of auto-detect. Forcing to HDR or SDR is usually not needed and should only be used if automatic detection of the dynamic range has failed.
Auto detect the asset dynamic range based on its metadata.
Treat the asset as having SDR (standard dynamic range), ignoring asset metadata.
Treat the asset as having HDR (high dynamic range), ignoring asset metadata.
Used to specify the packet identifier (see VideoPID and VideoPIDHex) for assets with multiple video streams. In the case of HLS, this identifier can be used to represent the HLS variant (see HLSVariantIdentifer).
The starting frame to use for this asset, as it pertains to the analysis. For video assets, this value is relative to the start of the asset, regardless of the absolute frame index at which it starts. If your asset starts at frame X, specifying a value of 100 for startFrame will instruct the analyzer to ignore all frames in the asset from X -> X + 99.
However, in the case where your asset is an image sequence that does not start at frame 1, you must do the arithmetic to figure out the correct startFrame that applies. Consider, for example, an image sequence that starts at frame 240. If you want to skip the first 100 frames and start at frame 340, you would specify a startFrame value of 100, not 340.
If unspecified, the analyzer will always use 1. This value is ignored for frame capture requests; you are directed to use the frameIndex in the body of the frame capture request instead.
Specification for the region of interest of this asset.
{
"originX": 20,
"originY": 0,
"regionHeight": 300,
"regionWidth": 400
}
Settings for RAW video formats. Must be provided if this asset has the file extension .yuv, .rgb or .bgr.
{
"resolution": {
"width": 720,
"height": 576
},
"fps": 25,
"scanType": "P",
"fieldOrder": "TFF",
"pixelFormat": "YUV420P"
}
Settings for image sequence assets. Must be provided if the name contains a format string (%d
or %0[width]d
) and the file extension is one of:
.png
.tif
.tiff
.dpx
.jpg
.jpeg
.j2c
.jp2
.jpc
.j2k
.exr
{
"fps": 25
}
Configuration options for asset-level quality checks
{
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SVS",
"threshold": 60,
"durationSeconds": 2,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SBS",
"threshold": 75,
"durationFrames": 48,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
]
}
Configuration options for audio groups that exist within this asset.
{
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
A unique identifier for an asset within a completed analysis. For a NR analysis, or to pick the reference asset within a FR analysis, this will simply be the integer ID associated with the asset. For a FR analysis, you will use the “refId-subjectId” format.
Quality checks specified on a per-asset basis.
{
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
]
}
Any number of score-based quality check definitions.
[
{
"metric": "SVS",
"threshold": 80,
"viewingEnvironmentIndex": 0,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25
}
]
Any number of metadata-based quality check definitions
[
{
"type": "DOLBY_VISION"
}
]
Configure Photosensitive Epilepsy Harding Tests
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"extendedFailure": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"luminanceFlash": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"redFlash": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"spatialPattern": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"standard": "ITU_R_BT_1702_2"
}
MXF container compliance checks.
{
"enabled": true
}
QuickTime container compliance checks.
{
"enabled": true,
"duration": true,
"durationThreshold": 0.5,
"audioDescriptors": true,
"videoDescriptors": true,
"timecodeDescriptors": true
}
Represents configuration for audio measurements and audio specific quality checks.
{
"enabled": true,
"groups": [
{
"name": "md:audtrackid:org:bbc.co.uk:123456:main.audio.en.primary.surroundsound",
"language": "en-GB",
"soundfieldMapping": {
"type": "SoundfieldChannelMapping",
"mapping": [
{
"name": "myleftsoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"inputChannelIndex": 1,
"outputChannelLocation": "FL"
},
{
"name": "myrightsoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"inputChannelIndex": 2,
"outputChannelLocation": "FR"
}
]
},
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MAX_SHORT_TERM_LOUDNESS",
"enabled": true,
"duration": 1,
"skipStart": 1,
"skipEnd": 1,
"threshold": 1
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_1",
"enabled": true
}
}
]
}
Enable or disable audio processing for the parent asset.
When set to disabled, no audio quality checks will be raised.
A collection of audio soundfield groups that exist within the parent asset.
Each audio group entry defines a soundfield group that can have loudness measured as well as quality checks defined.
Configure the quality check for Average Audio Phase Mismatch. Average phase mismatch measures the entire asset for discrepancies between the selected pair of audio channels.
Audio Phase Mismatch Detection identifies discrepancies in the phase alignment of audio channel pairs:
- Front Left / Front Right
- Side Left / Side Right
- Back Left / Back Right
Pairs of audio channels that are out of phase may weaken the soundwave, resulting in a distorted, thin output.
{
"enabled": true,
"threshold": {
"type": "DEGREE",
"value": 90
},
"channelPairs": [
"FL-FR"
]
}
Enable detection of average audio phase mismatch events.
The default threshold value is 120 degrees when the type is DEGREE and -0.50 when the type is CORRELATION.
{
"type": "DEGREE",
"value": 90
}
Performs the average audio phase mismatch check against the selected channel pairs: “FL-FR”, “BL-BR” and “SL-SR”.
["FL-FR", "BL-BR", "SL-SR"]
An enum of supported channels that can be used to define a soundfield group.
Front Left
Front Right
Front Center
Low Frequency Effects
Back Left
Back Right
Front Left of Center
Front Right of Center
Back Center
Side Left
Side Right
Top Center
Top Front Left
Top Front Center
Top Front Right
Top Back Left
Top Back Center
Top Back Right
Wide Left
Wide Right
Low Frequency Effects 2
Top Side Left
Top Side Right
Bottom Front Center
Bottom Front Left
Bottom Front Right
Configure the quality check for Audio Clicks and Pops. Clicks and pops are caused by a variety of factors, including a poor recording environment, bad equipment, or a misaligned recording.
{
"enabled": true,
"skipStart": 1,
"skipEnd": 1,
"sensitivity": 50
}
Enable detection of audio click/pop events
The duration in seconds to ignore at the start of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the start of audio in order to eliminate unwanted quality check failures.
The duration in seconds to ignore at the end of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the end of audio in order to eliminate unwanted quality check failures.
The sensitivity of the check. Higher sensitivity means more detections, but more false positives.
Configure the quality check for Audio Clipping. Clipping occurs when an audio signal exceeds the maximum limit of a recording or playback system. It typically happens when the volume level of the audio reaches or exceeds the maximum level that can be accurately reproduced, causing the waveform to be “clipped” or truncated. This introduces unwanted artifacts into the audio signal, leading to a harsh, distorted sound.
{
"enabled": true,
"duration": 0.05,
"skipStart": 1,
"skipEnd": 1,
"sensitivity": 50
}
Enable detection of audio clipping events
The minimum clipping duration in seconds required for an event to trigger.
The duration in seconds to ignore at the start of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the start of audio in order to eliminate unwanted quality check failures.
The duration in seconds to ignore at the end of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the end of audio in order to eliminate unwanted quality check failures.
The sensitivity of the check. Higher sensitivity means more detections, but more false positives.
Enumerates the audio segment audio channel activity types
SILENCE: All audio channels are silent
ANY_CHANNEL_ACTIVE: At least one audio channel is not silent
ALL_CHANNELS_ACTIVE: All audio channels are active
Represents audio measurements and audio specific quality check configuration for a particular audio group.
{
"name": "md:audtrackid:org:bbc.co.uk:123456:main.audio.en.primary.fivepointone",
"language": "en-CA",
"description": "A logical audio soundfield group",
"soundfieldMapping": {
"type": "SoundfieldTrackMapping",
"name": "mysoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"outputChannelLayout": [
"FL", "FR", "FC", "LFE", "SL", "SR"
]
},
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MAX_MOMENTARY_LOUDNESS",
"enabled": true,
"duration": 1,
"skipStart": 1,
"skipEnd": 1,
"threshold": 1
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
{
"name": "md:audtrackid:org:bbc.co.uk:123456:main.audio.en.primary.surroundsound",
"language": "en-GB",
"description": "The logical audio soundfield group from sidecar files",
"soundfieldMapping": {
"type": "SoundfieldChannelMapping",
"mapping": [
{
"name": "myleftsoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"inputChannelIndex": 1,
"outputChannelLocation": "FL"
},
{
"name": "myrightsoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"inputChannelIndex": 2,
"outputChannelLocation": "FR"
}
]
},
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MAX_SHORT_TERM_LOUDNESS",
"enabled": true,
"duration": 1,
"skipStart": 1,
"skipEnd": 1,
"threshold": 1
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_1",
"enabled": true
}
}
A unique name for an audio soundfield group. All names within the AudioGroup object must be unique.
The language for this specific audio soundfield group.
The description for this specific audio soundfield group.
Can be either a SoundfieldChannelMapping
or SoundfieldTrackMapping
object.
A collection of audio specific quality checks that will be performed on the audio groups within the parent asset.
Configuration parameters for audio loudness measurements that are performed on an audio group.
A quality check that will be performed on an audio group.
{
"loudnessChecks": [
{
"checkType": "MAX_LOUDNESS_RANGE",
"enabled": true,
"duration": 1,
"skipStart": 1,
"skipEnd": 1,
"threshold": 1
}
],
"clippingCheck": {
"enabled": true,
"duration": 0.05,
"skipStart": 1,
"skipEnd": 1,
"sensitivity": 50
},
"clicksAndPopsCheck": {
"enabled": true,
"skipStart": 1,
"skipEnd": 1,
"sensitivity": 50
},
"phaseMismatchCheck": {
"enabled": "false",
"duration": 1,
"skipStart": 1,
"skipEnd": 1,
"smoothing": 1,
"threshold": {
"type": "DEGREE",
"value": 90
},
"channelPairs": [
"FL-FR"
]
},
"averagePhaseMismatchCheck": {
"enabled": false,
"threshold": {
"type": "DEGREE",
"value": 90
},
"channelPairs": [
"FL-FR"
]
}
}
Configuration for one or more loudness quality checks.
Configuration for audio clipping quality check
Configuration for audio clicks/pops quality check
Configuration for audio phase mismatch quality check
Configuration for average audio phase mismatch quality check
Used to define a quality check based on audio loudness measurements taken over a window of time.
Quality check failure events are generated when loudness values of the specified type are beyond the threshold
limit for at least duration
continuous seconds.
Certain types of checks are performed over the duration of the asset, where skipStart
, skipEnd
and duration
are not applicable. Those types are indicated within the schema definition below.
{
"checkType": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 5,
"skipStart": 2.5,
"skipEnd": 1.25,
"threshold": -2
}
{
"checkType": "MAX_INTEGRATED_LOUDNESS",
"enabled": true,
"threshold": 1
}
Enable detection of this particular audio loudness quality check event.
The type of loudness check to perform.
The minimum continuous duration in seconds required for the loudness to exceed threshold
for an event to trigger.
Duration can only be specified for the following checkType
values:
MAX_MOMENTARY_LOUDNESS
MAX_SHORT_TERM_LOUDNESS
MIN_TRUE_PEAK_LEVEL
MAX_TRUE_PEAK_LEVEL
SILENCE
For the remaining checkType
values not listed above, duration
is always the length of the mixed audio track (ie. group) and is not allowed to be specified explicitly.
For a checkType
of MAX_MOMENTARY_LOUDNESS
, duration
must be greater than 0.4 seconds.
For a checkType
of MAX_SHORT_TERM_LOUDNESS
, duration
must be greater than 3 seconds.
The duration in seconds to ignore at the start of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the start of audio in order to eliminate unwanted quality check failures.
skipStart
can only be specified for the following checkType
values:
MAX_MOMENTARY_LOUDNESS
MAX_SHORT_TERM_LOUDNESS
MIN_TRUE_PEAK_LEVEL
MAX_TRUE_PEAK_LEVEL
SILENCE
The duration in seconds to ignore at the end of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the end of audio in order to eliminate unwanted quality check failures.
skipEnd
can only be specified for the following checkType
values:
MAX_MOMENTARY_LOUDNESS
MAX_SHORT_TERM_LOUDNESS
MIN_TRUE_PEAK_LEVEL
MAX_TRUE_PEAK_LEVEL
SILENCE
The upper or lower threshold limit which loudness values must exceed for duration
seconds for an event to trigger.
Loudness values less than the threshold for the following type
entries will cause an event to trigger:
MIN_INTEGRATED_LOUDNESS
(in LKFS)MIN_LOUDNESS_RANGE
(in LU)MIN_TRUE_PEAK_LEVEL
(in dBTP)SILENCE
(in dBTP)
Loudness values greater than the threshold for the following type
entries will cause an event to trigger:
MAX_INTEGRATED_LOUDNESS
(in LKFS)MAX_LOUDNESS_RANGE
(in LU)MAX_MOMENTARY_LOUDNESS
(in LUFS)MAX_SHORT_TERM_LOUDNESS
(in LUFS)MAX_TRUE_PEAK_LEVEL
(in dBTP)
The type of loudness check to register an AudioLoudnessCheck
for.
The techniques used to measure Momentary Loudness, Short-Term Loudness, and True Peak Level are defined by the following specifications:
-
ITU-R BS.1771-1 - Requirements for Loudness and True-peak indicating meters
https://www.itu.int/rec/R-REC-BS.1771-1-201201-I/en -
ITU-R BS.1770-4 - Algorithms to Measure Audio Programme Loudness and True-peak Audio Level
https://www.itu.int/rec/R-REC-BS.1770-4-201510-I/en
The technique used to measure Integrated Loudness is defined by the following family of specifications:
- ITU-R BS.1770 - Algorithms to Measure Audio Programme Loudness and True-peak Audio Level
https://www.itu.int/rec/R-REC-BS.1770
In addition to the technique defined within the ITU-R BS.1770 specification, the Loudness Range calculation also utilizes a cascaded gating scheme and the statistical distribution of loudness readings when determining the overall loudness range. This is performed in order to minimize the impact of low-level signals, background noise, silence and short bursts of unusually loud sound (eg. explosions in a movie) from dominating the loudness range. The loudness range measurement technique is described in more detail here:
- EBU TECH 3342 - Loudness Range: A Measure to Supplement EBU R128 Loudness Normalization
https://tech.ebu.ch/docs/tech/tech3342.pdf
Maximum Momentary Loudness (in LUFS) measured over an integration period of 400 milliseconds.
Maximum Short-Term Loudness (in LUFS) measured over an integration period of 3 seconds
Minimum True Peak Level (in dBTP) for each channel within a group
Maximum True Peak Level (in dBTP) for each channel within a group
Minimum Integrated Loudness (in LKFS)
Maximum Integrated Loudness (in LKFS)
Minimum Loudness Range (in LU)
Maximum Loudness Range (in LU)
Detect periods of silence, reported for left, right and all channels
Represents the algorithm used to measure perceived loudness.
ITU-R BS.1770-1 algorithm used to measure audio program loudness and true-peak audio level
ITU-R BS.1770-2 algorithm used to measure audio program loudness and true-peak audio level
ITU-R BS.1770-3 algorithm used to measure audio program loudness and true-peak audio level
ITU-R BS.1770-4 algorithm used to measure audio program loudness and true-peak audio level
Represents configuration for each of the audio loudness measurements that can be performed.
{
"enabled": true,
"algorithm": "ITU_R_BS_1770_1"
}
Controls whether audio loudness measurements are performed. This must be set to true
if any audio loudness quality checks are desired for the associated asset.
The algorithm to use for loudness (Momentary, Short-term, Integrated, Loudness Range) and True Peak level measurements.
Audio Phase Mismatch Detection identifies discrepancies in the phase alignment of audio channel pairs:
- Front Left / Front Right
- Side Left / Side Right
- Back Left / Back Right
Phase mismatch occurs when audio channels are misaligned, leading to phase cancellation and interference resulting in a distorted, thin output.
{
"enabled": true,
"duration": 1,
"skipStart": 1,
"skipEnd": 1,
"smoothing": 1,
"threshold": {
"type": "DEGREE",
"value": 90
},
"channelPairs": [
"FL-FR"
]
}
Enable detection of audio phase mismatch events.
The number of consecutive seconds required for an event to trigger.
The duration in seconds to ignore at the start of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the start of audio in order to eliminate unwanted quality check failures.
The duration in seconds to ignore at the end of the mixed audio track (ie. group). This value can be used to skip or ignore some portion at the end of audio in order to eliminate unwanted quality check failures.
The smoothing factor of the check. Higher values result in more aggressive smoothing and greater attenuation of outliers, while lower values preserve more of the original data.
The default threshold value is 160 degrees when the type is DEGREE, and -0.94 when the type is CORRELATION.
{
"type": "DEGREE",
"value": 90
}
Performs the audio phase mismatch check against the selected channel pairs: “FL-FR”, “BL-BR” and “SL-SR”.
["FL-FR"]
["FL-FR", "BL-BR", "SL-SR"]
Configure the quality check threshold for (average) audio phase mismatch.
{
"type": "DEGREE",
"value": 90
}
The threshold type for audio phase mismatch detection.
The audio phase mismatch correlation.
The audio phase mismatch degrees.
The threshold value for the corresponding type. The value should be between 0 and 180 when the type is DEGREE, and between -1 and 1 when the type is CORRELATION.
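For example, a minimal sketch of an equivalent correlation-based threshold (the value is illustrative and falls within the allowed -1 to 1 range):
{
  "type": "CORRELATION",
  "value": -0.5
}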
Captures the supported parameter values for audio silence detection.
{
"threshold": -60,
"duration": 2.5
}
The loudness measurement below which an audio channel is considered silent for the purposes of determining if a segment has all channels active, any channels active, or no channels active. The default value is -60 dBFS.
The minimum duration in seconds of a channel being below threshold
db for an audio channel to be considered silent. Segments shorter than this duration will be treated as active. The default value is 30 seconds.
Deprecated.
Important Note: As of version 2.21.0, this schema has been deprecated and it is no longer recommended to configure an analysis level audio silence quality check. Please refer to the asset level audio check configuration Audio
to perform silence quality checks in this version and future releases.
Captures the supported parameter values for audio silence detection.
{
"threshold": -60,
"commonParameters": {
"enabled": "true",
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
The loudness measurement below which the audio output for the asset is considered to be silent. The default value is -60 dBTP.
Common quality check configuration parameters.
{
"enabled": "true",
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
Represents the details about a given frame and map cache.
{
"cacheId": "533db73-0f9a-4805-9651-c5dcd519dc37",
"numberOfFiles": 15182,
"sizeOfFiles": 1073741824,
"humanReadableSizeOfFiles": "1.0 G"
}
The UUID of the file/map cache
The number of frame and/or map PNG files stored in the cache
The aggregate size of all the PNG files stored in the cache (in bytes)
The aggregate size of all the PNG files stored in the cache (in human readable form using KMGTPE units)
The response payload of the GET on cache which lists the number of files and overall size of the frames and map files in each cache. Most deployments will have only one file/map cache.
{
"caches": [
{
"cacheId": "533db73-0f9a-4805-9651-c5dcd519dc37",
"numberOfFiles": 15182,
"sizeOfFiles": 1073741824,
"humanReadableSizeOfFiles": "1.0 G"
}
]
}
The list of known file/map caches
The request body used when creating all possible captures for a given frame.
{
"frameRequest": {
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
},
"requestedCaptureType": "FRAME"
}
The frame request data. Use FrameRequestBody
for single asset and FullReferenceFrameRequestBody
to include a reference asset. Note that you must use FullReferenceFrameRequestBody
if you specify a requestedType
of QUALITY_MAP
or COLOR_DIFFERENCE_MAP
.
The capture type to send back in the response.
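As a sketch, a single-asset capture request might be submitted as follows. Both the endpoint path and the "FrameRequest" type discriminator below are assumptions based on this API's conventions, not confirmed by this document:
# Note: the /captures path and the "FrameRequest" type value are illustrative assumptions
curl -X POST "https://localhost/api/v1/captures" \
  -H "Content-Type: application/json" \
  -d '{
  "frameRequest": {
    "type": "FrameRequest",
    "asset": {
      "name": "Big_Buck_Bunny.mp4",
      "path": "/mnt/nas/videos",
      "storageLocation": {
        "type": "S3",
        "name": "/videos"
      }
    },
    "startFrame": {
      "type": "PTS",
      "value": 1400
    }
  },
  "requestedCaptureType": "FRAME"
}'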
Represents the different types of frame captures available (i.e. frame, banding map, quality map, color difference map)
Represents the frame’s image content.
Represents binary map with white pixels showing banding presence.
Represents a gray scale representation of pixel-level perceptual quality that shows the spatial distribution of impairments within a frame.
Represents a gray scale representation of pixel-level color and skin tone deviation with respect to the reference file.
Captures the type of the configuration.
Contains metadata about the content contained within an asset. HLS variants that are part of the same presentation should have the same title.
{
"title": "Big Buck Bunny"
}
The title of the content
Configuration options for content layout detection
Captures the supported parameter values for the content similarity quality check.
When this check is enabled, the analysis will run in content similarity detection mode and detect content differences arising from frame insertions and deletions between two versions of the same title.
Note that in the content similarity detection mode, exactly one test and one reference asset must be provided. Additionally, the usual viewer score metrics will not be generated; instead both the reference and test will be evaluated in a no-reference mode. Thus, full-reference metrics such as PSNR and CVD cannot be enabled, nor can full-reference metrics be used as the basis for score based quality checks.
{
"enabled": true,
"sensitivity": 75
}
Controls whether the content similarity quality check is enabled
The sensitivity of the content similarity detector, from 1-100.
Larger numbers, or those closer to 100, correspond to a more sensitive detector, meaning more events and potentially more false positives will be detected. Smaller numbers, or those closer to 1, correspond to a less sensitive detector, meaning fewer events will be detected. Lowering the sensitivity will usually result in fewer false positives at the cost of potentially increasing false negatives.
The default value is 50.
Authentication credentials for assets stored in Amazon S3.
In order to support some software features (i.e. frame/map captures), the system needs to persist the access credentials provided in this object into our secure data store. For this reason it is strongly recommended that you use useAssumedIAMRole or the Add Amazon S3 bucket access endpoint instead, so as to avoid persisting the IAM access key.
{
"useAssumedIAMRole": true
}
{
"accessKey": {
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"accessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
}
The AWS IAM access key that grants read permissions to the associated Amazon S3 bucket.
{
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"accessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
AWS Access Key ID for accessing assets stored in Amazon S3.
Deprecated: use accessKey
instead.
AWS Secret Access Key for accessing assets stored in Amazon S3.
Deprecated: use accessKey
instead.
Authenticate using the role already assumed by the underlying container
A specification of a device for which scores are calculated
{
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
}
{
"name": "xl2420t"
}
The name of the display device
Resolution of the device specified as width and height in pixels
{
"width": 1920,
"height": 1080
}
Represents the configuration to apply when producing an optimized rendition using the AWS Elemental MediaConvert encoder. The config
JSON property below can accept the verbatim config from an EMC invocation. Additionally, this property supports templating and the following variables are available to be used:
Variable | Description |
---|---|
{INPUT_LOCATION} | represents the assetUri property on the input video included in the optimization |
Note that EMC only supports assets stored in S3.
For full examples of optimizing the EMC encoder, please see the examples provided on the /optimizations endpoint.
{
"type": "EMCConfig",
"config": {
"JobTemplate": "",
"Queue": "arn:aws:mediaconvert:us-east-1:315835334412:queues/Default",
"UserMetadata": {},
"Role": "arn:aws:iam::315835334412:role/mediaconvert-optimizer",
"Settings": {
"OutputGroups": [
{
"CustomName": "top-profile-encode",
"Name": "CMAF",
"Outputs": [
{
"ContainerSettings": {
"Container": "CMFC"
},
"VideoDescription": {
"Width": 1920,
"ScalingBehavior": "STRETCH_TO_OUTPUT",
"Height": 1080,
"TimecodeInsertion": "DISABLED",
"AntiAlias": "ENABLED",
"Sharpness": 50,
"CodecSettings": {
"Codec": "H_264",
"H264Settings": {
"InterlaceMode": "PROGRESSIVE",
"NumberReferenceFrames": 3,
"Syntax": "DEFAULT",
"Softness": 0,
"GopClosedCadence": 1,
"GopSize": 2,
"Slices": 1,
"GopBReference": "ENABLED",
"HrdBufferSize": 16000000,
"MaxBitrate": 8000000,
"EntropyEncoding": "CABAC",
"RateControlMode": "QVBR",
"QvbrSettings": {
"QvbrQualityLevel": 9
},
"CodecProfile": "HIGH",
"MinIInterval": 0,
"AdaptiveQuantization": "AUTO",
"CodecLevel": "AUTO",
"SceneChangeDetect": "ENABLED",
"QualityTuningLevel": "SINGLE_PASS",
"UnregisteredSeiTimecode": "DISABLED",
"GopSizeUnits": "SECONDS",
"ParControl": "INITIALIZE_FROM_SOURCE",
"NumberBFramesBetweenReferenceFrames": 3,
"RepeatPps": "DISABLED",
"DynamicSubGop": "ADAPTIVE"
}
}
},
"NameModifier": "_8Mbps"
}
],
"OutputGroupSettings": {
"Type": "CMAF_GROUP_SETTINGS",
"CmafGroupSettings": {
"TargetDurationCompatibilityMode": "SPEC_COMPLIANT",
"WriteHlsManifest": "ENABLED",
"WriteDashManifest": "ENABLED",
"SegmentLength": 4,
"Destination": "s3://s3-bucket/destination/path/",
"FragmentLength": 2,
"SegmentControl": "SEGMENTED_FILES",
"WriteSegmentTimelineInRepresentation": "ENABLED",
"ManifestDurationFormat": "FLOATING_POINT",
"StreamInfResolution": "INCLUDE"
}
}
}
],
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT"
}
},
"VideoSelector": {
"ColorSpace": "FOLLOW",
"Rotate": "DEGREE_0",
"AlphaBehavior": "DISCARD"
},
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"TimecodeSource": "ZEROBASED",
"FileInput": "s3://s3-bucket/sources/source.mov"
}
]
},
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_15",
"Priority": 0,
"HopDestinations": []
}
}
Must be "EMCConfig"
.
Represents a complete configuration for an EMC encoding job in JSON format. The content here can be used verbatim as if you were calling the EMC encoder directly.
This configuration supports (optional) templating and the following variables are available to be used:
Variable | Description |
---|---|
{INPUT_LOCATION} | represents the assetUri property on the input video included in the optimization |
{
"JobTemplate": "",
"Queue": "arn:aws:mediaconvert:us-east-1:315835334412:queues/Default",
"UserMetadata": {},
"Role": "arn:aws:iam::315835334412:role/mediaconvert-optimizer",
"Settings": {
"OutputGroups": [
{
"CustomName": "top-profile-encode",
"Name": "CMAF",
"Outputs": [
{
"ContainerSettings": {
"Container": "CMFC"
},
"VideoDescription": {
"Width": 1920,
"ScalingBehavior": "STRETCH_TO_OUTPUT",
"Height": 1080,
"TimecodeInsertion": "DISABLED",
"AntiAlias": "ENABLED",
"Sharpness": 50,
"CodecSettings": {
"Codec": "H_264",
"H264Settings": {
"InterlaceMode": "PROGRESSIVE",
"NumberReferenceFrames": 3,
"Syntax": "DEFAULT",
"Softness": 0,
"GopClosedCadence": 1,
"GopSize": 2,
"Slices": 1,
"GopBReference": "ENABLED",
"HrdBufferSize": 16000000,
"MaxBitrate": 8000000,
"EntropyEncoding": "CABAC",
"RateControlMode": "QVBR",
"QvbrSettings": {
"QvbrQualityLevel": 9
},
"CodecProfile": "HIGH",
"MinIInterval": 0,
"AdaptiveQuantization": "AUTO",
"CodecLevel": "AUTO",
"SceneChangeDetect": "ENABLED",
"QualityTuningLevel": "SINGLE_PASS",
"UnregisteredSeiTimecode": "DISABLED",
"GopSizeUnits": "SECONDS",
"ParControl": "INITIALIZE_FROM_SOURCE",
"NumberBFramesBetweenReferenceFrames": 3,
"RepeatPps": "DISABLED",
"DynamicSubGop": "ADAPTIVE"
}
}
},
"NameModifier": "_8Mbps"
}
],
"OutputGroupSettings": {
"Type": "CMAF_GROUP_SETTINGS",
"CmafGroupSettings": {
"TargetDurationCompatibilityMode": "SPEC_COMPLIANT",
"WriteHlsManifest": "ENABLED",
"WriteDashManifest": "ENABLED",
"SegmentLength": 4,
"Destination": "s3://s3-bucket/destination/path/",
"FragmentLength": 2,
"SegmentControl": "SEGMENTED_FILES",
"WriteSegmentTimelineInRepresentation": "ENABLED",
"ManifestDurationFormat": "FLOATING_POINT",
"StreamInfResolution": "INCLUDE"
}
}
}
],
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT"
}
},
"VideoSelector": {
"ColorSpace": "FOLLOW",
"Rotate": "DEGREE_0",
"AlphaBehavior": "DISCARD"
},
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"TimecodeSource": "ZEROBASED",
"FileInput": "s3://s3-bucket/sources/source.mov"
}
]
},
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_15",
"Priority": 0,
"HopDestinations": []
}
An array of configurations that control how IMAX Stream Smart™ optimizes the encode(s) produced from the config
. Note that the entries in this array are applied in sequential order to the encodes produced, until either list is exhausted.
Please consult your IMAX representative for more details on the applicability of these objects for your use case(s).
[
{
"key1": "value1"
},
{
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
]
Represents an intermediate or final encoding of an input video. Encodes are included in the context of an encoder configuration (e.g. FFmpegConfig) when your goal is to use IMAX Stream Smart™ to produce an optimized rendition of a given input video.
Encodes support templating and the following variables are available to be used in the encoding commands:
Variable | Description |
---|---|
{INPUT_LOCATION} | represents the assetUri property on the input video included in the optimization |
{OUTPUT_LOCATION} | represents the outputLocation of the encoded video |
{TEMP_FILE_1} .. {TEMP_FILE_5} | represents the output location of up to 5 intermediate video/metadata files when performing multi-pass encoding |
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 23 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/example/output/encoded_video.mp4"
}
}
For single-pass encoding, this array holds the single encoding command used to produce the final encoded video.
For multi-pass encoding, this array holds all the encoding commands used to produce the intermediate video/metadata and, lastly, the final encoded video.
Requirements:
- The input must be given as
-i {INPUT_LOCATION}
- The output must be given as
{OUTPUT_LOCATION}
, which must be the last argument. For multipass, only the last pass requires this. - For CRF commands, please specify
-crf
(or omit to use the default). Optionally specify-maxrate
and/or-bufsize
. - For VBR commands, please specify
-b:v
,-maxrate
and-bufsize
(currently, all three are required). - Not all FFmpeg arguments/flags are supported. Unsupported arguments currently include:
-ss
,-sseof
,-t
,-to
,-fs
.
Encodes support templating and the following variables are available to be used in the encoding commands:
Variable | Description |
---|---|
{INPUT_LOCATION} | represents the assetUri property on the input video included in the optimization |
{OUTPUT_LOCATION} | represents the outputLocation of the encoded video |
{TEMP_FILE_1} .. {TEMP_FILE_5} | represents the output location of up to 5 intermediate video/metadata files when performing multi-pass encoding |
[
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 23 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
]
[
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -b:v 5000k -maxrate 6250k -bufsize 10000k -an {OUTPUT_LOCATION}"
]
[
"ffmpeg -i {INPUT_LOCATION} -passlogfile {TEMP_FILE_1} -profile:v high -preset slow -pass 1 -vcodec libx264 -bf 0 -refs 4 -b:v 4500k -maxrate:v 4500k -bufsize:v 6000k -minrate:v 6000k -x264-params \"rc-lookahead=48:keyint=96:stitchable=1:keyint_min:48\" -copyts -start_at_zero -an -f mp4 /dev/null",
"ffmpeg -i {INPUT_LOCATION} -passlogfile {TEMP_FILE_1} -profile:v high -preset slow -pass 2 -vcodec libx264 -bf 0 -refs 4 -b:v 4500k -maxrate:v 4500k -bufsize:v 6000k -minrate:v 6000k -x264-params \"rc-lookahead=48:keyint=96:stitchable=1:keyint_min:48\" -copyts -start_at_zero -an -f mp4 {OUTPUT_LOCATION}"
]
Represents the output location for the final encoded video. Commands can reference this location using {OUTPUT_LOCATION}.
The Optimization job will fail if there is already a file at the given output location, to prevent overwrites.
Additional configuration options that control how IMAX Stream Smart™ optimizes the resulting encode.
Please consult your IMAX representative for more details on the applicability of this object for your use case(s).
{
"key1": "value1",
"key2": "value2"
}
Encapsulates the encoder configuration used when producing the encoded video(s). Depending on your choice of encoder, you may have many configuration options available. The goal here is to supply IMAX Stream™ with the same configuration that you would use to produce your encoded version, which will serve as the baseline for the optimization process.
Currently, IMAX Stream™ supports the following encoders: FFmpeg (FFmpegConfig) and AWS Elemental MediaConvert (EMCConfig).
{
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 23 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/example/output/encoded_video.mp4"
}
}
]
}
{
"type": "EMCConfig",
"config": {
"JobTemplate": "",
"Queue": "arn:aws:mediaconvert:us-east-1:315835334412:queues/Default",
"UserMetadata": {},
"Role": "arn:aws:iam::315835334412:role/mediaconvert-optimizer",
"Settings": {
"OutputGroups": [
{
"CustomName": "top-profile-encode",
"Name": "CMAF",
"Outputs": [
{
"ContainerSettings": {
"Container": "CMFC"
},
"VideoDescription": {
"Width": 1920,
"ScalingBehavior": "STRETCH_TO_OUTPUT",
"Height": 1080,
"TimecodeInsertion": "DISABLED",
"AntiAlias": "ENABLED",
"Sharpness": 50,
"CodecSettings": {
"Codec": "H_264",
"H264Settings": {
"InterlaceMode": "PROGRESSIVE",
"NumberReferenceFrames": 3,
"Syntax": "DEFAULT",
"Softness": 0,
"GopClosedCadence": 1,
"GopSize": 2,
"Slices": 1,
"GopBReference": "ENABLED",
"HrdBufferSize": 16000000,
"MaxBitrate": 8000000,
"EntropyEncoding": "CABAC",
"RateControlMode": "QVBR",
"QvbrSettings": {
"QvbrQualityLevel": 9
},
"CodecProfile": "HIGH",
"MinIInterval": 0,
"AdaptiveQuantization": "AUTO",
"CodecLevel": "AUTO",
"SceneChangeDetect": "ENABLED",
"QualityTuningLevel": "SINGLE_PASS",
"UnregisteredSeiTimecode": "DISABLED",
"GopSizeUnits": "SECONDS",
"ParControl": "INITIALIZE_FROM_SOURCE",
"NumberBFramesBetweenReferenceFrames": 3,
"RepeatPps": "DISABLED",
"DynamicSubGop": "ADAPTIVE"
}
}
},
"NameModifier": "_8Mbps"
}
],
"OutputGroupSettings": {
"Type": "CMAF_GROUP_SETTINGS",
"CmafGroupSettings": {
"TargetDurationCompatibilityMode": "SPEC_COMPLIANT",
"WriteHlsManifest": "ENABLED",
"WriteDashManifest": "ENABLED",
"SegmentLength": 4,
"Destination": "s3://s3-bucket/destination/path/",
"FragmentLength": 2,
"SegmentControl": "SEGMENTED_FILES",
"WriteSegmentTimelineInRepresentation": "ENABLED",
"ManifestDurationFormat": "FLOATING_POINT",
"StreamInfResolution": "INCLUDE"
}
}
}
],
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT"
}
},
"VideoSelector": {
"ColorSpace": "FOLLOW",
"Rotate": "DEGREE_0",
"AlphaBehavior": "DISCARD"
},
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"TimecodeSource": "ZEROBASED",
"FileInput": "s3://s3-bucket/sources/source.mov"
}
]
},
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_15",
"Priority": 0,
"HopDestinations": []
}
}
The response payload for errors returned by any of the operations.
{
"code": "SS-20000",
"description": "A Generic StreamSmart error occured"
}
{
"code": "SA-10000",
"description": "The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.",
"details": {
"type": "BodyProcessorException",
"message": "[Bad Request] Validation error for body application/json: Input doesn't match one of allowed values of enum: [DOLBY_VISION_METADATA, AUDIO]",
"causeType": "ValidationExceptionImpl",
"causeMessage": "Input doesn't match one of allowed values of enum: [DOLBY_VISION_METADATA, AUDIO]",
"actualContentType": "application/json",
"errorType": "VALIDATION_ERROR",
"invalidInputScope": "/subjectAssets/0/sidecars/0/type",
"invalidInputKeyword": "enum",
"invalidInput": "XYZ"
}
}
The code for the error
Generic StreamAware error code
Generic StreamSmart error code
Parsing Error
Invalid optimization specification
Failed to start optimization
Unauthorized encoder
Invalid token provided
A description of the error
Additional details for the error
{
"type": "BodyProcessorException",
"message": "[Bad Request] Validation error for body application/json: Input doesn't match one of allowed values of enum: [DOLBY_VISION_METADATA, AUDIO]",
"causeType": "ValidationExceptionImpl",
"causeMessage": "Input doesn't match one of allowed values of enum: [DOLBY_VISION_METADATA, AUDIO]",
"actualContentType": "application/json",
"errorType": "VALIDATION_ERROR",
"invalidInputScope": "/subjectAssets/0/sidecars/0/type",
"invalidInputKeyword": "enum",
"invalidInput": "XYZ"
}
Represents the configuration to apply when producing an optimized rendition using the FFmpeg encoder.
IMAX Stream™ supports a number of FFmpeg encoding strategies including:
- Single-pass constant rate factor (CRF)
- Single-pass variable bitrate (VBR)
- Multi-pass variable bitrate (VBR)
For full examples of optimizing the FFmpeg encoder, please see the examples provided on the /optimizations endpoint.
{
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -b:v 4500k -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/example/output/encoded_video.mp4"
}
}
]
}
Must be "FFmpegConfig".
An array of one or more (FFmpeg) encodes to apply to the input asset. If you are producing a single encoded video, the array includes a single encode; if you are producing a ladder of encoded videos, it includes multiple encodes.
[
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 23 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/example/output/encoded_video.mp4"
}
}
]
[
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 23 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/example/output/output1.mp4"
}
},
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 25 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/example/output/output2.mp4"
}
},
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=48:keyint_min=48:scenecut=0\" -profile:v high -level:v 4.1 -preset slow -crf 27 -maxrate 4500k -bufsize 6000k -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/example/output/output3.mp4"
}
}
]
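The examples above use the single-pass CRF strategy. For the multi-pass VBR strategy, the sketch below is a plausible configuration rather than a confirmed one: it assumes that each pass is supplied as a separate entry in the command array and that both passes share a pass log file (the -pass and -passlogfile flags are standard FFmpeg options; the output path is a placeholder):
{
  "command": [
    "ffmpeg -y -i {INPUT_LOCATION} -c:v libx264 -profile:v high -level:v 4.1 -preset slow -b:v 4500k -maxrate 4500k -bufsize 6000k -pass 1 -passlogfile /tmp/ffmpeg2pass -an -f null /dev/null",
    "ffmpeg -i {INPUT_LOCATION} -c:v libx264 -profile:v high -level:v 4.1 -preset slow -b:v 4500k -maxrate 4500k -bufsize 6000k -pass 2 -passlogfile /tmp/ffmpeg2pass -an {OUTPUT_LOCATION}"
  ],
  "outputLocation": {
    "assetUri": "s3://videos/example/output/output_2pass.mp4"
  }
}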
Configures the FPS and scan type quality check. When the detected FPS or scan type differs from the probed FPS or scan type, an “fps-mismatch” event is fired. The event is also fired when the stream frame rate (if detected by the demuxer) differs from the measured FPS.
{
"allowed": "30i,60p",
"enablePsfDetection": false,
"commonParameters": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
A comma-separated list of allowed FPS and scan type combinations, such as 30i or 60p. If empty or unspecified, everything is allowed. When the detected FPS/scan combination is not one of the allowed ones, an “fps-not-allowed” event is fired.
Enables detection of “bad” interlaced videos created from PsF sources. This requires additional time and processing cycles and may not be 100% correct.
Common quality check configuration parameters.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
An identifier which can be used to uniquely identify a single frame within a video asset. The frame index/number within the sequential list of frames that constitute a video asset.
{
"type": "FrameIndex",
"value": 1200
}
Captures the schema type for use in oneOf semantics.
Captures the frame index value.
If the video asset is being deinterlaced by frame (i.e. FrameIndex or FrameTime and not PTS), then this index tells the system whether it should seek to the first or second deinterlaced frame for the desired frame. This value is rarely needed and is only useful in the context of a full-reference analysis under certain scan type and frame rate combinations. Please consult your IMAX contact for more details.
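For illustration, a FrameIndex identifier requesting the second deinterlaced frame would carry the optional deinterlacingIndex alongside the value (mirroring the FrameTime example later in this document; the values here are illustrative):
{
  "type": "FrameIndex",
  "value": 1200,
  "deinterlacingIndex": 1
}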
The request body for any request to create a frame and/or map.
{
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.m3u8",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "/videos"
},
"streamIdentifier": {
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
},
"startFrame": {
"type": "FrameIndex",
"value": 1200
},
"additionalFrames": 24
}
Captures the schema type for use in oneOf semantics.
The video asset for which you want to create the frame capture or map.
The frame at which to start capturing.
{
"type": "FrameIndex",
"value": 1200
}
The number of additional frames after startFrame for which frame captures (or maps) will be automatically generated and cached. Decoding video, extracting frames and building maps can be expensive operations. Use this value to capture and cache a number of frames following startFrame to support faster subsequent look-ahead request-response exchanges (i.e. useful in scroll-forward functionality).
An identifier which can be used to uniquely identify a single frame within a video asset, structured as a hybrid time-frame format HH:MM:SS:FF where:
- HH is the two-digit hour (00-24);
- MM is the two-digit minute (00-59);
- SS is the two-digit second (00-59);
- and FF is the frame number within the second, which varies depending on the asset's frames per second (FPS).
{
"type": "FrameTime",
"value": "00:34:28:21",
"deinterlacingIndex": 1
}
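As a worked illustration (assuming FF is zero-based), a FrameTime can be converted to a FrameIndex by multiplying the elapsed whole seconds by the asset's FPS and adding FF: for a 24 FPS asset, 00:34:28:21 corresponds to frame index (34 × 60 + 28) × 24 + 21 = 49,653.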
Captures the schema type for use in oneOf semantics.
Captures the frame time value.
If the video asset is being deinterlaced by frame (i.e. FrameIndex or FrameTime and not PTS), then this index tells the system whether it should seek to the first or second deinterlaced frame for the desired frame. This value is rarely needed and is only useful in the context of a full-reference analysis under certain scan type and frame rate combinations. Please consult your IMAX contact for more details.
Captures the supported parameter values for the freeze frame quality check.
{
"enabled": true,
"sensitivity": 75,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
Controls whether the freeze frame quality check is enabled
The sensitivity of the freeze frame detector, from 1-100. Larger numbers (those closer to 100) correspond to a more sensitive detector, meaning more events, and potentially more false positives, will be detected. Smaller numbers (those closer to 1) correspond to a less sensitive detector, meaning fewer events will be detected. Lowering the sensitivity will usually result in fewer false positives at the cost of potentially increasing false negatives (true freeze frame events will be reported as unimpaired video). The default value is 50.
The number of consecutive seconds after which a freeze frame event will be reported as a quality check failure. The default value is 10s.
The number of seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of asset in order to eliminate unwanted quality check failures. The default value is 0s.
The number of seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of asset in order to eliminate unwanted quality check failures. The default value is 0s.
{
"include": true,
"duration": 2.5,
"sensitivity": 75
}
Whether or not to include this segment type in the content layout timeline
The minimum duration in seconds for freeze frame segments to be included in the content layout timeline. Freeze frame segments shorter than the specified duration will be treated as motion video.
The sensitivity of the freeze frame detector, from 1-100. Larger numbers (those closer to 100) correspond to a more sensitive detector, meaning more events, and potentially more false positives, will be detected. Smaller numbers (those closer to 1) correspond to a less sensitive detector, meaning fewer events will be detected. Lowering the sensitivity will usually result in fewer false positives at the cost of potentially increasing false negatives (true freeze frame events will be reported as unimpaired video). The default value is 50.
The request body for any full-reference request to create a frame and/or map. To create a quality map, you must use a full-reference request.
{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}
Captures the schema type for use in oneOf semantics.
The subject asset in the context of the full-reference request.
The frame in the subject asset at which to start capturing.
{
"type": "PTS",
"value": 1400
}
The reference video asset to be used when creating a full-reference request. Remember that a quality map image for a given asset requires access to the original reference asset in order to calculate and show the spatial distribution of impairments between the two frames.
The reference frame at which to start capturing. This value is only needed when the corresponding frame values differ between reference and subject assets (i.e. there is temporal misalignment).
{
"type": "PTS",
"value": 1200
}
The number of additional frames after startFrame for which frame captures (or maps) will be automatically generated and cached. Decoding video, extracting frames and building maps can be expensive operations. Use this value to capture and cache a number of frames following startFrame to support faster subsequent look-ahead request-response exchanges (i.e. useful in scroll-forward functionality).
Only used if the video asset is HTTP Live Streaming (HLS). This type is used to specify which variant video stream is to be used. If not included, all variant streams are used.
{
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
Captures the schema type for use in oneOf semantics.
The bandwidth of the variant stream to be used as the subject asset (The value of the BANDWIDTH key of the corresponding EXT-X-STREAM-INF tag). If multiple variant streams with the same bandwidth exist, the first is used.
If multiple variant streams with the same bandwidth are found in the master playlist, those after the first are treated as fallback streams for that variant. The second stream with the same bandwidth has fallback index 0.
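For example, if a master playlist declares three variant streams that all specify BANDWIDTH=4997885, the first is the primary stream for that bandwidth and the remaining two are its fallbacks; a fallbackStreamIndex of 1 would therefore select the third stream (the second fallback).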
Represents an IAM access key, which is composed of two parts:
- an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and
- a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)
{
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"accessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
The key identifier
The secret access key value
Settings for using image sequences as an asset:
.png
.tif
.tiff
.dpx
.jpg
.jpeg
.j2c
.jp2
.jpc
.j2k
.exr
This property is required when the asset name contains a format string (%d or %0[width]d).
{
"fps": 24
}
Frames per second.
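As a hedged sketch of how these settings come together, an image-sequence asset whose name contains a frame-number format string might be submitted as follows (the file name, path and storage location are illustrative, and the imageSequenceParameters property name is assumed from its mention elsewhere in this document):
{
  "name": "Big_Buck_Bunny_%05d.png",
  "path": "/mnt/nas/image-sequences",
  "storageLocation": {
    "type": "PVC",
    "name": "videos"
  },
  "imageSequenceParameters": {
    "fps": 24
  }
}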
Configuration options for ladder refinement. When enabled, IMAX Stream™ will intelligently optimize the output ABR ladder of the optimization job. It does this by pruning redundant renditions from the ladder. That is, if two renditions are very close in size, one will be removed to reduce the total size of the ladder. By default, this feature is disabled and the ladder will contain all optimized outputs produced by the job.
{
"enabled": true
}
Enable ladder refinement for the optimization
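As a hypothetical illustration: if an optimization produces renditions of roughly 8.0, 6.0, 5.9 and 4.0 Mbps, enabling ladder refinement could prune the 5.9 Mbps rendition, since it is nearly identical in size to its 6.0 Mbps neighbour, leaving a leaner 8.0/6.0/4.0 ladder.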
Used to define a quality check event based on metadata validity and correctness.
{
"type": "DOLBY_VISION"
}
{
"type": "MAXCLL_AND_MAXFALL",
"tolerance": 100,
"metadataSources": [
"CONTAINER"
]
}
The type of metadata to validate
The tolerance (+/-) between the measured and metadata values before a quality check is raised
tolerance can only be specified for the following type values: MAXCLL_AND_MAXFALL. For MAXCLL_AND_MAXFALL quality checks the unit is nits, and the default is 100.
Perform the metadata check against only the selected metadata source. Currently only used by MAXCLL_AND_MAXFALL. By default all metadata sources are checked (if present).
[
"PLAYLIST"
]
Validate the container metadata against the measured values. Raise a quality check on mismatch, or if no MaxFALL/MaxCLL container metadata was detected.
Validate the Dolby Vision metadata against the measured values. Raise a quality check on mismatch, or if no MaxFALL/MaxCLL Dolby Vision metadata was detected. If the video does not have any Dolby Vision metadata (sidecar or embedded), then this check is ignored.
Validate the IMF CPL metadata against the measured values. Raise a quality check on mismatch, or if no MaxFALL/MaxCLL CPL metadata was detected. If the video is not an IMF video (submitted via the CPL XML), then this check is ignored.
The type of metadata to register a quality check definition for
Validate Dolby Metadata based on https://professionalsupport.dolby.com/s/article/Dolby-Vision-Quality-Control-Metadata-Master-Mezzanine
Validate that metadata from the container/CPL is consistent and matches the measured light level values
QuickTime container compliance checks.
{
"enabled": true,
"duration": false,
"durationThreshold": 0.5,
"audioDescriptors": false,
"videoDescriptors": false,
"timecodeDescriptors": false
}
Enable/disable mp4 compliance checks.
Enable/Disable duration check.
Maximum absolute allowable difference in fractional seconds between the duration calculated from the track properties and the duration calculated from sample timestamps within the track.
Enable/Disable audio descriptor validation checks.
Enable/Disable video descriptor validation checks.
Enable/Disable timecode descriptor validation checks.
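As a worked example of the durationThreshold above: with a value of 0.5, a track whose properties report a duration of 120.0 seconds while its sample timestamps sum to 120.7 seconds differs by 0.7 s, which exceeds the threshold and fails the duration check; a sample-derived duration of 120.3 seconds (a 0.3 s difference) would pass.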
MXF container compliance checks.
{
"enabled": true
}
Enable/disable MXF compliance check.
Represents the request body used in an analyses POST request to submit a new analysis for processing using the specified assets.
A given analysis can either be full-reference or no-reference. A full-reference analysis requires specifying both reference and subject assets, whereas a no-reference analysis requires only a subject asset. For maximum efficiency, NewAnalysis has been designed to accept multiple reference and subject assets, with each subject asset being compared individually against all reference assets in separate analyses. This flexibility allows you to create a single request to execute anything from an ad-hoc no-reference analysis to multiple encoding ladder comparisons.
The following examples are representations in table format of how the system handles multiple reference and subject assets for common analysis scenarios:
| Reference Asset(s) | Subject Asset(s) |
|---|---|
| GOT_S2_EP1.mov | GOT_S2_EP1_libx264_1920x1080_50-0.mov |
| | GOT_S2_EP1_libx264_1280x720_50-0.mov |
Results in 2 full-reference analyses:
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1280x720_50-0.mov
| Reference Asset(s) | Subject Asset(s) |
|---|---|
| | GOT_S2_EP1_libx264_1920x1080_50-0.mov |
| | GOT_S2_EP1_libx264_1280x720_50-0.mov |
| | GOT_S2_EP1_libx264_960x540_50-0.mov |
Results in 3 no-reference analyses:
- GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1_libx264_1280x720_50-0.mov
- GOT_S2_EP1_libx264_960x540_50-0.mov
| Reference Asset(s) | Subject Asset(s) |
|---|---|
| GOT_S2_EP1.mov | GOT_S2_EP1_libx264_1920x1080_50-0.mov |
| GOT_S2_EP1.mp4 | GOT_S2_EP1_libx264_1280x720_50-0.mov |
Results in 4 full-reference analyses:
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1280x720_50-0.mov
- GOT_S2_EP1.mp4 —> GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1.mp4 —> GOT_S2_EP1_libx264_1280x720_50-0.mov
For more details on how to structure the requests and responses for the examples above, please consult the POST endpoint on the analyses resource.
Since both no-reference and full-reference analyses require a subject asset, subjectAssets is a required attribute. For full-reference analyses, referenceAssets is also required.
{
"content": {
"title": "Big Buck Bunny"
},
"referenceAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
],
"subjectAssets": [
{
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
],
"analyzerConfig": {
"enableComplexityAnalysis": true,
"enableBandingDetection": true,
"qualityCheckConfig": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"freezeFrame": {
"enabled": true
},
"blackFrame": {
"enabled": true
}
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
],
"framesToProcess": 240,
"temporalAlignment": {
"minSeconds": 5,
"maxSeconds": 90,
"maxSecondsHighFPS": 30
}
}
}
Metadata about the content being analyzed in this analysis. If included, content metadata will automatically be propagated to all assets in this analysis.
{
"title": "Big Buck Bunny"
}
A description of the analysis which can be used for reference, categorization and search/filtering. This field may be deprecated in a future release of the API. As such, you are encouraged to use the content field in place of this field whenever possible, as it plays a more prominent/visible role in Insights reporting.
The reference asset against which you will compare a subject asset. This attribute is ONLY used for full-reference (FR) analyses.
[
{
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/sources",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
]
The subject asset(s) are the assets that will be compared against the reference asset (for a full-reference analysis) or analyzed in isolation (for a no-reference analysis).
[
{
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
},
{
"name": "Big_Buck_Bunny_1080p@2000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
},
{
"name": "Big_Buck_Bunny_7200p@1000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
]
Configuration options for use by the analyzer at the analysis level. Configuration options for assets can be specified on the Asset object.
{
"enableComplexityAnalysis": false,
"enableBandingDetection": false,
"qualityCheckConfig": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"freezeFrame": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
],
"framesToProcess": 240,
"temporalAlignment": {
"minSeconds": 5,
"maxSeconds": 90,
"maxSecondsHighFPS": 30
},
"additionalConfigurationOptions": {
"bandingDetectionThreshold": 40
}
}
Represents an encoding optimization job. Use this type when creating the body of a POST request sent to the /optimizations endpoint and processing the response.
Note that IMAX Stream™ currently only supports optimizations for assets stored in S3.
{
"content": {
"title": "Big Buck Bunny"
},
"input": {
"assetUri": "s3://videos-bucket/examples/Big_Buck_Bunny.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -pix_fmt yuv420p -color_primaries bt709 -color_trc bt709 -colorspace bt709 -color_range mpeg -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=50:keyint_min=50:scenecut=0:stitchable=1\" -profile:v high -level:v 4.1 -b:v 5000k -maxrate 6250k -bufsize 10000k -r 24 -vf scale=1920x1080 -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos-bucket/examples/output/encoded_video.mp4"
}
}
]
}
}
Metadata about the content contained in this asset.
{
"title": "Big Buck Bunny"
}
The input video for which to provide the optimized encoding.
If the codec configuration that you choose (i.e. see encoderConfig below) supports templating, the assetUri for this input will be made available as the variable {INPUT_LOCATION} for use in the encoder configuration.
Note that IMAX Stream™ currently only supports optimizations for assets stored in S3.
Note that in an optimization context, the system will ignore any properties on the input that do not apply (e.g. regionOfInterest, qualityCheckConfig, imageSequenceParameters and audio).
{
"assetUri": "s3://videos-bucket/examples/Big_Buck_Bunny.mp4"
}
The location to write the job’s output files. Currently, this location must be a path in S3.
This field is only supported for Elemental MediaConvert jobs. When given, we set the MediaConvert job’s Destination field (in OutputGroup/OutputGroupSettings) to this value.
{
"assetUri": "s3://videos-bucket/test/outputs/"
}
The encoder and the configuration used when producing the encoded video(s). Depending on your choice of encoder, you may have many configuration options available. The goal here is to supply IMAX Stream™ with the same configuration that you would use to produce your encoded version, which will serve as the baseline for the optimization process.
Configuration options for ladder refinement. Enable ladder refinement to have IMAX Stream™ intelligently optimize the output ABR ladder of this optimization job. This feature is disabled by default.
{
"enabled": true
}
By default, the Optimization will overwrite existing files when it writes to the output location. To prevent overwrites, set this flag to false. When false, the Optimization will fail if it tries to overwrite any existing file.
Additional (undocumented) configuration options for use with the optimization algorithms.
Please consult your IMAX representative for more details on the applicability of this object for your use case(s).
{
"key1": "value1",
"key2": "value2"
}
The request body used when updating an optimization.
Currently, the system supports only the following update operations:
- Cancelling an existing optimization
Note: Only optimizations that are currently in progress (i.e. scheduled, estimating, aligning, analyzing) can be cancelled.
{
"status": "CANCELLED"
}
Cancels a running optimization
Represents the response from a successful POST to the /optimizations endpoint. Use the id on the optimization to fetch the results and information about the rendition’s quality and/or bitrate savings from Insights.
{
"id": "04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e",
"content": {
"title": "Big Buck Bunny"
},
"input": {
"assetUri": "s3://videos/examples/Big_Buck_Bunny.mp4"
},
"encoderConfig": {
"type": "FFmpegConfig",
"encodes": [
{
"command": [
"ffmpeg -r 24 -i {INPUT_LOCATION} -pix_fmt yuv420p -color_primaries bt709 -color_trc bt709 -colorspace bt709 -color_range mpeg -c:v libx264 -x264-params \"ref=3:bframes=3:b_adapt=2:keyint=50:keyint_min=50:scenecut=0:stitchable=1\" -profile:v high -level:v 4.1 -b:v 5000k -maxrate 6250k -bufsize 10000k -r 24 -vf scale=1920x1080 -an {OUTPUT_LOCATION}"
],
"outputLocation": {
"assetUri": "s3://videos/examples/output/encoded_video.mp4"
}
}
]
},
"submissionTimestamp": "2018-01-01T14:20:22Z"
}
The UUID that represents the analysis that was done as part of the optimization process.
The UTC timestamp (using ISO-8601 representation) recording when the analysis was successfully submitted for processing. Analyses that fail to submit correctly will not have a value for this attribute.
The location into which the system will save the optimized asset(s). The Optimization job will fail if there is already a file at an output location, to prevent overwrites.
There are two supported formats. Either specify assetUri, or both name and storageLocation. See examples.
{
"assetUri": "s3://reference-assets/example/output/path/encoded_video.mp4"
}
{
"name": "example/output/path/encoded_video.mp4",
"storageLocation": {
"type": "S3",
"name": "reference-assets"
}
}
A URI describing the location of the video asset, of the form storageLocationType://storageLocationName/path/name.
Either this field or both of name and storageLocation must be provided.
Any special characters, like space or hash, must be percent-encoded. For example, an S3 object with key my video#001.mp4 should be given as s3://my-bucket/mypath/my%20video%23001.mp4.
To uniquely specify an asset location, either the assetUri field or both of name and storageLocation must be provided.
The full path and/or key for the video asset.
To uniquely specify an asset location, either the assetUri field or both of name and storageLocation must be provided.
The storage location for the video asset.
{
"type": "S3",
"name": "test-bucket"
}
Configure Photosensitive Epilepsy Harding Tests
- Red Flash Detection
- Luminance Flash Detection
- Spatial Pattern Detection
- Extended Failures
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"extendedFailure": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"luminanceFlash": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"redFlash": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"spatialPattern": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"standard": "ITU_R_BT_1702_2"
}
Enable all PSE Harding Tests. Can be overridden for individual tests.
The number of consecutive seconds after which the associated condition is considered to have failed its respective check. The default value is 0s (fail as soon as first detection is raised).
The number of seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of asset in order to eliminate unwanted quality check failures. The default value is 0s.
The number of seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of asset in order to eliminate unwanted quality check failures. The default value is 0s.
Configuration options for extended failure detection.
Configuration options for luminance flash detection.
Configuration options for red flash detection.
Configuration options for spatial pattern detection.
The standard to use for the Flash and Pattern Analyzer (FPA)
Ofcom
NAB 2006
ITU-R BT.1702-1
ITU-R BT.1702-2
Japan HDR
An identifier which can be used to uniquely identify a single frame within a video asset. The presentation timestamp metadata field used to achieve synchronization of an asset’s separate elementary streams when presented to the viewer.
{
"type": "PTS",
"value": 18542
}
Captures the schema type for use in oneOf semantics.
Captures the PTS value.
Configuration options for supported video quality checks.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
{
"enabled": false,
"freezeFrame": {
"enabled": true,
"duration": 5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"blackFrame": {
"enabled": true,
"duration": 5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
Enable detection of all video quality check events. Can be overridden for individual detections.
The number of consecutive seconds after which all included and enabled video and audio quality checks are considered to have failed their respective checks. Can be overridden for individual detections.
The number of seconds to ignore at the start of the asset. Applies to all included and enabled video and audio quality checks. Can be overridden for individual detections.
The number of seconds to ignore at the end of the asset. Applies to all included and enabled video and audio quality checks. Can be overridden for individual detections.
Configuration options for freeze frame detection.
Configuration options for black frame detection.
Configuration options for solid color frame detection.
Configuration options for color bars detection.
Configuration options for missing captions detection.
Deprecated. Configuration options for audio silence detection.
Important Note: As of version 2.21.0, this schema has been deprecated and it is no longer recommended to configure an analysis level audio silence quality check. Please refer to the asset level audio check configuration Audio
to perform silence quality checks in this version and future releases.
Configuration options for bitstream FPS and scan type mismatch detection
Enable a quality check for detection of multiple cadence patterns within an asset
Enable a quality check for detection of frames with a broken cadence
Enable a quality check for allowed cadences. Provide a list of cadences that are allowed to be present in the video.
[
"2:3",
"2:2"
]
Configuration options for content similarity detection.
{
"enabled": true,
"sensitivity": 75
}
Captures the supported parameter values for any video or audio quality check.
{
"enabled": "true",
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
Controls whether the associated video or audio quality check is enabled.
The number of consecutive seconds after which the associated condition is considered to have failed its respective check. For all video quality checks (i.e. black frames, solid color frames, freeze frames and color bar frames) the default value is 10s. For closed captions quality checks (i.e. missing captions) the default value is 60s.
The number of seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of asset in order to eliminate unwanted quality check failures. The default value is 0s.
The number of seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of asset in order to eliminate unwanted quality check failures. The default value is 0s.
An object that can be used to capture various configuration options that apply to the optimization algorithms.
Please consult your IMAX representative for more details on the applicability of this object for your use case(s).
{
"x1": 30,
"x2": 50,
"y1": 0,
"y2": 2.5
}
Settings needed to decode raw video with the following extensions: .yuv, .rgb, .bgr, .v210, or .raw.
{
"resolution": {
"width": 720,
"height": 576
},
"fps": 25,
"scanType": "P",
"fieldOrder": "TFF",
"pixelFormat": "YUV420P"
}
Resolution of the asset specified as width and height in pixels
{
"width": 1920,
"height": 1080
}
Frames per second
Scan Type
Interlaced
Progressive
Field Order
Top Field First
Bottom Field First
The pixel format
Specification for the region of interest
{
"originX": 20,
"originY": 0,
"regionHeight": 300,
"regionWidth": 400
}
x coordinate for region of interest origin
y coordinate for region of interest origin
height in pixels of the region of interest
width in pixels of the region of interest
A width and a height in pixels that specify the resolution of an asset
{
"width": 1920,
"height": 1080
}
Width of the video in pixels
Height of the video in pixels
Credentials for accessing an AWS Amazon S3 bucket
{
"bucketName": "mybucket",
"accessKey": {
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"accessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
}
AWS Amazon S3 bucket name.
The AWS IAM access key that grants read permissions to the associated Amazon S3 bucket.
{
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"accessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
Used to define a quality check event based on scores over a window of time. Quality check failure events are generated when scores of the specified type exceed threshold for at least the configured duration (durationSeconds or durationFrames).
A viewingEnvironments array in the AnalyzerConfig object must be specified in order to use score-based quality checks on SVS, EPS and SBS metrics.
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
The type of score to check for. Several restrictions apply regarding where each can be used:
- SVS, SBS and LUMINANCE can be applied to both Source (Reference) and Output (Test/Subject) assets, whereas EPS and CVD are only applicable to Output (Test/Subject) assets
- EPS and CVD can only be used in a full-reference analysis
The threshold that scores must exceed for the configured duration for an event to trigger. Scores lower than the threshold for SVS, EPS, MIN_FRAME_LUMINANCE and MIN_PIXEL_LUMINANCE, and higher than the threshold for SBS, CVD, MAX_FRAME_LUMINANCE and MAX_PIXEL_LUMINANCE, will cause an event to trigger.
For SVS, EPS, SBS, and CVD the maximum threshold is 100.
For MAX_FRAME_LUMINANCE and MAX_PIXEL_LUMINANCE score checks, the maximum threshold is 10000.
Specifies the (0-based) index of the viewing environment to use for this quality check. Required for SVS, EPS, and SBS score checks.
The minimum continuous duration in seconds required for the target score to exceed threshold for an event to trigger. Either durationSeconds or durationFrames must be specified, but both cannot be specified simultaneously.
The minimum continuous duration in frames required for the target score to exceed threshold for an event to trigger. Either durationSeconds or durationFrames must be specified, but both cannot be specified simultaneously.
The duration in seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of asset in order to eliminate unwanted quality check failures. The default value is 0s.
The duration in seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of asset in order to eliminate unwanted quality check failures. The default value is 0s.
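As an illustrative sketch combining the fields above: MAX_FRAME_LUMINANCE triggers on scores higher than the threshold, and viewingEnvironmentIndex is described as required only for SVS, EPS and SBS, so a check that fails when frame luminance stays above 1000 nits for 48 consecutive frames might look like the following (the values are illustrative):
{
  "metric": "MAX_FRAME_LUMINANCE",
  "threshold": 1000,
  "durationFrames": 48,
  "skipStart": 1.25,
  "skipEnd": 1.25
}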
The type of score to register a quality check definition for. Several restrictions apply regarding where each can be used:
- SVS, SBS and LUMINANCE can be applied to both Source (Reference) and Output (Test/Subject) assets, whereas EPS and CVD are only applicable to Output (Test/Subject) assets
- EPS and CVD can only be used in a full-reference analysis
IMAX VisionScience Viewer Score
IMAX VisionScience Encoder Performance Score
IMAX VisionScience Banding Score
Color Volume Difference Score
Minimum Pixel Luminance Score
Maximum Pixel Luminance Score
Minimum Frame Luminance Score
Maximum Frame Luminance Score
Parameters for defining what constitutes an active segment for the purposes of constructing an active segment timeline. Allows specifying which segment types should be considered always inactive and under which audio conditions segment types should be considered active.
If set to false, this content type will always be considered inactive
The “least active” audio content type required for a segment of this content type:
- SILENCE: this content type will be considered active regardless of audio
- ANY_CHANNEL_ACTIVE: this content type will be considered active if at least one audio channel is active
- ALL_CHANNELS_ACTIVE: this content type will only be considered active if all audio channels are active
Configuration options for specifying which content types and under which conditions should be reported in the content layout timeline. By default, all content type segments will be included with a default minimum duration of 10 seconds.
{
"blackFrameSegments": {
"include": true,
"duration": 0.5
},
"solidColorFrameSegments": {
"include": false
},
"colorBarFrameSegments": {
"include": true,
"duration": 1
},
"freezeFrameSegments": {
"include": true,
"duration": 0.25
},
"silenceDetection": {
"threshold": -80,
"duration": 5
}
}
Configuration options for black frame segments.
{
"include": true,
"duration": 0.5
}
Configuration options for solid color frame segments.
{
"include": false
}
Configuration options for color bar frame segments.
{
"include": true,
"duration": 1
}
Configuration options for freeze frame segments.
Configuration options for silence detection in audio segments.
{
"threshold": -80,
"duration": 5
}
A text file that accompanies a video asset and is used to provide metadata or supplemental data on the asset.
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml",
"path": "/mnt/videos"
}
The type of the sidecar.
The filename that represents the sidecar.
The path to the sidecar file. If not supplied, the sidecar file must use the same path as its associated Asset.
The type of sidecar file that accompanies the video asset.
Dolby Vision metadata in XML format
Audio file in WAV, BWF, MXF or MP4 container format
Contains an ordered array of objects that define a soundfield group for measurement and quality checks. Each object specifies a source asset, track and channel index, as well as the corresponding output channel location. This collection is used to form an audio soundfield group for measurements.
Note that all input tracks in a group must have the same sample rate, sample format, bit depth, bitstream mode, timebase and duration, or the analysis will fail.
{
"type": "SoundfieldChannelMapping",
"mapping": [
{
"name": "myleftsoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"inputChannelIndex": 1,
"outputChannelLocation": "FL"
},
{
"name": "myrightsoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"inputChannelIndex": 2,
"outputChannelLocation": "FR"
}
]
}
Captures the schema type for use in oneOf semantics.
The filename that represents the asset containing the audio track that will be used for a specific channel.
The path to the primary or sidecar asset’s file location.
The source track within the asset to be used for the soundfield channel.
The channel index within the source track to be used for the soundfield channel.
The output location that the source channel will be mapped to.
A mapping of a single physical audio track, either within an embedded asset or as separate sidecar files, that will be used to represent an audio soundfield group. The described channel layout for the track can be overridden with a user-defined channel layout that will use a 1-to-1 mapping of input channels to output channels.
{
"type": "SoundfieldTrackMapping",
"name": "mysoundfile.mxf",
"path": "/path/to/file",
"inputTrackIndex": 1,
"outputChannelLayout": [
"FL", "FR", "FC", "LFE", "SL", "SR"
]
}
Captures the soundfield mapping type for use in oneOf semantics.
The filename that represents the asset containing the audio track that will be used for a specific channel.
The path to the asset’s file (and possibly sidecar) location with the associated storage.
The source track of the asset to be used for the soundfield channel.
A channel layout that will override the channel layout described within the metadata of the track. The order in which the channels appear in the array will be mapped to the channels in the track. The number of entries in the array must equal the number of channels in the track.
Captures the storage location used to house one or more assets. Every asset has a storage location.
{
"type": "S3",
"name": "test-bucket",
"credentials": {
"useAssumedIAMRole": true
}
}
{
"type": "PVC",
"name": "videos"
}
{
"type": "S3",
"name": "videos",
"credentials": {
"accessKey": {
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"accessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
}
}
An enumeration to capture the supported storage types.
A name required to access the root of the storage location. The root of the storage location will be used with the path and name of the asset to uniquely identify the location of the asset. For Amazon S3, this value would likely be the S3 bucket name and would need to match the pattern (?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$. For a persistent volume backed by NFS, this would likely be the volume mount name. For HTTP, this must be the server hostname.
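To illustrate the pattern: a name such as test-bucket matches (lowercase letters, digits and hyphens, starting and ending with a letter or digit), whereas My_Bucket would be rejected for its uppercase letters and underscore, and names beginning with xn-- or ending in -s3alias are excluded by the negative lookahead.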
Authentication credentials for assets stored in Amazon S3.
{
"useAssumedIAMRole": "true"
}
An enumeration to capture the supported storage types.
Amazon S3
Any persistent volume claim that can be defined/supported in Kubernetes
An HTTP/HTTPS server (required for HLS)
Captures additional information about the video stream(s) within the asset. Specifically, this type can be:
- used to specify the packet identifier (see VideoPID and VideoPIDHex) for assets with multiple video streams or,
- in the case of HLS, used to represent the HLS variant (see HLSVariantIdentifier).
{
"type": "VideoPIDHex",
"identifier": "0x101"
}
{
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
{
"type": "VideoPID",
"identifier": 1
}
Represents a (micro)service needed to support some function of the overall system.
{
"serviceName": "AnalysesService",
"serviceId": "f533db73-0f9a-4805-9651-c5dcd519dc37",
"deploymentId": "d8e89059-c7dd-454e-92ab-f61e4107d33b",
"status": "READY"
}
The name of the service
The UUID associated with the service
The UUID associated with the service’s deployment within the system
The status of the service
Any message (i.e. error, detail) that helps to clarify the state of the service when the status is not READY
The response payload of the GET on readyz which contains the overall system readiness as well as the readiness of the individual (micro)services that comprise the system.
{
"checks": [
{
"deploymentId": "d8e89059-c7dd-454e-92ab-f61e4107d33b",
"serviceId": "f533db73-0f9a-4805-9651-c5dcd519dc37",
"serviceName": "AnalysesService",
"status": "READY"
},
{
"deploymentId" : "d2bcd3f6-79c6-43c7-9462-afa614d25176",
"serviceId" : "eb1e4722-461f-438b-95df-7bf3c6e30989",
"serviceName" : "AnalysisLifecycleService",
"status" : "READY"
}
],
"outcome": "READY"
}
An array of the individual readiness checks performed on the services that comprise the system.
The overall system readiness. All services that comprise the system must be READY or UNLICENSED in order for this value to be READY.
An enumeration to capture the supported system/service statuses.
Indicates that the system/service is operational
Indicates that the system/service is not operational
Indicates that the service is not operational due to a missing or invalid license.
This state applies only to an individual service and never the entire system.
Configuration options for temporal alignment
{
"minSeconds": 5,
"maxSeconds": 120,
"maxSecondsHighFPS": 120
}
The minimum duration of the misalignment between two videos in seconds
The maximum duration of the misalignment between two videos in seconds
The maximum duration of the misalignment in seconds between two videos for assets with a frame rate of 120 frames per second or higher
The system version information for the deployed API.
{
"commitBranch": "stream-ondemand/release/3.1.0",
"commitHash": "facc2ef0a3c8ebc10819dc1218748f8d2cbfafd9",
"commitTime": "2022-05-02T18:58:44Z",
"stamped": "true",
"versionString": "3.1.0-12"
}
The git branch where the release was committed.
The hash associated with the release’s git commit.
The UTC timestamp associated with the release’s git commit.
Indicates if the version was stamped.
The alphanumeric system version for the API.
The response payload of the GET on version which contains the system version information.
{
"version": {
"commitBranch": "stream-ondemand/release/3.1.0",
"commitHash": "facc2ef0a3c8ebc10819dc1218748f8d2cbfafd9",
"commitTime": "2022-05-02T18:58:44Z",
"stamped": "true",
"versionString": "3.1.0-12"
}
}
The system version information for the API.
The packet identifier (PID) used to identify the video stream within the asset that you are interested in working with. In the case of a Multiple Program Transport Stream (MPTS), use this value to specify the PID of the video stream to be processed.
{
"type": "VideoPID",
"identifier": 1
}
Captures the schema type for use in oneOf semantics.
Represents the desired video packet index.
The packet identifier (PID) used to identify the video stream within the asset that you are interested in working with. In the case of a Multiple Program Transport Stream (MPTS), use this value to specify the PID of the video stream to be processed.
{
"type": "VideoPIDHex",
"identifier": "0x101"
}
Captures the schema type for use in oneOf semantics.
Represents the desired video packet index as a hexadecimal value in the form 0x101
Whether or not to include this segment type in the content layout timeline
The minimum duration in seconds for this segment type to be included in the content layout timeline. Segments of this type shorter than the specified duration will be treated as motion video.
The viewer type for which scores will be calculated
Represents a typical, untrained viewer
Represents a trained viewer schooled at spotting and judging video anomalies.
Represents a studio viewer trained in assessing the impact of video anomalies on the creator’s artistic intent.
A specification of the environment under which the content is viewed
{
"device": {
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
},
"viewerType": "TYPICAL"
}
The display device
{
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
}
The viewer type
The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.
A generic parsing error occurred when accessing the /optimizations endpoint
{
"code": "SS-10000",
"description": "A generic parsing error occurred"
}
A generic parsing error occurred when accessing the /analyses endpoint
{
"code": "SA-10000",
"description": "The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.",
"details": {
"type": "BodyProcessorException",
"message": "[Bad Request] Validation error for body application/json: Input doesn't match one of allowed values of enum: [DOLBY_VISION_METADATA, AUDIO]",
"causeType": "ValidationExceptionImpl",
"causeMessage": "Input doesn't match one of allowed values of enum: [DOLBY_VISION_METADATA, AUDIO]",
"actualContentType": "application/json",
"errorType": "VALIDATION_ERROR",
"invalidInputScope": "/subjectAssets/0/sidecars/0/type",
"invalidInputKeyword": "enum",
"invalidInput": "XYZ"
}
}
The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated.
The server cannot find the requested resource.
The request HTTP method is known by the server but has been disabled and cannot be used for that resource.
Used when the request is asking for a content-type that is not supported (i.e. XML when you only support JSON). The IMAX Stream On-Demand Platform API currently only supports the JSON content-type (i.e. application/json).
The server encountered an unexpected condition which prevented it from fulfilling the request.
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay.