To submit one or more analyses for processing, send a POST request to /analyses.
An analysis can be either full-reference or no-reference.
Full-Reference
- Used when you want to validate the performance of your encoder or compare one encoder or configuration against another
- Can compare any number of outputs, from a single asset to a full encoding ladder and/or HLS playlist
- Compares each subject asset against a pristine reference to provide a scored, pixel-by-pixel comparison
A full-reference analysis requires specifying both reference and subject assets and, during its operation, the SSIMPLUS® Analyzer will first make a no-reference pass on the reference asset and then will compare each subject asset against the reference. As such, this endpoint will return an Analysis object for the no-reference analysis of the reference in addition to one for each comparison with a subject asset. Let’s consider the following example as an illustration:
Reference Asset(s): GOT_S2_EP1.mov
Subject Asset(s): GOT_S2_EP1_libx264_1920x1080_50-0.mov, GOT_S2_EP1_libx264_1280x720_50-0.mov

Results in 1 no-reference analysis on the reference:
- GOT_S2_EP1.mov (testId: 1)

Results in 2 full-reference analyses:
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1920x1080_50-0.mov (testId: 1-1)
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1280x720_50-0.mov (testId: 1-2)
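The testId scheme can be summarized programmatically. A hypothetical helper (not part of the API) that predicts the testIds produced by a single-reference submission with N subject assets:

```python
def expected_test_ids(num_subjects: int) -> list[str]:
    """Return the testIds produced by one full-reference submission:
    the reference gets "1", each subject comparison gets "1-<n>"."""
    return ["1"] + [f"1-{n}" for n in range(1, num_subjects + 1)]

print(expected_test_ids(2))  # ['1', '1-1', '1-2']
```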
No-Reference
- Used when you want to validate the quality of a source or reference asset (i.e. source validation)
- Analyzes a single asset in isolation to provide a pixel-by-pixel evaluation capable of detecting and scoring the impact of numerous video anomalies
A no-reference analysis requires specifying one or more subject assets, each of which is analyzed in isolation. The following example illustrates a common no-reference analysis:
Subject Asset(s): GOT_S2_EP1_libx264_1920x1080_50-0.mov, GOT_S2_EP1_libx264_1280x720_50-0.mov, GOT_S2_EP1_libx264_960x540_50-0.mov

Results in 3 no-reference analyses:
- GOT_S2_EP1_libx264_1920x1080_50-0.mov (testId: 1)
- GOT_S2_EP1_libx264_1280x720_50-0.mov (testId: 2)
- GOT_S2_EP1_libx264_960x540_50-0.mov (testId: 3)
For more details on how to configure common requests, please consult the endpoint examples in this section and the NewAnalysis object that forms the request.
It is important to recognize that the response returned from this endpoint indicates only that an analysis has been successfully submitted for processing. It makes no guarantees that the analysis will execute without error, nor does it indicate anything about the content or nature of the frame score results, if available. To discover these details, consult the Insights product documentation.
The SSIMPLUS® Analyzer supports full-reference analyses where the assets do not share the same frame rate, albeit with some restrictions. Please refer to the Cross-frame rate support section for details.
Request body
The NewAnalysis request body is used to submit any combination of reference and subject assets you wish, enabling everything from ad-hoc no-reference analyses to full-reference encoding ladder comparisons. Please consult the description above, the endpoint example and/or the NewAnalysis object for more details.
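As a rough illustration, a request body of this shape can be assembled as follows. The field names are taken from the examples in this section; the helper itself is hypothetical and not part of the API:

```python
import json

def build_new_analysis(title, subject_assets, reference_assets=None, analyzer_config=None):
    """Assemble a NewAnalysis-style request body (field names per this section)."""
    body = {
        "content": {"title": title},
        "subjectAssets": subject_assets,
    }
    if reference_assets:
        body["referenceAssets"] = reference_assets  # present only for full-reference analyses
    if analyzer_config:
        body["analyzerConfig"] = analyzer_config
    return body

payload = build_new_analysis(
    "Simple NR Test",
    [{"name": "Big_Buck_Bunny.mp4",
      "path": "royalty_free/big_buck_bunny/source",
      "storageLocation": {"type": "PVC", "name": "video-files-pvc"}}],
    analyzer_config={"enableBandingDetection": True},
)
print(json.dumps(payload, indent=2))
```

Omitting referenceAssets yields a no-reference submission; including it turns the same body into a full-reference one.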
Responses
Create (submit) a no-reference analysis for an asset with all licensed quality checks enabled
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Simple NR Test with Quality Checks - Big Buck Bunny"
},
"subjectAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
}
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"qualityCheckConfig": {
"enabled": true,
"duration": 5,
"skipStart": 10.25,
"skipEnd": 10.25,
"freezeFrame": {
"enabled": true,
"duration": 10
}
}
}
}'
HTTP/1.1 201 Created
Content-Type: application/json
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"qualityCheckConfig": {
"blackFrame": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"colorBarFrame": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"duration": 5,
"enabled": true,
"freezeFrame": {
"duration": 10,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"missingCaptions": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
},
"silence": {
"commonParameters": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
}
},
"skipEnd": 10.25,
"skipStart": 10.25,
"solidColorFrame": {
"duration": 5,
"enabled": true,
"skipEnd": 10.25,
"skipStart": 10.25
}
},
"viewingEnvironments": []
},
"id": "286703bc-ad9a-4f05-87e4-ffe0cce188dc",
"subjectAsset": {
"content": {
"title": "Simple NR Test with Quality Checks - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-25T22:04:34.601Z",
"testId": "1"
}
]
}
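Note how the response expands the single top-level qualityCheckConfig into per-check configurations: the shared duration, skipStart, and skipEnd values are copied into each check unless the request overrides them, as the freezeFrame duration of 10 does above. A hypothetical Python sketch of that expansion (the helper is illustrative only):

```python
def expand_quality_checks(config, check_names):
    """Apply top-level enabled/duration/skip values as defaults for each
    individual check, mirroring the expansion visible in the response."""
    defaults = {k: config[k]
                for k in ("enabled", "duration", "skipStart", "skipEnd")
                if k in config}
    expanded = dict(config)
    for name in check_names:
        override = config.get(name, {})
        expanded[name] = {**defaults, **override}  # per-check values win
    return expanded

cfg = {"enabled": True, "duration": 5, "skipStart": 10.25, "skipEnd": 10.25,
       "freezeFrame": {"enabled": True, "duration": 10}}
out = expand_quality_checks(cfg, ["blackFrame", "freezeFrame"])
print(out["blackFrame"]["duration"], out["freezeFrame"]["duration"])  # 5 10
```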
Create (submit) a full-reference analysis with manual temporal alignment using the Asset startFrame property
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"referenceAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"startFrame": 1
}
],
"subjectAssets": [
{
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"startFrame": 1
},
{
"name": "Big_Buck_Bunny_h264_qp_31.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"startFrame": 1
}
],
"analyzerConfig": {
"enableBandingDetection": true,
"enableTemporalAlignment": false
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableTemporalAlignment": false,
"viewingEnvironments": []
},
"id": "9f7c088b-97c2-4576-9cec-da46a3f6a704",
"subjectAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-03-25T22:13:42.096Z",
"testId": "1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"enableTemporalAlignment": false,
"viewingEnvironments": []
},
"id": "9f7c088b-97c2-4576-9cec-da46a3f6a704",
"referenceAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"subjectAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-03-25T22:13:42.096Z",
"testId": "1-1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"enableTemporalAlignment": false,
"viewingEnvironments": []
},
"id": "9f7c088b-97c2-4576-9cec-da46a3f6a704",
"referenceAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"subjectAsset": {
"content": {
"title": "Simple FR analysis, no TA - Big Buck Bunny"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_31.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"startFrame": 1,
"storageLocation": {
"name": "video-files",
"type": "S3",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-03-25T22:13:42.096Z",
"testId": "1-2"
}
]
}
Create (submit) a no-reference analysis for a single raw (.yuv) asset
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "HoneyBee - Raw/YUV"
},
"subjectAssets": [
{
"name": "HoneyBee_3840x2160_120fps_420_10bit_YUV.yuv",
"path": "royalty_free/yuv",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc-2"
},
"rawVideoParameters": {
"resolution": {
"width": 3840,
"height": 2160
},
"fps": 24,
"scanType": "P",
"pixelFormat": "YUV420P"
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": []
},
"id": "7575ea3b-8d6d-4768-9227-b57814fec75f",
"subjectAsset": {
"content": {
"title": "HoneyBee - Raw/YUV"
},
"hdr": false,
"name": "HoneyBee_3840x2160_120fps_420_10bit_YUV.yuv",
"path": "royalty_free/yuv",
"rawVideoParameters": {
"fieldOrder": "TFF",
"fps": 24,
"pixelFormat": "YUV420P",
"resolution": {
"height": 2160,
"width": 3840
},
"scanType": "P"
},
"storageLocation": {
"name": "video-files-pvc-2",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-26T14:41:47.169Z",
"testId": "1"
}
]
}
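Raw .yuv files carry no container metadata, which is why rawVideoParameters must spell out the resolution, frame rate, scan type, and pixel format. For planar 4:2:0 content the per-frame byte size is fixed, which is presumably how the file is segmented into frames; a sketch of that arithmetic (the helper is illustrative, not part of the API):

```python
def yuv420p_frame_bytes(width: int, height: int, bytes_per_sample: int = 1) -> int:
    """Bytes per frame for planar 4:2:0: one full-resolution luma plane plus
    two quarter-resolution chroma planes (1.5 samples per pixel)."""
    return width * height * 3 // 2 * bytes_per_sample

print(yuv420p_frame_bytes(3840, 2160))  # 12441600
```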
Create (submit) a full-reference analysis for an HLS asset
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"subjectAssets": [
{
"name": "Soccer.m3u8",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"path": "Soccer_MWC"
}
],
"referenceAssets": [
{
"name": "Soccer_1min.mp4",
"storageLocation": {
"type": "PVC",
"name": "video-files-pvc"
},
"path": "hlsReferences"
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
Content-Type: application/json
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 1713000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 280000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-2"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 19987000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-3"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 582000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-4"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 9561000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-5"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": []
},
"id": "4d8eea6b-c530-44aa-83e8-717e0b618113",
"referenceAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer_1min.mp4",
"path": "hlsReferences",
"storageLocation": {
"name": "video-files-pvc",
"type": "PVC"
}
},
"subjectAsset": {
"content": {
"title": "Soccer Video - HLS - All bandwidths"
},
"hdr": false,
"name": "Soccer.m3u8",
"path": "Soccer_MWC",
"storageLocation": {
"name": "http://172.31.64.201:8084",
"type": "HTTP"
},
"streamIdentifier": {
"bandwidth": 3042000,
"type": "HLSVariantIdentifier"
}
},
"submissionTimestamp": "2022-03-26T15:24:08.072Z",
"testId": "1-6"
}
]
}
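Because the subject is an HLS master playlist, the submission fans out into one full-reference analysis per variant stream, each identified by its BANDWIDTH attribute in the streamIdentifier. A small sketch of the testId-to-bandwidth mapping seen in the response above (the numbering-by-submission-order rule is an assumption drawn from this example):

```python
# Bandwidths as they appear in the response, in testId order 1-1 .. 1-6.
bandwidths = [1713000, 280000, 19987000, 582000, 9561000, 3042000]
test_ids = {f"1-{i}": bw for i, bw in enumerate(bandwidths, start=1)}
print(test_ids["1-3"])  # 19987000
```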
Create (submit) a no-reference analysis for an IMF asset
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Configure analysis with an IMF asset"
},
"subjectAssets": [
{
"name": "CPL_DTC-Master-SDR-ML5-R1-OV.xml",
"path": "/videos/imf/DTC-Master-SDR-ML5-R1-OV",
"storageLocation": {
"name": "videos",
"type": "PVC"
}
}
]
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"id": "756fb026-f4f4-47d8-ae8f-afd239643a55",
"subjectAsset": {
"content": {
"title": "Configure analysis with an IMF asset"
},
"hdr": false,
"name": "CPL_DTC-Master-SDR-ML5-R1-OV.xml",
"path": "/videos/imf/DTC-Master-SDR-ML5-R1-OV",
"storageLocation": {
"name": "videos",
"type": "PVC"
}
},
"submissionTimestamp": "2022-03-26T15:54:27.731Z",
"testId": "1"
}
]
}
Create (submit) a no-reference analysis for an Image Sequence asset
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "Image Sequence"
},
"subjectAssets": [
{
"name": "frameIndex%d-test.png",
"path": "compressed-videos/image-sequence/png",
"storageLocation": {
"type": "S3",
"name": "ssimwave-compressed-videos"
},
"imageSequenceParameters": {
"fps": 25
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": []
},
"id": "c096a553-ef1a-441c-ab09-bf56c28e7704",
"subjectAsset": {
"content": {
"title": "Image Sequence"
},
"hdr": false,
"imageSequenceParameters": {
"fps": 25
},
"name": "frameIndex%d-test.png",
"path": "compressed-videos/image-sequence/png",
"storageLocation": {
"name": "ssimwave-compressed-videos",
"type": "S3"
}
},
"submissionTimestamp": "2022-03-26T15:54:27.731Z",
"testId": "1"
}
]
}
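The image-sequence asset name uses a printf-style %d placeholder that is expanded to consecutive frame indices; whether numbering starts at 0 or 1 depends on the files on disk. A quick sketch of the expansion:

```python
# Expand the printf-style pattern for the first three frames (starting index
# chosen for illustration only).
pattern = "frameIndex%d-test.png"
frames = [pattern % i for i in range(3)]
print(frames)  # ['frameIndex0-test.png', 'frameIndex1-test.png', 'frameIndex2-test.png']
```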
Create (submit) a full-reference analysis with score-based quality checks
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"subjectAssets": [
{
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
},
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SVS",
"threshold": 60,
"durationSeconds": 2,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SBS",
"threshold": 75,
"durationFrames": 48,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
]
}
}
],
"referenceAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
}
],
"analyzerConfig": {
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
}
}'
HTTP/1.1 201 Created
{
"submittedAnalyses": [
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
},
"id": "944ade76-645a-4826-b500-3267fb1668f1",
"subjectAsset": {
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-06-30T15:48:40.648Z",
"testId": "1"
},
{
"analyzerConfig": {
"additionalConfigurationOptions": {},
"enableBandingDetection": true,
"enableComplexityAnalysis": false,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
},
"id": "944ade76-645a-4826-b500-3267fb1668f1",
"referenceAsset": {
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"hdr": false,
"name": "Big_Buck_Bunny.mp4",
"path": "royalty_free/big_buck_bunny/source",
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"subjectAsset": {
"content": {
"title": "FR Analysis With Score-Based Quality Checks"
},
"hdr": false,
"name": "Big_Buck_Bunny_h264_qp_21.ts",
"path": "royalty_free/big_buck_bunny/outputs",
"qualityCheckConfig": {
"scoreChecks": [
{
"durationSeconds": 5,
"metric": "SVS",
"reverseThresholdComparison": false,
"skipEnd": 1.25,
"skipStart": 1.25,
"threshold": 80,
"viewingEnvironmentIndex": 0
},
{
"durationSeconds": 2,
"metric": "SVS",
"reverseThresholdComparison": false,
"skipEnd": 1.25,
"skipStart": 1.25,
"threshold": 60,
"viewingEnvironmentIndex": 0
},
{
"durationFrames": 48,
"metric": "SBS",
"reverseThresholdComparison": false,
"skipEnd": 1.25,
"skipStart": 1.25,
"threshold": 75,
"viewingEnvironmentIndex": 0
}
]
},
"storageLocation": {
"type": "S3",
"name": "video-files",
"credentials": {
"useAssumedIAMRole": true
}
}
},
"submissionTimestamp": "2022-06-30T15:48:40.648Z",
"testId": "1-1"
}
]
}
Create (submit) a no-reference analysis for a Dolby Vision asset with a metadata sidecar
curl -X POST "http://localhost:8888/api/vod/v1/analyses" \
-H "Content-Type: application/json" \
-d '{
"subjectAssets": [
{
"content": {
"title": "Sparks - Dolby Vision"
},
"name": "20161103_1023_SPARKS_4K_P3_PQ_4000nits_DoVi.mxf",
"sidecars": [
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
],
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "PVC",
"name": "videos"
},
"hdr": true
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
Create (submit) a full-reference analysis with audio-based quality checks
curl --location --request POST 'https://research1.ssimwave.lan/api/vod/v1/analyses' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data-raw '{
"content": {
"title": "FR Analysis With Audio-Based Quality Checks"
},
"referenceAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_ref.mp4",
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"subjectAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_test.mp4",
"name": "Big_Buck_Bunny_1080p@4000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
}
}'
HTTP/1.1 201 Created
Content-Type: application/json
{
"submittedAnalyses": [
{
"id": "04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e",
"content": {
"title": "FR Analysis With Audio-Based Quality Checks"
},
"referenceAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_ref.mp4",
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"subjectAssets": [
{
"content": {
"title": "Big Buck Bunny"
},
"assetUri": "s3://my-bucket-name/test/Big_Buck_Bunny_1080p_test.mp4",
"name": "Big_Buck_Bunny_1080p@4000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"audio": {
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
}
],
"analyzerConfig": {
"enableBandingDetection": true
},
"submissionError": "",
"submissionTimestamp": "2018-01-01T14:20:22Z",
"testId": "1-1"
}
]
}
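The loudness checks above pair a type (e.g. MAX_TRUE_PEAK_LEVEL), a threshold, and an optional duration with skip windows. Purely as an illustration (the Analyzer's actual windowing and measurement units are defined by the loudnessMeasurements algorithm, not by this sketch), a check of this shape might be evaluated like:

```python
def check_max_true_peak(samples, threshold=-2.0, min_run=1):
    """Hypothetical evaluator: flag any run of at least `min_run` consecutive
    measurements that exceed the threshold."""
    run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0
        if run >= min_run:
            return True
    return False

print(check_max_true_peak([-5.0, -1.5, -1.2, -6.0], threshold=-2.0, min_run=2))  # True
```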
To update an existing analysis, send a PATCH request to /analyses/{id}, where id is the UUID of the analysis to update.
Please see the AnalysisPatchRequest schema to understand the options supported by the analysis update operation.
Path variables
The UUID of the analysis to be updated.
Request body
Responses
The analysis was successfully patched/updated
Cancelling an analysis
curl -X PATCH "http://localhost:8888/api/vod/v1/analyses/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e" \
-H "Content-Type: application/json" \
-d '{
"status": "CANCELLED"
}'
HTTP/1.1 200 OK
To delete an analysis, send a DELETE request to /analyses/{id}, where id is the UUID of the analysis to delete.
Only analyses that have been previously cancelled or completed can be deleted.
Path variables
The UUID of the analysis to be deleted.
Responses
The analysis deletion request was successfully processed
Deleting an analysis
curl -X DELETE "http://localhost:8888/api/vod/v1/analyses/04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e"
HTTP/1.1 200 OK
Creates a frame capture for a given asset.
Request body
Responses
A PNG image that represents the frame capture.
Body
curl -X POST "http://localhost:8888/api/vod/v1/frames" \
-H "Content-Type: application/json" \
-d '{
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
},
"startFrame": {
"type": "PTS",
"value": 400
},
"additionalFrames": 24
}'
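A FrameRequest addresses its starting frame either by presentation timestamp ("PTS") or by frame index ("FrameIndex"), as the examples in this section show. A hypothetical builder (the asset values below are placeholders):

```python
def frame_request(asset, value, by="FrameIndex", additional=0):
    """Build a FrameRequest-style body; `by` selects how startFrame is
    interpreted ("PTS" or "FrameIndex"), per the examples in this section."""
    return {"type": "FrameRequest",
            "asset": asset,
            "startFrame": {"type": by, "value": value},
            "additionalFrames": additional}

req = frame_request({"name": "clip.mp4", "path": "/videos",
                     "storageLocation": {"name": "/videos", "type": "S3"}},
                    400, by="PTS", additional=24)
print(req["startFrame"])  # {'type': 'PTS', 'value': 400}
```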
Creates the banding map for a frame of a given asset.
Banding Maps measure color banding presence at a pixel level as viewed by an “expert” on an OLED TV using a no-reference approach. The map is generated as part of one of several steps used in computing a SSIMPLUS Banding Score (SBS). The banding map is a binary map with white pixels showing banding presence, and does not reflect pixel-level variations in banding impairment visibility.
Request body
Responses
A PNG image that represents the frame’s banding map.
Body
curl -X POST "http://localhost:8888/api/vod/v1/bandingMaps" \
-H "Content-Type: application/json" \
-d '{
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.m3u8",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
},
"streamIdentifier": {
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
},
"startFrame": {
"type": "FrameIndex",
"value": 1200
},
"additionalFrames": 24
}'
Creates the quality map for a frame of a given asset.
Quality Maps are gray scale presentations of pixel-level perceptual quality that show the spatial distribution of impairments within a frame. Quality Maps illustrate where impairments occur at a pixel level. The maps provide the reason behind the quality score. Dark pixels show the impairments compared to the reference file. Areas that are not that important, such as the area around text, might have more white pixels. Generally, the darker the image, the lower the score.
Request body
Responses
A PNG image that represents the subjectAsset
frame’s quality map.
Body
curl -X POST "http://localhost:8888/api/vod/v1/qualityMaps" \
-H "Content-Type: application/json" \
-d '{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}'
Creates a color volume difference map for a frame of a given asset.
Color volume difference maps are gray scale maps that illustrate pixel-level color and skin tone deviation with respect to the reference file. Brighter pixels correspond to a higher deviation.
Request body
Examples
A JSON body payload for requesting a color difference map between a subject and reference asset.
{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}
Responses
A PNG image that represents the subjectAsset
frame’s color difference map.
Body
curl -X POST "http://localhost:8888/api/frameservices/v1/colorDifferenceMaps" \
-H "Content-Type: application/json" \
-d '{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}'
Creates (and caches) all the captures (frames and maps) possible for the supplied asset(s). Use this endpoint if you expect to retrieve more than one type of capture (i.e. frame, banding map, quality map) for a given frame and want the system to pre-fetch and cache these images to reduce wait times on subsequent requests to any of the frame capture endpoints.
Request body
Responses
A PNG image that represents the frame capture.
Body
Creates all captures for the supplied asset and reference and returns the frame (i.e. content) in the response.
curl -X POST "http://localhost:8888/api/vod/v1/captures" \
-H "Content-Type: application/json" \
-d '{
"frameRequest": {
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
},
"requestedCaptureType": "FRAME"
}'
Creates all captures for the supplied asset and returns the banding map in the response.
curl -X POST "http://localhost:8888/api/vod/v1/captures" \
-H "Content-Type: application/json" \
-d '{
"frameRequest": {
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"additionalFrames": 24
},
"requestedCaptureType": "BANDING_MAP"
}'
Retrieves the frame and map cache details.
Responses
curl -X GET "http://localhost:8888/api/vod/v1/cache"
HTTP/1.1 200 OK
Content-Type: application/json
{
"caches": [
{
"cacheId": "533db73-0f9a-4805-9651-c5dcd519dc37",
"numberOfFiles": 15182,
"sizeOfFiles": 1073741824,
"humanReadableSizeOfFiles": "1.0 G"
}
]
}
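In the response above, sizeOfFiles is a raw byte count, while humanReadableSizeOfFiles appears to be the same value rendered in base-1024 units ("1.0 G" for 1073741824 bytes). A minimal sketch of that rendering (a hypothetical helper, not part of the API):

```python
def human_readable(size_bytes: int) -> str:
    """Render a byte count in base-1024 units, e.g. 1073741824 -> '1.0 G'.

    The unit suffixes mirror the style of humanReadableSizeOfFiles above;
    this is an assumption about the formatting, not the API's actual code.
    """
    size = float(size_bytes)
    for unit in ("B", "K", "M", "G", "T", "P"):
        if size < 1024 or unit == "P":
            return f"{size:.1f} {unit}"
        size /= 1024


print(human_readable(1073741824))  # -> 1.0 G
```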
Clear the cache used to store the frames and maps (banding, quality, color volume difference) that have been previously requested (and cached).
Request parameters
The UTC date and time; any cached frame and/or map created before this timestamp will be deleted
Responses
Delete all frames/maps up to a given date-time
curl -X DELETE "http://localhost:8888/api/vod/v1/cache?beforeTimestamp=2022-09-15T17:32:28Z"
HTTP/1.1 200 OK
Delete all frames/maps
curl -X DELETE "http://localhost:8888/api/vod/v1/cache"
HTTP/1.1 200 OK
List system version information for the deployed API.
Responses
curl -X GET "http://localhost:8888/api/vod/v1/version"
HTTP/1.1 200 OK
Content-Type: application/json
{
"version": {
"commitBranch": "vod-production/release/2.14.2",
"commitHash": "facc2ef0a3c8ebc10819dc1218748f8d2cbfafd9",
"commitTime": "2022-05-02T18:58:44Z",
"stamped": "true",
"versionString": "2.14.2-12"
}
}
Fetches the system readiness/status by reporting on the individual readiness of all the services that comprise the system.
Responses
curl -X GET "http://localhost:8888/api/vod/v1/status"
HTTP/1.1 200 OK
Content-Type: application/json
{
"checks": [
{
"deploymentId": "19ef5378-b619-42d0-ae00-a9b9cd4fd9af",
"serviceId": "517142b9-f0cc-421a-aab9-66f8b4c8db85",
"serviceName": "AnalysesService",
"status": "READY"
},
{
"deploymentId": "d2bcd3f6-79c6-43c7-9462-afa614d25176",
"serviceId": "eb1e4722-461f-438b-95df-7bf3c6e30989",
"serviceName": "AnalysisLifecycleService",
"status": "READY"
},
{
"deploymentId": "80056dca-6263-44d6-a986-ad62868a4678",
"serviceId": "5ace17e0-7d9d-427b-8df0-76f18bf0ebab",
"serviceName": "AnalysisValidatorService",
"status": "READY"
},
{
"deploymentId": "bf6dcd98-b9c3-475d-b339-70bcba1c400a",
"serviceId": "64c23492-6913-4954-95f8-1145ffe52871",
"serviceName": "AnalyzerOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "b0f82c15-3c5a-4698-9bed-ae102e8f8995",
"serviceId": "51cb3f5a-f990-4f5c-8274-9bd70c3abd52",
"serviceName": "AnalyzerResourceEstimatorService",
"status": "READY"
},
{
"deploymentId": "552cce3b-cb53-4b9d-b21e-33030cae367d",
"serviceId": "1c3f1d11-e69e-4328-8375-aff27ef2741a",
"serviceName": "AssetBrowsingOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "10322a5b-7aef-4093-9f1c-181c4ad30d92",
"serviceId": "8f1af0f9-7498-484e-8b28-f6af1c643f35",
"serviceName": "AssetProbeService",
"status": "READY"
},
{
"deploymentId": "5353edc9-33b3-48c6-b1e7-eaa564c790a4",
"serviceId": "b7ba9521-7b56-4336-a325-bb721ef73841",
"serviceName": "BandingMapsService",
"status": "READY"
},
{
"deploymentId": "81083e23-bbbc-407d-a624-6f2e8109cca7",
"serviceId": "746cb3a0-b715-468c-912c-b8c06e3d3c96",
"serviceName": "CacheService",
"status": "READY"
},
{
"deploymentId": "2583e5bf-e7ee-404a-bb68-3dafa393dbd9",
"serviceId": "28a4dd44-2814-4e07-a940-12dec14d8bc3",
"serviceName": "CancelAnalysisEndpointHandler",
"status": "READY"
},
{
"deploymentId": "c85802f5-3757-452d-befa-ef404de5f06a",
"serviceId": "2a7b499a-d512-4a95-a0f5-7568e01254e7",
"serviceName": "CapturesService",
"status": "READY"
},
{
"deploymentId": "72cd56cf-6ef4-4c08-8955-44585754b00a",
"serviceId": "41084188-3647-4261-8cfd-837adb5136e1",
"serviceName": "ColorDifferenceMapsService",
"status": "READY"
},
{
"deploymentId": "f1c951e7-4213-4b8c-9bd7-6cae9d7b2535",
"serviceId": "87383d9c-9d66-4487-9caf-cae6b1f243ec",
"serviceName": "CreateAnalysisEndpointHandler",
"status": "READY"
},
{
"deploymentId": "a3af5ee6-1e65-447c-a315-f64a260ddb9c",
"serviceId": "1f70389f-c50f-4ca0-ac25-68a3ee6e3c01",
"serviceName": "CreateBandingMapsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "b2bb27c4-8336-4b86-8b63-51e24d69cc8b",
"serviceId": "a78eec8e-251a-4515-881e-247353b8d365",
"serviceName": "CreateCapturesEndpointHandler",
"status": "READY"
},
{
"deploymentId": "f493327e-ee8f-48ac-a3c4-a30988f7fba3",
"serviceId": "d9c53399-03f7-4b02-b815-edf921e700a3",
"serviceName": "CreateColorDifferenceMapsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "e49d023a-d46a-4383-8f2e-33c6f69e15e3",
"serviceId": "2242d34f-fcca-44e7-ac22-8ecf1d3a79cb",
"serviceName": "CreateFramesEndpointHandler",
"status": "READY"
},
{
"deploymentId": "05a09065-577e-4055-8f40-1748c37d2a6d",
"serviceId": "fdfbf7f0-b13d-43b0-83a0-b841505f4644",
"serviceName": "CreateQualityMapsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "4fc91764-ea35-4f2a-95bc-cca342697d8b",
"serviceId": "22b0926b-949a-4314-946e-d11fc01852a4",
"serviceName": "DeleteCacheEndpointHandler",
"status": "READY"
},
{
"deploymentId": "170d9106-0978-4e7e-ba7e-2f1630612b70",
"serviceId": "c146de0e-ccb5-4ece-a5fc-a4bcd0732c17",
"serviceName": "FileCacheService",
"status": "READY"
},
{
"deploymentId": "afa7e4a7-ff1a-4733-801e-6f350e0830ac",
"serviceId": "7fbd6181-13be-4dbd-bd59-f2d8e8b7cd9a",
"serviceName": "FrameServicesOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "29219d23-b091-4171-9c19-a5b39dd0441e",
"serviceId": "08880ab7-1049-4abd-8bd1-b11d634e1553",
"serviceName": "FramesService",
"status": "READY"
},
{
"deploymentId": "e051bd9a-68c9-4c02-aa4e-4f782e75f8ce",
"serviceId": "2a51a86a-8b50-4fe4-9ac8-e16339396979",
"serviceName": "GetBucketLocationEndpointHandler",
"status": "READY"
},
{
"deploymentId": "7aa1992e-06cd-454f-9b7f-66fbe625da6f",
"serviceId": "3777803e-2473-4489-974c-f1e564911e2a",
"serviceName": "GetFeatureLicenseEndpointHandler",
"status": "READY"
},
{
"deploymentId": "e9057bb5-5868-4265-90cf-5bb3e1dadde3",
"serviceId": "8eb1130f-d060-4879-b7c3-f2630d412909",
"serviceName": "GetStatusEndpointHandler",
"status": "READY"
},
{
"deploymentId": "71735350-ec2e-47b0-85dc-16b28a7d8d53",
"serviceId": "dd618b23-1f51-4a9d-a090-7364fc6df70a",
"serviceName": "GetVersionEndpointHandler",
"status": "READY"
},
{
"deploymentId": "01fe682a-2174-4adb-804e-62759d6ee417",
"serviceId": "8ca86e98-a8f8-477d-8dcc-9cee62a029a2",
"serviceName": "HeadBucketEndpointHandler",
"status": "READY"
},
{
"deploymentId": "4a5d6d44-72ab-4d6e-ac49-dd346685d3f8",
"serviceId": "61a22734-9fc6-4e45-a0cf-0f9fc41785c4",
"serviceName": "HeadObjectEndpointHandler",
"status": "READY"
},
{
"deploymentId": "26b7767c-f73c-4d26-b822-2a78a277fc09",
"serviceId": "ceafa127-cf34-4cde-9174-aa1b2522b304",
"serviceName": "HlsService",
"status": "READY"
},
{
"deploymentId": "661a2e69-1bdb-453c-9520-cabcc8862946",
"serviceId": "9083d147-7e76-4bcc-b68e-25ccee9408d3",
"serviceName": "HttpReverseProxyService",
"status": "READY"
},
{
"deploymentId": "8e2b51e7-e491-446b-b42a-fe5cf93a8a6e",
"serviceId": "d33c4ec0-9bad-4d96-8202-574de279dadd",
"serviceName": "JobTimeoutService",
"status": "READY"
},
{
"deploymentId": "f910fd99-bb9c-42ba-9e9a-b06934ec57b4",
"serviceId": "071b4fe9-5d92-4bdf-8b82-09af6f7d6886",
"serviceName": "KubernetesJobJanitorService",
"status": "READY"
},
{
"deploymentId": "469f42c9-8111-4727-855e-e1da678bbb91",
"serviceId": "880136f9-9b79-4502-95b6-cc9176bd90f9",
"serviceName": "KubernetesJobManagementService",
"status": "READY"
},
{
"deploymentId": "c291c358-b2d4-4315-95a5-7dd6871ddbbf",
"serviceId": "3a120e1a-f0f1-4828-934e-c9b2ee2bd0d9",
"serviceName": "KubernetesSupportService",
"status": "READY"
},
{
"deploymentId": "f1141256-bb8d-473b-9047-f88d194835e2",
"serviceId": "c9e4f48a-a571-47ea-a3f7-08f65c5fe20b",
"serviceName": "ListBucketsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "ae996ca2-e0a9-40b5-b018-73f2212f612f",
"serviceId": "85b05f08-2af8-4be8-bc43-2d5f5e79986a",
"serviceName": "ListObjectsEndpointHandler",
"status": "READY"
},
{
"deploymentId": "41f9ab35-4bf5-4c8a-bbcb-a2ee401da1d1",
"serviceId": "8b1d7cbc-7be8-492d-97ac-d52ce4aa0758",
"serviceName": "PutFeatureLicenseEndpointHandler",
"status": "READY"
},
{
"deploymentId": "c84b4177-36c2-4fe1-ae39-31b1a7b9d848",
"serviceId": "b751f1ef-bb17-46a7-aa06-4af2973557c7",
"serviceName": "QualityMapsService",
"status": "READY"
},
{
"deploymentId": "5531c25f-7d1c-44b2-bfe5-e475c8af380b",
"serviceId": "c033a06b-c8ca-4c73-9a25-8cb911ea1cd8",
"serviceName": "ResourceEstimateHandlerService",
"status": "READY"
},
{
"deploymentId": "d883f86b-4194-46c7-b208-9d927ee84c11",
"serviceId": "237c8f90-900a-45f9-939a-bfab8dd6e286",
"serviceName": "S3Service",
"status": "READY"
},
{
"deploymentId": "945811c7-b730-40c2-826a-6feb7e234889",
"serviceId": "896183d5-b436-468e-9564-2a7eecf601a5",
"serviceName": "SystemService",
"status": "READY"
},
{
"deploymentId": "3b75fb52-4415-47de-aee1-bbdcc763c42d",
"serviceId": "2892a53a-3c02-4e13-a70c-1948a0aad4c0",
"serviceName": "SystemServicesOpenApiRestService",
"status": "READY"
},
{
"deploymentId": "f076f06d-b625-4b71-8a80-bf42b63f9fd3",
"serviceId": "149e6296-b2a0-48a8-abb3-278cab035be3",
"serviceName": "VertxEventBusProxyService",
"status": "READY"
}
],
"outcome": "READY"
}
Retrieves the current feature license.
Responses
Indicates that the feature license was successfully retrieved.
Body
curl -X GET "http://localhost:8888/api/vod/v1/featureLicense"
HTTP/1.1 200 OK
Content-Type: text/plain
35a82e6900b6d5468073fbd0204e7b07546ec30f5e78f81af9fe4c95c8c88316
{
"bandingDetection": true,
"bandingMaps": true,
"color": true,
"colorDifferenceMaps": true,
"contentAttributes": true,
"contentComplexity": true,
"expiry": "2022-12-31",
"frameCaptures": true,
"hdrSupport": true,
"insights_analysis_url": "",
"insights_cli_overrides": false,
"insights_frame_scores": true,
"insights_password": "test-password",
"insights_qc_config_url": "",
"insights_scene_definitions_url": "",
"insights_servers": [],
"insights_username": "test-user",
"organization": "SSIMWAVE",
"otherVideoQualityMetrics": true,
"qualityChecks": {
"blackFrame": true,
"colorBarFrame": true,
"freezeFrame": true,
"missingCaptions": true,
"scoreChecks": true,
"silence": true,
"solidColorFrame": true
},
"qualityMaps": true,
"site": "Test Site"
}
Applies a product feature license.
Request body
Responses
Indicates that the feature license was successfully applied.
curl -X PUT "http://localhost:8888/api/vod/v1/featureLicense" \
-H "Content-Type: text/plain" \
-d '35a82e6900b6d5468073fbd0204e7b07546ec30f5e78f81af9fe4c95c8c88316
{
"bandingDetection": true,
"bandingMaps": true,
"color": true,
"colorDifferenceMaps": true,
"contentAttributes": true,
"contentComplexity": true,
"expiry": "2022-12-31",
"frameCaptures": true,
"hdrSupport": true,
"insights_analysis_url": "",
"insights_cli_overrides": false,
"insights_frame_scores": true,
"insights_password": "test-password",
"insights_qc_config_url": "",
"insights_scene_definitions_url": "",
"insights_servers": [],
"insights_username": "test-user",
"organization": "SSIMWAVE",
"otherVideoQualityMetrics": true,
"qualityChecks": {
"blackFrame": true,
"colorBarFrame": true,
"freezeFrame": true,
"missingCaptions": true,
"scoreChecks": true,
"silence": true,
"solidColorFrame": true
},
"qualityMaps": true,
"site": "Test Site"
}'
Add a secret for accessing an Amazon S3 bucket
Request body
Responses
Indicates that the secret was added.
Add credentials to access the Amazon S3 bucket named “mybucket”
curl -X PUT "http://localhost:8888/api/vod/v1/s3AccessSecret" \
-H "Content-Type: application/json" \
-d '{
"bucketName": "mybucket",
"clientId": "AKIAIOSFODNN7EXAMPLE",
"clientSecret": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}'
Configuration options for specifying the conditions under which segments of a content type are considered active. By default, segments of all content types are considered active when at least one audio channel is active throughout the segment’s duration.
{
"motionVideoSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "SILENCE"
},
"blackFrameSegments": {
"canBeActive": false
},
"colorBarFrameSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "ALL_CHANNELS_ACTIVE"
},
"freezeFrameSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "ANY_CHANNEL_ACTIVE"
},
"solidColorFrameSegments": {
"canBeActive": true,
"activeAudioChannelsDefinition": "ANY_CHANNEL_ACTIVE"
}
}
Active segment definition for motion video segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "SILENCE"
}
Active segment definition for black frame segments
{
"canBeActive": false
}
Active segment definition for color bar frame segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "ALL_CHANNELS_ACTIVE"
}
Active segment definition for freeze frame segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "ANY_CHANNEL_ACTIVE"
}
Active segment definition for solid color frame segments
{
"canBeActive": true,
"activeAudioChannelsDefinition": "ALL_CHANNELS_ACTIVE"
}
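Putting the definitions above together: whether a segment counts as active depends on its canBeActive flag and on the audio-channel activity matching the configured activeAudioChannelsDefinition (one of SILENCE, ANY_CHANNEL_ACTIVE, ALL_CHANNELS_ACTIVE, as enumerated elsewhere in this API). A rough sketch of that decision, as a hypothetical helper rather than the system's actual logic:

```python
def is_segment_active(can_be_active: bool, definition: str,
                      active_channels: int, total_channels: int) -> bool:
    """Decide whether a segment counts as active.

    `definition` is one of the audio channel activity types this API
    enumerates: SILENCE, ANY_CHANNEL_ACTIVE, ALL_CHANNELS_ACTIVE.
    """
    if not can_be_active:
        return False  # e.g. blackFrameSegments with canBeActive: false
    if definition == "SILENCE":
        return active_channels == 0           # all channels are silent
    if definition == "ANY_CHANNEL_ACTIVE":
        return active_channels > 0            # at least one channel not silent
    if definition == "ALL_CHANNELS_ACTIVE":
        return active_channels == total_channels
    raise ValueError(f"unknown definition: {definition}")
```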
Captures an analysis of a video asset from which frame scores are produced using SSIMPLUS Analyzer. A no-reference (NR) analysis is performed on a single video asset only and its results can be used to judge the quality of the asset in isolation. A full-reference (FR) analysis is performed using two video assets: a reference asset against which you will compare a subject asset. Generally, the reference asset is the higher quality video and the subject asset is the resulting video having gone through some kind of transcoding, compression or general transformation. A full-reference analysis will give frame scores on the absolute quality of each asset as well as the comparative quality, allowing one to ascertain the impact of the transformation process on the overall quality.
Analyses are used as the payload in an AnalysisResponse and contain the attributes necessary to lookup the associated frame score results. For a successfully submitted analysis, the id
will represent a universally unique id (UUID) that can be used as a key to lookup the frame score results. Additionally the submissionTimestamp
will indicate the time at which the analysis was successfully submitted.
For an analysis that fails to be submitted, the id
and submissionTimestamp
attributes will be missing and the submissionError
attribute will contain details indicating the nature of the error. If you are unsure how to interpret the error or how to work around it, please contact your SSIMWAVE representative.
{
"id": "04a2e841-9c9e-4f50-9f9f-4a8b847f5b3e",
"referenceAsset": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/ref/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"hdr": true,
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"viewingEnvironmentIndex": 1,
"durationSeconds": 5,
"durationFrames": 1,
"skipStart": 1.25,
"skipEnd": 1.25
}
]
}
},
"subjectAsset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "videos",
"credentials": {
"useAssumedIAMRole": true
}
},
"hdr": true,
"qualityCheckConfig": {
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"viewingEnvironmentIndex": 1,
"durationSeconds": 5,
"durationFrames": 1,
"skipStart": 1.25,
"skipEnd": 1.25
}
]
}
},
"submissionTimestamp": "2018-01-01T14:20:22Z",
"testId": "1",
"content": {
"title": "Big Buck Bunny"
},
"analyzerConfig": {
"enableBandingDetection": true,
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
},
"viewerType": "TYPICAL"
}
]
}
}
The UUID for the analysis.
A description of the analysis which can be used for reference, categorization and search/filtering.
The reference asset against which you will compare a subject asset. This attribute is ONLY used for full-reference (FR) analyses.
The subject asset which you will use to compare against the reference asset (for full-reference analysis) or the asset against which you will perform a no-reference analysis.
Any error message resulting from the submission of the analysis. Note that the error represented here is meant to cover ONLY the submission of a new analysis for processing within the Kubernetes cluster. It does NOT cover any errors that may be generated when the analysis is either scheduled or executed. These error messages will be available through alternative means (e.g. Kubernetes monitoring software such as Prometheus, and/or REST APIs available for result processing).
The UTC timestamp (using ISO-8601 representation) recording when the analysis was successfully submitted for processing. Analyses that fail to submit correctly will not have a value for this attribute.
The test ID used to uniquely identify the asset within the analysis
Metadata about the content being analyzed in this analysis.
{
"title": "Big Buck Bunny"
}
Analyzer configuration options used in this analysis
The request body used when updating an analysis.
Currently, the system supports only the following update operations:
-
Cancelling an existing analysis
Note: Only analyses that are currently in progress (i.e. scheduled, estimating, aligning, analyzing) can be cancelled
{
"status": "CANCELLED"
}
The desired analysis status
Cancels a running analysis
Specification of configuration options for use by the analyzer at the analysis level. Configuration options for assets can be specified on the Asset object.
{
"enableBandingDetection": true,
"enableColorVolumeDifference": true,
"enableColorStatsCollection": true,
"enableVMAF": true,
"enablePSNR": true,
"qualityCheckConfig": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"missingCaptions": {
"enabled": false,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
],
"temporalAlignment": {
"minSeconds": 5,
"maxSeconds": 120,
"maxSecondsHighFPS": 120
}
}
Detect the level of banding within the assets
Detect cadence patterns within the assets
For full-reference (FR) analyses, run complexity analysis on the reference asset(s). In a no-reference analysis, complexity analysis is run on the subject asset(s) instead.
Enable Colour Volume Difference calculation
Enable Luminance and Color Gamut stats collection
Enable VMAF score calculation for full-reference analyses
Enable PSNR score calculation for full-reference analyses
Controls whether the Analyzer performs automatic temporal alignment. This flag applies only to full-reference analyses; it is recommended to leave it enabled.
Enable physical noise calculation for the video. Physical Noise measures the standard deviation of camera/sensor noise when the statistical behaviour of the noise is random with a Gaussian (or similar) distribution.
Enable visual noise calculation for the video. Visual Noise measures the standard deviation of noise considering the contrast masking behaviour of the underlying content.
Enable temporal information collection for the video
Enable spatial information collection for the video
Enable color information collection for the video
Configuration options for quality checks.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"missingCaptions": {
"enabled": false,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
Specifications of environments under which the content is viewed
[
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "EXPERT"
},
{
"device": {
"name": "xl2420t"
},
"viewerType": "TYPICAL"
}
]
Number of frames to process. When specified in the context of a full-reference analysis, the value applies to the reference asset.
Configuration options for temporal alignment
{
"minSeconds": 5,
"maxSeconds": 120,
"maxSecondsHighFPS": 120
}
Configuration options for content layout detection.
Additional (undocumented) configuration options for use by the Analyzer and at the direction/suggestion of your SSIMWAVE representative.
{
"bandingDetectionThreshold": 40,
"macroBlocking": true
}
Represents a video asset in the system.
{
"content": {
"title": "Big Buck Bunny"
},
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "/videos"
},
"streamIdentifier": {
"type": "VideoPID",
"identifier": 1
}
}
{
"content": {
"title": "Sparks - Dolby Vision"
},
"name": "20161103_1023_SPARKS_4K_P3_PQ_4000nits_DoVi.mxf",
"sidecars": [
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
],
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "PVC",
"name": "videos"
},
"hdr": true
}
Metadata about the content contained in this asset. This can also be set automatically for all assets by including the content field in the NewAnalysis request.
{
"title": "Big Buck Bunny"
}
A URI describing the asset location, of the form
storageLocationType://storageLocationName/path/name
Either this field or all of name
, path
, and storageLocation
must be provided. Additional storageLocation
properties such as credentials
may still be specified alongside this field.
To uniquely specify an asset location, either the assetUri
field or all of name
, path
, and storageLocation
must be provided.
The filename that represents the video. Although not required, one is encouraged to keep filenames unique, where possible.
If working with an image sequence asset (a collection of multiple image files at the same base path indexed by sequential numbers), a format string must be included in the file name to specify the position and format of the index.
The format string can take one of two forms:
%d
specifies numbers with no special formatting at the included position. For example, "sintel_%d.dpx"
will match an asset consisting of images sintel_1.dpx
, sintel_2.dpx
, … , sintel_10.dpx
, …
%0[width]d
specifies numbers that are 0-padded, consisting of width
characters total. For example, "sintel_%03d.dpx"
will match an asset consisting of images sintel_001.dpx
, sintel_002.dpx
, sintel_003.dpx
, …
The literal character %
can be escaped with the string %%
. For example, "big%%20buck%%20bunny%04d.png"
will match an asset consisting of images big%20buck%20bunny0001.png
, big%20buck%20bunny0002.png
, big%20buck%20bunny0003.png
, …
Note that only one format string can be specified in the file name. Additionally, if a format string is included, imageSequenceParameters
must also be provided.
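Since the patterns above are printf-style, Python's % formatting reproduces the same expansions, which is a quick way to preview the filenames a given pattern will match:

```python
def sequence_names(pattern: str, first: int, count: int) -> list:
    """Expand a printf-style image-sequence pattern into consecutive filenames."""
    return [pattern % i for i in range(first, first + count)]


print(sequence_names("sintel_%d.dpx", 1, 3))
# -> ['sintel_1.dpx', 'sintel_2.dpx', 'sintel_3.dpx']
print(sequence_names("sintel_%03d.dpx", 1, 3))
# -> ['sintel_001.dpx', 'sintel_002.dpx', 'sintel_003.dpx']

# %% escapes a literal percent sign:
print("big%%20buck%%20bunny%04d.png" % 1)
# -> big%20buck%20bunny0001.png
```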
The sidecar(s) to associate with the asset. If a sidecar does not specify a path, it is assumed to use the path associated with the asset.
[
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
]
To uniquely specify an asset location, either the assetUri
field or all of name
, path
, and storageLocation
must be provided.
The path to the asset’s file (and possibly sidecar) location within the associated storage. The combination of the path and filename of a given asset must be unique.
To uniquely specify an asset location, either the assetUri
field or all of name
, path
, and storageLocation
must be provided.
The storage location that houses the asset.
{
"name": "/videos",
"type": "S3"
}
Hint that the video asset should be high dynamic range (HDR). Note that if the asset cannot be HDR due to low bit depth, the analysis will fail.
Used to specify the packet identifier (see VideoPID and VideoPIDHex) for assets with multiple video streams. In the case of HLS, this identifier can be used to represent the HLS variant (see HLSVariantIdentifier).
{
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
Used to specify the packet identifier (see VideoPID and VideoPIDHex) for assets with multiple video streams. In the case of HLS, this identifier can be used to represent the HLS variant (see HLSVariantIdentifier).
The starting frame to use for this asset, as it pertains to the analysis. For video assets, this value is relative to the start of the asset, regardless of the absolute frame index at which it starts. If your asset starts at frame X, specifying a value of 100 for startFrame
will instruct the analyzer to ignore all frames in the asset from X -> X + 99.
However, in the case where your asset is an image sequence that does not start at frame 1, you must do the arithmetic to figure out the correct startFrame
that applies. Consider, for example, an image sequence that starts at frame 240. If you want to skip the first 100 frames and start at frame 340, you would specify a startFrame
value of 100, not 340.
If unspecified, the analyzer will always use 1. This value is ignored for frame capture requests; use the frameIndex
in the body of the frame capture request instead.
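The image-sequence arithmetic described above reduces to subtracting the sequence's first index from the absolute frame you want to start at. A hypothetical helper, following the frame-240 example:

```python
def relative_start_frame(sequence_first_index: int, absolute_start_index: int) -> int:
    """Convert an absolute image-sequence frame index to a relative startFrame."""
    if absolute_start_index < sequence_first_index:
        raise ValueError("start index precedes the first frame of the sequence")
    return absolute_start_index - sequence_first_index


# The example above: the sequence starts at 240; begin analysis at frame 340.
print(relative_start_frame(240, 340))  # -> 100 (not 340)
```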
Specification for the region of interest of this asset.
{
"originX": 20,
"originY": 0,
"regionHeight": 300,
"regionWidth": 400
}
Settings for RAW video formats. Must be provided if this asset has the file extension .yuv, .rgb or .bgr.
{
"resolution": {
"width": 720,
"height": 576
},
"fps": 25,
"scanType": "P",
"fieldOrder": "TFF",
"pixelFormat": "YUV420P"
}
Settings for image sequence assets. Must be provided if the name contains a format string (%d
or %0[width]d
) and the file extension is one of:
.png
.tif
.tiff
.dpx
.jpg
.jpeg
.j2c
.jp2
.jpc
.j2k
.exr
{
"fps": 25
}
Configuration options for asset-level quality checks
{
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SVS",
"threshold": 60,
"durationSeconds": 2,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
},
{
"metric": "SBS",
"threshold": 75,
"durationFrames": 48,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
]
}
Configuration options for audio groups that exist within this asset.
{
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"type": "MAX_TRUE_PEAK",
"enabled": true,
"duration": 1,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
},
{
"type": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 5
},
{
"type": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 25
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
A unique identifier for an asset within a completed analysis. For an NR analysis, or to pick the reference asset within an FR analysis, this will simply be the integer ID associated with the asset. For an FR analysis, you will use the “refId-subjectId” format.
Quality checks specified on a per-asset basis.
{
"scoreChecks": [
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
]
}
Any number of score-based quality check definitions.
Any number of metadata-based quality check definitions
[
{
"type": "DOLBY_VISION"
}
]
Configure Photosensitive Epilepsy Harding Tests
Represents configuration for audio measurements and audio specific quality checks.
{
"groups": [
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 8
},
{
"checkType": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 30
},
{
"checkType": "MAX_INTEGRATED_LOUDNESS",
"enabled": true,
"threshold": -2
},
{
"checkType": "MAX_MOMENTARY_LOUDNESS",
"enabled": true,
"duration": 2,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
]
}
A collection of audio groups that exist within the parent asset.
For this release, only a single array entry is allowed, where the configuration is applied to all audio streams existing within the asset.
Enable or disable audio processing for the parent asset.
When set to disabled, no audio quality checks will be raised.
Enumerates the audio segment audio channel activity types
All audio channels are silent
At least one audio channel is not silent
All audio channels are active
Represents audio measurements and audio specific quality check configuration for a particular audio group.
{
"qualityCheckConfig": {
"loudnessChecks": [
{
"checkType": "MIN_LOUDNESS_RANGE",
"enabled": true,
"threshold": 8
},
{
"checkType": "MAX_LOUDNESS_RANGE",
"enabled": true,
"threshold": 30
},
{
"checkType": "MAX_INTEGRATED_LOUDNESS",
"enabled": true,
"threshold": -2
},
{
"checkType": "MAX_MOMENTARY_LOUDNESS",
"enabled": true,
"duration": 2,
"skipStart": 1.25,
"skipEnd": 1.25,
"threshold": -2
}
]
},
"loudnessMeasurements": {
"algorithm": "ITU_R_BS_1770_3",
"enabled": true
}
}
A collection of audio specific quality checks that will be performed on the audio groups within the parent asset.
Configuration parameters for audio loudness measurements that are performed on an audio group.
A quality check that will be performed on an audio group.
Configuration for one or more loudness quality checks.
Configuration for audio clipping quality check
Configuration for audio clicks/pops quality check
Configure the quality check for Audio Clicks and Pops. Clicks and pops are caused by a variety of factors, including a poor recording environment, bad equipment, or a misaligned recording.
Enable detection of audio click/pop events
The duration in seconds to ignore at the start of the mixed audio track (i.e. group). This value can be used to skip or ignore some portion at the start of audio in order to eliminate unwanted quality check failures.
The duration in seconds to ignore at the end of the mixed audio track (i.e. group). This value can be used to skip or ignore some portion at the end of audio in order to eliminate unwanted quality check failures.
The sensitivity of the check. Higher sensitivity means more detections, but more false positives.
Configure the quality check for Audio Clipping. This distortion occurs when an audio signal exceeds the maximum limit of a recording or playback system. It typically happens when the volume level of the audio reaches or exceeds the maximum level that can be accurately reproduced, resulting in the waveform being “clipped” or truncated. This truncation introduces unwanted artifacts to the audio signal, leading to a harsh, distorted sound.
Enable detection of audio clipping events
The minimum clipping duration in seconds required for an event to trigger.
The duration in seconds to ignore at the start of the mixed audio track (i.e. group). This value can be used to skip or ignore some portion at the start of audio in order to eliminate unwanted quality check failures.
The duration in seconds to ignore at the end of the mixed audio track (i.e. group). This value can be used to skip or ignore some portion at the end of audio in order to eliminate unwanted quality check failures.
The sensitivity of the check. Higher sensitivity means more detections, but more false positives.
Used to define a quality check based on audio loudness measurements taken over a window of time.
Quality check failure events are generated when loudness values of the specified type are beyond the threshold
limit for at least duration
continuous seconds.
Certain types of checks are performed over the duration of the asset, where skipStart
, skipEnd
and duration
are not applicable. Those types are indicated within the schema definition below.
{
"checkType": "MAX_TRUE_PEAK_LEVEL",
"enabled": true,
"duration": 5,
"skipStart": 2.5,
"skipEnd": 1.25,
"threshold": -2
}
{
"checkType": "MAX_INTEGRATED_LOUDNESS",
"enabled": true,
"threshold": 1
}
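The trigger semantics illustrated by the examples above can be sketched in Python. This only restates the documented behaviour (values beyond `threshold` for at least `duration` continuous seconds, honouring `skipStart`/`skipEnd`); the function name and the assumption of evenly spaced per-window readings are illustrative, not part of the API:

```python
def loudness_events(readings, interval, threshold, duration,
                    skip_start=0.0, skip_end=0.0):
    """Return (start_time, end_time) tuples for each triggered event.

    readings: loudness values (e.g. dBTP) sampled every `interval` seconds.
    A MAX_* style event fires when readings stay above `threshold` for at
    least `duration` continuous seconds, ignoring the skip windows.
    Illustrative sketch only; the Analyzer's implementation may differ.
    """
    total = len(readings) * interval
    events, run_start = [], None
    for i, value in enumerate(readings):
        t = i * interval
        in_window = skip_start <= t < total - skip_end
        if in_window and value > threshold:       # MAX_* direction
            if run_start is None:
                run_start = t
        else:
            if run_start is not None and t - run_start >= duration:
                events.append((run_start, t))
            run_start = None
    if run_start is not None and total - run_start >= duration:
        events.append((run_start, total))
    return events
```

With a -2 dBTP threshold and 3 s duration, four consecutive 1-second readings above -2 produce one event; readings entirely inside a skip window produce none.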
The type of loudness check to perform.
Enable detection of this particular audio loudness quality check event.
The minimum continuous duration in seconds required for the loudness to exceed `threshold` for an event to trigger. `duration` can only be specified for the following `checkType` values:
- `MAX_MOMENTARY_LOUDNESS`
- `MAX_SHORT_TERM_LOUDNESS`
- `MIN_TRUE_PEAK_LEVEL`
- `MAX_TRUE_PEAK_LEVEL`
- `SILENCE`

For the remaining `checkType` values not listed above, `duration` is always the length of the mixed audio track (i.e. group) and cannot be specified explicitly.

For a `checkType` of `MAX_MOMENTARY_LOUDNESS`, `duration` must be greater than 0.4 seconds. For a `checkType` of `MAX_SHORT_TERM_LOUDNESS`, `duration` must be greater than 3 seconds.
The duration in seconds to ignore at the start of the mixed audio track (i.e. group). This value can be used to skip or ignore some portion at the start of the audio in order to eliminate unwanted quality check failures. `skipStart` can only be specified for the following `checkType` values:
- `MAX_MOMENTARY_LOUDNESS`
- `MAX_SHORT_TERM_LOUDNESS`
- `MIN_TRUE_PEAK_LEVEL`
- `MAX_TRUE_PEAK_LEVEL`
- `SILENCE`
The duration in seconds to ignore at the end of the mixed audio track (i.e. group). This value can be used to skip or ignore some portion at the end of the audio in order to eliminate unwanted quality check failures. `skipEnd` can only be specified for the following `checkType` values:
- `MAX_MOMENTARY_LOUDNESS`
- `MAX_SHORT_TERM_LOUDNESS`
- `MIN_TRUE_PEAK_LEVEL`
- `MAX_TRUE_PEAK_LEVEL`
- `SILENCE`
The upper or lower threshold limit which loudness values must exceed for `duration` seconds for an event to trigger.

Loudness values less than the threshold for the following `type` entries will cause an event to trigger:
- `MIN_INTEGRATED_LOUDNESS` (in LKFS)
- `MIN_LOUDNESS_RANGE` (in LU)
- `MIN_TRUE_PEAK_LEVEL` (in dBTP)
- `SILENCE` (in dBTP)

Loudness values greater than the threshold for the following `type` entries will cause an event to trigger:
- `MAX_INTEGRATED_LOUDNESS` (in LKFS)
- `MAX_LOUDNESS_RANGE` (in LU)
- `MAX_MOMENTARY_LOUDNESS` (in LUFS)
- `MAX_SHORT_TERM_LOUDNESS` (in LUFS)
- `MAX_TRUE_PEAK_LEVEL` (in dBTP)
The type of loudness check to register an `AudioLoudnessCheck` for.
The techniques used to measure Momentary Loudness, Short-Term Loudness, and True Peak Level are defined by the following specifications:
- ITU-R BS.1771-1 - Requirements for Loudness and True-peak indicating meters: https://www.itu.int/rec/R-REC-BS.1771-1-201201-I/en
- ITU-R BS.1770-4 - Algorithms to Measure Audio Programme Loudness and True-peak Audio Level: https://www.itu.int/rec/R-REC-BS.1770-4-201510-I/en
The technique used to measure Integrated Loudness is defined by the following family of specifications:
- ITU-R BS.1770 - Algorithms to Measure Audio Programme Loudness and True-peak Audio Level: https://www.itu.int/rec/R-REC-BS.1770
In addition to the technique defined within the ITU-R BS.1770 specification, the Loudness Range calculation also utilizes a cascaded gating scheme and the statistical distribution of loudness readings when determining the overall loudness range. This is performed in order to prevent low-level signals, background noise, silence and short bursts of unusually loud sound (e.g. explosions in a movie) from dominating the loudness range. The loudness range measurement technique is described in more detail here:
- EBU TECH 3342 - Loudness Range: A Measure to Supplement EBU R128 Loudness Normalization: https://tech.ebu.ch/docs/tech/tech3342.pdf
Maximum Momentary Loudness (in LUFS) measured over an integration period of 400 milliseconds.
Maximum Short-Term Loudness (in LUFS) measured over an integration period of 3 seconds
Minimum True Peak Level (in dBTP) for each channel within a group
Maximum True Peak Level (in dBTP) for each channel within a group
Minimum Integrated Loudness (in LKFS)
Maximum Integrated Loudness (in LKFS)
Minimum Loudness Range (in LU)
Maximum Loudness Range (in LU)
Detect periods of silence, reported for left, right and all channels
Represents the algorithm used to measure perceived loudness.
ITU-R BS.1770-1 algorithm used to measure audio program loudness and true-peak audio level
ITU-R BS.1770-2 algorithm used to measure audio program loudness and true-peak audio level
ITU-R BS.1770-3 algorithm used to measure audio program loudness and true-peak audio level
ITU-R BS.1770-4 algorithm used to measure audio program loudness and true-peak audio level
Represents configuration for each of the audio loudness measurements that can be performed.
The algorithm to use for loudness (Momentary, Short-term, Integrated, Loudness Range) and True Peak level measurements.
Controls whether audio loudness measurements are performed. This must be set to `true` if any audio loudness quality checks are desired for the associated asset.
Captures the supported parameter values for audio silence detection.
{
"threshold": -60,
"duration": 2.5
}
The loudness measurement below which an audio channel is considered silent for the purposes of determining if a segment has all channels active, any channels active, or no channels active. The default value is -60 dBFS.
The minimum duration in seconds that an audio channel must remain below `threshold` dB to be considered silent. Segments shorter than this duration will be treated as active. The default value is 30 seconds.
Deprecated.

Important Note: As of version 2.21.0, this schema has been deprecated and it is no longer recommended to configure an analysis-level audio silence quality check. Please refer to the asset-level audio check configuration `Audio` to perform silence quality checks in this version and future releases.
Captures the supported parameter values for audio silence detection.
{
"threshold": -60,
"commonParameters": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
The loudness measurement below which the audio output for the asset is considered to be silent. The default value is -60 dBTP.
Common quality check configuration parameters.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
Represents the details about a given frame and map cache.
{
"cacheId": "533db73-0f9a-4805-9651-c5dcd519dc37",
"numberOfFiles": 15182,
"sizeOfFiles": 1073741824,
"humanReadableSizeOfFiles": "1.0 G"
}
The UUID of the file/map cache
The number of frame and/or map PNG files stored in the cache
The aggregate size of all the PNG files stored in the cache (in bytes)
The aggregate size of all the PNG files stored in the cache (in human-readable form using KMGTPE units)
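As an illustration, the KMGTPE rendering seen in `humanReadableSizeOfFiles` could be reproduced with a small helper. This is a hypothetical reimplementation; the service's exact rounding rules are not documented here.

```python
def human_readable_size(num_bytes: int) -> str:
    """Render a byte count using KMGTPE units, e.g. 1073741824 -> '1.0 G'.

    Hypothetical helper mirroring humanReadableSizeOfFiles; assumes
    1024-based units and one decimal place of precision.
    """
    size = float(num_bytes)
    for unit in ("", "K", "M", "G", "T", "P"):
        if size < 1024:
            break
        size /= 1024
    else:
        unit = "E"  # anything past petabytes is reported in E units
    return f"{size:.1f} {unit}".strip()
```

For the example object above, `human_readable_size(1073741824)` yields `"1.0 G"`.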
The request body used when creating all possible captures for a given frame.
{
"frameRequest": {
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
},
"requestedCaptureType": "FRAME"
}
The frame request data. Use `FrameRequestBody` for a single asset and `FullReferenceFrameRequestBody` to include a reference asset. Note that you must use `FullReferenceFrameRequestBody` if you specify a `requestedType` of `QUALITY_MAP` or `COLOR_DIFFERENCE_MAP`.
The capture type to send back in the response.
Represents the different types of frame captures available (i.e. frame, banding map, quality map)
Represents the frame’s image content.
Represents binary map with white pixels showing banding presence.
Represents a gray scale presentation of pixel-level perceptual quality that shows the spatial distribution of impairments within a frame.
Represents a gray scale representation of pixel-level color and skin tone deviation with respect to the reference file.
Contains metadata about the content contained within an asset. HLS variants that are part of the same presentation should have the same title.
{
"title": "Big Buck Bunny"
}
The title of the content
Configuration options for content layout detection
Authentication credentials for assets stored in Amazon S3.
In order to support some software features (i.e. frame/map captures), the system needs to persist the access credentials provided in this object into our secure data store. For this reason it is recommended that you use `useAssumedIAMRole` or a Kubernetes secret instead of the `clientId`/`clientSecret` option.
{
"useAssumedIAMRole": true
}
AWS Access Key ID for accessing assets stored in Amazon S3.
AWS Secret Access Key for accessing assets stored in Amazon S3.
Authenticate using the role already assumed by the underlying container
A specification of a device for which scores are calculated
{
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
}
The name of the display device
Resolution of the device specified as width and height in pixels
{
"width": 1920,
"height": 1080
}
An identifier which can be used to uniquely identify a single frame within a video asset. The frame index/number within the sequential list of frames that constitute a video asset.
{
"type": "FrameIndex",
"value": 1200
}
Capture the schema type for use in `oneOf` semantics.
Captures the frame index value.
If the video asset is being deinterlaced by frame (i.e. FrameNumber or FrameTime and not PTS) then this index tells the system whether it should seek to the first or second deinterlaced frame for the desired frame. This value is rarely needed and only useful in the context of a full-reference analysis and under certain scan type and frame rate combinations. Please consult your SSIMWAVE contact for more details.
The request body for any request to create a frame and/or map.
{
"type": "FrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.m3u8",
"path": "/mnt/nas/videos",
"storageLocation": {
"type": "S3",
"name": "/videos"
},
"streamIdentifier": {
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
},
"startFrame": {
"type": "FrameIndex",
"value": 1200
},
"additionalFrames": 24
}
Capture the schema type for use in `oneOf` semantics.
The video asset for which you want to create the frame capture or map.
The frame at which to start capturing.
{
"type": "FrameIndex",
"value": 1200
}
The number of additional frames after `startFrame` for which frame captures (or maps) will be automatically generated and cached. Decoding video, extracting frames and building maps can be expensive operations. Use this value to capture and cache a number of frames following `startFrame` to support faster subsequent look-ahead request-response exchanges (i.e. useful in scroll-forward functionality).
An identifier which can be used to uniquely identify a single frame within a video asset and is structured as a hybrid time-frame format (`HH:MM:SS:FF`) where:
- `HH` is the two-digit hour (00-24);
- `MM` is the two-digit minute (00-59);
- `SS` is the two-digit second (00-59);
- and `FF` is the frame number within the second; varies depending on the asset's frames per second (FPS).
{
"type": "FrameTime",
"value": "00:34:28:21",
"deinterlacingIndex": 1
}
Capture the schema type for use in `oneOf` semantics.
Captures the frame time value.
If the video asset is being deinterlaced by frame (i.e. FrameNumber or FrameTime and not PTS) then this index tells the system whether it should seek to the first or second deinterlaced frame for the desired frame. This value is rarely needed and only useful in the context of a full-reference analysis and under certain scan type and frame rate combinations. Please consult your SSIMWAVE contact for more details.
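To illustrate the `HH:MM:SS:FF` format, a FrameTime value can be converted to an absolute frame index when the frame rate is a known constant integer. This helper is purely illustrative (the API accepts FrameTime values directly) and ignores drop-frame timecode conventions:

```python
def frame_time_to_index(value: str, fps: int) -> int:
    """Convert an HH:MM:SS:FF FrameTime string to an absolute frame index.

    Illustrative only: assumes a constant integer frame rate and
    non-drop-frame counting.
    """
    hh, mm, ss, ff = (int(part) for part in value.split(":"))
    if not (0 <= mm < 60 and 0 <= ss < 60 and 0 <= ff < fps):
        raise ValueError(f"invalid FrameTime: {value!r}")
    # Whole seconds elapsed, times fps, plus the frame within the second.
    return ((hh * 60 + mm) * 60 + ss) * fps + ff
```

For the example above, `"00:34:28:21"` at 24 fps corresponds to frame 49653.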
Configures the FPS and scan type quality check. When the detected FPS or scan type differs from the probed FPS or scan type, an “fps-mismatch” event is fired. The event is also fired when the stream frame rate (if detected by the demuxer) differs from the measured FPS.
{
"allowed": "30i,60p",
"enablePsfDetection": false,
"commonParameters": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
A comma separated list of allowed FPS and scan type combinations, such as 30i or 60p. If empty/unspecified everything is allowed. When the detected fps/scan combination is not one of the allowed ones, an “fps-not-allowed” event is fired.
Enables detection for “bad” interlaced videos created from PsF sources. Requires more time and cycles and may not be 100% correct.
Common quality check configuration parameters.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
The request body for any full-reference request to create a frame and/or map. To create a quality map, you must use a full-reference request.
{
"type": "FullReferenceFrameRequest",
"asset": {
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"content": {
"title": "Big Buck Bunny"
},
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"startFrame": {
"type": "PTS",
"value": 1400
},
"reference": {
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"type": "S3",
"name": "/videos"
}
},
"referenceStartFrame": {
"type": "PTS",
"value": 1200
},
"additionalFrames": 24
}
Capture the schema type for use in `oneOf` semantics.
The subject asset in the context of the full-reference request.
The frame in the subject asset at which to start capturing.
{
"type": "PTS",
"value": 1400
}
The reference video asset to be used when creating a full-reference request. Remember that a quality map image for a given asset requires access to the original reference asset in order to calculate and show the spatial distribution of impairments between the two frames.
The reference frame at which to start capturing. This value is only needed when the corresponding frame values differ between reference and subject assets (i.e. there is temporal misalignment).
{
"type": "PTS",
"value": 1200
}
The number of additional frames after `startFrame` for which frame captures (or maps) will be automatically generated and cached. Decoding video, extracting frames and building maps can be expensive operations. Use this value to capture and cache a number of frames following `startFrame` to support faster subsequent look-ahead request-response exchanges (i.e. useful in scroll-forward functionality).
Only used if the video asset is HTTP Live Streaming (HLS). This type is used to specify which variant video stream is to be used. If not included, all variant streams are used.
{
"type": "HLSVariantIdentifier",
"bandwidth": 4997885,
"fallbackStreamIndex": 1
}
Capture the schema type for use in `oneOf` semantics.
The bandwidth of the variant stream to be used as the subject asset (The value of the BANDWIDTH key of the corresponding EXT-X-STREAM-INF tag). If multiple variant streams with the same bandwidth exist, the first is used.
If multiple variant streams with the same bandwidth are found in the master playlist, those after the first are treated as fallback streams for that variant. The second stream with the same bandwidth has fallback index 0.
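The bandwidth-matching and fallback-indexing rules above can be sketched as follows. `select_variant` is a hypothetical helper, not part of the API; it only mirrors the documented selection behaviour:

```python
def select_variant(variants, bandwidth, fallback_stream_index=None):
    """Pick a variant stream from a master playlist by BANDWIDTH.

    variants: list of dicts with a 'bandwidth' key, in playlist order.
    When several variants share the same bandwidth, the first is the
    primary stream and later duplicates are fallbacks indexed from 0.
    Hypothetical helper for illustration only.
    """
    matches = [v for v in variants if v["bandwidth"] == bandwidth]
    if not matches:
        raise LookupError(f"no variant with bandwidth {bandwidth}")
    if fallback_stream_index is None:
        return matches[0]                      # primary stream
    return matches[1 + fallback_stream_index]  # fallback 0 is the second match
```

With two variants at BANDWIDTH 4997885, `fallbackStreamIndex: 0` selects the second one, matching the rule stated above.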
Settings for using image sequences as an asset. The following extensions are supported: `.png`, `.tif`, `.tiff`, `.dpx`, `.jpg`, `.jpeg`, `.j2c`, `.jp2`, `.jpc`, `.j2k`, `.exr`.

This property is required when the asset name contains a format string (`%d` or `%0[width]d`).
{
"fps": 24
}
Frames per second.
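As an illustration of the format-string naming described above, a pattern such as `frame_%05d.png` expands to one filename per frame. The helper below is hypothetical; only the `%d`/`%0[width]d` placeholder convention comes from this document:

```python
def expand_sequence(name_pattern: str, start: int, count: int) -> list:
    """Expand a printf-style asset name (e.g. 'frame_%05d.png') into the
    concrete filenames of an image sequence. Illustrative only; the
    service resolves such patterns internally.
    """
    return [name_pattern % i for i in range(start, start + count)]
```

For example, `expand_sequence("frame_%05d.png", 1, 3)` produces `frame_00001.png` through `frame_00003.png`.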
Used to define a quality check event based on metadata validity and correctness.
{
"type": "DOLBY_VISION"
}
{
"type": "MAXCLL_AND_MAXFALL",
"tolerance": 100,
"metadataSources": [
"CONTAINER"
]
}
The type of metadata to validate
The tolerance (+/-) between the measured and metadata values before a quality check is raised. `tolerance` can only be specified for the following `type` values:
- `MAXCLL_AND_MAXFALL`

For `MAXCLL_AND_MAXFALL` quality checks the unit is nits, and the default is 100.
Perform the metadata check against only the selected metadata source. Currently only used by `MAXCLL_AND_MAXFALL`. By default all metadata sources are checked (if present).
[
"PLAYLIST"
]
Validate the container metadata against the measured. Raise a quality check on mismatch, or if no MaxFALL/MaxCLL container metadata was detected.
Validate the Dolby Vision metadata against the measured. Raise a quality check on mismatch, or if no MaxFALL/MaxCLL Dolby Vision metadata was detected. If the video does not have any Dolby Vision metadata (side car or embedded), then this check is ignored.
Validate the IMF CPL metadata against the measured. Raise a quality check on mismatch, or if no MaxFALL/MaxCLL CPL metadata was detected. If the video is not an IMF video (submitted via the CPL XML), then this check is ignored.
The type of metadata to register a quality check definition for
Validate Dolby Metadata based on https://professionalsupport.dolby.com/s/article/Dolby-Vision-Quality-Control-Metadata-Master-Mezzanine
Validate that metadata from the container/CPL is consistent and matches measured light level values
Represents the request body used in an analyses POST request to submit a new analysis for processing using the specified assets.
A given analysis can either be full-reference or no-reference. A full-reference analysis requires specifying both reference and subject assets, whereas a no-reference analysis requires only a subject asset. For maximum efficiency, NewAnalysis has been designed to accept multiple reference and subject assets, with each subject asset being compared individually against all reference assets in separate analyses. This flexibility allows you to create a single request to execute anything from an ad-hoc no-reference analysis to multiple encoding ladder comparisons.
The following examples are representations in table format of how the system handles multiple reference and subject assets for common analysis scenarios:
| Reference Asset(s) | Subject Asset(s) |
| --- | --- |
| GOT_S2_EP1.mov | GOT_S2_EP1_libx264_1920x1080_50-0.mov |
| | GOT_S2_EP1_libx264_1280x720_50-0.mov |
Results in 2 full-reference analyses:
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1280x720_50-0.mov
| Reference Asset(s) | Subject Asset(s) |
| --- | --- |
| | GOT_S2_EP1_libx264_1920x1080_50-0.mov |
| | GOT_S2_EP1_libx264_1280x720_50-0.mov |
| | GOT_S2_EP1_libx264_960x540_50-0.mov |
Results in 3 no-reference analyses:
- GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1_libx264_1280x720_50-0.mov
- GOT_S2_EP1_libx264_960x540_50-0.mov
| Reference Asset(s) | Subject Asset(s) |
| --- | --- |
| GOT_S2_EP1.mov | GOT_S2_EP1_libx264_1920x1080_50-0.mov |
| GOT_S2_EP1.mp4 | GOT_S2_EP1_libx264_1280x720_50-0.mov |
Results in 4 full-reference analyses:
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1.mov —> GOT_S2_EP1_libx264_1280x720_50-0.mov
- GOT_S2_EP1.mp4 —> GOT_S2_EP1_libx264_1920x1080_50-0.mov
- GOT_S2_EP1.mp4 —> GOT_S2_EP1_libx264_1280x720_50-0.mov
For more details on how to structure the requests and responses for the examples above, please consult the POST endpoint on the analyses resource.
Since both no-reference and full-reference analyses require a subject asset, `subjectAssets` is a required attribute. For full-reference analyses, `referenceAssets` is also required.
{
"content": {
"title": "Big Buck Bunny"
},
"referenceAssets": [
{
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/videos/sources",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
],
"subjectAssets": [
{
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
],
"analyzerConfig": {
"enableComplexityAnalysis": true,
"enableBandingDetection": true,
"qualityCheckConfig": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"freezeFrame": {
"enabled": true
},
"blackFrame": {
"enabled": true
}
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
],
"framesToProcess": 240,
"temporalAlignment": {
"minSeconds": 5,
"maxSeconds": 90,
"maxSecondsHighFPS": 30
}
}
}
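A minimal sketch of submitting this request with the Python standard library is shown below. The base URL and the absence of authentication headers are assumptions; only the body shape and the POST /analyses path come from this document:

```python
# Build (but do not send) a POST /analyses request for a full-reference
# analysis. The host/port are placeholders; consult your deployment for
# the real base URL and any required auth headers.
import json
from urllib import request

payload = {
    "content": {"title": "Big Buck Bunny"},
    "referenceAssets": [{
        "name": "Big_Buck_Bunny.mp4",
        "path": "/mnt/nas/videos/sources",
        "storageLocation": {"name": "/videos", "type": "S3"},
    }],
    "subjectAssets": [{
        "name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
        "path": "/mnt/nas/videos",
        "storageLocation": {"name": "/videos", "type": "S3"},
    }],
}

def build_submit_request(base_url: str) -> request.Request:
    # The response contains one Analysis object for the reference's
    # no-reference pass plus one per subject comparison.
    return request.Request(
        f"{base_url}/analyses",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually submit: request.urlopen(build_submit_request("http://localhost:8080"))
```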
Metadata about the content being analyzed in this analysis. If included, content metadata will automatically be propagated to all assets in this analysis.
{
"title": "Big Buck Bunny"
}
A description of the analysis which can be used for reference, categorization and search/filtering. This field may be deprecated in a future release of the API. As such, you are encouraged to use the `content` field in place of this field whenever possible as it plays a more prominent/visible role in Insights reporting.
The reference asset against which you will compare a subject asset. This attribute is ONLY used for full-reference (FR) analyses.
[
{
"name": "Big_Buck_Bunny.mp4",
"path": "/mnt/nas/sources",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
]
The subject asset(s) are the assets which you will use to compare against the reference asset (for full-reference analysis) or the asset(s) against which you will perform a no-reference analysis.
[
{
"name": "Big_Buck_Bunny_1080p@5000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
},
{
"name": "Big_Buck_Bunny_1080p@2000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
},
{
"name": "Big_Buck_Bunny_720p@1000kbps.mp4",
"path": "/mnt/nas/videos",
"storageLocation": {
"name": "/videos",
"type": "S3"
}
}
]
Configuration options for use by the analyzer at the analysis level. Configuration options for assets can be specified on the Asset object.
{
"enableComplexityAnalysis": false,
"enableBandingDetection": false,
"qualityCheckConfig": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"freezeFrame": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
},
"viewingEnvironments": [
{
"device": {
"name": "oled65c9pua"
},
"viewerType": "TYPICAL"
}
],
"framesToProcess": 240,
"temporalAlignment": {
"minSeconds": 5,
"maxSeconds": 90,
"maxSecondsHighFPS": 30
},
"additionalConfigurationOptions": {
"bandingDetectionThreshold": 40
}
}
Configure Photosensitive Epilepsy Harding Tests
- Red Flash Detection
- Luminance Flash Detection
- Spatial Pattern Detection
- Extended Failures
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25,
"extendedFailure": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"luminanceFlash": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"redFlash": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"spatialPattern": {
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"standard": "ITU_R_BT_1702_2"
}
Enable all PSE Harding Tests. Can be overridden for individual tests.
The number of consecutive seconds after which the associated condition is considered to have failed its respective check. The default value is 0s (fail as soon as first detection is raised).
The number of seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of asset in order to eliminate unwanted quality check failures. The default value is 0s.
The number of seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of asset in order to eliminate unwanted quality check failures. The default value is 0s.
Configuration options for extended failure detection.
Configuration options for luminance flash detection.
Configuration options for red flash detection.
Configuration options for spatial pattern detection.
The standard to use for the Flash and Pattern Analyzer (FPA)
Ofcom
NAB 2006
ITU-R BT.1702-1
ITU-R BT.1702-2
Japan HDR
An identifier which can be used to uniquely identify a single frame within a video asset. The presentation timestamp metadata field used to achieve synchronization of an asset’s separate elementary streams when presented to the viewer.
{
"type": "PTS",
"value": 18542
}
Capture the schema type for use in `oneOf` semantics.
Captures the PTS value.
Configuration options for supported video quality checks.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
{
"enabled": false,
"freezeFrame": {
"enabled": true,
"duration": 5,
"skipStart": 1.25,
"skipEnd": 1.25
},
"blackFrame": {
"enabled": true,
"duration": 5,
"skipStart": 1.25,
"skipEnd": 1.25
}
}
Enable detection of all video quality check events. Can be overridden for individual detections.
The number of consecutive seconds after which all included and enabled video and audio quality checks are considered to have failed their respective checks. Can be overridden for individual detections.
The number of seconds to ignore at the start of the asset. Applies to all included and enabled video and audio quality checks. Can be overridden for individual detections.
The number of seconds to ignore at the end of the asset. Applies to all included and enabled video and audio quality checks. Can be overridden for individual detections.
Configuration options for freeze frame detection.
Configuration options for black frame detection.
Configuration options for solid color frame detection.
Configuration options for color bars detection.
Configuration options for missing captions detection.
Deprecated. Configuration options for audio silence detection.

Important Note: As of version 2.21.0, this schema has been deprecated and it is no longer recommended to configure an analysis-level audio silence quality check. Please refer to the asset-level audio check configuration `Audio` to perform silence quality checks in this version and future releases.
Configuration options for bitstream FPS and scan type mismatch detection
Enable a quality check for detection of multiple cadence patterns within an asset
Enable a quality check for detection of frames with a broken cadence
Enable a quality check for allowed cadences. Provide a list of cadences that are allowed to be present in the video.
[
"2:3",
"2:2"
]
If set to true, runs the analysis in content similarity detection mode and enables the content similarity mismatch quality check. The purpose of this mode is to detect content differences arising from frame insertions and deletions between two versions of the same title.

Note that in this mode, exactly one test and one ref asset must be provided. Additionally, the usual viewer score metrics will not be generated; instead both the ref and test will be evaluated in a no-reference mode. Thus, full-reference metrics such as PSNR and CVD cannot be enabled, nor can full-reference metrics be used as the basis for score-based quality checks.
Captures the supported parameter values for any video or audio quality check.
{
"enabled": true,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
Controls whether the associated video or audio quality check is enabled.
The number of consecutive seconds after which the associated condition is considered to have failed its respective check. For all video quality checks (i.e. black frames, solid color frames, freeze frames and color bar frames) the default value is 10s. For closed captions quality checks (i.e. missing captions) the default value is 60s.
The number of seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of asset in order to eliminate unwanted quality check failures. The default value is 0s.
The number of seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of asset in order to eliminate unwanted quality check failures. The default value is 0s.
Captures the supported parameter values for the freeze frame quality check.
{
"enabled": true,
"sensitivity": 75,
"duration": 2.5,
"skipStart": 1.25,
"skipEnd": 1.25
}
Controls whether the freeze frame quality check is enabled
The sensitivity of the freeze frame detector, from 1-100. Larger numbers (closer to 100) correspond to a more sensitive detector, meaning more events, and potentially more false positives, will be detected. Smaller numbers (closer to 1) correspond to a less sensitive detector, meaning fewer events will be detected. Lowering the sensitivity will usually result in fewer false positives at the cost of potentially increasing false negatives (true freeze frame events will be reported as unimpaired video). The default value is 50.
The number of consecutive seconds after which a freeze frame event will be reported as a quality check failure. The default value is 10s.
The number of seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of asset in order to eliminate unwanted quality check failures. The default value is 0s.
The number of seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of asset in order to eliminate unwanted quality check failures. The default value is 0s.
Whether or not to include this segment type in the content layout timeline
The minimum duration in seconds for freeze frame segments to be included in the content layout timeline. Freeze frame segments shorter than the specified duration will be treated as motion video.
The sensitivity of the freeze frame detector, from 1-100. Larger numbers (closer to 100) correspond to a more sensitive detector, meaning more events, and potentially more false positives, will be detected. Smaller numbers (closer to 1) correspond to a less sensitive detector, meaning fewer events will be detected. Lowering the sensitivity will usually result in fewer false positives at the cost of potentially increasing false negatives (true freeze frame events will be reported as unimpaired video). The default value is 50.
Settings needed to decode raw video with the following extensions: `.yuv`, `.rgb`, `.bgr`, `.v210`, or `.raw`.
{
"resolution": {
"width": 720,
"height": 576
},
"fps": 25,
"scanType": "P",
"fieldOrder": "TFF",
"pixelFormat": "YUV420P"
}
Resolution of the asset specified as width and height in pixels
{
"width": 1920,
"height": 1080
}
Frames per second
Scan Type
Interlaced
Progressive
Field Order
Top Field First
Bottom Field First
The pixel format
Specification for the region of interest
{
"originX": 20,
"originY": 0,
"regionHeight": 300,
"regionWidth": 400
}
x coordinate for region of interest origin
y coordinate for region of interest origin
height in pixels of the region of interest
width in pixels of the region of interest
A width and a height in pixels that specify the resolution of an asset
{
"width": 1920,
"height": 1080
}
Width of the video in pixels
Height of the video in pixels
Credentials for accessing an AWS Amazon S3 bucket
AWS Amazon S3 bucket name.
AWS Access Key ID for accessing assets stored in Amazon S3.
AWS Secret Access Key for accessing assets stored in Amazon S3.
Used to define a quality check event based on scores over a window of time. Quality check failure events are generated when scores of the specified type exceed `threshold` for at least `eventDuration` continuous seconds.

A `viewingEnvironments` array in the AnalyzerConfig object must be specified in order to use score-based quality checks on SVS, EPS and SBS metrics.
{
"metric": "SVS",
"threshold": 80,
"durationSeconds": 5,
"skipStart": 1.25,
"skipEnd": 1.25,
"viewingEnvironmentIndex": 0
}
The type of score to check for. Several restrictions apply regarding where each can be used:
- `SVS`, `SBS` and `LUMINANCE` can be applied to both Source (Reference) and Output (Test/Subject) assets, whereas `EPS` and `CVD` are only applicable to Output (Test/Subject) assets
- `EPS` and `CVD` can only be used in a full-reference analysis
The threshold that scores must exceed for `eventDuration` seconds for an event to trigger. Scores lower than the threshold for `SVS`, `EPS`, `MIN_FRAME_LUMINANCE` and `MIN_PIXEL_LUMINANCE`, and higher than the threshold for `SBS`, `CVD`, `MAX_FRAME_LUMINANCE` and `MAX_PIXEL_LUMINANCE`, will cause an event to trigger.

For `SVS`, `EPS`, `SBS`, and `CVD` the max threshold is 100. For `MAX_FRAME_LUMINANCE` and `MAX_PIXEL_LUMINANCE` score checks, the max threshold is 10000.
Specifies the (0-based) index of the viewing environment to use for this quality check. Required for SVS, EPS, and SBS score checks.
The minimum continuous duration in seconds required for the target score to exceed threshold for an event to trigger. Either durationSeconds or durationFrames must be specified, but both cannot be specified simultaneously.
The minimum continuous duration in frames required for the target score to exceed threshold for an event to trigger. Either durationSeconds or durationFrames must be specified, but both cannot be specified simultaneously.
The duration in seconds to ignore at the start of the asset. This value can be used to skip or ignore some portion at the start of the asset in order to eliminate unwanted quality check failures. The default value is 0s.
The duration in seconds to ignore at the end of the asset. This value can be used to skip or ignore some portion at the end of the asset in order to eliminate unwanted quality check failures. The default value is 0s.
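The constraints above (exactly one duration field, per-metric threshold limits, and a viewing environment for SVS/EPS/SBS checks) can be verified client-side before submission. The following is a minimal sketch, not part of the API: the helper name is ours, and the server performs its own validation regardless.

```python
# Client-side sanity checks for a score-based quality check definition.
# Field names follow the schema documented above; the helper is hypothetical.

MAX_THRESHOLD = {
    "SVS": 100, "EPS": 100, "SBS": 100, "CVD": 100,
    "MAX_FRAME_LUMINANCE": 10000, "MAX_PIXEL_LUMINANCE": 10000,
}

def validate_quality_check(check: dict) -> list:
    """Return a list of problems found in a quality check definition."""
    errors = []
    metric = check.get("metric")
    # Either durationSeconds or durationFrames must be given, never both.
    if ("durationSeconds" in check) == ("durationFrames" in check):
        errors.append("specify exactly one of durationSeconds or durationFrames")
    # Per-metric upper bound on the threshold.
    limit = MAX_THRESHOLD.get(metric)
    if limit is not None and check.get("threshold", 0) > limit:
        errors.append(f"threshold for {metric} must not exceed {limit}")
    # SVS, EPS and SBS checks require a viewing environment.
    if metric in ("SVS", "EPS", "SBS") and "viewingEnvironmentIndex" not in check:
        errors.append("viewingEnvironmentIndex is required for SVS, EPS and SBS")
    return errors

# The example object shown above passes cleanly (empty error list):
ok = validate_quality_check({
    "metric": "SVS", "threshold": 80, "durationSeconds": 5,
    "skipStart": 1.25, "skipEnd": 1.25, "viewingEnvironmentIndex": 0,
})
```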
Parameters for defining what constitutes an active segment for the purposes of constructing an active segment timeline. Allows specifying which segment types should be considered always inactive and under which audio conditions segment types should be considered active.
If set to false, this content type will always be considered inactive
The “least active” audio content type required for a segment of this content type:
- SILENCE: this content type will be considered active regardless of audio
- ANY_CHANNEL_ACTIVE: this content type will be considered active if at least one audio channel is active
- ALL_CHANNELS_ACTIVE: this content type will only be considered active if all audio channels are active
Configuration options for specifying which content types should be reported in the content layout timeline, and under which conditions. By default, all content type segments will be included with a default minimum duration of 10 seconds.
{
"blackFrameSegments": {
"include": true,
"duration": 0.5
},
"solidColorFrameSegments": {
"include": false
},
"colorBarFrameSegments": {
"include": true,
"duration": 1
},
"freezeFrameSegments": {
"include": true,
"duration": 0.25
},
"silenceDetection": {
"threshold": -80,
"duration": 5
}
}
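The defaults described above (every segment type included, 10-second minimum duration) can be applied to a partial configuration like the one in the example. A minimal sketch follows; the helper name and merge strategy are ours, not part of the API.

```python
# Apply the documented content layout defaults to a partial configuration:
# every frame-segment type is included with a 10-second minimum duration
# unless the configuration overrides it. Helper name is hypothetical.

DEFAULT_SEGMENT = {"include": True, "duration": 10}
SEGMENT_TYPES = (
    "blackFrameSegments", "solidColorFrameSegments",
    "colorBarFrameSegments", "freezeFrameSegments",
)

def with_defaults(config: dict) -> dict:
    """Merge a partial content layout config over the documented defaults."""
    return {
        segment_type: {**DEFAULT_SEGMENT, **config.get(segment_type, {})}
        for segment_type in SEGMENT_TYPES
    }

layout = with_defaults({
    "blackFrameSegments": {"include": True, "duration": 0.5},
    "solidColorFrameSegments": {"include": False},
})
# Unspecified types such as freezeFrameSegments fall back to the
# documented defaults: included, with a 10-second minimum duration.
```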
Configuration options for black frame segments.
{
"include": true,
"duration": 0.5
}
Configuration options for color frame segments.
{
"include": false
}
Configuration options for color bar frame segments.
{
"include": true,
"duration": 1
}
Configuration options for freeze frame segments.
Configuration options for silence detection in audio segments.
{
"threshold": -80,
"duration": 5
}
The type of score to register a quality check definition for. Several restrictions apply regarding where each can be used:
- SVS, SBS and LUMINANCE can be applied to both Source (Reference) and Output (Test/Subjects) assets, whereas EPS and CVD are only applicable to Output (Test/Subjects) assets
- EPS and CVD can only be used in a full-reference analysis
SSIMPLUS Viewer Score
SSIMPLUS Encoder Performance Score
SSIMPLUS Banding Score
Color Volume Difference Score
Minimum Pixel Luminance Score
Maximum Pixel Luminance Score
Minimum Frame Luminance Score
Maximum Frame Luminance Score
A text file that accompanies a video asset and is used to provide metadata or supplemental data on the asset.
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml"
}
{
"type": "DOLBY_VISION_METADATA",
"name": "20161103_SPARKS_DOVI_METADATA_AR_CORRECT.xml",
"path": "/mnt/videos"
}
The type of the sidecar.
The filename that represents the sidecar.
The path to the sidecar file. If not supplied, the sidecar file must use the same path as its associated Asset.
The type of sidecar file that accompanies the video asset.
Dolby Vision metadata in XML format
Captures the storage location used to house one or more assets. Every asset has a storage location.
{
"type": "S3",
"name": "test-bucket",
"credentials": {
"useAssumedIAMRole": "true"
}
}
{
"type": "PVC",
"name": "videos"
}
An enumeration to capture the supported storage types.
A name required to access the root of the storage location. The root of the store location will be used with the path and name of the asset to uniquely identify the location of the asset. For Amazon S3, this value would likely be the S3 bucket name. For a persistent volume backed by NFS, this would likely be the volume mount name. For HTTP, this must be the server hostname.
Authentication credentials for assets stored in Amazon S3.
{
"useAssumedIAMRole": "true"
}
An enumeration to capture the supported storage types.
Amazon S3
Any persistent volume claim that can be defined/supported in Kubernetes
A HTTP/HTTPS server (required for HLS)
Represents a (micro)service needed to support some function of the overall system.
{
"serviceName": "AnalysesService",
"serviceId": "f533db73-0f9a-4805-9651-c5dcd519dc37",
"deploymentId": "d8e89059-c7dd-454e-92ab-f61e4107d33b",
"status": "READY"
}
The name of the service
The UUID associated with the service
The UUID associated with the service’s deployment within the system
The status of the service
The error associated with the service when the status is NOT READY
An enumeration to capture the supported system/service statuses.
Indicates that the system/service is operational
Indicates that the system/service is not operational
Configuration options for temporal alignment
{
"minSeconds": 5,
"maxSeconds": 120,
"maxSecondsHighFPS": 120
}
The minimum duration of the misalignment between two videos in seconds
The maximum duration of the misalignment between two videos in seconds
The maximum duration of the misalignment between two videos, in seconds, for assets with a framerate of 120 frames per second or higher
The system version information for the deployed API.
{
"commitBranch": "vod-production/release/2.14.2",
"commitHash": "facc2ef0a3c8ebc10819dc1218748f8d2cbfafd9",
"commitTime": "2022-05-02T18:58:44Z",
"stamped": "true",
"versionString": "2.14.2-12"
}
The git branch where the release was committed.
The hashcode associated with the release’s git commit.
The UTC timestamp associated with the release’s git commit.
Indicates if the version was stamped.
The alphanumeric system version for the API.
The packet identifier (PID) used to identify the video stream within the asset that you are interested in working with. In case of Multiple Program Transport Stream (MPTS), use this value to specify the Program ID (PID) of the video to be processed.
{
"type": "VideoPID",
"identifier": 1
}
Captures the schema type for use in oneOf semantics.
Represents the desired video packet index.
The packet identifier (PID) used to identify the video stream within the asset that you are interested in working with. In case of Multiple Program Transport Stream (MPTS), use this value to specify the Program ID (PID) of the video to be processed.
{
"type": "VideoPIDHex",
"identifier": "0x101"
}
Captures the schema type for use in oneOf semantics.
Represents the desired video packet index as a hexadecimal value in the form 0x101
Whether or not to include this segment type in the content layout timeline
The minimum duration in seconds for this segment type to be included in the content layout timeline. Segments of this type shorter than the specified duration will be treated as motion video.
The viewer type for which scores will be calculated
Represents a typical, untrained viewer
Represents a trained viewer schooled at spotting and judging video anomalies.
Represents a studio viewer trained in assessing the impact of video anomalies on the creator’s artistic intent.
A specification of the environment under which the content is viewed
{
"device": {
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
},
"viewerType": "TYPICAL"
}
The display device
{
"name": "oled65c9pua",
"resolution": {
"width": 1920,
"height": 1080
}
}
The viewer type
The response payload of the POST on analyses. This response will contain an Analysis object for each analysis represented in the NewAnalysis request body. A given analysis can either be submitted successfully or not. In both cases, the Analysis will contain attribute values that can be used to either fetch the resulting frame scores when successful, or address the error condition when a failure occurs (please see Analysis for more details).
The response payload of the GET on cache which lists the number of files and overall size of the frames and map files in each cache. Most deployments will have only one file/map cache.
{
"caches": [
{
"cacheId": "533db73-0f9a-4805-9651-c5dcd519dc37",
"numberOfFiles": 15182,
"sizeOfFiles": 1073741824,
"humanReadableSizeOfFiles": "1.0 G"
}
]
}
The list of known file/map caches
The response payload of the GET on readyz which contains the overall system readiness as well as the readiness of the individual (micro)services that comprise the system.
{
"checks": [
{
"deploymentId": "d8e89059-c7dd-454e-92ab-f61e4107d33b",
"serviceId": "f533db73-0f9a-4805-9651-c5dcd519dc37",
"serviceName": "AnalysesService",
"status": "READY"
},
{
"deploymentId" : "d2bcd3f6-79c6-43c7-9462-afa614d25176",
"serviceId" : "eb1e4722-461f-438b-95df-7bf3c6e30989",
"serviceName" : "AnalysisLifecycleService",
"status" : "READY"
}
],
"outcome": "READY"
}
An array of the individual readiness checks performed on the services that comprise the system.
The overall system readiness. All services that comprise the system must be ready in order for this value to be READY.
The response payload of the GET on version which contains the system version information.
{
"version": {
"commitBranch": "vod-production/release/2.14.2",
"commitHash": "facc2ef0a3c8ebc10819dc1218748f8d2cbfafd9",
"commitTime": "2022-05-02T18:58:44Z",
"stamped": "true",
"versionString": "2.14.2-12"
}
}
The system version information for the API.
The schema for the 400 response.
The description of the response.
The reason for the response.
{
"description": "The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.",
"reason": "Bad Request"
}
The schema for the 403 response.
The description of the response.
The reason for the response.
{
"description": "The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated.",
"reason": "Forbidden"
}
The schema for the 404 response
The description of the response.
The reason for the response.
The schema for the 415 response.
The description of the response.
The reason for the response.
{
"description": "Used when the request is asking for a content-type that is not supported (i.e. XML when you only support JSON). The SSIMWAVE VOD Monitor REST API currently only supports JSON content-type (i.e. application/json).",
"reason": "Unsupported Media Type"
}
The schema for the 500 response.
The description of the response.
The reason for the response.
{
"description": "The server encountered an unexpected condition which prevented it from fulfilling the request.",
"reason": "Internal Server Error"
}
The schema for the 503 response.
The description of the response.
The reason for the response.
{
"description": "The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay.",
"reason": "Server Unavailable"
}
The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.
Body
{
"description": "The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.",
"reason": "Bad Request"
}
The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated.
Body
{
"description": "The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated.",
"reason": "Forbidden"
}
The server cannot find the requested resource.
Body
{
"description": "The server cannot find the requested resource.",
"reason": "Not Found"
}
Used when the request is asking for a content-type that is not supported (i.e. XML when you only support JSON). The SSIMWAVE VOD Monitor REST API currently only supports JSON content-type (i.e. application/json).
Body
{
"description": "Used when the request is asking for a content-type that is not supported (i.e. XML when you only support JSON). The SSIMWAVE VOD Monitor REST API currently only supports JSON content-type (i.e. application/json).",
"reason": "Unsupported Media Type"
}
The server encountered an unexpected condition which prevented it from fulfilling the request.
Body
{
"description": "The server encountered an unexpected condition which prevented it from fulfilling the request.",
"reason": "Internal Server Error"
}
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay.
Body
{
"description": "The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay.",
"reason": "Server Unavailable"
}
The SSIMPLUS® Analyzer’s operation can be controlled using configuration parameters. For most deployments of the SSIMPLUS® Analyzer (Cluster), these configuration values are contained within a JSON document (config.json) and, once set, apply to all analyses submitted via the REST API (unless overridden). Changing the config.json requires updating a Kubernetes configuration map and is not something that is generally expected of an end user (although this can be done using the Kubernetes Dashboard). However, all the values in the config.json can be overridden on a per-analysis basis when specifying the NewAnalysis data object (either via the REST API or the Insights VOD Monitor App), giving you the best of both worlds: suitable defaults are always present, but per-analysis customization is both supported and straightforward.
For those using the SSIMPLUS® Analyzer Docker image directly, configuration values are passed both within a JSON document (config.json) and on the CLI as program switches/options. Since the Docker image deployment model requires manual execution of the SSIMPLUS® Analyzer (i.e. via CLI, scripts, etc.), you are free to change the configuration parameters for any given analysis as you see fit.
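For cluster deployments, a per-analysis override is supplied in the body of the POST to /analyses. The sketch below is illustrative only: the exact NewAnalysis schema is documented elsewhere in this reference, so treat every field name and the host below as placeholders, not the authoritative request format.

```python
# Illustrative POST to /analyses with a per-analysis config override.
# The payload field names and the host are placeholders; consult the
# NewAnalysis schema for the real request body.

import json
import urllib.request

payload = {
    # Hypothetical override of the config.json defaults for this analysis.
    "analyzerConfig": {
        "devices": ["oled65c9pua", "iphone13promax"],
    },
}

request = urllib.request.Request(
    "http://vod-monitor.example.com/analyses",     # placeholder host
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},  # the API accepts JSON only
    method="POST",
)
# response = urllib.request.urlopen(request)  # run against a live deployment
```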
Scoring Devices
The SSIMPLUS® Analyzer can be configured to provide device-adaptive scores for a default set of devices. A table of all the available devices is provided below. To control the default devices used in your deployment, add the name from the Code column in the table below for each of the desired devices to a devices array in the Analyzer’s JSON configuration document as follows:
{
"devices": [
"ssimpluscore",
"ipad2021pro12.9",
"xl2420t",
"macbookpro16.2inch",
"iphone13promax",
"oled65c9pua"
],
.
.
.
}
Unless a different list is requested, VOD Monitor installations come, by default, with the Analyzer configured to capture device-adaptive scores for the following devices (one device per category):
- iPad Pro 2021 12.9" (Tablet)
- XL2420T (Monitor)
- iPhone 13 Pro Max (Smartphone)
- Macbook Pro 16.2" (Laptop)
- OLED65C9PUA (TV)
Available Devices
Name | Code | Manufacturer | Category | HDR | SBS Compatible |
---|---|---|---|---|---|
SSIMPLUSCore | ssimpluscore | SSIMPLUS | SSIMPLUS | Yes | No |
55U8G | 55u8g | Hisense | TV | Yes | No |
65H9G | 65h9g | Hisense | TV | Yes | No |
65SM9500PUA | 65sm9500pua | LG | TV | Yes | No |
EA9800 | ea9800 | LG | TV | No | No |
OLED55C7P | oled55c7p | LG | TV | Yes | Yes |
OLED55C8PUA | oled55c8pua | LG | TV | Yes | Yes |
OLED55C9PUA | oled55c9pua | LG | TV | Yes | Yes |
OLED55E7N | oled55e7n | LG | TV | Yes | Yes |
OLED65C2PUA | oled65c2pua | LG | TV | Yes | Yes |
OLED65C7P | oled65c7p | LG | TV | Yes | Yes |
OLED65C9PUA | oled65c9pua | LG | TV | Yes | Yes |
OLED65G6P | oled65g6p | LG | TV | Yes | Yes |
OLED75C9PUA | oled75c9pua | LG | TV | Yes | Yes |
AS600 | as600 | Panasonic | TV | No | No |
TX-40CX680B | tx-40cx680b | Panasonic | TV | No | No |
TX65JZ2000B | tx65jz2000b | Panasonic | TV | Yes | Yes |
VT60 | vt60 | Panasonic | TV | No | No |
50PUT6400 | 50put6400 | Philips | TV | No | No |
F8500 | f8500 | Samsung | TV | No | No |
H7150 | h7150 | Samsung | TV | No | No |
HU9000 | hu9000 | Samsung | TV | No | No |
QN55Q8FNBFXZA | qn55q8fnbfxza | Samsung | TV | Yes | No |
QN55QN90A | qn55qn90a | Samsung | TV | Yes | No |
QN65QN900A | qn65qn900a | Samsung | TV | Yes | No |
QN75QN900A | qn75qn900a | Samsung | TV | Yes | No |
QN85QN900A | qn85qn900a | Samsung | TV | Yes | No |
UE40JU6400 | ue40ju6400 | Samsung | TV | No | No |
UE55JS9000T | ue55js9000t | Samsung | TV | Yes | No |
KD-55X8509C | kd-55x8509c | Sony | TV | No | No |
PVMX550 | pvmx550 | Sony | TV | Yes | No |
X9 | x9 | Sony | TV | No | No |
XBR-55A8F | xbr-55a8f | Sony | TV | Yes | No |
XBR75Z9F | xbr75z9f | Sony | TV | Yes | No |
XBRX950G | xbrx950g | Sony | TV | Yes | No |
55R646 | 55r646 | TCL | TV | Yes | No |
65Q825 | 65q825 | TCL | TV | Yes | No |
PQ9 | pq9 | Vizio | TV | Yes | No |
B296CL | b296cl | Acer | Monitor | No | No |
Pro Display XDR | prodisplayxdr | Apple | Monitor | Yes | No |
iMac 21.5 4K | imac2154k | Apple | Monitor | No | No |
iMac 27 5K | imac275k | Apple | Monitor | No | No |
iMac 27inch | imac27inch | Apple | Monitor | Yes | No |
VG27HE | vg27he | Asus | Monitor | No | No |
XL2420T | xl2420t | BenQ | Monitor | No | No |
DP-V2421 | dpv2421 | Canon | Monitor | Yes | No |
DP-V3120 | dpv3120 | Canon | Monitor | Yes | No |
AW2721D | aw2721d | Dell | Monitor | Yes | No |
U2713HM | u2713hm | Dell | Monitor | No | No |
UP3216Q | up3216q | Dell | Monitor | No | No |
27MP35HQ | 27mp35hq | LG | Monitor | No | No |
38GN50TB | 38gn50tb | LG | Monitor | Yes | No |
38GN950B | 38gn950b | LG | Monitor | Yes | No |
LT3053 | lt3053 | Lenovo | Monitor | No | No |
PA242W | pa242w | NEC | Monitor | No | No |
436M | 436m | Philips | Monitor | Yes | No |
BVM-X300 | bvmx300 | Sony | Monitor | Yes | Yes |
Aspire S7 | aspires7 | Acer | Laptop | No | No |
Macbook Air 13inch | macbookair13inch | Apple | Laptop | No | No |
Macbook Pro | macbookpro | Apple | Laptop | No | No |
Macbook Pro 14inch | macbookpro14inch | Apple | Laptop | Yes | No |
Macbook Pro 16.2inch | macbookpro16.2inch | Apple | Laptop | Yes | No |
XPS 13 | xps13 | Dell | Laptop | Yes | No |
XPS 15 | xps15 | Dell | Laptop | No | No |
ThinkPad W540 | thinkpadw540 | Lenovo | Laptop | No | No |
iPhone 13 | iphone13 | Apple | Phone | Yes | No |
iPhone 13 Mini | iphone13mini | Apple | Phone | Yes | No |
iPhone 13 Pro | iphone13pro | Apple | Phone | Yes | No |
iPhone 13 Pro Max | iphone13promax | Apple | Phone | Yes | No |
iPhone 5S | iphone5s | Apple | Phone | No | No |
iPhone 6 | iphone6 | Apple | Phone | No | No |
iPhone 6 Plus | iphone6plus | Apple | Phone | No | No |
iPhone X | iphonex | Apple | Phone | Yes | No |
One (M8) | onem8 | HTC | Phone | No | No |
Nexus 6 | nexus6 | Motorola | Phone | No | No |
OnePlus 9 | oneplus9 | OnePlus | Phone | Yes | No |
OnePlus 9 Pro | oneplus9pro | OnePlus | Phone | Yes | No |
Galaxy Note 4 | galaxynote4 | Samsung | Phone | No | No |
Galaxy S21 | galaxys21 | Samsung | Phone | Yes | No |
Galaxy S21 Plus | galaxys21plus | Samsung | Phone | Yes | No |
Galaxy S21 Ultra | galaxys21ultra | Samsung | Phone | Yes | No |
Galaxy S5 | galaxys5 | Samsung | Phone | Yes | No |
Galaxy S6 Edge | galaxys6edge | Samsung | Phone | No | No |
iPad 2017 | ipad2017 | Apple | Tablet | Yes | No |
iPad 2021 | ipad2021 | Apple | Tablet | No | No |
iPad Air 2 | ipadair2 | Apple | Tablet | No | No |
iPad Mini 2 | ipadmini2 | Apple | Tablet | No | No |
iPad Mini 2021 | ipad2021mini | Apple | Tablet | No | No |
iPad Pro | ipadpro | Apple | Tablet | Yes | No |
iPad Pro 2021 12.9inch | ipad2021pro12.9 | Apple | Tablet | Yes | No |
Nexus 7 | nexus7 | Asus | Tablet | No | No |
Nexus 9 | nexus9 | HTC | Tablet | No | No |
Surface | surface | Microsoft | Tablet | No | No |
Surface Pro 8 | surfacepro8 | Microsoft | Tablet | Yes | No |
Surface Studio 2 | surfacestudio2 | Microsoft | Tablet | No | No |
Galaxy Tab S | galaxytabs | Samsung | Tablet | No | No |
Business Class FHD M1 | businessclassfhdm1 | Panasonic | IFE | No | No |
Business Class FHD M2 | businessclassfhdm2 | Panasonic | IFE | No | No |
Business Class UHD M3 | businessclassuhdm3 | Panasonic | IFE | No | No |
Economy Class FHD S2 | economyclassfhds2 | Panasonic | IFE | No | No |
Economy Class FHD S3 | economyclassfhds3 | Panasonic | IFE | No | No |
Economy Class HD S1 | economyclasshds1 | Panasonic | IFE | No | No |
First Class FHD L1 | firstclassfhdl1 | Panasonic | IFE | No | No |
First Class UHD L2 | firstclassuhdl2 | Panasonic | IFE | No | No |
SmallScreen | smallscreen | SSIMPLUS | SmallScreen | No | No |
You may find it useful to compare reference and subject assets that differ in their frame rates. The SSIMPLUS® Analyzer supports the following cross-frame rate criteria:
- The frame rate of the reference video is the same as the frame rate of the test video.
- The frame rate of the reference video is two times the frame rate of the test video.
- The difference between the frame rates of the reference video and the test video is less than 0.01.
- The difference between the frame rate of the reference video and two times the frame rate of the test video is less than 0.01.
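The four general rules above can be transcribed directly into a compatibility check. This is a sketch under those rules only; the function name is ours, and the Drop-Frame/Non Drop-Frame pairs listed next are handled by the Analyzer separately and are not covered here.

```python
# Direct transcription of the four general cross-frame rate rules.
# Hypothetical helper; DF/NDF special cases are handled separately.

def frame_rates_compatible(reference_fps: float, test_fps: float) -> bool:
    return (
        reference_fps == test_fps                    # same rate
        or reference_fps == 2 * test_fps             # reference is double
        or abs(reference_fps - test_fps) < 0.01      # nearly identical
        or abs(reference_fps - 2 * test_fps) < 0.01  # nearly double
    )
```

For example, a 50 fps reference against a 25 fps test passes rule 2, and a 23.98 fps reference against a 23.976 fps test passes rule 3 (difference 0.004).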
In addition to the general cross-frame rate rules above, the SSIMPLUS® Analyzer has been enhanced to support a number of common cross-rate scenarios arising when comparing Drop-Frame (DF) with Non Drop-Frame (NDF) videos, including:
- 23.98 vs 24
- 24 vs 23.98
- 29.97 vs 30
- 30 vs 29.97
- 59.94 vs 60
- 60 vs 59.94
The SSIMPLUS® Analyzer requires all reference video assets in a full-reference (FR) analysis and all subject assets used in a no-reference (NR) analysis to be at least 235x235 for successful processing.
The SSIMPLUS® Analyzer supports the following video formats:
Category | Supported |
---|---|
Media container formats | AV1, AVI, AVR, AVS, DV, FLV, GXF, H261, H263, H264, HEVC, HLS, IFF, IVR, LVF, LXF, M4V, MJ2, Mjpeg, Mjpeg_2000, MOV, MP4, MPEG-PS, MPEG-TS, MKV, MPV, MVE, MXF, VP9, V210, WebM, YUV, 3G2, 3GP, Y4M |
Video codecs | Apple ProRes, AV1, Google VP9, H.261, H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2, H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10, HEVC, JPEG 2000, JPEG-LS, MPEG-1 video, MPEG-2 video, MPEG-4 part 2, On2 VP3, On2 VP5, On2 VP6, On2 VP7, On2 VP8, QuickTime Animation video, Theora, Windows Media Video 7, Windows Media Video 8, Windows Media Video 9 |
HDR formats | HDR10, HLG, Dolby Vision® |