Verifying Video Signals Using Computer Vision and Machine Learning Techniques
High-quality video streams pass through various processes inside media workflows, including pre-processing, transcoding, editing, and post-processing. The media transformations in these processes can alter the properties of the intermediate video, potentially disrupting subsequent processes or degrading the final output video quality experienced by viewers. Reliable content delivery necessitates verification of video data to ensure an acceptable quality of experience (QoE). Verification of video data is not limited to measuring the degradation in perceived quality on the viewing screen; it also covers validation of video parameters that affect the proper functioning of media devices. For example, Y, Cb, and Cr values altered by lossy compression can produce out-of-gamut RGB data and thereby cause erroneous behavior in display devices. Similarly, parameters such as light levels, black bar widths, color bar types, telecine patterns, photosensitive epilepsy (PSE) levels and patterns, field order, and scan type (interlaced or progressive) also need validation before distribution. This paper explores critical video properties and demonstrates how computer vision, image processing, and machine learning techniques can measure and validate these properties and detect defects. Experiments that use machine learning (ML) techniques to quantify the quality degradation caused by lossy compression of video data are also discussed, along with the challenges and future directions for improving the accuracy of measurement and detection.
Shekhar Madnani, Raman Kumar Gupta, Siddharth Gupta, Saurabh Jain | Interra Systems, Inc. | Cupertino, Calif., United States
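The abstract notes that lossy compression can shift Y, Cb, and Cr values so that the decoded samples map to out-of-gamut RGB. Below is a minimal Python sketch of one way such a check could be performed, assuming 8-bit, limited-range BT.709 video; it is not the authors' implementation, and the function name and synthetic test frame are illustrative only.

```python
# Minimal sketch (assumption, not the paper's method): flag YCbCr samples that
# convert to out-of-gamut RGB, assuming 8-bit limited-range BT.709 video.
import numpy as np

def out_of_gamut_ratio(y, cb, cr):
    """Return the fraction of pixels whose YCbCr values fall outside the
    valid 8-bit RGB range [0, 255] after BT.709 limited-range conversion."""
    y = y.astype(np.float64) - 16.0
    cb = cb.astype(np.float64) - 128.0
    cr = cr.astype(np.float64) - 128.0

    # Inverse BT.709 matrix for 8-bit limited-range (studio swing) video.
    r = 1.164384 * y + 1.792741 * cr
    g = 1.164384 * y - 0.213249 * cb - 0.532909 * cr
    b = 1.164384 * y + 2.112402 * cb

    # A pixel is out of gamut if any reconstructed component leaves [0, 255].
    oog = (r < 0) | (r > 255) | (g < 0) | (g > 255) | (b < 0) | (b > 255)
    return float(np.count_nonzero(oog)) / oog.size

# Example: a synthetic 2x2 4:4:4 frame with deliberately extreme chroma values.
y_plane = np.array([[235, 128], [16, 200]], dtype=np.uint8)
cb_plane = np.array([[240, 128], [16, 128]], dtype=np.uint8)
cr_plane = np.array([[240, 128], [16, 128]], dtype=np.uint8)
print(f"out-of-gamut ratio: {out_of_gamut_ratio(y_plane, cb_plane, cr_plane):.2f}")
```

In this synthetic example, the two pixels with extreme chroma convert to RGB values outside [0, 255], so half of the frame is flagged; a production check would additionally account for bit depth, chroma subsampling, and the stream's signaled color matrix.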