Quantifying Quality in Video Technology
Elevating Video Quality With the Video Compression Score Metric - $15
Date: April 3, 2024
Topics: 2024 BEITC Proceedings, Quantifying Quality in Video Technology
In today’s media landscape, ensuring the delivery of high-quality content has become a top priority for service providers. Not only does it impact the viewing experience for an audience that is more discerning than ever, but it’s directly tied to customer satisfaction and retention, and therefore the bottom line. Compounding the issue is the ever-expanding volume of content being delivered. To effectively manage this enormous data flow, video content must be compressed before transmission, a process that can result in the degradation of content quality. The level of degradation depends on the temporal and spatial complexity of the content and the encoding methods adopted by the transcoder. With low-motion content like news programs, there are very small changes within frames (spatial domain) and across frames (temporal domain) that only require minor adjustments in encoding.
With high-motion or high-textured content like action movies or car races, there are major changes in the spatial and temporal domains, which necessitate an increase in the bits required for encoding. To handle this increased bandwidth requirement, the video encoder must either supply the number of bits the content demands or be adaptive enough to change the encoding method. If not handled correctly, artifacts can occur that impact the viewing experience, such as blockiness, blurriness, flickering, and more. To estimate compression degradation — a metric known as the video compression score — reference-based or non-reference-based methods can be used. The reference-based approach involves comparing the compressed video with the original content, while the non-reference-based approach doesn’t require this comparison.
This paper will focus on a non-reference-based approach for calculating the video compression score that utilizes the encoded video parameters of the compression bitstream. Attendees will discover the advantages of this approach for a variety of scenarios. These will include real-time quality monitoring for live video streaming, video conferencing, or security applications in which a reference video is often not available, and providing an objective quality assessment for automation, quality control, and troubleshooting purposes. To achieve the best results, they will learn how an AI neural network can be trained to estimate the video compression score; categorize the transcoded video as unacceptable, marginal, acceptable, and excellent; and correlate the results with well-known video quality methods such as Netflix’s VMAF. The result is a highly accurate, efficient, and cost-effective metric that can be used in real-time applications across the IPTV, OTT, and post-production markets.
Shekhar Madnani | Interra Systems | Cupertino, Calif., United States
Yogender Singh Choudhary | Interra Systems | Cupertino, Calif., United States
Muneesh Sharma | Interra Systems | Cupertino, Calif., United States
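The non-reference approach the abstract describes can be illustrated with a minimal sketch: derive features from bitstream parameters, map them to a 0–100 score, and bucket the result into the four categories named above. The feature names (average QP, bits per pixel, skip ratio), the linear weights, and the bucket thresholds below are illustrative assumptions, not values from the paper; in the authors' method the mapping is a trained AI neural network correlated against a reference metric such as VMAF.

```python
# Sketch of a non-reference compression score computed from bitstream
# parameters. All feature names, weights, and thresholds here are
# hypothetical placeholders, not values from the paper.

def extract_features(avg_qp, bits_per_pixel, skip_ratio):
    """Normalize hypothetical bitstream-level parameters into a feature vector."""
    return [avg_qp / 51.0, bits_per_pixel, skip_ratio]  # H.264/HEVC QP range is 0-51

# Stand-in for the trained network: a single linear unit. Real weights
# would be learned by regressing against a reference metric such as VMAF.
WEIGHTS = [-50.0, 100.0, -5.0]
BIAS = 65.0

def compression_score(features):
    """Map features to a 0-100 score (higher = less compression degradation)."""
    raw = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return max(0.0, min(100.0, raw))

def categorize(score):
    """The four-way quality bucketing named in the abstract."""
    if score >= 80.0:
        return "excellent"
    if score >= 60.0:
        return "acceptable"
    if score >= 40.0:
        return "marginal"
    return "unacceptable"
```

With these placeholder weights, a gently quantized stream (low QP, generous bit budget, few skipped blocks) lands in a higher bucket than a starved, heavily quantized one, which is the qualitative behavior a trained model would exhibit.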
Making the CIE Chart Indispensable for Color Grading! - $15
Date: April 3, 2024
Topics: 2024 BEITC Proceedings, Quantifying Quality in Video Technology
With the advent of Wide Color Gamut (WCG), constraining colors to a smaller gamut while grading in a larger one is a common task in post-production. For example, one might grade in DCI-P3 while constraining the colors to ITU-R BT.709. During this process, colorists need to determine the extent of color excursions outside the gamut of interest and then decide whether to remap the colors or to ignore the excursions and allow the colors to clip. Most colorists use a combination of traditional tools like waveform monitors, along with reference monitors and work experience, to make that determination. The CIE chromaticity chart provides a 2D view of the chromaticity content of an image, but the general feeling in the industry is that the chart is complex and difficult to use, so it has been mostly confined to textbooks and academic publications. This paper proposes a few innovations [1] that help demystify the CIE chart and enable it to provide instantaneous, useful information that helps colorists make quick decisions during color grading. The first step involves “linearizing” the CIE chart by effectively unrolling it and creating Gamut Excursion Measurements (GEMs), which provide a quantitative snapshot of the excursions outside a gamut of interest. These excursions can then be visualized using a false-color heat map to support quick determinations. Adding luminance qualification to the CIE chart further constrains the chromaticity visualization to luminance ranges of interest. Combinations of these enhancements provide tools for fast, effective decisions during color grading.
Lakshmanan Gopishankar | Telestream LLC | Beaverton, Oregon, United States
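The gamut-excursion idea is straightforward to prototype: test whether a CIE xy chromaticity point lies inside the BT.709 triangle and, if not, measure how far outside it falls. The sketch below uses the standard BT.709 primary chromaticities; the scalar `excursion` value (distance to the nearest gamut edge in xy space) is a simplified stand-in for the paper's GEMs, chosen here purely for illustration.

```python
import math

# ITU-R BT.709 primaries in CIE 1931 xy coordinates (standard values).
BT709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

def _cross(o, a, p):
    # z-component of (a - o) x (p - o): which side of edge o->a point p is on.
    return (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])

def inside_gamut(pt, tri=BT709):
    # A point is inside the triangle when it lies on the same side of all edges.
    signs = [_cross(tri[i], tri[(i + 1) % 3], pt) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

def _dist_to_segment(p, a, b):
    # Euclidean distance from point p to the segment a-b.
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * ax + (p[1] - a[1]) * ay) / (ax * ax + ay * ay)
    t = max(0.0, min(1.0, t))
    return math.hypot(p[0] - (a[0] + t * ax), p[1] - (a[1] + t * ay))

def excursion(pt, tri=BT709):
    # Simplified GEM-style number: 0 inside the gamut, otherwise the
    # xy-distance to the nearest gamut boundary edge.
    if inside_gamut(pt, tri):
        return 0.0
    return min(_dist_to_segment(pt, tri[i], tri[(i + 1) % 3]) for i in range(3))
```

For example, D65 white (0.3127, 0.3290) sits inside BT.709 and yields zero excursion, while the DCI-P3 red primary (0.680, 0.320) falls outside it and yields a positive excursion, exactly the kind of per-pixel quantity a false-color heat map could visualize.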
Objective Evaluation System for 2K Video Quality Transcoded from 4K Source using Dynamic Thresholds and Statistical Methods - $15
Date: April 3, 2024
Topics: 2024 BEITC Proceedings, Quantifying Quality in Video Technology
Objective evaluation methodologies can be applied only when comparing images of the same resolution, which leaves several problems in assessing video quality across different resolutions. Our proposed method employs a spatial two-dimensional filter to enable pixel-by-pixel comparisons, automating the evaluation of 2K video quality transcoded from a 4K source. Detection thresholds are adapted for spatially and temporally complex videos, minimizing false alarms. For noise detection in specific regions, a small-area PSNR metric is applied with a statistical approach for enhanced accuracy. This automated approach matches or exceeds the accuracy of human review, offering cost reduction and improved quality assurance in 4K/2K productions.
Nariaki Takahashi | Japan Broadcasting Corporation | Shibuya, Tokyo, Japan
Keita Kataoka | Japan Broadcasting Corporation | Shibuya, Tokyo, Japan
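The pipeline this abstract outlines — spatially filter the 4K source down to 2K so a pixel-by-pixel comparison is possible, then apply PSNR over small areas to localize noise — can be sketched as follows. The 2×2 box filter, 8×8 block size, and fixed 35 dB threshold are illustrative assumptions only; the paper's actual filter is unspecified here, and its detection thresholds are adapted dynamically to spatial and temporal complexity rather than fixed.

```python
import math

def box_downsample(img):
    # 2x2 average as a stand-in spatial filter: a 4K luma plane becomes a
    # 2K plane that can be compared pixel-by-pixel with the transcoded output.
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def block_psnr(ref, test, bx, by, size=8, peak=255.0):
    # PSNR over one small area (block) rather than the whole frame.
    mse = 0.0
    for y in range(by, by + size):
        for x in range(bx, bx + size):
            d = ref[y][x] - test[y][x]
            mse += d * d
    mse /= size * size
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def flag_noisy_blocks(ref, test, size=8, threshold=35.0):
    # Return top-left coordinates of blocks whose local PSNR falls below the
    # detection threshold (fixed here; content-adaptive in the paper).
    flagged = []
    for by in range(0, len(ref), size):
        for bx in range(0, len(ref[0]), size):
            if block_psnr(ref, test, bx, by, size) < threshold:
                flagged.append((bx, by))
    return flagged
```

Confining PSNR to small blocks is what lets localized transcoding noise stand out: a defect concentrated in one 8×8 region barely moves a whole-frame PSNR, but drives that one block's PSNR well below the threshold.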