Topics
- 2025 BEITC Proceedings
- Immersive Audio, Satellite and OTT Delivery
- Innovations in Live Production and Broadcast Workflows
- IP Networks and the Broadcast Chain: Fast Friends
- AI Applications: Sports, Newsrooms and Archives
- Making ATSC 3.0 Better than Ever
- AM Radio: Measurements and Modeling
- Making Radio Better Than Ever
- Brigital: Integrating Broadcast and Digital
- Production Advancements: Avatars and Immersive Content
- Broadcast Positioning System (BPS): Resilience and Precision
- Resilience, Safety and Protection for Broadcast Service
- Cybersecurity for Broadcasters
- Streaming Improvements: Low Latency and Multiview
- Embracing the Cloud: Transforming Broadcast Operations with ATSC 3.0 and Broadband Technologies
- Enhancing Video Streaming Quality and Efficiency
- 5G in Broadcast Spectrum and Video Quality Metrics
- Getting the Most out of ATSC 3.0
- AI Applications: Captions, Content Detection and Advertising Management
- 2024 BEITC Proceedings
- 2023 BEITC Proceedings
- 2022 BEITC Proceedings
- 2021 BEITC Proceedings
- 2020 BEITC Proceedings
5G in Broadcast Spectrum and Video Quality Metrics
Deploying 5G Broadcast in UHF Spectrum - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, 5G in Broadcast Spectrum and Video Quality Metrics
We describe how to deploy 5G Broadcast—a technology built on the LTE air-interface backbone—in the broadcast UHF spectrum, with a different channelization from cellular systems. LTE-based systems have classically used channelization in segments of 1.4, 3, 5, 10, 15 and 20 MHz, none of which is an exact match for channels in the broadcast UHF spectrum, which are typically 6, 7 or 8 MHz wide. We first describe how new supportable channel bandwidths for the physical multicast channel (PMCH) were added to the 5G Broadcast standards while minimizing the changes to the 5G Broadcast synchronization signals and their associated bandwidths. Specifically, we describe how a small, backwards-compatible bandwidth for the synchronization signals (transmitted in the Cell Acquisition Subframes, CASs) was leveraged to indicate a larger, UHF-compliant bandwidth for the PMCH. We then describe the physical layer signals and parameters—such as the reference signals (RSs) and Transport Block Sizes (TBSs)—that were adapted to the new PMCH bandwidths, followed by a discussion of the higher-layer signaling via Radio Resource Control (RRC) needed to configure them. Finally, we highlight the subsequent addition of the broadcast UHF bands (with their respective band numbers) to the 3GPP RAN4 specifications, which was crucial in enabling broadcasters to deploy 5G Broadcast in their allocated spectrum.
Ayan Sengupta, Javier Rodriguez Fernandez, Alberto Rico Alvarino | Qualcomm Technologies Incorporated | San Diego, Calif., United States
Thomas Stockhammer | Qualcomm CDMA Technologies GmbH | Munich, Germany
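The central mechanism in the abstract above, signalling a wider UHF-compliant PMCH bandwidth from within a small, backwards-compatible CAS bandwidth, can be illustrated with a short sketch. The index values, resource-block counts and guard fraction below are illustrative assumptions, not the values standardized by 3GPP; only the 180 kHz LTE resource block and the 6/7/8 MHz broadcast channel widths come from the abstract.

```python
# Illustrative sketch (not from the paper): a narrow, backward-compatible
# CAS-signalled index selects a wider PMCH bandwidth for the data subframes.
# RB counts and the guard fraction are hypothetical placeholders.

RB_HZ = 12 * 15_000  # one LTE resource block: 12 subcarriers x 15 kHz = 180 kHz

# Hypothetical mapping: index carried in the Cell Acquisition Subframe (CAS)
# -> number of resource blocks used by the physical multicast channel (PMCH).
CAS_INDEX_TO_PMCH_RBS = {0: 25, 1: 30, 2: 35, 3: 40}  # placeholder values

BROADCAST_CHANNEL_HZ = {"6 MHz": 6e6, "7 MHz": 7e6, "8 MHz": 8e6}

def pmch_occupied_bw_hz(cas_index: int) -> float:
    """Occupied PMCH bandwidth implied by a CAS-signalled index (sketch)."""
    return CAS_INDEX_TO_PMCH_RBS[cas_index] * RB_HZ

def fits_channel(cas_index: int, channel: str, guard_fraction: float = 0.05) -> bool:
    """Check the implied PMCH bandwidth fits a UHF broadcast channel with guard bands."""
    usable = BROADCAST_CHANNEL_HZ[channel] * (1.0 - guard_fraction)
    return pmch_occupied_bw_hz(cas_index) <= usable

if __name__ == "__main__":
    for idx in CAS_INDEX_TO_PMCH_RBS:
        bw_mhz = pmch_occupied_bw_hz(idx) / 1e6
        ok = [ch for ch in BROADCAST_CHANNEL_HZ if fits_channel(idx, ch)]
        print(f"CAS index {idx}: PMCH ~{bw_mhz:.2f} MHz occupied, fits: {ok}")
```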
Off-piste 5G in the Broadcast Auxiliary Service Band - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, 5G in Broadcast Spectrum and Video Quality Metrics
5G New Radio (NR) can provide flexible, high-capacity and low-latency networks suitable for broadcast content acquisition or delivery, but access to suitable spectrum can be challenging. One of the enablers for private network deployments is shared spectrum licensing, such as the upper n77 band (3.8–4.2 GHz) available in the UK and elsewhere in Europe. The Third Generation Partnership Project (3GPP) was created to develop mobile standards for WCDMA and TD-SCDMA and their respective core networks, and has continued to publish standards as radio access technologies have progressed to 4G and 5G. These standards define frequency bands, numerologies, duplex modes and messaging (among many other things). While software-defined radio (SDR) is emerging as a viable and highly flexible solution for core and radio access network (RAN) functions, user equipment (UE) typically remains hardware-based, with modems that implement the 3GPP standards to ensure device compatibility. The flexibility of SDR RAN allows wireless radio networks based on 5G NR to be built in spectrum bands not defined by 3GPP, but there are no compatible devices to connect to them. In the USA, broadcasters have access to spectrum in the Broadcast Auxiliary Service (BAS) band (2025–2110 MHz), which coincides with the programme-making and special events (PMSE) band used in the UK and Europe. This allows rapid licensing of 10/12 MHz channels for traditional wireless camera systems, such as COFDM links; the same channels could instead be licensed for low-to-medium power private 5G NR-based networks capable of supporting multiple cameras and other IP-based workflows. This paper discusses the development of a flexible software-defined UE capable of connecting to non-3GPP 5G NR networks in BAS/PMSE spectrum.
Douglas G. Allan, Samuel R. Yoffe, Kenneth W. Barlee, Dani Anderson, Iain C. Chalmers, Malcolm R. Brew, Cameron A. Speirs, Robert W. Stewart | Neutral Wireless and University of Strathclyde | Glasgow, Scotland
Nicolas Breant, Jeremy Tastet, Sebastien Roques, Bastien Chague | AW2S | Bordeaux, France
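As a rough illustration of the frequency planning a software-defined UE would need in this non-3GPP band, the sketch below maps a candidate centre frequency in the BAS/PMSE allocation (2025–2110 MHz) to an NR-ARFCN using the sub-3 GHz global frequency raster from 3GPP TS 38.104 (5 kHz steps), and checks that a 10 MHz carrier stays inside the allocation. The chosen centre frequency and the fit check are illustrative assumptions, not values from the paper.

```python
# Sketch (assumption-labelled): picking an NR-ARFCN for a private 5G NR carrier
# inside the BAS/PMSE band (2025-2110 MHz). This band is not a 3GPP-defined
# operating band, so the band-specific checks a commercial UE would apply are
# omitted; only the TS 38.104 global frequency raster for 0-3 GHz is used.

DELTA_F_GLOBAL_HZ = 5_000          # global raster step below 3 GHz (TS 38.104)
BAS_LOW_HZ, BAS_HIGH_HZ = 2_025e6, 2_110e6

def freq_to_nr_arfcn(f_hz: float) -> int:
    """Map a sub-3 GHz RF reference frequency to its NR-ARFCN (global raster)."""
    if not 0 <= f_hz < 3_000e6:
        raise ValueError("sketch only covers the 0-3 GHz raster segment")
    n_ref = round(f_hz / DELTA_F_GLOBAL_HZ)
    if n_ref * DELTA_F_GLOBAL_HZ != f_hz:
        raise ValueError("frequency is not on the 5 kHz global raster")
    return n_ref

def carrier_fits_bas(center_hz: float, bw_hz: float) -> bool:
    """Check a carrier of the given bandwidth stays inside the BAS/PMSE allocation."""
    return (center_hz - bw_hz / 2) >= BAS_LOW_HZ and (center_hz + bw_hz / 2) <= BAS_HIGH_HZ

if __name__ == "__main__":
    center_hz = 2_067.5e6  # hypothetical mid-band centre frequency
    print("NR-ARFCN:", freq_to_nr_arfcn(center_hz))                            # 413500
    print("10 MHz carrier fits BAS band:", carrier_fits_bas(center_hz, 10e6))  # True
```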
Open-Source Low-Complexity Perceptual Video Quality Measurement with pVMAF - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, 5G in Broadcast Spectrum and Video Quality Metrics
With the rise of digital video services, viewers expect high-quality visuals, making Quality of Experience (QoE) a priority for providers. However, poor video processing can degrade visual quality, leading to detail loss and visible artifacts. Thus, accurately measuring perceptual quality is essential for monitoring QoE in digital video services. While viewer opinions are the most reliable measure of video quality, subjective testing is impractical due to its time, cost, and logistical demands. As a result, objective video quality metrics are commonly used to assess perceived quality. These models evaluate a distorted video and predict how viewers might perceive its quality. Metrics that compare the distorted video to the original source, known as full-reference (FR) metrics, are regarded as the most accurate approach. Traditional quality metrics like Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), and Peak Signal-to-Noise Ratio (PSNR) are computationally lightweight and commonly used within encoders for Video Quality Measurement (VQM) and other encoder optimization tasks. However, methods that simply measure pixel-wise differences often lack alignment with human perception, as they do not account for the complex intricacies of the Human Visual System (HVS).
In recent years, more advanced metrics have been developed to better reflect human perception by incorporating HVS characteristics. Among these, Video Multi-method Assessment Fusion (VMAF) has become a widely accepted industry standard for evaluating video quality due to its high correlation with subjective opinions. However, the high computational demand of VMAF and similar perception-based metrics limits their suitability for real-time VQM. Consequently, encoders primarily offer only PSNR and Structural Similarity Index Measure (SSIM) for full-frame quality monitoring during encoding. While not the most accurate, these metrics are the only options that can be efficiently deployed during live encoding, as more advanced VQM approaches would consume too much of the processing capacity needed for real-time encoding. To address these limitations, we introduce predictive VMAF (pVMAF), a novel video quality metric that achieves predictive accuracy similar to VMAF at a fraction of the computational cost, making it suitable for real-time applications.
pVMAF relies on three categories of low-complexity features: (i) bitstream features, (ii) pixel features, and (iii) elementary metrics. Bitstream features include encoding parameters such as the quantization parameter (QP), which provide insight into compression. Pixel features are computed on either the original or reconstructed frames to capture video attributes relevant to human perception, such as blurriness and motion. Finally, elementary metrics, such as PSNR, contribute additional distortion information. These features are extracted during encoding and fed into a regression model that predicts frame-by-frame VMAF scores. Our regression model, a shallow feed-forward neural network, is trained to replicate VMAF scores from these input features. pVMAF was initially designed for H.264/AVC; we have since extended its applicability to more recent compression standards such as HEVC and AV1. In this paper, we explain how we developed and retrained pVMAF for x264 and SVT-AV1. Experimental results indicate that pVMAF replicates VMAF predictions with high accuracy while maintaining high computational efficiency, making it well-suited for real-time quality measurement.
Jan De Cock, Axel De Decker, Sangar Sivashanmugam | Synamedia | Kortrijk, Belgium
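The description of pVMAF's architecture, low-complexity per-frame features fed into a shallow feed-forward regressor that predicts VMAF, lends itself to a minimal sketch. The snippet below is an untrained mock-up for illustration only; the feature names, normalization and layer sizes are assumptions, not the model or feature set used by the authors.

```python
# Minimal sketch of a pVMAF-style regressor (not the authors' trained model):
# a shallow feed-forward network maps low-complexity per-frame features
# (bitstream, pixel and elementary-metric features) to a predicted VMAF score.
import numpy as np

FEATURES = ["avg_qp", "blur", "motion", "psnr_y"]  # hypothetical feature vector

class ShallowRegressor:
    """One hidden layer + ReLU, scalar output clipped to the VMAF range [0, 100]."""

    def __init__(self, n_in: int, n_hidden: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, x: np.ndarray) -> float:
        h = np.maximum(x @ self.w1 + self.b1, 0.0)   # ReLU hidden layer
        out = (h @ self.w2 + self.b2)[0]             # scalar regression output
        return float(np.clip(out, 0.0, 100.0))

def frame_features(avg_qp: float, blur: float, motion: float, psnr_y: float) -> np.ndarray:
    """Assemble and roughly normalize one frame's feature vector (illustrative scaling)."""
    return np.array([avg_qp / 51.0, blur, motion, psnr_y / 60.0])

if __name__ == "__main__":
    model = ShallowRegressor(n_in=len(FEATURES))  # untrained weights, demo only
    x = frame_features(avg_qp=30.0, blur=0.2, motion=0.4, psnr_y=38.5)
    print("Predicted per-frame score (untrained demo):", model.predict(x))
```

In the paper's workflow the equivalent of `ShallowRegressor` would be trained offline against reference VMAF scores and then evaluated per frame during live encoding, which is what keeps the runtime cost far below that of computing VMAF itself.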