Topics
- 2025 BEITC Proceedings
- Enhancing Video Streaming Quality and Efficiency
- 5G in Broadcast Spectrum and Video Quality Metrics
- Getting the Most out of ATSC 3.0
- AI Applications: Captions, Content Detection and Advertising Management
- Immersive Audio, Satellite and OTT Delivery
- Innovations in Live Production and Broadcast Workflows
- IP Networks and the Broadcast Chain: Fast Friends
- AI Applications: Sports, Newsrooms and Archives
- Making ATSC 3.0 Better than Ever
- AM Radio: Measurements and Modeling
- Making Radio Better Than Ever
- Brigital: Integrating Broadcast and Digital
- Production Advancements: Avatars and Immersive Content
- Broadcast Positioning System (BPS): Resilience and Precision
- Resilience, Safety and Protection for Broadcast Service
- Cybersecurity for Broadcasters
- Streaming Improvements: Low Latency and Multiview
- Embracing the Cloud: Transforming Broadcast Operations with ATSC 3.0 and Broadband Technologies
- 2024 BEITC Proceedings
- 2023 BEITC Proceedings
- 2022 BEITC Proceedings
- 2021 BEITC Proceedings
- 2020 BEITC Proceedings
2025 BEITC Proceedings
Deploying 5G Broadcast in UHF Spectrum - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, 5G in Broadcast Spectrum and Video Quality Metrics
We describe how to deploy 5G Broadcast—a technology built on the LTE air-interface backbone—in the broadcast UHF spectrum, with a different channelization from cellular systems. LTE-based systems have classically used channelization in segments of 1.4, 3, 5, 10, 15 and 20 MHz, none of which is an exact match for channels in the broadcast UHF spectrum, which are typically channelized in segments of 6, 7 or 8 MHz. We first describe how new supportable channel bandwidths for the physical multicast channel (PMCH) were added to the 5G Broadcast standards while minimizing the changes required to the 5G Broadcast synchronization signals and their associated bandwidths. Specifically, we describe how a small, backwards-compatible bandwidth for the synchronization signals (transmitted in the Cell Acquisition Subframes, CASs) was leveraged to indicate a larger, UHF-compliant bandwidth for the PMCH. We then describe the physical-layer signals and parameters—such as the reference signals (RSs) and Transport Block Sizes (TBSs)—that were adapted to the new PMCH bandwidths, followed by a discussion of the higher-layer signaling via Radio Resource Control (RRC) needed to configure these new PMCH bandwidths. We then highlight the subsequent addition of the broadcast UHF bands (with their respective band numbers) to the 3GPP RAN4 specifications, which was crucial in enabling broadcasters to deploy 5G Broadcast in their allocated spectrum.
Ayan Sengupta, Javier Rodriguez Fernandez, Alberto Rico Alvarino | Qualcomm Technologies Incorporated | San Diego, Calif., United States
Thomas Stockhammer | Qualcomm CDMA Technologies GmbH | Munich, Germany
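A minimal sketch of the signaling idea described in the abstract above: a small, legacy-compatible CAS bandwidth carries an index that selects a wider, UHF-sized PMCH bandwidth. The index-to-bandwidth table, field names, and the 1.4 MHz CAS value are illustrative assumptions, not values taken from the 3GPP specifications.

# Illustrative sketch only: the CAS keeps a narrow, backwards-compatible
# bandwidth, while a signaled index selects a wider UHF-sized PMCH carrier.
# The table and field names are assumptions for illustration, not 3GPP values.

CAS_BANDWIDTH_MHZ = 1.4  # narrow, legacy-compatible acquisition bandwidth (assumed)

# Hypothetical mapping from a CAS-signaled index to a PMCH bandwidth
# matching broadcast UHF channelization (6, 7 or 8 MHz).
PMCH_BANDWIDTH_TABLE_MHZ = {0: 6.0, 1: 7.0, 2: 8.0}

def configure_pmch(cas_signaled_index: int) -> dict:
    """Return an illustrative PMCH configuration derived from the CAS index."""
    if cas_signaled_index not in PMCH_BANDWIDTH_TABLE_MHZ:
        raise ValueError("unknown PMCH bandwidth index")
    return {
        "cas_bandwidth_mhz": CAS_BANDWIDTH_MHZ,
        "pmch_bandwidth_mhz": PMCH_BANDWIDTH_TABLE_MHZ[cas_signaled_index],
    }

if __name__ == "__main__":
    # An 8 MHz UHF channel selected via the narrow CAS (hypothetical index 2).
    print(configure_pmch(2))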
Digital-Only Boosters for HD Radio Single-Frequency Networks - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, Making Radio Better Than Ever
2025 NAB BEIT Conference Proceedings Best Paper Award Winner
While analog FM and hybrid HD Radio boosters in Single Frequency Networks (SFNs) have been widely studied, deployed, and discussed, this paper will explore experimental observations of the first fielded high-power IBOC digital-only booster using off-the-shelf Gen4 HD Radio™ hardware. What makes this different from existing installations is that the booster transmits a digital-only MP1 signal with no host analog FM component. Topics include potential applications, appropriate timing of implementations, and impact on analog and digital signal performance in the field.
Alan Jurison | Independent Consultant | Syracuse, N.Y., United States
Jeff Baird, Mike Raide | Xperi | Columbia, Md., United States
Downtime Management in Multi-CDN Steering Systems - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, Cybersecurity for Broadcasters
Managing CDN failures in multi-CDN streaming environments requires a careful balance between maintaining Quality of Experience (QoE) and ensuring timely detection of CDN recovery. Traditional methods rely on external probes or client-based monitoring, both of which have limitations in accuracy or feasibility. This paper explores an alternative approach that strategically assigns a small subset of users to a failing CDN while dynamically adjusting their Time-To-Live (TTL) based on buffer length information. By leveraging buffer-aware content steering, the system ensures that only users with sufficient buffer capacity are temporarily assigned to the faulty CDN, minimizing rebuffering events while maintaining continuous monitoring of the CDN’s recovery status.
Through simulations, we evaluate the trade-offs between different user assignment strategies and demonstrate that a Rotating Sacrifice with Buffer-Based TTL approach provides an effective balance. This method achieves rapid recovery detection, typically within seconds, while keeping QoE degradation at levels comparable to a baseline approach where no monitoring is performed. The findings highlight the benefits of incorporating buffer length data into the content steering process, leading to a fairer and more efficient multi-CDN orchestration. We advocate for the inclusion of buffer-level reporting as a standard parameter in content steering systems to improve resilience and service continuity in large-scale streaming operations.
Gwendal Simon | Synamedia | Rennes, France
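A rough sketch of the steering policy the abstract above outlines: a small, rotating subset of clients with sufficient buffer is briefly assigned to the failing CDN to probe its recovery, with a TTL derived from the reported buffer length. The thresholds, field names, and TTL formula are assumptions for illustration, not the paper's values.

import itertools

# Illustrative sketch of "Rotating Sacrifice with Buffer-Based TTL".
# Thresholds and the TTL formula below are assumptions, not the paper's values.

MIN_BUFFER_S = 20.0    # only clients with at least this much buffer are sacrificed (assumed)
PROBE_FRACTION = 0.02  # fraction of eligible clients assigned to the failing CDN (assumed)

_rotation = itertools.count()  # rotates the sacrificed subset across steering rounds

def steer(clients, healthy_cdn, failing_cdn):
    """Return steering decisions as (client_id, assigned_cdn, ttl_seconds)."""
    eligible = [c for c in clients if c["buffer_s"] >= MIN_BUFFER_S]
    probes = set()
    if eligible:
        n_probe = max(1, int(len(eligible) * PROBE_FRACTION))
        start = next(_rotation) % len(eligible)
        probes = {eligible[(start + i) % len(eligible)]["id"] for i in range(n_probe)}

    decisions = []
    for c in clients:
        if c["id"] in probes:
            # TTL scales with buffer so the client is steered back to a healthy
            # CDN well before it risks rebuffering (assumed: half the buffer).
            decisions.append((c["id"], failing_cdn, c["buffer_s"] / 2))
        else:
            decisions.append((c["id"], healthy_cdn, 300))
    return decisions

if __name__ == "__main__":
    clients = [{"id": i, "buffer_s": b} for i, b in enumerate([5, 25, 40, 12, 33])]
    for decision in steer(clients, "cdnA", "cdnB"):
        print(decision)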
Enhancing Fantasy League Engagement Through Efficient Hyper-Personalized Highlights - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, AI Applications: Sports, Newsrooms and Archives
Hyper-personalization offers the fan the opportunity to view content tailored very specifically to them. That requires fine-grained information about the user and about the content, plus an architecture that brings those together and a suitable base platform that can enable such capabilities. In this paper, the key enablers of hyper-personalization are outlined, including the architectural impact of direct-to-consumer streaming, AI highlights creation and personalization technology. Its use is illustrated with an example in which a fantasy team is used to create a customized on-demand highlights package featuring the players on the fan’s teams, in a scalable, cost-effective manner.
Tony Jones | MediaKind | Southampton, Hampshire, United Kingdom
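A simplified sketch of the personalization step described in the abstract above: highlight clips tagged with player identifiers are filtered against a fan's fantasy roster and assembled into an on-demand playlist. The metadata fields and the recency-based ordering are assumptions for illustration.

# Illustrative sketch: assemble a personalized highlights playlist from clip
# metadata (e.g., produced by an AI highlights pipeline). The metadata fields
# and the ranking rule are assumptions for illustration.

def personalized_playlist(clips, fantasy_roster, max_clips=10):
    """Select clips featuring the fan's fantasy players, most recent first."""
    relevant = [c for c in clips if set(c["player_ids"]) & set(fantasy_roster)]
    relevant.sort(key=lambda c: c["timestamp"], reverse=True)
    return [c["clip_url"] for c in relevant[:max_clips]]

if __name__ == "__main__":
    clips = [
        {"clip_url": "clip1.mp4", "player_ids": ["p7", "p9"], "timestamp": 100},
        {"clip_url": "clip2.mp4", "player_ids": ["p3"], "timestamp": 200},
        {"clip_url": "clip3.mp4", "player_ids": ["p9"], "timestamp": 300},
    ]
    print(personalized_playlist(clips, fantasy_roster=["p9"]))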
Enhancing Instream Shoppable Brand and Product Detection in Broadcast, OTT, and VOD Content through Multi-Model Object Detection and Real-Time SCTE-35/SEI/VMAP Integration - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, AI Applications: Captions, Content Detection and Advertising Management
The increasing adoption of shoppable video content in Broadcast, OTT, and VOD has transformed how consumers engage with digital media. Traditional advertising models struggle to create seamless, interactive, and real-time brand engagement, leading to a need for AI-driven solutions. Emerging technologies now enable real-time product detection, metadata embedding, and interactive instream commerce integration, making video content not only viewable but instantly shoppable. The paper introduces an AI-based platform that detects brands and products in video streams in real time and then embeds metadata structures for interactive shopping platforms. By leveraging deep learning-based object detection models and metadata signaling protocols, the framework ensures synchronized, real-time engagement between consumers and brands, unlocking new monetization opportunities for advertisers.
Within this framework, multiple AI detection engines based on YOLO, Mask R-CNN, ResNet, and MobileNet SSD precisely identify products and brand placements in both live and pre-recorded content. Detections are linked to SCTE-35 (live streams), SEI (compressed video streams), and VMAP (VOD-based ad scheduling) signaling, enabling precise frame-level interactivity for product engagement. The system supports moment-by-moment brand interactions and in-stream e-commerce, which advertisers can use for content monetization. By automating product detection and advertisement synchronization, the framework benefits content creators and advertisers while improving consumer engagement. This framework redefines media commerce by combining AI-based detection, metadata injection, and HTML-based interactivity, allowing for scalable, real-time, and seamless brand engagement in video content.
Chaitanya Mahanthi | Google, YouTube | New York, N.Y., United States
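A minimal sketch of the detection-to-signaling hand-off described in the abstract above: detector output is turned into a time-stamped, product-level cue record. Here detect_products() is a placeholder for a YOLO / Mask R-CNN / MobileNet SSD model, the URL scheme is hypothetical, and the cue is a simplified JSON record rather than an actual SCTE-35 or SEI binary message, which follow their respective specifications.

import json

# Illustrative sketch only. detect_products() stands in for an object detection
# model, and the cue is a simplified JSON record, not real SCTE-35/SEI signaling.

def detect_products(frame):
    """Placeholder detector: returns (label, bounding_box, confidence) tuples."""
    return [("brandX_sneaker", (120, 80, 210, 190), 0.91)]  # assumed output

def build_shoppable_cue(frame, presentation_time_s):
    detections = detect_products(frame)
    return {
        "presentation_time_s": presentation_time_s,
        "products": [
            {"label": label, "bbox": bbox, "confidence": conf,
             "shop_url": f"https://example.com/shop/{label}"}  # hypothetical URL scheme
            for label, bbox, conf in detections
        ],
    }

if __name__ == "__main__":
    cue = build_shoppable_cue(frame=None, presentation_time_s=12.48)
    print(json.dumps(cue, indent=2))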
Enhancing Resilience: A Backup Communication System with Cyber Communications Integration for Broadcasters - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, Resilience, Safety and Protection for Broadcast Service
Broadcasting has long been a critical backbone for emergency communications, particularly during catastrophic events. However, increasing cyber threats against communication infrastructure necessitate a dual-resilient system that withstands both physical and digital disruptions. This paper introduces FM-Based Cyber Communications, a hybrid solution integrating FM radio infrastructure with advanced cybersecurity layers to create a fail-safe emergency broadcast network. By leveraging FM Radio Data System (RDS) technology, satellite uplinks, and air-gapped failover mechanisms, this system ensures encrypted, redundant, and uninterrupted information flow even when cellular networks, IP-based alerting protocols, and traditional Emergency Alert Systems (EAS) are compromised. Field testing under simulated disaster conditions demonstrates superior resilience, minimal transmission latency, and rapid failover capabilities, making FM-Based Cyber Communications an essential addition to next-generation emergency broadcasting infrastructure.
Matthew Straeb | Global Security Systems LLC | Sarasota, Fla., United States
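A heavily simplified sketch of the encrypted-alert idea in the abstract above: a short emergency message is encrypted and split into small numbered chunks sized for a narrow datacast channel. It assumes the third-party cryptography package for the symmetric cipher; the 8-byte chunk size and framing are assumptions for illustration, not the RDS specification or the paper's design.

from cryptography.fernet import Fernet

# Illustrative sketch only: encrypt a short alert and split the ciphertext
# into small numbered chunks. Chunk size and framing are assumptions.

CHUNK_BYTES = 8  # assumed payload size per chunk

def encrypt_and_chunk(message: str, key: bytes):
    ciphertext = Fernet(key).encrypt(message.encode("utf-8"))
    return [
        (seq, ciphertext[i:i + CHUNK_BYTES])
        for seq, i in enumerate(range(0, len(ciphertext), CHUNK_BYTES))
    ]

if __name__ == "__main__":
    key = Fernet.generate_key()
    for seq, chunk in encrypt_and_chunk("Flash flood warning for Zone 4", key):
        print(seq, chunk.hex())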
Field Test of ATSC 3.0/BPS Precise Time Distribution - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, Broadcast Positioning System (BPS): Resilience and Precision
The Broadcast Positioning System (BPS™) is a protocol for high-resolution time transfer between a reference clock at an ATSC 3.0 transmitter and a BPS receiver’s disciplined-clock output. Time transfer is a prerequisite for (and useful by-product of) positioning/navigation systems such as Global Navigation Satellite Systems (GNSS); for example, the Global Positioning System (GPS). In principle, BPS may address potential vulnerabilities in critical applications with GNSS dependence, mostly due to relatively weak GNSS signal levels at Earth’s surface. In 2024, BPS was added to the ATSC 3.0 transmission of the station KWGN in the Denver, Colorado metropolitan area. To measure BPS time transfer stability, BPS receivers were installed at two NIST campuses (the farthest 106 km away) and compared against independent local atomic clock timescales. As an example, over one 50-day period and a non-line-of-sight (NLOS) transmission path of 30 km that includes terrain obstruction, we observed peak-to-peak time deviations on the order of tens of nanoseconds (including all variation of the reference time scales), a stability roughly comparable with ubiquitously deployed, single-band GPS receivers.
Jeff A. Sherman, David A. Howe | Time and Frequency Division, National Institute of Standards and Technology | Boulder, Colo., United States
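A small sketch of the kind of summary statistic quoted in the abstract above: given a series of measured time offsets between a BPS receiver's disciplined clock and a local reference timescale, report the peak-to-peak deviation over a window. The synthetic data below (sample interval, noise level, diurnal-like wander) is an assumption for illustration, not measurement data from the paper.

import numpy as np

# Illustrative sketch: peak-to-peak time deviation of a time-offset series.
# The synthetic offsets are assumptions, not the paper's measurements.

def peak_to_peak_ns(offsets_ns: np.ndarray) -> float:
    """Peak-to-peak time deviation of the offset series, in nanoseconds."""
    return float(np.max(offsets_ns) - np.min(offsets_ns))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 50 * 86400, 600.0)            # 50 days, 10-minute samples (assumed)
    offsets = 10 * rng.standard_normal(t.size) + 15 * np.sin(2 * np.pi * t / 86400)
    print(f"peak-to-peak: {peak_to_peak_ns(offsets):.1f} ns")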
Hidden Depths: Disguise’s Integration of Depth and Volumetric Capture Workflows for Virtual Production - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, Production Advancements: Avatars and Immersive Content
The ability to capture a volumetric representation of performers in a production environment would enable a multitude of novel on-set and post-production workflows. Current methods are impractical due to their reliance on large numbers of cameras and consistent lighting conditions. We propose a framework for creating 2.5D assets from a single monocular video, allowing digitised performers to be viewed from a range of angles with the appearance of depth. An application processes the video offline, using depth and segmentation AI models to create a packaged 2.5D asset. This is then loaded into Disguise Designer for pre-visualisation of the performers within the virtual stage. Analysis of state-of-the-art depth inference models, using videos captured to represent the challenges of production environments, shows that it is possible to obtain coherent video depth maps in these conditions. However, metric models do not always identify absolute depth values accurately, and it is necessary to use models specifically tailored for video to ensure temporal consistency in the result. This work is intended to act as a foundation for more comprehensive 3D volumetric capture of performers in real production environments.
Chris Nash, Nathan Butt, Andrea Loriedo, Robin Spooner, Taegyun Ha, Phillip Coulam-Jones, James Bentley | Disguise | London, United Kingdom
Aljosa Smolic | Lucerne University of Applied Arts and Sciences | Lucerne, Switzerland
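A schematic sketch of the offline 2.5D packaging step described in the abstract above. estimate_depth() and segment_performer() stand in for the depth and segmentation AI models, the RGB + depth + mask packing is an assumed layout rather than Disguise's actual asset format, and the exponential smoothing merely illustrates the temporal-consistency concern the abstract raises.

import numpy as np

# Illustrative sketch only: model calls are placeholders and the packing
# layout and smoothing factor are assumptions for illustration.

def estimate_depth(frame: np.ndarray) -> np.ndarray:
    return np.ones(frame.shape[:2], dtype=np.float32)      # placeholder depth model

def segment_performer(frame: np.ndarray) -> np.ndarray:
    return np.ones(frame.shape[:2], dtype=np.float32)      # placeholder segmentation model

def build_2p5d_asset(frames, smoothing=0.8):
    """Pack each frame as RGB + temporally smoothed depth + performer mask."""
    packed, prev_depth = [], None
    for frame in frames:
        depth = estimate_depth(frame)
        if prev_depth is not None:
            # Blend with the previous frame's depth to reduce temporal flicker.
            depth = smoothing * prev_depth + (1 - smoothing) * depth
        prev_depth = depth
        mask = segment_performer(frame)
        packed.append(np.dstack([frame.astype(np.float32), depth, mask]))
    return packed

if __name__ == "__main__":
    frames = [np.zeros((4, 4, 3), dtype=np.uint8) for _ in range(3)]
    asset = build_2p5d_asset(frames)
    print(len(asset), asset[0].shape)   # 3 frames of H x W x 5 planes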
How the Eurovision Song Contest 2024 Leveraged the Benefits of ST 2110 and Transitioned Away from Baseband Video Formats - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, IP Networks and the Broadcast Chain: Fast Friends
This paper explores the challenges associated with baseband video formats in large-scale live productions, emphasizing the advantages introduced by SMPTE ST 2110 [1]. By analyzing conventional point-to-point video transmission against IP-based systems, key benefits such as enhanced metadata management, streamlined infrastructure, and increased adaptability are highlighted. A case study of the Eurovision Song Contest 2024 demonstrates how these technologies were applied to improve performance and reliability in a demanding live environment.
Scott Blair, Joe Bleasdale | Megapixel | Los Angeles, Calif., United States
Innovative Internet Protocol Remote Production Devices using Open Standards - $15
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, IP Networks and the Broadcast Chain: Fast Friends
Growing demand for remote Internet Protocol (IP) production has led to an increase in demand for IP devices. Although audio signals can be transmitted using the SMPTE ST 2110 standard, detailed control functions, such as audio level adjustment and matrix control, are not provided. This study therefore introduces innovative IP remote production devices that use the open Ember+ standard and investigates the use of an open protocol to handle control signals during remote IP production.
Additionally, using open standards for the control signals, a device was developed that converts between the ST 2110 standard and Dante—the major standard in the audio industry—for IP remote production, including the control signals.
By using this converter with ST 2110-compatible and Dante-compatible devices, we demonstrated the converter's usefulness for IP remote production.
Takaya Omuro | Japan Broadcasting Corporation | Shibuya-ku, Tokyo, Japan
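A simplified sketch of the control-plane conversion the abstract above describes: an Ember+-style parameter update (a path in a node tree plus a value) is translated into a gain command for a corresponding audio channel on the media-transport side. The paths, channel map, and command format are assumptions for illustration, not the device's actual implementation or the Ember+ wire protocol.

# Illustrative sketch only: the mapping table and command dictionary are
# hypothetical; real Ember+ uses a BER-encoded node/parameter tree.

CHANNEL_MAP = {
    ("mixer", "input", "1", "gain"): {"transport": "st2110", "channel": 1},
    ("mixer", "input", "2", "gain"): {"transport": "dante", "channel": 2},
}

def convert_control(ember_path: tuple, value_db: float) -> dict:
    """Translate an Ember+-style parameter update into a gain command."""
    target = CHANNEL_MAP.get(ember_path)
    if target is None:
        raise KeyError(f"no mapping for Ember+ path {ember_path}")
    return {**target, "command": "set_gain", "gain_db": value_db}

if __name__ == "__main__":
    print(convert_control(("mixer", "input", "1", "gain"), -6.0))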