Topics
- 2025 BEITC Proceedings
- Broadcast Positioning System (BPS): Resilience and Precision
- Resilience, Safety and Protection for Broadcast Service
- Cybersecurity for Broadcasters
- Streaming Improvements: Low Latency and Multiview
- Embracing the Cloud: Transforming Broadcast Operations with ATSC 3.0 and Broadband Technologies
- Enhancing Video Streaming Quality and Efficiency
- 5G in Broadcast Spectrum and Video Quality Metrics
- Getting the Most out of ATSC 3.0
- AI Applications: Captions, Content Detection and Advertising Management
- Immersive Audio, Satellite and OTT Delivery
- Innovations in Live Production and Broadcast Workflows
- IP Networks and the Broadcast Chain: Fast Friends
- AI Applications: Sports, Newsrooms and Archives
- Making ATSC 3.0 Better than Ever
- AM Radio: Measurements and Modeling
- Making Radio Better Than Ever
- Brigital: Integrating Broadcast and Digital
- Production Advancements: Avatars and Immersive Content
- 2024 BEITC Proceedings
- 2023 BEITC Proceedings
- 2022 BEITC Proceedings
- 2021 BEITC Proceedings
- 2020 BEITC Proceedings
2025 BEITC Proceedings
Integrated Newsrooms with Generative AI: Efficiency, Accuracy, and Beyond - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, AI Applications: Sports, Newsrooms and Archives
The rapid evolution of generative AI is transforming the media industry, particularly in news production, distribution, and content value chain management. This paper explores a strategic approach to integrating generative AI technologies across newsroom operations, focusing on purpose-specific, lightweight models tailored to specific tasks. We examine how these targeted AI solutions can enable more efficient, agile, and audience-centric news operations while addressing critical considerations such as infrastructure demands, cost optimization, and implementation challenges.
Our research examines the core components of an integrated newsroom model, demonstrating how purposefully designed generative AI enhances production, quality assurance, engagement, and content reach through advanced techniques including translation, dubbing, and semantic search. By advocating for a judicious and frugal methodology, we aim to empower media organizations to harness the transformative potential of generative AI while mitigating risks and ensuring seamless integration into existing workflows.
The paper further explores the agentic AI approach that can be adopted to optimally orchestrate the newsroom workflow end to end.
Maheshwaran G, Punyabrota Dasgupta | Amazon Web Services India | Mumbai, Maharashtra, India
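As a rough illustration of the purpose-specific, lightweight-model approach described above, the sketch below routes newsroom tasks (translation, dubbing script preparation, semantic search) to small task-specific handlers behind a single orchestration entry point. The task names, handler functions, and routing table are hypothetical assumptions for illustration, not the architecture described in the paper.

```python
# Hypothetical sketch of a purpose-specific task router for newsroom jobs.
# Handler names, task types, and payload fields are illustrative assumptions,
# not the system described in the paper.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class NewsroomJob:
    task: str          # e.g. "translate", "dub_script", "semantic_search"
    payload: dict      # task-specific inputs (text, language, query, ...)


def translate(payload: dict) -> dict:
    # Placeholder for a lightweight translation model tuned for news copy.
    return {"translated_text": f"[{payload['target_lang']}] {payload['text']}"}


def dub_script(payload: dict) -> dict:
    # Placeholder for a model that prepares a dubbing-ready script.
    return {"dub_script": payload["text"].upper()}


def semantic_search(payload: dict) -> dict:
    # Placeholder for an embedding-based archive search.
    return {"hits": [f"story matching '{payload['query']}'"]}


# Purpose-specific models are registered per task instead of one large model.
HANDLERS: Dict[str, Callable[[dict], dict]] = {
    "translate": translate,
    "dub_script": dub_script,
    "semantic_search": semantic_search,
}


def orchestrate(job: NewsroomJob) -> dict:
    """Route a newsroom job to its task-specific handler."""
    handler = HANDLERS.get(job.task)
    if handler is None:
        raise ValueError(f"No handler registered for task '{job.task}'")
    return handler(job.payload)


if __name__ == "__main__":
    job = NewsroomJob(task="translate",
                      payload={"text": "Breaking news", "target_lang": "hi"})
    print(orchestrate(job))
```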
Low latency wireless broadcast production over 5G - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, Streaming Improvements: Low Latency and Multiview
Wireless camera feeds are an integral source of content for programme making, and typically utilize licensed point-to-point radio links or “bonded-cellular” devices. Cellular bonding using public 4G and 5G networks has become a mainstay for electronic newsgathering and remote contribution feeds. These contributions can tolerate latencies up to several seconds. However, such latencies are far too long for broadcast production, where wireless cameras are cut in with cabled systems, or where remote camera control, tally and return video are required. Since 5G is a native IP technology, it can support bi-directional connectivity and facilitate additional services alongside ultra-low latency video. This paper explores the use of (private) 5G to support full low-latency wireless production workflows, discusses how 5G connectivity can augment existing wireless systems, and gives practical advice for configuring camera control systems over 5G.
Samuel Yoffe, Douglas G. Allan, Kenneth W. Barlee, Dani Anderson, Damien Muir, Malcolm R. Brew, Cameron Speirs, Robert W. Stewart | Neutral Wireless and University of Strathclyde | Glasgow, Scotland
Mark B. Waddell, Jonas Kröger-Mayes | BBC R&D and BBC Scotland | United Kingdom
Giulio Stante | RAI CRITS | Torino, Italy
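To make the latency constraint concrete, the sketch below adds up an illustrative glass-to-glass budget for a wireless camera feed on a private 5G uplink. Every stage figure is an assumed, order-of-magnitude value for illustration, not a measurement from the paper.

```python
# Illustrative glass-to-glass latency budget for a 5G wireless camera feed.
# Every figure below is an assumed, order-of-magnitude value, not a
# measurement reported in the paper.
budget_ms = {
    "camera + low-latency encode": 20.0,
    "5G uplink (private NR)": 15.0,
    "network transport": 5.0,
    "decode + deserialize": 20.0,
    "production switcher input": 10.0,
}

total = sum(budget_ms.values())
print(f"{'stage':<30}{'ms':>8}")
for stage, ms in budget_ms.items():
    print(f"{stage:<30}{ms:>8.1f}")
print(f"{'total':<30}{total:>8.1f}")

# A multi-second bonded-cellular contribution path would blow this budget,
# which is why cutting wireless cameras in with cabled systems needs
# end-to-end latencies of only a few frames.
```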
Measurement of AM Band RF Noise Levels and Station Signal Attenuation - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, AM Radio: Measurements and Modeling
This report covers measurements of RF noise levels on various roadway types, from open interstate highways to city streets, to determine how the noise would affect AM broadcast reception. These environments reflect the current habits of AM radio listening, which is primarily in vehicles. In addition to RF noise level, RF signal levels were measured for three AM stations operating on frequencies in the lower, middle and upper portions of the AM broadcast band. These measurements provide a better understanding of how AM radio reception is affected by RF signal strength and noise in roadway environments ranging from rural to dense urban.
John Kean | Cavell Mertz & Associates | Alexandria, Va., United States
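For readers who want to relate the two measured quantities, a minimal example of combining a station's field strength with a measured noise field strength to estimate a carrier-to-noise ratio is sketched below; the numeric values are assumptions for illustration, not results from this study.

```python
# Minimal sketch: estimating an AM carrier-to-noise ratio from field strengths
# expressed in dBuV/m. The example values are assumptions for illustration,
# not measurements from this report.
def carrier_to_noise_db(signal_dbuv_m: float, noise_dbuv_m: float) -> float:
    """C/N in dB when signal and noise are both given in dBuV/m
    (the same measurement bandwidth is assumed for both)."""
    return signal_dbuv_m - noise_dbuv_m


# Assumed example: a 74 dBuV/m station signal over 54 dBuV/m roadway noise.
print(carrier_to_noise_db(74.0, 54.0), "dB")  # -> 20.0 dB
```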
Measurement of Radio Frequency Emissions from Electric Vehicles and Electric Vehicle Charging Systems in the AM Broadcast Band - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, AM Radio: Measurements and Modeling
This paper describes RF emission measurements conducted near two wireless electric vehicle charging facilities in Detroit, Michigan. The purpose of the measurements was to assess whether harmonic emissions from the charging facilities are likely to coincide with or fall near frequencies within the AM band. Audio recordings of AM broadcast reception were taken both while the charging facilities were operating and idle, with findings from these recordings to be detailed in a subsequent report.
Robert D. Weller, David H. Layer | National Association of Broadcasters | Washington, D.C., United States
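To illustrate why charger harmonics are a concern for the AM band, the sketch below lists which harmonics of an assumed wireless-charging switching fundamental land inside 530–1700 kHz. The 85 kHz fundamental is an assumption based on common light-duty wireless power transfer designs, not a parameter reported for the Detroit facilities.

```python
# Which harmonics of a wireless EV charger's switching fundamental land in the
# AM broadcast band (530-1700 kHz)? The 85 kHz fundamental is an assumed,
# typical light-duty WPT value, not a parameter from the measured facilities.
FUNDAMENTAL_KHZ = 85.0
AM_BAND_KHZ = (530.0, 1700.0)

in_band = [
    (n, n * FUNDAMENTAL_KHZ)
    for n in range(2, 25)
    if AM_BAND_KHZ[0] <= n * FUNDAMENTAL_KHZ <= AM_BAND_KHZ[1]
]
for n, f in in_band:
    print(f"harmonic {n:2d}: {f:7.1f} kHz")
# With an 85 kHz fundamental, the 7th harmonic (595 kHz) through the
# 20th (1700 kHz) all fall inside the AM band.
```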
Media campaign workflow using AI agents - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, AI Applications: Captions, Content Detection and Advertising Management
The increasing adoption of shoppable video content in Broadcast, OTT, and VOD has transformed how consumers engage with digital media. Traditional advertising models struggle to create seamless, interactive, and real-time brand engagement, leading to a need for AI-driven solutions. Emerging technologies now enable real-time product detection, metadata embedding, and interactive in-stream commerce integration, making video content not just viewable but instantly shoppable. The paper introduces an AI-based platform that detects brands and products in video streams in real time and embeds metadata structures for interactive shopping platforms. By leveraging deep learning-based object detection models and metadata signaling protocols, the framework ensures synchronized, real-time engagement between consumers and brands, unlocking new monetization opportunities for advertisers.
Multiple AI detection engines within this framework, based on YOLO, Mask R-CNN, ResNet, and MobileNet SSD, precisely identify products and brand placements in both live and pre-recorded content. Detections are linked to SCTE-35 (live streams), SEI (compressed video streams), and VMAP (VOD-based ad scheduling), enabling precise frame-level interactivity for product engagement. The system supports moment-by-moment brand interactions and in-stream e-commerce, which advertisers can use for content monetization. By automating product detection and advertisement synchronization, the framework improves outcomes for content creators and advertisers while delivering stronger consumer engagement. It redefines media commerce by combining AI-based detection, metadata injection, and HTML-based interactivity, allowing for scalable, real-time, and seamless brand engagement in video content.
Matt Ferris | Decentrix Inc. | Denver, Colo., United States
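As a hedged sketch of the general pattern only, not the authors' platform: run frame-level product detection and emit timeline metadata that a downstream packager could translate into SCTE-35, SEI, or VMAP signaling. The detector call and cue schema below are hypothetical placeholders.

```python
# Hedged sketch of the general pattern only: detect products per frame and emit
# timeline cues for downstream signaling (SCTE-35 for live, SEI for compressed
# streams, VMAP for VOD). The detector and cue schema are hypothetical
# placeholders, not the platform described in the paper.
import json
from typing import Dict, List


def detect_products(frame_index: int) -> List[Dict]:
    """Placeholder for a YOLO/Mask R-CNN-style detector; returns stub boxes."""
    return [{"label": "sneaker", "confidence": 0.91,
             "bbox": [120, 340, 260, 520]}]


def frame_to_cue(frame_index: int, fps: float, detections: List[Dict]) -> Dict:
    """Map detections on one frame to a timeline cue with a presentation time."""
    return {
        "pts_seconds": round(frame_index / fps, 3),
        "products": [d for d in detections if d["confidence"] >= 0.8],
    }


def build_cue_track(num_frames: int, fps: float = 30.0) -> List[Dict]:
    cues = []
    for i in range(num_frames):
        detections = detect_products(i)
        if detections:
            cues.append(frame_to_cue(i, fps, detections))
    return cues


if __name__ == "__main__":
    # A packager would translate these JSON cues into SCTE-35 time_signal
    # messages, SEI user data, or VMAP ad-break entries as appropriate.
    print(json.dumps(build_cue_track(3), indent=2))
```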
Network Distribution and the Internet Snake! - Transcoding Anywhere - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, IP Networks and the Broadcast Chain: Fast Friends
Video content distribution faces challenges and conflicts, particularly in balancing high-quality delivery with technical and commercial constraints. Different network endpoints have varying requirements, such as bandwidth limitations or specific codec needs, making efficient distribution complex. While live transcoding can help, traditional solutions often rely on centralized data centers, leading to high processing costs and dependency on strong connectivity. This paper explores an alternative approach that decentralizes live transcoding, leveraging distributed edge-transcode technologies to optimize network utilization, reduce cloud OPEX, and enable scalable, cost-efficient content delivery.
David Edwards, Adam Nilsson, Pierre Le Fevre, Jonathan Smith | Net Insight | Stockholm, Sweden
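One way to picture the decentralized approach: score candidate transcode locations (central cloud versus edge nodes) on delivery capacity and cost, and place the job wherever it is cheapest to run. The node names, capacity figures, and scoring rule below are illustrative assumptions, not the paper's placement logic.

```python
# Illustrative sketch of placing a live transcode job on the cheapest viable
# node (central cloud vs. distributed edge). Node names, prices, and the
# scoring rule are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TranscodeNode:
    name: str
    available_mbps: float      # spare delivery capacity toward viewers
    cost_per_hour: float       # assumed compute cost, arbitrary units


def place_job(nodes: List[TranscodeNode],
              required_mbps: float) -> Optional[TranscodeNode]:
    """Pick the cheapest node that still has enough delivery capacity."""
    viable = [n for n in nodes if n.available_mbps >= required_mbps]
    return min(viable, key=lambda n: n.cost_per_hour) if viable else None


if __name__ == "__main__":
    nodes = [
        TranscodeNode("central-cloud", available_mbps=10_000, cost_per_hour=4.0),
        TranscodeNode("edge-pop-a", available_mbps=800, cost_per_hour=1.2),
        TranscodeNode("edge-pop-b", available_mbps=300, cost_per_hour=0.9),
    ]
    chosen = place_job(nodes, required_mbps=500)
    print(chosen.name if chosen else "no capacity")   # -> edge-pop-a
```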
NEXTGEN Incident Response Communication System – Using ATSC 3.0 - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, Getting the Most out of ATSC 3.0
This paper provides a timely description of a National Aeronautics and Space Administration (NASA) research project investigating the use of a mobile ATSC 3.0 datacasting station to help support wildland fire management operations. We will discuss a proposed innovation called the NextGen Incident Response Communication System (NIRCS). NIRCS is a rapidly deployable, mobile, long-range broadcast communications system using ATSC 3.0 technology – the digital terrestrial broadcast system built on the internet protocol (IP) – to enable one-way datacasting of IP-compatible data, including ultra-high-definition video, high-fidelity audio, and other types of data packets (e.g., aircraft position messages).
Fred Engel | Device Solutions Inc. | Morrisville, N.C., United States
Tim Bagnall | Mosaic ATM | Leesburg, Va., United States
Mark Corl | Triveni Digital | Princeton, N.J., United States
Chris Pandich, Don Smith | PBS North Carolina | Research Triangle Park, N.C., United States
Jim Stenberg | Over The Air Consulting, LLC | Portland, Maine, United States
Tony Sammarco, Chris Lamb | Device Solutions Inc. | Morrisville, N.C., United States
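Since NIRCS datacasts IP-compatible packets one way, a very small illustration of the idea is a sender that pushes aircraft-position messages onto an IP multicast group, which a broadcast gateway could then carry over ATSC 3.0 to one-way receivers. The group address, port, and message format are assumptions for illustration, not details of the NASA system.

```python
# Minimal sketch of one-way IP datacasting of aircraft position messages.
# The multicast group, port, and JSON message format are illustrative
# assumptions, not details of the NIRCS design; an ATSC 3.0 broadcast gateway
# would carry IP packets like these over the air to one-way receivers.
import json
import socket
import time

MCAST_GROUP = "239.1.2.3"   # assumed example multicast group
MCAST_PORT = 5000           # assumed example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

for seq in range(3):
    message = {
        "seq": seq,
        "aircraft_id": "TANKER-07",      # hypothetical aircraft identifier
        "lat": 35.91, "lon": -79.05,     # hypothetical position
        "timestamp": time.time(),
    }
    sock.sendto(json.dumps(message).encode("utf-8"), (MCAST_GROUP, MCAST_PORT))
    time.sleep(1.0)
```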
Off-piste 5G in the Broadcast Auxiliary Service Band - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, 5G in Broadcast Spectrum and Video Quality Metrics
5G New Radio (NR) can be used to provide flexible, high-capacity and low-latency networks suitable for broadcast content acquisition or delivery, but access to suitable spectrum can be challenging. One of the enablers for private network deployments is shared spectrum licensing, such as the upper n77 band (3.8–4.2 GHz) available in the UK and elsewhere in Europe. The Third Generation Partnership Project (3GPP) was created to develop mobile standards for WCDMA and TD-SCDMA and their respective core networks, and has continued to publish standards as radio access technologies have progressed to 4G and 5G. These standards define frequency bands, numerologies, duplex modes and messaging (among many other things). While software-defined radio (SDR) is emerging as a viable and highly flexible solution for core and radio access network (RAN) functions, user equipment (UE) typically remain hardware based with modems that implement the 3GPP standards to ensure device compatibility. The flexibility of SDR RAN allows for wireless radio networks based on 5G NR to be built in non-3GPP defined spectrum bands, but there are no compatible devices to connect. In the USA, broadcasters have access to spectrum in the Broadcast Auxiliary Service (BAS) band (2025–2110 MHz), which coincides with the programme-making and special events (PMSE) band used in the UK and Europe. This allows for rapid licensing of 10/12 MHz channels for traditional wireless camera systems, such as COFDM, that could instead be used to license low-to-medium power private 5G NR-based networks capable of supporting multiple cameras and other IP-based workflows. This paper discusses the development of a flexible software-defined UE capable of connecting to non-3GPP 5G NR networks in BAS/PMSE spectrum.
Douglas G. Allan, Samuel R. Yoffe, Kenneth W. Barlee, Dani Anderson, Iain C. Chalmers, Malcolm R. Brew, Cameron A. Speirs, Robert W. Stewart | Neutral Wireless and University of Strathclyde | Glasgow, Scotland
Nicolas Breant, Jeremy Tastet, Sebastien Roques, Bastien Chague | AW2S | Bordeaux, France
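Even in a non-3GPP band, an SDR-based NR deployment can still be described on the standard NR-ARFCN raster. As a worked example under 3GPP TS 38.104 (for frequencies below 3 GHz, a 5 kHz global frequency raster with zero offsets), the sketch below converts BAS-band center frequencies to NR-ARFCN values; the specific channel placements chosen are illustrative.

```python
# NR-ARFCN conversion for sub-3 GHz frequencies per 3GPP TS 38.104:
#   F_REF = F_REF_Offs + dF_Global * (N_REF - N_REF_Offs),
# with dF_Global = 5 kHz, F_REF_Offs = 0 MHz, N_REF_Offs = 0 below 3 GHz.
# The BAS/PMSE channel centers chosen below are illustrative examples.
def freq_mhz_to_nr_arfcn(freq_mhz: float) -> int:
    if not 0 <= freq_mhz < 3000:
        raise ValueError("this helper only covers the 0-3000 MHz raster range")
    return round(freq_mhz / 0.005)   # 5 kHz global frequency raster


def nr_arfcn_to_freq_mhz(arfcn: int) -> float:
    return arfcn * 0.005


# Example center frequencies inside the 2025-2110 MHz BAS/PMSE band.
for f in (2030.0, 2067.5, 2105.0):
    print(f"{f:7.1f} MHz -> NR-ARFCN {freq_mhz_to_nr_arfcn(f)}")
# 2067.5 MHz maps to NR-ARFCN 413500, for instance.
```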
Open-Source Low-Complexity Perceptual Video Quality Measurement with pVMAF - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, 5G in Broadcast Spectrum and Video Quality Metrics
With the rise of digital video services, viewers expect high-quality visuals, making Quality of Experience (QoE) a priority for providers. However, poor video processing can degrade visual quality, leading to detail loss and visible artifacts. Thus, accurately measuring perceptual quality is essential for monitoring QoE in digital video services. While viewer opinions are the most reliable measure of video quality, subjective testing is impractical due to its time, cost, and logistical demands. As a result, objective video quality metrics are commonly used to assess perceived quality. These models evaluate a distorted video and predict how viewers might perceive its quality. Metrics that compare the distorted video to the original source, known as full-reference (FR) metrics, are regarded as the most accurate approach. Traditional quality metrics like Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), and Peak Signal-to-Noise Ratio (PSNR) are computationally lightweight and commonly used within encoders for Video Quality Measurement (VQM) and other encoder optimization tasks. However, methods that simply measure pixel-wise differences often lack alignment with human perception as they do not account for the complex intricacies of the Human Visual System (HVS).
In recent years, more advanced metrics have been developed to better reflect human perception by incorporating HVS characteristics. Among these, Video Multi-method Assessment Fusion (VMAF) has become a widely accepted industry standard for evaluating video quality due to its high correlation with subjective opinions. However, the high computational demand of VMAF and similar perception-based metrics limits their suitability for real-time VQM. Consequently, encoders primarily offer only PSNR and Structural Similarity Index Measure (SSIM) for full-frame quality monitoring during encoding. While not the most accurate, these metrics are the only options that can be efficiently deployed during live encoding, as more advanced VQM approaches would consume too much of the processing resources needed for real-time encoding. To address these limitations, we introduced predictive VMAF (pVMAF), a novel video quality metric that achieves similar predictive accuracy to VMAF at a fraction of the computational cost, making it suitable for real-time applications.
pVMAF relies on three categories of low-complexity features: (i) bitstream features, (ii) pixel features, and (iii) elementary metrics. Bitstream features include encoding parameters like the quantization parameter (QP), which provide insights into compression. Pixel features are computed on either the original or reconstructed frames to capture video attributes relevant to human perception, such as blurriness and motion. Finally, elementary metrics, such as PSNR, contribute additional distortion information. These features are extracted during encoding and fed into a regression model that predicts frame-by-frame VMAF scores. Our regression model, a shallow feed-forward neural network, is trained to replicate VMAF scores based on these input features. pVMAF was initially designed for H.264/AVC; we have since extended its applicability to more recent compression standards such as HEVC and AV1. In this paper, we explain how we developed and retrained pVMAF for x264 and SVT-AV1. Experimental results indicate that pVMAF effectively replicates VMAF predictions with high accuracy while maintaining high computational efficiency, making it well-suited for real-time quality measurement.
Jan De Cock, Axel De Decker, Sangar Sivashanmugam | Synamedia | Kortrijk, Belgium
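The pVMAF model described above is a shallow feed-forward regressor over low-complexity per-frame features. As a generic sketch only (the feature list, layer widths, and training setup are assumptions, not the published model), the structure looks roughly like this in PyTorch.

```python
# Generic sketch of a shallow feed-forward VMAF regressor in the spirit of
# pVMAF. The feature list, layer widths, and training setup here are
# assumptions for illustration, not the published model.
import torch
import torch.nn as nn

# Assumed per-frame feature vector: bitstream features (e.g. average QP,
# bits per frame), pixel features (e.g. blurriness, motion magnitude),
# and elementary metrics (e.g. PSNR).
NUM_FEATURES = 8


class VmafRegressor(nn.Module):
    def __init__(self, num_features: int = NUM_FEATURES, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),      # predicted per-frame VMAF score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


if __name__ == "__main__":
    model = VmafRegressor()
    # One training step against reference VMAF scores (random stand-in data).
    features = torch.randn(16, NUM_FEATURES)    # batch of 16 frames
    target_vmaf = torch.rand(16) * 100.0        # stand-in VMAF labels
    loss = nn.functional.mse_loss(model(features), target_vmaf)
    loss.backward()
    print(f"example MSE loss: {loss.item():.2f}")
```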
Optimizing ATSC 3.0 Spectrum Utilization with Dynamic Resource Allocation and Management - $15
Date: March 21, 2025 | Topics: 2025 BEITC Proceedings, Getting the Most out of ATSC 3.0
The spectrum allocated to broadcasters is our most valuable resource, making it essential to find methods and technologies to optimize its use. Modern standards like ATSC 3.0 include technologies that enhance spectrum efficiency, such as advanced video codecs, an adaptable physical layer, and support for diverse data transmission. However, fully realizing the potential of these innovations remains an ongoing challenge. This paper examines the infrastructure, implementation techniques, and additional technology necessary to most efficiently utilize broadcast spectrum. Key use cases will be considered, including complete channel utilization to avoid unused (null) packets, dynamic data transfer adjustment to support critical short-term events, and scheduled physical layer adaptation to fit different applications. Additionally, an analysis will compare the spectral efficiency of typical ATSC 3.0 deployments with optimized configurations, highlighting potential gains that can lead to improved channel monetization through increased service adaptability.
Nick Hottinger | One Media Technologies | Hunt Valley, Md., United States
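As a rough way to see how configuration choices move spectral efficiency, the sketch below estimates bits/s/Hz from modulation order, LDPC code rate, and an assumed overhead factor covering pilots, guard interval, and framing. The overhead figure and both example configurations are assumptions for illustration; an exact capacity calculation depends on the full set of ATSC 3.0 physical-layer parameters.

```python
# Rough spectral-efficiency estimate for ATSC 3.0-style configurations:
#   efficiency ~= log2(M) * code_rate * (1 - overhead)
# The 25% overhead factor and both example configurations are assumptions for
# illustration; exact capacity depends on FFT size, guard interval, pilot
# pattern, and framing, which this sketch deliberately ignores.
import math

CHANNEL_BW_MHZ = 6.0
ASSUMED_OVERHEAD = 0.25   # pilots + guard interval + framing, assumed


def spectral_efficiency(mod_order: int, code_rate: float,
                        overhead: float = ASSUMED_OVERHEAD) -> float:
    return math.log2(mod_order) * code_rate * (1.0 - overhead)


configs = {
    "typical (64QAM, rate 10/15)": (64, 10 / 15),
    "optimized (256QAM, rate 11/15)": (256, 11 / 15),
}
for name, (m, r) in configs.items():
    eff = spectral_efficiency(m, r)
    print(f"{name:32s} {eff:4.2f} b/s/Hz  ~{eff * CHANNEL_BW_MHZ:5.1f} Mbps in 6 MHz")
```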