Topics
- 2025 BEITC Proceedings
- AI Applications: Sports, Newsrooms and Archives
- Making ATSC 3.0 Better than Ever
- AM Radio: Measurements and Modeling
- Making Radio Better Than Ever
- Brigital: Integrating Broadcast and Digital
- Production Advancements: Avatars and Immersive Content
- Broadcast Positioning System (BPS): Resilience and Precision
- Resilience, Safety and Protection for Broadcast Service
- Cybersecurity for Broadcasters
- Streaming Improvements: Low Latency and Multiview
- Embracing the Cloud: Transforming Broadcast Operations with ATSC 3.0 and Broadband Technologies
- Enhancing Video Streaming Quality and Efficiency
- 5G in Broadcast Spectrum and Video Quality Metrics
- Getting the Most out of ATSC 3.0
- AI Applications: Captions, Content Detection and Advertising Management
- Immersive Audio, Satellite and OTT Delivery
- Innovations in Live Production and Broadcast Workflows
- IP Networks and the Broadcast Chain: Fast Friends
AI Applications: Sports, Newsrooms and Archives
Enhancing Fantasy League Engagement Through Efficient Hyper-Personalized Highlights
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, AI Applications: Sports, Newsrooms and Archives

Hyper-personalization offers fans the opportunity to view content tailored specifically to them. That requires fine-grained information about both the user and the content, plus an architecture that brings the two together and a base platform capable of supporting such features. This paper outlines the key enablers of hyper-personalization, including the architectural impact of direct-to-consumer streaming, AI highlights creation and personalization technology. Its use is illustrated with an example: a fantasy team is used to create a customized on-demand highlights package built around the players on the fan’s teams, in a scalable, cost-effective manner.
Tony Jones | MediaKind | Southampton, Hampshire, United Kingdom
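The clip-selection step the abstract describes — matching fine-grained content metadata against fine-grained user information — can be sketched as follows. This is a hypothetical, simplified illustration; the `Clip` fields, function names, and time budget are assumptions for the sketch, not MediaKind's implementation.

```python
# Hypothetical sketch: assemble a personalized highlights package by matching
# AI-tagged clip metadata against a fan's fantasy roster.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    player: str        # player tagged by the highlights-creation pipeline
    event: str         # e.g. "goal", "assist", "save"
    duration_s: float

def personalized_package(clips, roster, max_duration_s=120.0):
    """Pick clips featuring the fan's players until the time budget is hit."""
    picked, total = [], 0.0
    for clip in clips:  # assume clips arrive ranked by editorial interest
        if clip.player in roster and total + clip.duration_s <= max_duration_s:
            picked.append(clip)
            total += clip.duration_s
    return picked

catalog = [
    Clip("c1", "Saka", "goal", 25.0),
    Clip("c2", "Haaland", "goal", 30.0),
    Clip("c3", "Saka", "assist", 20.0),
]
package = personalized_package(catalog, roster={"Saka"})
print([c.clip_id for c in package])  # -> ['c1', 'c3']
```

Because selection runs over precomputed metadata rather than video, a package like this can be assembled on demand per fan, which is what makes the approach scalable and cost-effective.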
Integrated Newsrooms with Generative AI: Efficiency, Accuracy, and Beyond
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, AI Applications: Sports, Newsrooms and Archives

The rapid evolution of generative AI is transforming the media industry, particularly in news production, distribution, and content value chain management. This paper explores a strategic approach to integrating generative AI technologies across newsroom operations, focusing on purpose-specific, lightweight models tailored to specific tasks. We examine how these targeted AI solutions can enable more efficient, agile, and audience-centric news operations while addressing critical considerations such as infrastructure demands, cost optimization, and implementation challenges.
Our research examines the core components of an integrated newsroom model, demonstrating how purposefully designed generative AI enhances production, quality assurance, engagement, and content reach through advanced techniques including translation, dubbing, and semantic search. By advocating for a judicious and frugal methodology, we aim to empower media organizations to harness the transformative potential of generative AI while mitigating risks and ensuring seamless integration into existing workflows.
The paper further explores the agentic AI approach that can be adopted to optimally orchestrate the newsroom workflow end to end.
Maheshwaran G, Punyabrota Dasgupta | Amazon Web Services India | Mumbai, Maharashtra, India
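One of the techniques named above, semantic search over newsroom content, can be illustrated with a minimal sketch. A production system would use the lightweight embedding models the paper discusses; here a bag-of-words cosine similarity stands in for them, and the query, archive items, and function names are all illustrative assumptions.

```python
# Minimal stand-in for semantic search: rank archived news items by
# cosine similarity between word-count vectors of the query and each item.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts (a stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query, articles):
    """Return articles ranked by similarity to the query, best first."""
    q = vectorize(query)
    return sorted(articles, key=lambda art: cosine(q, vectorize(art)),
                  reverse=True)

archive = [
    "city council approves new transit budget",
    "local team wins championship final",
    "transit strike delays morning commute",
]
print(search("transit budget vote", archive)[0])
# -> city council approves new transit budget
```

Swapping the word-count vectors for embeddings from a small, task-specific model keeps the same ranking interface while capturing meaning beyond exact word overlap, which matches the paper's "judicious and frugal" framing.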
Vision and Language Models for Enhanced Archive Video Management
Date: March 21, 2025
Topics: 2025 BEITC Proceedings, AI Applications: Sports, Newsrooms and Archives

Archival video collections contain a wealth of historical and cultural information. Managing and analyzing this data can be challenging due to the lack of metadata and inconsistent formatting across different sources. In particular, identifying and separating individual stories within a single archived tape is critical for efficient indexing, analysis and retrieval. However, manual segmentation is time-consuming and prone to human error. To address this challenge, we propose a novel approach that combines vision and language models to automatically detect transition frames and segment archive videos into distinct stories. A vision model is used to cluster frames of the video. Using recent robust automatic speech recognition and large language models, a transcript, a summary and a title are generated for each story. By leveraging features computed during transition-frame detection, we also propose a fine-grained chaptering of the segmented stories. We conducted experiments on a dataset consisting of 50 hours of archival video footage. The results demonstrated a high level of accuracy in detecting and segmenting videos into distinct stories. Specifically, we achieved a precision of 93% at an Intersection over Union threshold of 90%. Furthermore, our approach has been shown to have significant sustainability benefits: it filtered out approximately 20% of the content from the 50 hours of video tested. This reduction in the amount of data that needs to be managed, analyzed and stored can lead to substantial cost savings and environmental benefits by reducing the energy consumption and carbon emissions associated with data processing and storage.
Khalil Guetari, Yannis Tevissen, Frederic Petitpont | Moments Lab Research | Boulogne-Billancourt, France
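The evaluation metric quoted in the abstract, precision at an Intersection-over-Union threshold, can be sketched for temporal story segments as follows. The metric itself is standard; the segment boundaries below are invented for illustration and are not the authors' data.

```python
# Sketch of segmentation precision at an IoU threshold, for (start, end)
# story segments measured in seconds.
def iou(seg_a, seg_b):
    """Intersection over Union of two (start, end) time segments."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union else 0.0

def precision_at_iou(predicted, ground_truth, threshold=0.9):
    """Fraction of predicted segments matching some reference segment."""
    hits = sum(1 for p in predicted
               if any(iou(p, g) >= threshold for g in ground_truth))
    return hits / len(predicted) if predicted else 0.0

# Illustrative segments: two stories on a tape, predictions slightly off.
truth = [(0.0, 60.0), (60.0, 150.0)]
preds = [(0.0, 58.0), (62.0, 150.0)]
print(precision_at_iou(preds, truth))  # -> 1.0
```

At a 0.9 threshold, a prediction must overlap a reference story almost exactly to count as correct, so the 93% precision reported above is a strict measure of boundary accuracy.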