Welcome to the NAB’s 2020 Broadcast Engineering and Information Technology (BEIT) Conference Proceedings. The papers offered here have been presented at the annual BEIT Conference at NAB Show, the world’s largest trade show for the media content creation and distribution industry.
The BEIT Conference program is established each year by the NAB Broadcast Engineering and Information Technology Conference Committee, a rotating group of senior technologists from NAB member organizations, along with representatives from the Society of Broadcast Engineers (SBE). The 2020 BEIT Conference Committee roster is available here.
The content available in the BEIT Conference Proceedings is covered under copyright provisions listed here.
2020 Proceedings Topics
- Advanced Advertising Technologies
- Advanced Emergency Alerting
- Artificial Intelligence Applications for Media Production
- Broadcast Facility Design
- Broadcast Workflows
- Converting to Ultra HD Television Broadcasting
- Cybersecurity for Broadcast
- Designing Cloud-based Facilities
- Emerging Radio Technologies — On-air and Online
- Improving OTT Video Quality
- IP Conversion: Broadcasters’ Research & Recommendations
- Managing HDR and SDR Content
- Media over IP: Security and Timing
- New Directions in IP-based Media Systems
- New Spectrum Issues
- New Technologies for Sports Coverage
- Next Gen Academy I: The Broadcast/Broadband Revolution
- Next Gen Academy II: Transport Matters
- Next Gen Academy III: Next Steps
- Next Gen Academy IV: Planning for SFNs
- Next Gen Academy V: Implementing SFNs
- Next Gen Academy VI: PHY Layer Issues
- Next Gen TV Audio
- Optimizing the OTT User Experience
- Refining Radio Delivery
- TV Repack and Next Gen Transition Preparation
- Using 5G Broadcasting for Content Delivery
- Using 5G Technologies for Media Production
- Using Artificial Intelligence for Closed Captioning
- Using the Cloud for Live Production
Other Proceedings
Toward a New Understanding of Frequency- and Impedance-Related Failures in Grounding Systems - $15
Date: April 26, 2020 | Topics: 2020 BEITC Proceedings, Broadcast Facility Design
The importance of grounding (also referred to as "earthing") has been known for well over two centuries. However, critical characteristics of the damage-causing fault currents that reach a contemporary grounding system, often triggering equipment failure, are generally not sufficiently explored by the engineers involved in the design and installation of protective grounding. This paper discusses the significant deficiencies in common grounding systems that arise from the following:
– Inadequate mitigation of broadband fault current frequencies (especially in the >60 MHz range, which are very common in lightning)
– Existence of impedance "walls" created by inefficient ground-rod-to-soil interfaces.
An examination of the dynamics of high-frequency faults and impedance mismatches in grounding systems is presented, demonstrating why these systems fail in spite of their adherence to commonly accepted design standards. Developing a higher level of grounding protection within the broadcast industry, which is increasingly necessary given equipment expense and sensitivity, therefore requires a deeper analysis and understanding of fault current components, characteristics, and events.
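As a back-of-the-envelope illustration of why a conductor that measures as an excellent ground at power-line frequencies can present an impedance "wall" to high-frequency fault components, the sketch below computes the inductive reactance X_L = 2πfL of a hypothetical 10 m grounding run, assuming a rule-of-thumb self-inductance of roughly 1 µH/m. Both figures are illustrative assumptions, not values taken from the paper.

```python
# Illustrative only: compares the inductive reactance of a grounding run at
# power-line frequency and at a 60 MHz lightning component. The ~1 uH/m
# self-inductance and the 10 m run length are rule-of-thumb assumptions;
# real values depend on conductor geometry and the installation.
import math

INDUCTANCE_PER_METER_H = 1e-6   # assumed ~1 uH per meter of straight conductor
RUN_LENGTH_M = 10.0             # hypothetical length of the grounding run

def inductive_reactance_ohms(freq_hz: float) -> float:
    """X_L = 2 * pi * f * L for the assumed conductor run."""
    inductance_h = INDUCTANCE_PER_METER_H * RUN_LENGTH_M
    return 2 * math.pi * freq_hz * inductance_h

for freq_hz in (60.0, 60e6):    # 60 Hz mains vs. a 60 MHz lightning component
    print(f"{freq_hz:>14,.0f} Hz -> X_L = {inductive_reactance_ohms(freq_hz):10.3f} ohm")
```

Under these assumptions the run presents only a few milliohms at 60 Hz but several kilohms at 60 MHz, which is the kind of frequency-dependent mismatch the paper argues conventional designs overlook.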
Thomas LaBarge | GroundLinx Technologies | Blue Ridge, Georgia, USA
Nancy Swartz | GroundLinx Technologies | Blue Ridge, Georgia, USA
Gordon Wysong | GroundLinx Technologies | Blue Ridge, Georgia, USA
John Broccoli | GroundLinx Technologies | Blue Ridge, Georgia, USA
John H. Belk | GroundLinx Technologies | Blue Ridge, Georgia, USA
Towards Designing a Subjective Assessment System for the Quality of Closed Captioning Using Artificial Intelligence - $15
Date: April 26, 2020 | Topics: 2020 BEITC Proceedings, Using Artificial Intelligence for Closed Captioning
A novel quality assessment system design for Closed Captioning (CC) is proposed. CC was originally designed to let Deaf and Hard of Hearing (D/HoH) audiences enjoy audio/visual content in the same way hearing audiences do. Traditional quality assessment models have focused on empirical methods only, measuring quantitative accuracy by counting the number of word errors in the captions of a show. Errors are specifically defined to be quantitative (e.g., spelling errors) and/or assessed by trained experts. However, D/HoH audiences have been outspoken about their dissatisfaction with current CC quality. One solution could be inviting human evaluators who represent the different viewer groups to assess the quality of CC at the end of each show; in reality, however, this would be difficult and impractical. We have developed an artificial intelligence (AI) system that brings human subjective assessment into the CC quality assurance procedure. The system is designed to replicate the human evaluation process and can predict the subjective score for a given caption file. Probabilistic models of human evaluators were developed based on actual data from D/HoH audiences. Deep Neural Network-Multilayer Perceptron (DNN-MLP) models were then trained with the probability models and the data collected. To date, the major findings of this process are:
1. The DNN-MLP predicted human subjective ratings for a given caption file more accurately than basic statistical regression models (polynomial fitting);
2. The user probability models for Deaf viewers and Hard of Hearing viewers appeared to capture the differing characteristics of the two primary service consumer groups; and
3. The AI prediction system initially built solely from the literature appeared to improve after training with data generated from the user probability models.
Somang Nam | University of Toronto | Toronto, ON, Canada
Deborah Fels | Ryerson University | Toronto, ON, Canada
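A minimal sketch of the kind of regression described above, predicting a subjective caption-quality score with a small MLP, is shown below. The caption features, the synthetic evaluator that generates 1-5 ratings, and the network size are hypothetical stand-ins for the probabilistic user models and collected data described in the paper.

```python
# Minimal sketch: predicting a subjective caption-quality score with an MLP.
# The features, the synthetic "evaluator" that generates 1-5 ratings, and the
# network size are hypothetical stand-ins for the probabilistic user models
# and collected data described in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.0, 0.3, n),   # word error rate of the caption file
    rng.uniform(0.0, 6.0, n),   # mean caption delay in seconds
    rng.uniform(0.0, 0.5, n),   # fraction of paraphrased segments
])
# Toy evaluator model: quality drops with errors and delay, plus rating noise.
y = np.clip(
    5.0 - 8.0 * X[:, 0] - 0.4 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0.0, 0.3, n),
    1.0, 5.0,
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out caption files: {model.score(X_test, y_test):.3f}")
```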
Transitioning a Network Operations Center from HD-SDI to IP - $15
Date: April 26, 2020 | Topics: 2020 BEITC Proceedings, IP Conversion: Broadcasters' Research & Recommendations
The PBS Network Operations Center (NOC) provides content aggregation and delivery for the Public Television Community. Like many other media facilities, the NOC was built as an HD-SDI-based facility. We have added IP-based "islands" to the facility as the technology has advanced. We have now reached the stage where demands for a more tightly integrated workflow to efficiently serve the needs of over-the-air and OTT delivery require that we move to a fully IP-based facility. This includes moving a number of our on-premises functions to the "public cloud," integrating the cloud functions seamlessly with our on-premises functions, and providing our operations and maintenance staff the ability to easily monitor all the diverse elements of this "system." This project is a work in progress. We will provide a snapshot of where we are in this process and then present a summary of the lessons learned. This summary should provide a basis for others to structure their own facility transitions.
James (Andy) Butler | Public Broadcasting Service (PBS) | Alexandria, VA USA
Transitioning Broadcast to Cloud - $15
Date: April 26, 2020 | Topics: 2020 BEITC Proceedings, Designing Cloud-based Facilities
We analyze the differences between on-premises broadcast and cloud-based online video delivery workflows and identify the technologies needed to bridge the gaps between them. Specifically, we note differences in ingest protocols, media formats, signal-processing chains, codec constraints, metadata, transport formats, delays, and the means for implementing operations such as ad splicing, redundancy, and synchronization. To bridge the gaps, we suggest specific improvements in cloud ingest, signal-processing, and transcoding stacks. Cloud playout is also identified as a critically needed technology for convergence. Finally, based on all of these considerations, we offer sketches of several possible hybrid architectures, with different degrees of processing offloaded to the cloud, that are likely to emerge in the future.
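As a purely illustrative sketch of what "different degrees of offloading" might look like in practice, the snippet below lists the stages of a hypothetical linear-channel workflow and marks where each stage runs under three candidate hybrid splits. The stage names and placements are assumptions for illustration, not the architectures proposed in the paper.

```python
# Purely illustrative: three hypothetical splits of a linear-channel workflow
# between the on-premises plant and the cloud. Stage names and placements are
# assumptions for illustration, not the architectures proposed in the paper.
STAGES = ["ingest", "playout", "graphics", "transcode", "packaging", "delivery"]

HYBRID_OPTIONS = {
    "cloud-light":  {"packaging", "delivery"},
    "cloud-medium": {"transcode", "packaging", "delivery"},
    "cloud-heavy":  {"playout", "graphics", "transcode", "packaging", "delivery"},
}

for name, cloud_stages in HYBRID_OPTIONS.items():
    placements = [f"{s}={'cloud' if s in cloud_stages else 'on-prem'}" for s in STAGES]
    print(f"{name:>12}: " + ", ".join(placements))
```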
Yuriy Reznik | Brightcove, Inc. | Boston, MA, USA
Jordi Cenzano | Brightcove, Inc. | Boston, MA, USA
Bo Zhang | Brightcove, Inc. | Boston, MA, USA
Watson Captioning Live: Leveraging AI for Smarter, More Accessible Closed Captioning - $15
Date: April 26, 2020 | Topics: 2020 BEITC Proceedings, Using Artificial Intelligence for Closed Captioning
The requirements for closed captioning were established more than two decades ago, but many broadcasters still struggle to deliver accurate, timely, and contextually relevant captions. Breaking news, weather, and entertainment programming often feature delayed or incorrect captions, further demonstrating that there is great room for improvement. These shortcomings lead to a confusing viewing experience for the nearly 48 million Americans with hearing loss and any other viewers who need captioning to fully digest content.
Committed to transforming broadcasters' ability to provide all audiences with more impactful viewing experiences, IBM Watson Media launched Watson Captioning Live, a trainable, cloud-based solution that produces accurate captions in real time to ensure audiences have equal access to timely and vital information. Combining breakthrough AI technologies such as machine learning models and speech recognition, Watson Captioning Live redefines industry captioning standards.
The solution uses the IBM Watson Speech to Text API to automatically ingest and transcribe spoken words and audio within a video. Watson Captioning Live is trained to automatically recognize and learn from data updates to ensure timely delivery of factually accurate captions. The product is designed to learn over time, increasing its long-term value proposition for broadcast producers.
This paper will explore how IBM Watson Captioning Live leverages AI and machine learning technology to deliver accurate closed captions at scale, in real time, to make programming more accessible for all.
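Watson Captioning Live itself is a packaged product, but the Watson Speech to Text API it builds on can be exercised directly. The sketch below, with placeholder credentials and a hypothetical audio file, shows roughly how word-level timestamps from that API could be turned into naive caption cues; it is a simplified assumption about usage, not the product's actual pipeline.

```python
# Rough sketch: transcribing an audio clip with the IBM Watson Speech to Text
# API and turning word timestamps into naive caption cues. The credentials,
# service URL, and file name are placeholders; Watson Captioning Live's own
# pipeline is more involved than this.
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")      # placeholder credentials
stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url("YOUR_SERVICE_URL")               # placeholder endpoint

with open("news_clip.wav", "rb") as audio:            # hypothetical audio clip
    response = stt.recognize(
        audio=audio,
        content_type="audio/wav",
        timestamps=True,          # request per-word start/end times
        smart_formatting=True,    # format numbers, dates, etc.
    ).get_result()

for result in response["results"]:
    best = result["alternatives"][0]
    words = best.get("timestamps", [])
    if words:
        start, end = words[0][1], words[-1][2]
        print(f"{start:7.2f} --> {end:7.2f}  {best['transcript'].strip()}")
```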
Brandon Sullivan | The Weather Company Solutions | Austin, TX, USA
Wireless Microphone Operation for Mega-Events in the 1435-1525 MHz Band - $15
Date: April 26, 2020 | Topics: 2020 BEITC Proceedings, New Spectrum Issues
Due to the repurposing of the 600 MHz band, scheduled to be completed in July 2020, the alternate 1435-1525 MHz band will be more heavily used by broadcasters for wireless microphones covering mega-events such as the Super Bowl, the World Series, the Kentucky Derby, national elections, the Academy Awards, etc. This band has been used for wireless microphones through special temporary authorizations (STAs) granted by the FCC. This procedure will effectively be normalized through an approval process with the Aerospace and Flight Test Radio Coordinating Council ("AFTRCC"), the organization that coordinates aeronautical mobile telemetry (i.e., flight testing), the primary service in this band. Wireless microphone equipment must incorporate location, date, and time awareness. AFTRCC will provide a digital code (i.e., an electronic key) that will unlock the equipment, enabling it to work at the approved time and location. This paper and presentation will detail the regulations, eligibility, and procedure for operating wireless microphones in this frequency band.
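Conceptually, the location/date/time gating described above amounts to a check like the sketch below. The key format, venue radius, and specific checks are hypothetical illustrations, not AFTRCC's actual authorization mechanism.

```python
# Conceptual sketch of the location/date/time gating described above. The key
# format, venue radius, and checks are hypothetical illustrations, not the
# actual AFTRCC authorization mechanism.
from dataclasses import dataclass
from datetime import datetime
import math

@dataclass
class Authorization:
    key: str            # electronic key issued for the approved event
    lat: float          # approved latitude, degrees
    lon: float          # approved longitude, degrees
    radius_km: float    # hypothetical allowed radius around the venue
    start: datetime     # start of the approved operating window
    end: datetime       # end of the approved operating window

def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance using a ~6371 km Earth radius."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def transmitter_enabled(auth: Authorization, entered_key: str,
                        lat: float, lon: float, now: datetime) -> bool:
    """Enable 1435-1525 MHz operation only with the correct key, inside the
    approved time window, and within the approved area."""
    return (entered_key == auth.key
            and auth.start <= now <= auth.end
            and distance_km(auth.lat, auth.lon, lat, lon) <= auth.radius_km)
```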
Ciaudelli | SENNHEISER Research & Innovation | Old Lyme, CT USA