Welcome to the NAB’s 2020 Broadcast Engineering and Information Technology (BEIT) Conference Proceedings. The papers offered here have been presented at the annual BEIT Conference at NAB Show, the world’s largest trade show for the media content creation and distribution industry.

The BEIT Conference program is established each year by the NAB Broadcast Engineering and Information Technology Conference Committee, a rotating group of senior technologists from NAB member organizations, along with representatives from the Society of Broadcast Engineers (SBE). The 2020 BEIT Conference Committee roster is available here.

The content available in the BEIT Conference Proceedings is covered under copyright provisions listed here.

2020 NAB BEIT Conference Proceedings

  • Toward a New Understanding of Frequency- and Impedance-Related Failures in Grounding Systems - $15

    Date: April 26, 2020

    The importance of grounding (also referred to as "earthing") has been known for well over two centuries. However, critical characteristics of damage-causing fault currents that reach a contemporary grounding system – often triggering equipment failure – are generally not sufficiently explored by engineers involved with the design and installation of protective grounding. This paper discusses the significant deficiencies in common grounding systems which occur due to the following:
    – Inadequate mitigation of broadband fault current frequencies (especially in the >60 MHz range – which are very common in lightning)
    – Existence of impedance "walls" created by inefficient ground-rod-to-soil interfaces.

    An examination of the dynamics of high-frequency faults and impedance mismatches in grounding systems is presented, demonstrating why these systems fail in spite of their adherence to commonly accepted design standards. Developing a higher level of grounding protection within the Broadcast Industry – which is increasingly necessary given equipment expense and sensitivity – therefore requires a deeper analysis and understanding of fault current components, characteristics, and events.
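
    As a rough illustration (not taken from the paper) of why such high-frequency components matter, a ground conductor can be modeled as a resistance R in series with an inductance L, so that its impedance magnitude rises with the frequency f of the fault current:

        \[ |Z(f)| \;=\; \sqrt{R^{2} + \left(2 \pi f L\right)^{2}} \]

    Under assumed illustrative values of R = 0.5 Ω and L = 1 µH, |Z| is about 0.5 Ω at 60 Hz but roughly 377 Ω at 60 MHz – one way to picture an "impedance wall" presented to fast fault components.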

    Thomas LaBarge | GroundLinx Technologies | Blue Ridge, Georgia, USA
    Nancy Swartz | GroundLinx Technologies | Blue Ridge, Georgia, USA
    Gordon Wysong | GroundLinx Technologies | Blue Ridge, Georgia, USA
    John Broccoli | GroundLinx Technologies | Blue Ridge, Georgia, USA
    John H. Belk | GroundLinx Technologies | Blue Ridge, Georgia, USA



  • Towards Designing a Subjective Assessment System for the Quality of Closed Captioning Using Artificial Intelligence - $15

    Date: April 26, 2020

    A novel quality assessment system design for Closed Captioning (CC) is proposed. CC was originally designed to let Deaf and Hard of Hearing (D/HoH) audiences enjoy audio/visual content much as hearing audiences do. Traditional quality assessment models have focused on empirical methods only, measuring quantitative accuracy by counting the number of word errors in the captions of a show. Errors are specifically defined to be quantitative (e.g., spelling errors) and/or assessed by trained experts. However, D/HoH audiences have been outspoken about their dissatisfaction with current CC quality. One solution could be to invite human evaluators representing different groups to assess the quality of CC at the end of each show; in reality, however, this would be difficult and impractical. We have developed an artificial intelligence (AI) system to include human subjective assessment in the CC quality assurance procedure. The system is designed to replicate the human evaluation process and can predict the subjective score for a given caption file. Probabilistic models of human evaluators were developed based on actual data from D/HoH audiences. A Deep Neural Network-Multilayer Perceptron (DNN-MLP) was then trained with the probability models and the data collected. To date, the major findings of this process are:

    1. The DNN-MLP predicted human subjective ratings of caption quality more accurately than basic statistical regression models (polynomial fitting),
    2. The user probability models for Deaf viewers and Hard of Hearing viewers appeared to capture the differing characteristics of the two primary service consumer groups, and
    3. The AI prediction system initially built solely from the literature appeared to improve after training with data based on the user probability models.
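
    As a rough illustration of the prediction step described above (a sketch under assumed inputs, not the authors' implementation), a multilayer perceptron regressor can be trained to map simple caption-quality features to a subjective score; the feature names and data below are hypothetical:

        # Minimal sketch (illustrative only): predict a subjective caption-quality
        # score from simple quantitative features. Features and data are hypothetical.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical features per caption file: [word error rate,
        # mean caption delay (s), paraphrasing rate]; targets are 0-5 subjective scores.
        X = np.array([[0.02, 1.0, 0.05],
                      [0.10, 3.5, 0.20],
                      [0.05, 2.0, 0.10]])
        y = np.array([4.6, 2.1, 3.5])

        model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
        model.fit(X, y)

        # Predict the subjective score for a new caption file's features.
        print(model.predict([[0.04, 1.5, 0.08]]))

    In practice the training targets would come from the user probability models described in the abstract rather than hand-picked values.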

    Somang Nam | University of Toronto | Toronto, ON, Canada
    Deborah Fels | Ryerson University | Toronto, ON, Canada



  • Transitioning a Network Operations Center from HD-SDI to IP - $15

    Date: April 26, 2020

    The PBS Network Operations Center (NOC) provides content aggregation and delivery for the Public Television Community. Like many other media facilities, the NOC was built as an HD-SDI-based facility. We have added IP-based "islands" to the facility as the technology has advanced. We have now reached the stage where demands for a more tightly integrated workflow to efficiently serve the needs of over-the-air and OTT delivery require that we move to a fully IP-based facility. This includes moving a number of our on-premises functions to the "public cloud," integrating the cloud functions seamlessly with our on-premises functions, and providing our operations and maintenance staff the ability to easily monitor all the diverse elements of this "system." This project is a work in progress. We will provide a snapshot of where we are in this process, and then present a summary of the lessons learned. This summary should provide the basis for others to structure their own facility transitions.

    James (Andy) Butler | Public Broadcasting Service (PBS) | Alexandria, VA USA



  • Transitioning Broadcast to Cloud - $15

    Date: April 26, 2020

    We analyze the differences between on-premises broadcast and cloud-based online video delivery workflows and identify technologies needed for bridging the gaps between them. Specifically, we note differences in ingest protocols, media formats, signal-processing chains, codec constraints, metadata, transport formats, delays, and means for implementing operations such as ad splicing, redundancy, and synchronization. To bridge the gaps, we suggest specific improvements in cloud ingest, signal processing, and transcoding stacks. Cloud playout is also identified as a critically needed technology for convergence. Finally, based on all such considerations, we offer sketches of several possible hybrid architectures, with different degrees of offloading of processing to the cloud, that are likely to emerge in the future.
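
    As a purely illustrative sketch (not taken from the paper), the "different degrees of offloading" can be pictured as alternative placements of the same pipeline stages between the facility and the cloud; the stage names and groupings below are assumptions:

        # Minimal sketch (illustrative only): three hypothetical hybrid splits of a
        # broadcast-to-online pipeline, differing in how much processing is offloaded
        # to the cloud. Stage names and groupings are assumptions, not the paper's.
        PIPELINE = ["ingest", "signal_processing", "transcode", "package", "playout", "delivery"]

        HYBRID_SPLITS = {
            "mostly_on_prem": {"cloud": ["delivery"]},
            "cloud_transcode": {"cloud": ["transcode", "package", "delivery"]},
            "cloud_playout": {"cloud": ["signal_processing", "transcode", "package",
                                        "playout", "delivery"]},
        }

        def placement(split_name):
            """Return (stage, location) pairs for a given hybrid split."""
            cloud_stages = set(HYBRID_SPLITS[split_name]["cloud"])
            return [(s, "cloud" if s in cloud_stages else "on-prem") for s in PIPELINE]

        for name in HYBRID_SPLITS:
            print(name, placement(name))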

    Yuriy Reznik | Brightcove, Inc. | Boston, MA, USA
    Jordi Cenzano | Brightcove, Inc. | Boston, MA, USA
    Bo Zhang | Brightcove, Inc. | Boston, MA, USA



  • Using TV Transmission System Commissioning Reports - $15

    Date: April 26, 2020

    As US TV channels are repacked, impacted stations receive commissioning documentation for their transmission systems from manufacturers, contractors, and consultants. Too often, these reports are received, filed away, and not seen again until there's a problem. And their creators often compartmentalize subsystems to the products they deliver or commission, not looking at the transmission system as a whole. This presentation will review the documentation typically received, describe the measurements most pertinent to ongoing station operation, and discuss the use of these baseline reports and data in maintenance and troubleshooting of transmission systems.

    Karl D. Lahm | Broadcast Transmission Services, LLC | Traverse City, MI, USA



  • Virtualization and Cloudification of Next Gen Broadcast Chain: Consequences and Opportunities - $15

    Date: April 26, 2020

    This presentation will delve into the true economic benefits of cloudification of NextGen TV broadcasts. From operating two-sided marketplaces to yield-optimization algorithms for spectrum bandwidth, this presentation will address various use cases and business opportunities resulting from upgrading the IT stack of a broadcast station.

    Using live demonstrations, this presentation will make a case for open APIs and for the power of a central cloud operating across DMAs, integrated with various broadcast chain vendors and orchestrating all broadcast operations as workflows in a microservices architecture, to truly unleash the value of the broadcast spectrum and network.

    This presentation will address the relevance of ATSC 3.0 in the 5G era, making a case for the value differentiation of broadcast bits versus telco bits. Looking deeper into the already-underway phenomenon of cloudified broadcast operations, this presentation will focus on specific NextGen broadcast functionality that will originate in the cloud from day one.

    The migration to the cloud with NextGen TV and the virtualization of the broadcast chain not only makes economic sense but also enhances the broadcast service bouquet. Making a case for a two-sided broadcast marketplace model, we discuss the NextGen TV broadcast-as-a-service paradigm and the technologies that are enabling this reality.

    A virtualized broadcast chain operates over open APIs and common, industry-accepted messaging formats and XML taxonomies. This presentation describes the components of a generic 3.0 chain in detail and identifies a service orchestration layer on top to interoperate across vendor systems and multiple DMA chains.
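
    Purely as a sketch of the kind of orchestration layer described above (the endpoints, payload fields, and service names are hypothetical and do not come from any vendor API):

        # Minimal sketch (illustrative only): an orchestration layer driving a
        # generic 3.0 chain through hypothetical REST endpoints. All URLs,
        # payload fields, and service names are invented for illustration.
        import requests

        CHAIN_SERVICES = {
            "scheduler": "https://orchestrator.example/api/scheduler",
            "encoder": "https://orchestrator.example/api/encoder",
            "packager": "https://orchestrator.example/api/route-dash-packager",
            "gateway": "https://orchestrator.example/api/broadcast-gateway",
        }

        def start_service(market, name, config):
            """Ask the orchestration layer to start one chain component for a given DMA."""
            resp = requests.post(f"{CHAIN_SERVICES[name]}/{market}/start",
                                 json=config, timeout=10)
            resp.raise_for_status()
            return resp.json()

        def bring_up_chain(market):
            """Bring up the whole chain for one DMA as a simple sequential workflow."""
            for name in ("scheduler", "encoder", "packager", "gateway"):
                start_service(market, name, {"profile": "default"})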

    The NextGen TV system manager component is considered the brain of the NextGen TV broadcast. The interfaces and capabilities of the system manager are described in detail so broadcasters migrating to 3.0 can start planning the scope and purpose of the system manager as they transition to 3.0 broadcasts.

    Finally, the presentation covers the need for a network of system managers working in tandem to manage multiple broadcasts in an SFN world. The chief engineer and the software architects and engineers of a broadcast station are the ideal audience for this presentation.

    Learning Objectives:
    1. Attendees will walk away with an appreciation of how NextGen TV technology can offer services across multiple DMAs.
    2. Attendees will appreciate that broadcast is moving to an "as a service," software-defined model for broadcaster spectrum, with open APIs and cloud-based infrastructure.
    3. Attendees will go through live use-case analyses to identify "as-a-service" opportunities and cloud-based service models, both for downstream consumers and for upstream enterprises.

    Prabu David | Michigan State University | East Lansing, MI, USA
    Chandra Kotaru | Gaian Solutions | San Jose, CA, USA



  • Watson Captioning Live: Leveraging AI for Smarter, More Accessible Closed Captioning - $15

    Date: April 26, 2020

    The requirements for closed captioning were established more than two decades ago, but many broadcasters still struggle to deliver accurate, timely, and contextually relevant captions. Breaking news, weather, and entertainment programming often feature delayed or incorrect captions, further demonstrating that there is great room for improvement. These shortcomings lead to a confusing viewing experience for the nearly 48 million Americans with hearing loss and any other viewers who need captioning to fully digest content.

    Committed to transforming broadcasters' ability to provide all audiences with more impactful viewing experiences, IBM Watson Media launched Watson Captioning Live, a trainable, cloud-based solution producing accurate captions in real time to ensure audiences have equal access to timely and vital information. Combining breakthrough AI technology like machine learning models and speech recognition, Watson Captioning Live redefines industry captioning standards.

    The solution uses IBM Watson Speech to Text API to automatically ingest and transcribe spoken words and audio within a video. Watson Captioning Live is trained to automatically recognize and learn from data updates to ensure timely delivery of factually accurate captions. The product is designed to learn over time to increase its long-term value proposition for broadcast producers.
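
    As a rough sketch of the transcription step described above, using the publicly available IBM Watson Speech to Text Python SDK (the Watson Captioning Live product itself is not shown, and the API key, service URL, and file name are placeholders):

        # Minimal sketch (illustrative only): transcribe an audio file with the
        # IBM Watson Speech to Text SDK. Credentials, service URL, and file name
        # are placeholders; Watson Captioning Live layers training and caption
        # formatting on top of this kind of transcription step.
        from ibm_watson import SpeechToTextV1
        from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

        authenticator = IAMAuthenticator("YOUR_API_KEY")
        stt = SpeechToTextV1(authenticator=authenticator)
        stt.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")

        with open("newscast_audio.wav", "rb") as audio:
            result = stt.recognize(audio=audio, content_type="audio/wav").get_result()

        # Print the best transcript for each recognized segment.
        for chunk in result["results"]:
            print(chunk["alternatives"][0]["transcript"])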

    This paper will explore how IBM Watson Captioning Live leverages AI and machine learning technology to deliver accurate closed captions at scale and in real time, making programming more accessible for all.

    Brandon Sullivan | The Weather Company Solutions | Austin, TX, USA



  • Wireless Microphone Operation for Mega-Events in the 1435-1525 MHz Band - $15

    Date: April 26, 2020

    Due to the repurposing of the 600 MHz band, scheduled to be completed in July 2020, the alternate 1435–1525 MHz band will be more heavily used by broadcasters for wireless microphones to cover mega-events such as the Super Bowl, World Series, Kentucky Derby, national elections, the Academy Awards, etc. This band has been used for wireless microphones through special temporary authorizations (STAs) granted by the FCC. This procedure will effectively be normalized through an approval process with the Aerospace and Flight Test Radio Coordinating Council ("AFTRCC"), the organization that coordinates aeronautical mobile telemetry (i.e., flight testing), the primary service in this band. Wireless microphone equipment must incorporate location, date, and time awareness. AFTRCC will provide a digital code (i.e., an electronic key) that will unlock the equipment, enabling it to work at the approved time and location. This paper and presentation will detail the regulations, eligibility, and procedures for operating wireless microphones in this frequency band.
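
    Purely as a hypothetical sketch of the "location, date, and time awareness" concept described above (the authorization fields and checks are invented for illustration; AFTRCC's actual key mechanism is not specified in this abstract):

        # Minimal sketch (illustrative only): decide whether a wireless-mic unit may
        # transmit, given a hypothetical authorization tied to a location, a time
        # window, and the 1435-1525 MHz tuning range. Field names are invented.
        from dataclasses import dataclass
        from datetime import datetime
        from math import radians, sin, cos, asin, sqrt

        @dataclass
        class Authorization:
            lat: float          # approved site latitude (degrees)
            lon: float          # approved site longitude (degrees)
            radius_km: float    # approved operating radius
            start: datetime     # approved window start
            end: datetime       # approved window end

        def distance_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points (haversine formula)."""
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
            return 6371.0 * 2 * asin(sqrt(a))

        def may_transmit(auth, lat, lon, when, freq_mhz):
            """True only inside the approved place, time window, and 1435-1525 MHz band."""
            return (auth.start <= when <= auth.end
                    and distance_km(auth.lat, auth.lon, lat, lon) <= auth.radius_km
                    and 1435.0 <= freq_mhz <= 1525.0)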

    Ciaudelli | SENNHEISER Research & Innovation | Old Lyme, CT USA