2020 BEITC Proceedings

  • Artificial Intelligence: Transforming the Live Sports Landscape - $15

    Date: April 26, 2020

    The proposed paper focuses on a media-recognition AI engine trained to catalog an exhaustive range of elements in live sports in near real time. It helps sports producers cut the time and effort involved in highlight creation by 60% and enables OTT platforms to deliver high levels of interactivity and personalization through a create-your-own-highlights experience and a powerful live search option. The engine identifies storytelling graphics and events in the footage along with game content to stitch together an end-to-end story in highlights, and adds finesse with smooth visual transitions and automatic audio leveling – just like a professional editor. The engine helps sports content creators enhance viewer engagement, increase monetization, and achieve greater scale and speed in live sports.

    Adrish Bera | Prime Focus Technologies | Burbank, CA, United States
    Amer Saleem



  • ATSC 3.0 - Building Media Networks for Next Gen TV - $15

    Date: April 26, 2020

    Successful delivery of Next Gen TV using ATSC 3 while preserving traditional ATSC 1 requires cooperation between broadcasters to deliver a user experience that meets or exceeds today's online content streaming while maintaining compliance with FCC obligations.

    Building this out quickly will require a flexible network strategy using public and private network topologies that addresses the challenge of hardening the required synchronous SFN against malicious attacks, both internal and external, to guarantee delivery of Emergency Alerts to the public in times of crisis. We will review different strategies in use today for public and private networks used in South Korea's ATSC 3 deployment and in ATSC 1, DAB, DVB, and ISDB-T systems.

    Learning Objectives:
    – Effective cost control of building a Regional Media Network between broadcasters, leveraging both public and private networks for live media delivery.
    – How to simplify the legal issues of connectivity, SLA and Retrans Agreements through a Software Defined Network (SDN).
    – Examples of recent outside attacks on terrestrial networks, with lessons learned from building out of the SFN and realistic ways to address the vulnerabilities.

    Christer Bohm | Net Insight AB | Stockholm, Sweden
    Per Lindgren | Net Insight AB | Stockholm, Sweden
    Arielle Slama | Net Insight AB | Stockholm, Sweden



  • ATSC 3.0 Backward Compatible SFN In-Band Distribution Link and In-Band Inter-Tower Wireless Network for Backhaul, IoT and Datacasting - $15

    Date: April 26, 2020

    Last year, we presented an ATSC 3.0 backward-compatible in-band backhaul system. In this paper we present the field measurement results that support the viability of implementing the in-band backhaul system using full-duplex transmission. We further present a full-duplex Inter-Tower Communication System – a reconfigurable wireless network for SFN broadcasting, in-band inter-tower communications, and IoT/datacasting applications that remains backward compatible with ATSC 3.0.

    (1) SFN Broadcasting:

    Improve service quality for mobile, handheld, and indoor reception;
    Allow new services: IoT, connected car, datacasting;
    One-to-many timely services for large rural areas: traffic map updates, weather conditions, emergency warnings.

    (2) In-band Distribution for SFN

    Eliminate studio-to-tower link spectrum requirement;
    Reduce broadcast operating costs;
    Spectrum sharing and re-use.

    (3) Inter-Tower Wireless Network

    Scalable & reconfigurable network embedded in a broadcast system;
    Broadcast network cue & control that does not rely on other telecom infrastructure – surviving emergencies and natural disasters;
    Backhaul data services among towers: IP-based IoT and wide-area datacasting;
    Each tower can broadcast localized content in its coverage area (TDM/LDM);
    Inter-Tower Network can work under SFN, OCR, or Multi-Frequency Network environments;
    Full-Duplex Transmission: transmitting and receiving on the same frequency – improving spectrum efficiency;
    Dynamic Spectrum Re-Use and Sharing + LDM: converging broadcast and wireless broadband services.

    This session will be rebroadcast on the BEIT Express channel on May 13, 2020 at 9:15 p.m. and May 14, 2020 at 5:15 a.m. EDT (UTC -4).

    Yiyan Wu | Communications Research Centre Canada | Ottawa, Canada
    Liang Zhang | Communications Research Centre Canada | Ottawa, Canada
    Wei Li | Communications Research Centre Canada | Ottawa, Canada
    Sébastien Laflèche | Communications Research Centre Canada | Ottawa, Canada
    Sung-Ik Park | Electronics and Telecommunications Research Institute | Daejeon, Korea
    Jae-young Lee | Electronics and Telecommunications Research Institute | Daejeon, Korea
    Heung-Mook Kim | Electronics and Telecommunications Research Institute | Daejeon, Korea



  • ATSC 3.0 Personalized Targeted Advertising - $15

    Date: April 26, 2020

    Among the numerous enhancements ATSC 3.0 brings to next generation TV, personalized content is one of the most innovative and awaited. Consumers have become accustomed to interactive content consumption on their mobile devices, while legacy television has continued along its traditional one-way broadcast delivery path. In reaction to this widening gap, ATSC 3.0 paves the way for a new content delivery model based on broadcast/broadband hybridization. This important step forward allows content providers to enhance a linear broadcast program with highly customized content, such as local weather forecasts or emergency alerts, and more specifically targeted content. The aim of this paper is to break down and explain the entire production workflow and show how this technology can lead to a win-win situation for viewers, broadcasters, advertisers, and equipment vendors alike.

    Luke Fay | Sony | San Diego, CA, US
    Lucas Gregory | ATEME | Rennes, France
    Mickaël Raulet | ATEME | Rennes, France



  • ATSC: Beyond Standards and a Look at the Future - $15

    Date: April 26, 2020

    As a standards development organization, the Advanced Television Systems Committee (ATSC) has been focused on completing the ATSC 3.0 suite of standards for the past several years. While amendments, corrigenda and other document extensions and maintenance are evergreen, ongoing activities in the standards realm, the ATSC mission also encompasses “looking around the corner” for the Next Big Thing in the evolution of broadcast media. To this end, ATSC has formed Planning Teams and other groups that are examining advanced broadcast technologies, automotive applications, global recognition of ATSC 3.0, a near-term service roadmap for ATSC 3.0 services and features, convergence with international data transmission systems such as the Internet and 4G/5G, and other topics. This presentation will feature members of the ATSC leadership team laying out the details of this challenging workplan and its current progress, how industry representatives can participate, and the steps being taken toward developing a robust vision for the future of broadcasting.

    Madeleine Noland | ATSC | Washington, DC, USA
    Jerry Whitaker | ATSC | Washington, DC, USA
    Lynn Claudy | NAB | Washington, DC, USA



  • Audio Streaming: A New Approach - $15

    Date: April 26, 2020

    Most of your streaming audience wants something that just works reliably; they don’t care about the tech behind it. Many audio streams currently in service fall short, and it is up to the content providers to make them work. If you stream audio, you need to know about this. This information is targeted at streaming professionals.

    This paper/presentation describes a standards-based method of delivering quality live streaming audio with dramatically decreased operating costs, increased reliability, professional features, and a better user experience. Streaming audio has now evolved past legacy protocols, allowing reliable content delivery to mobile devices and connected-car dashboards, where all the audience growth continues.

    Greg Ogonowski | StreamS HiFi Radio/Modulation Index, LLC | Diamond Bar, CA USA
    Nathan Niyomtham | StreamS HiFi Radio/Modulation Index, LLC | Diamond Bar, CA USA



  • Audio-Specific Metadata To Enhance the Quality of Audio Streams and Podcasts - $15

    Date: April 26, 2020

    Audio-specific metadata was envisioned several years ago in the MPEG-D standard for Dynamic Range Control. The application of this metadata to online content awaited a newer audio codec and the current generation of mobile operating systems. Now that both are becoming widely available, this paper explains how audio content providers can offer new consumer benefits as well as a more compelling listening experience.

    This paper and presentation will explain the types of audio-specific metadata that describe key characteristics of audio content, such as loudness, dynamic range, and signal peaks. From small to large-scale producers, the metadata sets are added to the content during encoding, whether for real-time distribution (as in streams) or for file storage (as with podcasts).

    In the playback device or system, this metadata is decoded along with the audio data frames. Diagrams describe the decoder operations, which provide benefits such as loudness matching across different audio content, ending the annoying blasting from some material and the reaching for the volume control. It will be shown that audio dynamic range can be controlled according to the noise environment around the listener: quiet parts of a performance can be raised to audibility, but only for those who need it.

    An audio demonstration is planned to allow the audience to hear the same encoded program over a range of playout conditions on the same device, from riding public transit to full dynamic range for listeners who want the highest fidelity. The workflows to add audio-specific metadata in production and distribution are explained, showing how content producers need to make only one target level for all listeners, rather than one for smart speakers, another for fidelity-conscious listeners, and so on.
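    The one-target-level idea above can be sketched in a few lines: the player reads a loudness value from metadata and derives its own gain, so one encoded master serves every playback target. This is an illustrative simplification, not the MPEG-D DRC decoder; the -16 LUFS player target and the function names are assumptions for illustration.

```python
# Illustrative sketch of metadata-driven loudness matching; this is a
# simplification, not the MPEG-D DRC decoder. The -16 LUFS player
# target is an assumed value for illustration.

def loudness_match_gain(content_loudness_lufs: float,
                        target_lufs: float = -16.0) -> float:
    """Gain in dB that aligns an item's measured integrated loudness
    (carried as metadata) with the player's target level."""
    return target_lufs - content_loudness_lufs

def apply_gain_db(samples, gain_db):
    """Scale linear PCM samples by a dB gain."""
    scale = 10.0 ** (gain_db / 20.0)
    return [s * scale for s in samples]

# A loud podcast (-12 LUFS) is turned down 4 dB; a quiet stream
# (-23 LUFS) is turned up 7 dB, so the listener never reaches for
# the volume control between items.
print(loudness_match_gain(-12.0))  # -4.0
print(loudness_match_gain(-23.0))  # 7.0
```

    Because the gain is computed from metadata rather than by reprocessing the audio, the full dynamic range of the master is preserved for listeners whose environment allows it.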

    John Kean | Cavell Mertz & Associates Inc. | Manassas, Virginia, USA
    Alex Kosiorek | Central Sound at Arizona PBS | Phoenix, Arizona, USA



  • Automated Brand Color Accuracy for Real-Time Video - $15

    Date: April 26, 2020

    Sports fans know their team’s colors and recognize inconsistencies when their beloved team’s colors show up incorrectly on screen. Repetition and consistency build brand recognition, and that equity is valuable and worth protecting. Accurate display of specified colors on screen is affected by many factors, including the capture source, the display screen technology, and the ambient light, which can range broadly throughout an event due to changes in time and weather (for outdoor fields) or mixed artificial lighting (for indoor facilities). Changes to any of these factors demand adjustments of the output in order to maintain visual consistency. In standard industry practice, color management is handled by a technician who manually corrects footage from up to two dozen camera feeds in real time to maintain consistency across them. In contrast, the AI-powered ColorNet system ingests live video, adjusts each video frame with a trained machine learning model, and outputs a color-corrected feed in real time.

    This system is demonstrated using Clemson University’s orange specification, Pantone 165. The machine learning model was trained on a dataset of raw and color-corrected videos of Clemson football games; the footage was manually color corrected using Adobe Premiere Pro. This trains the model to target specific color values and adjust only the targeted brand colors pixel by pixel without shifting surrounding colors in the frame, generating localized corrections while adapting automatically to changes in lighting and weather. The ColorNet model successfully reproduces the manually created correction masks while performing well in terms of both prediction fidelity and speed. This approach can circumvent human error in color correction while constantly adjusting for the negative impact of lighting or weather changes on the display of brand colors. Progress is being made to beta test this model on live broadcast streams during a sporting event with large-format screens, in partnership with Clemson University Athletics.
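    As a rough illustration of the localized, pixel-by-pixel idea described above: shift only pixels that are already close to the brand color toward its exact value, and pass everything else through. The actual system uses a trained neural model; the brand RGB value (approximating Clemson orange), distance threshold, and blend strength below are all assumptions for illustration.

```python
# Rough illustration of targeted brand-color correction; NOT the
# trained ColorNet model. The brand RGB value (approximating Clemson
# orange), distance threshold, and blend strength are assumptions.

def correct_brand_color(frame, brand_rgb=(245, 102, 0),
                        threshold=60.0, strength=0.8):
    """frame: list of (r, g, b) tuples. Pixels within `threshold`
    Euclidean distance of brand_rgb are blended toward it; all other
    pixels pass through untouched, giving a localized correction."""
    out = []
    for r, g, b in frame:
        dist = ((r - brand_rgb[0]) ** 2 +
                (g - brand_rgb[1]) ** 2 +
                (b - brand_rgb[2]) ** 2) ** 0.5
        if dist <= threshold:
            r += strength * (brand_rgb[0] - r)
            g += strength * (brand_rgb[1] - g)
            b += strength * (brand_rgb[2] - b)
        out.append((round(r), round(g), round(b)))
    return out

# A washed-out orange pixel is pulled toward the brand color; an
# unrelated blue pixel is left alone.
print(correct_brand_color([(235, 95, 10), (0, 128, 255)]))
```

    A learned model replaces the fixed threshold and blend with corrections conditioned on the whole frame, which is what lets it track changing light and weather.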

    Emma Mayes | Clemson University | Clemson, South Carolina, United States
    John Paul Lineberger | Clemson University | Clemson, South Carolina, United States
    Michelle Mayer | Clemson University | Clemson, South Carolina, United States
    Andrew Sanborn | Clemson University | Clemson, South Carolina, United States
    Hudson Smith | Clemson University | Clemson, South Carolina, United States
    Erica Walker | Clemson University | Clemson, South Carolina, United States



  • Bit-Rate Evaluation of Compressed HDR Using SL-HDR1 - $15

    Date: April 26, 2020

    From a signal standpoint, what differentiates High Dynamic Range (HDR) content from Standard Dynamic Range (SDR) content is the mapping of the pixel samples to actual colors and light intensity. Video compression encoders and decoders (of any type) are agnostic to that – the encoder will take a signal, compress it, and at the other side, the decoder will re-create something that is “about the same” as the signal fed to the encoder. Existing encoders may have been optimized to provide good results with SDR, but HDR will still flow through, albeit possibly requiring higher bit rates for good reproduction.

    Some of the HDR standards include dynamic metadata to help a display device render the content based on its capabilities. Some standards transmit a native HDR signal with metadata that allows the creation of anything from the original HDR all the way down to SDR. SL-HDR1, on the other hand, does the opposite: An SDR signal is transmitted, with metadata to inverse tone map to HDR. This metadata adds overhead to the video elementary stream.

    This paper focuses on the required bit rates to produce a final HDR signal over a compressed link. We compare encoding SMPTE-2084 PQ HDR signals directly versus using SL-HDR1 to generate an SDR signal plus dynamic metadata. The comparison is done objectively by comparing the PSNR of the decoded signal. The SMPTE-2084 HDR signal is used as a reference at a fixed bit rate, and the bit rate of the SL-HDR1 encoded signal is varied until it matches the PSNR, over a range of source material. The evaluation is done for both AVC (H.264) and HEVC (H.265). This is similar to the work by Touze and Kerkhof published in 2017, but using commercial equipment.
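    The matching procedure above amounts to a simple search: compute PSNR against the reference and adjust the SL-HDR1 bit rate until the two PSNRs agree. The sketch below is a hedged illustration, not the paper's tooling: `encode_decode` is a hypothetical stand-in for the commercial encode/decode chain, and the bit-rate bounds and the assumption that PSNR rises monotonically with rate are simplifications.

```python
import math

# Sketch of the evaluation loop: psnr() is the objective metric, and
# match_bitrate() bisects the SL-HDR1 bit rate until its decoded PSNR
# reaches the reference PSNR. encode_decode is a hypothetical stand-in
# for the real encode/decode chain.

def psnr(reference, decoded, max_val=1023.0):
    """Peak signal-to-noise ratio in dB (max_val = 1023 for 10-bit)."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, decoded)) / len(reference)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

def match_bitrate(encode_decode, reference, target_psnr,
                  lo=1.0, hi=50.0, tol=0.05):
    """Bisect the bit rate (Mb/s) until the decoded signal's PSNR
    reaches target_psnr, assuming PSNR rises monotonically with rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if psnr(reference, encode_decode(mid)) < target_psnr:
            lo = mid  # quality below target: raise the rate
        else:
            hi = mid  # at or above target: try a lower rate
    return (lo + hi) / 2.0
```

    Running this per source clip and per codec (AVC, HEVC) yields the rate pairs the comparison is built on.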

    Ciro Noronha | Cobalt Digital Inc. | Champaign, Illinois, USA
    Kyle Wilken | Cobalt Digital Inc. | Champaign, Illinois, USA
    Ryan Wallenberg | Cobalt Digital Inc. | Champaign, Illinois, USA



  • Broadcast Industry Guide Specs for SFN Design and Implementation - $15

    Date: April 26, 2020

    Designing a Single Frequency Network (SFN) for ATSC-3.0 deployment is complicated and time consuming. While the software tools are available to do the required studies, there are dozens of parameters that create thousands of possible outcomes. Many engineers will be designing systems, and those reports will be shared with many customers.

    In order for decision makers in our industry to make appropriate choices, a guide spec document has been created to standardize parameters and allow for standard reports. For example, initial FFT and Mod-Cod combinations will be identified, from highly robust low-data-rate uses that work at low C/N ratios to less robust high-data-rate uses that require higher C/N ratios. Standard FCC 10 m receive antenna heights are useful, but portable and typical indoor antenna heights are much lower and yield more useful information.

    It is also essential for reports to be in common formats. For example, the colors that indicate signal strength ranges need to be common to all maps, so that the end user of one design's maps is not confused by the colors used in another design's. Guide specifications for such seemingly trivial parameters will make reports easier to use. However, no guide specification should put limits on the input parameters or reporting format. It's just as important to have standardized mapping as it is to allow for variances from the guide spec where necessary.

    Dennis Wallace | Meintel, Sgrignoli, & Wallace, LLC |
    Eric Dausman | Public Media Group