2025 BEITC Proceedings

  • ATSC: Beyond Standards and a Look at the Future

    Date: April 26, 2020

    As a standards development organization, the Advanced Television Systems Committee (ATSC) has been focused on completing the ATSC 3.0 suite of standards for the past several years. While amendments, corrigenda and other document extensions and maintenance are evergreen, ongoing activities in the standards realm, the ATSC mission also encompasses “looking around the corner” for the Next Big Thing in the evolution of broadcast media. To this end, ATSC has formed Planning Teams and other groups that are examining advanced broadcast technologies, automotive applications, global recognition of ATSC 3.0, development of a near-term service roadmap for ATSC 3.0 services and features, convergence with international data transmission systems such as the Internet and 4G/5G, and other topics. This presentation will feature members of the ATSC leadership team laying out the details of this challenging workplan and current progress, how industry representatives can participate, and the steps being taken toward developing a robust vision for the future of broadcasting.

    Madeleine Noland | ATSC | Washington, DC, USA
    Jerry Whitaker | ATSC | Washington, DC, USA
    Lynn Claudy | NAB | Washington, DC, USA



  • Audience Aware Streaming

    Date: April 3, 2024

    Live linear streams are typically produced using a fixed bitrate ladder and codec mix. This approach locks in storage, compute, delivery costs and power consumption: unpopular channels have the same operating costs as popular ones. Content distributors would much rather focus their resources on their popular content and maximize aggregate quality of experience (QoE) while minimizing storage, compute, delivery and power. This paper describes a collaboration between Ateme (a provider of video compression and delivery solutions) and Akamai (a global compute and delivery provider) in which such an optimization is achieved. Player viewing sessions are collected by Akamai using the common media client data (CMCD) standard and fed via a real-time data feed to an Ateme Stream Optimizer. The Optimizer gathers data on viewership count, geographic diversity, device diversity and device capability, and then dynamically adjusts the cloud compute resources allocated to the stream, varying the bitrate ladder and codec mix offered so as to provide the best quality to the popular streams while minimizing delivery costs and power consumption. The paper describes the proof-of-concept system that was built and presents the real-world results that were obtained to validate the hypothesis that live linear OTT operations can be optimized with the addition of real-time audience data.
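    The control loop the abstract describes can be sketched in a few lines. This is an illustrative outline, not Ateme's or Akamai's implementation: the CMCD key names (sid, cid, mtp) come from the CTA-5004 specification, while the viewer-count thresholds and ladder choices below are assumptions for demonstration only.

```python
from urllib.parse import parse_qs

# Illustrative sketch (not Ateme's or Akamai's implementation) of the two
# halves of the loop: parsing CMCD fields reported by players, and sizing
# the bitrate ladder to the measured audience. CMCD key names (sid, cid,
# mtp) are from CTA-5004; thresholds and ladders below are assumptions.

def parse_cmcd(query: str) -> dict:
    """Extract CMCD key=value pairs from a 'CMCD=' query parameter."""
    raw = parse_qs(query).get("CMCD", [""])[0]
    fields = {}
    for item in raw.split(","):
        if "=" in item:
            key, value = item.split("=", 1)
            fields[key] = value.strip('"')
        elif item:
            fields[item] = True  # valueless boolean keys such as 'bs'
    return fields

def choose_ladder(viewer_count: int) -> list:
    """Pick a bitrate ladder (kbps) sized to the audience (illustrative)."""
    if viewer_count < 10:      # unpopular channel: one low-cost rendition
        return [800]
    if viewer_count < 1000:    # moderate audience: small ladder
        return [800, 2400]
    return [800, 2400, 4800, 8000]  # popular stream: full ladder

fields = parse_cmcd("CMCD=sid%3D%22abc%22%2Ccid%3D%22ch7%22%2Cmtp%3D25400")
ladder = choose_ladder(viewer_count=5)  # unpopular: [800]
```

    In a real deployment the session tally would be aggregated per content ID across edge servers before the ladder decision is made; the sketch collapses that aggregation into a single count.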

    Mickaël Raulet | ATEME | France
    Josselin Cozanet | ATEME | United States
    Khaled Jerbi | ATEME | Canada
    Will Law | Akamai | Switzerland



  • Audio Services Over ATSC 3.0: A Proof of Concept

    Date: April 14, 2023

    (Winner, 2023 BEIT Conference Proceedings Best Paper Award)

    This paper provides a proof of concept for delivering audio services using ATSC 3.0. First, it outlines the initial requirements for the development of the system. It then covers the determinations made at each step of the process to fulfill these goals: encoding, delivery layer, physical layer, and receiver. Real-world results of tests conducted in the Baltimore market are then discussed. Finally, next steps in development are suggested, along with potential gaps in the standard that remain to be filled.

    Liam Power | ONEMEDIA | Hunt Valley, Maryland, United States



  • Audio Streaming: A New Approach

    Date: April 26, 2020

    Most of your streaming audience wants something that just works reliably; they don’t care about the technology behind it. Yet many audio streams currently in service fall short, and it is up to content providers to make them work. If you stream audio, you need to know about this; the information is targeted at streaming professionals.

    This paper/presentation describes a standards-based method for high-quality live audio streaming with dramatically decreased operating costs, increased reliability, professional features, and a better user experience. Streaming audio has now evolved past legacy protocols, allowing reliable content delivery to mobile devices and connected-car dashboards, where all of the audience growth continues.

    Greg Ogonowski | StreamS HiFi Radio/Modulation Index, LLC | Diamond Bar, CA USA
    Nathan Niyomtham | StreamS HiFi Radio/Modulation Index, LLC | Diamond Bar, CA USA



  • Audio-Specific Metadata To Enhance the Quality of Audio Streams and Podcasts

    Date: April 26, 2020

    Audio-specific metadata was envisioned several years ago in the MPEG-D standard for Dynamic Range Control. The application of this metadata to online content awaited a newer audio codec and the current generation of mobile operating systems. Now that both are becoming widely available, this paper explains how audio content providers can offer new consumer benefits as well as a more compelling listening experience.

    This paper and presentation will explain the types of audio-specific metadata that capture key characteristics of audio content, such as loudness, dynamic range and signal peaks. For producers of any scale, the metadata sets are added to the content during encoding, whether for real-time distribution (as in streams) or for file storage (as with podcasts).

    In the playback device or system, this metadata is decoded along with the audio data frames. The decoder operations are described through diagrams, showing benefits such as loudness matching across different audio content, which ends the annoying blast of loud material and the reach for the volume control. It will be shown that audio dynamic range can be controlled according to the noise environment around the listener: quiet parts of a performance can be raised to audibility, but only for those who need it.

    An audio demonstration is planned to allow the audience to hear the same encoded program on the same device over a range of playout conditions, from a noisy public-transit commute to full dynamic range for listeners who want the highest fidelity. The workflows for adding audio-specific metadata in production and distribution are explained, showing how content producers need to produce only one target level for all listeners, rather than one for smart speakers, another for fidelity-conscious listeners, etc.
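    The loudness-matching behavior described above can be conveyed with a minimal sketch. This is not the normative MPEG-D DRC algorithm; it only shows the core idea that the decoder derives a playback gain from the program loudness carried in the metadata. The -16 LUFS target and the function names are assumptions for illustration.

```python
# Minimal sketch of decoder-side loudness matching (not the normative
# MPEG-D DRC algorithm): the stream's metadata carries a measured program
# loudness, and the player applies a gain so every program reaches the
# same playback target. The -16 LUFS target is an assumed default.

def loudness_match_gain_db(content_loudness_lufs: float,
                           target_lufs: float = -16.0) -> float:
    """Gain (dB) that brings a program's measured loudness to the target."""
    return target_lufs - content_loudness_lufs

def apply_gain(samples, gain_db: float):
    """Scale PCM samples by a gain expressed in dB."""
    scale = 10 ** (gain_db / 20.0)
    return [s * scale for s in samples]

# A loud program (-10 LUFS) is attenuated by 6 dB; a quiet one (-23 LUFS)
# is boosted by 7 dB, so neither forces the listener to adjust the volume.
loud_gain = loudness_match_gain_db(-10.0)   # -6.0 dB
quiet_gain = loudness_match_gain_db(-23.0)  #  7.0 dB
```

    Dynamic range control in the full standard goes further, applying time-varying gain curves selected by the listening environment; the static gain here covers only the loudness-matching benefit.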

    John Kean | Cavell Mertz & Associates Inc. | Manassas, Virginia, USA
    Alex Kosiorek | Central Sound at Arizona PBS | Phoenix, Arizona, USA



  • Automated Brand Color Accuracy for Real-Time Video

    Date: April 26, 2020

    Sports fans know their team’s colors and recognize inconsistencies when their beloved team’s colors show up incorrectly on screen. Repetition and consistency build brand recognition, and that equity is valuable and worth protecting. Accurate display of specified colors on screen is affected by many factors, including the capture source, the display technology, and the ambient light, which can vary broadly throughout an event due to changes in time and weather (for outdoor fields) or mixed artificial lighting (for indoor facilities). Changes to any of these factors demand adjustments of the output in order to maintain visual consistency. Under standard industry practice, color management is handled by a technician who manually corrects footage from up to two dozen camera feeds in real time to maintain consistency across feeds.

    In contrast, the AI-powered ColorNet system ingests live video, adjusts each video frame with a trained machine learning model, and outputs a color-corrected feed in real time. The system is demonstrated using Clemson University’s orange specification, Pantone 165. The machine learning model was trained on a dataset of raw and color-corrected videos of Clemson football games; the footage was manually color corrected using Adobe Premiere Pro. This trains the model to target specific color values and adjust only the targeted brand colors pixel by pixel, without shifting surrounding colors in the frame, generating localized corrections while adapting automatically to changes in lighting and weather. The ColorNet model reproduces the manually created correction masks with high fidelity while remaining computationally efficient enough for real-time prediction. This approach avoids human error in color correction while continuously compensating for the negative effects of lighting and weather changes on the display of brand colors.

    Work is underway to beta-test the model on live broadcast streams during a sporting event with large-format screens, in partnership with Clemson University Athletics.
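    The targeted, localized correction the abstract describes can be illustrated with a toy sketch. This is not the trained ColorNet model; it only shows the idea of shifting pixels near the brand color toward a reference value while passing all other colors through unchanged. The RGB approximation of Pantone 165, the distance threshold, and the blend strength are all assumptions for illustration.

```python
# Toy stand-in for the idea behind ColorNet (not the trained model): shift
# only pixels close to the brand color toward its reference value, leaving
# other colors untouched. Pantone 165 is approximated here as RGB
# (255, 103, 31) -- an assumption for illustration.

BRAND_RGB = (255, 103, 31)
THRESHOLD = 60.0   # max Euclidean RGB distance to count as a "brand" pixel

def distance(a, b):
    """Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def correct_pixel(pixel, target=BRAND_RGB, threshold=THRESHOLD, strength=0.5):
    """Blend near-brand pixels toward the target; pass others through."""
    if distance(pixel, target) > threshold:
        return pixel
    return tuple(round(p + strength * (t - p)) for p, t in zip(pixel, target))

def correct_frame(frame):
    """Apply the per-pixel correction to a frame (rows of RGB tuples)."""
    return [[correct_pixel(px) for px in row] for row in frame]
```

    The trained model replaces the fixed threshold and blend with learned, context-dependent corrections, which is what lets it adapt to lighting and weather rather than applying one static rule.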

    Emma Mayes | Clemson University | Clemson, South Carolina, United States
    John Paul Lineberger | Clemson University | Clemson, South Carolina, United States
    Michelle Mayer | Clemson University | Clemson, South Carolina, United States
    Andrew Sanborn | Clemson University | Clemson, South Carolina, United States
    Hudson Smith | Clemson University | Clemson, South Carolina, United States
    Erica Walker | Clemson University | Clemson, South Carolina, United States