New Technologies for Sports Coverage

  • Artificial Intelligence: Transforming the Live Sports Landscape - $15

    Date: April 26, 2020

    The proposed paper focuses on a media recognition AI engine trained to catalog an exhaustive range of elements in live sports in near real-time. It reduces the time and effort of highlight creation by 60% and enables OTT platforms to deliver high levels of interactivity and personalization through a create-your-own-highlights experience and a powerful live search option. The engine identifies storytelling graphics and events in the footage along with game content to stitch together an end-to-end story in highlights, and adds finesse with smooth visual transitions and automatic audio leveling, just like a professional editor. The engine helps sports content creators enhance viewer engagement, increase monetization, and achieve greater scale and speed in live sports.
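    The abstract does not publish the engine's internals. As a minimal sketch of the automatic audio leveling it mentions, one common approach is to normalize each highlight clip to a shared loudness target before concatenation; the function names and the RMS target below are illustrative assumptions, not the paper's actual method.

    ```python
    import numpy as np

    TARGET_RMS = 0.1  # assumed loudness target, illustrative only

    def level_audio(clip: np.ndarray, target_rms: float = TARGET_RMS) -> np.ndarray:
        """Scale a mono audio clip so its RMS loudness matches the target."""
        rms = np.sqrt(np.mean(clip ** 2))
        if rms == 0:
            return clip  # silent clip: nothing to scale
        return clip * (target_rms / rms)

    def stitch_highlights(clips: list) -> np.ndarray:
        """Concatenate highlight clips after leveling each one's audio."""
        return np.concatenate([level_audio(c) for c in clips])
    ```

    A production system would work on perceptual loudness (e.g. LUFS) rather than raw RMS, but the structure, per-clip normalization before stitching, is the same.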

    Adrish Bera | Prime Focus Technologies | Burbank, CA, United States
    Amer Saleem



  • Automated Brand Color Accuracy for Real-Time Video - $15

    Date: April 26, 2020

    Sports fans know their team's colors and notice inconsistencies when their beloved team's colors display incorrectly. Repetition and consistency build brand recognition, and that equity is valuable and worth protecting. Accurate on-screen display of specified colors is affected by many factors, including the capture source, the display screen technology, and the ambient light, which can vary broadly throughout an event due to changes in time and weather (for outdoor fields) or mixed artificial lighting (for indoor facilities). Changes to any of these factors demand adjustments of the output to maintain visual consistency. Per current industry practice, color management is handled by a technician who manually corrects footage from up to two dozen camera feeds in real-time to keep them consistent. In contrast, the AI-powered ColorNet system ingests live video, adjusts each video frame with a trained machine learning model, and outputs a color-corrected feed in real-time. The system is demonstrated using Clemson University's orange specification, Pantone 165. The machine learning model was trained on a dataset of raw and color-corrected videos of Clemson football games; the footage was manually color-corrected using Adobe Premiere Pro. This trains the model to target specific color values and adjust only the targeted brand colors pixel-by-pixel, without shifting any other surrounding colors in the frame, generating localized corrections while adapting automatically to changes in lighting and weather. The ColorNet model successfully reproduces the manually created correction masks while being highly computationally efficient in both prediction fidelity and speed. This approach can eliminate human error in color correction while continuously compensating for the negative effects of lighting and weather changes on the display of brand colors.

    Work is underway to beta test this model in real-time broadcast streaming during a live sporting event with large-format screens, in partnership with Clemson University Athletics.
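    ColorNet itself is a learned model, but the localized, pixel-by-pixel behavior the abstract describes can be sketched with a simple distance-weighted shift: pixels near an observed (drifted) brand color are pulled toward the target color, while distant colors are left untouched. The RGB value for Pantone 165 and the `radius` falloff below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Illustrative only: Pantone 165 C is roughly RGB (255, 103, 31)
    TARGET = np.array([255.0, 103.0, 31.0])

    def correct_brand_color(frame: np.ndarray, observed: np.ndarray,
                            target: np.ndarray = TARGET,
                            radius: float = 60.0) -> np.ndarray:
        """Shift pixels near `observed` toward `target`, leaving other colors alone."""
        frame = frame.astype(np.float64)
        # Per-pixel distance to the drifted brand color we want to fix
        dist = np.linalg.norm(frame - observed, axis=-1)
        # Localized correction mask: 1 at the observed color, 0 beyond `radius`
        weight = np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]
        corrected = frame + weight * (target - frame)
        return np.clip(corrected, 0, 255).astype(np.uint8)
    ```

    A trained network replaces this hand-built mask with one learned from the raw/corrected video pairs, which is what lets it adapt to lighting and weather changes automatically.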

    Emma Mayes | Clemson University | Clemson, South Carolina, United States
    John Paul Lineberger | Clemson University | Clemson, South Carolina, United States
    Michelle Mayer | Clemson University | Clemson, South Carolina, United States
    Andrew Sanborn | Clemson University | Clemson, South Carolina, United States
    Hudson Smith | Clemson University | Clemson, South Carolina, United States
    Erica Walker | Clemson University | Clemson, South Carolina, United States



  • Implementing AI-powered Semantic Character Recognition in Motor Racing Sports - $15

    Date: April 26, 2020

    Oftentimes, TV producers overlay visual and textual media to provide context about racers appearing on screen, such as name, position, and a face shot. Typically, this is accomplished by a human producer visually identifying the racers on screen, manually toggling the contextual media associated with each one, and coordinating with camera operators and other TV producers to keep the racer in shot while the contextual media is on screen. This labor-intensive process is mostly suited to static overlays and makes it difficult to overlay contextual information about many racers at the same time.

    This paper presents a system that largely automates these tasks and enables dynamic overlays, using deep learning to automatically track the racers as they move on screen. The system is not merely theoretical: an implementation has already been deployed in live TV production for Formula E broadcasts. We will present the challenges encountered and solved in implementing this system, and we will discuss the implications and planned future applications of this new technological development.
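    The paper's tracker is a deep learning model, but the overlay side of a dynamic system like this reduces to a simple geometric step: given a tracked bounding box per racer per frame, anchor the label relative to the box and smooth the anchor over time so the overlay does not jitter with detection noise. The `Box` type, margin, and smoothing factor below are assumptions for illustration, not the authors' implementation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Box:
        """Tracked bounding box for one racer (top-left corner plus size)."""
        x: float
        y: float
        w: float
        h: float

    def overlay_anchor(box: Box, margin: float = 10.0) -> tuple:
        """Place a name/position label centered horizontally above the box."""
        return (box.x + box.w / 2.0, box.y - margin)

    def smooth(prev: tuple, cur: tuple, alpha: float = 0.3) -> tuple:
        """Exponentially smooth anchor positions to reduce overlay jitter."""
        return tuple(alpha * c + (1 - alpha) * p for p, c in zip(prev, cur))
    ```

    In a live pipeline, the detector emits a `Box` per racer per frame and the renderer draws each label at the smoothed anchor, which is what allows many racers to carry overlays simultaneously without manual toggling.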

    Jose David Fernández Rodríguez | Virtually Live | Málaga, Spain
    David Daniel Albarracín Molina | Virtually Live | Málaga, Spain
    Jesús Hormigo Cebolla | Virtually Live | Málaga, Spain