Application of Large Language Model (LLM) in Media
Creating and Implementing AI-Powered Conversational Search to Drive Viewer Engagement
Date: April 3, 2024
Topics: 2024 BEITC Proceedings, Application of Large Language Model (LLM) in Media

In the past year, Generative AI has transitioned from a novel technology to a pivotal tool in media and entertainment, shifting the focus from its initial promise to its practical application in solving industry-specific challenges cost-effectively. Amidst expectations for the global artificial intelligence market to surpass $2.5 trillion by 2032 [1], early adoption and strategic technology planning are essential for media and entertainment (M&E) providers aiming to capitalize on AI marketplaces and applications that attract and retain audience engagement. This paper presents a technical demonstration of how broadcast and streaming services can swiftly integrate Generative AI to enhance consumer search and discovery, addressing the inefficiencies and frustrations with current systems. By utilizing Large Language Models (LLMs) for conversational search, the proposed solution speeds up content discovery and becomes more accurate over time by learning from interactions. Unlike traditional AI approaches that rely on limited metadata, our method leverages the vast information available on the web, allowing for richer, voice-command-driven search experiences. The demonstration will cover the technical integration within existing streaming infrastructures, highlighting the role of content management systems (CMS) in processing voice commands and facilitating web-based searches, thus offering a glimpse into potential future applications such as personalized highlight reels and AI-generated content.
Naveen Narayanan | Quickplay | Toronto, Ontario, Canada
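As a rough illustration of the flow this abstract describes (voice command in, LLM-derived intent out, ranked results back), here is a minimal Python sketch. Everything in it is a hypothetical stand-in, not Quickplay's system: `llm_complete` substitutes a canned response for a real chat-completion client, and the prompt format and toy catalog are invented for the example.

```python
# Minimal sketch of an LLM-backed conversational search flow, under assumptions
# of our own: `llm_complete`, the prompt, and the catalog are illustrative only.
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real chat-completion client here."""
    # Canned response so the sketch runs end to end.
    return json.dumps({"keywords": ["heist", "thriller"], "genre": "crime"})

def conversational_search(history: list[str], utterance: str, catalog: list[dict]) -> list[dict]:
    # 1. Ask the model to turn the spoken request (plus prior turns) into
    #    structured search intent rather than matching raw metadata fields.
    prompt = (
        "Return JSON with keys 'keywords' (list of strings) and 'genre'.\n"
        f"Conversation so far: {history}\nNew request: {utterance}"
    )
    intent = json.loads(llm_complete(prompt))
    history.append(utterance)  # retained context lets accuracy improve over turns
    # 2. Rank titles by keyword overlap; a production CMS would also merge
    #    web-sourced knowledge here, as the paper proposes.
    def score(item: dict) -> int:
        text = (item["title"] + " " + item.get("synopsis", "")).lower()
        return sum(kw.lower() in text for kw in intent["keywords"])
    return sorted(catalog, key=score, reverse=True)[:5]

catalog = [{"title": "The Vault Job", "synopsis": "A slick heist thriller."},
           {"title": "Garden Diaries", "synopsis": "Gentle gardening series."}]
print(conversational_search([], "find me a tense heist movie", catalog))
```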
Making Your Assets Mean Something: Evolving Asset Management Systems Using Semantic Technologies
Date: April 3, 2024
Topics: 2024 BEITC Proceedings, Application of Large Language Model (LLM) in Media

Produce more content! Faster! Cheaper! Make it available in different places! And never throw anything away! It is seemingly impossible for media enterprises to keep up with these escalating and often conflicting business pressures. How can media asset management systems evolve to provide the right content based on the intent of the user? Can a creator produce content faster using knowledge hidden within the content? How can assets be easily accessed from distributed locations to enrich the experience? And can content keep growing without exploding costs? The answer to all these questions is yes, and it comes from leveraging tools that provide contextual meaning to the content so that it can be used more effectively. It goes beyond providing rich search mechanisms to now providing the ability to get insights into the content. It relies on tapping into richer information sources that go beyond logged metadata for a piece of content, to harvest knowledge from scripts, stories, and dialogue. The answer lies in creating a connected environment between data silos where a central brain has knowledge not just of where content is located, but of what is in the content. The answer lies in the ability for a creator to use their native language and express in natural conversational form what they want to achieve when searching for, analyzing, and manipulating content. The answer lies in tools understanding a user’s creative intent to perform appropriate actions, versus the creative having to learn and manipulate complex user interfaces to achieve the same intent. This paper discusses how certain artificial intelligence (AI) technologies such as Semantic Embeddings, Knowledge Graphs, and Multimodal Large Language Models are coming together under the umbrella of Knowledge Management to solve the problem of meaningfully extracting and using information from content, taking us beyond what we can do today with Asset Management systems.
Shailendra Mathur | Avid Technology | Montreal, Quebec, Canada
Rob Gonsalves | Avid Technology | Burlington, Mass., United States
Roger Sacilotto | Avid Technology | Burlington, Mass., United States
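A toy sketch of the semantic-search-over-a-knowledge-graph idea this abstract raises: a bag-of-words vector stands in for a learned embedding model, and three invented subject-predicate-object triples stand in for a real asset graph. Nothing here reflects Avid's implementation; it only shows how "what is in the content" can be scored against a natural-language query.

```python
# Illustrative sketch: semantic similarity over knowledge-graph facts.
# The bag-of-words "embedding" and the triples are assumptions for the example.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model (e.g. a sentence encoder).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Knowledge graph as triples: the "central brain" knows not just where an
# asset lives but what is in it.
triples = [
    ("clip_017", "depicts", "rocket launch"),
    ("clip_017", "stored_at", "site-b/archive"),
    ("clip_042", "depicts", "press conference"),
]

def search(query: str, top_k: int = 3) -> list[tuple[str, float]]:
    q = embed(query)
    # Pool each asset's graph facts, then rank assets by similarity to the query.
    docs: dict[str, list[str]] = {}
    for subj, pred, obj in triples:
        docs.setdefault(subj, []).append(f"{pred} {obj}")
    scored = [(s, cosine(q, embed(" ".join(facts)))) for s, facts in docs.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

print(search("footage of a rocket launch"))
```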
Television Viewership Reimagined Through Generative AI
Date: April 3, 2024
Topics: 2024 BEITC Proceedings, Application of Large Language Model (LLM) in Media

Television viewership measurement, employing a variety of media metrics, is a long-established practice. It has evolved from methods like people meters to sweep surveys, marking substantial progress in media measurement technology. Nevertheless, the landscape of television and OTT (Over-the-Top) platforms has grown increasingly complex over the last decade. While traditional media measurement can address ‘what’ and ‘when’ questions, it falls short in answering ‘why’ inquiries. For example, it cannot explain why a show like ‘XYZ’ has experienced a consistent decline in TRP ratings in all target metropolitan markets over the past week.
Several intrinsic and extrinsic factors influence the success of a television show or consumer behavior. These factors may include, but are not limited to, the launch of a similar show on a competing network with a nearly identical storyline, the introduction of a popular reality show, live events, socio-economic conditions, sudden plot twists, social media sentiment, cyclical events like summer holidays or parliamentary elections, and more.
In this paper, we introduce a multi-dimensional Question-Answer (QnA) interface employing Retrieval Augmented Generation (RAG) and large language models (LLMs), such as Anthropic Claude v2. The use of RAG in LLM-powered QnA bots is common practice to provide additional context and reduce hallucinations. We begin by defining a graph to query the show’s dependence on various factors and their relative significance. Each node within the graph represents a RAG source, providing contextual information about a specific show. When inquiring about the reasons behind a show’s poor TRP ratings based on viewership data, we gather contextual information from multiple sources, including social media, competitor data, machine learning-based content analysis, and socio-economic conditions. All this information is provided to the LLM as context, and it is tasked with reasoning. The LLM can then provide the most plausible reason or causality for the underperformance. Furthermore, we can engage in chain-of-thought questioning to delve deeper into follow-up inquiries.
Punyabrota D | AWS India | Mumbai, Maharashtra, India
Maheshwaran G | AWS India | Mumbai, Maharashtra, India
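A minimal sketch of the graph-of-RAG-sources pattern this abstract describes: each node supplies context for a show, the assembled context goes to an LLM, and the model is asked to reason to the most plausible cause. The node names, weights, canned retrievers, and the echo `llm` stub are illustrative assumptions, not the authors' implementation (which uses Anthropic Claude v2 behind a retrieval layer).

```python
# Sketch of a graph of RAG sources feeding one reasoning prompt.
# All names, weights, and retrievers here are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SourceNode:
    name: str
    weight: float                    # relative significance of this factor
    retrieve: Callable[[str], str]   # fetches context for a given show

def answer_why(show: str, question: str, graph: list[SourceNode],
               llm: Callable[[str], str]) -> str:
    # Gather context from every node, highest-weight factors first.
    context = "\n".join(
        f"[{n.name}, weight={n.weight}] {n.retrieve(show)}"
        for n in sorted(graph, key=lambda n: n.weight, reverse=True)
    )
    prompt = (
        f"Context about '{show}':\n{context}\n\n"
        f"Question: {question}\n"
        "Reason step by step over the context, then state the most plausible cause."
    )
    return llm(prompt)  # e.g. a call to Claude via its messages API

# Toy wiring: canned retrievers and an echo "LLM" keep the sketch self-contained.
graph = [
    SourceNode("social_media", 0.40, lambda s: f"Sentiment on {s} dropped sharply this week."),
    SourceNode("competitors", 0.35, lambda s: "A rival network launched a similar show."),
    SourceNode("socio_economic", 0.25, lambda s: "Summer holidays began in key markets."),
]
print(answer_why("XYZ", "Why did TRP ratings decline?", graph, llm=lambda p: p[:200]))
```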