Television Viewership Reimagined Through Generative AI
Television viewership measurement, employing a variety of media metrics, has long been an established practice, evolving from sweep surveys to people meters and marking substantial progress in media measurement technology. Nevertheless, the landscape of television and OTT (Over-the-Top) platforms has grown increasingly complex over the last decade. While traditional media measurement can address ‘what’ and ‘when’ questions, it falls short in answering ‘why’. For example, it cannot explain why a show like ‘XYZ’ has experienced a consistent decline in TRP ratings across all target metropolitan markets over the past week.
Several intrinsic and extrinsic factors influence the success of a television show and shape consumer behavior. These factors include, but are not limited to, the launch of a similar show on a competing network with a nearly identical storyline, the introduction of a popular reality show, live events, socio-economic conditions, sudden plot twists, social media sentiment, and cyclical events like summer holidays or parliamentary elections.
In this paper, we introduce a multi-dimensional Question-Answer (QnA) interface employing Retrieval Augmented Generation (RAG) and large language models (LLMs), such as Anthropic Claude v2. The use of RAG in LLM-powered QnA bots is common practice to provide additional context and reduce hallucinations. We begin by defining a graph that captures a show’s dependence on various factors and their relative significance. Each node within the graph represents a RAG source, providing contextual information about a specific show. When asked about the reasons behind a show’s poor TRP ratings based on viewership data, we gather contextual information from multiple sources, including social media, competitor data, machine learning-based content analysis, and socio-economic conditions. All of this information is provided to the LLM as context, and the LLM is tasked with reasoning over it to identify the most plausible cause of the underperformance. Furthermore, we can engage in chain-of-thought questioning to delve deeper into follow-up inquiries.
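The graph-based context assembly described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper’s actual implementation: the node names, weights, and retrieval functions are placeholder assumptions, and the stubbed `retrieve` callables stand in for real RAG retrievers (vector stores, APIs, etc.) whose output would be passed to an LLM such as Claude.

```python
# Hypothetical sketch of a RAG dependency graph for a show.
# Each node is one contextual source with a relative weight; the
# weights and retrieval stubs below are illustrative placeholders.
RAG_GRAPH = {
    "social_media":   {"weight": 0.30,
                       "retrieve": lambda show: f"Sentiment for {show}: trending negative this week"},
    "competitors":    {"weight": 0.25,
                       "retrieve": lambda show: f"A rival network launched a near-identical show opposite {show}"},
    "content_ml":     {"weight": 0.25,
                       "retrieve": lambda show: f"Content analysis of {show}: abrupt plot twist in the latest arc"},
    "socio_economic": {"weight": 0.20,
                       "retrieve": lambda show: f"Regional context for {show}: summer holidays began"},
}

def build_context(show: str) -> str:
    """Visit every node, retrieve its context, and order nodes by weight."""
    nodes = sorted(RAG_GRAPH.items(), key=lambda kv: kv[1]["weight"], reverse=True)
    return "\n".join(
        f"[{name}, weight={node['weight']:.2f}] {node['retrieve'](show)}"
        for name, node in nodes
    )

def build_prompt(show: str, question: str) -> str:
    """Assemble the context-augmented prompt that would be sent to the LLM."""
    return (
        f"Context:\n{build_context(show)}\n\n"
        f"Question: {question}\n"
        f"Using only the context above, give the most plausible causality."
    )
```

A caller would then send `build_prompt("XYZ", "Why did TRP ratings decline last week?")` to the model and, for chain-of-thought follow-ups, append the model’s answer and the next question to the running conversation.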
Punyabrota D | AWS India | Mumbai, Maharashtra, India
Maheshwaran G | AWS India | Mumbai, Maharashtra, India