February 22, 2022

How media owners can transition from being “data-poor” to “data-rich” using video, text, and image content classification

A common contextual classification taxonomy across publishers’ inventory will help media owners control pricing and become more strategic partners to buyers in the cookie-less world

Discussion of the ongoing evolution from cookie-centric marketing to contextual targeting has focused heavily on the benefits for individuals’ privacy. But the big industry question is how media content owners, particularly content publishers, will fare in this cookie-less environment. We believe the impact on publishers, and the solutions available to them, warrant deeper consideration than they have received to date.

As we move cookie-less, publishers will likely monetize their advertising along one (or a combination) of the following three paths:

1)      Remain reliant on existing contextual data: use existing tools that are limited to scanning text (not video or still imagery), with mappings to existing IAB standards. This option reflects a privacy-compliant solution that has previously been available in the market, yet it ignores a critical future tentpole: the classification of images and video. (A minimal sketch of this text-only approach follows the challenges below.)

o   Benefits:

  • Generally fits within the existing ecosystem standard and is currently accepted as a reasonable workaround in the privacy-first, cookie-less environment

o   Challenges:

  • Classic text-only contextual classification fails to capture much of the value of the advertising environment, including full brand safety, and accordingly results in lower CPMs
  • For video content, insights into the full context of the ad’s environment are severely curtailed when the content is analyzed with text-only or static image-based contextual classification solutions
  • Reliance on text-only, content-based solutions provides limited depth of insight and presents a risk, since text alone does not capture the sentiment, context, complete brand safety, and object identification available in video
  • The value of video content is overlooked, and its monetization does not reflect the true value of publishers’ assets
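
To make that limitation concrete, here is a minimal, hypothetical sketch of text-only contextual classification in Python. The toy category keywords are invented for illustration (a production system would map to the full IAB Content Taxonomy rather than this list), and the key observation is what the code never sees: the page’s video player and imagery contribute nothing to the score.

```python
# Illustrative sketch of text-only contextual classification.
# The categories and keywords below are toy examples, not the IAB Content Taxonomy.
from collections import Counter

TOY_CATEGORIES = {
    "Automotive": {"car", "engine", "sedan", "dealership"},
    "Sports": {"match", "league", "playoff", "goal"},
    "Food & Drink": {"recipe", "restaurant", "ingredient", "wine"},
}

def classify_text(article_text: str) -> list[tuple[str, int]]:
    """Rank candidate categories by keyword hits in the article body."""
    tokens = Counter(article_text.lower().split())
    scores = {
        category: sum(tokens[word] for word in keywords)
        for category, keywords in TOY_CATEGORIES.items()
    }
    return sorted(((c, s) for c, s in scores.items() if s > 0),
                  key=lambda pair: pair[1], reverse=True)

# The embedded video and hero image on the same page contribute nothing here,
# which is exactly the blind spot described above.
print(classify_text("A recipe for braised short ribs paired with a bold wine"))
```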

2)      Invest in CRM-based first-party data and private-party deals. Publishers create a stand-alone walled garden of reader information, built on publisher-level data that cannot be shared out of the platform, to enable direct matching with brands for deeper insights and the additional price premiums of deterministic matching.

o   Benefits:

  • Implementation of frequency caps (currently a major concern) for “logged-in” subscribers, which enables publishers to charge a premium (see the sketch after this option’s challenges)
  • Deterministic: advertisers know who they are reaching

o   Challenges:

  • Costly to implement and prove conversion within a walled garden
  • Sharing first-party data into walled gardens presents brand and publisher risk
  • Most independent publishers have authenticated less than 3% of their audiences and hence provide little surface area to match against brands
  • Brands are conflicted about the use of CRM, and many industries, such as CPG, have no first-party data
  • Video content does not achieve the higher monetization it deserves
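
For illustration, below is a minimal sketch of how per-subscriber frequency capping could work once readers are logged in. The class name, in-memory store, and cap values are assumptions made for the example; a production setup would use a shared datastore and campaign-specific rules.

```python
# Hypothetical per-subscriber frequency capper for logged-in readers.
from collections import defaultdict

class FrequencyCapper:
    def __init__(self, max_impressions_per_campaign: int = 3):
        self.cap = max_impressions_per_campaign
        # (subscriber_id, campaign_id) -> impressions served so far
        self.counts: dict[tuple[str, str], int] = defaultdict(int)

    def should_serve(self, subscriber_id: str, campaign_id: str) -> bool:
        """True if this subscriber is still under the campaign's cap."""
        return self.counts[(subscriber_id, campaign_id)] < self.cap

    def record_impression(self, subscriber_id: str, campaign_id: str) -> None:
        self.counts[(subscriber_id, campaign_id)] += 1

capper = FrequencyCapper(max_impressions_per_campaign=2)
for _ in range(3):
    if capper.should_serve("subscriber-42", "campaign-a"):
        capper.record_impression("subscriber-42", "campaign-a")
print(capper.should_serve("subscriber-42", "campaign-a"))  # False: cap reached
```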

3)      Precisely classify all inventory (text, image, video) into a common global taxonomy that partners can use strategically to increase the fidelity of activation and produce detailed results analysis. In this scenario, the publisher creates a standard taxonomy at scale for their entire content portfolio (versus just text today) that includes the ability to derive contextual information, object identification, emotion detection, brand safety (yes, for video!), and scene classification for monetization. (A sketch of what such a unified classification record might look like follows the challenges below.)

o   Benefits:

  • Highest scaled monetization opportunity of the three paths
  • Efficient inventory classification and mobilization
  • A consistent framework that buying partners can work with predictably
  • Additional and scalable analytical capabilities unlocked

o   Challenges:

  • Until recently, full comprehension of a page was not possible due to cost: only the rich media owners could afford video classification, and even then it was incomplete and provided low-fidelity insights
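
As referenced above, here is a minimal sketch of what a single, modality-agnostic classification record might look like. The field names and example values are assumptions for illustration, not Netra’s actual schema; the point is that an article, an image, and a video clip all resolve to the same record shape, so buyers can target and measure against one consistent framework.

```python
# Hypothetical unified classification record spanning text, image, and video.
from dataclasses import dataclass, field
from enum import Enum

class Modality(Enum):
    TEXT = "text"
    IMAGE = "image"
    VIDEO = "video"

@dataclass
class Classification:
    asset_id: str
    modality: Modality
    iab_categories: list[str]  # e.g. IAB Content Taxonomy nodes
    detected_objects: list[str] = field(default_factory=list)
    emotions: list[str] = field(default_factory=list)
    scenes: list[str] = field(default_factory=list)
    brand_safety_flags: list[str] = field(default_factory=list)

# A video clip, an article, and a photo all yield this same shape.
video_clip = Classification(
    asset_id="clip-001",
    modality=Modality.VIDEO,
    iab_categories=["Food & Drink"],
    detected_objects=["wine glass", "grill"],
    emotions=["joy"],
    scenes=["outdoor dinner party"],
)
print(video_clip)
```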

At Netra, given our success to date in providing media owners and their vendors with computer vision technology, we advocate the third path for publishers. With solutions and comprehension data that empower them to increase the monetization of their assets, publishers are no longer “data-poor”; they can shape their own content monetization and capitalize on the video and contextual opportunity. By delivering a scaled understanding of the entire corpus of a publisher’s content, Netra’s technology uniquely empowers publishers to realize the full value of all of their content for the first time.

By establishing a common taxonomy across their entire inventory (video, text, and image), publishers achieve more consistent results, understand brand safety’s impact on pricing, and gain a better understanding of their content’s impact on audiences and outcomes. Publishers that are first to gravitate toward this “total comprehension” solution will also have the opportunity to shape and drive an industry-wide taxonomy that will become the standard for all publishers, such as the IAB Tech Lab’s Seller Defined Audiences (SDA).
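
To show how such a shared taxonomy could travel with the inventory, below is a rough sketch of a seller-defined content signal in the general shape of the OpenRTB Data/Segment objects that the SDA framework builds on. The segment ID and segtax value are placeholders rather than real taxonomy entries; consult the IAB Tech Lab specifications for the registered identifiers and exact field requirements.

```python
# Rough sketch of a seller-defined content signal attached to a bid request.
# Field values are placeholders; see the IAB Tech Lab SDA and OpenRTB docs.
import json

content_signal = {
    "site": {
        "content": {
            "data": [
                {
                    "name": "publisher.example",                       # who classified the content
                    "segment": [{"id": "PLACEHOLDER_TAXONOMY_NODE"}],  # taxonomy node ID
                    "ext": {"segtax": 0},                              # placeholder taxonomy registry ID
                }
            ]
        }
    }
}

print(json.dumps(content_signal, indent=2))
```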

If you want to upgrade your monetization using total comprehension, or if you are interested in learning more about how Netra can deliver an affordable, scaled content taxonomy for you or your clients, reach out to us here.
