Historically, contextual advertising efforts have focused primarily on classifying the content an audience is consuming, overlooking the contextual data inherent in creative assets themselves.
Ignoring the features of creative comes at great cost, especially as advertisers lose audience and other signals with the deprecation of the third-party cookie. When paired with content classification, automated creative classification ensures that brands have consistent, quantified, and highly monetizable libraries of assets with a standard set of contextual attributes. If creative classification is done poorly, brands and partners are left with a library of creatives carrying inconsistent or sloppy attributes, and when a creative's information is not tagged properly and consistently, a brand's most valuable ads risk being lost in the shuffle.
When brands consistently categorize their creatives using the same taxonomy as the publisher's content, they can create a powerful match between creative and content without relying on cookies or PII. With an optimized alignment between contextual creative and contextual content, cookies are no longer needed to build powerful connections to audiences.
As cookies cease to be an option, brands are turning to contextual solutions, and those solutions are incomplete unless creative contextual classification is part of the strategy.
Why our API is game-changing
- A single, AI-based endpoint to classify image and video creative assets that mirrors the depth and quality of classification achieved through hours of human review.
- Delivers summary information about each asset, as well as granular, consistent scene-level details for video.
- Creative assets are stamped with a common taxonomy and standards that allow for placement optimization, greater internal analysis, and performance tracking, instead of manual, time-consuming review.
How it’s implemented
- For video content, we recognize both objects and text at the frame level; these are used to construct the “aboutness” of each scene, which is then aggregated into a summary of the video asset.
- Outputs are expressed in a multi-label classification format that includes people, places, objects, activities, emotions, affinities, brand logos, ad type, brand archetypes, personality archetypes, and a quality score.
- Every applicable category and segment is returned with a score representing strength of association, provided at the scene level and aggregated into a summary for the entire asset.
- Scores are normalized to a scale of 0-100.
- General guidance is as follows, but custom rules can be applied as necessary:
  - Scores of 30 or greater for IAB, places, activities, and objects indicate high correlation.
  - For demographics, affinities, ad type, brand archetypes, and personality types, a score of 50 or greater indicates high confidence.
  - For logos and celebrities, use a threshold of 80 or greater.
- Custom taxonomies can also be supported.
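The threshold guidance above can be applied in a short post-processing step. The sketch below is illustrative only: the field names (`group`, `label`, `score`) and the sample response are assumptions, not the API's actual schema, and the per-group thresholds simply encode the general guidance given here.

```python
# Hedged sketch: filtering a hypothetical multi-label classification
# response by the suggested minimum scores. Field names and the sample
# data are illustrative assumptions, not the real API schema.

# Suggested minimum score per label group (scores are normalized 0-100)
THRESHOLDS = {
    "iab": 30, "places": 30, "activities": 30, "objects": 30,
    "demographics": 50, "affinities": 50, "ad_type": 50,
    "brand_archetypes": 50, "personality_types": 50,
    "logos": 80, "celebrities": 80,
}

def filter_labels(labels):
    """Keep only labels whose score meets the threshold for their group."""
    return [
        label for label in labels
        if label["score"] >= THRESHOLDS.get(label["group"], 50)
    ]

# Illustrative summary-level response fragment for one asset
example = [
    {"group": "iab", "label": "Sports", "score": 42},
    {"group": "objects", "label": "bicycle", "score": 28},
    {"group": "logos", "label": "AcmeCo", "score": 75},
    {"group": "affinities", "label": "outdoor enthusiasts", "score": 63},
]

print(filter_labels(example))
```

With these sample values, only the IAB and affinity labels survive: the object score falls below 30 and the logo score below the stricter 80 cutoff. Custom rules would replace the `THRESHOLDS` table.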
Additional technical considerations and features
- Timestamps are provided for scene-level detail, along with the associated metadata.
- A BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) score is calculated to assess the quality of the asset.
- Support for the IAB Content Taxonomy 2.0 is included.
- Inputs like titles, descriptions, and custom tags can be supplied to further hone categorization.
- If there are unique IDs or tags that are relevant to the content, we can associate those values with the evaluation output.
- The output is a human-readable file, structured consistently so it can be easily parsed for your use cases.
- The API is RESTful, conforming to the constraints of the REST architectural style.
- The API uses an asynchronous response structure: we acknowledge receipt of a request with standard HTTP status codes and error handling, then deliver results to a location of your choice when processing is complete.
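The asynchronous flow described above can be sketched as follows. Everything here is an assumption for illustration: the endpoint URL, the JSON field names (`asset_url`, `callback_url`, `custom_tags`, `asset_id`), and the callback mechanism are hypothetical stand-ins for whatever the actual integration specifies.

```python
# Hedged sketch of an async classification request: POST the asset with a
# callback location, receive an immediate acknowledgement, and get the
# full results delivered to the callback once processing finishes.
# Endpoint and field names are illustrative assumptions, not the real API.
import json

API_URL = "https://api.example.com/v1/classify"  # placeholder endpoint

def build_request(asset_url, callback_url, custom_tags=None, asset_id=None):
    """Assemble a JSON body for an asynchronous classification request."""
    body = {
        "asset_url": asset_url,        # image or video creative to classify
        "callback_url": callback_url,  # where results are delivered later
    }
    if custom_tags:
        # optional titles/descriptions/tags used to hone categorization
        body["custom_tags"] = custom_tags
    if asset_id:
        # a unique ID to associate with the evaluation output
        body["asset_id"] = asset_id
    return json.dumps(body)

payload = build_request(
    "https://cdn.example.com/spot.mp4",
    "https://hooks.example.com/results",
    custom_tags=["summer", "cycling"],
    asset_id="creative-001",
)
print(payload)
```

In this pattern the initial POST returns only an acknowledgement (or an HTTP error), and the results file arrives at `callback_url` asynchronously, so the client never blocks on long-running video processing.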