The Global Alliance for Responsible Media (GARM) is an industry initiative that has adopted a unified, consistent, and understandable language for describing harmful content. GARM has created definitions for unsafe categories that set clear expectations for each associated category. This consistency in categorization pushes the industry toward a more responsible and safer internet for all.
How did we implement GARM?
We applaud this effort, having been through the journey of defining safety ourselves. As one of the first platforms to define internet safety, the most common explanation of safety I have heard is: "I'll know it when I see it." This runs counter to any pattern recognition system, which requires consistency and a well-defined set of training data.
In addition, the challenge multiplies when it comes to video. How should a system classify a war scene in a movie? What about a horror movie with gory scenes? What about a bloody fight at a sports event?
We have not been able to get multiple humans to agree on a black-and-white definition of safety in these situations, let alone train a machine to apply one accurately. It is in this respect that the GARM framework fits Netra's approach to video classification like a glove.
Netra has developed its system and video narrative functionality with a fundamental belief that when it comes to video, context and intent matter far more than high-level classification alone. Thus Netra has always provided a safety rating alongside context: for example, high risk due to violence, but the context is movies and entertainment. This kind of information helps our partners make more accurate decisions for their use case, and they can in turn offer that flexibility to brands.
This is why the GARM framework, which provides not only a high-level description of each category but also a safety rating across risk tiers from high to low, was a natural fit for Netra's API offering.
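As an illustration only, a response that pairs a GARM category with a risk tier and a context signal might look like the sketch below. The field names and values are hypothetical, chosen for readability; they are not Netra's actual API schema.

```python
import json

# Hypothetical response shape -- field names are illustrative,
# not Netra's actual API schema.
response = {
    "garm_category": "Death, Injury or Military Conflict",
    "risk": "high",                         # GARM risk tier: high / medium / low
    "context": "movies_and_entertainment",  # why the risk may still be suitable
}

print(json.dumps(response, indent=2))
```

Carrying the context field alongside the risk tier is what lets a partner treat "high risk, but fictional entertainment" differently from "high risk, real-world footage."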
Netra has been adopted by several partners for GARM classification based on these three pillars:
- Providing the context of the video in addition to a safety risk, enabling better decision-making.
- Netra’s single-shot learning system, which has helped Netra create GARM categories using its proprietary data. The system can learn new classifications from a handful of samples, where traditional deep learning requires thousands.
- Incorporating context within its safety rating, detecting the nuances and intent of a video to provide better classification.
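To make the few-shot idea in the second pillar concrete, here is a minimal generic sketch: classify a new embedding by its nearest class centroid, computed from just a couple of labeled examples per category. The toy vectors and the centroid approach are assumptions for illustration; Netra's proprietary system is not described in this post.

```python
# Generic few-shot classification sketch: nearest class centroid over embeddings.
# All vectors and category names below are toy values, not Netra's data.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(dim) / n for dim in zip(*vectors)]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def few_shot_classify(examples, query):
    """examples: {label: [embedding, ...]} with only a handful per label."""
    centroids = {label: centroid(vecs) for label, vecs in examples.items()}
    return min(centroids, key=lambda label: squared_distance(centroids[label], query))

# Two labeled examples per category stand in for the "handful of samples."
examples = {
    "armed_conflict": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "sports":         [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
}
print(few_shot_classify(examples, [0.85, 0.15, 0.05]))  # armed_conflict
```

The point of the sketch is the data economy: a new category needs only a few labeled embeddings to define a centroid, rather than the thousands of samples a classifier trained from scratch would need.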
Ultimately, it will take the entire industry contributing to and adopting platforms like Netra's if we are to make the internet a safer place for advertisers, and we stand behind all such approaches to get there.