This week’s riveting whistleblower testimony from former product manager Frances Haugen, concerning Facebook’s use, or potential underuse, of its Artificial Intelligence (AI) based algorithms, underscored the radical shifts and risks taking place across our globally shared social media platforms. A key takeaway is the vital role that AI algorithms could play in making social media vastly safer across the globe.
As Haugen made clear, Facebook’s algorithms are optimized to maximize user engagement and hence corporate revenue streams and profits. Unfortunately for society, content that generates anger and resentment drives high levels of engagement within its platforms. With profits determined by the amount of content viewers consume, highly polarizing content that may be threatening or based on misinformation can lead to longer engagement sessions and more profit for Facebook.
Particularly troubling is Facebook’s own internal research indicating that younger consumers, especially teenage girls, are particularly susceptible to a depression spiral stemming from Instagram content. Instagram’s lifestyle-focused feeds are saturated with unrealistic standards of beauty and living for young women, fostering disengagement from real life and depression when their own lives are contrasted with the seemingly perfect lifestyles curated on Instagram. That depression in turn leads to less time spent in real life and more time spent on Instagram, increasing Instagram’s profits.
With Facebook’s vast internal research exposing the devastating impact of information disseminating through its platforms, why hasn’t the company done more to hone its technology to reduce harm? According to Haugen, Facebook may take action on “as little as 3-5% of hate content” and 0.6% of violence and incitement content, despite claiming to be the best in the world at such content detection and safety. For a company with Facebook’s resources and talent, these unacceptably low action rates on dangerous content will invite fierce challenges, greater scrutiny, and regulatory oversight. Facebook likely has the AI talent, technology, and resources to lead the space in removing dangerous content, but has chosen to look the other way in favor of the lucrative profits generated by unsafe information spreading through its platform.
At Netra, we’re a visual intelligence company that solves video comprehension, doing for visual data what search has done for text. Our AI-based API can flag disruptive and hateful content, including nudity, hate, disasters, and accidents. Using Artificial Intelligence, our technology minimizes the need to rely on limited crowdsourced tags and human review at scale. We are not alone: other players are likewise harnessing AI to tackle a myriad of content-flagging challenges.
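To make the idea concrete, here is a minimal sketch of how automated flagging of this kind typically works: a vision model assigns each piece of content a confidence score per label, and content is flagged when any unsafe label crosses a threshold. The label names, threshold, and function below are illustrative assumptions for this post, not Netra’s actual API schema.

```python
# Minimal sketch of confidence-threshold content flagging.
# Label names and the cutoff value are hypothetical, chosen
# only to illustrate the general approach.

UNSAFE_LABELS = {"nudity", "hate", "disaster", "accident"}
THRESHOLD = 0.8  # hypothetical per-label confidence cutoff


def flag_content(predictions: dict) -> list:
    """Return unsafe labels whose model confidence exceeds the cutoff.

    `predictions` maps label names to confidence scores in [0, 1],
    as a vision model's classification head might emit per frame.
    """
    return sorted(
        label
        for label, score in predictions.items()
        if label in UNSAFE_LABELS and score >= THRESHOLD
    )


# Example: one video frame's (hypothetical) model output.
frame_scores = {"hate": 0.93, "nudity": 0.12, "sports": 0.88}
print(flag_content(frame_scores))  # ['hate']
```

In a production pipeline, borderline scores near the threshold would typically be routed to human reviewers rather than auto-actioned, which is where AI reduces, rather than replaces, manual review at scale.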
With rapidly emerging AI capabilities such as the video technology pioneered and expanding at Netra, Facebook’s failure to take action on its unsafe content suggests either a serious lack of focus and prioritization, or an overwhelming desire to put profits above safety. While Facebook has stated its openness to further regulation, such legislation will only succeed if it forces Facebook to change its incentives surrounding safety. Haugen has recommended that Congress explore changing Section 230 to allow for culpability on decisions made about algorithms.
Requiring Facebook to provide more transparency into its algorithmic decision-making would be a welcome first step, as would exposing its internal investments in safety versus its other business priorities, as Congress begins to grapple with how to regulate both Facebook and social media technology more broadly.
Learn more about Netra, one of Forbes’ Top 20 Machine Learning Startups to Watch in 2021.
Authors: Amit Phansalkar, CEO and Founder of Netra, and Sarah Pettengill, contributing writer at press@Netra.io