In recent weeks, Congress has blasted Facebook over revelations that the company's own internal research acknowledges its platforms' harmful effects on the mental wellbeing of users under 18. Of particular concern is the rise of content promoting eating disorders, hate speech, illegal drug sales, and suicide that is easily accessible to, and viewed by, teens on Facebook.
Last week, in a grueling three-hour hearing entitled “Protecting Kids Online: Snapchat, TikTok, and YouTube,” executives from those three companies took turns defending their platforms against alleged shortfalls in protecting young users’ safety, arguing that they have proactively incorporated features to better protect them. In his introductory address, Senator Ed Markey (D-MA) lambasted social media, proclaiming that “Big Tech preys on children and teens to make more money” and that 13-year-old girls have the right to be defended against “algorithms that push toxic content” towards them.
The three social media companies faced an uncomfortable grilling over their platforms’ safety inadequacies as Congressional members took turns recounting their staffs’ experiences posing as 13-to-15-year-old users, confronting the companies with examples of the unsafe content those exploratory test-drives uncovered.
All three companies assured the Congressional panel that inappropriate content violates their guidelines and policies and would be taken down, but the Big Tech witnesses struggled to explain how they could effectively and efficiently police violations and remove ever-adapting violating content from their platforms.
Will Congress push social media towards greater investment in AI?
With social media profits surging, Congress appears frustrated that the platforms are failing to invest adequately in content moderation, and it will likely push to end the platforms’ immunity for the content they host.
Facebook’s resistance to increased regulation in recent weeks, contrasted with its recently announced quarterly revenue of $29 billion, has exacerbated tensions within Congress over social media’s lack of investment in safeguarding minors.
Two areas emerged as likely legislative outcomes:
- Monetary fines and liability for failures to prevent unsafe content from reaching children
- Independent inspection of and increased transparency into platforms’ AI algorithms
Witnesses from all three firms testified that they welcome the “spirit of legislation” but were hesitant to endorse specific reform proposals. From Congress’ perspective, revising Section 230 is an attractive area to score an easy win as public support for Facebook, and for social media more broadly, collapses. Tuesday’s questioning, however, continued to reflect the Congressional panel’s inability to grasp the complexities of social media algorithms and what “transparency” would actually mean for moderation in practice.
Can Congress effectively police algorithms? Unlikely, though perhaps it can for children and teens.
Historical precedent, such as limits on tobacco and alcohol advertising to children, will likely give Congress sufficient basis to require stronger protections and social media policies for children. Policing on behalf of kids is politically uncontentious and easy to rally behind. Teens and children have long been treated as content influencers and recipients of attractive media dollars; it is time to stop treating them as such and to improve safeguards through further investment in AI and its ability to filter content at scale.
Congress should encourage investment in AI rather than distrusting social media algorithms and labeling them a “black box.” Social media, whether for adults or children, will only continue to expand into unexplored technologies and new challenges. Congress’ most constructive contribution will likely be legislation that creates uniform policies governing what content and methods can and cannot be presented to users under the age of 18, along with enhanced age verification for social media participants.
Congress’ ability to pass effective legislation, however, remains dubious, as its members seem more interested in miring AI technology in under-informed scrutiny than in embracing its useful applications, scalability, and safety opportunities. Improving social media safety for children and teens is an achievable aspiration, but Congress will likely need to embrace AI as a scalable solution rather than continue to cast social media AI as “black-box,” “dangerous” algorithms that only incite harm and deliver no benefit within social media platforms.