Contrary Research Rundown #118
Meta’s about-face on content moderation could be as much about AI as it is about free expression, plus new memos on Blockchain.com and Hallow
Research Rundown
When Mark Zuckerberg took the stage at Georgetown University in 2019, he put forth a vision of social media as a beacon of free expression. He argued that free expression was a force for progress and warned that even well-intentioned restrictions on speech could reinforce existing power structures instead of empowering people. This stance would soon collide with reality, setting off a chain of events that would transform Meta’s approach to content moderation and raise fundamental questions about the future of online discourse.
The sheer volume of content produced on social media, combined with the spread of harmful material, seemed to necessitate complex content moderation systems. To address these issues, platforms like Facebook hired content moderators, often through third-party contractors, to review and remove inappropriate content.
But that solution brought its own problems. Casey Newton’s 2019 article, The Trauma Floor, investigated these content moderation centers. At a Tampa facility operated by Cognizant, moderators worked in harrowing conditions. One moderator died of a heart attack at his desk, while others faced daily trauma from exposure to graphic violence, hate speech, and child exploitation. Management responded by discouraging discussion of the death and retaliating against those who raised concerns. The reporting revealed a deep disconnect between Meta’s public ideals and the real human cost of maintaining them.
To combat misinformation, Meta introduced third-party fact-checking programs as part of its content moderation strategy. While these programs aimed to address the spread of false information, they were not without flaws. Meta has since acknowledged that fact-checkers, like any individuals or organizations, can have inherent biases. These biases occasionally resulted in the unintended censorship of legitimate political speech, raising concerns about fairness and balance in the process.
These flawed mechanisms became the catalyst for Meta’s first major re-evaluation of content moderation. While the company had positioned itself as a protector of users from harmful content and misinformation, it was simultaneously creating different kinds of harm. The tension between user protection and worker welfare would ultimately push Meta toward automated solutions, setting the stage for today’s AI-driven challenges.
The trauma experienced by human moderators and the mismanagement of misinformation directly influenced Meta’s current retreat from active moderation. The company’s shift from third-party fact-checking to a Community Notes system represents more than just a policy change – it’s an acknowledgment that centralized content control is both humanly and technically unsustainable.
Meta’s new approach focuses automated systems solely on high-severity violations while transferring responsibility for everyday content moderation to users. This isn’t just about reducing mistakes; it’s about addressing what Meta calls “mission creep” in content moderation. As Meta put it in a blog post this week, “Too much harmless content gets censored, too many people find themselves wrongly locked up in ‘Facebook jail,’ and we are often too slow to respond when they do.” This acknowledgment ties past challenges in moderation to current AI issues, highlighting the need for platforms to rethink their role.
The timing of Meta’s pivot is particularly significant as AI-generated content increasingly outpaces traditional moderation approaches. The same issues that burned out human moderators, namely volume, complexity, and ambiguity, now manifest in new ways. Consider a scenario becoming increasingly common: AI-generated accounts engaging primarily with other AI-generated accounts, creating self-reinforcing content loops. These interactions reproduce the scalability problems that first emerged with human moderation, but at a far larger scale.
The future of online platforms appears to lie not in controlling content, but in empowering users to navigate it. And the future of content moderation isn’t just about technology – it’s about fundamental questions of human interaction in an AI-augmented world. The path from Zuckerberg’s Georgetown speech to today’s AI challenges reveals a continuous thread: the tension between ideals and reality, between free expression and user protection, and between human judgment and AI efficiency.
The question has evolved from whether platforms should moderate content to whether they can. While Meta is moving toward a more hands-off approach, the rise of AI-generated content introduces new complexities that demand innovative solutions. A world where AI systems will increasingly talk to other AI systems and generate content at an unprecedented scale will require continuous adaptation and thoughtful consideration of the ethical and societal implications of these technologies.
Ultimately, navigating this evolving landscape will require platforms to balance innovation with accountability, ensuring that technological progress serves the broader interests of society.
Blockchain.com is a cryptocurrency wallet provider and exchange platform serving retail and institutional investors. To learn more, read our full memo here and check out some open roles below:
Senior DevOps Engineer - London, England
Front-End Engineer, Institutional - London, England
Hallow is a Catholic prayer and meditation app that provides a convenient and accessible platform for integrating prayer and mindfulness into daily life. To learn more, read our full memo here and check out some open roles below:
Product Designer - Chicago, IL or Remote
Content Editor - Chicago, IL or Remote
Check out some standout roles from this week.
Zapier | Remote (US or Canada) - Sr. Backend Engineer, Applied AI Engineer, Engineering Manager (Identity Platform), Product Manager Enterprise (Multiple Positions)
Vercel | Remote (US) - Content Engineer, DX Engineer, Software Engineer (Site), Site Reliability Engineer (Edge), Senior Product Designer, Senior Front End Consultant
Anthropic | San Francisco, CA, New York City, NY or Seattle, WA - Product Manager (Research), Software Engineer (API Experience), Research Scientist (Interpretability)
Replit | Foster City, CA (Hybrid) - Engineering Manager (Product & Growth Engineering), Lead Site Reliability Engineer, Software Engineer (Full Stack)
Anthropic is reportedly in talks to raise $2 billion in a funding round led by Lightspeed Venture Partners, which would value the company at $60 billion and make it the fifth most valuable US startup after SpaceX, OpenAI, Stripe, and Databricks.
Sam Altman is confident that OpenAI knows “how to build AGI” and that the first AI agents “will join the workforce” with transformative impact in 2025.
Matt Mandel, an investor at Union Square Ventures, published an essay entitled “The Deep Tech Opportunity” in which he argues that the “lean startup” model of internet software companies has become less viable, and that venture capital should shift focus to frontier-pushing technologies in industries like energy, manufacturing, and life sciences.
Payroll startup Deel has been accused of money laundering and failing to comply with sanctions in a lawsuit tied to an alleged Ponzi scheme.
Revolut, a London-based neobank, is looking to expand into the private banking market.
US startups are turning to domestic chip fabrication facilities rather than relying solely on Taiwan's TSMC.
The current AI landscape is reminiscent of the early days of the automotive and aviation industries, where remarkable early successes were constrained by challenges in robustness and reliability.
Plenty Unlimited, a vertical farming company once valued at $1.9 billion, is now in talks to raise $125 million in a deal that would value existing shares at less than $15 million, a drop of more than 99%. This follows the shutdown of Bowery Farming, another vertical farming startup, in November 2024.
Octahedron Research published its insights from Q4 2024 and asserted that the Cloud Giants — namely Google, Microsoft, and AWS — would maintain their moat in AI, anchored by their scale across the four key pillars of the AI ecosystem: infrastructure, research, applications, and distribution.
Addepar, a software company that manages over $7 trillion in assets for clients like Morgan Stanley and Jefferies, is seeking to raise $250 million at a ~$3.3 billion pre-money valuation.
Dario Amodei, CEO of Anthropic, shared his thoughts on biology, AI, and health. He thinks AI-powered biology could compress a century of progress into a decade, revolutionizing healthcare by accelerating breakthroughs in disease prevention, treatment, and biological freedom, and potentially doubling human lifespans.
Big Tech companies like Microsoft, Amazon, and Google are investing in nuclear power to meet the growing energy demands of their AI data centers, signaling a potential "nuclear renaissance" driven by the AI boom.
Wise, a fintech company, is expanding its B2B payments platform to compete directly with Airwallex and with the legacy infrastructure, like SWIFT, that Australian banks rely on.
Meta's decision to replace its fact-checking program with "X-style community notes" was announced with less than an hour's notice to the third-party fact-checkers.
Telegram handed over the IP addresses and/or phone numbers of 2,253 users to US law enforcement agencies in 2024, a significant jump from just 14 such requests the previous year.
In our latest episode of Research Radio, we sat down with Base Power Co-Founders Zach Dell (CEO) & Justin Lopas (COO). You can check out the full episode here.
At Contrary Research, we’ve built the best starting place to understand private tech companies. We can't do it alone, nor would we want to. We focus on bringing together a variety of different perspectives.
That's why applications are open for our Research Fellowship. In the past, we've worked with software engineers, product managers, investors, and more. If you're interested in researching and writing about tech companies, apply here!
In an era when content is turning into software, we still need to advocate for better content curation and higher-quality information sharing. Platforms can't solve this on their own; we also need to educate creators about what they post, because spamming people with daily low-value posts leads nowhere.
If social media becomes overwhelmed with AI accounts and AI-generated content, will real people continue to use it? When the mobile wave got started, building trust to execute transactions with strangers was a big deal. Will the AI wave necessitate a new form of trust infrastructure so people can know who is a real person or entity and who is not?