Go read this story on how Facebook’s focus on growth stopped its AI team from fighting misinformation

Facebook has always been a company focused on growth above all else: more users and more engagement mean more revenue. The cost of that single-mindedness is spelled out clearly in this brilliant story from MIT Technology Review. It details how the company's AI team's attempts to tackle misinformation with machine learning were apparently stymied by Facebook's unwillingness to limit user engagement.

“If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored,” writes author Karen Hao of Facebook’s machine learning models. “But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff.”

On Twitter, Hao noted that the article is not about “corrupt people [doing] corrupt things.” Instead, she says, “It’s about good people genuinely trying to do the right thing. But they’re trapped in a rotten system, trying their best to push the status quo that won’t budge.”

The story also adds more evidence to the accusation that Facebook’s desire to placate conservatives during Donald Trump’s presidency led to it turning a blind eye to right-wing misinformation. This seems to have happened at least in part due to the influence of Joel Kaplan, a former member of George W. Bush’s administration who is now Facebook’s vice president of global public policy and “its highest-ranking Republican.” As Hao writes:

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
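To make that distinction concrete, here is a minimal, purely illustrative sketch of the two readings of "fairness" described in the excerpt above. The group labels, numbers, and helper function are hypothetical and are not drawn from Facebook's systems; the point is only that a flag rate which tracks each group's actual rate of misinformation is not the same thing as an equal flag rate across groups.

```python
def flag_rates(posts_by_group):
    """Fraction of each group's posts that a detector flags as misinformation."""
    return {group: sum(flags) / len(flags) for group, flags in posts_by_group.items()}

# Hypothetical detector output: 1 = flagged as misinformation, 0 = not flagged.
detected = {
    "group_a": [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],  # 30% of this group's posts flagged
    "group_b": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 10% flagged
}

# Assumed ground-truth misinformation rates per group (as judged by some external consensus).
ground_truth = {"group_a": 0.3, "group_b": 0.1}

rates = flag_rates(detected)

# Reading 1 (the Fairness Flow case study, per the article): the detector is fair
# if each group's flag rate tracks that group's actual misinformation rate, so
# unequal impact is expected whenever the underlying rates differ.
proportional_fair = all(abs(rates[g] - ground_truth[g]) < 0.05 for g in rates)

# Reading 2 (the interpretation attributed to Kaplan's team): the detector is
# "fair" only if it affects both groups equally, which forces it to under-flag
# the group that actually posts more misinformation.
equal_impact = abs(rates["group_a"] - rates["group_b"]) < 0.05

print(f"flag rates: {rates}")
print(f"fair in the proportional sense: {proportional_fair}")  # True
print(f"fair in the equal-impact sense: {equal_impact}")       # False
```

In this toy example the same detector passes the first test and fails the second, which is the conflict the former researcher describes: forcing equal impact would mean weakening the model until it no longer reflects where the misinformation actually is.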

The story also says that the work by Facebook's AI researchers on the problem of algorithmic bias, in which machine learning models unintentionally discriminate against certain groups of users, has been undertaken at least in part to preempt these same accusations of anti-conservative sentiment and forestall potential regulation by the US government. But pouring more resources into bias work has meant ignoring problems involving misinformation and hate speech. Despite the company's lip service to AI fairness, the guiding principle, says Hao, is still the same as ever: growth, growth, growth.

[T]esting algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

You can read Hao’s full story at MIT Technology Review here.
