Facebook Quietly Makes a Big Admission

Back in February, Facebook announced a little experiment. It would reduce the amount of political content shown to a subset of users in a few countries, including the US, and then ask them about the experience. “Our goal is to preserve the ability for people to find and interact with political content on Facebook, while respecting each person’s appetite for it at the top of their News Feed,” Aastha Gupta, a product management director, explained in a blog post.
On Tuesday morning, the company provided an update. The survey results are in, and they suggest that users appreciate seeing political stuff less often in their feeds. Now Facebook intends to repeat the experiment in more countries and is teasing “further expansions in the coming months.” Depoliticizing people’s feeds makes sense for a company that is perpetually in hot water for its alleged impact on politics. The move, after all, was first announced just a month after Donald Trump supporters stormed the US Capitol, an episode that some people, including elected officials, sought to blame Facebook for. The change could end up having major ripple effects for political groups and media organizations that have gotten used to relying on Facebook for distribution.
The most significant part of Facebook’s announcement, however, has nothing to do with politics at all.
The basic premise of any AI-driven social media feed—think Facebook, Instagram, Twitter, TikTok, YouTube—is that you don’t need to tell it what you want to see. Just by observing what you like, share, comment on, or simply linger over, the algorithm learns what kind of material catches your interest and keeps you on the platform. Then it shows you more stuff like that.
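To make that mechanism concrete, here is a minimal sketch of an engagement-optimized ranker. The signal names (p_like, p_comment, p_share, p_dwell) and the weights are invented for illustration; this is the general technique described above, not Facebook’s actual model or parameters.

```python
# Illustrative sketch only: the per-post engagement predictions and weights
# below are hypothetical, not Facebook's actual system.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    # Hypothetical per-user predictions from an engagement model,
    # each expressed as a probability in [0, 1].
    p_like: float
    p_comment: float
    p_share: float
    p_dwell: float  # probability the user lingers on the post


# Hypothetical weights: interactions that keep people on the platform
# longer count for more than a passive like.
WEIGHTS = {"p_like": 1.0, "p_comment": 4.0, "p_share": 6.0, "p_dwell": 2.0}


def engagement_score(post: Post) -> float:
    """Combine predicted engagement signals into a single ranking score."""
    return (
        WEIGHTS["p_like"] * post.p_like
        + WEIGHTS["p_comment"] * post.p_comment
        + WEIGHTS["p_share"] * post.p_share
        + WEIGHTS["p_dwell"] * post.p_dwell
    )


def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts so the most 'engaging' ones appear first."""
    return sorted(candidates, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("calm_news", p_like=0.20, p_comment=0.02, p_share=0.01, p_dwell=0.30),
        Post("outrage_bait", p_like=0.15, p_comment=0.25, p_share=0.20, p_dwell=0.60),
    ])
    for post in feed:
        print(post.post_id, round(engagement_score(post), 2))
```

In this toy example the provocative post outranks the sober one simply because it is predicted to draw more reactions and longer dwell time, which is the dynamic the criticisms below are aimed at.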
In one sense, this design feature gives social media companies and their apologists a convenient defense against critique: If certain stuff is going big on a platform, that’s because it’s what users like. If you have a problem with that, perhaps your problem is with the users.
And yet, at the same time, optimizing for engagement is at the heart of many of the criticisms of social platforms. An algorithm that’s too focused on engagement might push users toward content that is super engaging but of low social value. It might feed them a diet of posts that are ever more engaging because they are ever more extreme. And it might encourage the viral proliferation of material that’s false or harmful, because the system is selecting first for what will trigger engagement, rather than what ought to be seen. The list of ills associated with engagement-first design helps explain why neither Mark Zuckerberg, Jack Dorsey, nor Sundar Pichai would admit during a March congressional hearing that the platforms under their control are built that way at all. Zuckerberg insisted that “meaningful social interactions” are Facebook’s true goal. “Engagement,” he said, “is only a sign that if we deliver that value, then it will be natural that people use our services more.”
In a different context, however, Zuckerberg has acknowledged that things might not be so simple. In a 2018 post, explaining why Facebook suppresses “borderline” posts that try to push up to the edge of the platform’s rules without breaking them, he wrote, “no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average—even when they tell us afterward they don’t like the content.” But that observation seems to have been confined to the question of how to enforce Facebook’s policies around banned content, rather than prompting a broader rethinking of how its ranking algorithm is designed.