Turning off political ads on Facebook could have unpredictable consequences

Programming note: The Interface is off on Thursday. Back on Monday.

It seems hard to believe now, but there was a time when political advertising was relatively uncontroversial. Sure, individual ads regularly proved to be controversial, from the nuclear provocations of Lyndon B. Johnson’s “Daisy” to the racist fear-mongering of George H.W. Bush’s “Willie Horton.” And the economics of political advertising, which depend heavily on moneyed interests pushing their agenda through opaque front groups, have been the subject of sustained and valid criticism since well before the Citizens United decision.

But the basic idea that a politician ought to be able to buy space in a media outlet and show it to a bunch of people — that was something we mostly had learned to live with. And then social networks came along, and the calculus changed.

Buy an ad in a newspaper or on a TV station and your potential reach is relatively limited, at least compared to the internet. But buy an ad on a social network and its reach is potentially infinite — and it can acquire massive scale in an incredibly short period of time.

The old thinking was, a politician ought to be able to address their constituents — and if they say horrible things, that’s something the public ought to know. The new thinking was — well, sure, but what if he says that Election Day is a week later for the other party? What if he says that members of an ethnic minority have come to the village to eat your babies? What if he tells his army to go shoot a bunch of protesters and buys an ad to brag about it?

Suddenly that combination of scale and speed feels dangerous, in a concrete way. The lie gets halfway around the world before the fact-checkers even see it, and that’s if your social network will even let them fact-check the lies to begin with.

All of that is prelude to Facebook CEO Mark Zuckerberg writing this on Tuesday evening in an op-ed in USA Today:

By giving people a voice, registering and turning out voters, and preventing interference, I believe Facebook is supporting and strengthening our democracy in 2020 and beyond. And for those of you who’ve already made up your minds and just want the election to be over, we hear you — so we’re also introducing the ability to turn off seeing political ads. We’ll still remind you to vote.

And so starting soon, you’ll be able to remove political ads from your Facebook feed. (Most political ads, anyway: Facebook cannot perfectly identify every remotely “political” ad, and so you may see some anyway.) This approach had been announced in January, but honestly who remembers January, and anyway now it’s here. This optional feature brings Facebook’s approach a step closer to that of Twitter, which banned political ads outright in November. (Google allows political advertising but restricts the ability of advertisers to “microtarget,” or show ads to people based on fine-grained demographic or location information.)

How big a deal is all this? Perhaps not as big as Facebook’s intention, also announced in that op-ed, to register 4 million new voters over the next several months — double what the company estimates it managed to achieve during the last election. And a new voter information hub modeled on a similar module that Facebook created for COVID-19 could also be useful as an island of sanity in a News Feed full of the usual polarized provocations.

But I always think it’s worth noting when, in a democracy, a major media outlet enables the restriction of political speech — even when its intentions are good. Sure, there’s the journalist’s tendency to favor more speech in most contexts. But there’s also an awareness that placing limits on one kind of political speech can often benefit other forms of political speech. Limiting political speech in ads, for example, could favor incumbents, who have less need of advertising.

It could also change the kinds of political speech people see on Facebook. Advertising executives there have told me in the past that political ads tended to be less inflammatory than regular posts on the whole, because fewer people want to see inflammatory messages and thus they can be more expensive to distribute. But unpaid partisan engagement bait circulates widely through Facebook echo chambers. The New York Times’ Kevin Roose does a regular bit on Twitter where he posts the day’s top Facebook posts with links, and the results often skew heavily toward the conservative side. You can imagine a world where partisans disable political ads, see nothing but links to articles that flatter their worldview, and withdraw further into their echo chambers.

You can also imagine a world where politicians take the hint, and stop threatening to send in the military to trample peaceful protesters, or falsely suggest that voting by mail is illegal, or rig up a Facebook ad for a data-collection scheme to look like a link to the official US Census. Maybe, in such a world, people would be less likely to turn off political ads on Facebook. This world is admittedly harder to imagine.

Maybe the political-ad toggle will turn out to be a minor thing. Or maybe researchers will pinpoint it, a few years from now, as a decisive factor in some outcome or another. The interlocking feedback loops of Facebook, its user base, and the wider world are basically impossible to understand in real time.

What you can do, though, is watch the company continuously adjust the trade-offs it is required to make: between speech and safety, between Democrats and Republicans, between the absolutists and the people who think there ought to just be a toggle. Sometimes “giving the user more control” can be the most responsible thing to do. Other times it can represent an abdication of some larger responsibility.

Facebook is now locked in a perpetual tug-of-war between the people who think it should allow more speech, and the people who think it should allow less. Allowing everyone to disable political ads feels like Zuckerberg once more attempting to thread that needle — one whose eye is small and shrinking all the time.


I made a mistake in yesterday’s piece about Apple and antitrust: the email app Superhuman does, in fact, offer in-app purchases. So sorry about that.

Elsewhere, I’ve been riveted by the coverage of Apple’s inconsistent enforcement. See Dieter Bohn, Ben Thompson, John Gruber, and Savannah Reising for the best commentary on the subject. I expect this story will have more dramatic turns, and soon: WWDC starts Monday.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Netflix CEO Reed Hastings and his wife Patty Quillin are donating $120 million to historically black colleges and universities (HBCUs). It’s the largest recorded individual gift to support scholarships at the schools. (Jacob Knutson / Axios)

Trending up: Facebook wants to help register 4 million voters this year with its new Voting Information Center. While some studies have shown that Facebook posts don’t boost voter turnout, perhaps this effort will. (Nick Statt / The Verge)

Trending up: Google is committing to spending more than $175 million on racial equity initiatives with a focus on financing black-owned businesses and supporting black entrepreneurs. (Jacob Kastrenakes / The Verge)


An excerpt from a new book by former Trump administration national security adviser John Bolton says Trump asked Chinese leader Xi Jinping to help him win the 2020 election. Here’s Bolton in the Wall Street Journal:

Trump then, stunningly, turned the conversation to the coming U.S. presidential election, alluding to China’s economic capability and pleading with Xi to ensure he’d win. He stressed the importance of farmers and increased Chinese purchases of soybeans and wheat in the electoral outcome. I would print Trump’s exact words, but the government’s prepublication review process has decided otherwise.

Senate Republicans introduced a new bill that would seek to limit Section 230 protections for big tech companies. In reality, the bill would likely make it even harder to hold companies like Facebook and Twitter accountable for selectively enforcing their own terms of service. Here’s Dell Cameron at Gizmodo:

The bill, dubbed “Limiting Section 230 Immunity to Good Samaritans Act,” would effectively grant users the right to sue major internet companies that enforce their own terms of service unequally and in bad faith. As it stands, users cannot sue any website for content generated by another user (e.g., tweets, comments, posts, etc.) or for any decision by the website owner to restrict access to content that the owner finds “objectionable.”

However, the changes to Section 230 of the Communications Decency Act proposed by Hawley do not appear to place any new restrictions on how companies define their own moderation policies—only that they stick to, and evenly apply, whatever rules they ultimately decide upon. (Any effort to regulate how internet companies moderate content beyond that which is illegal, such as child sexual abuse material, would more than likely run afoul of the First Amendment.)

The Justice Department is also working on a proposal to pare back Section 230 protections for tech platforms. The proposed changes are a direct shot at companies like Facebook, Twitter and YouTube that have come into the cross hairs of the Trump administration. (Cecilia Kang / The New York Times)

India’s antitrust watchdog is reviewing Facebook’s purchase of a 10 percent stake in Reliance Jio. The deal is meant to give Facebook a foothold in one of the world’s fastest-growing internet markets. (Shruti Srivastava / Bloomberg)

The claim that half the accounts tweeting about COVID-19 are bots is overblown, according to disinformation experts. They say there’s little research to suggest the number is actually that high. (Siobhan Roberts / The New York Times)

Police departments are using their official Twitter accounts to tweet misinformation about protesters. At times, these rumors have included unsubstantiated claims about weapons and protester violence. (Aaron Blake / The Washington Post)

Microsoft has pledged not to sell facial recognition technology to police departments until it’s been federally regulated. But documents show the company tried to sell the controversial technology to the government for years, including to the Drug Enforcement Administration in late 2017. (Ryan Mac / BuzzFeed)


Twitter started rolling out audio tweets on iOS. The feature lets users record audio snippets and attach them to tweets. Here’s how it works, according to The Verge’s Chris Welch:

If you’ve got access to it, you’ll see a new waveform icon beside the camera icon when composing a tweet. Tap that, and a red record button appears at the bottom of the screen, which you can tap to start recording your message.

“Each voice tweet captures up to 140 seconds of audio. Have more to say? Keep talking. Once you reach the time limit for a tweet, a new voice tweet starts automatically to create a thread,” Twitter said.

Audio can only be added to original tweets, according to this help page, so you can’t include them in replies or retweets with a comment. Another minor thing to note is that whatever your profile picture is when you record an audio clip will always be attached to that audio tweet. “Your current profile photo will be added as a static image on your audio attachment and will not refresh if you update your profile photo,” Twitter says.

Zoom says free users will get end-to-end encryption after all. The company had originally said the feature would be restricted to paid enterprise users. (Nick Statt / The Verge)

Apple’s head of diversity and inclusion, Christie Smith, is leaving the company. The news comes as Apple steps up its initiatives to promote equality. (Shelly Banjo and Mark Gurman / Bloomberg)

An army of public health volunteers is fighting vaccine disinformation online. Their plan is to conscript the vast but largely silent majority of Americans who support vaccines to block, hide, and report vaccine misinformation on social media. (Megan Molteni / Wired)

The features that make Facebook groups attractive — privacy and community — make them dangerous for the spread of misinformation. Often, they’re exploited by bad actors to share rumors and conspiracies. (Nina Jankowicz and Cindy Otis / Wired)

Employees at the mental health startup Crisis Text Line tried telling the board about the CEO’s racial insensitivities. When that didn’t work, they went to Twitter. (Zoe Schiffer / The Verge)

Police body cam videos can underplay officer brutality by only showing their point of view. “Body cameras show a scene from the perspective of the officer, and the image on the screen is focused on the suspect. That changes the way the incident is interpreted,” this author writes. (Nicole Wetsman / The Verge)

Video game companies have vowed to fight racism in their communities in the wake of George Floyd’s killing and the ensuing mass protests. But they’ve offered few details about how. Their silence has left some in the community wondering why even basic, text-based instances of racism persist. (Noah Smith / The Washington Post)

ByteDance, the Chinese parent company of TikTok, booked around $5.64 billion in revenue for the January-March quarter. The number shows growth of more than 130 percent compared with the same period a year earlier. (Yingzhi Yang and Julie Zhu / Reuters)

People are stealing OnlyFans content and posting it for free on other adult sites. In some cases, they’re scraping subscribe-only content in bulk and then hawking it on Discord servers. (Samantha Cole and Joseph Cox / Vice)

Magic Leap is trying to stay afloat with business-focused applications built in the model of Microsoft’s HoloLens. It’s a far fall for a company that, in 2016, looked like it could change the world. (Adi Robertson / The Verge)

The Trust & Safety Professional Association is born! It will offer training and career development for people who work on content moderation and related subjects around the tech industry, and initial backers include Facebook, Google, Twitter, YouTube, and Airbnb. I spoke with Clara Tsao during the association’s development and have high hopes for the good it can do.

Things to do

Stuff to occupy you online during the quarantine.

Install Siri’s “I’m getting pulled over” shortcut. “It will dim your phone, pause any music being played, and start recording video from your front-facing camera. It can also send your current location and a copy of that video to an emergency contact, though you’ll need to confirm a few pop-up messages to complete these steps,” James Vincent writes.

Subscribe to a podcast focused on history, literature, and current affairs from a non-white point of view. This viral Twitter thread has lots of good examples.

Try a Pride-themed Snapchat filter. These “portal lenses,” made in partnership with The Advocate, will introduce you to some of “the LGBTQ+ activists, artists, politicians, and more from each state who are changing the world for the better.”

Those good tweets

Talk to us

Send us tips, comments, questions, and your political ad preferences: [email protected] and [email protected].
