Jun 26, 2015 · 7 minutes

Good news!

After two days of back-and-forth emails with Facebook representatives – at least two of the three I suspect are humanoid robots – the company finally allowed Pando to post and promote an investigative journalism article to Facebook. Thanks, Facebook! Was that so hard? I knew you’d find a way to take our money eventually…

Some background on what happened:

Late on Monday, we noticed that a larger-than-normal percentage of the traffic coming to Yasha Levine's excellent piece on Google's unfeeling and illegal war on Venice Beach's homeless population had come from Facebook. For those of you who don't have jobs that require you to participate in Facebook's often farcical but impossible-to-avoid relationship with publishers, paying to promote a story that's already resonating big with Facebook's users can give it an extra amphetamine shot of pageviews that is usually worth the money.

And so like any good social media jockey I set up a paid promotion for Levine’s piece. Within a few minutes, however, I received a message letting me know that Facebook did not want my money, thank you very much: 

"Your ad wasn't approved because it doesn't follow our Advertising Policies, which apply to an ad's content, its audience and the destination it links to. 

We don't allow ads that use profanity. Such language can offend viewers and doesn't reflect the product being advertised."

First off, the Facebook post or "ad" did not include profanity – not in the headline, the slug, the photo, or the description I wrote. Yes, the story – or “the destination it links to” – did, but most of that came in the form of direct quotes from the victims of Google's alleged intimidation and bullying – victims who were understandably pissed the fuck off.

I explained here the many reasons why this is troubling, especially considering Facebook's increasing position as the dominant platform where readers both young and old find news content. According to recent studies, almost half of all web-using adults -- and 88 percent of Millennials -- use Facebook to find news.

To add insult to injury, Facebook has on occasion displayed an arrogance regarding its role as a steward of strong, substantive journalism – even though the race to the bottom evident at many news organizations can be at least partially attributed to how Facebook’s algorithms surface content.

As I find myself writing an angry message using Facebook’s support tools, I realize, of course, that I’m basically shaking my fists at what’s almost certainly a lifeless robot – a robot so hapless that it struggles to tell the difference between a piece of investigative journalism that happens to contain some dirty words and a plain text webpage displaying nothing but the word FUCK 10,000 times in a row. But behind every great robot, there’s a great man. And behind every confused, ineffectual, and inconsistently Puritanical robot, there’s a Facebook employee.

What I mean is this: even though it was almost certainly an automated process that disapproved my post and not a real human being, that doesn't absolve Facebook. The company tends to cast off responsibility for the behavior of its algorithms, but this robot scapegoating is disingenuous and contradicts the reality of Facebook – a reality in which these algorithms are constantly tweaked and tinkered with by humans in order to achieve specific outcomes.

To be fair, I honestly don’t believe that one of those desired outcomes is to prevent me from promoting a piece of journalism that, by any number of metrics – quality of writing, social importance, cultural relevance – was pretty fucking good. Facebook’s News Feed algorithm may be accused of serving up too much fluff to users and not enough substance – too many listicles, not enough prose, if you will. But as Facebook looks to become not just a distributor but a home for serious journalism, hosting content from the New York Times and the BBC among others, the last thing it needs is to be accused of “censoring” journalists.

And so I figured that the moment someone at Facebook – maybe a customer service representative or even a slightly smarter robot – took another look at my post, the rejection would be reversed. It might take a while to fight through the robotic red tape, but I was confident I’d get there.

The second response I received regarding this issue, however, only chipped away at that confidence. It came from "Frank," of the “Facebook Ads Team,” who opened with this:

"Thanks for writing in. I'm here to help. 

"Your ad was rejected because it doesn't follow the language policy of our ad guidelines. Please make sure the language in your ad's image or video, body text and title are all compliant."

That's not quite the same language used in the original rejection, but it might as well have been. Like I said, any human can clearly see there is no profanity in the ad itself. So either "Frank" is the product of another automated process, or Frank is a real person who simply didn't bother to look at the rejected ad, opting instead to copy and paste some boilerplate message. Neither possibility reflected too well on Facebook.

I didn’t actually see “Frank”’s message until a couple of hours after receiving it, and so it wasn’t until later that I sat down to write another angry note, in the rapidly dwindling hope that its recipient would be some entity, machine or otherwise, that possessed human intelligence.

That’s when I found the third and what would be the final message from Facebook – this one from “Isabella Leone.”

A last name? That’s a good sign. Maybe that means she’s a real human being. Or if nothing else, maybe her creator was cut from a more civilized cloth than “Frank”’s, and in giving Isabella a last name had intended to lend a measure of dignity to the Facebook-owned humanoid.

Whatever the case, “Isabella Leone” totally saw where I was coming from:

Hi David,

I've taken a look at your ad and found that it is policy compliant. We do not allow profanity in ad creatives because it can be offensive and negatively impact the audience's experience on Facebook. In this case, the article was flagged for using the term "bastards," but should not have been disapproved since it does not appear in the ad. I am sorry for trouble here.

The ad is approved and paused. You can set it live through your Ads Manager if you wish to do so.

Isabella Leone

Huh. It wasn’t the numerous “fucks” in the story that offended the sensibilities of the algorithm that initially blocked my ad? Or the handful of “shits”? Nope, the dirty, unforgivable word was “bastards.”

That confused me at first. But the more I thought about it, the more it made perfect sense. In a way, Facebook’s algorithms are the illegitimate sons and daughters of Mark Zuckerberg himself. Zuckerberg brought them into this world, only to abandon them in the wilds of a billion News Feeds, acknowledging their existence only when in need of a scapegoat. Some faceless “Frank” or “Isabella” is always to blame when users complain that all they ever see on Facebook anymore are Ice Bucket Challenge videos.

But you and I know better. These bastard sons and daughters are simply doing what they’re programmed to do, striving in vain to make their father proud, even though they know in the bottom of their binary hearts that it’s hopeless. 

But hey, they fixed my problem. Sure, it wasn’t the most elegant resolution. But the good news is, now I know what to expect from our dystopian future of content, when publishing directly to Facebook is the only way to reach a significant audience – the rest of the Internet might as well be cable access television at that point – and instead of reporting to editors, I'll work my problems out with anonymous algorithms like “Frank” and “Isabella Leone.”