Apr 14, 2015 · 5 minutes

There's been a lot of scary, apocalyptic talk lately about Facebook's ambitions to become not just the conduit through which news is discovered and shared, but the place where these stories live and die -- a black hole into which news organizations throw their content, leaving Zuckerberg and his army of robot editors to determine which wormholes -- if any -- it emerges from to reach users. Language corrupts thought, and Facebook corrupts everything.

But in imagining what this "Facebookified" media landscape would look like, there's one outcome that -- depending on how Facebook approaches it -- could actually constitute an improvement:

An end to plagiarism and sloppy aggregation.

For a model of how Facebook might hasten the demise of these crimes against content, consider YouTube. Through its Content ID system, YouTube can automatically detect when copyrighted material is featured in an upload. In theory -- and sometimes in practice -- the system works fine, alerting copyright owners and giving them the option to either take the video down, run advertisements against it to collect revenue, or do nothing. The process has helped bands like Explosions in the Sky -- whose songs from the Friday Night Lights soundtrack have become staples of fan-made sports montages -- pocket a little ad revenue from these third-party uploads.

Which raises the question: Assuming Facebook becomes as centralized a platform for journalism as YouTube is for videos, should Facebook also take on the responsibility of enforcing copyright claims? And if so, could that help alleviate the very real problems of plagiarism, sloppy aggregation, and improper attribution that continue to plague journalism?

From a technological standpoint, Facebook is certainly capable of catching intellectual property theft -- after all, spotting identical strings of words is a far simpler task for software than spotting identical pieces of music or video. And from Buzzfeed's Benny Johnson to Mic's Jared Keller -- both of whom were fired for lifting whole sentences from other sources without attribution -- plagiarism is more prevalent today than perhaps ever before, even though the Internet has made it much easier to catch. That's because the demands put on young content producers to re-write (or "aggregate") every story that manages to enter the sphere of public relevance are extraordinary. And with no time to vet or verify, and likely no expertise in the topics their stories concern, it's no wonder people get sloppy.
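
To see just how simple the text version of the problem is, consider a minimal sketch of the kind of matching such a system could use -- nothing here describes Facebook's actual technology, and all names are illustrative. One common textbook approach is word-level "shingling": break each article into overlapping n-word phrases and measure how many of an aggregated post's phrases already appear in the original.

```python
# Illustrative only: a toy n-gram ("shingle") overlap detector for text.
# This is NOT any platform's real matching system.

def shingles(text, n=8):
    """Return the set of n-word shingles (overlapping phrases) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate, source, n=8):
    """Fraction of the candidate's n-word phrases that also appear in the source.

    1.0 means every phrase was lifted verbatim; 0.0 means no verbatim overlap.
    """
    cand = shingles(candidate, n)
    if not cand:
        return 0.0
    return len(cand & shingles(source, n)) / len(cand)
```

A score near 1.0 flags a verbatim lift; a low score suggests mostly original wording. Real systems would need normalization, paraphrase detection, and human review on top, but the core comparison really is this cheap -- far cheaper than audio or video fingerprinting.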

But if Facebook hosts all this content, it could very easily spot examples of plagiarism and take them down. And unlike YouTube's Content ID -- which often takes down quality content like remixes despite the fact that many judges and legislators consider them examples of "fair use" -- the world would not be any worse off without posts written by bottom-feeding bloggers too lazy for Thesaurus.com.

Facebook could even take things a step further and help producers of original content make a little money whenever their work is aggregated, not unlike the gentlemen of Explosions in the Sky. Yesterday, for example, FiveThirtyEight explainer-in-chief Nate Silver called out Vox, one of his biggest competitors in the explainer racket, after one of its writers reposted a graphic created by Silver's staff charting the popularity of 2016 presidential candidates. Although the writer, Matthew Yglesias, gave credit to FiveThirtyEight, he did not link back to the original graphic, in what Vox's grand viceroy of explanation Ezra Klein called a violation of Vox's aggregation policy.

But what if Vox -- which has already experimented with hosting content on Facebook with some measure of success -- and FiveThirtyEight had each posted their articles directly to Facebook? The social network could have caught the replication and informed Yglesias that his story would be removed if he did not add a link to the original graphic. Or, Facebook could even add the link itself. More importantly -- because even Klein admits most readers don't click on links to source material -- Facebook could give FiveThirtyEight a share of the ad revenue generated by Vox's post because it relied so heavily on another site's material.

But how much revenue should be sent to Vox versus FiveThirtyEight? That's where things get tricky. As Klein notes, Yglesias does more than simply repost the graphic -- he analyzes it from a unique perspective and even offers a counterpoint to some of the conclusions Silver drew in his original article. That makes it a "worthy" piece of aggregation, as opposed to some 200-word summary of an AP story written by a news intern between cigarette breaks.

Furthermore, what if Silver had asked Facebook to remove Vox's article? After all, the chart is the copyrighted property of FiveThirtyEight. But there's a pretty good argument that Vox's article qualifies as fair use, given that it added original commentary to the graphic. And while these distinctions should be made by a judge, not an algorithm, that hasn't stopped YouTube and now SoundCloud from making these judgment calls all the time. (In almost every dispute, YouTube unfortunately errs on the side of the bigger, more powerful publisher, and Facebook would likely be no different).

In truth, a Facebook-led copyright enforcement regime modeled after Content ID could be a disaster -- but it could also punish thieves and disincentivize the most pointless forms of aggregation by redirecting ad revenue to the original source. This could be done in varying proportions depending on how much or how little the new story adds. And while that's easier said than done, if anyone's smart enough to figure it out it's Mark Zuckerberg. And if journalists don't like it? As in all dysfunctional relationships, Facebook is so powerful and publishers so insecure that no matter how much the platform jerks them around, they always come back.
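
The "varying proportions" idea is easy to sketch, even if the hard part is deciding the inputs. Assuming some upstream system scores how much of a post is copied material (a number between 0 and 1 -- a pure assumption here, not anything Facebook has described), the payout logic itself is trivial:

```python
# Hypothetical sketch of a proportional ad-revenue split between an
# aggregator and the original source. 'copied_fraction' is assumed to
# come from some upstream matching system; everything here is illustrative.

def split_revenue(ad_revenue, copied_fraction):
    """Divide a post's ad revenue: the more copied material, the bigger
    the original source's cut."""
    if not 0.0 <= copied_fraction <= 1.0:
        raise ValueError("copied_fraction must be between 0 and 1")
    source_share = round(ad_revenue * copied_fraction, 2)
    return {"source": source_share,
            "aggregator": round(ad_revenue - source_share, 2)}
```

Under this toy scheme, a lazy 90-percent-copied rewrite would send most of its revenue back upstream, while a Yglesias-style piece built mostly of original analysis would keep most of its own. The arithmetic is the easy part; scoring "how much the new story adds" is where the judgment calls live.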

Then again, this is all enormously hypothetical. Facebook likely won't do any of the things I've described above because it's in the platform's best interest to host as many pieces of content as possible -- so it can sell the maximum number of ads against them. That includes work that's plagiarized, ripped-off, or even outright false. And so while there's a kernel of positive potential in Facebook's plan to take over the distribution of news, the reality of how this takeover plays out will likely be as utterly irredeemable as media's biggest cynics have led us to believe.

[illustration by Brad Jonas]