Aug 20, 2014 · 4 minutes

Yesterday, in a horrific turn of events, the Islamic insurgent group ISIS announced it had killed captive American journalist James Foley, posting an extremely disturbing video to YouTube of his beheading. While that video has since been taken down, some graphic images depicting Foley's death are still being shared on social media. And thanks to Twitter's relatively recent emphasis on embedding images directly into tweets, this practice can create a jarring and highly uncomfortable experience for users who, for any number of valid reasons, do not want to see these photos.

In response to these concerns, Twitter CEO Dick Costolo announced a new policy, tweeting, "We have been and are actively suspending accounts as we discover them related to this graphic imagery."

But far from setting this controversy to rest, Costolo's announcement has only sparked a greater debate over a social network's responsibility when it comes to policing graphic imagery posted by users.

For example, Costolo's tweet seems clear enough -- post images of Foley's beheading and you will be suspended. And yet accounts belonging to the New York Post and the New York Daily News, which both tweeted out today's front pages depicting what by any standards is "graphic imagery" of Foley, are still chugging along.

A Twitter spokesperson told Business Insider that these accounts would not be suspended, arguing that, depending on a user's media settings, at least one of the tweets included a warning in place of the photo. But not all users saw that warning, and in any case, letting these accounts off the hook because (presumably -- Twitter would not comment on this) they belong to major media organizations directly contradicts Costolo's tweet, which didn't leave much room for interpretation. Making matters worse, Twitter even suggested the Post's tweet to a user who didn't follow the New York tabloid.

Others, like the Guardian's James Ball, see a different double standard at play here. "We are presented with images of grotesque violence on a daily basis," Ball writes. "Last month the New York Times ran on its front page the dead and broken body of a Palestinian child."

He goes on: "To see an outcry for Foley’s video and not for others is to wonder whether we are disproportionately concerned over showing graphic deaths of white westerners – maybe even white journalists – and not others."

Ball makes a good point. Though I would add that there's a distinction between the Foley video and an image of a Palestinian civilian killed in a missile attack: The people who murdered Foley want us to see the video. ISIS wants us to share these images and to be horrified and to be scared as a result. That is what terrorists do. The Israeli government, on the other hand, can't be very pleased to see such horrific loss of innocent human life on the front page of the Times. If enough people share their outrage over the collateral tragedies associated with Israel's campaign in Gaza, it could theoretically impact that country's strategy, prompting it to work harder to avoid civilian casualties. Outrage over the Foley video, on the other hand, will only serve to embolden ISIS.

True as this dichotomy may be, however, it may be difficult for Twitter to decide which images serve a positive role and which merit censorship.

The Foley video also raises questions surrounding the argument made here and elsewhere in the wake of the Michael Brown shooting in Ferguson -- that Twitter is a far better place for breaking news than Facebook. The word used by most writers to describe Twitter during breaking news is "raw," both in terms of supplying a raw feed that is unadulterated by algorithms like Facebook's, and in terms of bringing firsthand accounts, videos, and photos from people on the ground. (This also means it regularly circulates unconfirmed rumors).

But yesterday, as images and video of James Foley's beheading flooded the social network, Twitter became far too "raw" for most users. And understandably so. To paraphrase BuzzFeed's Charlie Warzel, for the first time I welcomed the ice bucket videos and listicles of Facebook, at least over this.

I'm not sure what the answer for Twitter is. As Google has discovered in its attempts to enforce Europe's "right to be forgotten" law, policing content on the Internet, even with the best of intentions, is a damned complicated business. Whatever policy Twitter adopts, it must enforce its rules consistently, suspending every account that violates them rather than letting some slide, regardless of whether they belong to powerful and popular media organizations.

In any case, let's not place the burden on Twitter alone -- it's also our responsibility as users to gauge what is appropriate to share on public feeds. I'm not saying someone doesn't have a right to seek out these images. But when it comes to a network like Twitter, where images and videos appear in-stream without warning or prelude, let's all think before we post.