Oct 8, 2014 · 2 minutes

The security researcher who discovered that Yahoo's servers had been compromised by the Shellshock bug received a visit from the Federal Bureau of Investigation on Tuesday: the agency wanted to discuss how he reached his conclusions, according to Wired, which reports that the researcher fears he might be arrested for looking into the problem.
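For readers unfamiliar with the bug, Shellshock let attackers smuggle commands into Bash through specially crafted environment variables. A widely published one-liner (a harmless self-check, not an exploit) shows the shape of the problem on a local machine:

```shell
# Widely circulated Shellshock (CVE-2014-6271) self-check.
# A patched bash prints only "this is a test"; a vulnerable one
# also prints "vulnerable", because it executes the command that
# trails the crafted function definition while importing the
# environment variable.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```

Note this only tests the copy of Bash on your own machine; probing someone else's server with crafted requests is exactly the kind of activity the rest of this piece is about.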

Researching security vulnerabilities at large companies is fraught with risk. It's hard to learn about a problem that might let hackers access information without taking advantage of that same problem yourself, and United States law on accessing someone else's computer is so vague and harsh that such research can lead to almost ludicrous consequences.

Wired explains the government's zealous defense of public servers in its report, and why the researcher involved in this case, Jonathan Hall, might have stepped over the line in his search:

Enacted in 1986, the [Computer Fraud and Abuse Act] makes it illegal to access a computer without authorization, but security researchers and federal prosecutors often don’t agree on what that means. Several high-profile hacking cases have played out in this gray area. Andrew “Weev” Auernheimer and Daniel Spitler were charged after writing a script that accessed information on a publicly available AT&T website, Aaron Swartz for downloading a cache of articles that he was permitted to access.

In Hall’s case, he went a little farther. He says he gained access to a server belonging to compression software maker WinZip and issued a command on the machine that displayed the contents of a malicious file on his own monitor. After that, he ran a “kill” command on WinZip’s server that terminated the malicious program. There's an obvious quandary here: should researchers merely look for problems and leave their removal to whichever company manages the affected servers, or should they try to protect people while they're digging around in someone else's machines? The answer, which likely depends on how much weight you give to intent when someone acts on a device they don't own, could determine what comes next for Hall.

But there's no such quandary about the idea that researchers who find security problems shouldn't be punished merely for finding them. There needs to be a line between "attacking" a device and researching a problem, not least because researchers publicly revealing problems at large companies is pretty much the only thing that makes those companies fix a vulnerability instead of letting it stay in place.

Hall likened his action to saving a child from an overheating car. Now, if the car door is locked, there's a case to be made for bearing some legal consequence when you force your way into it. Merely looking through the window and then telling other people that there's a kid in the car, on the other hand, shouldn't be punished. Let's hope that the government won't conflate the two and scare off other researchers who merely observe problems instead of trying to fix them.

[photo by Tambako]