Does "Authorization" Matter?

Context is everything. The headline question is, of course, a troll. Authorization definitely matters, especially within the context of the Computer Fraud and Abuse Act (CFAA), which is the trigger for this post. A fusillade of questions around authorization cropped up last week, thanks in large part to a blog post by @ErrataRob in which he states that the CFAA is dangerously vague and indeterminate on this question of authorization. In some ways he was right, but in others the post was just misleading... to make matters worse, the coverage throughout the tech industry has been a touch fatalistic, trending toward the uninformed and absurd... so, here goes my contribution! (read that as you will ;)

Irony

The first thing worth pointing out is that Rob's opening statement (and the opening, humorous statements by J4vv4d in his vlog derived from Rob's post) was wrong. Rob opens his post stating, "Are you reading this blog? If so, you are committing a crime under 18 USC 1030(a) (better known as the "Computer Fraud & Abuse Act" or "CFAA"). That's because I did not explicitly authorize you to access this site, but you accessed it anyway." It turns out that this claim is absolutely, positively wrong; and demonstrably so, at that.

You see, the ErrataSec blog is hosted on Blogger.com, which is a Google property. Signing up for a Blogger account means agreeing to the Terms of Service (a contract). Moreover, for the site to be accessible at all, you must explicitly configure it to be publicly available. Therefore, Rob has, in fact, explicitly authorized "the public" to view the site. He has also explicitly authorized Google to publish his content in their services, without limitation. Specifically, Google's Terms of Service (TOS) say:

"When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services (for example, for a business listing you have added to Google Maps). Some Services may offer you ways to access and remove content that has been provided to that Service. Also, in some of our Services, there are terms or settings that narrow the scope of our use of the content submitted in those Services. Make sure you have the necessary rights to grant us this license for any content that you submit to our Services."

So, you see... his opening statement is patently false (as several commenters, including myself, have pointed out). While this certainly makes it tempting to dismiss his further arguments outright, we'll proceed to them anyway, lest we fall victim to a logical fallacy (the "fallacy fallacy").

CFAA, Authorization, Damage, Implied Consent

The ax Rob seeks to grind is the dated and much-maligned Computer Fraud and Abuse Act (CFAA; full modern text available here and here). The nit he picks is with how one knows whether or not one is authorized. However, this is a bit of a ruse, because the CFAA does not look exclusively at "authorized access," but also at things like whether the system was "protected," "public," or government-owned, and whether damage was caused. Depending on your reading, you may also see the scope as somewhat constrained to financial services and national defense considerations (though this isn't necessarily the case).

Rob is definitely right that "authorization" is not clearly and unequivocally defined in the statute. However, we're fortunate that this is not the only consideration. In fact, this authorization thinking usually leans toward the concept of "implied consent" and whether or not a site is considered "public." Most web sites, for example, are by their nature "public" unless some form of access control has been implemented in order to delineate between what is and is not authorized access.
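To make that delineation concrete, here's a minimal sketch (the URLs are hypothetical, and this is my illustration, not anything from Rob's post) of how a web server itself signals the line between "public" and "protected" content. An HTTP 200 means the server willingly served the page to an anonymous visitor; a 401 or 403 means some access control mechanism was put in place:

```python
# Minimal sketch, Python 3 stdlib only. All URLs are hypothetical.
import urllib.error
import urllib.request

def check_access(url: str) -> str:
    """Report whether a URL appears publicly served or access-controlled."""
    try:
        with urllib.request.urlopen(url) as resp:
            # The server answered an anonymous request: effectively "public."
            return f"{url}: HTTP {resp.status} (served publicly)"
    except urllib.error.HTTPError as e:
        if e.code in (401, 403):
            # The server demanded credentials or refused outright: a clear
            # technical marker of "you are not authorized."
            return f"{url}: HTTP {e.code} (access control in place)"
        return f"{url}: HTTP {e.code}"

print(check_access("https://example.com/"))        # hypothetical public page
print(check_access("https://example.com/admin/"))  # hypothetical protected page
```

The point isn't that HTTP status codes settle the legal question; it's that the presence of an access control mechanism is the clearest technical signal that the operator did not intend the content to be "public."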

Implied consent could be a bit of a slippery slope, and it is complicated by the classic "I know it when I see it" argument. For the most part, however, a "reasonable man" argument can be used to parse and vet most considerations around authorization and implied consent. Where things get murky is when independent security researchers (like Rob) start working in the gray areas between authorized and unauthorized access.

This gray area is really, I believe, what drove Rob's post... triggered in large part by the recent conviction of former Goatse Security personality Andrew "weev" Auernheimer. I'll come back to Auernheimer in a couple of minutes, because I think his case is far more clear-cut than Rob might have us believe in his post. First, though, let's look at a couple of legal carve-outs provided for security researchers.

So, You Wanna Hack^H^H^H^HResearch Legally, Eh?

There are two main considerations here for security researchers:
1) The CFAA and similar regulations on authorized levels of access and damages
2) DMCA anti-circumvention exceptions

In the latter case, the DMCA has actually been one of the more controversial regulations, with its seemingly random, inconsistent, and nearly incomprehensible process for getting research exceptions allowed, and then maintained, by the U.S. Copyright Office. As noted in the associated Wikipedia article, the DMCA has had a chilling effect on some security research, particularly around DRM and cryptanalysis. This is a cat-and-mouse game with no end in sight... but I digress...

With respect to independent security researchers looking for holes in public web sites, there has been, as Rob has pointed out, vague and inconsistent enforcement. If you're conducting research, then one key thing to keep in mind at all times is responsible disclosure practice. As I'll discuss in the next section, failing to follow reasonable practices around validation and disclosure is what seems most likely to come back and bite you (hard). Even then, one must be exceedingly cautious, especially when dealing with large multi-nationals that have deep pockets and a willingness to burn legal fees pursuing (er, crushing) security researchers.

What I find fascinating is how stupid and fatalistic these large corporations are. If a researcher acts in good faith, follows a reasonable approach that doesn't cause harm, and follows responsible disclosure practices, then there should not only be no adverse reaction, but there should be protection under the law. After all, it's beneficial to companies to have bugs responsibly reported to them. Today we are seeing several major corporations launch bug bounty programs, following the lead of Google and Microsoft. Hopefully this trend will continue. Ironically, it's worth noting that AT&T, the aggrieved party in the Auernheimer case, now has a bug bounty program, even after the book was thrown at Auernheimer on its behalf.

Oh, Auernheimer...

The interesting thing about Rob's original post was that he seemed to have been motivated by outrage over the Auernheimer conviction. However, looking at the scant details available about the case, it seems like this is a pretty clear case of exceeding authorized access and causing harm. According to the media reports, Auernheimer was convicted of one count of identity fraud and one count of "conspiracy to access a computer without authorization." He remains free on bail pending the outcome of an appeal.

This Wired article provides some background. Basically, Auernheimer and his associate tumbled onto an enumeration vulnerability on the AT&T web site wherein a person could enter an arbitrary iPad ICC-ID and find out the email address associated with that device. They identified the vulnerability, and they validated it through testing.

It's at this point that I suspect the CFAA conspiracy charges come into play. If they had contacted AT&T directly, disclosed the vulnerability, and then waited for a response, then I suspect this entire case would have been avoided. Unfortunately, that is not how they proceeded. Instead, they did two things that made these researchers look unreasonable and irresponsible. First, they wrote a script to automate the enumeration attack and ran it against the AT&T web site to collect a bunch of ICC-ID/email pairings (a rough sketch of what such a script might look like follows below). Second, they contacted media site Gawker and disclosed their findings. It wasn't until after these two things had been done that they also contacted AT&T. Hell hath no fury like an embarrassed multi-national caught flat-footed by wily "hackers." Plus, we're talking about Ma Bell here... a monolithic artifact from the age of true telecom bureaucracy... one would think that proceeding with caution might be warranted...
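For illustration only, here's a rough sketch of what an enumeration script of this sort might look like. The endpoint, parameter name, and response format are all hypothetical (I am not reproducing the actual AT&T URL), and pointing anything like this at a site you don't own or have permission to test is precisely the kind of conduct at issue in this case:

```python
# Hypothetical sketch of an ID-enumeration script. Do NOT run anything
# like this against systems you are not authorized to test.
import urllib.error
import urllib.request

# Entirely made-up endpoint and parameter, for illustration only.
LOOKUP_URL = "https://example.com/device-lookup?icc_id={}"

def enumerate_ids(start, count):
    """Walk a range of sequential IDs, collecting any that return data."""
    hits = []
    for icc_id in range(start, start + count):
        try:
            with urllib.request.urlopen(LOOKUP_URL.format(icc_id)) as resp:
                body = resp.read().decode("utf-8", errors="replace").strip()
                if "@" in body:  # crude check: response looks like an email
                    hits.append((icc_id, body))
        except urllib.error.HTTPError:
            continue  # no record for this ID; move on to the next one
    return hits
```

Note the qualitative difference: a loop like this mechanically harvests data at a scale no manual browsing would reach, which is exactly where prosecutors and juries tend to stop seeing "curiosity" and start seeing "attack."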

In his piece, Rob does pose an interesting question, reductionist though it is: if you're simply modifying a single numeric value (parameter) in a URL, are you exceeding authorized access? Of course, as I say, this is reductionist thinking, since the case bears on far more than such a simple assertion. Sadly, I have to come back to the "I know it when I see it" argument... which is to say that, no, I don't think manually twiddling a couple of URL parms amounts to an unauthorized access that causes significant harm. However, when you move beyond manual twiddling to a scripted attack, and then pile on irresponsible disclosure, I can certainly see where the argument starts to turn against the researcher.
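By way of contrast, the "manual twiddling" in Rob's question amounts to something like the following single request (again, the URL is hypothetical):

```python
# One request, one changed parameter: the mechanics of "twiddling."
import urllib.error
import urllib.request

# Suppose the link you legitimately clicked was .../records?id=1001 and,
# out of curiosity, you change it to 1002. The URL is hypothetical, so
# expect an error here; the point is the mechanics, not the result.
try:
    with urllib.request.urlopen("https://example.com/records?id=1002") as resp:
        print(resp.status, resp.read()[:200])
except urllib.error.HTTPError as e:
    print("HTTP", e.code)
```

Mechanically, this one-off request and the enumeration loop above do the same thing; what a "reasonable man" will weigh is scale, intent, and what you did with the results.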

So, does this mean, as a security researcher, that you should never, ever twiddle URL parms unless you have explicit permission to do so? I would think not, so long as your twiddling is within reason and relatively harmless... and, most importantly, so long as your intent can be clearly demonstrated as non-malicious to the "reasonable man" (a reference to the jury, btw). Unfortunately, in the case of Auernheimer and his associate, their behavior was not fully above-board (as noted in this story, which talks about them trolling AT&T).

In the End...

Many of the news stories and blog pieces, especially from the security community, have been just plain wrong and needlessly fatalistic about the outcome of the Auernheimer case. The fact of the matter is that the CFAA is just one of many statutes that could potentially be brought to bear in "hacking" cases. Really, attacking the CFAA is a red herring that distracts from the core problem: that there will always be tension between security researchers working in gray areas and law enforcement trying to "serve and protect."

Allow me to leave you with 5 closing thoughts:

1) The CFAA has been updated a half-dozen times since 1986. Check out its Wikipedia entry for a bit more info, or read the entire current code at the links above.

2) Act professionally and responsibly. Responsible disclosure is paramount. Openly taunting the company you're "researching" is probably bad form. If you want to be taken seriously, then act professionally.

3) Don't be dumb. Automating an attack, and then executing it, is not being smart. Nor is it smart to go to the press first. Assume you will be subject to intense, negative scrutiny. Document everything and, above all else, see #2.

4) Act reasonably. It is not so fine a line between "proof of concept" and "outright attack." Don't assume that the rest of the world buys into an "everything should be open and free" mentality. More importantly, consider the opposing party's perspective. Nobody likes to be publicly embarrassed. Be reasonable and use good judgment (see #2).

5) CYA. Document, document, document. Make sure everything you're doing is legally defensible. If there is any question at all about whether what you're doing may exceed authorized access, or may trigger legal action, then stop and get guidance. Talk to a lawyer (the EFF understands these things). Get the feds involved (I know, this sounds scary, but use a shield/lawyer; this could mean US-CERT or the FBI or DHS or some other entity/agency). There are known, reputable "former hacker" personas IN the fed space now. Reach out for advice, or find someone who can do it for you. If you don't look out for yourself, then you're going to get burned.

And... that's about it. I don't envy security researchers and the tightrope they walk. That said, I think there are some basic practices they can adopt to help limit and proactively manage the risks they're carrying. At the end of the day, this comes down to making good business decisions, particularly regarding legal risk management. It's 2012... there's no excuse for being uninformed, going without help, or getting caught unaware. Let's hope people learn from this and back away from their fatalistic interpretations of what seems like a pretty clear-cut case of not being smart.

Oh, and, incidentally... you are authorized to read (directly or via RSS) and comment on this article. ;)
