Proving that writing a book does not make you right... Anton has a new blog post up (cross-posted, in fact) titled "On Scope Shrinkage in PCI DSS" - a sad little piece based on a lot of bad assumptions, and rooted in his blatant fanboyism for the standard that many have come to loathe. In my typical fashion, here are some quotes and my thoughts on them...
"People who came to PCI DSS assessments and related services... from doing pure information security often view PCI scope reduction as 'a cheap trick' aimed at making PCI DSS compliance undeservedly easier. They only think of scope reduction as of limiting the area where PCI DSS security controls apply - with negligence, supposedly, reigning supreme outside of that sacred area."
First, what is "pure information security"? Can anybody really claim this? I don't know anybody who started in infosec and had no background in another area, like network administration, systems administration, or software development. I mean, seriously... this is meaningless...
Second, yes, of course the whole point of PCI scope reduction is to limit where the PCI controls apply - duh?! And, as we've seen, when smart hackers attack, they go after the non-PCI areas first, and then escalate access from there. Why bother attacking a hardened environment if you can simply pop an admin workstation and then enter through the admin back door? In my opinion, one of the greatest failings of PCI DSS is allowing the majority of an IT environment to be de-scoped simply by putting up a firewall. It's like putting an ATM in a remote, open field, and then saying "well, if we just put a fence around it, then you need only worry about securing the ATM" as if that would make it any harder to abscond with the unit.
"PCI on-site assessment is not an audit, stop calling it that!"
Let's see. There's a checklist. And there's a person who comes onsite to check items on that checklist. Sure sounds like an "audit" to me. Just because PCI calls them "assessors" does not mean it's not an audit, but nice try at playing semantic games. It certainly is not a security assessment being performed. And, btw, the comparison to SOX audits is ironic considering most major SOX audits end up performing almost the exact same tasks as a PCI audit. Sure, the requirements for SOX are ill-defined, but that doesn't stop auditors from making arbitrary comparisons and assertions, typically based on COBIT or similar cruft...
Incidentally, if you look at #5 on his related annoyance list "Don’t call QSA (Qualified Security Assessor) 'an auditor.' That 'A' does NOT stand for 'auditor' and PCI on-site assessment is not the same as, say, SOX audit." This is a load of BS. I don't care what you call the person performing the job, the job being performed is an audit, which is defined as "an official examination and verification of accounts and records." The customer isn't being rated on a spectrum, they're being told on a binary scale whether or not they are compliant. It makes a difference, since audits don't promote growth and evolution, whereas assessments do/should.
"...tokenization, data vaults, virtual terminals, hashing, network segmentation, transient PAN storage all reduce scope and reduce risk – at the same time. These are the things that make PCI compliance easier WHILE reducing the risk of damaging compromise."
Ah, but do all these things really reduce the risk? It seems that, really, the only thing that effectively and assuredly reduces risk is getting data out of the environment altogether. We've seen network segmentation fail. Data vaults and tokenization aren't completely reliable and in some cases are jinxed by shoddy solutions architecture. And so on... more importantly, I still think we're going to see a class break in payment systems that show tokenization to be a mild annoyance - a limiting factor - not an effective security measure.
The simple truth is that many efforts to reduce PCI scope are merely parlor tricks and do not effectively increase security OR reduce risk. They're sleight of hand designed to give the illusion of either security as an emergent property or reduced exposure of data, when in reality shoddy practices, short-sightedness, or lousy implementations rule the day. Remember, in recent major breaches (e.g., Heartland), each merchant believed that they had met the audit requirements of PCI DSS, only to find out that these requirements weren't ironclad. Network segmentation is a perfect example where organizations greatly reduced the scope of PCI compliance, yet didn't do anything to improve the security of their overall environment, nor did they reduce the amount of sensitive data in their environment.
Hi Ben,
First time poster, first time reader. I was reading through your post and on many points we agree. I too have a beef with "audit" vs. "assessment" -- who cares?! To the average merchant they are just different words meaning the same thing. In fact, many merchants have a different word to represent the "A" in QSA.
Now I'm with you up until "...It seems that, really, the only thing that effectively and assuredly reduces risk is getting data out of the environment altogether... Data vaults and tokenization aren't completely reliable and in some cases are jinxed by shoddy solutions architecture. And so on... more importantly, I still think we're going to see a class break in payment systems that show tokenization to be a mild annoyance - a limiting factor - not an effective security measure."
This seems like a very confused statement. You can reduce risk by getting the sensitive data out of the environment, but because some tokenization vendors have shoddy solutions, tokenization will be nothing more than a speed bump for hackers? That is a very broad brush. What about a tokenization solution that is not based on a shoddy architecture and securely sends the data off site? Not all tokenization solutions are made equal. The weakest point of most tokenization solutions (at least offsite solutions) is what PCI refers to as the de-tokenization layer. But not all tokenization solutions have this layer available. If the tokens are not mathematically generated based on the card number (which they should not be, based on the original definition of "tokenization"), and if the de-tokenization layer is not available (as in a third-party payment gateway where card numbers enter, but only tokens are ever returned), please explain the weakness that these hackers will exploit.
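To make the distinction concrete, here's a rough sketch of what I mean by vault-style tokenization. This is purely illustrative on my part - the class and method names are hypothetical, not our product or any particular vendor's API:

```python
import secrets


class TokenVault:
    """Illustrative vault-style tokenization; not any vendor's real API."""

    def __init__(self):
        # In a real offsite solution this mapping lives only at the gateway,
        # in a hardened datastore, never on the merchant's systems.
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # The token is random, NOT mathematically derived from the PAN, so
        # possessing the token tells an attacker nothing about the card number.
        token = secrets.token_urlsafe(16)
        self._token_to_pan[token] = pan
        return token

    # A gateway that never exposes de-tokenization to the merchant (card
    # numbers go in, only tokens ever come back) simply would not offer
    # this call outside its own boundary.
    def _detokenize(self, token: str) -> str:
        return self._token_to_pan[token]


vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)  # opaque value; useless without access to the vault itself
```

The point being: if the merchant only ever holds that opaque value, and the vault never hands the PAN back out, where exactly is the exploitable weakness on the merchant's side?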
Hi Steve -
Thanks for your comment! My main concern with most tokenization solutions is that the merchant typically retains their existing, and often broken, payment system. Hopefully I'll be proven wrong, but I will not be surprised to start hearing about a class break of these systems that results in fraudulent charges/credits made through the systems, regardless of the use of tokenization.
Obviously compromising the tokens isn't a major concern, unless the tokenization solution isn't good. Hopefully gateways are issuing unique tokens that are bound to specific merchants and cannot be used from non-merchant systems. Controls like this would help reduce the scope of a breach. SaaS solutions like yours are potentially interesting, and I'd hope that these types of services employ a high degree of compartmentalization between customers, systems, data, etc. We should always assume that a breach will happen and then proceed from there in terms of limiting damage, ensuring continued operations despite degraded conditions, and optimizing recovery.
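To illustrate what I mean by binding tokens to a merchant, here's a quick hypothetical sketch (my own illustration of the idea, not any real gateway's API):

```python
import secrets


class Gateway:
    """Hypothetical gateway that issues merchant-bound tokens."""

    def __init__(self):
        self._tokens = {}  # token -> (merchant_id, pan)

    def tokenize(self, merchant_id: str, pan: str) -> str:
        # Token is random and recorded against the merchant it was issued to.
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (merchant_id, pan)
        return token

    def charge(self, merchant_id: str, token: str, amount_cents: int) -> bool:
        record = self._tokens.get(token)
        if record is None or record[0] != merchant_id:
            # Unknown token, or a token presented by a merchant it was not
            # issued to -- reject, which limits the blast radius of a leak.
            return False
        # ...forward record[1] (the PAN) to the processor for amount_cents...
        return True


gw = Gateway()
t = gw.tokenize("merchant-123", "4111111111111111")
print(gw.charge("merchant-123", t, 1999))  # True: token used by its owner
print(gw.charge("merchant-456", t, 1999))  # False: stolen token is useless elsewhere
```

Controls along these lines are what would keep a pile of stolen tokens from being replayed from some other system.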
Lastly, I recommend reading my post about Akamai's EdgeTokenization solution as it probably does a better job describing some of my concerns. You can read it here:
http://www.secureconsulting.net/2010/09/thoughts_on_akamais_edgetokeni.html
Thank you,
-ben
Thanks for the reference and I will give it a read. I think what you're describing is an onsite tokenization solution, and yes, this just moves the risk around; I don't see any scope differences for the merchant overall -- the merchant and nearly all the systems are in full PCI scope. The merchant may be able to shore up some weak POS systems with a solution like this if properly implemented, but most merchants will get more bang for the buck with an offsite tokenization solution (if de-tokenization can be avoided).
>really, the only thing that effectively and assuredly reduces risk is getting data out of the environment altogether.
Dude, this seems like one of the key points and it just makes no sense. Why does REMOVING the data reduce the risk, but REDUCING THE AMOUNT of data or REDUCING THE SPREAD of data doesn't?
@Steve -
I think tokenization in the cloud is equally problematic when the legacy payment platform is still maintained. I very much believe that we'll see attacks on these payment platforms that, while they won't directly expose credit card data, will be abused to make fraudulent charges or charge-backs. It's all part of the enablement culture: we (infosec people, services, vendors) keep taking responsibilities away from merchants, so they think they're safe and do nothing at all, which makes them easier to compromise. Then we either have to take on even more responsibilities (like fraud monitoring and detection), or we have to give it all back to them and make them responsible. Tokenization solutions only go halfway, and thus imo are kludged. Why stop at tokenization? Why not take away the payment platform altogether?
-ben
@Anton -
I would argue that "reducing the amount of data" is the same as "removing data from the environment." In terms of "reducing the spread" - I disagree. As long as the data is in the environment, there's inherent risk involved. I've seen enough lousy implementations of centralized data warehouses full of this data that I'm not convinced at all that centralizing reducing the risk all that much. It makes for a really big target, and typically one that tombstones records rather than expunging them. *shrug* I still think that we shouldn't be wasting our time with this tokenization kludge and should be skipping straight to "Level 3 and 4 merchants are prohibited from storing cardholder data in their environments." There are already viable alternatives, and I think those should be favored strongly over tokenization and similar kludges.
-ben
RE: "Why not take away the payment platform altogether?" and "...I still think that we shouldn't be wasting our time with this tokenization kludge and should be skipping straight to 'Level 3 and 4 merchants are prohibited from storing cardholder data in their environments.'"
So I take it you're promoting stand-beside CC environments for level 3 & 4 merchants. I would argue that these stand-beside solutions come with all the same risks and vulnerabilities as the integrated solutions you are replacing. These stand-beside solutions still rely on CPUs, communication ports, and connectivity of some sort, usually either phone or IP over the Internet, and these devices can still be hacked (again, I would argue much more easily than a properly configured "kludged" solution, as you call it). If you're talking phone, most dial-up terminals (all that I have hands-on experience with) do not encrypt the traffic on the phone line, so to me this is a bigger hole than the one you are trying to solve. Storing a token vs. not storing any info are virtually the same as far as risk profile goes (ignoring the fact that you limit the merchant's ability to fight a charge-back). Whether tokens are used or not, the weakest point is the transfer of the CC info to whatever system is tokenizing or processing the data.
Lastly, you mentioned the possibility of fraudulent charges and charge-backs using a tokenization or kludged solution -- this is a fluff statement because this risk exists in all payment systems (token, non-token, and stand-beside). Any halfway decent payment system must address this risk via reporting and/or risk scoring during the authorization process.
@Steve -
No idea what a "stand-beside solution" is, sorry. What I'm saying is that smaller merchants should not be hosting their payment application, nor should they be hosting any of the sensitive data (cardholder info, customer info, etc.). Is that what you're talking about? The complaint from merchants is that they can't afford to fix their stuff. A service provider, on the other hand, would have to build these security requirements into their business model, and thus would by definition be able to afford the costs.
As for your last quip on fraudulent charges and charge-backs, this is exactly my point! There is no difference, token or no token. So, what improvement has been made here? Tokenization protects the card brands, not the merchant. It doesn't improve security; it merely reduces the likely loss magnitude side of the risk equation. It does not eliminate fraudulent transactions, but merely limits the scope of the fraud from a single breach.
cheers,
-ben
"Stand-beside" as in a little dedicated mom-integrated credit card terminal you see at usually smaller merchant establishments from time to time. Sorry, I thought you were promoting these devices. I fully agree that in most situations (possibly all situations), merchants, no matter what size (levels 1-4), should not be storing credit card information.
@Steve -
I'm no vendor, have no horse in this race. I try to look at these things from a more strategic perspective, perhaps with a skew toward favoring the merchants. Where I disagree with something like PCI is that it takes a problem that is directly attributable to decisions that the card brands have made and assigns responsibility for mitigation to the merchants, who are really just the customer here. It's a modern oddity that the service providers think the customers should have to pay to mitigate the providers' own weaknesses and failings. IANAL, but at some point I have to think this turns into negligence or breach of contract or what-have-you. Anyway.
cheers,
-ben