May 2009 Archives

Knowing One's Strengths


Based on Anton's recommendation, I picked up a copy of Strengths Finder 2.0 by Tom Rath. That Anton found it interesting and useful told me that I would probably also appreciate it. My assumption has been proved correct.

Knowing one's strengths and weaknesses is very important, whether it be in competition or personal life or professional life. For me, I know that one of my weaknesses is tending to be somewhat negative, cynical, and sarcastic. This trait, when combined with my tendency toward incessant questioning, can be terribly off-putting. It also, however, can make it difficult for me to see my own strengths. As a good friend of mine has pointed out on numerous occasions, life is generally pretty good, if only I'd look at it that way.

I first started running hurdles in Jr. High (Gr. 7-8). I wasn't all that good, but held my own, overall. I ran in running shoes, not really thinking much about it until I got into High School (Gr. 9-12), where it became clear that better equipment was needed. Still, having sprinting spikes or light middle distance spikes didn't solve all problems. It wasn't until a couple months into my first season in HS that the distance coach pulled me aside and told me about running technique. It was at that point that I slowly learned that there were different ways to run depending on the type of running you are doing. Sprinters don't run like marathoners do, rolling from heel to toe. Instead, sprinters - and, by extension, hurdlers - run up on the balls of their feet the whole time, getting a spring-like action that greatly improves turnover.

So, I learned technique with sprinting, and my game improved dramatically - well enough to make it to the State Championships my senior year, which was in and of itself an accomplishment. But technique wasn't all there was to it. I still wasn't all that great of a hurdler, especially when I saw the field of really good sprinters ahead of me. Thus it began to occur to me that, no matter how good my technique might be, there's still something to be said for talent, regardless of whether that be in athletics or academics or on the streets or in the fields. That being said, it is the combination of talent and technique that really accelerates people past the average curve.

In many ways the infosec industry faces a similar quandary. Historically, we've relied on a number of very smart people, rich in talent, but not necessarily steeped in technique. And, to be quite honest, technique hasn't been as important over the years. You had a firewall, you had AV, and maybe you had SSL, and there were limited ways to use them, so you stuck them in place and voilà! you were secure. Or not. But you get the point.

A Realistic Case for Regulations

Mark Curphey had an interesting post last week titled "The Future : Regulation is Futile – Market Forces Will Prevail" with which I take a bit of an issue. In particular, I question his premise that market forces are able to prevail in this day and age. I would counter that there are no real market forces any more, at least not in the US. Just the opposite: corporate interests have so pervaded life and politics that there is no objectivity and no free market. The US has entered into a prolonged period of protectionism (look at the post-9/11 landscape... look at the current treatment of borders...). Protectionism reduces the number of true level-set competitors in a field, effectively creating monopolies or oligopolies (*ahem* Big 3 anybody?), with the net effect being a reduction in competition, options, and, overall, quality. From the perspective of security and civil liberties, protectionism and an overt focus on corporate interests are a disservice to consumers.

Interestingly, the only place where Mark really tackles the notion of regulations and their role is in this bit:

Super Crunching – Regulations will not work. “You can’t regulate the problem away”. Market forces drive economic change and when the cost of security becomes something everyone considers, people will act on Fact and not FUD. In order to get to a place where people can make informed decisions; you know like “what’s the real likelihood that this XSS will actually get exploited or show up in the media” or “How many security bugs per KLOC is an acceptable ratio” we need to be able to perform detailed analytics. This means data warehousing and mathematical analysis. The reason an insurance actuary can provide a price for me to drive a Ferrari is that there is empirical data to show that a rich middle aged man who goes out and buys a Red Ferrari is more likely to wrap it around a pole (showing off to his blonde bimbo mistress) within a few months than a middle income guy who chooses to drive an Aston Martin DB5 and just loves cars. Market forces (insurance) will drive change. Market forces require empirical data to provide a framework in which to trade.

This quote is a bit of fantasy, for several reasons.

FYI: Feeds, CMS

Just an fyi, my feeds are obviously still broken. I have no idea why (they seemed to work for about a second). Toward that end, I'm looking at migrating to a different CMS. I'll post copious updates here as that proceeds, but thought I'd provide a heads-up now. Thanks!

More on Tokenization


In a previous post, "Does Tokenization Solve Anything?", I questioned what the value was if the cardholder data still passed through your site. I've had the opportunity this week to look at three of these solutions, and have been pleasantly surprised to find that I had somewhat underestimated the solutions out there (though vendor site literature doesn't necessarily get this point across well).

Either directly or indirectly, I've researched the following companies:
- Enterprise Payment Exchange (http://www.epx.com/)
- Paymetric (http://www.paymetric.com/)
- Braintree Payment Solutions (http://www.braintreepaymentsolutions.com/)

Each of these solutions has a different spin on things. As such, let me provide some thoughts on each, and then wrap it up at the end with broader comments.

Enterprise Payment Exchange (http://www.epx.com/)

Enterprise Payment Exchange, or EPX, has what I consider to be a fairly basic tokenization solution. They generally rely on a gateway on the customer premises that generates their "BRIC" (token). Rather than taking credit cards directly through your web site, you redirect to a page hosted on the gateway that is made to look like your site. You then get a redirect back from their box with URL parameters that include the token so that you can complete the transaction accordingly. On the back end, their gateway ties into their SaaS to securely store the cardholder data off-site. This raises questions for me in terms of the security of their gateway. However, it appears that this system has received PA-DSS certification (look under Electronic Payment Exchange for their BuyerWall 2.0), which should provide some degree of assurance.
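To make that flow concrete, here's a minimal sketch (Flask-style) of the hosted-page redirect pattern: send the shopper to the gateway's payment page, then accept a redirect back carrying the token. The URLs, parameter names, and the save_token_for_order helper are all illustrative assumptions on my part, not EPX's actual API:

```
# A hedged sketch of the hosted-payment-page redirect pattern described
# above. Endpoint URLs and parameter names are hypothetical, not EPX's.
from flask import Flask, redirect, request

app = Flask(__name__)
GATEWAY_PAY_URL = "https://gateway.example.com/pay"  # hypothetical gateway page

def save_token_for_order(order_id: int, token: str) -> None:
    """Persist the token in the billing system in place of the CCN."""
    pass  # stubbed for the sketch

@app.route("/checkout")
def checkout():
    # The shopper enters card data on the gateway's look-alike page,
    # so the CCN never touches this application.
    return redirect(f"{GATEWAY_PAY_URL}?merchant=shop-123&order=987")

@app.route("/payment-return")
def payment_return():
    # The gateway redirects back with the token ("BRIC") in the URL.
    token = request.args["token"]
    if request.args.get("status") == "approved":
        save_token_for_order(order_id=987, token=token)
        return "Thanks for your order!"
    return "Payment failed, please try again."
```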

Paymetric (http://www.paymetric.com/)

Paymetric appears to be relatively comparable to EPX, with a couple key distinctions. Where EPX uses a redirect, Paymetric instead provides Javascript code to embed into your site. I don't know, however, that this is a good thing, as many end-users disable Javascript. As a back-up, they appear to also support a redirect to a landing page. Beyond that, the primary differentiator is that they are the only vendor that integrates fully with SAP Business One. Unfortunately, Biz1 is apparently not a very good billing platform/engine, especially for companies that have a recurring subscription model. Paymetric does not provide the billing engine, but rather supplements it with their tokenization technology in order to reduce the scope for PCI compliance. As such, you need to bring to the table your own billing engine, and then they'll integrate with and enhance what you have.

Braintree Payment Solutions (http://www.braintreepaymentsolutions.com/)

Braintree has an interesting solution that is different from the other two researched this week. They will generally integrate with existing billing platforms, such that one merely needs to set them up as a processor and modify site code to add their API calls (rather than a POST to your own servers, you instead POST to theirs) and to receive the token back. The token appears to be a standard hash, nothing too exciting. They also support up to 20 custom fields, opening the door to some creative uses. Overall, the solution appears to be more complete than either Paymetric or EPX, with some interesting value-added propositions. It definitely seems promising in terms of providing ample motivation for merchants to get cardholder data out of their environment altogether.
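For illustration, a later server-to-server charge against the stored token, including a couple of those custom fields, might look something like the sketch below. The endpoint and payload shape are my own assumptions, not Braintree's actual API:

```
# A speculative sketch of charging by token; the processor URL and JSON
# shape are illustrative assumptions, not any vendor's real interface.
import json
import urllib.request

PROCESSOR_URL = "https://processor.example.com/transactions"  # hypothetical

def charge(token: str, amount_cents: int, custom: dict) -> dict:
    payload = json.dumps({
        "token": token,           # stands in for the stored CCN
        "amount": amount_cents,
        "custom_fields": custom,  # e.g. order ID, plan name, renewal flag
    }).encode()
    req = urllib.request.Request(
        PROCESSOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example call: charge("a3f9...e1", 2999, {"order": "987", "plan": "monthly"})
```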

Conclusions

In the end, it turns out that the one key weak point identified previously is being addressed by these vendors. Nonetheless, it's not clear how much value straight tokenization solutions add. Yes, you get the cardholder data off of your systems, but with the caveat of making some interesting tradeoffs (having a gateway on-site, redirecting off your site, adding Javascript to your site, etc.). In at least two cases, it appears that you still need to maintain your own billing engine/platform, which means your costs will generally be higher. Some might argue that this isn't necessarily a bad thing. In the grand scheme of things, though, the ideal for smaller merchants (at least Level 3-4) seems to me to be outsourcing the entire billing platform, which lowers the burden of support and maintenance as well as dramatically reducing the compliance requirements. This is where a full-fledged SaaS model holds a lot of potential. It will be interesting to see what else may come in the future.

The New School of Privacy


It's 2009 and time for a new notion of privacy... the first decade of the century is quickly coming to a close... the advances in technology over the past 15 years (or more, really) have been astounding... in my lifetime computers have gone from being something only used in special businesses and academia, to being a novelty, to being mainstream, to being a fully integrated part of life. Along with this evolution in technology has come an evolution in the amount and types of data available on us. Some of that data is generated by 3rd party sources, but today much of it is also generated directly by us.

Bruce Schneier had a post up recently highlighting an essay by Marc Rotenberg over on HuffingtonPost.com (full article here). The essay is originally from November 9, 2007, and so may seem a bit dated if you read through it. Moreover, "security" in this context is more about "national security," but some of his points are quite apropos. I like this early quote from his piece:

"First, the privacy laws in the United States came about in response to new technologies. Far from accepting the view that innovation invariably erodes privacy, the United States has an excellent record for creating the legal rules that limit intrusive and unjustified invasions into private life."

The central thesis of his article is that too much privacy has been sacrificed in the name of "national security". I would argue that it's much worse than that: we are sacrificing privacy in deference to corporate interests and at the (in)discretion of our friends, colleagues, and acquaintances.

We had an excellent discussion on this very topic at the annual ABA InfoSec Committee meeting last month. One of the speakers talked about how privacy now means different things to different generations. Traditionally, privacy has been about preventing intrusions - about keeping what's behind closed doors to oneself. In this traditional view, as long as you did something in private (within reason), then it was nobody else's business. One could argue that the 4th Amendment is structured precisely around this right to privacy.

However, with the advent of social networking technologies (the BBS, public web forums, Instant Messaging, Wikis, MySpace, Facebook, Twitter, and so on), there has been a fundamental shift in how data is made available. Generationally, there is a corresponding shift in thinking about privacy of that data. No longer is privacy viewed as a war against intrusion. The data is there, plain to see, oftentimes more available than is likely sensible. In this new context, privacy becomes a matter of access control and authorized use. Platforms like Facebook and Twitter allow you to control (if you choose) who can see your data ("access control"). With this control of access then comes the notion of authorized use.

To give a concrete example, let's look at Facebook. In the privacy settings you can control who can see what you write, post, etc. One of the options is "friends only". Say you've chosen "friends only" for all of your content. You now post some sort of missive disparaging a co-worker, or your boss, or your employer. You know what I'm talking about - one of those "gosh my boss is a jackass" quips that you only intend for friends' ears. Now say that one of your friends is also a co-worker, and they for whatever reason make a comment to your boss about your quip. Perhaps it's an innocuous comment, like "hey, are you giving [friend] a hard time?", but no matter how you look at it, that data leaks out beyond your intended audience. In essence, the implied authorized use was violated.

Consider this, then, to be the new school of privacy: Privacy is the control and authorized use of personal information. Privacy is not about intrusion, but rather a social contract with those around you; a contract where what you say is intended to stay with those around you and not to wander any further without explicit approval. Pursuing and supporting this notion, of course, introduces a couple interesting challenges.

Culture Shift

First and foremost, there is a fundamental challenge in the current culture. While 20-somethings and younger may natively adhere to this new school of privacy, they are not generally supported by older generations. In essence, what we're talking about is ingraining a philosophy of discretion into everyone's core being. Even the 20-somethings have run into this problem brutally, getting punished at school or fired from jobs for their social network missives.

Unfortunately, as is true with most cultural issues, this one cannot be solved quickly. Just as older generations still have funny ideas about information technology, so will they be equally challenged to learn to trust that everyone around them will be discreet (ironic given the swingers of the Baby Boomer generation, but anyway). At its most fundamental level, our culture must shift away from the paparazzi mindset, where nobody has any privacy in public, to a mindset that your life is inherently private unless you explicitly authorize the contrary.

What is particularly interesting is that some cultures already have this kind of value, in varying forms. In India, for example, you do not just take pictures of people, even if they're out in public. In Germany, you own all of your data, and you can require companies to remove all traces of it (even from backups). In France, you'd be unsurprised to see a couple making out on a street-corner, while in Amsterdam you wouldn't be surprised by someone using a pissoir. In Ancient Rome, toilets were holes lined up next to each other with no dividers.

The point here, belabored in excruciating terms, is that cultural norms have changed over time, and it is time for another shift. In this age we need to revert to a healthier practice and perspective in which we all have our sphere of privacy, regardless of where we are or what we're doing. Just because we open our mouths, or put fingers to keyboard (30 years ago I would have said "pen to paper";), does not mean that we intend a public audience. Just because we buy a certain brand of shoe or a certain type of fruit does not mean that we intend for that information to be tracked.

Legal Support

Perhaps one of the greatest challenges today facing the new school of privacy is that of legal support - or the lack thereof. In this regard, we have a true generational problem. Our elected politicians have very little vested interest in protecting the privacy of individual citizens when compared to corporate interests, even when SCOTUS judges learn first-hand how easy it is to aggregate a full profile. We are still expected to trust government and corporations, despite having that trust betrayed on a repeated basis. Moreover, these politicians still live under the old school concept of preventing intrusion. They do not fully understand or appreciate that the data is now out there, everywhere; that our only viable way forward is to construct protections around the individual asserting control and authority over data about them.

Moreover, these protections need to extend beyond basic constructs. It is imperative that aggregate data developed by 3rd parties be part of the equation. If the shopping habits tracked by my supermarket loyalty card are so useful, then why has one of the local major supermarkets dropped it? Why has Wal-Mart never used them? Perhaps it's because there are better ways to accomplish the same goal without violating the privacy of individuals by collecting their data without need or true authorization.

This shift will require much heavy lifting. It's my expectation that it will, in fact, take at least a generation to move in this direction. However, it is an important and necessary change, and one that must come to be if we are to regain some control over our lives.

Final Thoughts

The new school of privacy is about shifting away from traditional values pegged against preventing intrusion to new values pegged against access control and authorization. To facilitate this shift, it will be necessary to improve the underlying legal framework, shifting the power back to the individual. At the same time, cultural values must shift toward a unified principle of discretion. It's time to get out of the paparazzi mindset and start focusing on what we put into our social networks, how we control that information, and how we express our authorization for its use.

Since we're coming up on the Summer holiday season, I thought it only appropriate that I provide you with some heartening news that can be used to bolster your business continuity, physical security, and personnel security plans. Without going into too much detail, let's sum this up as such: healthier, happier employees will be more productive and represent a lesser risk to your organization. Hopefully this short missive can help you make a case for improved policies in your organization.

The first argument for this point is one that's been covered a lot in recent weeks in light of the bacon fever scare, so I'll just give it a cursory mention. Lumping sick leave in with paid time off (PTO) has a detrimental effect on productivity by encouraging people to come to work regardless of their physical or mental state. Within reason, sick leave should not be limited unless it becomes problematic. A liberal sick leave policy will actually increase overall productivity, not only in the afflicted employee, but also in the employees around them who may be distracted or, worse, may themselves become sick. The situation of people working all the time regardless of health or stress or conditions is called "presenteeism" and is covered in excellent detail in "Presenteeism: sick and tired at work".

The second argument is that, by removing sick leave from the PTO pool, vacation time would be effectively increased. However, I would recommend taking it one step further and automatically increasing the minimum threshold for time off proactively. Air New Zealand launched an ambitious study in 2006 on the topic (Vacation Gap Study), finding that people are more productive, healthier, and happier after time off, which has a strong benefit to the company.

Obviously these arguments need to be balanced against staffing/resource limitations. From a security perspective, both go directly toward business continuity, physical security, and personnel security planning. Happy, healthy employees are less likely to bail out without warning, are less likely to "snap" and enact retribution on the company, and are far more likely to be cooperative. Furthermore, a healthy work environment that discourages sick employees from coming into the office strengthens the organization as a whole.

T2P, the Truth to Power Association, has released a new white paper (by yours truly) titled "PCI: Requirements to Action". You can download it from:
http://www.t2pa.com/component/remository/T2P-Community-Research-Resources/PCI-Requirements-to-Action/

(if you're adamantly against registering to get the paper, track me down and make a case... thank you!:)

I let myself get caught up in a pointless twitwar yesterday, during which I took much abuse from my opponent for basically disagreeing with the assertion that you can just walk into an organization and "know" what is and is not important without doing some degree of assessment. His later point - that you don't need to do a "full" assessment - is correct, but beside my point.

My point, quite simply, was this: dowsing (or "divining") is no way to assess or manage enterprise risk. Dowsing is the ancient mystical practice of using a dowsing rod to find water hidden underground. To this day, well water is a very important commodity. In olden dayes, technology did not exist for finding sources, and so divining came into practice. Using the divining rod (or dowsing rod), a skilled individual could walk around an area, feeling mild tremors through the rod. The skilled individual would then move around until these tremors were maximized and the rod pointed down to the source of water.

In many ways, risk assessment today is exactly like dowsing. We walk into organizations with some mystical methodology that assesses pseudo-risk, and then we act as if we've done something that is in fact truly legitimate and well-founded. The problem, of course, is one of repeatability. The INFOSEC Assurance Methodology (IAM) tries to specifically address this concern by setting up the System Criticality Matrix, but there are potential weaknesses in this approach. Similarly, FAIR leverages Bayes for providing reasonable modeling in the absence of real data. [6/4: correction - Bayes requires data, just provides a model based on knowledge-state instead of nature-state]
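For the curious, here's a toy example of what knowledge-state updating looks like - a simple conjugate Beta-binomial update on incident frequency. This is purely illustrative (and the numbers are invented), not FAIR's actual machinery:

```
# A toy Bayesian update: encode the prior knowledge-state about monthly
# incident probability as a Beta distribution, then update on observed
# data. All numbers here are invented for illustration.

# Beta(a, b) prior: roughly "we believe ~1 incident-month in 10"
a, b = 1.0, 9.0

# Observed evidence: 24 months of logs, 5 of which had an incident
incident_months, total_months = 5, 24

# Conjugate update: posterior is Beta(a + hits, b + misses)
a_post = a + incident_months
b_post = b + (total_months - incident_months)

posterior_mean = a_post / (a_post + b_post)
print(f"posterior mean monthly incident probability: {posterior_mean:.3f}")
# -> 0.176, pulled up from the 0.100 prior by the observed data
```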

Both IAM and FAIR are challenged, however, and are at best "science" in the manner of the "social sciences" (or so-called "soft" sciences). The problem, quite simply, is that there is no reliable way (today, anyway) to quantify a qualitative value. As such, we're stuck with gut instinct in assessing risk ratings, challenged in trying to come up with a consistent, reliable, and accurate method. If the method cannot withstand rigor, then it's not particularly sound or scientific.

This problem is one that is being actively researched. Notable figures like Alex Hutton (formerly of RMI and currently of Verizon Business) talk about this frequently: that enterprise risk management is a broken field that lacks scientific rigor. In my mind, this is spot on, and fully analogous to the state of the security industry. Gunnar Peterson, I think, captures this perfectly in his comment that "Its too bad but assumptions of yesteryear lead to building things on shaky foundations." His notable chart tells the story.

Similar to the lack of innovation and growth in infosec, where the world still revolves around firewalls and SSL, so does risk management revolve around pseudo-quantitative risk assessment that is based on qualitative assessments of varying degrees of reliability that are then converted to numbers, or otherwise averaged out. Dowsing risk in the enterprise is no way to live, and a good way to get completely off-track. Let's hope the future reveals a better way to exist.

A very common chant in the secure coding (or application security, or just plain infosec) community is "bake security in from the beginning." From the perspective of writing new code, building new apps and systems, and so on, this makes perfect, wonderful, logical sense. And, for that matter, while it may on the surface look like fixing vulnerabilities after the fact is cheaper, those calculations often leave out the cost of resolving a compromise (see "Mythbusting, Secure code is less expensive to develop"). But, in chatting with my good buddy yesterday, an interesting question was raised: what about all the software that's already out there?

Historically, it's pretty clear that software came well before software security. Systems were originally closed, and the level of expertise and access necessary to fool with them was generally quite high. This is not to say that security didn't exist - in fact, quite the opposite, security has obviously existed long before software. The problem, however, is that security up until the advent of modern technology was oriented toward physical security, supplemented by policy and law. This problem plagues us to this day inasmuch as the human brain is not wired to handle non-physical threats very well.

As a result of this order of evolution, an interesting problem pops up: that of retroactively applying software security practice to legacy code and systems. Perhaps the best example of this type of problem is the Y2K rush to fix billions of lines of legacy code before the rollover to 2000. Millions - perhaps billions - perhaps trillions - of dollars were poured into addressing this one little bug. Now think about legacy software - such as SCADA networks - and consider that fixing those problems is not trivial. Now look at the PCI DSS. Here is a standard that expects companies to address a long list of problems in legacy systems and applications that may or may not be conducive to retrofitting security measures.

Flipping back to the Y2K comparison, the estimate at the end of 1999 was that $21B was spent fixing all that code. $21B - yes, billion. What's particularly interesting about those cost estimates, however, is who was generally stuck footing that bill. GE estimated $550M spent. And then there's the insurance industry, which spent an estimated $6.8B. Now consider who else was frequently stuck fixing this problem: vendors, banks, and other large corps.

Now let's compare this to compliance with the PCI DSS. No longer are the costs just borne by large corps. This isn't a problem addressed only at companies with billion dollar budgets. In fact, just the opposite is true. Many organizations affected are small in comparison with the Y2K companies. And yet, when you get down to it, we're looking at requirements that are much more extensive than fixing a date calculation bug. Yes, Y2K had many challenging logic problems to fix, but that's not the point. The point is that small companies are oftentimes locked into platforms that are by definition legacy and that may never achieve PCI compliance - certainly not at the hands of the merchants (I'm somewhat excluding compensating controls here - please allow that to slide;).

There is, of course, a counter-argument. Whereas many Y2K systems were mainframes coded in the 70s and 80s, using such fun languages as Ada and COBOL, the current applications are coded using much more modern codebases. In fact, a significant number of in-scope systems are web-based, and thus represent a fairly mild challenge in terms of fixing. But let's not forget that there are dozens of requirements to address in these environments, and the scope can very easily explode if simple things like network segregation do not exist.

None of this is to say that companies should not be doing what is required in PCI. In fact, just the opposite, they should be applying these principles - within a risk management framework - to their entire organization. However, that being said, it is important to consider regulations like PCI in a proper context that is weighted to include the cost of addressing legacy systems and applications. Then add in time estimates. Y2K was not addressed overnight. It took a few years of extensive work, and even then it was never fully done. To expect that organizations will be able to rapidly address all their software security weaknesses in short order is similar folly.

This situation then introduces an interesting challenge: does security testing, planning, and remediation include a time component? Should it? The answer, I believe, is a resounding "YES!" with a caveat. That caveat is - wait for it - risk assessment and management. *TADA!* The best reasonable way to factor in time considerations for addressing security concerns is by assessing the risks and prioritizing them accordingly. Bringing this back to PCI, I will then point you at the "Prioritized Approach for DSS 1.2", which effectively acknowledges that it is not reasonable to expect instant full compliance with the DSS. Just the opposite, the card brands recognize the costs and challenges involved. Of course, it would be interesting to know how their risk assessment maps to yours, but that's a different story. :)
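To make the time component concrete, here's a hedged sketch of scoring findings and bucketing them into remediation phases. The likelihood/impact inputs are exactly the sort of qualitative gut calls discussed above, and the weights and phase cut-offs are arbitrary assumptions of mine, not the card brands' actual prioritization:

```
# A sketch of risk-prioritized remediation phasing; scores and cut-offs
# are arbitrary illustrations, not the PCI Prioritized Approach itself.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent) -- a qualitative gut call
    impact: int      # 1 (minor) .. 5 (severe)  -- likewise

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

findings = [
    Finding("unpatched web server", 4, 5),
    Finding("no log review process", 3, 3),
    Finding("shared admin accounts", 2, 4),
]

# Highest risk first; phase boundaries are arbitrary cut-offs.
for f in sorted(findings, key=lambda f: f.risk, reverse=True):
    phase = 1 if f.risk >= 15 else 2 if f.risk >= 8 else 3
    print(f"phase {phase}: {f.name} (risk score {f.risk})")
```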

So which came first, the software or the security? I guess it depends on what kind of security we're talking about and when the software was created. :)

I've been on the hunt for a solution to the PCI problem. You might be wondering "which problem is that?" The problem, as I see it, is that L2-4 merchants have to accept, handle, transmit, and ultimately store cardholder data. Hence the high risk associated with merchants and their generally lax (or lacking) security practices.

To me, the solution here is to get the data out of the hands of the merchants. If the merchants don't have the cardholder data, then you don't need to worry (as much) about them getting compromised. The question is if there are solutions in the market that accomplish this goal. One such solution is "tokenization" - but I'm not convinced it actually saves you much.

With tokenization solutions, your card processor provides you with a non-sensitive token to store in your billing system in lieu of the CCN itself. Ok, all good and fine, but that's where things stop. With this solution I see two major problems. First, you're still accepting the CCN through your site, and thus have to secure systems in accordance with PCI. Second, it would appear that you then rely on APIs (or compiled software) to conduct some transactions with the CCN (though I would hope you couldn't get the CCN back).
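For anyone unfamiliar with the mechanics, here's a minimal sketch of what a processor-side token vault conceptually does. This is an illustration of the idea only, not any vendor's implementation:

```
# Conceptual sketch of a token vault: the token is random, so it reveals
# nothing about the CCN and is useless without access to the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._by_token = {}  # token -> CCN; in reality an encrypted store

    def tokenize(self, ccn: str) -> str:
        token = secrets.token_hex(16)
        self._by_token[token] = ccn
        return token

    def detokenize(self, token: str) -> str:
        # Only the processor can do this; the merchant holds just the token.
        return self._by_token[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print("merchant stores:", token)  # never the CCN itself
```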

My primary concern with a tokenization solution is the first one listed above. If you still have cardholder data flowing through your site, then you're still subject to almost all of the PCI requirements. The main ones you can side-step are encryption, key management, and certain nuances of the network security requirements (such as the proxy). Beyond that, you still have to shoulder the cost of compliance for the rest of your site.

Given this apparent case, one must then wonder: is outsourcing to a tokenization provider really more cost-effective than simply fixing the problem directly on your own systems? Sure, it transfers some of the responsibility away from you, to an org that is geared toward protecting that data. But at what cost? Moreover, now you have to trust this 3rd party not to get compromised, while you still have to share in the responsibility of protecting the data.

In the end, it seems to me that the right solution is to move all of the CC acceptance and processing off of the merchant platforms. I think this can be done in a couple ways, and it seems that there may be a couple solutions out there (EPX, Paymetric), but again it's not fully clear if they are in fact fully transferring the CC acceptance, processing, and storage, or if they're really just a more involved tokenization company. Paymetric's site talks extensively about ERP integration, suggesting a tokenization practice. EPX explicitly talks about tokenization, while making the claim that merchants "Never process, transmit or store data". Of course, the EPX site also says "Eliminate the nightmare of PCI" - which I think is probably not true.

Tokenization may reduce the burden of PCI, but it definitely does not eliminate it completely, and I'm left to wonder if the reduction is even enough to warrant the outsourcing cost. What else am I missing out there? Is this problem being solved, and well?

Well, /That/ Didn't Work!


As noted a couple days ago, I'm trying to get full feeds back working. If you're using an RSS reader, you probably noticed that I'm about half-way there. Who needs formatting anyway, though, right? *sigh* Problem noted, working on it, please ping me if you know how to unhork MT... :S

I recently attended the RSA 2009 Conference in San Francisco. Upon arriving back at work, one of the first tasks I was assigned (besides catching up) was to write a summary of the conference and what I learned /as it applied to the company/. Well, to say the least, this can be a daunting task. You can learn lots of interesting things at a major conference like RSA, but will much of it apply to your real world everyday job? With this in mind, I began to wonder "is there value in security conferences for companies?" and this, of course, has gotten me onto a little rabbit trail that must be followed.

Before I launch into some sort of boring analysis or commentary, let me first preface all of this by saying: Yes, you and/or your team should absolutely go to conferences and training programs. Get out of the office, meet other people, see what else is going on in the world. This is a philosophy that my original home school for Gracie Jiu-Jitsu subscribes to, and one that I would hope that everyone would appreciate. In technology - and particularly in information security - you cannot live in a vacuum. You absolutely positively must get out and see new things, new people, new ideas, new places, new techniques, etc. We don't all tackle problems in the same way, and that means that there are some really cool new things out there to learn, if only you go look for them.

Hopping back down from my soapbox, then, let's look at how you can make attending a security conference worthwhile. In my mind, there are three keys to having a good conference while demonstrating value to your employer (who's hopefully footing the bill). First, you need to go into the conference with a plan that includes learning objectives. Second, whether you're comfortable doing it or not, you need to get out and be social with vendors on the expo floor. Third, whether you're comfortable doing it or not, you need to get out and be social with your colleagues. Allow me to go into a bit more detail.

Plan + Learning Objectives
Any learning opportunity will only be maximized by the amount of effort you put into it. If you walk into a learning opportunity blindly, with no direction, and with no real inclination or interests, then you're quite likely going to walk away disappointed. On the other hand, if you enter with at least a moderate degree of curiosity - at least in specific areas related to the conference - then you will greatly increase the value of your experience.

This whole "make a plan" concept really applies more to large, multi-track conferences than it does to small, single-track conferences. Showing up for SOURCE in Boston will be a completely different experience from attending RSA or CSI where there are a wide variety of topics and tracks. So, in the case of these large conferences, find out ahead of time what's being offered, and develop a plan. Moreover, since you're employer will be looking at ways for you to incorporate the value proposition into the company, make sure your plan looks at what is important to your job beyond your basic interests (hopefully these align, but you never know). This is where learning objectives really come into play, because you can then go to a conference seeking specific knowledge or information and, hopefully, walk away having found some of it.

Socialize with Vendors
Yes, yes, I know. If you talk to a vendor, they'll probably get your contact info, and then they'll call you all the time, over and over and over again, whether or not you want them to do so. Don't panic. Talking to vendors is a good way to find out what they have to offer and, more importantly, what's coming down the pipe. For security management especially, it's definitely worth your time to seek out the younger, hungrier startup-ish vendors to learn what they see as emerging trends. These companies frequently have millions of dollars invested in research, so you might as well make use of it as best as possible.

Now, for those in the audience who aren't big talkers, don't worry. Here are a couple tips from, well, a big talker:
* Don't feel obligated to give information away.
* Get the sales flack talking and maintain eye contact to prove you're listening.
* Resist the demo unless you're actually interested. If you're interested, /ask/ for a demo!
* If you're more technical, don't be afraid to ask to talk to the techie. (if no techie, flee!;)
* Try to toss out leading questions to help the sales flack along.

As with everything, you'll get out what you put in, and sometimes even more. At big conferences, many vendors have parties at night, and so spending some time showing the vendors love can help get you into the thick of things, which brings me to my third point.

Socialize with Colleagues
As smart as you are, there are other people who know things that you don't. Hopefully they're friendly! One of the best ways to find out is to go hang out with them. Hey, it's a conference, you're probably on the road, what's the big deal? Even if you are introverted and scared to death of crowds, you can meet some amazing people (hey, I met Dan Farmer at RSA this year - he's a huge reason I got into infosec!) and even learn a few things along the way.

Don't believe me? Well, that's fine, but consider this: last year I didn't know many people, and was known by even fewer. This year, having hung out with folks last year and then interacted with them over the course of the year (blog/twitter), I was now much better prepared to find folks, talk to folks, and so on. What did this get me? Well, for starters, I found out about MiniMetriCon 3.5, the Monday before RSA started, at which I got to hear some excellent presentations on security metrics, including one by Jeremiah Grossman of Whitehat Security and one by Wade Baker of Verizon Business. Both gentlemen went through real life data that was not only sobering, but also informative and educational (e.g. PCI is apparently not a complete waste of time and money, despite how it feels).

If you get out and meet people and swap stories, you will quickly find that you're not the only person fighting the good fight, but that you in fact have commiserators in the grand scheme of things. It feels good knowing that I'm not the only one dealing with various issues - and hopefully you'll get to enjoy that sense of camaraderie, too.

Bonus: What Not to Say When You Return
Several times I've returned from a conference and been asked minutes after walking into the office "hey, how was it?" to which I've stupidly said "eh, it was ok, nothing great." D'OH! The last thing your boss wants to hear is that s/he just wasted a few thousand dollars to send you to a conference that wasn't worthwhile.

So, take a tip from me. Before you get back to work, start developing a storyline about how the conference was good and useful and educational. Pull out those learning objectives and develop talking points about how it met the company's needs. Pull out your notes from talking with vendors to demonstrate that there might be technology solutions for given problems. Put a positive spin on the conference as much as possible, and - assuming you actually want to go to another conference - don't make it sound like it wasn't worthwhile.

Just an fyi, in case anybody cares, I'm moving back to full feeds. Trying to drive ad revenue quickly became more hassle than it was worth. So, hopefully you will see all posts going forward with a full feed. If not, apologies, bear with me while I un-hose things. :)

Controlling the Bacon Fever Frenzy

As I noted about a week ago, there seems to be a lot of insanity surrounding the current "Swine Flu pandemic" ("swine flu" being the colloquial name for the H1N1 virus). In a continuing effort to fight FUD, replacing it with rational, logical, and intelligent thought, allow me to pull out a few definitions and suggestions to help you cope with the mass hysteria.

1) pandemic: "(of a disease) prevalent throughout an entire country, continent, or the whole world; epidemic over a large area." Now, this sounds a bit scary, but then let's look at what the World Health Organization (WHO) actually says about a pandemic, since they've raised the alert to Phase 5. From http://www.who.int/csr/disease/avian_influenza/phase/en/:

Phase 5 is characterized by human-to-human spread of the virus into at least two countries in one WHO region. While most countries will not be affected at this stage, the declaration of Phase 5 is a strong signal that a pandemic is imminent and that the time to finalize the organization, communication, and implementation of the planned mitigation measures is short.

2) Better context from WHO:
http://www.who.int/csr/don/2009_05_04/en/index.html:
* "As of 06:00 GMT, 4 May 2009, 20 countries have officially reported 985 cases of influenza A (H1N1) infection."
* "There is no risk of infection from this virus from consumption of well-cooked pork and pork products."
* 19 countries have lab-confirmed cases with no deaths.

http://www.who.int/csr/disease/swineflu/guidance/public_health/travel_advice/en/index.html:
* "1 May 2009 -- WHO is not recommending travel restrictions related to the outbreak of the influenza A(H1N1) virus."

3) Better context for the U.S. from the Center for Disease Control (CDC):
http://www.cdc.gov/h1n1flu/: In the US - 36 states with confirmed cases, a total of 286 confirmed cases in the country, and only 1 death reported attributable to H1N1.

The primary recommendations from the CDC? Wash your hands, stay home when sick, rest, recover, recuperate.

4) Comparing H1N1 to avian flu (H5N1).
http://www.who.int/csr/don/2009_04_23a/en/index.html:
"23 April 2009 - Of the 67 cases confirmed to date in Egypt, 23 have been fatal."

So, yeah, the strength and danger of swine flu pales in comparison with avian flu.

5) Opportunity to introduce common sense to HR policies. One of my biggest pet peeves is with paid time off. Many companies lump sick leave in with vacation time, mainly because of some misguided big brother nanny culture idea that people might fake being sick, and thus should not be trusted. The problem with this philosophy is that limiting sick leave discourages people from staying home when they are sick, thus greatly increasing the likelihood that one sick person will infect most of an office. Such policies are lunacy and need to be brought into the modern age.

Similarly, work from home policies are often quite backward despite significant advances in technology. Want to know how to deal with a pandemic? Make sure your workers can work from home, and encourage them to do so. Oh, and btw, guess what? Work-from-home policies also are good for the environment in that they help reduce the number of cars on the road, and thus help reduce emissions. Can't get much more green than that.

6) Skip the FUD, use your brain. If the mainstream media is covering it at a frenzied pitch, it's usually safe to assume that the actual risk represented is low. More people die in car accidents each day than have died from the swine flu (see http://www.car-accidents.com/pages/stats.html). "About 115 people die every day in vehicle crashes in the United States -- one death every 13 minutes." So, let's put things into perspective a bit here: 1 death from swine flu in the US, and fewer than 300 confirmed infections, versus 115 deaths per day from traffic accidents.

If you really want to reduce deaths, I highly recommend investing in feasible mass transit. There's really no good reason why there aren't high-speed and light-rail trains connecting major cities and regional areas. For example, here in Arizona, there should be rail service from Phoenix to Tucson, Flagstaff, and Albuquerque, NM, as well as to LA, San Diego, Denver, Vegas, and SLC. Use airplanes for long-haul trips, but then use electric-based rail for the rest. If I didn't have to drive to work every day, I would not be sad. If I didn't have to drive to my favorite hiking and camping spots, even better.

Apply a modicum of common sense in the face of blatant hysteria. A little perspective is worth a lifetime of stress-inducing FUD.

Alrighty, it's only been 10 days since RSA, so what the heck, let's get the last two vendor reviews out the door, shall we? :) If I'd been smart, I would have talked about these two vendors in my earlier wrap-up post, but apparently my brain was on vacation that day. :)

First up we have Solera Networks... if you've followed my blog at all, you know that I'm a fan of NetWitness (see here)... Solera is very similar, with the exception of being more of a software solution than a hardware appliance. More importantly, they differentiate themselves from NetWitness by having a VMware Virtual Appliance that can be used to capture vSwitch traffic for full analysis.

Solera further differentiates itself from NetWitness in speed, confirmed at 8.1Gbps sustained, with bursts up to 10Gbps. They're able to handle 1Gbps in the software alone, which is fairly remarkable (that has to be one tight stack). Solera also has an API (SOAP/REST) that can be used for integration with other tools. And, lastly, their software generates PCAPs on the fly based on raw data. You might think that this would be incredibly slow, but you would be wrong. :) Thanks to a proprietary file system, they're able to quickly rip out a PCAP when needed.

A special "thank you" to Solera Networks President and CEO Steve Shillingford for taking the time to chat with me about this interesting product. Hat tip to Lauren Dresnick of New Venture Communications for coordinating the meeting.

The last product I wanted to bring to your attention was one of the Innovation Sandbox finalists, Yubico, makers of YubiKey. This little USB device is really quite interesting, and for once it's a strong auth product aimed at the consumer space. Check out their long list of integrated applications to see just how much use you can make of the device. In a nutshell, after you install the software and touch the hot spot on the key, it goes out to their web site, securely authenticates, grabs strong credentials, and then automatically populates them into the supported application.

My only major concern with YubiKey is that, being a SaaS-styled solution, network loss could have a detrimental effect on its use. However, integrated with OpenID and OATH, this could really be an interesting tool going forward that could provide a viable alternative to manually entered passwords.
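To illustrate both the flow and that SaaS dependency, here's a hedged sketch of how a relying application might verify a one-time password against a hosted validation service. The endpoint, parameters, and response format are assumptions of mine for illustration, not Yubico's actual API:

```
# A speculative sketch of server-side OTP validation against a hosted
# service; URL and response format are hypothetical, not Yubico's API.
import urllib.parse
import urllib.request

VALIDATE_URL = "https://validation.example.com/verify"  # hypothetical

def otp_is_valid(otp: str, client_id: str) -> bool:
    query = urllib.parse.urlencode({"id": client_id, "otp": otp})
    try:
        with urllib.request.urlopen(f"{VALIDATE_URL}?{query}") as resp:
            body = resp.read().decode()
    except OSError:
        # The SaaS catch: if the validation service is unreachable,
        # authentication fails outright.
        return False
    return "status=OK" in body  # assume a simple key=value reply
```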

And thus concludes RSA vendor reviews. Thanks for playing. Have a nice day. :)

I had the opportunity to sit down with Derek Tumulak, SafeNet VP of Product Management - Enterprise Data Protection, during the RSA Conference a week+ ago. The meeting was facilitated by SafeNet Sr. PR Specialist Matt Pugh. I am very grateful for the opportunity to get the inside scoop. Moreover, I was excited to see that Data Secure lives on.

For a little background, Ingrian Networks was acquired by SafeNet on April 3, 2008. Ingrian was most notable for their "Data Secure" product line, which is centered on an appliance solution for encryption and key management. The rumormill suggests that the acquisition shook itself out in earnest mid-Summer 2008, and the resulting confusion caused me to exclude SafeNet from a customer project last Fall (the local sales rep literally said "I don't think we're doing that any more" to which I replied "pity, the Ingrian product was the best in the market"). It sounds like that confusion has now been resolved, with Data Secure back, and in a better position overall.

Going forward, SafeNet's mantra is "enterprise data protection." They foresee Data Secure as being central to the enterprise, serving in myriad data security roles, well beyond the original crypto focus. To that end, they plan to converge their HSM solutions, as well as to look at bringing in new solutions to build-out the offering. One such idea bandied about is integrating a next-gen DLP solution to round out their data control capabilities. SafeNet is also looking at ways to integrate data discovery, management, and control capabilities, all through the central Data Secure appliance line. If you think about it, this is an awesome idea: one central interface for monitoring and controlling data across your network. Convergence between DLP, crypto, and other data control (dare I hope for IAM and other related access controls some day, too?) - all based on security policies, of course - could make life much, much easier if done properly.

One interesting development in the Data Secure line is in the key management capabilities. The keys can now be stored locally, or securely distributed over an SSL-wrapped connection. SafeNet is also participating in the OASIS KMIP tech committee, undoubtedly to interface with devices that need encryption keys. Look for updates from SafeNet as various standards come to fruition (KMIP, keyprov, EKMI, P1619.3, etc.).

Data Secure now also uses metadata to describe keys and encrypted blobs to better track which keys are in use and where. Even more interesting, SafeNet can now work directly with some databases to facilitate assessment of which keys are in place, as well as if data needs to be re-encrypted with a new key (the DBA still retains control, however, by inputting the necessary credentials when prompted).
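As a thought experiment, the key-metadata idea might look something like the sketch below; the field names and structure are purely my own assumptions, not SafeNet's implementation:

```
# Speculative sketch: tag each key with metadata so you can tell which
# ciphertexts need re-encryption once a key is rotated or retired.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KeyRecord:
    key_id: str
    created: date
    retired: bool = False
    used_by: set = field(default_factory=set)  # columns encrypted under it

keys = [
    KeyRecord("k-2008-01", date(2008, 1, 15), retired=True,
              used_by={"customers.ccn"}),
    KeyRecord("k-2009-04", date(2009, 4, 1),
              used_by={"orders.ccn"}),
]

# Anything still referencing a retired key is a re-encryption candidate.
for k in keys:
    if k.retired and k.used_by:
        print(f"re-encrypt {sorted(k.used_by)} (currently under {k.key_id})")
```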

Overall, SafeNet appears to be quietly growing into a major player in the security market, not only building on their excellent product lineage, but having a vision that makes sense. Look for an announcement from them later this year as they expand their data control capabilities.

About this Archive

This page is an archive of entries from May 2009 listed from newest to oldest.

April 2009 is the previous archive.

June 2009 is the next archive.

Find recent content on the main index or look in the archives to find all content.
