The Thankless Life of Analysts

There are shenanigans afoot, I tell ya; shenanigans!

I was recently contacted by an intermediary asking if I'd be interested in writing a paid blog post slamming analysts, to be published on my own blog and then promoted by the vendor. No real details were given beyond the expectation to slam analyst firms, but once I learned who was funding the initiative, it became pretty clear what was going on. Basically, this vendor has received, or is about to receive, a less-than-stellar review and rating from one of the analyst firms, and they're trying to get ahead of the news by proactively discrediting analyst reports.

My response to the offer was to decline, and now, as I'm hearing some may take up the opportunity, I've decided it's time to get out ahead of this potential onslaught of misleading propaganda myself. Mind you, I'm not a huge fan of the analyst firms, and I was incredibly frustrated and disappointed during my time at Gartner when I was constantly told to write about really old and boring topics rather than being allowed to write more progressive reports that would actually help move the industry forward. But I'll get to that in a moment...

How Research Happens

First and foremost, let's talk about how research happens. I can only speak from my direct experience at Gartner, but my understanding is that it's not too dissimilar at other organizations. Also, let me provide a disclaimer or three: I don't work for Gartner anymore, I never wrote Magic Quadrant reports, and I have nothing to personally gain from defending or attacking them. As always, my goal here is to be fair and balanced (for real, not in a Fox News or CNN kinda way).

So, research... there are a lot of different kinds of reports, but let's talk a little bit about the biggies that everyone sees (e.g., MQs, Waves, etc.). These reports typically combine primary research (customer surveys and interviews) with analysis of open-source information and interviews with the vendors themselves. One of the first challenges is defining a market niche, followed by identifying vendors in that niche, and then conducting the research. For vendors who whine about being excluded from the research process, part of this is your fault for not making yourself known to analysts, and part of this is falling victim to arbitrary market segment rules created to keep the pool of rated vendors to a reasonable size (think, for example, of the old, retired Gartner MQ for EGRC, which easily could have included hundreds of vendors, and so was arbitrarily pared down to 20 or so vendors whom no one in their right mind would ever put on a comparative chart).

Customer surveys (references) generally play a huge role in information collection, and the pros and cons highlighted in those surveys and interviews will often get incorporated directly into reports. Which is to say, keep your customers happy and be sure any customer references you provide aren't going to trash you! Beyond that, it then comes down to an analyst's judgment (and biases) in terms of how they score you. The scoring is consistent across all vendors included in a report insomuch as the same criteria are used for everyone.

Contrary to popular mythos, analysts are not compensated for sales, renewals, etc., and that is precisely to keep the analysts as neutral and objective as possible. That said, I've noted that it's very common for larger vendors to buy lots of analyst time in order to increase their exposure to the analyst(s) covering them. This has included off-site analyst information seminars in sunny locations, which I've always found a bit suspect. The bottom line, though, is that if there's a perception of bias and favoritism toward larger, richer vendors, it's at least partly correct insomuch as the major players can afford the extra face time that keeps their offerings fresh in the minds of the analysts.

There's also a self-reinforcing cycle at play. The large vendors have the largest marketing budgets, and thus tend to drive a lot of customer inquiries to analysts about their products. As the inquiries increase, so does the imperative for an analyst to speak with those vendors on a regular basis to keep apprised of ongoing developments. Most large vendors have a dedicated analyst liaison whose job is to ensure all the analyst firms get regular briefings on products and product strategy, which in turn may trigger research notes sharing updates from the vendors, and so on.

It's Not Foolproof / The Ombudsman

The research and publishing process is not foolproof. Humans are involved, which means bias will always be present in varying degrees, and mistakes will be made. As noted already, it's fairly easy to increase your influence over an analyst simply by increasing your company's exposure. And, in many ways, you should be doing this as much as you can (within reasonable limits - i.e., most major vendors offer quarterly updates). But, nonetheless, the process has failings, and in many ways you should give analysts a break... to a degree, anyway! Oh, and btw, let's also bear in mind that it is a process, and often a lengthy one. You'll notice new MQs only come out every 2-4 years. Do you know why that is? Because a) research takes a long time, b) writing and internal peer review take a long time, and c) review and sign-off from vendors take a long time.

The biggest problem I have with analyst reports like the MQ is how the market niche is defined. For example, in the former "IT GRC" space, a company was once marked down and held back from the Leaders quadrant because it hadn't yet expanded into Europe and didn't offer 24x7x365 support. Such criteria seemed rather arbitrary, especially at the time. To be honest, nobody really needed or was asking for 24x7x365 tech support because the product wasn't critical path. And yet, there were the criteria, and the subsequent markdown against the product. This sort of thing feels arbitrary, and in many cases such criteria really are, but we also have to view it from the larger perspective and realize that a report trying to compare a hundred or more vendors isn't going to be very useful either (as that EGRC report continually proved).

Vendors have recourse if they feel they're treated unfairly. First and foremost, they're given access to draft report language pertaining to them so they can review it and offer corrections or refutations. Failing that, if an analyst won't revise a report in response to a vendor objection, the vendor can engage the ombudsman process and file a formal complaint. This process is generally reliable, but isn't itself foolproof, as demonstrated by the Netscout v. Gartner lawsuit filed in Connecticut in 2014 (filing, Gartner on outcome, Netscout on outcome). Overall, a lawsuit is a lousy way to try to resolve issues, though I feel it's certainly better than a guerrilla marketing campaign meant to undermine analysts in general.

Part of the Problem

Now, all of this apologist explaining is great, but let's be clear: analyst firms are as much a problem as they are a help. I, for one, take issue with much of what the analyst firms say and do, and I'm very ready to point out some of those failings, including:
* Targeting the mainstream with information that was "current" 10+ years ago doesn't move the industry forward.
* Reports end up driving customer decisions based on an incorrect understanding of the conclusions.
* Bias due to increased exposure and influence isn't effectively managed.

Starting from the first point, one of the problems that frustrated me most as a Gartner analyst was that I wasn't allowed to write "forward-leaning" research because I was told there wasn't an apparent market for it. To be fair, in some ways that's correct; the "average" company today is not on the cutting edge, and in fact very much needs to be told (repeatedly) to do the basics.

However, there are other issues here that are harmful. For example, much of the focus in analyst reporting is on tools and vendors, not on processes, practices, and architectures. As such, we see the notorious "shiny object syndrome" in full effect every time an executive returns from an event or reads a new report. Just because a product is a "leader" in a report does NOT mean you need to run out and buy it.

In fact, just because someone is a "leader" doesn't mean they're the best choice for your organization. Analyst reports are notoriously misrepresented, and of course get heavily played up by marketing departments as some sort of definitive pronouncement, when in fact they're just one data point. Something I often pointed out to clients as an analyst was that you really want to read the report in the inverse, to make sure you're not pursuing a product that a) didn't make it into the report because it wasn't mature enough, b) scored very poorly for not meeting a number of sound criteria, or c) was otherwise cautioned against. That is where the real value lies; not in who may or may not be a "leader" in a space. Often you'll find that buying a product from a "leader" comes at much greater cost, while other products in the middle of the pack would be more than sufficient for your organization's needs and provide much quicker ROI overall.

The bottom line here is that analyst firms must be seen as they are: generic advisors on products. They're not foolproof, they're not unfailing, they're not completely unbiased (no such thing), they're not necessarily producing comprehensive reports (where does free and open source fit in a report on commercial offerings?!), and the reports are absolutely, positively not telling you which products to buy or making specific product recommendations suitable to your organization.

At the end of the day, treat analyst reporting as an input into your product decisions, but do not use it as the only consideration, and do not over-weight it. Choosing a product should be part of an architectural process that first defines and understands a problem-space, and then progresses to identifying and evaluating possible solutions for that problem-space (assuming the problem-space is even worth solving!). Starting from an analyst report on the assumption that it provides product recommendations for you specifically is patently incorrect, and will almost certainly lead to pain down the line. Misusing analyst reports is not the fault of the analysts, just as buying into vendor marketing campaigns without reasonable scrutiny and critical thinking is not the fault of the marketers.

As for people who are eager to attack and undermine analyst firms, be wary of their agenda, because you never know who might be paying for their criticism. If there's one thing I've found over the years, it's that people like to think they know all about analyst firms even though they've never worked for one. As with everything, a healthy dose of skepticism goes a long way (for both critics and advocates!).


About this Entry

This page contains a single entry by Ben Tomhave published on January 25, 2018 3:45 PM.


Creative Commons License
This blog is licensed under a Creative Commons License.