Interpreting The Insight 2023 Sales KPI Report

Insight Partners recently published an excellent 2023 Sales KPI Report. As I went through it, I thought it could be educational and fun to write a companion guide for three distinct audiences:

  • The intimidated. Those who find SaaS benchmark reports as impenetrable as James Joyce. The post could serve as my Ulysses guide for those interested but in need of assistance.
  • The cavalier. Those who are perhaps too comfortable, too quick to jump into the numbers, and ergo potentially misinterpreting the data. The post could serve to slow them down and make them think a bit more before diving into interpretation.
  • The interested.  Those who simply enjoy deeper thinking on this topic and who are curious about how someone like me (i.e., someone who spends far too much time thinking about SaaS metrics) approaches it.

So, let’s try it.  I’ll go page-by-page through the short guide, sharing impressions and questions that arise as I read the report.  As usual, this ended up being about five times as much work as I thought at the outset.

Onwards!  Grab your copy and let’s go.

Introduction (Slide 3)

Yikes, there are footnotes to the first paragraph. What do they say?

  • They’re cutting the data by size bucket (aka, “scale-up stage”). I suspect they use this specific language because Scale Up is a key element of Insight’s positioning.
  • They’re also cutting the data by go-to-market (GTM) motion: transactional, solution, or consultative. This is a cool idea, but it’s misleading because those descriptive names are simply a proxy for deal size (aka average selling price, or ASP).
  • While the names don’t really matter (they are just labels for deal size buckets), I find “transactional” clear, but I don’t see a difference between “solution” and “consultative” sales.  I’m guessing “solution” means selling a solution directly to a business buyer (e.g., selling a budgeting system to a VP of FP&A) and “consultative” means a complex sale with multiple constituents.
  • Ambiguity aside, the flaw here is the imperfect correlation between deal size and sales motion. Yes, deal size generally implies a sales motion, but the correlation is not 100%. (I’ve seen big, rather transactional deals and small, highly consultative ones.) They’d be better off just saying “small, medium, and large” deals rather than trying to map them to sales motions. We’ll need to remember that later when interpreting the data.

Now we can read the second paragraph of the first page.

  • Data is self-reported from 300+ software companies that Insight has worked with in the past year.
  • That’s nice, because 300 companies is a pretty large set of data.
  • But beware the “Insight has worked with.” Insight is a top-tier firm so this is not a random sample of SaaS companies. I’m guessing “working with” Insight means tried and/or succeeded in raising money from Insight. So I’d argue that this data likely contains a random blend of top-tier companies (who reasonably think they are Insight material) and non-self-aware companies (who think they are, but aren’t).
  • Nevertheless, I’m guessing this is a pretty high quality group. While some SaaS benchmarks include a broad mix of VC-backed, founder bootstrapped, and PE-owned SaaS companies, SaaS benchmarks produced by VC firms generally include only those firms who tried to raise VC — i.e., the moonshots or at least wannabe moonshots.
  • By analogy, this is the difference between comparing your SAT scores to Ivy League admittees vs. Ivy League applicants vs. all test takers. (The middle-50% range for Ivy League admittees is 1468-1564, for all test takers it’s 950-1250, and for applicants I don’t know.)
  • I’ve always felt you should, in a perfect world, cut benchmarks by aspiration. You run a company differently when you’re a VC-fueled, share-grabbing moonshot vs. a bootstrapped founder hoping to sell to a PE sponsor in 3 years. Thus, this data is most relevant when you’re trying to raise money from a firm like Insight.

Table of Contents (Slide 4)

Just kidding. Nothing to add here.

Executive Summary: Sales KPIs (Slide 5)

Here we can see key metrics, cut by size, and grouped into five areas: growth & profitability, sales efficiency, retention & churn, GTM strategy, and sales productivity.

Before we go row-by-row into the metrics, I’ll share my impressions on the table itself.

  • CAC payback period (CPP) is simply not a sales efficiency metric. While many people confuse it as one, payback periods are measured in time (e.g., months) — which is itself a clue — and they are risk metrics, not return metrics. They answer the question: how long does it take to get your money back [1]? Pathological example: a CPP of 12 months and a 100% churn rate means you get your money back in a year but never get anything else. It’s not measuring efficiency. It’s not measuring return. It’s measuring time to payback [2]. (A worked toy example follows this list.)
  • I’ve never heard of SaaS quick ratio before, but from finance class I remember that the quick ratio is a liquidity metric, so I’m curious.
  • I wouldn’t view pipeline coverage as a sales productivity metric, but agree it should be included in the list and I view its placement as harmless.
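
To make the payback-versus-return distinction concrete, here’s a minimal sketch in Python with made-up numbers (the $120K CAC and $10K/month ARR are purely illustrative): two companies with identical 12-month paybacks but wildly different returns.

    # Two hypothetical companies with identical CAC payback periods but very
    # different returns -- illustrating that CPP measures time, not return.

    def cac_payback_months(cac, monthly_arr):
        """Months of ARR needed to recover the CAC investment (ARR basis)."""
        return cac / monthly_arr

    def lifetime_revenue(monthly_arr, annual_churn, years=10):
        """Total revenue collected over `years`, decaying with annual churn."""
        total, retained = 0.0, 1.0
        for _ in range(years):
            total += monthly_arr * 12 * retained
            retained *= 1 - annual_churn
        return total

    cac, monthly_arr = 120_000, 10_000  # illustrative figures

    # Both companies pay back in 12 months...
    print(cac_payback_months(cac, monthly_arr))             # 12.0

    # ...but at 100% churn you get your money back and nothing more,
    # while at 10% churn you collect a healthy multiple of CAC.
    print(lifetime_revenue(monthly_arr, annual_churn=1.0))  # $120K: payback, no return
    print(lifetime_revenue(monthly_arr, annual_churn=0.1))  # ~$782K: a real return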

Now, I’ll share my reactions as I go row-by-row:

  • ARR growth. The rates strike me as strong, partially validating the view that these are Ivy League applicants. For example, median 106% growth between $10M and $20M is strong. For more views on best-in-class growth rate, see my post on The Rule of 56789.
  • New + expansion growth rate. This seems to reveal a common taxonomy problem. If you consider new logo ARR and expansion ARR as two independent, top-level categories you end up with no parent category or name. For this reason, I prefer new ARR to be the parent category, with new ARR having two subcategories: from existing customers (expansion ARR) and from new customers (new logo ARR). See my recent SaaS Metrics 101 talk. In Dave-speak, row 1 is ending ARR growth rate and row 2 is new ARR growth rate.
  • Efficiency rule. I haven’t heard precisely of this one before, but I’m guessing it’s some variation on the burn multiple. We’ll review it later. I’m surprised they lack data for the bigger categories.
  • CAC payback period (CPP). The prior discussion aside, these numbers look very strong, raising two questions: who are these companies again, and are they calculating it funny?
  • SaaS quick ratio. We’ll come back to this once I know what it is. If it were a liquidity ratio (and it turns out it’s not), these companies would be swimming in cash.
  • Magic number. Usually this is the inverse of the CAC ratio, but sometimes (and as defined by Scale) calculated using revenue, not ARR. When I invert the magic numbers here, I see CAC ratios of 1.4, 1.1, 1.0, 1.3, and 1.3 across the five categories — which are all pretty good.
  • For fun, let’s do some metrics footing. In practice, CPP is usually around 15 / magic number [3], so I can create an implied CPP (21.4, 16.7, 15.0, 18.8, and 18.8 months, respectively). Since those values are about 1.4x the reported CPPs, I’m pretty sure we’re not defining everything the same way. We’ll see what we find later [4]. (See the footing sketch after this list.)
  • S&M % of revenue. A good metric, and a quick skim again shows pretty solid numbers.  Let’s compare to RevOps Squared, which hits a broad population of SaaS companies, and shows ~35%, ~35%, 54%, 43%, and 45% across the five categories [5]. The notable difference is that Insight’s companies spend more earlier (83%, 45% in the first two categories), presumably because they’re shooting for higher growth.
  • Net revenue retention (NRR) aka net dollar retention (NDR) [6]. While there is a definitional question here, the numbers themselves look very strong (cf. RevOps Squared at ~103%, ~104%, 110%, 106%, and 102%). I believe this reflects Insight’s high-flying sample more than a calculation difference, but maybe we’ll learn differently later.
  • Gross revenue retention (GRR) aka gross dollar retention (GDR). This is an increasingly popular metric because investors are increasingly concerned that one train may hide another [7] in analyzing expansion and shrinkage, and thus want to see both NRR and GRR. The figures again look quite strong (cf. RevOps Squared at ~86%, ~87%, 88%, 88%, and 87%). This reinforces the point that we need to understand the sample differences behind benchmarks: Insight sets a much higher bar on NRR and GRR than RevOps Squared [8].
  • Annual revenue churn (rate). I’ve never heard it exactly this way, but this is some sort of churn rate.  It looks very close to 1 – GRR (i.e., plus or minus 1-2%), so it’s hard to understand why I need both.  More later.
  • NPS (net promoter score).  The first question is always for which role, because NPS can vary widely across end users, primary users, administrators, and economic decision makers.  That can also lead to random weightings across those categories.  That said, the numbers here strike me as setting a very high bar.
  • New bookings as a % of total bookings.  This is a good metric, but I look at it the other way (i.e., expansion %) and use new ARR, not bookings [9].  That is, I prefer expansion ARR as a % of new ARR and I like to run around 30%, lower when you’re smaller and higher when you’re bigger.
  • Average sales cycle (ASC) (months).  This was the row that shocked me the most — with numbers like 2.5, I’d have guessed they were measuring quarters, not months.  Then again, I come from an enterprise background, though I do work with some SMB companies.  Let’s see if they drill into it later.  And remember it’s a median; I’d love to see the distribution and a cut by deal size.
  • S&M as % of total opex.  I get why people do this [10], but I don’t like it as a metric, preferring S&M as a percent of revenue. (Cf. RevOps Squared where S&M as % of revenue runs 30-50%.)
  • Sales % of S&M expense.  I like this metric a lot, and it’s happily gaining in popularity.  I prefer to track the sales/marketing expense ratio, which I think is more intuitive but uses the same numbers, just compared differently.  In my experience, the sales/marketing ratio runs around 2:1, equivalent to sales at 66% of S&M.  More important than the baseline value, companies need to watch how this changes over time; it’s often a function of sales’ superior negotiating ability and leverage more than anything else.  See my post.
  • Sales headcount as % of total headcount.  I get where they’re coming from with this metric, but I prefer to track what I call quota carrying rep (QCR) density = QCRs / sales headcount.  I’m trying to measure the percent of the sales org that is actually carrying an incremental quota [11].  See my post, the Two Engines of SaaS, which introduces both QCR density and its product equivalent, DEV density.  Because I don’t track this one, I have no intuitive reaction to the numbers.
  • Bookings per rep.  I’m imagining this is what I’d call new ARR per rep, aka sales (or AE) productivity, measured in new ARR per year.  These numbers strike me as correct for enterprise, but inconsistent with a 3-month ASC — that usually connotes smaller deals and lower sales productivity, on the order of $600K ARR/year.  The key rule of thumb here is that bookings/rep is ideally 4x a rep’s on-target earnings (OTE).  So this data implies sellers with $250K OTEs.
  • Pipeline coverage.  While technically speaking I don’t view pipeline coverage as a sales productivity metric, it’s an important metric and I’m glad they benchmarked it.  In my experience 2.5 to 3.0x coverage is sufficient, and when I see numbers above 3x, I get worried about several things (e.g., pipeline cleanliness, win rate, sales accountability, and whether marketing is being proactively thrown under the bus).  These numbers thus concern me, but sadly do not surprise me.
  • Pipeline conversion rate.  This is notionally the inverse of pipeline coverage if both are measured for the same time period.  I do track them independently because, in enterprise, starting pipeline is a mix of opportunities created in the past 1-4 quarters, and the eventual (cohort-based) close rate is not the same as the week-3 current-quarter conversion rate.  The glaring inconsistency here, speaking on behalf of CMOs everywhere, is this:  sales saying they want 4.0x coverage on a pipeline that closes at 44% is buying a 1.75x insurance policy on the number.  I get that we all like cushion, but it’s expensive, and such heavy cushion puts the monkey on the back of the pipeline sources (e.g., marketing, SDRs, partners, and to a lesser extent, sales itself).  Think:  if we drown sales in pipeline, then we can’t miss the number!  Math:  if you close 44% of it, you need 2.3x coverage, not 4.0x.  (See the sketch below.)
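
Two bits of arithmetic above deserve showing the work: the footing check in the magic number and CPP bullets, and the coverage math in the last bullet. Here’s a minimal sketch in Python; note the magic numbers are my back-calculation from the CAC ratios quoted above, not figures copied from the report.

    # Metrics footing: derive an implied CAC payback period (CPP) from the
    # magic number. The magic numbers below are back-calculated from the CAC
    # ratios quoted above (my assumption, not the report's own figures).

    SUB_GROSS_MARGIN = 0.80  # typical subscription gross margin assumption

    for magic in [0.7, 0.9, 1.0, 0.8, 0.8]:
        cac_ratio = 1 / magic                            # CAC ratio = 1 / magic number
        implied_cpp = 12 * cac_ratio / SUB_GROSS_MARGIN  # = 15 / magic number at 80% GM
        print(f"magic {magic:.1f} -> CAC ratio {cac_ratio:.2f}, implied CPP {implied_cpp:.1f} months")

    # Coverage math from the last bullet: closing 44% of starting pipeline
    # means you only need 1/0.44 = ~2.3x coverage, so demanding 4.0x is
    # buying a 4.0 * 0.44 = ~1.75x insurance policy on the number.
    conversion = 0.44
    print(f"required coverage: {1 / conversion:.1f}x")           # 2.3x
    print(f"insurance bought at 4.0x: {4.0 * conversion:.2f}x")  # 1.76x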

Go-To-Market Sales Motion Definitions (Slide 6)

Holy cow.  We’re only on slide six.  Thanks for reading this far and have no fear, it’s largely downhill from here — the Insight center of excellence pitch starts on slide 12, so we have only six slides to go.

I think slide six is superfluous and confusing. 

  • In reality, they are not cutting the data by sales motion, they are cutting it by deal size (ASP). 
  • They say they are using ASP as a proxy for sales motion, but I think it’s actually the other way around:  they seem to be preparing to use sales motion as a proxy for ASP, but then they don’t present any data cut by sales motion.
  • The category names are confusing.  I’ve been doing this a while and don’t get the distinction between the solution and consultative sale based on the names alone.

The reality is simple:  if they later present data cut by sales motion, remember that it’s actually cut by ASP.  But they don’t.  So much ado about nothing.

Also, the ASCs by sales type look correct in this chart, yet the summary data has a median ASC of 2-3 months.  Ergo, one must assume the sample is heavily weighted towards the transactional motion, but that seems inconsistent with the sales (bookings) productivity numbers [12].  Hmm.

Growth and Profitability Metrics (Slide 7)

OK, I now realize what’s going on.  I was expecting this report to drill down in slides 7-11, presenting key metrics by subject area cut by size and/or sales motion — but that’s not where we’re headed.  I almost feel like this is the teaser for a bigger report.

Thus, we are now in the definitions section, and along with each definition they present the top quartile boundary (as opposed to the medians in the summary table) for each metric.  Because these top quartiles are across the whole range (i.e., from $0 to $100M+ companies), they aren’t terribly meaningful.  It’d be nice if Insight presented the quartiles cut by company size and ASP a la RevOps Squared.  Consider that an enhancement request.

Insight has an interesting take on the “efficiency rule,” which is what most people call the burn multiple (cash burn / net new ARR).  Insight inverts it (i.e., net new ARR / cash burn) [13] and suggests that top quartile companies score 1.0x or better. 

David Sacks suggests the following ranges for burn multiple:  <1.0 amazing (consistent with Insight’s top quartile), 1 to 1.5 great, 1.5 to 2.0 good, 2.0 to 3.0 suspect, and >3.0  bad.

Separately, Insight seems to believe that the efficiency rule is only for smaller companies, and I don’t quite understand that.  Perhaps it’s because their bigger companies are all cash flow positive and don’t burn money at all!  The math still works with a negative sign, and there are plenty of big, cash-burning companies out there (where the metric’s value is admittedly more meaningful), so I apply the burn multiple to cash-burning companies of all sizes.
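
To make the inversion concrete, here’s a minimal sketch in Python with illustrative figures (the $8M burn and $10M net new ARR are made up) showing the burn multiple, Insight’s efficiency rule, and Sacks’ bands.

    # Burn multiple vs. Insight's "efficiency rule": the same measurement,
    # inverted. The figures below are illustrative, not from the report.

    def burn_multiple(cash_burn, net_new_arr):
        """Dollars burned per dollar of net new ARR (Sacks-style multiple)."""
        return cash_burn / net_new_arr

    def efficiency_rule(cash_burn, net_new_arr):
        """Net new ARR extracted per dollar of burn (Insight's inversion)."""
        return net_new_arr / cash_burn

    def sacks_band(bm):
        """David Sacks' qualitative bands for the burn multiple."""
        if bm < 1.0:
            return "amazing"
        if bm < 1.5:
            return "great"
        if bm < 2.0:
            return "good"
        if bm <= 3.0:
            return "suspect"
        return "bad"

    burn, nn_arr = 8_000_000, 10_000_000
    bm = burn_multiple(burn, nn_arr)    # 0.8x -> "amazing" per Sacks
    er = efficiency_rule(burn, nn_arr)  # 1.25x -> Insight top quartile (1.0x or better)
    print(bm, sacks_band(bm), er)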

Finally, Bessemer has a related metric called cash conversion score (CCS), which is not a period metric but an inception-to-date metric:  CCS = current ARR / net cash consumed from inception to date.  They do an interesting regression that predicts investment IRR as a function of CCS — if you need a reminder of why VCs ultimately care about these metrics [14].
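
For completeness, a one-liner for CCS under the definition above (figures illustrative):

    # Bessemer cash conversion score: inception-to-date, not a period metric.
    def cash_conversion_score(current_arr, net_cash_consumed_itd):
        """Current ARR per dollar of net cash consumed since inception."""
        return current_arr / net_cash_consumed_itd

    print(cash_conversion_score(20_000_000, 40_000_000))  # 0.5x (illustrative)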

Sales Efficiency Metrics (Slide 8)

Thoughts:

  • They define CAC on a per-customer basis, don’t define the CAC ratio (the same idea, but per dollar of new ARR), and don’t actually present either in the summary table.  Odd.
  • They use what I believe is a non-standard definition of CAC payback period, defining it on ARR as opposed to subscription gross profit.  For most people, CAC payback period is not months of subscription revenue but months of subscription gross profit needed to pay back the CAC investment.  This explains why their numbers look so good.  To be comparable to most other benchmarks, you need to multiply their CAC payback periods by 1.25 to 1.5 (see the sketch after this list).  This is a great example of why we need to understand what we’re looking at when doing benchmarking.  In this case, you learn that you’re doing much better than you thought!
  • They suggest that top quartile is <12 months for small and medium deals, and <18 months for large ones, equivalent to 15 and 22.5 months assuming the more standard formula and 80% subscription gross margins.
  • They define the SaaS quick ratio, which is a bad name [15] for a good concept.  In my parlance, it’s simply = new ARR / churn ARR, i.e., the ratio between inflows and outflows of the SaaS leaky bucket.  I generally track net customer expansion = new ARR – churn ARR, so I don’t have an intuitive sense here.  They say 4x+ is top quartile.
  • They define magic number on revenue, not ARR, as does its inventor.  I prefer CAC ratio because I think it’s more intuitive (i.e., S&M required to get $1 of new ARR) and it’s based on ARR, not revenue.  For public companies, you have to use revenue because you typically don’t have ARR; for private ones, you do.  They say a 1.0x+ magic number is top quartile.
  • They say S&M as % of revenue top quartile is 37% [16].
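
To restate Insight’s CAC payback numbers on the more standard basis, here’s a minimal sketch in Python (the 80% gross margin is an assumption; a ~67% margin gives the 1.5x end of the range).

    # Converting Insight's ARR-based CAC payback period to the standard
    # gross-profit basis: divide by subscription gross margin.

    def standard_cpp(arr_based_cpp_months, sub_gross_margin=0.80):
        """Months of subscription *gross profit* needed to pay back CAC."""
        return arr_based_cpp_months / sub_gross_margin

    print(standard_cpp(12.0))  # 15.0 months (their small/medium-deal top quartile)
    print(standard_cpp(18.0))  # 22.5 months (their large-deal top quartile)

    # And the SaaS quick ratio as I read their definition: inflows over
    # outflows of the leaky bucket. They say 4x+ is top quartile.
    def saas_quick_ratio(new_arr, churn_arr):
        return new_arr / churn_arr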

Retention and Churn Metrics (Slide 9)

OK, just a few more slides to go:

  • For NRR and GRR, they use a bridge approach (i.e., starting + adds – subtracts = ending) which calculates what I call lazy NRR and GRR. 
  • To me, these metrics are defined in terms of cohorts/snapshots (deliberately, to float over some of the things people do in those bridges) and you should calculate them as such.  See my post for a detailed explanation.  (A comparative sketch follows this list.)
  • Annual revenue churn, as defined, is pretty non-standard and a weak metric because it’s highly gameable.  You want to stop using the service?  Wait, let me renew you for one dollar.  The churn ARR masked as downsell would be invisible.  If you want to count logos, count logos — and do logo-based as well as dollar-based churn rates.  For more on churn rates and calculations, see Churn is Dead, Long Live Net Dollar Retention.
  • Net promoter score.  As mentioned above, I think they’re setting a high bar on NPS, saying the benchmark is 50%+.  I’d have guessed 25-30%+.
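
To illustrate the bridge-versus-cohort distinction from the first two bullets, here’s a minimal sketch in Python with toy customer data (names and figures made up).

    # "Lazy" NRR from an ARR bridge vs. NRR from year-over-year cohort snapshots.

    def lazy_nrr(starting_arr, expansion_arr, churn_arr):
        """Bridge-based: floats over whatever is buried inside the bridge."""
        return (starting_arr + expansion_arr - churn_arr) / starting_arr

    def cohort_nrr(arr_then, arr_now):
        """Snapshot-based: current ARR of the year-ago cohort divided by that
        cohort's year-ago ARR. New customers are excluded by construction."""
        then = sum(arr_then.values())
        now = sum(arr_now.get(customer, 0.0) for customer in arr_then)
        return now / then

    year_ago = {"acme": 100.0, "globex": 50.0, "initech": 50.0}
    today    = {"acme": 130.0, "globex": 40.0, "hooli": 80.0}  # initech churned; hooli is new

    print(cohort_nrr(year_ago, today))  # (130 + 40 + 0) / 200 = 0.85 -> 85% NRR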

GTM Strategy Metrics (Slide 10)

One more time, thoughts:

  • Selling motion is not really a metric, yet it’s defined here.  Moreover, it’s defined differently (and better) on slide 6.  They try to classify a company’s sales motion as the motion that has 75% or more of its reps.  This won’t work for many companies with multiple motions, where no one motion accounts for 75% of the team.  (A quick sketch after this list shows why.)
  • New (logo) ARR as % of new ARR.   I mapped this to my terminology for clarity.  They say 75% is top quartile, but that doesn’t make sense to me.  This is a Goldilocks metric, not a higher-is-better metric.  If you’re getting a lot more than 70% of your new ARR from new logos, I wonder why you’re not doing more with the installed base.  If you’re getting a lot less than 70%, I wonder why you aren’t winning more new customers.
  • Average sales cycle (ASC).  They say the benchmark is 3-6 months for a transactional motion (where just two rows above they use a different taxonomy of field, inside, and hybrid) and 9-12 months for consultative.  On slide 6 they say transactional is <3 months, solution is 3-9 months, and consultative is 6-12+ months.  It’s not shockingly inconsistent, but they need to clean it up.
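
Here’s a minimal sketch in Python of the 75%-of-reps rule as I read it, showing how a two-motion company falls through the classification.

    # Slide-10 rule as I read it: a company's sales motion is whichever motion
    # employs at least 75% of its reps. Multi-motion companies get no label.

    def classify_motion(reps_by_motion, threshold=0.75):
        total = sum(reps_by_motion.values())
        for motion, reps in reps_by_motion.items():
            if reps / total >= threshold:
                return motion
        return None  # no dominant motion -- the rule can't classify this company

    print(classify_motion({"transactional": 9, "consultative": 1}))  # "transactional"
    print(classify_motion({"transactional": 6, "consultative": 4}))  # None -- unclassifiable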

Sales Productivity Metrics (Slide 11)

Last slide, here are my thoughts:

  • Bookings per rep.  Just when we thought it was safe to finish with a simple, clear metric, we find an issue.  They define bookings/rep = new ARR / number of fully-ramped reps.  If the intent of the metric is to know what a typical fully-ramped rep can sell, it’s the wrong calculation.  What’s the right one?  Ramped AE productivity = new ARR from ramped reps / number of ramped reps.  As expressed, they’re including bookings from ramping reps in the numerator, and that overstates the productivity number.  See my post on the rep ramp chart for more.  (A comparative sketch follows this list.)
  • They say top quartile is $993K/year which strikes me as good in mid-market, light in enterprise, and impossibly high in SMB.
  • Here is where they really need to segment the benchmark by sales motion yet, despite the hubbub around defining sales motions, they don’t do it.
  • Pipeline coverage is somewhat misdefined in my opinion.  By default it should be calculated relative to plan, not a projection or forecast.  It should also be calculated on a to-go basis during the quarter (remaining pipeline / to-go to plan) and, in cases where the forecast is significantly different from plan, it makes sense to calculate it on a to-forecast basis as well.  
  • Conversion rate is defined correctly, provided we have a clear and consistent understanding of “starting.”  For me, it’s day 1, week 3 of the quarter — allowing sales two weeks to recover from the prior close and clean up this quarter’s pipeline.  Maybe I’m too nice; it should probably be day 1, week 2.  Also, remember that conversion rates are quite different for new and expansion ARR pipeline, so you should always segment this metric accordingly.  I look at it overall (aka blended) as well, but I’m aware that it’s really a blended average of two different rates, and if the mix changes, the rate will change along with it.
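
To show how the numerator mismatch in the first bullet overstates productivity, here’s a minimal sketch in Python with toy figures.

    # Bookings/rep as the report defines it (all new ARR / ramped reps) vs.
    # ramped AE productivity (ramped reps' new ARR / ramped reps). Toy figures.

    def report_bookings_per_rep(total_new_arr, ramped_reps):
        """Includes ramping reps' bookings in the numerator but not the
        denominator, overstating what a ramped rep actually sells."""
        return total_new_arr / ramped_reps

    def ramped_ae_productivity(ramped_new_arr, ramped_reps):
        """What a typical fully-ramped rep actually sells."""
        return ramped_new_arr / ramped_reps

    ramped_new_arr  = 8_000_000  # sold by the 10 fully-ramped reps
    ramping_new_arr = 2_000_000  # sold by reps still ramping
    ramped_reps     = 10

    print(report_bookings_per_rep(ramped_new_arr + ramping_new_arr, ramped_reps))  # $1.0M: overstated
    print(ramped_ae_productivity(ramped_new_arr, ramped_reps))                     # $0.8M: actual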

Sales & CS Center of Excellence (CoE) (Slide 12)

Alas, the pitch for Insight’s CoE begins here, so our work is done.  Thanks for sticking with me thus far.  And feel free to click through the rest of Insight’s deck.

Thanks to Insight for producing this report.  I hope in this post that I’ve demonstrated that there is significantly more work than meets the eye in understanding and interpreting a seemingly simple benchmark report.

# # #

Notes

[1] Ironically, CPP doesn’t even do this well. It’s a theoretical payback period, whereas capital budgeting is typically done on a cash basis. The problem? In enterprise SaaS, you typically get paid once per year, so on a cash basis an 8-month CPP is actually a 30-60 day CPP (i.e., the time it takes to collect receivables, known as days sales outstanding, or DSO) and an 18-month CPP is actually a 365-days-plus-DSO one. That is, in enterprise, your actual cash-basis CPP is always some multiple of 12 months plus your DSO.

[2] You can argue it’s a quasi-efficiency metric in that a faster payback period means more efficient sales, but it might also mean higher subscription gross margin. Moreover, the trumping argument is simple:  if you want to measure sales efficiency, look at the CAC ratio — that’s exactly what it does.

[3] CPP in months = 12 * (CAC ratio / subscription gross margin), see this post. Subscription GM usually runs around 80%, so rearranging a bit: CPP = 12 * (1/0.8) * CAC ratio = 15 * CAC ratio = 15 / magic number. Neat, huh? If you prefer assuming 75% subscription GM, then it’s 16 / magic number.

[4] I like metrics footing as a quick way to reveal differences in calculation and/or definition of metrics.

[5] The tildes indicate that I’ve eyeball-rebucketed figures because the categories don’t align at the low end.

[6] Dollar is used generically here to mean value-based, not count-based. But that’s an awkward metric name for a company that reports in Euros. Hence the world is moving to saying NRR and GRR over NDR and GDR.

[7] Referring to a sign at French railroad crossings and meaning that investors are less willing to look only at NRR, because a good NRR of 115% can be the result of 20% expansion and 5% shrinkage or 50% expansion and 35% shrinkage.

[8] I doubt there is a calculation difference here because GRR is a pretty straightforward metric.

[9] I define “bookings” as turning into cash quickly (e.g., 30-60 days).  It’s a useful concept for cash modeling.  See my SaaS Metrics 101 talk.  Here, I don’t think they mean cash, and I think they’re forced into using “bookings” because they haven’t defined new ARR as inclusive of both new logo and expansion.

[10] Because in early-stage companies total opex is often greater than revenue, but I prefer the consistency of just doing it against revenue and knowing that the sum of S&M, G&A, and R&D as a % of revenue may well be over 100%.

[11] Not an overlaid or otherwise double-counted quota, as a product-overlay salesperson or an alliances manager might carry.

[12] Bear in mind these are all medians of a distribution, so it’s certainly possible there is no inconsistency, but it is suspicious.

[13] There’s a lot of “you say tomato, I say tomato” here.  Some prefer to think, “how much do I need to burn to get $1 of net new ARR?” resulting in a multiple.  Others prefer to think, “how much net new ARR do I extract from $1 of burn?” resulting in what I’d call an extraction ratio.  I prefer multiples.  The difference between Bessemer’s original CAC ratio (ARR/S&M) and what I view as today’s standard (S&M/ARR) was this same issue.

[14] Scale does a similar thing with its magic number.

[15] It’s a rotten name because the quick ratio is a liquidity ratio that compares a company’s most liquid assets (e.g., cash and equivalents, marketable securities, net accounts receivable) to its current liabilities.  I think I get the intended metaphor, but it doesn’t work for me.

[16] They actually have this weird thing where they put each number in either black or orange.  Black means “benchmark,” but with an undefined percentile.  Orange means Insight top quartile, used because no industry-standard benchmark is available.  That claim is questionable, because there certainly are benchmarks for some of these figures out there.
