The Evolution of Software Marketing: Hey Marketing, Go Get [This]!

As loyal readers know, I’m a reductionist, always trying to find the shortest, simplest way of saying things, even if some precision gets lost in the process and the underlying reality is more subtle than the simplification suggests.

For example, my marketing mission statement of “makes sales easier” is sometimes misinterpreted as relegating marketing to a purely tactical role, when it actually encompasses far more than that.  Yes, marketing can make sales easier through tactical means like lead generation and sales support, but it can also make sales easier through more leveraged means such as competitive analysis and sales enablement, even more leveraged means such as influencer relations and solutions development, or the most leveraged means of all: picking which markets the company competes in and (with product management) designing products to be easily salable within them.

“Make sales easier” does not just mean lead generation and tactical sales support.

So, in this reductionist spirit, I thought I’d do a historical review of the evolution of enterprise software marketing by looking at its top objective during the thirty-odd years (or should I say thirty odd years) of my career, cast through a fill-in-the-blank lens of, “Hey Marketing, go get [this].”

Hey Marketing, Go Get Leads

In the old days, leads were the focus.  They were tracked on paper and the goal was as big a pile as possible.  These were the days of tradeshow models and free beer:  do anything to get people to come by the booth – regardless of whether they had any interest in or ability to buy the software.  Students, consultants, who cares?  Run their card and throw them in the pile.  We’ll celebrate the depth of the pile at the end of the show.

Hey Marketing, Go Get Qualified Leads

Then somebody figured out that all those students, consultants, and self-employed people worked at companies way outside the target customer size range and couldn’t actually buy our software.  So the focus changed to getting qualified leads.  At first, qualified basically meant not unqualified:

  • The lead couldn’t be garbage, illegible, or a duplicate
  • It couldn’t come from students, consultants, or the self-employed
  • It couldn’t come from people who clearly couldn’t buy the software (e.g., in the wrong country, at too small a company, in a non-applicable industry)

Then people realized that not all not-unqualified leads were the same. 

Enter lead scoring.  The first systems were manual and arbitrarily defined:  e.g., let’s give 10 points for target companies, 10 points for a VP title, and 15 points if they checked buying-within-6-months on the lead form.  Later systems got considerably more sophisticated, adding both firmographic and behavioral criteria (e.g., downloaded the Evaluation Guide).  They’d even have decay functions where downloading a white paper got you 10 points, but you’d lose a point for every week that passed with no further activity.
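
To make the mechanics concrete, here’s a minimal sketch of what one of those early scoring models might have looked like; the point values, field names, and one-point-per-week decay are all illustrative:

from datetime import date
from typing import Optional

def score_lead(is_target_company: bool, is_vp: bool, buying_in_6_months: bool,
               whitepaper_download_date: Optional[date], today: date) -> int:
    score = 0
    # Firmographic criteria
    if is_target_company:
        score += 10
    if is_vp:
        score += 10
    # Self-reported intent from the lead form
    if buying_in_6_months:
        score += 15
    # Behavioral criteria with decay: 10 points for the white paper,
    # minus one point per week of inactivity since the download
    if whitepaper_download_date is not None:
        weeks_idle = (today - whitepaper_download_date).days // 7
        score += max(0, 10 - weeks_idle)
    return score

print(score_lead(True, True, False, date(2020, 1, 1), date(2020, 2, 5)))  # 25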

The problem was, of course, that no one ever did any regressions to see if A leads actually were more likely to close than B leads and so on.  At one company I ran, our single largest customer was initially scored a D lead because the contact downloaded a white paper using his Yahoo email address.  Given such stories and a general lack of faith in the scoring system, operationally nobody ever treated an A lead differently from a D lead – they’d all get “6×6’ed” (6 emails and 6 calls) anyway by the sales development reps (SDRs).  If the score didn’t differentiate the likelihood of closing and the SDR process was score-invariant, what good was scoring? The answer: not much.
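
For what it’s worth, the sanity check nobody ran is simple to sketch: assuming a hypothetical export of historical leads with a grade and a closed/won flag (column names invented here), just compare close rates by grade:

import pandas as pd

# Hypothetical export of historical leads; "grade" and "closed" are invented columns
leads = pd.read_csv("leads.csv")
close_rate_by_grade = (
    leads.groupby("grade")["closed"]
         .agg(["mean", "count"])
         .rename(columns={"mean": "close_rate", "count": "n"})
)
print(close_rate_by_grade)  # if A leads close like D leads, the score isn't differentiating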

Hey Marketing, Go Get Pipeline

Since it was seemingly too hard to figure out what a qualified lead was, the emphasis shifted.  Instead of “go get leads” it became, “go get pipeline.”  After all, regardless of score, the only leads we care about are those that turn into pipeline.  So, go get that.

Marketing shifted emphasis from leads to pipeline as salesforce automation (SFA) systems, which made pipeline easier to track, were increasingly in place.  The problem was that nobody put really good gates on what it took to get into the pipeline.  Worse yet, incentives backfired: SDRs, who at the time were almost always mapped directly to quota-carrying reps (QCRs), were paid incentives when leads were accepted as opportunities.  “Heck,” thinks the QCR, “I’ll scratch my SDR’s back to make sure he/she keeps scratching mine:  I’ll accept a bunch of unqualified opportunities, my SDR will get paid a $200 bonus on each, and in a few months I’ll just mark them no decision.  No harm, no foul.”

Except the pipeline ends up full of junk and the self-fulfilling 3x pipeline coverage prophecy is born.  Unless you have 3x coverage, your sales manager will beat you up, so go get 3x coverage regardless of whether it’s real or not.  So QCRs stuff bad opportunities into the pipeline, which in turn converts at a lower rate, which in turn increases the coverage goal – i.e., “heck, we’re only converting pipeline at 25%, so now we need 4x coverage!”  And so on.

At one point in my career I actually met a company with 100x pipeline coverage and 1% conversion rates. 
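
The arithmetic behind the spiral is simple: the coverage target is roughly the inverse of the pipeline conversion rate, so junk pipeline that depresses conversion raises the very target it was stuffed in to hit. A quick sketch with the rates from above:

def coverage_needed(conversion_rate: float) -> float:
    # Pipeline needed to hit quota, expressed as a multiple of quota
    return 1.0 / conversion_rate

print(coverage_needed(0.33))  # ~3x coverage at a 33% conversion rate
print(coverage_needed(0.25))  # 4x coverage once conversion slips to 25%
print(coverage_needed(0.01))  # the 100x coverage, 1% conversion company above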

Hey Marketing, Go Get Qualified Opportunities (SQLs)

Enter the sales qualified lead (SQL). Companies realize they need to put real emphasis on someone, somewhere in the process, defining what’s real and what’s not.  That someone ends up being the QCR, and it’s now their job to qualify opportunities as they are passed over and to accept only those that both look real and meet documented criteria.  Management is now focused on SQLs.  SQL-based metrics, such as cost-per-SQL or SQL-to-close rate, are created and benchmarked.  QCRs can no longer just accept everything and no-decision it later and, in fact, there’s less incentive to anyway: SDRs are no longer basically working for the QCRs, but instead for “the process,” and they’re increasingly reporting into marketing to boot.  Yes, SDRs will be paid on SQLs accepted by sales, but sales is going to be held highly accountable for what happens to the SQLs they accept.
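
As a rough sketch of those SQL-based metrics (these are the standard ratio definitions; the figures are purely illustrative):

def cost_per_sql(marketing_spend: float, sqls_generated: int) -> float:
    return marketing_spend / sqls_generated

def sql_to_close_rate(sqls_closed_won: int, sqls_accepted: int) -> float:
    return sqls_closed_won / sqls_accepted

print(cost_per_sql(500_000, 200))    # $2,500 per SQL
print(sql_to_close_rate(40, 200))    # 0.2, i.e., a 20% SQL-to-close rate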

Hey Marketing, Go Get Qualified Opportunities Efficiently

At this point we’ve got marketing focused on SQL generation and we’ve built a metrics-driven inbound SDR team to process all leads. We’ve eliminated the cracks between sales and marketing and, if we’re good, we’ve got metrics and reporting in place such that we can easily see if leads or opportunities are getting stuck in the pipeline. Operationally, we’re tight.

But are we efficient? This is also the era of SaaS metrics, and companies are increasingly focused not just on growth, but on growth efficiency.  Customer acquisition cost (CAC) becomes a key industry metric, which puts pressure on both sales and marketing to improve efficiency.  Sales responds by staffing up sales enablement and sales productivity functions. Marketing responds with attribution as a way to try to measure the relative effectiveness of different campaigns.
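
As a point of reference, here’s a minimal sketch of CAC expressed as a ratio (sales and marketing spend per dollar of new ARR booked, the same form used in the churn example further down), with illustrative figures:

def cac_ratio(sales_and_marketing_spend: float, new_arr: float) -> float:
    # Dollars of sales & marketing spend per dollar of new ARR booked
    return sales_and_marketing_spend / new_arr

print(cac_ratio(15_000_000, 10_000_000))  # 1.5: $1.50 of S&M per $1.00 of new ARR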

Until now, campaign efficiency tended to be measured on a last-touch attribution basis. So when marketers tried to calculate the effectiveness of various marketing campaigns, they’d get a list of closed deals and allocate the resultant sales to campaigns by looking at the last thing someone did before buying. The predictable result: down-funnel campaigns and tools got all of the credit and up-funnel campaigns (e.g., advertising) got none.

People pretty quickly realized this was a flawed way to look at things so, happily, marketers didn’t shoot the propellers off their marketing planes by immediately stopping all top-of-funnel activity. Instead, they kept trying to find better means of attribution.

Attribution systems, like Bizible, came along that tried to capture the full richness of enterprise sales. That meant modeling many different contacts interacting with the company over a long period of time via various mechanisms and campaigns. In some ways attribution became like search: it wasn’t whether you got the one right answer, it was whether search engine A helped you find relevant documents better than search engine B. Right was kind of out of the question. I feel the same way about attribution. Some folks feel it doesn’t work at all. My instinct is that there is no “right” answer, but with a good attribution system you can do better at assessing relative campaign efficiency than you can with the alternatives (e.g., first- or last-touch attribution).
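
To make the difference in mechanics concrete (and not to suggest this is how Bizible or any particular system actually models it), here’s a hypothetical comparison of last-touch versus a simple linear multi-touch split, with made-up touchpoints and deal value:

from collections import defaultdict

def last_touch(touches: list, deal_value: float) -> dict:
    # All credit goes to the final touch before the purchase
    return {touches[-1]: deal_value}

def linear_multi_touch(touches: list, deal_value: float) -> dict:
    # Credit is split evenly across every touch in the journey
    credit = defaultdict(float)
    for campaign in touches:
        credit[campaign] += deal_value / len(touches)
    return dict(credit)

touches = ["display_ad", "webinar", "white_paper", "demo_request"]
print(last_touch(touches, 100_000))          # demo_request gets all the credit
print(linear_multi_touch(touches, 100_000))  # credit spread across the funnel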

After all, it’s called the marketing mix for a reason.

Hey Marketing, Go Get Qualified Opportunities That Close

After the quixotic dalliance with campaign efficiency, sales got marketing focused back on what mattered most to them. Sales knew that while the bar for becoming a SQL was now standardized, not all SQLs that cleared it were created equal. Some SQLs closed bigger, faster, and at higher rates than others. So, hey marketing, figure out which ones those are and go get more like them.

Thus was born the ideal customer profile (ICP). In seed-stage startups the ICP is something the founders imagine — based on the product and target market they have in mind, here’s who we should sell to. In growth-stage startups, say $10M in ARR and up, it’s no longer about vision, it’s about math.

Companies in this size range should have enough data to be able to say “who are our most successful customers” and “what do they have in common.” This involves doing a regression between various attributes of customers (e.g., vertical industry, size, number of employees, related systems, contract size, …) and some success criteria. I’d note that choosing the success criteria to regress against is harder than meets the eye: when we say we want to find prospects most like our successful customers, how are we defining success?

  • Where we closed a big deal? (But what if it came at really high cost?)
  • Where we closed a deal quickly? (But what if they never implemented?)
  • Where they implemented successfully? (But what if they didn’t renew?)
  • Where they renewed once? (But what if they didn’t renew again because of an uncontrollable factor, such as being acquired?)
  • Where they gave us a high NPS score? (But what if, despite that, they didn’t renew?)

The Devil really is in the detail here. I’ll dig deeper into this and other ICP-related issues one day in a subsequent post. Meantime, TOPO has some great posts that you can read.
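
That said, as a rough sketch of the exercise itself: assuming a hypothetical customer export with a few firmographic columns and whichever success flag you settle on (all column names invented here), the regression might look something like this:

import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.read_csv("customers.csv")  # hypothetical customer export
# Firmographic attributes (invented column names), one-hot encoded
X = pd.get_dummies(customers[["vertical", "employee_band", "related_system"]])
# Whatever "success" turns out to mean: closed big, renewed, expanded, ...
y = customers["renewed_and_expanded"]

model = LogisticRegression(max_iter=1000).fit(X, y)
weights = pd.Series(model.coef_[0], index=X.columns).sort_values()
print(weights)  # attributes most (and least) associated with the chosen success criteria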

Once you determine what an ideal customer looks like, you can then build a target list of them and enter into the world of account-based marketing (ABM).

Hey Marketing, Go Get Opportunities that Turn into Customers Who Renew

While sales may be focused simply on opportunities that close bigger and faster than the rest, what the company actually wants is happy customers (to spread positive word of mouth) who renew. Sales is typically compensated on new orders, but the company builds value by building its ARR base. A $100M ARR company with a CAC ratio of 1.5 and churn rate of 20% needs to spend $30M on sales and marketing just to refill the $20M lost to churn. (I love to multiply dollar-churn by the CAC ratio to figure out the real cost of churn.)
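
Restating that arithmetic as a quick sketch, using the figures from the example above:

def cost_to_replace_churn(arr: float, churn_rate: float, cac_ratio: float) -> float:
    # The real cost of churn: churned ARR times the CAC ratio to win it back
    churned_arr = arr * churn_rate
    return churned_arr * cac_ratio

print(cost_to_replace_churn(100_000_000, 0.20, 1.5))  # $30M of S&M just to stand still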

What the company wants is customers who don’t churn, i.e., those that have a high lifetime value (LTV). So marketing should orient its ICP (i.e., define success) not just in terms of likelihood to {close, close big, close fast} but in terms of likelihood to renew, and potentially not just once. Defining different success criteria may well produce a different ICP.
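
For completeness, here’s a rough sketch of LTV under the simple SaaS approximation (annual subscription times gross margin, divided by annual churn rate); the formula and inputs are illustrative, not anything specific to the argument above:

def lifetime_value(arr_per_customer: float, gross_margin: float,
                   annual_churn_rate: float) -> float:
    # Simple approximation: annual subscription margin divided by churn
    return arr_per_customer * gross_margin / annual_churn_rate

print(lifetime_value(100_000, 0.80, 0.10))  # $800K LTV for a $100K customer at 10% churn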

Hey Marketing, Go Get Opportunities that Turn into Customers Who Expand

In the end, the company doesn’t just want customers who renew, even if for a long time. To really build the value of the ARR base, the company wants customers who (1) are won relatively easily (win rate) and relatively quickly (average sales cycle), (2) not only renew but renew multiple times, and (3) expand their contracts over time.

Enter net dollar expansion rate (NDER), the metric that is quickly replacing churn and LTV, particularly among public SaaS companies. In my upcoming SaaStr 2020 talk, Churn is Dead, Long Live Net Dollar Expansion Rate, I’ll go into why this is happening and why companies should increasingly focus on this metric when thinking about the long-term value of their ARR base.
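
As a minimal sketch, here’s NDER under the common cohort definition (ARR today from the customers you had a year ago, divided by that same cohort’s ARR a year ago), with illustrative figures:

def net_dollar_expansion_rate(cohort_arr_year_ago: float, cohort_arr_today: float) -> float:
    # ARR today from last year's customers, divided by their ARR a year ago
    return cohort_arr_today / cohort_arr_year_ago

# e.g., a year-ago cohort at $50M that churned $5M but expanded by $15M
print(net_dollar_expansion_rate(50_000_000, 60_000_000))  # 1.2, i.e., 120% NDER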

In reality, the ultimate ICP is built around customers who meet the three above criteria: we can sell them fairly easily, they renew, and they expand. That’s what marketing needs to go get!

Responses to “The Evolution of Software Marketing: Hey Marketing, Go Get [This]!”

  1. Regarding companies in the $10M ARR area, I’d think that regression between attributes of customers and success criteria would be best used to tell you who NOT to go after, rather than as a limiter. At that stage, there are likely many directions you haven’t yet explored.

    • Agree, I may have been hasty in saying to pattern match at $10M. I still think the exercise is useful (I think it’s useful as a vision statement at $0M ARR). But you’re right, it may tell you where not to focus. Or, as I often tell folks I work with, even at $30-$50M, no pattern is a pattern. If we can’t find, e.g., that any one vertical closes faster, closes bigger, renews better, or expands better than any other, then *that* is data. Yes, you can impose a vertical strategy on the company in the name of focus and solutions, but one has not “emerged” from the data.



  2. Colin Corstorphine

    Reading this again (since it popped up on Twitter) and can’t express how much the lessons in here are still missed by so many startups. SMH over and over and over.

