Support Talks: Customer Retention Experiments (and their shocking results)

Craig Stoss · 5 min read

Customer retention is an important topic for all departments to understand.

Typically, the onus to renew a customer falls on account or success managers, but every aspect of the customer journey contributes to retention. Clay Telfer, a Customer Success Executive who’s built and led Success functions since 2011, wanted to explore how different retention strategies could work.

Benchmarking these strategies across companies is difficult. With different customer personas, business models, products, etc., a strategy that works great for company A may fall flat for company B. How can you determine which will have the most meaningful impact?


Craig: What did you observe that sparked an interest in running this experiment?

Clay: My old boss and I had a friendly disagreement that I’ve seen a lot among Customer Success professionals. I prefer retention strategies that focus on providing an effortless experience to the customer. She preferred retention strategies that focus on “Wow” Moments, Surprise and Delight, and so on.

There’s a time and place for both approaches, of course! We just disagreed on which approach was generally the most impactful. And then my boss had an idea that might improve our retention, and which had the added bonus of letting us test the success of a Surprise and Delight strategy.

(It’s worth noting that I’m speaking specifically of B2B here, and B2C may have different results.)

Craig: Can you explain the experiment and your starting hypothesis?

Clay: In this experiment, we split our new clients into a Control group and an Experimental group. If you ended up in the Experimental group, then we’d send you a surprise gift each month for your first six months. Stickers, swag, and similar things that we knew our clients enjoyed. If you were in the Control group, you didn’t get any of these gifts.

For six months, we put every other client into the Experimental group. Over the course of a year, we compared the customer retention of each cohort – Month 1 Control vs Month 1 Experimental, Month 2 Control vs Month 2 Experimental, and so on. At the end of this experiment, we’d have six cohorts of each group to compare, each of which would be between 6 and 12 months from their initial purchase.

As you can probably guess, we actually had two competing hypotheses here! My boss thought that the Experimental group would have better retention numbers than the Control group, and I thought they would have roughly the same retention numbers.
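To make the cohort bookkeeping concrete, here’s a minimal Python sketch of the alternating assignment and month-by-month comparison Clay describes. The client records and field names are hypothetical, invented purely for illustration; the real experiment was tracked in spreadsheets, not code.

```python
from collections import defaultdict

# Hypothetical client records: signup month and whether the client was
# still active at analysis time. Names and fields are illustrative only.
clients = [
    {"name": "Client A", "signup_month": 1, "retained": True},
    {"name": "Client B", "signup_month": 1, "retained": False},
    {"name": "Client C", "signup_month": 2, "retained": True},
    {"name": "Client D", "signup_month": 2, "retained": True},
]

# Put every other new client into the Experimental group.
for i, client in enumerate(clients):
    client["group"] = "Experimental" if i % 2 == 0 else "Control"

# Compare retention cohort by cohort: Month 1 Control vs Month 1
# Experimental, Month 2 Control vs Month 2 Experimental, and so on.
cohorts = defaultdict(list)
for client in clients:
    cohorts[(client["signup_month"], client["group"])].append(client["retained"])

for (month, group), outcomes in sorted(cohorts.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"Month {month} {group}: {rate:.0%} retained ({len(outcomes)} clients)")
```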


Craig: Did you control for any other factors? Tickets? CSAT or NPS? Did you feel those were relevant to the experience?

Clay: We didn’t, and I think doing so would largely defeat the purpose. We wanted to test the effect of the gifts on our customers as they actually are, and some of those customers are going to have a hundred tickets, or be unresponsive after launch, or give a really great or terrible NPS score.

We started by identifying a handful of situations where a customer would cancel during onboarding or soon after launch. These were instances where a customer would realize very quickly that they had misunderstood a fundamental aspect of the service, and in those cases, we’d unwind the contract.

We’d remove these customers, and only these customers, from the experiment entirely. Since the first gift for the Experimental group was sent roughly a month after launch, that was pretty easy.

We also removed a couple of clients to whom we’d sold custom solutions. That customer profile would be getting an entirely different customer experience and couldn’t be compared to other customers. We also verified that the Control and Experimental groups were distributed fairly randomly across regions and other variables.

Outside of that, as we analyzed the results, we’d just compare some of the basic stats between the two groups and make sure there weren’t any large discrepancies. If we’d found that Experimental Group 3 had a vastly different number of tickets than Control Group 3, we would have looked into that.

But the thing is, you should expect to see differences like that. What we’re doing is testing a different customer experience, right? So maybe the Experimental Groups are going to have way better NPS scores, or way more tickets, or something else. That’s not a flaw that needs to be controlled for, that’s the exact information you’re trying to learn.
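For readers who like to see that kind of sanity check written out, here’s a hedged sketch of comparing basic cohort stats and flagging large discrepancies. The numbers and the factor-of-two threshold are invented for illustration, not data from Clay’s experiment.

```python
# Hypothetical per-cohort summary stats; the metric names and values are
# illustrative, not taken from the actual experiment.
cohort_stats = {
    ("Control", 3): {"avg_tickets": 4.2, "avg_nps": 38.0},
    ("Experimental", 3): {"avg_tickets": 11.7, "avg_nps": 41.0},
}

# Flag any metric where a cohort differs from its matched counterpart by
# more than an (arbitrary) factor of two -- a prompt to look closer, not
# a reason to throw the cohort out.
DISCREPANCY_FACTOR = 2.0

for (group, month), stats in cohort_stats.items():
    if group != "Experimental":
        continue
    control = cohort_stats[("Control", month)]
    for metric, value in stats.items():
        baseline = control[metric]
        ratio = max(value, baseline) / min(value, baseline)
        if ratio > DISCREPANCY_FACTOR:
            print(f"Cohort {month}: {metric} diverges "
                  f"(Control={baseline}, Experimental={value})")
```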

Craig: What tools and processes did you use to drive the A/B tests?

Clay: Google Sheets. #startuplife and all that, right? But also, we didn’t want to invest much time or money in tools or processes for something so experimental, since it might all get thrown out once the experiment was done.

I was able to put together a structure that kept this to about 10 person-hours per month, and most of that was just the physical packaging and shipping of the gifts. We’d get five people who could spare an hour and sit down at a table to fold fancy cardboard envelopes into shape, pack them with stickers and thank-you notes, and all that. The rest was tracked in Asana and those Google Sheets.


Craig: What were the results and did they surprise you?

Clay: The results were actually surprising to both of us – the Experimental group performed worse than the Control group. I hadn’t expected the gifts to make enough difference to be cost-effective, but I certainly didn’t expect they’d hurt our customer retention efforts!

Craig: That is shocking! What conclusions did you draw from that?

Clay: The biggest conclusion is that it’s important to test all your assumptions. It would have been so easy to think “Sending free gifts can’t hurt, everyone loves free gifts, let’s just start doing it and see how it goes.”

I think this is especially true of customer retention experiments just because of how long they take. So much of what we do affects not tomorrow but three or six months down the road, and that can make it tempting to just run with something you haven’t tested.

My second conclusion is that the Surprise and Delight strategy isn’t as effective for B2B clients as a lot of CS folks think it is. Do you know what our clients really wanted? More space. Each of the individual gifts was well received, and the same items had gotten great feedback from existing clients in the same demographics, but it added up to too many touches.

And even in the earliest months of the lifecycle, it wasn’t improving customer retention. Making your product champion a little happier doesn’t really change the ROI calculations that their company is doing.


Craig: What would you recommend to other companies looking to test similar strategies?

Clay: Two things. First, it’s all about the details. As an example, we couldn’t send the gift boxes to all experimental customers at once, because they had signed up at different times of the month. To make sure everyone was getting as similar an experience as possible, we had to send out 3 shipments a week.

That way someone who signed up on January 3rd would get their first gift around February 3rd and someone who signed up on January 26th would get their first gift on February 26th. In any kind of test like this, it’s vital to make sure you’ve found all those little things that can prevent the experiences from being truly apples-to-apples.
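As an illustration of that kind of detail work, here’s a minimal sketch of picking, for each client, the shipment day closest to their monthly anniversary when packages only go out three days a week. The Monday/Wednesday/Friday schedule is an assumption for the example; Clay only specified three shipments per week.

```python
from datetime import date, timedelta

# Assumed shipment days (Monday/Wednesday/Friday); the real cadence was
# simply "three shipments a week", so this particular schedule is illustrative.
SHIP_WEEKDAYS = {0, 2, 4}  # Monday=0, Wednesday=2, Friday=4

def monthly_anniversary(signup: date, months_later: int) -> date:
    """Roughly N months after signup, clamped to day 28 so short months
    like February don't break the arithmetic."""
    month_index = signup.month - 1 + months_later
    year = signup.year + month_index // 12
    return date(year, month_index % 12 + 1, min(signup.day, 28))

def nearest_shipment(target: date) -> date:
    """Pick the scheduled shipment day closest to the target date."""
    best = None
    for offset in range(-3, 4):
        candidate = target + timedelta(days=offset)
        if candidate.weekday() in SHIP_WEEKDAYS:
            if best is None or abs((candidate - target).days) < abs((best - target).days):
                best = candidate
    return best

# A client who signed up on January 3rd gets their first gift around February 3rd.
print(nearest_shipment(monthly_anniversary(date(2020, 1, 3), 1)))
```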

Second, make sure you’ve got a test that uses minimal resources. Because this experiment didn’t cost much in terms of money or person-hours, it wasn’t that big a deal when it didn’t pan out. We’d spent some resources to answer a long-standing question, which was “should we shift our customer retention strategy more towards Surprise and Delight?”

And even though the answer turned out to be “No”, it was still kind of a bargain. In the end, we spent those resources to make sure that we didn’t commit a lot more resources to a larger shift in retention strategy that wouldn’t have panned out for us. 



Craig Stoss

Craig has spent time in more than 30 countries working with support, development, and professional services teams building insight into Customer Experience and engagement. He is driven by building strong, effective support and services teams and ensuring his customers are successful. In his spare time Craig leads a local Support Thought Leadership group. He can be found on Twitter @StossInSupport
