January 2, 2020

A/B Testing with Optimizely

As a veteran of Silicon Valley with tenures at multiple massive, industry-leading tech companies, Carl Tsukahara has developed a serious taste for adventurous testing, especially in B2B. That doesn't mean running around pushing buttons and selecting metrics at random; it means breaking free from the common snare of scheduling multiple meetings and email threads before adjusting even a small facet of a landing page for an A/B test.

Speaking of A/B testing, Tsukahara's career path has led him to his current role as Chief Marketing Officer of Optimizely, which is essentially the home of the A/B test. The company also calls itself "the world's leading experimentation platform," making it a fitting place for a bold marketer and tester like Tsukahara. In this interview, Carl discusses what different companies ought to be testing for, how to set up guardrails to guide your testing, what we can learn from big brands like Amazon and Netflix, and much, much more. Check it out!

Full Transcript: Drew Neisser in conversation with Carl Tsukahara

Drew Neisser: Hello, Renegade Thinkers. Here's the question of the day: do you trust your gut instinct? I know I do. I've been in this business a long time and I've had lots of experiences where my gut came through for me, so I believe in my gut. But I've been listening to Malcolm Gladwell's latest book, Talking to Strangers. I'm about a quarter of the way through it, and the first part is pretty much an indictment of your gut instinct.

It goes like this. He provides an example that I think was so interesting. Take a judge who is responsible for deciding whether or not someone should get bail. Is this person going to jump bail, is this person going to commit a crime while they're on bail, or is it all going to be fine? Now, this judge had 25 years of experience making this call over and over again. What was remarkable was that when researchers created an algorithm, it got the same information the judge did. Without being able to see or hear the individual, the algorithm was 20 times better at predicting bad behavior than the judge.

Part of it is our personal biases. We just think that someone is more likely to be innocent than guilty; we default to the positive. The same thing came up, amazingly, and Gladwell talks about it—how bad spies are at guessing who the other spies are. It's amazing. This notion of whether or not we can trust our gut is something we're going to spend a lot of time talking about on the show, and what it really leads to is this: we have a unique opportunity today, because we can test a lot of stuff rather than having to trust our gut.

The expert on this is Carl Tsukahara, who is the CMO of Optimizely, a company that describes itself as the world’s leading experimentation platform. If you’re familiar with Optimizely, it’s really the home of the A/B test. I mean, that’s my understanding of the brand. But anyway, Carl, welcome to the show.

Carl Tsukahara: Well, hello, and good afternoon. Nice to be here. Thank you for having me.

Drew Neisser: It’s great. And, by the way, love your name. My father is a Carl and my son is a Carl, so awesome to have a Carl on the show.

Carl Tsukahara: We’ve got the Karl triad going, I’m excited about that as well.

Drew Neisser: There it is. Yes, my dad actually was episode 100 (listen here). We recorded it a year ago when he was 92. People have enjoyed that. Anyway, you are a veteran of Silicon Valley. You've worked at Oracle, Evolve, and Birst, among others, so let's fast forward. You started at Optimizely a little more than two years ago. What was the situation? I'm curious—what was your mandate?

Carl Tsukahara: Yeah, I think there are a few things, Drew. The situation was that the company had gotten to a certain point in its growth, and through a subtle change in strategy decided to spend more time with enterprise customers. The company had grown, as you mentioned earlier, as an A/B testing tool, but primarily sold to small groups and practitioners in a bottom-up sense.

What we realized is that while it's still important to see the market bottom-up, one of the key to-dos—both from Jay, the CEO, and the board—was to help pivot this to more of an enterprise company. I think what happened is there was a lot of evidence. We had customers like IBM and HP, big companies that were saying they wanted to take this experimentation, test-and-learn process further: "It's not a tools business. It is a strategic imperative for our organization."

This was coming from people like Michelle Peluso, the CMO at IBM, saying, "We want to really take this out and make it an enterprise system and practice." At the time, if you positioned yourself as an A/B testing tool, I mean, that's interesting, but I don't think that gets the attention of CMOs or Heads of Digital at some of the biggest firms in America, or the world for that matter.

So really, the imperative was: how do you transform the positioning, the messaging, and the go-to-market motion so that it's not a velocity business, it's a strategic business? You can take that to different roles in the organization, not just the marketing folks, but also to developers and product people, who all want to make sure that when they deploy something interesting, it works. You talked about the whole guesswork notion, but how do you communicate in a different way so people know that this is a practice and not just a tool?

Drew Neisser: When you arrived—because I checked the press release—Optimizely was calling itself the world's leading experimentation platform. That to me was new language, because I hadn't actually been on Optimizely in the last two years, sad to say, and that's a big promise. An experimentation platform is much bigger than A/B testing. Let's connect that. It sounds like that message hadn't necessarily laddered up to enterprises on a broad level, saying, "Oh wait, this is a strategic imperative as opposed to this executional-level, 'this color versus that color' test." Is that what we're talking about?

Carl Tsukahara: Yeah. We do have a phenomenal technology platform that does these experiments at scale, and that's a huge winner. The change we made, Drew, which was important, was to ask: "For what? Why does this matter to your business?" We pivoted the messaging to say that what this is really about is experience optimization. It was really a categorical positioning, which is very important here, because at the end of the day, what are marketing leaders and business leaders and product developers trying to do?

They’re trying to deploy experiences to their users, internal or external, that win. And when I say “win” I mean that delight the users, convert users, drive more e-commerce revenue, drive more conversion from paid search ads, and allow you to experiment on pricing. These are important decisions at the business level, but they’re all about the experience. It’s about the experience, the deployment of the experience, and the testing of the experience, so when you push this out at scale against your users, it wins, and it really provides and delivers the outcome that you need.

But I think that the second piece that I’m talking about, which is really the business relevance of experimentation, wasn’t really quite there, so we worked hard to build this categorical definition out called “experience optimization.”

Drew Neisser: There are a lot of people fighting for experience optimization in different ways. I had Alicia Tillman, CMO of SAP, on the show (listen here), and of course, they had recently completed their acquisition of Qualtrics, and their advertising had moved into this whole experience world. In fact, they created some pretty funny ads around it. But what SAP is talking about when they talk about experience and what you're talking about when you talk about experience feel kind of different. Or are they the same?

Carl Tsukahara: We really try to stay focused, Drew. I think there's some overlap, for sure, because at the end of the day, when I say "experience," I just think there's somebody at the end of that experience that you're trying to delight and get to engage with you. That's the similarity.

I think the difference is that what we do is really focused on the digital experience. If you're trying to walk your customer through, please your customer, or convert your customer—whether they're before the acquisition phase, in the middle of it, you're trying to get them to check out and buy something, or you're trying to create loyalty—we focus on that in the digital channel. That might be a website or a mobile app; we even do a lot of things with media companies on over-the-top video, because people are cutting the cord. If you use Hulu or something like that and you want to use an internet app to watch your TV show or your movie, that's also a channel now for talking to your customers. We think of those as a collection of digital experiences, and that's where we segment ourselves off.

The other piece where we segment ourselves off from others is that we really are, I believe, the leaders in using data, evidence, testing, and statistics to make sure that every time you roll out one of those changes—and a change could be as small as a button color or text, or as complicated as a pricing model—you will know, with an experimentation mindset, that it's going to win. You've tested it, you've created evidence that it will work and convert the way you want it to, and then you have the opportunity to deploy it at scale. That's how we look at this. It's about the digital experiences and touchpoints and making sure each one you roll out is a winner.

Drew Neisser: Cool. All right. We’re going to take a quick break and when we come back, I want to dive into a lot of this stuff. There’s stuff to unpack here. Definitely. Stay with us.

BREAK

Drew Neisser: We’re back, and we’ve been talking about the importance of experience optimization and you mentioned something, Carl, about experimentation and the importance of experimentation. I am curious—if I am a B2B company, let’s say I’m a software service company, and I have not been doing that. I probably have a fair amount of marketing technology in place, obviously, I have Salesforce and I have some kind of marketing automation and I probably have Tableau and I have a stack. They’re doing demand gen campaigns, they’re doing all sorts of content marketing out there, where does Optimizely and what you guys do fit in in that world?

Carl Tsukahara: There are so many places, Drew. Just to take some examples, think of something simple—you're doing paid search, you're doing Google AdWords. I would ask this question: Is the messaging in your ads right? Is the language you're using correct? What about the experience that happens when somebody clicks through the link and gets to your landing page? Do you have the right images? Colors? Offer strategy? That's just one example. How do you know whether you could get ten percent better? Twenty percent better?

Or think of this: maybe you have a self-service acquisition model and you get somebody to a paywall. They say, "Hey, I hear there's a free version of the product…uh-oh! Now I have to pay." How do you make sure that paywall is right? How many screens should it have? Should you have one expandable experience? Should you have three different click-throughs? What's the pricing? Should it be $9.95 for the first month to get from the trial version to one user? There are so many things like that where we help the marketer convert.
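
To make Carl's landing-page example concrete, here is a minimal sketch of how a split like this is typically wired up: each visitor is deterministically bucketed into a variant so they see the same page on every visit. This is illustrative only, not Optimizely's implementation; the experiment and variant names are made up.

```typescript
import { createHash } from "crypto";

// Hypothetical landing-page variants under test.
const VARIANTS = ["control", "new_headline"] as const;
type Variant = (typeof VARIANTS)[number];

// Deterministic bucketing: hash (experimentId + userId) so the same visitor
// always lands in the same variant across sessions.
function assignVariant(experimentId: string, userId: string): Variant {
  const digest = createHash("sha256").update(`${experimentId}:${userId}`).digest();
  const bucket = digest.readUInt32BE(0) / 0xffffffff; // map onto [0, 1]
  return bucket < 0.5 ? VARIANTS[0] : VARIANTS[1];
}

console.log(assignVariant("adwords-landing-test", "visitor-42")); // stable per visitor
```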

Drew Neisser: That’s the thing that’s really interesting to me. I’ve been for a long time looking at our clients and rambling about—or sometimes ranting about—the notion of cost-per-lead and that people optimize for cost-per-lead. The problem with that is, of course, not all the leads are equal, and even if you do it on a cost-per-acquisition basis, not all customers acquired are good customers. Sometimes they came in on a price promotion, they leave on the other end, you’ve got churn, you’ve got a high cost of acquisition, relatively speaking, but very low lifetime value.

We’ve been trying to, unsuccessfully, get someone out there to come up with a cost-per-happy-customer-acquired, so that you acquire the right customers for your organization, which are the ones that really match because every customer represents either someone who can talk about your business or someone who can say some nasty about it.

What I naturally get concerned about when I hear you can test every little bitty thing is that you can optimize for driving clicks. You can always do that. You can optimize for clicks, but those clicks don't all have the same value. How does this work? I imagine this is where the integration with all the other tools starts to matter, but how do you deal with micro testing versus the big picture, particularly for companies that have a 12- to 18-month sales cycle and 10 people on the buying committee? You know, it's tricky.

Carl Tsukahara: The nice thing about platforms like ours, and there are others, of course, is that you can test for any kind of outcome you want. It doesn’t have to be click-through rate. Maybe there’s something like, how do you get people to want to request a demo? Let’s just think of the classic B2B use case.

At almost every place I've been, there are certain actions, like contact sales, talk to an adviser, or look at a demo, that—for a long period of our existence—have been high-converting actions. As we all know, and I think Forrester Research has talked about this ad nauseam for about a decade, most of that evaluation or engagement by your prospect, or even a customer you're trying to expand, happens without you knowing. Maybe you have these people cookied, maybe you don't, but there's a lot of anonymous evaluation.

Once they do start to engage with your properties, where you can look at actions, how do you convert on those pieces? Obviously, if you have the blended digital-plus-human conversion that takes place in "n" number of months, three, six, nine, twelve, depending on the size of your transaction, you need to think about that, but there are always high-converting actions, right? Even when we work with a B2B company such as Atlassian—here's an interesting company. They are growing like wildfire selling to technical people with very few salespeople.

When you think about what we do for a company like that, they have a lot of self-service ways to expand. Say I'm using Jira or Bitbucket or any of their products, and they're announcing something new. Can they present that inside the actual user experience, where people go, "That's a really interesting add-on! I should be taking a look at that"? That is an experience in its own right, and it's one that can lead directly to revenue.

The point I’m making is that those are two extremes of a model. One is more organic and grows through user adoption and in-product experiences, while the other goes all the way to a large company that says, “Hey, I’ve got a brand new sales process going for my enterprise software piece where I’m just trying to convert to key action points that drive down the funnel a higher bookings conversion.” This is something that we all know as journey-oriented marketers—what is that combinational logic of people and what is the combinational logic of events I’m trying to do? Again, the benefit of this platform is that we can go as far as helping you measure what actually drives a revenue conversion or a high converting action, which is part of the journey to get a large purchase.

Drew Neisser: I’m curiousby the way, we had Atlassian on the show (listen here) and part of the conversation was that they had a very interesting culture in terms of not serving coffee so they get people to go out and talk. But also, they have the advantage that so many companies do, they have an essentially free trial product that makes the barrier to trial really, really low, which is a great thing when that works.

When I think about high-converting actions—years ago, I talked to Jon Miller when he was at Engagio (listen here), and he said that if someone goes to the pricing page, he knows they're interested. That's the moment you really have to jump on. Similarly, I know from other clients we've worked with that if someone hits the demo button, they're interested. Optimizing the demo experience and how they go through it makes a lot of sense, because those folks are really primed. You don't want to blow it there.

It’s just so interesting to me because I’ve seen so many times where you micro tested your way out of a big idea. And what’s hard when you’re dealing with big B2B purchases is that you do have so many decision-makers involved, and they do come at it from a different standpoint. But if you present different messages to each of the decision-makers about your company, when they come together as a committee, you fail. And you don’t even see that.

You could have optimized each of those individual touchpoints with separate messaging and it seemed like it was good, but the decision was made when the committee came together, not online but offline. So it's tricky, and this is the balance between the micro and the macro picture of what your company is about and what you stand for. It's a complicated thing, and I get it. I've also seen companies, even on a B2C level, where they had a choice: they could improve their click rates by going off-brand.

Right? There’s a question: I just improved my click rates by going off-brand, is that a good call?

Carl Tsukahara: I think brand is extremely important. It just depends on how strong a company thinks its brand is and what it stands for. We've actually tried to think about how you can enforce your brand in the context of your testing program, because you have to put a bit of governance and guardrails around testing. That's one of the things we really believe in, especially when we're dealing with larger B2B entities.

I mentioned IBM, but we have HP and a lot of other big tech companies as customers, and you have to really allow for governance. Part of that governance may be to say, "Look, I'm going to create a brand-safe environment in which you can test. You can put offers out there and other things, but here's how you can create a compliance-oriented framework to test." This is something that is a byproduct of how the organization wants to think about their brand, but we want to give them that flexibility because, I think you're right, Drew, you can't just let this thing run wild.

We thought about that, and about how to not only have some of that capability in the product to drive governance, but also how some of this is just organizational practice. A lot of the customers we work with benefit from more than the platform. A third of the top Interbrand companies in the world are our customers. We've done this so many times with very successful orgs, and part of that is encapsulating the product discipline within an organizational discipline: how you should be creating ideation, how you should be prioritizing it, getting it scored by different people, and building a test pipeline. How will you isolate audiences? There's a lot in the practice that's important to get right. It can't be testing gone wild. That's just not what we're trying to do.
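
One way to picture the governance Carl describes is a pre-launch checklist codified in code. Everything below is hypothetical—the fields, thresholds, and names are invented for illustration, not an Optimizely API—but it shows what brand-safe, compliance-oriented guardrails with audience isolation can look like in practice.

```typescript
// Hypothetical guardrail check an experimentation team might run before a
// test goes live: a named senior sponsor, brand-approved copy, bounded
// offers, and audience isolation (no overlapping tests on the same users).
interface TestProposal {
  hypothesis: string;
  sponsor: string;            // senior-level owner of the business metric
  primaryMetric: string;      // e.g. "demo_requests", not just raw clicks
  maxDiscountPercent: number; // offer guardrail
  audienceIds: string[];      // audiences this test will touch
  copyApproved: boolean;      // passed brand/compliance review
}

function violations(p: TestProposal, runningAudiences: Set<string>): string[] {
  const issues: string[] = [];
  if (!p.sponsor) issues.push("no senior sponsor");
  if (!p.copyApproved) issues.push("copy not brand-approved");
  if (p.maxDiscountPercent > 20) issues.push("offer exceeds discount guardrail");
  for (const a of p.audienceIds) {
    if (runningAudiences.has(a)) issues.push(`audience ${a} already in a test`);
  }
  return issues;
}

const running = new Set(["smb_trial_users"]);
console.log(
  violations(
    {
      hypothesis: "Shorter form lifts demo requests",
      sponsor: "VP Marketing",
      primaryMetric: "demo_requests",
      maxDiscountPercent: 10,
      audienceIds: ["smb_trial_users"],
      copyApproved: true,
    },
    running
  )
); // -> ["audience smb_trial_users already in a test"]
```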

Drew Neisser: No testing gone wild. Perfect place to pause, because we're going to come back and talk about the opposite of testing gone wild and provide, if we can, a primer for setting up an ideal B2B experience testing machine. Stay with us.

BREAK

Drew Neisser: As we think about testing, maybe we could start by completing the sentence "Testing goes wrong when…" so we can set the framework. You need governance and guardrails, so when does it go off the rails?

Carl Tsukahara: At the highest level, the first thing I would nail down, for anyone listening to this, is: what are you testing for? This sounds really basic, Drew, but we've seen programs without senior-level sponsorship where you don't know what you're shooting for. What are you shooting for? Is it acquisition? Is it share of wallet? Is it NPS? I think the testing programs that go bad are the ones where you don't really know how you line up with the important objectives of the company.

That sounds so basic, but I think it happens on occasion, and it's a two-edged sword. When you do line up with the key business objectives of the organization, you have a huge winner on your hands, because the whole concept of test-and-learn is that you don't get them all right. You'll find out which ones would actually move the needle the wrong way, and you'll only deploy the ones that move the needle the right way. If you want to talk about a departmental impact on the business: avoid the bad and do more of the good. That's how the program goes.

Another way that the program can go poorly is if you don’t have alignment between the people that are doing the testing and the important lines of business, which could be in marketing, could be the product team, and in some cases could even be development.

Sometimes testing is an expertise that you can create with the right structure. We’ve seen a lot of places create a center of excellence model that supports different lines of business. The BBC does that at scale right now. We see others where they want to put it in the different lines of business and let them focus on different parts of the funnel, and that’s fine, too.

The folks at Sky TV—even though this is more of a B2C example—are thinking about the customer service angle and how they can drive loyalty by making sure the online or offline customer experience is outstanding every single time. So those are the things I would focus on: alignment with the business objectives, which people can get wrong; the right structure; and return on investment, because we've seen dramatic improvements in ROI with this type of capability. If you get those three things right, your probability of being successful with this is really high.

Drew Neisser: One of the things a lot of people think about when they think about A/B testing and optimizing is acquisition, and you just mentioned retention. I'm curious—when we think about B2B marketing, our priority in terms of communicating is employees, then customers, then prospects, particularly when you're creating or launching a new campaign. Do any of your customers actually test employee engagement and likelihood to recommend? Do they do anything on that level?

Carl Tsukahara: Absolutely. For the most part, Drew (I don't know the exact percentage, 80% or something), it's customer-facing for the whole cycle, but we do have that other 20%. Think about insurance. Insurance companies often still have a large network of either direct or affiliate agents. If you can get agents, or salespeople for that matter, to engage with an experience that helps their productivity out in the field, that's powerful—sometimes this is tens of thousands of people in some of the large organizations.

Imagine I can go to a contact center experience and say to someone, "Hey, that flow of engagement you have with your customer, even if it's physical, even if you're sitting with a headset on having a phone call with a prospect: how are you supposed to answer questions? What kinds of documents should you push to that customer?" All those things can be tested in an operational flow. Again, it is a digital experience, because at the end of the day the agent is sitting there and screen pops are happening that tell them what to do, but you can test all kinds of stuff, and that's really reaching the customer at the next level.

Here’s an example: “Would you want to test parts of your sales engagement process?” When you get a prospect to stage four, what two or three things do you want them to do in that next level to get from stage four to closed? Maybe you want to test that with a new product. There are so many things you can do. These might be digital experiences where someone connects to Salesforce and looks at the next part of their sales plan to close customer XYZ, but there are all kinds of ways to think about doing testing all the way through to things that face the internal audience—your employees—ultimately to serve your customer better.

Drew Neisser: Yeah, I love that. I could easily see testing messages or motivational ideas against the sales force. And certainly in a call center. I had the CEO of Pega on the phone, and we talked about their machine learning tool for next best action, where someone at Verizon knows that you've called, that you're a Verizon customer, and that you have an iPad and a cell phone. The next best action might be to say, "Hey, we have a special deal on the Apple Watch that you could activate via Verizon." It's remarkable because they have so many data points; that's how machine learning can work.

I’m wondering, as we try to get some helpful hints here to folks, what do you think are the most basic elements that B2B marketers are missing in their experimentation flow?

Carl Tsukahara: That’s a good question. I think there are many. A lot of B2B marketers, especially if they’re data-driven and they think about a closed-loop analysis of things they do in their spend, I think that they are doing basic testing on things like landing pages and the like. I do think there’s a much more interesting set of tests that they can run. Even when you think about your website, there are all kinds of things you can do.

Here’s an example—somebody comes to your website and they do a search for your blog. What do you show them? All search is not created equal. Are there different search algorithms that can produce higher actions to those high converting events? I’ll bet you there are because we’ve seen this in many other kinds of things. In my opinion, you should also be doing what I would call “painted doors.” Maybe you do a painted door against a small sample and see if people click the button—”Wow, if we had that thing, that would really work.”

You kind of get ahead of the deployment, and this is really thinking more about the product marketing or the product management audience. If everybody in the product is clicking this button and it's popping up saying, "Thank you for clicking the button, this is a forward-looking feature and we'd love to enroll you in our free evaluation," there are all kinds of things you can do. Again, I'm thinking more about the down-funnel events, and this is stuff the B2B world is just starting to get its arms around. We see a lot of these experiences because most of our customer base, probably two-thirds or more, are B2C companies, and a lot of them do this already. Say I'm in an experience where I click a button to get five new songs or five new subscription users for free. It's a consumer mindset on innovation, and I would love to see more B2B companies do this kind of thing.
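
A painted door is simple to build precisely because the feature behind it doesn't exist yet: show the button to a small sample, log the clicks as a demand signal, and be honest with whoever clicks. Here's a rough browser-side sketch; the events endpoint and feature name are assumptions.

```typescript
// Expose the fake door to ~5% of traffic. (A real test would bucket users
// deterministically, as in the earlier sketch, so exposure stays sticky.)
const PAINTED_DOOR_SAMPLE = 0.05;

function shouldShowPaintedDoor(): boolean {
  return Math.random() < PAINTED_DOOR_SAMPLE;
}

function onPaintedDoorClick(userId: string): void {
  // Log the click as a demand signal; "/api/events" is an assumed endpoint.
  void fetch("/api/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: "painted_door_click", feature: "bulk_export", userId }),
  });
  alert("Thanks for your interest! This feature is coming soon. Want early access?");
}
```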

It’s really a lesson that I think all of us can learn from big brands like Amazon and Netflix. These folks experiment at crazy scale and they do so because instead of going into sessions and meetings and PowerPoints and Google slides, they just take it out there. If you’re a big enough company, expose it to a few percentage points of your user base with pretty low risk and find out what’s going to work. It clears up a lot of what I’ll call “innovation fog.”

If you have the right mindset—and we see our strong B2B customers doing this today—then you can really drive innovation. You can drive innovation in offer strategy, in product, and in trying new ideas. What should we think about when building this new feature? Does anybody even care? Why not find some empirical evidence that gives you a hypothesis, versus having a user group of 50 people and assuming that represents your 20,000 customers? There are ways to balance the soft and the hard testing to really help you innovate at speed.

Drew Neisser: That’s interesting. I’m with you all the way. Testing the offer strategy makes a lot of sense. That would be relatively easy. Testing product features. I love the red door over here and seeing you get folks interested in it. When you have an existing product and let’s say it’s a platform, but it exists separate from your website, do you have integrations that you do actually within products? That’s where I lost you a little because I love that idea, but does anybody care about this feature? You only want to show it to existing customers who’ve already been to the platform, if you will.

Carl Tsukahara: Yes. One of the things we brought to market is the ability to do testing and experiments in-product. I don't want this to be a product commercial, but it is actually in demand from our customers. It's more of a code-based product because, as you mentioned earlier, you might want to deploy machine learning, like a machine learning algorithm. Maybe you should test that algorithm: how does it work against the current way we have a conversation with the customer? Maybe I should have two variations of something like which products to recommend. Say a customer has gone through and is browsing around the site, and you're collecting all of that in Marketo and creating interesting moments, but there may be something in the product itself. Look at what they're doing in the product: they're reacting better to this feature versus that feature, and maybe those are two variations of the same feature.

This is something we do in the product. We look at this as what we call "feature flagging," where we build two versions of something, like a pricing algorithm, and expose them to different samples of the population to see which one drives more conversion. We can do this at the code level, directly in the product.
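
Here is a minimal hand-rolled sketch of the feature-flagging pattern Carl describes: two pricing rules behind a flag, with stable bucketing so each user keeps the same variant on every visit. The algorithms and the 50/50 split are invented for illustration (this is not the Optimizely SDK), and a real deployment would also log which variant produced each conversion.

```typescript
// Two implementations of a pricing rule behind a feature flag.
type PricingFn = (basePrice: number) => number;

const pricingA: PricingFn = (base) => base;                    // current algorithm
const pricingB: PricingFn = (base) => Math.max(base * 0.9, 1); // candidate: 10% intro discount

// Stable bucketing: a simple string hash of the user id decides the cohort,
// so each user sees the same price every time.
function hash(s: string): number {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h;
}

function priceFor(userId: string, base: number): { variant: string; price: number } {
  const inTreatment = hash(userId) % 100 < 50; // 50/50 split
  return inTreatment
    ? { variant: "pricing_b", price: pricingB(base) }
    : { variant: "pricing_a", price: pricingA(base) };
}

console.log(priceFor("user-123", 9.95));
```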

You can work from the other direction, too. What if you're a travel company or an airline? These are real examples; I just can't mention the names. "Huh, maybe I should test this new thing. I'm going to come up with some different options for a new affinity program for my frequent flyers. Hey, you get X." That's probably a code-based thing, because someone's in the website or in the experience or in the mobile app, but you can test it, and you can test it in code, whether it's JavaScript, React, Objective-C, or any kind of codebase.

That’s something really interesting. Really, what you want to avoid if you’re a product person or even a marketing person is blowing up the reservation. If you find that you’re adding something into the actual quote “product” or reservation flow—”If I roll that out at scale, it’s going to erode e-commerce booking by 4%.” Well, we can help you predict that. Again, that’s an end product thing. As a former product guy myself, the last thing you want is to blow up the site or blow up the product.

We introduced this some number of years ago, and being able to do these kinds of code-level tests is actually our fastest-growing product today. Again, this helps both the marketer and the product leaders and developers: "Should I roll this out at scale? Oh good, thankfully I tested it, so I can pull it back easily and make sure it works."

Drew Neisser: Yeah. It’s so interesting because there’s flow and you could have some drop-off. From a marketing standpoint, marketing could say, “This would A: give us competitive news that we could talk about. We really want this feature. Get it out there.” It’s so interesting because you may find out before you market it because sometimes you have to create demand for things.

As we wrap up, one of the things I've been working on for quite a while, and frankly it's a struggle, is to simplify on a broad level. CMOs have a very difficult time showing brand value to the board. Brand is this fuzzy thing, and they have a tough time until the brand goes away. Then all of a sudden everybody on the board understands, because the stock price drops when there's a crisis of trust. Anyway, in my dream world, there are blended metrics for employees, customers, and prospects that are predictive of future success. I believe these are testable variables, but the goal is to radically simplify them.

And the reason to radically simplify them is that eventually you have to present them to people, and if you give them 50 metrics, their eyes are going to glaze over. So that's the idea. I'm wondering, from your standpoint, how you could help inform that. I love the culture of experimentation, the need to always be testing; I totally agree with that idea, as long as you're optimizing to something. You're optimizing to employee satisfaction and advocacy. You're optimizing to customer satisfaction and advocacy. You're optimizing to lifetime value of prospective customers. Those are the big things.

This is me talking way too long, but what I envision when I hear all the things you can test is a thousand, a million variables, and some CMOs just pulling their hair out thinking about how they can test everything, getting lost in the experimentation. Help us bring this experimentation back up to some simplified metrics, if that makes any sense at all.

Carl Tsukahara: Sure. I think those metrics, Drew, depend on what the company is trying to do. Here's an example: we have a large B2B customer, a Fortune 500, that has traditionally sold a big product to big enterprises, and they basically said, "Hey, look, I've got to be much better at selling down-market," which is much more self-service. The whole point is to align your testing program to that. If that's a key corporate initiative, that's the key to everything: being lined up with the key initiatives of the business.

Hopefully, most of my counterparts in B2B marketing have some idea of the actual stages they want to take a prospect through, even if it's a nine-month sales cycle.

If you do any kind of journey mapping, even for someone mid-funnel, you should be able to figure it out. If they're mid-funnel and you can get them to do an ROI analysis and sit through a competitive presentation, that really gets them to closed at a much higher velocity. These are basic things that most people should know. Great. Those are your actions. You want to get them to consume an ROI analysis, and maybe they'll do that behind your back on your website, or say to them, "Hey, click here and we'll send an advisor or a professional services person and give you a competitive bake-off report." There are always those kinds of actions, but to me, if you know where your process gets stuck and what you're trying to do, then test to improve those steps.

It’s all about steps, Drew. We’ve all grown up in the world of Salesforce and Marketo and everything elsewhere there are stages and steps. Where are you getting hung up? I think that if you understand your funnel, try and experiment with the parts of the funnel that are the most problematic because you might be able to move the needle in a big way. Then all of a sudden, your program gets more funding and you can really show that evidence of its value in the income statement. Here, you can really show how you’re driving velocity.

We have examples from automotive manufacturers. If I'm an automotive provider, I'm trying to help the dealers get the customer to schedule a test drive or configure a car interactively. Does that, by itself, sell the car? No. But if I can make the experience drive people more predictably into scheduling a test drive or configuring a car, I win, because in fiscal terms, that dealer gets a huge high-converting lead. I wish I had a more blanket answer for you, but it depends on what the company is trying to do.

Drew Neisser: I heard a lot of things that I think are valuable, and there's a lot to digest. But what's so important in all of this is that you've got to start at the end, right? Where do you want to get to? Then you look at it and say, "We're going to break it into stages. Great." And then, within those stages, what would make the biggest difference? You could do a thousand experiments, but the reality is, if you're starting a testing program, you want to test the big things. You don't want to be testing at a micro level when there are big issues at stake. You only get to the micro level when you've solved the big ones, because you can have a product proposition that's wrong. And that would be worth testing.

I think I’m in a wrap up by saying the connection here is you can test your way innovation. You really can. The renegade mindset here is: think big. Think big but don’t rely on your gut, necessarily, right? You get in and you create this opportunity for folks in your organization to say, “You know what? I don’t have to have the answer. I have to understand the problem. I have to understand where we’re going and then I’m going to test my way to success.” I think that feels like a very good, solid footing for any CMO to be on. Does that make sense?

Carl Tsukahara: It does. There are some people doing this a lot, some doing very little, and some doing none. My advice, honestly, Drew, is: just get started. We have worked with a gentleman named Stefan Thomke, a professor at Harvard Business School, who has some really great anecdotes about experimentation in the digital world and why, if you don't do it, you're potentially circling the drain.

If your competitors are doing this and you're not, then, in theory, they know what winning experiences to deploy and you don't. Nobody wants to be Toys "R" Us. I loved Toys "R" Us as a parent taking my kids there, but Blockbuster and Toys "R" Us never got with the program of modernization and innovation, of really thinking like what you would describe as renegade thinkers.

Everybody laughed at Netflix and Amazon in the beginning, but those kinds of organizations brought this kind of thinking: "Hey, maybe I've got some really smart people, but I don't have all the answers, and my people can't create all the answers." Here's the bottom line: customers and users are extremely unpredictable. When we talk about the genesis of Optimizely, we talk about how our founder got the whole idea started around the Obama campaign. He was the head of digital for Obama and—look, I'm not trying to make a political statement here, but—he tried all of these different experiences to see what would get people to engage on the campaign website and donate money or join the campaign.

I show that test to hundreds of business leaders, and they almost always get it wrong. What your intuition tells you is often wrong, because you're not selling to yourself; you're selling to a broad swath of companies and individual personas that are hard to predict. Say a new device comes out, like a new camera on the iPhone. Are people going to use it? It's really hard to predict how users will behave, especially in a complex journey. So test it. Try it. Everybody knows where their funnel gets stuck. Are you trying to acquire, or trying to figure out why your NPS is so bad with your classic customers? Test the dialogues and the interactive experience until your NPS gets better. It's not impossible. We have customers doing that today.

Drew Neisser: All right. Perfect place for us to wrap up, almost exactly where we started, which is: as good as your gut instinct is, it ain't that good. You don't want to be the last team in baseball to adopt Moneyball. Let's get some smart learning out there through experimentation. All right, Carl, thanks so much for being on the show.

Carl Tsukahara: Thank you, Drew. I really enjoyed it and really nice to meet you.

Drew Neisser: Nice to meet you as well. And to the listeners who've stayed with us all the way through on your long commute, I hope you enjoyed it. If you did, go right away—when you pull off to the side of the road—and give us a five-star review on Apple or iTunes or your favorite channel. Until next time, keep those Renegade Thinking Caps on and strong.