
Pre-Game Day Super Bowl Ads

I was served up a trailer for Toyota's game day ad, and it was the feeblest 30 seconds of my day so far. Part of the excitement and anticipation for the half-time show is seeing what new spins on ad creatives agencies and corporations have come up with, to laugh and be amused.

This ad trailer already has more than 44k views. I imagine the email advertisement went to all current Toyota owners who are also registered subscribers. I can understand why marketing and sales chose to run a preview of the ad: perhaps they can use early feedback to tweak the ad components or to test campaign tracking, but isn't that what Q/A testing is for? Or maybe they want to prep the audience for what looks like it will be an over-spent ad spot.

Anyhow. I don't see the point of wasting an existing customer's time to read an email to click through to a video hosted on YouTube for content that isn't even an entire ad. Ugh. And the micetype? Watch outtakes? Not even close. These are just clips of the same ad hastily pasted together.

The saddest part? On game day there will be Super Bowl ads from companies and organizations that lack deep-pocketed agency resources yet still pull off a good ad, one that drives home a relevant selling point and gives you reasons to buy from or invest in that company.

Is offering a discount enough?

"Your mystery offer is waiting <checkmark symbol> it out". That was in my inbox this morning from Shutterfly. Thank goodness Google Mail rendered the symbol correctly; though I wonder how many customers got the question mark instead for that special character insertion. I would think that I am already a customer since I made a purchase earlier in the year and while Shutterfly might have marketing automation capabilities, whoever is managing how content is delivered to customers and prospects is doing it from a batch and blast perspective; which is okay if your company fits one of the following scenarios:
  • a consumer products oriented company with a high churn rate
  • customers respond better to discounts because you've conditioned them to expect it
  • your company is a market share leader in its industry, and you simply don't care about losing existing customers
  • your CEO dictates the marketing plan
  • your products are more of a commodity than a premium brand
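
For the curious, the usual reason a symbol like that degrades into a question mark is charset handling: a non-ASCII subject line has to travel as an RFC 2047 encoded-word, and a client (or a sending platform) that mislabels or can't decode the charset falls back to "?". Here is a minimal Python sketch of what a correctly encoded subject looks like; it's purely illustrative, and I have no idea what Shutterfly's platform actually does under the hood.

    from email.header import Header

    # A non-ASCII subject has to travel as an RFC 2047 "encoded-word".
    subject = "Your mystery offer is waiting \u2713 it out"  # \u2713 is the checkmark

    print(Header(subject, "utf-8").encode())
    # Prints something like =?utf-8?b?...?=; a client that can't decode the
    # charset (or that gets raw bytes labeled with the wrong one) shows "?" instead.
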
Before and after purchasing from Shutterfly, I'd always gotten promotional emails, and like most consumer emails they are spammy...meaning I get more than four per month. To top that off, all the discounts have been in the 40% to 50% range or free products (e.g., 12 free thank-you cards, a free photo magnet, a free photo book). It makes me think that their loss-leader costs are minimal.

Take this Shutterfly email as an example. What do you see?
Shutterfly promo email from 2013-08-22

If all you saw in the email preview screen was that you could get up to 40% off for buying something from Shutterfly, you might just be in the 90% who see exactly that. However, if you were really tuned into promotional offers, you'd notice that it's up to 40% off in addition to existing sale prices. Which did you see, the former or the latter? If you saw the former, chances are you deleted the email since it didn't look any different from any other Shutterfly promotion. If you saw the latter, it might take a second glance to confirm you really read the offer right, and perhaps that is enough for you to click through.
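
To make the difference concrete, here's a quick back-of-the-envelope comparison; the prices are made up and not Shutterfly's.

    # Hypothetical numbers: an item listed at $20.00, already on sale for 25% off.
    list_price = 20.00
    sale_price = list_price * (1 - 0.25)      # $15.00

    plain_40_off = list_price * (1 - 0.40)    # $12.00, a plain "up to 40% off"
    stacked_40_off = sale_price * (1 - 0.40)  # $9.00, 40% off on top of the sale price

    effective_discount = 1 - stacked_40_off / list_price
    print(plain_40_off, stacked_40_off, effective_discount)  # 12.0 9.0 0.55

Stacked on an existing markdown, the "same" 40% becomes an effective 55% off list, which is a very different offer if the reader actually catches it.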

If this is a totally new offer (as in, this product pricing scenario has not been used with new customers), it needs to be called out in something other than black text on construction yellow; the italicized black text is just more of the same. The "on top of sale prices" line is the piece that deserves a different color. It's hard to tell whether such an offer would really move Shutterfly's media plan without the production data, and it is just one offer campaign out of the dozens Shutterfly runs each calendar quarter.

Pardot: My first drip test

Probably the most bothersome part of this whole affair is the waiting. Pardot's drip campaign setup requires a minimum of one day (no partial days, hours, or minutes) between when a user is sent an email and when they open or click. For every open or click, add another day to your test cycle. For brevity, this test uses only two email templates. While I could have Pardot do more complicated actions, I don't want to have to delete or reset my test addresses in Salesforce, so these actions only trigger the Pardot side of the data.
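
Just to reason about the calendar impact, here's a toy model of that constraint; it's my own sketch, not anything from Pardot.

    from dataclasses import dataclass

    # Toy model of the wait math described above (not Pardot's API): every step
    # waits at least one whole day, and each open/click you want to exercise
    # adds another step (and therefore at least another day) to the test.
    @dataclass
    class DripStep:
        template: str       # which email template the step sends
        wait_days: int = 1  # Pardot's minimum; no partial days, hours, or minutes

    def minimum_test_days(steps):
        return sum(step.wait_days for step in steps)

    test_drip = [DripStep("template-1"), DripStep("template-2")]
    print(minimum_test_days(test_drip))  # 2 days, before any open/click branches
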

Having four unique-by-content templates (video, case study, webinar, and general) for persona nurturing is a lot better (Thanks Kate!) than setting up 40 unique templates. Except, to start the personas, I'll have to create at least three sets of eight persona content pieces. Each set would include body content, a video placeholder image and a video link, and a unique call to action.

The drip report display, which shows how many prospects are in each section of the drip, is a big step up from what you could get out of Silverpop's EngageB2B product (and much cleaner too).

Pardot - Sample Drip Progress



A/B testing, is it worth the effort?

Industry: Banking

This year we've done a few A/B tests of just subject lines for B2B e-mail campaigns. The "standard" way we split the campaign audience was to halve the dataset: the first half of the e-mail addresses received Subject Line A, the latter half received Subject Line B. This got me thinking that maybe customers will read just about anything we send them, and if our content is unclear they'll hit reply and/or call up their salesperson. It also made me wonder whether the population subset with higher campaign readership simply reflected how clean that part of the dataset was.
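
The split itself is about as simple as it sounds; something like this, with placeholder addresses.

    # The "standard" split described above: the first half of the list gets
    # Subject Line A, the second half gets Subject Line B.
    def split_in_half(addresses):
        midpoint = len(addresses) // 2
        return addresses[:midpoint], addresses[midpoint:]

    audience = ["a@example.com", "b@example.com", "c@example.com", "d@example.com"]
    group_a, group_b = split_in_half(audience)

One obvious weakness is that whatever ordering is baked into the list (how the data was pulled, how clean each half is) rides along with the split; a randomized split would avoid that, and the July swap described below was a rough check on the same concern.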

The customer population used for all three broadcasts is the same dataset.

March
E-mail Focus: Product

Test A: NINA/No Doc - $750K to 80% LTV
Test B: NINA/No Doc - Non-Traditional Solutions

May
E-mail Focus: Product

Test A: New 1-Month LIBOR for the Building Season
Test B: New Construction Solutions

July
E-mail Focus: Technology Enhancement

Test A: Private Label Marketing A Click Away
Test B: Let Us Market for You

The test results are as follows:

Month   Test   E-mails sent   1-wk opens   1-wk open rate   3-wk opens   3-wk open rate
March   A      21019          9809         47%              10450        50%
March   B      21022          9286         44%              9840         47%
May     A      22024          7766         35%              8152         37%
May     B      22028          9563         43%              10153        46%
July    A      22600          8698         38%              9580         42%
July    B      22604          9468         42%              10330        46%

You might infer that a 3% difference in open rate isn't a meaningful one. Even in market research, where our audience sizes were much smaller, a gap of at least 8-10% was noteworthy; 3% is not, not for this population size. Maybe if we did consumer marketing and broadcast to millions of addresses per month this would matter more, but I digress. July is about the same, with a 4% gap not being that significant either.
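
If you want to separate "bigger than chance" from "big enough to matter," a two-proportion z-test is one standard way to check the former. A minimal Python sketch (standard library only), plugging in the March figures from the table above:

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
        # Pooled two-proportion z-test: is the open-rate gap larger than chance alone?
        p_a, p_b = opens_a / sent_a, opens_b / sent_b
        pooled = (opens_a + opens_b) / (sent_a + sent_b)
        se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return round(p_a - p_b, 4), round(z, 2), p_value

    # March figures from the table above
    print(two_proportion_z(9809, 21019, 9286, 21022))

Whatever that test says, a gap can be statistically real and still tell you nothing about loan volume or calls, which is where the analysis below ends up.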

So what happened with the May campaign? Perhaps it isn't data-related at all. Spring to fall is when a lot of home construction and remodeling goes on. A product that has good rates for just its first month isn't as appealing as, say, a more general umbrella of solutions offered to a customer.

One thought in the aftermath of the May campaign was that dataset B responded more favorably to campaign broadcasts. Sure, I might be inclined to believe that. However, for the July A/B campaign, the second half of the population was sent the A campaign, and the first half was sent the B campaign. As the results show, the audience didn't care one way or the other.

Analysis: Inconclusive. The only good way to measure e-mail campaign success is to tie the data to quantitative metrics: how much loan volume resulted from a broadcasted campaign, how much of the media mix was used to promote a particular product, how many calls the campaign generated. ROI isn't necessarily a good metric anymore, especially when your company broadcasts in-house. It costs us almost nothing to broadcast, aside from the time the business units spend putting the campaign content together and the time I spend formatting the content into HTML/AOL campaign templates and pulling data for broadcasting.

So, is it worth the effort? No. But that isn't going to keep us from testing this process with future campaigns.

A/B Testing for E-mail Campaigns

A/B testing, also called an A/B split, is the easiest method of testing elements within your emails or on your Web site. The target audience is divided into two groups: one group is sent the original version of whatever you are testing, and the second group sees an alternative version in which only one element has changed. Then the results are tracked. With e-mail marketing, the components usually tested include bonus gifts, coupons, P.S. messages, guarantees, opening and closing sentences or images, the from field, calls to action, opening greetings, type styles, layout elements, and graphic images. This form of testing is used in both business and consumer-driven campaigns.
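
In practice the two groups are usually assigned at random so that each half is a comparable sample. A minimal sketch, with a hypothetical list and a fixed seed for repeatability:

    import random

    def ab_split(audience, seed=42):
        # Shuffle a copy so each group is a random, comparable sample, then halve it.
        shuffled = list(audience)
        random.Random(seed).shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return shuffled[:midpoint], shuffled[midpoint:]

    group_a, group_b = ab_split(["a@example.com", "b@example.com",
                                 "c@example.com", "d@example.com"])
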

My company did its first A/B e-mail campaign test last week. The only element that changed was the e-mail subject line. We used:

Test A: NINA/No Doc - Non-Traditional Solutions
Test B: NINA/No Doc - $750K to 80% LTV

On average, our audience open rate for product e-mails is about 45%; it's not that great and could use some improvement. The distribution list was split 50/50. After 5 calendar days, there's no discernible difference between the two subject lines, and the combined open rate is 42%. Having not worked in the mortgage industry before, I find that much of the e-mail content we broadcast doesn't make much sense to me and is so banking-specific that I hope our customers are getting value from it.