The Net Takeaway: Time of Day and Observational Studies


Time of Day and Observational Studies · 07/27/2004 10:55 AM, Analysis Marketing

Recently, two studies have been released claiming that the time of day or day of week on which one emails has a real impact on performance. Jumping to the end: these observational studies are both flawed. Past controlled research (by my team) shows little impact for day of week, though we’ve done little research on time of day (mostly because email delivery time is controlled by the ISP, not the sender, and we’ve seen delays of three days or more for mails to get through an ISP’s systems).

But OK, enough foreshadowing. ReturnPath says that mails sent at certain times are more likely to get delivered; Direct Magazine reports here. eROI, a small email agency, reports that it has found that performance on certain days of the week appears to be better. MarketingSherpa has the exclusive, but it’s only available for a short time here.

Both are interesting, somewhat provocative… and completely flawed. (Caveat: I don’t have the full report or data for either study, so maybe they mention these flaws… but they certainly didn’t mention them to the press, so I doubt they emphasize them enough in their reports.)

Why? They are both observational studies. They both detect a change by time, but do not apply any of the controls necessary to understand why these changes are happening.

Let’s start with ReturnPath. Like every other observational study, they throw around large numbers in the belief that every study sounds better if the sample size is large enough. In addition, they mention that they calculate an “index per company,” as if that controls for the fact that they are measuring a variety of factors and controlling for none.

They did the standard thing: took the results of a mailing (in their case, some measure of deliverability, though it’s unclear whether that means inbox placement or just ISP acceptance of the message) and grouped by time.

The problems are manifold. Lots of different people are mailing here: some B2C, some B2B. Some have large lists, some small. Some have high-quality lists, others less so. Did RP randomize these elements so that every time frame has the same distribution of mailing types? No, they just grouped by time.

(Less important: did they wait to see if mails got accepted after a delay, so that a mailing sent at 5 am but delivered at 10 pm is credited correctly? Unclear from the blurb, but I doubt it. They probably just counted it as a 1 and moved on… but I’m not sure about this.)

This is a huge flaw in the study. Yes, there is evidence that more spammers mail in the wee hours when traffic is light on the wire; you can send faster that way, and spammers make money on volume. (However, this appears to be changing; see stats for a pretty even distribution of spam across the day.) But does that mean that sending in the early morning is less likely to be delivered… or that the people who choose to mail in the early morning have larger, lower-quality lists? You can’t tell from this study.
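A toy simulation makes the confound concrete. Everything below is invented for illustration (it is not ReturnPath’s data): delivery here depends only on list quality, yet because overnight senders are assumed to skew toward low-quality lists, a naive group-by-hour still “finds” a time-of-day effect.

```python
import random

random.seed(42)

def simulate_mailing(hour):
    # Assumed confound: early-morning senders skew toward low-quality lists.
    low_quality = random.random() < (0.7 if hour < 6 else 0.2)
    # Delivery rate is driven ONLY by list quality, never by the hour.
    return 0.70 if low_quality else 0.95

by_hour = {h: [] for h in range(24)}
for _ in range(10000):
    h = random.randrange(24)
    by_hour[h].append(simulate_mailing(h))

early = sum(by_hour[3]) / len(by_hour[3])
midday = sum(by_hour[12]) / len(by_hour[12])
print(f"3am avg delivery:  {early:.2f}")
print(f"noon avg delivery: {midday:.2f}")
# The gap reflects WHO chooses to mail at 3am, not the hour itself.
```

The “3am is worse” pattern falls out of the simulation even though the hour has zero causal effect, which is exactly why grouping by time without randomizing who mails when proves nothing.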

The eROI study has the same gaping hole: it’s purely observational. The fact that the “best day” has changed so much since the last report (Wednesday used to be the incorrectly assumed best day to mail) implies that it’s not stable, meaning (perhaps) that it wasn’t really correct in the first place.

But yes, like the RP study, it doesn’t control for its main variable of interest (day of the week); it simply observes when its collection of clients chose to mail. This doesn’t make it “wrong,” but some of the claims made are not supported by this type of study. Sure, some observations and “directional learning” are possible, but without a true experiment, observations are all you are left with.

A controlled study would take the same mail, send it out on each day, and measure a rolling window of 7-, 10-, or 30-days-out results. Since that was not done here, we can’t really say anything about which day is the “best” or the “winner.” For example, it may be that certain types of mails tend to be sent on a given day and therefore get opened or clicked more.

By saying that it covers “6k marketers” and “a wide range of industries,” they hope to hide the fact that there are tons of possible confounds. Are all the marketers of the same type? Are they sending similar types of messages, with similar calls to action? Are they selling, inviting to webinars, sending informational newsletters? And are all of these equally distributed over every day? What is their window: is it 30 days from each mailing, or just 30 days of results rolled up (so that the last week has fewer days in which to count, while the first week has three more weeks to drive response)? Without either controlling for or randomizing across these conditions, one cannot really say anything is going on. There are too many correlated factors for anyone to feel confident that any day (or any time) is the correct one.
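The rolled-up-window problem alone is worth a quick sketch. The dates below are hypothetical, but the arithmetic is the point: if all results are tallied at one calendar cutoff, a mail sent at the start of the period gets ten times as many days to accumulate response as one sent near the end.

```python
from datetime import date

# Hypothetical reporting setup: every mailing's results are rolled up
# at a single calendar cutoff rather than a fixed N-days-per-mailing window.
cutoff = date(2004, 7, 31)

def response_days(send_date):
    """Days a mailing has to accumulate opens/clicks before the cutoff."""
    return (cutoff - send_date).days

early_send = date(2004, 7, 1)   # sent at the start of the period
late_send = date(2004, 7, 28)   # sent near the end of the period

print(response_days(early_send))  # 30 days to drive response
print(response_days(late_send))   # only 3 days
```

Any day-of-week or week-of-month pattern in when clients happen to send then shows up as a fake performance difference, purely because of unequal measurement windows.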

Now, at the marketing company where I work, we have run these studies as well… but we run them as controlled experiments. That is, we use the same email, making sure there are no confounds like “sale ends tomorrow” or other time-sensitive content. We mail it at about the same time each day (knowing, of course, that arrival time is not really controllable, but we can at least send at the same time). We then measure over the same time window (it varies for different clients, but is always at least 7 days) and examine all behaviors (clicks, opens, forwards, conversions, unsubs, bounces, etc.).
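The design above can be sketched in a few lines. This is a minimal illustration of the principle, not our actual system (the list size, addresses, and cell structure are all made up): one identical creative, recipients randomly assigned to a send day, and an equal measurement window for every cell.

```python
import random

random.seed(7)

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
WINDOW_DAYS = 7  # identical measurement window for every cell

# Hypothetical recipient list.
recipients = [f"user{i}@example.com" for i in range(7000)]

# Randomization is what balances list quality, B2B vs. B2C, message type,
# and every other confound across the day-of-week cells.
random.shuffle(recipients)
cells = {day: recipients[i::7] for i, day in enumerate(DAYS)}

for day, cell in cells.items():
    print(day, len(cell))  # balanced cells of 1000 recipients each
# Each cell receives the SAME email on its assigned day; opens, clicks,
# conversions, and unsubs are then tallied for WINDOW_DAYS after each send.
```

With this setup, any remaining difference between cells can be attributed to the send day itself rather than to who happened to choose that day.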

What have we found? There is an immediate effect no matter when you send: people online get alerted and open. Sending at noon East Coast time on a weekday does produce a higher immediate response than 4:00 am on a Saturday morning… but the difference is rarely huge, just clearly there. Then, by the time 24 hours have rolled around, we see people logging on, checking their mail from the last 24 hours… and seeing the message. Then we see a bump on the weekend, when weekend surfers check their mail for the week. This same pattern emerges no matter when the mail is sent: after the first day, the response pattern reflects when users are online, and nothing we do can change that. And if we try to guess when each person is online… well, maybe that is when the mail should be sent. But the mail response curve quickly resembles a site usage curve, reflecting how users go online, not which day is their favorite for reading emails.

So, on average, no day is better than any other. Certain types of mails may do better on certain days, especially if consumers tend to shop in a specific way or products are distributed in certain ways (new CDs and DVDs are released on Tuesdays, movies hit theaters on Thursdays)… but on average, every test we’ve run has shown no effect for day of the week after 24 hours, across a range of industries, mail types, and response measures (simple click, conversion, etc.). People open mails either when they arrive or when they are online, and this doesn’t change by day (with the exception of weekend vs. weekday, as mentioned above): controlled studies show this time after time.

So, focus on relevance, high-quality content, and smart personalization. Beat your baseline. Don’t worry about trying to take a shortcut by mailing on a certain day or time and hoping for stable improvements in performance. And, in the interest of personalization, perhaps the right thing to do is to see whether segments consistently open on certain days, and mail accordingly. We can even do this by time of day. But given all the factors that really go into a quality message, such as relevance, customized offerings, and an understanding of the trust and relationship you have with the reader… it seems less important to play with day of the week and hope that makes a difference.

Certainly, as I point out, for certain cases it might help (not as much as having a more relevant message, but…). And if, after running a controlled test, you see that it does for your specific mailings, great! But don’t assume it will be of much help in the long run. Focus instead on understanding customer needs and expectations, and that will improve your mails far more than a Tuesday vs. Thursday send.

All of these players (ReturnPath, eROI, Sherpa, and Direct Magazine) are great sources of info and work, so no slight on them. It’s just that correlation does not imply causation, and these observational studies are just bad science. If we want to see the effect of time of day or day of week, let’s do the controlled experimental study. But to toot one’s horn over an observational study, imply you’ve found something new, and not examine whether it’s actually an artifact of the “research” process is just sloppy, and I have higher expectations for the legitimate email marketing industry.

Otherwise, we really will just be the spammers people think we are.
