The Net Takeaway: Page 18

OTHER PLACES OF INTEREST

Danny Flamberg's Blog
Danny has been marketing for a while, and his articles and work reflect a great understanding of data-driven marketing.

Eric Peterson the Demystifier
Eric gets metrics, analytics, interactive, and the real world. His advice is worth taking...

Geeking with Greg
Greg Linden created Amazon's recommendation system, so imagine what he can write about...

Ned Batchelder's Blog
Ned just finds and writes interesting things. I don't know how he does it.

R at LoyaltyMatrix
Jim Porzak tells of his real-life use of R for marketing analysis.

 


Tag-Hater at Yahoo, home of tagging? · 04/26/2006 01:38 PM, MetaBlog

Some folks have sent nice and not-so-nice notes wondering what I am doing at Yahoo if I don't like tagging. After all, Yahoo is home to del.icio.us, the site that made tagging all the rage (now I expect a flood of notes explaining how tagging existed way before, flat categories have existed since 1 BC, user-generated keywords have been part of stone tablets, etc…).

They usually point to this collection of posts: I Hate Tags and I still hate tagging… and I continue to despise tagging… and In Conference… and even Yahoo buys del.icio.us, tags win?, where I summarize the myriad problems with tagging.

Basically, I think tags are not the panacea that so many think they are. I am not alone in this; lots of smart folks have pointed out the pains of tagging (or, to be fair, of our implementations of it), including the renowned Danny Sullivan of Search Engine Watch in posts like Tagging Not Likely The Killer Solution For Search and Yahoo My Web Tagging & Why (So Far) It Sucks and even Another Poke At Tags As Search Savior.

So, what am I doing here, if I don’t love tags? Well, fixing the problem, of course. I’m killing tagging altogether.

Just kidding. No, my goal here, among many others, is to help show how tagging fits into everything. It's OK to be a zealot and believe that tagging will replace search and navigation and everything else… but it's an incorrect belief (cf. Voltaire and others, http://www.classroomtools.com/voltaire.htm).

Tagging is one of many ways to access and discover information. It has many problems currently… but it also solves many problems, so the goal should be to use it for what it’s best for. My posts above go into extreme detail about how I feel, but the big takeaway is that tagging is one of many, many clever things that Yahoo folks (and others) are doing to make access to information easier and more efficient. Don’t get stuck on any one method or you will be missing out.

Even I (rarely) find myself turning to tags to get a quick feel for a topic area, so I can’t say these days that I “hate” tags as much as I used to.

Of course, the overzealous buzz died out as people realized that our current implementations do not scale, so we are meeting in the middle.

Look for aspects of tagging to show up in all sorts of unexpected places in Yahoo, but also other ways of getting to info and related info, some graphically visual, others relying on nuances of textual metadata. No one approach will win, but together, they will make search a much more useful tool than it ever was before.

Comments?

* * *

 

What... no calendars? · 04/19/2006 12:29 PM, Tech Personal

Jeremy Zawodny describes it better than I can. It's really annoying that a company with this many people and this much impact hasn't taken the time to solve this. The post below is more tech than some may like, but it shows the efforts people have to go through just to facilitate working in a fast-moving and networked environment like Yahoo.

Using Outlook’s Calendar with Thunderbird

Comments?

* * *

 

Work started... · 04/19/2006 12:22 PM, Personal

It's been a crazy couple of weeks. I've been living in Brooklyn, in the apartment of my wife's family in the Brighton Beach area (called "Little Odessa by the Sea"). It's a tight squeeze for the average American family, but for a Russian family, it's no big deal. The B takes you right into Manhattan, and there was no need for a passport when leaving Brkln and entering Manh. Ha. Anyway, Manhattan is just like I remember it from a few years ago; some things never change, except David Lee Roth is still on the radio; that needs to change.

The onboarding was a bit chaotic here, but that's to be expected given the incredible growth of Yahoo! (and all the tech companies; search your favorite news site for the many stories, ad nauseam). Like Microsoft, it's full of really smart people who can think as fast as you can. Unlike Microsoft, they are much, much nicer about everything. Not everyone is kind and patient, but since practically no one at MS was, this is a pleasant change. (Of course, this is kind and patient crossed with NYC, so we do count in New York Minutes here.)

The intranets here are amazing: sites far superior to any intranet I've seen at any company. The group I work with, SDS, actually has designers available to assist with getting clean content up on a shared intranet knowledgebase, programmers to add tracking features so we can understand what content people are reading, and full comment capability. Really impressive.

In addition, the TWiki site has always had a line noting that Yahoo! uses this wiki, and they really, really leverage it. Tons of docs and ideas are stored via this software, and the wiki captures the history and growth of projects so you know why something is the way it is, even if you don't like it.

Lots of emphasis on DIY tools here; they’ve found that the scale of data and speed of action required for their business often necessitates the building of new technologies for which there are no outside substitutes. Yes, they do acquire companies when necessary, and they do use tools like Oracle here and there… but they’ve solved lots of problems by throwing smarts at it instead of shelfware.

So, lots to learn, and on top of it all, still no place to live here. I'll be commuting between Boston, NY, and Sunnyvale for a time, so my life will be what happens between airports. Also, my parents-in-law have every type of Eastern European food, but they don't have broadband, so posting may be in spurts over time.

More soon…

Comments?

* * *

 

Time for a Change... · 03/31/2006 01:35 AM, Marketing Personal

So, it's been quiet around here… too quiet. Why? Because I've been in the midst of a huge change in my life, and haven't had the time to blog as much as I might like. And now it's time to share it.

Starting in a mere few weeks, I am becoming a Yahoo!. That's right, I am leaving e-Dialog, not because they are in trouble (on the contrary, the client wins in both the US and the UK are staggering; if you are sending email but not with e-Dialog, you are just throwing away money) but because this was too good to pass up.

I will be a VP of Marketing Insight, based out of the New York City office but working with a team of analysts in Sunnyvale. I will be working under Usama Fayyad and Bassel Ojjeh, whom I met when I was at Microsoft. They've managed to pull together some of the most impressive talent I've ever met, from Bob Page, who started Accrue, to most of the DMXGroup.

Yahoo has some of the largest databases on the planet around registered users (as recognized by the 2005 Winter Corp awards), but not all advertisers are taking advantage of this information. My role will be to help guide marketers on how to best leverage this information and understand how data and user-generated media can help them best market their brand. It's a combination of analysis and education, a mix of branding and direct (search marketing), and a chance to work with some of the largest media players on the planet.

So, things will continue to be chaotic here, but I’ll keep you all up to date on how the transition is going… It will involve moving from the Boston area to somewhere around Manhattan, as well as becoming a frequent flyer to San Jose/Sunnyvale (JetBlue, here I come), on top of all the other fun things which go with a new job. It’ll be fun! Hang on, it may get bumpy, but the end will be worth it.

PS: I also get to work with Wenda Harris Millard, whom you may recall as either a) EVP of DoubleClick, b) publisher of more magazines than you can count, or c) part of the last episode of the recent run of The Apprentice.

Comments? [1]

* * *

 

Why do companies lie? · 03/23/2006 03:26 PM, Tech

I posted previously about the question Why can't companies police themselves? I get even more annoyed when companies lie… especially companies I like.

March 14, 2006: http://news.zdnet.com/2100-9584_22-6049853.html
“Dell has moved to quash rumors that suggested the company has acquired rival PC vendor Alienware.”

March 15, 2006: http://desktops.engadget.com/2006/03/15/dell-non-denies-alienware-buyout/
“Dell non-denies Alienware buyout”

March 22, 2006: http://www.pcmag.com/article2/0,1895,1941384,00.asp
“It’s Official: Dell Beams Up Alienware”

Alienware isn't public. Dell is, and there are requirements around disclosure; I get it. But instead of denying the takeover, why not just say "No comment"? Don't agree, don't disagree, don't lie, don't give false direction, just say no comment. Why is that so hard?

Comments? [2]

* * *

 

If you can't do it right, add more sample! · 03/22/2006 02:08 PM, Marketing

UPDATE: I’ve responded to some small criticism at a newer post ExactTarget still observing…

Original Post:

ExactTarget has some smart folks, including its founder, Chris Baggott, and the recent addition to its "strategic services" group, Morgan Stewart. And, as I've said before, I am ever more disappointed when smart people do not-so-smart things.

You all know how much I hate observational studies which attempt to create cause out of correlation. For example, see my complaints about eROI’s work at Time of Day and Observational Studies and More ‘research’ from eROI.

Well, here’s another one. ExactTarget has released yet another completely observational study, which attempts to derive truth from a non-controlled, non-randomized, non-stratified, and basically non-research oriented research report. What’s their claim to fame for this one? “In fact, this is the largest, most comprehensive study to date, including data from more than 4,000 organizations, 230,000 email campaigns and 2.7 billion email messages. The study summarizes overall open, click-through and unsubscribe rates and provides additional analyses based on day of week for sending email while examining list size and target audience.”

Ah, I see. If you feel you can't do it right by actually examining the impact of, say, manipulating the day of the week mail is sent, or examining which controlled factors impact unsubs, then add as much biased and non-randomized data as you can and hope that it all comes out in the wash. Look, larger is not always better. (In fact, they try to make some claims about the negative impact of list size as well; it's not like they aren't aware that just shoveling more mails in doesn't make it better… but more on that later.)

http://email.exacttarget.com/pdf/2005_Response_Study.pdf

Why is this such a disappointment? Because they could have really done it right. That is, with all these clients, all these mails, couldn't they have gotten just a few to control what's controllable and actually demonstrate causation? Instead, not one variable is manipulated. Not one experiment is reported on. No real research happened here at all. Oh, and let's add to the pain. The first sentence of the study says:

“ExactTarget’s 2004 groundbreaking study of which day of the week was the best day for marketers to send emails caused many to re-evaluate their common practices and employ testing to determine which day worked best for their customers.”

Hmm. So, puffery aside (e-Dialog clients have known to test this since 1999), they recognize that testing is required to really, really understand what works. Ten pages later, we see… no testing.
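For contrast, here's a minimal sketch of what an actual controlled test of send day might look like: randomly split one client's list into two cells, send the identical creative on Tuesday to one and Friday to the other, and check whether the difference in open rates is bigger than chance. The counts and the helper function are hypothetical, invented for illustration; nothing here comes from the ExactTarget report.

    import math

    def two_proportion_ztest(opens_a, sent_a, opens_b, sent_b):
        """Two-sided z-test for the difference between two open rates."""
        p_a, p_b = opens_a / sent_a, opens_b / sent_b
        pooled = (opens_a + opens_b) / (sent_a + sent_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        z = (p_a - p_b) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, standard normal
        return p_a, p_b, z, p_value

    # Hypothetical randomized split: same list, same creative, only the send day differs.
    p_tue, p_fri, z, p = two_proportion_ztest(opens_a=2140, sent_a=10000,   # Tuesday cell
                                              opens_b=1990, sent_b=10000)   # Friday cell
    print(f"Tuesday {p_tue:.1%} vs. Friday {p_fri:.1%}  (z = {z:.2f}, p = {p:.3f})")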

What else is missing? Well, no description of the sample or the population that this sample is intended to represent. For example, no description of the client distributions (industry, size of list, goal of campaigns, mailing frequencies). (Note: ExactTarget is under no obligation to reveal "confidential business information," and a detailed sample description could reveal more about their client makeup than they prefer to share. However, a stratified sample selection could have allowed them to say who was included without revealing what % of their client base that represents.)
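As a rough illustration of that last point, a stratified selection could look something like the sketch below: bucket clients by industry and list size, then pull a fixed number from each cell, so the published sample description says who was studied without exposing how the overall client base breaks down. The client roster here is invented for the example.

    import random
    from collections import defaultdict

    random.seed(7)

    # Invented client roster: (client_id, industry, list-size bucket).
    clients = [(i,
                random.choice(["retail", "CPG", "publishing", "B2B"]),
                random.choice(["small", "medium", "large"]))
               for i in range(4_000)]

    # Bucket clients into strata by (industry, list-size bucket).
    strata = defaultdict(list)
    for client in clients:
        strata[(client[1], client[2])].append(client)

    # Equal allocation: the same number of clients from every stratum, so the sample
    # description doesn't mirror (or reveal) the shape of the full client base.
    PER_CELL = 25
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(PER_CELL, len(members))))

    print(f"{len(sample)} clients sampled across {len(strata)} strata")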

Without knowing who was in this data, should I be comparing my small business mailings to this data? How about my CPG brand mailings? How about my subscription renewal mailings? Just because it's a large sample doesn't excuse the fact that it may be biased (and in fact, we know that it's biased towards the countless small and medium businesses and agencies using the ExactTarget platform and API; it's an easy-to-use and powerful platform).

No description of how data was aggregated. This is actually a pretty common mistake, and one that we need to be better about as an industry. I’ll demonstrate with a contrived example, but it gets the point across.

Lemme aggregate my open data for my mailings for the Auto industry over the past time period of interest. I have 3 mailings (numbers along the lines of the sketch below):

So, what's the average? I can average the rates: 36%. This seems a bit odd; it's not really representative of any of the mails, either overstating or understating every one. OK, let's calc based on the sums of the underlying numbers: 610 opens / 1,001,500 mails = 0.06% open rate. Well, that doesn't seem quite right either; it substantially undercounts 2 out of 3 mails! Which should I use?
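To make the arithmetic concrete, here's a small sketch with three made-up mailings whose sizes and open counts roughly reproduce the 36% and 0.06% figures above; the numbers are illustrative, not the original table.

    # Made-up mailings: (description, emails sent, opens). Chosen so the two
    # aggregation methods land near the 36% and 0.06% figures discussed above.
    mailings = [
        ("screensaver follow-up",            1_000,      77),
        ("first-week service message",         500,     500),
        ("house list blast",             1_000_000,      33),
    ]

    # Method 1: simple average of the per-mailing open rates.
    rates = [opens / sent for _, sent, opens in mailings]
    avg_of_rates = sum(rates) / len(rates)

    # Method 2: pooled rate from the underlying counts.
    pooled_rate = sum(o for _, _, o in mailings) / sum(s for _, s, _ in mailings)

    print(f"average of rates: {avg_of_rates:.1%}")    # ~35.9%
    print(f"pooled rate:      {pooled_rate:.3%}")     # ~0.061%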

Well, it all depends: these three mails are to very, very different audiences. The first was a follow-up to a screensaver download from my site, the 2nd was a service message to users in the first week of ownership, and the 3rd was to my "house list," which includes contest names, list rental acquisitions, and other poor-quality names. So, should I even be averaging these at all?

And consider the variety of mailing sizes: that 1 million has some pretty heavy impact when I use the underlying numbers, but that 100% service message to only 500 people has a huge impact on the "average of the rates." Which should I be giving the "extra credit" to? This issue of mailing size is not only ignored, but it's somewhat misinterpreted by ExactTarget; I explain below.

So, how we aggregate is HUGELY important. Yes, we need to be consistent on how we calc metrics (unique or gross clicks? Net mailed or gross mailed?) but also on how we combine them. The industry is starting to conform to standards on clicks and opens, but how we aggregate is still up for grabs. I encourage the ESPC and other groups to lay out standards on how to aggregate for future work.

No description of MIME types; do they send everything multipart? (I won't even give them extra points off for the fact that they fat-fingered their definition of click-through rates by copying and pasting the definition of open rates on page 9… but I digress.)

Reading the report, we see more talk of day of the week without reporting a single test. More talk of the value of segmentation without any examples of segmented content vs. a control group. These are all correlations, not causal data, and they really could lead people to believe that cause and effect have been demonstrated when they have not.

The conclusions they draw are great press… but of course, they are nothing special, and should be things that most mailers know. That is, larger lists tend to have lower rates than smaller lists. Hmm. Perhaps Mr. Stewart doesn't recall his Econ 101, but there is this concept of the law of diminishing returns (and not the "law of big numbers"; most people get this wrong). It points out that, all things being equal, increasing the size of one thing does not create a correlated increase in other things. In effect, making a list larger makes it harder to get higher rates. Is it segmentation? Well, it doesn't have to be. I can send junk to 10 random people and still get a 30% open rate; I only need 3 people! Sending junk to 1,000,000 people requires lots more action before I can get that 30%.
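A quick simulation makes the point about small lists: give every recipient the same underlying 5% chance of opening and watch how wildly the observed open rate swings on a 10-name list compared with a 1,000,000-name list. The propensity, list sizes, and the use of NumPy are all my own assumptions for the illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    TRUE_OPEN_PROPENSITY = 0.05   # assumed: every recipient has the same 5% chance of opening

    for list_size in (10, 1_000, 1_000_000):
        # 1,000 simulated mailings; opens per mailing is binomial(list_size, p).
        opens = rng.binomial(list_size, TRUE_OPEN_PROPENSITY, size=1_000)
        rates = opens / list_size
        print(f"list of {list_size:>9,}: observed open rates span "
              f"{rates.min():.1%} to {rates.max():.1%}")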

What I don't see in this work is any manipulation of a segmented approach vs. a control/generic approach with similar content, offer, seasonality, call to action, etc. In fact, such an approach does show major impact, but this study does nothing to support that claim, even though it winds up being true.

There continues to be a difference between full service and self-service, and this kind of work demonstrates it clearly. They picked some conclusions out of the air (good ones, ones that clients should be aspiring to), then looked at observational data and tried to cram it all together. It really is the best of intentions, and they are trying hard to make it all sound good… but it's not research.

It was bad when eROI did it, but they don't have any research experience in their background. Morgan Stewart is better than that (see this press release for his background, including Targetbase, one of the best), and for every spark of greatness he shows in various articles, interviews, and recent work performed by ExactTarget, I wonder if he's overruled by his marketing dept. in sending out studies which don't really help expand our knowledge around cause and effect.

You will notice that e-Dialog does not release studies like this. We don't see the value in random observational work when it can be done correctly through experimental design. Our analytic team performs the requisite controlled testing to understand, for each client's unique business model, capabilities, and marketing goals, what the best approach would be. Yes, in other media we don't often have the control we have in email, so the many studies of self-reported data are all we have. But since it can be done better, why are we falling back into old bad habits?

Look, Chris and Morgan and their team are trying to do the right thing, and they deserve credit for that. But observational studies tend to have no end of problems; see There Are No Industry Averages! Get Over It!, The 'ladder' has many rungs… to fall off of., Stats are Meaningless, Pt II, and even Quote of the day…. This stuff isn't easy to do well, and it really does require some care and thought.

Is this work better than nothing? In my opinion, no. Bad research sullies what we are trying to do. But I know others disagree; the popular press loves printing anything which sounds like fact, even if, on closer examination, it really isn't. But I suspect the next work we see out of ExactTarget will be more rigorous… and that's the one you will want to pay attention to. This one, like most observational studies, should be used as directional learning: half the time it's wrong, half the time it's right, but it's hard to know which half is which.

Comments? [1]

* * *

 

Firefox 1.5 eats memory... · 03/19/2006 05:19 PM, Tech

But there are some ways to reduce that.

You know, if it weren't such a problem, there wouldn't be another UPDATE*4… (3/19/2006): MozillaZine posts the one and only Memory thread, which collects a lot of tips and such. No information on how to fix it, but all the collected wisdom around cases which seem to exacerbate the situation.

UPDATE UPDATE UPDATE (2/17/2006): Lead Firefox engineer Ben Goodger posts his point of view on the leaks. He's a smart guy, but the evidence doesn't support his claims. That is, he blames it on the "Back-Forward cache that retains the rendered document for the last few session history entries." Unfortunately, that's easily disproven, as Ben provides a setting to turn off this feature… and readers report continued memory growth. So, still an issue.

UPDATE UPDATE (2/5/2006): If you are using 1.5.0.1 (how many decimals do they need?), you may see some improvement. More details in this post by Jesse Ruderman.

UPDATE (1/9/2006): After reading these, see the ones I missed in some other great posts linked below… Now, back to your originally scheduled posting.

  1. I know we are all using tons of extensions. Some of them are worse than others. Try removing some extensions to see which ones are eating space. Hint: if it does something with every request, it's probably an issue.
  2. We all use FasterFox (Oh, you aren't? You probably should). Besides the fact that DNSStuff.com blocks you if it detects it on some of the more tweaked configs, many of its default settings are profligate, esp. with memory. (BTW, I could have used the word prodigal there as well, another good one to throw around pretentiously.) I went custom and did the following:

Oh, and every once in a while, check for updates. Extensions often get bugs fixed and sometimes that reduces memory usage. Tools|Extensions|Check for Updates button.

How bad does the memory usage get? If you are running Windows Task Manager, make sure to View|Select Columns and add "virtual memory size" to the Processes tab. That's frightening. Even more frightening is to really dig in with the free Process Explorer from Sysinternals, which really lets you solve process issues. Lots more info than you need… but when you need it, it's the best there is.
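If you'd rather script the check than squint at Task Manager, something like the sketch below reports memory for each Firefox process. It leans on the third-party psutil library, which is my own choice here, not something mentioned above.

    import psutil  # third-party: pip install psutil

    # Print virtual and resident memory for every running Firefox process.
    for proc in psutil.process_iter(["name", "memory_info"]):
        if "firefox" in (proc.info["name"] or "").lower():
            mem = proc.info["memory_info"]
            print(f"pid {proc.pid}: virtual {mem.vms / 2**20:,.0f} MB, "
                  f"resident {mem.rss / 2**20:,.0f} MB")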

Another recent find which has made my life much better:
PureText. It adds a hotkey that does "paste special as text" in any app. This is very helpful for Outlook and other RTF panes which really, really want to keep all the source formatting they can, when you really, really want your own formatting kept. It's free and wonderful. I highly recommend it.

UPDATE, Cont: So, still want more, eh?
Check out http://forums.mozillazine.org/viewtopic.php?t=354828 and http://labnol.blogspot.com/2006/01/solutions-to-common-firefox-15.html

Comments?

* * *

 

Why can't companies police themselves? · 03/05/2006 03:13 PM, Marketing

I get annoyed with greed. It's boring. Watching good things get destroyed by unrelenting yearning for more is all the more painful because it doesn't have to be.

I've raged before about affiliate spam, which I believe to be the number one reason fear of email has grown so dramatically. Spam was driven completely by affiliate links, and only in the past few years have other scams come along to phish and trojan in these waters.

Now, look at "splogging": blogs set up simply to get foolish people to click on AdSense links. Through typosquatting and other techniques, these people create networks of websites which link to each other to raise rankings, and simply show ads.

Google, like the affiliate sponsors, says that it isn't responsible for how people use its products, and that it's impossible to police them.

Yet somehow, mere mortals on the outside, without access to internal data, seem to be able to track and detect splogging networks. Since August 2005, the Fighting Splog blog has detected and reported thousands of these pseudo-sites. While he (or she) gets little response from Google, sometimes the sites are shut down, and sometimes not.

Now, one could say, "Why should Google spend any money at all trying to clean up? Volunteers like this sucker will do it all for free!" That's greed talking again. If we believe the world can be a better place, both those on the outside and the inside of an experience or situation have to work together. Google should be doing everything it can to either build up this talent base inside its org, and/or provide data feeds and tools which would empower outside judges to assist in detecting scumbags.

There are always the "who watches the watchers" people who immediately throw up edge cases (which, by definition, are rare, or they wouldn't be edge cases; they would be part of the border definition) and imply that some judges would attack legit groups who seem spammy, that lots of people would be abused, that the guidelines would be impossible to set, etc., etc. We hear the same thing about affiliates: for every 10,000 scumbag spams, there is a legit affiliate link by some small guy trying to make a few cents off recommending a product he likes. How do you define spam? etc., etc.

Yes, some edge cases might be hurt, and some scumbags might slip through. But given where we are heading now, where vigilantes feel that they have to establish RBLs (blackhole lists to help ISPs block mail perceived by some arbitrary person to be bad) and sites like Fighting Splog, it seems time to shape up. There's lots of pain far outside of the edge cases; you don't have to solve 100% of the problem to be happy about solving 80% of the problem.

I won't even start to talk about the issues around spyware forcing, but you get my drift. It's time for companies to be the first to admit there are problems, and the first to show how they will fix them, instead of sitting on them until others are forced to solve their problems, sometimes incorrectly.

Do I disagree with the vigilante movement? I don't, if it's rational: if it considers ways to mitigate harm, to allow appeals, to mete out punishment with justice. But looking at vigilante software installers who try to destroy the Pocket PCs of potential pirates, or "you can't get off of us ever" blocklists that corporations use without even thinking about what they are blocking… these are not rational reactions, and they are destroying the very thing they want to create: trust.

We talk of relevance in email all the time, but we dance around trust. I think we need to solve that one first. And companies like Google and Yahoo and MSN on the search side, and companies sending email on behalf of large corporations, like e-Dialog, Responsys, ExactTarget, and Digital Impact, need to consider how they can both enable outsiders to help and lay a foundation themselves.

PS: I was pointed to a book review in the Boston Globe about a new book called Integrity by Henry Cloud. He points out that many successful companies are led by leaders who focus on honesty and truth. These are character traits which, when exemplified at the top, grow through an organization. Probably a good book to read soon.

Comments?

* * *

 

Seth Grimes is always worth reading · 03/03/2006 08:06 PM, Analysis

If you like reading about the tools and technology driving our ability to turn data into something useful, you should be reading Seth Grimes. He has a column in Intelligent Enterprise magazine (itself a good read) and posts on various discussion groups and forums.

Here's a recent column pointing out some under-the-radar tools and approaches to analyzing data which herald the next generation of "OLAP" approaches.

New Directions For OLAP

An archive is here.

If ever someone needed a blog, he does, but until then, you can kind of keep tabs on his work at his consulting company's home page, Alta Plana.

(BTW, another guy who has written some good articles in IE is Stephen Few, who is mentioned in the Grimes article above.)

Comments?

* * *

 

Logs vs Panels, yet again: Slate · 02/28/2006 02:02 PM, Analysis Marketing

Paul Boutin has a nice article entitled Slate Has 8 Million Readers, Honest. Personally, I can’t believe we are still discussing these issues, but given that reconciling these data counts has helped my career grow, I shouldn’t complain.

Well worth a read: very concise, and also one of the few times you see folks refer to IBM's SurfAid, which is not such a bad package and has a longer history than most of the current offerings.

Comments?

* * *

 
