Tuesday, November 28, 2006

Distinguishing among Multi-Variate Testing Products (I'm going to regret this)

My last two posts listed many factors to consider when evaluating multi-variate Web testing systems. Each factor is important, which in practical terms means overlooking one can get you fired (say, if it prevents you from using a system you’ve purchased). So there’s really no way to avoid researching each factor in detail before making a choice.

And yet…there are so many factors. If you’re not actively engaged in a selection process, isn’t there a smaller number of items you might keep in mind when mentally classifying the different products?

One way to answer this is to look at the features which truly appear to distinguish the different vendors—things that either are truly unique, or that the vendor emphasizes in their own promotions. Warning: “unique” is a very dangerous term to use about software. Some things that are unique do not matter; some things that vendors believe are unique are not; some things that are unique in a technical sense can be accomplished using other, perfectly satisfactory approaches.

A list of distinguishing features only makes sense if you know what is commonly available. In general (and with some exceptions), you can expect a multi-variate testing system to:

- have an interface that lets marketers set up tests with minimal support from Web site technicians
- support Taguchi method multi-variate testing and simpler designs such as A/B splits
- use segmentation to deliver different tests to different visitors
- use JavaScript snippets on each Web page to call a test engine that returns test content (a sketch of this pattern follows the list)
- use persistent cookies, and sometimes stored profiles, to recognize repeat visitors
- provide real-time reporting of test results
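
To make the snippet-and-cookie mechanics concrete, here is a minimal sketch of how such a page tag might work. Everything here is hypothetical: the engine URL, cookie name, slot ID, and callback are placeholders, not any vendor's actual API.

```javascript
// Hypothetical page tag: a sketch of the common snippet/cookie pattern,
// not any vendor's actual code.

// 1. Recognize repeat visitors with a persistent cookie.
function getVisitorId() {
  var match = document.cookie.match(/(?:^|; )mvtVisitor=([^;]+)/);
  if (match) return match[1];                       // returning visitor
  var id = 'v' + Math.floor(Math.random() * 1e9);   // new visitor
  var expires = new Date(Date.now() + 365 * 24 * 3600 * 1000);
  document.cookie = 'mvtVisitor=' + id +
                    '; expires=' + expires.toUTCString() + '; path=/';
  return id;
}

// 2. Call the test engine; it returns the content variant this visitor
//    should see for the tagged slot on this page.
var script = document.createElement('script');
script.src = 'https://engine.example.com/variant?visitor=' + getVisitorId() +
             '&page=' + encodeURIComponent(location.pathname) +
             '&slot=headline';
document.getElementsByTagName('head')[0].appendChild(script);

// 3. The engine's response invokes this callback with the chosen variant.
function renderVariant(slotId, html) {
  document.getElementById(slotId).innerHTML = html;
}
```

The script-tag callback (an early JSONP-style pattern) is one way snippets of this era worked around the browser's same-origin restriction when the test engine lived on the vendor's domain.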

That said, here is what strikes me as the single most distinguishing feature of each multi-variate testing vendor (listed alphabetically). No doubt each vendor has other items it would like to add—I’ve listed just one feature per vendor to make as clear as possible that this isn’t a comprehensive description.

- Offermatica: can run multi-page and multi-session tests. This isn’t fully unique, but some products only test components within a single page.

- Optimost: offers “optimal design” in addition to the more common Taguchi method for multi-variate testing. According to Optimost, "optimal design" does a better job than Taguchi of dealing with relationships among variables.
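
As a rough illustration of why Taguchi-style designs are popular here: with seven page elements, each in two versions, a full factorial test needs 2^7 = 128 recipes, while a standard L8 orthogonal array covers the main effect of every element in just eight. The sketch below is illustrative only; the factor names are invented, and real products generate these designs for you.

```javascript
// Standard L8 orthogonal array: 8 recipes covering the main effects of up
// to seven two-level factors (0 = version A, 1 = version B).
var L8 = [
  [0,0,0,0,0,0,0],
  [0,0,0,1,1,1,1],
  [0,1,1,0,0,1,1],
  [0,1,1,1,1,0,0],
  [1,0,1,0,1,0,1],
  [1,0,1,1,0,1,0],
  [1,1,0,0,1,1,0],
  [1,1,0,1,0,0,1]
];

// Hypothetical page elements under test, one per column.
var factors = ['headline', 'heroImage', 'buttonColor', 'buttonText',
               'price', 'layout', 'testimonial'];

// Each visitor gets one of the eight recipes instead of one of 128.
function recipeFor(visitorNumber) {
  var row = L8[visitorNumber % L8.length];
  var recipe = {};
  for (var i = 0; i < factors.length; i++) {
    recipe[factors[i]] = row[i] === 0 ? 'A' : 'B';
  }
  return recipe;
}
```

The trade-off Optimost is pointing at: an array this small confounds interactions between elements with main effects, which is why designs that model those relationships explicitly can be attractive.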

- SiteSpect: delivers test content by intercepting and rewriting Web traffic rather than inserting JavaScript snippets. This can be done by an on-site appliance or a hosted service. (See the more detailed explanation from SiteSpect in a comment on yesterday’s post.)
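
To see how the interception approach differs from page tags, here is a toy rewriting proxy. It is written as a small Node.js program purely for illustration; nothing here resembles SiteSpect's actual appliance, and the hostname and markup are made up.

```javascript
// Toy rewriting proxy: sits between visitors and the origin server and
// swaps test content into pages on the way through. Illustration only.
var http = require('http');

http.createServer(function (clientReq, clientRes) {
  // Pass the visitor's request through to the real Web server.
  var proxyReq = http.request({
    host: 'origin.example.com',
    path: clientReq.url,
    method: clientReq.method,
    headers: clientReq.headers
  }, function (originRes) {
    var chunks = [];
    originRes.on('data', function (c) { chunks.push(c); });
    originRes.on('end', function () {
      // Substitute the variant under test; no snippet on the page itself.
      var body = Buffer.concat(chunks).toString()
        .replace('<h1>Free shipping</h1>', '<h1>Save 10% today</h1>');
      var headers = originRes.headers;
      delete headers['content-length'];   // length changed after rewrite
      clientRes.writeHead(originRes.statusCode, headers);
      clientRes.end(body);
    });
  });
  clientReq.pipe(proxyReq);
}).listen(8080);
```

The practical consequence is that pages need no tagging at all, at the cost of putting a device (or hosted service) in the request path.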

- Verster: uses AJAX/DHTML to generate test content within the visitor’s browser rather than inserting it before the page is sent. All test content remains on the client’s Web server.
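
For contrast with the server-side approaches above, here is a sketch of that client-side pattern: the page ships with its default content, and a script rewrites the marked element in the browser. The element ID and variant markup are invented for the example.

```javascript
// Browser-side substitution: the default page loads from the client's own
// Web server, then script rewrites the marked element in place.
var variants = {
  A: '<h1>Free shipping on every order</h1>',
  B: '<h1>Save 10% today only</h1>'
};

window.onload = function () {
  // In practice the assignment would persist per visitor (see the cookie
  // sketch earlier); a coin flip keeps this example short.
  var chosen = Math.random() < 0.5 ? 'A' : 'B';
  document.getElementById('headline').innerHTML = variants[chosen];
};
```

Because the substitution happens after delivery, the test content itself can stay on the client's own server rather than the vendor's, which appears to be the point of Verster's claim.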

There are (at least!) two more vendors who offer multi-variate testing but are not exactly focused on this area:

- Kefta: tightly integrates multi-variate testing results with business rules and system-generated visitor profiles used to select content. Kefta considers itself a “dynamic targeting” system.

- Memetrics: supports “marketing strategies optimization” with installed software to build “choice models” of customer preferences across multiple channels. Also has a conventional, hosted page optimization product using A/B and multi-variate methods.

2 comments:

Unknown said...

We are a past customer of Optimost. We had a very bad experience with them and do NOT recommend them. They say one thing and do another. They provided very poor results, made mistakes, and had inexperienced people work our account. We do not recommend this company to anyone. Offermatica or Google would be a much better use of your time and money.

David Raab said...

I've posted Mike's comment, although I would have preferred that he identify himself. Obviously he is a very unhappy former user, but we don't know Optimost's side of the story. This is a fundamental problem with Web comments: there needs to be some mechanism to put them in perspective. If anyone has suggestions for how to handle this better, please let me know.