Wednesday, April 29, 2009

New Webinars and White Paper

I have two Webinars and a newly published white paper you might find interesting:
  • Webinar Making the Right Start with Demand Generation, Thursday, April 30, 2:00 p.m. Eastern. This will discuss preparing for your new demand generation system, including requirements definition, vendor selection, and the initial deployment. I'll talk a bit more about results of the deployment survey. Sponsored by Marketo. Click here to register.

  • Webinar How to 'walk the walk' with the Sales 2.0 Approach to Aligning Sales & Marketing, Wednesday, May 13, 1:30 p.m. Eastern. This will feature me, Sales 2.0 guru Anneke Seley, and Genius.com CEO David Thompson in a discussion format. Sponsored by Genius.com. Click here to register.

  • White Paper When Best Practices Go Bad: New Rules for Sales and Marketing Management. Best practices that were valid just a few years ago are now obsolete. This paper shows why and offers some replacements. Also sponsored by Genius.com. Download here.

Demand Generation Implementation Survey: Half of Users Deploy Basic Features in One Week

Summary: a small survey of demand generation users shows that more than half deployed basic demand generation features within one week, and about 75% within one month. More complicated features take longer, but in general, 80% of the features ever deployed are in place by the end of two months. This suggests that marketers are quickly gaining value from their systems, but also highlights the need for continued training to be sure they take advantage of all system capabilities.

*********************

Yesterday’s post described the respondents to my online survey on demand generation implementation. Today we get to the main event: what people actually do.

Table 1 shows the actual responses, with the items ordered by % used (that is, how many respondents ultimately deployed a given function).


Table 1. How soon after starting implementation did you first do...? (number of respondents)

function                           | first week | first month | second month | third month | later | never | total | % used
outbound email campaign            |     22     |      8      |      5       |      1      |   0   |   0   |  36   |  1.00
campaign response reporting        |     12     |     15      |      3       |      2      |   4   |   0   |  36   |  1.00
lead transfer to CRM               |     18     |      5      |      8       |      0      |   2   |   2   |  35   |  0.94
CRM integration / synchronization  |     23     |      4      |      2       |      1      |   3   |   3   |  36   |  0.92
landing page                       |     19     |      9      |      2       |      0      |   2   |   4   |  36   |  0.89
lead scoring                       |     12     |      7      |      3       |      0      |  10   |   4   |  36   |  0.89
multi-step lead nurturing campaign |      8     |     10      |      7       |      2      |   5   |   4   |  36   |  0.89
Web site analytics                 |     14     |      7      |      6       |      2      |   2   |   4   |  35   |  0.89
Webinar campaign                   |      5     |     13      |      5       |      1      |   6   |   5   |  35   |  0.86
campaign ROI reporting             |      7     |     11      |      3       |      0      |   8   |   6   |  35   |  0.83
data cleansing process             |     10     |      7      |      1       |      3      |   7   |   8   |  36   |  0.78
pay per click campaign reporting   |      9     |      5      |      1       |      2      |   6   |  12   |  35   |  0.66
Web page survey                    |      3     |      6      |      5       |      1      |   6   |  15   |  36   |  0.58
email survey                       |      2     |      2      |      6       |      1      |   7   |  16   |  34   |  0.53
combined                           |    164     |    109      |     57       |     16      |  68   |  83   |  497  |  0.83



Looking at the table, we see:

- Virtually everyone (more than 90%) does outbound email, campaign response reporting, lead transfer to CRM, and CRM integration. No surprises there.

- Just slightly fewer (80-90%) do landing pages, lead scoring, multi-step lead nurturing, Web site analytics, Webinars and campaign ROI reporting. I’m a bit surprised to see Webinars rank so highly, given that support for them is rather limited in many demand generation systems. But they’re certainly a popular marketing tool, so I guess people will run them through their demand generation system regardless. The high utilization of the other relatively advanced features (lead scoring, lead nurturing and ROI reporting) is impressive, although perhaps to be taken with a grain of salt.

- Other features are less widely employed (53-78%), including data cleansing, pay per click (PPC) campaign reporting, and Web and email surveys. The latter three make sense: it’s hard to get PPC costs into a demand generation system, so many people probably don’t bother. Surveys are simply not that common, bearing in mind that most data is gathered through forms on landing pages. On the other hand, the relatively low utilization of data cleansing is a bit scary because I strongly suspect nearly everyone needs it. This may reflect the fairly limited data cleansing tools in most demand generation products.

So far so good. But the main purpose of the survey was to understand when and how quickly the different functions get deployed, to get a more nuanced view of the implementation process – and, in particular, to see what marketers can realistically expect to accomplish in the first week.

Table 2 addresses this by calculating the cumulative fraction of respondents who had deployed each function by each milestone (one week after implementation, one month, two months, etc.). The calculation excludes people who never deploy a given function, since we’re trying to understand how quickly the people who use a function deploy it.
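Before looking at the numbers, here's a minimal Python sketch of the calculation behind Table 2, using two rows of counts from Table 1. (The counts come from the survey; the code itself is just my illustration.)

```python
# Cumulative share of "ever deployed" respondents who had deployed a
# function by each milestone; "never" responses are excluded from the base.
milestones = ["first week", "first month", "second month", "third month", "later"]
counts = {
    "outbound email campaign": [22, 8, 5, 1, 0],  # never = 0 (Table 1 row)
    "lead transfer to CRM":    [18, 5, 8, 0, 2],  # never = 2, excluded from base
}

for function, deployed in counts.items():
    base = sum(deployed)          # respondents who ever deployed this function
    running, cumulative = 0, []
    for n in deployed:
        running += n
        cumulative.append(round(running / base, 2))
    print(function, dict(zip(milestones, cumulative)))

# outbound email campaign -> 0.61, 0.83, 0.97, 1.00, 1.00 (matches Table 2)
# lead transfer to CRM    -> 0.55, 0.70, 0.94, 0.94, 1.00
```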


Table 2. Cumulative deployment rate (base: ever deployed)

function                           | first week | first month | second month | third month | later | % used
landing page                       |    0.59    |    0.88     |     0.94     |     0.94    | 1.00  |  0.89
outbound email campaign            |    0.61    |    0.83     |     0.97     |     1.00    | 1.00  |  1.00
CRM integration / synchronization  |    0.70    |    0.82     |     0.88     |     0.91    | 1.00  |  0.92
campaign response reporting        |    0.33    |    0.75     |     0.83     |     0.89    | 1.00  |  1.00
lead transfer to CRM               |    0.55    |    0.70     |     0.94     |     0.94    | 1.00  |  0.94
Web site analytics                 |    0.45    |    0.68     |     0.87     |     0.94    | 1.00  |  0.89
multi-step lead nurturing campaign |    0.25    |    0.56     |     0.78     |     0.84    | 1.00  |  0.89
Webinar campaign                   |    0.17    |    0.60     |     0.77     |     0.80    | 1.00  |  0.86
data cleansing process             |    0.36    |    0.61     |     0.64     |     0.75    | 1.00  |  0.78
pay per click campaign reporting   |    0.39    |    0.61     |     0.65     |     0.74    | 1.00  |  0.66
campaign ROI reporting             |    0.24    |    0.62     |     0.72     |     0.72    | 1.00  |  0.83
Web page survey                    |    0.14    |    0.43     |     0.67     |     0.71    | 1.00  |  0.58
lead scoring                       |    0.38    |    0.59     |     0.69     |     0.69    | 1.00  |  0.89
email survey                       |    0.11    |    0.22     |     0.56     |     0.61    | 1.00  |  0.53
combined                           |    0.40    |    0.66     |     0.80     |     0.84    | 1.00  |  0.83




I’ve arbitrarily chosen 75% deployment as a milestone. Tracking when each function crosses that threshold shows the relative deployment speed and presents a very interesting pattern:

- The basic demand generation activities needed for a simple email campaign (outbound email, landing pages, CRM integration and response reporting) are almost fully deployed in the first month. In fact, about half the users deploy them in the first week.

- Lead transfer to CRM doesn’t quite make the one-month cut-off, but it’s also deployed by half the people in the first week, and by almost everyone by the second month. Clearly moving leads to sales is a core demand generation function. The somewhat slower deployment, if it’s anything more than noisy data, might reflect the added time needed to set up a lead transfer process in cooperation with sales. You’ll note that the preceding four items were totally under marketing’s control.

- Web site analytics shows a pattern like lead transfer: nearly half the people do it immediately, but then there is a lag until it reaches nearly 90% deployment in month two. This might also reflect the need for help from an outside department (whoever runs the company Web site). It might also reflect relatively low urgency, since other Web analytics tools are often already in place. But bear in mind that detailed activity tracking of individual Web site visitors (not provided by traditional Web analytics) requires the demand generation system’s tracking code to be installed.

- Multi-step lead nurturing and Webinar campaigns are both fairly complex projects, so it makes sense that deployment of these builds slowly and steadily through the first few months. We can probably infer that most marketers start with something simpler and then add these as they become more proficient with the systems.

- Most of the remaining items (data cleansing, PPC reporting, Web and email surveys) are relatively low priority, as reflected in their % used scores, so relatively slow deployment makes sense. The two exceptions are campaign ROI reporting and lead scoring, which have high ultimate usage rates (83% and 89%) but take a long time to reach those levels. Both are relatively complicated and require cooperation from external departments: ROI reporting needs revenue from sales and approved formulas from finance; lead scoring needs coordination with sales management. I think it’s reasonable to conclude that the importance of these items pushes marketers to deploy them, but their complexity and the need for external cooperation slows the implementation.

Is there a trend in deployment speed over time? I did some analysis of results by implementation year, and the pace does seem to be picking up. But it's a tricky analysis since more recent implementations haven't had time to deploy the longer-lead functions. I'll revisit this if time permits and let you know if I find anything.

Table 3 is similar to Table 2, except that the fractions are calculated including never-deployed cases. This gives a more realistic view of the actual pace of deployment for different features. The sequencing is pretty much the same as Table 2, with the notable exception of lead scoring and campaign ROI reporting, which rank somewhat higher.

Table 3. Cumulative deployment rate (including never deployed)

function                           | first week | first month | second month | third month | later | never
outbound email campaign            |    0.61    |    0.83     |     0.97     |     1.00    | 1.00  |   -
landing page                       |    0.53    |    0.78     |     0.83     |     0.83    | 0.89  |  0.11
CRM integration / synchronization  |    0.64    |    0.75     |     0.81     |     0.83    | 0.92  |  0.08
campaign response reporting        |    0.33    |    0.75     |     0.83     |     0.89    | 1.00  |   -
lead transfer to CRM               |    0.51    |    0.66     |     0.89     |     0.89    | 0.94  |  0.06
Web site analytics                 |    0.40    |    0.60     |     0.77     |     0.83    | 0.89  |  0.11
multi-step lead nurturing campaign |    0.22    |    0.50     |     0.69     |     0.75    | 0.89  |  0.11
lead scoring                       |    0.33    |    0.53     |     0.61     |     0.61    | 0.89  |  0.14
Webinar campaign                   |    0.14    |    0.51     |     0.66     |     0.69    | 0.86  |  0.11
campaign ROI reporting             |    0.20    |    0.51     |     0.60     |     0.60    | 0.83  |  0.17
data cleansing process             |    0.28    |    0.47     |     0.50     |     0.58    | 0.78  |  0.22
pay per click campaign reporting   |    0.26    |    0.40     |     0.43     |     0.49    | 0.66  |  0.34
Web page survey                    |    0.08    |    0.25     |     0.39     |     0.42    | 0.58  |  0.42
email survey                       |    0.06    |    0.12     |     0.29     |     0.32    | 0.53  |  0.47
combined                           |    0.33    |    0.55     |     0.66     |     0.70    | 0.83  |  0.17



Summary

Pulling back from these details, what I find really impressive is how quickly, in general, the features are deployed: 40% of the features ever deployed are in place within the first week; two-thirds within the first month; and 80% by the second month. An optimist might argue that this shows marketers are quickly gaining value from their systems. A pessimist could say it shows that marketers learn a few things quickly and then stop.

The slow-but-steady deployment of complex processes like ROI reporting and lead scoring suggests that neither view is quite accurate, since marketers do add some features over time. It’s also true that this survey didn’t capture some of the more esoteric demand generation applications that marketers might add later. So it does seem there is at least some continued development after the initial implementation.

Circling back to the original question of how much marketers can expect to accomplish during the first week, the short answer is: quite a bit, actually. But it still takes a couple of months to get fully up to speed, and there is certainly a need for continued training to ensure you get the full value of any demand generation system. The job is far from done the day the implementation team walks out the door.

Tuesday, April 28, 2009

Demand Generation Implementation Survey - Background Results

I've been having a dandy time analyzing the results of my Demand Generation Implementation Survey. Responses are still coming in but I thought I'd at least post some preliminary results to whet your appetite. Hopefully I'll be able to post a more substantive analysis tonight or tomorrow.

As of April 29, I've received 40 responses, of which I've discarded two as incomplete and two because they related to vendors I considered irrelevant (Zoho and Ad Giants PitchRocket). Obviously any survey based on 36 net responses (and self-selected at that) has little statistical value, but I still think the broad results are extremely interesting.

The survey was promoted on this blog and the Raab Guide site, but primarily via posts on Twitter. (Thanks to the many people who 'retweeted' the request). This introduces yet another source of sample bias. One measure of this is the distribution of vendors reported by the respondents, which clearly doesn't reflect the installed base of the industry. This distribution actually pleases me, since it means we have results from users of many different systems. (Obviously, however, the quantities are too small and sample bias too significant to break out results by vendor.)


nbr responses | vendor
            8 | Marketo
            6 | Eloqua
            3 | Genius.com
            3 | LoopFuse
            3 | Pardot
            2 | Market2Lead
            2 | Treehouse Interactive
            1 | eTrigue
            1 | Vtrenz (Silverpop)
            7 | No Response
           36 | total


Another intriguing bit of contextual information is the deployment date of the systems. Two respondents actually reported future dates -- I'd guess those were typos but, since responses were anonymous, I couldn't ask. There was another response dated 6/01/2208, which I treated as 2008.

I was also curious to see the six responses for implementations during 3/09 and 4/09; obviously, these companies haven't gotten past their first or second month. Most of the answers for those entries reported features deployed within the first two months, or made the reasonable selection of 'later', so they could quite well be accurate. One respondent reported deployment on 4/24/09 (i.e., last week) but showed several features as deployed in month three. I assume this represents their plans rather than reality. Fair enough.

In any case, the ten deployments in the first four months of 2009 (or 12 if you count the two future dates) and 12 in 2008 highlight the newness and fast growth of the demand generation industry. There were just five earlier deployments, including one from 1990, which is almost surely an error.


nbr responses | deployment date
            1 | 10/09
            1 | 8/09
            3 | 4/09
            3 | 3/09
            1 | 2/09
            3 | 1/09
           12 | 2008
            2 | 2007
            2 | 2006
            1 | 2005
            1 | 1990
            6 | No Response
           36 | total



One final bit of data, this one more substantive: I asked how well their experience with deployment, and with their systems as a whole, had met their expectations. The results strike me as extremely positive -- about two-thirds rated both experiences as better than expected, with just a bit more satisfaction with the systems than the implementation. Only a couple of respondents felt things were worse than expected. Again, we have to consider sample bias. But even so, this seems to be a pretty happy set of campers.

I also looked to see if there was any relationship between deployment year and satisfaction, and it seems newer customers may be a bit happier. But the numbers are very small, recency may also introduce some bias, and in any event even the earlier customers are highly satisfied. So I don't consider this more than a hint of what might be the case.


How would you rate your experience with... (% of respondents)

                      | better than expected | about as expected | worse than expected | total
system implementation |         0.64         |       0.33        |        0.03         | 1.00
the system itself     |         0.67         |       0.28        |        0.06         | 1.00




How would you rate your experience with... (nbr responses)

                      | better than expected | about as expected | worse than expected | total
system implementation |          23          |        12         |          1          |  36
the system itself     |          24          |        10         |          2          |  36

Tuesday, April 21, 2009

Demand Generation Implementation -- Take My Survey, Please!

Update - 4/23/09: I have some preliminary results, but would still like more responses. Click here to take survey. One result of interest: how quickly people deploy the features they eventually use. I had expected people to start slow and add more features over time. Not so much. It seems that by the end of the first month, people have already used 2/3 of the features they will ever use. Interesting. Here is the cumulative percentage of total features deployed based on when they were first deployed:

time since system deployment  | first week | first month | second month | third month | later
cumulative % of used features |    38%     |     65%     |     81%      |     86%     | 100%


The recent discussion triggered by my post Pedowitz Group Offers Free Support for New Eloqua Clients raises an important question: Just how much can marketers realistically expect to accomplish during the initial stages of a demand generation system deployment?

The obvious answer is “it depends”, but that just begs the question, “Depends on what?” My own take is that the main factor is how well the marketers know what they want to do – that is, do they understand their data, know what marketing campaigns they want to set up, have the materials in hand and process flows defined, know what their scoring rules should be, etc.

In theory, those could be defined even before a marketing automation system is selected. You actually need a pretty good idea of the answers to select the right system. One might also think that most companies would already have these processes in place, even if they’re not formally defined. Yet my impression from industry vendors and consultants is that most deployments start with a fairly extended planning stage where companies either document their existing campaigns and processes or, more likely, define a large number of new ones.

This makes sense to a certain degree, since a demand generation system allows vastly more activity, specified in more detail, than was possible without one. A new system also presents an opportunity to revisit and update existing practices rather than simply reproducing them.

In any event, I’m curious about people’s actual experiences. I’ve created a little poll using SurveyMonkey – if you’ve implemented a demand generation system, please click below to fill it out. Of course, I’ll report on results when I have some. Thanks!

Click here to take survey

Thursday, April 16, 2009

Lyzasoft: Independence for Analysts and Maybe Some Light on Shadow IT

Long-time readers of this blog know that I have a deep fondness for QlikView as a tool that lets business analysts do work that would otherwise require IT support. QlikView has a very fast, scalable database and excellent tools to create reports and graphs. But quite a few other systems offer at least one of these.*

What really sets QlikView apart is its scripting language, which lets analysts build processing streams to combine and transform multiple data sources. Although QlikView is far from comparable with enterprise-class data integration tools like Informatica, its scripts allow sophisticated data preparation that is vastly too complex to repeat regularly in Excel. (See my post What Makes QlikTech So Good for more on this.)

Lyzasoft Lyza is the first product I’ve seen that might give QlikView a serious run for its money. Lyza doesn’t have scripts, but users can achieve similar goals by building step-by-step process flows to merge and transform multiple data sources. The flows support different kinds of joins and Excel-style formulas, including if statements and comparisons to adjacent rows. This gives Lyza enough power to do most of the manipulations an analyst would want in cleaning and extending a data set.

Lyza also has the unique and important advantage of letting users view the actual data at every step in the flow, the way they’d see rows on a spreadsheet. This makes it vastly easier to build a flow that does what you want. The flows can also produce reports, including tables and different kinds of graphs, which would typically be the final result of an analysis project.

All of that is quite impressive and makes for a beautiful demonstration. But plenty of systems can do cool things on small volumes of data – basically, they throw the data into memory and go nuts. Everything about Lyza, from its cartoonish logo to its desktop-only deployment to the online store selling at a sub-$1,000 price point, led me to expect the same. I figured this would be another nice tool for little data sets – which to me means 50,000 to 100,000 rows – and nothing more.

But it seems that’s not the case. Lyzasoft CEO Scott Davis tells me the system regularly runs data sets with tens of millions of rows and the biggest he’s used is 591 million rows and around 7.5-8 GB.

A good part of the trick is that Lyza is NOT an in-memory database. This means it’s not bound by the workstation’s memory limits. Instead, Lyza uses a columnar structure with indexes on non-numeric fields. This lets it read required data from the disk very quickly. Davis also said that in practice most users either summarize or sample very large data sets early in their data flows to get down to more manageable volumes.

Summarizing the data seems a lot like cheating when you’re talking about scalability, so that didn’t leave me very convinced. But you can download a free 30 day trial of Lyza, which let me test it myself.

Bottom line: my embarrassingly ancient desktop (2.8 GHz CPU, 2 GB RAM, Windows XP) loaded a 400 MB CSV file with about 430,000 rows in just over 6 minutes. That’s somewhat painful, but it does suggest you could load 4 GB in an hour – a practical if not exactly desirable period. The real issue is that each subsequent step could take similar amounts of time: copying my 400 MB set to a second step took a little over 2 minutes and, more worrisome, subsequent filters took the same 2 minutes even though they reduced the record count to 85,000 then 7,000 then 50. This means a complete processing flow on a large data set could run for hours.

Still, a typical real-world scenario would be to do development work on small samples, and then only run a really big flow once you knew you had it right. So even the load time for subsequent steps is not necessarily a show-stopper.

Better news is that rerunning an existing filter with slightly different criteria took just a few seconds, and even rerunning the existing flow from the start was much faster than the first time through. Users can also rerun all steps after a given point in the flow. This works because Lyza saves the intermediate data sets. It means that analysts can efficiently explore changes or extend an existing project without waiting for the entire flow to re-execute. It’s not as nice as running everything on a lightning-fast data server, but most analysts would find it gives them all the power they need.
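For the curious, here's a conceptual sketch of why saving intermediate data sets makes these re-runs cheap. This is a toy illustration of the general technique, not anything from Lyza's actual code:

```python
# Toy flow engine: each step's output is saved, so changing step N only
# re-executes steps N and later (Lyza persists these intermediates to disk).
class Flow:
    def __init__(self, steps):
        self.steps = steps   # list of functions: data -> data
        self.cache = []      # saved output of each step

    def run(self, data, start=0):
        if start > 0:
            data = self.cache[start - 1]      # resume from saved intermediate
        for i in range(start, len(self.steps)):
            data = self.steps[i](data)
            if i < len(self.cache):
                self.cache[i] = data
            else:
                self.cache.append(data)
        return data

flow = Flow([
    lambda rows: [r for r in rows if r["amount"] > 0],    # filter step
    lambda rows: sorted(rows, key=lambda r: r["amount"]), # sort step
])
data = [{"amount": 5}, {"amount": -1}, {"amount": 3}]
flow.run(data)                          # first run executes both steps
flow.steps[1] = lambda rows: rows[:2]   # tweak only the second step
flow.run(data, start=1)                 # re-runs from step 1, reusing the cached filter output
```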

As a point of comparison, loading that same 400 MB CSV file took almost 11 minutes with QlikView. I had forgotten how slowly QlikView loads text files, particularly on my limited CPU. On the other hand, loading a 100 MB Excel spreadsheet took about 90 seconds in Lyza vs. 13 seconds in QlikView. QlikView also compressed the 400 MB to 22 MB on disk and about 50 MB in memory, whereas Lyza more than doubled the data to 960 MB on disk, due mostly to indexes. Memory consumption in Lyza rose only about 10 MB.

Of course, compression ratios for both QlikView and Lyza depend greatly on the nature of the data. This particular set had lots of blanks and Y/N fields. The result was much more compression than I usually see in QlikView and, I suspect, more expansion than usual in Lyza. In general, Lyza seems to make little use of data compression, which is usually a key advantage of columnar databases. Although this seems like a problem today, it also means there's an obvious opportunity for improvement as the system finds itself dealing with larger data sets.

What I think this boils down to is that Lyza can effectively handle multi-gigabyte data volumes on a desktop system. The only reason I’m not being more definite is I did see a lot of pauses, most accompanied by 100% CPU utilization, and occasional spikes in memory usage that I could only resolve by closing the software and, once or twice, by rebooting. This happened when I was working with small files as well as the large ones. It might have been the auto-save function, my old hardware, crowded disk drives, or Windows XP. On the other hand, Lyza is a young product (released September 2008) with only a dozen or so clients, so bugs would not be surprising. I'm certainly not ready to say Lyza doesn't have them.

Tracking down bugs will be harder because Lyza also runs on Linux and Mac systems. In fact, judging by the Mac-like interface, I suspect it wasn't developed on a Windows platform. According to Davis, performance isn’t very sensitive to adding memory beyond 1 GB, but high speed disk drives do help once you get past 10 million rows or so. The absolute limit on a 32 bit system is about 2 billion rows, a constraint related to addressable memory space (2^31 = about 2 billion) rather than anything peculiar to Lyza. Lyza can also run on 64 bit servers and is certified on Intel multi-core systems.

Enough about scalability. I haven’t done justice to Lyza’s interface, which is quite good. Most actions involve dragging objects into place, whether to add a new step to a process flow, move a field from one flow stage to the next, or drop measures and dimensions onto a report layout. Being able to see the data and reports instantly is tremendously helpful when building a complex processing flow, particularly if you’re exploring the data or trying to understand a problem at the same time. This is exactly how most analysts work.

Lyza also provides basic statistical functions including descriptive statistics, correlation and Z-test scores, a mean vs. standard deviation plot, and stepwise regression. This is nothing for SAS or SPSS to worry about; in fact, even Excel has more options. But it’s enough for most purposes. Similarly, data visualization is limited compared to a Tableau or ADVIZOR, but allows some interactive analysis and is more than adequate for day-to-day purposes.

Users can combine several reports onto a single dashboard, adding titles and effects similar to a Powerpoint slide. The report remains connected to the original workflow but doesn’t update automatically when the flow is rerun.

Intriguingly, Lyza can also display the lineage of a table or chart value. It traces the data from its source through all subsequent workflow steps, listing any transformations or selections applied along the way. Davis sees this as quickly answering the ever-popular question, “Where did that number come from?” Presumably this will leave more time to discuss American Idol.


Users can also link one workflow to another by simply dragging an object onto a new worksheet. This is a very powerful feature, since it lets users break big workflows into pieces and lets one workflow feed data into several others. The company has just taken this one step further by adding a collaboration server, Lyza Commons, that lets different users share workflows and reports. Reports show which users send and receive data from other users, as well as which data sets send and receive information from other data sets.

Those reports are more than just neat: they're documenting data flows that are otherwise lost in the “shadow IT” which exists outside of formal systems in most organizations. Combined with lineage tracing, this is where IT departments and auditors should start to find Lyza really interesting.

A future version of Commons will also let non-Lyza users view Lyza reports over the Web – further extending Lyza beyond the analyst’s personal desktop to be an enterprise resource. Add in the 64-bit capability, an API to call Lyza from other systems, and some other tricks the company isn’t ready to discuss in public, and there’s potential here to be much more than a productivity tool for analysts.

This brings us back to pricing. If you were reading closely, you noticed that little comment about Lyza being priced under $1,000. Actually there are two versions: a $199 Lyza Lite that only loads from Microsoft Excel, Access and text files, and the $899 regular version that can also connect to standard relational databases and other ODBC sources and includes the API.

This isn’t quite as cheap as it sounds because these are one year subscriptions. But even so, it is an entry cost well below the several tens of thousands of dollars you’d pay to get started with full versions of QlikView or ADVIZOR, and even a little cheaper than Tableau. The strategy of using analysts’ desktop as a beachhead is obvious, but that doesn’t make it any less effective.

So, should my friends at QlikView be worried? Not right away – QlikView is a vastly more mature product with many features and capabilities that Lyza doesn’t match, and probably can’t unless it switches to an in-memory database. But analysts are QlikView’s beachhead too, and there’s probably not enough room on their desktops for both systems. With a much lower entry price and enough scalability, data manipulation and analysis features to meet analysts’ basic needs, Lyza could be the easier one to pick. And that would make QlikView's growth much harder.

----------------------------

*ADVIZOR Solutions and Tableau Software have excellent visualization with an in-memory database, although they’re not so scalable. PivotLink, Birst and LucidEra are on-demand systems that are highly scalable, although their visualization is less sophisticated. Here are links to my reviews: ADVIZOR, Tableau, PivotLink, Birst and LucidEra.

Tuesday, April 14, 2009

LeadLife Mixes Advanced and Simple Features

I have my little checklist of features to define whether a demand generation system is suited for simple or complex marketing programs. (You'll find most of the list in our report on Vendor Usability Scores on the Raab Guide site.) Sadly, some vendors didn't get the memo and have built products that straddle my categories.

Consider LeadLife. It offers many features that appeal to large marketing departments: fine-grained user rights management, rule-based content selection, multiple scores per lead, central processes to score leads and transfer them to sales, APIs to integrate with external Web forms, campaign cost tracking, detailed ROI reporting, and project management with tasks. But it lacks other features that are equally advanced: approval workflows, templates linked to deployed content, split tests, campaign actions to update data values, support for channels beyond email, and, most important, any way to direct leads from one campaign to another.

One way to explain this particular mix of features is to note that LeadLife’s founders previously sold sales automation software. Many of LeadLife's strengths and weaknesses are typical for sales automation systems.

Of course, Joe the Marketer won't care about my classification scheme. LeadLife president Lisa Cramer says the system is targeted at mid-size firms (which she defines as 25 or more employees), not large enterprises, and she should know. Still, it’s probably significant that “flexibility,” not simplicity, was the first term she used to describe the system. Her second term was “intuitive”, so she wasn’t saying the system is designed only for expert users. To me, those terms reflect an ambition to support more than just the simplest marketing programs.

I did in fact find the user interface in LeadLife to be particularly well designed. It follows some principles I first heard many years ago, the gist of which was to divide the screen into fixed regions that always display the same type of information (e.g., navigation folders on the left, detail data in the center) and avoid windows that pop up and disappear in random locations. Today that looks a bit old-fashioned, but it really does make things easier because users always know what to expect. On the other hand, LeadLife has inexplicably chosen a green-based color scheme that can only be described as institutional.

I’ll forgive them the color scheme because LeadLife had the good sense to agree with me on the much more important issue of flow-chart vs. step-based campaign design. LeadLife campaigns are defined strictly as a list of steps, without any branching at all – not even the if/then/else logic that some vendors embed within a single step. In fact, Cramer told me that LeadLife originally tried a flow chart approach, but discarded it because clients got lost. My point exactly.

Notwithstanding the austere simplicity of its campaign flows, LeadLife is a very powerful system. Emails, landing pages and Web surveys all support rule-driven content selection, which lets the system send different messages in different situations even without conventional branching. Rules can dynamically select survey questions, so a single survey page can ask the same visitor different questions over time. Users build emails and Web pages by positioning objects (text, data entry fields, images, etc.) in layers. This allows more flexibility than conventional methods, although it also opens new opportunities for errors. The system incorporates SpamAssassin spam scoring and is exploring how to add preview rendering for different ISPs. Marketing materials, including downloadable documents as well as emails and Web pages, can be shared across several campaigns.
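Rule-driven content selection is easier to see in code than in prose. Here's a minimal sketch of the general idea; the rules and field names are my own invention, not LeadLife's actual syntax:

```python
# Pick the first content block whose rule matches the lead, so one email
# template can carry different messages for different situations.
CONTENT_RULES = [
    (lambda lead: lead["industry"] == "Healthcare", "Read our healthcare case study..."),
    (lambda lead: lead["score"] > 50,               "Ready for a demo? Contact us..."),
    (lambda lead: True,                             "Learn the basics in our intro guide..."),  # default
]

def select_content(lead):
    for rule, block in CONTENT_RULES:
        if rule(lead):
            return block

print(select_content({"industry": "Retail", "score": 72}))  # "Ready for a demo? ..."
```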

The campaigns themselves can contain multiple events such as trade shows, Webinars, newsletters and surveys. Leads can be assigned to an event with a list or posted to the event from a Web form. The system keeps track of all events each lead is linked to and uses events as its primary vehicle for marketing performance measurement.

Leads can also be added to a campaign through queries against the system database. Queries can reference pretty much any data in the system, including survey responses and activity details. The query builder is quite sophisticated, allowing queries to incorporate multiple data elements and to scan for multiple values and substrings. Advanced users can view and modify the underlying SQL if they wish. The same interface is used to set up selections, campaign conditions, and lead scoring.

Once a query is created, the user can export the selected records, send them an email, or update data on their records. Queries execute continuously as data changes. This lets a campaign attached to a query react immediately as new members become qualified.

Users can combine a sequence of steps into a single campaign. Each step is either a query condition, which must be met for the lead to continue through the sequence, or an action. Conditions can also define waiting periods in multi-step campaigns. The only available actions are different types of emails. Cramer said that LeadLife originally allowed other actions, but removed these for simplicity. The company is considering adding some new actions, including one to direct leads from one campaign to another.

The system already provides an unusually rich set of administration functions. Campaign events can be assigned expenses, goals, budgets and activities such as notes, appointments, and tasks. Task attributes can include due dates, responsible individuals, billable time, and status. Access to system functions is managed by user groups, and at last count could be tailored to control 656 specific capabilities.

Lead scoring is also quite sophisticated. Users set up lead scoring rules, which run outside of campaigns but can be limited to members of a particular campaign or event. Each rule contains a query condition and number of points earned for meeting that condition. Users can also define several scores per lead and specify which score a given rule will update. The system can be set to score a rule just once, thereby capping the number of points derived from a particular type of event. Users can also define “decay” rules that reduce a lead’s total score after a specified period without activity. The system updates scores for each lead every few minutes.
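As a rough sketch of how such rules fit together, score-once caps and decay might look something like this. The rule structure and field names are my invention, not LeadLife's actual schema:

```python
from datetime import datetime, timedelta

RULES = [
    {"name": "visited pricing page", "points": 10, "once": True},
    {"name": "opened email",         "points": 2,  "once": False},
]
DECAY_AFTER = timedelta(days=30)   # "decay" rule: idle period before points drop
DECAY_POINTS = 15

def score(lead, now):
    total, fired = 0, set()
    for event in lead["events"]:               # e.g. {"rule": ..., "when": ...}
        rule = next(r for r in RULES if r["name"] == event["rule"])
        if rule["once"] and rule["name"] in fired:
            continue                           # cap: score this rule only once
        fired.add(rule["name"])
        total += rule["points"]
    last_activity = max(e["when"] for e in lead["events"])
    if now - last_activity > DECAY_AFTER:      # reduce the score after inactivity
        total = max(0, total - DECAY_POINTS)
    return total

lead = {"events": [
    {"rule": "visited pricing page", "when": datetime(2009, 1, 5)},
    {"rule": "visited pricing page", "when": datetime(2009, 1, 6)},  # capped, no points
    {"rule": "opened email",         "when": datetime(2009, 1, 7)},
]}
print(score(lead, now=datetime(2009, 3, 1)))   # 10 + 2 = 12, then decay -> 0
```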

Users also define one or more scoring processes, which can assign lead status (new, open, contacted, qualified, etc.) and execute actions when leads meet status and score thresholds. Actions can send the lead to the CRM system, assign the lead to an owner, and send the owner an email. LeadLife has existing integration with Salesforce.com and could connect with other CRM systems via the system API. Users can define up to sixteen user-assigned fields on the lead record, plus an unlimited number of survey responses.

LeadLife provides full Web analytics, fueled by tracking codes on vendor-created and external Web pages. Campaign reports show activity counts (emails sent, opens, links clicked, etc.) and let users drill into the reports to see the individuals, and then drill further to see all activities for a selected individual. Other reports can list individuals by status, by products purchased, by contact recency, and other attributes. The system calculates ROI for each event within a campaign, drawing on the cost figures entered by the user and on revenues imported from CRM opportunity records. Revenue is attached to the earliest event associated with a lead linked to the opportunity.

Pricing is based primarily on email volume. It starts at $500 per month for 1,500 emails and reaches $1,395 for a more practical 25,000 emails. Each price includes all system features, unlimited Web volume, and five users. Additional users cost $10 to $30 per month depending on the user type. There are no additional fees for set-up, implementation or training. A quick implementation program aims at executing the client’s first campaign in three days. The company requires a one-year contract but clients can leave within the first 90 days without further payment.

LeadLife was established in 2006 and released its first version in September 2008. The company now has about 20 clients.

Sunday, April 12, 2009

PivotLink: Flexible On-Demand Business Intelligence

I did a Webinar recently (click here for slides) about on-demand business intelligence systems, sponsored by Birst. It boiled down to two key points:

- most of the work in business intelligence is in assembling the underlying database, even though the term “BI systems” often refers to the query and reporting tools (a.k.a. the “presentation layer”).

- vendor strategies to simplify BI include: using simpler interfaces, automation or pre-built solutions to make conventional technology easier; or using inherently more efficient alternative technologies such as in-memory and columnar databases.

Naturally, the full Webinar was jammed with much other wisdom (you missed Time-Traveling Elvis and the Reese’s Peanut Butter Cup Fallacy). But those two points alone provide a useful framework for considering business intelligence systems in general.

I bring this up because I’m finally writing about PivotLink, which I looked at more than a month ago. It turns out that my framework helps to put PivotLink into perspective.

Here’s the thing. PivotLink is an on-demand business intelligence product. Its most interesting technical feature is an in-memory columnar database. If you follow the industry, you know that type of database is about the geek-sexiest thing out there right now. I myself find it totally fascinating and had a grand time digging into the details.

But the rest of the world doesn’t care if it’s geek-ilicious:* they want to know how PivotLink can help them. Here’s where the framework comes in, since it clarifies what PivotLink does and doesn’t do. Or, to put that a bit more formally, it shows which elements of a complete business intelligence system PivotLink provides.

The answer to that being, PivotLink works largely at the presentation layer. It can import data from multiple sources and join tables on common keys. But it won’t do the complicated transformations and fuzzy matching needed for serious data integration. This means that PivotLink must either work with data that’s already been processed into a conventional data warehouse, or with data that can be usefully analyzed in its original state.

There’s actually more of this naturally-analyzable data than you might think. Purchase transactions, a common source for PivotLink, are a good example. The obstacle to working with these has often been the size of the data sets, which meant lots of expensive hardware and lots of (how to put this delicately?) deliberately-paced IT support. These are exactly the barriers that on-demand systems overcome.

This brings us back to PivotLink's technology. In-memory, columnar databases are especially well suited for on-demand business intelligence because they compress data tightly (to 10% of the original volume, according to PivotLink), read only the columns required for a particular query (providing faster response), and don’t require special schemas or preaggregated data cubes (requiring less skill to set up and modify).
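To make the "read only the required columns" point concrete, here's a toy illustration of my own (emphatically not PivotLink's implementation):

```python
# Columnar storage keeps one list per field, so a query touches only the
# columns it references, no matter how wide the table is.
table = {
    "region":  ["East", "West", "East"],
    "product": ["A", "B", "A"],          # never read by the query below
    "revenue": [100, 250, 175],
}

def total_by(table, dimension, metric):
    totals = {}
    for key, value in zip(table[dimension], table[metric]):
        totals[key] = totals.get(key, 0) + value
    return totals

print(total_by(table, "region", "revenue"))  # {'East': 275, 'West': 250}
```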

But even columnar systems vary in their details. PivotLink sits towards the flexible end of the spectrum, with support for incremental updates, many-to-many table relationships, and abilities to add new columns and merge data along different query paths without reloading it. The system also allows calculations during the data load and within queries, caches data in memory and further compresses it after an initial query, and supports user- and group-level security at the row, column or individual cell levels. Not all column-based systems can say the same.

On the other hand, PivotLink does not support standard SQL queries and doesn’t run on a massively parallel (“shared nothing”) architecture. Both limits are typical of older columnar databases, a reminder that PivotLink began life in 1998 as SeaTab Software. Although shared-nothing architectures are generally more scalable, PivotLink is already running data sets with more than 10 billion rows in its current configuration. Response is very fast: according to the company, one client with several billion rows of point-of-sale data runs a nightly update and then executes a summary report in under one minute. Still, PivotLink recognizes the benefits of shared-nothing systems and plans to move to that architecture by the end of 2009.

Lack of SQL compatibility means users must rely on PivotLink’s tools for queries and reports. These let administrators import data from CSV, TXT and Excel files and map them to PivotLink tables. (The actual storage is columnar, with different compression techniques applied to different data types. But to the user, the data looks like it’s organized in tables.) The system reads the source data and makes suggestions about field types and names, which the user can accept or revise.

Users then define queries against the tables. Each query contains a selected set of columns, which are identified either as a header (i.e., dimension) or metric. When queries involve multiple tables, the user also specifies the columns to join on. Each report is written against one query.

The distinction between tables and queries is important in PivotLink, because it provides much of the system’s flexibility. The same column can be a dimension in one query and a metric in another, and the same tables can be related on different keys in different queries. All this happens without reloading the data or running any aggregations. The metadata used to build reports is defined by the associated query, not the underlying tables.

Reports are built by dragging headers and metrics into place on a grid, defining drill paths, and selecting tabular or chart formats. Reports can also rank and sort results and select the top or bottom rows for each rank. For example, a report could rank the top ten stores per region by revenue. Users can combine several reports into a dashboard.

End-users can apply filters and drill into the selected elements within a report. However, PivotLink does not apply filters for one report to the rest of the dashboard, in the way of QlikView or ADVIZOR. This feature is also on the agenda for 2009.

PivotLink clients can import data, define queries and build reports without help from the company. PivotLink said it takes a couple of days to train an administrator to load the data and build queries, a day or two to train a user to build reports and dashboards, and minutes to hours to learn to use the reports. Based on what they showed me, that sounds about right. You can also find out for yourself by signing up for a free trial.

Pricing of PivotLink is based on the size of the deployment, taking into account data volume and types of users. It starts around $3,000 per month for 50 million rows. When I spoke with the company in early March, they had about 60 clients supporting over 6,000 users, and had doubled their volume in the past year.

-----------------------------------------------------------------------------------------
* geek-alicious? is it possible to misspell a word that doesn’t exist?

Thursday, April 09, 2009

Good Look at QlikView from a Microstrategy Consultant's Viewpoint

I noticed some visitors this morning from the blog of Microstrategy consultancy Aellament, and found they have published a nice look at QlikView on their blog. It's worth a read, particularly for their appreciation of the advantages that QlikView offers over the product they know best. They fairly point out some disadvantages too, of which I think the lack of a unified metadata view is probably the most significant.

What really resonated was Aellament's sense (in an earlier blog post, which contains the link back to this blog) that QlikView is empowering departmental users to do work that otherwise takes support from technical specialists. That's exactly what I've seen as QlikView's advantage and I think it's fundamentally reshaping the industry.

As I see the future, BI specialists will still be needed to assemble source data into usable forms (i.e., build the data warehouses). This has always been the heavy lifting. But the huge army of people who then essentially reformat that data into cubes, reports, dashboards, etc. will dwindle as business analysts do that for themselves using tools like QlikView. Bad news for Microstrategy consultants (presumably why Aellament is hedging its bets with QlikView training) but good news for business users.

Wednesday, April 08, 2009

Pedowitz Group Offers Free Support for New Eloqua Clients

The Pedowitz Group announced this morning that it was offering $15,000 in free consulting services and guaranteeing a five-day implementation to new clients who purchase Eloqua demand generation software. (Click here for the announcement.)

(If you’re not familiar with The Pedowitz Group, President and CEO Jeff Pedowitz ran the professional services group at Eloqua for several years before starting the company. The firm also works with Marketo, Silverpop Engage B2B and MarketingGenius, depending on client needs.)

My initial reaction to the announcement was to wonder if it would be interpreted as evidence that Eloqua requires a lot of consulting to implement. Certainly that’s how I’d spin it if I were a competitor. But then again, maybe I wouldn't, because it focuses attention on how much support is really needed to deploy other systems.

On the surface, this is a strength of vendors who promise free implementation and deployment within a few days. But the reality is that most marketers need outside help to design their email campaigns, nurture programs, scoring rules, and CRM integration. This has less to do with learning the software than with knowing what works and how to apply it to their own business.

Yes, some products really are easier to use than others, especially for simple programs. That's one reason Pedowitz works with several vendors. (Download the Raab Guide report on Vendor Usability Scores for more on this.) And yes, some marketers will get their system running with no more than telephone support.

But it’s just plain silly to think that most marketers can quickly deploy sophisticated demand generation programs without some expert help. This is what’s highlighted by the Pedowitz Group offer – and it’s a discussion that vendors selling the dream of an instant deployment should probably avoid.

Sunday, April 05, 2009

True Influence Opens a Window into Future Demand Generation

People develop new products because they feel they can offer something existing products do not. In the early stages of an industry, the new products are often similar because several people have independently spotted the same opportunity and built something to tap it. As the industry matures, second-generation products are designed to improve on the original products, either by adding new capabilities or by delivering the same capabilities faster, easier or cheaper. This leads to more variety as vendors experiment with different approaches to a now-defined problem. In a third stage, variety diminishes as widely successful approaches become templates for standard configurations.

Demand generation systems are in that second stage. This means new products reflect the lessons each vendor has drawn from the industry’s history to date.

True Influence illustrates this nicely. CEO and co-founder Brian Giese had extensive experience in business sales and marketing and with existing demand generation systems when he began developing True Influence two years ago. So it’s possible to see True Influence, which was released at the end of last year, as his well-educated guess at what future demand generation systems will look like.

Giese’s conclusion was that marketers’ overwhelming need is simplicity. In fact, he said he has actually removed features from the system because customers weren’t using them. But he also decided that marketers want Webinar integration, digital asset management, APIs to capture data from external Web forms, and a dedicated IP address for email. These are not yet standard features on most demand generation products. But if Giese is right, they will be.

Like all demand generation systems, True Influence can import lists, send emails, create Web forms and surveys, score leads, set up multi-step campaigns, and integrate with CRM systems. Capabilities in these areas tend to be adequate but minimal. For example:

- Emails and Web forms can be personalized with lead data, but don’t incorporate rule-selected content blocks.

- The list selection interface uses a form that lets users apply values to a list of all data elements. This is simple but doesn’t easily support complex conditions. Nor does True Influence support random splits for tests.

- Answers to Web surveys are limited to list box or radio button formats.

- The system allows an unlimited number of survey questions, but it will overwrite previous replies if a question is answered more than once.

- The lead record allows only four user-defined fields.

- Lead scores can be based on just a few attributes and activities: industry, job title, company size, location, lead source, email status, activity indicator, most recent activity date, and visits to specific Web pages. Giese said that other elements could be exposed but clients haven’t requested them.

On the brighter side:

- The system API lets users easily adopt externally-built and -hosted Web forms to post into the True Influence database. This saves clients the trouble of replacing existing Web forms when they deploy the system. Clients can also build and host their forms within True Influence if they prefer.

- The base price includes a separate IP address. This protects the client if any other True Influence customers run afoul of the anti-spam police. Most other vendors charge extra for a dedicated IP address if they make it available at all.

- The system includes a resource library for both internal assets (templates, emails, Web forms, etc.) and downloadable collateral such as white papers and brochures. This might replace a separate digital asset management system for clients who don't need approval workflows or fine-grained user rights management. The system does support some version control.

Campaign management reflects a particularly interesting set of design choices. Users define campaigns by building a flow chart with icons for steps and delays. Most simplicity-oriented systems avoid flow charts, so I was surprised to see them in True Influence. The system does simplify the diagrams a bit by embedding the decision rules within the lines that link the icons instead of creating separate decision icons. I was also surprised to find that each icon can have its own schedule – another feature typically reserved for advanced systems.

Giese assured me that his clients like the flow charts and successfully use them for "very complex" campaigns. But we didn't discuss the meaning of "very complex" and I suspect my definition is more demanding than his.

More in line with what I’d expect from this system, decision rules are limited to a few essential functions (opened email, clicked link, registered for Webinar, joined Webinar, submitted Web form, completed step, action completed, action failed). Campaign actions are also constrained: for example, the system can send emails but has no particular support for other media such as direct mail or call centers.

Campaigns can also add a lead to a list (which might in turn trigger another campaign), update the lead score, convert a prospect to a lead and send it to the CRM system, send an email alert to a salesperson or sales manager, and publish a landing page or Web form. Treating lead scoring, conversion to a CRM lead and publication of Web pages as campaign functions, rather than executing these outside of individual campaigns, is typical of simplicity-oriented demand generation systems. So is moving leads among campaigns by adding them to lists rather than assigning them directly.

Wait steps can either delay the campaign for a specified period (e.g., wait seven days for a reply), or be keyed to a specific date (e.g., send a reminder three days before a Webinar). This is more flexible than some other products. However, event-based triggers are limited to submission of a Web form. Otherwise, users can achieve near-real-time triggers by scheduling a campaign step to check for specified conditions at regular intervals defined in hours or minutes.
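A small sketch of the two wait-step styles, with hypothetical field names of my own rather than True Influence's actual configuration:

```python
from datetime import datetime, timedelta

def resume_time(step, now, event_date=None):
    if step["type"] == "delay":            # e.g. wait seven days for a reply
        return now + timedelta(days=step["days"])
    if step["type"] == "before_event":     # e.g. reminder three days before a Webinar
        return event_date - timedelta(days=step["days"])

now = datetime(2009, 4, 1)
webinar = datetime(2009, 4, 20)
print(resume_time({"type": "delay", "days": 7}, now))                  # April 8
print(resume_time({"type": "before_event", "days": 3}, now, webinar))  # April 17
```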

Webinar support includes prebuilt campaigns with registration and confirmation forms; emails for invitations, reminders and thank-you messages; and salesperson alerts. These are all part of one campaign flow. More important, the vendor has prebuilt integration with Webinar vendor DimDim (a pretty interesting product in its own right). This lets True Influence capture actual attendance and automatically load it into the contact history. Less extensive integration is also available with GoToWebinar.

True Influence has existing integrations with Salesforce.com and SugarCRM. These provide bi-directional synchronization at five minute intervals. The system can exchange data with other CRM systems through batch updates as needed.

Campaign reporting in True Influence includes campaign activities (emails sent and received, page visits, etc.) and lets users drill down to the list of individuals in each campaign. Users can also view the profile, list memberships and activity details for an individual. The system provides its own Web analytics, based on page tags.

Pricing of True Influence is also designed for simplicity. Fees are based on the number of “actionable” names in the client database, which basically means valid email addresses. This accommodates the large volume of bad data often imported from mailing lists. There are no separate charges for deployment, training, individual modules, extra seats or dedicated IP addresses. Prices are being revised at this writing but are generally intended to be competitive in the middle tier of the demand generation market.

True Influence was released in late 2008 and currently has about fifteen clients.

Wednesday, April 01, 2009

DemandBase Creeps Up the Value Chain

I had a nice little chat with DemandBase two weeks ago. I’d been aware of them since they were founded in 2006, but in their original incarnation as a data provider. That is, they take business data from sources including Hoovers, D&B, LexisNexis, AccuData, BusinessWatch Network and Jigsaw, and merge it into one big contact list that people can use for outbound promotions or to enhance their own files. It’s a perfectly reasonable business, but not one I find especially exciting.

But it turns out that DemandBase has been inching its way up the value chain. Some time ago they released a free widget, DemandBase Stream, that shows the companies visiting your Web site in a news ticker on your computer desktop. That’s somewhat entertaining and can be useful to salespeople who might notice and reach out to current clients or prospects. But the technology is nothing special: a page tag sends the visitor’s IP address to DemandBase, which looks up its owner in standard Internet registries. In any case, being free, it's really just a promotion for DemandBase.

Their new product, DemandBase Professional, is another matter. It also captures the visitor stream to a company Web site and uses the IP address to identify the company. But now it matches that company against the main DemandBase database to add details such as location, company size and industry, and to find contact names that marketing or salespeople can reach out to. That’s not much more entertaining, but a lot more salable.

DemandBase Professional actually goes further than simply looking up the information, by letting clients specify the industries, company sizes, geographic regions, search terms and number of page views they care about, and only returning information on visitors who fit those parameters. It can also import a client’s list of target accounts from Salesforce.com and issue alerts when those companies visit. The good news is that clients only pay for alerts that meet their specified parameters, keeping both the cost and volume within reason. The company only charges for incremental prospect names, so you don't pay again if someone visits your Web site twice.
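Put together, the filtering logic as I understand it looks roughly like this; the function names, fields and thresholds are illustrative, not the vendor's actual API:

```python
# Hypothetical sketch: identify a visitor's company from its IP address,
# then alert only on new companies that fit the client's target profile.
TARGET = {"industries": {"Software"}, "min_employees": 100,
          "regions": {"US-West"}, "min_page_views": 3}

def alert_for_visit(ip_address, page_views, lookup_company, already_alerted):
    company = lookup_company(ip_address)    # IP registry + DemandBase data
    if company is None:                     # generic ISP: 50-60% of traffic
        return None
    if company["name"] in already_alerted:  # repeat visitors aren't charged again
        return None
    if (company["industry"] in TARGET["industries"]
            and company["employees"] >= TARGET["min_employees"]
            and company["region"] in TARGET["regions"]
            and page_views >= TARGET["min_page_views"]):
        already_alerted.add(company["name"])
        return {"company": company["name"], "contacts": company.get("contacts", [])}
    return None
```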

I was impressed by the sheer cleverness of all this, precisely because the underlying technology (page tags, IP owner lookup, name matching, simple filtering) is so straightforward. There’s no identification of actual visitors and no cookies.

Ok, some of the underlying database preparation and matching is actually pretty demanding. In fact, DemandBase told me that they can use only about 20% of the data they get from their partners because of quality problems with the rest. But Rodney Dangerfield on a bad day got more respect than data hygiene experts.

One obvious question is how many useful new names DemandBase can actually provide. According to the company, it typically finds that 50-60% of Web site visitors come through a generic Internet Service Provider connection and thus can’t be traced to a particular company. Of the remainder, 10-20% fall within the target company sizes, industries or regions. Thus, DemandBase might send information on 5-10% of total visitors.

This is still twice the 2-4% of visitors who DemandBase says typically identify themselves directly. DemandBase said they hadn’t made a formal study of the quality of those leads, but felt confident they were more than worth the cost.

DemandBase plans to continue adding value, possibly by pushing automated messages to selected visitors. But they don’t intend to become a lead management system. We discussed social media briefly; they aren’t using social data (e.g. blog, Facebook, Twitter posts) but are considering trying to find degrees of separation in social networks like LinkedIn. They didn’t mention it, but I could also see them providing input to Web site personalization systems, so the site itself would tailor its content to what DemandBase infers about the visitor.

The company said that about one thousand companies have set up its free widget and a couple hundred are testing DemandBase Professional. Pricing for Professional depends on volume and starts around $200-$300 per month.

I suppose I could just say this is a useful-sounding product and let it go at that. But if you’re looking for Some Larger Significance: it’s also an example of how the traditional distinction between sales and marketing is dissolving. DemandBase is mostly a salesperson’s tool, but it is identifying names much earlier than sales typically gets involved. Even more to the point, a particular name might go to either sales or marketing depending on the situation. This means that sales and marketing have to agree on rules for who does what—requiring considerably more cooperation than just throwing a lead over the wall once it's deemed “qualified”. I'd say that's significant.