Saturday, September 16, 2017

Vizury Combines Web Page Personalization with a Customer Data Platform

One of the fascinating things about tracking Customer Data Platforms is the great variety among the vendors.

It’s true that variety causes confusion for buyers. The CDP Institute is working to ease that pain, most recently with a blog discussion you’re welcome to join here.  But for me personally, it’s been endlessly intriguing to trace the paths that vendors have followed to become CDPs and learn where they plan to go next.

Take Vizury, a Bangalore-based company that started eight years ago as a retargeting ad bidding platform. That grew into a successful business with more than 200 employees, 400 clients in 40 countries, and $30 million in funding. As it developed, the company expanded its product and, in 2015, released its current flagship, Vizury Engage, an omnichannel personalization system sold primarily to banks and insurance companies. Engage now has more than a dozen enterprise clients in Asia, expects to double that roster in the next six months, and is testing the waters in the U.S.

As often happens, Vizury’s configuration reflects its origins. In its case, the most obvious impact is on the scope of the system, which includes sophisticated Web page personalization – something very rare in the CDP world at large. In a typical implementation, Vizury builds the client’s Web site home page.  That gives it complete control of how each visitor is handled. The system doesn't take over the rest of the client's Web site, although it can inject personalized messages on those pages through embedded tags.

In both situations, Vizury identifies known visitors by reading a hashed (i.e., disguised) customer ID it has placed in the visitor’s browser cookie. When a visitor enters the site, a Vizury tag sends the hashed ID to the Vizury server, which looks up the customer, retrieves a personalized message, and sends it back to the browser.  The messages are built from templates, which can include variables such as first name and calculated values such as a credit limit.  Customer-specific versions may be pregenerated to speed response; these are updated as new data is received about each customer. It takes ten to fifteen seconds for new information to make its way through the system and be reflected in output seen by the visitor.
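
To make the mechanics concrete, here is a minimal Python sketch of the lookup-and-render cycle. All names and data are invented for illustration; Vizury's actual server code is proprietary, and a production system would pregenerate and cache the rendered messages as described above.

    import hashlib
    from string import Template

    PROFILES = {}  # stand-in for the server-side customer store

    def hash_customer_id(customer_id):
        # Disguise the raw ID before placing it in the browser cookie.
        return hashlib.sha256(customer_id.encode("utf-8")).hexdigest()

    def render_message(hashed_id):
        # Look up the visitor and fill a template with profile variables.
        profile = PROFILES.get(hashed_id)
        if profile is None:
            return "Welcome!"  # generic fallback for unknown visitors
        template = Template("Hi $first_name, your credit limit is $credit_limit.")
        return template.substitute(profile)

    hid = hash_customer_id("cust-123")
    PROFILES[hid] = {"first_name": "Priya", "credit_limit": "Rs. 50,000"}
    print(render_message(hid))  # Hi Priya, your credit limit is Rs. 50,000.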

Message templates are embedded in what Vizury calls an engagement, which is associated with a segment definition and can include versions of the same message for different channels. One intriguing strength of Vizury is its machine-learning-based propensity models, which determine each customer’s preferred channel. This lets Vizury send outbound messages through the customer’s preferred channel when there’s a choice. Outbound options include email, SMS, Facebook ads, and programmatic display ads. These can be sent on a fixed schedule or be triggered when the customer enters or leaves a segment. Bids for Facebook and display ads can be managed by Vizury’s own bidding engine, another vestige of its origins. Inbound options include on-site and browser push messages.
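
The channel-selection step itself is simple once the models have run. In this sketch, assume a trained model has already produced a response propensity per channel for each customer; the scores and channel names below are invented.

    def preferred_channel(propensities):
        # Send through whichever outbound channel has the highest predicted response.
        return max(propensities, key=propensities.get)

    scores = {"email": 0.42, "sms": 0.17, "facebook_ads": 0.31, "display_ads": 0.08}
    print(preferred_channel(scores))  # email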

If a Web visitor is eligible for multiple messages, Vizury currently just picks one at random. The vendor is working on an automated optimization system that will pick the best message for each customer instead. There’s no way to embed a sequence of different messages within a given engagement, although segment definitions could push customers from one engagement to the next. Users do have the ability to specify how often a customer will be sent the same message, block messages the customer has already responded to, and limit how many total messages a customer receives during a time period.
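
Those frequency and suppression rules are easy to picture in code. The sketch below is my own illustration rather than Vizury's logic; the caps, window, and field names are hypothetical parameters.

    import random
    from datetime import datetime, timedelta

    def pick_message(customer, messages, now, window=timedelta(days=7),
                     max_total=5, max_per_message=2):
        # Apply the caps described above, then choose randomly among survivors.
        recent = [s for s in customer["sends"] if now - s["at"] <= window]
        if len(recent) >= max_total:
            return None  # total cap for the period is exhausted
        candidates = []
        for m in messages:
            if m["id"] in customer["responded_to"]:
                continue  # block messages already responded to
            if sum(1 for s in recent if s["message_id"] == m["id"]) >= max_per_message:
                continue  # same-message cap reached
            candidates.append(m)
        return random.choice(candidates) if candidates else None

    customer = {"sends": [{"at": datetime(2017, 9, 14), "message_id": "m1"}],
                "responded_to": {"m2"}}
    msgs = [{"id": "m1"}, {"id": "m2"}, {"id": "m3"}]
    print(pick_message(customer, msgs, datetime(2017, 9, 16)))  # m1 or m3, never m2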

What makes Vizury a CDP is that it builds and exposes a unified, persistent customer database. This collects data through Vizury's own page tags, API, and mobile SDK; external tag managers; and batch file loads.  Data is unified with deterministic methods, including stitching together multiple identifiers provided by customers and multiple applications on the same device. The system can do probabilistic cross-device matching, but that's not reliable enough for most financial service applications.  Vizury doesn’t do fuzzy matching based on customer names and addresses, a technique that is not common in Asia.
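
Deterministic stitching of this kind is usually transitive: if a customer ID links to an email address and the same email links to a device, all three resolve to one person. A minimal union-find sketch conveys the idea (my illustration, not Vizury's code):

    class IdentityGraph:
        # Union-find over identifiers: any two IDs observed together are stitched.
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def link(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    g = IdentityGraph()
    g.link("email:ana@example.com", "cookie:abc123")   # web login event
    g.link("email:ana@example.com", "device:ios-789")  # mobile app login
    print(g.find("cookie:abc123") == g.find("device:ios-789"))  # True: same person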

The system includes standard machine learning algorithms that predict product purchase, app uninstalls, and message fatigue in addition to channel preference and ad bidding. Results can be applied to tasks other than personalization, such as lead scoring.  Algorithms are adapted for each industry and trained on the client’s own data. Users can't currently train models of their own for other tasks.

Vizury uses a typical big data stack including Hadoop, Hive, Pig, HBase, Flume, and Kafka. Clients can access the data directly through Hadoop or HBase.  Standard reports show results by experience, segment, and channel, and users can create custom reports as well.


Pricing for Vizury is based on the number of impressions served, another echo of its original business. Enterprise clients pay upwards of $20,000 per month, although U.S. pricing could be different.





Friday, September 08, 2017

B2B Marketers Are Buying Customer Data Platforms. Here's Why.

I’m currently drafting a paper on use of Customer Data Platforms by B2B SaaS marketers.  The topic is more intriguing than it sounds because it raises the dual questions of why CDPs haven’t previously been used much by B2B SaaS companies and what's changed.  To build some suspense, let’s first review who else has been buying CDPs.

We can skip over the first 3.8 billion years of life on earth, when the answer is no one. When true CDPs first emerged from the primordial ooze, their buyers were concentrated among B2C retailers. That’s not surprising, since retailers have always been among the most data-driven marketers. They’re the R in BRAT (Banks, Retailers, Airlines, Telcos), the mnemonic I’ve long used to describe the core data-driven industries*.

What's more surprising is that the B's, A's, and T's weren't also early CDP users.  I think the reason is that banks, airlines, and telcos all capture their customers’ names as part of their normal operations. This means they’ve always had customer data available and thus been able to build extensive customer databases without a CDP.

By contrast, offline retailers must work hard to get customer names and tie them to transactions, using indirect tools such as credit cards and loyalty programs. This means their customer data management has been less mature and more fragmented. (Online retailers do capture customer names and transactions operationally.  And, while I don’t have firm data, my impression is that online-only retailers have been slower to buy CDPs than their multi-channel cousins. If so, they're the exception that proves the rule.)

Over the past year or two, as CDPs have moved beyond the early adopter stage, more BATs have in fact started to buy CDPs.  As a further sign of industry maturity, we’re now starting to see CDPs that specialize in those industries. Emergence of such vertical systems is normal: it happens when demand grows in new segments because the basic concepts of a category are widely understood.  Specialization gives new entrants a way to sell successfully against established leaders.  Sure enough, we're also seeing new CDPs with other types of specialties, such as products from regional markets (France, India, and Australia have each produced several) and for small and mid-size organizations (not happening much so far, but there are hints).

And, of course, the CDP industry has always been characterized by an unusually broad range of product configurations, from systems that only build the central database to systems that provide a database, analytics, and message selection; that's another type of specialization.  I recently proposed a way to classify CDPs by function on the CDP Institute blog.** 

B2B is another vertical. B2B marketers have definitely been slow to pick up on CDPs, which may seem surprising given their frenzied adoption of other martech. I’d again explain this in part by the state of the existing customer data: the more advanced B2B marketers (who are the most likely CDP buyers) nearly all have a marketing automation system in place. The marketers' initial assumption would be that marketing automation can assemble a unified customer database, making them uninterested in exploring a separate CDP.  Eventually they'd discover that nearly all B2B marketing automation systems are very limited in their data management capabilities.  That’s happening now in many cases – and, sure enough, we’re now seeing more interest among B2B marketers in CDPs.

But there's another reason B2B marketers have been uncharacteristically slow adopters when it comes to CDPs.  B2B marketers have traditionally focused on acquiring new leads, leaving the rest of the customer life cycle to sales, account, and customer success teams.  So B2B marketers didn't need the rich customer profiles that a CDP creates.  Meanwhile, the sales, account and customer success teams generally worked with individual and account records stored in a CRM system, so they weren't especially interested in CDPs either.  (That said, it’s worth noting that customer success systems like Gainsight and Totango were on my original list of CDP vendors.)

The situation in B2B has now changed.  Marketers are taking more responsibility for the entire customer life cycle and working more closely with sales, account management, and customer success teams. This pushes them to look for a complete customer view that includes data from marketing automation, CRM, and additional systems like Web sites, social media, and content marketing. That quest leads directly to CDP.

Can you guess who's leading that search?  Well, which B2B marketers have been the most active martech adopters? That’s right: B2B tech marketers in general and B2B SaaS product marketers in particular. They’re the B2B marketers who have the greatest need (because they have the most martech) and the greatest inclination to try new solutions (which is why they ended up with the most martech). So it’s no surprise they’re the earliest B2B adopters of CDP too.

And do those B2B SaaS marketers have special needs in a CDP?  You bet.  Do we know what those needs are?  Yes, but you’ll have to read my paper to find out.

_______________________________________________________
*It might more properly be FRAT, since Banking really stands for all Financial services including insurance, brokers, investment funds, and so on.  Similarly, Airlines represents all of travel and hospitality, while Telco includes telephone, cable, and power utilities and other subscription networks.  We should arguably add healthcare and education as late arrivals to the list.  That would give us BREATH.  Or, better still, replace Banks with Financial Services and you get dear old FATHER.

**It may be worth noting that part of the variety is due to the differing origins of CDP systems, which often started as products for other purposes such as tag management, big data analytics, and campaign management.   That they've all ended up serving roughly the same needs is a result of convergent evolution (species independently developing similar features to serve a similar need or ecological niche) rather than common origin (related species become different over time as they adapt to different situations).  You could look at new market segments as new ecological niches, which are sometimes filled by specialized variants of generic products and are other times filled by tangentially related products adapting to a new opportunity.

My point here is there are two separate dynamics at play: the first is market readiness and the second is vendor development.  Market readiness is driven by reasons internal to the niche, such as the types of customer data available in an industry.  Vendor development is driven by vendor capabilities and resources.  One implication of this is that vendors from different origins could end up dominating different niches; that is, there's no reason to assume a single vendor or standard configuration will dominate the market as a whole.  Although perhaps market segments served by different configurations are really separate markets.

Thursday, August 31, 2017

AgilOne Adds New Flexibility to An Already-Powerful Customer Data Platform


It’s more than four years since my original review of AgilOne, a pioneering Customer Data Platform. As you might imagine, the system has evolved quite a bit since then. In fact, the core data management portions have been entirely rebuilt, replacing the original fixed data model with a fully configurable model that lets the system easily adapt to each customer.

The new version uses a bouquet of colorfully-named big data technologies (Kafka, Parquet, Impala, Spark, Elasticsearch, etc.) to support streaming inputs, machine learning, real time queries, ad hoc analytics, SQL access, and other things that don’t come naturally to Hadoop. It also runs on distributed processors that allow fast scaling to meet peak demands. That’s especially important to AgilOne since most of its clients are retailers whose business can spike sharply on days like Black Friday.

In other ways, though, AgilOne is still similar to the system I reviewed in 2013. It still provides sophisticated data quality, postal processing, and name/address matching, which are often missing in CDPs designed primarily for online data. It still has more than 300 predefined attributes for specialized analytics and processing, although the system can function without them. It still includes predictive models and provides a powerful query builder to create audience segments. Campaigns are still designed to deliver one message, such as an email, although users could define campaigns with related audiences to deliver a sequence of messages. There’s still a “Customer360” screen to display detailed information about individual customers, including full interaction history.

But there’s plenty new as well. There are more connectors to data sources, a new interface to let users add custom fields and calculations for themselves, and workflow diagrams to manage data processing flows. Personalization has been enhanced and the system exposes message-related data elements including product recommendations and the last products browsed, purchased, and abandoned. AgilOne now supports Web, mobile, and social channels and offers more options for email delivery. A/B tests have been added while analytics and reporting have been enhanced.

What should be clear is that AgilOne has an exceptionally broad (and deep) set of features. This puts it at one end of the spectrum of Customer Data Platforms. At the other end are CDPs that build a unified, sharable customer database and do nothing else. In between are CDPs that offer some subset of what AgilOne offers: advanced identity management, offline data support, predictive analytics, segmentation, multi-channel campaigns, real time interactions, advanced analytics, and high scalability. This variety is good for buyers, since it means there’s a better chance they can find a system that matches their needs. But it’s also confusing, especially for buyers who are just learning about CDPs and don’t realize how much they can differ. That confusion is something we’re worrying about a lot at the CDP Institute right now. If you have ideas for how to deal with it, let me know.

Friday, August 25, 2017

Self-Driving Marketing Campaigns: Possible But Not Easy


A recent Forrester study found that most marketers expect artificial intelligence to take over the more routine parts of their jobs, allowing them to focus on creative and strategic work.


That’s been my attitude as well. More precisely, I see AI enabling marketers to provide the highly tailored experiences that customers now demand. Without AI, it would be impossible to make the number of decisions necessary to do this. In short, complexity is the problem, AI is the solution, and we all get Friday afternoons off. Happy ending.

But maybe it's not so simple.

Here’s the thing: we all know that AI works because it can learn from data. That lets it make the best choice in each situation, taking into account many more factors than humans can build into conventional decision rules. We also all know that machines can automatically adjust their choices as they learn from new data, allowing them to continuously adapt to new situations.

Anyone who's dug a bit deeper knows two more things:

  • self-adjustment only works in circumstances similar to the initial training conditions. AI systems don’t know what to do when they’re faced with something totally unexpected. Smart developers build their systems to recognize such situations, alert human supervisors, and fail gracefully by taking an action that is likely to be safe. (This isn’t as easy as it sounds: a self-driving car shouldn’t stop in the middle of an intersection when it gets confused.)

  • AI systems of today and the near future are specialists. Each is trained to do a specific task like play chess, look for cancer in an X-ray, or bid on display ads. This means that something like a marketing campaign, which involves many specialized tasks, will require cooperation of many AIs. That’s not new: most marketing work today is done by human specialists, who also need to cooperate. But while cooperation comes naturally to (most) humans, it needs to be purposely added as a skill to an AI.*

By itself, this more nuanced picture isn’t especially problematic. Yes, marketers will need multiple AIs and those AIs will need to cooperate. Maintaining that cooperation will be work but presumably can itself eventually be managed by yet another specialized AI.

But let’s put that picture in a larger context.

The dominant feature of today’s business environment is accelerating change. AI itself is part of that change but there are other forces at play: notably, the “personal network effect” that drives companies like Facebook, Google, and Amazon to hoard increasing amounts of data about individual consumers. These forces will impose radical change on marketers’ relations with customers. And radical change is exactly what the marketers’ AI systems will be unable to handle.

So now we have a problem. It’s easy – and fun – to envision a complex collection of AI-driven components collaborating to create fully automated, perfectly personalized customer experiences. But that system will be prone to frequent failures as one or another component finds itself facing conditions it wasn’t trained to handle. If the systems are well designed (and we’re lucky), the components will shut themselves down when that happens. If we’re not so lucky, they’ll keep running and return increasingly inappropriate results. Yikes.

Where do we go from here? One conclusion would be that there’s a practical limit to how much of the marketing process can really be taken over by AI. Some people might find that comforting, at least for job security. Others would be sad.

A more positive conclusion is it’s still possible to build a completely AI-driven marketing process but it’s going to be harder than we thought. We’ll need to add a few more chores to the project plan:

  • build a coordination framework. We need to teach the different components to talk to each other, preferably in a language that humans can understand. They'll have to share information about what they’re doing and about the results they’re getting, so each component can learn from the experience of the others and can see the impact its choices have elsewhere.  It seems likely there will be an AI dedicated specifically to understanding and predicting those impacts throughout the system. Training that AI will be especially challenging. In keeping with the new tradition of naming AIs after famous people, let's call this one John Wanamaker. (There's a toy sketch of such a framework after this list.)

  • learn to monitor effectively. Someone has to keep an eye on the AIs to make sure they’re making good choices and otherwise generally functioning correctly. Each component needs to be monitored in its own terms and the coordination framework needs to be monitored as a whole. Yes, an AI could do that but it would be dangerous to remove humans from the loop entirely. This is one reason it’s important the coordination language be human-friendly.  Fortunately, result monitoring is a concern for all AI systems, so marketers should be able to piggyback on solutions built elsewhere. At the risk of seeming overly paranoid, I'd suggest the monitoring component be kept as separate as possible from the rest of the system.

  • build swappable components.  Different components will become obsolete or need retraining at different times, depending on when changes happen in the particular bits of marketing that they control. So we need to make it easy to take any given component offline or to substitute a new one. If we’ve built our coordination framework properly, this should be reasonably doable. Similarly, a proper framework will make it easy to inject new components when necessary: say, to manage a new output channel or take advantage of a new data source.  (This is starting to sound more like a backbone than a framework.  I guess it's both.)  There will be considerable art in deciding what work to assign to a single component and what to split among different components.

  • gather lots of data.  More data is almost always better, but there's a specific reason to do this for AI: when things change you might need data you didn’t need before, and you’ll be able to retrain your system more quickly if you’ve been capturing that data all along. Remember that AI is based on training sets, so building new training sets is a core activity.  The faster you can build new training sets the faster your systems will be back to functioning effectively. This makes it worth investing in data that has no immediate use. Of course, it may also turn out that deeper analysis finds new uses for data even when there hasn’t been a fundamental change. So storing lots of data would be useful for AI even in a stable world.

  • be flexible, be agile, expect the unexpected, look out for black swans, etc.  This is the principle underlying all the previous items, but it's worth stating explicitly because there are surely other methods I haven't listed. If there’s a true black swan event – unpredictable, rare, and transformative – you might end up scrapping your system entirely. That, in itself, is a contingency to plan for. But you can also expect lots of smaller changes and want your system to be robust while giving up as little performance as possible during periods of stability.
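
Here's the toy coordination-framework sketch promised above: components publish human-readable messages, other components subscribe, and everything is logged for the monitoring layer. Every name here is invented purely for illustration; swapping a component then amounts to unsubscribing one handler and subscribing another.

    from collections import defaultdict

    class CoordinationBus:
        # Toy message bus: components publish results, others subscribe and react.
        def __init__(self):
            self.subscribers = defaultdict(list)
            self.log = []  # human-readable audit trail for the monitoring layer

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, message):
            self.log.append((topic, message))
            for handler in self.subscribers[topic]:
                handler(message)

    bus = CoordinationBus()
    # A hypothetical bidding AI watches what a hypothetical email AI reports.
    bus.subscribe("email.results", lambda m: print("bidder adjusts to:", m))
    bus.publish("email.results", {"campaign": "fall-promo", "open_rate": 0.21})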

Are there steps you should take right now to get ready for the AI-driven future? You betcha. I’ll be talking about them at the MarTech Conference in Boston in October.  I hope you’ll be there!


____________________________________________________________________________________
*Of course, separate AIs privately cooperating with each other is also the stuff of nightmares. But the story that Facebook shut down a chatbot experiment when the chatbots developed their own language is apparently overblown.**

** On the other hand, the Facebook incident was the second time in the past year that AIs were reported to have created a private language.  And that’s just what I found on the first page of Google search. Who knows what the Google search AI is hiding????

Sunday, August 20, 2017

Treasure Data Offers An Easy-to-Deploy Customer Data Platform

One of my favorite objections from potential buyers of Customer Data Platforms is that CDPs are simply “too good to be true”.   It’s a reasonable response from people who hear CDP vendors say they can quickly build a unified customer database but have seen many similar-seeming projects fail in the past.  I like the objection because I can so easily refute it by pointing to real-world case histories where CDPs have actually delivered on their promise.

One of the vendors I have in mind when I’m referring to those histories is Treasure Data. They’ve posted several case studies on the CDP Institute Library, including one where data was available within one month and another where it was ready in two hours.  Your mileage may vary, of course, but these cases illustrate the core CDP advantage of using preassembled components to ingest, organize, access, and analyze data. Without that preassembly, accessing just one source can take days, weeks, or even months to complete.

Even in the context of other CDP systems, Treasure Data stands out for its ability to connect with massive data sources quickly. The key is a proprietary data format that lets it access new data sources with little explicit mapping: in slightly more technical terms, Treasure Data uses a columnar data structure where new attributes automatically appear as new columns. It also helps that the system runs on Amazon S3, so little time is spent setting up new clients or adding resources as existing clients grow.
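
Here's a toy illustration of why a columnar layout makes new attributes cheap: each unseen attribute simply becomes a new column, backfilled with nulls, with no remapping of existing data. This is my sketch of the general technique, not Treasure Data's actual format.

    from collections import defaultdict

    class ColumnStore:
        # Toy columnar table: unseen attributes become new columns automatically.
        def __init__(self):
            self.columns = defaultdict(list)  # column name -> list of values
            self.rows = 0

        def append(self, record):
            for name, value in record.items():
                col = self.columns[name]
                col.extend([None] * (self.rows - len(col)))  # backfill a new column
                col.append(value)
            self.rows += 1
            for col in self.columns.values():
                col.extend([None] * (self.rows - len(col)))  # pad columns this record lacks

    t = ColumnStore()
    t.append({"user_id": 1, "page": "/home"})
    t.append({"user_id": 2, "page": "/pricing", "utm_source": "ad"})  # new column appears
    print(dict(t.columns))
    # {'user_id': [1, 2], 'page': ['/home', '/pricing'], 'utm_source': [None, 'ad']}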

Treasure Data ingests data using the open source connectors Fluentd for streaming inputs and Embulk for batch transfers. It provides deterministic and probabilistic identity matching, integrated machine learning, always-on encryption, and precise control over which users can access which pieces of data. One caveat is there’s no user interface to manage this sort of processing: users basically write scripts and query statements. Treasure Data is working on a user interface to make this easier and to support complex workflows.

Data loaded into Treasure Data can be accessed through an integrated reporting tool and an interface that shows the set of events associated with a customer.  But most users will rely on prebuilt connectors for Python, R, Tableau, and Power BI.  Other SQL access is available using Hive, Presto and ODBC. While there’s no user interface for creating audiences, Treasure Data does provide the functions needed to assign customers to segments and then push those segments to email, Facebook, or Google. It also has an API that lets external systems retrieve the list of all segments associated with a single customer.  
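
In code terms, those segment functions amount to something like the following Python sketch. The segment names, rules, and profile fields are all invented for illustration; in practice the heavy lifting would happen in SQL against the Treasure Data store.

    SEGMENTS = {
        "high_value": lambda c: c.get("lifetime_spend", 0) > 1000,
        "lapsed":     lambda c: c.get("days_since_last_order", 0) > 90,
    }

    def segments_for_customer(profile):
        # Mirrors the per-customer segment lookup an external system might request.
        return [name for name, rule in SEGMENTS.items() if rule(profile)]

    def build_audience(profiles, segment):
        # Select everyone in a segment, e.g. to push to email, Facebook, or Google.
        return [p for p in profiles if SEGMENTS[segment](p)]

    print(segments_for_customer({"lifetime_spend": 2500, "days_since_last_order": 120}))
    # ['high_value', 'lapsed']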

Treasure Data clearly isn’t an all-in-one solution for customer data management.  But organizations with the necessary technical skills and systems can find it hugely increases the productivity of their resources.  The company was founded in 2011 and now has over 250 clients, about half from the data-intensive worlds of games, ecommerce, and ad tech. Cost starts around $100,000 per year.  The actual pricing models vary with the situation but are usually based on either the number of customer profiles being managed or total resource consumption.



Friday, July 14, 2017

Blueshift CDP Adds Advanced Features

I reviewed Blueshift in June 2015, when the product had been in-market for just a few months and had a handful of large clients. Since then they’ve added many new features and grown to about 50 customers. So let’s do a quick update.

Basically, the system is still what it was: a Customer Data Platform that includes predictive modeling, content creation, and multi-step campaigns. Customer data can be acquired through the vendor’s own JavaScript tags, mobile SDK (new since 2015), API connectors, or file imports. Blueshift also has collection connectors for Segment, Ensighten, mParticle, and Tealium. Product data can load through file imports, a standard API, or a direct connector to Demandware.

As before, Blueshift can ingest, store and index pretty much any data with no advance data modeling, using JSON, MongoDB, Postgres, and Kafka. Users do have to tell source systems what information to send and map inputs to standard entities such as customer name, product ID, or interaction type. There is some new advanced automation, such as tying related events to a transaction ID. The system’s ability to load and expose imported data in near-real-time remains impressive.
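
The mapping step might look like this sketch: a per-feed field map renames source fields to standard entities, and a second pass ties related events to their transaction. The field names and feed are hypothetical, not Blueshift's schema.

    from collections import defaultdict

    # Hypothetical source-to-standard field mapping, configured per input feed.
    FIELD_MAP = {"uid": "customer_id", "sku": "product_id",
                 "evt": "interaction_type", "txn": "transaction_id"}

    def normalize(raw_event):
        # Rename source fields to standard entities; keep unmapped fields as-is.
        return {FIELD_MAP.get(k, k): v for k, v in raw_event.items()}

    def group_by_transaction(events):
        # Tie related events (purchase, shipment, return) to one transaction ID.
        groups = defaultdict(list)
        for e in map(normalize, events):
            groups[e.get("transaction_id")].append(e)
        return groups

    feed = [{"uid": 7, "sku": "A1", "evt": "purchase", "txn": "t-100"},
            {"uid": 7, "evt": "shipment", "txn": "t-100"}]
    print(group_by_transaction(feed)["t-100"])  # both events, now in standard fields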

Blueshift will stitch together customer identities using multiple identifiers and can convert anonymous to known profiles without losing any history. Profiles are automatically enhanced with product affinities and scores for purchase intent, engagement, and retention.

The system had automated predictive modeling when I first reviewed it, but has now added machine-learning-based product recommendations. In fact, its recommendations are exceptionally sophisticated. Features include a wide range of rule- and model-based recommendation methods, an option for users to create custom recommendation types, and multi-product recommendation blocks that mix recommendations based on different rules. For example, the system can first pick a primary recommendation and then recommend products related to it. To check that the system is working as expected, users can preview recommendations for specified segments or individuals.
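
A stripped-down sketch of a multi-product block that mixes methods, in the spirit of what Blueshift describes: one model-based slot picks the primary recommendation and a rule-based slot fills the rest. The rules, scores, and catalog are invented for illustration.

    def rule_bestsellers(profile, catalog, n):
        # Rule-based slot: top sellers the customer hasn't bought yet.
        owned = set(profile["purchased"])
        ranked = sorted(catalog, key=lambda p: p["sales"], reverse=True)
        return [p["id"] for p in ranked if p["id"] not in owned][:n]

    def model_affinity(profile, catalog, n):
        # Stand-in for a model-based slot; here, a precomputed affinity score.
        ranked = sorted(catalog, key=lambda p: profile["affinity"].get(p["id"], 0),
                        reverse=True)
        return [p["id"] for p in ranked][:n]

    def recommendation_block(profile, catalog):
        # Mix slots from different methods in one block.
        primary = model_affinity(profile, catalog, 1)
        filler = [r for r in rule_bestsellers(profile, catalog, 3) if r not in primary]
        return primary + filler[:2]

    catalog = [{"id": "A", "sales": 90}, {"id": "B", "sales": 70}, {"id": "C", "sales": 50}]
    profile = {"purchased": ["B"], "affinity": {"C": 0.9}}
    print(recommendation_block(profile, catalog))  # ['C', 'A']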

The segment builder in Blueshift doesn’t seem to have changed much since my last review: users select data categories, elements, and values used to include or exclude segment members. The system still shows the counts for how many segment members are addressable via email, display ads, push, and SMS.

On the other hand, the campaign builder has expanded significantly. The previous form-based campaign builder has been replaced by a visual interface that allows branching sequences of events and different treatments within each event.  These treatments include thumbnails of campaign creative and can be in different channels. That's special because many vendors still limit campaigns to a single channel. Campaigns can be triggered by events, run on fixed schedules, or executed once.


Each treatment within an event has its own selection conditions, which can incorporate any data type: previous behaviors, model scores, preferred communications channels, and so on. Customers are tested against the treatment conditions in sequence and assigned to the first treatment they match. Content builders let users create templates for email, display ads, push messages, and SMS messages. This is another relatively rare feature. Templates can include personalized offers based on predictive models or recommendations. The system can run split tests of content or recommendation methods. Attribution reports can now include custom goals, which lets users measure different campaigns against different objectives.
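
That first-match assignment logic is simple to express in code. This sketch uses invented treatment names and conditions; the point is only the sequential test with a catch-all at the end.

    # Hypothetical treatments, each with a selection condition, tested in order.
    TREATMENTS = [
        {"name": "vip_email", "channel": "email",
         "condition": lambda c: c.get("ltv_score", 0) > 0.8},
        {"name": "winback_sms", "channel": "sms",
         "condition": lambda c: c.get("days_inactive", 0) > 60},
        {"name": "default_push", "channel": "push",
         "condition": lambda c: True},  # catch-all so everyone gets something
    ]

    def assign_treatment(customer):
        # Customers receive the first treatment whose condition they match.
        return next(t for t in TREATMENTS if t["condition"](customer))

    print(assign_treatment({"ltv_score": 0.5, "days_inactive": 75})["name"])  # winback_sms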

Blueshift still relies on external services to deliver the messages it creates. It has integrations with SendGrid, Sparkpost, and Cheetahmail for email and Twilio and Gupshup for SMS. Other channels can be fed through list extracts or custom API connectors.

Blueshift still offers its product in three different versions: email-only, cross-channel and predictive. Pricing has increased since 2015, and now starts at $2,000 per month for the email edition, $4,000 per month for the cross-channel edition and $10,000 per month for the predictive edition. Actual fees depend on the number of active customers, with the lowest tier starting at 500,000 active users per month. The company now has several enterprise-scale clients including LendingTree, Udacity, and PayPal.

Friday, July 07, 2017

Lexer Customer Data Platform Grows from Social Listening Roots

Customer Data Platform vendors come from many places, geographically and functionally. Lexer is unusual in both ways, having started in Australia as a social media listening platform. About two years ago the company refocused on building customer profiles with data from all sources. It quickly added clients among many of Australia’s largest consumer-facing brands including Qantas airlines and Westpac bank.

Social media is still a major focus for Lexer. The system gathers data from Facebook and Instagram public pages and from the Twitter follower lists of clients’ brands. It analyzes posts and follows to understand consumer interests, assigning people to “tribes” such as “beach lifestyle” and personas such as “sports and fitness”.  It supplements the social inputs with information from third party data sources, location history, and a client’s own email, Web site, customer service, mobile apps, surveys, point of sale, and other systems. Matching is strictly deterministic, although links based on different matches can be chained together to unify identities across channels.  The system can also use third party data to add connections it can’t make directly.
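
Tribe and persona assignment presumably rests on text classification. A keyword-based toy version conveys the idea; Lexer's real models are doubtless more sophisticated, and these tribes and rules are invented.

    # Hypothetical keyword rules; real assignment would use trained classifiers.
    TRIBES = {
        "beach lifestyle":    {"surf", "beach", "sunset"},
        "sports and fitness": {"gym", "run", "marathon"},
    }

    def assign_tribes(posts):
        # Assign a consumer to every tribe whose keywords appear in their posts.
        words = {w.strip(".,!?").lower() for post in posts for w in post.split()}
        return [tribe for tribe, kws in TRIBES.items() if kws & words]

    print(assign_tribes(["Great surf at the beach today!", "Marathon training week 3"]))
    # ['beach lifestyle', 'sports and fitness']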

Lexer ingests data in near-real-time, making social media posts available to users within about five minutes. It can react to new data by moving customers into different tribes or personas and can send lists of those customers to external systems for targeting in social, email, or other channels.  There are standard integrations with Facebook, Twitter, and Google AdWords advertising campaigns. External systems can also use an API to read the Lexer data, which is stored in Amazon Elasticsearch Service.

Unusually for a CDP, Lexer also provides a social engagement system that lets service agents engage directly with customers. This system displays the customer’s profile including a detailed interaction history and group memberships. Segment visualization is unusually colorful and attractive.

Lexer has about forty clients, nearly all in Australia. It is just entering the U.S. market and hasn’t set U.S. prices.