Saturday, September 16, 2017

Vizury Combines Web Page Personalization with a Customer Data Platform

One of the fascinating things about tracking Customer Data Platforms is the great variety among the vendors.

It’s true that variety causes confusion for buyers. The CDP Institute is working to ease that pain, most recently with a blog discussion you’re welcome to join here.  But for me personally, it’s been endlessly intriguing to trace the paths that vendors have followed to become CDPs and learn where they plan to go next.

Take Vizury, a Bangalore-based company that started eight years ago as a retargeting ad bidding platform. That grew into a successful business with more than 200 employees, 400 clients in 40 countries, and $30 million in funding. As it developed, the company expanded its product and, in 2015, released its current flagship, Vizury Engage, an omnichannel personalization system sold primarily to banks and insurance companies. Engage now has more than a dozen enterprise clients in Asia, expects to double that roster in the next six months, and is testing the waters in the U.S.

As often happens, Vizury’s configuration reflects its origins. In its case, the most obvious impact is on the scope of the system, which includes sophisticated Web page personalization – something very rare in the CDP world at large. In a typical implementation, Vizury builds the client’s Web site home page.  That gives it complete control of how each visitor is handled. The system doesn't take over the rest of the client's Web site, although it can inject personalized messages on those pages through embedded tags.

In both situations, Vizury is identifying known visitors by reading a hashed (i.e., disguised) customer ID it has placed on the visitor’s browser cookie. When a visitor enters the site, a Vizury tag sends the hashed ID to the Vizury server, which looks up the customer, retrieves a personalized message, and sends it back to the browser.  The messages are built from templates, which can include variables such as first name and calculated values such as a credit limit.  Customer-specific versions may be pregenerated to speed response; these are updated as new data is received about each customer. It takes ten to fifteen seconds for new information to make its way through the system and be reflected in output seen by the visitor.
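
To make the flow concrete, here is a minimal sketch of the lookup-and-render pattern in Python. The profile store, field names, and template text are hypothetical illustrations of the idea, not Vizury's actual implementation.

```
import hashlib
from string import Template

# Hypothetical profile store keyed by the hashed customer ID.
PROFILES = {}

def hash_customer_id(customer_id):
    """Disguise the raw customer ID before placing it in the browser cookie."""
    return hashlib.sha256(customer_id.encode("utf-8")).hexdigest()

def register_customer(customer_id, profile):
    PROFILES[hash_customer_id(customer_id)] = profile

def render_message(hashed_id, template_text):
    """Look up the profile for a hashed ID and fill in the message template."""
    profile = PROFILES.get(hashed_id, {})
    return Template(template_text).safe_substitute(profile)

# A template mixing a simple variable with a calculated value.
register_customer("cust-123", {"first_name": "Priya", "credit_limit": "250,000"})
print(render_message(hash_customer_id("cust-123"),
                     "Hi $first_name, you're pre-approved for a $credit_limit credit limit."))
```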

Message templates are embedded in what Vizury calls an engagement, which is associated with a segment definition and can include versions of the same message for different channels. One intriguing strength of Vizury is machine-learning-based propensity models that determine each customer’s preferred channel. This lets Vizury send outbound messages through the customer’s preferred channel when there’s a choice. Outbound options include email, SMS, Facebook ads, and programmatic display ads. These can be sent on a fixed schedule or be triggered when the customer enters or leaves a segment. Bids for Facebook and display ads can be managed by Vizury’s own bidding engine, another vestige of its origins. Inbound options include on-site and browser push messages.
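
As a rough illustration of how a preferred-channel choice might be applied, here is a short Python sketch. The propensity scores, channel list, and fallback rule are assumptions for the example, not Vizury's model.

```
OUTBOUND_CHANNELS = ["email", "sms", "facebook_ads", "display_ads"]

def pick_channel(propensities, available):
    """Choose the highest-propensity channel among those available for this customer."""
    candidates = {ch: score for ch, score in propensities.items() if ch in available}
    if not candidates:
        return "email"  # assumed default when no preference data exists
    return max(candidates, key=candidates.get)

# The model prefers SMS, but this customer has no phone number on file.
scores = {"email": 0.42, "sms": 0.61, "facebook_ads": 0.18, "display_ads": 0.12}
print(pick_channel(scores, available={"email", "facebook_ads"}))  # -> "email"
```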

If a Web visitor is eligible for multiple messages, Vizury currently just picks one at random. The vendor is working on an automated optimization system that will pick the best message for each customer instead. There’s no way to embed a sequence of different messages within a given engagement, although segment definitions could push customers from one engagement to the next. Users do have the ability to specify how often a customer will be sent the same message, block messages the customer has already responded to, and limit how many total messages a customer receives during a time period.

What makes Vizury a CDP is that it builds and exposes a unified, persistent customer database. This collects data through Vizury's own page tags, API, and mobile SDK; external tag managers; and batch file loads.  Data is unified with deterministic methods including stitching of multiple identifiers provided by customers and of multiple applications on the same device. The system can do probabilistic cross-device matching but that's not reliable enough for most financial service applications.  Vizury doesn’t do fuzzy matching based on customer names and addresses, which is not a common technique in Asia.
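
The deterministic stitching described here is essentially link-chaining: identifiers that appear together in the same record are joined, and chains of joins collapse into one profile. Here is a generic union-find sketch of that idea, not Vizury's actual matching code.

```
parent = {}

def find(x):
    """Return the root identifier for x, creating it if unseen."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def link(a, b):
    """Record that identifiers a and b belong to the same person."""
    parent[find(a)] = find(b)

# Each incoming record carries one or more identifiers for the same person.
records = [
    {"email": "a@example.com", "phone": "+91-98-1234"},
    {"phone": "+91-98-1234", "device_id": "android-777"},
    {"device_id": "android-777", "crm_id": "CUST-42"},
]
for rec in records:
    ids = ["%s:%s" % (k, v) for k, v in rec.items()]
    for other in ids[1:]:
        link(ids[0], other)

# All four identifiers now resolve to the same unified profile key.
print({find("%s:%s" % (k, v)) for rec in records for k, v in rec.items()})
```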

The system includes standard machine learning algorithms that predict product purchase, app uninstalls, and message fatigue in addition to channel preference and ad bidding. Results can be applied to tasks other than personalization, such as lead scoring.  Algorithms are adapted for each industry and trained on the client’s own data. Users can't currently apply the system's machine learning to additional tasks of their own choosing.

Vizury uses a typical big data stack including Hadoop, Hive, Pig, HBase, Flume, and Kafka. Clients can access the data directly through Hadoop or HBase.  Standard reports show results by experience, segment, and channel, and users can create custom reports as well.


Pricing for Vizury is based on the number of impressions served, another echo of its original business. Enterprise clients pay upwards of $20,000 per month, although U.S. pricing could be different.





Friday, September 08, 2017

B2B Marketers Are Buying Customer Data Platforms. Here's Why.

I’m currently drafting a paper on use of Customer Data Platforms by B2B SaaS marketers.  The topic is more intriguing than it sounds because it raises the dual questions of  why CDPs haven’t previously been used much by B2B SaaS companies and what's changed.  To build some suspense, let’s first review who else has been buying CDPs.

We can skip over the first 3.8 billion years of life on earth, when the answer is no one. When true CDPs first emerged from the primordial ooze, their buyers were concentrated among B2C retailers. That’s not surprising, since retailers have always been among the most data-driven marketers. They’re the R in BRAT (Banks, Retailers, Airlines, Telcos), the mnemonic I’ve long used to describe the core data-driven industries*.

What's more surprising is that the B's, A's, and T's weren't also early CDP users.  I think the reason is that banks, airlines, and telcos all capture their customers’ names as part of their normal operations. This means they’ve always had customer data available and thus been able to build extensive customer databases without a CDP.

By contrast, offline retailers must work hard to get customer names and tie them to transactions, using indirect tools such as credit cards and loyalty programs. This means their customer data management has been less mature and more fragmented. (Online retailers do capture customer names and transactions operationally.  And, while I don’t have firm data, my impression is that online-only retailers have been slower to buy CDPs than their multi-channel cousins. If so, they're the exception that proves the rule.)

Over the past year or two, as CDPs have moved beyond the early adopter stage, more BATs have in fact started to buy CDPs.  As a further sign of industry maturity, we’re now starting to see CDPs that specialize in those industries. Emergence of such vertical systems is normal: it happens when demand grows in new segments because the basic concepts of a category are widely understood.  Specialization gives new entrants a way to sell successfully against established leaders.  Sure enough, we're also seeing new CDPs with other types of specialties, such as products from regional markets (France, India, and Australia have each produced several) and for small and mid-size organizations (not happening much so far, but there are hints).

And, of course, the CDP industry has always been characterized by an unusually broad range of product configurations, from systems that only build the central database to systems that provide a database, analytics, and message selection; that's another type of specialization.  I recently proposed a way to classify CDPs by function on the CDP Institute blog.** 

B2B is another vertical. B2B marketers have definitely been slow to pick up on CDPs, which may seem surprising given their frenzied adoption of other martech. I’d again explain this in part by the state of the existing customer data: the more advanced B2B marketers (who are the most likely CDP buyers) nearly all have a marketing automation system in place. The marketers' initial assumption would be that marketing automation can assemble a unified customer database, making them uninterested in exploring a separate CDP.  Eventually they'd discover that nearly all B2B marketing automation systems are very limited in their data management capabilities.  That’s happening now in many cases – and, sure enough, we’re now seeing more interest among B2B marketers in CDPs.

But there's another reason B2B marketers have been uncharacteristically slow adopters when it comes to CDPs.  B2B marketers have traditionally focused on acquiring new leads, leaving the rest of the customer life cycle to sales, account, and customer success teams.  So B2B marketers didn't need the rich customer profiles that a CDP creates.  Meanwhile, the sales, account and customer success teams generally worked with individual and account records stored in a CRM system, so they weren't especially interested in CDPs either.  (That said, it’s worth noting that customer success systems like Gainsight and Totango were on my original list of CDP vendors.)

The situation in B2B has now changed.  Marketers are taking more responsibility for the entire customer life cycle and working more closely with sales, account management, and customer success teams. This pushes them to look for a complete customer view that includes data from marketing automation, CRM, and additional systems like Web sites, social media, and content marketing. That quest leads directly to CDP.

Can you guess who's leading that search?  Well, which B2B marketers have been the most active martech adopters? That’s right: B2B tech marketers in general and B2B SaaS product marketers in particular. They’re the B2B marketers who have the greatest need (because they have the most martech) and the greatest inclination to try new solutions (which is why they ended up with the most martech). So it’s no surprise they’re the earliest B2B adopters of CDP too.

And do those B2B SaaS marketers have special needs in a CDP?  You bet.  Do we know what those needs are?  Yes, but you’ll have to read my paper to find out.

_______________________________________________________
*It might more properly be FRAT, since Banking really stands for all Financial services including insurance, brokers, investment funds, and so on.  Similarly, Airlines represents all of travel and hospitality, while Telco includes telephone, cable, and power utilities and other subscription networks.  We should arguably add healthcare and education as late arrivals to the list.  That would give us BREATH.  Or, better still, replace Banks with Financial Services and you get dear old FATHER.

**It may be worth noting that part of the variety is due to the differing origins of CDP systems, which often started as products for other purposes such as tag management, big data analytics, and campaign management.   That they've all ended up serving roughly the same needs is a result of convergent evolution (species independently developing similar features to serve a similar need or ecological niche) rather than common origin (related species become different over time as they adapt to different situations).  You could look at new market segments as new ecological niches, which are sometimes filled by specialized variants of generic products and are other times filled by tangentially related products adapting to a new opportunity.

My point here is there are two separate dynamics at play: the first is market readiness and the second is vendor development.  Market readiness is driven by reasons internal to the niche, such as the types of customer data available in an industry.  Vendor development is driven by vendor capabilities and resources.  One implication of this is that vendors from different origins could end up dominating different niches; that is, there's no reason to assume a single vendor or standard configuration will dominate the market as a whole.  Although perhaps market segments served by different configurations are really separate markets.

Thursday, August 31, 2017

AgilOne Adds New Flexibility to An Already-Powerful Customer Data Platform


It’s more than four years since my original review of AgilOne, a pioneering Customer Data Platform. As you might imagine, the system has evolved quite a bit since then. In fact, the core data management portions have been entirely rebuilt, replacing the original fixed data model with a fully configurable model that lets the system easily adapt to each customer.

The new version uses a bouquet of colorfully-named big data technologies (Kafka, Parquet, Impala, Spark, Elastic Search, etc.) to support streaming inputs, machine learning, real time queries, ad hoc analytics, SQL access, and other things that don’t come naturally to Hadoop. It also runs on distributed processors that allow fast scaling to meet peak demands. That’s especially important to AgilOne since most of its clients are retailers whose business can spike sharply on days like Black Friday.

In other ways, though, AgilOne is still similar to the system I reviewed in 2013. It still provides sophisticated data quality, postal processing, and name/address matching, which are often missing in CDPs designed primarily for online data. It still has more than 300 predefined attributes for specialized analytics and processing, although the system can function without them. It still includes predictive models and provides a powerful query builder to create audience segments. Campaigns are still designed to deliver one message, such as an email, although users could define campaigns with related audiences to deliver a sequence of messages. There’s still a “Customer360” screen to display detailed information about individual customers, including full interaction history.

But there’s plenty new as well. There are more connectors to data sources, a new interface to let users add custom fields and calculations for themselves, and workflow diagrams to manage data processing flows. Personalization has been enhanced and the system exposes message-related data elements including product recommendations and the last products browsed, purchased, and abandoned. AgilOne now supports Web, mobile, and social channels and offers more options for email delivery. A/B tests have been added while analytics and reporting have been enhanced.

What should be clear is that AgilOne has an exceptionally broad (and deep) set of features. This puts it at one end of the spectrum of Customer Data Platforms. At the other end are CDPs that build a unified, sharable customer database and do nothing else. In between are CDPs that offer some subset of what AgilOne offers: advanced identity management, offline data support, predictive analytics, segmentation, multi-channel campaigns, real time interactions, advanced analytics, and high scalability. This variety is good for buyers, since it means there’s a better chance they can find a system that matches their needs. But it’s also confusing, especially for buyers who are just learning about CDPs and don’t realize how much they can differ. That confusion is something we’re worrying about a lot at the CDP Institute right now. If you have ideas for how to deal with it, let me know.

Friday, August 25, 2017

Self-Driving Marketing Campaigns: Possible But Not Easy


A recent Forrester study found that most marketers expect artificial intelligence to take over the more routine parts of their jobs, allowing them to focus on creative and strategic work.


That’s been my attitude as well. More precisely, I see AI enabling marketers to provide the highly tailored experiences that customers now demand. Without AI, it would be impossible to make the number of decisions necessary to do this. In short, complexity is the problem, AI is the solution, and we all get Friday afternoons off. Happy ending.

But maybe it's not so simple.

Here’s the thing: we all know that AI works because it can learn from data. That lets it make the best choice in each situation, taking into account many more factors than humans can build into conventional decision rules. We also all know that machines can automatically adjust their choices as they learn from new data, allowing them to continuously adapt to new situations.

Anyone who's dug a bit deeper knows two more things:

  • self-adjustment only works in circumstances similar to the initial training conditions. AI systems don’t know what to do when they’re faced with something totally unexpected. Smart developers build their systems to recognize such situations, alert human supervisors, and fail gracefully by taking an action that is likely to be safe. (This isn’t as easy as it sounds: a self-driving car shouldn’t stop in the middle of an intersection when it gets confused. A minimal sketch of this fail-safe pattern appears after this list.)

  • AI systems of today and the near future are specialists. Each is trained to do a specific task like play chess, look for cancer in an X-ray, or bid on display ads. This means that something like a marketing campaign, which involves many specialized tasks, will require cooperation of many AIs. That’s not new: most marketing work today is done by human specialists, who also need to cooperate. But while cooperation comes naturally to (most) humans, it needs to be purposely added as a skill to an AI.*
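
Here is a minimal sketch of the fail-safe pattern from the first bullet: when the model's confidence falls below a threshold, the system takes a safe default action and alerts a human rather than acting on a shaky prediction. The threshold, action names, and alerting mechanism are assumptions for illustration.

```
SAFE_DEFAULT = "send_generic_offer"
CONFIDENCE_FLOOR = 0.7

def alert_supervisor(scores):
    print("Low-confidence situation, needs human review:", scores)

def decide(model_scores):
    """Return the model's top action, or a safe default if confidence is too low."""
    action, confidence = max(model_scores.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_FLOOR:
        alert_supervisor(model_scores)
        return SAFE_DEFAULT
    return action

print(decide({"upsell_premium": 0.55, "cross_sell_card": 0.52}))  # falls back safely
print(decide({"upsell_premium": 0.91, "cross_sell_card": 0.06}))  # acts on the model
```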

By itself, this more nuanced picture isn’t especially problematic. Yes, marketers will need multiple AIs and those AIs will need to cooperate. Maintaining that cooperation will be work but presumably can itself eventually be managed by yet another specialized AI.

But let’s put that picture in a larger context.

The dominant feature of today’s business environment is accelerating change. AI itself is part of that change but there are other forces at play: notably, the “personal network effect” that drives companies like Facebook, Google, and Amazon to hoard increasing amounts of data about individual consumers. These forces will impose radical change on marketers’ relations with customers. And radical change is exactly what the marketers’ AI systems will be unable to handle.

So now we have a problem. It’s easy – and fun – to envision a complex collection of AI-driven components collaborating to create fully automated, perfectly personalized customer experiences. But that system will be prone to frequent failures as one or another component finds itself facing conditions it wasn’t trained to handle. If the systems are well designed (and we’re lucky), the components will shut themselves down when that happens. If we’re not so lucky, they’ll keep running and return increasingly inappropriate results. Yikes.

Where do we go from here? One conclusion would be that there’s a practical limit to how much of the marketing process can really be taken over by AI. Some people might find that comforting, at least for job security. Others would be sad.

A more positive conclusion is it’s still possible to build a completely AI-driven marketing process but it’s going to be harder than we thought. We’ll need to add a few more chores to the project plan:

  • build a coordination framework. We need to teach the different components to talk to each other, preferably in a language that humans can understand. They'll have to share information about what they’re doing and about the results they’re getting, so each component can learn from the experience of the others and can see the impact its choices have elsewhere.  It seems likely there will be an AI dedicated specifically to understanding and predicting those impacts throughout the system. Training that AI will be especially challenging. In keeping with the new tradition of naming AIs after famous people, let's call this one John Wanamaker. 

  • learn to monitor effectively. Someone has to keep an eye on the AIs to make sure they’re making good choices and otherwise generally functioning correctly. Each component needs to be monitored in its own terms and the coordination framework needs to be monitored as a whole. Yes, an AI could do that but it would be dangerous to remove humans from the loop entirely. This is one reason it’s important the coordination language be human-friendly.  Fortunately, result monitoring is a concern for all AI systems, so marketers should be able to piggyback on solutions built elsewhere. At the risk of seeming overly paranoid, I'd suggest the monitoring component be kept as separate as possible from the rest of the system.

  • build swappable components.  Different components will become obsolete or need retraining at different times, depending on when changes happen in the particular bits of marketing that they control. So we need to make it easy to take any given component offline or to substitute a new one. If we’ve built our coordination framework properly, this should be reasonably doable. Similarly, a proper framework will make it easy to inject new components when necessary: say, to manage a new output channel or take advantage of a new data source.  (This is starting to sound more like a backbone than a framework.  I guess it's both.)  There will be considerable art in deciding what work to assign to a single component and what to split among different components. (A minimal sketch of such a framework appears after this list.)

  • gather lots of data.  More data is almost always better, but there's a specific reason to do this for AI: when things change you might need data you didn’t need before, and you’ll be able to retrain your system more quickly if you’ve been capturing that data all along. Remember that AI is based on training sets, so building new training sets is a core activity.  The faster you can build new training sets the faster your systems will be back to functioning effectively. This makes it worth investing in data that has no immediate use. Of course, it may also turn out that deeper analysis finds new uses for data even when there hasn’t been a fundamental change. So storing lots of data would be useful for AI even in a stable world.

  • be flexible, be agile, expect the unexpected, look out for black swans, etc.  This is the principle underlying all the previous items, but it's worth stating explicitly because there are surely other methods I haven't listed. If there’s a true black swan event – unpredictable, rare, and transformative – you might end up scrapping your system entirely. That, in itself, is a contingency to plan for. But you can also expect lots of smaller changes and will want your system to be robust while giving up as little performance as possible during periods of stability.
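
To make the framework idea a little more tangible, here is a minimal Python sketch of a coordination backbone with swappable components that share a human-readable message history. The message fields and component names are assumptions for illustration, not a reference design.

```
from dataclasses import dataclass, field

@dataclass
class Message:
    """A human-readable record of what a component did and what resulted."""
    sender: str
    action: str
    result: dict = field(default_factory=dict)

class EmailAI:
    name = "email_ai"
    def act(self, customer, history):
        # Stand-in for a trained model; a real component would use the shared
        # history to learn from what other components have done.
        return Message(self.name, "send_email", {"subject": "Hello again"})

class Coordinator:
    """Runs each component in turn and shares the growing history with all of them."""
    def __init__(self, components):
        self.components = list(components)  # swappable: add or remove at runtime
        self.history = []

    def run(self, customer):
        for component in self.components:
            self.history.append(component.act(customer, self.history))
        return self.history

print(Coordinator([EmailAI()]).run({"id": "cust-42"}))
```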

Are there steps you should take right now to get ready for the AI-driven future? You betcha. I’ll be talking about them at the MarTech Conference in Boston in October.  I hope you’ll be there!


____________________________________________________________________________________
*Of course, separate AIs privately cooperating with each other is also the stuff of nightmares. But the story that Facebook shut down a chatbot experiment when the chatbots developed their own language is apparently overblown.**

** On the other hand, the Facebook incident was the second time in the past year that AIs were reported to have created a private language.  And that’s just what I found on the first page of Google search. Who knows what the Google search AI is hiding????

Sunday, August 20, 2017

Treasure Data Offers An Easy-to-Deploy Customer Data Platform

One of my favorite objections from potential buyers of Customer Data Platforms is that CDPs are simply “too good to be true”.   It’s a reasonable response from people who hear CDP vendors say they can quickly build a unified customer database but have seen many similar-seeming projects fail in the past.  I like the objection because I can so easily refute it by pointing to real-world case histories where CDPs have actually delivered on their promise.

One of the vendors I have in mind when I’m referring to those histories is Treasure Data. They’ve posted several case studies on the CDP Institute Library, including one where data was available within one month and another where it was ready in two hours.  Your mileage may vary, of course, but these cases illustrate the core CDP advantage of using preassembled components to ingest, organize, access, and analyze data. Without that preassembly, accessing just one source can take days, weeks, or even months to complete.

Even in the context of other CDP systems, Treasure Data stands out for its ability to connect with massive data sources quickly. The key is a proprietary data format that lets it access new data sources with little explicit mapping: in slightly more technical terms, Treasure Data uses a columnar data structure where new attributes automatically appear as new columns. It also helps that the system runs on Amazon S3, so little time is spent setting up new clients or adding resources as existing clients grow.
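
The practical effect is schema flexibility: events arrive as JSON and any attribute the system hasn't seen before simply becomes a new column. Here is a toy pandas sketch of that concept, not Treasure Data's storage engine.

```
import pandas as pd

events = [
    {"user_id": "u1", "event": "pageview", "url": "/home"},
    {"user_id": "u2", "event": "purchase", "amount": 49.90},  # new column: amount
    {"user_id": "u1", "event": "app_open", "device": "ios"},  # new column: device
]

table = pd.DataFrame(events)   # the union of all keys becomes the column set
print(table.columns.tolist())  # ['user_id', 'event', 'url', 'amount', 'device']
print(table)
```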

Treasure Data ingests data using the open source connectors Fluentd for streaming inputs and Embulk for batch transfers. It provides deterministic and probabilistic identity matching, integrated machine learning, always-on encryption, and precise control over which users can access which pieces of data. One caveat is there’s no user interface to manage this sort of processing: users basically write scripts and query statements. Treasure Data is working on a user interface to make this easier and to support complex workflows.

Data loaded into Treasure Data can be accessed through an integrated reporting tool and an interface that shows the set of events associated with a customer.  But most users will rely on prebuilt connectors for Python, R, Tableau, and Power BI.  Other SQL access is available using Hive, Presto and ODBC. While there’s no user interface for creating audiences, Treasure Data does provide the functions needed to assign customers to segments and then push those segments to email, Facebook, or Google. It also has an API that lets external systems retrieve the list of all segments associated with a single customer.  
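
For a sense of how an external system might use that segment-lookup API, here is a hedged sketch. The endpoint path, parameters, authentication, and response shape are hypothetical placeholders; the real interface is documented by Treasure Data.

```
import requests

def segments_for_customer(customer_id, api_key):
    """Hypothetical call returning the segments associated with one customer."""
    response = requests.get(
        "https://api.example-cdp.com/v1/customers/segments",  # placeholder URL
        params={"customer_id": customer_id},
        headers={"Authorization": "Bearer %s" % api_key},      # placeholder auth scheme
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("segments", [])

# e.g. segments_for_customer("cust-42", "MY_KEY") -> ["lapsed_buyers", "big_spenders"]
```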

Treasure Data clearly isn’t an all-in-one solution for customer data management.  But organizations with the necessary technical skills and systems can find it hugely increases the productivity of their resources.  The company was founded in 2011 and now has over 250 clients, about half from the data-intensive worlds of games, ecommerce, and ad tech. Annual cost starts around $100,000.  The actual pricing models vary with the situation but are usually based on either the number of customer profiles being managed or total resource consumption.



Friday, July 14, 2017

Blueshift CDP Adds Advanced Features

I reviewed Blueshift in June 2015, when the product had been in-market for just a few months and had a handful of large clients. Since then they’ve added many new features and grown to about 50 customers. So let’s do a quick update.

Basically, the system is still what it was: a Customer Data Platform that includes predictive modeling, content creation, and multi-step campaigns. Customer data can be acquired through the vendor’s own Javascript tags, mobile SDK (new since 2015), API connectors, or file imports. Blueshift also has collection connectors for Segment, Ensighten, mParticle, and Tealium. Product data can load through file imports, a standard API, or a direct connector to DemandWare.

As before, Blueshift can ingest, store and index pretty much any data with no advance modeling, using JSON, MongoDB, Postgres, and Kafka. Users do have to tell source systems what information to send and map inputs to standard entities such as customer name, product ID, or interaction type. There is some new advanced automation, such as tying related events to a transaction ID. The system’s ability to load and expose imported data in near-real-time remains impressive.
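
The mapping step is conceptually simple: raw source fields get renamed to the standard entities the platform expects, and everything else passes through untouched. A small sketch, with field names invented for illustration:

```
RAW_EVENT = {
    "uid": "cust-98",
    "sku": "SHOE-0141",
    "evt": "add_to_cart",
    "ts": "2017-07-14T10:22:03Z",
}

FIELD_MAP = {          # source field -> standard entity
    "uid": "customer_id",
    "sku": "product_id",
    "evt": "interaction_type",
    "ts": "timestamp",
}

def to_standard(event):
    """Rename mapped fields; keep any unmapped attributes as-is."""
    return {FIELD_MAP.get(key, key): value for key, value in event.items()}

print(to_standard(RAW_EVENT))
```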

Blueshift will stitch together customer identities using multiple identifiers and can convert anonymous to known profiles without losing any history. Profiles are automatically enhanced with product affinities and scores for purchase intent, engagement, and retention.

The system had automated predictive modeling when I first reviewed it, but has now added machine-learning-based product recommendations. In fact, its recommendations are exceptionally sophisticated. Features include a wide range of rule- and model-based recommendation methods, an option for users to create custom recommendation types, and multi-product recommendation blocks that mix recommendations based on different rules. For example, the system can first pick a primary recommendation and then recommend products related to it. To check that the system is working as expected, users can preview recommendations for specified segments or individuals.
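
To show what a mixed recommendation block means in practice, here is a much-simplified sketch: a model-based primary pick followed by rule-based related items. The catalog, scoring, and block size are invented for the example; Blueshift's actual engine is far richer.

```
CATALOG = {
    "running_shoes": {"related": ["socks", "insoles"]},
    "socks": {"related": []},
    "insoles": {"related": []},
    "yoga_mat": {"related": ["yoga_block"]},
    "yoga_block": {"related": []},
}

def model_scores(customer):
    """Stand-in for a trained model: here, just the customer's affinity scores."""
    return {p: customer.get("affinity", {}).get(p, 0.0) for p in CATALOG}

def recommendation_block(customer, size=3):
    scores = model_scores(customer)
    primary = max(scores, key=scores.get)     # model-based pick
    related = CATALOG[primary]["related"]     # rule-based follow-ons
    return ([primary] + related)[:size]

shopper = {"affinity": {"running_shoes": 0.8, "yoga_mat": 0.3}}
print(recommendation_block(shopper))  # ['running_shoes', 'socks', 'insoles']
```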

The segment builder in Blueshift doesn’t seem to have changed much since my last review: users select data categories, elements, and values used to include or exclude segment members. The system still shows the counts for how many segment members are addressable via email, display ads, push, and SMS.

On the other hand, the campaign builder has expanded significantly. The previous form-based campaign builder has been replaced by a visual interface that allows branching sequences of events and different treatments within each event.  These treatments include thumbnails of campaign creative and can be in different channels. That's special because many vendors still limit campaigns to a single channel. Campaigns can be triggered by events, run on fixed schedules, or executed once.


Each treatment within an event has its own selection conditions, which can incorporate any data type: previous behaviors, model scores, preferred communications channels, and so on. Customers are tested against the treatment conditions in sequence and assigned to the first treatment they match. Content builders let users create templates for email, display ads, push messages, and SMS messages. This is another relatively rare feature. Templates can include personalized offers based on predictive models or recommendations. The system can run split tests of content or recommendation methods. Attribution reports can now include custom goals, which lets users measure different campaigns against different objectives.
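
The selection logic itself is a first-match rule: each treatment carries a condition, and the customer receives the first treatment whose condition passes. A minimal sketch, with conditions and profile fields invented for illustration:

```
TREATMENTS = [
    ("win_back_email", lambda c: c.get("days_since_purchase", 0) > 90),
    ("vip_push", lambda c: c.get("model_score", 0) > 0.8
                           and c.get("preferred_channel") == "push"),
    ("default_email", lambda c: True),  # catch-all treatment
]

def assign_treatment(customer):
    for name, condition in TREATMENTS:
        if condition(customer):
            return name
    return "no_treatment"

print(assign_treatment({"days_since_purchase": 120}))                       # win_back_email
print(assign_treatment({"model_score": 0.9, "preferred_channel": "push"}))  # vip_push
```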

Blueshift still relies on external services to deliver the messages it creates. It has integrations with SendGrid, Sparkpost, and Cheetahmail for email and Twilio and Gupshup for SMS. Other channels can be fed through list extracts or custom API connectors.

Blueshift still offers its product in three different versions: email-only, cross-channel and predictive. Pricing has increased since 2015, and now starts at $2,000 per month for the email edition, $4,000 per month for the cross-channel edition and $10,000 per month for the predictive edition. Actual fees depend on the number of active customers, with the lowest tier starting at 500,000 active users per month. The company now has several enterprise-scale clients including LendingTree, Udacity, and PayPal.

Friday, July 07, 2017

Lexer Customer Data Platform Grows from Social Listening Roots

Customer Data Platform vendors come from many places, geographically and functionally. Lexer is unusual in both ways, having started in Australia as a social media listening platform. About two years ago the company refocused on building customer profiles with data from all sources. It quickly added clients among many of Australia’s largest consumer-facing brands including Qantas airlines and Westpac bank.

Social media is still a major focus for Lexer. The system gathers data from Facebook and Instagram public pages and from the Twitter follower lists of clients’ brands. It analyzes posts and follows to understand consumer interests, assigning people to “tribes” such as “beach lifestyle” and personas such as “sports and fitness”.  It supplements the social inputs with information from third party data sources, location history, and a client’s own email, Web site, customer service, mobile apps, surveys, point of sale, and other systems. Matching is strictly deterministic, although links based on different matches can be chained together to unify identities across channels.  The system can also use third party data to add connections it can’t make directly.

Lexer ingests data in near-real-time, making social media posts available to users within about five minutes. It can react to new data by moving customers into different tribes or personas and can send lists of those customers to external systems for targeting in social, email, or other channels.  There are standard integrations with Facebook, Twitter, and Google Adwords advertising campaigns. External systems can also use an API to read the Lexer data, which is stored in Amazon Elastic Search.

Unusually for a CDP, Lexer also provides a social engagement system that lets service agents engage directly with customers. This system displays the customer’s profile including a detailed interaction history and group memberships. Segment visualization is unusually colorful and attractive.

Lexer has about forty clients, nearly all in Australia. It is just entering the U.S. market and hasn’t set U.S. prices.

Monday, July 03, 2017

The Personal Network Effect Makes Walled Gardens Stronger, But There's Still Hope

I’m still chewing over the role of “walled garden” vendors including Google, Amazon, and Facebook, and in particular how most observers – especially in the general media – fail to grasp how those firms differ from traditional monopolists. As it happens, I’m also preparing a speech for later this month that will touch on the topic, which means I’ve spent many hours working on slides to communicate the relevant concepts. Since just a handful of people will see the slides in person, I figured I’d share them here as well.

In pondering the relation of the walled garden vendors to the rest of us, I’ve come to realize there are two primary dynamics at work. The first is the “personal network effect” that I’ve described previously. The fundamental notion is that companies get exponentially increasing value as they capture more types of information about a consumer. For example, it’s useful to know what’s on someone’s calendar and it’s useful to have a mapping app that captures their locations. But if the same company controls both those apps, it can connect them to provide a new service such as automatically mapping out the day’s travel route.  Maybe you even add helpful suggestions for where to stop for fuel or lunch.


 In network terms, you can think of each application as a node with a value of its own and each connection between nodes having a separate additional value. Since the number of connections increases faster than the number of nodes, there’s a sharp rise in value each time a new node is added. The more nodes you own already, the greater the increase: so companies that own several nodes can afford to pay more for a new node than companies that own just one node. This makes it tough for new companies to break into a customer’s life. It also makes it tough for customers to break away from their dominant network provider.
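
The arithmetic behind that claim is simple: n fully linked apps have n*(n-1)/2 pairwise connections, so each new app adds as many new connections as there are existing apps. A quick illustration, treating every connection as equally valuable (obviously a simplification):

```
def connections(n):
    return n * (n - 1) // 2

for n in range(2, 7):
    print(n, "apps ->", connections(n), "connections",
          "(+%d from adding the latest app)" % (n - 1))
# 2 apps -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15 ...
```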

My best visualization of this is to show the applications surrounding an individual and to draw lines showing how many more connections appear when you add nodes.  If it looks like the customer is trapped by those lines, well, yes.



The point that’s missing from the discussions I’ve seen about walled gardens is that personal networks create a monopoly on the individual level. Different companies can coexist as the dominant networks for different people.  So let’s assume that Google, Facebook, Amazon, and Apple each manage to capture one quarter of the population in their own network. If each member spends 100% of her money through her network owner, the over-all market share of each firm would be just 25%. From a classical viewpoint, that’s a highly competitive market. But each consumer is actually at the mercy of a monopolist.  (If you want a real-life example, consider airline hub-and-spoke route maps.  Each airline has an effective monopoly in its hub cities, even though no airline has an over-all monopoly.  It took regulators a long time to figure that one out, too.)  

In theory the consumer could switch to a new network. But switching costs are very high, since you have to train the new network to know as much about you as the old network. And switching to a new network just means you’re changing monopolists.  Remember that the personal network effect makes it really inconvenient to have more than one primary network provider.

The second dynamic is the competition among network providers to attract new customers. As with any network, personal networks hold a big first mover advantage: whichever provider first sells several apps to the same consumer has a good, and ever-growing, chance of becoming that consumer's primary network.

Once the importance of this becomes clear, you can recognize the game of high-stakes leapfrog that network vendors have been playing for the past two decades. It starts with Amazon in 1994, intercepting buyers before they can reach a physical retailer. A few years later, Google starts catching buyers in the browser, making searches before they’re ready to buy through Amazon. Then Facebook shows up, first with a social network where people discuss their purchases before they make a Web search, and later with a mobile app that bypasses the Web browser altogether. A decade after that, Amazon strikes back with voice search on Alexa, which can happen even before someone types in a social post.

Remember, this isn’t just about selling advertising. Vendors can share that pie. What they can’t share is control over one consumer’s personal network. Since that, in turn, gives control over actual purchases, it’s a much bigger prize and, therefore, worth a great deal more effort to win. Now you see why Amazon has put so much effort into hardware over the years.  It's not just that Jeff Bezos likes cool gadgets.

At this point, you might pause to wonder what happens next.  Is there something that can intercept consumers before they say what they're thinking?  AI- and/or implant-enabled mind reading are certainly possibilities.  But the next frontier right now is subscriptions, which let purchases happen without any specific action for a voice system to intercept.


This is exactly why subscriptions are getting so much attention right now.  (I do need to admit that Dollar Shave Club, Blue Apron, and Birchbox messed up my chronology by launching around 2011, several years before Alexa).

Of course, there’s nothing to prevent the network vendors from launching subscription services. In fact, the price of Blue Apron’s IPO was depressed precisely by the fear that Amazon would enter its business through the Whole Foods acquisition.

But what’s really interesting about subscriptions is they’re less subject to the personal network effect than other types of purchases. A subscription company comes to understand its customers’ needs in one particular area very deeply.  Potentially, it can fulfill those needs better than a company working from less detailed data gathered in other domains.

Certainly a great deal depends on execution.  But if I’ve trained my wine-by-mail company to understand my precise tastes, I’ll probably buy through them when I’m stocking up for my next party, even though Amazon knows I’m planning to have people over and has some general idea that my friends like to drink.

In short, the walled gardens are not impregnable.  Subscriptions might offer a way to help customers escape. But marketers are going to have to work harder than ever to create relationships strong enough to pull their customers away from the networks. All I can do here is to clarify the issues so marketers can better understand the tasks ahead.

Sunday, June 18, 2017

Amazon Buys Whole Foods: It's Not About Groceries

Most of the comments I’ve seen about Amazon’s acquisition of Whole Foods have described it as Amazon (a) expanding into a new industry (b) continuing to disrupt conventional retail and (c) moving more commerce from offline to online channels. Those are all true, I suppose, but I felt they missed the real story: this is another step in Amazon building a self-contained universe that its customers never have to leave.

That sounds a bit more paranoid than it should. This has nothing to do with Amazon being evil. It’s just that I see the over-arching story of the current economy as creation of closed universes by Amazon, Facebook, Google, Apple, and maybe a couple of others. The owners of those universes control the information their occupants receive, and, through that, control what they buy, who they meet, and ultimately what they think. The main players all realize this and are quite consciously competing with each other to expand the scope of their services so consumers have less reason to look outside of their borders. So Amazon buys a grocery chain to give its customers one less reason to visit a retail store (because Amazon’s long-term goal is surely for customers to order online for same-day delivery). And, hedging its bets a bit, Amazon also wants to control the physical environment if customers do make a visit.

I’ve written about this trend many times before, but still haven’t seen much on the topic from other observers. This puzzles me a bit because it’s such an obviously powerful force with such profound implications. Indeed, a great deal of what we worry about in the near future will become irrelevant if things unfold as I expect.

Let me step back and give a summary of my analysis. The starting point is that people increasingly interact with the world through their online identities in general and their mobile phones in particular. The second point is that a handful of companies control an increasing portion of consumers’ experiences through those devices: this is Facebook taking most of their screen time, Google or Apple owning the physical device and primary user interface, and Amazon managing most of their purchases.

At present, Facebook, Apple, Google, and Amazon still occupy largely separate spheres, so most people live in more than one universe. But each of the major players is entering the turf of the others. Facebook and Google compete to provide information via social and search. Both offer buying services that compete with Amazon. Amazon and Apple are using voice appliances to intercept queries that would otherwise go through to the others.

Each vendor’s goal is to expand the range of services it provides. This sets up a virtuous cycle where consumers find it’s increasingly convenient to do everything through one vendor. Instead of a conventional “social network effect” where the value of a network grows with the number of users, this is a “personal network effect” where the value of a vendor relationship grows with the number of services the vendor provides to the same individual.

While a social network effect pulls everyone onto a single universal network, the personal network effect allows different individuals to congregate in separate networks. That means the different network universes can thrive side by side, competing at the margins for new members while making it very difficult for members to switch from one network to the other.

There’s still some value to network scale, however. Bigger networks will be able to create more appealing services and attract more partners. The network owners will also provide sharing services that make it easy for members to communicate with each other (see: Apple FaceTime) but harder to interact with anyone else. So the likely outcome is a handful of large networks, each with members who are increasingly isolated from members of other networks. Think of it as a collection of tribes.

Even without any intentional effort by the network owners, members of each network will have shared experiences that separate them from outsiders: try asking an Android user for help with your iPhone. The separation will become even more pronounced if the network owners more actively control the information their members receive – something that’s already happening in the name of blocking terrorists, bullies, and other genuinely bad actors. Of course, people who prefer a particular world view will be able to form their own networks, which will be economically viable because the personal network effect will outweigh the social network effect. These splinter networks might be owned independently (it’s easy to imagine a Fox News tribe) or owned by a bigger network that just gives each tribe what it wants. Either way, you have a society whose tribes are mutually unaware at best and actively hostile at worst.

Let’s put aside the deeper social implications of all this, in the best tradition of “other than that, how did you like the play, Mrs. Lincoln?” My immediate point is that marketers and technologists should be aware of these trends because they help to explain much of what’s happening today in our industry and help to prepare for what might happen tomorrow. Here are some things to keep an eye on:

- Growth of Voice. As I’ve already mentioned, voice interactions are an alternative to conventional screen interactions. What’s important is the voice interaction often happens first: it’s easier to ask Alexa or Siri to do something than to type that same request into Google, Facebook, or Amazon. This means whoever owns the voice interaction can intercept customer behaviors before anyone else. So pay close attention to voice-based systems: far from a gimmick, they could be keys to the kingdom.

- Owning the Pipes. Network owners want above all to keep their customers’ data to themselves. This will make them increasingly interested in owning the pipes that carry that data and in blocking anyone else from tapping those pipes. Don’t be surprised to see the network owners take an interest in physical networks (cable and phone companies) and alternative connections (community wifi). Also expect them to argue that physical network owners shouldn’t be allowed to use the data they carry (an argument they just lost in Congress but will likely resurrect on privacy grounds) and that they should be able to buy preferential access (the “network neutrality” debate they are now winning in that same Congress). Could a pipe owner grow its own network? The folks at Verizon apparently think so: that’s why they bought AOL and Yahoo!

- Data Motels. It goes pretty much without saying that network owners are eager to take data from other companies, but stingy about sharing their own. So they’re happy to import other companies’ customer lists and serve them ads, conveniently getting paid while gaining new information. But they’re less interested in exporting data about those same common customers. It’s the information version of a roach motel: data checks in but it can’t check out.

- Expanding Services. We’ve already covered this but it’s so important that it bears repeating: network vendors will continue to extend the services they offer, tightly integrating them to increase the “personal network value” of their relationships. Watch carefully and you’ll notice each new service gives customers a reason to share more data, which gives the network owners still more information to better personalize customer services. Everybody wins, although the networks win more.

- The AIs Have It. The networks’ ultimate goal is to handle all their members’ purchases. The best way to do this is to have members delegate as many decisions to the network as possible, starting with things like subscriptions for restocking groceries and on-demand transportation. This saves the effort of making individual sales and, more important, eliminates opportunities for members to leak out of the system. Delegation requires the members to trust the network to make the right decisions on their behalf. Gathering more data is one key to this; artificial intelligence to make good decisions with that data is another. So if you’re thinking the networks are investing in AI only because they’re nerds who like science projects, think again.

- Trust. Arguably, trust is the result of experience, so making good decisions for members should be enough to earn permission to make more decisions. But in practice it will be impossible for consumers to know if the network is really making the best possible choices. So building trust through conventional branding and relationship management will be critical skills for the network marketers, especially when it comes to recruiting new members. (Of course, with network usage starting somewhere around age 2, membership is likely to be more hereditary than anything else.) Data and AI systems will help network marketers know the best way to build trust with each individual, but human marketing skills will also be needed – at least for now.

- Marketing to Networks. If the networks really do take control of their members’ commercial lives, the role of marketers at non-network companies is much diminished. This is already happening: every dollar spent on pay-per-click search or social advertising is essentially a dollar the network spends on the owner’s behalf, based on data only the network possesses. Today, non-network marketers still set budgets, write copy, and select keywords. But those tasks are well on their way to being automated and it won’t matter much whether the automation runs on a machine at the network or the non-network company. The role of the non-network marketer in this world is to market to the network itself. This is already a reality: search engine optimization is really marketing to network search algorithms. It will be even more important when the member isn’t directly involved in the purchase process. No doubt there will be a certain amount of “incentivizing” of the network to pick a particular product, some of it under the table. But there will also be competition to build products and services that best meet member needs and to create brands that members are pleased to have chosen on their behalf.

- Whither MarTech? Marketers at non-network companies will still have jobs whether or not they sell directly to their customers. But martech vendors could face a threat to their existence. Simply put, if the networks capture all direct customer interactions and don’t share their data with outsiders, the market for customer data platforms, journey orchestration engines, predictive analytics, content management systems, and other martech mainstays will vanish. This probably overstates the problem: presumably companies will still interact directly with people after they have made a purchase, even if the purchase itself is managed by the network. But the majority of marketing technology is used for customer acquisition, and much of that could become obsolete.

- Alternate Routes. Like Dickens’ Ghost of Christmas Future, I’m only showing you what might be. Non-network marketers have a strong incentive to preserve direct access to their current and future customers and many suppliers have ways to help. Non-network advertising media are first in line, of course, although they’ve been losing ground at an alarming rate. But many other companies are finding creative ways to capture customer data and attract customers’ attention. Location data and mobile apps are especially contested territory because they let firms reach customers directly in ways that customers find highly valuable. The lowly mobile wallet, if it remains outside the networks’ control, could be an alternative channel for reaching a mass audience. Telecommunication providers, with their deep pockets, broad reach, physical access to mobile devices, and vast government relationships, are probably a better bet. Of course, the telcos would probably rather join the network oligopoly than break it. But the broader point is there are still many players in the game and the outcome is far from decided. I hope this helps you make a little more sense of what's happening on the field.


Saturday, June 10, 2017

Cheetah Digital Debuts in Las Vegas

I spent the latter part of last week still in Las Vegas, switching to the client conference for Cheetah Digital, the newly-renamed spinoff of Experian’s Cross Channel Marketing division. Mercifully, this was at a relatively humane venue, the big advantage being I could get from my hotel room to the conference sessions without walking through the casino floor or a massive shopping mall. But it was still definitely Vegas.

The conference offered a mix of continuity and change. Nearly every client and employee I met had been with Cheetah / Experian for at least several years, so there was a definite feeling of old friends reconnecting. Less pleasantly, Cheetah’s systems have also been largely unchanged for years, something that company leaders could admit openly since they are now free to make new investments. Change was provided by the company’s new name and ownership: the main investor is now Vector Capital, whose other prominent martech investments include Sizmek, Emarsys, and Meltwater. There’s also some participation from ExactTarget co-founder Peter McCormick and Experian itself, which retained 25% ownership. The Cheetah Digital name reflects the company’s origins as CheetahMail, which Experian bought in 2004 and later renamed, although many people never stopped calling it Cheetah.

Looking ahead, newly-named Cheetah CEO Sameer Kazi, another ExactTarget veteran, said the company’s immediate priorities are to consolidate and modernize its technology. In particular, they want to move all clients from the original CheetahMail platform to Marketing Suite, which was launched in 2014. Marketing Suite is based on Conversen, a cross-channel messaging system that Experian acquired in 2012. Kazi said about one third of the company’s revenue already comes from Marketing Suite and that the migration from the old platform will take four or five years to complete.

Longer term, Kazi said Cheetah’s goal is to become the world’s leading independent marketing technology company, distinguishing Cheetah from systems that are part of larger enterprise platforms. Part of the technical strategy to do this is to separate business logic from applications, using APIs to connect the two layers. This will make it easier for marketers to integrate external systems, taking advantage of industry innovation without requiring Cheetah to extend its own products.

Cheetah will also continue to provide services and build customer databases for its clients. Products based on third party data, such as credit information and identity management, have remained with the old Experian organization.

With $300 million in revenue and 1,600 employees, Cheetah Digital is already one of the largest martech companies. It is also one of the few that can handle enterprise-scale email. This makes it uniquely appealing to companies that are uncomfortable with the big marketing cloud vendors. The company still faces a major challenge in upgrading its technology to optimize customer treatments in real time across inbound as well as outbound channels.  It's a roll of the dice.

Wednesday, June 07, 2017

Pega Does Vegas

I spent the first part of this week at Pegasystems’ PegaWorld conference in Las Vegas, a place which totally creeps me out.* Ironically or appropriately, Las Vegas’ skill at profit-optimized people-herding is exactly what Pega offers its own clients, if in a more genteel fashion.

Pega sells software that improves the efficiency of company operations such as claims processing and customer service. It places a strong emphasis on meeting customer needs, both through predictive analytics to anticipate what each person wants and through interfaces that make service agents’ jobs easier. The conference highlighted Pega and Pega clients’ achievements in both areas. Although Pega also offers some conventional marketing systems, they were not a major focus. In fact, while conference materials included a press release announcing a new Paid Media solution, I don’t recall it being mentioned on the main stage.**

What we did hear about was artificial intelligence. Pega founder and CEO Alan Trefler opened with a blast of criticism of other companies’ over-hyping of AI but wasn’t shy about promoting his own company’s “real” AI achievements. These include varying types of machine learning, recommendations, natural language processing, and, of course, chatbots. The key point was that Pega integrates its bots with all of a company’s systems, hiding much of the complexity in assembling and using information from both customers and workers. In Pega’s view, this distinguishes their approach from firms that deploy scores of disconnected bots to do individual tasks.

Pega Vice President for Decision Management and Analytics Rob Walker gave a separate keynote that addressed fears of AI hurting humans. He didn’t fully reject the possibility, but made clear that Pega’s official position is that it’s enough to let users understand what an AI is doing and then choose whether to accept its recommendations. Trefler reinforced the point in a subsequent press briefing, arguing that Pega has no reason to limit how clients can use AI or to warn them when something could be illegal, unethical, dangerous, or just plain stupid.

Apart from AI, there was an interesting stream of discussion at the conference about “robotic process automation”. This doesn’t come up much in the world of marketing technology, which is where I mostly live outside of Vegas. But apparently it’s a huge thing in customer service, where agents often have to toggle among many systems to get tasks done. RPA, as it’s known to its friends, is basically a stored series of keystrokes, which in simpler times was called a macro. But it’s managed centrally and runs across systems. We heard amazing tales of the effort saved by RPA, which doesn’t require changes to existing systems and is therefore very easy to deploy. But, as one roundtable participant pointed out, companies still need change management to ensure workers take advantage of it.
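
To make the idea concrete, here’s a minimal sketch in Python of what a centrally stored, cross-system macro might look like. The step list, system names, and actions are all invented for illustration; real RPA tools record and replay actual keystrokes and window switches rather than printing them.

# A minimal sketch of an RPA-style "macro": a centrally stored list of steps
# replayed across several systems. Real RPA tools drive the actual UIs;
# here the actions are just printed so the example stays self-contained.

STEPS = [  # hypothetical step list, stored and versioned centrally
    {"system": "crm",       "action": "copy_field",  "field": "account_id"},
    {"system": "billing",   "action": "paste_field", "field": "account_id"},
    {"system": "billing",   "action": "click",       "target": "lookup_invoices"},
    {"system": "ticketing", "action": "attach",      "target": "latest_invoice"},
]

def run_macro(steps):
    for step in steps:
        # a real bot would switch windows and send keystrokes here
        print(f"[{step['system']}] {step['action']} -> {step.get('field') or step.get('target')}")

if __name__ == "__main__":
    run_macro(STEPS)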

Beyond the keynotes, the conference featured several customer stories. Coca Cola and General Motors both presented visions of a connected future where soda machines and automobiles try to sell you things. Interesting but we’ve heard those stories before, if not necessarily from those firms. But Scotiabank gave an unusually detailed look at its in-process digital transformation project and Transavia airlines showed how it has connected customer, flight, and employee information to give everyone in the company a complete view of pretty much everything. This allows Transavia to be genuinely helpful to customers, for example by letting cabin crews see passenger information and resolve service issues inflight. Given the customer-hostile approach of most airlines, it was nice to glimpse an alternate reality.

The common thread of all the client stories (beyond using Pega) was a top-down, culture-deep commitment to customer-centricity. Of course, every company says it’s customer centric but most stop there.  The speakers’ organizations had really built or rebuilt themselves around it.  Come to think of it, Las Vegas has that same customer focus at its core. As in Las Vegas, the result can be a bit creepy but gives a lot of people what they want.  Maybe that's a good trade-off after all.

_________________________________________________________________________________________
* On the other hand, I had never seen the corrugated hot cup they had in the hotel food court. So maybe Vegas is really Wonderland after all.

** The solution calculates the value a company should bid to reach individual customers on Facebook, Google, or other ad networks.  Although the press release talks extensively about real time, Pega staff told me it's basically pushing lists of customers and bid values out to the networks.  It's real time in the sense that bid values can be recalculated within Pega as new information is received, and revised bids could be pushed to the networks. 
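
For what it’s worth, here’s a rough sketch of that pattern as I understand it: recompute a bid value per customer whenever profile data changes, then push a customer/bid list out to the network. The scoring formula, field names, and CSV format below are my own stand-ins, not Pega’s.

import csv
from datetime import datetime, timezone

def bid_value(profile):
    # hypothetical formula: expected value of reaching this customer,
    # capped so bids stay within a budget ceiling
    expected_value = profile["conversion_prob"] * profile["avg_order_value"]
    return round(min(expected_value * 0.1, 5.00), 2)

def export_bid_list(profiles, path):
    # push a list of customer identifiers and bid values to an ad network;
    # "real time" here means the values are refreshed whenever profiles change
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "bid_usd", "generated_at"])
        for p in profiles:
            writer.writerow([p["id"], bid_value(p), datetime.now(timezone.utc).isoformat()])

profiles = [
    {"id": "c-101", "conversion_prob": 0.04, "avg_order_value": 120.0},
    {"id": "c-102", "conversion_prob": 0.20, "avg_order_value": 65.0},
]
export_bid_list(profiles, "facebook_bids.csv")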



Saturday, June 03, 2017

SessionM Expands from Loyalty to Full Customer Engagement Management

SessionM launched in 2012 as a platform that increased user engagement by adding gamification and loyalty rewards to mobile apps. The system has since expanded to support more channels and message types. This puts it in competition with dozens of other customer engagement and personalization systems. Compared with those vendors, SessionM’s loyalty features are probably its most distinctive capability.  But it would be misleading to pigeonhole SessionM as a system for loyalty marketers. Instead, consider it a personalized messaging* product that offers loyalty as a bonus option for marketers who need it.



In that spirit, let’s break down SessionM’s capabilities by the usual categories of data, message selection, and delivery.

Data: SessionM can gather customer behaviors on Web and mobile apps from its own tags or using feeds from standard Web analytics tools. It can also ingest data from other sources such as a Customer Data Platform or CRM system. Customer data is organized into profiles and events, which lets the system store nearly any type of information without a complex data model.  SessionM can also accommodate non-customer data such as lists of products and retail stores. It can apply multiple keys to link data related to the same customer, but requires exact matches. This works well when dealing with known customers, who usually identify themselves when they start using a system. Finding connections among records belonging to anonymous visitors would require additional types of matching.
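
As a rough illustration of what a profile-plus-events store with exact-match key linking implies, here’s a small Python sketch. The identifiers, field names, and upsert logic are assumptions for the example, not SessionM’s actual schema; note that nothing here would connect two records that share no exact key, which is why anonymous visitors need other matching methods.

# Profiles keyed by any of several exact-match identifiers (email, loyalty ID).
# Fuzzy or probabilistic matching for anonymous visitors would need more than this.
profiles = {}          # profile_id -> {"attributes": {...}, "events": [...]}
key_index = {}         # (key_type, value) -> profile_id

def upsert(keys, attributes=None, event=None):
    profile_id = None
    for k in keys:                      # reuse an existing profile if any key matches exactly
        if k in key_index:
            profile_id = key_index[k]
            break
    if profile_id is None:
        profile_id = f"p{len(profiles) + 1}"
        profiles[profile_id] = {"attributes": {}, "events": []}
    for k in keys:
        key_index[k] = profile_id
    if attributes:
        profiles[profile_id]["attributes"].update(attributes)
    if event:
        profiles[profile_id]["events"].append(event)
    return profile_id

pid = upsert([("email", "ann@example.com")], {"first_name": "Ann"})
upsert([("loyalty_id", "L-88"), ("email", "ann@example.com")],
       event={"type": "purchase", "sku": "sku-1", "amount": 42.0})
print(profiles[pid])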

Message Selection: SessionM is organized around campaigns.  Each campaign has a target audience, goal (defined by a query), outcome (such as adding points to an account or tagging a customer profile), message, and “execution” (the channel-specific experience that includes the message). SessionM describes the outcome as primary and the message as following it: think of the notification you get after earning an award. Non-loyalty marketers might think of the message as coming first with the outcome as secondary. In practice, the order doesn’t matter.

What does matter is that campaigns can include multiple messages, each having its own selection rules. Message delivery can be scheduled or triggered by variables such as time, frequency, and customer behaviors. This means a SessionM campaign could deliver a sequence of messages over time, even though the system doesn’t have a multi-step campaign builder.  Rules can draw on machine learning models that predict content affinity, churn, lifetime value, near-time purchase, and engagement. Clients can use the standard models or tweak them to fit special needs. Automated product recommendations are due later this year.  Messages are built from templates that can include dynamic elements selected by rules or models.
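
To show how those pieces might fit together, here’s a hedged sketch of a campaign with an audience, a goal, an outcome, and two messages that carry their own selection rules. The rule logic and field names are invented; a real campaign would also attach the channel-specific “execution” and any model scores.

# Illustrative campaign anatomy: audience, goal, outcome, and messages with
# their own selection rules. Field names are invented for the sketch.
campaign = {
    "audience": lambda p: p["tier"] == "gold",
    "goal": lambda p: p["purchases_90d"] >= 3,
    "outcome": {"add_points": 500},
    "messages": [
        {"channel": "push",  "rule": lambda p: p["app_sessions_30d"] > 0,
         "template": "You're {points_to_next_tier} points from Platinum!"},
        {"channel": "email", "rule": lambda p: p["app_sessions_30d"] == 0,
         "template": "We miss you - here are 500 bonus points."},
    ],
}

def select_messages(profile):
    if not campaign["audience"](profile):
        return []
    return [m for m in campaign["messages"] if m["rule"](profile)]

profile = {"tier": "gold", "purchases_90d": 4, "app_sessions_30d": 0,
           "points_to_next_tier": 750}
if campaign["goal"](profile):                 # goal reached -> apply the outcome
    print("apply outcome:", campaign["outcome"])
for m in select_messages(profile):
    print(m["channel"], "->", m["template"].format(**profile))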


Delivery: Campaign messages are delivered through widgets installed in a Web page or mobile app, through lists sent to email providers or advertising Data Management Platforms (DMPs), or through API calls from other systems such as chatbots. Multiple campaigns can connect through the same widget, which raises the possibility of conflicts.  At present, users have to control this manually through campaign and message rules. SessionM is working on a governance module to manage campaign precedence and limit the total number of messages.

The system can generate presentation-ready messages or send data elements for the delivery system to transform into the published format. It supports real time response by loading customer profiles into memory, limiting itself to information required by active campaigns. External systems can access the customer profiles directly through JSON API calls or file extracts, but not through SQL queries.
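
A minimal sketch of what an external system’s profile lookup could look like, assuming a simple authenticated JSON endpoint; the URL, path, and response fields are hypothetical, since SessionM’s actual API isn’t documented here.

import json
import urllib.request

def get_profile(customer_id, api_token):
    # Hypothetical endpoint and auth header: the real API paths aren't
    # documented here, so treat this as a shape, not a contract.
    url = f"https://api.example-sessionm.com/v1/profiles/{customer_id}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_token}"})
    with urllib.request.urlopen(req, timeout=2) as resp:   # in-memory profiles imply a tight latency budget
        return json.loads(resp.read())

# usage (requires a real endpoint and token):
# profile = get_profile("c-101", "TOKEN")
# print(profile["attributes"], profile["events"][-1])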

About that loyalty system: it’s sold as a separate module, so only people who need it have to pay for it. It includes the features you’d expect: points, promotions, awards, status tiers, reward redemption, and so on.  SessionM added the ability to deliver and redeem personalized coupons through retail Point of Sale systems when it bought LoyaltyTree in December 2016.

SessionM has about 70 clients. The company originally sold to large enterprises, which are still about half its customer base. It is now pursuing mid-market clients more actively. The company has raised $73.5 million in funding.


_______________________________________________________________________
* You’ll note that I’m using “customer engagement”, “personalization”, “messaging”, and other system categories interchangeably. It’s probably possible to distinguish among them, but, in practice, all assemble a customer profile, use rules to select messages for individuals, and deliver those messages through execution systems such as Web sites. Most marketers will want to pick just one system to do this sort of thing, so they’ll evaluate vendors from all those classes against each other. This makes distinguishing between them largely an academic exercise.

Wednesday, May 24, 2017

Coherent Path Auto-Optimizes Promotions for Long Term Value

One of the grand challenges facing marketing technology today is having a computer find the best messages to send each customer over time, instead of making marketers schedule the messages in advance.  One roadblock has been that automated design requires predicting the long-term impact of each message: just selecting the message with the highest immediate value can reduce future income. This clearly requires optimizing against a metric like lifetime value. But that's really hard to predict.

Coherent Path offers what may be a solution. Using advanced math that I won’t pretend to understand*, they identify offers that lead customers towards higher long-term values. In concrete terms, this often means cross-selling into product categories the customer hasn’t yet purchased.  While this isn’t a new tactic, Coherent Path improves it by identifying intermediary products (on the "path" to the target) that the customer is most likely to buy now.  It can also optimize other variables such as the time between messages, price discounts, and the balance between long- and short-term results.
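
Stripped of the advanced math, the underlying trade-off can be illustrated with a toy score that blends the chance of an immediate purchase with the estimated lift in long-term value if the customer enters a new category. The numbers, weights, and formula below are purely illustrative and are not Coherent Path’s method.

# Toy illustration of "pick the intermediary offer that best moves a customer
# toward a higher-value category," not Coherent Path's actual math.
candidates = [
    # (offer, prob. of buying it now, est. lift in long-term value if bought)
    ("entry-level espresso maker", 0.12, 40.0),
    ("coffee grinder",             0.20, 15.0),
    ("premium bean subscription",  0.03, 90.0),
]

def long_term_score(prob_now, ltv_lift, short_term_weight=0.3, avg_margin=8.0):
    # blend immediate expected revenue with the expected move toward the target category
    return short_term_weight * prob_now * avg_margin + (1 - short_term_weight) * prob_now * ltv_lift

best = max(candidates, key=lambda c: long_term_score(c[1], c[2]))
print("next offer:", best[0])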

Coherent Path clients usually start by optimizing their email programs, which offer a good mix of high volume and easy measurability. The approach is to define a promotion calendar, pick product themes for each promotion, and then select the best offers within each theme for each customer. “Themes” are important because they’re what Coherent Path calculates different customers might be interested in. The system relies on marketers to tell it what themes are associated with each product and message (that is, the system has no semantic analytics to do that automatically). But because Coherent Path can predict which customers might buy in which themes, it can suggest themes to include in future promotions.

Lest this seem like the blackest of magic, rest assured that Coherent Path bases its decisions on data.  It starts with about two years of interactions for most clients, so it can see a good sample of customers who have already completed a journey to high value status. Clients need at least several hundred products and preferably thousands. These products need to be grouped into categories so the system can find common patterns among the customer paths. Coherent Path automatically runs tests within promotions to further refine its ability to predict customer behaviors. Most clients also set aside a control group to compare Coherent Path results against customers managed outside the system. Coherent Path reports results such as a 22% increase in email revenue and a 10:1 return on investment – although of course your mileage may vary.

The system can manage channels other than email. Coherent Path says most of its clients move on to display ads, which are also relatively easy to target and measure. Web site offers usually come next.

Coherent Path was founded in 2012 and has been offering its current product for more than two years. Clients are mostly mid-size and large retailers, including Neiman Marcus, L.L. Bean, and Staples. Pricing starts around $10,000 per month.

_________________________________________________________________________
* Download their marketing explanation here or read an academic discussion here.

Saturday, May 20, 2017

Dynamic Yield Offers Flexible Omni-Channel Personalization

There are dozens of Web personalization tools available. All do roughly the same thing: look at data about a visitor, pick messages based on that data, and deploy those messages. So how do you tell them apart?

The differences fall along several dimensions. These include what data is available, how messages are chosen, which channels are supported, and how the system is implemented. Let’s look at how Dynamic Yield stacks up.

Data: Dynamic Yield can install its own Javascript tag to identify visitors and gather their information, or it can accept an API call with a visitor ID. It can also build profiles by ingesting data from email, CRM, mobile apps, or third party sources. It will stitch data together when the same personal identifier is used in different source systems, but it doesn’t do fuzzy or probabilistic cross-device matching. Data is ingested in real time, allowing the system to react to customer behaviors as they happen.

Message selection: this is probably where personalization systems vary the most. Dynamic Yield largely relies on users to define selection rules. Specifically, users create “experiences” that usually relate to a single position on a Web page or single message in another channel.  Each experience has a list of associated promotions and each promotion has its own target audience, content, and related settings. When a visitor engages with an experience, the system finds the first promotion audience the visitor matches and delivers the related content.
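
That “first matching audience wins” behavior is simple enough to sketch; the audiences and content names below are invented for illustration.

# "First matching audience wins": each experience holds an ordered list of
# promotions; the visitor gets the content of the first audience they match.
experience = [
    {"audience": lambda v: v["cart_value"] > 200, "content": "free-shipping-banner"},
    {"audience": lambda v: v["visits"] == 1,      "content": "welcome-10-off"},
    {"audience": lambda v: True,                  "content": "default-hero"},
]

def choose_promotion(visitor):
    for promo in experience:
        if promo["audience"](visitor):
            return promo["content"]

print(choose_promotion({"cart_value": 45, "visits": 1}))   # -> welcome-10-off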

This is a pretty basic approach and doesn’t necessarily deliver the best message to visitors who qualify for several audiences. But dynamic content rules, machine-learning, and automated recommendations can improve results by tailoring the final message to each individual. In addition, the system can test different messages within each promotion and optimize the results against a user-specified goal.  This lets it send different messages to different segments within the audience.

Product recommendations are especially powerful.  Dynamic Yield supports multiple recommendation rules, including similarity, bought together, most popular, user affinity, and recently viewed.  One experience can return multiple products, with different products selected by different rules.  In other words, the system can present a combination of recommendations including some that are similar to the current product, some that are often purchased with it, and some that are most popular overall.
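
Here’s a hedged sketch of how one experience might assemble a mixed recommendation set from several strategies; the strategy functions and catalog data are stand-ins for the behavioral queries a real system would run.

# One experience returning a mix of products chosen by different strategies.
# The strategy functions are stand-ins; real ones would query behavior data.
def similar_to(product, catalog, n):      return catalog["similar"][product][:n]
def bought_together(product, catalog, n): return catalog["together"][product][:n]
def most_popular(catalog, n):             return catalog["popular"][:n]

catalog = {
    "similar":  {"jeans-32": ["jeans-30", "jeans-34"]},
    "together": {"jeans-32": ["belt-black", "tee-white"]},
    "popular":  ["sneaker-9", "hoodie-grey", "cap-navy"],
}

def recommend(product, n_each=2):
    recs = (similar_to(product, catalog, n_each)
            + bought_together(product, catalog, n_each)
            + most_popular(catalog, n_each))
    seen, result = set(), []
    for sku in recs:                 # de-duplicate while keeping strategy order
        if sku not in seen:
            seen.add(sku)
            result.append(sku)
    return result

print(recommend("jeans-32"))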

Channels: this is a particular strength for Dynamic Yield, which can personalize Web pages, emails, landing pages, mobile apps, mobile push, display ads, and offline channels. Most personalization options are available in most channels, although there are some exceptions: you can’t do multi-product recommendations within a display ad and system-hosted landing pages can’t include dynamic content.

Implementation: this also varies by channel. Web site personalization is especially flexible: the Javascript tag can read an existing Web page and either replace it entirely or create a version with a Dynamic Yield object inserted, without changing the page code itself. Users who do control the page code can insert a call to the Dynamic Yield API.  Email personalization can also be done by inserting an API call, which lets Dynamic Yield reselect the message each time the email is rendered. The system has direct integration with major ad servers and networks, letting it send targeting rules with different ad versions for each target.
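
The render-time email pattern amounts to an endpoint that reselects content every time the email client asks for it. Below is a generic sketch of such an endpoint in Python; the parameter name and selection logic are assumptions, not Dynamic Yield’s implementation.

# Sketch of the server side of a render-time email call: each time the email
# client requests the personalized block, the content is reselected. The
# endpoint shape and query parameter are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def select_content(user_id):
    # stand-in for the real selection logic (rules, models, recommendations)
    return f"<div>Offer of the moment for {user_id}</div>"

class RenderTimeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_id = parse_qs(urlparse(self.path).query).get("uid", ["unknown"])[0]
        body = select_content(user_id).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), RenderTimeHandler).serve_forever()   # run locally to test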

Dynamic Yield’s multi-channel scope and easy deployment options will be appealing to many marketers. The company has more than 100 customers, primarily in ecommerce and media. Pricing is based on the number of unique user profiles managed and on system components. A small client might pay as little as $25,000 per year, although larger companies can pay much more.

Sunday, May 14, 2017

Will Privacy Regulations Favor Internet Giants?

Last week’s MarTech Conference in San Francisco came and went in the usual blur of excellent presentations, interesting vendors, and private conversations. I’m sure each attendee had their own experience based on their particular interests. The two themes that appeared the most in my own world were:

- data activation. This reflects recognition that customer data delivers most of its value when it is used to personalize customer treatments. In other words, it’s not enough to simply assemble a complete customer view and use it for analytics.  “Activation” means taking the next step of making the data available to use during customer interactions, ideally in real time and across all channels. It’s one of the advantages of a Customer Data Platform, which by definition makes unified customer data available to other systems. This is a big differentiator compared with conventional data warehouses, which are designed primarily to support analytical projects through batch updates and extracts.  Conventional data warehouse architectures load data into a separate structure called an “operational data store” when real-time access is needed. Many CDP systems use a similar technical approach but it’s part of the core design rather than an afterthought. This is part of the CDPs’ advantage of providing a packaged system rather than a set of components that users assemble for themselves. CDP vendors exhibiting at the show included Treasure Data, Tealium, and Lytics.

- orchestration. This is creating a unified customer experience by coordinating contacts across all channels. It’s not a new goal but is standing out more clearly from approaches that manage just one channel. More precisely, orchestration requires a decision system that uses activated customer data to find best messages and then distributes them to customer-facing systems for delivery. Some Customer Data Platforms include orchestration features and others don’t; conversely, some orchestration systems are Customer Data Platforms and some are not. (Only orchestration systems that assemble a unified customer view and expose it to other systems qualify as CDPs.) Current frontiers for orchestration systems are journey orchestration, which is managing the entire customer experience as a single journey (rather than disconnected campaigns), and adaptive orchestration, which is using automated processes to find and deliver the optimal message content, timing, and channels for each customer. Orchestration vendors at the show included UserMind, Pointillist, Thunderhead, and Amplero.

Of course, it wouldn’t be MarTech if the conference didn’t also provoke Deeper Thoughts. For me, the conference highlighted three long-term trends:

- continued martech growth. The highlight of the opening keynote was unveiling of martech Uber-guru Scott Brinker’s latest industry landscape, which clocked in at 5,300 products compared with 3,500 the year before. You can read Brinker’s in-depth analysis here, so I’ll just say that industry growth shows no signs of slowing down.

- primacy of data. Only a few presentations or vendors at the conference were devoted specifically to data, but nearly everything there depends on customer data in one way or another. And, as you know from my last blog post, the main story in customer data today is the increasing control exerted by Google and Facebook, and to a lesser degree Amazon, Apple, and Microsoft. If those firms succeed in monopolizing access to customer information, then many martech systems won’t have the inputs they need to work their magic. That could be the pin that bursts the martech bubble.

- new privacy regulations. As Doc Searls (co-author of The Cluetrain Manifesto) pointed out in the second-day keynote, new privacy regulations also threaten to cut off the data supply of marketing and advertising systems, creating an “extinction level event”. Searls announced a “customer commons” that lets consumers share data on their own terms. It’s an interesting concept but I suspect few consumers will put that much work into personal data management.

My initial inclination was to agree with Searls about the implications of new privacy rules, but I’ve since adjusted my view.  It’s just inconceivable that an economic force as powerful as Internet marketing will let regulations put it out of business. It's much more likely that companies like Google and Facebook will learn to work within the new regulations, which after all don’t ban personal data collection but merely require consumer consent. Surely firms with products that are literally addictive can gain consumer consent in ways that will satisfy even the most determined regulators. More broadly, big companies in general should be able to make the investments needed to comply with privacy regulations with minimal harm to their business.

Small businesses are another matter.  Many will lack the resources needed to understand and comply with new privacy regulations.  In other words, privacy regulations will have the unintended consequence of favoring big businesses – which can afford to find ways to comply – over small businesses – which won’t.   Google and Facebook will spend whatever they must to protect their businesses, in the same way that auto manufacturers found ways to comply with safety and pollution regulations. Indeed, as the auto industry illustrates, the actual cost of compliance is likely to be slight and may even result in better, more profitable products. The impact on small businesses will be to push them to use packaged software – yes, including Customer Data Platforms – that have regulatory compliance built in by experts. The analogy here is with financial and human resources packaged software, which similarly provides built-in compliance with government and industry standards.

Of course, if Google, Facebook, and a handful of others take near-total control over access to customers, there won’t be much data for anyone else to manage. But it seems likely that companies will find ways around those toll booths, especially when dealing with customers who have already purchased their products. Ironically, this would return marketers to the situation that existed before the Internet, when data on prospects was limited but customers could be reached directly. That might put a small crimp in martech growth but would still leave plenty of room for innovation.

Saturday, May 06, 2017

Martech Vendors Can't Avoid Ad Audience Battles

It’s been said that sports are soap operas for men. You can see business news the same way: a drama with heroes, villains, intertwining story lines, and endless plot twists. One of the most interesting stories playing out right now is online advertising, where the walled gardens of Google, Facebook, and other audience aggregators are under assault by insurgent advertisers who, like most rebels, aspire as much to replace their overlords as destroy their power. What they’re really fighting over is control of the serfs – oops, I meant consumers – who create the empires' wealth.

Recent complaints about ad measurement, audience transparency, and even placement near objectionable Web content are all tactics in the assault, aimed both at winning concessions and weakening their opponents. More strategically, support for letting broadband suppliers resell consumer data is an attempt to create alternative suppliers who will strengthen the insurgents’ bargaining position.

Yet another front opened up last week with an announcement from a consortium of adtech vendors, including AppNexus, LiveRamp, MediaMath, Index Exchange, LiveIntent, OpenX, and Rocket Fuel, that they had created a standard identity framework to support personal targeting of programmatic ads. The goal was to strengthen programmatic’s position as an alternative to the aggregators by making programmatic audiences larger, more targetable, and more unified across devices.

The consortium was quite explicit on this goal. To quote the press release:

"Today, 48 percent of all digital advertising dollars accrue to just two companies – Facebook and Google," said Brian O'Kelley, CEO of AppNexus. "That dynamic has placed considerable strain on the open internet companies that generate great journalism, film, music, social networking, and information. This consortium enables precision advertising comparable to that of Google and Facebook, and does so in a privacy-conscious manner. That means better outcomes for marketers, greater monetization for publishers, and more engaging content for consumers."

But behind the rallying cry, the alliance between advertisers and programmatic ad suppliers is uneasy at best. After all, programmatic threatens the core ad buying business of the agencies and faces its own problems of measurement and objectionable ad placement. How the two groups cooperate against a common enemy will be a story worth watching.

Martech vendors have so far remained pretty much neutral in the ad wars, feeding audiences to both sides with the pragmatic indifference of merchants throughout history. But the new ad tech consortium brings the battle closer, since it involves the personal identities that have been the martech vendors’ stock in trade. In particular, LiveRamp (which links anonymous cookies to known identities) belonging to the consortium creates a connection that will likely pull in other martech players. Of course, the convergence between adtech and martech has long been predicted – it's more than two years since I oh-so-cutely christened it “madtech”  and the big marketing clouds started to  purchase data management platforms and other adtech components even earlier.  The merger is probably inevitable as programmatic advertising looks more like personalized marketing every day.  Martech vendors have growing reason to side with the programmatic alliance as it becomes clear that audience aggregators could threaten their own kingdoms by cutting off access to personal data and taking control of contact opportunities.

In short, what seems like a remote, and remotely entertaining, conflict in adland is more closely connected to the central martech story than you may think. So it’s worth watching closely and deciding what role your business will play when they call your cue.



Thursday, April 27, 2017

Infusionsoft Announces Freemium Marketing Automation to Expand Its User Base

Infusionsoft has always combined methodical management of its own business with evangelical cheerleading for its small business clients. The contrast was even greater than usual at the company’s annual ICON conference in Phoenix this week. For Infusionsoft managers, the big news was a new product called Propel, which delivers prepackaged programs for business owners who don’t want to get involved in the details of marketing. For attendees, who were largely partners and power users, the most exciting announcements were improvements to the current product such as a vastly better Web form builder. Propel will let Infusionsoft serve business owners who find the current product too complicated or expensive. Pretty much by definition, those people weren’t at ICON. So while Infusionsoft managers and some far-sighted partners were almost giddy about the growth that Propel could create for their businesses, the larger audience was more interested in ICON’s usual training sessions and inspirational hoopla.

Propel addresses a fundamental problem that has limited the growth of all small business sales and marketing systems: the vast majority of small business owners don’t have the time, money, skills, or interest to use them well.  Vendors have addressed this either by reducing the required effort through easier-to-use interfaces, content templates, and prebuilt campaigns, or by providing services that do the work on business owners’ behalf.  Infusionsoft has done both and also used a relatively expensive mandatory start-up package ($999 or higher) to screen out buyers who aren't serious about using the system. This has worked well for Infusionsoft – the company has grown steadily, although it no longer releases client counts as it positions itself for an as-yet-unscheduled Initial Public Offering. But it also limits the market to the most aggressive small business owners.

Infusionsoft sees Propel as a third way to serve less-ambitious businesses: not just by making the product simpler to use, but by removing some tasks altogether. For example, prebuilt campaign templates typically require users to create or customize the actual content, and often require them to set up campaign flows following cookbook-style directions. Propel will include default content tailored to a particular industry or product. It will automatically scrape a client’s Web site to find a logo and brand colors and apply them. When customization is unavoidable, Propel will let campaign designers build wizards that ask users key questions. The system will then automatically adjust the campaign by inserting relevant information or changing the campaign flow. The goal is campaigns that can be set up in a few minutes with no training and deliver immediately visible benefits. Infusionsoft hopes these will entice business owners who don’t want to commit from the start to a long-term marketing plan.
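
A small sketch of what a wizard-built campaign could look like under the hood: designer-defined questions feed both content insertion and flow branching, with brand elements pulled from the client’s site. All of the names, questions, and steps here are hypothetical, not Propel’s actual design.

# Sketch of a wizard-built campaign: designer-defined questions drive both
# content insertion and flow changes. Everything here is illustrative.
wizard = [
    {"key": "offer_pct",   "question": "What discount will you offer?"},
    {"key": "has_booking", "question": "Do you take online bookings? (y/n)"},
]

def build_campaign(answers, brand):
    steps = [{"type": "email",
              "subject": f"{brand['name']}: {answers['offer_pct']}% off this week",
              "body_color": brand["primary_color"]}]   # brand details scraped from the client's site
    if answers["has_booking"] == "y":
        steps.append({"type": "email", "subject": "Reserve your spot online"})
    else:
        steps.append({"type": "sms", "text": "Call us to book your visit"})
    return steps

brand = {"name": "Lakeside Dental", "primary_color": "#1f6feb"}
answers = {"offer_pct": "15", "has_booking": "n"}
for step in build_campaign(answers, brand):
    print(step)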

The success of this approach is far from certain.  Business owners must still take some initial steps that could be daunting. Infusionsoft managers are acutely aware of the issues and doing everything they can to remove start-up barriers. This includes making it easy to import existing email addresses from phone contact lists, personal email accounts, spreadsheets, accounting systems, or elsewhere. More radically for Infusionsoft, there will be a free version of the system and no start-up fee. This will clearly attract a new set of less-committed users. Delivering enough value for these to stick with the system will be difficult. So will making the system so easy to use that customer support costs are close to zero. It could be hard for Infusionsoft’s entrepreneur-loving staff to limit the help they give to new clients.

Infusionsoft also announced two other major changes during the conference. The simpler one was “partner first”, which translates to relying more on partners to train new clients and provide on-going support.  This will let Infusionsoft support more customers without expanding its internal staff and help to attract more partners.  Propel supports "partner first" by letting partners build their own packaged campaigns and sell them directly to their own clients or in a marketplace to all Infusionsoft clients. Although “partner first” would make sense even without Propel, leveraging partners is a key way for Infusionsoft to grow quickly while keeping costs down and margins high. The company said the proportion of new clients trained by partners has already moved from 20% to over 80%.

The second change was technical: Infusionsoft has built a new data structure and services oriented architecture that can more easily synchronize data with other systems, especially in real time. The slogan for this is “from all-in-one to one platform”. (Infusionsoft loves its slogans.)  Infusionsoft has always been clear that it won’t build a truly complete set of functions: for example, it limits its ecommerce to a relatively simple shopping cart and doesn’t provide its own Web content manager. So the real change is that the underlying data model now includes data expected from external systems. This will simplify integration with new systems and make the imported data easily available for Infusionsoft campaigns and reporting. The goal is for Infusionsoft to be the “one place” that users look for all their data, making life simpler for users and, of course, ensuring that Infusionsoft has a central role in their business operations. The new platform also supports Propel by exposing more functions to build into packaged campaigns and adds some partner-friendly features such as unified access to all instances belonging to a partner’s clients.

Infusionsoft staff made scattered references to using artificial intelligence inside Propel to make recommendations and implement some changes automatically. That would be the ultimate in work reduction.  But they didn’t talk about AI in any detail, perhaps because it’s still in early stages and perhaps because it could scare some users and partners. Or both.

Current customers will have some access to Propel in May or June, and new customers will be placed on the new platform starting mid-summer. Existing customers will be migrated to the new platform in stages through the end of 2018.