
If personalized experiences are the Holy Grail of customer loyalty, data is the foundation of personalization. In fact, the pundits will all tell you that you need data, lots and lots of data, to power key aspects of truly personalized journeys, like recommendations. And that "data" includes traffic, user behavior (signals), tons of queries, not to mention product reviews.

Many of the recommendation systems out there are based on collaborative filtering: comparing the current user to other users who have similar preferences. But what if your user data is sparse? Or you have first-time shoppers and don't know much about them? Or people who prefer to always check out as guests?

How do you recommend products for people you don’t know? Or are these shoppers automatically shut out from hyper-personalization? 

Hyper-Personalizing Anonymous/Unknown Customers

A newer technique lets organizations deliver personalization even with low volumes of customer data, variable shopping habits, and anonymous or first-time customers. It means you can determine the intent of any given shopper and serve up the personalized products most likely to convert.

This technique relies on machine learning that analyzes products using the same method used in Natural Language Processing (NLP): embeddings. A word embedding is a dense vector (a numeric representation) of a word. These vectors are learned by analyzing how often words appear near each other, and in which grammatical contexts they appear, such as alongside nouns or verbs.
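To make the co-occurrence idea concrete, here is a minimal sketch. The toy corpus and window size are invented for illustration; real systems use far larger corpora and learn dense vectors rather than raw counts.

```python
from collections import Counter

# A toy corpus: each "document" is a tokenized sentence.
corpus = [
    ["french", "toast", "recipe"],
    ["a", "toast", "to", "the", "bride", "and", "groom"],
    ["french", "toast", "with", "syrup"],
]

def cooccurrence_counts(docs, window=2):
    """Count how often each pair of words appears within `window` positions."""
    counts = Counter()
    for doc in docs:
        for i, word in enumerate(doc):
            for j in range(i + 1, min(i + 1 + window, len(doc))):
                counts[tuple(sorted((word, doc[j])))] += 1
    return counts

counts = cooccurrence_counts(corpus)
# "french" and "toast" co-occur in two documents
print(counts[("french", "toast")])
```

Counts like these are the raw signal; algorithms such as Word2Vec compress them into the dense vectors discussed below.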

For example, if you have ever watched Wheel of Fortune, you have probably seen the Before and After puzzle. The word "toast" is collocated after "French" and before "to the bride and groom."

In the examples above, "toast" is both a noun (modified by "French") and a verb. The word would be indexed both ways to help a person find recipes versus wedding speeches.

This analysis is then mapped into a multi-dimensional vector space with algorithms such as Word2Vec. (Word2Vec is an awesome way to find synonyms.) This process allows for "flexible clustering," or, in other words, embedding things in context.

But words aren't the only discrete entities that can be put into context: so are products. If a text is made up of sentences, which are made up of words, then Ecommerce is made up of sessions, which are made up of products.

Associating Products With Product Embeddings

Indeed, if you can do word embeddings, then you can do product embeddings. Airbnb may have been one of the first to pioneer this concept, using it to improve search relevance based on the types of homes users browse.

In the abstract of their paper, the Airbnb authors point out the problem: renters are rarely looking for the same thing twice, and a listing can only be consumed once for a given set of dates. Search needed to be highly relevant for users with little to no prior history.

Similar to Airbnb, the percentage of anonymous and first-time shoppers on any site can be north of 95%. As your shopper navigates your site, you can begin to fine-tune the experience with product embeddings.

My colleague Sébastien Paquet, PhD, who heads up R&D for machine learning at Coveo, explains how.

"If products are often viewed together, they are closer together in the vector space, and we can infer a high correlation between them," he explains. "This lets us give a signature to a product. This signature is what we use for recommendations."
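As an illustration of that "signature" idea, here is a toy sketch with invented product IDs and sessions. A product's co-view counts stand in for a learned embedding, and cosine similarity ranks candidates; this is a simplification, not Coveo's actual implementation.

```python
import math
from collections import Counter, defaultdict

# Toy sessions with invented product IDs; each session is one visit.
sessions = [
    ["nike_sneaker", "nike_socks", "water_bottle"],
    ["nike_sneaker", "nike_socks"],
    ["adidas_sneaker", "water_bottle"],
    ["nike_sneaker", "nike_shorts"],
]

# A product's "signature" here is its co-view vector: how often it
# appears in the same session as each other product.
signatures = defaultdict(Counter)
for session in sessions:
    for a in session:
        for b in session:
            if a != b:
                signatures[a][b] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

def recommend(product, k=2):
    """Rank other products by how similar their signatures are."""
    target = signatures[product]
    scores = {p: cosine(target, sig)
              for p, sig in signatures.items() if p != product}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("nike_socks"))
```

Even in this tiny example, products that travel together in sessions (the Nike items) end up "close" and get recommended to each other.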

Here’s how it works.

If customers on your site have selected a product from a specific brand, other products from that brand would be weighted higher. Complementary products would be recommended. Or as my colleague Jacopo Tagliabue explains in this brilliant blog on personalizing products, when “Audrey” was browsing through Nike-brand sneaker images she saw these:

[Image: Audrey browsing Nike-brand shoes]

Then she typed "tennis sneakers" in the search bar. She might have been turned off if this generic men's tennis shoe came up:

[Image: generic men's tennis shoe]

Instead of this Nike-branded shoe:

[Image: Nike women's tennis shoe]

By extending the concept of embeddings beyond word representations, we mapped "Audrey's" browsing data to products. Vector representations of products are learned from search sessions and let us measure similarities between products. Similar products (as in the sneakers example) are "close" in that space.

You can't count on products being broken down into neat categories like "tennis shoes," because taxonomies tend to be incomplete, outdated, or missing altogether. By analyzing images and text descriptions, we can predict what the embeddings should look like. (This certainly helps with cold-start problems.)
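One way to sketch that cold-start idea: with no behavioral data, a new product's neighbors can be approximated from its text description alone. The product names and descriptions below are invented, and simple token overlap stands in for the richer text and image models described above.

```python
# Cold start: place a brand-new product near existing ones using only
# its description. Names and descriptions are invented for illustration.
def tokens(desc):
    """Lowercase bag-of-words from a product description."""
    return set(desc.lower().split())

catalog = {
    "nike_womens_tennis": "nike women tennis sneaker white",
    "mens_tennis_shoe":   "men tennis shoe classic",
    "running_sock":       "nike running sock crew",
}

def jaccard(a, b):
    """Overlap of description tokens as a crude content similarity."""
    return len(a & b) / len(a | b)

new_product = "nike tennis sneaker women lightweight"

scores = {p: jaccard(tokens(new_product), tokens(d))
          for p, d in catalog.items()}
nearest = max(scores, key=scores.get)
print(nearest)
```

The new item lands next to the women's Nike tennis sneaker before a single shopper has ever clicked on it.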

So even with limited behavioral data, this sophisticated, granular understanding of products lets us take shoppers' preferences into account and very quickly suggest, return, or recommend related products.

This is a game-changer for Ecommerce players: it makes it possible to personalize from the start, and to increase the relevance of your recommendations as shoppers move through the funnel.

Real-time Data Analysis for B2C and B2B

There is one more consideration: this user behavioral analysis must happen in real time.

Many vendors will say they incorporate user data signals, but they do so in batches, meaning a shopper is treated as a "persona," not a person. Again, you don't want to point your shoppers to what the majority of individuals wants; you want to point them to what they actually want!

In Coveo's case, we can analyze users on the fly. This means if Audrey likes Nike tennis shoes, for example, and then searches for socks, we can infer that she would want Nike-branded socks.
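A minimal sketch of that on-the-fly inference, with invented products and tiny two-dimensional embeddings (real learned vectors have hundreds of dimensions): average the embeddings of what the shopper has viewed this session, then rank search results by affinity to that session vector.

```python
# Hypothetical 2-D embeddings for illustration only; real vectors are
# learned from sessions and are much higher-dimensional.
embeddings = {
    "nike_tennis_shoe": (0.9, 0.1),
    "nike_socks":       (0.8, 0.2),
    "generic_socks":    (0.1, 0.9),
}

def session_vector(viewed):
    """Average the embeddings of products viewed so far this session."""
    xs, ys = zip(*(embeddings[p] for p in viewed))
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def rerank(candidates, viewed):
    """Order search results by dot-product affinity with the live session."""
    sx, sy = session_vector(viewed)
    return sorted(candidates,
                  key=lambda p: embeddings[p][0] * sx + embeddings[p][1] * sy,
                  reverse=True)

# Audrey viewed a Nike shoe, then searched for socks.
print(rerank(["generic_socks", "nike_socks"], ["nike_tennis_shoe"]))
```

Because the session vector is recomputed on every interaction, the ranking follows the person, not a precomputed persona.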

Here’s a great video that explains more. 


B2B Can Personalize With Scarce Data, Too

The same applies to B2B buyers. With supply chains disrupted, people are looking for replacements for items that might be discontinued or out of stock. Again, a deep understanding of your products, of similarities across your catalog, and of product affinities lets you match customers with the products they are most likely to buy.

Imagine a corporate buyer for a general contractor. At some point, the person may be looking for a "cutter." Depending on the project being worked on, this may be a pipe cutter, a wire cutter, or a tile cutter. They are all cutters, although the applications are very different. Using our approach, session variables such as where the user interacts or browses let us show the correct "cutter." With a product catalog of 6 million SKUs, this approach is essential.
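The same session-context trick can be sketched for the "cutter" case. The embeddings below are invented two-dimensional stand-ins (dimension 0 roughly "plumbing," dimension 1 roughly "electrical"), used only to show how browsing context disambiguates the query.

```python
# Invented 2-D embeddings: dim 0 ~ plumbing context, dim 1 ~ electrical.
embeddings = {
    "copper_pipe":  (0.9, 0.1),
    "pipe_fitting": (0.8, 0.1),
    "pipe_cutter":  (0.9, 0.2),
    "wire_cutter":  (0.1, 0.9),
    "tile_cutter":  (0.3, 0.3),
}

def best_match(candidates, session):
    """Pick the candidate whose embedding is closest (by dot product)
    to the centroid of the products browsed this session."""
    cx = sum(embeddings[p][0] for p in session) / len(session)
    cy = sum(embeddings[p][1] for p in session) / len(session)
    return max(candidates,
               key=lambda p: embeddings[p][0] * cx + embeddings[p][1] * cy)

# The buyer browsed plumbing parts, then searched "cutter".
print(best_match(["pipe_cutter", "wire_cutter", "tile_cutter"],
                 ["copper_pipe", "pipe_fitting"]))
```

A plumbing-heavy session pulls "cutter" toward the pipe cutter; an electrical session would pull it toward the wire cutter.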

Think of it another way: the machine is suggestively selling what a knowledgeable rep might have offered!

Shopper Loyalties Are Up for Grabs

Bottom line: shoppers have become highly frustrated by negative online experiences. Sites that offer real-time personalization have a chance to win customers, and their long-term loyalty, if they can make a great first impression.

You don’t need a lot of individual data — you just have to leverage your product data to infer what your users want.

Dig Deeper

Learn more about the Coveo Relevance Engine

About Diane Burley

Diane Burley is an expert on what it takes to stand out in the digital age. She showcases her expertise when writing about the necessity of AI, NLP, and intelligent search in DX projects. She cut her teeth as a reporter, is a long-time technologist, and knows how to tell a story in a gripping way.

Read more from this author