Data warehousing has been around for a long time. It’s a key step that many companies took to modernize their infrastructure and house their rapidly growing stores of data. But will 2013 be the last year for the enterprise data warehouse as we know it?
Many seem to think so. When Gartner released its 2013 tech trends last month, it noted that Big Data was “leading organizations to abandon the concept of a single enterprise data warehouse containing all information needed for decisions.” The Huffington Post was a bit more decisive, as BlueKai’s CEO declared that “the age of the data warehouse is behind us. It’s gone and it’s not coming back.”
As a rule of thumb in customer service, financial models show that any ticket not resolved quickly – due to lack of insight – will see its cost roughly triple. When you factor in research time to gain insight, contact and context re-establishment time, having to bring other people into the mix to resolve a problem, escalation time, time wasted when an agent is interrupted by another agent looking for advice on yet another ticket, and more, you’ve got quite an inefficient, costly problem on your hands.
Worse – from the customer experience perspective – because of queues introduced in the process, case resolution time can easily increase 10x. For those of you who are familiar with manufacturing and Kanban principles, this is a very similar topology of problem. Every time a ticket is unresolved it introduces WIP and delays, driving the whole cycle-time up significantly.
So when we do the math at a high level, the 20% of calls that are more complex often consume more than 60% of the total customer service budget, because these tickets take much longer, require far greater knowledge aggregation and correlation, and can involve more people. Moreover, these 20% of tickets are the ones driving average case resolution time metrics up dramatically.
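The high-level math is easy to check with a back-of-the-envelope sketch. The 6x per-ticket cost multiplier below is an illustrative assumption (consistent with the "cost roughly triples" plus queue effects described above), not a measured figure:

```python
# Back-of-the-envelope math for the "20% of tickets, 60% of budget" claim.
# The 6x cost multiplier is an illustrative assumption, not a measured figure.

simple_share, complex_share = 0.80, 0.20  # ticket mix
simple_cost = 1.0                         # baseline cost of a simple ticket
complex_cost = 6.0                        # assumed relative cost of a complex ticket

total_cost = simple_share * simple_cost + complex_share * complex_cost
complex_budget_share = (complex_share * complex_cost) / total_cost

print(f"Complex tickets consume {complex_budget_share:.0%} of the budget")
```

With these assumed numbers, the 20% of complex tickets account for exactly 60% of total spend.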
Even worse, these tickets are the source of 80% of customer dissatisfaction and, ultimately, client defection. Customers don’t leave after a password reset call; but they are far more likely to leave after a sequence of calls during which the agent is grasping at straws to solve the issue at hand. Trust is further eroded if the client has more information than the agent, gleaned from social media and communities.
So at a time when companies face reduced budgets and staff shortages while being challenged to improve the customer experience, targeting where 60% of the budget is spent – which is also where most customer dissatisfaction arises – seems to me like THE critical place to focus. Injecting greater insight into the process of solving these complex issues faster increases capacity, which results in higher margins, and also drives higher customer satisfaction. Because this can mean big money, it should be viewed as the single most important initiative within a customer service environment.
Knowledge is everywhere… and yes, that means beyond the knowledge base
While knowledge management in its traditional form – typically KBs along with the required access – is clearly a means of delivering the insight required to resolve the most common customer issues, companies that focus on delivering real-time Knowledge Insight from the much broader knowledge ecosystem will win with a more comprehensive strategy. The focus shifts to optimizing the customer experience rather than standardizing knowledge.
This more strategic focus for customer service organizations encompasses knowledge management, but also looks beyond it into the broader knowledge ecosystem, including people’s cumulative know-how, and even reaches out to knowledge within customer communities and other social content sources. For customer service executives, this goal is also much simpler for everybody in the organization to understand: enable the service infrastructure to efficiently deliver the right knowledge to help customers and agents understand and resolve each and every issue, every time, quickly and accurately.
We like to distinguish between what we call “process-centric tickets” vs. “knowledge-centric tickets” in customer service.
In the first case, insight is gained from a curated knowledge base by an agent or, ideally, by the customer through self-service. In the latter, knowledge-centric case, the best practice is to gather insight by distilling relevant information from various silos, identifying and consulting key experts, and correlating and analyzing information efficiently and in real time.
So where is the knowledge needed to gain that kind of insight? Within the KB, for sure. But also everywhere else: within documents, engineering records, CRM and help desk systems, telephony, emails and client communications, ticket histories, and even in cloud-based customer communities, blogs and other social content sources. And what about the knowledge locked in other agents’ experience – and how do you go about identifying the experts?
Bottom line: the knowledge required to gain insight is everywhere. It is a knowledge ecosystem which needs to be enabled and tapped into, not just another data silo curated by a pocket of people. Tapping into this disparate, siloed ecosystem with the right tools can save millions while enhancing customer service.
What are your strategies for helping agents and customers gain insight?
As I mentioned in my previous blog post (part 1 of BI vs. Analytics), the amount of information impacting business operations continues to grow, as markets change and the rate of adoption of new technologies increases. So what’s the next step in making sense of all this data, quickly and efficiently? The answer is combining business intelligence and analytics, driven by Enterprise Search 2.0 platforms, to get the results you need.
Is measuring the variance in predictability really analytics?
Business intelligence as a platform has significantly improved the ability of businesses to gain insight into some of their most important performance questions. At a very basic level, here’s how it works:
- The designer of the data warehouse painstakingly sifts through a myriad of information that the business leaders say is important to run their business, looking for the appropriate data that will provide the answers.
- Once found, models are created so that the information is captured and monitored.
Now the question is: since this is a planned metric, at what point did the analysis take place? If we assume it occurred at design time, then this metric has become predictable, because the only thing it can report is what the model was originally designed to tell us. For example, the model may be designed to monitor the relationship between parts and suppliers: if inventory falls below 20%, an alert appears so that someone can order new parts. Good designers will look for all the possible combinations they can think of to explain why parts would drop below 20%, and put in metrics, scorecards, dashboards, etc., to show what is happening.
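The parts-and-suppliers example can be sketched in a few lines of Python. The point is that the threshold and the rule are fixed at design time – the part names and numbers here are made up for illustration:

```python
# Sketch of a static, pre-planned BI metric: the inventory threshold is fixed
# at design time, so the model can only ever report what it was built to detect.
# Part names and quantities are hypothetical.

REORDER_THRESHOLD = 0.20  # 20%, chosen by the warehouse designer up front

def check_inventory(part, on_hand, capacity):
    """Return an alert when stock for a part falls below the planned threshold."""
    if on_hand / capacity < REORDER_THRESHOLD:
        return f"ALERT: reorder {part} ({on_hand}/{capacity} units on hand)"
    return None  # conditions the designer did not anticipate go unreported

print(check_inventory("part-A", 15, 100))  # below 20% -> alert
print(check_inventory("part-B", 45, 100))  # above 20% -> None
```

Anything outside the designed-in rule – a new question, a new data source – simply goes unreported until the model is rebuilt.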
There is a slight problem, however.
The models generated to create the business intelligence warehouse are static in nature. If additional information is required in the future, the entire process of rebuilding the model, extracting the data, reloading it, and republishing the warehouse must be repeated before the new data is available to answer the new question. Often, little sub-warehouses are created to speed up this process by moving less data and publishing information faster. Although appealing in theory, these sub-warehouses contribute to the proliferation of data – duplicating data that then needs to be updated in more than one location.
Our conclusion is that business intelligence is great at static analysis or measuring predictable results of pre-planned conditions. But what do we do when something unexpected happens?
When static analytics are not enough, what’s next?
What’s next is “dynamic analytics.” Let’s take an internet search as an example. The first thing I would do is go to a search box and type in “species of frogs.” I could then count the total number of species, but what if I just want to count bright green frogs? Because this data exists on the internet, in no particular structure, I can type “bright green frogs” and refine my search. This is fun: “bright green frogs found in South America,” “bright green frogs in South America that live in trees.” Each of these queries is possible, and each one provides me with more specific information.
So what is the difference between internet searches and the business intelligence environment? Every day I could type “bright green frogs in South America that live in trees” into the search box, and every day I could potentially get a different answer – maybe new data was added because destruction of the rainforest caused a species of green frog to become extinct, or because scientists discovered a new species of green frog in another area of South America.
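The refine-as-you-go behavior is easy to illustrate with a naive full-text search over a handful of made-up frog "documents" – no pre-built model, just keywords narrowing the result set:

```python
# Sketch of the dynamic-query idea: each added keyword narrows the result set,
# with no pre-built model required. The frog "documents" are made-up sample data.

documents = [
    "bright green frog in South America that lives in trees",
    "bright green frog in South America that lives near rivers",
    "bright green frog in Madagascar that lives in trees",
    "brown frog in North America that lives in ponds",
]

def search(query, docs):
    """Naive full-text search: keep documents containing every query term."""
    terms = query.lower().split()
    return [d for d in docs if all(t in d.lower() for t in terms)]

print(len(search("bright green frog", documents)))                      # 3
print(len(search("bright green frog South America", documents)))        # 2
print(len(search("bright green frog South America trees", documents)))  # 1
```

Add or remove a document from the collection and the same queries return different answers the next day – exactly the behavior a static warehouse model cannot provide without a rebuild.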
With Enterprise Search 2.0 platforms, this dynamic concept of searching and obtaining relevant information is now possible.
Shifting to Enterprise Search 2.0-powered dynamic analytics for business
Innovative and advanced organizations see the value and power of a unified search platform for their business. A series of state-of-the-art data connectors links the disparate systems in your information ecosystem, pulling information into a common unified index that consolidates, correlates and normalizes the data in near real time and provides ubiquitous access to it.
Isn’t that what the internet is – a common index of information accessible to everyone? Like the internet, Enterprise Search 2.0 platforms enrich the business environment, providing dynamic mash-ups of key relationships between non-integrated data systems through a search query, as opposed to through a warehouse that takes days or weeks to rebuild by moving all the data. Instead of moving the data, the unified index approach only references it, so when new applications – or new entities within existing applications – are added, they become part of the index and are fully accessible.
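The reference-not-move idea can be sketched as a tiny inverted index over two hypothetical source systems. Note that the index stores only pointers; the records themselves stay where they live:

```python
# Sketch of a unified index that *references* records in their source systems
# instead of copying them into a warehouse. The two systems and their records
# are hypothetical.

crm = {"t-1": {"customer": "Acme", "text": "login error after upgrade"}}
helpdesk = {"hd-9": {"customer": "Acme", "text": "password reset request"}}
systems = {"crm": crm, "helpdesk": helpdesk}

# The index maps each term to (system name, record id) pairs; data stays in place.
index = {}
for name, system in systems.items():
    for record_id, record in system.items():
        for term in record["text"].lower().split():
            index.setdefault(term, set()).add((name, record_id))

# A query resolves the references back to the live source systems.
hits = [systems[s][r] for (s, r) in index.get("password", ())]
print(hits[0]["customer"])  # -> Acme
```

Adding a new system means registering one more connector and indexing its terms – no data is duplicated and nothing is rebuilt.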
Joint Research with the TSIA – Enterprise Search 2.0 Powered Analytics: Transform Data into Actionable Knowledge
If you needed further evidence that customer support operations are overwhelmed by data, look no further than the joint research paper released today by the Technology Services Industry Association (TSIA) and Coveo entitled, “Enterprise Search 2.0-Powered Analytics: Transforming Data into Actionable Knowledge.” New data revealed within the report includes this eye-opening statistic: TSIA members receive, on average, 51,000 support incidents per month. These include phone, email, Web chat, and online incidents, each filled with critical information about products and services that could be mined for trends.
I was pleased to have contributed to this report. As the title suggests, the report focuses on Enterprise Search 2.0-powered customer service analytics, a topic relevant to today’s customer service organizations who are awash in oceans of data, and one where we have much expertise to offer. The aim of the report is to help readers understand how support teams are leveraging analytics to deliver real business value in the areas of operational impact, knowledge management, multi-channel management and voice of the customer.
The report outlines how the amount of data flowing through support organizations is increasing every year due to rising interaction volumes and social media activity. The report also reveals interesting figures regarding customer satisfaction scores by channel. In the graph below, you can see the averages follow the same curve as cost: the more human interaction, the higher the satisfaction. The low ratings for self-service are particularly troubling, showing that first-generation knowledgebase and full-text search tools are not keeping pace with customer demand.
The report provides readers with a plan of attack for migrating traffic to the most effective channel for pleasing their customers – which can mean serious financial savings – as well as real-world case studies of organizations that have measured significant business benefits from a unified, 360-degree view of customer information across multiple channels.
One solution to this data overload is the adoption of analytics in the form of 360-degree views of data centered on what matters most: the customer and the customer base, product and sales information, and customer support performance metrics. The ability to consolidate and correlate data from multiple sources enables the detection of customer trends and the identification of new operational and financial insights.
The full TSIA/Coveo report –“Enterprise Search 2.0-Powered Analytics: Transforming Data into Actionable Knowledge” – can be accessed here: www.coveo.com/TSIAreport.
Information impacting business operations is diverse, complex and growing at staggering rates. Unrelenting competition, changing markets, and accelerating rates of adoption for new technology place tremendous strain on IT and business infrastructures. Access to actionable knowledge continually sparks the debate between business intelligence and analytics, and the roles each plays in making informed decisions.
In the past, organizations have struggled to find people willing to sift through mountains of data in order to properly analyze the information needed to make smart decisions. BI made this process easier by introducing analytics as part of the company’s strategic decision-making process. Unfortunately, many companies striving to run their entire organization on BI alone have fallen short for a number of reasons:
- The same people who were sifting through all of the data are now trying to manage the surplus of data required to create an all-encompassing warehouse;
- BI infrastructure and design are faced with a dilemma: as soon as they are completed, they are out of date due to the massive proliferation of data in the business ecosystem. It is almost impossible for organizations to keep up with the veritable explosion of data from new sources;
- The needs of an organization are constantly shifting. In order to respond to these changes, it is necessary (but virtually impossible) to anticipate today what will happen tomorrow.
My guess is that this debate of BI vs. analytics has been in progress since the inception and branding of BI as a standalone discipline for organizations. BI, as I see it, is a complete end-to-end platform consisting of tools, processes and business models that allow for the retrieval of relevant information in the best format for your business. At this level, analytics is a key part of the BI process. It’s about the predictability of the business – to the extent to which you can predict it – based on potential variances of business norms. The question of what data is being retrieved becomes static in the bigger picture.
One of the biggest questions I hear raised in the debate of BI vs. analytics is: “How dynamic must the access/navigation of information be to really make analytics representative of true business intelligence?” I believe the answer lies in leveraging Enterprise Search 2.0 platforms as a driving source for business intelligence and analytics, and I will explore this idea further in my next blog post.
This week we announced new research that reveals some harsh realities for today’s contact center. The survey results indicate the biggest problems are caused by inefficient access to the information needed to solve customer issues, as data continues to proliferate beyond the traditional knowledgebase. Our survey was conducted in partnership with Omega Management Group – home to the Center for Loyalty Research and a leader in customer experience management (CEM) strategy.
Perhaps the harshest reality contact centers are facing is that the knowledgebase in which they have invested countless dollars and other resources, and which has been the center of their knowledge management strategy, is no longer enough.
While nearly 70% of customer service organizations report they’ve invested in a knowledgebase, that same percentage report that the knowledgebase does not contain the information necessary for agents to solve customer issues. For companies with more than 10,000 employees, 43% report that information that contact center agents need to access to resolve customer issues resides in more than 20 systems.
Other survey findings include the following:
- 70% of survey respondents indicated that they are facing significant challenges as a result of agents not being able to find necessary customer information.
- Respondents listed case handling time (50%), customer satisfaction (49%), and first contact resolution (FCR) (49%) as the top three challenges.
- 30% of participants estimated the impact of knowledge base operational challenges at between $100,000 and $1 million per year, including six percent who put the cost at $1 million to $5 million.
We also created an infographic to depict some of the key survey findings.
Additional survey findings can be found in the official press release.
We’ve seen how the explosion of data is overwhelming practically every company, and customer service organizations are not exempt from the pressure. A negative customer experience directly impacts customer satisfaction, renewal rates, and other important metrics.
Are these challenges that your organization is facing, or that you have overcome?