"Sensemaking is a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively."
Klein, Moon, and Hoffman, "Making Sense of Sensemaking 1: Alternative Perspectives" (2006)
Story #1: A Tale of Causal Links
A causal model can serve as a sensemaking tool. I have reproduced below a path diagram from an earlier post organizing a set of customer ratings based on their hypothesized causes and effects. As shown on the right side of the graph, satisfaction comes first and loyalty follows with input from image and complaints. Value and Quality perceptions are positioned as drivers of satisfaction. Image seems to be separated from product experience and causally prior. Of course, you are free to disagree with the proposed causal structure. All I ask is that you "see" how such a path diagram can be imposed on observed data in order to connect the components and predict the impact of marketing interventions.
Actually, the nodes are latent variables, and I have not drawn in the measurement model. The typical customer satisfaction questionnaire has many items tapping each construct. In my previous post referenced above, I borrowed the mobile phone dataset from the R package semPLS, where loyalty was assessed with three ratings: continued usage, switching to a lower-priced competitor, and likelihood to recommend. These items are seen as indicators of commitment and attachment, and their intercorrelations are due to their common cause, which we have labeled Loyalty.
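To make the setup concrete, here is a minimal sketch of fitting that model with partial least squares, using the mobi ratings and the pre-packaged ECSImobi model specification that ship with semPLS (package defaults are assumed; only the weighting scheme is set explicitly):

library(semPLS)

data(mobi)       # mobile phone customer ratings
data(ECSImobi)   # hypothesized structural and measurement model

# estimate the latent variables and the paths connecting them
ecsi <- sempls(model = ECSImobi, data = mobi, wscheme = "pathWeighting")
pathCoeff(ecsi)      # estimated paths, e.g., Satisfaction -> Loyalty
plsLoadings(ecsi)    # how strongly each rating taps its latent variable

The loadings are the measurement model left out of the diagram; the three loyalty ratings, for instance, load on the Loyalty construct.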
Where Do Causal Models Come From?
The data were collected at one point in time, but it is difficult not to impose a learning sequence on the ratings. That is, the analyst overlays the formation process onto the data as if the measurements were made as learning occurred. Brand image is believed to be acquired first, and expectations are thought to be formed before the purchase is made. Product experience is understood to come next in the sequence, followed by an evaluation and, finally, the loyalty decisions to continue using the product and recommend it to others.
As I argued in the prior post, causation is not in the data because the ratings were not gathered over time. By the time the questionnaire is completed, dissonance has already worked its way backward, creating consistencies in the ratings. For instance, when switching is a chore, satisfaction and product perceptions are all higher than they would have been had changing providers been an easier task. In a similar manner, reluctantly recommending only when pressed for your opinion may reverse the direction of the arrows and at least temporarily raise all the ratings. We shall see in the next section how ratings are interconnected by a network of consumer inferences reflecting not observed covariation but belief and semantics.
Story #2: Living on a One-Dimensional Love-Hate Manifold (Halo Effects)
Our first sensemaking tool, structural equation modeling, was shaped by an intricate plot with many characters playing fixed causal roles. Few believe that this is the only way to make sense of the connections among the different ratings. For some, including myself, the causal model seems a bit too rational. What happened to affect? Halo effects are thought of as a cognitive bias, but every summary introduces bias, measured by the variation about the centroid. In the case of customer satisfaction and loyalty, a pointer on a single evaluative dimension can reproduce all the ratings. You tell me that you are very satisfied with your mobile phone provider, and I can predict that you are not dropping a lot of calls.
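That claim is checkable: if a single evaluative dimension is doing the work, a one-factor model should reproduce the correlations among the ratings. A minimal sketch in base R, assuming ratings is a data frame of numeric satisfaction items (the mobi ratings from the first story would serve):

# fit a single "halo" factor and compare implied with observed correlations
fit <- factanal(ratings, factors = 1)

lambda  <- unclass(fit$loadings)                   # each item's pull toward the halo
implied <- tcrossprod(lambda) + diag(fit$uniquenesses)
round(cor(ratings) - implied, 2)                   # near-zero residuals: one dimension suffices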
The halo effect functions as a form of data comprehension. We learn what constitutes a "good" product or service before we buy. These are the well-formed performance expectations that serve as the tests for deciding satisfaction. We are upset when basic functions that are must-haves are not delivered (e.g., the failure of our mobile phone to pair with the car's Bluetooth), and we are delighted when extras we did not expect are included (e.g., responsive customer support). Most of these expectations lie just below awareness until experienced (e.g., breakage and repair costs when the phone is dropped a short distance or onto a relatively soft surface).
This representation orders features and services as milestones along a single dimension so that one can read one's overall satisfaction from their position along this path. You may be familiar with the use of such sensemaking tools in measuring achievement (e.g., spelling ability is assessed by the difficulty of the words one can spell) or political ideology (e.g., a legislator's position along the liberal-conservative continuum depends on the bills voted for and against). In the same way, I evaluate brands and their products by the features and services they are able to provide. We simply reanalyze the same customer satisfaction rating data. The graded response model from the ltm R package will order both customers and the rating items along the same latent satisfaction dimension, as shown in my post Item Response Modeling of Customer Satisfaction.
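A minimal sketch of that analysis, assuming the ltm package and a data frame ratings of ordinal satisfaction items (a 10-point scale may need to be collapsed into fewer categories first):

library(ltm)

grm_fit <- grm(ratings)       # graded response model: one latent satisfaction dimension
coef(grm_fit)                 # item discriminations and category thresholds
factor.scores(grm_fit)        # where each response pattern falls on the continuum

The thresholds order the items along the dimension much as word difficulty orders a spelling test, and the factor scores place each customer on the same continuum.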
Perhaps you noticed that we have changed our perspective or shifted to a new paradigm. Feature ratings are no longer drivers of satisfaction; instead, they have become indicators of satisfaction. In Story #1, a Tale of Causal Links, the arrows go from the features to satisfaction and loyalty. Driver analysis accumulates satisfaction feature by feature, with each adding a component to the overall reservoir of goodwill. However, in Story #2 all the ratings (features, satisfaction, and loyalty) fall along the same evaluative continuum from rage to praise. We can still display the interrelationships with a diagram, though we need to drop the arrows because everything is interconnected in this network.
The manifold from Story #2 makes sense of the data by ranking features based on performance expectations. Some features and services are basic, and every product scores well on them. The premium features and services, on the other hand, are those not provided by every product. Customers decide what they want and are willing to pay, and then they assess the ability of the purchased product to deliver. This is not a driver analysis, for the assessment of each component is not independent of the other components.
Those of us willing to live with the imperfections of our current product tend to rate the product higher in a backward adjustment from loyalty to feature performance. You do something similar when you determine that switching is useless because all the competitors are the same. Can I alter your perceptions by tempting you with a $100 bonus or a free month of service to recommend a friend? It's a network of jointly determined nodes with a directionality represented by the love-hate manifold. The ability to generate satisfaction or engender loyalty is but another node, different from product quality perceptions, yet still part of the network.
How else can you explain how randomly attaching a higher price to a bottle of wine yields higher ratings for taste? Price changes consumer perceptions of quality because consumers make inferences about uncertain features based on what they know about more familiar features. When asked about customer support, you can answer even if you have never contacted customer support. You simply fill in a rating with an inference from other features with which you are more familiar, or you assume it must be good or bad because you are happy or unhappy overall. Such a network analysis can be done with R, as can the driver analysis from our first story.
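One way to draw such a network, offered here as an assumption rather than the original analysis, is the qgraph package, again with ratings as a data frame of numeric items:

library(qgraph)

# edges are regularized partial correlations: the association between two
# ratings that remains after controlling for every other rating
qgraph(cor(ratings), graph = "glasso", sampleSize = nrow(ratings),
       layout = "spring", labels = colnames(ratings))

The resulting graph has no arrows, only a web of jointly determined nodes, which is exactly the picture Story #2 asks us to keep in mind.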