Comments on "Feature Prioritization: Multiple Correspondence Analysis Reveals Underlying Structure" (Engaging Market Research). Two comments, newest first.

Joel Cadwell (December 18, 2013), replying to robin's comment below:

Choice modeling works when we carefully mimic the marketplace. I show respondents the retail display and vary a few attributes that are typically varied in that situation, so that all of this is familiar territory for consumers (e.g., price, size, packaging, and a claim or two). This is contextualized or grounded measurement that seeks to reproduce what consumers think and do in the real world, rather than complex designs with so many features and so much variation in unfamiliar situations that consumers are forced to simplify and make something up. Even when we maintain realism, we need to be concerned about the reactive effect of showing several choice sets to every respondent and of varying the attributes too widely (e.g., wide variations in price). We must be careful not to transform the choice exercise into a game that is detached from the mindset consumers actually use in the market, in which case we no longer measure but create effects that will not generalize out of the lab.

Now, this post deals with feature prioritization, where the client asks what happens if we add one of these nine features (e.g., nine different credit card reward options). Of course, nine is simply the number of features used in our example; there is no reason why it could not have been 30. Our task is feature screening, in which one feature at a time is added to the current product. One could create nine choice sets with each feature as an attribute in the design, but that would take some time for the respondent to complete.

Features do not interact because only one feature is added at a time. The task is not feature configuration, where choice modeling might be considered the preferred solution. One must remember, however, that there are always context effects and that the effect of Feature A by itself is not the same as the average effect of Feature A when Feature B is present or absent half the time (e.g., Feature A dominates Feature B, so varying Feature B in and out of the choice sets makes Feature A seem to have greater value than it would have by itself).

Lastly, Robin, please allow me to thank you for your thoughtful questions. What I have provided is my answer to the feature screening problem. I make no claim that it is foolproof; self-report always causes me some concern. I would not be surprised if others find another solution that works better for them.
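To make the one-feature-at-a-time setup concrete, here is a minimal R sketch of the kind of analysis the post's title refers to: each respondent reacts to each feature separately, and multiple correspondence analysis is run on the resulting categorical matrix to reveal the underlying structure. The data, feature labels, and yes/no coding below are simulated assumptions for illustration, not the post's actual design or code.

```r
# Minimal sketch: simulated yes/no interest in nine hypothetical features,
# analyzed with multiple correspondence analysis (FactoMineR::MCA).
library(FactoMineR)

set.seed(123)
n <- 200
features <- paste0("feature_", 1:9)    # hypothetical feature labels
ratings <- as.data.frame(lapply(seq_along(features), function(j)
  factor(rbinom(n, 1, runif(1, 0.2, 0.8)),
         levels = 0:1, labels = c("no", "yes"))))
names(ratings) <- features

mca_fit <- MCA(ratings, graph = FALSE) # multiple correspondence analysis
summary(mca_fit)                       # eigenvalues and category contributions
plot(mca_fit, invisible = "ind")       # map of the feature categories only
```

In practice the responses might be graded importance ratings rather than a simple yes/no; MCA accepts any categorical coding.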
robin (December 17, 2013):

This is a fascinating approach to consumer choice. I have used IRT a lot in psychometric modeling (building assessments and other similar measurement analyses), but in all of my consumer research I have done discrete choice experiments, using a multinomial or rank-ordered logit model. I'm interested to hear more about why you would go with the approach you present here over the choice experiment, which I've always understood to be more accurate.

In your context, you are asking consumers to rate a feature without a counterfactual (or rather, where the only counterfactual is not having the feature): simply, how important is this feature? I'm interested in why you think consumers can make realistic comparisons across features, which would happen in a real purchase, where they have to balance a number of features against one another. Maybe I'm missing your point: perhaps a DCE is superior methodologically, but practically, comparing a large number of features in a DCE is a challenge due to sample size problems?

Another question, which you may have addressed and I overlooked, is how you account for interactions between features. For example, for a car, red may behave differently in a sports car than in an SUV. In a DCE you can explicitly model these interactions. In IRT, each item is assumed to be independent; and in fact, if you are asking about the features separately, not in combination, then you cannot get at this. Or is there something I'm missing?
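On the interaction point: in a DCE the interaction can be written directly into the model formula. Here is a minimal R sketch on simulated data using the mlogit package; the feature names, effect sizes, and design (two alternatives per set) are made up purely for illustration.

```r
# Minimal sketch: a simulated unlabeled choice experiment with price and two
# binary features, where the featA:featB interaction is estimated explicitly.
library(mlogit)

set.seed(42)
n_sets <- 600                                # choice sets, two alternatives each
long <- data.frame(
  chid  = rep(seq_len(n_sets), each = 2),    # choice-set id
  alt   = rep(c("A", "B"), times = n_sets),  # alternative label
  price = runif(2 * n_sets, 1, 5),
  featA = rbinom(2 * n_sets, 1, 0.5),
  featB = rbinom(2 * n_sets, 1, 0.5)
)
# true utilities: featB is worth less when featA is already present (made-up values)
u <- with(long, -0.8 * price + 1.2 * featA + 1.0 * featB - 0.7 * featA * featB) +
  rlogis(2 * n_sets)
long$choice <- ave(u, long$chid, FUN = function(x) x == max(x)) == 1

dce <- mlogit.data(long, choice = "choice", shape = "long",
                   alt.var = "alt", chid.var = "chid")
fit <- mlogit(choice ~ price + featA * featB | 0, data = dce)
summary(fit)  # the featA:featB coefficient captures the interaction directly
```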
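By contrast, on the IRT side, a graded response model (ltm::grm) fit to one-at-a-time feature ratings assumes local independence across features, so there is simply no interaction term to estimate. Again, the data and labels below are simulated for illustration only.

```r
# Minimal sketch: a graded response model on one-at-a-time importance ratings.
# Local independence across the nine features is assumed by the model.
library(ltm)

set.seed(7)
n <- 300
theta <- rnorm(n)                     # latent interest in added features
items <- sapply(1:9, function(j)
  cut(1.2 * theta + rlogis(n),        # 4-point rating driven by the latent trait
      breaks = c(-Inf, -1, 0, 1, Inf), labels = FALSE))
colnames(items) <- paste0("feature_", 1:9)

fit_grm <- grm(items)                 # graded response model
coef(fit_grm)                         # discrimination and thresholds per feature
# Note: there is no feature-by-feature interaction term anywhere in this model.
```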