Wednesday, March 6, 2013

Let's Do Some Hierarchical Bayes Choice Modeling in R!

It can be difficult to work your way through hierarchical Bayes choice modeling.  There is just too much that is new to learn.  If nothing else, one gets lost in all the ways that choice data can be collected and analyzed.  Then there is all this output from the estimation procedure that you have probably never seen before.  What is this Markov chain Monte Carlo (MCMC)?  What if I don't want to become a Bayesian?  How does one get started?
We can flatten the learning curve by focusing on one function from the R package bayesm applied to one type of choice data.  Hopefully, once you have stepped through a simple example, you will no longer be completely confused when reading introductions and tutorials.  My claim is not that you will understand everything, but that you will stop feeling so overwhelmed.
 
Your choice:  Brand A at some $ versus Brand B at some $ versus No Thank You
 
This is a common choice setting.  The researcher wants to learn about price sensitivity for two brands in the market.  Why not generate a number of choice sets with both brands at varying prices, along with the option to refuse to buy either brand?  The last option is needed because customers will leave the market if prices are too high.  We will keep it simple by using three price points for Brand A and three price points for Brand B and crossing the two pricing factors.  That is, we will use a complete factorial design with the 3x3=9 cells obtained from the table with Brand A's prices as the rows and Brand B's prices as the columns.
 
Below is what I had in mind: nine choice sets with three alternatives in each choice set.  Price 1 is the lowest price, and Price 3 is the highest price.  The price points can be different for the two brands because we will be estimating separate effects for Brand A's and Brand B's prices (called alternative-specific effects).  The proportions show the likelihood of a respondent selecting each alternative in each choice set.  You should note that Brand A seems to be the preferred brand, with higher choice probabilities except in Choice Set #7, where Brand A is at its highest price and Brand B is at its lowest price.  You should also notice that this is a main-effects design, with the likelihood of selecting Brand B remaining the same at each price level of Brand A.  That is, the pattern 0.3 - 0.2 - 0.1 is repeated for each of Brand A's price levels.  A similar pattern can be seen for Brand A's proportions within Brand B's price levels (0.5 - 0.4 - 0.3).
 
 
Choice        Pricing Level          Proportion Selecting Option
Set       Brand A     Brand B      Brand A    Brand B    Neither
  1       Price 1     Price 1        0.5        0.3        0.2
  2       Price 1     Price 2        0.5        0.2        0.3
  3       Price 1     Price 3        0.5        0.1        0.4
  4       Price 2     Price 1        0.4        0.3        0.3
  5       Price 2     Price 2        0.4        0.2        0.4
  6       Price 2     Price 3        0.4        0.1        0.5
  7       Price 3     Price 1        0.3        0.3        0.4
  8       Price 3     Price 2        0.3        0.2        0.5
  9       Price 3     Price 3        0.3        0.1        0.6
 
 
Randomly Generating the Data So You Can Step Through the Analysis
 
Choice modeling is an "as if" statistical model.  It is assumed that people behave as if they assigned utility to the attribute levels varied in a choice design.  Products are attribute bundles, so the value of the product is a function of the utilities of its component attribute levels.  One should be able to look at the proportions in the above table and infer what utilities would have produced such data.  That is, Brand A tends to be selected more often than Brand B, therefore Brand A must have a higher utility than Brand B.  Similarly, one can observe price sensitivity in the decreasing proportions selecting each brand as that brand's price increases.
 
My goal was to keep it simple with a 3x3 complete factorial and choice-set probabilities that would "reveal" the underlying utilities.  Here is the R code to generate 100 respondents making choices from the above nine sets.  This code was written to be easy to follow.  It creates a NULL object "choice" outside of the loop to store the dataset.  The loop repeats 100 times, row-binding a new respondent to choice.  Data for only 100 respondents were created because I wanted to minimize the run time for the hierarchical Bayes estimation.  Each of the nine choice sets is called a cell.  On each pass through the loop, all nine cells are reset to NULL, and a new choice is made using the probabilities from the above table.
 
 # set seed in order to replicate
  
 set.seed(362013)
 #create structure for choice data
 choice<-NULL
  
 #simulates 100 respondents and 9 choice sets
 for (i in 1:100) {
   cell1<-NULL
   cell2<-NULL
   cell3<-NULL
   cell4<-NULL
   cell5<-NULL
   cell6<-NULL
   cell7<-NULL
   cell8<-NULL
   cell9<-NULL
  
   cell1<-rbind(cell1,which.max(rmultinom(1, size = 1, prob = c(0.5,0.3,0.2))))
   cell2<-rbind(cell2,which.max(rmultinom(1, size = 1, prob = c(0.5,0.2,0.3))))
   cell3<-rbind(cell3,which.max(rmultinom(1, size = 1, prob = c(0.5,0.1,0.4))))
   cell4<-rbind(cell4,which.max(rmultinom(1, size = 1, prob = c(0.4,0.3,0.3))))
   cell5<-rbind(cell5,which.max(rmultinom(1, size = 1, prob = c(0.4,0.2,0.4))))
   cell6<-rbind(cell6,which.max(rmultinom(1, size = 1, prob = c(0.4,0.1,0.5))))
   cell7<-rbind(cell7,which.max(rmultinom(1, size = 1, prob = c(0.3,0.3,0.4))))
   cell8<-rbind(cell8,which.max(rmultinom(1, size = 1, prob = c(0.3,0.2,0.5))))
   cell9<-rbind(cell9,which.max(rmultinom(1, size = 1, prob = c(0.3,0.1,0.6))))
  
  row<-cbind(cell1,cell2,cell3,cell4,cell5,cell6,cell7,cell8,cell9)
  choice<-rbind(choice,row)
 }
  
 choice
 apply(choice,2,table)
 choice_df<-data.frame(cbind(1:100,choice))
 names(choice_df)<-c("id","set1","set2","set3","set4","set5","set6","set7","set8","set9")
  
 choice_df
  
 
I used the R function rmultinom to generate the multinomial choice using the table proportions.  The resulting dataset has the following format, showing each respondent as a row and the alternative that was selected in each choice set (1=Brand A, 2=Brand B, and 3=Neither).
 
id   set1   set2   set3   set4   set5   set6   set7   set8   set9
 1     2      2      1      3      1      3      1      3      2
 2     1      3      1      1      3      3      1      2      3
 3     1      3      3      3      3      1      1      3      1
 4     3      2      1      1      3      1      1      3      1
 5     1      1      3      1      1      1      1      3      3
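To see the mechanism in isolation: each cell in the simulation is one rmultinom draw converted to an alternative number with which.max.  A minimal standalone example using Choice Set 1's probabilities:

```r
# One simulated choice from Choice Set 1 (probabilities 0.5, 0.3, 0.2)
set.seed(1)
draw <- rmultinom(1, size = 1, prob = c(0.5, 0.3, 0.2))
draw            # a 3 x 1 indicator matrix containing a single 1
which.max(draw) # the row holding that 1, i.e., the chosen alternative
```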
 
 
One should always begin by looking at the counts for each of the choice sets.  Had we not known how these data were generated, we could still observe a pattern.  Brand A is selected more often than Brand B.  Price sensitivity for Brand A is clear, with Price 1 > Price 2 > Price 3.  Although there is some price inconsistency for Brand B between Sets 4 and 5, the marginal totals indicate that Brand B was selected 70 times at Price 1 and 60 times at Price 2.  Exploratory analysis of the raw counts is always useful; however, it becomes more difficult when the design is not balanced (e.g., when attribute levels do not appear equally often or when attributes are correlated).
 
Option   set1   set2   set3   set4   set5   set6   set7   set8   set9
  1       58     51     58     47     39     41     37     35     46
  2       26     21      8     15     18      9     29     21      5
  3       16     28     34     38     43     50     34     44     49
 
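The marginal totals cited above are easy to check by hand.  Here is a small sketch that re-enters the count table as a matrix (these counts come from my seed; yours will differ if you change it):

```r
# Counts from the table above: rows = options (A, B, Neither), cols = sets
counts <- matrix(c(58, 51, 58, 47, 39, 41, 37, 35, 46,
                   26, 21,  8, 15, 18,  9, 29, 21,  5,
                   16, 28, 34, 38, 43, 50, 34, 44, 49),
                 nrow = 3, byrow = TRUE)
# Brand B (row 2) is at Price 1 in sets 1, 4, 7 and at Price 2 in sets 2, 5, 8
sum(counts[2, c(1, 4, 7)])  # 70 selections at Price 1
sum(counts[2, c(2, 5, 8)])  # 60 selections at Price 2
```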
 
On to the Design Matrices

So far, we have a data frame with the categorical dependent variable called choice.  Now we need to attach a design matrix for each of the nine choice sets.  It might be easiest to explain by presenting the R code.  Design #1 is for the first choice set.  There are 6 columns for the 6 parameters to be estimated: 2 degrees of freedom for the three alternatives, plus 2 for Brand A's price and 2 for Brand B's price.  I have used dummy coding so that the third alternative, Neither, is the base with only 0s.  The columns, in order, represent the effects for Brand A, Brand B, Brand A Price 1, Brand A Price 2, Brand B Price 1, and Brand B Price 2.  Brands A and B appear in every choice set; only the effects for price vary.
 
 design1<-matrix(c(
  1,0,1,0,0,0,
  0,1,0,0,1,0,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design1
  
 design2<-matrix(c(
  1,0,1,0,0,0,
  0,1,0,0,0,1,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design2
  
 design3<-matrix(c(
  1,0,1,0,0,0,
  0,1,0,0,0,0,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design3
  
 design4<-matrix(c(
  1,0,0,1,0,0,
  0,1,0,0,1,0,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design4
  
 design5<-matrix(c(
  1,0,0,1,0,0,
  0,1,0,0,0,1,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design5
  
 design6<-matrix(c(
  1,0,0,1,0,0,
  0,1,0,0,0,0,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design6
  
 design7<-matrix(c(
  1,0,0,0,0,0,
  0,1,0,0,1,0,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design7
  
 design8<-matrix(c(
  1,0,0,0,0,0,
  0,1,0,0,0,1,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design8
  
 design9<-matrix(c(
  1,0,0,0,0,0,
  0,1,0,0,0,0,
  0,0,0,0,0,0),
 nrow=3,ncol=6,byrow=TRUE)
 design9
  
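The nine matrices above were typed out for transparency, but they follow a pattern, so they can also be built in a short loop.  A sketch (same column order as above, with Price 3 as the base level for both brands):

```r
# Build all nine 3 x 6 design matrices in one pass.  Columns are:
# Brand A, Brand B, A Price 1, A Price 2, B Price 1, B Price 2.
designs <- vector("list", 9)
k <- 0
for (pa in 1:3) {       # Brand A's price level (rows of the 3x3 factorial)
  for (pb in 1:3) {     # Brand B's price level (columns)
    k <- k + 1
    X <- matrix(0, nrow = 3, ncol = 6)
    X[1, 1] <- 1                     # Brand A alternative
    X[2, 2] <- 1                     # Brand B alternative
    if (pa < 3) X[1, 2 + pa] <- 1    # A's price dummy (Price 3 = base)
    if (pb < 3) X[2, 4 + pb] <- 1    # B's price dummy (Price 3 = base)
    designs[[k]] <- X
  }
}
designs[[1]]  # identical to design1 above
```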

Creating a List Structure for bayesm

We will be using the function rhierMnlRwMixture from the R package bayesm to estimate the parameters of our choice model, and bayesm uses a list structure to store the y (choices), the X (design matrices), and the z (covariates).  We will not be discussing covariates in this post, but you should be aware that bayesm offers much greater capability than we will be covering.  For instance, the function's name, r-hier-Mnl-Rw-Mixture, unpacks as a hierarchical multinomial logit estimated with a random-walk Metropolis algorithm and a mixture-of-normals distribution of heterogeneity.  We, however, will be keeping it simple.

The following code will create the list structures that bayesm needs.  It is somewhat more general than we need for this example given that every individual sees the same nine choice sets.  Briefly, id is used to select only those rows with the same id value.  This allows us to have multiple rows for each respondent, as we might if the data were in long rather than wide format.  The list will be stored in lgtdata.  Then, we loop through each respondent to create a list of 100 with y and X.
 
 id=levels(as.factor(choice_df[,1]))
 lgtdata=NULL
  
 for (i in 1:100)
 {
  respdata=choice_df[choice_df[,1]==id[i],]
  ty<-NULL
  tdesign<-NULL
  ty=c(ty,respdata$set1)
  tdesign=rbind(tdesign,design1)
  ty=c(ty,respdata$set2)
  tdesign=rbind(tdesign,design2)
  ty=c(ty,respdata$set3)
  tdesign=rbind(tdesign,design3)
  ty=c(ty,respdata$set4)
  tdesign=rbind(tdesign,design4)
  ty=c(ty,respdata$set5)
  tdesign=rbind(tdesign,design5)
  ty=c(ty,respdata$set6)
  tdesign=rbind(tdesign,design6)
  ty=c(ty,respdata$set7)
  tdesign=rbind(tdesign,design7)
  ty=c(ty,respdata$set8)
  tdesign=rbind(tdesign,design8)
  ty=c(ty,respdata$set9)
  tdesign=rbind(tdesign,design9)
  lgtdata[[i]]=list(y=ty,X=as.matrix(tdesign))
 }
  

To verify that the code functioned as intended, you need to view the elements of lgtdata.  You index the list as lgtdata[[1]] for the first respondent and lgtdata[[100]] for the last respondent.  Here are the data for the first respondent.  The design matrices have been stacked in $X, and the choices have been placed in the vector $y.  Once you tell rhierMnlRwMixture that there are three alternatives per choice set, $X will be subdivided and matched with $y.

$y
[1] 2 2 1 3 1 3 1 3 2

$X
      [,1] [,2] [,3] [,4] [,5] [,6]
 [1,]    1    0    1    0    0    0
 [2,]    0    1    0    0    1    0
 [3,]    0    0    0    0    0    0
 [4,]    1    0    1    0    0    0
 [5,]    0    1    0    0    0    1
 [6,]    0    0    0    0    0    0
 [7,]    1    0    1    0    0    0
 [8,]    0    1    0    0    0    0
 [9,]    0    0    0    0    0    0
[10,]    1    0    0    1    0    0
[11,]    0    1    0    0    1    0
[12,]    0    0    0    0    0    0
[13,]    1    0    0    1    0    0
[14,]    0    1    0    0    0    1
[15,]    0    0    0    0    0    0
[16,]    1    0    0    1    0    0
[17,]    0    1    0    0    0    0
[18,]    0    0    0    0    0    0
[19,]    1    0    0    0    0    0
[20,]    0    1    0    0    1    0
[21,]    0    0    0    0    0    0
[22,]    1    0    0    0    0    0
[23,]    0    1    0    0    0    1
[24,]    0    0    0    0    0    0
[25,]    1    0    0    0    0    0
[26,]    0    1    0    0    0    0
[27,]    0    0    0    0    0    0


Finally We Can Run the Analysis

We need to set a couple of values and pass those values along with the data to the function.  R=20000 is the number of MCMC draws, and keep=10 is the thinning parameter telling the function to keep every 10th draw.

 mcmc=list(R=20000,keep=10)
  
 library(bayesm)
 out=rhierMnlRwMixture(Data=list(p=3,lgtdata=lgtdata),Prior=list(ncomp=1),Mcmc=mcmc)
 plot(out$loglike,type="l")
 trace<-t(apply(out$betadraw,c(2,3),mean))
 matplot(trace, type="l")
 beta.501_2000<-apply(out$betadraw[,,501:2000], 2, mean)
 beta.1001_2000<-apply(out$betadraw[,,1001:2000], 2, mean)
 beta.1501_2000<-apply(out$betadraw[,,1501:2000], 2, mean)
 cbind(beta.501_2000,beta.1001_2000,beta.1501_2000)
  

The rhierMnlRwMixture function took about 10 minutes on my PC to complete the 20,000 draws.  The coefficients for the parameters in our design matrix are stored in out$betadraw, which is a 100 x 6 x 2000 array.  The first two dimensions, 100 x 6, seem reasonable: if hierarchical Bayes yields individual-level beta weights, then we should have six coefficients for each of the 100 respondents.  But we do not have one estimate of each coefficient for each respondent.  There were 20,000 draws per respondent, of which we kept every 10th, for 2,000 in total.
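To see how that array collapses to one estimate per respondent per coefficient, here is a sketch that averages over the retained draws.  A random dummy array stands in for out$betadraw so the snippet runs without the MCMC output; with real output you would use out$betadraw itself:

```r
# Dummy stand-in for out$betadraw: 100 respondents x 6 coefficients x
# 2000 retained draws
betadraw <- array(rnorm(100 * 6 * 2000), dim = c(100, 6, 2000))
# Posterior mean per respondent x coefficient, discarding the first
# 1000 retained draws as burn-in
ind_beta <- apply(betadraw[, , 1001:2000], c(1, 2), mean)
dim(ind_beta)  # 100 respondents by 6 coefficients
```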

I have included a number of lines after rhierMnlRwMixture that require some time to explain because they involve issues about whether the MCMC draws have entered a high-probability region (often called convergence).  Appendix A from the book Bayesian Statistics and Marketing reviews these commands when discussing another example.  This chapter by Rossi and Allenby also describes this same study.  Both readings are challenging, but neither is overly technical.

However, I do want to finish with the last four lines of code.  I want to examine the aggregate-level estimates for my six parameters after averaging over respondents.  If I discard the first 500 draws, I will average draws 501 to 2000.  Thus, the three apply operations discard a different number of initial draws (500, 1000, and 1500).  One of the conditions for entering a high-probability region is that the estimates should not fluctuate widely.  Let's take a look.

 
                    beta.501_2000   beta.1001_2000   beta.1501_2000
Brand A                 -0.08           -0.08            -0.04
Brand B                 -1.99           -2.02            -2.01
Brand A, Price 1         0.76            0.74             0.69
Brand A, Price 2         0.11            0.09             0.05
Brand B, Price 1         1.39            1.43             1.43
Brand B, Price 2         1.16            1.19             1.18

We do not see much fluctuation, but do these coefficients reflect what we knew when we generated the data?  Given the dummy coding, the coefficients appear reasonable.  The brands are ordered as expected, given that "Neither" was the base alternative and has a coefficient of zero (Neither = Brand A > Brand B).  The highest price, Price 3, is also coded 0, so price sensitivity seems to be in the correct direction.  But these coefficients do not represent linear effects, and we should be reluctant to reach general conclusions.  We need a simulator to assess the impact of changing prices for the two brands.
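As a hint of what such a simulator looks like, here is a minimal multinomial logit share calculation that plugs in the aggregate means from the beta.1001_2000 column above.  This is only a sketch: a proper simulator would average shares over the individual-level draws rather than use aggregate means.

```r
# Aggregate posterior means, ordered: Brand A, Brand B, A Price 1,
# A Price 2, B Price 1, B Price 2 (beta.1001_2000 column above)
beta <- c(-0.08, -2.02, 0.74, 0.09, 1.43, 1.19)
# Scenario: Brand A at Price 2 versus Brand B at Price 1 (choice set 4)
X <- matrix(c(1, 0, 0, 1, 0, 0,    # Brand A at Price 2
              0, 1, 0, 0, 1, 0,    # Brand B at Price 1
              0, 0, 0, 0, 0, 0),   # Neither (base alternative)
            nrow = 3, byrow = TRUE)
v <- as.vector(X %*% beta)       # utility of each alternative
shares <- exp(v) / sum(exp(v))   # multinomial logit choice shares
round(shares, 2)                 # 0.39 0.22 0.39
```

Rerunning with different rows of X lets you watch the shares shift as either brand's price moves.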

As you might recall, I began this post with the hope that it would help the reader to run their first hierarchical Bayes choice model using R.  With that in mind, I tried to stay focused on running the analysis.  For those wanting to understand the details, Kenneth Train provides an updated online textbook and some very blurry RealPlayer videos of a 2001 course.  The learning curve is still very steep.  Perhaps with the ability to run the analysis in R, you will be motivated to start the climb.

 

8 comments:

  1. mmmm...very nice!

  2. Nice R foo in matching the design files to make a usable dataset... it's nice to have a walkthrough of what you need to actually use bayesm. Thanks!

  3. This is super useful !! Thank you so much for explaining it step by step. I had purchased Bayesian Statistics and Marketing but found it very hard to go through it. I still need a much more simplified version for it. Any site you could recommend ?
    Also in your last few lines you've talked about not being able to interpret the co-efficients since its not a linear analysis and hence we will need to perform a simulation. Would I be too greedy if I ask you to make a post on it ? Or alternatively if you could give me some pointers?

    1. I am very happy to hear that the post was useful. I am even happier to see that you have started the steep climb. If it is any comfort, you are not alone in finding the BSM book difficult. However, it is actually well-written, which you will discover once you have learned more. I gave the link in the post to the chapter by Rossi and Allenby because it seems more introductory. Some find the technical reports by Sawtooth Software to be more approachable (see their website under CBC/HB).

      The problem is that hierarchical Bayes choice modeling is hard because it is a hierarchical model, it is Bayesian, and it is choice modeling (i.e., the dependent variable is multinomial). I would recommend that you start with choice modeling. I gave a link in the post to John Fox because he writes well and provides a lot of free material. You need to learn first what is wrong with the linear probability model, then master logistic regression and multinomial models. Next, what is a hierarchical model (also called a multilevel model or a mixed model)? Many recommend Gelman and Hill's book "Data Analysis Using Regression and Multilevel/Hierarchical Models." Finally, Bayesian estimation cannot be treated as a "black box" that you can point-and-click your way through because it is too easy to go astray. You need to know enough about what you are doing so that you are aware of all the ways it can go wrong. You don't need to be a mathematical statistician, but you do need to know more than how to navigate a GUI and what numbers to write down. John Kruschke's book "Doing Bayesian Data Analysis" may be a good place to begin your study. I thought that Simon Jackman's book "Bayesian Analysis for the Social Sciences" was written well, but then I have a social science background. I wrote a review of it on Amazon. I gave links to Kenneth Train's book and video course in the post because his work is cutting edge and because it is not all technical. That is, he discusses the behavioral models that might lead us to use Bayesian estimation or something like maximum simulated likelihood (mlogit r package). I also studied econometrics in graduate school.

      I make these personal references because you need to find references in your field. Each treatment has its own viewpoint and jargon. Although I have subdivided the task into three components, you should seek out references in your area of research with examples that you can relate to. Let me know what you find. There is an email address under contact.

  4. Great post! Carry on the good job with the blog ;)

  5. Thank you for this very useful post! I have already used bayesm and I am not sure about one thing: if I want to know whether the coefficients are significant (I know that "significant" is not really the correct term in Bayesian analysis), can I just use the R command
    t.test(out$betadraw[,,501:2000])?

    1. It depends on how you wish to treat your individual-level estimates. You can decide to remain entirely within the Bayesian framework and use the posterior distributions to compute confidence or highest density intervals. Or, you may treat the estimates as if they were any other subscale score, that is, derived scores from repeated observations of each person (“fancy” sums and differences). Then, you are free to do with the estimates as you would do with any subscale score, including t-tests. You need to be careful about significance tests that assume independence because the “subscale scores” were calculated using a hierarchical or multilevel model, which is why some prefer highest density intervals from the posterior distribution (see John K. Kruschke Doing Bayesian Data Analysis).
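      For instance, a central 95% posterior interval for one coefficient can be read off the retained draws with quantile. A simulated vector stands in here for something like out$betadraw[1, 1, 501:2000]:

```r
# 95% central credible interval from posterior draws.  A simulated
# vector stands in for one respondent's retained draws of one coefficient.
set.seed(42)
draws <- rnorm(1500, mean = 0.7, sd = 0.3)
quantile(draws, probs = c(0.025, 0.975))
# The coefficient is credibly different from zero if this interval excludes 0
```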

    2. Thank you very much.
      I now think that using the t.test command is wrong; I only have to calculate the ratio mean/sd of the draws. I explored this by comparing the results of a Bayesian analysis of data using bayesm (rmnlIndepMetrop) with a maximum likelihood estimation of the same data. However, I will try to learn more about analysis of the posterior distribution.
