Paul E. Green & V. Srinivasan

Conjoint Analysis in Marketing: New Developments With Implications for Research and Practice

Journal of Marketing, October 1990

The authors update and extend their 1978 review of conjoint analysis. In addition to discussing several new developments, they consider alternative approaches for measuring preference structures in the presence of a large number of attributes. They also discuss other topics such as reliability, validity, and choice simulators.

Since the early 1970s, conjoint analysis has received considerable academic and industry attention as a major set of techniques for measuring buyers' tradeoffs among multiattributed products and services (Green and Rao 1971; Johnson 1974; Srinivasan and Shocker 1973b).

We presented a state-of-the-art review of conjoint analysis in 1978 (Green and Srinivasan 1978). Since that time many new developments in conjoint analysis and related methods have been reported.

The purpose of this article is to review those developments (with comments on their rationale, advantages, and limitations) and propose potentially useful avenues for new research. We assume the reader is familiar with our previous review as background for a detailed study of this article.

In subsequent sections we describe a variety of developments that have been achieved since the 1978 review.

Topics include:

• choosing conjoint models to minimize prediction error,
• collecting conjoint data via the telephone-mail-telephone method,
• experimental designs that incorporate environmental correlations across the attributes,
• methods for improving part-worth estimation accuracy,
• new techniques for coping with large numbers of attributes and levels within attribute,
• issues in measuring conjoint reliability,
• recent findings in conjoint validation,
• coping with the problem of "unacceptable" attribute levels,
• extending conjoint to multivariate preference responses,
• trends in conjoint simulators, and
• new kinds of industry applications of conjoint analysis.

(Readers who are new to conjoint analysis may first want to read the article by Green and Wind 1975.)

Paul E. Green is the Sebastian S. Kresge Professor of Marketing, Wharton School, University of Pennsylvania. V. Srinivasan is the Ernest C. Arbuckle Professor of Marketing and Management Science, Graduate School of Business, Stanford University. The authors thank four anonymous JM reviewers and the Editor for their comments on a previous version of the article.

Industry Acceptance of Conjoint Techniques

Conjoint analysis continues to be popular.

Wittink and Cattin (1989) estimate that about 400 commercial applications per year were carried out during the early 1980s. Some of the highlights of their study are:

• The large majority of conjoint studies pertain to consumer goods (59%) and industrial goods (18%), with financial (9%) and other services (9%) accounting for most of the rest.
• New product/concept evaluation, repositioning, competitive analysis, pricing, and market segmentation are the principal types of applications.
• Personal interviewing is the most popular data-gathering procedure, though computer-interactive methods are gaining favor.
• The full-profile method, using rating scales or rank orders with part-worths estimated by least squares regression, is the most common type of application.

Part of the reason for conjoint's growing usage in the 1980s has been the introduction of microcomputer packages for planning commercial conjoint studies. The availability of these packages makes conjoint analysis easier and less expensive to apply, leading us to expect its increased use (and misuse) in years to come.

Scope of the Review

As defined in our 1978 review, conjoint analysis is any decompositional method that estimates the structure of a consumer's preferences (i.e., estimates preference parameters such as part-worths, importance weights, ideal points), given his or her overall evaluations of a set of alternatives that are prespecified in terms of levels of different attributes. Price typically is included as an attribute; see Srinivasan (1982) for a justification. Because of the substantial amount of among-person variation in consumer preferences, conjoint analysis usually is carried out at the individual level. Significant improvements in predictive validity are obtained by estimating preference models at the individual level rather than at the aggregate or segment level (see Wittink and Montgomery 1979, Table 1, and Moore 1980, Table 2).

Consequently, in this review we deemphasize methods that are primarily useful in estimating preference functions at the aggregate or the segment level (e.g., Louviere 1988; Louviere and Woodworth 1983). We also exclude methods that are primarily used with perceptual survey data (e.g., Gensch 1987). Our review is not intended to be exhaustive; we include only what we believe to be some of the primary developments in the area. (For a more extensive bibliography, see Green and Srinivasan 1990.)

Conjoint analysis is being used increasingly to develop marketing strategy. For instance, over the past several years the management consulting firm McKinsey and Company has sponsored more than 200 applications of conjoint analysis, the results of which are used in marketing and competitive strategy planning for its clients (Allison 1989).

Several microcomputer packages support commercial conjoint work. Adaptive Conjoint Analysis (ACA), developed by Richard M. Johnson of Sawtooth Software, incorporates data collection in a computer-interactive mode, estimation of part-worths, and choice simulation. Sawtooth Software's Conjoint Value Analysis is designed expressly for pricing studies. A second set of microcomputer packages has been developed by Steven Herman of Bretton-Clark Software. Conjoint Designer prepares full-profile stimuli (cards) using fractional factorial designs (Green 1974). Conjoint Analyzer and Conjoint LINMAP estimate part-worth functions by metric and nonmetric methods, respectively, and perform choice simulation. SIMGRAF is a sophisticated choice simulator. Bridger estimates part-worth functions when the respondent performs multiple card sorts, with each card sort using a different (but overlapping) set of attributes. SPSS's Categories program module consists of a full-profile conjoint data collection and analysis system. Reviews of some of the software have been prepared by Carmone (1986, 1987), Green (1987), Kamakura (1987), and Albaum (1989). For researchers who are interested in individual conjoint analysis computer programs, such as MONANOVA, PREFMAP, and Trade-off, a third microcomputer package is available from Scott Smith of Brigham Young University (Smith 1988).

The burgeoning literature on the related topic of optimal product and product line design has been reviewed elsewhere (Green and Krieger 1989) and hence is not discussed here. We begin the review by examining the new developments, using our 1978 framework describing the different steps involved in conjoint analysis. We then consider alternative approaches that have been proposed to solve the practical problem of including a large number of product attributes. Finally, we discuss additional topics such as reliability, validity, and choice simulators.

Issues in Implementing Conjoint Analysis: An Update

Table 1 is our framework of the different steps involved in conjoint analysis and the alternatives available for each step. Let us examine how the new developments relate to each of the steps.

Preference Models

In our 1978 review we considered three main preference models of increasing generality in terms of the shape of the function relating attribute values to preference: vector model (linear), ideal point model (linear plus quadratic), and part-worth function model (piecewise linear). The mixed model allows some attributes to be treated as following the part-worth function model while other attributes follow vector or ideal point models. The vector model estimates the fewest parameters by assuming the (potentially restrictive) linear functional form, whereas the part-worth model estimates the largest number of parameters because it permits the most general functional form. The ideal point model is between these two extremes.
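The three functional forms can be sketched directly; all attribute values, weights, and part-worth tables below are hypothetical, chosen only to illustrate the shapes of the models:

```python
# Three preference-model forms of increasing generality (hypothetical numbers).

def vector_model(x, w):
    # Linear: preference rises (or falls) steadily in each attribute.
    return sum(wj * xj for wj, xj in zip(w, x))

def ideal_point_model(x, w, ideal):
    # Linear plus quadratic: preference falls with weighted squared
    # distance from an ideal level on each attribute.
    return -sum(wj * (xj - ij) ** 2 for wj, xj, ij in zip(w, x, ideal))

def part_worth_model(levels, tables):
    # Piecewise: a separate worth is estimated for every attribute level.
    return sum(table[lev] for table, lev in zip(tables, levels))

# Two-attribute example: a price index and a quality index.
x = (0.3, 0.8)
print(vector_model(x, (-1.0, 2.0)))                 # lower price, higher quality preferred
print(ideal_point_model(x, (1.0, 1.0), (0.5, 0.9)))
print(part_worth_model((0, 2), [{0: 0.4, 1: 0.1, 2: 0.0},
                                {0: 0.0, 1: 0.3, 2: 0.9}]))
```

The mixed model simply applies a different one of these forms to each attribute and sums the results.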

TABLE 1. Steps Involved in Conjoint Analysis (adapted from Green and Srinivasan 1978)

1. Preference model: vector model, ideal point model, part-worth function model, mixed model.
2. Data collection method: full profile, two-attribute-at-a-time (tradeoff tables).
3. Stimulus set construction: fractional factorial design, random sampling from a multivariate distribution, Pareto-optimal designs.
4. Stimulus presentation: verbal description (multiple-cue stimulus card), paragraph description, pictorial or three-dimensional model representation, physical products.
5. Measurement scale for the dependent variable: rating scale, rank order, paired comparisons, constant-sum paired comparisons, graded paired comparisons, category assignment.
6. Estimation method: metric methods (multiple regression); nonmetric methods (LINMAP, MONANOVA, PREFMAP, Johnson's nonmetric algorithm); choice-probability-based methods (logit, probit).

In contexts where data are collected by the full-profile method under a fractional factorial design, and multiple regression is used as the estimation procedure, we suggested previously that the choice among the preference models could be based on a statistical models comparison test. Given that the purpose of conjoint analysis is to predict customer reactions to new products and services, a more relevant criterion is to choose the model that is likely to have the greatest predictive validity (Cattin and Punj 1984; Hagerty 1985). For each respondent, the prediction error can be compared across models by using the formula (Hagerty and Srinivasan 1991):

EMSEP_m = (R²_g - R²_m) + (1 - R²_g)(1 + k/n)    (1)

where:
EMSEP_m = an estimate of the expected mean squared error of prediction of model m (e.g., the vector, ideal point, part-worth, or mixed model), expressed as a fraction of the variance of the dependent variable,
R²_g = adjusted R² for the most general (least restrictive) model; for example, in the context of comparing the vector, ideal point, part-worth, and mixed models, the most general model is the part-worth function model,
R²_m = adjusted R² for model m under consideration,
k = number of estimated parameters in model m, and
n = number of stimuli (profiles) used in the estimation.

We note that R²_m is likely to be the smallest (and hence the first term in equation 1 is likely to be the largest) for the vector model because it uses the most restrictive (linear) functional form. However, the number of estimated parameters k, and hence the second term, is largest for the part-worth model, so that a priori it is not obvious which model would have the smallest prediction error.

Intuitively, the first term captures the loss resulting from a restrictive functional form while the second term incorporates the loss in predictive power resulting from estimating too many parameters. The two terms correspond to the squared bias and variance, respectively, so that their sum provides the mean squared error of prediction (see Mallows 1973). For each respondent, equation 1 can be evaluated for each of the models under consideration. The model with the smallest EMSEP is likely to have the smallest prediction error (greatest predictive validity). Wittink and Cattin (1989) report that the typical commercial conjoint study has too few degrees of freedom. The average commercial study has used 16 stimuli evaluated on eight attributes at three levels each.
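Evaluating equation 1 per respondent is straightforward; a minimal sketch with hypothetical adjusted R² values and parameter counts (the part-worth model serving as the most general model g):

```python
def emsep(r2_g_adj, r2_m_adj, k, n):
    """Estimated expected MSE of prediction for model m (equation 1),
    expressed as a fraction of the variance of the dependent variable."""
    return (r2_g_adj - r2_m_adj) + (1 - r2_g_adj) * (1 + k / n)

# Hypothetical respondent: 18 profiles; (adjusted R^2, parameter count) per model.
n = 18
models = {
    "vector":      (0.55, 4),
    "ideal point": (0.60, 8),
    "part-worth":  (0.66, 12),   # most general model g
}
r2_g = models["part-worth"][0]
scores = {name: emsep(r2_g, r2, k, n) for name, (r2, k) in models.items()}
best = min(scores, key=scores.get)   # smallest EMSEP -> greatest expected validity
print(scores, best)
```

With these (invented) numbers the vector model wins: its bias term is largest, but its small parameter count more than compensates.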

Taken literally, such a design leads to no degrees of freedom for the commonly used part-worth function model (Green and Srinivasan 1978, p. 106)! The use of the vector and ideal point models, when applicable, would be of help in increasing the degrees of freedom. The availability of the vector, ideal point, and mixed model options in the Bretton-Clark computer programs Conjoint Analyzer and Conjoint LINMAP is a welcome development. However, these computer programs currently allow only for a common model to be estimated across all respondents in the sample. Consequently, in the case of a common preference model, equation 1 can be applied with the R² terms representing the average value of adjusted R² for the respondents in the sample.

It has been typical in conjoint studies to estimate only the main effects and assume away interaction effects. In certain cases, interaction effects, particularly two-way interaction effects, may be important. Application areas include sensory phenomena (e.g., foods, beverages, personal care products) and styling and aesthetic features. Carmone and Green (1981) have illustrated how "compromise" designs can be used to measure selected two-way interaction effects. The issue is again whether the predictive validity of the model with interactions would be better because of its increased realism or worse because of the estimation of several additional parameters (Hagerty 1986).

Equation 1 can be used to indicate whether the model is likely to have higher predictive validity with or without interactions. In this context, the more general model g in equation 1 corresponds to the model with interactions. Empirical evidence (Green 1984, Table 1) indicates that the model with interaction terms often leads to lower predictive validity; that is, the increased model realism obtained by incorporating interactions is small in comparison with the deterioration in predictive accuracy caused by including additional parameters. Further empirical research is needed to determine the extent to which equation 1 minimizes prediction error. (Because the adjusted R² statistic is measured with error, there is no guarantee that the model chosen by using equation 1 will, in fact, minimize prediction error.) Equation 1 also could be compared with the well-known F-test for model comparison applied, say, at the 5% level.

Data Collection Methods

In 1978 we described several advantages and limitations of the full-profile method in comparison with the tradeoff procedure (two-factor-at-a-time method). An additional advantage of the full-profile method is its ability to measure overall preference judgments directly using behaviorally oriented constructs such as intentions to buy, likelihood of trial, chances of switching to a new brand, and so on. Such measures are particularly useful in the context of introducing new products/services.

The elicitation of such constructs from respondents requires that each option be described on all of the attributes, that is, the full-profile approach. Wittink and Cattin (1989) report that the use of the tradeoff method has decreased considerably in recent years. A relatively new method for conjoint data collection is the telephone-mail-telephone (TMT) procedure; see Levy, Webster, and Kerin (1983) for one of the earlier applications of this approach. Respondents are recruited by telephone screening (either through random digit dialing or pre-established lists). The main interview materials, including questionnaires, stimulus cards, incentive gifts, and other items, then are sent by mail or by overnight air express.

An appointment is scheduled for collecting all data by telephone. The conjoint exercise usually is reserved for interviewer-interviewee interaction at the time of the followup call. The easier questions can be self-administered by the respondent; the answers are simply recorded by the interviewer during the main interview.

(Hagerty and Srinivasan [1991] show that comparing two models using equation 1 is equivalent to using a critical value of F equal to 2 - (k/n); that is, if the F-test produces a value of F > 2 - (k/n), then the more general model is recommended. Here k denotes the number of parameters for the more restrictive of the two models.)
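The equivalence between equation 1 and the F-test with critical value 2 - (k/n) can be verified numerically. The residual sums of squares below are randomly generated, purely for illustration, for a restrictive model m nested in a general model g:

```python
import random

def adjusted_r2(rss, tss, k, n):
    # Adjusted R^2 in terms of residual and total sums of squares.
    return 1 - (rss / (n - k)) / (tss / (n - 1))

def emsep(r2_g, r2_m, k, n):
    # Equation 1 from the text.
    return (r2_g - r2_m) + (1 - r2_g) * (1 + k / n)

random.seed(7)
n, tss, k_m, k_g = 30, 1.0, 4, 9
agreements = 0
for _ in range(1000):
    rss_g = random.uniform(0.05, 0.5)
    rss_m = rss_g + random.uniform(0.0, 0.5)   # the nested model cannot fit better
    F = ((rss_m - rss_g) / (k_g - k_m)) / (rss_g / (n - k_g))
    if abs(F - (2 - k_m / n)) < 1e-6:
        continue                                # skip knife-edge ties
    r2_g = adjusted_r2(rss_g, tss, k_g, n)
    r2_m = adjusted_r2(rss_m, tss, k_m, n)
    prefer_g_by_emsep = emsep(r2_g, r2_g, k_g, n) < emsep(r2_g, r2_m, k_m, n)
    prefer_g_by_F = F > 2 - k_m / n
    assert prefer_g_by_emsep == prefer_g_by_F   # the two rules always agree
    agreements += 1
print(agreements, "cases checked")
```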

The advantages of the TMT interview are fourfold: (1) sampled populations can be pinpointed and probability sampling methods employed, thereby reducing selection bias, (2) difficult sorting and rating tasks can be handled with visual materials and telephone backup, (3) once respondents are recruited, the interview completion rate is very high (typically about 80%), and (4) all questionnaires contain complete responses (i.e., there is no missing-data problem). A variation on the TMT theme is use of the locked box (Schwartz 1978), a metal box containing all interview materials, which serves as a respondent premium; the combination lock is "revealed" at interview time.

Some users of the locked box device forgo the screening interview and include a cover letter indicating the purpose of the survey and a time when the telephone interview call is scheduled.

Stimulus Set Construction for the Full-Profile Method

Fractional factorial designs and other kinds of orthogonal plans that either exclude or markedly limit the measurement of interaction effects currently dominate the industry scene. This trend has been helped immeasurably by the diffusion of various kinds of atlases (e.g., Addelman 1962; Hahn and Shapiro 1966) and microcomputer packages (e.g., Bretton-Clark's Conjoint Designer) for preparing orthogonal main effects plans.
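Orthogonal main-effects plans of the sort tabled in these atlases can also be generated programmatically. A sketch for the two-level case follows, using a parity construction equivalent to the standard eight-run plan for up to seven two-level attributes (illustrative only; this is not how Conjoint Designer builds its designs):

```python
from itertools import combinations, product

def l8_design():
    # Eight-run orthogonal main-effects plan for up to seven 2-level attributes.
    # Column c of row r is the parity of the bitwise AND of r and c.
    return [[bin(r & c).count("1") % 2 for c in range(1, 8)] for r in range(8)]

design = l8_design()

# Balance: each attribute shows each level in exactly half the profiles.
for j in range(7):
    column = [row[j] for row in design]
    assert column.count(0) == 4 and column.count(1) == 4

# Orthogonality: every pair of attributes shows each level combination equally often.
for j, k in combinations(range(7), 2):
    pairs = [(row[j], row[k]) for row in design]
    assert all(pairs.count(combo) == 2 for combo in product((0, 1), repeat=2))

print(design)
```

With eight runs and seven attributes, main effects are estimable but (as the text notes) no degrees of freedom remain for interactions.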

Steckel, DeSarbo, and Mahajan (1990) review the literature on various ways of incorporating environmental interattribute correlations in the construction of stimulus sets so as to increase the realism of the task. They provide a method for maximizing "orthogonalness" subject to meeting various user-supplied constraints on the attribute levels that are allowed to appear together in full-profile descriptions. In industry studies, if two or more attributes are highly correlated environmentally, our 1978 suggestion of making up "superattributes" appears to be popular. If this device is not feasible, it is not unusual to depart from fully orthogonal requirements and permit some attributes to be correlated by deleting totally unrealistic profiles generated by the fractional factorial design. There are also situations in which two profiles in a fractional factorial design become identical, in which case it may be appropriate to delete the duplicated profile. The deletion of any profile from a fractional factorial design makes the attributes become somewhat correlated. However, the presence of interattribute correlations per se does not violate any assumptions of conjoint analysis. This is analogous to multiple regression models in which there is no assumption per se that the predictors are perfectly orthogonal. Correlation among attributes, however, increases the error in estimating preference parameters (Johnston 1984, p. 247), so interattribute correlations should be kept to a minimum (but they need not be zero).

We argued in 1978 that the additive compensatory model assumed in conjoint analysis is likely to predict well even if the decision process is more complex and noncompensatory (see also Huber 1987). Johnson, Meyer, and Ghose (1989) suggest that conjoint predictive validity in the noncompensatory environment (e.g., conjunctive and disjunctive models) may be poor if there are negative correlations among the attributes in a choice set. Negative interattribute correlations are likely to arise because products that are totally dominated by other products are unlikely to exist in a market, so that if product A is better than product B on one attribute, it is likely to be worse on some other attribute.

Fortunately, the average interattribute correlation cannot be more negative than -1/(t - 1), where t denotes the number of attributes (Gleser 1972). Thus, in a realistic six-attribute problem (t = 6), the average intercorrelation among attributes cannot be more negative than -0.2, which is not too different from the orthogonal case of zero correlation. Moreover, qualifying their admonitions about the ability of compensatory models to mimic noncompensatory processes, Johnson, Meyer, and Ghose also found that a compensatory model with selected interaction terms approximated noncompensatory processes even in negatively correlated environments. Krieger and Green (1988) and Wiley (1977) suggest methods for constructing stimulus sets for conjoint analysis that are Pareto optimal (i.e., no option dominates any other option on all attributes). In particular, Krieger and Green suggest some heuristic procedures involving systematic permutations of attribute-level indexes to transform "standard" (atlas-obtained) orthogonal designs into ones that are nearly (if not exactly) Pareto optimal. Huber and Hansen (1986) and Green, Helsen, and Shandler (1988) report empirical results on the question of whether Pareto-optimally designed choice sets provide greater predictive validity than standard orthogonal designs in predicting a holdout set of realistic (Pareto-optimal) full profiles. The results are mixed.

Whereas Huber and Hansen's study, utilizing paired comparison preference judgments, suggests that Pareto-optimal choice sets predict better, Green, Helsen, and Shandler's study, utilizing full profiles, indicates the opposite. More recent studies by Moore and Holbrook (1990) and Elrod, Louviere, and Davey (1989) support and extend the findings of Green, Helsen, and Shandler's study. So far, the weight of the evidence suggests that orthogonal designs are very robust even when prediction is made on Pareto-optimal choice sets.

Using both rankings and ratings data, Wittink and his coworkers have shown that the relative importance of an attribute increases as the number of levels on which it is defined increases, even though the minimum and maximum values for the attribute are held fixed (Wittink, Krishnamurthi, and Nutter 1982; Wittink, Krishnamurthi, and Reibstein 1990). For instance, the relative importance of price went up by seven percentage points when two more intermediate levels were added to the three levels used for price. This finding is a potentially serious problem for conjoint analysis. The fact that the problem occurs even with ratings data, analyzed by multiple regression, indicates that it is not merely an estimation artifact. The estimated regression coefficients (part-worths) are supposed to be unbiased under very reasonable assumptions (Johnston 1984, p. 172), no matter what the design matrix is. One possible psychological explanation of the phenomenon is that the addition of intermediate levels to an attribute makes the respondent pay more attention to that attribute, thereby increasing its apparent importance in determining overall preferences. More research is needed to isolate the cause(s) of the observed phenomenon and develop methods for minimizing/correcting the problem.

Stimulus Presentation

Though some industry studies still employ paragraph descriptions, profile cards (with terse attribute-level descriptions) are by far the more popular stimulus presentation method. Increasing use of pictorial materials has been found. These kinds of props make the task more interesting to the respondent, provide easier and potentially less ambiguous ways of conveying information, and hence allow a greater number of attributes to be included in the full-profile method. Pictorial (or three-dimensional) materials are virtually indispensable in conjoint studies associated with appearance and aesthetics, such as package design and product styling. Moreover, conjoint methodology is increasingly being applied with physical products as stimuli (e.g., foods/beverages, fragrances, personal care products, etc.) to aid in product design. Thus conjoint methods are being adapted to provide a type of discrete-level analog to response surface modeling (Moskowitz, Stanley, and Chandler 1977). In cases involving radically new product ideas (e.g., new kinds of washing machines, mobile telephone systems, electric cars), it is not unusual for the conjoint exercise to be preceded by film clips describing the basic concept.

The robustness of orthogonal designs noted above is consistent with what would be expected from econometric theory. Consider the vector model of preference, which is the same as the multiple regression model. If we denote the estimation stimuli correlation matrix by R and the corresponding correlation matrix for the validation profiles by Q, then by following a method of proof similar to that of Hagerty and Srinivasan (1991) we can show that the expected mean squared error of prediction is given by σ²[1 + (1/n) tr(R⁻¹Q)], where σ² is the variance of the error term, n is the number of profiles used in estimation, and tr denotes the trace of a matrix. The case of the orthogonal design matrix corresponds to R = I. When the estimation stimuli are similar to the validation stimuli, R = Q. In either case tr(R⁻¹Q) = k, the number of estimated parameters. Thus, similar prediction errors should result in the two cases. The preceding analysis ignores the potential negative effects of unrealism of the stimulus profiles on respondent interest in the conjoint task.
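The expected-error expression σ²[1 + (1/n) tr(R⁻¹Q)] can be explored numerically (numpy assumed; the validation correlation matrix below is made up, with mild negative intercorrelations of the kind discussed earlier):

```python
import numpy as np

def expected_msep(sigma2, n, R, Q):
    # sigma^2 * [1 + (1/n) * tr(R^-1 Q)].
    return sigma2 * (1 + np.trace(np.linalg.inv(R) @ Q) / n)

k, n, sigma2 = 4, 20, 1.0
I = np.eye(k)
# Validation profiles with mildly negative interattribute correlations (-0.15).
Q = np.full((k, k), -0.15) + np.diag(np.full(k, 1.15))

orthogonal_design = expected_msep(sigma2, n, I, Q)   # R = I  ->  tr(Q) = k
matched_design = expected_msep(sigma2, n, Q, Q)      # R = Q  ->  tr(I) = k
print(orthogonal_design, matched_design)
```

Both cases give σ²(1 + k/n), which is the point of the argument: an orthogonal estimation design and one matched to the validation environment yield similar prediction errors.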

In automobile styling clinics, respondents are exposed to full-scale prototypes (constructed in fiberglass or painted clay) prior to engaging in the tradeoff task. The increasing use of pictorial materials (and actual prototypes) should expand the scope of conjoint analysis and enhance its realism in representing marketplace conditions.

Measurement Scale for the Dependent Variable

In addition to rating and ranking scales, which continue to be popular, paired comparisons are common in computer-assisted methods of data collection such as Adaptive Conjoint Analysis (Johnson 1987). To maximize the information content in a paired comparison, graded paired comparison judgments are obtained to provide the degree of preference of subprofile A versus B on a rating scale.

Estimation Methods

Because conjoint analysis utilizes regression-like estimation procedures, it is subject to the same problems that beset any regression model, particularly the instability of estimated parameters in the face of various sources of error variance. The problem is exacerbated by increasing industry demands for studies with a large number of product attributes and, hence, reduced degrees of freedom in estimation. Srinivasan, Jain, and Malhotra (1983) propose a constrained parameter estimation approach to improving predictive validity. They argue that frequently there are a priori monotonicity constraints that part-worth functions should satisfy (e.g., in choosing a bank for opening a checking account, one would prefer higher service and quality levels, lower cost levels, and better accessibility, other things remaining equal). They show that the imposition of such constraints in the LINMAP estimation procedure (Srinivasan and Shocker 1973a) significantly improves the percentage of first choices correctly predicted. Appropriate constraints also could be imposed by first obtaining from each respondent his or her rank order of preferences for the levels of each factor, holding other factors constant. More recently, Hagerty (1985) and Kamakura (1988) have proposed innovative approaches to improving the accuracy of full-profile conjoint analysis.
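The effect of a monotonicity constraint on a single attribute's part-worths can be illustrated with the classic pool-adjacent-violators algorithm for isotonic regression. This is a simplified stand-in, not the LINMAP-based procedure of Srinivasan, Jain, and Malhotra, and the noisy per-level estimates are invented:

```python
def pool_adjacent_violators(y):
    """Least-squares nondecreasing fit to the sequence y (equal weights)."""
    means, sizes = [], []
    for v in y:
        means.append(float(v))
        sizes.append(1)
        # Merge backwards whenever monotonicity is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, s2 = means.pop(), sizes.pop()
            m1, s1 = means.pop(), sizes.pop()
            means.append((m1 * s1 + m2 * s2) / (s1 + s2))
            sizes.append(s1 + s2)
    fit = []
    for m, s in zip(means, sizes):
        fit.extend([m] * s)
    return fit

# Unconstrained per-level estimates for, say, "service quality": the true
# part-worths should be nondecreasing, but sampling error produced a dip.
raw = [0.10, 0.45, 0.35, 0.80]
constrained = pool_adjacent_violators(raw)
print(constrained)
```

Here the two violating middle levels are pooled to their average, yielding a monotone part-worth function at minimal squared-error cost.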

Hagerty has suggested that data pooling of similar respondents’ conjoint full-profile responses (by the use of Q-type factor analysis) can reduce the variance of individual respondents’ estimated part-worths without unduly increasing the bias of the estimates. He shows, by simulation and empirical data analysis, that his factor analytic approach can indeed improve predictive accuracy at the individual-respondent level. Kamakura uses the same general approach by pooling respondents who are similar in terms of their conjoint full-profile responses, but employs an agglomerative clustering algorithm. The number of clusters is chosen so as to maximize predictive accuracy.

He illustrates, with simulation and empirical data, the value of his method in improving predictive accuracy. A somewhat related approach in the context of logit estimation has been proposed by Ogawa (1987). Green and Helsen (1989) compared conventional, individual-level-based conjoint analysis with the methods proposed by Hagerty and Kamakura. Neither Hagerty's nor Kamakura's suggestions led to higher predictive validities (in a holdout sample) than traditional conjoint analysis. Green and Helsen's result runs counter to the empirical results reported by Hagerty and Kamakura. More empirical research is needed to determine whether the segmentation-based methods do improve predictive validity.

Overall, it appears that conventional, individual-level-based conjoint analysis may be difficult to improve in a major way (at least when the number of stimulus evaluations is large in relation to the number of parameters being estimated). This was also the finding in the Bayesian hybrid model proposed by Cattin, Gelfand, and Danes (1983). Hagerty (1986) has argued that the emphasis on maximizing predictive power at the individual level may be misplaced. He correctly points out that one should be more concerned with the accuracy of predicting market shares in the choice simulator. He provides a formula for choosing among models so as to maximize the accuracy of predicting market shares. Hagerty's formula unfortunately is based on the assumption of a limited amount of heterogeneity across consumers in the market.
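A first-choice (maximum-utility) simulator of the kind at issue here is easy to sketch; the part-worth tables and product profiles below are fabricated for illustration:

```python
# First-choice ("maximum utility") simulator; all data invented.
respondents = [
    # Per attribute: {level: part-worth}; attributes are (quality, price).
    [{0: 0.0, 1: 0.4, 2: 1.5}, {0: 0.3, 1: 0.2, 2: 0.0}],   # quality-driven
    [{0: 0.0, 1: 0.1, 2: 0.2}, {0: 0.9, 1: 0.5, 2: 0.0}],   # price-driven
    [{0: 0.0, 1: 0.6, 2: 0.7}, {0: 0.6, 1: 0.5, 2: 0.0}],
]
products = {"A": (2, 2), "B": (1, 0), "C": (0, 1)}  # (quality level, price level)

def utility(partworths, profile):
    return sum(table[level] for table, level in zip(partworths, profile))

def first_choice_shares(respondents, products):
    counts = dict.fromkeys(products, 0)
    for pw in respondents:
        chosen = max(products, key=lambda name: utility(pw, products[name]))
        counts[chosen] += 1
    return {name: c / len(respondents) for name, c in counts.items()}

shares = first_choice_shares(respondents, products)
print(shares)
```

Heterogeneity matters here exactly as Hagerty's critics note: the predicted shares depend on the mix of quality-driven and price-driven respondents, not just on average part-worths.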

Research is underway to determine whether his formula leads to correct conclusions in the more realistic scenario of greater heterogeneity across consumers.

Approaches for Handling a Large Number of Attributes

The full-profile method of conjoint analysis works very well when there are only a few (say, six or fewer) attributes. As indicated by Green (1984), industrial users of conjoint analysis have strained the methodology by requiring larger numbers of attributes and levels within attributes, thus placing a severe information overload on the respondents. When faced with such tasks, respondents resort to simplifying tactics and the resulting part-worth estimates may distort their true preference structures (Wright 1975).

The full-profile method can be extended to a larger number of attributes through the use of "bridging" designs, that is, by using two or more card decks with a more manageable number of attributes per card deck but with at least one attribute common across card decks (e.g., Bretton-Clark's Bridger program). However, to estimate part-worths at the respondent level, a multiple card sort is needed, leading to possible respondent fatigue and reduced reliability of the results. The use of tradeoff tables reduces the information overload on any single table. However, the increased number of tables leads some respondents either to forget where they are in the table or to simplify the task by adopting patternized responses, such as attending to variations in one attribute before considering the other (Johnson 1976).
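The bridging idea can be illustrated in a few lines: two decks share one attribute, each deck's ratings sit on its own response scale, and the common attribute's part-worth range links the scales. Everything below (the true part-worths, the deck structure, the scale factor) is invented for illustration, and Bridger's actual estimation details differ:

```python
# True (centered) part-worths; attribute A is common to both card decks.
pw_A = {0: -0.5, 1: 0.0, 2: 0.5}
pw_B = {0: -0.3, 1: 0.3}   # appears only in deck 1
pw_C = {0: -0.6, 1: 0.6}   # appears only in deck 2

def deck_ratings(pw1, pw2, scale):
    # Noise-free full-factorial deck; 'scale' mimics a deck-specific response scale.
    return {(l1, l2): scale * (pw1[l1] + pw2[l2]) for l1 in pw1 for l2 in pw2}

deck1 = deck_ratings(pw_A, pw_B, scale=1.0)
deck2 = deck_ratings(pw_A, pw_C, scale=0.5)   # rated on a compressed scale

def level_effects(ratings, position, levels):
    # Centered per-level mean ratings; in a balanced, noise-free additive deck
    # these equal the part-worths up to the deck's own scale.
    effects = {}
    for lev in levels:
        vals = [r for key, r in ratings.items() if key[position] == lev]
        effects[lev] = sum(vals) / len(vals)
    grand = sum(effects.values()) / len(effects)
    return {lev: e - grand for lev, e in effects.items()}

a1 = level_effects(deck1, 0, pw_A)   # common attribute on deck-1's scale
a2 = level_effects(deck2, 0, pw_A)   # common attribute on deck-2's scale
c2 = level_effects(deck2, 1, pw_C)   # deck-2-only attribute, wrong scale

# Bridge: the common attribute's range links the two scales.
bridge = (max(a1.values()) - min(a1.values())) / (max(a2.values()) - min(a2.values()))
c_bridged = {lev: bridge * e for lev, e in c2.items()}
print(c_bridged)
```

After rescaling, deck 2's unique part-worths are recovered on deck 1's scale, which is what allows all attributes to be combined into one preference structure.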

Three approaches have been proposed to handle the problem of large numbers of attributes: (1) the self-explication approach, (2) hybrid conjoint analysis, and (3) Adaptive Conjoint Analysis (ACA). All three approaches involve some amount of self-explication, that is, direct elicitation of part-worth functions from the respondent; in this sense the methods are not solely decompositional as required in the strict definition of conjoint analysis. The three approaches, along with traditional conjoint analysis, may be better thought of as alternative methods for preference structure measurement, as shown in Figure 1.

Self-Explication Approaches

In the self-explication approach, the respondent first evaluates the levels of each attribute, including price, on a 0-10 (say) desirability scale (with other attributes held constant), where the most preferred level on the attribute may be assigned the value 10 and the least preferred level assigned the value 0. The respondent then is asked to allocate (say) 100 points across the attributes so as to reflect their relative importance. Part-worths are obtained by multiplying the importance weights with the attribute-level desirability ratings. This is the basic idea of the self-explication approach, though its implementation varies somewhat across authors (Green, Goldberg, and Montemayor 1981; Huber 1974; Leigh, MacKay, and Summers 1984; Srinivasan 1988; Wright and Kriewall 1980).
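The self-explicated calculation is just a weighted product of the two elicited quantities; a minimal sketch with invented ratings from one hypothetical respondent:

```python
# Desirability ratings (0-10) per attribute level and a 100-point importance
# allocation, as elicited from one hypothetical respondent.
desirability = {
    "price":   {"$200": 10, "$250": 6, "$300": 0},
    "quality": {"low": 0, "medium": 5, "high": 10},
    "brand":   {"X": 2, "Y": 10, "Z": 0},
}
importance = {"price": 50, "quality": 30, "brand": 20}   # sums to 100

def self_explicated_partworths(desirability, importance):
    # Part-worth = importance weight x level desirability.
    return {attr: {lev: importance[attr] * d for lev, d in levels.items()}
            for attr, levels in desirability.items()}

pw = self_explicated_partworths(desirability, importance)

def utility(profile):
    # Additive utility of a full profile under the self-explicated part-worths.
    return sum(pw[attr][lev] for attr, lev in profile.items())

print(utility({"price": "$250", "quality": "high", "brand": "Y"}))
```

Note that the approach is compositional: the part-worths are built up directly from the two questions rather than decomposed from overall profile evaluations.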

The self-explication approach is a compositional (or buildup) approach reminiscent of the expectancy-value models of attitude theory (Wilkie and Pessemier 1973). By contrast, conjoint analysis is decompositional. The primary advantage of the self-explication approach is its simplicity and thus one’s ability to use it even when the number of attributes is large. However, it has several possible problems. If there is substantial intercorrelation between attributes, it is difficult for the respondent to provide ratings for the levels of an attribute holding all else equal. Furthermore, increased biases may result from direct questioning of the importance of socially sensitive factors.

For instance, Montgomery (1986) reports that when asked directly, MBA students ranked salary as the sixth most important factor, whereas the importance weights derived from conjoint analysis indicated that salary was number one in importance. A related issue is that the question "How important is attribute X?" is highly ambiguous because the respondent may answer on the basis of his or her own range of experience over existing products rather than on the experimentally defined range of the attribute levels.

[Figure 1: Alternative Approaches to Measuring Preference Structures. Preference structure measurement divides into the compositional (self-explicated) approach, the decompositional approach (conjoint analysis; see Table 1), and combined compositional/decompositional approaches (e.g., hybrid, ACA).]

Conjoint Analysis in Marketing / 9

Srinivasan (1988) argues that, given the manner in which importance enters the self-explicated part-worth calculations, it logically follows that importance should be defined as the value to the consumer of getting an improvement from the least preferred level to the most preferred level of an attribute. A second problem is that in the self-explication approach one assumes the additive part-worth model to be the literal truth. By contrast, in the estimation of part-worths from full-profile rankings, one is only fitting to an additive model. For instance, the true process that the consumer follows could be a multiplicative model.

However, the estimation of an additive model, using a nonmetric procedure such as LINMAP, is also consistent with the multiplicative model because the logarithmic transformation that converts the multiplicative model to the additive form is just one of many permissible monotone transformations of the dependent variable. Thus, model misspecification may not be as much of a problem when part-worths are estimated nonmetrically as when they are elicited directly. A third problem with the self-explication approach is that any redundancy in the attributes can lead to double counting. For instance, if gas mileage and economy are two attributes, there is an obvious potential for double counting because each attribute is questioned separately in the self-explication approach. Suppose, however, that the two attributes are varied in a full-profile design after elimination of unrealistic combinations.

When a respondent reacts to the full profile, he or she could recognize the redundancy between the attributes so that overall preference ratings would not be affected as much by double counting. Consequently, part-worths estimated from a decomposition of the overall ratings are likely to be less affected by the double-counting problem. A fourth problem is that when the attribute is quantitative, the relative desirability ratings may become more linear. For instance, suppose the gas mileage of cars is varied at three levels, say, 20, 30, and 40 mpg. Given the 0-10 desirability scale with 0 for 20 mpg and 10 for 40 mpg, respondents may rate 5 for the intermediate level, making the part-worth function linear.

A full-profile task has a better chance of detecting potential nonlinearity in the part-worth function. A fifth problem occurs if the data collection is limited solely to the self-explication approach. The researcher obtains no respondent evaluation of purchase likelihood because no full profiles are seen. This limitation can be serious in new product contexts in which the researcher uses a simulator to obtain average purchase likelihoods under alternative product formulations. Despite the limitations, the self-explication approach warrants consideration in studies with large numbers of attributes because the information overload problem considerably diminishes the value of full profile/tradeoff studies.

Empirical studies comparing the self-explication approach and traditional conjoint analysis have produced mixed results. Green (1984, Table 2) summarizes the results of three studies in which the self-explication approach produced a smaller cross-validity than full-profile conjoint analysis. However, Wright and Kriewall (1980) and Leigh, MacKay, and Summers (1984) report higher predictive validity for the self-explication approach than for full-profile conjoint analysis. Srinivasan (1988) compared the predictive validity of the self-explication approach with the results obtained from a tradeoff analysis conducted in a previous year with the same factors and the same general subject population.

The difference in predictive validities, though slightly in favor of the self-explication approach, was not statistically significant. Overall, the empirical results to date indicate that the self-explication approach is likely to yield predictive validities roughly comparable to those of traditional conjoint analysis. The full-profile approach to conjoint analysis is extremely difficult to execute through a single telephone interview, though telephone-mail-telephone methods are feasible and growing in popularity. An advantage of the self-explication approach is that it can be executed in a single telephone interview. M/A/R/C, a national market research firm, has commercially implemented a telephone-based method called CASEMAP.

Srinivasan and Wyner (1989) point out that this computer-assisted, telephone-based interview minimizes invalid responses, provides good quality control over interviewers, and is more likely to yield a geographically dispersed sample of respondents.

Hybrid Methods

Hybrid models (Green, Goldberg, and Montemayor 1981) have been designed explicitly for task simplification in conjoint analysis. Hybrid uses self-explicated data to obtain a preliminary set of individualized part-worths for each respondent. In addition, each respondent provides full-profile evaluations for a limited number of stimulus profiles (e.g., three to nine). The smaller number of profiles are drawn from a much larger master design (e.g., 64-81 combinations) in such a way that at the market-segment level, each stimulus in the larger set has been evaluated by a subset of the respondents.

Market-segment-level adjustments to part-worths (and, if desired, interaction effects) are estimated by relating, through multiple regression, the overall preferences for the full profiles to the self-explicated utilities. Each respondent's self-explicated utilities then can be augmented by segment-level parameters estimated from the multiple regression.

10 / Journal of Marketing, October 1990

The cross-validity of hybrid, traditional full-profile conjoint, and self-explication methods has been examined in three studies reported by Green (1984), three cases by Moore and Semenik (1988), and one by Akaah (1987). In six of the seven cases, hybrid was better than self-explication.

However, the full-profile method was better than hybrid in five of the seven cases. The hybrid approach tends to reduce the limitations of the self-explication approach through the use of full profiles and provides a built-in check on the internal validity of each respondent's self-explicated data. At the same time, the information overload problem is reduced by using only a few full profiles per respondent. In practical situations the full-profile method is difficult to execute with the large number of profiles required by larger numbers of attributes and/or levels within attributes. Here is where hybrid models seem to provide a practical alternative to the full-profile method.
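The segment-level regression adjustment at the heart of the hybrid approach can be sketched in stripped-down form, assuming a single pooled predictor (each profile's self-explicated utility) rather than the full attribute-level regression; all numbers are illustrative:

```python
# Minimal sketch of the hybrid idea: pool full-profile ratings across a
# segment, regress them on the profiles' self-explicated utilities, and
# use the fitted parameters to adjust self-explicated scores. Real
# hybrid models also estimate attribute-level and interaction terms.

def fit_line(u, y):
    """Ordinary least squares for y = a + b*u (closed form)."""
    n = len(u)
    mu, my = sum(u) / n, sum(y) / n
    b = sum((ui - mu) * (yi - my) for ui, yi in zip(u, y)) / \
        sum((ui - mu) ** 2 for ui in u)
    return my - b * mu, b  # intercept, slope

# Pooled segment data: self-explicated utility of each full profile
# shown, and the overall preference rating given to that profile.
u = [2.0, 3.5, 5.0, 6.5, 8.0, 9.0]
y = [1.8, 4.0, 4.9, 7.0, 7.8, 9.5]

a, b = fit_line(u, y)
adjusted = [a + b * ui for ui in u]  # segment-adjusted utilities
```

Here the slope comes out near 1 and the intercept near 0, meaning the full-profile ratings largely confirm the self-explicated utilities for this hypothetical segment.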

Adaptive Conjoint Analysis (ACA)

Adaptive Conjoint Analysis (ACA) from Sawtooth Software collects preference data in a computer-interactive mode. Johnson (1987) indicates that the respondent's interaction with the computer increases respondent interest and involvement with the task. The ACA system is unique in the sense that it collects (as well as analyzes) data by microcomputers. ACA starts with a considerably simplified self-explication task through which the particular respondent's more important attributes are identified. Part-worths for the more important attributes then are refined (in a "Bayesian" updating sense) through a series of graded paired comparisons. (An interesting aspect of ACA is that it is dynamic in that the respondent's previous answers are used at each step to select the next paired comparison question so as to provide the most information.) In ACA, adjustments are made to self-explicated part-worths at the respondent level (and not at the total-sample or segment level, as in hybrid). However, the paired comparison data collection is considerably less efficient than rank order or rating tasks—that is, in the same amount of respondent time, many more equivalent paired comparisons can be inferred from a ranking or rating task than from direct paired comparison judgments. Two empirical studies have compared the ACA method and full-profile conjoint analysis (Agarwal 1988; Finkbeiner and Platz 1986).

Both studies found that ACA performed slightly worse than the full-profile method in terms of predicting a set of holdout profiles; the interview time was also longer in the ACA method. Green, Krieger, and Agarwal (1990) report that the self-explication approach outpredicted the ACA method in terms of cross-validated correlation and first-choice hits, besides reducing the interview time considerably. Overall, the empirical validation results to date do not seem to favor the ACA method. The ACA software package, however, has a strong intuitive appeal and is pragmatically useful as a complete data collection, analysis, and market simulation system.

On the basis of the advantages and limitations of the methods discussed before, we recommend the use of full-profile conjoint analysis if the number of attributes can be kept down to (say) six or fewer factors. If the number of attributes is somewhat larger, tradeoff tables may be appropriate. (However, if the respondent is willing to do multiple card sorts, bridging designs with full profiles probably would be even better.) When the number of attributes is (say) 10 or more, the self-explication approach (including CASEMAP) or the combination methods (hybrid and ACA) are likely to be more appropriate. The problem of preference structure measurement in the context of a large number of attributes has become very important in practical terms. So far, the key ideas for tackling the problem have involved using (1) self-explication, (2) self-explication plus segment-level estimates (as in hybrid modeling), and (3) decompositional estimation for only the more important attributes of the respondent (as in ACA). We hope that researchers will develop additional methods to address this important practical problem, either by using the preceding ideas in other ways or by adopting different approaches.

Additional Topics in Preference Structure Measurement

Reliability

A comprehensive review of conjoint reliability studies is provided by Bateson, Reibstein, and Boulding (1987).

They consider four different types of reliability:
1. Reliability over time—conjoint measurements are taken and then repeated (with the same instrument) at a subsequent point in time.
2. Reliability over attribute set—the stability of part-worths for a common (core) set of attributes is examined as other attributes are varied.
3. Reliability over stimulus sets—the derived part-worths are examined for their sensitivity to subsets of profile descriptions.
4. Reliability over data collection methods—part-worths are examined for their sensitivity to type of data collected, data-gathering procedure, or type of dependent (i.e., response) variable.

The authors examine more than 30 studies over the 1973-1984 period. They find that the authors of these studies use a variety of reliability measures, such as Pearsonian product moment correlations, rank correlations, predictive accuracy of the most preferred item, prediction of the k most-preferred items, and so on. Though Bateson, Reibstein, and Boulding appropriately suggest caution in comparing findings across diverse experimental procedures, the median reliability correlation is about .75. Reibstein, Bateson, and Boulding (1988) report the results from an ambitious empirical study involving a fully crossed comparison of (1) data collection methods—full profile, tradeoff matrices, and paired comparisons, (2) two types of attribute level manipulation, (3) five different product categories—color TVs, typewriters, yogurts, banking services, and long-distance telephone services, and (4) two types of attribute context manipulation. A main finding of their study is that the full-profile method (as well as paired comparison procedures) leads to higher reliability than the tradeoff matrix. The authors indicate that previous studies (Jain et al. 1979; Leigh, MacKay, and Summers 1984; Segal 1982) found no such differences in reliability between full-profile and tradeoff matrices.
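A test-retest reliability correlation of the kind summarized above can be computed as a product-moment correlation between two replications of a respondent's part-worths; the values below are hypothetical, normalized to average zero within each attribute:

```python
# Sketch of the most common reliability measure noted above: the
# Pearson product-moment correlation between part-worths estimated
# in two replications of the same conjoint task.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One respondent's part-worths from a test and a retest of the same
# instrument (hypothetical values).
test   = [1.2, -0.3, -0.9, 0.8, -0.8, 0.0]
retest = [1.0, -0.1, -0.9, 0.6, -0.5, -0.1]

r = pearson(test, retest)  # about 0.98 for this respondent
```

In practice such correlations are averaged over respondents, which is one reason the text argues that the instability of any single correlation is a secondary concern.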

Reibstein, Bateson, and Boulding's different results may be due in part to the different measure of reliability they employ. They use an F-test, sometimes referred to as the Chow test, to examine whether the two sets of part-worths corresponding to the two replications are the same. They suggest using the alpha level (i.e., the probability of getting a value higher than the one obtained for F under the null hypothesis of no difference) as a measure of reliability. (This alpha measure is not to be confused with the well-known Cronbach alpha measure of reliability.) Thus, if the part-worths in the two replications are very different, the obtained F-value would be high and alpha would be low, indicating low reliability.
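The F-test (Chow test) underlying the alpha measure can be sketched in stripped-down form for a single-predictor regression rather than a full part-worth model; the data are hypothetical, and the alpha level itself (the upper-tail probability of F) is omitted because computing it requires the F distribution:

```python
# Simplified sketch of the Chow test: do two replications of the same
# task share the same regression coefficients?

def ssr(x, y):
    """Residual sum of squares from a simple OLS fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def chow_f(x1, y1, x2, y2, k=2):
    """F statistic for equality of the k coefficients across groups."""
    s1, s2 = ssr(x1, y1), ssr(x2, y2)
    pooled = ssr(x1 + x2, y1 + y2)  # common coefficients imposed
    n = len(x1) + len(x2)
    return ((pooled - s1 - s2) / k) / ((s1 + s2) / (n - 2 * k))

# Two hypothetical replications of the same respondent's ratings.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y1 = [1.1, 2.0, 2.8, 4.2, 4.9]   # replication 1
y2 = [0.9, 2.2, 3.1, 3.8, 5.1]   # replication 2

f = chow_f(x, y1, x, y2)
# Similar part-worths give a small f, hence a large alpha ("reliable");
# fewer observations or noisier data weaken the test and inflate alpha,
# which is exactly the critique developed in the following paragraphs.
```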

The major problem we have with their suggested alpha measure is that the F-value tests whether the true part-worths are different across replications, but the general purpose of computing reliability is to assess the degree of accuracy of the estimated part-worths. Stated differently, alpha measures the degree of stability in the true part-worths, not the degree of accuracy of the estimated part-worths. One would expect any measure of reliability to decrease as one increases measurement error. Unfortunately, the alpha measure tends to do the opposite. To illustrate this point, consider the case in which the number of full profiles used in the estimation is reduced. Then the measurement error in the estimated part-worths would be increased so that one should expect reliability to decrease.

In contrast, the power of any statistical test such as the F-test is weaker if the number of observations (profiles) is smaller (other things remaining equal), thereby leading to higher values of alpha. Likewise, if the error variance in the full-profile task were to increase, one should expect the estimated part-worths to be measured with greater error, which again should decrease reliability. However, the alpha value would be higher in this case because of the lower power of the F-test resulting from the higher error variance (Wittink et al. 1990). Bateson, Reibstein, and Boulding point out two potential problems with the conventional correlation coefficient as a measure of reliability of the part-worths. The first issue is that the correlation measure is affected by the variation in true part-worths across the attributes. Stated differently, with the measurement error in part-worths held constant, the correlation coefficient increases as the true variation in part-worths across attributes becomes larger. This issue is related to the psychometric definition of reliability as 1 - (measurement error variance/true variance), so that increased true variance (with error variance held constant) increases reliability. However, as the true variance of the part-worths becomes larger, any given error variance is less likely to affect the relative values of the part-worths and the predicted choice among products.

Consequently, in the context of conjoint analysis, the fact that reliability goes up with true variation is conceptually meaningful. A second issue those authors raise is that the computed correlation coefficient itself becomes unreliable when it is based on only a few part-worths. Though this assertion is true, the correlation coefficient usually is averaged over respondents and hence the unreliability of the average correlation would be much smaller. (In computing the correlation, one typically normalizes part-worths, e.g., to average zero over the levels of each attribute.) Overall, we believe the limitations of the alpha measure are conceptually more serious than those of the correlation coefficient. Consequently, in the absence of a better measure, we recommend the continued use of the correlation coefficient as a measure of reliability. (Alpha, however, is a useful statistic to measure the stability of true part-worths.) Not surprisingly, we urge users of conjoint analysis to continue to evaluate the reliability of the measured part-worths.

Validity

In the concluding remarks section of our 1978 review we make a plea for the continued testing of conjoint validity (and reliability). Above all, conjoint analysis is a model for predicting choice (or at least intended choice). Its value is based on its cumulative record of providing meaningful and timely forecasts of buyer choice.

Fortunately, a large number of studies addressing validity issues have been reported during the past several years. Most of the studies have involved tests of cross-validity—that is, the ability of the model to predict the ranking (or first choice) of a holdout set of profiles. Several studies have demonstrated the ability of conjoint analysis to predict actual choice behavior. These validation studies have typically entailed three approaches:
1. Comparing (aggregate-level) market shares predicted by a conjoint simulator with current market shares (Clarke 1987; Davidson 1973; Page and Rosenbaum 1987) or, preferably, future market shares (Benbenisty 1983; Robinson 1980; Srinivasan et al. 1981).
2. Individual-level comparisons in which conjoint analysis is used to predict some surrogate of purchase intention or of actual behavior, such as what fraction of chips are allocated to the new product (Mohn 1990), which brand is redeemed in a simulated shopping experiment (Leigh, MacKay, and Summers 1984), or which product's coupon is chosen (Anderson and Donthu 1988).
3. Individual-level comparisons in which conjoint analysis is used to predict actual choices at some later date (Krishnamurthi 1988; Srinivasan 1988; Swinnen 1983; Wittink and Montgomery 1979; Wright and Kriewall 1980).

A few studies have compared market shares predicted by conjoint analysis with actual results. Such studies are the most relevant tests of predictive validity in a marketing context, but are difficult to conduct because of the confounding effects of marketing mix variables such as advertising and distribution. Robinson (1980) reports a multinational conjoint study of North Atlantic air travel involving airfares, discounts, and travel restrictions. His results indicate that conjoint analysis had a substantial ability to predict market shares. Srinivasan et al. (1981) describe a conjoint study of individually computed conjoint functions that are used to predict work-trip modes (auto, public transit, and car pool). Travel mode shifts were forecasted for various policy-level gasoline tax surcharges. The authors' forecasted results were roughly consistent with the actual subsequent increase in transit ridership resulting from a serendipitous rise in the price of gasoline. Benbenisty (1983) describes a conjoint study involving AT&T's entry into the data terminal market. The simulator forecasted a share of 8% for AT&T four years after launch. The actual share was just under 8%.

In sum, the empirical evidence points to the validity of conjoint analysis as a predictive technique. We need more validation studies that compare actual market share (or sales) results with predicted shares (after adjustment, if necessary, for marketing variables), somewhat analogous to the simulated test markets for grocery products. Interestingly, at least one major research firm (Burke Marketing Services' BASES group) has introduced a service called Concept Designer that incorporates conjoint studies into its established new product concept testing service. A major value of this new service is that the firm's historical norms (for adjusting stated buying intentions to actual behavior) can also be applied to its conjoint-based market share and sales estimates.

"Unacceptable" Attribute Levels

Some of the current commercial conjoint procedures (e.g., CASEMAP, ACA) allow for the elimination of attribute levels that are deemed "totally unacceptable" or "completely unacceptable" by the respondent prior to the presentation of any tradeoff questions. This approach is consistent with the findings from consumer behavior research that has examined the choice processes consumers actually use in choosing among products (e.g., Lussier and Olshavsky 1979; Payne 1976). This research indicates that respondents' decision processes can be summarized crudely as a two-stage process. In the first "conjunctive" stage, the consumer eliminates options with one or more unacceptable attribute levels. In the second "compensatory" stage, the options that remain are traded off on the multiple attributes. Srinivasan (1988) uses the conjunctive-compensatory model in a self-explication approach to measure preference structures. The question eliciting "totally unacceptable" levels asks the respondent to indicate whether any of the levels for any of the attributes are so unattractive that he or she would reject any option having that level, no matter how attractive the option is on all other factors.

In Srinivasan's study on MBA job choice, the preference structure data were collected several months prior to actual job choice. Not a single MBA (of 54) chose a real job offer that had a "totally unacceptable" level. Bucklin and Srinivasan (1991) report that less than 1% of the total coffee volume in their study was devoted to "totally unacceptable" brands. Such was not the finding of Klein (1986) and Green, Krieger, and Bansal (1988). In those two studies, a significant fraction of respondents did not totally eliminate alternatives that had unacceptable levels; instead, those respondents merely treated the unacceptable level in a compensatory manner as a highly undesirable level. (However, in both studies the percentage of first choices correctly predicted was about the same whether the unacceptable level was treated as implying sure rejection or whether it was treated as having the lowest level of part-worth for that factor.) Our recommendation is that if the unacceptable level is to be used literally to mean sure rejection, the questionnaire wording must be modified to maximize that effect. For instance, in CASEMAP, whenever a respondent states that a level is totally unacceptable, an immediate followup question asks whether the respondent would never purchase a product with that level even though it may be most attractive on all other factors. (Consequently, it would be good to make the respondent aware of the range of levels for all the factors prior to elicitation of unacceptable levels.) Mehta, Moore, and Pavia (1989) suggest in the context of ACA that if the unacceptable level is treated to some extent in a compensatory manner by the respondent but is seen by the researcher as a device to reduce the dimensionality of the problem, the part-worth for the unacceptable level should be set just low enough that options with that level are not chosen by the simulator.

Conjoint Models With Multivariate Response Variables

Mahajan, Green, and Goldberg (1982) adapt a data collection procedure by Jones (1975) to determine own-product and cross-product price/demand relationships.

In contrast to the usual one-profile-at-a-time rating (or ranking) task, the respondent is asked to allocate (say) 100 points across the displayed competitive options so as to reflect the likelihood that each would be chosen; thus the response is multivariate. The authors show how orthogonal designs can be modified to accommodate such contexts. To illustrate, if there are four brands, each appearing at five price levels (which may be idiosyncratic to each brand), the full factorial in this context consists of 5^4 = 625 combinations, not 5 x 4 = 20 combinations as assumed in the traditional tradeoff model. The authors estimate the parameters of their proposed model by conditional logit (Theil 1969) applied to segment-level data. The model includes not only a specific brand's prices, but also the prices of competitive brands. The authors show how their results relate to self/cross-price elasticities.
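The design arithmetic and a logit-style share computation for this competitive-pricing setup can be sketched as follows; the brand names, price levels, and utility parameters are all hypothetical, and the simple share rule stands in for the conditional logit estimation the authors actually use:

```python
# With four brands each shown at five (possibly brand-specific) price
# levels, a price scenario fixes one price per brand, so the full
# factorial has 5**4 = 625 scenarios, not 5 x 4 = 20 profiles.
import itertools
import math

prices = {
    "A": [1.09, 1.19, 1.29, 1.39, 1.49],
    "B": [0.99, 1.09, 1.19, 1.29, 1.39],
    "C": [1.19, 1.29, 1.39, 1.49, 1.59],
    "D": [0.89, 0.99, 1.09, 1.19, 1.29],
}
scenarios = list(itertools.product(*prices.values()))  # 625 scenarios

def logit_shares(utilities):
    """Logit-style choice shares across the competitive set."""
    expu = {brand: math.exp(u) for brand, u in utilities.items()}
    total = sum(expu.values())
    return {brand: e / total for brand, e in expu.items()}

# Hypothetical utility: brand constant minus price sensitivity x price.
brand_constant = {"A": 1.0, "B": 0.8, "C": 1.2, "D": 0.5}
scenario = dict(zip(prices, scenarios[0]))   # first price scenario
utilities = {b: brand_constant[b] - 2.0 * p for b, p in scenario.items()}
shares = logit_shares(utilities)             # sums to 1 across brands
```

Because every scenario carries all four brands' prices, a model fitted to such data can recover both own-price and cross-price effects.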

Wyner, Benedetti, and Trapp (1984) applied a modification of the Mahajan, Green, and Goldberg procedure that substitutes units purchased of each of a set of products (instead of the constant-sum dependent variable) and multiple regression (instead of the logit-based estimation). Louviere and Woodworth (1983) have discussed a wide variety of applications pertaining to product choice and resource allocation in which multinomial logit models are applied. DeSarbo et al. (1982) also describe an approach to conjoint analysis involving multiple response variables. In sum, the extension of conjoint analysis to the explicit consideration of all (major) options in a competitive set enables the researcher to consider both self and cross-competitor attribute-level effects. So far, the models have been commercially applied mainly to pricing problems.

In principle, however, the methodology can be extended to additional types of attributes (albeit at the expense of increased data collection demands; Green and Krieger 1990).

Choice Simulators

One of the main reasons for the popularity of conjoint analysis in industry is the fact that most applications of conjoint analysis are initiated primarily to obtain a matrix of consumers' part-worths, which then is entered into a choice simulator to answer various "what if" questions. It is no accident that both the Sawtooth and Bretton-Clark computer packages include choice simulators. Ironically, relatively little academic research has been reported on choice simulators (Green and Krieger 1988).

Much of what is known about them has been assembled informally. First-generation choice simulators were very simple by today's standards. Most of them were limited to an input matrix of individuals' part-worths and a set of user-supplied product profiles. Each individual was assumed to choose the product with the highest utility (max utility choice rule). The simulator's output typically consisted of the proportion of choices received by each contender (i.e., its market share). Capability for performing sensitivity analyses or obtaining segment-based information was limited and the task was cumbersome. Today's choice simulators are considerably more versatile.
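The first-generation max utility tally described above can be sketched as follows, with hypothetical respondents and product profiles:

```python
# Sketch of a first-generation choice simulator: each respondent
# "chooses" the profile with the highest total utility, and market
# shares are the resulting proportions of choices.

def max_utility_shares(utilities_by_respondent):
    """utilities_by_respondent: list of {product: total utility} dicts."""
    wins = {}
    for utils in utilities_by_respondent:
        choice = max(utils, key=utils.get)   # max utility choice rule
        wins[choice] = wins.get(choice, 0) + 1
    n = len(utilities_by_respondent)
    return {product: count / n for product, count in wins.items()}

# Three hypothetical respondents evaluating three product profiles.
panel = [
    {"P1": 7.2, "P2": 5.1, "P3": 6.0},
    {"P1": 4.0, "P2": 6.3, "P3": 5.5},
    {"P1": 6.8, "P2": 2.2, "P3": 3.1},
]
shares = max_utility_shares(panel)  # P1 wins 2 of 3, P2 wins 1 of 3
```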

Three kinds of simulations can be provided: (1) single product, (2) multiple (firm's and competitors') products, and (3) a "bundle" of the firm's products against a backdrop of competitive products. Choice rules include the max utility, Bradley-Terry-Luce (BTL), and logit rules. A few "trends" in the design of choice simulators should be pointed out. First, there is growing interest in the simulation of a base-case scenario/profile. Inclusion of a base-case profile (in both data collection and choice simulation) provides a useful benchmark for subsequent comparative analysis. If the base-case scenario includes competitive products (with known market shares), the simulator-based shares can even be adjusted to equal known shares prior to running comparative analyses (Davidson 1973; Srinivasan 1979).

(Regarding the choice rules: Green and Srinivasan (1978, p. 113) state that the independence from irrelevant alternatives (IIA) assumption implied by the logit rule may be problematic. Empirical evidence presented by Kamakura and Srivastava (1984) suggests that the IIA assumption is reasonable at the individual level. However, the use of the BTL and logit rules usually involves arbitrary scaling assumptions—that is, the results are not invariant over linear transformations on the utilities estimated by conjoint analysis. The logit rule should not be confused with the direct estimation of part-worths by the multinomial logit model, e.g., Ogawa 1987.)

A second trend entails the compilation of various kinds of market segmentation summaries, in which shares of choices are automatically cross-tabulated against selected segments. Simulator outputs also can include sales dollar volume and gross profits, in addition to the usual share data. A third trend is the extension of simulators to optimal product and product line search, including cannibalization of a firm's current brands and potential competitive retaliation (Green and Krieger 1989). The simulators themselves are becoming more user-friendly, with menu-driven features and opportunities for simulating a large number of different new product configurations in a single run of the simulator and for performing sensitivity analysis.
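The segment cross-tabulation idea can be sketched with hypothetical simulated choices and a single background variable:

```python
# Sketch of cross-tabulating simulated first choices by segment, as in
# the second trend described above. All data are hypothetical.
from collections import Counter, defaultdict

choices  = ["P1", "P2", "P1", "P1", "P2", "P2"]   # simulated choices
segments = ["urban", "urban", "rural", "rural", "rural", "urban"]

table = defaultdict(Counter)
for seg, choice in zip(segments, choices):
    table[seg][choice] += 1

share_by_segment = {
    seg: {p: n / sum(counts.values()) for p, n in counts.items()}
    for seg, counts in table.items()
}
# urban respondents split P1 1/3, P2 2/3; rural split P1 2/3, P2 1/3
```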

Several of the major consulting firms now offer a microcomputer-based simulator to their clients, along with part-worth utility matrices and respondent background data. The client is encouraged to try out various simulations as needs arise. Further research is needed to propose and compare different approaches for converting conjoint-analysis-based utilities to brand choice probabilities. (For a discussion of some of the resulting biases in market share predictions, see Elrod and Kumar 1989.)

Nonconventional Conjoint Applications

In addition to the use of conjoint analysis for marketing and strategic analysis, its applications are becoming increasingly diverse.

One area of growing interest is litigation. Recently, conjoint studies provided primary input to the settlement of disputes in the telecommunications (foreign dumping of equipment in the U.S.), pharmaceuticals (lost profits through misleading competitive product claims), and airline (alleged brand position bias in travel agents' reservation computer screens) industries. Applications in the context of employee benefit packages and personnel administration (Sawtooth Software 1989) are illustrative of conjoint's diffusion into new classes of problems. Experimental studies also have been carried out in the measurement of employees' perceived managerial power (Steckel and O'Shaughnessy 1989) and salespersons' tradeoffs between income and leisure (Darmon 1979), as well as in predicting the outcomes of buyer-seller negotiations (Neslin and Greenhalgh 1983).

TABLE 2
Future Directions in Conjoint Analysis

Research
• Statistical methodology for choosing among alternative conjoint models
• Empirical studies for determining the extent to which varying numbers of levels within attributes lead to biases in attribute importances and market share estimates
• Theoretical and empirical studies in the design of compensatory models (with selected interactions) for mimicking noncompensatory decision processes
• Studies in the robustness of orthogonal designs in predicting choices in correlated attribute environments
• Methods for data pooling that increase conjoint reliability while maintaining individual differences in part-worths
• Methods for coping with large numbers of attributes and levels within attribute
• Models for working with multivariate responses (e.g., Green and Krieger 1990; Louviere and Woodworth 1983) in explicit competitive set evaluations
• Extensions of choice simulators to encompass flexible sensitivity analyses and optimal product and product line search

Practice
• Extension of conjoint methodology to new application areas, such as litigation, employee benefit packages, conflict resolution (e.g., employer/employee negotiations), corporate strategy, and social/environmental tradeoffs
• Application of newly developed models for optimal product and product line design, including models that combine conjoint analysis with multidimensional scaling (for a review, see Green and Krieger 1989)
• Descriptive and normative studies for measuring customer satisfaction and perceived product and service quality
• Models and applications of conjoint analysis to simulated test marketing services that include ongoing prediction, validation, and the establishment of adjustment "norms" for converting survey responses to market forecasts
• Extension of conjoint models and applications to include marketing mix strategy variables, such as advertising, promotion, and distribution
• Models and applications that combine survey-based data (e.g., conjoint analysis) with single-source behavioral data obtained from scanning services and split-cable TV experiments
• New computer packages that exploit recent developments in hybrid modeling, multivariate response analysis, and optimal product search

Concluding Comments

Though we comment on a large number of trends that have taken place since our 1978 review, space limitations preclude an exhaustive discussion. Table 2 summarizes our suggestions for future directions. In keeping with the dual audience for whom this article has been prepared, future directions are classified under the principal headings of research and practice. In many cases, however, progress will be made by the combined efforts of academicians and industry practitioners. Part of the current vigor of research in conjoint analysis reflects the close ties between theory and practice that have characterized its development since the early 1970s.

As we reflect on the activities that characterize research in conjoint analysis, two key trends appear to have been the development of (1) standardized microcomputer packages and (2) modified approaches to conjoint analysis for obtaining stable part-worth estimates at the individual level for problems involving large numbers of attributes. We are concerned that the increased availability of microcomputer packages may make the misuse of conjoint analysis more likely. Also, as noted in the subsection on preference models, many full-profile studies appear to be conducted on the basis of few or zero degrees of freedom. If the full-profile approach is used, it is important to limit the number of attributes and levels, increase the number of profiles, or use more parsimonious models (such as the vector or ideal point models) so as to increase the degrees of freedom for conjoint estimation.
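The degrees-of-freedom arithmetic behind this warning is easy to verify. The sketch below is a hypothetical illustration (the attribute counts and design size are our own assumptions, not drawn from any study cited here): it counts the parameters of a dummy-coded, main-effects part-worth model and compares them with the number of profiles a respondent rates.

```python
# Hypothetical full-profile study: three attributes with 3, 3, and 2 levels.
levels = [3, 3, 2]

# A main-effects part-worth model, dummy-coded, estimates
# (levels - 1) coefficients per attribute plus an intercept.
n_params = 1 + sum(k - 1 for k in levels)   # 1 + 2 + 2 + 1 = 6

# With a 9-profile orthogonal array, only 3 degrees of freedom
# remain for assessing error at the individual level.
n_profiles = 9
df_error = n_profiles - n_params            # 9 - 6 = 3

# Each remedy suggested in the text raises the error degrees of freedom:
df_more_profiles = 18 - n_params                           # more profiles: 12
df_fewer_levels = 9 - (1 + sum(k - 1 for k in [2, 2, 2]))  # fewer levels: 5
df_vector_model = 9 - (1 + len(levels))                    # one slope per attribute: 5

print(df_error, df_more_profiles, df_fewer_levels, df_vector_model)  # 3 12 5 5
```

The same bookkeeping explains why studies with many attributes and levels, but only a handful of profiles per respondent, can leave zero (or negative) degrees of freedom for estimation.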

In comparison with our 1978 review, our update reflects a growing interest in the pragmatic aspects of conjoint analysis, as academic and industry researchers continue to expand its scope and versatility in real-world business/government applications. As should be clear from our discussion, conjoint analysis is still open to continued growth and new directions.

REFERENCES

Addelman, S. (1962), “Orthogonal Main-Effect Plans for Asymmetrical Factorial Experiments,” Technometrics, 4, 21-46.
Agarwal, Manoj (1988), “An Empirical Comparison of Traditional Conjoint and Adaptive Conjoint Analysis,” Working Paper No. 88-140, School of Management, State University of New York at Binghamton.
Akaah, Ishmael P. (1987), “Predictive Performance of Hybrid Conjoint Models in a Small-Scale Design: An Empirical Assessment,” working paper, Wayne State University (March).
Albaum, Gerald (1989), “BRIDGER,” “SIMGRAF” (review), Journal of Marketing Research, 26 (November), 486-8.
Allison, Neil (1989), “Conjoint Analysis Across the Business System,” in 1989 Sawtooth Software Conference Proceedings. Ketchum, ID: Sawtooth Software, 183-96.
Anderson, James C. and Naveen Donthu (1988), “A Proximate Assessment of the External Validity of Conjoint Analysis,” in 1988 AMA Educators’ Proceedings, G. Frazier et al., eds., Series 54. Chicago: American Marketing Association, 87-91.
Bateson, John E. G., David J. Reibstein, and William Boulding (1987), “Conjoint Analysis Reliability and Validity: A Framework for Future Research,” in Review of Marketing, Michael J. Houston, ed. Chicago: American Marketing Association, 451-81.
Benbenisty, Rochelle L. (1983), “Attitude Research, Conjoint Analysis Guided Ma Bell’s Entry Into Data Terminal Market,” Marketing News (May 13), 12.
Bucklin, Randolph E. and V. Srinivasan (1991), “Determining Inter-Brand Substitutability Through Survey Measurement of Consumer Preference Structures,” Journal of Marketing Research, 28 (February), forthcoming.
Carmone, Frank J. (1986), “Conjoint Designer” (review), Journal of Marketing Research, 23 (August), 311-2.
——— (1987), “ACA System for Adaptive Conjoint Analysis” (review), Journal of Marketing Research, 24 (August), 325-7.
——— and Paul E. Green (1981), “Model Misspecification in Multiattribute Parameter Estimation,” Journal of Marketing Research, 18 (February), 87-93.
Cattin, Philippe, Alan Gelfand, and Jeffrey Danes (1983), “A Simple Bayesian Procedure for Estimation in a Conjoint Model,” Journal of Marketing Research, 20 (February), 29-35.
——— and Girish Punj (1984), “Factors Influencing the Selection of Preference Model Form for Continuous Utility Functions in Conjoint Analysis,” Marketing Science, 3 (Winter), 73-82.
Clarke, Darral G. (1987), Marketing Analysis and Decision Making. Redwood City, CA: The Scientific Press, 180-92.
Darmon, Rene Y. (1979), “Setting Sales Quotas With Conjoint Analysis,” Journal of Marketing Research, 16 (February), 133-40.
Davidson, J. D. (1973), “Forecasting Traffic on STOL,” Operational Research Quarterly, 22, 561-9.
DeSarbo, Wayne S., J. Douglas Carroll, Donald R. Lehmann, and John O’Shaughnessy (1982), “Three-Way Multivariate Conjoint Analysis,” Marketing Science, 1 (Fall), 323-50.
Elrod, Terry, Jordan J. Louviere, and Krishnakumar S. Davey (1989), “How Well Can Compensatory Models Predict Choice for Efficient Choice Sets?” working paper, School of Business, Vanderbilt University.
Finkbeiner, Carl T. and Patricia J. Platz (1986), “Computerized Versus Paper and Pencil Methods: A Comparison Study,” paper presented at the Association for Consumer Research Conference, Toronto (October).
Gensch, Dennis H. (1987), “A Two-Stage Disaggregate Attribute Choice Model,” Marketing Science, 6 (Summer), 223-39.
Gleser, L. J. (1972), “On Bounds for the Average Correlation Between Subtest Scores in Ipsatively Scored Tests,” Educational and Psychological Measurement, 32 (Fall), 759-65.
Green, Paul E. (1974), “On the Design of Choice Experiments Involving Multifactor Alternatives,” Journal of Consumer Research, 1 (September), 61-8.
——— (1984), “Hybrid Models for Conjoint Analysis: An Expository Review,” Journal of Marketing Research, 21 (May), 155-69.
——— (1987), “Conjoint Analyzer” (review), Journal of Marketing Research, 24 (August), 327-9.
———, Stephen M. Goldberg, and Mila Montemayor (1981), “A Hybrid Utility Estimation Model for Conjoint Analysis,” Journal of Marketing, 45 (Winter), 33-41.
——— and Kristiaan Helsen (1989), “Cross-Validation Assessment of Alternatives to Individual-Level Conjoint Analysis: A Case Study,” Journal of Marketing Research, 26 (August), 346-50.
———, ———, and Bruce Shandler (1988), “Conjoint Validity Under Alternative Profile Presentations,” Journal of Consumer Research, 15 (December), 392-7.
——— and Abba M. Krieger (1988), “Choice Rules and Sensitivity Analysis in Conjoint Simulators,” Journal of the Academy of Marketing Science, 16 (Spring), 114-27.
——— and ——— (1989), “Recent Contributions to Optimal Product Positioning and Buyer Segmentation,” European Journal of Operational Research, 41, 127-41.
——— and ——— (1990), “A Hybrid Conjoint Model for Price-Demand Estimation,” European Journal of Operational Research, 44, 28-38.
———, ———, and Manoj K. Agarwal (1990), “Adaptive Conjoint Analysis: Some Caveats and Suggestions,” working paper, University of Pennsylvania.
———, ———, and Pradeep Bansal (1988), “Completely Unacceptable Levels in Conjoint Analysis: A Cautionary Note,” Journal of Marketing Research, 25 (August), 293-300.
——— and Vithala R. Rao (1971), “Conjoint Measurement for Quantifying Judgmental Data,” Journal of Marketing Research, 8 (August), 355-63.
——— and V. Srinivasan (1978), “Conjoint Analysis in Consumer Research: Issues and Outlook,” Journal of Consumer Research, 5 (September), 103-23.
——— and ——— (1990), “A Bibliography on Conjoint Analysis and Related Methodology in Marketing Research,” working paper, Wharton School, University of Pennsylvania (March).
——— and Yoram Wind (1975), “New Way to Measure Consumers’ Judgments,” Harvard Business Review, 53 (July-August), 107-17.
Hagerty, Michael R. (1985), “Improving the Predictive Power of Conjoint Analysis: The Use of Factor Analysis and Cluster Analysis,” Journal of Marketing Research, 22 (May), 168-84.
——— (1986), “The Cost of Simplifying Preference Models,” Marketing Science, 5 (Fall), 298-319.
——— and V. Srinivasan (1991), “Comparing the Predictive Powers of Alternative Multiple Regression Models,” Psychometrika (forthcoming).
Hahn, G. J. and S. S. Shapiro (1966), A Catalog and Computer Program for the Design and Analysis of Orthogonal Symmetric and Asymmetric Fractional Factorial Experiments, Report No. 66-C-165. Schenectady, NY: General Electric Research and Development Center (May).
Huber, G. P. (1974), “Multiattribute Utility Models: A Review of Field and Field-Like Studies,” Management Science, 20, 1393-402.
Huber, Joel (1987), “Conjoint Analysis: How We Got Here and Where We Are,” in Proceedings of the Sawtooth Software Conference on Perceptual Mapping, Conjoint Analysis, and Computer Interviewing. Ketchum, ID: Sawtooth Software, 237-51.
——— and David Hansen (1986), “Testing the Impact of Dimensional Complexity and Affective Differences of Paired Concepts in Adaptive Conjoint Analysis,” in Advances in Consumer Research, Vol. 14, M. Wallendorf and P. Anderson, eds. Provo, UT: Association for Consumer Research, 159-63.
Jain, Arun K., Franklin Acito, Naresh K. Malhotra, and Vijay Mahajan (1979), “A Comparison of Internal Validity of Alternative Parameter Estimation Methods in Decompositional Multiattribute Preference Models,” Journal of Marketing Research, 16 (August), 313-22.
Johnson, Eric, Robert J. Meyer, and Sanjay Ghose (1989), “When Choice Models Fail: Compensatory Models in Negatively Correlated Environments,” Journal of Marketing Research, 26 (August), 255-70.
Johnson, Richard M. (1974), “Trade-off Analysis of Consumer Values,” Journal of Marketing Research, 11 (May), 121-7.
——— (1976), “Beyond Conjoint Measurement: A Method of Pairwise Tradeoff Analysis,” in Advances in Consumer Research, Vol. 3, Beverlee B. Anderson, ed. Ann Arbor, MI: Association for Consumer Research, 353-8.
——— (1987), “Adaptive Conjoint Analysis,” in Proceedings of the Sawtooth Software Conference on Perceptual Mapping, Conjoint Analysis, and Computer Interviewing. Ketchum, ID: Sawtooth Software, 253-65.
Johnston, J. (1984), Econometric Methods. New York: McGraw-Hill Book Company.
Jones, Frank D. (1975), “A Survey Technique to Measure Demand Under Various Pricing Strategies,” Journal of Marketing, 39 (July), 75-7.
Kamakura, Wagner (1987), “Review of: Conjoint Designer/Analyzer,” OR/MS Today, 14 (June), 36-7.
——— (1988), “A Least Squares Procedure for Benefit Segmentation With Conjoint Experiments,” Journal of Marketing Research, 25 (May), 157-67.
——— and Rajendra K. Srivastava (1984), “Predicting Choice Shares Under Conditions of Brand Interdependence,” Journal of Marketing Research, 21 (November), 420-34.
Klein, Noreen M. (1986), “Assessing Unacceptable Attribute Levels in Conjoint Analysis,” in Advances in Consumer Research, Vol. 14, M. Wallendorf and P. Anderson, eds. Provo, UT: Association for Consumer Research, 154-8.
Krieger, Abba M. and Paul E. Green (1988), “On the Generation of Pareto Optimal, Conjoint Profiles From Orthogonal Main Effects Plans,” working paper, Wharton School, University of Pennsylvania (August).
Krishnamurthi, Lakshman (1988), “Conjoint Models of Family Decision Making,” International Journal of Research in Marketing, 5, 185-98.
Leigh, T. W., David B. MacKay, and John O. Summers (1984), “Reliability and Validity of Conjoint Analysis and Self-Explicated Weights: A Comparison,” Journal of Marketing Research, 21 (November), 456-62.
Levy, Michael, John Webster, and Roger A. Kerin (1983), “Formulating Push Marketing Strategies: A Method and Application,” Journal of Marketing, 47 (Fall), 25-34.
Louviere, Jordan J. (1988), Analyzing Decision Making: Metric Conjoint Analysis. Beverly Hills, CA: Sage Publications, Inc.
——— and George Woodworth (1983), “Design and Analysis of Simulated Consumer Choice or Allocation Experiments: An Approach Based on Aggregate Data,” Journal of Marketing Research, 20 (November), 350-67.
Lussier, Dennis A. and Richard W. Olshavsky (1979), “Task Complexity and Contingent Processing in Brand Choice,” Journal of Consumer Research, 6 (September), 154-65.
Mahajan, V., Paul E. Green, and Stephen M. Goldberg (1982), “A Conjoint Model for Measuring Self- and Cross-Price/Demand Relationships,” Journal of Marketing Research, 19 (August), 334-42.
Mallows, Colin L. (1973), “Some Comments on Cp,” Technometrics, 15, 661-76.
Mehta, Raj, William L. Moore, and Teresa M. Pavia (1989), “An Examination of Unacceptable Levels in Conjoint Analysis,” working paper, Graduate School of Business, The University of Utah.
Mohn, N. Carroll (1990), “Simulated Purchase ‘Chip’ Testing vs. Tradeoff (Conjoint) Analysis—Coca Cola’s Experience,” Marketing Research, 2 (March), 49-54.
Montgomery, David B. (1986), “Conjoint Calibration of the Customer/Competitor Interface in Industrial Markets,” in Industrial Marketing: A German-American Perspective, Klaus Backhaus and David T. Wilson, eds. Berlin: Springer-Verlag, 297-319.
Moore, William L. (1980), “Levels of Aggregation in Conjoint Analysis: An Empirical Comparison,” Journal of Marketing Research, 17 (November), 516-23.
——— and Morris B. Holbrook (1990), “Conjoint Analysis on Objects With Environmentally Correlated Attributes: The Questionable Importance of Representative Design,” Journal of Consumer Research, 16 (March), 490-7.
——— and Richard J. Semenik (1988), “Measuring Preference With Hybrid Conjoint Analysis: The Impact of a Different Number of Attributes in the Master Design,” Journal of Business Research, 16, 261-74.
Moskowitz, H. R., D. W. Stanley, and J. W. Chandler (1977), “The Eclipse Method: Optimizing Product Formulation Through a Consumer Generated Ideal Sensory Profile,” Canadian Institute of Food Science Technology Journal, 10 (July), 161-8.

Neslin, Scott and Leonard Greenhalgh (1983), “Nash’s Theory of Cooperative Games as a Predictor of the Outcomes of Buyer-Seller Negotiations: An Experiment in Media Purchasing,” Journal of Marketing Research, 20 (November), 368-79.
Ogawa, Kohsuke (1987), “An Approach to Simultaneous Estimation and Segmentation in Conjoint Analysis,” Marketing Science, 6 (Winter), 66-81.
Page, Albert L. and Harold F. Rosenbaum (1987), “Redesigning Product Lines With Conjoint Analysis: How Sunbeam Does It,” Journal of Product Innovation Management, 4, 120-37.
Payne, John W. (1976), “Task Complexity and Contingent Processing in Decision Making: An Information Search and Protocol Analysis,” Organizational Behavior and Human Performance, 16, 366-87.
Reibstein, David, John E. G. Bateson, and William Boulding (1988), “Conjoint Analysis Reliability: Empirical Findings,” Marketing Science, 7 (Summer), 271-86.
Robinson, Patrick J. (1980), “Application of Conjoint Analysis to Pricing Problems,” in Proceedings of the First ORSA/TIMS Special Interest Conference on Market Measurement and Analysis, D. B. Montgomery and D. R. Wittink, eds. Cambridge, MA: Marketing Science Institute, 183-205.
Sawtooth Software (1989), “Conjoint Analysis Used for Personnel Applications,” Sawtooth News, 5 (Fall), 5-7.
Schwartz, David (1978), “Locked Box Combines Survey Methods, Helps End Woes of Probing Industrial Field,” Marketing News (January 27), 18.
Segal, Madhav N. (1982), “Reliability of Conjoint Analysis: Contrasting Data Collection Procedures,” Journal of Marketing Research, 19 (February), 139-43.
Smith, Scott M. (1988), “Statistical Software for Conjoint Analysis,” in Proceedings of the Sawtooth Software Conference on Perceptual Mapping, Conjoint Analysis, and Computer Interviewing. Ketchum, ID: Sawtooth Software (June), 109-16.
Srinivasan, V. (1979), “Network Models for Estimating Brand-Specific Effects in Multi-Attribute Marketing Models,” Management Science, 25 (January), 11-21.
——— (1982), “Comments on the Role of Price in Individual Utility Judgments,” in Choice Models for Buyer Behavior, Leigh McAlister, ed. Greenwich, CT: JAI Press, Inc., 81-90.
——— (1988), “A Conjunctive-Compensatory Approach to the Self-Explication of Multiattributed Preferences,” Decision Sciences, 19 (Spring), 295-305.
———, Peter G. Flachsbart, Jarir S. Dajani, and Rolfe G. Hartley (1981), “Forecasting the Effectiveness of Work-Trip Gasoline Conservation Policies Through Conjoint Analysis,” Journal of Marketing, 45 (Summer), 157-72.
———, Arun K. Jain, and Naresh K. Malhotra (1983), “Improving Predictive Power of Conjoint Analysis by Constrained Parameter Estimation,” Journal of Marketing Research, 20 (November), 433-8.
——— and Allan D. Shocker (1973a), “Linear Programming Techniques for Multidimensional Analysis of Preferences,” Psychometrika, 38, 337-69.
——— and ——— (1973b), “Estimating the Weights for Multiple Attributes in a Composite Criterion Using Pairwise Judgments,” Psychometrika, 38, 473-93.
——— and Gordon A. Wyner (1989), “CASEMAP: Computer-Assisted Self-Explication of Multi-Attributed Preferences,” in New Product Development and Testing, W. Henry, M. Menasco, and H. Takada, eds. Lexington, MA: Lexington Books, 91-111.
Steckel, Joel H., Wayne S. DeSarbo, and Vijay Mahajan (1990), “On the Creation of Feasible Conjoint Analysis Experimental Designs,” Decision Sciences (forthcoming).
——— and John O’Shaughnessy (1989), “Towards a New Way to Measure Power: Applying Conjoint Analysis to Group Discussions,” Marketing Letters, 1 (December), 37-46.
Swinnen, G. (1983), “Decisions on Product-Mix Changes in Supermarket Chains,” unpublished doctoral dissertation, UFSIA, Antwerp University, Belgium.
Theil, Henri (1969), “A Multinomial Extension of the Linear Logit Model,” International Economic Review, 10 (October), 251-9.
Wiley, James B. (1977), “Selecting Pareto Optimal Subsets From Multiattribute Alternatives,” in Advances in Consumer Research, Vol. 4, W. D. Perreault, Jr., ed. Atlanta: Association for Consumer Research, 171-4.
Wilkie, William L. and Edgar A. Pessemier (1973), “Issues in Marketing’s Use of Multi-Attribute Attitude Models,” Journal of Marketing Research, 10 (November), 428-41.
Wittink, Dick R. and Philippe Cattin (1989), “Commercial Use of Conjoint Analysis: An Update,” Journal of Marketing, 53 (July), 91-6.
———, Lakshman Krishnamurthi, and Julia B. Nutter (1982), “Comparing Derived Importance Weights Across Attributes,” Journal of Consumer Research, 8 (March), 471-4.
———, ———, and David J. Reibstein (1990), “The Effect of Differences in the Number of Attribute Levels on Conjoint Results,” Marketing Letters (forthcoming).
——— and David B. Montgomery (1979), “Predictive Validity of Trade-Off Analysis for Alternative Segmentation Schemes,” in Educators’ Conference Proceedings, Series 44, Neil Beckwith et al., eds. Chicago: American Marketing Association, 69-73.
———, David J. Reibstein, William Boulding, John E. G. Bateson, and John W. Walsh (1990), “Conjoint Reliability Measures,” Marketing Science (forthcoming).
Wright, Peter (1975), “Consumer Choice Strategies: Simplifying Vs. Optimizing,” Journal of Marketing Research, 12 (February), 60-7.
——— and Mary Ann Kriewall (1980), “State of Mind Effects on the Accuracy With Which Utility Functions Predict Marketplace Choice,” Journal of Marketing Research, 17 (August), 277-93.
Wyner, Gordon A., Lois H. Benedetti, and Bart M. Trapp (1984), “Measuring the Quantity and Mix of Product Demand,” Journal of Marketing, 48 (Winter), 101-9.

