Here we present some formal tests and remedial measures for dealing with error autocorrelation. We usually assume that the error terms are independent unless there is a specific reason to think this is not the case. Typically, violation of this
assumption occurs because there is a known temporal component for how the observations were drawn. The easiest way to assess whether there is dependence is to produce a scatterplot of the residuals versus the time measurement for each observation (assuming the data are arranged in time sequence order). If the data are independent, then the residuals should look randomly scattered about 0. However, if a noticeable pattern emerges (particularly a cyclical one), then dependence
is likely an issue.

Recall that if we have first-order autocorrelation in the errors, then the errors are modeled as

\(\begin{equation*} \epsilon_{t}=\rho\epsilon _{t-1}+\omega_{t}, \end{equation*}\)

where \(|\rho|<1\) and \(\omega_{t}\sim_{iid}N(0,\sigma^{2})\). If we suspect first-order autocorrelation in the errors, then a formal test exists for the parameter \(\rho\). In particular, the Durbin-Watson test is constructed as:

\(\begin{align*} \nonumber H_{0}&\colon \rho=0 \\ \nonumber H_{A}&\colon \rho\neq 0. \end{align*}\)

The null hypothesis of \(\rho=0\) means that \(\epsilon_{t}=\omega_{t}\), i.e., the error term in one period is not correlated with the error term in the previous period, while the alternative hypothesis of \(\rho\neq 0\) means the error term in one period is either positively or negatively correlated with the error term in the previous period. Oftentimes, a researcher
will already have an indication of whether the errors are positively or negatively correlated. For example, a regression of oil prices (in dollars per barrel) versus the gas price index will surely have positively correlated errors. When the researcher has an indication of the direction of the correlation, then the Durbin-Watson test also accommodates the one-sided alternatives \(H_{A}\colon\rho< 0\) for negative correlations or \(H_{A}\colon\rho> 0\) for positive correlations (as in the
oil example). The test statistic for the Durbin-Watson test on a data set of size n is given by:

\(\begin{equation*} D=\dfrac{\sum_{t=2}^{n}(e_{t}-e_{t-1})^{2}}{\sum_{t=1}^{n}e_{t}^{2}}, \end{equation*}\)

where \(e_{t}=y_{t}-\hat{y}_{t}\) are the residuals from the ordinary least squares fit. The DW test statistic ranges from 0 to 4: values between 0 and 2 indicate positive autocorrelation, a value of 2 indicates zero autocorrelation, and values between 2 and 4 indicate negative autocorrelation. Exact critical values are difficult to obtain, but tables (for certain significance levels) can be used to make a decision (e.g., see the Durbin-Watson significance tables, where N represents the sample size, n, and \(\Lambda\) represents the number of regression parameters, p). The tables provide a lower and an upper bound, called \(d_{L}\) and \(d_{U}\), respectively. In testing for positive autocorrelation: if \(D<d_{L}\), reject \(H_{0}\); if \(D>d_{U}\), fail to reject \(H_{0}\); if \(d_{L}\leq D\leq d_{U}\), the test is inconclusive. While the prospect of an inconclusive test result is less than desirable, some programs use exact and approximate procedures for calculating a p-value. These procedures require certain assumptions about the data which we will not discuss. One "exact" method is based on the beta distribution for obtaining p-values.

To illustrate, consider the Blaisdell Company example from page 489 of Applied Linear Regression Models (4th ed.) by Kutner, Nachtsheim, and Neter. If we fit a simple linear regression model with response comsales (company sales in $ millions) and predictor indsales (industry sales in $ millions), click the "Results" button in the Regression dialog, and check "Durbin-Watson statistic", we obtain the following output:
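The example output here comes from Minitab, but the Durbin-Watson statistic is simple to compute directly from the residuals. A minimal sketch in Python (the function name and test series are illustrative, not from the source):

```python
import numpy as np

def durbin_watson(e):
    """D = sum_{t=2}^{n} (e_t - e_{t-1})^2 / sum_{t=1}^{n} e_t^2."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Slowly drifting residuals (positive autocorrelation) -> D near 0
slow_drift = np.sin(np.linspace(0, np.pi, 50))
# Alternating residuals (negative autocorrelation) -> D near 4
alternating = np.array([1.0, -1.0] * 25)

print(durbin_watson(slow_drift))   # close to 0
print(durbin_watson(alternating))  # close to 4
```

Successive differences of a smooth residual sequence are tiny relative to the residuals themselves, which is why positive autocorrelation pushes D toward 0.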
Durbin-Watson Statistic = 0.734726

Since the value of the Durbin-Watson statistic falls below the lower bound at the 0.01 significance level (obtained from a table of Durbin-Watson test bounds), there is strong evidence that the error terms are positively correlated.

Ljung-Box Q Test

The Ljung-Box Q test (sometimes called the Portmanteau test) is used to test whether or not observations over time are random and independent. In particular, for a given k, it tests the following:

\(\begin{align*} \nonumber H_{0}&\colon \textrm{the autocorrelations up to lag} \ k \ \textrm{are all 0} \\ \nonumber H_{A}&\colon \textrm{the autocorrelations of one or more lags differ from 0}. \end{align*}\)

The test statistic is calculated as:

\(\begin{equation*} Q_{k}=n(n+2)\sum_{j=1}^{k}\dfrac{{r}^{2}_{j}}{n-j}, \end{equation*}\)

where \(r_{j}\) is the lag-\(j\) sample autocorrelation; \(Q_{k}\) is approximately \(\chi^{2}_{k}\)-distributed. To illustrate how the test works for k=1, consider the Blaisdell Company example from above. If we store the residuals from a simple linear regression model with response comsales and predictor indsales and then find the autocorrelation function for the residuals (select Stat > Time Series > Autocorrelation), we obtain the following output:

Autocorrelation Function: RESI1
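The Q statistic can likewise be computed by hand. A minimal Python sketch (function names are illustrative); note that for k = 1 the \(\chi^{2}_{1}\) p-value can be obtained from the complementary error function, which reproduces the Blaisdell p-value for Q = 9.08:

```python
import math
import numpy as np

def ljung_box_q(e, k):
    """Q_k = n(n+2) * sum_{j=1}^{k} r_j^2 / (n - j), r_j = lag-j autocorrelation."""
    e = np.asarray(e, dtype=float)
    e = e - e.mean()
    n = len(e)
    denom = np.sum(e ** 2)
    q = 0.0
    for j in range(1, k + 1):
        r_j = np.sum(e[:-j] * e[j:]) / denom  # lag-j sample autocorrelation
        q += r_j ** 2 / (n - j)
    return n * (n + 2) * q

def chi2_1_pvalue(q):
    """Survival function of a chi-square with 1 df: P(X > q) = erfc(sqrt(q/2))."""
    return math.erfc(math.sqrt(q / 2.0))

# A strongly negatively autocorrelated series yields a large Q
print(ljung_box_q([1.0, -1.0] * 25, k=1))

# The chi^2_1 p-value for the Q statistic quoted in the text:
print(round(chi2_1_pvalue(9.08), 4))  # 0.0026
```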
The Ljung-Box Q test statistic of 9.08 corresponds to a \(\chi^{2}_{1}\) p-value of 0.0026, so there is strong evidence that the lag-1 autocorrelation is non-zero.

Remedial Measures

When autocorrelated error terms are found to be present, one of the first remedial measures should be to investigate whether a key predictor variable has been omitted. If adding such a predictor does not reduce or eliminate the autocorrelation of the error terms, then certain transformations of the variables can be performed. We discuss three transformations that are designed for AR(1) errors. Methods for dealing with errors from an AR(k) process do exist in the literature but are much more technical in nature.

Cochrane-Orcutt Procedure

The first of the three transformation methods is the Cochrane-Orcutt procedure, which involves an iterative process (after identifying the need for an AR(1) process):

1. Estimate \(\rho\) in \(\begin{equation*} \epsilon_{t}=\rho\epsilon_{t-1}+\omega_{t} \end{equation*}\) by regressing the residuals \(e_{t}\) on \(e_{t-1}\) through the origin. Call this estimate r.
2. Transform the variables from the multiple regression model \(\begin{equation*} y_{t}=\beta_{0}+\beta_{1}x_{t,1}+\ldots+\beta_{p-1}x_{t,p-1}+\epsilon_{t} \end{equation*}\) by setting \(y_{t}^{*}=y_{t}-ry_{t-1}\) and \(x_{t,j}^{*}=x_{t,j}-rx_{t-1,j}\) for \(j=1,\ldots,p-1\).
3. Regress \(y_{t}^{*}\) on the transformed predictors using ordinary least squares to obtain estimates \(\hat{\beta}_{0}^{*},\ldots,\hat{\beta}_{p-1}^{*}\).
4. Examine the error terms of this fit and determine whether autocorrelation is still present (e.g., using the Durbin-Watson test). If autocorrelation is still present, iterate the procedure. If it appears to be corrected, transform the estimates back to their original scale by setting \(\hat{\beta}_{0}=\hat{\beta}_{0}^{*}/(1-r)\) and \(\hat{\beta}_{j}=\hat{\beta}_{j}^{*}\) for \(j=1,\ldots,p-1\).

Notice that only the intercept parameter requires a transformation.
Furthermore, the standard errors of the regression estimates for the original scale can also be obtained by setting \(\textrm{s.e.}(\hat{\beta}_{0})=\textrm{s.e.}(\hat{\beta}_{0}^{*})/(1-r)\) and \(\textrm{s.e.}(\hat{\beta}_{j})=\textrm{s.e.}(\hat{\beta}_{j}^{*})\) for \(j=1,\ldots,p-1\). To illustrate the Cochrane-Orcutt procedure, consider the Blaisdell Company example from above:
Durbin-Watson Statistic = 1.65025
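A one-pass sketch of the Cochrane-Orcutt steps in Python. The original example uses Minitab; the simulated data, seed, and parameter values below are hypothetical, chosen only so the recovered estimates can be checked against known truth:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0, 20, n)

# Simulate y = 2 + 3x + eps with AR(1) errors, rho = 0.7 (hypothetical values)
eps = np.zeros(n)
omega = rng.normal(0, 1, n)
for t in range(1, n):
    eps[t] = 0.7 * eps[t - 1] + omega[t]
y = 2 + 3 * x + eps

# Step 1: OLS fit, then estimate rho by regressing e_t on e_{t-1} through the origin
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b
r = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])

# Step 2: transform the variables
y_star = y[1:] - r * y[:-1]
x_star = x[1:] - r * x[:-1]

# Step 3: OLS on the transformed data
X_star = np.column_stack([np.ones(n - 1), x_star])
b_star = np.linalg.lstsq(X_star, y_star, rcond=None)[0]

# Step 4 (back-transform): only the intercept needs rescaling
b0_hat = b_star[0] / (1 - r)
b1_hat = b_star[1]
print(r, b0_hat, b1_hat)  # r near 0.7; estimates near 2 and 3
```

In practice one would re-check the transformed residuals (e.g., with the Durbin-Watson test) and iterate if autocorrelation remains.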
One thing to note about the Cochrane-Orcutt approach is that it does not always work properly, primarily because when the errors are positively autocorrelated, r tends to underestimate \(\rho\). When this bias is serious, it can greatly reduce the effectiveness of the Cochrane-Orcutt procedure.

Hildreth-Lu Procedure

The Hildreth-Lu procedure is a more direct method for estimating \(\rho\). After establishing that the errors have an AR(1) structure, follow these steps:

1. Select a grid of candidate values for \(\rho\).
2. For each candidate value, transform the variables as in the Cochrane-Orcutt procedure and record the sum of squared errors (SSE) of the resulting ordinary least squares fit.
3. Choose the value of \(\rho\) that minimizes the SSE and report the corresponding fitted regression equation.
Notice that this procedure is similar to the Box-Cox transformation discussed previously and that, unlike the Cochrane-Orcutt procedure, it is not iterative. To illustrate the Hildreth-Lu procedure, consider the Blaisdell Company example from above:
Durbin-Watson Statistic = 1.72544
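The grid-search idea behind Hildreth-Lu can be sketched as follows (the simulated data, seed, and grid spacing are hypothetical; the source example uses Minitab):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.linspace(0, 20, n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.6 * eps[t - 1] + rng.normal(0, 1)  # true rho = 0.6 (hypothetical)
y = 5 + 2 * x + eps

def sse_for_rho(rho):
    """SSE of the OLS fit to the rho-transformed variables."""
    y_star = y[1:] - rho * y[:-1]
    X_star = np.column_stack([np.ones(n - 1), x[1:] - rho * x[:-1]])
    b_star, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    resid = y_star - X_star @ b_star
    return resid @ resid

# Grid search over candidate rho values; keep the minimizer
grid = np.arange(-0.99, 1.0, 0.01)
best_rho = min(grid, key=sse_for_rho)
print(round(best_rho, 2))  # near the true value 0.6
```

This is a direct (non-iterative) search, which is what makes it analogous to choosing the Box-Cox \(\lambda\) by minimizing SSE over a grid.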
First Differences Procedure

Since \(\rho\) is frequently large for AR(1) errors (especially in economic data), many have suggested simply setting \(\rho=1\) in the transformed model of the previous two procedures. This is called the first differences procedure, and it regresses \(y_{t}^{*}=y_{t}-y_{t-1}\) on the \(x_{t,j}^{*}=x_{t,j}-x_{t-1,j}\) for \(j=1,\ldots,p-1\) using regression through the origin. The estimates from this regression are then transformed back by setting \(\hat{\beta}_{j}=\hat{\beta}_{j}^{*}\) for \(j=1,\ldots,p-1\) and \(\hat{\beta}_{0}=\bar{y}-(\hat{\beta}_{1}\bar{x}_{1}+\ldots+\hat{\beta}_{p-1}\bar{x}_{p-1})\). To illustrate the first differences procedure, consider the Blaisdell Company example from above:
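The mechanics of first differencing can be sketched in a few lines (the data here are a hypothetical exact linear trend, used only so the back-transformed estimates are easy to verify):

```python
import numpy as np

# Hypothetical data following y = 2 + 3x exactly, for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2 + 3 * x

# Regression through the origin of the differenced response on the
# differenced predictor: b1 = sum(x* y*) / sum(x*^2)
y_star = np.diff(y)
x_star = np.diff(x)
b1 = (x_star @ y_star) / (x_star @ x_star)

# Back-transform the intercept: b0 = ybar - b1 * xbar
b0 = y.mean() - b1 * x.mean()
print(b1, b0)  # 3.0 2.0
```

Note that differencing removes the intercept from the transformed model, which is why \(\hat{\beta}_{0}\) must be recovered from the sample means afterward.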
Forecasting Issues

When calculating forecasts for regression with autoregressive errors, it is important to utilize the modeled error structure as part of the process. For example, for AR(1) errors, \(\epsilon_{t}=\rho\epsilon_{t-1}+\omega_{t}\), our fitted regression equation is

\(\begin{equation*} \hat{y}_{t}=b_{0}+b_{1}x_{t}, \end{equation*}\)

with forecasts of \(y\) at time \(t\), denoted \(F_{t}\), computed as:

\(\begin{equation*} F_{t}=\hat{y}_{t}+e_{t}=\hat{y}_{t}+re_{t-1}. \end{equation*}\)

So, we can compute the forecast of \(y\) at time \(t+1\), denoted \(F_{t+1}\), iteratively as \(F_{t+1}=\hat{y}_{t+1}+re_{t}\), where \(\hat{y}_{t+1}=b_{0}+b_{1}x_{t+1}\) and \(e_{t}\) is the most recent observed residual.
To illustrate forecasting for the Cochrane-Orcutt procedure, consider the Blaisdell Company example from above. Suppose we wish to forecast comsales for time period 21 when indsales are projected to be $175.3 million:
Note that this procedure is not needed when simply using the lagged response variable as a predictor, e.g., in the first-order autoregression model \(\begin{equation*} y_{t}=\beta_{0}+\beta_{1}y_{t-1}+\epsilon_{t} \end{equation*}.\)
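The forecast recursion is simple arithmetic. A minimal sketch, where all numeric values (fitted coefficients, r, the projected predictor value, and the last residual) are hypothetical stand-ins rather than the Blaisdell estimates:

```python
def ar1_forecast(b0, b1, r, x_next, e_last):
    """F_{t+1} = yhat_{t+1} + r * e_t, with yhat_{t+1} = b0 + b1 * x_{t+1}."""
    y_hat_next = b0 + b1 * x_next
    return y_hat_next + r * e_last

# Hypothetical fitted values: intercept 1.0, slope 2.0, r = 0.5,
# next predictor value 3.0, most recent residual 0.2
print(ar1_forecast(1.0, 2.0, 0.5, 3.0, 0.2))  # 7.1
```

The only extra ingredient beyond an ordinary regression forecast is the correction term \(re_{t}\), which exploits the persistence of the AR(1) errors.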