3 Linear Regression
This chapter is about linear regression, a very simple approach for supervised learning. In particular, linear regression is a useful tool for predicting a quantitative response. It has been around for a long time and is the topic of innumerable textbooks. Though it may seem somewhat dull compared to some of the more modern statistical learning approaches described in later chapters of this book, linear regression is still a useful and widely used statistical learning method. Moreover, it serves as a good jumping-off point for newer approaches: as we will see in later chapters, many fancy statistical learning approaches can be seen as generalizations or extensions of linear regression. Consequently, the importance of having a good understanding of linear regression before studying more complex learning methods cannot be overstated. In this chapter, we review some of the key ideas underlying the linear regression model, as well as the least squares approach that is most commonly used to fit this model.
Recall the Advertising data from Chapter 2. Figure 2.1 displays sales (in thousands of units) for a particular product as a function of advertising budgets (in thousands of dollars) for TV, radio, and newspaper media. Suppose that in our role as statistical consultants we are asked to suggest, on the basis of this data, a marketing plan for next year that will result in high product sales. What information would be useful in order to provide such a recommendation? Here are a few important questions that we might seek to address:
- Is there a relationship between advertising budget and sales? Our first goal should be to determine whether the data provide evidence of an association between advertising expenditure and sales. If the evidence is weak, then one might argue that no money should be spent on advertising!
- How strong is the relationship between advertising budget and sales? Assuming that there is a relationship between advertising and sales, we would like to know the strength of this relationship. Does knowledge of the advertising budget provide a lot of information about product sales?
- Which media are associated with sales? Are all three media—TV, radio, and newspaper—associated with sales, or are just one or two of the media associated? To answer this question, we must find a way to separate out the individual contribution of each medium to sales when we have spent money on all three media.
- How large is the association between each medium and sales? For every dollar spent on advertising in a particular medium, by what amount will sales increase? How accurately can we predict this amount of increase?
- How accurately can we predict future sales? For any given level of television, radio, or newspaper advertising, what is our prediction for sales, and what is the accuracy of this prediction?
- Is the relationship linear? If there is approximately a straight-line relationship between advertising expenditure in the various media and sales, then linear regression is an appropriate tool. If not, then it may still be possible to transform the predictor or the response so that linear regression can be used.
- Is there synergy among the advertising media? Perhaps spending $50,000 on television advertising and $50,000 on radio advertising is associated with higher sales than allocating $100,000 to either television or radio individually. In marketing, this is known as a synergy effect, while in statistics it is called an interaction effect.
It turns out that linear regression can be used to answer each of these questions. We will first discuss all of these questions in a general context, and then return to them in this specific context in Section 3.4.
3.1 Simple Linear Regression
Simple linear regression lives up to its name: it is a very straightforward approach for predicting a quantitative response $Y$ on the basis of a single predictor variable $X$. It assumes that there is approximately a linear relationship between $X$ and $Y$. Mathematically, we can write this linear relationship as

$$Y \approx \beta_0 + \beta_1 X. \qquad (3.1)$$

You might read "$\approx$" as “is approximately modeled as”. We will sometimes describe (3.1) by saying that we are regressing $Y$ on $X$ (or $Y$ onto $X$). For example, $X$ may represent TV advertising and $Y$ may represent sales. Then we can regress sales onto TV by fitting the model

$$\texttt{sales} \approx \beta_0 + \beta_1 \times \texttt{TV}.$$

In Equation 3.1, $\beta_0$ and $\beta_1$ are two unknown constants that represent the intercept and slope terms in the linear model. Together, $\beta_0$ and $\beta_1$ are known as the model coefficients or parameters. Once we have used our training data to produce estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ for the model coefficients, we can predict future sales on the basis of a particular value of TV advertising by computing

$$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x, \qquad (3.2)$$

where $\hat{y}$ indicates a prediction of $Y$ on the basis of $X = x$. Here we use a hat symbol, $\hat{\ }$, to denote the estimated value for an unknown parameter or coefficient, or to denote the predicted value of the response.
3.1.1 Estimating the Coefficients
In practice, $\beta_0$ and $\beta_1$ are unknown. So before we can use (3.1) to make predictions, we must use data to estimate the coefficients. Let

$$(x_1, y_1),\ (x_2, y_2),\ \ldots,\ (x_n, y_n)$$

represent $n$ observation pairs, each of which consists of a measurement of $X$ and a measurement of $Y$. In the Advertising example, this data set consists of the TV advertising budget and product sales in $n = 200$ different markets. (Recall that the data are displayed in Figure 2.1.) Our goal is to obtain coefficient estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ such that the linear model (3.1) fits the available data well—that is, so that $y_i \approx \hat{\beta}_0 + \hat{\beta}_1 x_i$ for $i = 1, \ldots, n$. In other words, we want to find an intercept $\hat{\beta}_0$ and a slope $\hat{\beta}_1$ such that the resulting line is as close as possible to the $n = 200$ data points. There are a number of ways of measuring closeness. However, by far the most common approach involves minimizing the least squares criterion, and we take that approach in this chapter. Alternative approaches will be considered in Chapter 6.

Let $\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i$ be the prediction for $Y$ based on the $i$th value of $X$. Then $e_i = y_i - \hat{y}_i$ represents the $i$th residual—this is the difference between the $i$th observed response value and the $i$th response value that is predicted by our linear model. We define the residual sum of squares (RSS) as

$$\mathrm{RSS} = e_1^2 + e_2^2 + \cdots + e_n^2, \qquad (3.3)$$

or equivalently as

$$\mathrm{RSS} = (y_1 - \hat{\beta}_0 - \hat{\beta}_1 x_1)^2 + (y_2 - \hat{\beta}_0 - \hat{\beta}_1 x_2)^2 + \cdots + (y_n - \hat{\beta}_0 - \hat{\beta}_1 x_n)^2.$$

The least squares approach chooses $\hat{\beta}_0$ and $\hat{\beta}_1$ to minimize the RSS. Using some calculus, one can show that the minimizers are

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}, \qquad (3.4)$$

where $\bar{y} \equiv \frac{1}{n}\sum_{i=1}^{n} y_i$ and $\bar{x} \equiv \frac{1}{n}\sum_{i=1}^{n} x_i$ are the sample means. In other words, (3.4) defines the least squares coefficient estimates for simple linear regression.
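As a minimal sketch in R, the estimates in (3.4) can be computed directly and checked against lm(). The simulated x and y below are purely illustrative (they are not the Advertising data), and the chosen intercept and slope are arbitrary.

```r
set.seed(1)
x <- runif(100, 0, 300)                  # hypothetical TV-like budgets
y <- 7 + 0.05 * x + rnorm(100, sd = 3)   # synthetic sales-like response

# Least squares estimates from (3.4), computed by hand
beta1_hat <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
beta0_hat <- mean(y) - beta1_hat * mean(x)
c(beta0_hat, beta1_hat)

coef(lm(y ~ x))  # lm() reproduces the same least squares estimates
```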
Figure 3.1. For the Advertising data, the least squares fit for the regression of sales onto TV is shown. The fit is found by minimizing the residual sum of squares. Each grey line segment represents a residual. In this case a linear fit captures the essence of the relationship, although it overestimates the trend in the left of the plot.
Figure 3.1 displays the simple linear regression fit to the Advertising data, where $\hat{\beta}_0 = 7.03$ and $\hat{\beta}_1 = 0.0475$. In other words, according to this approximation, an additional $1,000 spent on TV advertising is associated with selling approximately 47.5 additional units of the product. In Figure 3.2, we have computed RSS for a number of values of $\beta_0$ and $\beta_1$, using the advertising data with sales as the response and TV as the predictor. In each plot, the red dot represents the pair of least squares estimates $(\hat{\beta}_0, \hat{\beta}_1)$ given by (3.4). These values clearly minimize the RSS.
Figure 3.2. Contour and three-dimensional plots of the RSS on the Advertising data, using sales as the response and TV as the predictor. The red dots correspond to the least squares estimates $\hat{\beta}_0$ and $\hat{\beta}_1$, given by (3.4).
3.1.2 Assessing the Accuracy of the Coefficient Estimates
Recall from (2.1) that we assume that the true relationship between $X$ and $Y$ takes the form $Y = f(X) + \epsilon$ for some unknown function $f$, where $\epsilon$ is a mean-zero random error term. If $f$ is to be approximated by a linear function, then we can write this relationship as

$$Y = \beta_0 + \beta_1 X + \epsilon. \qquad (3.5)$$

Here $\beta_0$ is the intercept term—that is, the expected value of $Y$ when $X = 0$, and $\beta_1$ is the slope—the average increase in $Y$ associated with a one-unit increase in $X$. The error term is a catch-all for what we miss with this simple model: the true relationship is probably not linear, there may be other variables that cause variation in $Y$, and there may be measurement error. We typically assume that the error term is independent of $X$.

The model given by (3.5) defines the population regression line, which is the best linear approximation to the true relationship between $X$ and $Y$.1 The least squares regression coefficient estimates (3.4) characterize the least squares line (3.2). The left-hand panel of Figure 3.3 displays these two lines in a simple simulated example. We created 100 random $X$s, and generated 100 corresponding $Y$s from the model

$$Y = 2 + 3X + \epsilon, \qquad (3.6)$$

where $\epsilon$ was generated from a normal distribution with mean zero. The red line in the left-hand panel of Figure 3.3 displays the true relationship, $f(X) = 2 + 3X$, while the blue line is the least squares estimate based on the observed data. The true relationship is generally not known for real data, but the least squares line can always be computed using the coefficient estimates given in (3.4). In other words, in real applications, we have access to a set of observations from which we can compute the least squares line; however, the population regression line is unobserved. In the right-hand panel of Figure 3.3 we have generated ten different data sets from the model given by (3.6) and plotted the corresponding ten least squares lines. Notice that different data sets generated from the same true model result in slightly different least squares lines, but the unobserved population regression line does not change.

Figure 3.3. A simulated data set. Left: The red line represents the true relationship, $f(X) = 2 + 3X$, which is known as the population regression line. The blue line is the least squares line; it is the least squares estimate for $f(X)$ based on the observed data, shown in black. Right: The population regression line is again shown in red, and the least squares line in dark blue. In light blue, ten least squares lines are shown, each computed on the basis of a separate random set of observations. Each least squares line is different, but on average, the least squares lines are quite close to the population regression line.
At first glance, the difference between the population regression line and the least squares line may seem subtle and confusing. We only have one data set, and so what does it mean that two different lines describe the relationship between the predictor and the response? Fundamentally, the concept of these two lines is a natural extension of the standard statistical approach of using information from a sample to estimate characteristics of a large population. For example, suppose that we are interested in knowing the population mean $\mu$ of some random variable $Y$. Unfortunately, $\mu$ is unknown, but we do have access to $n$ observations from $Y$, $y_1, \ldots, y_n$, which we can use to estimate $\mu$. A reasonable estimate is $\hat{\mu} = \bar{y}$, where $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$ is the sample mean. The sample mean and the population mean are different, but in general the sample mean will provide a good estimate of the population mean. In the same way, the unknown coefficients $\beta_0$ and $\beta_1$ in linear regression define the population regression line. We seek to estimate these unknown coefficients using $\hat{\beta}_0$ and $\hat{\beta}_1$ given in (3.4). These coefficient estimates define the least squares line.

The analogy between linear regression and estimation of the mean of a random variable is an apt one based on the concept of bias. If we use the sample mean $\hat{\mu}$ to estimate $\mu$, this estimate is unbiased, in the sense that on average, we expect $\hat{\mu}$ to equal $\mu$. What exactly does this mean? It means that on the basis of one particular set of observations $y_1, \ldots, y_n$, $\hat{\mu}$ might overestimate $\mu$, and on the basis of another set of observations, $\hat{\mu}$ might underestimate $\mu$. But if we could average a huge number of estimates of $\mu$ obtained from a huge number of sets of observations, then this average would exactly equal $\mu$. Hence, an unbiased estimator does not systematically over- or under-estimate the true parameter. The property of unbiasedness holds for the least squares coefficient estimates given by (3.4) as well: if we estimate $\beta_0$ and $\beta_1$ on the basis of a particular data set, then our estimates won’t be exactly equal to $\beta_0$ and $\beta_1$. But if we could average the estimates obtained over a huge number of data sets, then the average of these estimates would be spot on! In fact, we can see from the right-hand panel of Figure 3.3 that the average of many least squares lines, each estimated from a separate data set, is pretty close to the true population regression line.
We continue the analogy with the estimation of the population mean $\mu$ of a random variable $Y$. A natural question is as follows: how accurate is the sample mean $\hat{\mu}$ as an estimate of $\mu$? We have established that the average of $\hat{\mu}$‘s over many data sets will be very close to $\mu$, but that a single estimate $\hat{\mu}$ may be a substantial underestimate or overestimate of $\mu$. How far off will that single estimate of $\mu$ be? In general, we answer this question by computing the standard error of $\hat{\mu}$, written as $\mathrm{SE}(\hat{\mu})$. We have the well-known formula

$$\mathrm{Var}(\hat{\mu}) = \mathrm{SE}(\hat{\mu})^2 = \frac{\sigma^2}{n}, \qquad (3.7)$$

where $\sigma$ is the standard deviation of each of the realizations $y_i$ of $Y$.2 Roughly speaking, the standard error tells us the average amount that this estimate $\hat{\mu}$ differs from the actual value of $\mu$. Equation 3.7 also tells us how this deviation shrinks with $n$—the more observations we have, the smaller the standard error of $\hat{\mu}$. In a similar vein, we can wonder how close $\hat{\beta}_0$ and $\hat{\beta}_1$ are to the true values $\beta_0$ and $\beta_1$. To compute the standard errors associated with $\hat{\beta}_0$ and $\hat{\beta}_1$, we use the following formulas:

$$\mathrm{SE}(\hat{\beta}_0)^2 = \sigma^2\left[\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}\right], \qquad \mathrm{SE}(\hat{\beta}_1)^2 = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}, \qquad (3.8)$$

where $\sigma^2 = \mathrm{Var}(\epsilon)$. For these formulas to be strictly valid, we need to assume that the errors $\epsilon_i$ for each observation have common variance $\sigma^2$ and are uncorrelated. This is clearly not true in Figure 3.1, but the formula still turns out to be a good approximation. Notice in the formula that $\mathrm{SE}(\hat{\beta}_1)$ is smaller when the $x_i$ are more spread out; intuitively we have more leverage to estimate a slope when this is the case. We also see that $\mathrm{SE}(\hat{\beta}_0)$ would be the same as $\mathrm{SE}(\hat{\mu})$ if $\bar{x}$ were zero (in which case $\hat{\beta}_0$ would be equal to $\bar{y}$). In general, $\sigma^2$ is not known, but can be estimated from the data. This estimate of $\sigma$ is known as the residual standard error, and is given by the formula $\mathrm{RSE} = \sqrt{\mathrm{RSS}/(n-2)}$. Strictly speaking, when $\sigma^2$ is estimated from the data we should write $\widehat{\mathrm{SE}}(\hat{\beta}_1)$ to indicate that an estimate has been made, but for simplicity of notation we will drop this extra “hat”.
Standard errors can be used to compute confidence intervals. A 95 % confidence interval is defined as a range of values such that with 95 % probability, the range will contain the true unknown value of the parameter. The range is defined in terms of lower and upper limits computed from the sample of data. A 95 % confidence interval has the following property: if we take repeated samples and construct the confidence interval for each sample, 95 % of the intervals will contain the true unknown value of the parameter. For linear regression, the 95 % confidence interval for $\beta_1$ approximately takes the form

$$\hat{\beta}_1 \pm 2 \cdot \mathrm{SE}(\hat{\beta}_1). \qquad (3.9)$$

That is, there is approximately a 95 % chance that the interval

$$\left[\hat{\beta}_1 - 2 \cdot \mathrm{SE}(\hat{\beta}_1),\ \hat{\beta}_1 + 2 \cdot \mathrm{SE}(\hat{\beta}_1)\right] \qquad (3.10)$$

will contain the true value of $\beta_1$.3 Similarly, a confidence interval for $\beta_0$ approximately takes the form

$$\hat{\beta}_0 \pm 2 \cdot \mathrm{SE}(\hat{\beta}_0). \qquad (3.11)$$

In the case of the advertising data, the 95 % confidence interval for $\beta_0$ is $[6.130, 7.935]$ and the 95 % confidence interval for $\beta_1$ is $[0.042, 0.053]$. Therefore, we can conclude that in the absence of any advertising, sales will, on average, fall somewhere between 6,130 and 7,935 units. Furthermore, for each $1,000 increase in television advertising, there will be an average increase in sales of between 42 and 53 units.
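In R, intervals of this kind can be obtained with confint(). The sketch below assumes the Advertising data have been read into a data frame named Advertising with columns sales and TV (for example via read.csv); it is an illustration of the call, not a verbatim reproduction of the numbers above.

```r
# Assumes: Advertising <- read.csv("Advertising.csv"), with columns sales and TV
fit <- lm(sales ~ TV, data = Advertising)
confint(fit, level = 0.95)   # approximate 95% intervals for the intercept and slope
```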
Standard errors can also be used to perform hypothesis tests on the coefficients. The most common hypothesis test involves testing the null hypothesis of

$$H_0: \text{There is no relationship between } X \text{ and } Y \qquad (3.12)$$

versus the alternative hypothesis

$$H_a: \text{There is some relationship between } X \text{ and } Y. \qquad (3.13)$$

Mathematically, this corresponds to testing

$$H_0: \beta_1 = 0$$

versus

$$H_a: \beta_1 \neq 0,$$

since if $\beta_1 = 0$ then the model (3.5) reduces to $Y = \beta_0 + \epsilon$, and $X$ is not associated with $Y$. To test the null hypothesis, we need to determine whether $\hat{\beta}_1$, our estimate for $\beta_1$, is sufficiently far from zero that we can be confident that $\beta_1$ is non-zero. How far is far enough? This of course depends on the accuracy of $\hat{\beta}_1$—that is, it depends on $\mathrm{SE}(\hat{\beta}_1)$. If $\mathrm{SE}(\hat{\beta}_1)$ is small, then even relatively small values of $\hat{\beta}_1$ may provide strong evidence that $\beta_1 \neq 0$, and hence that there is a relationship between $X$ and $Y$. In contrast, if $\mathrm{SE}(\hat{\beta}_1)$ is large, then $\hat{\beta}_1$ must be large in absolute value in order for us to reject the null hypothesis. In practice, we compute a $t$-statistic, given by

$$t = \frac{\hat{\beta}_1 - 0}{\mathrm{SE}(\hat{\beta}_1)}, \qquad (3.14)$$

which measures the number of standard deviations that $\hat{\beta}_1$ is away from 0. If there really is no relationship between $X$ and $Y$, then we expect that (3.14) will have a $t$-distribution with $n - 2$ degrees of freedom. The $t$-distribution has a bell shape and for values of $n$ greater than approximately 30 it is quite similar to the standard normal distribution. Consequently, it is a simple matter to compute the probability of observing any number equal to $|t|$ or larger in absolute value, assuming $\beta_1 = 0$. We call this probability the $p$-value. Roughly speaking, we interpret the $p$-value as follows: a small $p$-value indicates that it is unlikely to observe such a substantial association between the predictor and the response due to chance, in the absence of any real association between the predictor and the response. Hence, if we see a small $p$-value, then we can infer that there is an association between the predictor and the response. We reject the null hypothesis—that is, we declare a relationship to exist between $X$ and $Y$—if the $p$-value is small enough. Typical $p$-value cutoffs for rejecting the null hypothesis are 5 % or 1 %, although this topic will be explored in much greater detail in Chapter 13. When $n = 30$, these correspond to $t$-statistics (3.14) of around 2 and 2.75, respectively.
| | Coefficient | Std. error | $t$-statistic | $p$-value |
|---|---|---|---|---|
| Intercept | 7.0325 | 0.4578 | 15.36 | < 0.0001 |
| TV | 0.0475 | 0.0027 | 17.67 | < 0.0001 |
Table 3.1. For the Advertising data, coefficients of the least squares model for the regression of number of units sold on TV advertising budget. An increase of $1,000 in the TV advertising budget is associated with an increase in sales by around 50 units. (Recall that the sales variable is in thousands of units, and the TV variable is in thousands of dollars.)

Table 3.1 provides details of the least squares model for the regression of number of units sold on TV advertising budget for the Advertising data. Notice that the coefficients for $\hat{\beta}_0$ and $\hat{\beta}_1$ are very large relative to their standard errors, so the $t$-statistics are also large; the probabilities of seeing such values if $H_0$ is true are virtually zero. Hence we can conclude that $\beta_0 \neq 0$ and $\beta_1 \neq 0$.4
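A table of this form is produced directly by summary() in R; a minimal sketch, again under the assumption that the Advertising data frame has been loaded as above:

```r
fit <- lm(sales ~ TV, data = Advertising)   # Advertising assumed already read in
summary(fit)$coefficients   # estimates, standard errors, t-statistics, p-values
```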
3.1.3 Assessing the Accuracy of the Model
Once we have rejected the null hypothesis (3.12) in favor of the alternative hypothesis (3.13), it is natural to want to quantify the extent to which the model fits the data. The quality of a linear regression fit is typically assessed using two related quantities: the residual standard error (RSE) and the $R^2$ statistic.
| Quantity | Value |
|---|---|
| Residual standard error | 3.26 |
| $R^2$ | 0.612 |
| $F$-statistic | 312.1 |
Table 3.2. For the Advertising data, more information about the least squares model for the regression of number of units sold on TV advertising budget.

Table 3.2 displays the RSE, the $R^2$ statistic, and the $F$-statistic (to be described in Section 3.2.2) for the linear regression of number of units sold on TV advertising budget.
Residual Standard Error
Recall from the model (3.5) that associated with each observation is an error term $\epsilon$. Due to the presence of these error terms, even if we knew the true regression line (i.e. even if $\beta_0$ and $\beta_1$ were known), we would not be able to perfectly predict $Y$ from $X$. The RSE is an estimate of the standard deviation of $\epsilon$. Roughly speaking, it is the average amount that the response will deviate from the true regression line. It is computed using the formula

$$\mathrm{RSE} = \sqrt{\frac{1}{n-2}\,\mathrm{RSS}} = \sqrt{\frac{1}{n-2}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}. \qquad (3.15)$$

Note that RSS was defined in Section 3.1.1, and is given by the formula

$$\mathrm{RSS} = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2. \qquad (3.16)$$

In the case of the advertising data, we see from the linear regression output in Table 3.2 that the RSE is 3.26. In other words, actual sales in each market deviate from the true regression line by approximately 3,260 units, on average. Another way to think about this is that even if the model were correct and the true values of the unknown coefficients $\beta_0$ and $\beta_1$ were known exactly, any prediction of sales on the basis of TV advertising would still be off by about 3,260 units on average. Of course, whether or not 3,260 units is an acceptable prediction error depends on the problem context. In the advertising data set, the mean value of sales over all markets is approximately 14,000 units, and so the percentage error is $3{,}260/14{,}000 \approx 23\%$.

The RSE is considered a measure of the lack of fit of the model (3.5) to the data. If the predictions obtained using the model are very close to the true outcome values—that is, if $\hat{y}_i \approx y_i$ for $i = 1, \ldots, n$—then (3.15) will be small, and we can conclude that the model fits the data very well. On the other hand, if $\hat{y}_i$ is very far from $y_i$ for one or more observations, then the RSE may be quite large, indicating that the model doesn’t fit the data well.
$R^2$ Statistic

The RSE provides an absolute measure of lack of fit of the model (3.5) to the data. But since it is measured in the units of $Y$, it is not always clear what constitutes a good RSE. The $R^2$ statistic provides an alternative measure of fit. It takes the form of a proportion—the proportion of variance explained—and so it always takes on a value between 0 and 1, and is independent of the scale of $Y$.

To calculate $R^2$, we use the formula

$$R^2 = \frac{\mathrm{TSS} - \mathrm{RSS}}{\mathrm{TSS}} = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}, \qquad (3.17)$$

where $\mathrm{TSS} = \sum_{i=1}^{n}(y_i - \bar{y})^2$ is the total sum of squares, and RSS is defined in (3.16). TSS measures the total variance in the response $Y$, and can be thought of as the amount of variability inherent in the response before the regression is performed. In contrast, RSS measures the amount of variability that is left unexplained after performing the regression. Hence, $\mathrm{TSS} - \mathrm{RSS}$ measures the amount of variability in the response that is explained (or removed) by performing the regression, and $R^2$ measures the proportion of variability in $Y$ that can be explained using $X$. An $R^2$ statistic that is close to 1 indicates that a large proportion of the variability in the response is explained by the regression. A number near 0 indicates that the regression does not explain much of the variability in the response; this might occur because the linear model is wrong, or the error variance $\sigma^2$ is high, or both. In Table 3.2, the $R^2$ was 0.61, and so just under two-thirds of the variability in sales is explained by a linear regression on TV.
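The RSE and $R^2$ reported by summary() can be reproduced from the definitions (3.15)–(3.17); a sketch, under the same assumption that an Advertising data frame with sales and TV columns has been loaded:

```r
fit <- lm(sales ~ TV, data = Advertising)
rss <- sum(residuals(fit)^2)                                  # RSS, as in (3.16)
tss <- sum((Advertising$sales - mean(Advertising$sales))^2)   # TSS
n   <- nrow(Advertising)

sqrt(rss / (n - 2))   # RSE, as in (3.15); compare summary(fit)$sigma
1 - rss / tss         # R^2, as in (3.17); compare summary(fit)$r.squared
```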
The $R^2$ statistic (3.17) has an interpretational advantage over the RSE (3.15), since unlike the RSE, it always lies between 0 and 1. However, it can still be challenging to determine what is a good $R^2$ value, and in general, this will depend on the application. For instance, in certain problems in physics, we may know that the data truly comes from a linear model with a small residual error. In this case, we would expect to see an $R^2$ value that is extremely close to 1, and a substantially smaller $R^2$ value might indicate a serious problem with the experiment in which the data were generated. On the other hand, in typical applications in biology, psychology, marketing, and other domains, the linear model (3.5) is at best an extremely rough approximation to the data, and residual errors due to other unmeasured factors are often very large. In this setting, we would expect only a very small proportion of the variance in the response to be explained by the predictor, and an $R^2$ value well below 0.1 might be more realistic!

The $R^2$ statistic is a measure of the linear relationship between $X$ and $Y$. Recall that correlation, defined as

$$\mathrm{Cor}(X, Y) = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}, \qquad (3.18)$$

is also a measure of the linear relationship between $X$ and $Y$.5 This suggests that we might be able to use $r = \mathrm{Cor}(X, Y)$ instead of $R^2$ in order to assess the fit of the linear model. In fact, it can be shown that in the simple linear regression setting, $R^2 = r^2$. In other words, the squared correlation and the $R^2$ statistic are identical. However, in the next section we will discuss the multiple linear regression problem, in which we use several predictors simultaneously to predict the response. The concept of correlation between the predictors and the response does not extend automatically to this setting, since correlation quantifies the association between a single pair of variables rather than between a larger number of variables. We will see that $R^2$ fills this role.
3.2 Multiple Linear Regression
Simple linear regression is a useful approach for predicting a response on the basis of a single predictor variable. However, in practice we often have more than one predictor. For example, in the Advertising data, we have examined the relationship between sales and TV advertising. We also have data for the amount of money spent advertising on the radio and in newspapers, and we may want to know whether either of these two media is associated with sales. How can we extend our analysis of the advertising data in order to accommodate these two additional predictors?
One option is to run three separate simple linear regressions, each of which uses a different advertising medium as a predictor. For instance, we can fit a simple linear regression to predict sales on the basis of the amount spent on radio advertisements. Results are shown in Table 3.3 (top table). We find that a $1,000 increase in spending on radio advertising is associated with an increase in sales of around 203 units. Table 3.3 (bottom table) contains the least squares coefficients for a simple linear regression of sales onto newspaper advertising budget. A $1,000 increase in newspaper advertising budget is associated with an increase in sales of approximately 55 units.
Simple regression of sales on radio
| | Coefficient | Std. error | $t$-statistic | $p$-value |
|---|---|---|---|---|
| Intercept | 9.312 | 0.563 | 16.54 | < 0.0001 |
| radio | 0.203 | 0.020 | 9.92 | < 0.0001 |
Simple regression of sales on newspaper
| | Coefficient | Std. error | $t$-statistic | $p$-value |
|---|---|---|---|---|
| Intercept | 12.351 | 0.621 | 19.88 | < 0.0001 |
| newspaper | 0.055 | 0.017 | 3.30 | 0.00115 |
Table 3.3. More simple linear regression models for the Advertising data. Coefficients of the simple linear regression model for number of units sold on Top: radio advertising budget and Bottom: newspaper advertising budget. A $1,000 increase in spending on radio advertising is associated with an average increase in sales by around 203 units, while the same increase in spending on newspaper advertising is associated with an average increase in sales by around 55 units. (Note that the sales variable is in thousands of units, and the radio and newspaper variables are in thousands of dollars.)
However, the approach of fitting a separate simple linear regression model for each predictor is not entirely satisfactory. First of all, it is unclear how to make a single prediction of sales given the three advertising media budgets, since each of the budgets is associated with a separate regression equation. Second, each of the three regression equations ignores the other two media in forming estimates for the regression coefficients. We will see shortly that if the media budgets are correlated with each other in the 200 markets in our data set, then this can lead to very misleading estimates of the association between each media budget and sales.
Instead of fitting a separate simple linear regression model for each predictor, a better approach is to extend the simple linear regression model (3.5) so that it can directly accommodate multiple predictors. We can do this by giving each predictor a separate slope coefficient in a single model. In general, suppose that we have $p$ distinct predictors. Then the multiple linear regression model takes the form

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \epsilon, \qquad (3.19)$$

where $X_j$ represents the $j$th predictor and $\beta_j$ quantifies the association between that variable and the response. We interpret $\beta_j$ as the average effect on $Y$ of a one-unit increase in $X_j$, holding all other predictors fixed. In the advertising example, (3.19) becomes

$$\texttt{sales} = \beta_0 + \beta_1 \times \texttt{TV} + \beta_2 \times \texttt{radio} + \beta_3 \times \texttt{newspaper} + \epsilon. \qquad (3.20)$$
3.2.1 Estimating the Regression Coefficients
As was the case in the simple linear regression setting, the regression coefficients $\beta_0, \beta_1, \ldots, \beta_p$ in (3.19) are unknown, and must be estimated. Given estimates $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p$, we can make predictions using the formula

$$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 + \cdots + \hat{\beta}_p x_p. \qquad (3.21)$$

The parameters are estimated using the same least squares approach that we saw in the context of simple linear regression. We choose $\beta_0, \beta_1, \ldots, \beta_p$ to minimize the sum of squared residuals

$$\mathrm{RSS} = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 = \sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_{i1} - \hat{\beta}_2 x_{i2} - \cdots - \hat{\beta}_p x_{ip}\right)^2. \qquad (3.22)$$

The values $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p$ that minimize (3.22) are the multiple least squares regression coefficient estimates. Unlike the simple linear regression estimates given in (3.4), the multiple regression coefficient estimates have somewhat complicated forms that are most easily represented using matrix algebra. For this reason, we do not provide them here. Any statistical software package can be used to compute these coefficient estimates, and later in this chapter we will show how this can be done in R. Figure 3.4 illustrates an example of the least squares fit to a toy data set with $p = 2$ predictors.
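In R, the multiple least squares fit is obtained simply by listing all predictors in the lm() formula; a sketch, assuming the Advertising data frame used earlier:

```r
fit_all <- lm(sales ~ TV + radio + newspaper, data = Advertising)
coef(fit_all)      # estimates of beta_0, beta_1, beta_2, beta_3
summary(fit_all)   # standard errors, t-statistics, R^2, and the F-statistic
```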
Figure 3.4. In a three-dimensional setting, with two predictors and one response, the least squares regression line becomes a plane. The plane is chosen to minimize the sum of the squared vertical distances between each observation (shown in red) and the plane.
Table 3.4 displays the multiple regression coefficient estimates when TV, radio, and newspaper advertising budgets are used to predict product sales using the Advertising data. We interpret these results as follows: for a given amount of TV and newspaper advertising, spending an additional $1,000 on radio advertising is associated with approximately 189 units of additional sales. Comparing these coefficient estimates to those displayed in Tables 3.1 and 3.3, we notice that the multiple regression coefficient estimates for TV and radio are pretty similar to the simple linear regression coefficient estimates. However, while the newspaper regression coefficient estimate in Table 3.3 was significantly non-zero, the coefficient estimate for newspaper in the multiple regression model is close to zero, and the corresponding $p$-value is no longer significant, with a value around 0.86. This illustrates that the simple and multiple regression coefficients can be quite different. This difference stems from the fact that in the simple regression case, the slope term represents the average increase in product sales associated with a $1,000 increase in newspaper advertising, ignoring other predictors such as TV and radio. By contrast, in the multiple regression setting, the coefficient for newspaper represents the average increase in product sales associated with increasing newspaper spending by $1,000 while holding TV and radio fixed.
| | Coefficient | Std. error | $t$-statistic | $p$-value |
|---|---|---|---|---|
| Intercept | 2.939 | 0.3119 | 9.42 | < 0.0001 |
| TV | 0.046 | 0.0014 | 32.81 | < 0.0001 |
| radio | 0.189 | 0.0086 | 21.89 | < 0.0001 |
| newspaper | −0.001 | 0.0059 | −0.18 | 0.8599 |
Table 3.4. For the Advertising data, least squares coefficient estimates of the multiple linear regression of number of units sold on TV, radio, and newspaper advertising budgets.
| | TV | radio | newspaper | sales |
|---|---|---|---|---|
| TV | 1.0000 | 0.0548 | 0.0567 | 0.7822 |
| radio | | 1.0000 | 0.3541 | 0.5762 |
| newspaper | | | 1.0000 | 0.2283 |
| sales | | | | 1.0000 |
Table 3.5. Correlation matrix for TV, radio, newspaper, and sales for the Advertising data.
Does it make sense for the multiple regression to suggest no relationship between sales and newspaper while the simple linear regression implies the opposite? In fact it does. Consider the correlation matrix for the three predictor variables and response variable, displayed in Table 3.5. Notice that the correlation between radio and newspaper is 0.35. This indicates that markets with high newspaper advertising tend to also have high radio advertising. Now suppose that the multiple regression is correct and newspaper advertising is not associated with sales, but radio advertising is associated with sales. Then in markets where we spend more on radio our sales will tend to be higher, and as our correlation matrix shows, we also tend to spend more on newspaper advertising in those same markets. Hence, in a simple linear regression which only examines sales versus newspaper, we will observe that higher values of newspaper tend to be associated with higher values of sales, even though newspaper advertising is not directly associated with sales. So newspaper advertising is a surrogate for radio advertising; newspaper gets “credit” for the association between radio and sales.
This slightly counterintuitive result is very common in many real life situations. Consider an absurd example to illustrate the point. Running a regression of shark attacks versus ice cream sales for data collected at a given beach community over a period of time would show a positive relationship, similar to that seen between sales and newspaper. Of course no one has (yet) suggested that ice creams should be banned at beaches to reduce shark attacks. In reality, higher temperatures cause more people to visit the beach, which in turn results in more ice cream sales and more shark attacks. A multiple regression of shark attacks onto ice cream sales and temperature reveals that, as intuition implies, ice cream sales is no longer a significant predictor after adjusting for temperature.
3.2.2 Some Important Questions
When we perform multiple linear regression, we usually are interested in answering a few important questions.
- Is at least one of the predictors useful in predicting the response?
- Do all the predictors help to explain $Y$, or is only a subset of the predictors useful?
- How well does the model fit the data?
- Given a set of predictor values, what response value should we predict, and how accurate is our prediction?
We now address each of these questions in turn.
One: Is There a Relationship Between the Response and Predictors?
Recall that in the simple linear regression setting, in order to determine whether there is a relationship between the response and the predictor we can simply check whether $\beta_1 = 0$. In the multiple regression setting with $p$ predictors, we need to ask whether all of the regression coefficients are zero, i.e. whether $\beta_1 = \beta_2 = \cdots = \beta_p = 0$. As in the simple linear regression setting, we use a hypothesis test to answer this question. We test the null hypothesis,

$$H_0: \beta_1 = \beta_2 = \cdots = \beta_p = 0$$

versus the alternative

$$H_a: \text{at least one } \beta_j \text{ is non-zero.}$$

This hypothesis test is performed by computing the $F$-statistic,

$$F = \frac{(\mathrm{TSS} - \mathrm{RSS})/p}{\mathrm{RSS}/(n - p - 1)}, \qquad (3.23)$$

where, as with simple linear regression, $\mathrm{TSS} = \sum(y_i - \bar{y})^2$ and $\mathrm{RSS} = \sum(y_i - \hat{y}_i)^2$. If the linear model assumptions are correct, one can show that

$$E\{\mathrm{RSS}/(n - p - 1)\} = \sigma^2$$

and that, provided $H_0$ is true,

$$E\{(\mathrm{TSS} - \mathrm{RSS})/p\} = \sigma^2.$$

Hence, when there is no relationship between the response and predictors, one would expect the $F$-statistic to take on a value close to 1. On the other hand, if $H_a$ is true, then $E\{(\mathrm{TSS} - \mathrm{RSS})/p\} > \sigma^2$, so we expect $F$ to be greater than 1.

The $F$-statistic for the multiple linear regression model obtained by regressing sales onto radio, TV, and newspaper is shown in Table 3.6. In this example the $F$-statistic is 570. Since this is far larger than 1, it provides compelling evidence against the null hypothesis $H_0$. In other words, the large $F$-statistic suggests that at least one of the advertising media must be related to sales. However, what if the $F$-statistic had been closer to 1? How large does the $F$-statistic need to be before we can reject $H_0$ and conclude that there is a relationship?
| Quantity | Value |
|---|---|
| Residual standard error | 1.69 |
| $R^2$ | 0.897 |
| $F$-statistic | 570 |
Table 3.6. More information about the least squares model for the regression of number of units sold on TV, newspaper, and radio advertising budgets in the Advertising data. Other information about this model was displayed in Table 3.4.
It turns out that the answer depends on the values of $n$ and $p$. When $n$ is large, an $F$-statistic that is just a little larger than 1 might still provide evidence against $H_0$. In contrast, a larger $F$-statistic is needed to reject $H_0$ if $n$ is small. When $H_0$ is true and the errors $\epsilon_i$ have a normal distribution, the $F$-statistic follows an $F$-distribution.6 For any given value of $n$ and $p$, any statistical software package can be used to compute the $p$-value associated with the $F$-statistic using this distribution. Based on this $p$-value, we can determine whether or not to reject $H_0$. For the advertising data, the $p$-value associated with the $F$-statistic in Table 3.6 is essentially zero, so we have extremely strong evidence that at least one of the media is associated with increased sales.

In (3.23) we are testing $H_0$ that all the coefficients are zero. Sometimes we want to test that a particular subset of $q$ of the coefficients are zero. This corresponds to a null hypothesis

$$H_0: \beta_{p-q+1} = \beta_{p-q+2} = \cdots = \beta_p = 0,$$

where for convenience we have put the variables chosen for omission at the end of the list. In this case we fit a second model that uses all the variables except those last $q$. Suppose that the residual sum of squares for that model is $\mathrm{RSS}_0$. Then the appropriate $F$-statistic is

$$F = \frac{(\mathrm{RSS}_0 - \mathrm{RSS})/q}{\mathrm{RSS}/(n - p - 1)}. \qquad (3.24)$$

Notice that in Table 3.4, for each individual predictor a $t$-statistic and a $p$-value were reported. These provide information about whether each individual predictor is related to the response, after adjusting for the other predictors. It turns out that each of these is exactly equivalent7 to the $F$-test that omits that single variable from the model, leaving all the others in—i.e. $q = 1$ in (3.24). So it reports the partial effect of adding that variable to the model. For instance, as we discussed earlier, these $p$-values indicate that TV and radio are related to sales, but that there is no evidence that newspaper is associated with sales, when TV and radio are held fixed.
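In R, the $F$-statistic in (3.24) can be obtained by fitting the two nested models and comparing them with anova(); in this sketch $q = 1$ and the omitted variable is newspaper (the Advertising data frame is again assumed to be loaded):

```r
fit_reduced <- lm(sales ~ TV + radio, data = Advertising)             # omits newspaper
fit_full    <- lm(sales ~ TV + radio + newspaper, data = Advertising)
anova(fit_reduced, fit_full)   # partial F-test; matches the t-test for newspaper
```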
Given these individual $p$-values for each variable, why do we need to look at the overall $F$-statistic? After all, it seems likely that if any one of the $p$-values for the individual variables is very small, then at least one of the predictors is related to the response. However, this logic is flawed, especially when the number of predictors $p$ is large.

For instance, consider an example in which $p = 100$ and $H_0: \beta_1 = \beta_2 = \cdots = \beta_p = 0$ is true, so no variable is truly associated with the response. In this situation, about 5 % of the $p$-values associated with each variable (of the type shown in Table 3.4) will be below 0.05 by chance. In other words, we expect to see approximately five small $p$-values even in the absence of any true association between the predictors and the response.8 In fact, it is likely that we will observe at least one $p$-value below 0.05 by chance! Hence, if we use the individual $t$-statistics and associated $p$-values in order to decide whether or not there is any association between the variables and the response, there is a very high chance that we will incorrectly conclude that there is a relationship. However, the $F$-statistic does not suffer from this problem because it adjusts for the number of predictors. Hence, if $H_0$ is true, there is only a 5 % chance that the $F$-statistic will result in a $p$-value below 0.05, regardless of the number of predictors or the number of observations.

The approach of using an $F$-statistic to test for any association between the predictors and the response works when $p$ is relatively small, and certainly small compared to $n$. However, sometimes we have a very large number of variables. If $p > n$ then there are more coefficients $\beta_j$ to estimate than observations from which to estimate them. In this case we cannot even fit the multiple linear regression model using least squares, so the $F$-statistic cannot be used, and neither can most of the other concepts that we have seen so far in this chapter. When $p$ is large, some of the approaches discussed in the next section, such as forward selection, can be used. This high-dimensional setting is discussed in greater detail in Chapter 6.
Two: Deciding on Important Variables
As discussed in the previous section, the first step in a multiple regression analysis is to compute the $F$-statistic and to examine the associated $p$-value. If we conclude on the basis of that $p$-value that at least one of the predictors is related to the response, then it is natural to wonder which are the guilty ones! We could look at the individual $p$-values as in Table 3.4, but as discussed (and as further explored in Chapter 13), if $p$ is large we are likely to make some false discoveries.
It is possible that all of the predictors are associated with the response, but it is more often the case that the response is only associated with a subset of the predictors. The task of determining which predictors are associated with the response, in order to fit a single model involving only those predictors, is referred to as variable selection. The variable selection problem is studied extensively in Chapter 6, and so here we will provide only a brief outline of some classical approaches.
Ideally, we would like to perform variable selection by trying out a lot of different models, each containing a different subset of the predictors. For instance, if $p = 2$, then we can consider four models: (1) a model containing no variables, (2) a model containing $X_1$ only, (3) a model containing $X_2$ only, and (4) a model containing both $X_1$ and $X_2$. We can then select the best model out of all of the models that we have considered. How do we determine which model is best? Various statistics can be used to judge the quality of a model. These include Mallow’s $C_p$, Akaike information criterion (AIC), Bayesian information criterion (BIC), and adjusted $R^2$. These are discussed in more detail in Chapter 6. We can also determine which model is best by plotting various model outputs, such as the residuals, in order to search for patterns.

Unfortunately, there are a total of $2^p$ models that contain subsets of $p$ variables. This means that even for moderate $p$, trying out every possible subset of the predictors is infeasible. For instance, we saw that if $p = 2$, then there are $2^2 = 4$ models to consider. But if $p = 30$, then we must consider $2^{30} = 1{,}073{,}741{,}824$ models! This is not practical. Therefore, unless $p$ is very small, we cannot consider all $2^p$ models, and instead we need an automated and efficient approach to choose a smaller set of models to consider. There are three classical approaches for this task:
- Forward selection. We begin with the null model—a model that contains an intercept but no predictors. We then fit $p$ simple linear regressions and add to the null model the variable that results in the lowest RSS. We then add to that model the variable that results in the lowest RSS for the new two-variable model. This approach is continued until some stopping rule is satisfied.
- Backward selection. We start with all variables in the model, and remove the variable with the largest $p$-value—that is, the variable that is the least statistically significant. The new $(p-1)$-variable model is fit, and the variable with the largest $p$-value is removed. This procedure continues until a stopping rule is reached. For instance, we may stop when all remaining variables have a $p$-value below some threshold.
- Mixed selection. This is a combination of forward and backward selection. We start with no variables in the model, and as with forward selection, we add the variable that provides the best fit. We continue to add variables one-by-one. Of course, as we noted with the Advertising example, the $p$-values for variables can become larger as new predictors are added to the model. Hence, if at any point the $p$-value for one of the variables in the model rises above a certain threshold, then we remove that variable from the model. We continue to perform these forward and backward steps until all variables in the model have a sufficiently low $p$-value, and all variables outside the model would have a large $p$-value if added to the model.

Backward selection cannot be used if $p > n$, while forward selection can always be used. Forward selection is a greedy approach, and might include variables early that later become redundant. Mixed selection can remedy this.
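A minimal sketch of RSS-based forward selection, written as a plain R loop for clarity (packages such as leaps automate this); the function name, the response/column conventions, and the simple "stop after nmax variables" rule are all illustrative assumptions rather than a prescribed procedure:

```r
# Forward selection by RSS on a data frame `dat` whose columns are the
# response plus candidate predictors. Stops after `nmax` variables; a real
# analysis would instead use a criterion such as AIC, BIC, or adjusted R^2.
forward_select <- function(dat, response, nmax = 3) {
  remaining <- setdiff(names(dat), response)
  chosen <- character(0)
  while (length(chosen) < nmax && length(remaining) > 0) {
    rss <- sapply(remaining, function(v) {
      f <- reformulate(c(chosen, v), response = response)
      sum(residuals(lm(f, data = dat))^2)   # RSS of the candidate model
    })
    best <- names(which.min(rss))           # variable giving the lowest RSS
    chosen <- c(chosen, best)
    remaining <- setdiff(remaining, best)
  }
  chosen
}

# Example call (assuming the Advertising data frame is loaded):
# forward_select(Advertising, response = "sales", nmax = 2)
```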
Three: Model Fit
Two of the most common numerical measures of model fit are the RSE and $R^2$, the fraction of variance explained. These quantities are computed and interpreted in the same fashion as for simple linear regression.

Recall that in simple regression, $R^2$ is the square of the correlation of the response and the variable. In multiple linear regression, it turns out that it equals $\mathrm{Cor}(Y, \hat{Y})^2$, the square of the correlation between the response and the fitted linear model; in fact one property of the fitted linear model is that it maximizes this correlation among all possible linear models.

An $R^2$ value close to 1 indicates that the model explains a large portion of the variance in the response variable. As an example, we saw in Table 3.6 that for the Advertising data, the model that uses all three advertising media to predict sales has an $R^2$ of 0.8972. On the other hand, the model that uses only TV and radio to predict sales has an $R^2$ value of 0.89719. In other words, there is a small increase in $R^2$ if we include newspaper advertising in the model that already contains TV and radio advertising, even though we saw earlier that the $p$-value for newspaper advertising in Table 3.4 is not significant. It turns out that $R^2$ will always increase when more variables are added to the model, even if those variables are only weakly associated with the response. This is due to the fact that adding another variable always results in a decrease in the residual sum of squares on the training data (though not necessarily the testing data). Thus, the $R^2$ statistic, which is also computed on the training data, must increase. The fact that adding newspaper advertising to the model containing only TV and radio advertising leads to just a tiny increase in $R^2$ provides additional evidence that newspaper can be dropped from the model. Essentially, newspaper provides no real improvement in the model fit to the training samples, and its inclusion will likely lead to poor results on independent test samples due to overfitting.

By contrast, the model containing only TV as a predictor had an $R^2$ of 0.61 (Table 3.2). Adding radio to the model leads to a substantial improvement in $R^2$. This implies that a model that uses TV and radio expenditures to predict sales is substantially better than one that uses only TV advertising. We could further quantify this improvement by looking at the $p$-value for the radio coefficient in a model that contains only TV and radio as predictors.

The model that contains only TV and radio as predictors has an RSE of 1.681, and the model that also contains newspaper as a predictor has an RSE of 1.686 (Table 3.6). In contrast, the model that contains only TV has an RSE of 3.26 (Table 3.2). This corroborates our previous conclusion that a model that uses TV and radio expenditures to predict sales is much more accurate (on the training data) than one that only uses TV spending. Furthermore, given that TV and radio expenditures are used as predictors, there is no point in also using newspaper spending as a predictor in the model. The observant reader may wonder how RSE can increase when newspaper is added to the model given that RSS must decrease. In general RSE is defined as

$$\mathrm{RSE} = \sqrt{\frac{\mathrm{RSS}}{n - p - 1}}, \qquad (3.25)$$

which simplifies to (3.15) for a simple linear regression. Thus, models with more variables can have higher RSE if the decrease in RSS is small relative to the increase in $p$.
Figure 3.5. For the Advertising data, a linear regression fit to sales using TV and radio as predictors. From the pattern of the residuals, we can see that there is a pronounced non-linear relationship in the data. The positive residuals (those visible above the surface) tend to lie along the 45-degree line, where TV and Radio budgets are split evenly. The negative residuals (most not visible) tend to lie away from this line, where budgets are more lopsided.
In addition to looking at the RSE and statistics just discussed, it can be useful to plot the data. Graphical summaries can reveal problems with a model that are not visible from numerical statistics. For example, Figure 3.5 displays a three-dimensional plot of TV and radio versus sales. We see that some observations lie above and some observations lie below the least squares regression plane. In particular, the linear model seems to overestimate sales for instances in which most of the advertising money was spent exclusively on either TV or radio. It underestimates sales for instances where the budget was split between the two media. This pronounced non-linear pattern suggests a synergy or interaction effect between the advertising media, whereby combining the media together results in a bigger boost to sales than using any single medium. In Section 3.3.2, we will discuss extending the linear model to accommodate such synergistic effects through the use of interaction terms.
Four: Predictions
Once we have fit the multiple regression model, it is straightforward to apply (3.21) in order to predict the response $Y$ on the basis of a set of values for the predictors $X_1, X_2, \ldots, X_p$. However, there are three sorts of uncertainty associated with this prediction.

- The coefficient estimates $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p$ are estimates for $\beta_0, \beta_1, \ldots, \beta_p$. That is, the least squares plane
  $$\hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 X_1 + \cdots + \hat{\beta}_p X_p$$
  is only an estimate for the true population regression plane
  $$f(X) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p.$$
  The inaccuracy in the coefficient estimates is related to the reducible error from Chapter 2. We can compute a confidence interval in order to determine how close $\hat{Y}$ will be to $f(X)$.
- Of course, in practice assuming a linear model for $f(X)$ is almost always an approximation of reality, so there is an additional source of potentially reducible error which we call model bias. So when we use a linear model, we are in fact estimating the best linear approximation to the true surface. However, here we will ignore this discrepancy, and operate as if the linear model were correct.
- Even if we knew $f(X)$—that is, even if we knew the true values for $\beta_0, \beta_1, \ldots, \beta_p$—the response value cannot be predicted perfectly because of the random error $\epsilon$ in the model (3.20). In Chapter 2, we referred to this as the irreducible error. How much will $Y$ vary from $\hat{Y}$? We use prediction intervals to answer this question. Prediction intervals are always wider than confidence intervals, because they incorporate both the error in the estimate for $f(X)$ (the reducible error) and the uncertainty as to how much an individual point will differ from the population regression plane (the irreducible error).

We use a confidence interval to quantify the uncertainty surrounding the average sales over a large number of cities. For example, given that $100,000 is spent on TV advertising and $20,000 is spent on radio advertising in each city, the 95 % confidence interval is $[10{,}985,\ 11{,}528]$. We interpret this to mean that 95 % of intervals of this form will contain the true value of $f(X)$.9 On the other hand, a prediction interval can be used to quantify the uncertainty surrounding sales for a particular city. Given that $100,000 is spent on TV advertising and $20,000 is spent on radio advertising in that city, the 95 % prediction interval is $[7{,}930,\ 14{,}580]$. We interpret this to mean that 95 % of intervals of this form will contain the true value of $Y$ for this city. Note that both intervals are centered at 11,256, but that the prediction interval is substantially wider than the confidence interval, reflecting the increased uncertainty about sales for a given city in comparison to the average sales over many locations.
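In R, both kinds of intervals are produced by predict() on a fitted model; a sketch, assuming the Advertising data frame and a model using TV and radio (budgets in thousands of dollars):

```r
fit <- lm(sales ~ TV + radio, data = Advertising)
new_city <- data.frame(TV = 100, radio = 20)       # $100,000 on TV, $20,000 on radio

predict(fit, new_city, interval = "confidence")    # interval for average sales, f(X)
predict(fit, new_city, interval = "prediction")    # wider interval for one city's sales
```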
3.3 Other Considerations in the Regression Model
3.3.1 Qualitative Predictors
In our discussion so far, we have assumed that all variables in our linear regression model are quantitative. But in practice, this is not necessarily the case; often some predictors are qualitative.
For example, the Credit data set displayed in Figure 3.6 records variables for a number of credit card holders. The response is balance (average credit card debt for each individual) and there are several quantitative predictors: age, cards (number of credit cards), education (years of education), income (in thousands of dollars), limit (credit limit), and rating (credit rating). Each panel of Figure 3.6 is a scatterplot for a pair of variables whose identities are given by the corresponding row and column labels. For example, the scatterplot directly to the right of the word “Balance” depicts balance versus age, while the plot directly to the right of “Age” corresponds to age versus cards. In addition to these quantitative variables, we also have four qualitative variables: own (house ownership), student (student status), status (marital status), and region (East, West or South).
Figure 3.6. The Credit data set contains information about balance, age, cards, education, income, limit, and rating for a number of potential customers.
Predictors with Only Two Levels
Suppose that we wish to investigate differences in credit card balance between those who own a house and those who don’t, ignoring the other variables for the moment. If a qualitative predictor (also known as a factor) only has two levels, or possible values, then incorporating it into a regression model is very simple. We simply create an indicator or dummy variable that takes on two possible numerical values.10 For example, based on the own variable, we can create a new variable that takes the form

$$x_i = \begin{cases} 1 & \text{if the } i\text{th person owns a house} \\ 0 & \text{if the } i\text{th person does not own a house,} \end{cases} \qquad (3.26)$$

and use this variable as a predictor in the regression equation. This results in the model

$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i = \begin{cases} \beta_0 + \beta_1 + \epsilon_i & \text{if the } i\text{th person owns a house} \\ \beta_0 + \epsilon_i & \text{if the } i\text{th person does not.} \end{cases} \qquad (3.27)$$

Now $\beta_0$ can be interpreted as the average credit card balance among those who do not own, $\beta_0 + \beta_1$ as the average credit card balance among those who do own their house, and $\beta_1$ as the average difference in credit card balance between owners and non-owners.

Table 3.7 displays the coefficient estimates and other information associated with the model (3.27). The average credit card debt for non-owners is estimated to be $509.80, whereas owners are estimated to carry $19.73 in additional debt for a total of $509.80 + $19.73 = $529.53. However, we notice that the $p$-value for the dummy variable is very high. This indicates that there is no statistical evidence of a difference in average credit card balance based on house ownership.
| | Coefficient | Std. error | $t$-statistic | $p$-value |
|---|---|---|---|---|
| Intercept | 509.80 | 33.13 | 15.389 | < 0.0001 |
| own[Yes] | 19.73 | 46.05 | 0.429 | 0.6690 |
Table 3.7. Least squares coefficient estimates associated with the regression of balance onto own in the Credit data set. The linear model is given in (3.27). That is, ownership is encoded as a dummy variable, as in (3.26).
The decision to code owners as 1 and non-owners as 0 in (3.27) is arbitrary, and has no effect on the regression fit, but does alter the interpretation of the coefficients. If we had coded non-owners as 1 and owners as 0, then the estimates for $\beta_0$ and $\beta_1$ would have been 529.53 and $-19.73$, respectively, leading once again to a prediction of credit card debt of $529.53 − $19.73 = $509.80 for non-owners and a prediction of $529.53 for owners. Alternatively, instead of a 0/1 coding scheme, we could create a dummy variable

$$x_i = \begin{cases} 1 & \text{if the } i\text{th person owns a house} \\ -1 & \text{if the } i\text{th person does not own a house} \end{cases}$$

and use this variable in the regression equation. This results in the model

$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i = \begin{cases} \beta_0 + \beta_1 + \epsilon_i & \text{if the } i\text{th person owns a house} \\ \beta_0 - \beta_1 + \epsilon_i & \text{if the } i\text{th person does not.} \end{cases}$$

Now $\beta_0$ can be interpreted as the overall average credit card balance (ignoring the house ownership effect), and $\beta_1$ is the amount by which house owners and non-owners have credit card balances that are above and below the average, respectively.11 In this example, the estimate for $\beta_0$ is $519.665, halfway between the non-owner and owner averages of $509.80 and $529.53. The estimate for $\beta_1$ is $9.865, which is half of $19.73, the average difference between owners and non-owners. It is important to note that the final predictions for the credit balances of owners and non-owners will be identical regardless of the coding scheme used. The only difference is in the way that the coefficients are interpreted.
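In R, a two-level qualitative variable stored as a factor is converted to a 0/1 dummy automatically by lm(); the sketch below assumes a Credit data frame with a numeric balance column and a factor own (column names are assumptions based on the description above):

```r
# Assumes: Credit data frame with numeric `balance` and factor `own`
fit <- lm(balance ~ own, data = Credit)
coef(fit)               # intercept = non-owner average; own coefficient = difference
contrasts(Credit$own)   # shows the 0/1 dummy coding R chose for the factor
```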
Qualitative Predictors with More than Two Levels
When a qualitative predictor has more than two levels, a single dummy variable cannot represent all possible values. In this situation, we can create additional dummy variables. For example, for the region variable we create two dummy variables. The first could be

$$x_{i1} = \begin{cases} 1 & \text{if the } i\text{th person is from the South} \\ 0 & \text{if the } i\text{th person is not from the South,} \end{cases} \qquad (3.28)$$

and the second could be

$$x_{i2} = \begin{cases} 1 & \text{if the } i\text{th person is from the West} \\ 0 & \text{if the } i\text{th person is not from the West.} \end{cases} \qquad (3.29)$$

Then both of these variables can be used in the regression equation, in order to obtain the model

$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \epsilon_i = \begin{cases} \beta_0 + \beta_1 + \epsilon_i & \text{if the } i\text{th person is from the South} \\ \beta_0 + \beta_2 + \epsilon_i & \text{if the } i\text{th person is from the West} \\ \beta_0 + \epsilon_i & \text{if the } i\text{th person is from the East.} \end{cases} \qquad (3.30)$$

Now $\beta_0$ can be interpreted as the average credit card balance for individuals from the East, $\beta_1$ can be interpreted as the difference in the average balance between people from the South versus the East, and $\beta_2$ can be interpreted as the difference in the average balance between those from the West versus the East. There will always be one fewer dummy variable than the number of levels. The level with no dummy variable—East in this example—is known as the baseline.
| | Coefficient | Std. error | $t$-statistic | $p$-value |
|---|---|---|---|---|
| Intercept | 531.00 | 46.32 | 11.464 | < 0.0001 |
| region[South] | −12.50 | 56.68 | −0.221 | 0.8260 |
| region[West] | −18.69 | 65.02 | −0.287 | 0.7740 |
Table 3.8. Least squares coefficient estimates associated with the regression of balance onto region in the Credit data set. The linear model is given in (3.30). That is, region is encoded via two dummy variables (3.28) and (3.29).

From Table 3.8, we see that the estimated balance for the baseline, East, is $531.00. It is estimated that those in the South will have $12.50 less debt than those in the East, and that those in the West will have $18.69 less debt than those in the East. However, the $p$-values associated with the coefficient estimates for the two dummy variables are very large, suggesting no statistical evidence of a real difference in average credit card balance between South and East or between West and East.12 Once again, the level selected as the baseline category is arbitrary, and the final predictions for each group will be the same regardless of this choice. However, the coefficients and their $p$-values do depend on the choice of dummy variable coding. Rather than rely on the individual coefficients, we can use an $F$-test to test $H_0: \beta_1 = \beta_2 = 0$; this does not depend on the coding. This $F$-test has a $p$-value of 0.96, indicating that we cannot reject the null hypothesis that there is no relationship between balance and region.
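For a factor with more than two levels, lm() creates the dummy variables automatically, and the coding-independent $F$-test can be obtained with anova(); a sketch under the same assumption about a Credit data frame with balance and region columns:

```r
fit <- lm(balance ~ region, data = Credit)   # region: factor with levels East, South, West
summary(fit)   # one coefficient per non-baseline level, as in Table 3.8
anova(fit)     # F-test of H0: beta_1 = beta_2 = 0, independent of the dummy coding
```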
Using this dummy variable approach presents no difficulties when incorporating both quantitative and qualitative predictors. For example, to regress balance on both a quantitative variable such as income and a qualitative variable such as student, we must simply create a dummy variable for student and then fit a multiple regression model using income and the dummy variable as predictors for credit card balance.
There are many different ways of coding qualitative variables besides the dummy variable approach taken here. All of these approaches lead to equivalent model fits, but the coefficients are different and have different interpretations, and are designed to measure particular contrasts. This topic is beyond the scope of the book.
3.3.2 Extensions of the Linear Model
The standard linear regression model (3.19) provides interpretable results and works quite well on many real-world problems. However, it makes several highly restrictive assumptions that are often violated in practice. Two of the most important assumptions state that the relationship between the predictors and the response is additive and linear. The additivity assumption means that the association between a predictor $X_j$ and the response $Y$ does not depend on the values of the other predictors. The linearity assumption states that the change in the response $Y$ associated with a one-unit change in $X_j$ is constant, regardless of the value of $X_j$. In later chapters of this book, we examine a number of sophisticated methods that relax these two assumptions. Here, we briefly examine some common classical approaches for extending the linear model.
Removing the Additive Assumption
In our previous analysis of the Advertising data, we concluded that both TV and radio seem to be associated with sales. The linear models that formed the basis for this conclusion assumed that the effect on sales of increasing one advertising medium is independent of the amount spent on the other media. For example, the linear model (3.20) states that the average increase in sales associated with a one-unit increase in TV is always $\beta_1$, regardless of the amount spent on radio.
However, this simple model may be incorrect. Suppose that spending money on radio advertising actually increases the effectiveness of TV advertising, so that the slope term for TV should increase as radio increases. In this situation, given a fixed budget of $100,000, spending half on radio and half on TV may increase sales more than allocating the entire amount to either TV or to radio. In marketing, this is known as a synergy effect, and in statistics it is referred to as an interaction effect. Figure 3.5 suggests that such an effect may be present in the advertising data. Notice that when levels of either TV or radio are low, then the true sales are lower than predicted by the linear model. But when advertising is split between the two media, then the model tends to underestimate sales.
Consider the standard linear regression model with two variables,

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon.$$

According to this model, a one-unit increase in $X_1$ is associated with an average increase in $Y$ of $\beta_1$ units. Notice that the presence of $X_2$ does not alter this statement—that is, regardless of the value of $X_2$, a one-unit increase in $X_1$ is associated with a $\beta_1$-unit increase in $Y$. One way of extending this model is to include a third predictor, called an interaction term, which is constructed by computing the product of $X_1$ and $X_2$. This results in the model

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2 + \epsilon. \tag{3.31}$$
How does inclusion of this interaction term relax the additive assumption? Notice that (3.31) can be rewritten as

$$Y = \beta_0 + (\beta_1 + \beta_3 X_2) X_1 + \beta_2 X_2 + \epsilon = \beta_0 + \tilde\beta_1 X_1 + \beta_2 X_2 + \epsilon,$$

where $\tilde\beta_1 = \beta_1 + \beta_3 X_2$. Since $\tilde\beta_1$ is now a function of $X_2$, the association between $X_1$ and $Y$ is no longer constant: a change in the value of $X_2$ will change the association between $X_1$ and $Y$. A similar argument shows that a change in the value of $X_1$ changes the association between $X_2$ and $Y$.
For example, suppose that we are interested in studying the productivity of a factory. We wish to predict the number of units produced on the basis of the number of production lines and the total number of workers. It seems likely that the effect of increasing the number of production lines will depend on the number of workers, since if no workers are available to operate the lines, then increasing the number of lines will not increase production. This suggests that it would be appropriate to include an interaction term between lines and workers in a linear model to predict units. Suppose that when we fit the model, we obtain

$$\widehat{\text{units}} \approx 1.2 + 3.4 \times \text{lines} + 0.22 \times \text{workers} + 1.4 \times (\text{lines} \times \text{workers}) = 1.2 + (3.4 + 1.4 \times \text{workers}) \times \text{lines} + 0.22 \times \text{workers}.$$

In other words, adding an additional line will increase the number of units produced by $3.4 + 1.4 \times \text{workers}$. Hence the more workers we have, the stronger will be the effect of lines.
We now return to the Advertising example. A linear model that uses radio, TV, and an interaction between the two to predict sales takes the form

$$\text{sales} = \beta_0 + \beta_1 \times \text{TV} + \beta_2 \times \text{radio} + \beta_3 \times (\text{radio} \times \text{TV}) + \epsilon = \beta_0 + (\beta_1 + \beta_3 \times \text{radio}) \times \text{TV} + \beta_2 \times \text{radio} + \epsilon. \tag{3.33}$$

We can interpret $\beta_3$ as the increase in the effectiveness of TV advertising associated with a one-unit increase in radio advertising (or vice-versa). The coefficients that result from fitting the model (3.33) are given in Table 3.9.
| | Coefficient | Std. error | t-statistic | p-value |
|---|---|---|---|---|
| Intercept | 6.7502 | 0.248 | 27.23 | < 0.0001 |
| TV | 0.0191 | 0.002 | 12.70 | < 0.0001 |
| radio | 0.0289 | 0.009 | 3.24 | 0.0014 |
| TV×radio | 0.0011 | 0.000 | 20.73 | < 0.0001 |
Table 3.9. For the Advertising data, least squares coefficient estimates associated with the regression of sales onto TV and radio, with an interaction term, as in (3.33).
The results in Table 3.9 strongly suggest that the model that includes the interaction term is superior to the model that contains only main effects. The p-value for the interaction term, TV×radio, is extremely low, indicating that there is strong evidence for $H_a: \beta_3 \neq 0$. In other words, it is clear that the true relationship is not additive. The $R^2$ for the model (3.33) is 96.8 %, compared to only 89.7 % for the model that predicts sales using TV and radio without an interaction term. This means that $(96.8 - 89.7)/(100 - 89.7) = 69\,\%$ of the variability in sales that remains after fitting the additive model has been explained by the interaction term. The coefficient estimates in Table 3.9 suggest that an increase in TV advertising of $1,000 is associated with increased sales of $(\hat\beta_1 + \hat\beta_3 \times \text{radio}) \times 1{,}000 = 19 + 1.1 \times \text{radio}$ units. And an increase in radio advertising of $1,000 will be associated with an increase in sales of $(\hat\beta_2 + \hat\beta_3 \times \text{TV}) \times 1{,}000 = 29 + 1.1 \times \text{TV}$ units.
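As a rough illustration of these numbers, the following sketch fits the additive and interaction models with statsmodels and compares their $R^2$ values; it assumes a local copy of the Advertising data in a file named Advertising.csv.

import pandas as pd
import statsmodels.api as sm

Advertising = pd.read_csv('Advertising.csv')            # assumed local copy of the data
X_add = sm.add_constant(Advertising[['TV', 'radio']])
X_int = X_add.copy()
X_int['TV:radio'] = Advertising['TV'] * Advertising['radio']   # interaction term

additive = sm.OLS(Advertising['sales'], X_add).fit()
interaction = sm.OLS(Advertising['sales'], X_int).fit()
print(additive.rsquared, interaction.rsquared)          # roughly 0.897 and 0.968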
In this example, the p-values associated with TV, radio, and the interaction term all are statistically significant (Table 3.9), and so it is obvious that all three variables should be included in the model. However, it is sometimes the case that an interaction term has a very small p-value, but the associated main effects (in this case, TV and radio) do not. The hierarchical principle states that if we include an interaction in a model, we should also include the main effects, even if the p-values associated with their coefficients are not significant. In other words, if the interaction between $X_1$ and $X_2$ seems important, then we should include both $X_1$ and $X_2$ in the model even if their coefficient estimates have large p-values. The rationale for this principle is that if $X_1 \times X_2$ is related to the response, then whether or not the coefficients of $X_1$ or $X_2$ are exactly zero is of little interest. Also $X_1 \times X_2$ is typically correlated with $X_1$ and $X_2$, and so leaving them out tends to alter the meaning of the interaction.
In the previous example, we considered an interaction between TV and radio, both of which are quantitative variables. However, the concept of interactions applies just as well to qualitative variables, or to a combination of quantitative and qualitative variables. In fact, an interaction between a qualitative variable and a quantitative variable has a particularly nice interpretation. Consider the Credit data set from Section 3.3.1, and suppose that we wish to predict balance using the income (quantitative) and student (qualitative) variables. In the absence of an interaction term, the model takes the form

$$\text{balance}_i \approx \beta_0 + \beta_1 \times \text{income}_i + \begin{cases} \beta_2 & \text{if } i\text{th person is a student} \\ 0 & \text{if not} \end{cases} = \beta_1 \times \text{income}_i + \begin{cases} \beta_0 + \beta_2 & \text{if } i\text{th person is a student} \\ \beta_0 & \text{if not.} \end{cases} \tag{3.34}$$

Notice that this amounts to fitting two parallel lines to the data, one for students and one for non-students. The lines for students and non-students have different intercepts, $\beta_0 + \beta_2$ versus $\beta_0$, but the same slope, $\beta_1$. This is illustrated in the left-hand panel of Figure 3.7. The fact that the lines are parallel means that the average effect on balance of a one-unit increase in income does not depend on whether or not the individual is a student. This represents a potentially serious limitation of the model, since in fact a change in income may have a very different effect on the credit card balance of a student versus a non-student.
This limitation can be addressed by adding an interaction variable, created by multiplying income with the dummy variable for student. Our model now becomes

$$\text{balance}_i \approx \beta_0 + \beta_1 \times \text{income}_i + \begin{cases} \beta_2 + \beta_3 \times \text{income}_i & \text{if student} \\ 0 & \text{if not} \end{cases} = \begin{cases} (\beta_0 + \beta_2) + (\beta_1 + \beta_3) \times \text{income}_i & \text{if student} \\ \beta_0 + \beta_1 \times \text{income}_i & \text{if not.} \end{cases} \tag{3.35}$$

Once again, we have two different regression lines for the students and the non-students. But now those regression lines have different intercepts, $\beta_0 + \beta_2$ versus $\beta_0$, as well as different slopes, $\beta_1 + \beta_3$ versus $\beta_1$. This allows for the possibility that changes in income may affect the credit card balances of students and non-students differently. The right-hand panel of Figure 3.7 shows the estimated relationships between income and balance for students and non-students in the model (3.35). We note that the slope for students is lower than the slope for non-students. This suggests that increases in income are associated with smaller increases in credit card balance among students as compared to non-students.
Figure 3.7. For the Credit data, the least squares lines are shown for prediction of balance from income for students and non-students. Left: The model (3.34) was fit. There is no interaction between income and student. Right: The model (3.35) was fit. There is an interaction term between income and student.
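A minimal sketch of models (3.34) and (3.35) in Python, assuming the Credit data with columns Balance, Income, and Student coded as Yes/No: the only difference between the two models is an extra column containing the product of income and the student dummy.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from ISLP import load_data

Credit = load_data('Credit')
student = (Credit['Student'] == 'Yes').astype(float)    # dummy variable for student status (coding assumed)
X = pd.DataFrame({'intercept': np.ones(Credit.shape[0]),
                  'income': Credit['Income'],
                  'student': student})
X_int = X.copy()
X_int['income:student'] = Credit['Income'] * student    # allows a different slope for students

parallel = sm.OLS(Credit['Balance'], X).fit()           # model (3.34): parallel lines
interaction = sm.OLS(Credit['Balance'], X_int).fit()    # model (3.35): different intercepts and slopes
print(interaction.params)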
Non-linear Relationships
As discussed previously, the linear regression model (3.19) assumes a linear relationship between the response and predictors. But in some cases, the true relationship between the response and the predictors may be non-linear. Here we present a very simple way to directly extend the linear model to accommodate non-linear relationships, using polynomial regression. In later chapters, we will present more complex approaches for performing non-linear fits in more general settings.
Consider Figure 3.8, in which the mpg (gas mileage in miles per gallon) versus horsepower is shown for a number of cars in the Auto data set. The orange line represents the linear regression fit. There is a pronounced relationship between mpg and horsepower, but it seems clear that this relationship is in fact non-linear: the data suggest a curved relationship. A simple approach for incorporating non-linear associations in a linear model is to include transformed versions of the predictors. For example, the points in Figure 3.8 seem to have a quadratic shape, suggesting that a model of the form

$$\text{mpg} = \beta_0 + \beta_1 \times \text{horsepower} + \beta_2 \times \text{horsepower}^2 + \epsilon \tag{3.36}$$

may provide a better fit. Equation 3.36 involves predicting mpg using a non-linear function of horsepower. But it is still a linear model! That is, (3.36) is simply a multiple linear regression model with $X_1 = \text{horsepower}$ and $X_2 = \text{horsepower}^2$. So we can use standard linear regression software to estimate $\beta_0$, $\beta_1$, and $\beta_2$ in order to produce a non-linear fit. The blue curve in Figure 3.8 shows the resulting quadratic fit to the data. The quadratic fit appears to be substantially better than the fit obtained when just the linear term is included. The $R^2$ of the quadratic fit is 0.688, compared to 0.606 for the linear fit, and the p-value in Table 3.10 for the quadratic term is highly significant.
Figure 3.8. The Auto data set. For a number of cars, mpg and horsepower are shown. The linear regression fit is shown in orange. The linear regression fit for a model that includes horsepower² is shown as a blue curve. The linear regression fit for a model that includes all polynomials of horsepower up to fifth-degree is shown in green.
| | Coefficient | Std. error | t-statistic | p-value |
|---|---|---|---|---|
| Intercept | 56.9001 | 1.8004 | 31.6 | < 0.0001 |
| horsepower | −0.4662 | 0.0311 | −15.0 | < 0.0001 |
| horsepower² | 0.0012 | 0.0001 | 10.1 | < 0.0001 |
Table 3.10. For the Auto data set, least squares coefficient estimates associated with the regression of mpg onto horsepower and horsepower².
If including horsepower² led to such a big improvement in the model, why not include horsepower³, horsepower⁴, or even horsepower⁵? The green curve in Figure 3.8 displays the fit that results from including all polynomials up to fifth degree in the model (3.36). The resulting fit seems unnecessarily wiggly—that is, it is unclear that including the additional terms really has led to a better fit to the data.
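A short sketch of how such polynomial fits might be computed with statsmodels, using raw powers of horsepower from the Auto data; the $R^2$ values printed should roughly match the 0.606 and 0.688 quoted above.

import pandas as pd
import statsmodels.api as sm
from ISLP import load_data

Auto = load_data('Auto')
hp = Auto['horsepower'].astype(float)
y = Auto['mpg']

def poly_fit(degree):
    # model matrix with columns horsepower, horsepower^2, ..., horsepower^degree
    X = pd.DataFrame({f'hp^{d}': hp**d for d in range(1, degree + 1)})
    return sm.OLS(y, sm.add_constant(X)).fit()

for degree in [1, 2, 5]:
    print(degree, round(poly_fit(degree).rsquared, 3))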
The approach that we have just described for extending the linear model to accommodate non-linear relationships is known as polynomial regression, since we have included polynomial functions of the predictors in the regression model. We further explore this approach and other non-linear extensions of the linear model in Chapter 7.
3.3.3 Potential Problems
When we fit a linear regression model to a particular data set, many problems may occur. Most common among these are the following:
- Non-linearity of the response-predictor relationships.
- Correlation of error terms.
- Non-constant variance of error terms.
- Outliers.
- High-leverage points.
- Collinearity.
In practice, identifying and overcoming these problems is as much an art as a science. Many pages in countless books have been written on this topic. Since the linear regression model is not our primary focus here, we will provide only a brief summary of some key points.
1. Non-linearity of the Data
Figure 3.9. Plots of residuals versus predicted (or fitted) values for the Auto data set. In each plot, the red line is a smooth fit to the residuals, intended to make it easier to identify a trend. Left: A linear regression of mpg on horsepower. A strong pattern in the residuals indicates non-linearity in the data. Right: A linear regression of mpg on horsepower and horsepower². There is little pattern in the residuals.
The linear regression model assumes that there is a straight-line relationship between the predictors and the response. If the true relationship is far from linear, then virtually all of the conclusions that we draw from the fit are suspect. In addition, the prediction accuracy of the model can be significantly reduced.
Residual plots are a useful graphical tool for identifying non-linearity. Given a simple linear regression model, we can plot the residuals, $e_i = y_i - \hat y_i$, versus the predictor $x_i$. In the case of a multiple regression model, since there are multiple predictors, we instead plot the residuals versus the predicted (or fitted) values $\hat y_i$. Ideally, the residual plot will show no discernible pattern. The presence of a pattern may indicate a problem with some aspect of the linear model.
The left panel of Figure 3.9 displays a residual plot from the linear regression of mpg onto horsepower on the Auto data set that was illustrated in Figure 3.8. The red line is a smooth fit to the residuals, which is displayed in order to make it easier to identify any trends. The residuals exhibit a clear U-shape, which provides a strong indication of non-linearity in the data. In contrast, the right-hand panel of Figure 3.9 displays the residual plot that results from the model (3.36), which contains a quadratic term. There appears to be little pattern in the residuals, suggesting that the quadratic term improves the fit to the data.
If the residual plot indicates that there are non-linear associations in the data, then a simple approach is to use non-linear transformations of the predictors, such as $\log X$, $\sqrt{X}$, and $X^2$, in the regression model. In the later chapters of this book, we will discuss other more advanced non-linear approaches for addressing this issue.
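A residual plot like the left panel of Figure 3.9 can be produced with a few lines of code. The sketch below (ours, not the text's) regresses mpg on horsepower and plots residuals against fitted values; a U-shaped pattern suggests non-linearity.

import statsmodels.api as sm
from matplotlib.pyplot import subplots
from ISLP import load_data

Auto = load_data('Auto')
X = sm.add_constant(Auto['horsepower'].astype(float))
fit = sm.OLS(Auto['mpg'], X).fit()

fig, ax = subplots()
ax.scatter(fit.fittedvalues, fit.resid)       # a U-shape here suggests non-linearity
ax.axhline(0, color='black', linestyle='--')
ax.set_xlabel('Fitted values')
ax.set_ylabel('Residuals')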
2. Correlation of Error Terms
An important assumption of the linear regression model is that the error terms, $\epsilon_1, \epsilon_2, \ldots, \epsilon_n$, are uncorrelated. What does this mean? For instance, if the errors are uncorrelated, then the fact that $\epsilon_i$ is positive provides little or no information about the sign of $\epsilon_{i+1}$. The standard errors that are computed for the estimated regression coefficients or the fitted values are based on the assumption of uncorrelated error terms. If in fact there is correlation among the error terms, then the estimated standard errors will tend to underestimate the true standard errors. As a result, confidence and prediction intervals will be narrower than they should be. For example, a 95 % confidence interval may in reality have a much lower probability than 0.95 of containing the true value of the parameter. In addition, p-values associated with the model will be lower than they should be; this could cause us to erroneously conclude that a parameter is statistically significant. In short, if the error terms are correlated, we may have an unwarranted sense of confidence in our model.
As an extreme example, suppose we accidentally doubled our data, leading to observations and error terms identical in pairs. If we ignored this, our standard error calculations would be as if we had a sample of size $2n$, when in fact we have only $n$ samples. Our estimated parameters would be the same for the $2n$ samples as for the $n$ samples, but the confidence intervals would be narrower by a factor of $\sqrt{2}$!
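A small simulation, our own rather than the book's, makes this concrete: duplicating every observation leaves the coefficient estimates essentially unchanged but shrinks the reported standard errors by roughly a factor of $\sqrt{2}$.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2 + 3 * x + rng.normal(size=n)
X = sm.add_constant(x)

single = sm.OLS(y, X).fit()
doubled = sm.OLS(np.tile(y, 2), np.tile(X, (2, 1))).fit()   # every observation appears twice
print(single.params, doubled.params)                        # identical coefficient estimates
print(single.bse[1] / doubled.bse[1])                       # ratio close to sqrt(2)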
Why might correlations among the error terms occur? Such correlations frequently occur in the context of time series data, which consists of observations for which measurements are obtained at discrete points in time. In many cases, observations that are obtained at adjacent time points will have positively correlated errors. In order to determine if this is the case for a given data set, we can plot the residuals from our model as a function of time. If the errors are uncorrelated, then there should be no discernible pattern. On the other hand, if the error terms are positively correlated, then we may see tracking in the residuals—that is, adjacent residuals may have similar values. Figure 3.10 provides an illustration. In the top panel, we see the residuals from a linear regression fit to data generated with uncorrelated errors. There is no evidence of a time-related trend in the residuals. In contrast, the residuals in the bottom panel are from a data set in which adjacent errors had a correlation of 0.9. Now there is a clear pattern in the residuals—adjacent residuals tend to take on similar values. Finally, the center panel illustrates a more moderate case in which the residuals had a correlation of 0.5. There is still evidence of tracking, but the pattern is less clear.
Figure 3.10. Plots of residuals from simulated time series data sets generated with differing levels of correlation between error terms for adjacent time points.
Many methods have been developed to properly take account of correlations in the error terms in time series data. Correlation among the error terms can also occur outside of time series data. For instance, consider a study in which individuals’ heights are predicted from their weights. The assumption of uncorrelated errors could be violated if some of the individuals in the study are members of the same family, eat the same diet, or have been exposed to the same environmental factors. In general, the assumption of uncorrelated errors is extremely important for linear regression as well as for other statistical methods, and good experimental design is crucial in order to mitigate the risk of such correlations.
3. Non-constant Variance of Error Terms
Another important assumption of the linear regression model is that the error terms have a constant variance, $\mathrm{Var}(\epsilon_i) = \sigma^2$. The standard errors, confidence intervals, and hypothesis tests associated with the linear model rely upon this assumption.
Unfortunately, it is often the case that the variances of the error terms are non-constant. For instance, the variances of the error terms may increase with the value of the response. One can identify non-constant variances in the errors, or heteroscedasticity, from the presence of a funnel shape in the residual plot. An example is shown in the left-hand panel of Figure 3.11, in which the magnitude of the residuals tends to increase with the fitted values. When faced with this problem, one possible solution is to transform the response $Y$ using a concave function such as $\log Y$ or $\sqrt{Y}$. Such a transformation results in a greater amount of shrinkage of the larger responses, leading to a reduction in heteroscedasticity. The right-hand panel of Figure 3.11 displays the residual plot after transforming the response using $\log Y$. The residuals now appear to have constant variance, though there is some evidence of a slight non-linear relationship in the data.
Figure 3.11. Residual plots. In each plot, the red line is a smooth fit to the residuals, intended to make it easier to identify a trend. The blue lines track the outer quantiles of the residuals, and emphasize patterns. Left: The funnel shape indicates heteroscedasticity. Right: The response has been log transformed, and there is now no evidence of heteroscedasticity.
Sometimes we have a good idea of the variance of each response. For example, the $i$th response could be an average of $n_i$ raw observations. If each of these raw observations is uncorrelated with variance $\sigma^2$, then their average has variance $\sigma_i^2 = \sigma^2 / n_i$. In this case a simple remedy is to fit our model by weighted least squares, with weights proportional to the inverse variances—i.e. $w_i = n_i$ in this case. Most linear regression software allows for observation weights.
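A brief sketch of this idea with simulated data: each response is an average of $n_i$ raw observations, so its variance is $\sigma^2 / n_i$, and sm.WLS with weights proportional to $n_i$ accounts for this.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_i = rng.integers(1, 20, size=200)                      # raw observations behind each response
x = rng.uniform(0, 10, size=200)
y = 1 + 0.5 * x + rng.normal(size=200) / np.sqrt(n_i)    # averages have variance sigma^2 / n_i
X = sm.add_constant(x)

wls = sm.WLS(y, X, weights=n_i).fit()    # weights proportional to the inverse variances
print(wls.params)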
4. Outliers
An outlier is a point for which $y_i$ is far from the value predicted by the model. Outliers can arise for a variety of reasons, such as incorrect recording of an observation during data collection.
Figure 3.12. Left: The least squares regression line is shown in red, and the regression line after removing the outlier is shown in blue. Center: The residual plot clearly identifies the outlier. Right: The outlier has a studentized residual of 6; typically we expect values between −3 and 3.
The red point (observation 20) in the left-hand panel of Figure 3.12 illustrates a typical outlier. The red solid line is the least squares regression fit, while the blue dashed line is the least squares fit after removal of the outlier. In this case, removing the outlier has little effect on the least squares line: it leads to almost no change in the slope, and a minuscule reduction in the intercept. It is typical for an outlier that does not have an unusual predictor value to have little effect on the least squares fit. However, even if an outlier does not have much effect on the least squares fit, it can cause other problems. For instance, in this example, the RSE is 1.09 when the outlier is included in the regression, but it is only 0.77 when the outlier is removed. Since the RSE is used to compute all confidence intervals and p-values, such a dramatic increase caused by a single data point can have implications for the interpretation of the fit. Similarly, inclusion of the outlier causes the $R^2$ to decline from 0.892 to 0.805.
Residual plots can be used to identify outliers. In this example, the outlier is clearly visible in the residual plot illustrated in the center panel of Figure 3.12. But in practice, it can be difficult to decide how large a residual needs to be before we consider the point to be an outlier. To address this problem, instead of plotting the residuals, we can plot the studentized residuals, computed by dividing each residual $e_i$ by its estimated standard error. Observations whose studentized residuals are greater than 3 in absolute value are possible outliers. In the right-hand panel of Figure 3.12, the outlier’s studentized residual exceeds 6, while all other observations have studentized residuals between −2 and 2.
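Studentized residuals are easy to obtain from statsmodels' influence measures. The sketch below flags observations whose externally studentized residuals exceed 3 in absolute value; the Auto regression is used purely as an illustration.

import numpy as np
import statsmodels.api as sm
from ISLP import load_data

Auto = load_data('Auto')
X = sm.add_constant(Auto['horsepower'].astype(float))
fit = sm.OLS(Auto['mpg'], X).fit()

student_resid = fit.get_influence().resid_studentized_external
print(np.where(np.abs(student_resid) > 3)[0])    # indices of possible outliers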
If we believe that an outlier has occurred due to an error in data collection or recording, then one solution is to simply remove the observation. However, care should be taken, since an outlier may instead indicate a deficiency with the model, such as a missing predictor.
5. High Leverage Points
We just saw that outliers are observations for which the response $y_i$ is unusual given the predictor $x_i$. In contrast, observations with high leverage have an unusual value for $x_i$. For example, observation 41 in the left-hand panel of Figure 3.13 has high leverage, in that the predictor value for this observation is large relative to the other observations. (Note that the data displayed in Figure 3.13 are the same as the data displayed in Figure 3.12, but with the addition of a single high leverage observation.) The red solid line is the least squares fit to the data, while the blue dashed line is the fit produced when observation 41 is removed. Comparing the left-hand panels of Figures 3.12 and 3.13, we observe that removing the high leverage observation has a much more substantial impact on the least squares line than removing the outlier. In fact, high leverage observations tend to have a sizable impact on the estimated regression line. It is cause for concern if the least squares line is heavily affected by just a couple of observations, because any problems with these points may invalidate the entire fit. For this reason, it is important to identify high leverage observations.
Figure 3.13. Left: Observation 41 is a high leverage point, while 20 is not. The red line is the fit to all the data, and the blue line is the fit with observation 41 removed. Center: The red observation is not unusual in terms of its $X_1$ value or its $X_2$ value, but still falls outside the bulk of the data, and hence has high leverage. Right: Observation 41 has a high leverage and a high residual.
In a simple linear regression, high leverage observations are fairly easy to identify, since we can simply look for observations for which the predictor value is outside of the normal range of the observations. But in a multiple linear regression with many predictors, it is possible to have an observation that is well within the range of each individual predictor’s values, but that is unusual in terms of the full set of predictors. An example is shown in the center panel of Figure 3.13, for a data set with two predictors, $X_1$ and $X_2$. Most of the observations’ predictor values fall within the blue dashed ellipse, but the red observation is well outside of this range. But neither its value for $X_1$ nor its value for $X_2$ is unusual. So if we examine just $X_1$ or just $X_2$, we will fail to notice this high leverage point. This problem is more pronounced in multiple regression settings with more than two predictors, because then there is no simple way to plot all dimensions of the data simultaneously.
In order to quantify an observation’s leverage, we compute the leverage statistic. A large value of this statistic indicates an observation with high leverage. For a simple linear regression,

$$h_i = \frac{1}{n} + \frac{(x_i - \bar x)^2}{\sum_{i'=1}^{n} (x_{i'} - \bar x)^2}.$$

It is clear from this equation that $h_i$ increases with the distance of $x_i$ from $\bar x$. There is a simple extension of $h_i$ to the case of multiple predictors, though we do not provide the formula here. The leverage statistic $h_i$ is always between $1/n$ and 1, and the average leverage for all the observations is always equal to $(p+1)/n$. So if a given observation has a leverage statistic that greatly exceeds $(p+1)/n$, then we may suspect that the corresponding point has high leverage.
The right-hand panel of Figure 3.13 provides a plot of the studentized residuals versus $h_i$ for the data in the left-hand panel of Figure 3.13. Observation 41 stands out as having a very high leverage statistic as well as a high studentized residual. In other words, it is an outlier as well as a high leverage observation. This is a particularly dangerous combination! This plot also reveals the reason that observation 20 had relatively little effect on the least squares fit in Figure 3.12: it has low leverage.
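The leverage values $h_i$ can be extracted as the diagonal of the hat matrix. The following sketch compares each observation's leverage to the average $(p+1)/n$, again using the Auto regression purely as an illustration.

import numpy as np
import statsmodels.api as sm
from ISLP import load_data

Auto = load_data('Auto')
X = sm.add_constant(Auto['horsepower'].astype(float))
fit = sm.OLS(Auto['mpg'], X).fit()

leverage = fit.get_influence().hat_matrix_diag
average_leverage = X.shape[1] / X.shape[0]               # (p + 1) / n, counting the intercept column
print(np.where(leverage > 3 * average_leverage)[0])      # unusually high-leverage observations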
6. Collinearity
Collinearity refers to the situation in which two or more predictor variables are closely related to one another. The concept of collinearity is illustrated in Figure 3.14 using the Credit data set. In the left-hand panel of Figure 3.14, the two predictors limit and age appear to have no obvious relationship. In contrast, in the right-hand panel of Figure 3.14, the predictors limit and rating are very highly correlated with each other, and we say that they are collinear. The presence of collinearity can pose problems in the regression context, since it can be difficult to separate out the individual effects of collinear variables on the response. In other words, since limit and rating tend to increase or decrease together, it can be difficult to determine how each one separately is associated with the response, balance.
Figure 3.14. Scatterplots of the observations from the Credit data set. Left: A plot of age versus limit. These two variables are not collinear. Right: A plot of rating versus limit. There is high collinearity.
Figure 3.15 illustrates some of the difficulties that can result from collinearity. The left-hand panel of Figure 3.15 is a contour plot of the RSS (3.22) associated with different possible coefficient estimates for the regression of balance on limit and age. Each ellipse represents a set of coefficients that correspond to the same RSS, with ellipses nearest to the center taking on the lowest values of RSS. The black dots and associated dashed lines represent the coefficient estimates that result in the smallest possible RSS—in other words, these are the least squares estimates. The axes for limit and age have been scaled so that the plot includes possible coefficient estimates that are up to four standard errors on either side of the least squares estimates. Thus the plot includes all plausible values for the coefficients. For example, we see that the true limit coefficient is almost certainly somewhere between 0.15 and 0.20.
Figure 3.15. Contour plots for the RSS values as a function of the parameters for various regressions involving the Credit data set. In each plot, the black dots represent the coefficient values corresponding to the minimum RSS. Left: A contour plot of RSS for the regression of balance onto age and limit. The minimum value is well defined. Right: A contour plot of RSS for the regression of balance onto rating and limit. Because of the collinearity, there are many pairs $(\beta_{\text{Limit}}, \beta_{\text{Rating}})$ with a similar value for RSS.
In contrast, the right-hand panel of Figure 3.15 displays contour plots of the RSS associated with possible coefficient estimates for the regression of balance onto limit and rating, which we know to be highly collinear. Now the contours run along a narrow valley; there is a broad range of values for the coefficient estimates that result in equal values for RSS. Hence a small change in the data could cause the pair of coefficient values that yield the smallest RSS—that is, the least squares estimates—to move anywhere along this valley. This results in a great deal of uncertainty in the coefficient estimates. Notice that the scale for the limit coefficient now runs from roughly −0.2 to 0.2; this is an eight-fold increase over the plausible range of the limit coefficient in the regression with age. Interestingly, even though the limit and rating coefficients now have much more individual uncertainty, they will almost certainly lie somewhere in this contour valley. For example, we would not expect the true value of the limit and rating coefficients to be −0.1 and 1 respectively, even though such a value is plausible for each coefficient individually.
Since collinearity reduces the accuracy of the estimates of the regression coefficients, it causes the standard error for $\hat\beta_j$ to grow. Recall that the t-statistic for each predictor is calculated by dividing $\hat\beta_j$ by its standard error. Consequently, collinearity results in a decline in the t-statistic. As a result, in the presence of collinearity, we may fail to reject $H_0: \beta_j = 0$. This means that the power of the hypothesis test—the probability of correctly detecting a non-zero coefficient—is reduced by collinearity.
| | | Coefficient | Std. error | t-statistic | p-value |
|---|---|---|---|---|---|
| Model 1 | Intercept | −173.411 | 43.828 | −3.957 | < 0.0001 |
| | age | −2.292 | 0.672 | −3.407 | 0.0007 |
| | limit | 0.173 | 0.005 | 34.496 | < 0.0001 |
| Model 2 | Intercept | −377.537 | 45.254 | −8.343 | < 0.0001 |
| | rating | 2.202 | 0.952 | 2.312 | 0.0213 |
| | limit | 0.025 | 0.064 | 0.384 | 0.7012 |
Table 3.11. The results for two multiple regression models involving the Credit data set are shown. Model 1 is a regression of balance on age and limit, and Model 2 a regression of balance on rating and limit. The standard error of $\hat\beta_{\text{limit}}$ increases 12-fold in the second regression, due to collinearity.
Table 3.11 compares the coefficient estimates obtained from two separate multiple regression models. The first is a regression of balance on age and limit, and the second is a regression of balance on rating and limit. In the first regression, both age and limit are highly significant with very small p-values. In the second, the collinearity between limit and rating has caused the standard error for the limit coefficient estimate to increase by a factor of 12 and the p-value to increase to 0.701. In other words, the importance of the limit variable has been masked due to the presence of collinearity. To avoid such a situation, it is desirable to identify and address potential collinearity problems while fitting the model.
A simple way to detect collinearity is to look at the correlation matrix of the predictors. An element of this matrix that is large in absolute value indicates a pair of highly correlated variables, and therefore a collinearity problem in the data. Unfortunately, not all collinearity problems can be detected by inspection of the correlation matrix: it is possible for collinearity to exist between three or more variables even if no pair of variables has a particularly high correlation. We call this situation multicollinearity. Instead of inspecting the correlation matrix, a better way to assess multicollinearity is to compute the variance inflation factor (VIF). The VIF is the ratio of the variance of $\hat\beta_j$ when fitting the full model divided by the variance of $\hat\beta_j$ if fit on its own. The smallest possible value for VIF is 1, which indicates the complete absence of collinearity. Typically in practice there is a small amount of collinearity among the predictors. As a rule of thumb, a VIF value that exceeds 5 or 10 indicates a problematic amount of collinearity. The VIF for each variable can be computed using the formula

$$\mathrm{VIF}(\hat\beta_j) = \frac{1}{1 - R^2_{X_j | X_{-j}}},$$

where $R^2_{X_j | X_{-j}}$ is the $R^2$ from a regression of $X_j$ onto all of the other predictors. If $R^2_{X_j | X_{-j}}$ is close to one, then collinearity is present, and so the VIF will be large.
In the Credit data, a regression of balance on age, rating, and limit indicates that the predictors have VIF values of 1.01, 160.67, and 160.59. As we suspected, there is considerable collinearity in the data!
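These VIF values can be reproduced with the variance_inflation_factor function that is imported in the lab at the end of this chapter. The sketch below assumes the Credit data with columns named Age, Rating, and Limit; the names may differ in your copy.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor as VIF
from ISLP import load_data

Credit = load_data('Credit')
X = sm.add_constant(Credit[['Age', 'Rating', 'Limit']])
vifs = [VIF(np.asarray(X), i) for i in range(1, X.shape[1])]   # skip the intercept column
print(vifs)    # roughly 1.01, 160.67, 160.59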
When faced with the problem of collinearity, there are two simple solutions. The first is to drop one of the problematic variables from the regression. This can usually be done without much compromise to the regression fit, since the presence of collinearity implies that the information that this variable provides about the response is redundant in the presence of the other variables. For instance, if we regress balance onto age and limit, without the rating predictor, then the resulting VIF values are close to the minimum possible value of 1, and the $R^2$ drops from 0.754 to 0.75. So dropping rating from the set of predictors has effectively solved the collinearity problem without compromising the fit. The second solution is to combine the collinear variables together into a single predictor. For instance, we might take the average of standardized versions of limit and rating in order to create a new variable that measures credit worthiness.
3.4 The Marketing Plan
We now briefly return to the seven questions about the Advertising data that we set out to answer at the beginning of this chapter.
-
Is there a relationship between sales and advertising budget?
This question can be answered by fitting a multiple regression model of sales onto TV, radio, and newspaper, as in (3.20), and testing the hypothesis $H_0: \beta_{\text{TV}} = \beta_{\text{radio}} = \beta_{\text{newspaper}} = 0$. In Section 3.2.2, we showed that the F-statistic can be used to determine whether or not we should reject this null hypothesis. In this case the p-value corresponding to the F-statistic in Table 3.6 is very low, indicating clear evidence of a relationship between advertising and sales.
-
How strong is the relationship?
We discussed two measures of model accuracy in Section 3.1.3. First, the RSE estimates the standard deviation of the response from the population regression line. For the Advertising data, the RSE is 1.69 units while the mean value for the response is 14.022, indicating a percentage error of roughly 12 %. Second, the $R^2$ statistic records the percentage of variability in the response that is explained by the predictors. The predictors explain almost 90 % of the variance in sales. The RSE and $R^2$ statistics are displayed in Table 3.6.
-
Which media are associated with sales?
To answer this question, we can examine the p-values associated with each predictor’s t-statistic (Section 3.1.2). In the multiple linear regression displayed in Table 3.4, the p-values for TV and radio are low, but the p-value for newspaper is not. This suggests that only TV and radio are related to sales. In Chapter 6 we explore this question in greater detail.
-
How large is the association between each medium and sales?
We saw in Section 3.1.2 that the standard error of $\hat\beta_j$ can be used to construct confidence intervals for $\beta_j$. For the Advertising data, we can use the results in Table 3.4 to compute the 95 % confidence intervals for the coefficients in a multiple regression model using all three media budgets as predictors. The confidence intervals are as follows: (0.043, 0.049) for TV, (0.172, 0.206) for radio, and (−0.013, 0.011) for newspaper. The confidence intervals for TV and radio are narrow and far from zero, providing evidence that these media are related to sales. But the interval for newspaper includes zero, indicating that the variable is not statistically significant given the values of TV and radio.

We saw in Section 3.3.3 that collinearity can result in very wide standard errors. Could collinearity be the reason that the confidence interval associated with newspaper is so wide? The VIF scores are 1.005, 1.145, and 1.145 for TV, radio, and newspaper, suggesting no evidence of collinearity.

In order to assess the association of each medium individually on sales, we can perform three separate simple linear regressions. Results are shown in Tables 3.1 and 3.3. There is evidence of an extremely strong association between TV and sales and between radio and sales. There is evidence of a mild association between newspaper and sales, when the values of TV and radio are ignored.
-
How accurately can we predict future sales?
The response can be predicted using (3.21). The accuracy associated with this estimate depends on whether we wish to predict an individual response, $Y = f(X) + \epsilon$, or the average response, $f(X)$ (Section 3.2.2). If the former, we use a prediction interval, and if the latter, we use a confidence interval. Prediction intervals will always be wider than confidence intervals because they account for the uncertainty associated with $\epsilon$, the irreducible error.
-
Is the relationship linear?
In Section 3.3.3, we saw that residual plots can be used in order to identify non-linearity. If the relationships are linear, then the residual plots should display no pattern. In the case of the Advertising data, we observe a non-linear effect in Figure 3.5, though this effect could also be observed in a residual plot. In Section 3.3.2, we discussed the inclusion of transformations of the predictors in the linear regression model in order to accommodate non-linear relationships.
-
Is there synergy among the advertising media?
The standard linear regression model assumes an additive relationship between the predictors and the response. An additive model is easy to interpret because the association between each predictor and the response is unrelated to the values of the other predictors. However, the additive assumption may be unrealistic for certain data sets. In Section 3.3.2, we showed how to include an interaction term in the regression model in order to accommodate non-additive relationships. A small p-value associated with the interaction term indicates the presence of such relationships. Figure 3.5 suggested that the Advertising data may not be additive. Including an interaction term in the model results in a substantial increase in $R^2$, from around 90 % to almost 97 %.
3.5 Comparison of Linear Regression with K-Nearest Neighbors
As discussed in Chapter 2, linear regression is an example of a parametric approach because it assumes a linear functional form for $f(X)$. Parametric methods have several advantages. They are often easy to fit, because one need estimate only a small number of coefficients. In the case of linear regression, the coefficients have simple interpretations, and tests of statistical significance can be easily performed. But parametric methods do have a disadvantage: by construction, they make strong assumptions about the form of $f(X)$. If the specified functional form is far from the truth, and prediction accuracy is our goal, then the parametric method will perform poorly. For instance, if we assume a linear relationship between $X$ and $Y$ but the true relationship is far from linear, then the resulting model will provide a poor fit to the data, and any conclusions drawn from it will be suspect.
In contrast, non-parametric methods do not explicitly assume a parametric form for $f(X)$, and thereby provide an alternative and more flexible approach for performing regression. We discuss various non-parametric methods in this book. Here we consider one of the simplest and best-known non-parametric methods, $K$-nearest neighbors regression (KNN regression). The KNN regression method is closely related to the KNN classifier discussed in Chapter 2. Given a value for $K$ and a prediction point $x_0$, KNN regression first identifies the $K$ training observations that are closest to $x_0$, represented by $\mathcal{N}_0$. It then estimates $f(x_0)$ using the average of all the training responses in $\mathcal{N}_0$. In other words,

$$\hat f(x_0) = \frac{1}{K} \sum_{x_i \in \mathcal{N}_0} y_i.$$
Figure 3.16 illustrates two KNN fits on a data set with $p = 2$ predictors. The fit with $K = 1$ is shown in the left-hand panel, while the right-hand panel corresponds to $K = 9$. We see that when $K = 1$, the KNN fit perfectly interpolates the training observations, and consequently takes the form of a step function. When $K = 9$, the KNN fit still is a step function, but averaging over nine observations results in much smaller regions of constant prediction, and consequently a smoother fit. In general, the optimal value for $K$ will depend on the bias-variance tradeoff, which we introduced in Chapter 2. A small value for $K$ provides the most flexible fit, which will have low bias but high variance. This variance is due to the fact that the prediction in a given region is entirely dependent on just one observation. In contrast, larger values of $K$ provide a smoother and less variable fit; the prediction in a region is an average of several points, and so changing one observation has a smaller effect. However, the smoothing may cause bias by masking some of the structure in $f(X)$. In Chapter 5, we introduce several approaches for estimating test error rates. These methods can be used to identify the optimal value of $K$ in KNN regression.
Figure 3.16. Plots of $\hat f(X)$ using KNN regression on a two-dimensional data set with 64 observations (orange dots). Left: $K = 1$ results in a rough step function fit. Right: $K = 9$ produces a much smoother fit.
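KNN regression is not part of statsmodels, but scikit-learn provides an implementation. The sketch below, which is ours rather than the text's, fits KNN with $K = 1$ and $K = 9$ to mpg versus horsepower; smaller $K$ yields a rougher, more variable fit.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from ISLP import load_data

Auto = load_data('Auto')
x = Auto['horsepower'].astype(float).to_numpy().reshape(-1, 1)
y = Auto['mpg'].to_numpy()
grid = np.linspace(x.min(), x.max(), 100).reshape(-1, 1)

for K in [1, 9]:
    knn = KNeighborsRegressor(n_neighbors=K).fit(x, y)
    print(K, knn.predict(grid)[:3])    # K = 1 is rough and variable, K = 9 is smoother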
In what setting will a parametric approach such as least squares linear regression outperform a non-parametric approach such as KNN regression? The answer is simple: the parametric approach will outperform the non-parametric approach if the parametric form that has been selected is close to the true form of $f$. Figure 3.17 provides an example with data generated from a one-dimensional linear regression model. The black solid lines represent $f(X)$, while the blue curves correspond to the KNN fits using $K = 1$ and $K = 9$. In this case, the $K = 1$ predictions are far too variable, while the smoother $K = 9$ fit is much closer to $f(X)$. However, since the true relationship is linear, it is hard for a non-parametric approach to compete with linear regression: a non-parametric approach incurs a cost in variance that is not offset by a reduction in bias. The blue dashed line in the left-hand panel of Figure 3.18 represents the linear regression fit to the same data. It is almost perfect. The right-hand panel of Figure 3.18 reveals that linear regression outperforms KNN for this data. The green solid line, plotted as a function of $1/K$, represents the test set mean squared error (MSE) for KNN. The KNN errors are well above the black dashed line, which is the test MSE for linear regression. When the value of $K$ is large, then KNN performs only a little worse than least squares regression in terms of MSE. It performs far worse when $K$ is small.
Figure 3.17. Plots of $\hat f(X)$ using KNN regression on a one-dimensional data set with 50 observations. The true relationship is given by the black solid line. Left: The blue curve corresponds to $K = 1$ and interpolates (i.e. passes directly through) the training data. Right: The blue curve corresponds to $K = 9$, and represents a smoother fit.
Figure 3.18. The same data set shown in Figure 3.17 is investigated further. Left: The blue dashed line is the least squares fit to the data. Since $f(X)$ is in fact linear (displayed as the black line), the least squares regression line provides a very good estimate of $f(X)$. Right: The dashed horizontal line represents the least squares test set MSE, while the green solid line corresponds to the MSE for KNN as a function of $1/K$ (on the log scale). Linear regression achieves a lower test MSE than does KNN regression, since $f(X)$ is in fact linear. For KNN regression, the best results occur with a very large value of $K$, corresponding to a small value of $1/K$.
In practice, the true relationship between $X$ and $Y$ is rarely exactly linear. Figure 3.19 examines the relative performances of least squares regression and KNN under increasing levels of non-linearity in the relationship between $X$ and $Y$. In the top row, the true relationship is nearly linear. In this case we see that the test MSE for linear regression is still superior to that of KNN for low values of $K$. However, for $K \geq 4$, KNN outperforms linear regression. The second row illustrates a more substantial deviation from linearity. In this situation, KNN substantially outperforms linear regression for all values of $K$. Note that as the extent of non-linearity increases, there is little change in the test set MSE for the non-parametric KNN method, but there is a large increase in the test set MSE of linear regression.
Figure 3.19. Top Left: In a setting with a slightly non-linear relationship between $X$ and $Y$ (solid black line), the KNN fits with $K = 1$ (blue) and $K = 9$ (red) are displayed. Top Right: For the slightly non-linear data, the test set MSE for least squares regression (horizontal black) and KNN with various values of $1/K$ (green) are displayed. Bottom Left and Bottom Right: As in the top panels, but with a strongly non-linear relationship between $X$ and $Y$.
Figures 3.18 and 3.19 display situations in which KNN performs slightly worse than linear regression when the relationship is linear, but much better than linear regression for non-linear situations. In a real life situation in which the true relationship is unknown, one might suspect that KNN should be favored over linear regression because it will at worst be slightly inferior to linear regression if the true relationship is linear, and may give substantially better results if the true relationship is non-linear. But in reality, even when the true relationship is highly non-linear, KNN may still provide inferior results to linear regression. In particular, both Figures 3.18 and 3.19 illustrate settings with $p = 1$ predictor. But in higher dimensions, KNN often performs worse than linear regression.
Figure 3.20. Test MSE for linear regression (black dashed lines) and KNN (green curves) as the number of variables $p$ increases. The true function is non-linear in the first variable, as in the lower panel of Figure 3.19, and does not depend on the additional variables. The performance of linear regression deteriorates slowly in the presence of these additional noise variables, whereas KNN’s performance degrades much more quickly as $p$ increases.
Figure 3.20 considers the same strongly non-linear situation as in the second row of Figure 3.19, except that we have added additional noise predictors that are not associated with the response. When $p = 1$ or $p = 2$, KNN outperforms linear regression. But for $p = 3$ the results are mixed, and for $p \geq 4$ linear regression is superior to KNN. In fact, the increase in dimension has only caused a small deterioration in the linear regression test set MSE, but it has caused more than a ten-fold increase in the MSE for KNN. This decrease in performance as the dimension increases is a common problem for KNN, and results from the fact that in higher dimensions there is effectively a reduction in sample size. In this data set there are 50 training observations; when $p = 1$, this provides enough information to accurately estimate $f(X)$. However, spreading 50 observations over $p = 20$ dimensions results in a phenomenon in which a given observation has no nearby neighbors—this is the so-called curse of dimensionality. That is, the $K$ observations that are nearest to a given test observation $x_0$ may be very far away from $x_0$ in $p$-dimensional space when $p$ is large, leading to a very poor prediction of $f(x_0)$ and hence a poor KNN fit. As a general rule, parametric methods will tend to outperform non-parametric approaches when there is a small number of observations per predictor.
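The curse of dimensionality is easy to demonstrate with a small simulation of our own (not the book's): the response depends non-linearly on the first predictor only, and pure noise predictors are added. Linear regression typically degrades slowly as $p$ grows, while KNN degrades sharply.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
n_train, n_test = 50, 1000
f = lambda x: np.exp(x)                       # non-linear in the first predictor only

for p in [1, 20]:
    X_tr = rng.uniform(-1, 1, size=(n_train, p))
    X_te = rng.uniform(-1, 1, size=(n_test, p))
    y_tr = f(X_tr[:, 0]) + rng.normal(scale=0.1, size=n_train)
    y_te = f(X_te[:, 0]) + rng.normal(scale=0.1, size=n_test)
    ols_mse = np.mean((LinearRegression().fit(X_tr, y_tr).predict(X_te) - y_te) ** 2)
    knn_mse = np.mean((KNeighborsRegressor(n_neighbors=9).fit(X_tr, y_tr).predict(X_te) - y_te) ** 2)
    print(p, round(ols_mse, 3), round(knn_mse, 3))   # KNN's test MSE grows sharply with p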
Even when the dimension is small, we might prefer linear regression to KNN from an interpretability standpoint. If the test MSE of KNN is only slightly lower than that of linear regression, we might be willing to forego a little bit of prediction accuracy for the sake of a simple model that can be described in terms of just a few coefficients, and for which p-values are available.
3.6 Lab: Linear Regression
3.6.1 Importing packages
We import our standard libraries at this top level.
import numpy as np
import pandas as pd
from matplotlib.pyplot import subplots

New imports
Throughout this lab we will introduce new functions and libraries. However, we will import them here to emphasize these are the new code objects in this lab. Keeping imports near the top of a notebook makes the code more readable, since scanning the first few lines tells us what libraries are used.
import statsmodels.api as sm

We will provide relevant details about the functions below as they are needed.
Besides importing whole modules, it is also possible to import only a few items from a given module. This will help keep the namespace clean. We will use a few specific objects from the statsmodels package which we import here.
from statsmodels.stats.outliers_influence \
import variance_inflation_factor as VIF
from statsmodels.stats.anova import anova_lm

As one of the import statements above is quite a long line, we inserted a line break \ to ease readability.
We will also use some functions written for the labs in this book in the ISLP package.
from ISLP import load_data
from ISLP.models import (ModelSpec as MS,
summarize,
poly)

Inspecting Objects and Namespaces
The function dir() provides a list of objects in a namespace.
dir()

['In',
'MS',
'_',
'__',
'___',
'__builtin__',
'__builtins__',
...
'poly',
'quit',
'sm',
'summarize']

This shows you everything that Python can find at the top level. There are certain objects like __builtins__ that contain references to built-in functions like print().
Every Python object has its own notion of namespace, also accessible with dir(). This will include both the attributes of the object as well as any methods associated with it. For instance, we see 'sum' in the listing for an array.
A = np.array([3,5,11])
dir(A)

...
'strides',
'sum',
'swapaxes',
...

This indicates that the object A.sum exists. In this case it is a method that can be used to compute the sum of the array A, as can be seen by typing A.sum?.
A.sum()

19

3.6.2 Simple Linear Regression
In this section we will construct model matrices (also called design matrices) using the ModelSpec() transform from ISLP.models.
We will use the Boston housing data set, which is contained in the ISLP package. The Boston dataset records medv (median house value) for 506 neighborhoods around Boston. We will build a regression model to predict medv using 13 predictors such as rm (average number of rooms per house), age (proportion of owner-occupied units built prior to 1940), and lstat (percent of households with low socioeconomic status). We will use statsmodels for this task, a Python package that implements several commonly used regression methods.
We have included a simple loading function load_data() in the ISLP package:
Boston = load_data("Boston")
Boston.columns

Index(['crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis',
'rad', 'tax', 'ptratio', 'black', 'lstat', 'medv'],
dtype='object')

Type Boston? to find out more about these data.
We start by using the sm.OLS() function to fit a simple linear regression model. Our response will be medv and lstat will be the single predictor. For this model, we can create the model matrix by hand.
X = pd.DataFrame({'intercept': np.ones(Boston.shape[0]),
'lstat': Boston['lstat']})
X[:4]

   intercept  lstat
0 1.0 4.98
1 1.0 9.14
2 1.0 4.03
3 1.0 2.94

We extract the response, and fit the model.
y = Boston['medv']
model = sm.OLS(y, X)
results = model.fit()

Note that sm.OLS() does not fit the model; it specifies the model, and then model.fit() does the actual fitting.
Our ISLP function summarize() produces a simple table of the parameter estimates, their standard errors, t-statistics and p-values. The function takes a single argument, such as the object results returned here by the fit method, and returns such a summary.
summarize(results)

              coef  std err        t  P>|t|
intercept 34.5538 0.563 61.415 0.0
lstat -0.9500 0.039 -24.528 0.0

Before we describe other methods for working with fitted models, we outline a more useful and general framework for constructing a model matrix X.
Using Transformations: Fit and Transform
Our model above has a single predictor, and constructing X was straightforward. In practice we often fit models with more than one predictor, typically selected from an array or data frame. We may wish to introduce transformations to the variables before fitting the model, specify interactions between variables, and expand some particular variables into sets of variables (e.g. polynomials). The sklearn package has a particular notion for this type of task: a transform. A transform is an object that is created with some parameters as arguments. The object has two main methods: .fit() and .transform().
We provide a general approach for specifying models and constructing the model matrix through the transform ModelSpec() in the ISLP library. ModelSpec() (renamed MS() in the preamble) creates a transform object, and then a pair of methods transform() and fit() are used to construct a corresponding model matrix.
We first describe this process for our simple regression model using a single predictor lstat in the Boston data frame, but will use it repeatedly in more complex tasks in this and other labs in this book. In our case the transform is created by the expression design = MS(['lstat']).
The fit() method takes the original array and may do some initial computations on it, as specified in the transform object. For example, it may compute means and standard deviations for centering and scaling. The transform() method applies the fitted transformation to the array of data, and produces the model matrix.
design = MS(['lstat'])
design = design.fit(Boston)
X = design.transform(Boston)
X[:4]

   intercept  lstat
0 1.0 4.98
1 1.0 9.14
2 1.0 4.03
3 1.0 2.94

In this simple case, the fit() method does very little; it simply checks that the variable 'lstat' specified in design exists in Boston. Then transform() constructs the model matrix with two columns: an intercept and the variable lstat.
These two operations can be combined with the fit_transform() method.
design = MS(['lstat'])
X = design.fit_transform(Boston)
X[:4]

   intercept  lstat
0 1.0 4.98
1 1.0 9.14
2 1.0 4.03
3 1.0 2.94

Note that, as in the previous code chunk when the two steps were done separately, the design object is changed as a result of the fit() operation. The power of this pipeline will become clearer when we fit more complex models that involve interactions and transformations.
Let’s return to our fitted regression model. The object results has several methods that can be used for inference. We already presented a function summarize() for showing the essentials of the fit. For a full and somewhat exhaustive summary of the fit, we can use the summary() method (output not shown).
results.summary()

The fitted coefficients can also be retrieved as the params attribute of results.
results.paramsintercept 34.553841
lstat -0.950049
dtype: float64The get_prediction() method can be used to obtain predictions, and produce confidence intervals and prediction intervals for the prediction of medv for given values of lstat.
We first create a new data frame, in this case containing only the variable lstat, with the values for this variable at which we wish to make predictions. We then use the transform() method of design to create the corresponding model matrix.
new_df = pd.DataFrame({'lstat':[5, 10, 15]})
newX = design.transform(new_df)
newX
   intercept  lstat
0        1.0      5
1        1.0     10
2        1.0     15
Next we compute the predictions at newX, and view them by extracting the predicted_mean attribute.
new_predictions = results.get_prediction(newX);
new_predictions.predicted_mean
array([29.80359411, 25.05334734, 20.30310057])
We can produce confidence intervals for the predicted values.
new_predictions.conf_int(alpha=0.05)
array([[29.00741194, 30.59977628],
       [24.47413202, 25.63256267],
       [19.73158815, 20.87461299]])
Prediction intervals are computed by setting obs=True:
new_predictions.conf_int(obs=True, alpha=0.05)
array([[17.56567478, 42.04151344],
       [12.82626635, 37.27906833],
       [ 8.0777421 , 32.52845905]])
For instance, the 95 % confidence interval associated with an lstat value of 10 is (24.47, 25.63), and the 95 % prediction interval is (12.83, 37.28). As expected, the confidence and prediction intervals are centered around the same point (a predicted value of 25.05 for medv when lstat equals 10), but the latter are substantially wider.
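If we want the predicted values and both types of intervals collected in a single table, a small sketch such as the following can be used (the column labels here are our own choice):
ci = new_predictions.conf_int(alpha=0.05)
pi = new_predictions.conf_int(obs=True, alpha=0.05)
pd.DataFrame({'lstat': new_df['lstat'],
              'predicted medv': new_predictions.predicted_mean,
              'CI lower': ci[:, 0], 'CI upper': ci[:, 1],
              'PI lower': pi[:, 0], 'PI upper': pi[:, 1]})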
Next we will plot medv and lstat using DataFrame.plot.scatter(), and wish to add the regression line to the resulting plot.
Defining Functions
While there is a function within the ISLP package that adds a line to an existing plot, we take this opportunity to define our first function to do so.
def abline(ax, b, m):
    "Add a line with slope m and intercept b to ax"
    xlim = ax.get_xlim()
    ylim = [m * xlim[0] + b, m * xlim[1] + b]
    ax.plot(xlim, ylim)
A few things are illustrated above. First we see the syntax for defining a function: def funcname(...). The function has arguments ax, b, m, where ax is an axis object for an existing plot, b is the intercept and m is the slope of the desired line. Other plotting options can be passed on to ax.plot by including additional optional arguments as follows:
def abline(ax, b, m, *args, **kwargs):
    "Add a line with slope m and intercept b to ax"
    xlim = ax.get_xlim()
    ylim = [m * xlim[0] + b, m * xlim[1] + b]
    ax.plot(xlim, ylim, *args, **kwargs)
The addition of *args allows any number of non-named arguments to abline, while **kwargs allows any number of named arguments (such as linewidth=3) to abline. In our function, we pass these arguments verbatim to ax.plot above. Readers interested in learning more about functions are referred to the section on defining functions in docs.python.org/tutorial.
Let’s use our new function to add this regression line to a plot of medv vs. lstat.
ax = Boston.plot.scatter('lstat', 'medv')
abline(ax,
       results.params[0],
       results.params[1],
       'r--',
       linewidth=3)
Thus, the final call to ax.plot() is ax.plot(xlim, ylim, 'r--', linewidth=3). We have used the argument 'r--' to produce a red dashed line, and added an argument to make it of width 3. There is some evidence for non-linearity in the relationship between lstat and medv. We will explore this issue later in this lab.
As mentioned above, there is an existing function to add a line to a plot — ax.axline() — but knowing how to write such functions empowers us to create more expressive displays.
Next we examine some diagnostic plots, several of which were discussed in Section 3.3.3. We can find the fitted values and residuals of the fit as attributes of the results object. Various influence measures describing the regression model are computed with the get_influence() method. As we will not use the fig component returned as the first value from subplots(), we simply capture the second returned value in ax below.
ax = subplots(figsize=(8,8))[1]
ax.scatter(results.fittedvalues, results.resid)
ax.set_xlabel('Fitted value')
ax.set_ylabel('Residual')
ax.axhline(0, c='k', ls='--');
We add a horizontal line at 0 for reference using the ax.axhline() method, indicating it should be black (c='k') and have a dashed linestyle (ls='--'). On the basis of the residual plot (not shown), there is some evidence of non-linearity.
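Some analysts prefer to plot studentized residuals rather than raw residuals. statsmodels makes these available on the influence object returned by get_influence(); the brief sketch below (not part of the lab text, and assuming the attribute resid_studentized_internal) follows the same pattern as the plot above.
infl = results.get_influence()
ax = subplots(figsize=(8,8))[1]
ax.scatter(results.fittedvalues, infl.resid_studentized_internal)
ax.set_xlabel('Fitted value')
ax.set_ylabel('Studentized residual')
ax.axhline(0, c='k', ls='--');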
Leverage statistics can be computed for any number of predictors using the hat_matrix_diag attribute of the value returned by the get_influence() method.
infl = results.get_influence()
ax = subplots(figsize=(8,8))[1]
ax.scatter(np.arange(X.shape[0]), infl.hat_matrix_diag)
ax.set_xlabel('Index')
ax.set_ylabel('Leverage')
np.argmax(infl.hat_matrix_diag)
374
The np.argmax() function identifies the index of the largest element of an array, optionally computed over an axis of the array. In this case, we maximized over the entire array to determine which observation has the largest leverage statistic.
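As a small illustration of how the optional axis argument works (a toy array, not from the Boston data):
A = np.array([[1, 7, 3],
              [4, 0, 9]])
np.argmax(A)           # 5: index of the largest entry in the flattened array
np.argmax(A, axis=0)   # array([1, 0, 1]): row index of the largest entry in each column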
3.6.3 Multiple Linear Regression
In order to fit a multiple linear regression model using least squares, we again use the ModelSpec() transform to construct the required model matrix and response. The arguments to ModelSpec() can be quite general, but in this case a list of column names suffices. We consider a fit here with the two variables lstat and age.
X = MS(['lstat', 'age']).fit_transform(Boston)
model1 = sm.OLS(y, X)
results1 = model1.fit()
summarize(results1)
              coef  std err       t  P>|t|
intercept  33.2228    0.731  45.458  0.000
lstat      -1.0321    0.048 -21.416  0.000
age         0.0345    0.012   2.826  0.005
Notice how we have compacted the first line into a succinct expression describing the construction of X.
The Boston data set contains 12 variables, and so it would be cumbersome to have to type all of these in order to perform a regression using all of the predictors. Instead, we can use the following short-hand:
terms = Boston.columns.drop('medv')
terms
Index(['crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis',
       'rad', 'tax', 'ptratio', 'lstat'],
      dtype='object')
We can now fit the model with all the variables in terms using the same model matrix builder.
X = MS(terms).fit_transform(Boston)
model = sm.OLS(y, X)
results = model.fit()
summarize(results)
              coef  std err        t  P>|t|
intercept  41.6173    4.936    8.431  0.000
crim       -0.1214    0.033   -3.678  0.000
zn          0.0470    0.014    3.384  0.001
indus       0.0135    0.062    0.217  0.829
chas        2.8400    0.870    3.264  0.001
nox       -18.7580    3.851   -4.870  0.000
rm          3.6581    0.420    8.705  0.000
age         0.0036    0.013    0.271  0.787
dis        -1.4908    0.202   -7.394  0.000
rad         0.2894    0.067    4.325  0.000
tax        -0.0127    0.004   -3.337  0.001
ptratio    -0.9375    0.132   -7.091  0.000
lstat      -0.5520    0.051  -10.897  0.000
What if we would like to perform a regression using all of the variables but one? For example, in the above regression output, age has a high p-value. So we may wish to run a regression excluding this predictor. The following syntax results in a regression using all predictors except age (output not shown).
minus_age = Boston.columns.drop(['medv', 'age'])
Xma = MS(minus_age).fit_transform(Boston)
model1 = sm.OLS(y, Xma)
summarize(model1.fit())
3.6.4 Multivariate Goodness of Fit
We can access the individual components of results by name (dir(results) shows us what is available). Hence results.rsquared gives us the R^2, and np.sqrt(results.scale) gives us the RSE.
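For example, for the multiple regression fit above:
results.rsquared        # the R^2 of the fit
np.sqrt(results.scale)  # the residual standard error (RSE)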
Variance inflation factors (Section 3.3.3) are sometimes useful to assess the effect of collinearity in the model matrix of a regression model. We will compute the VIFs in our multiple regression fit, and use the opportunity to introduce the idea of list comprehension.
List Comprehension
Often we encounter a sequence of objects which we would like to transform for some other task. Below, we compute the VIF for each feature in our X matrix and produce a data frame whose index agrees with the columns of X. The notion of list comprehension can often make such a task easier.
List comprehensions are simple and powerful ways to form lists of Python objects. The language also supports dictionary and generator comprehension, though these are beyond our scope here. Let’s look at an example. We compute the VIF for each of the variables in the model matrix X, using the function variance_inflation_factor().
vals = [VIF(X, i)
for i in range(1, X.shape[1])]
vif = pd.DataFrame({'vif':vals},
index=X.columns[1:])
vif
           vif
crim     1.767
zn       2.298
indus    3.987
chas     1.071
nox      4.369
rm       1.913
age      3.088
dis      3.954
rad      7.445
tax      9.002
ptratio  1.797
lstat    2.871
The function VIF() takes two arguments: a dataframe or array, and a variable column index. In the code above we call VIF() on the fly for all columns in X. We have excluded column 0 above (the intercept), which is not of interest. In this case the VIFs are not that exciting.
The object vals above could have been constructed with the following for loop:
vals = []
for i in range(1, X.values.shape[1]):
    vals.append(VIF(X.values, i))
List comprehension allows us to perform such repetitive operations in a more straightforward way.
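As one more toy example of the same idea (not needed for the lab), a comprehension can square each element of a range in a single expression:
squares = [x**2 for x in range(5)]   # [0, 1, 4, 9, 16]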
3.6.5 Interaction Terms
It is easy to include interaction terms in a linear model using ModelSpec(). Including a tuple ("lstat","age") tells the model matrix builder to include an interaction term between lstat and age.
X = MS(['lstat',
'age',
('lstat', 'age')]).fit_transform(Boston)
model2 = sm.OLS(y, X)
summarize(model2.fit())
              coef  std err       t  P>|t|
intercept  36.0885    1.470  24.553  0.000
lstat      -1.3921    0.167  -8.313  0.000
age        -0.0007    0.020  -0.036  0.971
lstat:age   0.0042    0.002   2.244  0.025
3.6.6 Non-linear Transformations of the Predictors
The model matrix builder can include terms beyond just column names and interactions. For instance, the poly() function supplied in ISLP specifies that columns representing polynomial functions of its first argument are added to the model matrix.
X = MS([poly('lstat', degree=2), 'age']).fit_transform(Boston)
model3 = sm.OLS(y, X)
results3 = model3.fit()
summarize(results3)
                              coef  std err       t  P>|t|
intercept                  17.7151    0.781  22.681  0.000
poly(lstat, degree=2)[0] -179.2279    6.733 -26.620  0.000
poly(lstat, degree=2)[1]   72.9908    5.482  13.315  0.000
age                         0.0703    0.011   6.471  0.000
The effectively zero p-value associated with the quadratic term (i.e. the third row above) suggests that it leads to an improved model.
By default, poly() creates a basis matrix for inclusion in the model matrix whose columns are orthogonal polynomials, which are designed for stable least squares computations.13 Alternatively, had we included an argument raw=True in the above call to poly(), the basis matrix would consist simply of lstat and lstat**2. Since either of these bases represent quadratic polynomials, the fitted values would not change in this case, just the polynomial coefficients. Also by default, the columns created by poly() do not include an intercept column as that is automatically added by MS().
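As a quick check of the claim about raw=True (a sketch, assuming the raw argument behaves as described above), we can refit with the raw basis and compare the fitted values:
X_raw = MS([poly('lstat', degree=2, raw=True), 'age']).fit_transform(Boston)
results_raw = sm.OLS(y, X_raw).fit()
np.allclose(results_raw.fittedvalues, results3.fittedvalues)   # True: same fit, different coefficients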
We use the anova_lm() function to further quantify the extent to which the quadratic fit is superior to the linear fit.
anova_lm(results1, results3)
   df_resid       ssr  df_diff  ss_diff       F    Pr(>F)
0     503.0  19168.13      0.0      NaN     NaN       NaN
1     502.0  14165.61      1.0  5002.52  177.28  7.47e-35
Here results1 represents the linear submodel containing predictors lstat and age, while results3 corresponds to the larger model above with a quadratic term in lstat. The anova_lm() function performs a hypothesis test comparing the two models. The null hypothesis is that the quadratic term in the bigger model is not needed, and the alternative hypothesis is that the bigger model is superior. Here the F-statistic is 177.28 and the associated p-value is essentially zero. In this case the F-statistic is the square of the t-statistic for the quadratic term in the linear model summary for results3, a consequence of the fact that these nested models differ by one degree of freedom. This provides very clear evidence that the quadratic polynomial in lstat improves the linear model. This is not surprising, since earlier we saw evidence for non-linearity in the relationship between medv and lstat.
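We can check this relationship directly: the t-statistics of the fit are available as the tvalues attribute of the results object, and squaring the entry for the quadratic term (the third row of the summary above) should reproduce the F-statistic.
results3.tvalues.iloc[2]**2   # approximately 177.28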
The function anova_lm() can take more than two nested models as input, in which case it compares every successive pair of models. That also explains why there are NaNs in the first row above, since there is no previous model with which to compare the first.
ax = subplots(figsize=(8,8))[1]
ax.scatter(results3.fittedvalues, results3.resid)
ax.set_xlabel('Fitted value')
ax.set_ylabel('Residual')
ax.axhline(0, c='k', ls='--')
We see that when the quadratic term is included in the model, there is little discernible pattern in the residuals. In order to create a cubic or higher-degree polynomial fit, we can simply change the degree argument to poly().
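For example, a cubic polynomial in lstat could be specified as follows (output not shown):
X_cubic = MS([poly('lstat', degree=3), 'age']).fit_transform(Boston)
results_cubic = sm.OLS(y, X_cubic).fit()
summarize(results_cubic)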
3.6.7 Qualitative Predictors
Here we use the Carseats data, which is included in the ISLP package. We will attempt to predict Sales (child car seat sales) in 400 locations based on a number of predictors.
Carseats = load_data('Carseats')
Carseats.columns
Index(['Sales', 'CompPrice', 'Income', 'Advertising',
       'Population', 'Price', 'ShelveLoc', 'Age', 'Education',
       'Urban', 'US'],
      dtype='object')
The Carseats data includes qualitative predictors such as ShelveLoc, an indicator of the quality of the shelving location, that is, the space within a store in which the car seat is displayed. The predictor ShelveLoc takes on three possible values, Bad, Medium, and Good. Given a qualitative variable such as ShelveLoc, ModelSpec() generates dummy variables automatically. These variables are often referred to as a one-hot encoding of the categorical feature. Their columns sum to one, so to avoid collinearity with an intercept, the first column is dropped. Below we see the column ShelveLoc[Bad] has been dropped, since Bad is the first level of ShelveLoc. Below we fit a multiple regression model that includes some interaction terms.
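As a minimal illustration of the dummy coding on its own (a sketch; output not shown), building a model matrix from just ShelveLoc produces an intercept column together with ShelveLoc[Good] and ShelveLoc[Medium], with ShelveLoc[Bad] dropped as described above:
MS(['ShelveLoc']).fit_transform(Carseats)[:4]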
allvars = list(Carseats.columns.drop('Sales'))
y = Carseats['Sales']
final = allvars + [('Income', 'Advertising'),
('Price', 'Age')]
X = MS(final).fit_transform(Carseats)
model = sm.OLS(y, X)
summarize(model.fit())
                       coef  std err        t  P>|t|
intercept            6.5756    1.009    6.519  0.000
CompPrice            0.0929    0.004   22.567  0.000
Income               0.0109    0.003    4.183  0.000
Advertising          0.0702    0.023    3.107  0.002
Population           0.0002    0.000    0.433  0.665
Price               -0.1008    0.007  -13.549  0.000
ShelveLoc[Good]      4.8487    0.153   31.724  0.000
ShelveLoc[Medium]    1.9533    0.126   15.531  0.000
Age                 -0.0579    0.016   -3.633  0.000
Education           -0.0209    0.020   -1.063  0.288
Urban[Yes]           0.1402    0.112    1.247  0.213
US[Yes]             -0.1576    0.149   -1.058  0.291
Income:Advertising   0.0008    0.000    2.698  0.007
Price:Age            0.0001    0.000    0.801  0.424
In the first line above, we made allvars a list, so that we could add the interaction terms two lines down. Our model-matrix builder has created a ShelveLoc[Good] dummy variable that takes on a value of 1 if the shelving location is good, and 0 otherwise. It has also created a ShelveLoc[Medium] dummy variable that equals 1 if the shelving location is medium, and 0 otherwise. A bad shelving location corresponds to a zero for each of the two dummy variables. The fact that the coefficient for ShelveLoc[Good] in the regression output is positive indicates that a good shelving location is associated with high sales (relative to a bad location). And ShelveLoc[Medium] has a smaller positive coefficient, indicating that a medium shelving location leads to higher sales than a bad shelving location, but lower sales than a good shelving location.
3.7 Exercises
Conceptual
-
Describe the null hypotheses to which the p-values given in Table 3.4 correspond. Explain what conclusions you can draw based on these p-values. Your explanation should be phrased in terms of sales, TV, radio, and newspaper, rather than in terms of the coefficients of the linear model.
-
Carefully explain the differences between the KNN classifier and KNN regression methods.
-
Suppose we have a data set with five predictors, $X_1 = \mathrm{GPA}$, $X_2 = \mathrm{IQ}$, $X_3 = \mathrm{Level}$ (1 for College and 0 for High School), $X_4$ = interaction between GPA and IQ, and $X_5$ = interaction between GPA and Level. The response is starting salary after graduation (in thousands of dollars). Suppose we use least squares to fit the model, and get $\hat{\beta}_0 = 50$, $\hat{\beta}_1 = 20$, $\hat{\beta}_2 = 0.07$, $\hat{\beta}_3 = 35$, $\hat{\beta}_4 = 0.01$, $\hat{\beta}_5 = -10$.
- (a) Which answer is correct, and why?
  - i. For a fixed value of IQ and GPA, high school graduates earn more, on average, than college graduates.
  - ii. For a fixed value of IQ and GPA, college graduates earn more, on average, than high school graduates.
  - iii. For a fixed value of IQ and GPA, high school graduates earn more, on average, than college graduates provided that the GPA is high enough.
  - iv. For a fixed value of IQ and GPA, college graduates earn more, on average, than high school graduates provided that the GPA is high enough.
- (b) Predict the salary of a college graduate with IQ of 110 and a GPA of 4.0.
- (c) True or false: Since the coefficient for the GPA/IQ interaction term is very small, there is very little evidence of an interaction effect. Justify your answer.
-
I collect a set of data ($n = 100$ observations) containing a single predictor and a quantitative response. I then fit a linear regression model to the data, as well as a separate cubic regression, i.e. $Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 + \epsilon$.
- (a) Suppose that the true relationship between $X$ and $Y$ is linear, i.e. $Y = \beta_0 + \beta_1 X + \epsilon$. Consider the training residual sum of squares (RSS) for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
- (b) Answer (a) using test rather than training RSS.
- (c) Suppose that the true relationship between $X$ and $Y$ is not linear, but we don’t know how far it is from linear. Consider the training RSS for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
- (d) Answer (c) using test rather than training RSS.
-
Consider the fitted values that result from performing linear regression without an intercept. In this setting, the $i$th fitted value takes the form
$$\hat{y}_i = x_i \hat{\beta},$$
where
$$\hat{\beta} = \Bigl(\sum_{i=1}^n x_i y_i\Bigr) \Big/ \Bigl(\sum_{i'=1}^n x_{i'}^2\Bigr).$$
Show that we can write
$$\hat{y}_i = \sum_{i'=1}^n a_{i'} y_{i'}.$$
What is $a_{i'}$?
Note: We interpret this result by saying that the fitted values from linear regression are linear combinations of the response values.
-
Using (3.4), argue that in the case of simple linear regression, the least squares line always passes through the point $(\bar{x}, \bar{y})$.
-
It is claimed in the text that in the case of simple linear regression of $Y$ onto $X$, the $R^2$ statistic (3.17) is equal to the square of the correlation between $X$ and $Y$ (3.18). Prove that this is the case. For simplicity, you may assume that $\bar{x} = \bar{y} = 0$.
Applied
-
This question involves the use of simple linear regression on the Auto data set.
- (a) Use the sm.OLS() function to perform a simple linear regression with mpg as the response and horsepower as the predictor. Use the summarize() function to print the results. Comment on the output. For example:
  - i. Is there a relationship between the predictor and the response?
  - ii. How strong is the relationship between the predictor and the response?
  - iii. Is the relationship between the predictor and the response positive or negative?
  - iv. What is the predicted mpg associated with a horsepower of 98? What are the associated 95 % confidence and prediction intervals?
- (b) Plot the response and the predictor in a new set of axes ax. Use the ax.axline() method or the abline() function defined in the lab to display the least squares regression line.
- (c) Produce some of the diagnostic plots of the least squares regression fit as described in the lab. Comment on any problems you see with the fit.
-
This question involves the use of multiple linear regression on the Auto data set.
- (a) Produce a scatterplot matrix which includes all of the variables in the data set.
- (b) Compute the matrix of correlations between the variables using the DataFrame.corr() method.
- (c) Use the sm.OLS() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summarize() function to print the results. Comment on the output. For instance:
  - i. Is there a relationship between the predictors and the response? Use the anova_lm() function from statsmodels to answer this question.
  - ii. Which predictors appear to have a statistically significant relationship to the response?
  - iii. What does the coefficient for the year variable suggest?
- (d) Produce some of the diagnostic plots of the linear regression fit as described in the lab. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?
- (e) Fit some models with interactions as described in the lab. Do any interactions appear to be statistically significant?
- (f) Try a few different transformations of the variables, such as $\log X$, $\sqrt{X}$, $X^2$. Comment on your findings.
-
This question should be answered using the Carseats data set.
- (a) Fit a multiple regression model to predict Sales using Price, Urban, and US.
- (b) Provide an interpretation of each coefficient in the model. Be careful: some of the variables in the model are qualitative!
- (c) Write out the model in equation form, being careful to handle the qualitative variables properly.
- (d) For which of the predictors can you reject the null hypothesis $H_0: \beta_j = 0$?
- (e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.
- (f) How well do the models in (a) and (e) fit the data?
- (g) Using the model from (e), obtain 95 % confidence intervals for the coefficient(s).
- (h) Is there evidence of outliers or high leverage observations in the model from (e)?
-
In this problem we will investigate the t-statistic for the null hypothesis $H_0: \beta = 0$ in simple linear regression without an intercept. To begin, we generate a predictor x and a response y as follows.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)
- (a) Perform a simple linear regression of y onto x, without an intercept. Report the coefficient estimate $\hat{\beta}$, the standard error of this coefficient estimate, and the t-statistic and p-value associated with the null hypothesis $H_0: \beta = 0$. Comment on these results. (You can perform regression without an intercept using the keyword argument intercept=False to ModelSpec().)
- (b) Now perform a simple linear regression of x onto y without an intercept, and report the coefficient estimate, its standard error, and the corresponding t-statistic and p-value associated with the null hypothesis $H_0: \beta = 0$. Comment on these results.
- (c) What is the relationship between the results obtained in (a) and (b)?
-
(d) For the regression of $Y$ onto $X$ without an intercept, the t-statistic for $H_0: \beta = 0$ takes the form $\hat{\beta}/\mathrm{SE}(\hat{\beta})$, where $\hat{\beta}$ is given by (3.38), and where
$$\mathrm{SE}(\hat{\beta}) = \sqrt{\frac{\sum_{i=1}^n (y_i - x_i\hat{\beta})^2}{(n-1)\sum_{i'=1}^n x_{i'}^2}}.$$
(These formulas are slightly different from those given in Sections 3.1.1 and 3.1.2, since here we are performing regression without an intercept.) Show algebraically, and confirm numerically in Python, that the t-statistic can be written as
$$\frac{\sqrt{n-1}\,\sum_{i=1}^n x_i y_i}{\sqrt{\left(\sum_{i=1}^n x_i^2\right)\left(\sum_{i'=1}^n y_{i'}^2\right) - \left(\sum_{i'=1}^n x_{i'} y_{i'}\right)^2}}.$$
- (e) Using the results from (d), argue that the t-statistic for the regression of y onto x is the same as the t-statistic for the regression of x onto y.
- (f) In Python, show that when regression is performed with an intercept, the t-statistic for $H_0: \beta_1 = 0$ is the same for the regression of y onto x as it is for the regression of x onto y.
-
This problem involves simple linear regression without an intercept.
- (a) Recall that the coefficient estimate $\hat{\beta}$ for the linear regression of $Y$ onto $X$ without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of $X$ onto $Y$ the same as the coefficient estimate for the regression of $Y$ onto $X$?
- (b) Generate an example in Python with $n = 100$ observations in which the coefficient estimate for the regression of $X$ onto $Y$ is different from the coefficient estimate for the regression of $Y$ onto $X$.
- (c) Generate an example in Python with $n = 100$ observations in which the coefficient estimate for the regression of $X$ onto $Y$ is the same as the coefficient estimate for the regression of $Y$ onto $X$.
-
In this exercise you will create some simulated data and will fit simple linear regression models to it. Make sure to use the default random number generator with seed set to 1 prior to starting part (a) to ensure consistent results.
- (a) Using the normal() method of your random number generator, create a vector, x, containing 100 observations drawn from a $N(0, 1)$ distribution. This represents a feature, $X$.
- (b) Using the normal() method, create a vector, eps, containing 100 observations drawn from a $N(0, 0.25)$ distribution, that is, a normal distribution with mean zero and variance 0.25.
- (c) Using x and eps, generate a vector y according to the model
$$Y = -1 + 0.5X + \epsilon. \qquad (3.39)$$
What is the length of the vector y? What are the values of $\beta_0$ and $\beta_1$ in this linear model?
- (d) Create a scatterplot displaying the relationship between x and y. Comment on what you observe.
- (e) Fit a least squares linear model to predict y using x. Comment on the model obtained. How do $\hat{\beta}_0$ and $\hat{\beta}_1$ compare to $\beta_0$ and $\beta_1$?
- (f) Display the least squares line on the scatterplot obtained in (d). Draw the population regression line on the plot, in a different color. Use the legend() method of the axes to create an appropriate legend.
- (g) Now fit a polynomial regression model that predicts y using x and x^2. Is there evidence that the quadratic term improves the model fit? Explain your answer.
- (h) Repeat (a)–(f) after modifying the data generation process in such a way that there is less noise in the data. The model (3.39) should remain the same. You can do this by decreasing the variance of the normal distribution used to generate the error term $\epsilon$ in (b). Describe your results.
- (i) Repeat (a)–(f) after modifying the data generation process in such a way that there is more noise in the data. The model (3.39) should remain the same. You can do this by increasing the variance of the normal distribution used to generate the error term $\epsilon$ in (b). Describe your results.
- (j) What are the confidence intervals for $\beta_0$ and $\beta_1$ based on the original data set, the noisier data set, and the less noisy data set? Comment on your results.
-
This problem focuses on the collinearity problem.
- (a) Perform the following commands in Python:
rng = np.random.default_rng(10)
x1 = rng.uniform(0, 1, size=100)
x2 = 0.5 * x1 + rng.normal(size=100) / 10
y = 2 + 2 * x1 + 0.3 * x2 + rng.normal(size=100)
The last line corresponds to creating a linear model in which y is a function of x1 and x2. Write out the form of the linear model. What are the regression coefficients?
- (b) What is the correlation between x1 and x2? Create a scatterplot displaying the relationship between the variables.
- (c) Using this data, fit a least squares regression to predict y using x1 and x2. Describe the results obtained. What are $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$? How do these relate to the true $\beta_0$, $\beta_1$, and $\beta_2$? Can you reject the null hypothesis $H_0: \beta_1 = 0$? How about the null hypothesis $H_0: \beta_2 = 0$?
- (d) Now fit a least squares regression to predict y using only x1. Comment on your results. Can you reject the null hypothesis $H_0: \beta_1 = 0$?
- (e) Now fit a least squares regression to predict y using only x2. Comment on your results. Can you reject the null hypothesis $H_0: \beta_1 = 0$?
- (f) Do the results obtained in (c)–(e) contradict each other? Explain your answer.
- (g) Suppose we obtain one additional observation, which was unfortunately mismeasured. We use the function np.concatenate() to add this additional observation to each of x1, x2 and y.
x1 = np.concatenate([x1, [0.1]])
x2 = np.concatenate([x2, [0.8]])
y = np.concatenate([y, [6]])
Re-fit the linear models from (c) to (e) using this new data. What effect does this new observation have on each of the models? In each model, is this observation an outlier? A high-leverage point? Both? Explain your answers.
-
This problem involves the Boston data set, which we saw in the lab for this chapter. We will now try to predict per capita crime rate using the other variables in this data set. In other words, per capita crime rate is the response, and the other variables are the predictors.
- (a) For each predictor, fit a simple linear regression model to predict the response. Describe your results. In which of the models is there a statistically significant association between the predictor and the response? Create some plots to back up your assertions.
- (b) Fit a multiple regression model to predict the response using all of the predictors. Describe your results. For which predictors can we reject the null hypothesis $H_0: \beta_j = 0$?
- (c) How do your results from (a) compare to your results from (b)? Create a plot displaying the univariate regression coefficients from (a) on the x-axis, and the multiple regression coefficients from (b) on the y-axis. That is, each predictor is displayed as a single point in the plot. Its coefficient in a simple linear regression model is shown on the x-axis, and its coefficient estimate in the multiple linear regression model is shown on the y-axis.
- (d) Is there evidence of non-linear association between any of the predictors and the response? To answer this question, for each predictor $X$, fit a model of the form
$$Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 + \epsilon.$$
Footnotes
-
The assumption of linearity is often a useful working model. However, despite what many textbooks might tell us, we seldom believe that the true relationship is linear. ↩
-
This formula holds provided that the observations are uncorrelated. ↩
-
Approximately for several reasons. Equation 3.10 relies on the assumption that the errors are Gaussian. Also, the factor of 2 in front of the $\mathrm{SE}(\hat{\beta}_1)$ term will vary slightly depending on the number of observations in the linear regression. To be precise, rather than the number 2, (3.10) should contain the 97.5 % quantile of a $t$-distribution with $n-2$ degrees of freedom. Details of how to compute the 95 % confidence interval precisely in Python will be provided later in this chapter. ↩
-
In Table 3.1, a small p-value for the intercept indicates that we can reject the null hypothesis that $\beta_0 = 0$, and a small p-value for TV indicates that we can reject the null hypothesis that $\beta_1 = 0$. Rejecting the latter null hypothesis allows us to conclude that there is a relationship between TV and sales. Rejecting the former allows us to conclude that in the absence of TV expenditure, sales are non-zero. ↩
-
We note that in fact, the right-hand side of (3.18) is the sample correlation; thus, it would be more correct to write $\widehat{\mathrm{Cor}(X,Y)}$; however, we omit the “hat” for ease of notation. ↩
-
Even if the errors are not normally-distributed, the F-statistic approximately follows an F-distribution provided that the sample size n is large. ↩
-
The square of each t-statistic is the corresponding F-statistic. ↩
-
This is related to the important concept of multiple testing, which is the focus of Chapter 13. ↩
-
In other words, if we collect a large number of data sets like the Advertising data set, and we construct a confidence interval for the average sales on the basis of each data set (given $100,000 in TV and $20,000 in radio advertising), then 95 % of these confidence intervals will contain the true value of average sales. ↩
-
In the machine learning community, the creation of dummy variables to handle qualitative predictors is known as “one-hot encoding”. ↩
-
Technically $\beta_0$ is half the sum of the average debt for house owners and the average debt for non-house owners. Hence, $\beta_0$ is exactly equal to the overall average only if the two groups have an equal number of members. ↩
-
There could still in theory be a difference between South and West, although the data here does not suggest any difference. ↩
-
Actually, poly() is a wrapper for the workhorse and standalone function Poly() that does the work in building the model matrix. ↩