
General Linear Model

1 / 46

Announcements

  • Moving model comparison to a later time
  • Homework 1 due next Friday
  • Updated language on syllabus -- I am an Assisting Employee

Last time

Wrap up univariate regression

Confidence and prediction bands

2 / 46

Today

General linear model

3 / 46

Models

  • Thus far you have used t-tests, correlations, and regressions to test basic questions about the world.

  • These types of tests can be thought of as a model for how you think "the world works"

  • Our DV (Y) is what we are trying to understand

  • We hypothesize it has some relationship with our IV(s) (Xs)

$Y_i = b_0 + b_1X_i + e_i$
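
A minimal sketch of what this model claims, as a simulation (the values of b0, b1, n, and the residual SD below are made up for illustration):

set.seed(42)
n  <- 100
b0 <- 2                            # intercept (made up)
b1 <- 0.5                          # slope (made up)
x  <- rnorm(n)                     # an IV
e  <- rnorm(n, mean = 0, sd = 1)   # normally distributed residuals
y  <- b0 + b1 * x + e              # the DV the model implies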

4 / 46

Independent samples t-test

t.1 <- t.test(y ~ x,
              data = d)
  • Y is continuous
  • X is a categorical/nominal (dichotomous) factor

Univariate regression

r.1 <- lm(y ~ x,
          data = d)
  • Y is continuous
  • X is continuous
5 / 46

General linear model

This model (equation) can be very simple, as in a treatment/control experiment, or very complex, as in trying to understand something like academic achievement.

The majority of our models fall under the umbrella of a general(ized) linear model

  • The general linear model is a family of models that assume the relationship between your DV and IV(s) is linear and additive, and that your residuals are normally distributed.

  • This is a subset of the generalized linear model, which allows for non-linear associations and non-normally distributed outcomes (compare the two calls in the sketch below).
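
A quick sketch of the distinction in R (y, x, and d are placeholders here, not course data):

m.lm  <- lm(y ~ x, data = d)                       # general linear model
m.glm <- glm(y ~ x, data = d, family = binomial)   # generalized: logistic link for a binary y
# glm(y ~ x, data = d, family = gaussian) reproduces lm(y ~ x, data = d)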

All models imply our theory about how the data are generated (i.e., how the world works)

6 / 46

Example

glimpse(traffic)
## Rows: 280
## Columns: 3
## $ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17…
## $ tx <int> 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, …
## $ traffic.risk <dbl> 1.857143, 1.000000, 3.285714, 2.000000, 2.428571, 3.28571…
psych::describe(traffic, fast = T)
## vars n mean sd min max range se
## id 1 280 140.50 80.97 1 280.00 279.00 4.84
## tx 2 274 0.62 0.49 0 1.00 1.00 0.03
## traffic.risk 3 272 2.35 0.81 1 4.71 3.71 0.05
7 / 46
traffic %>%
  ggplot() +
  aes(x = as.factor(tx), y = traffic.risk) +
  geom_boxplot() + geom_jitter() +
  theme_bw(base_size = 20)

8 / 46

example

t.1 <- t.test(traffic.risk ~ tx, data = traffic, var.equal = TRUE)
9 / 46

example

t.1 <- t.test(traffic.risk ~ tx, data = traffic, var.equal = TRUE)
##
## Two Sample t-test
##
## data: traffic.risk by tx
## t = 4.9394, df = 268, p-value = 1.381e-06
## alternative hypothesis: true difference in means between group 0 and group 1 is not equal to 0
## 95 percent confidence interval:
## 0.2893360 0.6728755
## sample estimates:
## mean in group 0 mean in group 1
## 2.650641 2.169535
10 / 46

example

r.1 <- cor.test(~ traffic.risk + tx, data = traffic)
11 / 46

example

r.1 <- cor.test(~ traffic.risk + tx, data = traffic)
##
## Pearson's product-moment correlation
##
## data: traffic.risk and tx
## t = -4.9394, df = 268, p-value = 1.381e-06
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.3946279 -0.1755371
## sample estimates:
## cor
## -0.28886
12 / 46

example

a.1 <- aov(traffic.risk ~ tx, data = traffic)
13 / 46

example

a.1 <- aov(traffic.risk ~ tx, data = traffic)
## Df Sum Sq Mean Sq F value Pr(>F)
## tx 1 14.8 14.800 24.4 1.38e-06 ***
## Residuals 268 162.6 0.607
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 10 observations deleted due to missingness
14 / 46

example cont

mod.1 <- lm(traffic.risk ~ tx, data = traffic)
summary(mod.1)
15 / 46

example cont

mod.1 <- lm(traffic.risk ~ tx, data = traffic)
summary(mod.1)
##
## Call:
## lm(formula = traffic.risk ~ tx, data = traffic)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.65064 -0.59811 -0.02668 0.54475 2.54475
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.65064 0.07637 34.707 < 2e-16 ***
## tx -0.48111 0.09740 -4.939 1.38e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7789 on 268 degrees of freedom
## (10 observations deleted due to missingness)
## Multiple R-squared: 0.08344, Adjusted R-squared: 0.08002
## F-statistic: 24.4 on 1 and 268 DF, p-value: 1.381e-06
16 / 46

example summary

Same p-values for each test; same SS; same test!

  • correlation gives you an effect size (r) and df

  • t-test gives you a t & df (output may give you group M and s)

  • ANOVA gives you an F (and SSs)

  • linear model (regression) gives you an equation in addition to F and SS

17 / 46

t-test as regression

$Y_i = b_0 + b_1X_i + e_i$

$\text{traffic risk}_i = b_0 + b_1\text{tx}_i + e_i$

  • Each individual (county) has a unique Y value, X value, and residual

  • The model has only a single $b_0$ and a single $b_1$ term.

    • $b_0$ is the intercept
    • $b_1$ quantifies the relationship between the IV and the DV.
18 / 46

What do the estimates tell us?

summary(mod.1)
## [...]
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.65064 0.07637 34.707 < 2e-16 ***
## tx -0.48111 0.09740 -4.939 1.38e-06 ***
## [...]
psych::describeBy(traffic$traffic.risk,
                  group = traffic$tx)
##
## Descriptive statistics by group
## group: 0
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 104 2.65 0.79 2.71 2.67 0.95 1 4.14 3.14 -0.22 -0.96 0.08
## ------------------------------------------------------------
## group: 1
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 166 2.17 0.77 2.07 2.12 0.74 1 4.71 3.71 0.67 0.06 0.06
19 / 46

How to interpret regression estimates

  • The intercept is the mean of the group coded 0 on the variable tx

  • The regression coefficient is the difference in means between the groups (i.e., the slope); the sketch below checks both against the group means
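
A minimal sketch checking this against the group means (na.rm = TRUE handles the missing values):

m0 <- mean(traffic$traffic.risk[traffic$tx == 0], na.rm = TRUE)   # 2.65
m1 <- mean(traffic$traffic.risk[traffic$tx == 1], na.rm = TRUE)   # 2.17
c(intercept = m0, slope = m1 - m0)   # matches coef(mod.1): 2.651, -0.481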

20 / 46

21 / 46

22 / 46

How to interpret regression estimates

  • The intercept ($b_0$) signifies the level of Y when all of your model IVs (Xs) are zero

  • The regression coefficient ($b_1$) signifies the difference in Y for a one-unit change in your X

  • As with last semester, you have estimates (like $\bar{X}$) and standard errors; you can ask how likely those estimates are under a null hypothesis, or use them to create a CI

23 / 46

t-test as regression

Regression coefficients are another way of presenting the expected means of each group: instead of $M_1$ and $M_2$, we're given $M_1$ and $\Delta$, the difference.

Now let's compare the inferential test of the two.

24 / 46

Matrix algebra for independent samples t-test:

$(T'T)^{-1}T'X = (b)$

Matrix algebra for linear regression:

$(X'X)^{-1}X'y = b$

summary(mod.1)
##
## Call:
## lm(formula = traffic.risk ~ tx, data = traffic)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.65064 -0.59811 -0.02668 0.54475 2.54475
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.65064 0.07637 34.707 < 2e-16 ***
## tx -0.48111 0.09740 -4.939 1.38e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7789 on 268 degrees of freedom
## (10 observations deleted due to missingness)
## Multiple R-squared: 0.08344, Adjusted R-squared: 0.08002
## F-statistic: 24.4 on 1 and 268 DF, p-value: 1.381e-06
25 / 46
t.test(traffic.risk ~ tx, data = traffic, var.equal = T)
##
## Two Sample t-test
##
## data: traffic.risk by tx
## t = 4.9394, df = 268, p-value = 1.381e-06
## alternative hypothesis: true difference in means between group 0 and group 1 is not equal to 0
## 95 percent confidence interval:
## 0.2893360 0.6728755
## sample estimates:
## mean in group 0 mean in group 1
## 2.650641 2.169535
26 / 46

Statistical Inference

The p-values for the two tests are the same, because they are the same test.

The probability distribution may differ... sort of. Recall that all these distributions are derived from the standard normal.

  • t-test uses a t-distribution
  • regression uses an F-distribution for the omnibus test
  • regression uses a t-distribution for the test of the coefficients

In the case of a single binary predictor, the t-statistic for the t-test will be identical to the t-statistic of the regression coefficient test.

The F-statistic of the omnibus test will be the t-statistic squared: $F = t^2$
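
Both identities are easy to verify from the fitted object (a sketch using standard summary.lm accessors):

s <- summary(mod.1)
s$coefficients["tx", "t value"]     # -4.939: same magnitude as the t-test's t
s$coefficients["tx", "t value"]^2   # 24.4
s$fstatistic["value"]               # 24.4: F = t^2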

27 / 46
library(here)
expertise = read.csv(here("data/expertise.csv"))
cor.test(expertise$self_perceived_knowledge, expertise$overclaiming_proportion, use = "pairwise")
##
## Pearson's product-moment correlation
##
## data: expertise$self_perceived_knowledge and expertise$overclaiming_proportion
## t = 7.762, df = 200, p-value = 4.225e-13
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.3675104 0.5806336
## sample estimates:
## cor
## 0.4811502
28 / 46
mod.2 = lm(overclaiming_proportion ~ self_perceived_knowledge, data = expertise)
summary(mod.2)
##
## Call:
## lm(formula = overclaiming_proportion ~ self_perceived_knowledge,
## data = expertise)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.50551 -0.15610 0.00662 0.12167 0.54215
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.11406 0.05624 -2.028 0.0439 *
## self_perceived_knowledge 0.09532 0.01228 7.762 4.22e-13 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.2041 on 200 degrees of freedom
## Multiple R-squared: 0.2315, Adjusted R-squared: 0.2277
## F-statistic: 60.25 on 1 and 200 DF, p-value: 4.225e-13
29 / 46
mod.3 = lm(self_perceived_knowledge ~ overclaiming_proportion, data = expertise)
summary(mod.3)
##
## Call:
## lm(formula = self_perceived_knowledge ~ overclaiming_proportion,
## data = expertise)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.8150 -0.6761 0.0103 0.7305 3.1850
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.6801 0.1206 30.515 < 2e-16 ***
## overclaiming_proportion 2.4288 0.3129 7.762 4.22e-13 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.03 on 200 degrees of freedom
## Multiple R-squared: 0.2315, Adjusted R-squared: 0.2277
## F-statistic: 60.25 on 1 and 200 DF, p-value: 4.225e-13
30 / 46

Regression models provide all the same information as the t-test or correlation, and then some.

Regression omnibus tests are the same as t-tests or correlation tests, even the same as ANOVAs. But the coefficient tests give you additional information, like intercepts, and they make it easier to calculate predicted values and to assess relative fit.

31 / 46

Predictions

  • Predictions ($\hat{Y}$) take the form $E(Y|X)$

  • They are created by simply plugging a person's Xs into the estimated model

  • If you have bs and Xs, you can create a prediction

32 / 46

$\hat{Y}_i = 2.6506410 - 0.4811057X_i$

mod.1 <- lm(traffic.risk ~ tx, data = traffic)
summary(mod.1)
##
## Call:
## lm(formula = traffic.risk ~ tx, data = traffic)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.65064 -0.59811 -0.02668 0.54475 2.54475
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.65064 0.07637 34.707 < 2e-16 ***
## tx -0.48111 0.09740 -4.939 1.38e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7789 on 268 degrees of freedom
## (10 observations deleted due to missingness)
## Multiple R-squared: 0.08344, Adjusted R-squared: 0.08002
## F-statistic: 24.4 on 1 and 268 DF, p-value: 1.381e-06
33 / 46

Predictions

  • Unless you've collected a new dataset, you can't really assess the accuracy of predictions. Instead, we evaluate the fit to the data used to create the model. We want $\hat{Y}_i$ to be close to our actual data for each person ($Y_i$)

  • The difference between the actual data and our guess ($Y_i - \hat{Y}_i = e_i$) is the residual, how far we are "off". This tells us how good our fit is.

  • You can have the same estimates for two models but completely different fit; the sketch below shows how to quantify fit for mod.1
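
As a sketch, the fit of mod.1 can be summarized from standard summary.lm fields:

sum(residuals(mod.1)^2)    # residual sum of squares: smaller means closer guesses
summary(mod.1)$sigma       # residual standard error (0.78)
summary(mod.1)$r.squared   # proportion of variance explained (0.083)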

34 / 46

Which one has better fit?

35 / 46

Calculating fitted and predicted values

$\hat{Y}_i = b_0 + b_1X_i$

$Y_i = b_0 + b_1X_i + e_i$

$Y_i - \hat{Y}_i = e_i$

  • Can you plug in the numbers and calculate County 3's fitted and residual scores without explicitly asking the lm object for its residuals and fitted values? (See the sketch below.)
traffic[3, ]
## id tx traffic.risk
## 3 3 1 3.285714

$\hat{Y}_i = 2.65 - 0.48X_i$
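
A minimal sketch of doing this by hand for County 3:

b  <- coef(mod.1)                 # b0 = 2.6506, b1 = -0.4811
x3 <- traffic$tx[3]               # 1
y3 <- traffic$traffic.risk[3]     # 3.2857
y3.hat <- b[1] + b[2] * x3        # fitted: 2.6506 - 0.4811 = 2.1695
y3 - y3.hat                       # residual: 3.2857 - 2.1695 = 1.1162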

36 / 46

Predicting with new data

mod.1
##
## Call:
## lm(formula = traffic.risk ~ tx, data = traffic)
##
## Coefficients:
## (Intercept) tx
## 2.6506 -0.4811
new.df = data.frame(
  tx = c(0, 1)
)
predict(mod.1, newdata = new.df)
## 1 2
## 2.650641 2.169535
37 / 46

A tangent about "predict"

"Predict" can refer to multiple things in a regression analyses.

  • Fitted values of the data used to estimate the model ($\hat{Y}_i$)
  • Fitted values of new data (machine learning)
  • The range of possible/expected Y's for any given X (prediction interval)

Some people are really picky about when you can and cannot use this term.

  • I am not one of those people.
  • Pick a definition, explain it, and be consistent.
38 / 46

A tangent about "predict"

predict(mod.1,
        newdata = new.df,
        interval = "confidence")
## fit lwr upr
## 1 2.650641 2.500274 2.801008
## 2 2.169535 2.050517 2.288554
predict(mod.1,
        newdata = new.df,
        interval = "prediction")
## fit lwr upr
## 1 2.650641 1.1098391 4.191443
## 2 2.169535 0.6314761 3.707594
39 / 46

40 / 46

Back to statistical inference

  • To evaluate our model, we have to compare our guesses to some alternative guesses to see if we are doing well or not.

  • What is our best guess (i.e., prediction) if we didn't collect any data?

$\hat{Y} = \bar{Y}$

  • Regression can be thought of as asking: is $E(Y|X)$ better than $E(Y)$?

  • To the extent that we can generate different guesses of Y based on the values of the IVs, our model is doing well. Said differently, the closer our model is to the "actual" data-generating model, the closer our guesses ($\hat{Y}$) will be to our actual data ($Y$). The sketch below makes the comparison concrete.
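
A sketch of that comparison with the traffic data (complete cases, to match lm's listwise deletion):

d.cc <- na.omit(traffic[, c("traffic.risk", "tx")])
mean(d.cc$traffic.risk)                     # E(Y): one guess for everyone (~2.35)
tapply(d.cc$traffic.risk, d.cc$tx, mean)    # E(Y|X): a different guess per group (2.65, 2.17)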

41 / 46

Partitioning variation

$\sum(Y - \bar{Y})^2 = \sum(\hat{Y} - \bar{Y})^2 + \sum(Y - \hat{Y})^2$

  • SS total = SS between + SS within

  • SS total = SS regression + SS residual (or error); each piece is computed by hand in the sketch below
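
Each piece can be computed from mod.1 (a sketch; complete cases only, since lm dropped 10 rows):

y.hat <- fitted(mod.1)
e     <- residuals(mod.1)
y     <- y.hat + e            # observed Y for the complete cases
sum((y - mean(y))^2)          # SS total (~177.4)
sum((y.hat - mean(y))^2)      # SS regression (14.8, matching the earlier aov table)
sum(e^2)                      # SS residual (162.6)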

Technically, we haven't gotten to ANOVA yet, but you can run and interpret an ANOVA with no problems given what we've talked about already.

42 / 46
set.seed(0214)
traffic$new_tx = sample(c("1","2","3"), size = nrow(traffic), replace = T)
a.2 = aov(traffic.risk ~ new_tx, data = traffic)
summary(a.2)
## Df Sum Sq Mean Sq F value Pr(>F)
## new_tx 2 1.74 0.8697 1.321 0.269
## Residuals 269 177.11 0.6584
## 8 observations deleted due to missingness
43 / 46

summary

The general linear model is a way of mathematically building a theoretical model and testing it with data. GLMs assume the relationship between your IVs and DV is linear and additive, and that the residuals are normally distributed.

  • t-tests, ANOVAs, even $\chi^2$ tests are special cases of the general linear model. The benefit of looking at these tests through the lens of a linear model is that it provides all the information of those tests and more! Plus, it gives you a more systematic way of 1) building and testing your theoretical model and 2) comparing alternative theoretical models.

You can get estimates and fit statistics from the model. Both are important.

44 / 46

45 / 46

Next time

Part- and partial correlations

46 / 46
