Deck 18: The Theory of Multiple Regression
1
Suppose that a sample of n = 20 households has the sample means and sample covariances below for a dependent variable and two regressors:

Sample means: Ȳ = 6.39, X̄₁ = 7.24, X̄₂ = 4.00.
Sample variances and covariances: s²_Y = 0.26, s²_X₁ = 0.80, s²_X₂ = 2.40, s_X₁X₂ = 0.28, s_X₁Y = 0.22, s_X₂Y = 0.32.
a. Calculate the OLS estimates of β₀, β₁, and β₂. Calculate s²_û. Calculate the R² of the regression.
b. Suppose that all six assumptions in Key Concept 18.1 hold. Test the hypothesis that β₁ = 0 at the 5% significance level.
b) Given that the Gauss-Markov conditions are satisfied, the hypothesis can be tested using a t-test with homoskedasticity-only standard errors.
The standard error is the square root of the variance. Under homoskedasticity, the covariance matrix of the coefficient estimates is s²_û (XᵀX)⁻¹.
The diagonal entries of this matrix are the variances of the coefficient estimates: the first corresponds to β̂₀, the second to β̂₁, and the last to β̂₂. Hence the standard error of β̂₁ is the square root of the second diagonal entry.
The estimated coefficient is 0.25, so the t-statistic is approximately 2.19. Since 2.19 exceeds the 1.96 critical value at the 5% significance level, the null hypothesis is rejected: the coefficient is statistically significant.
a) Write the model as the general linear regression Y = Xβ + U.
Here Y is the n × 1 vector of observations on the dependent variable, X is the regressor matrix (a column of ones followed by the columns for X₁ and X₂), β stores the regression coefficients, and U is the error vector.
There are n observations on the dependent variable and on each regressor; X₁ᵢ is the value of X₁ for the i-th household and X₂ᵢ is the value of X₂ for the i-th household. The OLS estimator of the coefficient vector is β̂ = (XᵀX)⁻¹XᵀY, where ᵀ denotes the transpose.
Multiplying out the matrices expresses XᵀX and XᵀY in terms of sums such as ΣX₁ᵢ, ΣX₁ᵢ², ΣX₁ᵢX₂ᵢ, ΣX₁ᵢYᵢ, and so on.
These sums can be computed from the given sample statistics. Sums involving a single variable follow directly from the sample means: ΣX₁ᵢ = nX̄₁, ΣX₂ᵢ = nX̄₂, and ΣYᵢ = nȲ.
Sums involving products of two variables follow from the sample variances and covariances. For example, the sample variance of X₁ is s²_X₁ = Σ(X₁ᵢ − X̄₁)²/(n − 1), which contains ΣX₁ᵢ² as one of its terms; isolating that term gives ΣX₁ᵢ² = (n − 1)s²_X₁ + nX̄₁².
Analogously, ΣX₂ᵢ² = (n − 1)s²_X₂ + nX̄₂², ΣYᵢ² = (n − 1)s²_Y + nȲ², ΣX₁ᵢX₂ᵢ = (n − 1)s_X₁X₂ + nX̄₁X̄₂, ΣX₁ᵢYᵢ = (n − 1)s_X₁Y + nX̄₁Ȳ, and ΣX₂ᵢYᵢ = (n − 1)s_X₂Y + nX̄₂Ȳ. Substituting these expressions into XᵀX and XᵀY leaves only known quantities.
Substitute the values n = 20, X̄₁ = 7.24, X̄₂ = 4.00, Ȳ = 6.39, s²_X₁ = 0.80, s²_X₂ = 2.40, s_X₁X₂ = 0.28, s_X₁Y = 0.22, and s_X₂Y = 0.32 into the expressions above to obtain the matrices.
The remaining matrix, XᵀY, is built from the same sums. Completing the computation β̂ = (XᵀX)⁻¹XᵀY then yields the OLS estimates; in particular β̂₁ ≈ 0.25, the value used in part (b). A numerical cross-check is sketched below.
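The estimates can be reproduced numerically from the reported moments alone. The sketch below is a minimal NumPy implementation; it assumes the reported variances and covariances use the usual 1/(n − 1) convention, and the variable names are illustrative rather than part of the original solution.

```python
import numpy as np

# Sample moments reported in the problem (1/(n-1) convention assumed).
n = 20
ybar, x1bar, x2bar = 6.39, 7.24, 4.00
s2_x1, s2_x2 = 0.80, 2.40
s_x1x2, s_x1y, s_x2y = 0.28, 0.22, 0.32

# sum(a_i * b_i) = (n - 1)*cov(a, b) + n*abar*bbar
def cross_sum(cov, abar, bbar):
    return (n - 1) * cov + n * abar * bbar

XtX = np.array([
    [n,         n * x1bar,                        n * x2bar],
    [n * x1bar, cross_sum(s2_x1, x1bar, x1bar),   cross_sum(s_x1x2, x1bar, x2bar)],
    [n * x2bar, cross_sum(s_x1x2, x1bar, x2bar),  cross_sum(s2_x2, x2bar, x2bar)],
])
XtY = np.array([
    n * ybar,
    cross_sum(s_x1y, x1bar, ybar),
    cross_sum(s_x2y, x2bar, ybar),
])

beta_hat = np.linalg.solve(XtX, XtY)   # (beta0, beta1, beta2)
print("OLS estimates:", beta_hat)
```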
The error-variance estimator (the squared standard error of the regression) is s²_û = ÛᵀÛ/(n − k − 1), where Û = Y − Xβ̂ is the vector of residuals.
The sum of squared residuals can be expanded as ÛᵀÛ = YᵀY − 2β̂ᵀXᵀY + β̂ᵀXᵀXβ̂. The first term, YᵀY = ΣYᵢ², is the second moment of Y, obtained from n = 20, s²_Y = 0.26, and Ȳ = 6.39 via YᵀY = (n − 1)s²_Y + nȲ².
The second term, 2β̂ᵀXᵀY, and the third term, β̂ᵀXᵀXβ̂, are evaluated from the quantities already computed. Combining the three terms gives the sum of squared residuals ÛᵀÛ ≈ 3.3.
With k = 2 regressors, n = 20, and ÛᵀÛ ≈ 3.3, the error-variance estimate is s²_û = 3.3/(20 − 2 − 1) ≈ 0.19.
The R² can be computed as R² = 1 − SSR/TSS. The SSR is the sum of squared residuals found above, SSR ≈ 3.3. The TSS is Σ(Yᵢ − Ȳ)², a multiple of the sample variance of Y: TSS = (n − 1)s²_Y = 19 × 0.26 ≈ 4.94. Then R² = 1 − 3.3/4.94 ≈ 0.33.
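The remaining quantities from parts (a) and (b) can be checked the same way. This is a minimal, self-contained sketch under the same 1/(n − 1) assumption as above; small differences from the rounded figures quoted in the write-up (0.25 and 2.19) can arise from the degrees-of-freedom convention assumed for the reported moments.

```python
import numpy as np

# Same sample moments as above (1/(n-1) convention assumed).
n, k = 20, 2
ybar, x1bar, x2bar = 6.39, 7.24, 4.00
s2_y, s2_x1, s2_x2 = 0.26, 0.80, 2.40
s_x1x2, s_x1y, s_x2y = 0.28, 0.22, 0.32

def cs(cov, a, b):                      # sum(a_i * b_i)
    return (n - 1) * cov + n * a * b

XtX = np.array([[n,          n * x1bar,                n * x2bar],
                [n * x1bar,  cs(s2_x1, x1bar, x1bar),  cs(s_x1x2, x1bar, x2bar)],
                [n * x2bar,  cs(s_x1x2, x1bar, x2bar), cs(s2_x2, x2bar, x2bar)]])
XtY = np.array([n * ybar, cs(s_x1y, x1bar, ybar), cs(s_x2y, x2bar, ybar)])
YtY = cs(s2_y, ybar, ybar)

beta = np.linalg.solve(XtX, XtY)

# Sum of squared residuals, error variance, and R^2.
ssr = YtY - beta @ XtY                  # U'U = Y'Y - beta'X'Y (normal equations)
s2_u = ssr / (n - k - 1)
tss = (n - 1) * s2_y                    # total sum of squares about the mean
r2 = 1 - ssr / tss
print(f"SSR = {ssr:.3f}, s2_u = {s2_u:.3f}, R^2 = {r2:.3f}")

# Part (b): homoskedasticity-only standard error and t-statistic for beta1.
var_beta = s2_u * np.linalg.inv(XtX)
se_beta1 = np.sqrt(var_beta[1, 1])
t_stat = beta[1] / se_beta1
print(f"beta1 = {beta[1]:.3f}, SE = {se_beta1:.3f}, t = {t_stat:.2f}")
```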
2
Consider the regression model Y = Xβ + U. Partition X as [X₁ X₂] and β conformably as β = (β₁ᵀ, β₂ᵀ)ᵀ, where X₁ has k₁ columns and X₂ has k₂ columns. Suppose that X₂ᵀY = 0.
Let
a. Show that
b. Consider the regression described in Equation (12.17). Let W = [1 W₁ W₂ … W_r], where 1 is an n × 1 vector of ones and W₁ is the n × 1 vector with i-th element W₁ᵢ, and so forth. Let Û^TSLS denote the vector of two-stage least squares residuals.
i. Show that
ii. Show that the method for computing the J-statistic described in Key Concept 12.6 (using a homoskedasticity-only F-statistic) and the formula in Equation (18.63) produce the same value for the J-statistic. [Hint: Use the results in (a), (b, i), and Exercise 18.13.]
a) The regression is Y = Xβ + U, where X = [X₁ X₂] and X₁, X₂ are matrices with k₁ and k₂ columns, respectively. Define R as the selection matrix that extracts the X₁ block, so the relevant product reduces to X₁ alone. Since X₂ᵀY = 0, only X₁ is relevant to the estimated coefficients in that block, and the stated expression follows.
b) The TSLS regression is
Here V is the error from the first-stage estimation of X. Substituting gives the estimated residual vector, which splits into two terms. The first term is zero because of instrument exogeneity, and the second term is zero by the conditional mean zero assumption.
The previous parts show that the homoskedasticity-only F-statistic is built from the difference between the restricted and unrestricted sums of squared residuals; the J-statistic can therefore be computed from it as J = mF, where m is the number of instruments, which matches Equation (18.63).
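As a concrete illustration of the Key Concept 12.6 recipe, the sketch below simulates an overidentified IV model (the data-generating process and variable names are invented for illustration), computes TSLS residuals, and forms J = mF from the homoskedasticity-only F-statistic of the auxiliary regression of the residuals on the instruments and the included exogenous regressor (here just the constant).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5000, 2                                   # m = number of instruments

# Illustrative DGP: one endogenous regressor, two valid instruments.
z = rng.normal(size=(n, m))
v = rng.normal(size=n)
u = 0.6 * v + rng.normal(size=n)                 # u correlated with v, so x is endogenous
x = z @ np.array([1.0, 1.0]) + v
y = 0.5 * x + u

Z = np.column_stack([np.ones(n), z])             # constant + instruments
X = np.column_stack([np.ones(n), x])             # included regressors

# TSLS: beta = (X'Pz X)^{-1} X'Pz Y with Pz = Z (Z'Z)^{-1} Z'.
Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_tsls = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
u_hat = y - X @ beta_tsls                        # TSLS residuals

# Auxiliary regression of u_hat on [1, z]; F-test that the z-coefficients are zero.
g_u = np.linalg.solve(Z.T @ Z, Z.T @ u_hat)      # unrestricted: constant + instruments
ssr_u = np.sum((u_hat - Z @ g_u) ** 2)
ssr_r = np.sum((u_hat - u_hat.mean()) ** 2)      # restricted: constant only
F = ((ssr_r - ssr_u) / m) / (ssr_u / (n - m - 1))
J = m * F                                        # compare to chi-squared with m - 1 d.o.f.
print(f"J-statistic: {J:.2f}")                   # small here, since the instruments are valid
```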
3
You are analyzing a linear regression model with 500 observations and one regressor. Explain how you would construct a confidence interval for β₁ if:
a. Assumptions #1 through #4 in Key Concept 18.1 are true, but you think Assumption #5 or #6 might not be true.
b. Assumptions #1 through #5 are true, but you think Assumption #6 might not be true (give two ways to construct the confidence interval).
c. Assumptions #1 through #6 are true.
a) These conditions include only conditional mean zero, i.i.d. sampling, finite fourth moments, and no perfect multicollinearity, so the OLS estimator is consistent and unbiased. However, the variance of u may be heteroskedastic, so the regression should be run with heteroskedasticity-robust standard errors. The 95% confidence interval is then constructed as usual, β̂₁ ± 1.96 SE(β̂₁).
b) These conditions add homoskedasticity to those in part (a), so the OLS estimator is again consistent and unbiased. Although u may not be normally distributed, with n = 500 the normal approximation is accurate, so one way is to use homoskedasticity-only standard errors; a second way is to use heteroskedasticity-robust standard errors, which should give similar values here. The confidence interval is then constructed as in part (a) with either standard error.
c) If all the assumptions from part (b), including the conditional normality of u, hold, then it is enough to run the regression with homoskedasticity-only standard errors and construct the interval as above. A sketch of the robust and homoskedasticity-only constructions follows.
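A minimal sketch of both constructions using statsmodels; the data-generating process and variable names are invented for illustration, and the library choice is ours rather than part of the original answer.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
u = rng.normal(size=n) * (1 + 0.5 * np.abs(x))   # heteroskedastic errors
y = 1.0 + 2.0 * x + u

X = sm.add_constant(x)

# (a) Heteroskedasticity-robust standard errors (valid even if Assumption #5/#6 fail).
robust = sm.OLS(y, X).fit(cov_type="HC1")
print("robust 95% CI for beta1:       ", robust.conf_int()[1])

# (b)/(c) Homoskedasticity-only standard errors (appropriate when Assumption #5 holds).
classic = sm.OLS(y, X).fit()
print("homoskedastic 95% CI for beta1:", classic.conf_int()[1])
```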
4
(Consistency of clustered standard errors.) Consider the panel data model Y_it = βX_it + α_i + u_it, where all variables are scalars. Assume that Assumptions #1, #2, and #4 in Key Concept 10.3 hold, and strengthen Assumption #3 so that X_it and u_it have eight nonzero finite moments. Let M = I_T − ιι′/T, where ι is a T × 1 vector of ones. Also let Y_i = (Y_i1, Y_i2, …, Y_iT)′, X_i = (X_i1, X_i2, …, X_iT)′, and u_i = (u_i1, u_i2, …, u_iT)′. For the asymptotic calculations in this problem, suppose that T is fixed and n → ∞.
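A sketch of the estimator the problem describes: the de-meaning matrix M, the pooled within (fixed-effects) estimator, and a by-entity clustered variance estimator. The panel data-generating process, coefficient values, and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, T, beta = 200, 5, 1.0

# Illustrative panel DGP with entity effects and within-entity correlated errors.
alpha = rng.normal(size=(n_entities, 1))
X = rng.normal(size=(n_entities, T)) + alpha          # X correlated with the effect
common = rng.normal(size=(n_entities, 1))
u = 0.7 * common + rng.normal(size=(n_entities, T))   # errors correlated within entity
Y = beta * X + alpha + u

# De-meaning matrix M = I_T - (1/T) * ones * ones'.
M = np.eye(T) - np.ones((T, T)) / T
Xt, Yt = X @ M, Y @ M                                 # within-transformed data

# Pooled OLS on the de-meaned data (the within / fixed-effects estimator).
beta_hat = np.sum(Xt * Yt) / np.sum(Xt * Xt)
resid = Yt - beta_hat * Xt

# Clustered-by-entity variance estimator: sum over entities of (X_i' u_i)^2.
scores = np.sum(Xt * resid, axis=1)
var_clu = np.sum(scores ** 2) / np.sum(Xt * Xt) ** 2
print(f"beta_hat = {beta_hat:.3f}, clustered SE = {np.sqrt(var_clu):.3f}")
```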
5
Let W be an m × 1 vector with covariance matrix Σ_W, where Σ_W is finite and positive definite. Let c be a nonrandom m × 1 vector, and let Q = c′W.
a. Show that var(Q) = c′Σ_W c.
b. Suppose that c ≠ 0_m. Show that 0 < var(Q) < ∞.
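A short sketch of the algebra behind both parts, writing μ_W = E(W):

```latex
\operatorname{var}(Q)
  = E\!\left[(c'W - c'\mu_W)^2\right]
  = E\!\left[c'(W - \mu_W)(W - \mu_W)'c\right]
  = c'\,E\!\left[(W - \mu_W)(W - \mu_W)'\right]c
  = c'\Sigma_W c .
```

For part (b): if c ≠ 0_m, positive definiteness of Σ_W gives c′Σ_W c > 0, and finiteness of Σ_W gives c′Σ_W c < ∞, so 0 < var(Q) < ∞.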
6
This exercise takes up the problem of missing data discussed in Section 9.2. Consider the regression model Y_i = βX_i + u_i, where all variables are scalars and the constant term/intercept is omitted for convenience.
a. Suppose that the least squares assumptions in Key Concept 4.3 are satisfied. Show that the least squares estimator of β is unbiased and consistent.
b. Now suppose that some of the observations are missing. Let I_i denote a binary random variable that indicates the nonmissing observations; that is, I_i = 1 if observation i is not missing and I_i = 0 if observation i is missing. Assume that (I_i, X_i, u_i) are i.i.d.
i. Show that the OLS estimator, computed using the nonmissing observations, can be written as β̂ = β + (Σᵢ IᵢXᵢuᵢ)/(Σᵢ IᵢXᵢ²).
ii. Suppose that the data are "missing completely at random" in the sense that Pr(I_i = 1 | X_i, u_i) = p, where p is a constant. Show that β̂ is unbiased and consistent.
iii. Suppose that the probability that the i-th observation is missing depends on X_i but not on u_i; that is, Pr(I_i = 1 | X_i, u_i) = p(X_i). Show that β̂ is unbiased and consistent.
iv. Suppose that the probability that the i-th observation is missing depends on both X_i and u_i; that is, Pr(I_i = 1 | X_i, u_i) = p(X_i, u_i). Is β̂ unbiased? Is β̂ consistent? Explain.
c. Suppose that β = 1 and that X_i and u_i are mutually independent standard normal random variables [so that both X_i and u_i are distributed N(0, 1)]. Suppose that I_i = 1 when Y_i ≥ 0 but I_i = 0 when Y_i < 0. Is β̂ unbiased? Is β̂ consistent? Explain.
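A small simulation sketch of the contrast between missing-completely-at-random data and selection on the outcome in part (c). The sample size, seed, and helper function are invented for illustration; the data-generating process follows the problem statement.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 1.0

x = rng.normal(size=n)
u = rng.normal(size=n)
y = beta * x + u

def ols_no_intercept(xs, ys):
    # OLS slope in the no-intercept model Y = beta*X + u.
    return np.sum(xs * ys) / np.sum(xs * xs)

# Full sample: unbiased and consistent.
print("full sample:       ", ols_no_intercept(x, y))

# (b ii) Missing completely at random: still close to beta = 1.
keep = rng.random(n) < 0.5
print("MCAR subsample:    ", ols_no_intercept(x[keep], y[keep]))

# (c) Selection on the outcome: keep only observations with Y >= 0.
keep = y >= 0
print("Y >= 0 subsample:  ", ols_no_intercept(x[keep], y[keep]))  # noticeably below 1
```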
7
Suppose that Assumptions #1 through #5 in Key Concept 18.1 are true, but that Assumption #6 is not. Does the result in Equation (18.31) hold? Explain.
8
Consider the regression model in matrix form Y = Xβ + Wγ + U, where X and W are matrices of regressors and β and γ are vectors of unknown regression coefficients. Let M_W = I_n − W(W′W)⁻¹W′.
a. Show that the OLS estimators of β and γ can be written as
b. Show that
c. Show that
d. The Frisch-Waugh theorem (Appendix 6.2) says that
Use the result in (c) to prove the Frisch-Waugh theorem.
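The Frisch-Waugh theorem can also be checked numerically: the coefficients on X from the long regression of Y on [X W] equal the coefficients from regressing M_W Y on M_W X. The data-generating process and dimensions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k1, k2 = 500, 2, 3

X = rng.normal(size=(n, k1))
W = np.column_stack([np.ones(n), rng.normal(size=(n, k2 - 1))])
Y = X @ np.array([1.0, -2.0]) + W @ np.array([0.5, 1.0, -1.0]) + rng.normal(size=n)

# Long regression of Y on [X W]: keep the coefficients on X.
XW = np.column_stack([X, W])
beta_long = np.linalg.lstsq(XW, Y, rcond=None)[0][:k1]

# Frisch-Waugh: partial W out of both Y and X, then regress the residuals.
Mw = np.eye(n) - W @ np.linalg.solve(W.T @ W, W.T)
beta_fw = np.linalg.lstsq(Mw @ X, Mw @ Y, rcond=None)[0]

print(np.allclose(beta_long, beta_fw))   # True
```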
9
Consider the regression model from Chapter 4, Y_i = β₀ + β₁X_i + u_i, and assume that the assumptions in Key Concept 4.3 hold.
a. Write the model in the matrix form given in Equations (18.2) and (18.4).
b. Show that Assumptions #1 through #4 in Key Concept 18.1 are satisfied.
c. Use the general formula for β̂ in Equation (18.11) to derive the expressions for β̂₀ and β̂₁ given in Key Concept 4.2.
d. Show that the (1,1) element of the covariance matrix Σ_β̂ in Equation (18.13) is equal to the corresponding variance expression given in Key Concept 4.4.
10
Can you compute the BLUE estimator of β if Equation (18.41) holds and you do not know Ω? What if you know Ω?
11
Let P_X and M_X be as defined in Equations (18.24) and (18.25).
a. Prove that P_X M_X = 0_{n×n} and that P_X and M_X are idempotent.
b. Derive Equations (18.27) and (18.28).
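The algebraic facts in part (a) are easy to verify numerically for any full-column-rank X; the sketch below uses an arbitrary simulated matrix (dimensions and names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
X = rng.normal(size=(n, k))

P = X @ np.linalg.solve(X.T @ X, X.T)         # P_X = X (X'X)^{-1} X'
M = np.eye(n) - P                             # M_X = I_n - P_X

print(np.allclose(P @ M, np.zeros((n, n))))   # P_X M_X = 0
print(np.allclose(P @ P, P))                  # P_X idempotent
print(np.allclose(M @ M, M))                  # M_X idempotent
```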
12
Construct an example of a regression model that satisfies the assumption
but for which 
13
Consider the regression model in matrix form, Y = Xβ + Wγ + U, where X is an n × k₁ matrix of regressors and W is an n × k₂ matrix of regressors. Then, as shown in Exercise 18.17, the OLS estimator β̂ can be expressed as β̂ = (X′M_W X)⁻¹X′M_W Y.
Now let β̂^BV be the "binary variable" fixed effects estimator computed by estimating Equation (10.11) by OLS, and let β̂^DM be the "de-meaning" fixed effects estimator computed by estimating Equation (10.14) by OLS, in which the entity-specific sample means have been subtracted from X and Y. Use the expression for β̂ given above to prove that β̂^BV = β̂^DM. [Hint: Write Equation (10.11) using a full set of fixed effects, D1_i, D2_i, …, Dn_i, and no constant term. Include all of the fixed effects in W. Write out the matrix M_W X.]
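The equivalence asserted in the exercise can also be seen in a quick simulation: the slope from OLS with a full set of entity dummies matches the slope from OLS on entity-demeaned data. The panel dimensions, coefficient values, and variable names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, T, beta = 50, 4, 2.0

alpha = rng.normal(size=(n_entities, 1))
X = rng.normal(size=(n_entities, T)) + alpha
Y = beta * X + alpha + rng.normal(size=(n_entities, T))

x, y = X.ravel(), Y.ravel()                       # stack to n_entities*T observations
entity = np.repeat(np.arange(n_entities), T)

# "Binary variable" estimator: regress y on x and a full set of entity dummies.
D = (entity[:, None] == np.arange(n_entities)).astype(float)
XW = np.column_stack([x, D])
beta_bv = np.linalg.lstsq(XW, y, rcond=None)[0][0]

# "De-meaning" estimator: subtract entity-specific means, then regress.
x_dm = x - np.repeat(X.mean(axis=1), T)
y_dm = y - np.repeat(Y.mean(axis=1), T)
beta_dm = np.sum(x_dm * y_dm) / np.sum(x_dm * x_dm)

print(np.isclose(beta_bv, beta_dm))   # True
```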
14
Consider the regression model Y_i = βX_i + γW_i + u_i, where for simplicity the intercept is omitted and all variables are assumed to have a mean of zero. Suppose that X_i is distributed independently of (W_i, u_i), but W_i and u_i might be correlated, and let β̂ and γ̂ be the OLS estimators for this model. Show that:
a. Whether or not W_i and u_i are correlated, β̂ is consistent for β.
b. If W_i and u_i are correlated, then γ̂ is inconsistent.
c. Let β̂ʳ be the OLS estimator from the regression of Y on X alone (the restricted regression that excludes W). Provide conditions under which β̂ʳ has a smaller asymptotic variance than β̂, allowing for the possibility that W_i and u_i are correlated.
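A simulation sketch of parts (a) and (b): with X independent of (W, u) but W correlated with u, the coefficient on X stays close to β while the coefficient on W does not converge to γ. The data-generating process and coefficient values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, gamma = 200_000, 1.0, 0.5

x = rng.normal(size=n)                    # X independent of (W, u)
w = rng.normal(size=n)
u = 0.8 * w + rng.normal(size=n)          # W and u correlated
y = beta * x + gamma * w + u

# OLS of Y on (X, W) with no intercept, matching the model in the problem.
Z = np.column_stack([x, w])
b = np.linalg.lstsq(Z, y, rcond=None)[0]
print("beta_hat  (consistent):  ", b[0])  # close to 1.0
print("gamma_hat (inconsistent):", b[1])  # close to gamma + 0.8 = 1.3, not 0.5
```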
15
Consider the regression model
where
and u i =
Suppose that
are i.i.d. with mean 0 and variance 1 and are distributed independently of X_j for all i and j.
a. Derive an expression for
b. Explain how to estimate the model by GLS without explicitly inverting the matrix Ω. (Hint: Transform the model so that the regression errors are ….)
16
This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y = Xβ + Wγ + u, where X and W are, respectively, n × k₁ and n × k₂ matrices of regressors. Let X_i and W_i denote the i-th rows of X and W [as in Equation (18.3)]. Assume that (i) E(u_i | X_i, W_i) = W_i′δ, where δ is a k₂ × 1 vector of unknown parameters; (ii) (X_i, W_i, Y_i) are i.i.d.; (iii) (X_i, W_i, u_i) have nonzero finite fourth moments; and (iv) there is no perfect multicollinearity. These are Assumptions #1-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption.
a. Use the expression for β̂ given in Exercise 18.6 to write an expression for β̂ − β.
b. Show that the sample moment matrices converge in probability to their population counterparts, for example n⁻¹X′X →p E(X_iX_i′), and so forth. [The matrix A_n →p A if A_n,ij →p A_ij for all i, j, where A_n,ij and A_ij are the (i, j) elements of A_n and A.]
c. Show that assumptions (i) and (ii) imply that ….
d. Use (c) and the law of iterated expectations to show that
e. Use (a) through (d) to conclude that, under conditions (i) through
(iv), β̂ is consistent for β.
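A simulation sketch of the conclusion: when the conditional mean of the error is a linear function of W alone (the form of assumption (i) reconstructed above), OLS on (X, W) still recovers β even though the conditional mean zero assumption fails. The data-generating process, coefficient values, and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, gamma, delta = 200_000, 1.0, 0.5, 0.8

w = rng.normal(size=n)
x = 0.6 * w + rng.normal(size=n)          # X correlated with W
u = delta * w + rng.normal(size=n)        # E(u | X, W) = delta * W, not zero
y = beta * x + gamma * w + u

Z = np.column_stack([np.ones(n), x, w])
b = np.linalg.lstsq(Z, y, rcond=None)[0]
print("coefficient on X:", b[1])          # close to beta = 1.0
print("coefficient on W:", b[2])          # close to gamma + delta = 1.3, not gamma
```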
![This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i) , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for given in Exercise 18.6 to write - ß = . b. Show that where = , and so forth. [The matrix if : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that . d. Use (c) and the law of iterated expectations to show that e. Use (a) through (d) to conclude that, under conditions (i) through (iv)](https://d2lvgg3v3hfg70.cloudfront.net/SM2686/11eb9b5b_3f6b_ca1c_bf3e_053c6dc9ee6d_SM2686_00.jpg)
a. Use the expression for
![This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i) , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for given in Exercise 18.6 to write - ß = . b. Show that where = , and so forth. [The matrix if : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that . d. Use (c) and the law of iterated expectations to show that e. Use (a) through (d) to conclude that, under conditions (i) through (iv)](https://d2lvgg3v3hfg70.cloudfront.net/SM2686/11eb9b5b_3f6b_ca1d_bf3e_b982e9eb6f58_SM2686_11.jpg)
![This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i) , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for given in Exercise 18.6 to write - ß = . b. Show that where = , and so forth. [The matrix if : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that . d. Use (c) and the law of iterated expectations to show that e. Use (a) through (d) to conclude that, under conditions (i) through (iv)](https://d2lvgg3v3hfg70.cloudfront.net/SM2686/11eb9b5b_3f6b_ca1e_bf3e_8dedd101b6d4_SM2686_11.jpg)
![This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i) , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for given in Exercise 18.6 to write - ß = . b. Show that where = , and so forth. [The matrix if : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that . d. Use (c) and the law of iterated expectations to show that e. Use (a) through (d) to conclude that, under conditions (i) through (iv)](https://d2lvgg3v3hfg70.cloudfront.net/SM2686/11eb9b5b_3f6b_f12f_bf3e_050c164ac814_SM2686_11.jpg)
b. Show that
![This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i) , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for given in Exercise 18.6 to write - ß = . b. Show that where = , and so forth. [The matrix if : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that . d. Use (c) and the law of iterated expectations to show that e. Use (a) through (d) to conclude that, under conditions (i) through (iv)](https://d2lvgg3v3hfg70.cloudfront.net/SM2686/11eb9b5b_3f6b_f130_bf3e_ef447f25e9df_SM2686_11.jpg)
![This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i) , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for given in Exercise 18.6 to write - ß = . b. Show that where = , and so forth. [The matrix if : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that . d. Use (c) and the law of iterated expectations to show that e. Use (a) through (d) to conclude that, under conditions (i) through (iv)](https://d2lvgg3v3hfg70.cloudfront.net/SM2686/11eb9b5b_3f6b_f131_bf3e_39a8cb741926_SM2686_11.jpg)
This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y = Xβ + Wγ + u, where X and W are, respectively, n × k₁ and n × k₂ matrices of regressors. Let X_i and W_i denote the i-th rows of X and W [as in Equation (18.3)]. Assume that (i) E(u_i | X_i, W_i) = W_i'δ, where δ is a k₂ × 1 vector of unknown parameters; (ii) (X_i, W_i, Y_i) are i.i.d.; (iii) (X_i, W_i, u_i) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #1-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption.
a. Use the expression for β̂ given in Exercise 18.6 to write β̂ - β = (n⁻¹X'M_W X)⁻¹(n⁻¹X'M_W u), where M_W = I - W(W'W)⁻¹W'.
b. Show that n⁻¹X'M_W X →p Σ_XX - Σ_XW Σ_WW⁻¹ Σ_WX, where Σ_XX = E(X_i X_i'), Σ_XW = E(X_i W_i'), and so forth. [The matrix A_n →p A (converges in probability to A) if A_n,ij →p A_ij for all i, j, where A_n,ij and A_ij are the (i, j) elements of A_n and A.]
c. Show that assumptions (i) and (ii) imply that E(u | X, W) = Wδ.
d. Use (c) and the law of iterated expectations to show that n⁻¹X'M_W u →p 0.
e. Use (a) through (d) to conclude that, under conditions (i) through (iv), β̂ →p β.
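A small simulation can make part (e) concrete. The sketch below is my own illustration with made-up parameter values (it is not part of the exercise): the errors satisfy E(u | X, W) = Wδ, so they are correlated with W but mean independent of X given W, and the OLS coefficient on X still settles at β as n grows, while the coefficient on W would instead estimate γ + δ.

```python
# Monte Carlo sketch: OLS on (X, W) keeps estimating beta consistently even
# though E(u | X, W) = W*delta != 0.  beta, gamma, delta are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
beta, gamma, delta = 2.0, 1.0, 0.5

for n in (100, 10_000, 1_000_000):
    W = rng.normal(size=n)
    X = 0.8 * W + rng.normal(size=n)          # X is correlated with W
    u = delta * W + rng.normal(size=n)        # conditional mean of u is W*delta
    Y = beta * X + gamma * W + u
    Z = np.column_stack([np.ones(n), X, W])   # regressors: constant, X, W
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    print(f"n={n:>9,}  beta_hat={coef[1]: .4f}  (true beta = {beta})")
```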
17
Let C be a symmetric idempotent matrix.
a. Show that the eigenvalues of C are either 0 or 1. (Hint: Note that Cq = λq implies 0 = Cq - λq = CCq - λq = λCq - λq = λ²q - λq, and solve for λ.)
b. Show that trace( C ) = rank( C ).
c. Let d be an n × 1 vector. Show that d'Cd ≥ 0.
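A quick numeric check of the three claims above, using one convenient symmetric idempotent matrix (the OLS annihilator M = I - X(X'X)⁻¹X'; the X used to build it is arbitrary):

```python
# Verify numerically: eigenvalues in {0, 1}, trace = rank, and d'Cd >= 0,
# for the symmetric idempotent annihilator matrix M = I - X (X'X)^{-1} X'.
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
X = rng.normal(size=(n, k))
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T

print(np.round(np.linalg.eigvalsh(M), 10))          # n - k ones and k zeros
print(np.trace(M), np.linalg.matrix_rank(M))        # both equal n - k = 5
d = rng.normal(size=n)
print(d @ M @ d >= 0)                               # True: M = M'M, so d'Md = ||Md||^2
```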
18
Suppose that C is an n × n symmetric idempotent matrix with rank r, and let V ~ N(0_n, I_n).
a. Show that C = AA', where A is n × r with A'A = I_r. (Hint: C is positive semidefinite and can be written as QΛQ' as explained in Appendix 18.1.)
b. Show that A'V ~ N(0_r, I_r).
c. Show that V'CV ~ χ²_r.
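Part (c) can be sanity-checked by simulation. The sketch below is an illustration rather than a proof: it builds an arbitrary symmetric idempotent C of rank r, draws V ~ N(0, I_n) many times, and compares the sample mean and variance of V'CV with r and 2r, the moments of the chi-squared distribution with r degrees of freedom.

```python
# Simulate V'CV for V ~ N(0, I_n) and a symmetric idempotent C with rank r;
# a chi-squared(r) variable has mean r and variance 2r.
import numpy as np

rng = np.random.default_rng(2)
n, k = 10, 4
X = rng.normal(size=(n, k))
C = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # rank r = n - k = 6
r = n - k

V = rng.normal(size=(50_000, n))
quad = np.einsum("ij,jk,ik->i", V, C, V)           # V'CV for each draw
print(quad.mean(), quad.var())                     # approximately 6 and 12
```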
19
Consider the population regression of test scores against income and the square of income in Equation (8.1).
a. Write the regression in Equation (8.1) in the matrix form of Equation (18.5). Define Y, X, U, and β.
b. Explain how to test the null hypothesis that the relationship between test scores and income is linear against the alternative that it is quadratic. Write the null hypothesis in the form of Equation (18.20). What are R, r, and q?
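As a sketch of how the pieces of part (b) line up (hypothetical income values, and my own statement of the restriction rather than a quote from the text): the null of linearity says the coefficient on Income² is zero, which is a single linear restriction on β.

```python
# Build the design matrix for TestScore on (1, Income, Income^2) for a tiny
# made-up sample, and the (R, r, q) that express "coefficient on Income^2 = 0".
import numpy as np

income = np.array([10.0, 25.0, 40.0])                 # hypothetical incomes
X = np.column_stack([np.ones_like(income), income, income**2])
R = np.array([[0.0, 0.0, 1.0]])                       # selects the Income^2 coefficient
r = np.array([0.0])
q = R.shape[0]                                        # one restriction
print(X, R, r, q, sep="\n")
```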

20
a. Show that the estimator in Equation (18.66) is the efficient GMM estimator; that is, show that Equation (18.66) is the solution to Equation (18.65).
b. Show that
c. Show that
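As a general illustration of what efficiency means here, the sketch below is my own generic two-step linear GMM example; its notation and setup are not taken from Equations (18.65) and (18.66). With moments g(b) = Z'(Y - Xb)/n and weight matrix A, the minimizer of g(b)'A g(b) is b̂(A) = (X'Z A Z'X)⁻¹ X'Z A Z'Y, and the efficient choice replaces A with the inverse of an estimate of Var(Z_i u_i).

```python
# Generic two-step linear GMM sketch on made-up data (one regressor, two
# instruments, heteroskedastic errors); not the textbook's notation.
import numpy as np

def linear_gmm(Y, X, Z, A):
    XZ, ZY = X.T @ Z, Z.T @ Y
    return np.linalg.solve(XZ @ A @ XZ.T, XZ @ A @ ZY)

rng = np.random.default_rng(3)
n = 5_000
Z = rng.normal(size=(n, 2))                               # instruments
X = (Z @ np.array([1.0, 0.5]) + rng.normal(size=n)).reshape(-1, 1)
u = rng.normal(size=n) * (1 + 0.5 * np.abs(Z[:, 0]))      # heteroskedastic u
Y = 2.0 * X[:, 0] + u                                     # hypothetical beta = 2

b1 = linear_gmm(Y, X, Z, np.eye(2))                       # step 1: identity weights
S = (Z * (Y - X @ b1)[:, None] ** 2).T @ Z / n            # estimate of Var(Z_i u_i)
b2 = linear_gmm(Y, X, Z, np.linalg.inv(S))                # step 2: efficient weights
print(b1, b2)
```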
21
A researcher studying the relationship between earnings and gender for a group of workers specifies the regression model Y_i = β0 + X₁i β1 + X₂i β2 + u_i, where X₁i is a binary variable that equals 1 if the i-th person is female and X₂i is a binary variable that equals 1 if the i-th person is male. Write the model in the matrix form of Equation (18.2) for a hypothetical set of n = 5 observations. Show that the columns of X are linearly dependent, so that X does not have full rank. Explain how you would respecify the model to eliminate the perfect multicollinearity.
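A minimal numeric illustration (the five-observation gender pattern below is made up): with an intercept plus both dummies, the female and male columns add up to the constant column, so X cannot have full column rank; dropping either dummy (or the intercept) removes the dependence.

```python
# The dummy-variable trap for a hypothetical n = 5 sample: rank(X) < number of
# columns when the model includes a constant, a female dummy, and a male dummy.
import numpy as np

female = np.array([1, 0, 1, 0, 0])
male = 1 - female
X = np.column_stack([np.ones(5), female, male])
print(np.linalg.matrix_rank(X))             # 2, not 3: perfect multicollinearity

X_respecified = np.column_stack([np.ones(5), female])   # drop one dummy
print(np.linalg.matrix_rank(X_respecified))              # 2 = full column rank
```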

22
Consider the problem of minimizing the sum of squared residuals subject to the constraint that Rb = r, where R is q × (k + 1) with rank q. Let b̃ denote the value of b that solves the constrained minimization problem.
a. Show that the Lagrangian for the minimization problem is L(b, λ) = (Y - Xb)'(Y - Xb) + λ'(Rb - r), where λ is a q × 1 vector of Lagrange multipliers.
b. Show that
c. Show that
d. Show that F in Equation (18.36) is equivalent to the homoskedasticity-only F-statistic in Equation (7.13).
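As a complement to parts (b) through (d), the sketch below works the constrained problem numerically on made-up data. It uses the standard closed form for the restricted estimator, b̃ = b̂ - (X'X)⁻¹R'[R(X'X)⁻¹R']⁻¹(Rb̂ - r), stated here from general results rather than copied from the exercise, and then checks that the F-statistic computed from the change in the sum of squared residuals matches the Wald form, which is the equivalence part (d) is after.

```python
# Restricted least squares via the closed-form solution to min (Y-Xb)'(Y-Xb)
# subject to Rb = r, and two equivalent homoskedasticity-only F statistics.
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
Y = X @ np.array([1.0, 0.3, -0.2]) + rng.normal(size=n)    # made-up coefficients

R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])            # q = 2 restrictions: beta1 = beta2 = 0
r = np.zeros(2)
q = R.shape[0]

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ Y                                   # unrestricted OLS
middle = np.linalg.inv(R @ XtX_inv @ R.T)
b_tilde = b_hat - XtX_inv @ R.T @ middle @ (R @ b_hat - r)  # restricted OLS

ssr_u = np.sum((Y - X @ b_hat) ** 2)
ssr_r = np.sum((Y - X @ b_tilde) ** 2)
s2 = ssr_u / (n - k - 1)
F_ssr = (ssr_r - ssr_u) / q / s2
F_wald = (R @ b_hat - r) @ middle @ (R @ b_hat - r) / (q * s2)
print(F_ssr, F_wald)                       # the two forms agree
```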