
Introductory Econometrics, 4th Edition, by Jeffrey Wooldridge
ISBN: 978-0324660609
Exercise 4
Assume that the model y = Xβ + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define θ = Gβ, so that θ is also a (k + 1) × 1 vector. Let β̂ be the (k + 1) × 1 vector of OLS estimators and define θ̂ = Gβ̂ as the OLS estimator of θ.

(i) Show that E(θ̂ | X) = θ.

(ii) Find Var(θ̂ | X) in terms of σ², X, and G.
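A short sketch of parts (i) and (ii), assuming only the standard Gauss-Markov results E(β̂ | X) = β and Var(β̂ | X) = σ²(X′X)⁻¹:

```latex
% Part (i): G is nonrandom, so it passes through the conditional expectation.
\begin{align*}
E(\hat{\theta} \mid X) = E(G\hat{\beta} \mid X) = G\,E(\hat{\beta} \mid X) = G\beta = \theta.
\end{align*}
% Part (ii): for a nonrandom matrix G, Var(G\hat{\beta} \mid X) = G\,Var(\hat{\beta} \mid X)\,G'.
\begin{align*}
\mathrm{Var}(\hat{\theta} \mid X) = G\,\mathrm{Var}(\hat{\beta} \mid X)\,G' = \sigma^2\, G (X'X)^{-1} G'.
\end{align*}
```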
(iii) Use the following problem:

Problem: Let β̂ be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1, …, n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let β̃ denote the OLS estimate from a regression of y on Z.
(i) Show that β̃ = A⁻¹β̂.

(ii) Let ŷt be the fitted values from the original regression and let ỹt be the fitted values from regressing y on Z. Show that ŷt = ỹt for all t = 1, 2, …, n. How do the residuals from the two regressions compare?
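A sketch of parts (i) and (ii) of this problem, using only Z = XA and the invertibility of A:

```latex
% Part (i): substitute Z = XA into the OLS formula.
\begin{align*}
\tilde{\beta} &= (Z'Z)^{-1}Z'y = (A'X'XA)^{-1}A'X'y \\
              &= A^{-1}(X'X)^{-1}(A')^{-1}A'X'y = A^{-1}(X'X)^{-1}X'y = A^{-1}\hat{\beta}.
\end{align*}
% Part (ii): the fitted values coincide, so the residuals are identical as well.
\begin{align*}
\tilde{y}_t = z_t\tilde{\beta} = x_t A A^{-1}\hat{\beta} = x_t\hat{\beta} = \hat{y}_t.
\end{align*}
```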
(iii) Show that the estimated variance matrix for β̃ is σ̂²A⁻¹(X′X)⁻¹(A′)⁻¹, where σ̂² is the usual variance estimate from regressing y on X.
(iv) Let the β̂j be the OLS estimates from regressing yt on 1, xt1, …, xtk, and let the β̃j be the OLS estimates from the regression of yt on 1, a1xt1, …, akxtk, where aj ≠ 0, j = 1, …, k. Use the results from part (i) to find the relationship between the β̃j and the β̂j.
(v) Assuming the setup of part (iv), use part (iii) to show that se(β̃j) = se(β̂j)/|aj|.

(vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for β̃j and β̂j are identical.
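Parts (iv)–(vi) can be checked numerically. The sketch below, with a made-up data-generating process chosen purely for illustration, verifies that rescaling regressor j by aj divides both the slope estimate and its standard error by aj (in absolute value), leaving the t statistics unchanged:

```python
import numpy as np

# Check of parts (iv)-(vi): rescaling regressor j by a_j divides the slope
# estimate by a_j and its standard error by |a_j|, so |t| is unchanged.
# The data-generating process below is made up purely for illustration.
rng = np.random.default_rng(0)
n, k = 200, 2
x = rng.normal(size=(n, k))
y = 1.0 + x @ np.array([0.5, -2.0]) + rng.normal(size=n)

def ols(X, y):
    """Return OLS coefficients and standard errors for design matrix X."""
    b = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ b
    sigma2 = resid @ resid / (X.shape[0] - X.shape[1])  # usual variance estimate
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return b, se

X = np.column_stack([np.ones(n), x])        # original regressors
a = np.array([3.0, -0.25])                  # nonzero scale factors a_j
Xs = np.column_stack([np.ones(n), x * a])   # rescaled regressors a_j * x_tj

b_hat, se_hat = ols(X, y)
b_til, se_til = ols(Xs, y)

print(np.allclose(b_til[1:], b_hat[1:] / a))            # beta~_j = beta^_j / a_j
print(np.allclose(se_til[1:], se_hat[1:] / np.abs(a)))  # se~_j = se^_j / |a_j|
print(np.allclose(np.abs(b_til / se_til), np.abs(b_hat / se_hat)))  # same |t|
```

The key point the code illustrates is that σ̂² is unchanged by the rescaling (the residuals are identical), so only the diagonal of (X′X)⁻¹ rescales.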
to verify that θ̂ and the appropriate estimate of Var(θ̂ | X) are obtained from the regression of y on XG⁻¹.
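One way to see this verification: apply the referenced problem with A = G⁻¹, so the regressor matrix is W = XG⁻¹. A sketch:

```latex
% Regressing y on W = XG^{-1} gives
\begin{align*}
(W'W)^{-1}W'y &= \big[(G^{-1})'X'X G^{-1}\big]^{-1}(G^{-1})'X'y \\
              &= G(X'X)^{-1}G'\,(G')^{-1}X'y = G\hat{\beta} = \hat{\theta},
\end{align*}
% and the estimated variance matrix from this regression is
\begin{align*}
\hat{\sigma}^2 (W'W)^{-1} = \hat{\sigma}^2\, G (X'X)^{-1} G',
\end{align*}
% which is the appropriate estimate of Var(\hat{\theta} \mid X) from part (ii).
```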
(iv) Now, let c be a (k + 1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck ≠ 0. Define θ = c′β, so that θ is a scalar. Define δj = βj, j = 0, 1, …, k − 1, and δk = θ. Show how to define a (k + 1) × (k + 1) nonsingular matrix G so that δ = Gβ.
(v) Show that for the choice of G in part (iv),

    G⁻¹ = [ Ik       0    ]
          [ −c̄′/ck  1/ck ],

where Ik is the k × k identity matrix and c̄ = (c0, c1, …, ck−1)′, so the last row of G⁻¹ is (−c0/ck, −c1/ck, …, −ck−1/ck, 1/ck). Use this expression for G⁻¹ and part (iii) to conclude that θ̂ and its standard error are obtained as the coefficient on xtk/ck in the regression of yt on [1 − (c0/ck)xtk], [xt1 − (c1/ck)xtk], …, [xt,k−1 − (ck−1/ck)xtk], xtk/ck, t = 1, …, n. This regression is exactly the one obtained by writing βk in terms of θ and β0, β1, …, βk−1, plugging the result into the original model, and rearranging. Therefore, we can
formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.
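The conclusion of part (v) can also be checked numerically. The sketch below, with data and a vector c made up purely for illustration (k = 2), confirms that the coefficient on xtk/ck in the transformed regression equals c′β̂, with a matching standard error:

```python
import numpy as np

# Check of part (v): theta = c'beta and its standard error are read off as the
# coefficient on x_tk / c_k in the transformed regression y on XG^{-1}.
# Data and the vector c are made up purely for illustration.
rng = np.random.default_rng(1)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.7 + 1.5 * x1 - 0.8 * x2 + rng.normal(size=n)

def ols(X, y):
    """Return OLS coefficients and the estimated variance matrix."""
    b = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ b
    sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
    V = sigma2 * np.linalg.inv(X.T @ X)
    return b, V

X = np.column_stack([np.ones(n), x1, x2])   # original regression of y on X
b_hat, V_hat = ols(X, y)

c = np.array([2.0, 1.0, -3.0])              # c_k = c[2] is nonzero
theta_hat = c @ b_hat                       # direct estimate of c'beta
se_theta = np.sqrt(c @ V_hat @ c)           # its standard error

# Transformed regressors W = XG^{-1} for the G of part (iv); note the first
# column replaces the constant, so no separate intercept is included.
W = np.column_stack([1 - (c[0] / c[2]) * x2,
                     x1 - (c[1] / c[2]) * x2,
                     x2 / c[2]])
d_hat, V_d = ols(W, y)

print(np.isclose(d_hat[-1], theta_hat))            # coefficient on x2/c2
print(np.isclose(np.sqrt(V_d[-1, -1]), se_theta))  # matching standard error
```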
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_94f7_8edd_cd8848dda862_SM2712_11.jpg)
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_bc08_8edd_41c7b3f09fb9_SM2712_11.jpg)
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_bc09_8edd_0b317f11d65e_SM2712_11.jpg)
(iv) L et the
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_bc0a_8edd_71f44da53942_SM2712_11.jpg)
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_bc0b_8edd_b5f6e2162814_SM2712_11.jpg)
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_bc0c_8edd_8f4b48f18688_SM2712_11.jpg)
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_bc0d_8edd_0514c2a29ae2_SM2712_11.jpg)
(v) Assuming the setup of part (iv), use part (iii) to show that se(
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_e31e_8edd_51cac313be75_SM2712_11.jpg)
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_e31f_8edd_3f0bbf55f964_SM2712_11.jpg)
(vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_e320_8edd_8132284ad47b_SM2712_11.jpg)
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv), Use this expression for G1 and part (iii) to conclude that and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.](https://d2lvgg3v3hfg70.cloudfront.net/SM2712/11eb9ee2_f055_e321_8edd_21264fc6c723_SM2712_11.jpg)
to verify that
![Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let be the (k + 1) × 1 vector of OLS estimators and define =G as the OLS estimator of . (i) Show that E( X) = . (ii) Find Var( X) in terms of 2, X, and G. (iii) Use Problem Problem Let be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let denote the OLS estimate from a regression of y on Z. (i) Show that =A-1 . (ii) L et be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that = , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare (iii) Show that the estimated variance matrix for is A1(X X)1A1 , where is the usual variance estimate from regressing y on X. (iv) L et the be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the and the (v) Assuming the setup of part (iv), use part (iii) to show that se( ) = se( )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for and are identical. to verify that and the appropriate estimate of Var( X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(iv) Now, let c be a (k + 1) × 1 vector with at least one nonzero entry. For concreteness, assume that c_k ≠ 0. Define θ = c′β, so that θ is a scalar. Define δ_j = β_j, j = 0, 1, ..., k − 1, and δ_k = θ. Show how to define a (k + 1) × (k + 1) nonsingular matrix G so that δ = Gβ.
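As a numerical sanity check (not part of the formal solution), the matrix G asked for in part (iv) can be built by stacking the first k rows of the identity matrix on top of c′; the values of β and c below are arbitrary illustrative choices:

```python
import numpy as np

k = 3  # number of slope coefficients; beta is (k+1) x 1
beta = np.array([1.0, 0.5, -2.0, 3.0])  # hypothetical beta_0, ..., beta_k
c = np.array([0.0, 1.0, 1.0, 2.0])      # hypothetical c with c_k != 0

# G: first k rows are rows of the identity matrix, last row is c'
G = np.eye(k + 1)
G[k, :] = c

delta = G @ beta
# delta_j = beta_j for j < k, and delta_k = c'beta = theta
assert np.allclose(delta[:k], beta[:k])
assert np.isclose(delta[k], c @ beta)

# G is nonsingular precisely because c_k != 0 (det G = c_k)
assert abs(np.linalg.det(G)) > 1e-12
```

Expanding the determinant along the last column shows det G = c_k, which is why the assumption c_k ≠ 0 guarantees nonsingularity.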
(v) Show that for the choice of G in part (iv), G^(-1) is the identity matrix in its first k rows, with last row (−c_0/c_k, −c_1/c_k, ..., −c_{k−1}/c_k, 1/c_k). Use this expression for G^(-1) and part (iii) to conclude that θ̂ and its standard error are obtained as the coefficient on x_tk/c_k in the regression of y_t on [1 − (c_0/c_k)x_tk], [x_t1 − (c_1/c_k)x_tk], ..., [x_t,k−1 − (c_{k−1}/c_k)x_tk], x_tk/c_k, t = 1, ..., n. This regression is exactly the one obtained by writing β_k in terms of θ and β_0, β_1, ..., β_{k−1}, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick used throughout the text for obtaining the standard error of a linear combination of parameters.
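The standard-error trick in part (v) can be illustrated numerically. This sketch uses simulated data and a hypothetical c, and checks that regressing y on the transformed columns XG^(-1) reproduces θ̂ = c′β̂ along with the standard error implied by σ̂² c′(X′X)^(-1)c:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # intercept + k regressors
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(size=n)
c = np.array([0.0, 1.0, 2.0])  # hypothetical linear combination; c_k != 0

def ols(X, y):
    """OLS coefficients, standard errors, sigma^2 estimate, and (X'X)^{-1}."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(XtX_inv))
    return b, se, sigma2, XtX_inv

b_hat, _, sigma2, XtX_inv = ols(X, y)

# Direct route: theta_hat = c'b_hat with se = sqrt(sigma2 * c'(X'X)^{-1} c)
theta_hat = c @ b_hat
se_theta_direct = np.sqrt(sigma2 * (c @ XtX_inv @ c))

# Transformed regression: G is the identity except its last row is c', so
# X G^{-1} has last column x_k/c_k and columns x_j - (c_j/c_k) x_k before it.
G = np.eye(k + 1)
G[k, :] = c
Z = X @ np.linalg.inv(G)
d_hat, se_d, _, _ = ols(Z, y)

# Coefficient on x_k/c_k and its standard error match the direct computation
assert np.isclose(d_hat[k], theta_hat)
assert np.isclose(se_d[k], se_theta_direct)
```

The two regressions share the same fitted values and residuals (part (ii) of the cited problem), so σ̂² is identical in both, and the last diagonal element of G(X′X)^(-1)G′ is exactly c′(X′X)^(-1)c.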
Explanation
The model given is y = Xβ + u, which satisfies the Gauss-Mar...