Introductory Econometrics, 4th Edition, by Jeffrey Wooldridge
ISBN: 978-0324660609

Exercise 4
Assume that the model y = Xβ + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define δ = Gβ, so that δ is also a (k + 1) × 1 vector. Let β̂ be the (k + 1) × 1 vector of OLS estimators and define δ̂ = Gβ̂ as the OLS estimator of δ.

(i) Show that E(δ̂ | X) = δ.

(ii) Find Var(δ̂ | X) in terms of σ², X, and G.

(iii) Use the problem restated at the end of this exercise to verify that δ̂ and the appropriate estimate of Var(δ̂ | X) are obtained from the regression of y on XG⁻¹. (A numerical sketch of this claim follows part (v).)
(iv) Now, let c be a (k + 1) × 1 vector with at least one nonzero entry. For concreteness, assume that c_k ≠ 0. Define θ = c′β, so that θ is a scalar. Define δ_j = β_j, j = 0, 1, ..., k − 1, and δ_k = θ. Show how to define a (k + 1) × (k + 1) nonsingular matrix G so that δ = Gβ.

(v) Show that for the choice of G in part (iv),

G^{-1} = \begin{bmatrix}
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
-c_0/c_k & -c_1/c_k & \cdots & -c_{k-1}/c_k & 1/c_k
\end{bmatrix}

Use this expression for G⁻¹ and part (iii) to conclude that θ̂ and its standard error are obtained as the coefficient on x_tk/c_k in the regression of y_t on [1 − (c_0/c_k)x_tk], [x_t1 − (c_1/c_k)x_tk], ..., [x_t,k−1 − (c_{k−1}/c_k)x_tk], x_tk/c_k, t = 1, ..., n. This regression is exactly the one obtained by writing β_k in terms of θ and β_0, β_1, ..., β_{k−1}, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.
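The claims in parts (i)-(iii) are easy to check numerically. The following is a minimal sketch, not part of the textbook problem: the simulated data and the randomly drawn G are illustrative assumptions. It verifies that regressing y on XG⁻¹ reproduces δ̂ = Gβ̂ together with the variance estimate σ̂² G(X′X)⁻¹G′ implied by part (ii).

```python
# Minimal numerical sketch of part (iii), under assumed simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3                                  # n observations, k slopes plus an intercept

X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # n x (k+1) design matrix
y = X @ np.array([1.0, 0.5, -2.0, 0.3]) + rng.normal(size=n)

G = rng.normal(size=(k + 1, k + 1))            # arbitrary nonrandom G for the check;
                                               # a draw like this is nonsingular w.p. 1

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS of y on X
delta_hat = G @ beta_hat                       # transformed estimator delta_hat = G beta_hat

# OLS of y on X G^{-1} should deliver delta_hat directly.
XGinv = X @ np.linalg.inv(G)
delta_hat_direct = np.linalg.solve(XGinv.T @ XGinv, XGinv.T @ y)
print(np.allclose(delta_hat, delta_hat_direct))          # True

# The fitted values (hence residuals and sigma2_hat) are identical in the
# two regressions, so one sigma2_hat serves both variance estimates.
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k - 1)
V_transformed = sigma2_hat * G @ np.linalg.inv(X.T @ X) @ G.T      # from part (ii)
V_direct = sigma2_hat * np.linalg.inv(XGinv.T @ XGinv)             # from y on XG^{-1}
print(np.allclose(V_transformed, V_direct))              # True
```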
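Part (v) formally justifies the standard-error trick for a linear combination θ = c′β. Below is a sketch of that trick under the same kind of assumptions (simulated data and an illustrative c, neither from the book): the coefficient on x_tk/c_k in the reparameterized regression equals θ̂ = c′β̂, and its standard error matches the direct formula sqrt(σ̂² c′(X′X)⁻¹c).

```python
# Sketch of the part (v) reparameterization trick, under assumed simulated data.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.5, -2.0, 0.3]) + rng.normal(size=n)

c = np.array([0.0, 1.0, 1.0, 2.0])   # illustrative: theta = beta_1 + beta_2 + 2 beta_3, c_k != 0

# Reparameterized regressors W = X G^{-1}: column j becomes x_tj - (c_j/c_k) x_tk
# for j = 0, ..., k-1 (column 0 is the intercept), and the last column is x_tk / c_k.
W = X.copy()
W[:, :k] -= np.outer(X[:, k], c[:k] / c[k])
W[:, k] = X[:, k] / c[k]

gamma_hat = np.linalg.solve(W.T @ W, W.T @ y)
resid = y - W @ gamma_hat                      # same residuals as the regression on X
sigma2_hat = resid @ resid / (n - k - 1)
se_trick = np.sqrt(sigma2_hat * np.linalg.inv(W.T @ W)[k, k])

# Direct computation of theta_hat and its standard error for comparison.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
theta_hat = c @ beta_hat
se_direct = np.sqrt(sigma2_hat * c @ np.linalg.inv(X.T @ X) @ c)

print(np.allclose(gamma_hat[k], theta_hat))    # True
print(np.allclose(se_trick, se_direct))        # True
```

Because the reparameterized design matrix is exactly XG⁻¹, the fitted values and residuals coincide with those of the original regression, which is why a single σ̂² serves both calculations.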
(iii) Show that the estimated variance matrix for Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. is Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. 
How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. A1(X X)1A1 , where Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . 
(v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. is the usual variance estimate from regressing y on X.
(iv) L et the Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. 
How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. 
Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. and the Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.
(v) Assuming the setup of part (iv), use part (iii) to show that se( Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. ) = se( Assume that the model y = X + u satisfies the Gauss-Markov assumptions, let G be a (k + 1) × (k + 1) nonsingular, nonrandom matrix, and define =G , so that is also a (k + 1) × 1 vector. Let   be the (k + 1) × 1 vector of OLS estimators and define   =G   as the OLS estimator of . (i) Show that E(   X) = . (ii) Find Var(   X) in terms of 2, X, and G. (iii) Use Problem Problem Let   be the OLS estimate from the regression of y on X. Let A be a (k + 1) × (k + 1) nonsingular matrix and define zt = xtA, t = 1,….,n. Therefore, zt is 1 × (k + 1) and is a nonsingular linear combination of xt. Let Z be the n × (k + 1) matrix with rows zt. Let   denote the OLS estimate from a regression of y on Z. (i) Show that   =A-1   . (ii) L et   be the fitted values from the original regression and let. y t be the fitted values from regressing y on Z. Show that   =   , for all t =- 1, 2,…., n. 
How do the residuals from the two regressions compare  (iii) Show that the estimated variance matrix for   is   A1(X X)1A1 , where   is the usual variance estimate from regressing y on X. (iv) L et the   be the OLS estimates from regressing yt on 1, xt1,...,xtk, and let the   be the OLS estimates from the regression of yt on 1, a1xt1,…,ak xtk, where aj 0, j =1,…,k. Use the results from part (i) to find the relationship between the   and the    (v) Assuming the setup of part (iv), use part (iii) to show that se(   ) = se(   )/aj. (vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for   and   are identical. to verify that   and the appropriate estimate of Var(   X) are obtained from the regression of y on XG1. (iv) Now, let c be a (k +1) × 1 vector with at least one nonzero entry. For concreteness, assume that ck 0. Define =c , so that is a scalar. Define j= j, j =1,...,k 1 and k = 0. Show how to define a (k+ 1) × (k + 1) nonsingular matrix G so that =G . (v) Show that for the choice of G in part (iv),    Use this expression for G1 and part (iii) to conclude that   and its standard error are obtained as the coefficient on xtk /ck in the regression of yt on [1 (c0/ck)xtk], [xt1 (c1/ck)xtk],..., [xt,k1 (ck1/ck)xtk], xtk/ck, t = 1,...,n. This regression is exactly the one obtained by writing k in terms of and 0, 1,..., k1, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters. )/aj.
(vi) Assuming the setup of part (iv), show that the absolute values of the t statistics for $\tilde{\beta}_j$ and $\hat{\beta}_j$ are identical.
to verify that $\hat{\delta}$ and the appropriate estimate of $\mathrm{Var}(\hat{\delta} \mid X)$ are obtained from the regression of $y$ on $XG^{-1}$.
(iv) Now, let $c$ be a $(k+1) \times 1$ vector with at least one nonzero entry. For concreteness, assume that $c_k \neq 0$. Define $\theta = c'\beta$, so that $\theta$ is a scalar. Define $\delta_j = \beta_j$, $j = 0, 1, \dots, k-1$, and $\delta_k = \theta$. Show how to define a $(k+1) \times (k+1)$ nonsingular matrix $G$ so that $\delta = G\beta$.
(v) Show that for the choice of $G$ in part (iv),
$$G^{-1} = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ -c_0/c_k & -c_1/c_k & \cdots & -c_{k-1}/c_k & 1/c_k \end{bmatrix}.$$
Use this expression for $G^{-1}$ and part (iii) to conclude that $\hat{\theta}$ and its standard error are obtained as the coefficient on $x_{tk}/c_k$ in the regression of $y_t$ on $[1 - (c_0/c_k)x_{tk}],\ [x_{t1} - (c_1/c_k)x_{tk}],\ \dots,\ [x_{t,k-1} - (c_{k-1}/c_k)x_{tk}],\ x_{tk}/c_k$, for $t = 1, \dots, n$. This regression is exactly the one obtained by writing $\beta_k$ in terms of $\theta$ and $\beta_0, \beta_1, \dots, \beta_{k-1}$, plugging the result into the original model, and rearranging. Therefore, we can formally justify the trick we use throughout the text for obtaining the standard error of a linear combination of parameters.
Explanation

The model given is $y = X\beta + u$, which satisfies the Gauss-Markov assumptions.
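Since the posted explanation is truncated here, what follows is only a sketch of the key algebra for parts (i), (ii), and (iv), using nothing beyond the problem's own assumptions. Because $\hat{\delta} = G\hat{\beta}$ is a nonrandom linear transformation of $\hat{\beta}$, and under the Gauss-Markov assumptions $E(\hat{\beta} \mid X) = \beta$ and $\mathrm{Var}(\hat{\beta} \mid X) = \sigma^2 (X'X)^{-1}$, we have

$$E(\hat{\delta} \mid X) = G\,E(\hat{\beta} \mid X) = G\beta = \delta, \qquad \mathrm{Var}(\hat{\delta} \mid X) = G\,\mathrm{Var}(\hat{\beta} \mid X)\,G' = \sigma^2\,G(X'X)^{-1}G'.$$

For part (iv), stack the first $k$ rows of the identity matrix $I_{k+1}$ on top of the row vector $c'$:

$$G = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ c_0 & c_1 & \cdots & c_{k-1} & c_k \end{bmatrix}.$$

Then $\delta = G\beta$ holds row by row, and expanding the determinant along the last column gives $\det G = c_k \neq 0$, so $G$ is nonsingular.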
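For part (v), the substitution behind the "trick" is the identity $\beta_k = (\theta - c_0\beta_0 - c_1\beta_1 - \cdots - c_{k-1}\beta_{k-1})/c_k$, which follows from $\theta = c'\beta$. Plugging this into $y_t = \beta_0 + \beta_1 x_{t1} + \cdots + \beta_k x_{tk} + u_t$ and collecting terms gives

$$y_t = \beta_0\,[1 - (c_0/c_k)x_{tk}] + \beta_1\,[x_{t1} - (c_1/c_k)x_{tk}] + \cdots + \beta_{k-1}\,[x_{t,k-1} - (c_{k-1}/c_k)x_{tk}] + \theta\,(x_{tk}/c_k) + u_t,$$

which is exactly the regression named in the problem statement, with $\theta$ as the coefficient on $x_{tk}/c_k$.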
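As a numerical cross-check of parts (iii) and (v), the minimal sketch below (not from the textbook; the sample size, coefficients, and the vector $c$ are illustrative assumptions) simulates data and verifies that regressing $y$ on $XG^{-1}$ returns $\hat{\theta} = c'\hat{\beta}$ as the last coefficient, with standard error $\sqrt{c'\,\widehat{\mathrm{Var}}(\hat{\beta} \mid X)\,c}$:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed and data, not from the text
n, k = 200, 3

# Design matrix with intercept: columns x_0 = 1, x_1, ..., x_k
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta = np.array([1.0, 0.5, -2.0, 0.8])
y = X @ beta + rng.normal(size=n)

# Usual OLS quantities from regressing y on X
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - (k + 1))
V_hat = sigma2_hat * XtX_inv  # estimated Var(beta_hat | X)

# A linear combination theta = c'beta with c_k != 0
c = np.array([0.0, 1.0, 1.0, 2.0])

# G: first k rows of the identity, last row c' (as in part (iv))
G = np.eye(k + 1)
G[k, :] = c
G_inv = np.linalg.inv(G)

# Regress y on X G^{-1}; the coefficient vector is G beta_hat
Z = X @ G_inv
ZtZ_inv = np.linalg.inv(Z.T @ Z)
delta_hat = ZtZ_inv @ Z.T @ y

# Last coefficient equals theta_hat = c'beta_hat; the others are unchanged
assert np.allclose(delta_hat[k], c @ beta_hat)
assert np.allclose(delta_hat[:k], beta_hat[:k])

# Its standard error matches se(c'beta_hat) = sqrt(c' V_hat c)
resid_z = y - Z @ delta_hat
sigma2_z = resid_z @ resid_z / (n - (k + 1))
V_delta = sigma2_z * ZtZ_inv
assert np.allclose(np.sqrt(V_delta[k, k]), np.sqrt(c @ V_hat @ c))
print("theta_hat =", delta_hat[k], " se =", np.sqrt(V_delta[k, k]))
```

Because the two regressions have identical fitted values and residuals (part (ii) of the cited problem), the same $\hat{\sigma}^2$ appears in both variance estimates, so the equalities hold exactly up to floating-point error.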
