
Introduction to Econometrics, 3rd Edition, by James H. Stock and Mark W. Watson
ISBN: 978-9352863501, Exercise 2
Consider the regression model Y = Xβ + U. Partition X as [X₁ X₂] and β as [β₁′ β₂′]′, where X₁ has k₁ columns and X₂ has k₂ columns. Suppose that X₂′Y = 0, and let R = [I_{k₁} 0_{k₁×k₂}].

a. Show that β̂′(X′X)β̂ = (Rβ̂)′[R(X′X)⁻¹R′]⁻¹(Rβ̂).

b. Consider the regression described in Equation (12.17). Let W = [1 W₁ W₂ … W_r], where 1 is an n × 1 vector of ones, W₁ is the n × 1 vector with iᵗʰ element W₁ᵢ, and so forth. Let Û^TSLS denote the vector of two-stage least squares residuals.

i. Show that W′Û^TSLS = 0.

ii. Show that the method for computing the J-statistic described in Key Concept 12.6 (using a homoskedasticity-only F-statistic) and the formula in Equation (18.63) produce the same value for the J-statistic. [Hint: Use the results in (a), (b, i), and Exercise 18.13.]
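The identity in part (a) can be checked numerically. The sketch below is a minimal illustration, assuming the supposition is X₂′Y = 0 and the selection matrix is R = [I_{k₁} 0] (the equations appeared only as images in the source, so this reading is an assumption); the dimensions and random design are invented for illustration. It constructs Y orthogonal to X₂ and compares the two quadratic forms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k1, k2 = 200, 3, 2  # illustrative dimensions (assumption, not from the text)

X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
X = np.hstack([X1, X2])

# Impose the supposition X2'Y = 0 by projecting an arbitrary vector
# off the column space of X2.
y0 = rng.normal(size=n)
Y = y0 - X2 @ np.linalg.solve(X2.T @ X2, X2.T @ y0)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # OLS estimator
R = np.hstack([np.eye(k1), np.zeros((k1, k2))])  # R = [I_{k1}  0], selects beta_1

# Left side: full quadratic form beta'(X'X)beta.
lhs = beta_hat @ (X.T @ X) @ beta_hat
# Right side: Wald-type form (R beta)'[R(X'X)^{-1}R']^{-1}(R beta).
Rb = R @ beta_hat
rhs = Rb @ np.linalg.solve(R @ np.linalg.inv(X.T @ X) @ R.T, Rb)

print(abs(lhs - rhs))  # ~0: the two forms agree when X2'Y = 0
```

The projection step is what makes the identity work: with X₂′Y = 0, the normal equations give X′Y = R′(X₁′Y), and both sides reduce to the same quadratic form in X₁′Y.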
Explanation

a) The regression is Y = Xβ + U, where X₁ and X₂ are matrices with k₁ and k₂ columns, respectively. …
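Parts (b)(i) and (b)(ii) can also be illustrated by simulation. In the sketch below, the data-generating process, instrument names, and sample size are invented for illustration and are not taken from Equation (12.17). It computes TSLS by hand, checks that the residuals are orthogonal to the included exogenous regressors, and forms J = mF from the auxiliary regression described in Key Concept 12.6.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 2  # sample size and number of instruments (illustrative)

# Simulated IV design: x is endogenous (correlated with u), w is exogenous.
z1, z2 = rng.normal(size=(2, n))
w = rng.normal(size=n)
u = rng.normal(size=n)
x = z1 + 0.5 * z2 + w + u + rng.normal(size=n)
y = 1.0 + 2.0 * x - 1.0 * w + u

Wmat = np.column_stack([np.ones(n), w])           # W = [1  W1]
Zfull = np.column_stack([np.ones(n), w, z1, z2])  # exogenous regressors + instruments
X = np.column_stack([np.ones(n), w, x])

# Two-stage least squares: project X onto col(Zfull), regress y on the projection.
Xhat = Zfull @ np.linalg.lstsq(Zfull, X, rcond=None)[0]
beta_tsls = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
u_hat = y - X @ beta_tsls  # TSLS residuals (computed with the actual X)

# (b)(i): the TSLS normal equations imply W'u_hat = 0.
print(np.abs(Wmat.T @ u_hat).max())  # ~0

# Key Concept 12.6 method: regress u_hat on [1, W, Z]; homoskedasticity-only
# F-statistic testing that the instrument coefficients are zero; J = m*F.
fit_u = Zfull @ np.linalg.lstsq(Zfull, u_hat, rcond=None)[0]
ssr_unres = np.sum((u_hat - fit_u) ** 2)
fit_r = Wmat @ np.linalg.lstsq(Wmat, u_hat, rcond=None)[0]
ssr_res = np.sum((u_hat - fit_r) ** 2)   # equals u_hat'u_hat, by (b)(i)
F = ((ssr_res - ssr_unres) / m) / (ssr_unres / (n - Zfull.shape[1]))
J = m * F
print(J)  # overidentification J-statistic
```

Note how (b)(i) feeds into (b)(ii): because W′û = 0, the restricted SSR is just û′û, which is what lets the F-statistic version collapse to the matrix formula for J.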