
Introduction to Econometrics, 3rd Edition, by James Stock and Mark Watson

3rd Edition — ISBN: 978-9352863501
Exercise 2
Consider the regression model Y = Xβ + U. Partition X as [X₁ X₂] and β as [β₁′ β₂′]′, where X₁ has k₁ columns and X₂ has k₂ columns. Suppose that X₂′Y = 0. Let R = [I_{k₁} 0_{k₁×k₂}].

a. Show that β̂′(X′X)β̂ = (Rβ̂)′[R(X′X)⁻¹R′]⁻¹(Rβ̂).

b. Consider the regression described in Equation (12.17). Let W = [1 W₁ W₂ … W_r], where 1 is an n × 1 vector of ones, W₁ is the n × 1 vector with iᵗʰ element W₁ᵢ, and so forth. Let Û^TSLS denote the vector of two-stage least squares residuals.

i. Show that W′Û^TSLS = 0.

ii. Show that the method for computing the J-statistic described in Key Concept 12.6 (using a homoskedasticity-only F-statistic) and the formula in Equation (18.63) produce the same value for the J-statistic. [Hint: Use the results in (a), (b)(i), and Exercise 18.13.]
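The identity in part (a) can be checked numerically. The sketch below uses simulated data (all dimensions and matrices are hypothetical, not from the text): Y is constructed to satisfy X₂′Y = 0 exactly by projecting a random vector off the column space of X₂, and then both sides of the identity are compared.

```python
import numpy as np

# Hypothetical dimensions: X = [X1 X2] with k1 = 2 and k2 = 3 columns.
rng = np.random.default_rng(0)
n, k1, k2 = 50, 2, 3
X1 = rng.standard_normal((n, k1))
X2 = rng.standard_normal((n, k2))
X = np.hstack([X1, X2])

# Build Y with X2'Y = 0 by removing the projection of v onto col(X2).
v = rng.standard_normal(n)
Y = v - X2 @ np.linalg.solve(X2.T @ X2, X2.T @ v)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)      # OLS estimator
R = np.hstack([np.eye(k1), np.zeros((k1, k2))])   # R = [I_{k1}  0_{k1 x k2}]

# Left side: beta_hat'(X'X)beta_hat; right side: (R b)'[R(X'X)^{-1}R']^{-1}(R b).
lhs = beta_hat @ (X.T @ X) @ beta_hat
Rb = R @ beta_hat
mid = R @ np.linalg.inv(X.T @ X) @ R.T
rhs = Rb @ np.linalg.solve(mid, Rb)
print(np.isclose(lhs, rhs))  # True
```

The key step, mirrored in the algebra of part (a), is that X₂′Y = 0 forces X′Y = R′(X₁′Y), so both quadratic forms reduce to β̂′X′Y.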
Explanation

a. The OLS estimator is β̂ = (X′X)⁻¹X′Y. Because X₂′Y = 0, the stacked vector X′Y can be written as X′Y = [X₁′Y′ 0′]′ = R′(X₁′Y). Hence β̂ = (X′X)⁻¹R′(X₁′Y), and premultiplying by R gives Rβ̂ = R(X′X)⁻¹R′(X₁′Y), so that X₁′Y = [R(X′X)⁻¹R′]⁻¹(Rβ̂). Therefore

β̂′(X′X)β̂ = β̂′X′Y = β̂′R′(X₁′Y) = (Rβ̂)′[R(X′X)⁻¹R′]⁻¹(Rβ̂),

which is the identity in part (a).
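Part (b)(i) can likewise be illustrated numerically: the TSLS residuals, formed with the actual endogenous regressor X rather than its first-stage fitted value, are still exactly orthogonal to the included exogenous regressors W. The sketch below simulates a small TSLS setup (all coefficients and dimensions are hypothetical).

```python
import numpy as np

# Hypothetical design: one endogenous X, r = 2 exogenous W's plus a constant,
# and m = 3 instruments Z.
rng = np.random.default_rng(1)
n, r, m = 200, 2, 3
W = np.hstack([np.ones((n, 1)), rng.standard_normal((n, r))])
Z = rng.standard_normal((n, m))
X = Z @ rng.standard_normal(m) + W @ rng.standard_normal(r + 1) + rng.standard_normal(n)
Y = 0.5 * X + W @ np.array([1.0, -1.0, 2.0]) + rng.standard_normal(n)

# First stage: fit X on [Z W].  Second stage: fit Y on [X_hat W].
ZW = np.hstack([Z, W])
X_hat = ZW @ np.linalg.lstsq(ZW, X, rcond=None)[0]
coef = np.linalg.lstsq(np.column_stack([X_hat, W]), Y, rcond=None)[0]

# TSLS residuals use the *actual* X, not X_hat.
U_tsls = Y - np.column_stack([X, W]) @ coef
print(np.allclose(W.T @ U_tsls, 0))  # True
```

The orthogonality holds because the second-stage normal equations set W′(Y − X̂β̂ − Wγ̂) = 0, and the first-stage residual X − X̂ is itself orthogonal to W; this is exactly the condition that lets part (a) be applied, with W playing the role of X₂, in part (b)(ii).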
