Deck 18: The Theory of Multiple Regression

Question

Suppose that a sample of n = 20 households has the sample means and sample covariances shown below for a dependent variable and two regressors (the table of sample moments is not reproduced here).
a. Calculate the OLS estimates of β₀, β₁, and β₂. Calculate s²_û. Calculate the R² of the regression.
b. Suppose that all six assumptions in Key Concept 18.1 hold. Test the hypothesis that β₁ = 0 at the 5% significance level.
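For intuition, here is a hedged numerical sketch of the computations in part (a) and the homoskedasticity-only t-test in part (b). The sample moments below are made up, standing in for the table that is not reproduced above.

```python
import numpy as np

# Made-up sample moments standing in for the exercise's table.
n = 20
ybar, x1bar, x2bar = 10.0, 5.0, 2.0        # sample means (illustrative)
s_yy = 4.0                                 # sample variance of Y (illustrative)
S = np.array([[2.0, 0.5],                  # sample covariance matrix of (X1, X2)
              [0.5, 1.0]])
s_xy = np.array([1.2, 0.4])                # sample covariances of (X1, X2) with Y

b = np.linalg.solve(S, s_xy)               # slope estimates (beta_1, beta_2)
b0 = ybar - b @ np.array([x1bar, x2bar])   # intercept estimate beta_0

ssr = (n - 1) * (s_yy - b @ s_xy)          # sum of squared residuals
s2_u = ssr / (n - 2 - 1)                   # s^2_u with k = 2 regressors
r2 = (b @ s_xy) / s_yy                     # R^2 = ESS / TSS

# Homoskedasticity-only t-statistic for H0: beta_1 = 0 (Assumptions #1-#6)
se_b1 = np.sqrt(s2_u * np.linalg.inv(S)[0, 0] / (n - 1))
print(b0, b, s2_u, r2, b[0] / se_b1)
```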
Question

Consider the regression model Y = Xβ + U. Partition X as [X₁ X₂] and β conformably as [β₁′ β₂′]′, where X₁ has k₁ columns and X₂ has k₂ columns. Suppose that X₂′Y = 0. Let R = [I_{k₁} 0_{k₁×k₂}].
a. Show that β̂′(X′X)β̂ = (Rβ̂)′[R(X′X)⁻¹R′]⁻¹(Rβ̂).
b. Consider the regression described in Equation (12.17). Let W = [1 W₁ W₂ … W_r], where 1 is an n × 1 vector of ones, W₁ is the n × 1 vector with iᵗʰ element W₁ᵢ, and so forth. Let Û^TSLS denote the vector of two stage least squares residuals.
i. Show that W′Û^TSLS = 0.
ii. Show that the method for computing the J-statistic described in Key Concept 12.6 (using a homoskedasticity-only F-statistic) and the formula in Equation (18.63) produce the same value for the J-statistic. [Hint: Use the results in (a), (b, i), and Exercise 18.13.]
Question

You are analyzing a linear regression model with 500 observations and one regressor. Explain how you would construct a confidence interval for β₁ if:
a. Assumptions #1 through #4 in Key Concept 18.1 are true, but you think Assumption #5 or #6 might not be true.
b. Assumptions #1 through #5 are true, but you think Assumption #6 might not be true. (Give two ways to construct the confidence interval.)
c. Assumptions #1 through #6 are true.
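A minimal sketch of the two standard-error choices behind parts (a)-(c), using simulated data and the statsmodels package; the data and numbers here are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
u = (1 + 0.5 * np.abs(x)) * rng.normal(size=n)   # heteroskedastic errors
y = 2.0 + 1.5 * x + u
X = sm.add_constant(x)

# (a) Heteroskedasticity-robust SEs: valid even if Assumption #5 or #6 fails.
robust = sm.OLS(y, X).fit(cov_type="HC1")
print("robust 95% CI for beta_1:", robust.conf_int()[1])

# (b)/(c) Homoskedasticity-only SEs: valid under Assumption #5; with #6 the
# t-distribution critical values are exact rather than approximate.
classic = sm.OLS(y, X).fit()
print("homoskedasticity-only 95% CI for beta_1:", classic.conf_int()[1])
```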
Question

(Consistency of clustered standard errors.) Consider the panel data model Y_it = βX_it + α_i + u_it, where all variables are scalars. Assume that Assumptions #1, #2, and #4 in Key Concept 10.3 hold, and strengthen Assumption #3 so that X_it and u_it have eight nonzero finite moments. Let M = I_T − T⁻¹ιι′, where ι is a T × 1 vector of ones. Also let Y_i = (Y_i1 Y_i2 … Y_iT)′, X_i = (X_i1 X_i2 … X_iT)′, u_i = (u_i1 u_i2 … u_iT)′, X̃_i = MX_i, and ũ_i = Mu_i. For the asymptotic calculations in this problem, suppose that T is fixed and n → ∞. (The parts of this question appeared in equations that are not reproduced here.)
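As a hedged numerical sketch of the objects in this setup (not the exercise's derivation), the code below builds the de-meaning matrix M, computes the within estimator for a single regressor, and forms a variance estimate clustered by entity, all on simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, beta = 200, 5, 1.0

alpha = rng.normal(size=(n, 1))                        # entity fixed effects
X = rng.normal(size=(n, T)) + alpha                    # regressor correlated with alpha
u = rng.normal(size=(n, T)) + rng.normal(size=(n, 1))  # errors correlated within entity
Y = beta * X + alpha + u

M = np.eye(T) - np.ones((T, T)) / T                    # M = I_T - (1/T) * iota iota'
Xt, Yt = X @ M, Y @ M                                  # de-meaned (within-transformed) data

beta_hat = (Xt * Yt).sum() / (Xt ** 2).sum()           # fixed-effects (within) estimator
eta = Xt * (Yt - beta_hat * Xt)                        # X~_it times the within residual

Q = (Xt ** 2).sum()
var_clustered = ((eta.sum(axis=1)) ** 2).sum() / Q ** 2  # cluster the sums by entity i
print(beta_hat, np.sqrt(var_clustered))
```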
Question

Let W be an m × 1 vector with covariance matrix Σ_W, where Σ_W is finite and positive definite. Let c be a nonrandom m × 1 vector, and let Q = c′W.
a. Show that var(Q) = c′Σ_W c.
b. Suppose that c ≠ 0_m. Show that 0 < var(Q) < ∞.
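For part (a), a one-line sketch of the variance calculation, writing μ_W for E(W):

$$\operatorname{var}(Q)=E\big[\big(c'(W-\mu_W)\big)^2\big]=c'\,E\big[(W-\mu_W)(W-\mu_W)'\big]\,c=c'\Sigma_W c .$$

For part (b), positive definiteness of Σ_W gives c′Σ_W c > 0 whenever c ≠ 0_m, and finiteness of Σ_W gives c′Σ_W c < ∞.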
Question

This exercise takes up the problem of missing data discussed in Section 9.2. Consider the regression model Y_i = βX_i + u_i, i = 1, …, n, where all variables are scalars and the constant term/intercept is omitted for convenience.
a. Suppose that the least squares assumptions in Key Concept 4.3 are satisfied. Show that the least squares estimator of β is unbiased and consistent.
b. Now suppose that some of the observations are missing. Let I_i denote a binary random variable that indicates the nonmissing observations; that is, I_i = 1 if observation i is not missing and I_i = 0 if observation i is missing. Assume that (I_i, X_i, u_i) are i.i.d.
i. Show that the OLS estimator can be written as β̂ = (Σᵢ I_i X_i²)⁻¹(Σᵢ I_i X_i Y_i) = β + (Σᵢ I_i X_i²)⁻¹(Σᵢ I_i X_i u_i).
ii. Suppose that data are "missing completely at random" in the sense that Pr(I_i = 1 | X_i, u_i) = p, where p is a constant. Show that β̂ is unbiased and consistent.
iii. Suppose that the probability that the iᵗʰ observation is missing depends on X_i but not on u_i; that is, Pr(I_i = 1 | X_i, u_i) = p(X_i). Show that β̂ is unbiased and consistent.
iv. Suppose that the probability that the iᵗʰ observation is missing depends on both X_i and u_i; that is, Pr(I_i = 1 | X_i, u_i) = p(X_i, u_i). Is β̂ unbiased? Is β̂ consistent? Explain.
c. Suppose that β = 1 and that X_i and u_i are mutually independent standard normal random variables [so that both X_i and u_i are distributed N(0, 1)]. Suppose that I_i = 1 when Y_i ≥ 0 but I_i = 0 when Y_i < 0. Is β̂ unbiased? Is β̂ consistent? Explain.
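A small Monte Carlo sketch (with simulated data, not part of the exercise) contrasting the missing-completely-at-random case in (b, ii) with the selection-on-Y case in (c):

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, reps = 1000, 1.0, 500
est_mcar, est_select = [], []

for _ in range(reps):
    x = rng.normal(size=n)
    u = rng.normal(size=n)
    y = beta * x + u

    keep = rng.random(n) < 0.7                  # (b, ii): missing completely at random
    est_mcar.append((x[keep] @ y[keep]) / (x[keep] @ x[keep]))

    keep = y >= 0                               # (c): selection depends on Y, hence on u
    est_select.append((x[keep] @ y[keep]) / (x[keep] @ x[keep]))

print("MCAR mean estimate:       ", np.mean(est_mcar))    # close to beta = 1
print("Select-on-Y mean estimate:", np.mean(est_select))  # noticeably biased
```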
Question

Suppose that Assumptions #1 through #5 in Key Concept 18.1 are true but that Assumption #6 is not. Does the result in Equation (18.31) hold? Explain.
Question

Consider the regression model in matrix form Y = Xβ + Wγ + U, where X and W are matrices of regressors and β and γ are vectors of unknown regression coefficients. Let X̃ = M_W X and Ỹ = M_W Y, where M_W = I_n − W(W′W)⁻¹W′.
a. Show that the OLS estimators of β and γ can be written as [expressions not reproduced here].
b. Show that [the identity stated in the original, not reproduced here].
c. Show that [the identity stated in the original, not reproduced here].
d. The Frisch–Waugh theorem (Appendix 6.2) says that β̂ = (X̃′X̃)⁻¹(X̃′Ỹ). Use the result in (c) to prove the Frisch–Waugh theorem.
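A quick numerical check of the Frisch–Waugh claim in part (d), using made-up data: the coefficients on X from the long regression coincide with those from regressing the W-residualized Y on the W-residualized X.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
W = np.column_stack([np.ones(n), rng.normal(size=n)])   # W includes an intercept
X = rng.normal(size=(n, 2)) + W[:, [1]]                 # X correlated with W
Y = X @ np.array([1.0, -0.5]) + W @ np.array([2.0, 0.7]) + rng.normal(size=n)

# Long regression of Y on [X W]
Z = np.column_stack([X, W])
coef_long = np.linalg.solve(Z.T @ Z, Z.T @ Y)[:2]

# Frisch-Waugh: regress M_W Y on M_W X, where M_W = I - W(W'W)^{-1}W'
M_W = np.eye(n) - W @ np.linalg.solve(W.T @ W, W.T)
Xt, Yt = M_W @ X, M_W @ Y
coef_fw = np.linalg.solve(Xt.T @ Xt, Xt.T @ Yt)

print(np.allclose(coef_long, coef_fw))   # True
```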
Question

Consider the regression model from Chapter 4, Y_i = β₀ + β₁X_i + u_i, i = 1, …, n, and assume that the assumptions in Key Concept 4.3 hold.
a. Write the model in the matrix form given in Equations (18.2) and (18.4).
b. Show that Assumptions #1 through #4 in Key Concept 18.1 are satisfied.
c. Use the general formula for β̂ in Equation (18.11) to derive the expressions for β̂₀ and β̂₁ given in Key Concept 4.2.
d. Show that the (1,1) element of the covariance matrix in Equation (18.13) is equal to the expression for the corresponding variance given in Key Concept 4.4.
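A hedged numerical illustration of part (c): on simulated data, the matrix formula (X′X)⁻¹X′Y reproduces the Key Concept 4.2 expressions β̂₁ = s_XY/s²_X and β̂₀ = Ȳ − β̂₁X̄.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Matrix form: a column of ones and a column of x
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # (X'X)^{-1} X'Y

# Key Concept 4.2 formulas
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()

print(np.allclose(beta_hat, [b0, b1]))   # True
```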
Question

Can you compute the BLUE estimator of β if Equation (18.41) holds and you do not know Ω? What if you do know Ω?
Question

Let P_X and M_X be as defined in Equations (18.24) and (18.25).
a. Prove that P_X M_X = 0_{n×n} and that P_X and M_X are idempotent.
b. Derive Equations (18.27) and (18.28).
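A quick numerical check of part (a), taking P_X = X(X′X)⁻¹X′ and M_X = I_n − P_X as the usual projection and annihilator matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 50, 3
X = rng.normal(size=(n, k))

P_X = X @ np.linalg.solve(X.T @ X, X.T)   # projection onto the column space of X
M_X = np.eye(n) - P_X                     # annihilator (residual-maker) matrix

print(np.allclose(P_X @ M_X, 0))          # P_X M_X = 0
print(np.allclose(P_X @ P_X, P_X))        # P_X is idempotent
print(np.allclose(M_X @ M_X, M_X))        # M_X is idempotent
```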
Question

Construct an example of a regression model that satisfies the assumption E(u_i | X_i) = 0 but for which E(U | X) ≠ 0_n.
Question

Consider the regression model in matrix form, Y = Xβ + Wγ + U, where X is an n × k₁ matrix of regressors and W is an n × k₂ matrix of regressors. Then, as shown in Exercise 18.17, the OLS estimator β̂ can be expressed as β̂ = (X′M_W X)⁻¹(X′M_W Y).
Now let β̂_BV be the "binary variable" fixed effects estimator computed by estimating Equation (10.11) by OLS, and let β̂_DM be the "de-meaning" fixed effects estimator computed by estimating Equation (10.14) by OLS, in which the entity-specific sample means have been subtracted from X and Y. Use the expression for β̂ given above to prove that β̂_BV = β̂_DM. [Hint: Write Equation (10.11) using a full set of fixed effects, D1_i, D2_i, …, Dn_i, and no constant term. Include all of the fixed effects in W. Write out the matrix M_W X.]
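A hedged numerical illustration of the equivalence being proved (simulated panel, one regressor): the dummy-variable estimator and the de-meaning estimator return the same coefficient.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ent, T = 20, 6
ids = np.repeat(np.arange(n_ent), T)
alpha = rng.normal(size=n_ent)
x = rng.normal(size=n_ent * T) + alpha[ids]
y = 1.5 * x + alpha[ids] + rng.normal(size=n_ent * T)

# Binary-variable estimator: regress y on x and a full set of entity dummies, no constant
D = (ids[:, None] == np.arange(n_ent)[None, :]).astype(float)
Z = np.column_stack([x, D])
beta_bv = np.linalg.solve(Z.T @ Z, Z.T @ y)[0]

# De-meaning estimator: subtract entity means from x and y, then regress
x_dm = x - np.array([x[ids == i].mean() for i in range(n_ent)])[ids]
y_dm = y - np.array([y[ids == i].mean() for i in range(n_ent)])[ids]
beta_dm = (x_dm @ y_dm) / (x_dm @ x_dm)

print(np.isclose(beta_bv, beta_dm))   # True
```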
Question

Consider the regression model Y_i = β₁X_i + β₂W_i + u_i, where for simplicity the intercept is omitted and all variables are assumed to have a mean of zero. Suppose that X_i is distributed independently of (W_i, u_i), but W_i and u_i might be correlated. Let β̂₁ and β̂₂ be the OLS estimators for this model. Show that:
a. Whether or not W_i and u_i are correlated, β̂₁ →ᵖ β₁.
b. If W_i and u_i are correlated, then β̂₂ is inconsistent.
c. Let β̃₁ be the OLS estimator from the regression of Y on X (the restricted regression that excludes W). Provide conditions under which β̂₁ has a smaller asymptotic variance than β̃₁, allowing for the possibility that W_i and u_i are correlated.
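A small simulation (made-up data) illustrating the contrast between parts (a) and (b): with X independent of (W, u) but W correlated with u, the coefficient on X is consistent while the coefficient on W is not.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
x = rng.normal(size=n)                     # X independent of (W, u)
e = rng.normal(size=n)
w = e + rng.normal(size=n)                 # W correlated with u through e
u = e + rng.normal(size=n)
y = 1.0 * x + 1.0 * w + u                  # beta_1 = beta_2 = 1

Z = np.column_stack([x, w])
b1, b2 = np.linalg.solve(Z.T @ Z, Z.T @ y)
print(b1)   # close to 1: beta_1-hat is consistent despite corr(W, u) != 0
print(b2)   # well above 1: beta_2-hat is inconsistent
```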
Question

Consider the regression model given in the original equations (not reproduced here), whose error u_i is built recursively from innovations ũ_i. Suppose that the ũ_i are i.i.d. with mean 0 and variance 1 and are distributed independently of X_j for all i and j.
a. Derive an expression for Ω = E(UU′), the covariance matrix of the errors.
b. Explain how to estimate the model by GLS without explicitly inverting the matrix Ω. (Hint: Transform the model so that the regression errors are the ũ_i.)
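A hedged sketch of the "transform instead of invert" idea in part (b). The error recursion used below (u_1 = ũ_1, u_i = 0.5 u_{i−1} + ũ_i) is an assumed example only; the exercise's own recursion appeared in an equation that is not reproduced above.

```python
import numpy as np

rng = np.random.default_rng(8)
n, beta, rho = 500, 2.0, 0.5
x = rng.normal(size=n)
e = rng.normal(size=n)                 # i.i.d. innovations with variance 1
u = np.empty(n)
u[0] = e[0]                            # assumed initialization u_1 = e_1
for i in range(1, n):
    u[i] = rho * u[i - 1] + e[i]       # assumed AR(1)-style recursion
y = beta * x + u

# Quasi-difference: y_i - rho*y_{i-1} = beta*(x_i - rho*x_{i-1}) + e_i for i >= 2;
# the first observation is kept as is because u_1 = e_1 already has variance 1.
y_t = np.concatenate([[y[0]], y[1:] - rho * y[:-1]])
x_t = np.concatenate([[x[0]], x[1:] - rho * x[:-1]])

beta_gls = (x_t @ y_t) / (x_t @ x_t)   # OLS on the transformed data equals GLS
print(beta_gls)
```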
Question

This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y = Xβ + Wγ + U, where X and W are, respectively, n × k₁ and n × k₂ matrices of regressors. Let X_i′ and W_i′ denote the iᵗʰ rows of X and W [as in Equation (18.3)]. Assume that (i) E(u_i | X_i, W_i) = W_i′δ, where δ is a k₂ × 1 vector of unknown parameters; (ii) (X_i, W_i, Y_i) are i.i.d.; (iii) (X_i, W_i, u_i) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #1 through #4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption.
a. Use the expression for β̂ given in Exercise 18.6 to write β̂ − β = (X′M_W X)⁻¹(X′M_W U).
b. Show that [the convergence result stated in the original, not reproduced here]. [The matrix A_n →ᵖ A if A_{n,ij} →ᵖ A_ij for all i, j, where A_{n,ij} and A_ij are the (i, j) elements of A_n and A.]
c. Show that assumptions (i) and (ii) imply that [the result stated in the original, not reproduced here].
d. Use (c) and the law of iterated expectations to show that [the result stated in the original, not reproduced here].
e. Use (a) through (d) to conclude that, under conditions (i) through (iv), β̂ →ᵖ β.
Use (a) through (d) to conclude that, under conditions (i) through (iv)  <div style=padding-top: 35px> , and so forth. [The matrix This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i)   , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for   given in Exercise 18.6 to write   - ß =   . b. Show that   where   =   , and so forth. [The matrix   if   : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that   . d. Use (c) and the law of iterated expectations to show that   e. Use (a) through (d) to conclude that, under conditions (i) through (iv)  <div style=padding-top: 35px> if This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i)   , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for   given in Exercise 18.6 to write   - ß =   . b. Show that   where   =   , and so forth. [The matrix   if   : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that   . d. Use (c) and the law of iterated expectations to show that   e. Use (a) through (d) to conclude that, under conditions (i) through (iv)  <div style=padding-top: 35px> : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.]
c. Show that assumptions (i) and (ii) imply that This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i)   , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for   given in Exercise 18.6 to write   - ß =   . b. Show that   where   =   , and so forth. [The matrix   if   : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that   . d. Use (c) and the law of iterated expectations to show that   e. Use (a) through (d) to conclude that, under conditions (i) through (iv)  <div style=padding-top: 35px> .
d. Use (c) and the law of iterated expectations to show that This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i)   , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for   given in Exercise 18.6 to write   - ß =   . b. Show that   where   =   , and so forth. [The matrix   if   : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that   . d. Use (c) and the law of iterated expectations to show that   e. Use (a) through (d) to conclude that, under conditions (i) through (iv)  <div style=padding-top: 35px>
e. Use (a) through (d) to conclude that, under conditions (i) through
(iv) This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y=Xß + Wy + u, where X and W are, respectively, n × k 1 and n × k 2 matrices of regressors. Let X i and W i denote the i th rows of X and W [as in Equation (18.3)]. Assume that (i)   , where is a k 2 × 1 vector of unknown parameters; (ii) (Xi, W i Yi) are i.i.d.; (iii) (X i W i u i ) have four finite, nonzero moments; and (iv) there is no perfect multicollinearity. These are Assumptions #l-#4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption. a. Use the expression for   given in Exercise 18.6 to write   - ß =   . b. Show that   where   =   , and so forth. [The matrix   if   : for all i,j, where A n,ij and A ij are the (i, j) elements of A n and A.] c. Show that assumptions (i) and (ii) imply that   . d. Use (c) and the law of iterated expectations to show that   e. Use (a) through (d) to conclude that, under conditions (i) through (iv)  <div style=padding-top: 35px>
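The consistency claim in part (e) can be illustrated with a small simulation. The sketch below is a minimal check under made-up parameter values: it draws scalar X and W with E(u | X, W) = Wδ, so the usual conditional mean zero assumption fails while conditional mean independence holds, and it shows that the OLS coefficient on X still approaches ß as n grows (while the coefficient on W converges to γ + δ rather than γ).

import numpy as np

rng = np.random.default_rng(0)
beta, gamma, delta = 2.0, 1.0, 0.5            # hypothetical true values

def coefs(n):
    w = rng.normal(size=n)
    x = 0.8 * w + rng.normal(size=n)          # X is correlated with W
    u = delta * w + rng.normal(size=n)        # E(u | X, W) = W*delta, not zero
    y = beta * x + gamma * w + u
    Z = np.column_stack([np.ones(n), x, w])   # regress Y on a constant, X, and W
    c, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return c[1], c[2]

for n in (100, 1_000, 10_000, 100_000):
    b, g = coefs(n)
    print(n, round(b, 3), round(g, 3))        # b -> beta = 2.0; g -> gamma + delta = 1.5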
Question
Let C be a symmetric idempotent matrix.
a. Show that the eigenvalues of C are either 0 or 1. (Hint: Note that Cq = λq implies 0 = Cq - λq = CCq - λq = λCq - λq = λ²q - λq, and solve for λ.)
b. Show that trace( C ) = rank( C ).
c. Let d be an n × 1 vector. Show that d'Cd ≥ 0.
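As a quick numerical illustration (not a proof), one can take C to be a projection matrix X(X'X)⁻¹X' built from an arbitrary simulated X, which is symmetric and idempotent, and check the three claims: eigenvalues in {0, 1}, trace equal to rank, and a nonnegative quadratic form.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))                  # any full-column-rank matrix
C = X @ np.linalg.inv(X.T @ X) @ X.T          # symmetric idempotent (projection matrix)

eigvals = np.sort(np.linalg.eigvalsh(C))
print(np.allclose(eigvals, np.r_[np.zeros(17), np.ones(3)]))   # eigenvalues are 0 or 1
print(np.isclose(np.trace(C), np.linalg.matrix_rank(C)))       # trace(C) = rank(C)

d = rng.normal(size=20)
print(d @ C @ d >= 0)                         # quadratic form is nonnegative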
Question
Suppose that C is an n × n symmetric idempotent matrix with rank r, and let V ~ N(0ₙ, Iₙ).
a. Show that C = AA', where A is n × r with A'A = Iᵣ. (Hint: C is positive semidefinite and can be written as QΛQ', as explained in Appendix 18.1.)
b. Show that A'V ~ N(0ᵣ, Iᵣ).
c. Show that V'CV ~ χ²ᵣ.
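The distributional claim in part (c) can be checked informally by simulation: draw V ~ N(0ₙ, Iₙ) many times, form V'CV for a symmetric idempotent C of rank r, and compare the sample mean and variance of the draws with those of a chi-squared distribution with r degrees of freedom (mean r, variance 2r). The C below is again a projection matrix built from simulated data.

import numpy as np

rng = np.random.default_rng(2)
n, r = 10, 4
X = rng.normal(size=(n, r))
C = X @ np.linalg.inv(X.T @ X) @ X.T          # symmetric idempotent with rank r

V = rng.normal(size=(200_000, n))             # 200,000 draws of V ~ N(0, I_n)
draws = np.einsum('ij,jk,ik->i', V, C, V)     # V_i' C V_i for each draw
print(draws.mean(), draws.var())              # close to r = 4 and 2r = 8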
Question
Consider the population regression of test scores against income and the square of income in Equation (8.1).
a. Write the regression in Equation (8.1) in the matrix form of Equation (18.5). Define Y, X, U, and ß.
b. Explain how to test the null hypothesis that the relationship between test scores and income is linear against the alternative that it is quadratic. Write the null hypothesis in the form of Equation (18.20). What are R, r, and q?
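For part (b), one concrete way to code the restriction: with ß = (ß₀, ß₁, ß₂)' and the squared-income term entering through ß₂, the null of linearity is ß₂ = 0, so R = [0 0 1], r = 0, and q = 1. The sketch below uses simulated income and test-score data (the exercise's actual data are not reproduced here) and a heteroskedasticity-robust Wald statistic; variable names and parameter values are made up for illustration.

import numpy as np

rng = np.random.default_rng(3)
n = 500
income = rng.uniform(5, 50, size=n)                       # hypothetical income data
test_score = 620 + 2.0 * income - 0.02 * income**2 + rng.normal(0, 10, size=n)

X = np.column_stack([np.ones(n), income, income**2])      # regressors: 1, income, income^2
beta_hat = np.linalg.solve(X.T @ X, X.T @ test_score)
u_hat = test_score - X @ beta_hat

# Heteroskedasticity-robust (sandwich) covariance matrix of beta_hat
XtX_inv = np.linalg.inv(X.T @ X)
V_hat = XtX_inv @ ((X.T * u_hat**2) @ X) @ XtX_inv

R = np.array([[0.0, 0.0, 1.0]])                           # restriction R beta = r
r = np.array([0.0])

diff = R @ beta_hat - r
W = diff @ np.linalg.solve(R @ V_hat @ R.T, diff)         # Wald statistic, chi2_1 under H0
print(beta_hat, W)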
Question
a. Show that ß̂^Eff.GMM, the estimator given in Equation (18.66), is the efficient GMM estimator; that is, show that Equation (18.66) is the solution to the minimization problem in Equation (18.65).
b. Show that [displayed result].
c. Show that [displayed result].
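For orientation, the snippet below computes a two-step efficient GMM estimator for a linear IV model in numpy. It assumes the efficient GMM estimator takes the familiar form ß̂ = (X'Z Ω̂⁻¹Z'X)⁻¹ X'Z Ω̂⁻¹Z'Y with Ω̂ = Σ ZᵢZᵢ'ûᵢ² built from first-step (TSLS) residuals; whether this matches Equations (18.65) and (18.66) exactly should be verified against the text. All data below are simulated.

import numpy as np

rng = np.random.default_rng(4)
n = 2_000
z = rng.normal(size=(n, 3))                    # three instruments
v = rng.normal(size=n)
x = z @ np.array([1.0, 0.5, 0.5]) + v          # endogenous regressor
u = 0.8 * v + rng.normal(size=n)               # correlated with x, uncorrelated with z
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# Step 1: TSLS (GMM with weighting matrix (Z'Z)^-1) to get residuals
Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_tsls = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
u_hat = y - X @ beta_tsls

# Step 2: efficient GMM with Omega_hat = sum of z_i z_i' u_i^2
Omega = (Z.T * u_hat**2) @ Z
ZX, Zy = Z.T @ X, Z.T @ y
beta_gmm = np.linalg.solve(ZX.T @ np.linalg.solve(Omega, ZX),
                           ZX.T @ np.linalg.solve(Omega, Zy))
print(beta_tsls, beta_gmm)                     # both near (1.0, 2.0)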
Question
A researcher studying the relationship between earnings and gender for a group of workers specifies the regression model Yᵢ = ß₀ + X₁ᵢß₁ + X₂ᵢß₂ + uᵢ, where X₁ᵢ is a binary variable that equals 1 if the i-th person is a female and X₂ᵢ is a binary variable that equals 1 if the i-th person is a male. Write the model in the matrix form of Equation (18.2) for a hypothetical set of n = 5 observations. Show that the columns of X are linearly dependent, so that X does not have full rank. Explain how you would respecify the model to eliminate the perfect multicollinearity.
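A small numpy sketch of the rank deficiency described here, with made-up data, and the usual fix of dropping one of the two binary regressors (or the intercept):

import numpy as np

female = np.array([1, 0, 1, 1, 0])             # X1: 1 if person i is female
male = 1 - female                              # X2: 1 if person i is male

X_bad = np.column_stack([np.ones(5), female, male])
print(np.linalg.matrix_rank(X_bad))            # 2 < 3: the columns satisfy X1 + X2 = intercept

X_ok = np.column_stack([np.ones(5), female])   # respecified model: drop X2 (or drop the intercept)
print(np.linalg.matrix_rank(X_ok))             # 2 = number of columns, full rank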
Question
Consider the problem of minimizing the sum of squared residuals subject to the constraint that Rb = r, where R is q × (k + 1) with rank q. Let ß̃ be the value of b that solves the constrained minimization problem.
a. Show that the Lagrangian for the minimization problem is L(b, λ) = (Y - Xb)'(Y - Xb) + λ'(Rb - r), where λ is a q × 1 vector of Lagrange multipliers.
b. Show that ß̃ = ß̂ - (X'X)⁻¹R'[R(X'X)⁻¹R']⁻¹(Rß̂ - r).
c. Show that (Y - Xß̃)'(Y - Xß̃) - (Y - Xß̂)'(Y - Xß̂) = (Rß̂ - r)'[R(X'X)⁻¹R']⁻¹(Rß̂ - r).
d. Show that F in Equation (18.36) is equivalent to the homoskedasticity-only F-statistic in Equation (7.13).
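A numerical sketch of this problem, using simulated data and the closed-form restricted estimator from part (b): it evaluates the F-statistic both in the Wald form (Rß̂ - r)'[R(X'X)⁻¹R']⁻¹(Rß̂ - r)/(q s²_û) and in the restricted/unrestricted SSR form, which part (d) asks you to show coincide under homoskedasticity.

import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta = np.array([1.0, 0.5, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

R = np.array([[0, 0, 1.0, 0], [0, 0, 0, 1.0]])   # H0: beta2 = beta3 = 0, so q = 2
r = np.zeros(2)
q = 2

XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y
# Restricted least squares estimator, as in part (b)
b_tilde = b_ols - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ b_ols - r)

ssr_u = np.sum((y - X @ b_ols) ** 2)
ssr_r = np.sum((y - X @ b_tilde) ** 2)
s2 = ssr_u / (n - k - 1)

F_wald = (R @ b_ols - r) @ np.linalg.solve(R @ XtX_inv @ R.T, R @ b_ols - r) / (q * s2)
F_ssr = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k - 1))
print(F_wald, F_ssr)                             # identical, illustrating part (d)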
Solution to Question 1:
a) Write the regression in the general matrix form Y = Xß + U, where Y is the n × 1 vector of observations on the dependent variable, X is the n × 3 matrix whose i-th row is (1, X₁ᵢ, X₂ᵢ), ß = (ß₀, ß₁, ß₂)' is the coefficient vector, and U is the n × 1 vector of errors; there are n observations on the dependent variable and on each regressor. The OLS estimator of the coefficient vector is ß̂ = (X'X)⁻¹X'Y, where a prime denotes transposition.
The entries of X'X and X'Y are sums that can be recovered from the sample means, variances, and covariances. Sums involving a single variable equal n times the corresponding sample mean, for example Σ X₁ᵢ = n X̄₁. Sums of squares and cross products follow from the sample variances and covariances; for example, since s²_X₁ = (Σ X₁ᵢ² - n X̄₁²)/(n - 1), we have Σ X₁ᵢ² = (n - 1)s²_X₁ + n X̄₁², and analogous expressions give Σ X₂ᵢ², Σ Yᵢ², Σ X₁ᵢX₂ᵢ, Σ X₁ᵢYᵢ, and Σ X₂ᵢYᵢ in terms of s²_X₂, s²_Y, and the sample covariances.
Substituting n = 20, the sample means X̄₁ = 7.24, X̄₂ = 4.00, and Ȳ = 6.39, the sample variances s²_X₁ = 0.80, s²_X₂ = 2.40, and s²_Y = 0.26, and the sample covariances s_X₁X₂ = 0.28, s_X₁Y = 0.22, and s_X₂Y = 0.32 into X'X and X'Y and evaluating ß̂ = (X'X)⁻¹X'Y yields the OLS estimates of ß₀, ß₁, and ß₂; in particular, ß̂₁ = 0.25.
The standard error of the regression follows from s²_û = û'û/(n - k - 1), where û = Y - Xß̂ is the vector of residuals, k = 2 is the number of regressors, and n = 20. Expanding û'û = Y'Y - 2ß̂'X'Y + ß̂'X'Xß̂, the first term Y'Y = (n - 1)s²_Y + n Ȳ² is obtained from s²_Y = 0.26 and Ȳ = 6.39, the remaining terms use the quantities already computed, and the result is û'û ≈ 3.3, so s²_û = 3.3/17.
The R² of the regression is R² = 1 - SSR/TSS, where SSR = û'û and TSS = Σ (Yᵢ - Ȳ)² = (n - 1)s²_Y.
b) Given that the Gauss-Markov conditions are satisfied, the hypothesis ß₁ = 0 can be tested with a t-test based on homoskedasticity-only standard errors. The homoskedasticity-only covariance matrix of the coefficient estimates is s²_û(X'X)⁻¹; its diagonal elements are the variances of ß̂₀, ß̂₁, and ß̂₂, in that order, so the standard error of ß̂₁ is the square root of the second diagonal element. With ß̂₁ = 0.25, the t-statistic is ß̂₁/SE(ß̂₁) ≈ 2.19. Since 2.19 exceeds the 5% critical value of 1.96, the null hypothesis is rejected: the coefficient is statistically significant.
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  Here the terms b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  is the sample variance of variable X₂ , b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  is the sample variance of Y , b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  is the sample covariance of X₁ and Y. Hence, substituting into the equation b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  The remaining matrix is b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  Finish the computation b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  The formula for standard error of regression is b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  The term b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  is the matrix that stores the residuals defined below b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  The second moment is b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  Note that the first term b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  is the second moment of Y which was found to be b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
Hence, substituting into the equation   Substitute the values n = 20, average of X₁ 7.24, average of X₂ 4.00, average of Y 4.00, variance of X₁ 0.80, variance of X₂ 2.40, covariance of X₁ and X₂ 0.28, covariance of X₁ and Y 0.22, covariance of X₂ and Y 0.32 into the above, the matrices are   The remaining matrix is   Finish the computation   The formula for standard error of regression is   The term   is the matrix that stores the residuals defined below   The second moment is   Note that the first term   is the second moment of Y which was found to be   Substitute the values, n for 20, 0.26 for variance of Y , and 6.39 for average of Y   The second term is   that is   Calculate the third term     Hence, the second moment is computed as   The standard error of regression, k is the number of independent variables which is 2, n is 20, and second moment of error is 3.3   The R score can be computed as   The SSR is the second moment of residual errors that is   The TSS is a multiple of the variance of Y   Then the R score is  The second term is b) Given that Gauss Markov conditions are satisfied, the hypothesis can be tested using a t -test with homoskedastic standard errors   Standard error can be found as square root of the variance, the covariance matrix of the coefficients is   The diagonals of the above matrix are the variances of the coefficient estimates. The first diagonal for ß 0 second for ß 1 and last for ß 2 , hence the standard error for ß 1 is   The coefficient is 0.25 hence t -stat is   2.19 is greater than 1.96 required at the 5% significant level, therefore, the null hypothesis is rejected. The coefficient is significant. a) Let the following be the general linear regression   Here the variables are matrices, Y corresponds to the dependent variable test scores, X corresponds to the independent variable matrix, ß stores the regression coefficients, and U is the error vector as defined below   As seen from the equation there are n observations for each dependent and independent variable. Here the subscripts X₁ i is the independence variable outcome X₁ for the i-th sample, and X₂ i is the independence variable outcome X₂ for the i-th sample. The OLS estimator for the vector of coefficients is defined as   Here T represents the transpose of the matrix. Multiplying the matrices leads to   Now it is necessary to compute for the terms inside the matrices. It is possible to use given data to calculate the summation variables. The summation terms with one variable can be computed simply as total observations times average as below   The summations with more than one variable can be defined in terms of variance and covariances for example, let   be the sample variance for X₁ then   The sample variance has   as one of the terms, isolate     Similar relations can be computed for other summations with two variables as below   Here the terms   is the sample variance of variable X₂ ,   is the sample variance of Y ,   is the sample covariance of X₁ and Y. 
a) Write the regression in matrix form as Y = Xß + U. Here Y is the n × 1 vector of observations on the dependent variable, X is the n × 3 matrix of regressors whose i-th row is (1, X₁i, X₂i), ß = (ß0, ß1, ß2)′ is the vector of regression coefficients, and U is the n × 1 vector of errors. X₁i and X₂i denote the values of the regressors X₁ and X₂ for the i-th household. The OLS estimator of the coefficient vector is

$$\hat\beta=(X'X)^{-1}X'Y.$$

Multiplying out X′X and X′Y shows that their entries are the sums ΣX₁i, ΣX₂i, ΣX₁i², ΣX₂i², ΣX₁iX₂i, ΣYi, ΣX₁iYi, and ΣX₂iYi, all of which can be recovered from the given sample means, variances, and covariances. A sum involving a single variable is simply n times its sample mean, for example ΣX₁i = n X̄₁. A sum involving the product of two variables follows from the corresponding sample variance or covariance: since the sample variance of X₁ is

$$s^2_{X_1}=\frac{1}{n-1}\sum_{i=1}^{n}(X_{1i}-\bar X_1)^2,$$

isolating the sum of squares gives ΣX₁i² = (n − 1)s²_{X₁} + nX̄₁². Analogous relations hold for the other sums:

$$\sum X_{2i}^2=(n-1)s^2_{X_2}+n\bar X_2^2,\qquad \sum X_{1i}X_{2i}=(n-1)s_{X_1X_2}+n\bar X_1\bar X_2,$$
$$\sum X_{1i}Y_i=(n-1)s_{X_1Y}+n\bar X_1\bar Y,\qquad \sum X_{2i}Y_i=(n-1)s_{X_2Y}+n\bar X_2\bar Y,\qquad \sum Y_i^2=(n-1)s^2_Y+n\bar Y^2,$$

where s²_{X₂} is the sample variance of X₂, s²_Y is the sample variance of Y, and s_{X₁Y} is the sample covariance of X₁ and Y.
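Equivalently (a consistency check added here, not part of the original worked answer), the slope estimates can be written directly in terms of the sample variances and covariances; this standard deviation-from-means form gives the same numbers as the matrix formula:

$$\hat\beta_1=\frac{s_{X_1Y}\,s^2_{X_2}-s_{X_2Y}\,s_{X_1X_2}}{s^2_{X_1}s^2_{X_2}-s_{X_1X_2}^2},\qquad \hat\beta_2=\frac{s_{X_2Y}\,s^2_{X_1}-s_{X_1Y}\,s_{X_1X_2}}{s^2_{X_1}s^2_{X_2}-s_{X_1X_2}^2},\qquad \hat\beta_0=\bar Y-\hat\beta_1\bar X_1-\hat\beta_2\bar X_2.$$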
Hence, substituting the values n = 20, X̄₁ = 7.24, X̄₂ = 4.00, Ȳ = 6.39, s²_{X₁} = 0.80, s²_{X₂} = 2.40, s_{X₁X₂} = 0.28, s_{X₁Y} = 0.22, s_{X₂Y} = 0.32, and s²_Y = 0.26 into these expressions gives the numerical matrices X′X and X′Y. Inverting X′X and multiplying by X′Y then yields the OLS estimates of ß0, ß1, and ß2; in particular the estimate of ß1 is about 0.25.

The estimator of the error variance (the squared standard error of the regression) is

$$s^2_{\hat u}=\frac{\hat u'\hat u}{n-k-1},$$

where û = Y − Xß̂ is the vector of residuals and k = 2 is the number of regressors. Expanding the sum of squared residuals,

$$\hat u'\hat u = Y'Y - 2\hat\beta'X'Y + \hat\beta'X'X\hat\beta.$$

The first term Y′Y is the second moment of Y, which by the identity above equals (n − 1)s²_Y + nȲ²; substituting n = 20, s²_Y = 0.26, and Ȳ = 6.39 gives 19 × 0.26 + 20 × 6.39² ≈ 821.6. The second and third terms are computed in the same way from ß̂, X′Y, and X′X. Carrying out the arithmetic gives a sum of squared residuals of about 3.3, so with k = 2 and n = 20,

$$s^2_{\hat u}=\frac{3.3}{20-2-1}\approx 0.19.$$
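An equivalent shortcut (not the route taken in the original worked answer, but it leads to the same number): when the regression includes an intercept, the sum of squared residuals can be obtained directly from the centered sample moments,

$$\hat u'\hat u=(n-1)\left(s^2_Y-\hat\beta_1 s_{X_1Y}-\hat\beta_2 s_{X_2Y}\right),$$

and plugging in the summary statistics quoted above reproduces a value of roughly 3.3.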
The R² of the regression is

$$R^2=1-\frac{SSR}{TSS},$$

where SSR is the sum of squared residuals, about 3.3 from above, and TSS is the total sum of squares, which is a multiple of the sample variance of Y: TSS = (n − 1)s²_Y = 19 × 0.26 = 4.94. Hence R² ≈ 1 − 3.3/4.94 ≈ 0.33.

b) Given that the Gauss-Markov conditions are satisfied, the hypothesis ß1 = 0 can be tested with a t-test based on homoskedasticity-only standard errors. The homoskedasticity-only estimator of the covariance matrix of the coefficient estimates is

$$\widehat{\operatorname{var}}(\hat\beta)=s^2_{\hat u}\,(X'X)^{-1}.$$

The diagonal entries of this matrix are the variances of the coefficient estimates: the first corresponds to ß0, the second to ß1, and the third to ß2. The standard error of the estimate of ß1 is therefore the square root of the second diagonal entry. With the estimated coefficient of 0.25, the resulting t-statistic is about 2.19. Since 2.19 exceeds the 1.96 critical value at the 5% significance level, the null hypothesis ß1 = 0 is rejected; the coefficient is statistically significant.
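For readers who want to check the arithmetic, the following minimal sketch (added here; it is not part of the original answer) rebuilds X′X and X′Y from the summary statistics quoted above and reproduces the estimates, s²_û, R², and the homoskedasticity-only t-statistic. Because the figures above are rounded, the computed values may differ slightly, but the conclusions are the same.

    import numpy as np

    # Summary statistics quoted in the worked answer (n = 20 households)
    n = 20
    xbar1, xbar2, ybar = 7.24, 4.00, 6.39
    var_x1, var_x2, var_y = 0.80, 2.40, 0.26
    cov_x1x2, cov_x1y, cov_x2y = 0.28, 0.22, 0.32

    # sum(a*b) = (n - 1) * cov(a, b) + n * mean(a) * mean(b)
    def cross(cov, abar, bbar):
        return (n - 1) * cov + n * abar * bbar

    XtX = np.array([
        [n,         n * xbar1,                     n * xbar2],
        [n * xbar1, cross(var_x1, xbar1, xbar1),   cross(cov_x1x2, xbar1, xbar2)],
        [n * xbar2, cross(cov_x1x2, xbar1, xbar2), cross(var_x2, xbar2, xbar2)],
    ])
    XtY = np.array([n * ybar,
                    cross(cov_x1y, xbar1, ybar),
                    cross(cov_x2y, xbar2, ybar)])
    YtY = cross(var_y, ybar, ybar)

    beta = np.linalg.solve(XtX, XtY)                  # (b0, b1, b2)

    ssr = YtY - 2 * beta @ XtY + beta @ XtX @ beta    # sum of squared residuals
    k = 2                                             # number of regressors
    s2_u = ssr / (n - k - 1)                          # squared standard error of the regression
    r2 = 1 - ssr / ((n - 1) * var_y)

    # Homoskedasticity-only covariance matrix and t-test of beta1 = 0
    cov_beta = s2_u * np.linalg.inv(XtX)
    se_b1 = np.sqrt(cov_beta[1, 1])
    t_b1 = beta[1] / se_b1

    print("beta_hat:", beta)
    print("s2_u:", round(s2_u, 3), "R2:", round(r2, 3))
    print("SE(b1):", round(se_b1, 3), "t:", round(t_b1, 2),
          "reject at 5%:", abs(t_b1) > 1.96)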
2
Consider the regression model Y = Xß + U. Partition X as [ X₁ X₂ ] and ß as (ß₁′, ß₂′)′, where X₁ has k1 columns and X₂ has k2 columns. Suppose that X₂′Y = 0. Let R = [ I_{k1} 0 ].
a. Show that
b. Consider the regression described in Equation (12.17). Let W = [1 W₁ W₂ … W_r], where 1 is an n × 1 vector of ones, W₁ is the n × 1 vector with i-th element W_1i, and so forth. Let û^TSLS denote the vector of two stage least squares residuals.
i. Show that
ii. Show that the method for computing the J-statistic described in Key Concept 12.6 (using a homoskedasticity-only F-statistic) and the formula in Equation (18.63) produce the same value for the J-statistic. [Hint: Use the results in (a), (b, i), and Exercise 18.13.]
a) The regression is Y = Xß + U with X = [ X₁ X₂ ], where X₁ and X₂ are matrices with k1 and k2 columns respectively. Let R be defined as in the question, R = [ I_{k1} 0 ]; applying R picks out the X₁ block. Since X₂′Y is zero, only X₁ enters the relevant block of the product X′Y, and the stated expression for the estimated coefficient follows.
b) In the TSLS regression, X is replaced by its fitted value from the first stage, so X can be written as that fitted value plus a first-stage error V (V is the error from estimating X). The estimated error (the vector of TSLS residuals) can then be decomposed into the corresponding terms. The first term is zero because of the exogeneity of the instruments, and the other is zero by the conditional mean zero assumption.
As shown previously, the homoskedasticity-only F-statistic is computed from the difference between the restricted and unrestricted sums of squared residuals; hence the J-statistic can be calculated from it, and the two approaches produce the same value.
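For reference (background from Key Concept 12.6 and the standard F-statistic formula, not part of the original answer), the two quantities reconciled in part (b, ii) are the homoskedasticity-only F-statistic computed from the restricted and unrestricted sums of squared residuals and the J-statistic, which equals m times that F-statistic, where m is the number of instruments, q is the number of restrictions being tested (here q = m), and k_u is the number of regressors in the unrestricted regression:

$$F=\frac{(SSR_{\text{restricted}}-SSR_{\text{unrestricted}})/q}{SSR_{\text{unrestricted}}/(n-k_u-1)},\qquad J=mF.$$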
3
You are analyzing a linear regression model with 500 observations and one regressor. Explain how you would construct a confidence interval for ß 1 if:
a. Assumptions #1 through #4 in Key Concept 18.1 are true, but you think Assumption #5 or #6 might not be true.
b. Assumptions #1 through #5 are true, but you think Assumption #6 might not be true. (Give two ways to construct the confidence interval.)
c. Assumptions #1 through #6 are true.
a) These conditions include only the conditional mean zero, i.i.d. sampling, finite fourth moment, and no perfect multicollinearity (full column rank) assumptions, so the OLS estimator is unbiased and consistent. However, the variance of u may be heteroskedastic, so the regression should be run with heteroskedasticity-robust standard errors. The 95% confidence interval is then constructed as usual, ß̂1 ± 1.96 × SE(ß̂1), using the robust standard error.
b) These conditions include the conditional mean zero, i.i.d. sampling, finite fourth moment, full column rank, and homoskedasticity assumptions, so the OLS estimator is again unbiased and consistent. Although u may not be normally distributed, it is still valid to use homoskedasticity-only standard errors; an additional run with heteroskedasticity-robust standard errors should give similar values. The confidence interval can therefore be constructed as in part (a) using either heteroskedasticity-robust or homoskedasticity-only standard errors, which are the two ways requested.
c) If all of the assumptions from part (b), including the conditional normal distribution of u, hold, then it is enough to run the regression with homoskedasticity-only standard errors.
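As a concrete illustration (a minimal sketch on simulated data, added here and not part of the original answer), the two intervals described above can be computed side by side; the only difference is which variance formula feeds the standard error.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    u = rng.normal(size=n) * (1 + 0.5 * np.abs(x))    # mildly heteroskedastic errors
    y = 1.0 + 2.0 * x + u

    X = np.column_stack([np.ones(n), x])              # intercept and one regressor
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta

    # Homoskedasticity-only variance: s^2_u * (X'X)^{-1}
    s2_u = resid @ resid / (n - 2)
    var_homo = s2_u * XtX_inv

    # Heteroskedasticity-robust (sandwich) variance:
    # (X'X)^{-1} * (sum_i u_i^2 x_i x_i') * (X'X)^{-1}
    meat = X.T @ (X * resid[:, None] ** 2)
    var_robust = XtX_inv @ meat @ XtX_inv

    for label, V in [("homoskedasticity-only", var_homo), ("robust", var_robust)]:
        se = np.sqrt(V[1, 1])
        lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
        print(f"{label:>21}: beta1_hat = {beta[1]:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")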
4
(Consistency of clustered standard errors.) Consider the panel data model Y_it = ßX_it + α_i + u_it, where all variables are scalars. Assume that Assumptions #1, #2, and #4 in Key Concept 10.3 hold, and strengthen Assumption #3 so that X_it and u_it have eight nonzero finite moments. Let M = I_T − T⁻¹ιι′, where ι is a T × 1 vector of ones. Also let Y_i = (Y_i1 Y_i2 … Y_iT)′, X_i = (X_i1 X_i2 … X_iT)′, u_i = (u_i1 u_i2 … u_iT)′, Ỹ_i = MY_i, and X̃_i = MX_i. For the asymptotic calculations in this problem, suppose that T is fixed and n → ∞.
5
Let W be an m × 1 vector with covariance matrix Σ_W, where Σ_W is finite and positive definite. Let c be a nonrandom m × 1 vector, and let Q = c′W.
a. Show that var(Q) = c′Σ_W c.
b. Suppose that c ≠ 0_m. Show that 0 < var(Q).
6
This exercise takes up the problem of missing data discussed in Section 9.2. Consider the regression model Y_i = ßX_i + u_i, where all variables are scalars and the constant term/intercept is omitted for convenience.
a. Suppose that the least squares assumptions in Key Concept 4.3 are satisfied. Show that the least squares estimator of ß is unbiased and consistent.
b. Now suppose that some of the observations are missing. Let I_i denote a binary random variable that indicates the nonmissing observations; that is, I_i = 1 if observation i is not missing and I_i = 0 if observation i is missing. Assume that (I_i, X_i, u_i) are i.i.d.
i. Show that the OLS estimator can be written as ß̂ = (Σ_i I_i X_i Y_i) / (Σ_i I_i X_i²).
ii. Suppose that data are "missing completely at random" in the sense that Pr(I_i = 1 | X_i, u_i) = p, where p is a constant. Show that ß̂ is unbiased and consistent.
iii. Suppose that the probability that the i-th observation is missing depends on X_i but not on u_i; that is, Pr(I_i = 1 | X_i, u_i) = p(X_i). Show that ß̂ is unbiased and consistent.
iv. Suppose that the probability that the i-th observation is missing depends on both X_i and u_i; that is, Pr(I_i = 1 | X_i, u_i) = p(X_i, u_i). Is ß̂ unbiased? Is ß̂ consistent? Explain.
c. Suppose that ß = 1 and that X_i and u_i are mutually independent standard normal random variables [so that both X_i and u_i are distributed N(0, 1)]. Suppose that I_i = 1 when Y_i ≥ 0 but I_i = 0 when Y_i < 0. Is ß̂ unbiased? Is ß̂ consistent? Explain.
Question 7
Suppose that Assumptions #1 through #5 in Key Concept 18.1 are true, but that Assumption #6 is not. Does the result in Equation (18.31) hold? Explain.
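For intuition only (this is not the requested derivation), the sketch below contrasts normal and non-normal errors in a very small sample, on the reading that Equation (18.31) states the exact normal sampling distribution of the OLS estimator conditional on the regressors when Assumption #6 (normally distributed errors) holds. The regressor values, sample size, and error distributions are illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 50_000
x = rng.uniform(0.5, 1.5, n)            # a fixed set of regressor values

def slope(u):
    # OLS slope in the no-intercept model Y = beta*X + u, simulated with beta = 0
    return np.sum(x * u) / np.sum(x * x)

b_normal = np.array([slope(rng.standard_normal(n)) for _ in range(reps)])
b_skewed = np.array([slope(rng.exponential(1.0, n) - 1.0) for _ in range(reps)])

def skewness(z):
    z = z - z.mean()
    return np.mean(z**3) / np.mean(z**2) ** 1.5

print("skewness, normal errors:     ", skewness(b_normal))   # close to 0
print("skewness, exponential errors:", skewness(b_skewed))   # clearly positive at n = 5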
Question 8
Consider the regression model in matrix form Y = Xβ + Wγ + U, where X and W are matrices of regressors and β and γ are vectors of unknown regression coefficients. Let X̃ = M_W X and Ỹ = M_W Y, where M_W = I_n − W(W′W)⁻¹W′.
a. Show that the OLS estimators of β and γ can be written as [expressions not shown].
b. Show that [expression not shown] = [expression not shown].
c. Show that β̂ = (X̃′X̃)⁻¹X̃′Ỹ.
d. The Frisch–Waugh theorem (Appendix 6.2) says that [statement not shown]. Use the result in (c) to prove the Frisch–Waugh theorem.
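The numerical sketch below (Python/NumPy on simulated data; the coefficient values and dimensions are arbitrary) checks the partialling-out identity this exercise builds toward: the coefficients on X from the regression of Y on [X W] coincide with the coefficients from regressing M_W Y on M_W X.

import numpy as np

rng = np.random.default_rng(2)
n, k1, k2 = 200, 2, 3
X = rng.standard_normal((n, k1))
W = rng.standard_normal((n, k2))
Y = X @ np.array([1.0, -2.0]) + W @ np.array([0.5, 0.0, 1.5]) + rng.standard_normal(n)

# Full regression of Y on [X W]; keep only the coefficients on X
Z = np.hstack([X, W])
beta_full = np.linalg.solve(Z.T @ Z, Z.T @ Y)[:k1]

# Partialled-out route: residualize X and Y on W with M_W = I - W(W'W)^(-1)W', then regress
M_W = np.eye(n) - W @ np.linalg.solve(W.T @ W, W.T)
X_tilde, Y_tilde = M_W @ X, M_W @ Y
beta_partial = np.linalg.solve(X_tilde.T @ X_tilde, X_tilde.T @ Y_tilde)

print(np.allclose(beta_full, beta_partial))   # True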
Question 9
Consider the regression model from Chapter 4, Y_i = β₀ + β₁X_i + u_i, i = 1, …, n, and assume that the assumptions in Key Concept 4.3 hold.
a. Write the model in the matrix form given in Equations (18.2) and (18.4).
b. Show that Assumptions #1 through #4 in Key Concept 18.1 are satisfied.
c. Use the general formula for β̂ in Equation (18.11) to derive the expressions for β̂₀ and β̂₁ given in Key Concept 4.2.
d. Show that the (1,1) element of the covariance matrix in Equation (18.13) is equal to the corresponding variance expression given in Key Concept 4.4.
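As a numerical companion to part (c), the sketch below (Python/NumPy with simulated data; the intercept 3 and slope 2 are arbitrary) shows that the matrix formula β̂ = (X′X)⁻¹X′Y reproduces the familiar single-regressor OLS formulas.

import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.standard_normal(n)
y = 3.0 + 2.0 * x + rng.standard_normal(n)

# Matrix form: first column of ones (intercept), second column the regressor
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The familiar formulas: slope = sample covariance / sample variance, intercept from the means
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

print(np.allclose(beta_hat, [b0, b1]))   # True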
Question 10
Can you compute the BLUE estimator of β if Equation (18.41) holds and you do not know the error covariance matrix? What if you do know it?
Question 11
Let P_X and M_X be as defined in Equations (18.24) and (18.25).
a. Prove that P_X M_X = 0_{n×n} and that P_X and M_X are idempotent.
b. Derive Equations (18.27) and (18.28).
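A quick numerical check (Python/NumPy with a random regressor matrix), on the standard reading that Equations (18.24) and (18.25) define P_X = X(X′X)⁻¹X′ and M_X = I_n − P_X:

import numpy as np

rng = np.random.default_rng(4)
n, k = 50, 4
X = rng.standard_normal((n, k))

P = X @ np.linalg.solve(X.T @ X, X.T)    # P_X = X (X'X)^(-1) X'
M = np.eye(n) - P                        # M_X = I_n - P_X

print(np.allclose(P @ M, 0.0))                           # P_X M_X = 0
print(np.allclose(P @ P, P), np.allclose(M @ M, M))      # both are idempotent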
Question 12
Construct an example of a regression model that satisfies the assumption E(u_i | X_i) = 0 but for which E(U | X) ≠ 0_n.
Question 13
Consider the regression model in matrix form, Y = Xβ + Wγ + U, where X is an n × k₁ matrix of regressors and W is an n × k₂ matrix of regressors. Then, as shown in Exercise 18.17, the OLS estimator β̂ can be expressed as β̂ = (X̃′X̃)⁻¹X̃′Ỹ, where X̃ = M_W X and Ỹ = M_W Y.
Now consider two fixed effects estimators: the "binary variable" estimator, computed by estimating Equation (10.11) by OLS, and the "de-meaning" estimator, computed by estimating Equation (10.14) by OLS, in which the entity-specific sample means have been subtracted from X and Y. Use the expression for β̂ given above to prove that the two estimators are identical. [Hint: Write Equation (10.11) using a full set of fixed effects, D1_i, D2_i, …, Dn_i, and no constant term. Include all of the fixed effects in W. Write out the matrix M_W X.]
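The following sketch (Python/NumPy, a simulated balanced panel with 20 entities and 5 periods; the slope 1.5 and the data-generating process are illustrative) verifies the equivalence numerically: regressing on a full set of entity dummies with no constant gives the same slope as regressing entity-demeaned Y on entity-demeaned X.

import numpy as np

rng = np.random.default_rng(5)
n_entities, T = 20, 5
entity = np.repeat(np.arange(n_entities), T)
alpha = rng.standard_normal(n_entities)                 # entity fixed effects
x = rng.standard_normal(n_entities * T) + alpha[entity]
y = 1.5 * x + alpha[entity] + rng.standard_normal(n_entities * T)

# "Binary variable" estimator: y on x plus a full set of entity dummies, no constant term
D = (entity[:, None] == np.arange(n_entities)[None, :]).astype(float)
Z = np.column_stack([x, D])
beta_dummies = np.linalg.solve(Z.T @ Z, Z.T @ y)[0]

# "De-meaning" estimator: subtract entity-specific means from x and y, then regress
x_bar = np.array([x[entity == j].mean() for j in range(n_entities)])
y_bar = np.array([y[entity == j].mean() for j in range(n_entities)])
x_dm, y_dm = x - x_bar[entity], y - y_bar[entity]
beta_demeaned = np.sum(x_dm * y_dm) / np.sum(x_dm ** 2)

print(np.isclose(beta_dummies, beta_demeaned))   # True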
Question 14
Consider the regression model Y_i = βX_i + γW_i + u_i, where for simplicity the intercept is omitted and all variables are assumed to have a mean of zero. Suppose that X_i is distributed independently of (W_i, u_i), but W_i and u_i might be correlated, and let β̂ and γ̂ be the OLS estimators for this model. Show that:
a. Whether or not W_i and u_i are correlated, β̂ →p β.
b. If W_i and u_i are correlated, then γ̂ is inconsistent.
c. Let β̂_r be the OLS estimator from the regression of Y on X alone (the restricted regression that excludes W). Provide conditions under which β̂ has a smaller asymptotic variance than β̂_r, allowing for the possibility that W_i and u_i are correlated.
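A large-sample simulation sketch of parts (a) and (b) (Python/NumPy; the coefficients 1.0 and 0.5 and the correlation structure are illustrative): with X independent of (W, u), the OLS coefficient on X settles near its true value even though W is correlated with the error, while the coefficient on W does not.

import numpy as np

rng = np.random.default_rng(6)
n = 200_000
x = rng.standard_normal(n)                 # X independent of (W, u)
e = rng.standard_normal(n)
w = 0.8 * e + rng.standard_normal(n)       # W correlated with the error term
u = e
y = 1.0 * x + 0.5 * w + u                  # true beta = 1.0, true gamma = 0.5

Z = np.column_stack([x, w])
b = np.linalg.solve(Z.T @ Z, Z.T @ y)
print("beta-hat: ", b[0])   # close to 1.0 despite the endogenous W
print("gamma-hat:", b[1])   # pulled away from 0.5 because Corr(W, u) != 0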
Question 15
Consider the regression model and error process given in the exercise [equations not shown], in which the error u_i is built from innovations that are i.i.d. with mean 0 and variance 1 and are distributed independently of X_j for all i and j.
a. Derive an expression for the error covariance matrix.
b. Explain how to estimate the model by GLS without explicitly inverting that covariance matrix. (Hint: Transform the model so that the regression errors are the i.i.d. innovations.)
Question 16
This exercise shows that the OLS estimator of a subset of the regression coefficients is consistent under the conditional mean independence assumption stated in Appendix 7.2. Consider the multiple regression model in matrix form Y = Xβ + Wγ + U, where X and W are, respectively, n × k₁ and n × k₂ matrices of regressors. Let X_i′ and W_i′ denote the i-th rows of X and W [as in Equation (18.3)]. Assume that (i) E(u_i | X_i, W_i) = W_i′δ, where δ is a k₂ × 1 vector of unknown parameters; (ii) (X_i, W_i, Y_i) are i.i.d.; (iii) (X_i, W_i, u_i) have nonzero finite fourth moments; and (iv) there is no perfect multicollinearity. These are Assumptions #1 through #4 of Key Concept 18.1, with the conditional mean independence assumption (i) replacing the usual conditional mean zero assumption.
a. Use the expression for β̂ given in Exercise 18.6 to write β̂ − β = [expression not shown].
b. Show that [expression not shown], where [definitions not shown], and so forth. [The matrix A_n →p A if A_n,ij →p A_ij for all i, j, where A_n,ij and A_ij are the (i, j) elements of A_n and A.]
c. Show that assumptions (i) and (ii) imply that [expression not shown].
d. Use (c) and the law of iterated expectations to show that [expression not shown].
e. Use (a) through (d) to conclude that, under conditions (i) through (iv), β̂ is consistent for β.
Question 17
Let C be a symmetric idempotent matrix.
a. Show that the eigenvalues of C are either 0 or 1. (Hint: Note that Cq = λq implies 0 = Cq − λq = CCq − λq = λ²q − λq; solve for λ.)
b. Show that trace( C ) = rank( C ).
c. Let d be an n × 1 vector. Show that d′Cd ≥ 0.
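A numerical illustration (Python/NumPy) using the projection matrix onto the column space of a random X, which is symmetric and idempotent, so each claim can be checked directly:

import numpy as np

rng = np.random.default_rng(7)
n, k = 30, 5
X = rng.standard_normal((n, k))
C = X @ np.linalg.solve(X.T @ X, X.T)     # symmetric idempotent (projection onto col(X))

eigvals = np.linalg.eigvalsh(C)
print(np.all(np.isclose(eigvals, 0.0, atol=1e-8) | np.isclose(eigvals, 1.0, atol=1e-8)))  # part (a)
print(np.isclose(np.trace(C), np.linalg.matrix_rank(C)))                                  # part (b)
d = rng.standard_normal(n)
print(d @ C @ d >= 0)                                                                     # part (c)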
Question 18
Suppose that C is an n × n symmetric idempotent matrix with rank r, and let V ~ N(0_n, I_n).
a. Show that C = AA′, where A is n × r with A′A = I_r. (Hint: C is positive semidefinite and can be written as QΛQ′, as explained in Appendix 18.1.)
b. Show that A′V ~ N(0_r, I_r).
c. Show that V′CV ~ χ²_r.
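A Monte Carlo sketch of part (c) (Python/NumPy; the dimensions and number of replications are arbitrary): the quadratic form V′CV should behave like a chi-squared variable with r degrees of freedom, whose mean is r and variance is 2r.

import numpy as np

rng = np.random.default_rng(8)
n, r, reps = 20, 6, 50_000
X = rng.standard_normal((n, r))
C = X @ np.linalg.solve(X.T @ X, X.T)     # symmetric idempotent matrix with rank r

V = rng.standard_normal((reps, n))        # draws of V ~ N(0_n, I_n)
q = np.einsum('ij,jk,ik->i', V, C, V)     # quadratic forms V'CV, one per draw

print(q.mean(), q.var())                  # approximately r = 6 and 2r = 12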
Question 19
Consider the population regression of test scores against income and the square of income in Equation (8.1).
a. Write the regression in Equation (8.1) in the matrix form of Equation (18.5). Define Y, X, U, and β.
b. Explain how to test the null hypothesis that the relationship between test scores and income is linear against the alternative that it is quadratic. Write the null hypothesis in the form of Equation (18.20). What are R, r, and q?
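A sketch of part (b) on simulated data (Python/NumPy; the data-generating coefficients merely stand in for the test-score data, and Equation (18.20) is read here as the restriction form Rβ = r): the null of linearity is R = [0 0 1], r = 0, and q = 1, and the homoskedasticity-only F-statistic follows directly.

import numpy as np

rng = np.random.default_rng(9)
n = 400
income = rng.uniform(5.0, 55.0, n)
testscr = 600.0 + 2.0 * income - 0.02 * income**2 + 10.0 * rng.standard_normal(n)  # illustrative DGP

X = np.column_stack([np.ones(n), income, income**2])   # regressors: 1, Income, Income^2
beta_hat = np.linalg.solve(X.T @ X, X.T @ testscr)

# H0 (linear relationship): the coefficient on Income^2 is zero, i.e. R beta = r
R = np.array([[0.0, 0.0, 1.0]])   # 1 x 3
r = np.array([0.0])
q = 1                             # one restriction

resid = testscr - X @ beta_hat
s2 = resid @ resid / (n - X.shape[1])
V_hat = s2 * np.linalg.inv(X.T @ X)        # homoskedasticity-only variance estimate
diff = R @ beta_hat - r
F = diff @ np.linalg.solve(R @ V_hat @ R.T, diff) / q
print("Homoskedasticity-only F for H0 (linear):", F)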
Question 20
a. Show that the estimator defined in Equation (18.66) is the efficient GMM estimator; that is, show that Equation (18.66) is the solution to Equation (18.65).
b. Show that [expression not shown].
c. Show that [expression not shown].
Question 21
A researcher studying the relationship between earnings and gender for a group of workers specifies the regression model Y_i = β₀ + X₁i β₁ + X₂i β₂ + u_i, where X₁i is a binary variable that equals 1 if the i-th person is a female and X₂i is a binary variable that equals 1 if the i-th person is a male. Write the model in the matrix form of Equation (18.2) for a hypothetical set of n = 5 observations. Show that the columns of X are linearly dependent, so that X does not have full rank. Explain how you would respecify the model to eliminate the perfect multicollinearity.
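The rank deficiency can be seen directly in a small numerical example (Python/NumPy; the particular assignment of females and males across the five hypothetical workers is arbitrary): the intercept column equals the sum of the two gender dummies, so X has rank 2 rather than 3.

import numpy as np

# Hypothetical n = 5 workers: persons 1 and 3 female, persons 2, 4, 5 male
female = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
male = 1.0 - female
X = np.column_stack([np.ones(5), female, male])   # columns: intercept, X1, X2

print(X)
print("rank of X:", np.linalg.matrix_rank(X))              # 2, not 3
print("ones - (X1 + X2):", X[:, 0] - X[:, 1] - X[:, 2])    # all zeros: exact linear dependence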
Question 22
Consider the problem of minimizing the sum of squared residuals subject to the constraint that Rb = r, where R is q × (k + 1) with rank q. Let β̃ denote the value of b that solves the constrained minimization problem.
a. Show that the Lagrangian for the minimization problem is L(b, λ) = (Y − Xb)′(Y − Xb) + λ′(Rb − r), where λ is a q × 1 vector of Lagrange multipliers.
b. Show that β̃ = [expression not shown].
c. Show that [expressions not shown].
d. Show that the F-statistic in Equation (18.36) is equivalent to the homoskedasticity-only F-statistic in Equation (7.13).
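A numerical check of the algebra behind parts (b) through (d) (Python/NumPy, simulated data; the restriction chosen here, that the last two coefficients are zero, is arbitrary): the restricted estimator computed from the constrained first-order conditions yields a restricted sum of squared residuals, and the Wald-form homoskedasticity-only F-statistic agrees with the SSR-form F-statistic of Equation (7.13).

import numpy as np

rng = np.random.default_rng(10)
n, k = 120, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
Y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.standard_normal(n)

# Restrictions R b = r: the last two coefficients are zero (q = 2)
R = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
r = np.zeros(2)
q = 2

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y

# Restricted estimator from the constrained first-order conditions
A = R @ XtX_inv @ R.T
beta_tilde = beta_hat - XtX_inv @ R.T @ np.linalg.solve(A, R @ beta_hat - r)

ssr_u = np.sum((Y - X @ beta_hat) ** 2)     # unrestricted SSR
ssr_r = np.sum((Y - X @ beta_tilde) ** 2)   # restricted SSR

s2 = ssr_u / (n - k - 1)
F_wald = (R @ beta_hat - r) @ np.linalg.solve(s2 * A, R @ beta_hat - r) / q
F_ssr = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k - 1))
print(np.isclose(F_wald, F_ssr))   # True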