Table 1 Linear regression with Y ~ N(2X + X², φ)

From: Multiple imputation of missing covariates with non-linear effects and interactions: an evaluation of statistical methods

 

| Method | bias (R²=0.1) | cover (R²=0.1) | r.prec. (R²=0.1) | bias (R²=0.5) | cover (R²=0.5) | r.prec. (R²=0.5) | bias (R²=0.8) | cover (R²=0.8) | r.prec. (R²=0.8) |
|---|---|---|---|---|---|---|---|---|---|
| **MCAR, X ~ normal** |  |  |  |  |  |  |  |  |  |
| CData | -3 | 95 | 100 | -1 | 95 | 100 | 0 | 95 | 100 |
| CCase | -2 | 95 | 64 | -1 | 95 | 64 | 0 | 95 | 64 |
| Passive | -32 | 99 | 124 | -21 | 95 | 104 | -20 | 87 | 86 |
| PMM | -3 | 92 | 59 | 0 | 93 | 65 | 2 | 92 | 64 |
| JAV | -4 | 94 | 61 | -1 | 95 | 61 | 0 | 95 | 62 |
| **MAR, X ~ normal** |  |  |  |  |  |  |  |  |  |
| CData | -6 | 95 | 100 | -1 | 96 | 100 | -2 | 95 | 100 |
| CCase | -23 | 95 | 72 | -13 | 95 | 59 | -8 | 94 | 48 |
| Passive | -45 | 99 | 144 | -27 | 95 | 120 | -42 | 50 | 122 |
| PMM | -36 | 89 | 50 | -13 | 93 | 49 | 8 | 91 | 36 |
| JAV | -12 | 94 | 52 | -1 | 95 | 42 | 0 | 93 | 38 |
| **MAR, X ~ log normal** |  |  |  |  |  |  |  |  |  |
| CData | -6 | 96 | 100 | 0 | 95 | 100 | -1 | 95 | 100 |
| CCase | -21 | 94 | 42 | -19 | 94 | 24 | -7 | 94 | 20 |
| Passive | -72 | 98 | 70 | 24 | 93 | 21 | -3 | 88 | 31 |
| PMM | -46 | 88 | 29 | -19 | 90 | 15 | 47 | 86 | 6 |
| JAV | -7 | 92 | 28 | 7 | 91 | 12 | 18 | 91 | 10 |

Table 1: Percentage bias, coverage and relative precision for the quadratic term in linear regression when Y ~ N(2X + X², φ). The true value of the quadratic term is 1. For MCAR, X ~ normal, the maximum Monte Carlo standard errors (MCSEs) among the five methods are 4%, 1% and 1% for R² = 0.1, 0.5 and 0.8, respectively. For MAR, X ~ normal, they are 5%, 2% and 1%. For MAR, X ~ log normal, they are 7%, 4% and 3%.
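For orientation, the metrics reported above follow the usual simulation-study conventions; the formulas below are a hedged sketch of standard definitions rather than the authors' exact ones (in particular, relative precision is assumed here to be an inverse-variance ratio taken relative to the complete-data analysis, which is fixed at 100). In this sketch, β̂ₛ denotes the estimated quadratic coefficient in simulation replicate s, β = 1 the true value, and S the number of simulated data sets.

```latex
% Hedged sketch: standard simulation-study metric definitions
% (assumed conventions, not quoted verbatim from the paper).
\[
\text{\% bias} \;=\; \frac{100}{\beta}\left(\frac{1}{S}\sum_{s=1}^{S}\hat{\beta}_s - \beta\right),
\qquad
\text{coverage} \;=\; \frac{100}{S}\sum_{s=1}^{S}\mathbf{1}\!\left\{\beta \in \mathrm{CI}_{95\%,s}\right\},
\]
\[
\text{r.prec.} \;\approx\; 100 \times
\frac{\widehat{\operatorname{Var}}\!\left(\hat{\beta}^{\text{CData}}\right)}
     {\widehat{\operatorname{Var}}\!\left(\hat{\beta}^{\text{method}}\right)}
\quad \text{(precision relative to the complete-data analysis, CData} = 100\text{)}.
\]
```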