
Missing data approaches for probability regression models with missing outcomes with applications

Abstract

In this paper, we investigate several well known approaches for missing data and their relationships for the parametric probability regression model P_β(Y|X) when the outcome of interest Y is subject to missingness. We explore the relationships between the mean score method, the inverse probability weighting (IPW) method and the augmented inverse probability weighted (AIPW) method, with some interesting findings. The asymptotic distributions of the IPW and AIPW estimators are derived and their efficiencies are compared. Our analysis details how efficiency may be gained by the AIPW estimator over the IPW estimator through estimation of the validation probability and through augmentation. We show that the AIPW estimator based on augmentation using the full set of observed variables is more efficient than the AIPW estimator based on augmentation using a subset of the observed variables. The developed approaches are applied to the Poisson regression model with missing outcomes, based on auxiliary outcomes and a validated sample for true outcomes. We show that, by stratifying on a set of discrete variables, the proposed statistical procedure can be formulated to analyze automated records that contain only summarized information at categorical levels. The proposed methods are applied to analyze influenza vaccine efficacy for an influenza vaccine study conducted in Temple-Belton, Texas during the 2000-2001 influenza season.

Mathematics Subject Classification: Primary 62J02; Secondary 62F12

Introduction

Suppose that Y is the outcome of interest and X is a covariate vector. One is often interested in the probability regression model P_β(Y|X) that relates Y to X. In many medical and epidemiological studies, complete observations on Y may not be available for all study subjects because of time, cost, or ethical concerns. In some situations, an easily measured but less accurate outcome, named the auxiliary outcome variable A, is supplemented. The relationship between the true outcome Y and the auxiliary outcome A in the available observations can inform about the missing values of Y. Let V be a subsample of the study subjects, termed the validation sample, for which both true and auxiliary outcomes are available. Thus observations on (X,Y,A) are available for the subjects in V and only (X,A) are observed for those not in V.

It is well known that the complete-case analysis, which uses only subjects who have all variables observed, can be biased and inefficient, cf. Little and Rubin ([2002]). Similar issues also arise when substituting the auxiliary outcome for the true outcome; see Ellenberg and Hamilton ([1989]), Prentice ([1989]) and Fleming ([1992]). Inverse probability weighting (IPW) is a statistical technique developed for surveys by Horvitz and Thompson ([1952]) to calculate statistics standardized to a population different from that in which the data were collected. This approach has been generalized to many areas of statistics under various frameworks. In particular, the IPW approach is used to account for missing data by inflating the weight for subjects who are underrepresented due to missingness. The method can potentially reduce the bias of the complete-case estimator when the weighting is correctly specified. However, this approach has been shown to be inefficient in several situations; see Clayton et al. ([1998]) and Scharfstein et al. ([1999]). Robins et al. ([1994]) developed an improved augmented inverse probability weighted (AIPW) complete-case estimation procedure. The method is more efficient and possesses a double robustness property. Multiple imputation, described in Rubin ([1987]), has been routinely used to handle missing data. Carpenter et al. ([2006]) compared multiple imputation with IPW and AIPW, and found AIPW to be an attractive alternative in terms of double robustness and efficiency. Using maximum likelihood estimation (MLE) coupled with the EM-algorithm ([Dempster et al. 1977]), Pepe et al. ([1994]) proposed the mean score method for the regression model P_β(Y|X) when both X and A are discrete.

In this paper, we investigate several well known approaches for missing data and their relationships for the parametric probability regression model P_β(Y|X) when the outcome of interest Y is subject to missingness. We explore the relationships between the mean score method, IPW and AIPW, with some interesting findings. Our analysis details how efficiency is gained by the AIPW estimator over the IPW estimator through estimation of the validation probability and through augmentation of the IPW score function. Applying the developed missing data methods, we derive the estimation procedures for the Poisson regression model with missing outcomes based on auxiliary outcomes and a validated sample for true outcomes. Further, we show that the proposed statistical procedures can be formulated to analyze automated records that contain only aggregated information at categorical levels, without using observations at individual levels.

The rest of the paper is organized as follows. Section 2 introduces several missing data approaches for the probability regression model P_β(Y|X), where the outcome Y may be missing. Section 3 explores the relationships among these estimators; the asymptotic distributions of the IPW and AIPW estimators are derived and their efficiencies are compared. Section 3 also investigates the efficiency of two AIPW estimators, one based on augmentation using a subset of the observed variables and the other based on augmentation using the full set of observed variables. The procedures for Poisson regression using automated data with missing outcomes are derived in Section 4. The finite-sample performance of the estimators is studied in simulations in Section 5, and the proposed method is applied to analyze influenza vaccine efficacy for an influenza vaccine study conducted in Temple-Belton, Texas during the 2000-2001 influenza season. The proofs of the main results are given in Appendix A, while the proof of a simplified variance formula in Section 4 is given in Appendix B.

Missing data approaches

Consider the probability regression model P_β(Y|X), where Y is the outcome of interest and X is a covariate vector. Let A be the auxiliary outcome for Y and V the validation set, such that observations on (X,Y,A) are available for the subjects in V and only (X,A) are observed for those in V̄, the complement of V. In practice, the validation sample may be selected based on the characteristics of a subset, Z, of the covariates in X. We write X = (Z, Z^c). For example, Z may include the exposure indicator and other discrete covariates, and Z^c may be the exposure time. Let (Z_i, X_i, Y_i, A_i), i = 1, …, n, be independent identically distributed (iid) copies of (Z, X, Y, A). Let ξ_i = I(i ∈ V) be the selection indicator.

Most statistical methods for missing data require some assumptions on the missingness mechanism. The commonly used ones are missing completely at random (MCAR) and missing at random (MAR). MCAR assumes that the probability of missingness in a variable is independent of any characteristics of the subjects. MAR assumes that the probability that a variable is missing depends only on observed variables. In practice, if missingness is a result of the study design, it is often convenient to let the missing probability depend only on categorical variables; modeling the missing probability through categorical variables also simplifies the statistical inference. We introduce the following missing at random assumptions.

MAR I: ξ_i is independent of Y_i conditional on (X_i, A_i), and ξ_i is independent of Z_i^c conditional on (Z_i, A_i).

MAR II: ξ_i is independent of (Y_i, Z_i^c) conditional on (Z_i, A_i).

Since the conditional density f(y, z^c | ξ, z, a) = f(z^c | ξ, z, a) f(y | z^c, ξ, z, a) = f(z^c | z, a) f(y | z^c, z, a) = f(y, z^c | z, a), MAR I implies MAR II. It is also easy to show that MAR II implies MAR.

Let π̂_i be the estimator of the conditional probability π_i = P(ξ_i = 1 | X_i, A_i), and π̂_i^z the estimator of π_i^z = P(ξ_i = 1 | Z_i, A_i). Let S_β(Y|X) denote the vector of partial derivatives of log P_β(Y|X) with respect to β. Let Ê{S_β(Y|X_i) | X_i, A_i} be the estimator of the conditional expectation E{S_β(Y|X_i) | X_i, A_i}, and Ê{S_β(Y|X_i) | Z_i, A_i} the estimator of E{S_β(Y|X_i) | Z_i, A_i}. We investigate several estimators of β based on the following estimating equations with different choices of W_i:

$$\sum_{i=1}^{n} W_i = 0, \qquad (1)$$

where W_i takes one of the following forms:

$$W_i^{I1} = \frac{\xi_i}{\hat\pi_i^{z}}\, S_\beta(Y_i|X_i), \qquad (2)$$
$$W_i^{E1} = \xi_i\, S_\beta(Y_i|X_i) + (1-\xi_i)\,\hat E\{S_\beta(Y|X_i)\mid Z_i, A_i\}, \qquad (3)$$
$$W_i^{A1} = \frac{\xi_i}{\hat\pi_i^{z}}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i^{z}}\Big)\hat E\{S_\beta(Y|X_i)\mid Z_i, A_i\}, \qquad (4)$$
$$W_i^{I2} = \frac{\xi_i}{\hat\pi_i}\, S_\beta(Y_i|X_i), \qquad (5)$$
$$W_i^{E2} = \xi_i\, S_\beta(Y_i|X_i) + (1-\xi_i)\,\hat E\{S_\beta(Y|X_i)\mid X_i, A_i\}, \qquad (6)$$
$$W_i^{A2} = \frac{\xi_i}{\hat\pi_i^{z}}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i^{z}}\Big)\hat E\{S_\beta(Y|X_i)\mid X_i, A_i\}, \qquad (7)$$
$$W_i^{A3} = \frac{\xi_i}{\hat\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i}\Big)\hat E\{S_\beta(Y|X_i)\mid X_i, A_i\}. \qquad (8)$$

The estimator β̂_I1 obtained by using W_i^{I1} is an IPW estimator in which a subject's validation probability π_i^z depends only on the category defined by (Z_i, A_i). Because E{(π_i^z)^{-1} ξ_i S_β(Y_i|X_i)} = E{S_β(Y_i|X_i)} = 0, the estimator β̂_I1 is approximately unbiased. The estimator β̂_I2 obtained by using W_i^{I2} is also an IPW estimator, but with the validation probability π_i depending on the category defined by (Z_i, A_i) and the additional covariate Z_i^c.

The estimator β̂_E1 obtained by using W_i^{E1} is the mean score estimator in which the scores S_β(Y_i|X_i) for those with missing outcomes are replaced by the estimated conditional expectations given (Z_i, A_i). The estimator β̂_E2 obtained by using W_i^{E2} is the mean score estimator in which the scores S_β(Y_i|X_i) for those with missing outcomes are replaced by the estimated conditional expectations given (X_i, A_i); β̂_E2 is the mean score estimator of Pepe et al. ([1994]). The mean score estimator coincides with the MLE computed with the EM-algorithm ([Dempster et al. 1977]) under the assumption that the auxiliary outcome is noninformative, in the sense that the probability model P_θ(A|Y,X) is unrelated to β.

The estimator β̂_A1 obtained using W_i^{A1} is the AIPW estimator augmented with the estimated conditional expectation Ê{S_β(Y|X_i)|Z_i,A_i}. The estimator β̂_A2 obtained using W_i^{A2} is the AIPW estimator augmented with the estimated conditional expectation Ê{S_β(Y|X_i)|X_i,A_i}. The estimator β̂_A3 is obtained using W_i^{A3}, which differs from W_i^{A2} in that the estimated validation probability is π̂_i instead of π̂_i^z.

Suppose that π̂_i^z is an asymptotically unbiased estimator of π̄_i^z and that Ê{S_β(Y|X_i)|Z_i,A_i} is an asymptotically unbiased estimator of Ē{S_β(Y|X_i)|Z_i,A_i}, where both π̄_i^z and Ē{S_β(Y|X_i)|Z_i,A_i} are functions of (Z_i, A_i). Under MAR II, if at least one of the equalities π̄_i^z = π_i^z and Ē{S_β(Y|X_i)|Z_i,A_i} = E{S_β(Y|X_i)|Z_i,A_i} holds, then

$$E\Big\{\frac{\xi_i}{\bar\pi_i^{z}}\, S_\beta(Y_i|X_i)\Big\} + E\Big\{\Big(1-\frac{\xi_i}{\bar\pi_i^{z}}\Big)\bar E\big(S_\beta(Y|X_i)\mid Z_i,A_i\big)\Big\} = E\{S_\beta(Y_i|X_i)\} = 0,$$

which entails that the estimator β̂_A1 has the double robustness property, in the sense that it is a consistent estimator of β if either π̂_i^z is a consistent estimator of π_i^z or Ê{S_β(Y|X_i)|Z_i,A_i} is a consistent estimator of E{S_β(Y|X_i)|Z_i,A_i}. Similarly, under MAR I, the estimator β̂_A2 possesses the double robustness property, in that β̂_A2 is a consistent estimator of β if either π̂_i^z is a consistent estimator of π_i^z or Ê{S_β(Y|X_i)|X_i,A_i} is a consistent estimator of E{S_β(Y|X_i)|X_i,A_i}. The estimator β̂_A3 has a similar double robustness property to β̂_A2.
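To make the weight functions concrete, the following minimal sketch (ours, not the paper's code) writes (2)-(8) in Python/numpy for a generic score; a logistic model for P_β(Y|X) is used purely for illustration, and pi_z_hat, pi_hat, E_score_za and E_score_xa are placeholder names for the estimated validation probabilities and estimated conditional expectations of the score, however they are obtained.

```python
# Illustrative sketch of the weight functions (2)-(8); assumed inputs:
#   xi        : 0/1 validation indicators (length n)
#   pi_z_hat  : estimates of P(xi=1 | Z, A);  pi_hat : estimates of P(xi=1 | X, A)
#   s         : n x p array of scores S_beta(Y_i|X_i); rows with xi == 0 are never
#               used because they are multiplied by a zero weight
#   E_score_za, E_score_xa : n x p arrays of estimated E{S_beta(Y|X_i) | Z_i,A_i}
#               and E{S_beta(Y|X_i) | X_i,A_i}
import numpy as np

def score_logistic(beta, y, x):
    """S_beta(y|x) = x * (y - expit(x'beta)) for an illustrative logistic model."""
    p = 1.0 / (1.0 + np.exp(-x @ beta))
    return x * (y - p)[:, None]

def W_I1(xi, pi_z_hat, s):                    # eq. (2): IPW with pi_i^z
    return (xi / pi_z_hat)[:, None] * s

def W_E1(xi, s, E_score_za):                  # eq. (3): mean score given (Z, A)
    return xi[:, None] * s + (1 - xi)[:, None] * E_score_za

def W_A1(xi, pi_z_hat, s, E_score_za):        # eq. (4): AIPW with pi^z and E{S|Z,A}
    w = (xi / pi_z_hat)[:, None]
    return w * s + (1 - w) * E_score_za

def W_I2(xi, pi_hat, s):                      # eq. (5): IPW with pi_i
    return (xi / pi_hat)[:, None] * s

def W_E2(xi, s, E_score_xa):                  # eq. (6): mean score given (X, A)
    return xi[:, None] * s + (1 - xi)[:, None] * E_score_xa

def W_A2(xi, pi_z_hat, s, E_score_xa):        # eq. (7): AIPW with pi^z and E{S|X,A}
    w = (xi / pi_z_hat)[:, None]
    return w * s + (1 - w) * E_score_xa

def W_A3(xi, pi_hat, s, E_score_xa):          # eq. (8): AIPW with pi and E{S|X,A}
    w = (xi / pi_hat)[:, None]
    return w * s + (1 - w) * E_score_xa
```

An estimator of β is then any root of the column sums of the chosen W matrix, as in equation (1).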

Method comparisons and asymptotic results

Let V(X_i, A_i) denote the set of subjects in V with values of (X, A) equal to (X_i, A_i), n_V(X_i, A_i) the number of subjects in V(X_i, A_i), and n(X_i, A_i) the number of subjects with values of (X, A) equal to (X_i, A_i). When X and A are discrete and their dimensionality is reasonably small, the probability π_i = P(ξ_i = 1 | X_i, A_i) can be estimated by π̂_i = n_V(X_i, A_i)/n(X_i, A_i). The conditional expectation E{S_β(Y|X_i) | X_i, A_i} can be estimated nonparametrically based on the validation sample,

$$\hat E\{S_\beta(Y|X_i)\mid X_i, A_i\} = \sum_{j\in V(X_i,A_i)} S_\beta(Y_j|X_j)\Big/ n_V(X_i,A_i). \qquad (9)$$

Under MAR I, Ê{S_β(Y|X_i)|X_i,A_i} is an unbiased estimator of E{S_β(Y|X_i)|X_i,A_i}. Now let V(Z_i, A_i) denote the set of subjects in V with values of (Z, A) equal to (Z_i, A_i), n_V(Z_i, A_i) the number of subjects in V(Z_i, A_i), and n(Z_i, A_i) the number of subjects in the sample with values of (Z, A) equal to (Z_i, A_i). A nonparametric estimator of π_i^z = P(ξ_i = 1 | Z_i, A_i) is given by π̂_i^z = n_V(Z_i, A_i)/n(Z_i, A_i). A nonparametric estimator of E{S_β(Y|X_i)|Z_i,A_i} is given by

$$\hat E\{S_\beta(Y|X_i)\mid Z_i, A_i\} = \sum_{j\in V(Z_i,A_i)} S_\beta(Y_j|X_j)\Big/ n_V(Z_i,A_i). \qquad (10)$$

Under MAR II, (Y_i, X_i) is independent of ξ_i conditional on (Z_i, A_i), so Ê{S_β(Y|X_i)|Z_i,A_i} is an unbiased estimator of E{S_β(Y|X_i)|Z_i,A_i}.
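The cell-frequency estimators above are straightforward to compute; the following short sketch (our own illustration, with assumed argument names) forms π̂ and the cell averages of validated scores with a group-by over discrete (X, A) or (Z, A) labels.

```python
# Nonparametric cell estimates: pi_hat(cell) = n_V(cell)/n(cell) and the average of
# validated scores within each cell, returned per subject.
import numpy as np
from collections import defaultdict

def cell_estimates(keys, xi, scores):
    """keys: length-n list of hashable cell labels, e.g. tuples (x, a) or (z, a);
    xi: 0/1 validation indicators; scores: n x p array of S_beta(Y_i|X_i),
    meaningful only where xi == 1.  Returns per-subject (pi_hat, E_hat)."""
    n_cell = defaultdict(int)
    nV_cell = defaultdict(int)
    sum_score = defaultdict(lambda: np.zeros(scores.shape[1]))
    for key, v, s in zip(keys, xi, scores):
        n_cell[key] += 1
        if v == 1:
            nV_cell[key] += 1
            sum_score[key] += s
    # Cells with no validated subjects get pi_hat = 0 and E_hat = 0 in this sketch.
    pi_hat = np.array([nV_cell[k] / n_cell[k] for k in keys])
    E_hat = np.vstack([sum_score[k] / max(nV_cell[k], 1) for k in keys])
    return pi_hat, E_hat
```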

Proposition 1.

Suppose that X = (Z, Z^c) and A are discrete and their dimensionality is reasonably small. With the nonparametric estimators π̂_i^z = n_V(Z_i, A_i)/n(Z_i, A_i) and π̂_i = n_V(X_i, A_i)/n(X_i, A_i) and the estimators of the conditional expectations defined in (9) and (10), the estimators β̂_I1, β̂_E1 and β̂_A1 are equivalent, and the estimators β̂_I2, β̂_E2, β̂_A2 and β̂_A3 are equivalent. However, the estimator β̂_A2 is different from β̂_A1 unless Z_i^c is linearly related to Z_i, in which case β is not identifiable.

The results of Proposition 1 are intriguing, since research has shown that the AIPW and mean score methods are more efficient than the IPW method. It is also intriguing that the AIPW estimators β̂_A2 and β̂_A3 are actually the same estimator, regardless of whether the validation probability is estimated by π̂_i^z or π̂_i. To further understand these approaches, we investigate the asymptotic properties of these methods when (X, A) are not necessarily discrete. Through the asymptotic analysis, we gain insight into what matters for efficiency in terms of the selection of the validation sample and the augmentation function.

Suppose that Ẽ{S_β(Y|X_i)|X_i,A_i} is a consistent parametric/nonparametric estimator of E_a{S_β(Y|X_i)|X_i,A_i}, where E_a{S_β(Y|X_i)|X_i,A_i} denotes either E{S_β(Y|X_i)|X_i,A_i} or E{S_β(Y|X_i)|Z_i,A_i}. Let π(X_i, A_i, ψ) be a parametric model for the validation probability π_i, where ψ is a q-dimensional parameter. We show in Corollary 2 that the nonparametric estimator of π(X_i, A_i, ψ) can also be expressed in this parametric form when (X_i, A_i) are discrete. Let ψ_0 be the true value of ψ. Under MAR I, the MLE ψ̂ = (ψ̂_1, …, ψ̂_q) of ψ = (ψ_1, …, ψ_q) is obtained by maximizing the observed-data likelihood

$$\prod_{i=1}^{n} \pi(X_i,A_i,\psi)^{\xi_i}\,\{1-\pi(X_i,A_i,\psi)\}^{1-\xi_i}.$$

The validation probability π_i is estimated by π̃_i = π(X_i, A_i, ψ̂). Then, by standard likelihood-based analysis, we have the approximation

$$\hat\psi - \psi_0 = n^{-1}\sum_{i=1}^{n} I_\psi^{-1}\, S_i^{\psi} + o_p\big(n^{-1/2}\big), \qquad (11)$$

where S_i^ψ and I_ψ are the score vector and information matrix for ψ̂, defined by

$$S_i^{\psi} = \frac{\xi_i - \pi(X_i,A_i,\psi_0)}{\pi(X_i,A_i,\psi_0)\{1-\pi(X_i,A_i,\psi_0)\}}\,\frac{\partial\pi(X_i,A_i,\psi_0)}{\partial\psi}, \qquad I_\psi = E\left[\frac{1}{\pi(X_i,A_i,\psi_0)\{1-\pi(X_i,A_i,\psi_0)\}}\left\{\frac{\partial\pi(X_i,A_i,\psi_0)}{\partial\psi}\right\}^{\otimes 2}\right], \qquad (12)$$

where a^{⊗2} = aa'.

Consider the IPW estimator β̂_I obtained by solving the estimating equation

$$U^{I} = \sum_{i=1}^{n}\frac{\xi_i}{\tilde\pi_i}\, S_\beta(Y_i|X_i) \qquad (13)$$

and the AIPW estimator β̂_A based on solving the estimating equation

$$U^{A} = \sum_{i=1}^{n}\left[\frac{\xi_i}{\tilde\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\tilde\pi_i}\Big)\tilde E\{S_\beta(Y|X_i)\mid X_i, A_i\}\right]. \qquad (14)$$

Theorem 1.

Assume that P_β(Y|X) and π(X, A, ψ) have bounded third-order derivatives with respect to the parameters in a neighborhood of the true parameter values and are bounded away from 0 almost surely, and that both −E{∂² log P_β(Y|X)/∂β∂β'} and I_ψ are positive definite at the true parameters. Then, under MAR I,

$$n^{1/2}(\hat\beta_I-\beta) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^{n} Q_i^{I} + o_p(1), \qquad n^{1/2}(\hat\beta_A-\beta) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^{n} Q_i^{A} + o_p(1),$$

where I(β) = E{−∂² log P_β(Y|X)/∂β∂β'} = Var{S_β(Y_i|X_i)},

$$Q_i^{I} = \frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) - E\left\{\pi_i^{-2}\,\xi_i\, S_\beta(Y_i|X_i)\,\frac{\partial\pi(X_i,A_i,\psi_0)'}{\partial\psi}\right\}(I_\psi)^{-1} S_i^{\psi},$$

and Q_i^A = (ξ_i/π_i) S_β(Y_i|X_i) + (1 − ξ_i/π_i) E_a{S_β(Y|X_i)|X_i,A_i}.

Both n^{1/2}(β̂_I − β) and n^{1/2}(β̂_A − β) have asymptotically normal distributions with mean zero and covariance matrices I^{-1}(β)Var(Q_i^I)I^{-1}(β) and I^{-1}(β)Var(Q_i^A)I^{-1}(β), respectively. Further,

$$\mathrm{Var}(Q_i^{I}) = \mathrm{Var}(Q_i^{A}) + \mathrm{Var}(B_i + O_i) \qquad (15)$$

and

$$\mathrm{Var}(Q_i^{A}) = I(\beta) + \mathrm{Var}\Big[\Big(1-\frac{\xi_i}{\pi_i}\Big)\big\{S_\beta(Y_i|X_i) - E_a(S_\beta(Y|X_i)\mid X_i,A_i)\big\}\Big], \qquad (16)$$

where O_i = E{π_i^{-2} ξ_i S_β(Y_i|X_i) ∂π(X_i,A_i,ψ_0)'/∂ψ}(I_ψ)^{-1}S_i^ψ and B_i = (1 − ξ_i/π_i)E_a{S_β(Y|X_i)|X_i,A_i}.

Suppose that the validation probability π_i = P(ξ_i = 1 | X_i, A_i) depends only on (Z_i, A_i); that is, π_i = π_i^z = P(ξ_i = 1 | Z_i, A_i). Suppose that π̃_i is the MLE of π_i^z under the parametric family π(Z_i, A_i, ψ). Let β̂_A1 be the estimator obtained by solving (14) with the augmented term Ẽ{S_β(Y|X_i)|X_i,A_i} taken to be a consistent parametric/nonparametric estimator of E{S_β(Y|X_i)|Z_i,A_i}, and let β̂_A2 be the estimator obtained by solving (14) with Ẽ{S_β(Y|X_i)|X_i,A_i} a consistent parametric/nonparametric estimator of E{S_β(Y|X_i)|X_i,A_i}. The following corollary presents the asymptotic results for these two AIPW estimators of β, one corresponding to augmentation based on a subset, (Z, A), of the observed variables and the other to augmentation based on the full set, (X, A), of the observed variables.

Corollary 1.

Suppose that the validation probability π_i = P(ξ_i = 1 | X_i, A_i) depends only on (Z_i, A_i). Under the conditions of Theorem 1,

$$n^{1/2}(\hat\beta_{A1}-\beta) \xrightarrow{\;D\;} N\big(0,\; I^{-1}(\beta) + I^{-1}(\beta)\,\Sigma_{A1}(\beta)\, I^{-1}(\beta)\big), \qquad (17)$$

and

$$n^{1/2}(\hat\beta_{A2}-\beta) \xrightarrow{\;D\;} N\big(0,\; I^{-1}(\beta) + I^{-1}(\beta)\,\Sigma_{A2}(\beta)\, I^{-1}(\beta)\big), \qquad (18)$$

where Σ_{A1}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i)|Z_i,A_i}] and Σ_{A2}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i)|X_i,A_i}]. The asymptotic variance of β̂_A2 is smaller than the asymptotic variance of β̂_A1 if the covariates Z_i are a proper subset of X_i.

Suppose that (Z, A) are discrete, taking values (z, a) in a finite set. If the number of parameters in ψ equals the number of distinct values of (z, a), with ψ_{z,a} = P(ξ_i = 1 | Z_i = z, A_i = a) for each pair (z, a), then ψ = {ψ_{z,a}} and π(z, a, ψ) = ψ_{z,a}. Further, ∂π(z, a, ψ_0)/∂ψ can be viewed as a column vector with 1 in the position for ψ_{z,a} and 0 elsewhere. The information matrix I_ψ defined in (12) has the expression

$$I_\psi = \sum_{z,a}\rho(z,a)\,\frac{1}{\pi(z,a,\psi_0)\{1-\pi(z,a,\psi_0)\}}\,\frac{\partial\pi(z,a,\psi_0)}{\partial\psi}\,\frac{\partial\pi(z,a,\psi_0)'}{\partial\psi},$$

where ρ(z, a) = P(Z_i = z, A_i = a). It follows that I_ψ is a diagonal matrix and its inverse is also diagonal. The MLE ψ̂_{z,a} = n_V(z,a)/n(z,a) is in fact the nonparametric estimator of ψ_{z,a} based on the proportion of validated subjects in the category specified by (z, a). Equation (11) can be expressed as

$$\hat\psi_{z,a} - \pi(z,a,\psi_0) = n^{-1}\,\frac{n_V(z,a) - n(z,a)\,\pi(z,a,\psi_0)}{\rho(z,a)} + o_p\big(n^{-1/2}\big)$$

for each (z, a).

By Theorem 1, the possible efficiency gain of the AIPW estimator over the IPW estimator is shown through equation (15). The AIPW estimator is more efficient unless Var(B_i + O_i) = 0. In particular, from the proof of Theorem 1, we have

$$n^{-1/2} U^{A} = n^{-1/2}\sum_{i=1}^{n} Q_i^{A} + o_p(1), \qquad (19)$$
$$n^{-1/2} U^{I} = n^{-1/2}\sum_{i=1}^{n} Q_i^{A} - n^{-1/2}\sum_{i=1}^{n}(B_i + O_i) + o_p(1), \qquad (20)$$

where B_i and O_i are defined following (16). The following corollary analyzes the term n^{-1/2}Σ_{i=1}^{n}(B_i + O_i) when (Z_i, A_i) are discrete, to show how efficiency may be gained by the AIPW estimator over the IPW estimator.

Corollary 2.

Under the conditions of Theorem 1, suppose that X = (Z, Z^c) and that (Z, A) are discrete, taking values (z, a) in a finite set. Suppose that the validation probability π_i = P(ξ_i = 1 | X_i, A_i) depends only on (Z_i, A_i), and that ψ_{z,a} = P(ξ_i = 1 | Z_i = z, A_i = a) is estimated nonparametrically by ψ̂_{z,a} = n_V(z,a)/n(z,a). Then

$$n^{-1/2}\sum_{i=1}^{n}(O_i+B_i) = \sum_{z,a}\left[ n^{-1/2}\sum_{j=1}^{n}\frac{\xi_j-\pi(z,a,\psi_0)}{\pi(z,a,\psi_0)}\, I(Z_j=z,A_j=a)\Big\{ E\big(S_\beta(Y_j|X_j)\mid Z_j=z,A_j=a\big) - E\big(S_\beta(Y_j|X_j)\mid Z_j=z,Z_j^{c},A_j=a\big)\Big\}\right]. \qquad (21)$$

By Corollary 2, (19) and (20), β̂_A is more efficient than β̂_I unless Var{S_β(Y_j|X_j) | Z_j = z, A_j = a} = 0 for all (z, a) for which P(Z_i = z, A_i = a) ≠ 0. If X = Z and the validation probability π_i = P(ξ_i = 1 | X_i, A_i) is estimated nonparametrically by the cell frequencies ψ̂_{z,a} = n_V(z,a)/n(z,a), then β̂_A and β̂_I are asymptotically equivalent.

Remark. Consider the estimators of β obtained from the estimating equation (1) corresponding to the different choices of W_i given in (2) to (8). If (Z, A) are discrete and the validation probability π_i^z = P(ξ_i = 1 | Z_i, A_i) is estimated nonparametrically by the cell frequencies, then by Theorem 1 and Corollary 2, β̂_A1 and β̂_I1 have the same asymptotic normal distribution as long as Ê{S_β(Y|X_i)|Z_i,A_i} is a consistent estimator of E{S_β(Y|X_i)|Z_i,A_i}. However, β̂_A2 is more efficient than β̂_I1 as long as Ê{S_β(Y|X_i)|X_i,A_i} is a consistent estimator of E{S_β(Y|X_i)|X_i,A_i}, since Var(B_i + O_i) is not zero by (21). These results are not affected by whether E{S_β(Y|X_i)|Z_i,A_i} and E{S_β(Y|X_i)|X_i,A_i} are estimated nonparametrically or based on parametric models. In addition, by Theorem 1 and Corollaries 1 and 2, β̂_A3 and β̂_I2 have the same asymptotic normal distribution as long as Ê{S_β(Y|X_i)|X_i,A_i} is a consistent estimator of E{S_β(Y|X_i)|X_i,A_i}.

Poisson regression using the automated data with missing outcomes

Many medical and public health data sets are available only in aggregated form, where the variables of interest are aggregated counts that are not available at the individual level. Many existing statistical methods for missing data require observations at individual levels. Applying the missing data methods presented in Section 3, we derive estimation procedures for the Poisson regression model with missing outcomes based on auxiliary outcomes and a validated sample for true outcomes. Further, we show that, by stratifying on a set of discrete variables, the proposed statistical procedure can be formulated so that it can be used to analyze automated records which do not contain observations at individual levels, but only summarized information at categorical levels.

Let Y represent the number of events occurring in the time-exposure interval [0, T] and Z the covariates. We consider the Poisson regression model

$$P(Y=y\mid Z,T) = \exp\{-T\exp(\beta' Z)\}\,\{T\exp(\beta' Z)\}^{y}\big/ y!, \qquad (22)$$

where Z is a vector of k+1 covariates and β a vector of k+1 regression coefficients. In practice, the exact number of true events may not be available for all subjects. We may instead have the number of possible events, namely, the auxiliary events. For example, in the study of vaccine adverse events associated with childhood immunizations, the number of auxiliary events A for MAARI is collected based on ICD-9 codes through hospital records. Further diagnosis may indicate that some of these events are false events. The number of true vaccine adverse events, Y, can only be confirmed for the subjects in the validation set V. Suppose that Z is the vaccination status, 1 for the vaccinated and 0 for the unvaccinated. Then, under the Poisson regression model, exp(β) is the relative rate of event occurrence per unit time of the exposed versus the unexposed. We assume that the number of automated events A can be expressed as A = Y + W, where W is the number of false events, independent of Y conditional on (Z, T). Suppose that W follows the Poisson regression model

$$P(W=w\mid Z,T) = \exp\{-T\exp(\gamma' Z)\}\,\{T\exp(\gamma' Z)\}^{w}\big/ w!, \qquad (23)$$

where γ = (a_0, a_1, γ_1, …, γ_{k−1}).

We apply the missing data methods introduced in Section 3 to model (22). The variables (Z_i, T_i, Y_i, A_i) are observed for the validation sample V and (Z_i, T_i, A_i) are observed for the nonvalidation sample V̄. While the covariate Z can be regarded as categorical, it is natural to treat the exposure time T as a continuous variable. We assume that the validation probability depends only on the stratification of (Z, A); that is, the validation sample is a stratified random sample over the categories defined by (Z, A). Of the estimators discussed in Section 2, there are then only two distinct estimators, β̂_I1 and β̂_A2. We show in Section 4.3 that the proposed method can be formulated so that it can be used to analyze automated records with missing outcomes. First we derive the explicit estimation procedures for β̂_I1 and β̂_A2 and their variance estimators under model (22).

4.1 Inverse probability weighting estimation

We adopt all the notation introduced in Section 3. In particular, let π_i^z = P(ξ_i = 1 | Z_i, A_i) and π̂_i^z = n_V(Z_i, A_i)/n(Z_i, A_i). Let X = (Z, T) and X_i = (Z_i, T_i) to be consistent with the earlier notation. The score function for subject i under model (22) is S_β(Y_i|X_i) = Z_i(Y_i − T_i exp(β'Z_i)). The estimator β̂_I1 is obtained by solving Σ_{i=1}^{n} (ξ_i/π̂_i^z) S_β(Y_i|X_i) = 0. By Corollary 1, √n(β̂_I1 − β) converges in distribution to a normal distribution with mean zero and variance matrix I^{-1}(β) + I^{-1}(β)Σ_{A1}(β)I^{-1}(β), where Σ_{A1}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i)|Z_i,A_i}].

The information matrix I(β) = E(Z_iZ_i' T_i exp(β'Z_i)) = Σ_z P(Z_i = z) zz' exp(β'z) E(T_i | Z_i = z) can be estimated by Î(β̂), obtained by replacing β with β̂_I1, P(Z_i = z) with the sample proportion of the event {Z_i = z}, and E(T_i | Z_i = z) with the sample average exposure time for those with covariates Z_i = z. The matrix Σ_{A1}(β) can be estimated by

$$\hat\Sigma_{A1}(\hat\beta) = \sum_{a,z}\hat\rho(a,z)\,\frac{1-\hat\rho_v(a,z)}{\hat\rho_v(a,z)}\,\widehat{\mathrm{Var}}\{S_\beta(Y|X)\mid A=a, Z=z\}, \qquad (24)$$

where ρ̂(a, z) is the estimator of P{A_i = a, Z_i = z}, ρ̂_v(a, z) is the estimator of P{i ∈ V | A_i = a, Z_i = z}, and Var̂{S_β(Y|X)|A=a, Z=z} is an estimator of Var{S_β(Y|X)|A=a, Z=z}, derived in the following.

Since A is observed for all subjects, W can be determined if Y is known, and is undetermined otherwise. The IPW estimator γ̂_I1 of γ is obtained by solving the equation Σ_{i=1}^{n} (ξ_i/π̂_i^z) S_γ(W_i|X_i) = 0, where S_γ(W_i|X_i) = Z_i(W_i − T_i exp(γ'Z_i)). The conditional distribution of Y given A = a, T, and Z = z is Binomial(a, p_z), where p_z = exp(β'z)/{exp(β'z) + exp(γ'z)}. Since this conditional distribution does not depend on T, the outcome Y and the exposure time T are conditionally independent given (A, Z). Therefore, Var{S_β(Y|X)|A,Z} = ZZ'{Var(Y|A,Z) + exp(2β'Z)Var(T|A,Z)}. The variance Var(Y|A=a, Z=z) can be estimated by a p̂_z(1−p̂_z), where p̂_z = exp(β̂'z)/{exp(β̂'z) + exp(γ̂'z)}, and Var(T|A=a, Z=z) = E(T²|A=a, Z=z) − {E(T|A=a, Z=z)}² can be estimated nonparametrically using the first and second sample moments conditional on each category with A = a and Z = z.
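As an illustration of Section 4.1, the sketch below (our own code, with hypothetical argument names, not the authors' implementation) solves the weighted score equation Σ_i (ξ_i/π̂_i^z) Z_i(Y_i − T_i e^{β'Z_i}) = 0 by Newton-Raphson; the same routine applied to W_i = A_i − Y_i on the validated subjects gives γ̂_I1.

```python
# IPW estimator for the Poisson model (22): Z includes an intercept column,
# T are exposure times, Y are outcomes (valid only where xi == 1), xi are the
# validation indicators and pi_z the (estimated) validation probabilities.
import numpy as np

def ipw_poisson(Z, T, Y, xi, pi_z, n_iter=50, tol=1e-10):
    w = xi / pi_z                                  # IPW weights; zero if not validated
    Y = np.where(xi == 1, Y, 0.0)                  # unused rows, kept finite
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        mu = T * np.exp(Z @ beta)                  # fitted means T * exp(beta'Z)
        score = Z.T @ (w * (Y - mu))               # weighted score vector
        info = (Z * (w * mu)[:, None]).T @ Z       # minus the derivative of the score
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```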

4.2 Augmented inverse probability weighted estimation

Under the assumption that W follows the Poisson regression model (23) and is independent of Y conditional on (Z, T), E{S_β(Y|X)|Z, T, A} = A Z exp(β'Z)/{exp(β'Z) + exp(γ'Z)} − T Z exp(β'Z). Let Ê{S_β(Y|X_i)|X_i,A_i} be the estimator of E{S_β(Y|X_i)|X_i,A_i} for a given β, obtained by substituting the estimator γ̂_I1 of Section 4.1 for γ. Then the estimator β̂_A2 is obtained by solving

$$\sum_{i=1}^{n}\left[\frac{\xi_i}{\hat\pi_i^{z}}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i^{z}}\Big)\hat E\{S_\beta(Y|X_i)\mid X_i, A_i\}\right] = 0. \qquad (25)$$

By Corollary 1, √n(β̂_A2 − β) converges in distribution to a normal distribution with mean zero and variance matrix I^{-1}(β) + I^{-1}(β)Σ_{A2}(β)I^{-1}(β), where Σ_{A2}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i)|X_i,A_i}]. The information matrix I(β) can be estimated by Î(β̂) as given in Section 4.1. The conditional variance Var{S_β(Y|X)|Z=z, T, A=a} = a p_z(1−p_z) zz' can be estimated by a p̂_z(1−p̂_z) zz', where p̂_z = exp(β̂'z)/{exp(β̂'z) + exp(γ̂'z)}. It follows that Σ_{A2}(β) can be consistently estimated by

$$\hat\Sigma_{A2}(\hat\beta) = \sum_{a,z}\hat\rho(a,z)\,\frac{1-\hat\rho_v(a,z)}{\hat\rho_v(a,z)}\, a\,\hat p_z(1-\hat p_z)\, zz',$$

where ρ̂(a, z) is the estimator of P{A_i = a, Z_i = z} and ρ̂_v(a, z) is the estimator of P{i ∈ V | A_i = a, Z_i = z}.
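A corresponding sketch for the AIPW estimating equation (25) follows (again our own illustrative code, not the authors'); it plugs in the augmentation term Z{A e^{β'Z}/(e^{β'Z} + e^{γ̂'Z}) − T e^{β'Z}} with a previously computed γ̂ (for example, the IPW fit of model (23) to W = A − Y on the validated subjects) and solves for β by Newton-Raphson.

```python
# AIPW estimator beta_A2 for the Poisson model (22) via equation (25).
import numpy as np

def aipw_poisson(Z, T, Y, A, xi, pi_z, gamma_hat, n_iter=50, tol=1e-10):
    w = xi / pi_z
    Y = np.where(xi == 1, Y, 0.0)                  # Y only needed where xi == 1
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        eb = np.exp(Z @ beta)
        eg = np.exp(Z @ gamma_hat)
        p = eb / (eb + eg)                         # P(true event | auxiliary event)
        s_obs = Z * (Y - T * eb)[:, None]          # S_beta(Y_i | X_i)
        s_aug = Z * (A * p - T * eb)[:, None]      # estimated E{S_beta(Y|X_i)|X_i,A_i}
        U = (w[:, None] * s_obs + (1 - w)[:, None] * s_aug).sum(axis=0)
        # Derivative of U with respect to beta (p depends on beta through eb).
        dp = p * (1 - p)
        d_obs = -(Z * (w * T * eb)[:, None]).T @ Z
        d_aug = (Z * ((1 - w) * (A * dp - T * eb))[:, None]).T @ Z
        step = np.linalg.solve(d_obs + d_aug, -U)  # Newton step
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```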

4.3 Estimation using the automated data

This section formulates the missing data estimation procedure for (22) based on the automated (summarized) information at categorical levels defined by relevant covariates of the model. In particular, we show that β ̂ I 1 and β ̂ A 2 and their variance estimators can be formulated using the automated data at categorical levels.

In many applications it is convenient to write Z = (1, Z^{(1)}, Z^{(2)}) and β = (b_0, b_1, θ), where Z^{(1)} is the treatment indicator (Z^{(1)} = 1 for the exposed group and Z^{(1)} = 0 for the unexposed group), Z^{(2)} = (η_1, …, η_{k−1}) contains the other covariates, and θ = (θ_1, …, θ_{k−1}). For applications involving automated data records, we let η_1, …, η_{k−1} be k−1 dummy variables representing k groups. Without loss of generality, we choose the k-th group as the reference group: η_1 = 1, η_2 = 0, …, η_{k−1} = 0 for group 1; η_1 = 0, η_2 = 1, …, η_{k−1} = 0 for group 2; and so on, with η_1 = 0, η_2 = 0, …, η_{k−1} = 0 for group k. Thus each value of Z denotes a category, which can be represented by (l, m) for l = 0, 1 and m = 1, …, k. This correspondence is denoted by Z(l, m) for convenience. For l = 0, 1 and m = 1, …, k−1, category (l, m) is defined by Z with Z^{(1)} = l, η_m = 1 and η_j = 0 for j ≠ m, j = 1, …, k−1, and category (l, k) is defined by Z^{(1)} = l and η_j = 0 for j = 1, …, k−1. Under model (22), the expected number of events for a subject in category (l, m) with time-exposure interval [0, T] is T exp(b_{lm}), for l = 0, 1 and m = 1, …, k, where b_{1k} = b_0 + b_1, b_{0k} = b_0, b_{1m} = b_0 + b_1 + θ_m and b_{0m} = b_0 + θ_m for 1 ≤ m ≤ k−1. The parameter b_1 represents the log-relative rate of the exposed versus the unexposed adjusted for the other factors.

The following notation is used to show that the estimators of β and their variance estimators can be calculated using the automated information at the categorical levels. Let V(a, l, m) denote the set of subjects in V with A = a and Z(l, m); V(l, m) the set of subjects in V with Z(l, m); n_{alm} the number of subjects with A = a and Z(l, m); n^v_{alm} the number of subjects in V(a, l, m); n^v_{lm} the number of subjects in V(l, m); λ_{alm} = n_{alm}/n^v_{alm}; y_{alm} the number of events for subjects in V(a, l, m); y_{lm} the number of events for subjects in V(l, m); t_{alm} the total exposure time for subjects with A = a and Z(l, m); t_{2,alm} the total squared exposure time for subjects with A = a and Z(l, m); t_{lm} the total exposure time for subjects with Z(l, m); and α_{lm} the number of automated events for subjects with Z(l, m).

Estimation with β̂_I1 using the automated data. The validation probability π_i^z can be estimated by 1/λ_{alm} when A_i = a and Z_i is in category (l, m). It can be shown that the estimating equation for β̂_I1 is equivalent to the following nonlinear equations for {b_{lm}, l = 0, 1, m = 1, …, k}:

$$\sum_{m=1}^{k}\left(\sum_{a\in\mathcal{A}} y_{alm}\,\lambda_{alm} - e^{b_{lm}}\sum_{a\in\mathcal{A}} t_{alm}\,\lambda_{alm}\right)=0,\qquad l=0,1,$$
$$\sum_{a\in\mathcal{A}} y_{alm}\,\lambda_{alm} - e^{b_{lm}}\sum_{a\in\mathcal{A}} t_{alm}\,\lambda_{alm}=0,$$

for l = 0, 1 and m = 1, …, k−1. When k > 1, the equations have no explicit solutions.

In the following, we show that the asymptotic variance of β̂_I1 can be consistently estimated using only the automated information at categorical levels. The information matrix is a (k+1)×(k+1) symmetric matrix given by

$$I(\beta) = E\big(Z_iZ_i' T_i e^{\beta'Z_i}\big) = \begin{pmatrix} \sum_{l,m} q_{lm} & \sum_{m=1}^{k} q_{1m} & q_{11}+q_{01} & \cdots & q_{1r}+q_{0r} \\ \sum_{m=1}^{k} q_{1m} & \sum_{m=1}^{k} q_{1m} & q_{11} & \cdots & q_{1r} \\ q_{11}+q_{01} & q_{11} & q_{11}+q_{01} & & 0 \\ \vdots & \vdots & & \ddots & \\ q_{1r}+q_{0r} & q_{1r} & 0 & & q_{1r}+q_{0r} \end{pmatrix},$$

where r = k−1 and q_{lm} = E(T_i e^{b_{lm}} I{individual i in category (l, m)}). The consistent estimator Î(β̂) of I(β) is thus obtained by replacing q_{lm} with exp(b̂_{lm}) t_{lm}/n.

Under model (23), the expected number of false events for a subject in category (l, m) with time-exposure interval [0, T] is T exp(d_{lm}), for l = 0, 1 and m = 1, …, k, where d_{1k} = a_0 + a_1, d_{0k} = a_0, d_{1m} = a_0 + a_1 + γ_m and d_{0m} = a_0 + γ_m for 1 ≤ m ≤ k−1. The conditional distribution of Y given A = a, T, and Z(l, m) is Binomial(a, p_{lm}), where p_{lm} = exp(b_{lm})/{exp(b_{lm}) + exp(d_{lm})} for a ≥ 1. Then Var(Y|A=a, Z(l,m)) can be estimated by a p̂_{lm}(1−p̂_{lm}), where p̂_{lm} = e^{b̂_{lm}}/(e^{b̂_{lm}} + e^{d̂_{lm}}), and Var(T|A=a, Z(l,m)) can be estimated by ν_{a,l,m} = t_{2,alm}/n_{alm} − (t_{alm}/n_{alm})². By (24) and the discussion following it, Σ_{A1}(β) can be estimated by

$$\hat\Sigma_{A1}(\hat\beta) = \sum_{a,l,m}\hat\rho(a,l,m)\,\frac{1-\hat\rho_v(a,l,m)}{\hat\rho_v(a,l,m)}\, G_{lm}\,\big\{a\,\hat p_{lm}(1-\hat p_{lm}) + \nu_{a,l,m}\big\}, \qquad (26)$$

where ρ̂(a, l, m) = n_{alm}/n, ρ̂_v(a, l, m) = n^v_{alm}/n_{alm}, and G_{lm} is the value of G_i = Z_iZ_i' when subject i belongs to category (l, m). Hence the covariance matrix of β̂_I1 can be estimated by Î^{-1}(β̂) + Î^{-1}(β̂)Σ̂_{A1}(β̂)Î^{-1}(β̂) using the automated data.

Estimation with β̂_A2 using the automated data. The estimating equation (25) is equivalent to the following nonlinear equations for {b_{lm}, l = 0, 1, m = 1, …, k}:

$$\sum_{m=1}^{k}\left[\sum_{a\in\mathcal{A}} y_{alm}\,\lambda_{alm} - e^{b_{lm}}\, t_{lm} + \frac{e^{b_{lm}}}{e^{b_{lm}}+e^{\hat d_{lm}}}\Big(\alpha_{lm}-\sum_{a\in\mathcal{A}} a\, n_{alm}\,\lambda_{alm}\Big)\right]=0,\qquad l=0,1,$$
$$\sum_{a\in\mathcal{A}} y_{alm}\,\lambda_{alm} - e^{b_{lm}}\, t_{lm} + \frac{e^{b_{lm}}}{e^{b_{lm}}+e^{\hat d_{lm}}}\Big(\alpha_{lm}-\sum_{a\in\mathcal{A}} a\, n_{alm}\,\lambda_{alm}\Big)=0,$$

for l = 0, 1 and m = 1, …, k−1.

Since Var{S_β(Y|X)|Z(l,m), T, A=a} = a p_{lm}(1−p_{lm}) G_{lm}, Σ_{A2}(β) can be consistently estimated by

$$\hat\Sigma_{A2}(\hat\beta) = \sum_{a,l,m}\hat\rho(a,l,m)\,\frac{1-\hat\rho_v(a,l,m)}{\hat\rho_v(a,l,m)}\, a\,\hat p_{lm}(1-\hat p_{lm})\, G_{lm}.$$

Hence the covariance matrix of β̂_A2 can be estimated by Î^{-1}(β̂) + Î^{-1}(β̂)Σ̂_{A2}(β̂)Î^{-1}(β̂) using the automated data.

Remark. In the special case where ρ(α, l, m) ≈ 0 for α ≥ 2, a much simpler formula for the variance estimator of the log relative risk can be derived. For example, in the vaccine safety study the adverse-event rate is very small. Let

$$w_m = \frac{\dfrac{\alpha_{0m}\,\alpha_{1m}\, y_{0m}\, y_{1m}}{\alpha_{0m}\, y_{0m}\, n^{v}_{1m} + \alpha_{1m}\, y_{1m}\, n^{v}_{0m}}}{\displaystyle\sum_{m'=1}^{k} \frac{\alpha_{0m'}\,\alpha_{1m'}\, y_{0m'}\, y_{1m'}}{\alpha_{0m'}\, y_{0m'}\, n^{v}_{1m'} + \alpha_{1m'}\, y_{1m'}\, n^{v}_{0m'}}}.$$

Then an estimate of the variance of b̂_1 is given by

$$\widehat{\mathrm{Var}}(\hat b_1) = \sum_{m=1}^{k} w_m^{2}\left(\frac{1}{y_{1m}} - \frac{1}{n^{v}_{1m}} + \frac{1}{\alpha_{1m}} + \frac{1}{y_{0m}} - \frac{1}{n^{v}_{0m}} + \frac{1}{\alpha_{0m}}\right), \qquad (27)$$

which is a weighted sum of the estimated variances of the estimated log relative rates of the exposed versus the unexposed over the k groups. The details of the derivation are given in Appendix B.
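For data already reduced to per-stratum counts, formula (27) can be evaluated directly; the sketch below (illustrative code under assumed array names, not the authors' implementation) computes the weights w_m and the variance estimate.

```python
# Variance estimate (27) from aggregated counts.  For each stratum m we need the
# auxiliary/MAARI counts alpha_{lm}, the numbers cultured n^v_{lm} and the positive
# cultures y_{lm}, with l = 1 for the exposed (vaccinated) and l = 0 otherwise.
import numpy as np

def var_b1_aggregated(alpha1, alpha0, nv1, nv0, y1, y0):
    """All arguments are length-k arrays of per-stratum counts."""
    r = (alpha0 * alpha1 * y0 * y1) / (alpha0 * y0 * nv1 + alpha1 * y1 * nv0)
    w = r / r.sum()                                  # the weights w_m
    v = (1/y1 - 1/nv1 + 1/alpha1) + (1/y0 - 1/nv0 + 1/alpha0)
    return np.sum(w**2 * v)                          # formula (27)
```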

A simulation study

We conduct a simulation study to examine the finite-sample performance of the IPW estimator β̂_I1 and the AIPW estimator β̂_A2. We consider the Poisson regression model (22). The covariates Z_1 and Z_2 are generated from Bernoulli distributions with probability of success equal to 0.4 and 0.5, respectively. The exposure time T is generated from a uniform distribution on [0, 10]. Given Z = (Z_1, Z_2) and T, the outcome variable Y follows a Poisson distribution with mean T exp(b_0 + b_1Z_1 + θZ_2), where b_0 = −0.5, b_1 = −0.8 and θ = −0.6, and W follows a Poisson distribution with mean T exp(a_0 + a_1Z_1 + γZ_2), where a_0 = −1.3, a_1 = −1.1 and γ = −1. We set A = Y + W.

Four models for the validation sample are considered. Under Model 1, the validation sample is a simple random sample with probability π i =0.4. Model 2 considers π i =0.6. In Model 3, the validation probability only depends on A through the logistic regression model logit{π i (X,A)}=A−0.5 where X=(Z,T). In Model 4, the validation probability depends on A and Z1 through the logistic regression model logit{π i (X,A)}=AZ1−0.5.
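For reference, the data-generating mechanism just described can be reproduced as follows (an illustrative sketch; in Model 4 the printed expression A Z_1 − 0.5 is used as written, since the exact operator between A and Z_1 may have been lost in extraction).

```python
# Simulated data for Section 5: covariates, true and false event counts, auxiliary
# outcome A = Y + W, and the validation indicator under Models 1-4.
import numpy as np

def simulate(n, model, seed=0):
    rng = np.random.default_rng(seed)
    Z1 = rng.binomial(1, 0.4, n)
    Z2 = rng.binomial(1, 0.5, n)
    T = rng.uniform(0, 10, n)
    Y = rng.poisson(T * np.exp(-0.5 - 0.8 * Z1 - 0.6 * Z2))   # true outcomes
    W = rng.poisson(T * np.exp(-1.3 - 1.1 * Z1 - 1.0 * Z2))   # false events
    A = Y + W                                                 # auxiliary outcome
    if model == 1:
        p = np.full(n, 0.4)
    elif model == 2:
        p = np.full(n, 0.6)
    elif model == 3:
        p = 1 / (1 + np.exp(-(A - 0.5)))                      # logit depends on A
    else:                                                     # Model 4, as printed
        p = 1 / (1 + np.exp(-(A * Z1 - 0.5)))
    xi = rng.binomial(1, p)                                   # validation indicator
    return Z1, Z2, T, Y, W, A, xi
```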

Tables 1 and 2 present the simulation results for n=50,100,300,500 and 800. Each entry of the tables is based on 1000 simulation runs. Tables 1 and 2 summarize the bias (Bias), the empirical standard error (SSE), the average of the estimated standard error (ESE), and the empirical coverage probability (CP) of 95% confidence intervals of β ̂ I 1 and β ̂ A 2 for β=(b0,b1,θ). We also compare the performance of the estimators β ̂ I 1 and β ̂ A 2 with the complete-case (CC) estimator β ̂ C obtained by simply deleting subjects with missing values of Y i . As a gold standard, we present the estimation results for the full data where all the values of Y i are fully observed. Table 1 presents the results under Model 1 and 2, and Table 2 shows the results under Model 3 and 4.

Table 1 Simulation comparison of the IPW estimator β ̂ I 1 , the AIPW estimator β ̂ A 2 and the complete-case (CC) estimator β ̂ C under various sample sizes and selection probabilities
Table 2 Simulation comparison of the IPW estimator β ̂ I 1 , the AIPW estimator β ̂ A 2 and the complete-case (CC) estimator β ̂ C under various sample sizes and selection probabilities

Table 1 shows that under Model 1 and Model 2, the bias of all estimators is very small at a level comparable with that of the full data estimator. The bias decreases with increased sample size and the increased level of the validation probability. The empirical standard errors are in good agreement with the corresponding estimated standard errors, except for the IPW estimator when n≤100 and π≤0.6. Among them, AIPW has the smallest standard errors for all parameters and sample sizes concerned. The coverage probabilities of the confidence intervals for b0, b1 and θ are close to the nominal level 95%. When the sample size and the validation probability are both small, for example, n=50 and π=0.4, the IPW has large bias and is unstable but the AIPW still performs well.

Table 2 gives the results under Model 3 and Model 4. The bias remains small for β̂_I1 and β̂_A2. The empirical standard errors are also close to the corresponding estimated standard errors. The coverage probabilities remain close to the nominal level 95% for all IPW and AIPW estimators. However, the complete-case estimator yields larger bias and incorrect coverage probabilities because of the association between the validation probability and the auxiliary variable A and/or the covariate Z_1, in which case the missingness is not completely at random. The AIPW estimator performs better than the IPW estimator, with smaller standard errors.

An Application

A community-based, nonrandomized, open-label influenza vaccine (CAIV-T) study was conducted in Temple-Belton, Texas, during the 2000-2001 influenza season. A total of 11,606 healthy children aged 18 months to 18 years were involved in this study, and about 20% of them received a single dose of CAIV-T in 2000. The primary clinical outcome was based on a nonspecific case definition called medically attended acute respiratory infection (MAARI), which included all International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes (ICD-9 codes 381-383, 460-487) for upper and lower respiratory tract infections, otitis media and sinusitis. MAARI outcomes and demographic data were extracted from the Scott & White Health Plan administrative database. For each visit, one or two International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes were listed. Visits for which asthma diagnosis codes alone were noted, without another MAARI code, were excluded. More details about this study can be found in Halloran et al. ([2003]).

Any children presenting with a history of fever and any respiratory illness were eligible to have a throat swab taken for influenza virus culture. The decision to obtain specimens was made irrespective of whether a patient had received CAIV-T. The specific case definition was culture-confirmed influenza. Table 3, taken from Halloran et al. ([2003]), contains information on the number of children in three age groups, the number of children who were vaccinated versus unvaccinated, the number of nonspecific MAARI cases, the number of cultures performed, and the number of positive cultures for each group.

Table 3 Study data for influenza epidemic season 2000-01, by age and vaccine group (from Halloran et al.[2003])

With the method developed in Section 4 for Poisson regression, we compare the risk of developing MAARI for children who received CAIV-T to the risk for children who had never received CAIV-T using the automated information provided in Table 3. The number of nonspecific MAARI cases extracted using the ICD-9 codes is the auxiliary outcome A, whereas the actual number of influenza cases Y is the outcome of interest. Let Z1 be the treatment indicator (1=vaccine and 0=placebo). Let Z2=(η1,η2) be the dummy variables indicating three age groups, where η1=1 if the age is in the range 1.5–4, η1=0, otherwise, and η2=1 if the age is in the range 5–9, η2=0, otherwise. The reference group is the age 10–18. The exposure time for all children is taken as T=1 year.

Consider a Poisson regression model with mean T exp(b_0 + b_1Z_1 + θ_1η_1 + θ_2η_2). Using the IPW estimator β̂_I1, the estimates (standard errors) are b̂_0 = 0.7659 (σ̂_{b_0} = 0.1046), b̂_1 = −1.5830 (σ̂_{b_1} = 0.5017), θ̂_1 = 0.5572 (σ̂_{θ_1} = 0.2111) and θ̂_2 = 0.0199 (σ̂_{θ_2} = 0.1472). The age-adjusted relative rate (RR) in the vaccinated group compared with the unvaccinated group equals exp(b̂_1) = exp(−1.5830) = 0.2054, which means that the rate of developing MAARI for the vaccinated group is about 20% of that for the unvaccinated group. In terms of the vaccine efficacy VE = 1 − RR = 0.7946, this represents about an 80% reduction in the risk of developing MAARI for the vaccinated group compared to the unvaccinated group. The 95% confidence interval for RR obtained by the delta method is (0.0768, 0.5490), showing clear evidence that the vaccinated children have a lower risk of influenza than the unvaccinated children. The 95% confidence interval for VE is (0.4510, 0.9232).
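The reported interval can be reproduced from b̂_1 and σ̂_{b_1} by the delta method on the log scale, as in the short check below (our arithmetic, not the paper's code).

```python
# Delta-method 95% interval for RR = exp(b1_hat) and the implied VE interval.
import numpy as np

b1_hat, se = -1.5830, 0.5017
rr = np.exp(b1_hat)                                  # 0.2054
lo, hi = np.exp(b1_hat - 1.96 * se), np.exp(b1_hat + 1.96 * se)
print(round(rr, 4), (round(lo, 4), round(hi, 4)))    # 0.2054 (0.0768, 0.549)
print(round(1 - hi, 4), round(1 - lo, 4))            # VE interval (0.451, 0.9232)
```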

Using the AIPW estimator β̂_A2, the estimates (standard errors) are b̂_0 = 2.0703 (σ̂_{b_0} = 0.0851), b̂_1 = −1.8072 (σ̂_{b_1} = 0.3786), θ̂_1 = 0.6452 (σ̂_{θ_1} = 0.1966) and θ̂_2 = 0.6235 (σ̂_{θ_2} = 0.1265). The age-adjusted relative rate (RR) is exp(b̂_1) = exp(−1.8072) = 0.1641. The estimated VE is 0.8359 with 95% confidence interval (0.6553, 0.9219). The estimator β̂_A2 yields smaller standard errors and more precise confidence intervals than β̂_I1.

These data were analyzed by Halloran et al. ([2003]) and Chu and Halloran ([2004]). Assuming a binary probability model for P_β(Y|X), where X includes the vaccination status and age group indicators, and using the mean score method, Halloran et al. ([2003]) found that the estimated VE based on the nonspecific MAARI cases alone was 0.18 with a 95% confidence interval of (0.11, 0.24). The estimated VE incorporating the surveillance cultures was 0.79 with a 95% confidence interval of (0.51, 0.91). Halloran et al. also reported a sample-size-weighted VE of 0.77 with a 95% confidence interval of (0.48, 0.90). Chu and Halloran ([2004]) developed a Bayesian method to estimate vaccine efficacy; their estimated VE was 0.74 with 95% confidence interval (0.50, 0.88), and the estimated VE by the multiple imputation method was 0.71 with 95% confidence interval (0.42, 0.86).

Our estimates of the vaccine efficacy are in line with the existing methods. The estimator β̂_A2 yields smaller standard errors, and therefore more precise confidence intervals, than the existing methods of Halloran et al. ([2003]) and Chu and Halloran ([2004]). Compared with binary regression, the Poisson regression model allows multiple recurrent MAARI cases for each child. Although for this particular application the exposure time is fixed at a one-year interval, the proposed method is applicable to situations where the length of exposure time differs among children.

Conclusions

In this paper, we investigated the mean score method, the IPW method and the AIPW method for the parametric probability regression model P_β(Y|X) when the outcome of interest Y is subject to missingness. The asymptotic distributions are derived for the IPW estimator and the AIPW estimator. The selection probability often needs to be estimated for the IPW estimator, and both the selection probability and the conditional expectation of the score function need to be estimated for the AIPW estimator. We investigated the properties of the IPW and AIPW estimators when the selection probability and the conditional expectation are implemented differently.

An AIPW estimator is said to be fully augmented if the selection probability and the conditional expectation are estimated using the full set of observed variables; it is partially augmented if they are estimated using a subset of the observed variables. Corollary 1 shows that the fully augmented AIPW estimator is more efficient than the partially augmented AIPW estimator. Corollary 2 shows that the AIPW estimator is more efficient than the IPW estimator. However, when the selection probability depends only on a set of discrete random variables, the IPW estimator obtained by estimating the selection probability nonparametrically with the cell frequencies is asymptotically equivalent to the AIPW estimator augmented using the same set of discrete random variables. Proposition 1 shows that the IPW estimator, the AIPW estimator and the mean score estimator are equivalent if the selection probability and the conditional expectation are estimated using the same set of discrete random variables.

Applying the developed missing data methods, we derived the estimation procedures for the Poisson regression model with missing outcomes based on auxiliary outcomes and a validated sample for true outcomes. By assuming that the selection probability depends only on the observed discrete exposure variables, and not on the continuous exposure time, we showed that the IPW and AIPW estimators can be formulated to analyze data when only aggregated/summarized information is available. The simulation study shows that, for a moderate sample size and selection probability, the IPW and AIPW estimators perform better than the complete-case estimator; the AIPW estimator is more efficient and more stable than the IPW estimator. The proposed methods are applied to analyze a data set from an influenza vaccine study conducted in Temple-Belton, Texas during the 2000-2001 influenza season. The data set presented in Table 3 only contains summarized information at categorical levels defined by the three age groups and vaccination status. The actual number of influenza cases (the number of positive cultures) out of the number of MAARI cases cultured, along with the number of MAARI cases, is available for each category. Our analysis using the AIPW approach shows that the age-adjusted relative rate in the vaccinated group compared to the unvaccinated group equals 0.1641, which represents about an 84% reduction in the risk of developing MAARI for the vaccinated group compared to the unvaccinated group.

Appendix A

Proof of Proposition 1.

Since

$$\sum_{i=1}^{n}(1-\xi_i)\,\hat E\{S_\beta(Y|X_i)\mid Z_i,A_i\} = \sum_{i\in\bar V}\ \sum_{j\in V(Z_i,A_i)} S_\beta(Y_j|X_j)\Big/ n_V(Z_i,A_i) = \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i|X_i),$$

we have

$$\sum_{i=1}^{n} W_i^{E1} = \sum_{i\in V}\left(1 + \frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\right) S_\beta(Y_i|X_i) = \sum_{i=1}^{n}\frac{\xi_i}{\hat\pi_i^{z}}\, S_\beta(Y_i|X_i) = \sum_{i=1}^{n} W_i^{I1}. \qquad (A.1)$$

This shows that the mean score estimator β̂_E1 is the same as the IPW estimator β̂_I1. Further, since

$$\begin{aligned}\sum_{i=1}^{n}\Big(1-\frac{\xi_i}{\hat\pi_i^{z}}\Big)\hat E\{S_\beta(Y|X_i)\mid Z_i,A_i\} &= \sum_{i\in\bar V}\ \sum_{j\in V(Z_i,A_i)}\frac{S_\beta(Y_j|X_j)}{n_V(Z_i,A_i)} - \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\sum_{j\in V(Z_i,A_i)}\frac{S_\beta(Y_j|X_j)}{n_V(Z_i,A_i)} \\ &= \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i|X_i) - \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i|X_i) = 0,\end{aligned}$$

we have Σ_{i=1}^{n} W_i^{A1} = Σ_{i=1}^{n} W_i^{I1}. Thus the AIPW estimator β̂_A1, the IPW estimator β̂_I1 and the mean score estimator β̂_E1 are equivalent to each other.

Note that

$$\begin{aligned}\sum_{i=1}^{n}\Big(1-\frac{\xi_i}{\hat\pi_i^{z}}\Big)\hat E\{S_\beta(Y|X_i)\mid X_i,A_i\} &= \sum_{i\in\bar V}\ \sum_{j\in V(X_i,A_i)}\frac{S_\beta(Y_j|X_j)}{n_V(X_i,A_i)} - \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\sum_{j\in V(X_i,A_i)}\frac{S_\beta(Y_j|X_j)}{n_V(X_i,A_i)} \\ &= \sum_{i\in V}\frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)}\, S_\beta(Y_i|X_i) - \sum_{i\in V}\frac{n_V(X_i,A_i)}{n_V(X_i,A_i)}\,\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i|X_i) \\ &= \sum_{i\in V}\left\{\frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)} - \frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\right\} S_\beta(Y_i|X_i),\end{aligned} \qquad (A.2)$$

which is not zero unless Z_i^c is linearly related to Z_i, in which case β is not identifiable. Hence the AIPW estimator β̂_A2 is different from the AIPW estimator β̂_A1.

By (A.1) and (A.2), we have

$$\sum_{i=1}^{n} W_i^{A2} = \sum_{i\in V}\left\{\frac{n(Z_i,A_i)}{n_V(Z_i,A_i)} + \frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)} - \frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\right\} S_\beta(Y_i|X_i) = \sum_{i\in V}\left(1 + \frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)}\right) S_\beta(Y_i|X_i) = \sum_{i=1}^{n} W_i^{I2}.$$

Following the same arguments leading to (A.1), we also have Σ_{i=1}^{n} W_i^{E2} = Σ_{i=1}^{n} W_i^{I2}. Hence the estimators β̂_I2, β̂_E2 and β̂_A2 are equivalent. By following the steps in (A.2), we also have Σ_{i=1}^{n} (1 − ξ_i/π̂_i) Ê{S_β(Y|X_i)|X_i,A_i} = 0, so β̂_A3 is the same as β̂_I2. Therefore, there are essentially only two different estimators. □

Proof of Theorem 1.

Applying a first-order Taylor expansion, π̃_i − π_i = {∂π(X_i, A_i, ψ_0)/∂ψ}'(ψ̂ − ψ_0) + o_p(n^{-1/2}). From (13), we have

$$n^{-1/2} U^{I} = n^{-1/2}\sum_{i=1}^{n}\frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) - n^{-1/2}\sum_{i=1}^{n}\frac{\tilde\pi_i-\pi_i}{\tilde\pi_i\,\pi_i}\,\xi_i\, S_\beta(Y_i|X_i). \qquad (A.3)$$

The second term of (A.3) is

$$n^{-1/2}\sum_{i=1}^{n}\frac{\tilde\pi_i-\pi_i}{\tilde\pi_i\,\pi_i}\,\xi_i\, S_\beta(Y_i|X_i) = n^{-1/2}\sum_{i=1}^{n}\pi_i^{-2}\,\xi_i\, S_\beta(Y_i|X_i)\,\frac{\partial\pi(X_i,A_i,\psi_0)'}{\partial\psi}\,(\hat\psi-\psi_0) = E\left\{\pi_i^{-2}\,\xi_i\, S_\beta(Y_i|X_i)\,\frac{\partial\pi(X_i,A_i,\psi_0)'}{\partial\psi}\right\}\, n^{1/2}(\hat\psi-\psi_0) + o_p(1). \qquad (A.4)$$

By (11), (A.3) and (A.4), we have

$$n^{-1/2} U^{I} = n^{-1/2}\sum_{i=1}^{n}\left\{\frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) - O_i\right\} + o_p(1) = n^{-1/2}\sum_{i=1}^{n} Q_i^{I} + o_p(1).$$

Now consider the AIPW estimator β̂_A based on solving the estimating equation (14). For simplicity, we denote E_a{S_β(Y|X_i)|X_i,A_i} by E_i and Ẽ{S_β(Y|X_i)|X_i,A_i} by Ẽ_i. We note that

$$n^{-1/2} U^{A} = n^{-1/2}\sum_{i=1}^{n}\left\{\frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\pi_i}\Big) E_i\right\} + n^{-1/2}\sum_{i=1}^{n}\left\{\Big(\frac{\xi_i}{\tilde\pi_i}-\frac{\xi_i}{\pi_i}\Big)\big(S_\beta(Y_i|X_i) - E_i\big) + \Big(1-\frac{\xi_i}{\tilde\pi_i}\Big)\big(\tilde E_i - E_i\big)\right\}.$$

Suppose that π̃_i and Ẽ_i are estimates of π_i and E_i based on some parametric or nonparametric models. Then it can be shown, using Taylor expansions and standard probability arguments, that the second term is of order o_p(1) under MAR I. Hence

$$n^{-1/2} U^{A} = n^{-1/2}\sum_{i=1}^{n}\left\{\frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\pi_i}\Big) E_i\right\} + o_p(1).$$

It can be shown that under MAR I, −n^{-1}∂U^I/∂β →_P I(β) and −n^{-1}∂U^A/∂β →_P I(β). By routine derivations, we have

$$n^{1/2}(\hat\beta_I-\beta) = I^{-1}(\beta)\, n^{-1/2}\, U^{I} + o_p(1) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^{n} Q_i^{I} + o_p(1), \qquad n^{1/2}(\hat\beta_A-\beta) = I^{-1}(\beta)\, n^{-1/2}\, U^{A} + o_p(1) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^{n} Q_i^{A} + o_p(1).$$

By the central limit theorem, both n^{1/2}(β̂_I − β) and n^{1/2}(β̂_A − β) have asymptotically normal distributions with mean zero and covariance matrices I^{-1}(β)Var(Q_i^I)I^{-1}(β) and I^{-1}(β)Var(Q_i^A)I^{-1}(β), respectively.

Next, we examine the covariance matrices Var(Q_i^I) and Var(Q_i^A) to understand the efficiency gain of β̂_A over β̂_I. Note that Q_i^I = (ξ_i/π_i)S_β(Y_i|X_i) − O_i and Q_i^A = (ξ_i/π_i)S_β(Y_i|X_i) + (1 − ξ_i/π_i)E_i, so that with B_i = (1 − ξ_i/π_i)E_i we have Q_i^I = Q_i^A − B_i − O_i. Under MAR I, Cov(Q_i^A, O_i) = E(Q_i^A O_i') = E{E(Q_i^A | ξ_i, X_i, A_i) O_i'} = E(E_i O_i') = E{E_i E(O_i' | X_i, A_i)} = 0, and

$$\mathrm{Cov}(Q_i^{A}, B_i) = E\left[\left\{\frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\pi_i}\Big) E_i\right\}\Big(1-\frac{\xi_i}{\pi_i}\Big) E_i'\right] = E\left\{\Big(1-\frac{\xi_i}{\pi_i}\Big)^{2} E_i\, E_i'\right\} - E\left\{\Big(1-\frac{\xi_i}{\pi_i}\Big)^{2} S_\beta(Y_i|X_i)\, E_i'\right\} + E\left\{\Big(1-\frac{\xi_i}{\pi_i}\Big) S_\beta(Y_i|X_i)\, E_i'\right\} = 0.$$

Hence Cov(Q_i^A, B_i + O_i) = 0, and it follows that Var(Q_i^I) = Var(Q_i^A) + Var(B_i + O_i). Since Q_i^A = S_β(Y_i|X_i) − (1 − ξ_i/π_i){S_β(Y_i|X_i) − E_i} and the two terms are uncorrelated under MAR I, we have Var(Q_i^A) = Var{S_β(Y_i|X_i)} + Var[(1 − ξ_i/π_i){S_β(Y_i|X_i) − E_i}], where the first term equals I(β). This completes the proof of Theorem 1. □

Proof of Corollary 1.

Let Q_i^{A1} = (ξ_i/π_i^z)S_β(Y_i|X_i) + (1 − ξ_i/π_i^z)E{S_β(Y|X_i)|Z_i,A_i} and Q_i^{A2} = (ξ_i/π_i^z)S_β(Y_i|X_i) + (1 − ξ_i/π_i^z)E{S_β(Y|X_i)|X_i,A_i}. By (16),

$$\mathrm{Var}(Q_i^{A1}) = \mathrm{Var}\{S_\beta(Y_i|X_i)\} + \mathrm{Var}\Big[\Big(1-\frac{\xi_i}{\pi_i^{z}}\Big)\big\{S_\beta(Y_i|X_i) - E(S_\beta(Y|X_i)\mid Z_i,A_i)\big\}\Big], \qquad \mathrm{Var}(Q_i^{A2}) = \mathrm{Var}\{S_\beta(Y_i|X_i)\} + \mathrm{Var}\Big[\Big(1-\frac{\xi_i}{\pi_i^{z}}\Big)\big\{S_\beta(Y_i|X_i) - E(S_\beta(Y|X_i)\mid X_i,A_i)\big\}\Big].$$

The second term of Var(Q_i^{A1}) equals Σ_{A1}(β) and the second term of Var(Q_i^{A2}) equals Σ_{A2}(β). Then (17) and (18) follow from the main results in Theorem 1.

Also by Theorem 1, the difference between the variances of Q_i^{A1} and Q_i^{A2} accounts for the difference between the asymptotic variances of β̂_A1 and β̂_A2. Since E{(1 − ξ_i/π_i^z)² | Y_i, X_i, A_i} = E{(1 − ξ_i/π_i^z)² | X_i, A_i} = (1−π_i^z)/π_i^z under MAR I,

$$\begin{aligned}\Sigma_{A2}(\beta) &= E\Big[\frac{1-\pi_i^{z}}{\pi_i^{z}}\Big( E\big\{S_\beta^{\otimes 2}(Y_i|X_i)\mid X_i,A_i\big\} - \big[E\{S_\beta(Y|X_i)\mid X_i,A_i\}\big]^{\otimes 2}\Big)\Big] \\ &= E\Big[\frac{1-\pi_i^{z}}{\pi_i^{z}}\Big( E\big\{S_\beta^{\otimes 2}(Y_i|X_i)\mid Z_i,A_i\big\} - \big[E\{S_\beta(Y|X_i)\mid Z_i,A_i\}\big]^{\otimes 2}\Big)\Big] - E\Big[\frac{1-\pi_i^{z}}{\pi_i^{z}}\Big( \big[E\{S_\beta(Y|X_i)\mid X_i,A_i\}\big]^{\otimes 2} - \big[E\{S_\beta(Y|X_i)\mid Z_i,A_i\}\big]^{\otimes 2}\Big)\Big] \\ &= \Sigma_{A1}(\beta) - E\Big[\frac{1-\pi_i^{z}}{\pi_i^{z}}\,\mathrm{Var}\big\{E(S_\beta(Y|X_i)\mid X_i,A_i)\mid Z_i,A_i\big\}\Big],\end{aligned}$$

which is smaller than Σ_{A1}(β) if the covariates Z_i are a proper subset of X_i. □

Proof of Corollary 2.

Consider the definitions of B_i and O_i given following (16). We note that

$$E\Big\{\pi_i^{-1} S_\beta(Y_i|X_i)\,\frac{\partial\pi(Z_i,A_i,\psi_0)'}{\partial\psi}\Big\} = E\Big\{\pi_i^{-1} E\big(S_\beta(Y_i|X_i)\mid Z_i,A_i\big)\,\frac{\partial\pi(Z_i,A_i,\psi_0)'}{\partial\psi}\Big\} = \sum_{z,a}\rho(z,a)\,\pi^{-1}(z,a,\psi_0)\, E\{S_\beta(Y|X)\mid Z=z,A=a\}\,\frac{\partial\pi(z,a,\psi_0)'}{\partial\psi},$$

and

$$n^{-1/2}\sum_{i=1}^{n} S_i^{\psi} = n^{-1/2}\sum_{i=1}^{n}\frac{\xi_i-\pi(Z_i,A_i,\psi_0)}{\pi(Z_i,A_i,\psi_0)\{1-\pi(Z_i,A_i,\psi_0)\}}\,\frac{\partial\pi(Z_i,A_i,\psi_0)}{\partial\psi} = \sum_{z,a}\left\{n^{-1/2}\sum_{j=1}^{n}\frac{\xi_j-\pi(z,a,\psi_0)}{\pi(z,a,\psi_0)\{1-\pi(z,a,\psi_0)\}}\, I(Z_j=z,A_j=a)\right\}\frac{\partial\pi(z,a,\psi_0)}{\partial\psi}.$$

From the discussion preceding Corollary 2, ψ = {ψ_{z,a}} and π(z, a, ψ) = ψ_{z,a}, where ψ_{z,a} = P(ξ_i = 1 | Z_i = z, A_i = a) for all distinct pairs (z, a). Hence ∂π(z, a, ψ_0)/∂ψ is a column vector with 1 in the position for ψ_{z,a} and 0 elsewhere, and I_ψ is a diagonal matrix whose inverse is also diagonal.

We have

$$n^{-1/2}\sum_{i=1}^{n} O_i = E\Big\{\pi_i^{-1} S_\beta(Y_i|X_i)\,\frac{\partial\pi(Z_i,A_i,\psi_0)'}{\partial\psi}\Big\}(I_\psi)^{-1}\, n^{-1/2}\sum_{i=1}^{n} S_i^{\psi} = \sum_{z,a} E\{S_\beta(Y|X)\mid Z=z,A=a\}\; n^{-1/2}\sum_{j=1}^{n}\frac{\xi_j-\pi(z,a,\psi_0)}{\pi(z,a,\psi_0)}\, I(Z_j=z,A_j=a),$$

and

$$n^{-1/2}\sum_{i=1}^{n} B_i = -\,n^{-1/2}\sum_{i=1}^{n}\frac{\xi_i-\pi(Z_i,A_i)}{\pi(Z_i,A_i)}\, E_i = -\,n^{-1/2}\sum_{i=1}^{n}\frac{\xi_i-\pi(Z_i,A_i)}{\pi(Z_i,A_i)}\, E\{S_\beta(Y_i|X_i)\mid X_i,A_i\} = -\sum_{z,a}\; n^{-1/2}\sum_{j=1}^{n}\frac{\xi_j-\pi(z,a,\psi_0)}{\pi(z,a,\psi_0)}\, I(Z_j=z,A_j=a)\, E\{S_\beta(Y_j|X_j)\mid Z_j=z,Z_j^{c},A_j=a\}.$$

Then (21) holds. It follows that β̂_A is more efficient than β̂_I unless Var{S_β(Y_j|X_j) | Z_j = z, A_j = a} = 0 for all (z, a) for which P(Z_i = z, A_i = a) ≠ 0. □

Appendix B

Proof of the simplified variance formula (27)

The information matrix I(β) is a (k+1)×(k+1) symmetric matrix given by

$$I(b_0,b_1,\theta) = \begin{pmatrix} \sum_{l,m} q_{lm} & \sum_{m=1}^{k} q_{1m} & q_{11}+q_{01} & \cdots & q_{1r}+q_{0r} \\ \sum_{m=1}^{k} q_{1m} & \sum_{m=1}^{k} q_{1m} & q_{11} & \cdots & q_{1r} \\ q_{11}+q_{01} & q_{11} & q_{11}+q_{01} & & 0 \\ \vdots & \vdots & & \ddots & \\ q_{1r}+q_{0r} & q_{1r} & 0 & & q_{1r}+q_{0r} \end{pmatrix},$$

where r = k−1 and q_{lm} = E(T_i e^{b_{lm}} I{individual i in category (l, m)}). For ease of presentation, in the following we drop the arguments (b_0, b_1, θ) and write I for I(b_0, b_1, θ). Let I_{ij} be the cofactor of the (i, j)th element of I.

First we need to find the elements of the second row of the inverse of the information matrix I(b_0, b_1, θ). Note that for a matrix A = (a_{ij})_{n×n}, A^{-1} = (A_{ji})_{n×n}/|A|, where A_{ij} is the cofactor of a_{ij} in A and |A| is the determinant of A. Also note that for a block matrix

$$B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix},$$

the determinant is |B| = |B_{22}| · |B_{11} − B_{12}B_{22}^{-1}B_{21}|. We have

$$|I| = \prod_{m=1}^{k-1}(q_{1m}+q_{0m})\;\begin{vmatrix} \displaystyle\sum_{l,m} q_{lm} - \sum_{l=0}^{1}\sum_{m=1}^{k-1} q_{lm} & \displaystyle\sum_{m=1}^{k} q_{1m} - \sum_{m=1}^{k-1} q_{1m} \\ \displaystyle\sum_{m=1}^{k} q_{1m} - \sum_{m=1}^{k-1} q_{1m} & \displaystyle\sum_{m=1}^{k} q_{1m} - \sum_{m=1}^{k-1}\frac{q_{1m}^{2}}{q_{1m}+q_{0m}} \end{vmatrix} = \prod_{m=1}^{k-1}(q_{1m}+q_{0m})\;\begin{vmatrix} q_{1k}+q_{0k} & q_{1k} \\ q_{1k} & q_{1k} + \displaystyle\sum_{m=1}^{k-1}\frac{q_{0m}\, q_{1m}}{q_{1m}+q_{0m}} \end{vmatrix} = \prod_{m=1}^{k}(q_{1m}+q_{0m})\;\sum_{m=1}^{k}\frac{q_{0m}\, q_{1m}}{q_{1m}+q_{0m}},$$

and

I 21 = m = 1 k q 1 m q 11 + q 01 q 1 r + q 0 r q 11 q 11 + q 01 0 q 1 r 0 q 1 r + q 0 r = m = 1 k 1 ( q 1 m + q 0 m ) m = 1 k q 1 m m = 1 k 1 q 1 m = q 1 k m = 1 k 1 ( q 1 m + q 0 m ) .

Hence, the $(2,1)$th element of $I^{-1}$ is

$$(I^{-1})_{21}=\frac{I_{21}}{|I|}=-\frac{q_{1k}}{W(q_{1k}+q_{0k})},$$

where $W=\sum_{m=1}^{k}q_{0m}q_{1m}/(q_{1m}+q_{0m})$.
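As a quick numerical sanity check (illustrative only; the q_{lm} values below are arbitrary positive numbers, not estimates from any data), the following Python sketch assembles I(b_0, b_1, θ) for a small k and compares numpy's determinant and the (2,1) entry of the inverse with the closed forms above:

import numpy as np

k = 4
rng = np.random.default_rng(1)
q = rng.uniform(0.5, 2.0, size=(2, k))      # q[l, m-1] plays the role of q_{lm}

I = np.zeros((k + 1, k + 1))                # rows/columns ordered as (b0, b1, theta_1, ..., theta_{k-1})
I[0, 0] = q.sum()                           # sum_{l,m} q_{lm}
I[0, 1] = I[1, 0] = I[1, 1] = q[1].sum()    # sum_m q_{1m}
for m in range(k - 1):                      # theta block, m = 1, ..., k-1
    I[0, m + 2] = I[m + 2, 0] = q[1, m] + q[0, m]
    I[1, m + 2] = I[m + 2, 1] = q[1, m]
    I[m + 2, m + 2] = q[1, m] + q[0, m]

W = np.sum(q[0] * q[1] / (q[0] + q[1]))
print(np.isclose(np.linalg.det(I), np.prod(q[0] + q[1]) * W))            # |I| formula
print(np.isclose(np.linalg.inv(I)[1, 0],
                 -q[1, k - 1] / (W * (q[1, k - 1] + q[0, k - 1]))))      # (I^{-1})_{21} formula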

To calculate the $(2,2)$th element of $I^{-1}$, we have

$$I_{22}=\begin{vmatrix}
\sum_{l,m}q_{lm} & q_{11}+q_{01} & \cdots & q_{1r}+q_{0r}\\
q_{11}+q_{01} & q_{11}+q_{01} & & 0\\
\vdots & & \ddots & \\
q_{1r}+q_{0r} & 0 & & q_{1r}+q_{0r}
\end{vmatrix}
=\prod_{m=1}^{k-1}(q_{1m}+q_{0m})\left(\sum_{l,m}q_{lm}-\sum_{l=0}^{1}\sum_{m=1}^{k-1}q_{lm}\right)
=\prod_{m=1}^{k}(q_{1m}+q_{0m}).$$

Hence, the $(2,2)$th element of $I^{-1}$ is $(I^{-1})_{22}=I_{22}/|I|=1/W$.

To calculate the $(2,3)$th element of $I^{-1}$, we have

$$\begin{aligned}
I_{23}&=-\begin{vmatrix}
\sum_{l,m}q_{lm} & \sum_{m=1}^{k}q_{1m} & q_{12}+q_{02} & \cdots & q_{1r}+q_{0r}\\
q_{11}+q_{01} & q_{11} & 0 & \cdots & 0\\
q_{12}+q_{02} & q_{12} & q_{12}+q_{02} & & 0\\
\vdots & \vdots & & \ddots & \\
q_{1r}+q_{0r} & q_{1r} & 0 & & q_{1r}+q_{0r}
\end{vmatrix}\\
&=-\prod_{m=2}^{k-1}(q_{1m}+q_{0m})
\begin{vmatrix}
\sum_{l,m}q_{lm}-\sum_{m=2}^{k-1}(q_{1m}+q_{0m}) & \sum_{m=1}^{k}q_{1m}-\sum_{m=2}^{k-1}q_{1m}\\[2pt]
q_{11}+q_{01} & q_{11}
\end{vmatrix}
=-\prod_{m=2}^{k-1}(q_{1m}+q_{0m})\,(q_{0k}q_{11}-q_{1k}q_{01}).
\end{aligned}$$

Hence

$$(I^{-1})_{23}=\frac{I_{23}}{|I|}=-\frac{q_{11}}{W(q_{11}+q_{01})}+\frac{q_{1k}}{W(q_{1k}+q_{0k})}.$$

To calculate the $(2,4)$th element of $I^{-1}$, we have

$$I_{24}=\begin{vmatrix}
\sum_{l,m}q_{lm} & \sum_{m=1}^{k}q_{1m} & q_{11}+q_{01} & q_{13}+q_{03} & \cdots & q_{1r}+q_{0r}\\
q_{11}+q_{01} & q_{11} & q_{11}+q_{01} & 0 & \cdots & 0\\
q_{12}+q_{02} & q_{12} & 0 & 0 & \cdots & 0\\
q_{13}+q_{03} & q_{13} & 0 & q_{13}+q_{03} & & 0\\
\vdots & \vdots & & & \ddots & \\
q_{1r}+q_{0r} & q_{1r} & 0 & 0 & & q_{1r}+q_{0r}
\end{vmatrix}.$$

By switching the 2nd and the 3rd rows (which changes the sign of the determinant), we have

$$I_{24}=-\begin{vmatrix}
\sum_{l,m}q_{lm} & \sum_{m=1}^{k}q_{1m} & q_{11}+q_{01} & q_{13}+q_{03} & \cdots & q_{1r}+q_{0r}\\
q_{12}+q_{02} & q_{12} & 0 & 0 & \cdots & 0\\
q_{11}+q_{01} & q_{11} & q_{11}+q_{01} & 0 & \cdots & 0\\
q_{13}+q_{03} & q_{13} & 0 & q_{13}+q_{03} & & 0\\
\vdots & \vdots & & & \ddots & \\
q_{1r}+q_{0r} & q_{1r} & 0 & 0 & & q_{1r}+q_{0r}
\end{vmatrix}.$$

Hence

$$(I^{-1})_{24}=\frac{I_{24}}{|I|}=-\frac{q_{12}}{W(q_{12}+q_{02})}+\frac{q_{1k}}{W(q_{1k}+q_{0k})}.$$

In general, to calculate $I_{2(m+2)}$ for $m=1,\ldots,k-1$, we can obtain a matrix with a $(k-2)\times(k-2)$ diagonal lower-right block by switching rows of the minor in $I_{2(m+2)}$ an even number of times when $m$ is odd and an odd number of times when $m$ is even. Then, similarly to the calculation of $(I^{-1})_{24}$, we have

$$(I^{-1})_{2(m+2)}=-\frac{q_{1m}}{W(q_{1m}+q_{0m})}+\frac{q_{1k}}{W(q_{1k}+q_{0k})}.$$
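Continuing the numerical sanity check from above (the matrix is rebuilt here so the snippet runs on its own; again the q_{lm} are arbitrary illustrative numbers), the remaining second-row entries of I^{-1} can be verified against their closed forms:

import numpy as np

k = 4
rng = np.random.default_rng(1)
q = rng.uniform(0.5, 2.0, size=(2, k))      # same arbitrary q_{lm} as in the earlier sketch

I = np.zeros((k + 1, k + 1))
I[0, 0] = q.sum()
I[0, 1] = I[1, 0] = I[1, 1] = q[1].sum()
for m in range(k - 1):
    I[0, m + 2] = I[m + 2, 0] = q[1, m] + q[0, m]
    I[1, m + 2] = I[m + 2, 1] = q[1, m]
    I[m + 2, m + 2] = q[1, m] + q[0, m]

W = np.sum(q[0] * q[1] / (q[0] + q[1]))
Iinv = np.linalg.inv(I)
last = q[1, k - 1] / (W * (q[1, k - 1] + q[0, k - 1]))          # q_{1k} / {W (q_{1k}+q_{0k})}

print(np.isclose(Iinv[1, 1], 1.0 / W))                          # (I^{-1})_{22} = 1/W
for m in range(k - 1):                                          # (I^{-1})_{2(m+2)}, m = 1, ..., k-1
    print(np.isclose(Iinv[1, m + 2],
                     -q[1, m] / (W * (q[1, m] + q[0, m])) + last))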

For $l=1$ and $1\le m\le k$, the $(i,j)$th element of $G_{lm}$ is $g_{ij}=1$ for $(i,j)=(1,1),(1,2),(1,m+2),(2,1),(2,2),(2,m+2),(m+2,1),(m+2,2),(m+2,m+2)$, and $g_{ij}=0$ elsewhere. We have the $(2,2)$th element of $I^{-1}G_{lm}I^{-1}$ as

$$\begin{aligned}
&\left\{(I^{-1})_{21}\right\}^{2}+\left\{(I^{-1})_{22}\right\}^{2}+\left\{(I^{-1})_{2(m+2)}\right\}^{2}
+2(I^{-1})_{21}(I^{-1})_{2(m+2)}+2(I^{-1})_{22}(I^{-1})_{2(m+2)}+2(I^{-1})_{21}(I^{-1})_{22}\\
&\qquad=\frac{1}{W^{2}}\left(\frac{q_{0m}}{q_{1m}+q_{0m}}\right)^{2}.
\end{aligned}$$
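The last equality can be seen by substituting the closed forms just obtained; the $q_{1k}$ terms cancel (a small check, not written out in the original):

$$(I^{-1})_{21}+(I^{-1})_{22}+(I^{-1})_{2(m+2)}
=-\frac{q_{1k}}{W(q_{1k}+q_{0k})}+\frac{1}{W}-\frac{q_{1m}}{W(q_{1m}+q_{0m})}+\frac{q_{1k}}{W(q_{1k}+q_{0k})}
=\frac{q_{0m}}{W(q_{1m}+q_{0m})}.$$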

Since for $l=0$ and $1\le m\le k$, $g_{ij}=1$ for $(i,j)=(1,1),(1,m+2),(m+2,1),(m+2,m+2)$, and $g_{ij}=0$ elsewhere, in this case the $(2,2)$th element of $I^{-1}G_{lm}I^{-1}$ is

$$\left\{(I^{-1})_{21}\right\}^{2}+\left\{(I^{-1})_{2(m+2)}\right\}^{2}+2(I^{-1})_{21}(I^{-1})_{2(m+2)}
=\left\{(I^{-1})_{21}+(I^{-1})_{2(m+2)}\right\}^{2}
=\frac{1}{W^{2}}\left(\frac{q_{1m}}{q_{1m}+q_{0m}}\right)^{2}.$$

Hence the $(2,2)$th element of the asymptotic covariance matrix of $\hat\beta_{A_2}$ is given by $\sigma_{b_1}^{2}=1/W+U/W^{2}$, where

$$U=\sum_{\alpha,l,m}\rho(\alpha,l,m)\,\frac{1-\rho_{V}(\alpha,l,m)}{\rho_{V}(\alpha,l,m)}\,\alpha\,p_{lm}(1-p_{lm})\left(\frac{q_{(1-l)m}}{q_{0m}+q_{1m}}\right)^{2}.$$

Note that $P(Y_i=1\mid A_i=1,\, i\in\text{category }(l,m))$ can be estimated by $y_{lm}/n_{lm}^{v}$ and $\rho(l,m)$ by $\alpha_{lm}/n$. Thus $q_{lm}$ can be estimated by $(\alpha_{lm}/n)(y_{lm}/n_{lm}^{v})$. Then we can estimate $W$ by $\hat W=n^{-1}\sum_{m=1}^{k}\alpha_{0m}\alpha_{1m}y_{0m}y_{1m}/(\alpha_{0m}y_{0m}n_{1m}^{v}+\alpha_{1m}y_{1m}n_{0m}^{v})$. By replacing $\rho_{l,m}^{v}$ with $n_{lm}^{v}/\alpha_{lm}$ and $p_{lm}$ with $y_{lm}/n_{lm}^{v}$, we can estimate $U$ by

$$\begin{aligned}
\hat U&=n^{-1}\sum_{l,m}\alpha_{lm}\,\frac{\alpha_{lm}-n_{lm}^{v}}{n_{lm}^{v}}\,\frac{y_{lm}}{n_{lm}^{v}}\,\frac{n_{lm}^{v}-y_{lm}}{n_{lm}^{v}}
\left(\frac{\alpha_{(1-l)m}\,y_{(1-l)m}/n_{(1-l)m}^{v}}{\alpha_{1m}y_{1m}/n_{1m}^{v}+\alpha_{0m}y_{0m}/n_{0m}^{v}}\right)^{2}\\
&=n^{-1}\sum_{m=1}^{k}\left\{\frac{\alpha_{1m}-n_{1m}^{v}}{n_{1m}^{v}}\,\frac{n_{1m}^{v}-y_{1m}}{n_{1m}^{v}}\,\frac{\alpha_{0m}y_{0m}}{n_{0m}^{v}}
+\frac{\alpha_{0m}-n_{0m}^{v}}{n_{0m}^{v}}\,\frac{n_{0m}^{v}-y_{0m}}{n_{0m}^{v}}\,\frac{\alpha_{1m}y_{1m}}{n_{1m}^{v}}\right\}
\frac{\dfrac{\alpha_{1m}y_{1m}}{n_{1m}^{v}}\,\dfrac{\alpha_{0m}y_{0m}}{n_{0m}^{v}}}{\left(\dfrac{\alpha_{1m}y_{1m}}{n_{1m}^{v}}+\dfrac{\alpha_{0m}y_{0m}}{n_{0m}^{v}}\right)^{2}}\\
&=n^{-1}\sum_{m=1}^{k}\left\{\left(\frac{1}{y_{1m}}-\frac{1}{n_{1m}^{v}}+\frac{1}{\alpha_{1m}}-\frac{n_{1m}^{v}}{\alpha_{1m}y_{1m}}\right)
+\left(\frac{1}{y_{0m}}-\frac{1}{n_{0m}^{v}}+\frac{1}{\alpha_{0m}}-\frac{n_{0m}^{v}}{\alpha_{0m}y_{0m}}\right)\right\}
\left(\frac{\alpha_{1m}y_{1m}\,\alpha_{0m}y_{0m}}{\alpha_{1m}y_{1m}n_{0m}^{v}+\alpha_{0m}y_{0m}n_{1m}^{v}}\right)^{2}.
\end{aligned}$$

From $\hat W$ and $\hat U$, we obtain an estimate of $\sigma_{b_1}^{2}$ as follows:

$$\begin{aligned}
\hat\sigma_{b_1}^{2}&=\frac{1}{n\hat W^{2}}\sum_{m=1}^{k}\left\{\left(\frac{1}{y_{1m}}-\frac{1}{n_{1m}^{v}}+\frac{1}{\alpha_{1m}}\right)
+\left(\frac{1}{y_{0m}}-\frac{1}{n_{0m}^{v}}+\frac{1}{\alpha_{0m}}\right)\right\}
\left(\frac{\alpha_{1m}y_{1m}\,\alpha_{0m}y_{0m}}{\alpha_{1m}y_{1m}n_{0m}^{v}+\alpha_{0m}y_{0m}n_{1m}^{v}}\right)^{2}\\
&\quad+\frac{1}{\hat W}-\frac{1}{n\hat W^{2}}\sum_{m=1}^{k}\left(\frac{n_{1m}^{v}}{\alpha_{1m}y_{1m}}+\frac{n_{0m}^{v}}{\alpha_{0m}y_{0m}}\right)
\left(\frac{\alpha_{1m}y_{1m}\,\alpha_{0m}y_{0m}}{\alpha_{1m}y_{1m}n_{0m}^{v}+\alpha_{0m}y_{0m}n_{1m}^{v}}\right)^{2}\\
&=\frac{1}{n\hat W^{2}}\sum_{m=1}^{k}\left\{\left(\frac{1}{y_{1m}}-\frac{1}{n_{1m}^{v}}+\frac{1}{\alpha_{1m}}\right)
+\left(\frac{1}{y_{0m}}-\frac{1}{n_{0m}^{v}}+\frac{1}{\alpha_{0m}}\right)\right\}
\left(\frac{\alpha_{1m}y_{1m}\,\alpha_{0m}y_{0m}}{\alpha_{1m}y_{1m}n_{0m}^{v}+\alpha_{0m}y_{0m}n_{1m}^{v}}\right)^{2},
\end{aligned}$$

where the last equality holds because, by the definition of $\hat W$, the second and third terms of the preceding expression cancel.

The variance of $\hat b_1$ is estimated by $\widehat{\mathrm{Var}}(\hat b_1)=n^{-1}\hat\sigma_{b_1}^{2}$. □
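Because the formula involves only the cell counts $\alpha_{lm}$ (subjects with $A_i=1$ in category $(l,m)$), $n_{lm}^{v}$ (those validated) and $y_{lm}$ (validated subjects with $Y_i=1$), it can be evaluated directly from summarized records. The Python sketch below does this with made-up counts (not the Temple-Belton data) and also confirms numerically that the two expressions for $\hat U$ above agree; the interpretation of the counts follows our reading of the notation in this appendix:

# Illustrative sketch of the variance formula on summarized (categorical) data.
# Per cell (l, m): alpha[l, m] = subjects with A = 1 in category (l, m),
# nv[l, m] = number of those validated, y[l, m] = validated subjects with Y = 1.
# All numbers are made up for illustration.
import numpy as np

alpha = np.array([[120., 140., 110.],      # l = 0 row, m = 1..k
                  [ 90., 100., 130.]])     # l = 1 row
nv    = np.array([[ 60.,  70.,  55.],
                  [ 45.,  50.,  65.]])
y     = np.array([[ 20.,  30.,  15.],
                  [ 10.,  18.,  25.]])
n = 5000.0                                  # total sample size (made up)

E = alpha[1] * y[1] * nv[0] + alpha[0] * y[0] * nv[1]
W_hat = np.sum(alpha[0] * alpha[1] * y[0] * y[1] / E) / n

# U-hat, first as the defining sum over (l, m) ...
D = alpha[1] * y[1] / nv[1] + alpha[0] * y[0] / nv[0]
U_hat = 0.0
for l in (0, 1):
    U_hat += np.sum(alpha[l] * (alpha[l] - nv[l]) / nv[l]
                    * (y[l] / nv[l]) * ((nv[l] - y[l]) / nv[l])
                    * (alpha[1 - l] * y[1 - l] / nv[1 - l] / D) ** 2) / n

# ... and then via the simplified form; the two agree.
bracket = ((1 / y[1] - 1 / nv[1] + 1 / alpha[1] - nv[1] / (alpha[1] * y[1]))
           + (1 / y[0] - 1 / nv[0] + 1 / alpha[0] - nv[0] / (alpha[0] * y[0])))
U_hat2 = np.sum(bracket * (alpha[1] * y[1] * alpha[0] * y[0] / E) ** 2) / n
print(np.isclose(U_hat, U_hat2))

sigma2_b1 = 1 / W_hat + U_hat / W_hat ** 2  # estimate of sigma^2_{b1}
var_b1 = sigma2_b1 / n                      # estimate of Var(b1-hat)
print(sigma2_b1, var_b1)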

References

  • Carpenter JR, Kenward MG, Vansteelandt S: A comparison of multiple imputation and doubly robust estimation for analyses with missing data. J. R. Stat. Soc. A 2006, 169: 571–584. 10.1111/j.1467-985X.2006.00407.x


  • Clayton D, Spiegelhalter D, Dunn G, Pickles A: Analysis of longitudinal binary data from multiphase sampling (with discussion). J. R. Stat. Soc. B 1998, 60: 71–87. 10.1111/1467-9868.00109


  • Chu H, Halloran EM: Estimating vaccine efficacy using auxiliary outcome data and a small validation sample. Stat. Med 2004, 23: 2697–2711. 10.1002/sim.1849


  • Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39: 1–38.


  • Ellenberg SS, Hamilton JM: Surrogate endpoints in clinical trials: cancer. Stat. Med 1989, 8: 405–413. 10.1002/sim.4780080404


  • Fleming TR: Evaluating therapeutic interventions: some issues and experiences. Stat. Sci 1992, 7: 428–456. 10.1214/ss/1177011128


  • Halloran EM, Longini IM, Gaglani MJ, Piedra PA, Chu H, Herschler GB, Glezen WP: Estimating efficacy of trivalent, cold-adapted, influenza virus vaccine (CAIV-T) against influenza A (H1N1) and B using surveillance cultures. Am. J. Epid 2003, 158: 305–311. 10.1093/aje/kwg163


  • Horvitz DG, Thompson DJ: A generalization of sampling without replacement from a finite universe. J. Am. Stat. Assoc 1952, 47: 663–685. 10.1080/01621459.1952.10483446


  • Little RJA, Rubin DB: Statistical analysis with missing data. Wiley, New York; 2002.


  • Pepe MS, Reilly M, Fleming TR: Auxiliary outcome data and the mean score method. J. Stat. Plan. Inference 1994, 42: 137–160. 10.1016/0378-3758(94)90194-5


  • Prentice RL: Surrogate endpoints in clinical trials: definition and operational criteria. Stat. Med 1989, 8: 431–440. 10.1002/sim.4780080407


  • Rubin DB: Multiple imputation for nonresponse in surveys. Wiley, New York; 1987.


  • Robins JM, Rotnitzky A, Zhao LP: Estimation of regression coefficients when some regressors are not always observed. J. Am. Stat. Assoc 1994, 89: 846–866. 10.1080/01621459.1994.10476818


  • Scharfstein DO, Rotnitzky A, Robins JM: Adjusting for nonignorable drop-out using semiparametric nonresponse models: rejoinder. J. Am. Stat. Assoc 1999, 94: 1135–1146.



Acknowledgements

The authors thank the two referees for their valuable comments. The authors also thank Yuriko Nagano for some help with the data analysis. This research was partially supported by the National Science Foundation grant DMS-1208978, the National Institute of Health NIAID grant R37 AI054165, and a fund provided by The University of North Carolina at Charlotte.

Author information


Corresponding author

Correspondence to Yanqing Sun.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

YS proved Proposition 1, Theorem 1, and Corollaries 1 and 2, and wrote the manuscript. LQ developed the Poisson regression using the automated data with missing outcomes, performed the computations for the simulation study and for the application, and wrote the manuscript. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Qi, L., Sun, Y. Missing data approaches for probability regression models with missing outcomes with applications. J Stat Distrib App 1, 23 (2014). https://doi.org/10.1186/s40488-014-0023-3
