capReg.Rd
This function identifies the first \(k\) projection directions that satisfy the log-linear model assumption.
capReg(Y, X, nD = 1, method = c("CAP", "CAP-C"), CAP.OC = FALSE,
max.itr = 1000, tol = 1e-04, trace = FALSE, score.return = TRUE,
gamma0.mat = NULL, ninitial = NULL)
Y | a data list of length \(n\). Each list element is a \(T\times p\) matrix, the data matrix of \(T\) observations from \(p\) features. |
---|---|
X | a \(n\times q\) data matrix, the covariate matrix of \(n\) subjects with \(q-1\) predictors. The first column is all ones. |
nD | an integer, the number of directions to be identified. Default is 1. |
method | a character string specifying the optimization method, either "CAP" or "CAP-C". |
CAP.OC | a logic variable, whether the orthogonal constraint is imposed when identifying higher-order PCs. Default is FALSE. |
max.itr | an integer, the maximum number of iterations. Default is 1000. |
tol | a numeric value of convergence tolerance. Default is 1e-4. |
trace | a logic variable, whether the solution path is reported. Default is FALSE. |
score.return | a logic variable, whether the log-variance in the transformed space is reported. Default is TRUE. |
gamma0.mat | a data matrix, the initial value of \(\gamma\). Default is NULL. |
ninitial | an integer, the number of different initial values to be tested. When it is greater than 1, multiple initial values are tested, and the one that yields the minimum objective function is reported. Default is NULL. |
Consider \(y_{it}\) to be \(p\)-dimensional independent and identically distributed random samples from a multivariate normal distribution with mean zero and covariance matrix \(\Sigma_{i}\). We assume there exists a \(p\)-dimensional vector \(\gamma\) such that \(z_{it}:=\gamma'y_{it}\) satisfies the multiplicative heteroscedasticity model $$\log(\mathrm{Var}(z_{it}))=\log(\gamma'\Sigma_{i}\gamma)=\beta_{0}+x_{i}'\beta_{1},$$ where \(x_{i}\) contains the explanatory variables of subject \(i\), and \(\beta_{0}\) and \(\beta_{1}\) are model coefficients.
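To make the model concrete, the following sketch (illustrative only, not package code; all variable names here are hypothetical) simulates subject-level data in which the variance along a known direction \(\gamma\) follows the log-linear model above:

```r
set.seed(1)
n <- 50; p <- 3; Tobs <- 100
x <- rbinom(n, 1, 0.5)            # one binary covariate per subject
gamma <- c(1, 0, 0)               # true projection direction
beta0 <- 0; beta1 <- 0.5          # log-linear model coefficients
Y <- lapply(1:n, function(i) {
  # subject-specific covariance: variance along gamma is exp(beta0 + beta1 * x_i)
  Sigma <- diag(p)
  Sigma[1, 1] <- exp(beta0 + beta1 * x[i])
  matrix(rnorm(Tobs * p), Tobs, p) %*% chol(Sigma)
})
# empirical check: log sample variance of z_it = gamma' y_it vs. the covariate
logv <- sapply(Y, function(Yi) log(var(Yi %*% gamma)))
coef(lm(logv ~ x))  # should be close to (beta0, beta1)
```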
The parameters \(\gamma\) and \(\beta=(\beta_{0},\beta_{1}')'\) are of primary interest, and we propose to estimate them by maximizing the log-likelihood function $$\ell(\beta,\gamma)=-\frac{1}{2}\sum_{i=1}^{n}T_{i}(x_{i}'\beta)-\frac{1}{2}\sum_{i=1}^{n}\exp(-x_{i}'\beta)\gamma'S_{i}\gamma,$$ where \(S_{i}=\sum_{t=1}^{T_{i}}y_{it}y_{it}'\). To estimate \(\gamma\), we impose the constraint $$\gamma'H\gamma=1,$$ where \(H\) is a positive definite matrix. In this study, we consider the choice $$H=\bar{\Sigma},\quad \bar{\Sigma}=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{T_{i}}S_{i}.$$
For higher order projecting directions, an orthogonal constraint is imposed as well.
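The objective and constraint above are straightforward to write down directly. The sketch below (an assumption-laden illustration, not the package's internal implementation) evaluates \(\ell(\beta,\gamma)\) and the constraint matrix \(H=\bar{\Sigma}\) for a data list Y (each element a \(T_i\times p\) matrix) and design matrix X with a leading column of ones:

```r
# Log-likelihood l(beta, gamma) as defined in the Details section
loglik <- function(beta, gamma, Y, X) {
  S  <- lapply(Y, crossprod)                # S_i = sum_t y_it y_it'
  Ti <- sapply(Y, nrow)                     # T_i, observations per subject
  lp <- as.vector(X %*% beta)               # linear predictor x_i' beta
  qf <- sapply(S, function(Si) drop(t(gamma) %*% Si %*% gamma))
  -0.5 * sum(Ti * lp) - 0.5 * sum(exp(-lp) * qf)
}

# Constraint matrix H = mean over subjects of S_i / T_i
Hmat <- function(Y) {
  Reduce(`+`, lapply(Y, function(Yi) crossprod(Yi) / nrow(Yi))) / length(Y)
}
```

A candidate direction can then be rescaled to satisfy the constraint via gamma / sqrt(drop(t(gamma) %*% Hmat(Y) %*% gamma)).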
When method = "CAP", the output is a list containing:
the estimate of \(\gamma\) vectors, which is a \(p\times nD\) matrix.
the estimate of \(\beta\) for each projection direction, which is a \(q\times nD\) matrix, where \(q-1\) is the number of explanatory variables.
an ad hoc check of the orthogonality between the \(\gamma\) vectors.
both the average (geometric mean) and the individual level of ``deviation from diagonality''.
when score.return = TRUE, a \(n\times nD\) matrix of the \(\log(\hat{\gamma}'S_{i}\hat{\gamma})\) values.
When method = "CAP-C", the output is a list containing:
the estimate of \(\gamma\) vectors, which is a \(p\times nD\) matrix.
the estimate of \(\beta\) for each projection direction, which is a \(q\times nD\) matrix, where \(q-1\) is the number of explanatory variables.
an ad hoc check of the orthogonality between the \(\gamma\) vectors.
a vector of length nD, the order index of the identified \(\gamma\) vectors among all the common principal components.
the order index of all the principal components that satisfy the log-linear model and the eigenvalue condition.
a logic output, whether the identified \(\gamma\) vectors are estimated from the minmax approach. If FALSE, the eigenvalue condition is not satisfied by any principal component.
when score.return = TRUE, a \(n\times nD\) matrix of the \(\log(\hat{\gamma}'S_{i}\hat{\gamma})\) values.
Zhao et al. (2018) Covariate Assisted Principal Regression for Covariance Matrix Outcomes <doi:10.1101/425033>
Yi Zhao, Johns Hopkins University, <zhaoyi1026@gmail.com>
Bingkai Wang, Johns Hopkins University, <bwang51@jhmi.edu>
Stewart Mostofsky, Johns Hopkins University, <mostofsky@kennedykrieger.org>
Brian Caffo, Johns Hopkins University, <bcaffo@gmail.com>
Xi Luo, Brown University, <xi.rossi.luo@gmail.com>
#############################################
data(env.example)
X <- get("X", env.example)
Y <- get("Y", env.example)

# method = "CAP"
# without orthogonal constraint
re1 <- capReg(Y, X, nD = 2, method = "CAP", CAP.OC = FALSE)
# with orthogonal constraint
re2 <- capReg(Y, X, nD = 2, method = "CAP", CAP.OC = TRUE)

# method = "CAP-C"
re3 <- capReg(Y, X, nD = 2, method = "CAP-C")
#############################################