


Let $L(\bbeta)$ be one of the likelihood functions described in the previous subsections. Let $l(\bbeta) = \log L(\bbeta)$. Finding $\bbeta$ such that $l(\bbeta)$ is maximized is equivalent to finding the solution $\hat{\bbeta}$ to the likelihood equations

\[ \frac{\partial l(\bbeta)}{\partial \bbeta} = 0 \]
With $\hat{\bbeta}^{0} = \mb{0}$ as the initial solution, the iterative scheme is expressed as

\[ \hat{\bbeta}^{j+1} = \hat{\bbeta}^{j} - \left[ \frac{\partial^{2} l(\hat{\bbeta}^{j})}{\partial \bbeta^{2}} \right]^{-1} \frac{\partial l(\hat{\bbeta}^{j})}{\partial \bbeta} \]
The term after the minus sign is the Newton-Raphson step. If the likelihood function evaluated at $\hat{\bbeta}^{j+1}$ is less than that evaluated at $\hat{\bbeta}^{j}$, then $\hat{\bbeta}^{j+1}$ is recomputed using half the step size. The iterative scheme continues until convergence is obtained—that is, until $\hat{\bbeta}^{j+1}$ is sufficiently close to $\hat{\bbeta}^{j}$. Then the maximum likelihood estimate of $\bbeta$ is $\hat{\bbeta} = \hat{\bbeta}^{j+1}$.
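The iterative scheme above can be sketched in Python. This is a minimal illustration, not the actual PHREG implementation: the logistic log-likelihood, the toy data, and the function names (`newton_raphson`, `loglik`, `grad`, `hess`) are hypothetical stand-ins chosen so the gradient and Hessian have simple closed forms.

```python
import numpy as np

def newton_raphson(loglik, grad, hess, beta0, tol=1e-8, max_iter=50):
    """Maximize loglik by Newton-Raphson with step halving."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        # Newton-Raphson step: [l''(beta)]^{-1} l'(beta)
        step = np.linalg.solve(hess(beta), grad(beta))
        new = beta - step
        # If the likelihood decreased, recompute with half the step size.
        halvings = 0
        while loglik(new) < loglik(beta) and halvings < 30:
            step = step / 2.0
            new = beta - step
            halvings += 1
        if np.max(np.abs(new - beta)) < tol:  # successive iterates close
            return new
        beta = new
    return beta

# Hypothetical example: logistic regression on a tiny (non-separable) data set.
X = np.column_stack([np.ones(6), [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])

def loglik(b):
    eta = X @ b
    return y @ eta - np.sum(np.log1p(np.exp(eta)))

def grad(b):
    p = 1.0 / (1.0 + np.exp(-(X @ b)))
    return X.T @ (y - p)

def hess(b):
    p = 1.0 / (1.0 + np.exp(-(X @ b)))
    w = p * (1.0 - p)
    return -(X * w[:, None]).T @ X   # negative definite for this model

beta_hat = newton_raphson(loglik, grad, hess, np.zeros(2))
```

At the returned `beta_hat`, the gradient of the log likelihood is numerically zero, which is the convergence criterion the text describes.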
The model-based variance estimate of $\hat{\bbeta}$ is obtained by inverting the information matrix:

\[ \hat{\mb{V}}_m(\hat{\bbeta}) = \mc{I}^{-1}(\hat{\bbeta}) = -\left[ \frac{\partial^{2} l(\hat{\bbeta})}{\partial \bbeta^{2}} \right]^{-1} \]
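To make the variance formula concrete, here is a small self-contained sketch using a model with a closed-form information matrix. The exponential sample and the rate-parameter example are hypothetical, chosen only because $-\partial^2 l / \partial \lambda^2 = n/\lambda^2$ is easy to verify by hand; PHREG applies the same inverse-information formula to the partial-likelihood Hessian.

```python
import numpy as np

# Hypothetical sample from an exponential distribution with rate lambda.
x = np.array([0.5, 1.2, 0.3, 2.0, 0.8])
n = len(x)

# MLE of the rate: lambda_hat = 1 / sample mean.
lam_hat = 1.0 / x.mean()

# Observed information: -l''(lambda) = n / lambda^2, evaluated at the MLE.
info = n / lam_hat**2

# Model-based variance estimate: the inverse of the information matrix
# (a scalar here, since the model has one parameter).
var_hat = 1.0 / info
se = np.sqrt(var_hat)
```

In the multiparameter case the same computation is `np.linalg.inv(-hessian_at_mle)`, and the standard errors are the square roots of its diagonal.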