Sunday 23 January 2011

A comparison of non-homogeneous Markov regression models with application to Alzheimer's disease progression

Rebecca Hubbard and Andrew Zhou have a new paper in Journal of Applied Statistics. This considers panel data relating to the progression of Alzheimer's disease using non-homogeneous Markov models. The time transformation method proposed by Hubbard et al (Biometrics, 2008) is extended to allow for (fixed) covariates. In the time transformation model the generator matrix is $Q(t) = Q_0 h'(t)$ for some increasing function $h(t)$, which in Hubbard's method can be estimated non-parametrically (or at least flexibly, using kernel weights centered on pre-specified time points). Covariates are incorporated as proportional intensities acting on the entries of the homogeneous-time generator $Q_0$.
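
In symbols (my notation, not necessarily the paper's exact parameterisation), the covariate extension amounts to transition intensities of the form

$$ q_{rs}(t \mid z) = q^{(0)}_{rs} \exp(\beta_{rs}^\top z) \, h'(t), $$

so that on the transformed time scale $h(t)$ the process is time-homogeneous with proportional intensities.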

A simulation study is presented which aims to compare the robustness of piecewise constant intensity models and time transformation models for estimating covariate effects on intensities. It is no secret that in proportional hazards-type models, estimates of covariate effects are reasonably robust to misspecification of the shape of the baseline hazard (in contrast to the relative lack of robustness to misspecification of the model for the effect of the covariate on the hazard). In general, therefore, each model performs fairly well when the other is true. The simulation puts the piecewise intensities model at a bit of a disadvantage since it requires Q to be estimated separately for each time period (four extra parameters), whereas the time transformation model requires only one extra parameter over the homogeneous model. It might have been a fairer comparison if a time transformation model of similar complexity had been used. The conclusion that time transformation models are more robust for small sample sizes is therefore not particularly convincing.

The argument against piecewise constant intensities in general is a bit weak, as it revolves around the notion that one must estimate a separate generator matrix for each time period, leading to many extra parameters. The obvious argument for piecewise constant intensities is that we can single out a particular intensity of interest and let that be time varying while leaving the others constant. In contrast, the time transformation method requires the non-homogeneity to be the same for all intensities. The smoothness of the intensities in the time transformation model is, though, an obvious advantage.
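
For concreteness, piecewise constant intensities of this kind can be fitted in msm via the pci argument, which introduces change points whose effects apply to each intensity (effects on particular intensities can then be constrained or fixed to zero). A minimal sketch with invented data names:

    library(msm)
    # Hypothetical three-state progressive model: allowed transitions
    # and crude initial values for the generator matrix
    Q <- rbind(c(0, 0.1, 0),
               c(0, 0,   0.1),
               c(0, 0,   0))
    # Piecewise constant intensities with change points at t = 5 and t = 10
    fit.pci <- msm(state ~ time, subject = id, data = paneldat,
                   qmatrix = Q, pci = c(5, 10))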

In the Alzheimer's example a 4-state model is fitted where patients can be in states Normal, Mild Cognitive Impairment (MCI), Alzheimer's or Death. Since the observed data include patients who make transitions back from Alzheimer's to MCI and from MCI to Normal, the Markov models allow backward transitions. In the conclusion it is noted that there is the possibility of misclassification. However, the possible remedy of a hidden Markov model is not mentioned.
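
For what it's worth, such a misclassification model is straightforward to fit in msm: supplying an ematrix of permitted misclassification probabilities turns the model into a hidden Markov model. A minimal sketch with invented data names and initial values:

    library(msm)
    # States: 1 = Normal, 2 = MCI, 3 = Alzheimer's, 4 = Death.
    # Forward and backward transitions between adjacent living states.
    Q <- rbind(c(0,   0.1, 0,   0.05),
               c(0.1, 0,   0.1, 0.05),
               c(0,   0.1, 0,   0.05),
               c(0,   0,   0,   0))
    # Permitted misclassification between adjacent living states
    E <- rbind(c(0,   0.1, 0,   0),
               c(0.1, 0,   0.1, 0),
               c(0,   0.1, 0,   0),
               c(0,   0,   0,   0))
    fit.hmm <- msm(state ~ years, subject = id, data = alzdat,
                   qmatrix = Q, ematrix = E, deathexact = 4)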

Wednesday 19 January 2011

Application of multistate models in hospital epidemiology: Advances and challenges

Jan Beyersmann, Martin Wolkewitz, Arthur Allignol, Nadine Grambauer and Martin Schumacher have a new paper in Biometrical Journal. The Freiburg group have been very active in promoting multi-state modelling to a wider audience in recent years. This paper looks at the advantages that adopting a multi-state modelling approach can bring to processes arising in hospital epidemiology.

Friday 7 January 2011

Parametric Estimation of Cumulative Incidence Functions for Interval Censored Competing Risks Data


Peter Nyangweso, Michael Hudgens and Jason Fine have a new paper currently available as a University of North Carolina at Chapel Hill Department of Biostatistics Technical Report. This considers parametric modelling of cumulative incidence functions in the case of interval censored competing risks data. It is somewhat curious that while there has been quite a lot of work on non-parametric modelling of interval-censored competing risks, little seems to have been done regarding the (presumably easier) problem of parametric modelling (although a lot of work has been done for more general multi-state models).

Nyangweso and colleagues consider a parameterisation based on the cumulative incidence functions. This makes sense since the interval-censored likelihood can be directly written in terms of the CIFs. They consider a Gompertz model for each CIF taking the form

$$ F_k(t) = 1 - \exp\left\{ \frac{\lambda_k}{\gamma_k} \left( 1 - e^{\gamma_k t} \right) \right\}, \qquad \lambda_k > 0. $$
Note that, when $\gamma_k < 0$, this allows for a proportion $\exp(\lambda_k/\gamma_k)$ who will never experience event $k$. An obvious complication is that a valid set of CIFs needs to have the property that $\sum_k F_k(t) \leq 1$ for all $t$. It is noted that in practice the unconstrained MLE will be such that the sum of the CIFs is less than 1 at the greatest time at which someone was right censored, as otherwise the likelihood involves taking the log of a non-positive number. Some discussion is given of constrained optimization under $\sum_k F_k(\infty) = 1$, so that it is assumed that all patients must eventually experience one of the events (i.e. no cure fraction).
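
As a quick illustration of the constraint, here is a sketch in R using the Gompertz form above with made-up parameter values:

    # Gompertz CIF: improper (a cure fraction) when gamma < 0
    cif_gompertz <- function(t, lambda, gamma) {
      1 - exp(lambda / gamma * (1 - exp(gamma * t)))
    }
    t <- seq(0, 50, by = 0.1)
    F1 <- cif_gompertz(t, lambda = 0.05, gamma = -0.10)  # F1(Inf) approx 0.39
    F2 <- cif_gompertz(t, lambda = 0.02, gamma = -0.05)  # F2(Inf) approx 0.33
    # A valid pair of CIFs must satisfy F1 + F2 <= 1 at all times
    max(F1 + F2)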

In addition to full likelihood estimation, the parametric analogue of the naive estimator investigated in Jewell et al (2003, Biometrika) is also considered. This just involves modelling each CIF separately as if it were univariate interval censored survival data. Obviously there is an even greater risk of CIFs summing to more than 1 for the naive estimates.

The paper does not get as far as considering models for covariates. The main complication with adding covariates to the Gompertz model seems to be that we would then be almost guaranteed to have covariate patterns where the CIFs sum to more than 1, even at times of interest. There is surely an advantage in modelling the cause-specific hazards instead, since then the CIFs are guaranteed to be valid at all times for all covariate values. While for most parametric hazards computation of the CIF requires numerical integration, the extra computation required shouldn't be prohibitive.
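
To illustrate the cause-specific hazards route, a sketch with made-up hazard functions (not the paper's model): the CIFs $F_k(t) = \int_0^t h_k(u) S(u) \, du$ automatically satisfy $\sum_k F_k(t) = 1 - S(t) \leq 1$.

    # Made-up cause-specific hazards for two competing events
    h1 <- function(t) 0.08 * t          # Weibull-type, shape 2
    h2 <- function(t) 0.03 + 0 * t      # constant
    # Overall survival from the all-cause cumulative hazard
    S <- function(t) {
      exp(-integrate(function(u) h1(u) + h2(u), 0, t)$value)
    }
    # CIF for cause 1 by numerical integration
    cif1 <- function(t) {
      integrate(function(u) h1(u) * sapply(u, S), 0, t)$value
    }
    cif1(5)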

Update: The original paper appears to have been superseded by a new version with a slightly different set of authors (Hudgens, Li & Fine).

Multi-State Models for Panel Data: The msm Package for R

The final paper in the special issue in Journal of Statistical Software is by Chris Jackson and is about his package msm. msm has been around for a long time (over eight years) and has steadily built up functionality over that period. As noted by Hein Putter in his introduction, msm is different from the other packages featured in the issue in that it deals with panel-observed/interval-censored data and concentrates on parametric models. In particular, hidden Markov models with both discrete and continuous responses may be fitted.

The paper covers much old ground: the early part repeats similar themes to Jackson & Sharples 2002 (Statistics in Medicine) and Jackson et al 2003 (JRSS D), while the section on model diagnostics closely follows Titman and Sharples 2010 (SMMR). One error in msm is in the implementation of prevalence counts/plots. It is made clear by Gentleman et al (1994, Stat Med) and by Titman and Sharples that the denominator for the counts at time t is the number "under observation". In particular, subjects who reach the absorbing state should stay under observation only until the time at which they would otherwise have been censored. msm assumes subjects who reach the absorbing state remain under observation indefinitely, which leads to overestimation of the empirical prevalence in the absorbing state(s). This can be seen clearly in the bottom-right panel of Figure 4 of the paper, which suggests spurious lack of fit. Patients either need to be removed from observation at a known administrative censoring time, or else the censoring distribution needs to be estimated empirically to allow time-dependent weighting of dead patients.
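
A sketch of the corrected empirical prevalence for the absorbing state, assuming each subject has a known potential (administrative) censoring time; all names here are invented:

    # death_time: time of entry to the absorbing state (NA if not observed to die)
    # cens_time: the subject's potential administrative censoring time
    emp_prev_absorbing <- function(t, death_time, cens_time) {
      under_obs <- cens_time >= t          # still under observation at t
      in_abs <- !is.na(death_time) & death_time <= t & under_obs
      sum(in_abs) / sum(under_obs)
    }

The prevalences of the living states would similarly be computed with states carried forward from the last observation, but with the same restricted denominator.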

In Section 6 Jackson discusses model extensions which are generally not available in the package, but seems to suggest that, in order to maintain generality, extensions such as a wider range of time-inhomogeneous models, random effects models and semi-Markov models will not be incorporated into msm.

In general msm is a very good package, and a lot of effort (e.g. C programming, use of first derivatives, direct coding of the transition probabilities for the more basic model structures) has been put in to provide fast performance. However, some further improvement in computation speed is no doubt possible, since msm uses the BFGS algorithm in optim to fit models rather than the well-known Fisher scoring algorithm.

mstate: An R Package for the Analysis of Competing Risks and Multi-State Models

The sixth paper in the special issue of Journal of Statistical Software is by Liesbeth de Wreede, Marta Fiocco and Hein Putter and is about the R package mstate. A journal article on the package already exists in Computer Methods and Programs in Biomedicine. However, while that paper primarily dealt with theoretical aspects, the current paper is largely a case-study example based on a 6-state model for leukemia patients after bone marrow transplantation.

mstate uses the existing survival package to fit the Cox proportional hazards models. Much of the infrastructure of mstate lies in functions for preparing the data into the correct form. To use mstate for a Cox-Markov or Cox-semi-Markov model, a single coxph() object is required. This results in somewhat messy commands, because separate strata are needed and the same covariate has to appear multiple times to allow it a different effect on each transition intensity; the call to coxph in the paper, for instance, runs to 16 lines. While there is perhaps some pedagogical advantage to this complication, in ensuring the user really understands what is being fitted, there is surely scope to automate the process so that the user need only specify which covariates are required for each transition intensity, with the call to coxph constructed behind the scenes.
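
The general pattern looks roughly like the following sketch, for an illness-death model with a single covariate x (data names invented):

    library(mstate)
    # Transition matrix for an illness-death model
    tmat <- trans.illdeath()
    # Convert wide data (one row per subject) to long format:
    # one row per subject per transition for which they are at risk
    msdat <- msprep(time = c(NA, "illtime", "dtime"),
                    status = c(NA, "illstat", "dstat"),
                    data = widedat, trans = tmat, keep = "x")
    # Expand x into transition-specific covariates x.1, x.2, x.3
    msdat <- expand.covs(msdat, "x")
    # Stratified Cox model with a separate effect of x for each transition
    fit <- coxph(Surv(Tstart, Tstop, status) ~ x.1 + x.2 + x.3 + strata(trans),
                 data = msdat, method = "breslow")

With many covariates and transitions, the expanded formula quickly grows to the length seen in the paper.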

Finally, as has been noted elsewhere, mstate currently does virtually all calculations within R itself. As a result computation times are sometimes disappointing, especially for models on large datasets.

Using Lexis Objects for Multi-State Models in R

The fourth and fifth papers in the special issue of Journal of Statistical Software concern Lexis objects in the R package Epi and are by Martyn Plummer and Bendix Carstensen. The first paper is mainly a general introduction to Lexis objects, whereas the second looks specifically at their use for multi-state models. Lexis diagrams, which have a long history, provide a way of illustrating data with multiple time scales and have been promoted by Niels Keiding among others.

The Lexis function makes initial exploration of multi-state data easier by allowing events to be plotted on different time scales so that, for instance, the correct time scale(s) for analysis can be chosen. Lexis objects can also be passed to mstate.

The authors consider the problem of determining whether two baseline transition intensities are proportional. A model assuming proportionality is straightforward to implement, as it just involves putting the two transition intensities in the same stratum of a Cox model and adding an extra indicator variable to denote which transition an event derives from. However, if one is interested in formally testing the assumption of proportionality there is a difficulty, because the Cox partial likelihood factorizes the baseline intensities out. The authors therefore propose full likelihood modelling where the baseline intensities are taken as piecewise constant within short time intervals but follow a smooth spline function of time. Testing proportionality of intensities can then be performed through a standard likelihood ratio test.
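
A sketch of this kind of likelihood ratio test, via splitLexis from Epi and Poisson regression with spline effects of time (the model details here are my own gloss, not necessarily the paper's exact setup; lex, tfd, trans and event are assumed to have been prepared beforehand):

    library(Epi)
    library(splines)
    # Split follow-up into short intervals on the chosen time scale "tfd"
    spl <- splitLexis(lex, breaks = seq(0, 20, 0.5), time.scale = "tfd")
    # trans indicates which of the two transitions a record is at risk of;
    # event indicates whether that transition occurred in the interval.
    # Proportional baselines: common spline in time, constant log-ratio
    m0 <- glm(event ~ ns(tfd, df = 4) + trans + offset(log(lex.dur)),
              family = poisson, data = spl)
    # Non-proportional: a separate spline for each transition
    m1 <- glm(event ~ ns(tfd, df = 4) * trans + offset(log(lex.dur)),
              family = poisson, data = spl)
    anova(m0, m1, test = "Chisq")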

Thursday 6 January 2011

Empirical Transition Matrix of Multi-State Models: The etm Package

The third paper in the special issue of Journal of Statistical Software is by Arthur Allignol, Martin Schumacher and Jan Beyersmann and concerns the etm package. etm has been mentioned here before. It's a fairly straightforward package that computes the Aalen-Johansen estimator of the transition probabilities (and their standard errors) for general right-censored and left-truncated Markov models. mstate has the same functionality (and also allows Cox regression). The niche of etm is that, if only the Aalen-Johansen estimator is required, it provides a faster implementation than mstate, since etm incorporates C code whereas mstate does most of its computation in R only.
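
A minimal call has roughly the following shape (a sketch, with invented data names; the data frame needs id, from, to, entry and exit columns):

    library(etm)
    # Possible transitions of an illness-death model as a logical matrix
    tra <- matrix(FALSE, 3, 3)
    tra[1, 2:3] <- TRUE
    tra[2, 3] <- TRUE
    # Aalen-Johansen estimates of the transition probabilities from s = 0
    aj <- etm(mydat, state.names = c("0", "1", "2"), tra = tra,
              cens.name = "cens", s = 0)
    summary(aj)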

p3state.msm: Analyzing Survival Data from an Illness-Death model

The second paper in the special issue at Journal of Statistical Software is by Luis Meira-Machado and Javier Roca-Pardinas and is about the package p3state.msm. This package allows non- and semi-parametric estimation for three-state progressive or illness-death models subject to right-censoring. Much of the functionality of p3state.msm, such as Cox-Markov and Cox-semi-Markov models, is available elsewhere in mstate. The main niche of p3state.msm is the ability to estimate transition probabilities without making either Markov or semi-Markov assumptions, applying the methodology of Meira-Machado et al (2006, Lifetime Data Analysis). Note that while the Aalen-Johansen estimator is consistent for state occupancy probabilities (Datta and Satten 2001 S&PL, Glidden 2002 Biometrics), this doesn't extend to general transition probabilities. Currently estimates are only available for the case of no covariates, though the authors state they are considering extensions to allow Cox-type models for such non-Markov processes. Similarly, they are planning to allow a greater range of models, e.g. a progressive k-state model.

Analyzing Competing Risk Data Using the R timereg Package

The first paper in the special issue of Journal of Statistical Software is by Thomas Scheike and Mei-Jie Zhang and illustrates the use of the R package timereg to fit the flexible direct binomial regression based competing risks models proposed by Scheike et al (2008, Biometrika). These models, which rely on inverse probability of censoring weights, allow both proportional and additive models for the cumulative incidence functions. In particular, this offers a goodness-of-fit test for the Fine-Gray model, testing for time dependence in the covariate effects on the subdistribution hazard.
timereg offers some interesting functionality; for instance, confidence bands on the cumulative incidence functions can be computed (albeit via bootstrap resampling).
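
A sketch of the sort of call involved, using the bmt data shipped with timereg (argument details from memory, so treat as indicative rather than definitive):

    library(timereg)
    data(bmt)
    # Direct binomial regression for the cause 1 CIF: platelet gets a
    # time-varying effect, tcell a constant (proportional) effect
    fit <- comp.risk(Event(time, cause) ~ platelet + const(tcell),
                     data = bmt, cause = 1, model = "prop")
    summary(fit)  # includes tests for time-invariant effects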

The package timereg has other very useful functions not directly featured in the paper. In particular, it can fit Aalen additive hazards models, which have as yet been little used for modelling the transition intensities of multi-state models (Shu & Klein, 2005, Biometrika, being the exception).

Tuesday 4 January 2011

Special Issue about Competing Risks and Multi-State Models

The Journal of Statistical Software is devoting the current issue (Vol 38 Issue 1) to software for competing risks and multi-state models. Hein Putter provides an introduction. It's great to see multi-state models getting a higher profile. The issue has seven new papers including papers on timereg, etm, mstate and msm, so there's more to follow here.

Saturday 1 January 2011

Selecting a binary Markov model for a precipitation process

Reza Hosseini, Nhu Le and Jim Zidek have a new paper in Environmental and Ecological Statistics. This considers modelling a 0-1 precipitation process (e.g. daily data with 1 if the amount of precipitation is above some threshold) as a binary discrete-time Markov process of some fixed order. The main focus of the paper is model selection, where the model with the smallest BIC is chosen. Unsurprisingly, simulations find that BIC is better than AIC at picking the "true" model when the "true" model has a relatively small number of parameters.

The data example is rainfall in Calgary over 5-year time periods. Model building is detailed somewhat laboriously, and some of it seems unnecessary. For instance, the exploratory analysis shows clear seasonality in the marginal probabilities of rain, yet models without seasonality are considered first, leading to the choice of a first-order Markov model with the number of rainy days that month (i.e. a proxy for season) as a covariate. An obvious difficulty with fitting high-order Markov models is that the number of parameters in the general case increases exponentially with the order of the process. Once seasonality is included, only a first-order Markov model is needed.
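
For concreteness, a binary Markov chain of a given order with covariates can be fitted as a logistic regression on the lagged series, with orders compared by BIC (a sketch with invented variable names):

    # rain: 0-1 daily series; wet_month: number of rainy days that month
    d <- data.frame(rain = rain,
                    lag1 = c(NA, head(rain, -1)),
                    lag2 = c(NA, NA, head(rain, -2)),
                    wet_month = wet_month)
    d <- na.omit(d)
    # First- and second-order Markov models with a seasonal proxy covariate;
    # the lag1:lag2 interaction gives the general second-order chain
    m1 <- glm(rain ~ lag1 + wet_month, family = binomial, data = d)
    m2 <- glm(rain ~ lag1 * lag2 + wet_month, family = binomial, data = d)
    BIC(m1, m2)  # smaller is better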

There is a disappointing lack of any reference to either hidden Markov models or (hidden) semi-Markov models. In particular, the work of Hughes et al (1999, Applied Statistics) and of Sansom and Thomson (2001, Journal of Applied Probability) is highly relevant. A hidden process perhaps makes more sense from the perspective of rain occurring due to, say, the passing of a low-pressure system, with the presence of such a system not necessarily manifesting itself as rain.