Thursday 20 October 2011

Impact of delayed diagnosis time in estimating progression rates to hepatitis C virus-related cirrhosis and death

Bo Fu, Wenbin Wang and Xin Shi have a new paper in Statistical Methods in Medical Research. The paper is interested in the estimation of rates of progression from hepatitis C infection to cirrhosis and from cirrhosis to death. The primary complication in the model is that the time of cirrhosis development is interval censored, since it is only diagnosed at clinic examination times.

The model consists of a simple unidirectional 3-state model. A parametric Weibull approach is taken, where the time to cirrhosis from infection has a standard Weibull distribution and the time from cirrhosis to death has a Weibull distribution where the scale parameter is a function of the time between infection and cirrhosis.
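As a rough illustration of this data-generating mechanism, a minimal R sketch (with invented parameter values, not the estimates in the paper) might look like:

    # invented parameter values, for illustration only
    n       <- 1000
    t_cirr  <- rweibull(n, shape = 1.5, scale = 30)                 # infection -> cirrhosis
    scale2  <- 20 * exp(-0.02 * t_cirr)                             # scale depends on time to cirrhosis
    t_death <- t_cirr + rweibull(n, shape = 1.2, scale = scale2)    # cirrhosis -> death
    # cirrhosis is only detected at clinic visits, so its timing is interval censored
    visits <- seq(0, 60, by = 2)
    left   <- sapply(t_cirr, function(t) max(visits[visits < t]))
    right  <- sapply(t_cirr, function(t) min(c(visits[visits >= t], Inf)))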

Unsurprisingly, they show that a full likelihood approach which takes into account the interval censoring is much less biased than an approach that ignores the problem of interval censoring.

A criticism of the model, which the authors acknowledge, is that the possibility of death before cirrhosis is not accommodated. Potentially, even under their current model there might also be cases where death occurs after cirrhosis but before cirrhosis has been diagnosed - a scenario which isn't accommodated in the three cases listed in their likelihood development. The hazard of cirrhosis is observed to increase more steeply beyond 30 years since infection. It is not clear whether this is a real effect, whether it is due to a small number of people at risk, or whether it is an artifact of not accommodating the possibility of death without cirrhosis. Given the long follow-up time in the study it might have been more sensible to consider age, rather than time since cirrhosis or time since infection, as the primary timescale, at least for the time to death.

Finally, whilst the approach might be novel for HCV patients, the 3-state approach is clearly widely used in other contexts, most notably in HIV/AIDS studies. For instance, the penalised likelihood approach of Joly and Commenges would be highly applicable.

Estimation of the Expected Number of Earthquake Occurrences Based on Semi-Markov Models

Irene Votsi, Nikolaos Limnios, George Tsaklidis and Eleftheria Papadimitriou have a new paper in Methodology and Computing in Applied Probability. In the paper, major earthquakes in the Northern Aegean Sea are modelled by a semi-Markov process. Earthquakes of magnitude 5.5 or above are categorized into three groups, [5.5,5.6], [5.7,6.0] and [6.1,7.2] (7.2 being the largest recorded event in the dataset). The model assumes that the probability that the next earthquake is of a particular category depends only on the category of the current earthquake (an embedded Markov chain), and that, given a particular transition i->j is to occur, the waiting time between the events has some distribution f_{ij}(t). Hence the inter-event times will follow a mixture distribution depending on i. Note that the process can renew in the same state (i.e. two earthquakes of similar magnitude can occur consecutively).
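As a rough illustration, a semi-Markov sequence of this kind can be simulated in R as follows (the embedded transition matrix and Weibull staying-time parameters are invented, not the fitted values from the paper):

    # 3 magnitude categories; P is the embedded Markov chain (hypothetical values)
    P <- matrix(c(0.3, 0.4, 0.3,
                  0.4, 0.3, 0.3,
                  0.3, 0.3, 0.4), 3, 3, byrow = TRUE)
    # f_{ij}(t): conditional staying time distributions, here Weibull(shape_ij, scale_ij)
    shape <- matrix(1.2, 3, 3)
    scale <- matrix(c(2, 3, 5,
                      3, 4, 6,
                      4, 5, 8), 3, 3, byrow = TRUE)
    simulate_semi_markov <- function(n_events, start = 1) {
      states <- numeric(n_events); times <- numeric(n_events)
      states[1] <- start; times[1] <- 0
      for (k in 2:n_events) {
        i <- states[k - 1]
        j <- sample(1:3, 1, prob = P[i, ])              # next category given the current one
        states[k] <- j
        times[k]  <- times[k - 1] + rweibull(1, shape[i, j], scale[i, j])
      }
      data.frame(time = times, state = states)
    }
    set.seed(1)
    sim <- simulate_semi_markov(33)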
The dataset used is quite small, involving only 33 events.
The observed matrix of transitions is

Clearly something of basic interest would be whether there is any evidence of dependence on the previous earthquake's category. A simple test gives 2.55 on 2 degrees of freedom, suggesting no evidence against independence.
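One standard version of such a test (a likelihood-ratio test of independence between successive categories) could be computed as below; the count matrix is made up here, since the observed one is not reproduced above, and the degrees of freedom of this generic version need not match those quoted:

    # made-up 3x3 matrix of transition counts (rows: current category, cols: next)
    counts   <- matrix(c(4, 3, 2,
                         3, 5, 4,
                         2, 4, 5), 3, 3, byrow = TRUE)
    expected <- outer(rowSums(counts), colSums(counts)) / sum(counts)
    G2 <- 2 * sum(counts * log(counts / expected))      # likelihood-ratio statistic
    df <- (nrow(counts) - 1) * (ncol(counts) - 1)
    pchisq(G2, df, lower.tail = FALSE)                  # p-value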
The authors instead proceed with a fully non-parametric method which uses the limited number of observations per transition type (between 1 and 6) to estimate the conditional staying time distributions. While a semi-Markov process of this type may well be appropriate for modelling earthquakes, the limited dataset available seems insufficient to give usable estimates.

There are some odd features to the paper. Figure 3 in particular is puzzling as the 95% confidence intervals for the expected number of magnitude [6.1,7.2] earthquakes are clearly far too narrow. It also seems strange that there is essentially no mention of the flowgraph/saddlepoint approximation approach to analyzing semi-Markov processes of this nature.

Friday 14 October 2011

Shape constrained nonparametric estimators of the baseline distribution in Cox proportional hazards model

Hendrik Lopuhaa and Gabriela Nane have a preprint on arXiv that considers estimators for the baseline hazard of the Cox proportional hazards model, subject to monotonicity constraints.

The basic idea is fairly straightforward. Using a standard NPMLE argument, the estimated hazard takes the form of a step function which is 0 before the first event time and then piecewise constant between subsequent event times. For a fixed value of the regression coefficients, beta, the baseline hazard can be found by performing a weighted pooled adjacent violators algorithm, taking the weights as the (covariate corrected) time at risk in the next period and the response as the empirical hazard in the next period, i.e. 1/(time at risk) if the event was a failure and 0 if it was a censoring.
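A self-contained R sketch of the weighted pooled adjacent violators step, written from the description above rather than from the authors' code (the toy inputs are invented):

    # weighted pooled adjacent violators: isotonic (nondecreasing) weighted-mean fit
    pava <- function(y, w) {
      val <- y; wt <- w
      blocks <- as.list(seq_along(y))     # indices belonging to each current block
      i <- 1
      while (i < length(val)) {
        if (val[i] > val[i + 1]) {
          # adjacent violation: pool blocks i and i+1 into their weighted mean
          val[i] <- (wt[i] * val[i] + wt[i + 1] * val[i + 1]) / (wt[i] + wt[i + 1])
          wt[i]  <- wt[i] + wt[i + 1]
          blocks[[i]] <- c(blocks[[i]], blocks[[i + 1]])
          val <- val[-(i + 1)]; wt <- wt[-(i + 1)]; blocks[[i + 1]] <- NULL
          i <- max(i - 1, 1)              # pooling may create a new violation upstream
        } else i <- i + 1
      }
      rep(val, lengths(blocks))           # expand block values back to original positions
    }
    # toy usage: y = empirical hazard per interval, w = covariate-corrected time at risk
    pava(y = c(0, 0.40, 0.10, 0.50, 0.30), w = c(5, 4, 4, 3, 2))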

The authors argue that, since Cox regression will give a consistent estimate of beta regardless of whether the baseline hazard is monotone or not, a two-stage approach is possible; they propose estimating beta using a standard Cox partial likelihood and then using this value of beta to obtain the monotone baseline hazard. Obviously this estimator will have the same asymptotic properties as one based on maximizing the full likelihood jointly. Naively, a profile likelihood approach would seem possible since calculating the likelihood conditional on beta is straightforward (though it is not clear whether it would be differentiable). Interestingly, some quick simulations on Weibull data with shape > 1 seem to suggest the full likelihood estimator of beta (using the monotonicity constraint) is more biased and less efficient for small samples.
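A rough sketch of the two-stage procedure on simulated data, reusing the pava function above; the construction of the weights and responses is a simplified reading of the description, using intervals between all observed times rather than only the event times:

    library(survival)
    set.seed(1)
    # toy data with an increasing baseline hazard (Weibull, shape > 1) and one covariate
    n      <- 200
    x      <- rnorm(n)
    true_t <- rweibull(n, shape = 1.5, scale = exp(-x / 1.5))
    cens   <- runif(n, 0, 2)
    time   <- pmin(true_t, cens)
    status <- as.numeric(true_t <= cens)
    # stage 1: beta from the standard Cox partial likelihood
    fit  <- coxph(Surv(time, status) ~ x)
    beta <- coef(fit)
    # stage 2: monotone baseline hazard via weighted PAVA over successive intervals
    ord    <- order(time)
    time   <- time[ord]; status <- status[ord]
    risk   <- exp(x[ord] * beta)
    gaps   <- diff(c(0, time))                # lengths of the successive intervals
    atrisk <- rev(cumsum(rev(risk)))          # sum of exp(x * beta) over each risk set
    w      <- atrisk * gaps                   # covariate-corrected time at risk
    y      <- ifelse(status == 1, 1 / w, 0)   # empirical hazard in each interval
    h0     <- pava(y, w)                      # monotone (nondecreasing) baseline hazard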

A substantial proportion of the paper is dedicated to obtaining the asymptotic properties of the estimators, which are non-standard and require empirical process theory. There is also some discussion of obtaining baseline hazards based on an increasing density constraint via analogous use of the Grenander estimator. Update: This paper has now been published in the Scandinavian Journal of Statistics.

Thursday 6 October 2011

A case-study in the clinical epidemiology of psoriatic arthritis: multistate models and causal arguments

Aidan O'Keeffe, Brian Tom and Vern Farewell have a new paper in Applied Statistics. They model data on the status of 28 individual hand joints in psoriatic arthritis patients. The damage to joints within a particular patient is expected to be correlated. Particular questions of interest are whether the damage process has 'symmetry', meaning that damage to a specific joint in one hand increases the hazard of the corresponding joint in the other hand becoming damaged, and whether activity, meaning inflammation of a joint, causes joint damage.

Two approaches to analysing the data are taken. In the first, each pair of joints (from the left and right hands) is modelled by a 4 state model where the initial state is no damage to either joint, the terminal state is damage to both joints and progression to the terminal state may be through either a state representing damage only to the left joint or a state representing damage only to the right joint. The correlation between joints within a patient is incorporated by allowing a common Gamma frailty which affects all transition intensities for all 14 pairs of joints. Symmetry can then be assessed by comparing the hazard of damage to one joint with and without damage having occurred to the other joint. Under this approach, activity is incorporated only as a time dependent covariate. There are two limitations to this. Firstly, panel observation means that some assumption about the status of activity between clinic visits has to be made (e.g. that a joint keeps the status observed at the last clinic visit until the current visit). Secondly, from a causal inference perspective, activity is an internal time dependent covariate, so treating it as fixed makes it harder to infer causality.

The second approach seeks to address these problems by jointly modelling activity and joint damage as linked stochastic processes. In this model each of the 28 joints is represented by a three-state model where the first two states represent presence and absence of activity for an undamaged joint, whilst state three represents joint damage. For this model, rather than explicitly fitting a random effects model, a GEE type approach is taken where the parameter point estimates are based on maximizing the likelihood assuming independence of the 28 joint processes, but standard errors are calculated using a sandwich estimator based on patient level clustering. This approach is valid provided the point estimates are consistent, which will be the case if the marginal processes are Markov. For instance, this can hold if the transition intensities of type {ij} are linked to a random effect u multiplicatively, such that lambda_{ij}(t | u) = u * lambda_{ij0}(t); but crucially, if other transition intensities also have associated random effects, these must be independent of u.

The basic model considered is (conditionally) Markov. The authors attempt to relax this assumption by allowing the transition intensities to joint damage from the non-active state to additionally depend on a time dependent covariate indicating whether activity has ever been observed. Ideally the intensity should depend on whether the joint has ever been active rather than whether it has been observed to be active. This could be incorporated very straightforwardly by modelling the process as a 4 state model where state 1 represents never active, no damage; state 2 represents currently active, no damage; state 3 represents not currently active (but active at some point in the past), no damage; and state 4 represents damage. Clearly, we cannot always determine whether the currently occupied state is 1 or 3, so the observed data would come from a simple hidden Markov model.
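For what it's worth, in the msm package for R this kind of partially observed state can be handled with a censored state code; a sketch along these lines (the data frame joint_data, its variables and the initial intensity values are all hypothetical) is:

    library(msm)
    # states: 1 = never active, no damage; 2 = currently active, no damage;
    #         3 = previously active but not currently active, no damage; 4 = damaged
    Q <- rbind(c(0,   0.1, 0,   0.05),
               c(0,   0,   0.1, 0.05),
               c(0,   0.1, 0,   0.05),
               c(0,   0,   0,   0))
    # an undamaged, currently inactive joint could truly be in state 1 or 3, so such
    # visits are recorded with the code 99 and declared censored between states 1 and 3
    fit <- msm(state ~ years, subject = joint_id, data = joint_data,
               qmatrix = Q, censor = 99, censor.states = c(1, 3))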

On a practical level, the authors fit the gamma random effects model by applying the integrate function in R to each patient's likelihood contribution at every likelihood evaluation within the (derivative-free) optimization. Presumably this is very slow. Using a simpler, more explicit quadrature formulation would likely improve speed (at a very modest cost to accuracy), because the conditional likelihood contributions for all patients at each quadrature point could be calculated simultaneously and the marginal likelihood contributions then assembled in a single operation.
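For example, using generalized Gauss-Laguerre quadrature (via the statmod package), the gamma integral for every patient can be computed from a single matrix of conditional likelihood evaluations. In the sketch below, patients and cond_lik(patient, u) are hypothetical stand-ins for the per-patient data and the conditional likelihood given frailty value u:

    library(statmod)
    # U ~ Gamma(shape = a, rate = b); with the substitution x = b * u, the integral of
    # cond_lik(patient, u) against the gamma density becomes a generalized Gauss-Laguerre
    # sum with weight function x^(a - 1) * exp(-x)
    a  <- 1; b <- 1                                   # illustrative frailty parameters
    gq <- gauss.quad(30, kind = "laguerre", alpha = a - 1)
    u  <- gq$nodes / b                                # frailty values at the quadrature nodes
    # rows = patients, columns = nodes: each column evaluates every patient's
    # conditional likelihood at a single frailty value
    condlik <- sapply(u, function(uk) sapply(patients, cond_lik, u = uk))
    marglik <- as.vector(condlik %*% gq$weights) / gamma(a)
    loglik  <- sum(log(marglik))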
Alternatively, the likelihood for a single shared Gamma frailty is in fact available in closed form. This follows from arguments extending the results in the tracking model of Satten (Biometrics, 1999). Essentially, we can write every term in the conditional likelihood as some weighted sum of exponentials of the frailty u:

L_k(u) = sum_j a_{kj} * exp(-b_{kj} * u);
the product of these terms is then still in this form:
prod_k L_k(u) = sum_m c_m * exp(-d_m * u).
Calculating the marginal likelihood then just involves computing a series of Laplace transforms.
Provided the models are progressive, keeping track of the separate terms is feasible, although the presence of time dependent covariates makes this approach less attractive.
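As a concrete illustration of the Laplace transform step: once a likelihood contribution has been collected into the form sum_m c_m * exp(-d_m * u), a Gamma(a, b) frailty can be integrated out term by term (the coefficients below are hypothetical):

    # a likelihood contribution collected as sum_m c_m * exp(-d_m * u)  (hypothetical terms)
    cm <- c(0.7, -0.4, 0.1)
    dm <- c(1.2,  2.5, 4.0)
    a  <- 1.5; b <- 2.0                           # U ~ Gamma(shape = a, rate = b)
    laplace  <- function(d) (b / (b + d))^a       # E[exp(-d * U)]
    marginal <- sum(cm * laplace(dm))             # marginal likelihood contribution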

Tuesday 4 October 2011

A Bayesian Simulation Approach to Inference on a Multi-State Latent Factor Intensity Model

Chew Lian Chua, G.C. Lim and Penelope Smith have a new paper in the Australian and New Zealand Journal of Statistics. This models the trajectory of credit ratings of companies. While there are 26 possible rating classes, these are grouped into 4 broad classes: A, B, C and D. Movement between any two classes is deemed possible, resulting in 12 possible transition types. The authors develop methods for fitting what is termed a multi-state latent factor intensity model. This is essentially a multi-state model with a shared random effect that affects all the transition intensities, but the random effect is allowed to evolve in time; for instance, the authors assume the random effect follows an AR(1) process.
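A crude simulation sketch of this kind of model (all parameter values invented, with a unit time step and at most one rating move per period) might look like:

    set.seed(1)
    Tn <- 50; n_firms <- 100
    rho <- 0.8; sigma <- 0.3
    psi <- numeric(Tn)                                   # shared latent factor, AR(1)
    for (t in 2:Tn) psi[t] <- rho * psi[t - 1] + rnorm(1, 0, sigma)
    alpha <- matrix(-3, 4, 4); diag(alpha) <- -Inf       # log baseline intensities
    state <- matrix(NA, n_firms, Tn)
    state[, 1] <- sample(1:4, n_firms, replace = TRUE)
    for (t in 2:Tn) {
      lam <- exp(alpha + psi[t])                         # all 12 intensities share psi[t]
      for (f in 1:n_firms) {
        s <- state[f, t - 1]
        total <- sum(lam[s, -s])
        if (runif(1) < 1 - exp(-total)) {                # a rating move occurs this period
          state[f, t] <- sample((1:4)[-s], 1, prob = lam[s, -s])
        } else state[f, t] <- s
      }
    }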

There seems to be some confusion in the formulation of the model as to whether the process is in continuous or discrete time: the process is expressed in terms of intensities which are "informally" defined in terms of N_{sk}(t), a counting process describing the number of s to k transitions that have occurred up to time t, and the observation times t_i. But if the time between observations were variable, the AR(1) formulation (which takes no account of this) would not seem sensible.

Nevertheless, the idea of having a random effect that can vary temporally within a multi-state model (proposed by Koopman et al) is interesting, though it obviously presents various computational challenges, which are the main focus of Chua, Lim and Smith's paper.