CH9 ANALYZING RESEARCH QUESTIONS ABOUT SURVIVAL (Outcomes)
Intro to Clinical Statistics
by Timothy Bilash MD
June 2004
based on:
Review of Basic & Clinical Biostatistics by
Beth Dawson, Robert Trapp (2001) CH9
- Survival analysis adds a complexity to interpretation of events and rates (TDB)
- patients do not enter study at same time
- time is mixed in with event occurrence, and the two contribute indistinguishably for a given outcome
- time and event are mixed and equated
- compound variable (person-years)
- change in time-to-event (earlier time) can have the same effect as an additional occurrence (number)
- cannot distinguish statistically with survival analysis
- exposure and event are not time distinct (sum of given amounts of both)
- event determines amount of exposure, not given exposure leads to event
- time dimension feeds event occurrence back into exposure
- event can occur prior to or during the beginning of exposure yet be unrelated to it, even though in the exposure group
- length of time can be compared, but important to note exposure and event are in different time periods
- withdrawal for event mixed with withdrawal from attrition or non-compliance
- drop out for event not the same as drop out for compliance
- startpoint problem at beginning of period (partial exposure amount to risk)
- limits exposure for early event group
- endpoint problem at end of period (partial time of observation for outcome to occur)
- cutoff for inclusion at a given time
- limits events for later exposure group
- survival analysis is different from experiment
- survival data must be analyzed before all events have occurred, as opposed to an experiment analyzed after all events to be counted have occurred
- events cumulative over time
- always have a truncation
- alters interpretation of the statistics (conditional probabilities)
- Survival parameters that affect risk and outcome
- exposure to risk
- onset delay
- length of exposure (integrated exposure)
- dose
- total
- maximum
- confounders/modifiers
- before
- during
- after
- possibility of outcome
- incubation time to event (delay of time to event after exposure has occurred)
- time to symptom
- time to diagnosis (test+)
- threshold for detection
- "accuracy" of test (PPV, NPV)
- gating
- censoring
- Censored observations
- Time of Entry
- Simultaneously Censored (entry time simultaneous, like experiment)
- [Fig9-1 p 212]
- Progressively Censored (entry time not simultaneous)
- [Fig 9-2 p212]
- SURVIVAL CURVES (Characterizing one Group)
- Life Table (Actuarial) vs Kaplan-Meier Methods
- both calculate proportion surviving thru an interval
- Life Table: # of events in a given (fixed) time interval
- Kaplan-Meier: 1 event per (variable) time interval
- different (weighted) averages to give survival proportion
- Kaplan-Meier is exact
- Life Table is approximation (averages over interval)
- equivalent if constant rate of events over each interval and over study time
- Life Table ( Actuarial Table) Analysis
- also called the Cutler-Ederer method
- different ways to collect life table data
- Current life table
- cross sectional data
- different people at one point in time
- used by insurers
- Cohort life table
- (longitudinal data)
- same people over a period of time
- most medical studies
- not the same statistic
- assumptions for Life Table method
- intervals fixed
- # of events in given time interval
- allows mild censoring
- assumes events average out to midpoint of intervals
- equivalent to a random withdrawal during the period
- compares to a constant rate of withdrawal [?]
- assumes survival in one period not dependent on survival in other periods
- time interval duration used in the analysis is somewhat arbitrary but should be selected so that the number of censored observations in any interval is small
- uses conditional probability: to be at risk in an interval, the patient must not have had that event in any of the periods before that interval
- survival function
- assumes probability of survival in a given period does not depend on survival in any other period
- "probably violated in much medical research but does not appear to cause major concern to biostatisticians"
- count the patients in the study at the beginning of each interval who are no longer there by the end of the interval because they had the event
- used for the numerator
- count the patients who remain (are left) at the beginning of each interval
- used for the denominator
- some not in the study that long (stopped)
- some lost to followup
- denominator reduced by half the number of patients withdrawn for other reasons during the period
- assumes patients withdraw randomly throughout the interval
- patient withdrawals for an interval are taken to occur on average at the midpoint of the interval
- so subtract 1/2 of the number who withdraw during that period instead of all
- less of a concern if time intervals are short
- varying event# for fixed time interval
- n events for that time interval
- Survival Function per intervals = proportion surviving thru the ith interval =
S(i) = p(i) p(i-1) ... p(1) [D&T p 216]
- i= ith time interval ( fixed time in denominator of rate)
- p(i)= 1-q(i)= percent survival (with no event) for ith time interval
- q(i)= d(i)/[n(i)-w(i)/2]= event rate for ith time interval
- n(i) = n(i-1) - d(i-1) - w(i-1) = #pts at beginning of interval
- affects denominator not numerator
- events and withdrawals in the previous interval (i-1) affect the number in the current interval (i)
- d(i) = #termination events in the interval
- w(i) = #withdrawals in interval (censored for other reasons than event)
- uses Greenwood's formula for the standard error SE [D&T p 215]
- formula for confidence interval (CI)
- assumes mild censoring
- assumes the proportion surviving in an interval is approximately normally distributed
- assumes sufficiently large sample sizes
- assumes a normal distribution for proportion surviving
- SE[S(i)] = S(i) * Sqrt[ Sum over j=1..i of q(j) / (n(j) - w(j)/2 - d(j)) ]
- typically as the interval from entry into the study gets longer, the number of patients remaining for the next interval gets smaller.
- this means that the uncertainty (standard deviation) of the proportion surviving gets larger (statistics get worse with time)
- 95% confidence intervals get wider (often drawn on the graph as a band on either side of the curve)
- considerable bias can occur [p217]
- if the intervals are large
- if many withdrawals occur
- if the withdrawals do not occur midway in the interval
- Kaplan-Meier removes this problem
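- a minimal Python sketch of the life-table calculation above, using hypothetical interval counts (not data from D&T); d(i) and w(i) are events and withdrawals per fixed interval:

    import math

    # hypothetical per-interval counts (illustration only)
    n = 100              # patients entering the first interval
    d = [5, 8, 6, 4]     # events in intervals 1..4
    w = [2, 3, 1, 2]     # withdrawals (censored) in intervals 1..4

    S, var_sum = 1.0, 0.0
    for d_i, w_i in zip(d, w):
        n_eff = n - w_i / 2.0            # withdrawals assumed to occur at mid-interval
        q_i = d_i / n_eff                # event rate q(i) for the interval
        S *= 1.0 - q_i                   # S(i) = p(i) * p(i-1) * ... * p(1)
        var_sum += q_i / (n_eff - d_i)   # Greenwood accumulator
        se = S * math.sqrt(var_sum)      # Greenwood standard error of S(i)
        print(f"n={n}  q={q_i:.3f}  S={S:.3f}  SE={se:.3f}")
        n = n - d_i - w_i                # patients entering the next interval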
- Kaplan-Meier Product Limit method [D&T p217]
- actuarial-type method, with analysis at each event occurrence (time since entry divided into unequal intervals by each event)
- event proportions in group estimated at the variable event moment, rather than at fixed intervals
- good for studies even involving small numbers of patients
- gives exact survival proportions because it uses exact survival times
- fixed event# (1) per varying time
- events per interval time= 1
- Survival Function per event = proportion surviving thru jth event = S(j)=p(j)p(j-1)...p(1)
- j= jth event ( varying time in denominator of rate)
- p(j)=1-q(j)= percent survival (with no event) for time interval of the jth event
- q(j)= d(j)/n(j)= event rate for jth event interval
- d(j)= jth event (1)
- n(j)= #pts at jth event
- SE[S(j)] = S(j) * Sqrt[ Sum over k=1..j of d(k) / (n(k) * (n(k) - d(k))) ]   [see D&T p218]
- note that withdrawals are ignored
- patients who are lost to follow-up and those who drop out before an event time merely drop out of the calculations by no longer being considered.
- [ISLT] effects of censoring
- if no censored observations occur, the Wilcoxon rank sum test is appropriate for comparing the ranks of survival times
- according to D&T, Kaplan-Meier removes the problem of withdrawals not occurring midway in the interval, however:
- K-M gives valid rates only if withdrawals and lost-to-follow-ups occur at a constant rate, or as an approximation at a random rate over the time of study (that is withdrawals are uncorrelated)
- other problems from many withdrawals would still remain however [?]
- no way to account for whether the patients who have been censored (withdrawals and lost to follow-up) would have been more or less likely to have an event in a given group, for instance if the withdrawals were related to the treatment medication or to other events or perceptions tied to the treatment group
- for example, if a patient drops out of an estrogen study because of bleeding (from higher unopposed estrogen levels), and does not have an event (if estrogen lowers the event risk), then the event rate would be artificially increased
- the event rate q(j) = d(j)/n(j) for that interval would be inflated because n(j) no longer contains these patients
- so only removes the withdrawal problem if withdrawals are constant over time (or random), withdrawals are random as to patients who are exposed and not to risk, and withdrawals are random as to risk factors for outcome.
- large interval problem would still remain (for low rates) [?]
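- a minimal Python sketch of the Kaplan-Meier product-limit calculation, using hypothetical (time, event) pairs (event=False means censored); the survival proportion is recomputed at each event time:

    # hypothetical survival times in months (illustration only)
    data = [(2, True), (3, False), (4, True), (4, True), (5, False), (7, True), (9, False)]

    S = 1.0
    for t in sorted({t for t, e in data if e}):          # each distinct event time
        d_j = sum(1 for tt, e in data if tt == t and e)  # events d(j) at this time
        n_j = sum(1 for tt, e in data if tt >= t)        # n(j) still at risk just before t
        S *= 1.0 - d_j / n_j                             # S(j) = S(j-1) * (1 - d(j)/n(j))
        print(f"t={t}  at risk={n_j}  events={d_j}  S={S:.3f}")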
- Summarizing Survival Data (within a group) [p225]
- Hazard Function or Hazard Rate
- (a Rate is within a group, as opposed to a Ratio, which is between groups)
- also called conditional failure rate (events per total time)
- used to obtain estimate of Mean Survival Time for survival data
- useful for comparing two groups at risk
- if assumption of an exponential distribution is reasonable (ie, constant event rate)
- allows censored observations (makes corrections to denominator for dropouts and exposure).
- often used to characterize Kaplan-Meier Curves and in Cox Proportional Model
- H = D / (SumF + SumC)
- D is number of deaths
- SumF is the sum of event times
- SumC is the sum of censored times
- assumes exponential distribution for survival curve (constant event rate)
- [see pg 225 for formula]
- when the assumption of an exponential distribution is not appropriate, other forms of the hazard function based on different probability distributions are used [Lee 1992]
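- a minimal Python sketch of this hazard rate estimate under the exponential (constant-rate) assumption, using hypothetical follow-up times:

    # hypothetical follow-up times in months (illustration only)
    event_times    = [2, 4, 4, 7]    # patients who had the event
    censored_times = [3, 5, 9]       # patients censored without the event

    D = len(event_times)                               # number of events
    H = D / (sum(event_times) + sum(censored_times))   # H = D / (SumF + SumC)
    print(f"hazard rate = {H:.3f} events per person-month")
    print(f"estimated mean survival = {1.0 / H:.1f} months")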
- MEASURES OF SIGNIFICANCE (Comparing groups)
- "Little information is available to guide investigators in deciding which procedure is appropriate in which situation." [p224]
- in addition, sometimes one cannot determine which procedure was used to compare the survival distributions
- multiple names
- research on biostatistical methods for analyzing survival data is still underway
- Independent groups t-test is not appropriate for comparing survival curves directly because survival times are not normally distributed and tend to be positively skewed (p220)
- Comparing Survival Data (between groups - Uncensored data)
- Wilcoxon rank sum test
- assumes constant rates
- compares the ranks of survival time
- if no censored observations occur, it is appropriate for comparing the ranks of survival times [D&T p221] (see the sketch below)
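- a minimal sketch using the rank-sum test from scipy, appropriate only when no observations are censored (the survival times are hypothetical):

    from scipy.stats import ranksums

    # hypothetical uncensored survival times (months) in two groups
    group_a = [4, 7, 9, 12, 15, 21]
    group_b = [2, 3, 5, 6, 10, 11]

    stat, p = ranksums(group_a, group_b)   # compares the ranks of the survival times
    print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")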
- Comparing Survival Data (between groups - Censored or Uncensored data)
- Tests for significance to compare survival curves with censored observations
- presence of censored observations requires special methods for comparing two or more survival distributions
- the conclusions that result are approximations and can be calculated in different ways (the techniques also suffer from naming confusion)
- Hazard Ratio
- Logrank statistic
- most commonly reported
- Logrank compares the differences between the observed and expected group sums over all periods
- Mantel-Haenszel Chi-Square statistic
- can be applied to any set of 2x2 tables
- Mantel-Haenszel combines the series of 2x2 tables to estimate an odds ratio
- independent-groups t-test is not appropriate for Kaplan-Meier because survival time (denominator) is not normally distributed, and tends to be positively skewed. [D&T p220]
- Constant Hazard Ratio of Hazard Rates between groups; Constant (or Proportional) Hazard Rate in each group
- Hazard Ratio for comparing two groups [p221]
- ratio of proportions (proportion of outcomes in at risk group to proportion of outcomes in not at risk group for bi-valued risk)
- (O1/ E1) / (O2/ E2)
- Oi is observed
- Ei is expected
- ratio of rates
- constant Hazard Ratio of Hazard Rates between groups
- assumes constant Hazard Rates of events in each group throughout the time of study
- or could also have non-constant but proportional Hazard rates to make the ratio constant (Cox)
- allows censored observations
- interpreted as odds ratio between groups if bi-valued (binary) outcome
- HR can be calculated from logrank statistics [p221]
- used in Cox Proportional Model
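- a minimal sketch of the hazard ratio arithmetic; the observed and expected totals are hypothetical numbers of the kind produced by the logrank calculation sketched further below:

    # hypothetical observed and expected event totals for two groups
    O1, E1 = 18.0, 24.3    # exposed/treated group
    O2, E2 = 30.0, 23.7    # comparison group

    HR = (O1 / E1) / (O2 / E2)          # ratio of observed-to-expected event rates
    print(f"hazard ratio = {HR:.2f}")   # < 1 means fewer events than expected in group 1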
- Logrank Statistic
- also called
Mantel logrank statistic
Cox-Mantel logrank statistic (more general use than just survival curves)
- assumes constant hazard ratio between groups throughout time of study
- rate in each group may vary , but rates stay in constant ratio
- ie, even if the rates are non-constant, their ratio stays constant (the rates are at least proportional)
- for each interval, the number of events observed in each group is compared with the number of events expected in each group (the expected number apportions the total events from all groups combined to each group according to its share of the patients at risk, as if group membership did not matter), and these are used to calculate a chi-square statistic to test for significance (see the sketch at the end of this list)
- at fixed intervals (or computer programs can determine at each event instead)
- determine the number at risk in that jth interval
- remove those not in study at start of that time interval from the number at risk
- died in previous interval
- censored because of event/outcome in previous interval
- censored because of attrition
- calculate the number of observed events Oij in the jth interval, for failures/events/deaths in each ith group
- calculate the number of expected events Eij in the jth interval, for failures/events/deaths in each ith group
- divides up the total expected events for the interval among the groups according to the proportion of patients at risk in each group (as if by chance)
- Eij = (total # of events/deaths in all groups in the jth interval) * (proportion of the patients at risk in the jth interval who are in the ith group)
- total the numbers of observed events Oij and expected events Eij over all j intervals for each i group to get Oi, Ei
- Oi, Ei are sums over all periods (or events) for each group
- sums these Oi , Ei totals in an approximate Chi-square test for significance (if distributions are not the same as expected)
- Chi-square statistic for events over all intervals
X^2 = SUM over all groups [ (Oi - Ei)^2 / Ei ]
- approximates a Chi-square distribution with N-1 degrees of freedom (N is # of groups)
- Oi is the sum of the observed events over all j intervals for the ith group
- Ei is the sum of the expected events over all j intervals for the ith group
- X^2 exceeding the critical chi-square value for N-1 degrees of freedom indicates the distributions are not the same and there is a difference between groups
- alternate rewrite: X^2 = SUM [ Ei * (1 - Oi/Ei)^2 ]
- the expected value times the square of (1 minus the observed-to-expected fraction)
- for a bi-valued outcome (yes/no, 1 degree of freedom):
X^2 = (Oyes - Eyes)^2 / Eyes + (Ono - Eno)^2 / Eno
- X^2 > ~7 indicates observed is different from expected at the 0.01 level (1%)
- allows censored observations
- better with exponential distributions [p224]
- unweighted
- Peto logrank test
- weighted logrank test
- weighted by number of patients at risk
- gives more weight to early events, when the number of patients at risk is large (the larger numbers of patients happen to be at the beginning of the study period, and since time and events are mixed in the survival function they cannot be separated)
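- a minimal Python sketch of the unweighted logrank calculation described above, using hypothetical (time, event, group) observations (event=False means censored):

    data = [(2, True, 'A'), (4, True, 'A'), (6, False, 'A'), (9, True, 'A'), (12, False, 'A'),
            (1, True, 'B'), (3, True, 'B'), (3, True, 'B'), (5, True, 'B'), (8, False, 'B')]

    groups = ['A', 'B']
    O = {g: 0.0 for g in groups}   # observed events per group
    E = {g: 0.0 for g in groups}   # expected events per group

    for t in sorted({t for t, e, g in data if e}):                 # each distinct event time
        at_risk = {g: sum(1 for tt, e, gg in data if tt >= t and gg == g) for g in groups}
        events  = {g: sum(1 for tt, e, gg in data if tt == t and e and gg == g) for g in groups}
        d_total, n_total = sum(events.values()), sum(at_risk.values())
        for g in groups:
            O[g] += events[g]
            E[g] += d_total * at_risk[g] / n_total   # expected = total events * share at risk

    chi2 = sum((O[g] - E[g]) ** 2 / E[g] for g in groups)   # approximate chi-square, N-1 df
    print(f"O={O}  E={E}  chi-square={chi2:.2f}")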
- Mantel-Haenszel test (chi-square statistic) [D&T p223]
- non-constant hazard rates OK but assumes a constant hazard ratio
- sometimes called logrank test but not the same
- test for homogeneity of a relationship between 2 factors across changes in a third factor; is the relationship the same across levels of the third factor
- time intervals determined by events
- calculate the observed and expected numbers each time an event occurs
- that is, compare the share of the events at each event time that falls in each group
- approximate test (compares to expected based on average for the period, a pooled odds ratio) [p222]
- unweighted
- similar results to logrank tests
- can be used to compare any distributions [p223]
- MH Chi-Square = [ SUM(observed numbers) - SUM(expected numbers) ]^2 / SUM(variances), with the sums taken over every time period
- Would be subject to small number (frequency) restrictions of chi-square analysis?
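- a minimal Python sketch of this Mantel-Haenszel form for two groups, reusing the hypothetical data from the logrank sketch above; the variance term used is the standard hypergeometric variance at each event time:

    data = [(2, True, 'A'), (4, True, 'A'), (6, False, 'A'), (9, True, 'A'), (12, False, 'A'),
            (1, True, 'B'), (3, True, 'B'), (3, True, 'B'), (5, True, 'B'), (8, False, 'B')]

    U = 0.0   # sum over event times of (observed - expected) events in group A
    V = 0.0   # sum over event times of the variance
    for t in sorted({t for t, e, g in data if e}):
        nA = sum(1 for tt, e, g in data if tt >= t and g == 'A')        # at risk in A
        nB = sum(1 for tt, e, g in data if tt >= t and g == 'B')        # at risk in B
        dA = sum(1 for tt, e, g in data if tt == t and e and g == 'A')  # events in A
        dB = sum(1 for tt, e, g in data if tt == t and e and g == 'B')  # events in B
        n, d = nA + nB, dA + dB
        U += dA - d * nA / n                                   # observed - expected in A
        if n > 1:
            V += nA * nB * d * (n - d) / (n ** 2 * (n - 1))    # hypergeometric variance

    chi2 = U ** 2 / V    # Mantel-Haenszel chi-square, 1 degree of freedom
    print(f"MH chi-square = {chi2:.2f}")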
- Cox Proportional Hazard Model
- assumes the hazard ratio between groups for risk of an event is constant throughout the study (p214 Dawson and Trapp)
- constant hazard ratios ( rates are proportional )
- allows censored observations
- uses the Hazard Function to evaluate the length of time to event
- entry point problem (start point)
- divides periods into intervals
- mismatch endpoint problem for time until event if less than a year, when the analysis is done in yearly segments (like compounding interest yearly instead of daily)
- all patients with an event are treated as if the event happened at the one-year mark, whether in the first month or on the last day of the period
- for example, a patient dying on day 357 after entering is not counted in a tally of one-year survival with the Cox model since they did not survive into the second year
- the Kaplan-Meier product-limit method corrects this by giving credit for time up to the actual event rather than to the containing interval
- see Chapter 10
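- a minimal sketch of fitting a Cox proportional-hazards model, assuming the third-party lifelines and pandas packages are installed; the data frame values are hypothetical:

    import pandas as pd
    from lifelines import CoxPHFitter

    # hypothetical data: follow-up time, event indicator (1=event, 0=censored), group
    df = pd.DataFrame({
        "time":      [2, 4, 6, 9, 12, 1, 3, 3, 5, 8],
        "event":     [1, 1, 0, 1, 0, 1, 1, 1, 1, 0],
        "treatment": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()    # exp(coef) for treatment is the estimated hazard ratio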
- Non-constant (arbitrary) Hazard Ratio between groups (the Hazard Rates of the groups need not be proportional)
- Methods that are difficult to quantify statistically
- Person-years [p214 D&T]
- each patient contributes to the denominator for however long they are in the study
- an event marks one in the numerator, with the time observed contributing to the denominator
- number of events divided by the total observation time (person-years) of all patients
- used to compare numbers for a period with a different time period or from another study
- no statistical methods available to compare these numbers however
- mixes time and number
- same number is obtained by observing 1000 patients for 1 year as observing 10 patients for 100 years
- assumes the chance of an event is constant throughout the study
- risk of exposure
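- a minimal sketch of the person-years arithmetic, showing the ambiguity noted above (the counts are hypothetical):

    events = 30
    person_years_a = 1000 * 1.0    # 1000 patients observed for 1 year each
    person_years_b = 10 * 100.0    # 10 patients observed for 100 years each

    # both follow-up patterns give the same denominator, hence the same rate,
    # even though the exposure experience is very different
    print(events / person_years_a, events / person_years_b)   # 0.03 per person-year in both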
- 1-Year, 5-Year Survival(Mortality) Rates
- endpoint problem for patients that don't stay in the group for that total time period
- lose partial participants (have to be in study at least that long)
- Life Table and Kaplan-Meier product limit methods give credit for the amount of time subjects survive up to the time when data are analyzed.
- Generalized Wilcoxon test
- non-constant hazard ratios OK
- other names for this test
- Generalized Kruskal-Wallis test
- Gehan test
- Breslow test
- extension of Wilcoxon rank sum test allowing for censored data
- weights earlier events more
- Intention-to-Treat Principle [p228]
- The results for each patient who entered the trial are included in the analysis of the group to which the patient was randomized, regardless of any subsequent events.
- dropouts
- possible that the patients who dropped out of the treatment group had some characteristics that, independent of treatment, could affect the outcome
- crossovers
- patients cross over from one treatment group to the other (or have poor compliance if comparing to a placebo group)
- do not know why cross-overs occur
- Comparison of techniques
- analyze the patients by group they were randomized to (intention-to-treat)
- analyze the patients by group they ended up in at end of study
- eliminate all crossovers
- both of these approaches are potentially biased
- advocates of evidence-based medicine recommend intention-to-treat
- [ISLT] these issues are more problematic than is currently addressed