
### On Competing Risks and Degradation Processes (A Conceptual Framework)

Nozer D. Singpurwalla

The George Washington University

### Choice of Topic and Overview

- Richard Johnson, in whose honor this meeting is held, has provided much service to the profession via Good Citizenship, Editorship, and Research Scholarship, much of it in the arena of Reliability, Survival Analysis, Life Testing, and Multivariate Analysis.
- With respect to the above, his work has always been motivated by practicality and reality, making it all scientifically meritorious; I admire this trait!
- The topic of my talk today is an attempt to emulate Richard’s scientific persona, and also to appeal to the interests of survival analysts and other statisticians working on material degradation and biomarkers.

The general plan of my talk is as follows:

i) Introduce an idea which will serve as a hook for discussing competing risks, survival analysis, degradation modeling, and interdependence under a common umbrella.

ii) Develop a framework for assessing survival functions under dependent and independent competing risks and competing risk processes.

iii) Introduce a perspective for looking at degradation and marker processes that is different from the prevailing one.

iv) Invoke iii) above in the context of an example.

v) Discuss a general plan for conducting a Bayesian analysis of data generated by Brownian motion with drift.

### 1. Preliminaries

- Let:

T = failure time of an item scheduled to operate in a static environment,

h(t) = hazard rate function of T,

H(t) = ∫_0^t h(u)du = cumulative hazard function.

- With h(t) specified we know that P(T > t) = exp(-H(t)).
- Let X = H(T); then X ~ exp(1).
- Thus P(T > t) = P(X > H(t)).

Interpretation of Result.

Item fails when its cumulative hazard H(t), t≥0 crosses a random threshold X, where X~exp(1).

- I call X the hazard potential of the item and view it as an unknown resource that the item is endowed with at inception. The item fails when this resource gets depleted.
- H(t) is viewed as the amount of resource that is consumed by time t, and h(t) the rate at which the resource gets consumed.
- The operating environment is declared to be normal (accelerated) [decelerated] when H(t)=(>)[<]t.
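The interpretation above can be checked by simulation: draw X ~ exp(1) and read off the failure time as the instant H(t) crosses X. A minimal Python sketch, assuming for illustration the cumulative hazard H(t) = t^2 (an accelerated environment, whose induced lifetime is Weibull with shape 2):

```python
import math
import random

def sample_failure_time(H_inverse, rng):
    """Draw the hazard potential X ~ exp(1) and return the failure
    time T = H^{-1}(X): the instant the cumulative hazard crosses X."""
    x = rng.expovariate(1.0)  # the item's unknown resource
    return H_inverse(x)

# Assumed illustrative cumulative hazard: H(t) = t**2, so H^{-1} = sqrt,
# and the induced lifetime is Weibull with shape 2 (mean = Gamma(3/2)).
rng = random.Random(0)
times = [sample_failure_time(math.sqrt, rng) for _ in range(100_000)]
mean_T = sum(times) / len(times)
```

The empirical mean of the sampled lifetimes is close to Γ(3/2) ≈ 0.886, as the Weibull connection predicts.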

With X interpreted as an unknown resource, life-times T1 and T2 can be seen as being dependent because their corresponding X1 and X2 are dependent.

- Dependence between X1 and X2 could be due to commonalities of design, manufacture, or genetic make up (in the case of biological entities).

When the operating environment is dynamic, the rate at which an item’s resource gets consumed is random. Thus h(t), t≥0, is best described as a stochastic process.

- Consequently, H(t), t≥0 is also described by a continuously non-decreasing stochastic process, called the cumulative hazard process.
- Thus, under a dynamic environment, an item fails when the process {H(t); t≥0} hits a random threshold X, where X~exp(1).
- Candidate processes for H(t), t≥0 are a Lévy process or the maximum of a Wiener process.

Under the said interpretation, the hazard potential can be likened to a latent variable (a hidden parameter, to physicists), and the failure phenomenon to the hitting time of a non-decreasing stochastic process to a random threshold.

- Note that the process {H(t),t≥0} is an unobservable process and is not to be seen as an observable phenomenon, such as crack growth, CD-4 cell counts or some marker process.
- Rather, {H(t);t≥0} is seen as the cause of failure in a manner akin to Laplace’s interpretation of p in Bernoulli trials as the cause of an observed outcome (heads or tails).

### 2. Competing Risks and Competing Risk Processes.

- By competing risks one means the failure of an entity due to several agents (or causes) that presumably compete with each other for the entity’s lifetime.
- An archetypal scenario for discussing competing risks is the lifetime of a series system with independent or dependent component lifetimes. Since the failure of any component leads to system failure, the system is seen to experience multiple risks, the realization of any one of which will lead to system failure.
- Our approach to articulating competing risks for the series system scenario is different.

Let Xi be the hazard potential of component i, and Hi(t) the cumulative hazard experienced by its lifetime Ti, so that Ti = inf{t: Hi(t) ≥ Xi}. Let T = min(T1,…,Tk) be the lifetime of the system. Then

P(T > t) = P(X1 > H1(t),…, Xk > Hk(t)).

- If the Xi’s are assumed to be independent (a far-fetched assumption) then

P(T > t) = exp(-[H1(t) + … + Hk(t)]),

- an additivity of the cumulative hazards (or risks, since Hi(t), being a depletor of resource, is de facto a risk).

If the Xi’s are dependent, then the cumulative risks will not be additive. For example, if (X1, X2) have Gumbel’s bivariate exponential with

P(X1 > x1, X2 > x2) = exp(-x1 - x2 - δx1x2), 0 ≤ δ ≤ 1,

then

P(T > t) = exp(-[H1(t) + H2(t) + δH1(t)H2(t)]).
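The contrast between the independent and the dependent case can be made concrete in a few lines of Python; the linear cumulative hazards Hi(t) = t below are an assumption made purely for illustration:

```python
import math

def series_survival_indep(H1, H2, t):
    """P(T > t) for a two-component series system with independent
    hazard potentials: the cumulative hazards simply add."""
    return math.exp(-(H1(t) + H2(t)))

def series_survival_gumbel(H1, H2, t, delta):
    """P(T > t) when (X1, X2) follow Gumbel's bivariate exponential,
    P(X1 > x1, X2 > x2) = exp(-x1 - x2 - delta*x1*x2), 0 <= delta <= 1:
    the risks pick up the non-additive cross term delta*H1(t)*H2(t)."""
    h1, h2 = H1(t), H2(t)
    return math.exp(-(h1 + h2 + delta * h1 * h2))

H = lambda t: t  # assumed linear cumulative hazard, for illustration
s_ind = series_survival_indep(H, H, 1.0)        # exp(-2)
s_dep = series_survival_gumbel(H, H, 1.0, 0.5)  # exp(-2.5)
```

Dependence of this (negative) kind lowers the system's survival probability relative to the additive, independent case.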

- The series system scenario has also been used to conceptualize the failure of a single item experiencing several causes of failure that compete with each other.
- The argument used involves the generation of several lifetimes, one for each cause, and then assuming the lifetimes dependent or independent.
- This line of reasoning – apparently palatable to biostatisticians – seems contrived.

Our argument proceeds along the following lines:

- Suppose the item experiences k failure-causing agents C1,…,Ck.
- Let Hi(t) be the risk posed by agent Ci, were it to be the only agent acting on the item.
- Then, since a single item can possess only one hazard potential X, its lifetime T is given as:

T = inf{t: max(H1(t),…,Hk(t)) ≥ X}, so that P(T > t) = exp(-max(H1(t),…,Hk(t))).

- Contrast this with the biostatistician’s result that

P(T > t) = exp(-[H1(t) + … + Hk(t)]);

since H1(t) + … + Hk(t) ≥ max(H1(t),…,Hk(t)), the risk is overestimated.
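Numerically, the two formulas are easy to compare. In the sketch below (the risks Hi(t) = t and 2t are illustrative assumptions), the hazard-potential reading keys survival to the largest risk, while the conventional formula adds the risks:

```python
import math

def survival_single_item(Hs, t):
    """P(T > t) for one item with a single hazard potential X ~ exp(1):
    the item survives t only if no risk has yet crossed X, so survival
    is governed by the largest of the risks."""
    return math.exp(-max(H(t) for H in Hs))

def survival_added_risks(Hs, t):
    """The conventional competing-risks formula, which adds the risks
    as if each cause carried its own independent hazard potential."""
    return math.exp(-sum(H(t) for H in Hs))

Hs = [lambda t: t, lambda t: 2.0 * t]  # assumed illustrative risks
p_single = survival_single_item(Hs, 1.0)  # exp(-2)
p_added = survival_added_risks(Hs, 1.0)   # exp(-3): risk overestimated
```

The added-risks survival probability is always the smaller of the two, which is the overestimation of risk referred to above.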

### Dependent Competing Risks on a Single Item.

- In order to introduce dependence between the risks operating on a single item, we need to expand the framework described above and make the Hi(t)’s (stochastically) interdependent.

- To do so we need to endow the Hi(t)’s with a probabilistic structure.
- One strategy would be to conceptualize each of the Hi(t)’s as a stochastic process that we call a risk process, and the collection {H1(t),…,Hk(t); t≥0} a (dependent) competing risk process.

Since the Hi(t) are non-decreasing in t, one possible choice for the process {Hi(t); t≥0} could be a Brownian maximum process, i.e.

Hi(t) = max{Wi(s); 0 ≤ s ≤ t},

where Wi(s) is a standard Brownian motion process.

- Dependence between the competing risk processes can be induced via a dependence between the standard Brownian motion processes, namely the cross correlation between the processes.
- Another possibility is to suppose that {H2(t); t≥0} is of the form of an impulse function of type H2(t)=0, for t<t*, and H2(t)=∞, for t≥t*, where t*>0 is a random time.

The rate of occurrence of the impulse at time t depends on H1(t). The process {H2(t);t≥0} can be seen as some sort of a traumatic event which competes with the process {H1(t);t≥0} for the lifetime of the item.

- In the absence of trauma, the item fails when the process {H1(t);t≥0} hits the item’s hazard potential, X.
- If we suppose that the probability of occurrence of an impulse in the interval [t, t+h), given that H1(t)=w, is 1 - exp(-wh), then for X=x we can show that

P(T > t | X = x) = E[exp(-∫_0^t H1(s)ds) I(H1(t) < x)],

where IA(.) is the indicator of the set A, and the expectation is with respect to the distribution of the process {H1(t); t≥0}.

- As a special case, when {H1(t); t≥0} is a gamma process, the above expectation can be evaluated in closed form.
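Even without a closed form, the expectation can be explored by Monte Carlo. The sketch below assumes a gamma process for {H1(t)} with stationary Gamma(c·dt, scale) increments; all parameter values are illustrative assumptions:

```python
import math
import random

def survival_given_x(t, x, c=1.0, scale=1.0, n_paths=4000, n_steps=100, seed=1):
    """Monte Carlo estimate of
        P(T > t | X = x) = E[ exp(-integral_0^t H1(s) ds) * I(H1(t) < x) ],
    simulating {H1(t)} as a gamma process with independent
    Gamma(c*dt, scale) increments (an assumed illustrative choice)."""
    rng = random.Random(seed)
    dt = t / n_steps
    acc = 0.0
    for _ in range(n_paths):
        h = 0.0          # current value of H1
        integral = 0.0   # running integral of H1 over [0, t]
        for _ in range(n_steps):
            h += rng.gammavariate(c * dt, scale)
            integral += h * dt
        if h < x:                       # no failure from H1 itself ...
            acc += math.exp(-integral)  # ... weighted by P(no impulse | path)
    return acc / n_paths                # paths with h >= x contribute zero

p1 = survival_given_x(t=1.0, x=2.0)
```

Each simulated path contributes the probability of no traumatic impulse along that path, and only if the risk process itself has not yet reached the hazard potential x.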

### 3. Biomarkers and Degradation Processes

- A topic of current interest in both reliability and survival analysis is the assessment of lifetimes based on observable surrogates, such as crack length, and biomarkers, such as CD4 cell counts.
- Here again the hazard potential provides a platform for looking at the interplay between the unobservable failure causing phenomenon, and the observable surrogate.
- The above interplay is possible if we assume some form of dependence (or a link) between the two processes. But first, a few words about what we mean by the terms degradation and ageing.

To material scientists, degradation is the irreversible accumulation of damage throughout life that leads to failure.

- The term damage is not defined; however, it is claimed that damage manifests itself via surrogates such as cracks, corrosion, and wear. That is, cracks and corrosion do not represent damage but are its consequences.
- In the biosciences, the notion of ageing pertains to a unit’s position in a state-space wherein the probabilities of failure are greater than in a former position.
- Ageing manifests itself in terms of biomarkers such as difficulties experienced by individuals.

With the above as background, our proposal is to conceptualize ageing and degradation as unobservable constructs, or as latent variables that serve to describe a process that results in failure.

- These constructs can be seen as the cause of the observable surrogates like cracks, corrosion, and biomarkers.
- The above viewpoint is not in keeping with the prevailing view that degradation is an observable phenomenon that reveals itself in the guise of crack length and CD4 counts, and that an item fails when the observable phenomenon hits some threshold (whose value is not specified) [cf. Doksum, Meeker, et al.].
- Our view separates the observable and the unobservable, and attributes failure as a consequence of the unobservable hitting a random threshold X whose distribution is exponential(1).

### 3.1. The Underlying Model

We conceptualize the unobservable cumulative hazard function as degradation or ageing, and the biomarker or the surrogates as an observable process that is influenced by the unobservable process.

- The item fails when the cumulative hazard function hits the item’s hazard potential X, where X has an exponential(1) distribution.
- We now introduce the degradation process as a bivariate stochastic process {H(t), Z(t); t≥0} with H(t) representing the unobservable degradation and Z(t) the observable marker. H(t) needs to be non-decreasing; Z(t) need not be so. For Z(t) to be useful as a predictor of failure, we need to relate H(t) and Z(t).

One strategy for linkage is to describe {Z(t); t≥0} by a Wiener process (cracks do heal and CD4 cell counts do fluctuate), and the process {H(t); t≥0} by a Wiener maximum process, namely

H(t) = max{Z(s); 0 ≤ s ≤ t}.

Note that H(0)=0, H(t)≥0 for all t, and H(t) is non-decreasing.

- Since {Z(t); t≥0} is an observable process, how may one use observations on this process, until some time t*, say, to make inferences about

P(T > t* + u), for u > 0,

where T is the item’s time to failure?

This is the underlying inferential problem.

### 3.2. Hitting Time of a Wiener Maximum Process

- Suppose that the observable marker process Zt is described by a Wiener process with a drift η and diffusion σ2>0. Let Tx be the hitting time of Zt to a threshold x.
- Then it is known that, given η and σ2, Tx

has an Inverse Gaussian distribution with parameters μ and λ, where μ = x/η and λ = x^2/σ2 [cf. Doksum (1991)]; η>0 ensures that the above are finite.
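For computation, the Inverse Gaussian survival function has a closed form in terms of the standard normal distribution function; the sketch below encodes it in the hitting-time parametrization μ = x/η, λ = x^2/σ2 (the parameter values in the example are assumptions):

```python
import math

def _Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hitting_time_survival(t, x, eta, sigma2):
    """P(Tx > t): survival of the first-passage time of a Wiener process
    with drift eta > 0 and diffusion sigma2 > 0 to the level x > 0,
    i.e. an Inverse Gaussian with mu = x/eta and lam = x**2/sigma2."""
    mu = x / eta
    lam = x * x / sigma2
    cdf = (_Phi(math.sqrt(lam / t) * (t / mu - 1.0))
           + math.exp(2.0 * lam / mu) * _Phi(-math.sqrt(lam / t) * (t / mu + 1.0)))
    return 1.0 - cdf

s = hitting_time_survival(t=1.0, x=1.0, eta=1.0, sigma2=1.0)  # ≈ 0.332
```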

We now turn attention to Ht, the failure-causing process. Since Ht is the running maximum of Zt, the hitting time of Ht to a threshold X will coincide with the hitting time of Zt to X.

- However, X~exp(1), and thus if T is the time to failure of the item, then

P(T > t) = ∫_0^∞ P(Tx > t) e^(-x) dx,

an average of Inverse Gaussian survival functions over the hazard potential.

[Figure: The IG-distribution with thresholds x=1,…,5 and the averaged IG-distribution.]
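The averaging over the hazard potential can be carried out by simple quadrature against the exp(1) density; the sketch below (with assumed values of η and σ2) uses the closed-form IG survival function:

```python
import math

def _Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ig_survival(t, x, eta, sigma2):
    """Survival of the IG hitting-time distribution to level x,
    with mu = x/eta and lam = x**2/sigma2."""
    mu, lam = x / eta, x * x / sigma2
    cdf = (_Phi(math.sqrt(lam / t) * (t / mu - 1.0))
           + math.exp(2.0 * lam / mu) * _Phi(-math.sqrt(lam / t) * (t / mu + 1.0)))
    return 1.0 - cdf

def averaged_survival(t, eta, sigma2, x_max=30.0, n=3000):
    """P(T > t) = integral_0^inf P(Tx > t) e^{-x} dx: the IG survival
    averaged over the exp(1) hazard potential (trapezoidal quadrature;
    the upper limit x_max truncates a tail of mass e^{-x_max})."""
    dx = x_max / n
    total = 0.0
    for i in range(n + 1):
        x = max(i * dx, 1e-9)           # avoid the degenerate x = 0
        w = 0.5 if i in (0, n) else 1.0
        total += w * ig_survival(t, x, eta, sigma2) * math.exp(-x) * dx
    return total

p_avg = averaged_survival(t=1.0, eta=1.0, sigma2=1.0)
```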

### 3.3. Inferential Issues

- The unknowns of our set-up are η, σ2 and X; the data are the observations on the marker process Zt and the time at which the item fails, namely T.
- Also, Zt can only be observed prior to or until T, and at T=t we know that Ht = X.
- In practice Zt cannot be monitored continuously, and observations on Zt provide us information about η, σ2 and also X (because for any t<T, we know that X>Zt).

Suppose the data are Z=(Z(t1),…,Z(tk)) for tk<T. Since X>Z(tk), our uncertainty about X is encapsulated by an exponential(1, Z(tk)) distribution, where Z(tk) is a location (or shift) parameter.

- Thus, in the case of a single item experiencing failure due to degradation whose marker process yields Z as data, our aim is to assess its residual life (T-tk).

This, under certain conditions, namely upholding the philosophical principle of conditionalization [cf. Singpurwalla (2007)], amounts to assessing an expression of the type

P(T > tk + u | Z, T > tk), for 0 < u < ∞.

- To assess an expression of the type P(T>t;Z) for some t>0, suppose that π(η,σ2,X;Z) encapsulates our uncertainty about η, σ2, and X in the light of Z [i.e. the posterior of η, σ2, and X].

Then, it can be argued – using some assumptions of conditional independence – that

P(T > t; Z) = ∫∫∫ [1 - Fx(t | η, σ2)] π(η, σ2, x; Z) dη dσ2 dx,

where Fx(t|η,σ2) is the Inverse Gaussian distribution with parameters x/η and x^2/σ2.

- Were η to take values between a>0 and b<∞, and since x>Z(tk), the above integral will be of the form

P(T > t; Z) = ∫_a^b ∫_{Z(tk)}^∞ [1 - Fx(t | η, σ2)] π(η, σ2, x; Z) dx dη.

The above integral needs to be solved numerically for any fixed t and specified values of a and b.
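A sketch of that numerical solution follows. It makes simplifying assumptions purely for illustration: σ2 is treated as known, the posterior of η is taken to be uniform on (a, b), and X retains its shifted exp(1) form; all three stand in for the proper posterior π(η, σ2, X; Z):

```python
import math

def _Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ig_survival(t, x, eta, sigma2):
    """1 - Fx(t | eta, sigma2): IG survival with mu = x/eta, lam = x**2/sigma2."""
    mu, lam = x / eta, x * x / sigma2
    cdf = (_Phi(math.sqrt(lam / t) * (t / mu - 1.0))
           + math.exp(2.0 * lam / mu) * _Phi(-math.sqrt(lam / t) * (t / mu + 1.0)))
    return 1.0 - cdf

def predictive_survival(t, z_tk, a, b, sigma2, n_eta=40, n_x=300, x_max=25.0):
    """Double trapezoidal quadrature of
      (1/(b-a)) * int_a^b int_{z_tk}^x_max [1 - Fx(t|eta, sigma2)] e^{-(x - z_tk)} dx d_eta,
    i.e. P(T > t; Z) with eta uniform on (a, b) (an assumed posterior) and
    X shifted-exp(1) on (z_tk, inf); requires z_tk > 0."""
    d_eta = (b - a) / n_eta
    dx = (x_max - z_tk) / n_x
    total = 0.0
    for i in range(n_eta + 1):
        eta = a + i * d_eta
        w_eta = 0.5 if i in (0, n_eta) else 1.0
        inner = 0.0
        for j in range(n_x + 1):
            x = z_tk + j * dx
            w_x = 0.5 if j in (0, n_x) else 1.0
            inner += w_x * ig_survival(t, x, eta, sigma2) * math.exp(-(x - z_tk)) * dx
        total += w_eta * inner * d_eta / (b - a)
    return total

p_pred = predictive_survival(t=1.0, z_tk=0.5, a=0.5, b=2.0, sigma2=1.0)
```

Evaluating this at t = tk and t = tk + u gives the ingredients for the residual-life assessment described next.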

- Once this is done, the residual life (T-tk) can be assessed by setting t=tk and t=tk+u, for u>0.
- An overall strategy for obtaining the posterior π(η,σ2,X;Z) by a judicious and meaningful choice of proper priors will be described to an audience of pukka Bayesians (i.e. Bayesians who reject the use of improper priors!).
- The strategy is de facto a proposed Bayesian approach for inference in the context of Brownian motion.
