Markov renewal process


In probability and statistics, a Markov renewal process (MRP) is a random process that generalizes the notion of Markov jump processes. Other random processes, such as Markov chains, Poisson processes, and renewal processes, can be derived as special cases of an MRP.

Definition

[Figure: An illustration of a Markov renewal process]

Consider a state space \mathrm{S} and a sequence of random variables (X_n,T_n), where T_n are the jump times and X_n are the associated states of the Markov chain (see Figure). Let the inter-arrival times be \tau_n=T_n-T_{n-1}. Then the sequence (X_n,T_n) is called a Markov renewal process if

\Pr(\tau_{n+1}\le t, X_{n+1}=j|(X_0, T_0), (X_1, T_1),\ldots, (X_n=i, T_n))
 =\Pr(\tau_{n+1}\le t, X_{n+1}=j|X_n=i)\, \forall n \ge1,t\ge0, i,j \in \mathrm{S}
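
To make the definition concrete, the following is a minimal simulation sketch in Python. The embedded transition matrix P and the Gamma-distributed sojourn times are illustrative assumptions only, not part of the definition; any kernel satisfying the property above would do. Here the sojourn time depends only on the current state X_n (and not on the next state), a common simplifying choice that still satisfies the definition.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ingredients (illustrative only): embedded transition matrix P
    # and state-dependent Gamma parameters for the sojourn times.
    P = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.4, 0.6, 0.0]])    # P[i, j] = Pr(X_{n+1} = j | X_n = i)
    shape = np.array([1.0, 2.0, 3.0])  # Gamma shape of the sojourn time in state i
    scale = np.array([1.0, 0.5, 0.2])  # Gamma scale of the sojourn time in state i

    def simulate_mrp(n_jumps, x0=0):
        """Simulate the sequence (X_n, T_n) for n = 0, ..., n_jumps."""
        X, T = [x0], [0.0]
        for _ in range(n_jumps):
            i = X[-1]
            tau = rng.gamma(shape[i], scale[i])  # inter-arrival time tau_{n+1}
            j = rng.choice(len(P), p=P[i])       # next state, depends only on X_n = i
            X.append(j)
            T.append(T[-1] + tau)
        return np.array(X), np.array(T)

    X, T = simulate_mrp(10)
    print(list(zip(X, np.round(T, 2))))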

Relation to other stochastic processes

  1. If we define a new stochastic process Y_t:=X_n for t \in [T_n,T_{n+1}), then the process Y_t is called a semi-Markov process. The main difference between an MRP and a semi-Markov process is that the former is defined as a two-tuple of states and times, whereas the latter is the actual random process that evolves over time, so that any realisation of the process has a defined state at every time. The process Y_t is not Markovian (i.e., memoryless) at all times, as a CTMC is; it is Markovian only at the jump instants T_n. This is the rationale behind the name semi-Markov.[1][2][3] (See also: hidden semi-Markov model.) A construction of Y_t from a realisation of an MRP is sketched after this list.
  2. A semi-Markov process (as defined in the previous item) in which all the holding times are exponentially distributed is called a continuous-time Markov chain (CTMC). In other words, if the inter-arrival times are exponentially distributed and the waiting time in a state is independent of the next state reached, we have a CTMC (see the second sketch after this list).
    \Pr(\tau_{n+1}\le t, X_{n+1}=j|(X_0, T_0), (X_1, T_1),\ldots, (X_n=i, T_n))=\Pr(\tau_{n+1}\le t, X_{n+1}=j|X_n=i)
    =\Pr(X_{n+1}=j|X_n=i)(1-e^{-\lambda_i t}), \text{ for all } n \ge1,t\ge0, i,j \in \mathrm{S}
  3. The sequence X_n in the MRP is a discrete-time Markov chain (DTMC). In other words, if the time variables are ignored in the MRP equation, we end up with a DTMC (see the first sketch after this list).
    \Pr(X_{n+1}=j|X_0, X_1, \ldots, X_n=i)=\Pr(X_{n+1}=j|X_n=i)\, \forall n \ge1, i,j \in \mathrm{S}
  4. If the inter-arrival times \tau_n are independent and identically distributed, and if their distribution does not depend on the state X_n, then the process is a renewal process. So, if the states are ignored and we are left with a sequence of iid times, we have a renewal process (also illustrated in the first sketch after this list).
    \Pr(\tau_{n+1}\le t|T_0, T_1, \ldots, T_n)=\Pr(\tau_{n+1}\le t)\, \forall n \ge1, \forall t\ge0
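
The constructions in items 1, 3 and 4 can be read off directly from a realisation of an MRP. Below is a minimal sketch, assuming an arbitrary example realisation (X_n, T_n); the values are made up and could equally come from the simulation sketch in the Definition section.

    import numpy as np

    # Example realisation of an MRP: states X_n and jump times T_n (arbitrary values).
    X = np.array([0, 2, 1, 2, 0, 1])
    T = np.array([0.0, 0.8, 2.1, 2.9, 4.4, 6.0])

    def semi_markov_state(t, X, T):
        """Item 1: Y_t := X_n for t in [T_n, T_{n+1})."""
        n = np.searchsorted(T, t, side="right") - 1
        return X[n]

    # Item 1: the semi-Markov process holds each state over a whole interval.
    for t in [0.5, 2.5, 5.0]:
        print(f"Y({t}) = {semi_markov_state(t, X, T)}")

    # Item 3: dropping the times leaves the embedded discrete-time Markov chain X_n.
    embedded_chain = X

    # Item 4: dropping the states leaves the inter-arrival times tau_n = T_n - T_{n-1};
    # if these are iid and independent of the states, the T_n form a renewal process.
    inter_arrival = np.diff(T)
    print(embedded_chain, inter_arrival)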
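
For item 2, the following sketch replaces the generic sojourn-time distribution with an exponential one. The rate vector lam (the \lambda_i above) and the matrix P are again hypothetical example values; with exponential holding times that are independent of the next state, the simulated process is a CTMC.

    import numpy as np

    rng = np.random.default_rng(1)

    P = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.4, 0.6, 0.0]])   # embedded jump chain
    lam = np.array([1.0, 2.0, 0.5])   # exponential holding rate lambda_i of state i

    def simulate_ctmc(n_jumps, x0=0):
        """An MRP whose holding times are Exp(lambda_i), independent of the next state: a CTMC."""
        X, T = [x0], [0.0]
        for _ in range(n_jumps):
            i = X[-1]
            T.append(T[-1] + rng.exponential(1.0 / lam[i]))  # tau_{n+1} ~ Exp(lambda_i)
            X.append(rng.choice(len(P), p=P[i]))             # next state drawn independently of tau
        return np.array(X), np.array(T)

    X, T = simulate_ctmc(5)
    print(list(zip(X, np.round(T, 2))))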

References and Further Reading
