Continuous-Time Markov Chains

Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013

5.6 Reversible CTMCs

A CTMC {X(t), −∞ < t < ∞} is said to be a reversible Markov chain if for any fixed τ and integer n ≥ 0 the sequence of states X(t_1), X(t_2), …, X(t_n) has the same probabilistic structure as the sequence X(τ − t_1), X(τ − t_2), …, X(τ − t_n). As discussed in Chapter 4, this means that a sequence of states when looked at backward in time has the same structure as the sequence running forward in time. As in the discrete-time analog discussed in Chapter 4, a CTMC is reversible if

v_ij p_i = v_ji p_j

As we discussed earlier, in the steady state the local balance condition for a birth and death process {X(t), t ≥ 0} states that

λ_i p_i = μ_{i+1} p_{i+1},   i = 0, 1, 2, …

Because for a birth and death process v_ij = λ_i and v_ji = μ_{i+1} (with j = i + 1), we observe that the reversibility condition has been met. Thus, a birth and death process is a reversible CTMC.
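The local balance condition is easy to check numerically. The sketch below builds the stationary distribution of a small truncated birth and death chain from illustrative rates (the numerical values are assumptions, not from the text) and verifies λ_i p_i = μ_{i+1} p_{i+1} for every adjacent pair of states:

```python
import numpy as np

# Illustrative rates for a chain truncated at state 4 (assumed values).
lam = np.array([1.0, 0.8, 0.6, 0.4])   # birth rates lambda_0..lambda_3
mu = np.array([0.9, 0.9, 0.9, 0.9])    # death rates mu_1..mu_4

# Stationary probabilities: pi_j proportional to
# (lambda_0 ... lambda_{j-1}) / (mu_1 ... mu_j)
pi = np.ones(5)
for j in range(1, 5):
    pi[j] = pi[j - 1] * lam[j - 1] / mu[j - 1]
p = pi / pi.sum()

# Local balance lambda_i p_i = mu_{i+1} p_{i+1} holds for every adjacent pair.
for i in range(4):
    assert np.isclose(lam[i] * p[i], mu[i] * p[i + 1])
print("local balance verified:", p)
```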

URL: https://www.sciencedirect.com/science/article/pii/B9780124077959000050

Stochastic Processes

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

1.4 Birth-and-Death Processes

The class of all continuous-time Markov chains has an important subclass formed by the birth-and-death processes. These processes are characterized by the property that whenever a transition occurs from one state to another, this transition can be to a neighboring state only. Suppose that the state space is S = {0, 1, 2, …, i, …}; then a transition from state i, whenever it occurs, can only be to the neighboring state (i − 1) or (i + 1).

A continuous-time Markov chain {X(t), t ∈ T} with state space S = {0,1, 2,…} and with rates

q_{i,i+1} = λ_i (say), i = 0, 1, …,
q_{i,i−1} = μ_i (say), i = 1, 2, …,
q_{i,j} = 0, j ≠ i ± 1, j ≠ i, i = 0, 1, …, and
q_{ii} = −(λ_i + μ_i), i = 0, 1, …, μ_0 = 0,

is called

(i)

a pure birth process, if μ_i = 0 for i = 1, 2, …,

(ii)

a pure death process, if λ_i = 0, i = 0, 1, …, and

(iii)

a birth-and-death process if some of the λ_i's and some of the μ_i's are positive.

Using (1.3.12) we get the Chapman-Kolmogorov forward equations for the birth-and-death process.

For i, j = 1,2,…,

(1.4.1) p′_ij(t) = −(λ_j + μ_j) p_ij(t) + λ_{j−1} p_{i,j−1}(t) + μ_{j+1} p_{i,j+1}(t)

and

(1.4.2) p′_i0(t) = −λ_0 p_i0(t) + μ_1 p_{i,1}(t).

The boundary conditions are

(1.4.3) p_ij(0+) = δ_ij, i, j = 0, 1, ….

Denote

P_j(t) = Pr{X(t) = j}, j = 0, 1, …, t > 0

and assume that at time t = 0, the system starts at state i, so that

(1.4.4) P_j(0) = Pr{X(0) = j} = δ_ij,

then

P_j(t) = p_ij(t),

and the forward equations can be written as

(1.4.5) P′_j(t) = −(λ_j + μ_j) P_j(t) + λ_{j−1} P_{j−1}(t) + μ_{j+1} P_{j+1}(t), j = 1, 2, …,

(1.4.6) P′_0(t) = −λ_0 P_0(t) + μ_1 P_1(t).
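As a sketch of how (1.4.5)-(1.4.6) can be handled numerically, the following integrates the forward equations for a chain truncated at state N; the constant rates and the truncation level are assumptions made purely for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 50                                  # truncation level (an assumption)
lam = lambda j: 1.0                     # lambda_j (illustrative constant rates)
mu = lambda j: 1.5                      # mu_j

def forward(t, P):
    dP = np.zeros(N + 1)
    dP[0] = -lam(0) * P[0] + mu(1) * P[1]                 # Eq. (1.4.6)
    for j in range(1, N):                                 # Eq. (1.4.5)
        dP[j] = (-(lam(j) + mu(j)) * P[j]
                 + lam(j - 1) * P[j - 1] + mu(j + 1) * P[j + 1])
    dP[N] = -mu(N) * P[N] + lam(N - 1) * P[N - 1]         # reflecting cutoff
    return dP

P0 = np.zeros(N + 1)
P0[0] = 1.0                             # start in state i = 0, as in (1.4.4)
sol = solve_ivp(forward, (0.0, 10.0), P0, rtol=1e-8)
print(sol.y[:5, -1])                    # P_0(10), ..., P_4(10)
```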

Suppose that all the λ_i's and μ_i's are nonzero. Then the Markov chain is irreducible. It can be shown that such a chain is non-null persistent and that the limits

lim_{t→∞} p_ij(t) = p_j

exist and are independent of the initial state i. Then Eqs. (1.4.5) and (1.4.6) become

(1.4.7) 0 = −(λ_j + μ_j) p_j + λ_{j−1} p_{j−1} + μ_{j+1} p_{j+1}, j ≥ 1

(1.4.8) 0 = −λ_0 p_0 + μ_1 p_1.

Define

(1.4.9) π_j = (λ_0 λ_1 ⋯ λ_{j−1})/(μ_1 μ_2 ⋯ μ_j), j ≥ 1, and π_0 = 1;

then the solution of the above can be obtained by induction. We have from (1.4.8)

p_1 = (λ_0/μ_1) p_0 = π_1 p_0

and assuming p_k = π_k p_0, k = 1, 2, …, j, we get from (1.4.7)

p_{j+1} μ_{j+1} = λ_j π_j p_0, or p_{j+1} = π_{j+1} p_0.

Thus, if Σ_{k=0}^∞ π_k < ∞, then

(1.4.10) p_j = π_j / Σ_k π_k, j ≥ 0.

Incidentally, Σ π_k < ∞ is a sufficient condition for the birth-and-death process to have all the states non-null persistent (and therefore for the process to be ergodic).
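A minimal sketch of (1.4.9)-(1.4.10), computing p_j by truncating the normalizing sum Σ π_k at a large N (the rate functions below are illustrative assumptions, not from the text):

```python
import numpy as np

def bd_stationary(lam, mu, N=200):
    """p_j from (1.4.9)-(1.4.10), truncating the normalizing sum at N."""
    pi = np.ones(N + 1)
    for j in range(1, N + 1):
        pi[j] = pi[j - 1] * lam(j - 1) / mu(j)
    return pi / pi.sum()

# Illustrative rates (assumed): lambda_j = 1, mu_j = 2 for all j.
p = bd_stationary(lambda j: 1.0, lambda j: 2.0)
print(p[:5])                            # approximately (1/2) * (1/2)**j
```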

This process is of particular interest in queueing theory as several queueing systems can be modeled as birth-and-death processes. As an example, we consider the simple queue.

1.4.1 Special case: M/M/1 queue

For this queueing model

λ_j = λ, j = 0, 1, 2, …; μ_i = μ, i = 1, 2, …; and μ_0 = 0;

then π_j = (λ/μ)^j and Σ π_k < ∞ iff (λ/μ) < 1, and then

(1.4.11) Σ π_k = 1/[1 − (λ/μ)], p_j = [1 − (λ/μ)](λ/μ)^j, j = 0, 1, 2, …
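For a quick check of (1.4.11), assuming the illustrative values λ = 1 and μ = 2:

```python
# Closed-form M/M/1 stationary distribution (1.4.11), assuming lambda = 1, mu = 2.
lam, mu = 1.0, 2.0
rho = lam / mu                       # must be < 1 for the sum to converge
p = [(1 - rho) * rho ** j for j in range(25)]
print(sum(p), p[:5])                 # partial sums approach 1; p_j = (1-rho)*rho**j
```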

1.4.2 Pure birth process: Yule-Furry process

If μ_i = 0, i ≥ 0, then we get a pure birth process; further, if λ_i = iλ for all i, we get the Yule-Furry process, for which

(1.4.12) P′_j(t) = −jλ P_j(t) + (j − 1)λ P_{j−1}(t), j ≥ 1, and P′_0(t) = 0
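Since λ_i = iλ, the process can be simulated directly: the time to the next birth from state i is exponential with rate iλ. A small sketch (λ, the horizon, and the sample size are assumed values) checks the known mean growth E[X(t)] = X(0)e^{λt}:

```python
import numpy as np

rng = np.random.default_rng(1)

def yule(t_end, lam=0.5, x0=1):
    """Simulate a Yule-Furry process: next birth after an Exp(i*lambda) time."""
    x, t = x0, 0.0
    while True:
        t += rng.exponential(1.0 / (lam * x))   # race of x exponential clocks
        if t > t_end:
            return x
        x += 1

samples = [yule(2.0) for _ in range(20_000)]    # lambda and t are assumed values
print(np.mean(samples), np.exp(0.5 * 2.0))      # E[X(t)] = x0 * exp(lambda * t)
```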

URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500011


Integrated Population Biology and Modeling, Part A

John Fricks, Ephraim Hanks, in Handbook of Statistics, 2018

2.1.1 The Infinitesimal Definition

A continuous-time Markov chain on the nonnegative integers can be defined in a number of ways. One way is through the infinitesimal change in its probability transition function over time. The probability transition function, which is the continuous-time analogue to the probability transition matrix of discrete Markov chains, is defined as

p_{i,j}(t) = P(X(t) = j | X(0) = i) = P(X(t + s) = j | X(s) = i)

The right-hand equality expresses the assumption of time homogeneity: given that the process starts in state i, the probability of arriving in state j after the passage of t time units is the same regardless of the current time. We may then give an infinitesimal definition for a general birth–death process. The probability transition function can be implicitly defined by

p_{k,i+1}(t + Δ) = α(i) p_{k,i}(t) Δ + o(Δ)
p_{k,i−1}(t + Δ) = β(i) p_{k,i}(t) Δ + o(Δ)
p_{k,i}(t + Δ) = (1 − (α(i) + β(i))Δ) p_{k,i}(t) + o(Δ)

where α(i), β(i) are nonnegative functions from the nonnegative integers to the reals, called the birth rate and death rate, respectively, and assuming that the process starts in state k. All other transitions are negligible (i.e., o(Δ)). These can be interpreted in the following way: in a small interval of length Δ, the probability of a birth, and therefore of moving from state i to i + 1, is approximately α(i)Δ. The probability of moving from state i to i − 1 is approximately β(i)Δ. Note that β(0) = 0 to ensure that the process remains on the nonnegative integers. The probability of no net change in the population is approximately 1 − (α(i) + β(i))Δ.

There are several standard examples of these birth–death processes. By selecting the birth rate to be constant (α(i) = κ_1, β(i) = 0), the result is a Poisson process. In population models, these are often used to represent an arrival of an individual to a population from some other population, rather than being generated from the population within. An important feature of a Poisson process is that it has independent increments, i.e., X(t) − X(s) is independent of X(v) − X(u) as long as (s, t] and (u, v] do not overlap. The name of the process arises from the fact that X(t) − X(s) is a Poisson random variable with mean κ_1(t − s). This is one instance where the transition probability function can be found exactly. Another well-known fact about Poisson processes is that the times between events are exponentially distributed with rate κ_1, or alternatively with mean 1/κ_1.

A starting point for a more realistic population model is the Yule process (α(i) = κ_2 i and β(i) = 0), also known as the pure birth process (Yule, 1925). The reasoning behind this model is as follows. Each member of the population will give "birth" to a new organism after an exponential time with rate κ_2, independent of every other member. This is the standard setup of exponential races (see, e.g., Durrett, 2016). So, if a population starts with i organisms, each with an independent exponential time to a birth with rate κ_2, then after an exponential time with rate κ_2 i a birth will occur, and the population moves to a population of i + 1. After this birth, the memoryless property of the exponential ensures that each organism that did not generate a new organism has its exponential clock reset. Another way to express this is that in the next small time interval Δ a birth occurs with probability approximately κ_2 iΔ. While this may be a poor model for many natural populations, it is not unreasonable for asexual reproduction of single cell organisms such as bacteria. One complaint might be that a cell that has just divided should not have its "clock" reset and restarted according to an exponential, but would rather need a delay; however, the Markovian model often works well as an approximation if this delay is small relative to the mean 1/κ_2.
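These infinitesimal rates translate directly into an exact (Gillespie-style) simulation: wait an exponential time with rate α(i) + β(i), then move up with probability α(i)/(α(i) + β(i)), otherwise down. The rate functions below are illustrative assumptions, not from the chapter:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_bd(alpha, beta, x0, t_end):
    """Exact simulation of a birth-death process with rates alpha(i), beta(i)."""
    x, t = x0, 0.0
    while True:
        rate = alpha(x) + beta(x)
        if rate == 0.0:                        # absorbing state (e.g., x = 0)
            return x
        t += rng.exponential(1.0 / rate)       # holding time before next event
        if t > t_end:
            return x
        x += 1 if rng.random() < alpha(x) / rate else -1

# Assumed logistic-type rates, purely for illustration.
final = [simulate_bd(lambda i: 2.0 * i, lambda i: 0.05 * i * i, 10, 5.0)
         for _ in range(1000)]
print(np.mean(final))     # settles near the level where alpha(i) = beta(i), i = 40
```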

One classical way to study these birth–death processes is through differential equations that implicitly define the probability transition function; these are called the Kolmogorov forward and backward equations. Suppose that we define the matrix valued function M(t) to have entries being the probability transition function defined above, so that

M_{i,j}(t) = P(X(t + s) = j | X(s) = i)

Using the Markov property and time homogeneity, we can establish the Chapman–Kolmogorov equations,

M(u + v) = M(u)M(v)

(see, e.g., Ross, 2014 for details). Now, if we subtract M(v) from both sides, divide by u, and take the limit as u approaches zero from above, we obtain the Kolmogorov backward equations for a discrete-space continuous-time Markov process,

M′(v) = QM(v)

where M′(v) is the element-wise derivative of M(v) and

Q = lim_{u→0+} (M(u) − I)/u

The matrix Q is known as the infinitesimal generator. The infinitesimal generator defines the model: the off-diagonal entries contain the transition rates from i to j, and the diagonal entries are set to ensure that the rows of the matrix sum to zero, a restriction that ensures the solution has rows that sum to one. By selecting v to be the small variable in the Chapman–Kolmogorov equations and performing a similar limit, we obtain the Kolmogorov forward equations,

M′(u) = M(u)Q

Both of these equations have an initial condition of M(0) = I. The Kolmogorov forward equation is also known as the master equation, especially in the chemistry and physics literature (Gardiner, 2009; Van Kampen, 2007).
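For a finite state space, both equations with M(0) = I are solved by the matrix exponential M(t) = e^{Qt}, which is straightforward to evaluate numerically. A sketch with an assumed four-state birth–death generator:

```python
import numpy as np
from scipy.linalg import expm

# Assumed four-state birth-death generator: birth rates alpha(i), death rates beta(i).
alpha = [1.0, 1.0, 1.0, 0.0]           # no birth out of the top state
beta = [0.0, 0.5, 0.5, 0.5]            # beta(0) = 0

Q = np.zeros((4, 4))
for i in range(4):
    if i < 3:
        Q[i, i + 1] = alpha[i]
    if i > 0:
        Q[i, i - 1] = beta[i]
    Q[i, i] = -(alpha[i] + beta[i])    # diagonal makes each row sum to zero

M = expm(Q * 2.0)                      # M(t) = e^{Qt} at t = 2, with M(0) = I
print(M[0])                            # row 0: P(X(2) = j | X(0) = 0)
print(M.sum(axis=1))                   # each row sums to one
```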

Classic texts that include a discussion of these equations are Karlin and Taylor (1975) and Karlin and Taylor (1981), and there are, of course, many other more modern texts that discuss these equations, including Durrett (2016), Ross (2014), and many others. In some simple cases, these equations may be solved to find an explicit solution for the probability transition function using Laplace transform methods. In general, this cannot be done, especially for generalizations beyond one dimension. While this literature is historically important and can sometimes be helpful in calculating quantities associated with particular models, it is especially limited when moving beyond one- or two-dimensional models, though some approximations exist in higher dimensions (Fox et al., 2016).

One particular use of the forward/backward equations is in determining the stationary distribution, when a unique one exists, by setting the derivative to zero and solving for the probabilities under the constraint that those probabilities sum to one. Specifically, one needs to solve

0 = πQ

where π is a row vector representing the stationary distribution, and we have the additional constraint that π should sum to one (Durrett, 2016; Ross, 2014). For a birth–death process, we have specific criteria for the existence of a stationary distribution based on the transition rates along with a general formula for their form. Beyond one dimension, this can also be a challenging approach.
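Numerically, one standard approach is to replace one of the redundant equations in πQ = 0 with the normalization constraint; a sketch using the same assumed generator as in the previous block, rebuilt here so the sketch is self-contained:

```python
import numpy as np

# Same assumed four-state birth-death generator as in the previous sketch.
alpha = [1.0, 1.0, 1.0, 0.0]
beta = [0.0, 0.5, 0.5, 0.5]
Q = np.zeros((4, 4))
for i in range(4):
    if i < 3:
        Q[i, i + 1] = alpha[i]
    if i > 0:
        Q[i, i - 1] = beta[i]
    Q[i, i] = -(alpha[i] + beta[i])

# pi Q = 0 is rank deficient; replace one equation with sum(pi) = 1.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(4)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi, pi @ Q)                      # pi @ Q is ~0 up to round-off
```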

An important takeaway from this brief discussion is that the probabilistic model for continuous-time Markov processes is generally defined only implicitly even when the parameters governing the process are fully known. Except in some simple cases, the probability transition function, which will form the basis of a likelihood function for these population models, will need to be approximated via simulation, approximating models, numerical schemes for ODEs, etc. This will add an extra layer of complexity and error to any estimation or simulation scheme.

URL: https://www.sciencedirect.com/science/article/pii/S0169716118300270

Continuous-Time Markov Chains

Sheldon M. Ross, in Introduction to Probability Models (Tenth Edition), 2010

Example 6.11

(A Continuous-Time Markov Chain Consisting of Two States) Consider a machine that works for an exponential amount of time having mean 1/λ before breaking down; and suppose that it takes an exponential amount of time having mean 1/μ to repair the machine. If the machine is in working condition at time 0, then what is the probability that it will be working at time t = 10?

To answer this question, we note that the process is a birth and death process (with state 0 meaning that the machine is working and state 1 that it is being repaired) having parameters

λ_0 = λ, μ_1 = μ, λ_i = 0, i ≠ 0, μ_i = 0, i ≠ 1

We shall derive the desired probability, namely, P 00(10) by solving the set of differential equations given in Example 6.10. From Equation (6.9), we obtain

(6.10) P′_00(t) = λ[P_10(t) − P_00(t)],

(6.11) P′_10(t) = μP_00(t) − μP_10(t)

Multiplying Equation (6.10) by μ and Equation (6.11) by λ and then adding the two equations yields

μP′_00(t) + λP′_10(t) = 0

By integrating, we obtain

μP_00(t) + λP_10(t) = c

However, since P 00(0) = 1 and P 10(0) = 0, we obtain c = μ and hence,

(6.12) μP_00(t) + λP_10(t) = μ

or equivalently,

λP_10(t) = μ[1 − P_00(t)]

By substituting this result in Equation (6.10), we obtain

P′_00(t) = μ[1 − P_00(t)] − λP_00(t) = μ − (μ + λ)P_00(t)

Letting

h(t) = P_00(t) − μ/(μ + λ)

we have

h′(t) = μ − (μ + λ)[h(t) + μ/(μ + λ)] = −(μ + λ)h(t)

or

h′(t)/h(t) = −(μ + λ)

By integrating both sides, we obtain

log h(t) = −(μ + λ)t + c

or

h(t) = Ke^{−(μ+λ)t}

and thus

P_00(t) = Ke^{−(μ+λ)t} + μ/(μ + λ)

which finally yields, by setting t = 0 and using the fact that P 00(0) = 1,

P_00(t) = (λ/(μ + λ))e^{−(μ+λ)t} + μ/(μ + λ)

From Equation (6.12), this also implies that

P_10(t) = μ/(μ + λ) − (μ/(μ + λ))e^{−(μ+λ)t}

Hence, our desired probability is as follows:

P_00(10) = (λ/(μ + λ))e^{−10(μ+λ)} + μ/(μ + λ)
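As a quick numerical sanity check, the closed form can be compared against the matrix exponential of the two-state generator; λ = 2 and μ = 3 below are assumed values:

```python
import numpy as np
from scipy.linalg import expm

lam, mu, t = 2.0, 3.0, 10.0            # assumed rates and time
Q = np.array([[-lam, lam],
              [mu, -mu]])              # generator of the two-state chain

closed = lam / (mu + lam) * np.exp(-(mu + lam) * t) + mu / (mu + lam)
print(expm(Q * t)[0, 0], closed)       # both give P_00(10)
```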

URL: https://www.sciencedirect.com/science/article/pii/B9780123756862000054

Renewal Theory and Its Applications

Sheldon M. Ross, in Introduction to Probability Models (Tenth Edition), 2010

Example 7.18

Consider a positive recurrent continuous-time Markov chain that is initially in state i. By the Markovian property, each time the process reenters state i it starts over again. Thus returns to state i are renewals and constitute the beginnings of new cycles. By Proposition 7.4, it follows that the long-run

proportion of time in state j = E[amount of time in j during an i-i cycle] / μ_ii

where μ_ii represents the mean time to return to state i. If we take j to equal i, then we obtain

proportion of time in state i = (1/v_i)/μ_ii
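For instance, in the two-state machine of Example 6.11, state 0 (working) has v_0 = λ, and a cycle from state 0 back to state 0 consists of a working period with mean 1/λ followed by a repair with mean 1/μ, so μ_00 = 1/λ + 1/μ. The formula then gives

proportion of time in state 0 = (1/λ)/(1/λ + 1/μ) = μ/(λ + μ)

which agrees with the limiting value of P_00(t) derived in that example.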

URL: https://www.sciencedirect.com/science/article/pii/B9780123756862000030

Simulation

Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019

11.6.3 Control Variates

Again suppose we want to use simulation to estimate E[g(X)] where X = (X_1, …, X_n). But now suppose that for some function f the expected value of f(X) is known—say, E[f(X)] = μ. Then for any constant a we can also use

W = g(X) + a(f(X) − μ)

as an estimator of E [ g ( X ) ] . Now,

Var(W) = Var(g(X)) + a²Var(f(X)) + 2a Cov(g(X), f(X))

Simple calculus shows that the preceding is minimized when

a = −Cov(f(X), g(X)) / Var(f(X))

and, for this value of a,

Var(W) = Var(g(X)) − [Cov(f(X), g(X))]² / Var(f(X))

Because Var(f(X)) and Cov(f(X), g(X)) are usually unknown, the simulated data should be used to estimate these quantities.

Dividing the preceding equation by Var(g(X)) shows that

Var(W)/Var(g(X)) = 1 − Corr²(f(X), g(X))

where Corr(X, Y) is the correlation between X and Y. Consequently, the use of a control variate will greatly reduce the variance of the simulation estimator whenever f(X) and g(X) are strongly correlated.
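A toy sketch of the method, estimating E[e^X] for X ~ Uniform(0, 1) with the control f(X) = X, whose mean μ = 1/2 is known (this particular example is an assumption for illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
x = rng.random(n)                      # X ~ Uniform(0, 1)
g = np.exp(x)                          # estimate E[g(X)] = e - 1
f = x                                  # control with known mean mu = 1/2

cm = np.cov(f, g)
a = -cm[0, 1] / cm[0, 0]               # estimated optimal coefficient
w = g + a * (f - 0.5)
print(g.mean(), w.mean())              # both estimate e - 1 = 1.718...
print(g.var(), w.var())                # the control variate shrinks the variance
```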

Example 11.20

Consider a continuous-time Markov chain that, upon entering state i, spends an exponential time with rate v_i in that state before making a transition into some other state, with the transition being into state j with probability P_{i,j}, i ≥ 0, j ≠ i. Suppose that costs are incurred at rate C(i) ≥ 0 per unit time whenever the chain is in state i, i ≥ 0. With X(t) equal to the state at time t, and α being a constant such that 0 < α < 1, the quantity

W = ∫_0^∞ e^{−αt} C(X(t)) dt

represents the total discounted cost. For a given initial state, suppose we want to use simulation to estimate E[W]. Whereas at first it might seem that we cannot obtain an unbiased estimator without simulating the continuous-time Markov chain for an infinite amount of time (which is clearly impossible), we can make use of the results of Example 5.1, which gives the equivalent expression for E[W]:

E[W] = E[∫_0^T C(X(t)) dt]

where T is an exponential random variable with rate α that is independent of the continuous-time Markov chain. Therefore, we can first generate the value of T, then generate the states of the continuous-time Markov chain up to time T, to obtain the unbiased estimator ∫_0^T C(X(t)) dt. Because all the cost rates are nonnegative, this estimator is strongly positively correlated with T, which will thus make an effective control variate.
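A sketch of this estimator for an assumed two-state chain (the rates, jump probabilities, cost rates, and α below are illustrative): generate T, run the chain to time T, accumulate the cost, and then correct with T as the control variate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed two-state chain: rates v_i, jump probabilities P_ij, cost rates C(i).
v = np.array([1.0, 2.0])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
C = np.array([3.0, 1.0])
alpha = 0.5                            # discount rate

def one_run(state=0):
    T = rng.exponential(1.0 / alpha)   # independent exponential horizon
    t = cost = 0.0
    while True:
        hold = rng.exponential(1.0 / v[state])
        cost += C[state] * min(hold, T - t)   # cost accrued up to min(t+hold, T)
        t += hold
        if t >= T:
            return cost, T
        state = rng.choice(2, p=P[state])

runs = [one_run() for _ in range(10_000)]
cost, T = map(np.array, zip(*runs))
cm = np.cov(T, cost)
a = -cm[0, 1] / cm[0, 0]               # estimated optimal coefficient
w = cost + a * (T - 1.0 / alpha)       # control-variate corrected estimator
print(cost.mean(), w.mean(), cost.std(), w.std())
```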

Example 11.21 A Queueing System

Let D_{n+1} denote the delay in queue of the (n + 1)st customer in a queueing system in which the interarrival times are independent and identically distributed (i.i.d.) with distribution F having mean μ_F and are independent of the service times, which are i.i.d. with distribution G having mean μ_G. If X_i is the interarrival time between arrivals i and i + 1, and if S_i is the service time of customer i, i ≥ 1, we may write

D_{n+1} = g(X_1, …, X_n, S_1, …, S_n)

To take into account the possibility that the simulated variables X_i, S_i may by chance be quite different from what might be expected, we can let

f(X_1, …, X_n, S_1, …, S_n) = Σ_{i=1}^n (S_i − X_i)

As E[f(X, S)] = n(μ_G − μ_F), we could use

g(X, S) + a[f(X, S) − n(μ_G − μ_F)]

as an estimator of E[D_{n+1}]. Since D_{n+1} and f are both increasing functions of S_i, −X_i, i = 1, …, n, it follows from Theorem 11.1 that f(X, S) and D_{n+1} are positively correlated, and so the simulated estimate of a should turn out to be negative.

If we wanted to estimate the expected sum of the delays in queue of the first N(T) arrivals, then we could use Σ_{i=1}^{N(T)} S_i as our control variable. Indeed, as the arrival process is usually assumed independent of the service times, it follows that

E[Σ_{i=1}^{N(T)} S_i] = E[S]E[N(T)]

where E[N(T)] can either be computed by the method suggested in Section 7.8 or estimated from the simulation as in Example 11.18. This control variable could also be used if the arrival process were a nonhomogeneous Poisson process with rate λ(t); in this case,

E[N(T)] = ∫_0^T λ(t) dt
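A sketch of Example 11.21 for an assumed single-server FIFO queue with exponential interarrival and service times, computing D_{n+1} by the Lindley recursion D_{i+1} = max(D_i + S_i − X_i, 0) (the single-server assumption and the parameter values are mine, for illustration) and using f = Σ(S_i − X_i) as the control:

```python
import numpy as np

rng = np.random.default_rng(3)

n, reps = 20, 5000
mu_F, mu_G = 1.0, 0.8                  # mean interarrival and service times

d_est = np.empty(reps)
f_val = np.empty(reps)
for r in range(reps):
    X = rng.exponential(mu_F, n)       # interarrival times X_1..X_n
    S = rng.exponential(mu_G, n)       # service times S_1..S_n
    D = 0.0
    for i in range(n):                 # Lindley recursion for the delay
        D = max(D + S[i] - X[i], 0.0)
    d_est[r] = D                       # D_{n+1}
    f_val[r] = (S - X).sum()           # control variate f(X, S)

cm = np.cov(f_val, d_est)
a = -cm[0, 1] / cm[0, 0]               # comes out negative, as the text predicts
w = d_est + a * (f_val - n * (mu_G - mu_F))
print(d_est.mean(), w.mean(), a)
```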

URL: https://www.sciencedirect.com/science/article/pii/B9780128143469000160

Continuous-Time Markov Chains

Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019

6.2 Continuous-Time Markov Chains

Suppose we have a continuous-time stochastic process {X(t), t ≥ 0} taking on values in the set of nonnegative integers. In analogy with the definition of a discrete-time Markov chain, given in Chapter 4, we say that the process {X(t), t ≥ 0} is a continuous-time Markov chain if for all s, t ≥ 0 and nonnegative integers i, j, x(u), 0 ≤ u < s,

P{X(t + s) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s} = P{X(t + s) = j | X(s) = i}

In other words, a continuous-time Markov chain is a stochastic process having the Markovian property that the conditional distribution of the future X(t + s) given the present X(s) and the past X(u), 0 ≤ u < s, depends only on the present and is independent of the past. If, in addition,

P{X(t + s) = j | X(s) = i}

is independent of s, then the continuous-time Markov chain is said to have stationary or homogeneous transition probabilities.

All Markov chains considered in this text will be assumed to have stationary transition probabilities.

Suppose that a continuous-time Markov chain enters state i at some time, say, time 0, and suppose that the process does not leave state i (that is, a transition does not occur) during the next ten minutes. What is the probability that the process will not leave state i during the following five minutes? Since the process is in state i at time 10 it follows, by the Markovian property, that the probability that it remains in that state during the interval [10,15] is just the (unconditional) probability that it stays in state i for at least five minutes. That is, if we let T i denote the amount of time that the process stays in state i before making a transition into a different state, then

P{T_i > 15 | T_i > 10} = P{T_i > 5}

or, in general, by the same reasoning,

P{T_i > s + t | T_i > s} = P{T_i > t}

for all s, t ≥ 0. Hence, the random variable T_i is memoryless and must thus (see Section 5.2.2) be exponentially distributed.

In fact, the preceding gives us another way of defining a continuous-time Markov chain. Namely, it is a stochastic process having the properties that each time it enters state i

(i)

the amount of time it spends in that state before making a transition into a different state is exponentially distributed with mean, say, 1/v_i, and

(ii)

when the process leaves state i, it next enters state j with some probability, say, P_ij. Of course, the P_ij must satisfy

P_ii = 0, all i
Σ_j P_ij = 1, all i

In other words, a continuous-time Markov chain is a stochastic process that moves from state to state in accordance with a (discrete-time) Markov chain, but is such that the amount of time it spends in each state, before proceeding to the next state, is exponentially distributed. In addition, the amount of time the process spends in state i, and the next state visited, must be independent random variables. For if the next state visited were dependent on T i , then information as to how long the process has already been in state i would be relevant to the prediction of the next state—and this contradicts the Markovian assumption.

Example 6.1 A Shoe Shine Shop

Consider a shoe shine establishment consisting of two chairs—chair 1 and chair 2. A customer upon arrival goes initially to chair 1 where his shoes are cleaned and polish is applied. After this is done the customer moves on to chair 2 where the polish is buffed. The service times at the two chairs are assumed to be independent random variables that are exponentially distributed with respective rates μ 1 and μ 2 . Suppose that potential customers arrive in accordance with a Poisson process having rate λ, and that a potential customer will enter the system only if both chairs are empty.

The preceding model can be analyzed as a continuous-time Markov chain, but first we must decide upon an appropriate state space. Since a potential customer will enter the system only if there are no other customers present, it follows that there will always either be 0 or 1 customers in the system. However, if there is 1 customer in the system, then we would also need to know which chair he was presently in. Hence, an appropriate state space might consist of the three states 0, 1, and 2 where the states have the following interpretation:

State   Interpretation
0       system is empty
1       a customer is in chair 1
2       a customer is in chair 2

We leave it as an exercise for you to verify that

v_0 = λ, v_1 = μ_1, v_2 = μ_2, P_01 = P_12 = P_20 = 1
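A short simulation sketch of this chain, using exponential holding times with rates v_i and the deterministic jump structure above (the numerical rate values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed rates: arrivals lambda = 1, chair rates mu1 = 4, mu2 = 5.
lam, mu1, mu2 = 1.0, 4.0, 5.0
v = [lam, mu1, mu2]                    # v_0, v_1, v_2

time_in = np.zeros(3)
state, t, t_end = 0, 0.0, 100_000.0
while t < t_end:
    hold = rng.exponential(1.0 / v[state])
    time_in[state] += hold
    t += hold
    state = (state + 1) % 3            # P_01 = P_12 = P_20 = 1

# Long-run proportions are proportional to the mean holding times 1/v_i.
print(time_in / time_in.sum())
```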

URL: https://www.sciencedirect.com/science/article/pii/B9780128143469000111

BioModel Engineering with Petri Nets

Mary Ann Blätke, ... Wolfgang Marwan, in Algebraic and Discrete Mathematical Methods for Modern Biology, 2015

Numerical Methods

As long as the underlying semantics of a stochastic Petri net is described by a finite CTMC of manageable size, it can be analyzed using standard stochastic analysis techniques such as transient analysis, steady-state analysis, or (exact) model checking. Transient analysis means computing the transient probabilities of being in a certain state at a specific time point using, for example, the uniformization method. Steady-state analysis computes the steady-state probabilities using, for example, Jacobi iteration or Gauss-Seidel iteration. In numerical model checking, behavioral properties can be checked that have been expressed in, for example, CSL, a stochastic counterpart of CTL in which the path quantifiers are replaced by probabilities.
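As a sketch of the uniformization method mentioned above: with q ≥ max_i |Q_ii| and P̂ = I + Q/q, the transient distribution is π(t) = Σ_n e^{−qt}(qt)^n/n! · π(0)P̂^n. The three-state generator below is an assumed toy example, not one of the SPNs from the figures:

```python
import numpy as np

# Assumed three-state generator matrix Q (rows sum to zero).
Q = np.array([[-2.0, 2.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.0, 1.0, -1.0]])
q = np.max(-np.diag(Q))                # uniformization rate
P_hat = np.eye(3) + Q / q              # DTMC of the uniformized chain

def transient(pi0, t, tol=1e-12):
    """Transient distribution pi(t) via the uniformization series."""
    pi = np.zeros(3)
    term = np.asarray(pi0, dtype=float) # pi(0) @ P_hat^n, built iteratively
    weight, n = np.exp(-q * t), 0       # Poisson(n; q t) weight
    while True:
        pi += weight * term
        if weight < tol and n > q * t:  # remaining Poisson tail is negligible
            return pi
        n += 1
        weight *= q * t / n
        term = term @ P_hat

print(transient([1.0, 0.0, 0.0], 0.5))
```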

For illustration, we perform transient analysis for the two introductory examples SPN 1 and SPN 2 ; see Figures 7.14 and 7.15.

Figure 7.14. Transient analysis for SPN 1 in Figure 7.10 for N = 10 (left) and N = 100 (right). The 3D plots show the evolution of the probability distribution of the token number on A up to time point 5.

Figure 7.15. Transient analysis for SPN 2 in Figure 7.12 for N = 10 (left) and N = 100 (right). The 3D plots in the first row show the evolution of the probability distribution of the fraction of tokens on A up to time point 50. The second row gives the cumulative probability distributions, sharpening the effect; compare the closed-form solution in Figure 7.13b.

URL: https://www.sciencedirect.com/science/article/pii/B9780128012130000071

Game theoretic modeling and dependability analysis of small cell relays under bandwidth spoofing attack in 5G wireless communication network

K.C. Lalropuia, Vandana Gupta, in The Handbook of Reliability, Maintenance, and System Safety through Mathematical Modeling, 2021

4.1 Numerical illustration

Availability: Availability of a system is characterized by the probability that the system is working properly when needed for use. In the proposed SCR attack model, a CTMC is used as the underlying stochastic process to characterize the lifetime of the SCR. Therefore, we obtain the steady state probabilities of the proposed model using SHARPE software [40]. Note that in the proposed model, the SCR network is available in the states 0, 1, 2, and 5. Thus, we determine SSA of the SCR under attack as

(5.18) SSA = π_0 + π_1 + π_2 + π_5

where π_k is the steady state probability that the network under attack is in state k. Based on the parameter values in Table 5.6 and setting λ_40 = 0.001, we get SSA = 0.697. Note that the parameter values are chosen for the purpose of numerical illustration only.

Table 5.6. Parameters and their values (per hour).

Parameter   Value     Parameter   Value
λ_01        0.50000   λ_25        0.00040
λ_12        0.00060   λ_51        0.00094
λ_21        0.00060   λ_52        0.00094
λ_13        0.00020   λ_54        0.00003
λ_34        0.00003   λ_64        0.00300
λ_23        0.00020   λ_53        0.00020
λ_14        0.00003   λ_63        0.00040
λ_24        0.00003   λ_62        0.00220
λ_30        0.00500   λ_61        0.00220
λ_56        0.05950   λ_15        0.00040

MTTF: This is a common measure for the reliability of a system and is the average length of time that the system is expected to be in operation. To obtain the MTTF of the SCR, we remove the arc from the failed state (F) to the normal state (N) in Fig. 5.2, resulting in an absorbing Markov chain in which the state F is the absorbing state. We then compute the MTTF of the SCR under bandwidth spoofing attack by solving the absorbing CTMC. For example, on implementing Algorithm 1 with ν(r, a, m) = −9 and ν(r, τ, τ) = 9, r ∈ {P, C}, we get x_a^P = 0.595, x_a^C = 0.450, y_m^P = 0.405, and y_m^C = 0.550. Substituting these parameter values in the sixth and seventh rows of the rate matrix Q with θ_5 = 0.7 and θ_6 = 0.8, β_51 = 0.0033, ψ_52 = 0.0033, ψ_61 = 0.005, ψ_62 = 0.005, β_56 = 0.33, and β_64 = 0.03, and using the parameter values in Table 5.6, we obtain MTTF = 6112.339 h.
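For a generic absorbing CTMC, the MTTF computation reduces to linear algebra: if Q_TT is the generator restricted to the transient states, the vector of expected times to absorption is (−Q_TT)^{−1}·1. A sketch with an assumed 3×3 transient block (not the chapter's actual seven-state model):

```python
import numpy as np

# Assumed transient-block generator Q_TT (each row leaks rate to the
# absorbing failed state, so the row sums are negative).
Q_TT = np.array([[-0.6, 0.4, 0.1],
                 [0.2, -0.6, 0.3],
                 [0.0, 0.1, -0.4]])

mean_times = np.linalg.solve(-Q_TT, np.ones(3))  # (-Q_TT)^{-1} * 1
print(mean_times[0])                             # MTTF starting from state 0
```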

URL: https://www.sciencedirect.com/science/article/pii/B9780128195826000058