Techniques

In this section, we outline simple ways to simulate random variables and random processes. At the beginning of each subsection, we indicate to which section of the main theoretical text it corresponds.

1. Bernoulli and binomial random variables

We can simulate a Bernoulli random variable \(Z\), which takes values \(1\) and \(0\) with probabilities \(p\) and \(1-p\), as follows: Simulate \(U \sim \mathrm{Uni}[0, 1]\). If \(U \le p\), then \(Z := 1\); else \(Z := 0\). A binomial random variable \(X \sim \mathrm{Bin}(N, p)\) can be simulated as the sum of \(N\) i.i.d. Bernoulli random variables: \(X = Z_1 + \ldots + Z_N\).
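This procedure can be sketched in Python (the language, sample size, and parameter values here are our choice, purely for illustration):

```python
import random

def bernoulli(p):
    """Bernoulli trial: 1 with probability p, else 0, via U ~ Uni[0, 1]."""
    return 1 if random.random() <= p else 0

def binomial(n, p):
    """Bin(n, p) as a sum of n i.i.d. Bernoulli(p) trials."""
    return sum(bernoulli(p) for _ in range(n))

random.seed(0)
samples = [binomial(10, 0.3) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # should be close to N * p = 3
```

The empirical mean of the samples should be close to \(Np = 3\).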

2. Discrete random variables

Consider a random variable \(X\) which takes values \(0, 1, 2, \ldots\) with probabilities \(p_0, p_1, p_2, \ldots\) Split \([0, 1]\) into \[ [0, p_0),\ [p_0, p_0 + p_1),\ [p_0 + p_1, p_0 + p_1 + p_2),\ \ldots \] Simulate \(U \sim \mathrm{Uni}[0, 1]\) and consider these intervals one by one: if \(0 \le U < p_0\), then let \(X := 0\); if \(p_0 \le U < p_0 + p_1\), then let \(X := 1\); etc. (Each endpoint has probability zero, so it does not matter to which interval it is assigned.)
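In code, this amounts to comparing \(U\) with the cumulative sums of the probabilities; a Python sketch (the probabilities are chosen for illustration):

```python
import random

def simulate_discrete(probs):
    """Simulate X taking value k with probability probs[k], k = 0, 1, 2, ...
    by locating U ~ Uni[0, 1] among the cumulative sums of probs."""
    u = random.random()
    cumulative = 0.0
    for value, p in enumerate(probs):
        cumulative += p
        if u < cumulative:
            return value
    return len(probs) - 1  # guard against floating-point round-off

random.seed(1)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[simulate_discrete([0.2, 0.5, 0.3])] += 1
frequencies = [c / 10_000 for c in counts]  # should approach [0.2, 0.5, 0.3]
```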

3. Joint discrete distributions

Similarly, we can simulate jointly distributed discrete random variables. As an example, simulate \((X, Y)\) with joint distribution \[ \mathbb P(X = Y = 0) = 0.5,\ \mathbb P(X = 0,\, Y = 1) = 0.3,\ \mathbb P(X = 1,\, Y = 0) = 0.2. \] Simulate \(U \sim \mathrm{Uni}[0, 1]\) and let \[ \begin{cases} 0 \le U < 0.5\ \Rightarrow\ X := 0,\, Y := 0;\\ 0.5 \le U < 0.8\ \Rightarrow\ X := 0,\, Y := 1;\\ 0.8 \le U \le 1\ \Rightarrow\ X := 1,\, Y := 0. \end{cases} \]

4. Continuous random variables: the inverse function method

Consider a random variable \(X\) with cumulative distribution function \(F(x) := \mathbb P(X \le x)\), with density \(p(x) = F'(x)\). Let us simulate it. Construct the inverse function: \(F^{-1}(u) := x\), which solves \(F(x) = u\). Note that \(F\) is increasing, so \(F^{-1}(u) \le x\) if and only if \(u \le F(x)\). Take a uniform \([0, 1]\) random variable \(U\). Then we can take \(X = F^{-1}(U)\), because it has the correct cumulative distribution function: \[ \mathbb{P}\left(F^{-1}(U) \le x\right) = \mathbb{P}\left(U \le F(x)\right) = F(x), \] because for this uniform random variable \(U\) we have: \(\mathbb{P}(U \le a) = a,\ \ 0 \le a \le 1\).

Example 1.

Let us simulate \(X \sim \mathrm{Exp}(\lambda)\). It has density \(p(x) = \lambda e^{-\lambda x}\) and cumulative distribution function \[ F(x) = \int_0^{x}p(y)\mathrm{d} y = \int_0^x\lambda e^{-\lambda y}\mathrm{d} y = \left.\left(-e^{-\lambda y}\right)\right|_{y=0}^{y=x} = 1 - e^{-\lambda x},\ \ x \ge 0. \] (And \(F(x) = 0\) for \(x \le 0\).) Now, let us find the inverse function. To this end, let us solve \(F(x) = u\): \[ 1 - e^{-\lambda x} = u\ \Rightarrow\ e^{-\lambda x} = 1 - u\ \Rightarrow\ x = -\lambda^{-1}\ln(1 - u). \] Now, let \(\boxed{X = -\lambda^{-1}\ln(1 - U)}\).
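A Python sketch of this example (the value of \(\lambda\) is our choice, for illustration):

```python
import math
import random

def exponential(lam):
    """Inverse-function method for Exp(lam): X = -ln(1 - U) / lam."""
    u = random.random()
    return -math.log(1.0 - u) / lam

random.seed(2)
lam = 2.0
samples = [exponential(lam) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should approach E[X] = 1 / lam = 0.5
```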

5. Rejection method

Sometimes it is too difficult to simulate \(X\) using the inverse function method. For example, let \(X \sim \mathcal N(0, 1)\); then \[ F(x) = \frac1{\sqrt{2\pi}}\int_{-\infty}^xe^{-y^2/2}\mathrm{d} y, \] which is difficult to calculate and even more difficult to invert. You can do this using tables, but there is a better way. Let \(f(x)\) be the density of \(X\). In this case, it is \[ f(x) = \frac1{\sqrt{2\pi}}e^{-x^2/2}. \] It is bounded from above by \(c = (2\pi)^{-1/2}\). Moreover, \(-10 \le X \le 10\) with very high probability. Then we can simulate independent \(X \sim \mathrm{Uni}[-10, 10]\) and \(Y \sim \mathrm{Uni}[0, c]\). The point \((X, Y)\) is then uniformly distributed in the rectangle \([-10, 10]\times [0, c]\). We accept \(X\) if this point lies below the density curve: that is, if \(f(X) \ge Y\). Otherwise, we reject this point and repeat, until we get an acceptable point \((X, Y)\). This method works equally well for multivariate distributions.

Example 2.

Simulate \((X, Y)\) with density \(p(x, y) = x + y\) for \(x, y \in [0, 1]\). Then \(p \le 2\). Simulate independent \(X \sim \mathrm{Uni}[0, 1]\), \(Y \sim \mathrm{Uni}[0, 1]\), \(Z \sim \mathrm{Uni}[0, 2]\), and accept \((X, Y)\) if \(p(X, Y) = X + Y \ge Z\).
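A Python sketch of this rejection scheme (the sample size is arbitrary; for this density, \(\mathbb E X = 7/12 \approx 0.583\), which gives a simple sanity check):

```python
import random

def sample_xy():
    """Rejection method for the density p(x, y) = x + y on [0, 1]^2, bounded by 2:
    propose a uniform point and accept when it lies below the density surface."""
    while True:
        x, y = random.random(), random.random()
        z = 2.0 * random.random()  # Z ~ Uni[0, 2]
        if x + y >= z:
            return x, y

random.seed(3)
samples = [sample_xy() for _ in range(20_000)]
mean_x = sum(x for x, _ in samples) / len(samples)  # E[X] = 7/12 ≈ 0.583
```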

6. Value at risk

Assume we have i.i.d. claims \(X_1, \ldots, X_n\), and we would like to find the value-at-risk corresponding to the confidence level \(95\%\) by simulation. Simulate these claims, rank them from top to bottom, and take the \(0.05\,n\)-th ranked value. This is the value-at-risk for the current simulation. Repeat this many times and average the resulting values-at-risk.
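A Python sketch of this procedure, taking the claims to be \(\mathrm{Exp}(1.5)\) (an assumption made here for illustration; it matches the setting of Problem 10 below, whose theoretical \(95\%\) quantile is \(-\ln(0.05)/1.5 \approx 2.0\)):

```python
import math
import random

def one_var_estimate(n_claims, lam, level=0.95):
    """One simulation: n_claims i.i.d. Exp(lam) claims, ranked from top to bottom;
    return the (1 - level) * n_claims-th ranked value."""
    claims = sorted((-math.log(1.0 - random.random()) / lam for _ in range(n_claims)),
                    reverse=True)
    k = max(int((1.0 - level) * n_claims) - 1, 0)
    return claims[k]

random.seed(4)
estimates = [one_var_estimate(100, 1.5) for _ in range(1_000)]
value_at_risk = sum(estimates) / len(estimates)  # should be near -ln(0.05)/1.5 ≈ 2.0
```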

7. Monte Carlo method

Assume you have an integral \(I = \int_0^1f(x)\mathrm{d} x\), which you cannot calculate directly. One way to approximate this integral is to split \([0, 1]\) into small subintervals and use Simpson’s rule or other similar rules. Another way is to generate \(N\) i.i.d. uniform random variables \(X_1, \ldots, X_N\) on \([0, 1]\) and let \[ I_N := \frac1N\sum\limits_{k=1}^Nf(X_k). \] By the Law of Large Numbers, because \(\mathbb E f(X_k) = \int_0^1f(x)\mathrm{d} x\), and \(f(X_1), \ldots, f(X_N)\) are i.i.d., we have: \(I_N \to I\). More generally, if we have \[ I := \int_{-\infty}^{\infty}f(x)p(x)\mathrm{d} x, \] where \(p\) is some probability density, let us generate \(N\) i.i.d. random variables \(X_1, \ldots, X_N\) with density \(p\), and let \[ I_N := \frac1N\sum\limits_{k=1}^Nf(X_k) \to I,\ \ N \to \infty, \] because \(\mathbb E f(X_k) = \int f(x)p(x)\mathrm{d} x = I\). This convergence has speed \(N^{-1/2}\), as follows from the Central Limit Theorem.

Example 3.

Let \(I = \int_{-\infty}^{\infty}e^{-x^4}\mathrm{d} x\). Generate \(X_1, \ldots, X_N \sim \mathcal N(0, 1)\), and represent \(I\) as \[ I = \sqrt{2\pi}\int_{-\infty}^{\infty}f(x)p(x)\mathrm{d} x,\quad f(x) = e^{-x^4 + x^2/2},\quad p(x) = \frac1{\sqrt{2\pi}}e^{-x^2/2}. \] Then we can approximate \(I \approx I_N := \frac{\sqrt{2\pi}}N\sum\limits_{k=1}^Nf(X_k)\).
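A Python sketch of this example. Note the factor \(\sqrt{2\pi}\): since \(I = \sqrt{2\pi}\,\mathbb E f(X)\), the Monte Carlo average of \(f\) must be multiplied by it. The true value is \(2\Gamma(5/4) \approx 1.813\), which the estimate should approach.

```python
import math
import random

# Monte Carlo for I = ∫ exp(-x^4) dx = sqrt(2*pi) * E[f(X)], X ~ N(0, 1),
# where f(x) = exp(-x^4 + x^2 / 2).
random.seed(5)
N = 200_000
total = 0.0
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    total += math.exp(-x**4 + x**2 / 2)
I_N = math.sqrt(2 * math.pi) * total / N  # true value: 2 * Gamma(5/4) ≈ 1.813
```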

This method works equally well for multiple integrals. This is a particular advantage of Monte Carlo, because the usual methods become slow as the dimension grows (the curse of dimensionality).

Example 4.

Consider the following integral over the \(10\)-dimensional block \([0, 1]^{10}\) in \(\mathbb{R}^{10}\): \[ I = \int_0^1\int_0^1\ldots\int_0^1e^{x_1^2 + x_2^2 + \ldots + x_{10}^2}\,\mathrm{d} x_1\,\mathrm{d} x_2\ldots \,\mathrm{d} x_{10}. \] To split it into subblocks of side \(0.1\), such as \([0, 0.1]^{10}\), you need \(10^{10}\) blocks. But it is just as easy to use Monte Carlo in this case as in dimension one. Generate i.i.d. \(10\)-dimensional vectors \(U_i = \left(U_{i1}, \ldots, U_{i10}\right)\), with components i.i.d. uniform on \([0, 1]\). Then let \[ I_N := \frac1N\sum\limits_{i=1}^N\exp\left(U_{i1}^2 + U_{i2}^2 + \ldots + U_{i10}^2\right) \to I. \]
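A Python sketch of this ten-dimensional example. The integral factors into identical one-dimensional pieces, so the true value is \(\left(\int_0^1 e^{x^2}\,\mathrm{d}x\right)^{10} \approx 44.8\):

```python
import math
import random

random.seed(6)
N = 100_000
total = 0.0
for _ in range(N):
    # one uniform point in [0, 1]^10
    total += math.exp(sum(random.random() ** 2 for _ in range(10)))
I_N = total / N  # true value: (integral_0^1 e^{x^2} dx)^10 ≈ 44.8
```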

8. Discrete-time Markov chains

Consider the example with McDonalds-Subway. Let \(0\) and \(1\) correspond to McDonalds and Subway. Assume that the initial distribution is \(x(0) = \begin{bmatrix} 0.7 & 0.3\end{bmatrix}\). Let \(U_0 \sim \mathrm{Uni}[0, 1]\). If \(U_0 \le 0.7\), we let \(X_0 := 0\); otherwise \(X_0 := 1\). Recall the transition matrix: \[ P = \begin{bmatrix} 0.8 & 0.2\\ 0.1 & 0.9 \end{bmatrix} \] Simulate \(U_1 \sim \mathrm{Uni}[0, 1]\) independently of \(U_0\). If \(X_0 = 0\), let \(X_1 := 0\) if \(U_1 \le 0.8\), \(X_1 := 1\) otherwise. If \(X_0 = 1\), let \(X_1 := 0\) if \(U_1 \le 0.1\), and \(X_1 := 1\) otherwise. If this Markov chain has more than two (say four) states, then we should consider four cases for \(U_1\). In the same way, we simulate \(X_2, X_3, \ldots\)

Assume the Markov chain is irreducible. Then the occupation time shares converge to the stationary distribution \(\pi\). One can empirically calculate the stationary distribution: fix a large number \(N\) of steps. For each state \(i\), find the number \(M_i\) of times \(n = 1, \ldots, N\) when \(X_n = i\). Then \(p_i := M_i/N\) is an approximation for the stationary distribution \(\pi\).
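Both the simulation and the empirical stationary distribution can be sketched in Python for the two-state example above (for this transition matrix, the stationary distribution is \(\pi = \begin{bmatrix} 1/3 & 2/3 \end{bmatrix}\)):

```python
import random

P = [[0.8, 0.2],   # transition probabilities from state 0 (McDonalds)
     [0.1, 0.9]]   # transition probabilities from state 1 (Subway)

def next_state(state):
    """One step: compare U ~ Uni[0, 1] with the cumulative row probabilities."""
    u, cumulative = random.random(), 0.0
    for j, p in enumerate(P[state]):
        cumulative += p
        if u < cumulative:
            return j
    return len(P[state]) - 1  # guard against floating-point round-off

random.seed(7)
state = 0 if random.random() <= 0.7 else 1  # initial distribution [0.7, 0.3]
N = 100_000
visits = [0, 0]
for _ in range(N):
    state = next_state(state)
    visits[state] += 1
shares = [v / N for v in visits]  # should approach pi = [1/3, 2/3]
```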

9. Google’s PageRank

Take an oriented graph (edges are arrows), where states are Web pages and arrows are hyperlinks. Let us rank pages. A page is ranked highly if there are a lot of links pointing to it. However, the importance of each link is different. A link from an important Web page is also important. But if this important Web page provides lots of links, each of them is not so important. (A recommendation letter from Bill Gates carries more weight than one from an elementary school teacher. However, if Bill Gates is generous and writes thousands of letters every year, maybe his letter does not carry much weight after all.)

Let \(r(i)\) be the rank of the \(i\)th Web page. It is called PageRank in honor of Larry Page (who founded Google with Sergey Brin as Ph.D. students at Stanford in 1998). Let \(|i|\) be the number of links from page \(i\). Then \[ r(i) = \sum\limits_{j \to i}\frac{r(j)}{|j|}. \] This is a mathematical formulation of the idea above. If we normalize \(r(i)\), dividing it by \(R = \sum r(i)\), the sum over all existing Web pages, then we have a probability distribution \(r(i)\) on the set of all Web pages. However, it is very hard to solve this system of equations explicitly. Brin and Page invented a Markov chain, which they called a random crawler: if at a certain moment it is at Web page \(i\), it chooses randomly (uniformly) among the links from this page and moves along this link. This is, in effect, the random walk on an (oriented) graph. The transition matrix \(A\) of this Markov chain is given by \[ A_{ij} = \begin{cases} \frac1{|i|},\ i \to j;\\ 0,\ \mbox{otherwise}. \end{cases} \] But we can rewrite the equation for \(r\) above as \[ r(i) = \sum\limits_{j \to i}r(j)A_{ji} = \sum\limits_{j}r(j)A_{ji}\ \ \Rightarrow\ \ r = rA. \] Therefore, \(r\) is a stationary distribution for \(A\). If we run this random crawler long enough, its occupation-time shares converge to \(r\). (In fact, we need to make some adjustments to make sure this process indeed converges.)
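As a toy illustration, here is the random crawler on a small hypothetical four-page web (the link structure below is invented for this sketch, and real PageRank also requires the adjustments mentioned above):

```python
import random

# links[i] lists the pages that page i links to (a made-up example).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}

random.seed(8)
page = 0
N = 200_000
visits = [0, 0, 0, 0]
for _ in range(N):
    page = random.choice(links[page])  # follow a uniformly chosen outgoing link
    visits[page] += 1
rank = [v / N for v in visits]  # occupation shares approximate the PageRank r
# Nobody links to page 3, so its rank is 0; solving r = rA on pages {0, 1, 2}
# gives r = [0.4, 0.2, 0.4], which the occupation shares should approach.
```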

10. Random walk

Consider a random walk with \(p = 0.75\), starting from \(S_0 = 1\). We simulate i.i.d. random variables \(Z_1, Z_2, \ldots\) with distribution \[ \mathbb{P}(Z_k = 1) = .75,\ \mathbb{P}(Z_k = -1) = .25. \] We do this by taking i.i.d. \(U_1, U_2, \ldots \sim \mathrm{Uni}[0, 1]\) and letting \(Z_k = 1\) if \(U_k \le .75\), and \(Z_k = -1\) if \(U_k > .75\). Then let \[ S_0 := 1,\ S_1 = S_0 + Z_1, \ S_2 = S_1 + Z_2, \ldots \]
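A Python sketch of this random walk (the number of steps and repetitions are chosen for illustration):

```python
import random

def walk(n_steps):
    """Random walk path S_0, ..., S_n with p = 0.75, starting from S_0 = 1."""
    s, path = 1, [1]
    for _ in range(n_steps):
        s += 1 if random.random() <= 0.75 else -1
        path.append(s)
    return path

random.seed(9)
finals = [walk(20)[-1] for _ in range(20_000)]
mean_final = sum(finals) / len(finals)  # E[S_20] = 1 + 20 * (0.75 - 0.25) = 11
```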

11. Poisson process

Simulate i.i.d. \(T_1, T_2, \ldots \sim \mathrm{Exp}(\lambda)\) and let \[ \tau_1 := T_1,\, \tau_2 := \tau_1 + T_2,\, \tau_3 := \tau_2 + T_3, \ldots \] Then we find \(N(t)\) by comparing \(t\) and \(\tau_k\) until \(t < \tau_k\). If \(k\) is the first index such that \(t < \tau_k\), then \(N(t) = k-1\). For example, if \(\tau_3 \le t\), but \(t < \tau_4\), then \(N(t) = 3\). For a compound Poisson process \[ X(t) = \sum\limits_{k=1}^{N(t)}Z_k, \] we also need to simulate i.i.d. steps \(Z_1, Z_2, \ldots\) and let \(X(t) = Z_1 + \ldots + Z_{k-1}\) (and \(X(t) = 0\) in case \(N(t) = 0\)).
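A Python sketch of \(N(t)\), with the interarrival times generated by the inverse function method (the intensity and time are chosen for illustration):

```python
import math
import random

def poisson_count(t, lam):
    """N(t): add Exp(lam) interarrival times until the arrival time exceeds t."""
    arrival, count = 0.0, 0
    while True:
        arrival += -math.log(1.0 - random.random()) / lam  # T_k ~ Exp(lam)
        if arrival > t:
            return count
        count += 1

random.seed(10)
samples = [poisson_count(3.0, 2.0) for _ in range(20_000)]
mean = sum(samples) / len(samples)  # E[N(3)] = 2 * 3 = 6
```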

12. Continuous-time Markov chains

The difference from the discrete-time case is that we cannot simulate the value \(X(t)\) for every time \(t\). Instead, we do exactly as for the Poisson process: We simulate the jump times, and the values of this process at these jump times. Essentially, we switch from continuous to discrete time. As an example, consider a continuous-time Markov chain \(X = (X(t),\, t \ge 0)\) with generator \[ A = \begin{bmatrix} -3 & 1 & 2\\ 0 & -1 & 1\\ 1 & 1 & -2 \end{bmatrix} \] The corresponding discrete-time Markov chain \(Y = (Y_n)_{n \ge 0}\) has transition matrix \[ P = \begin{bmatrix} 0 & 1/3 & 2/3\\ 0 & 0 & 1\\ 0.5 & 0.5 & 0 \end{bmatrix} \] Simulate \(Y_0 = X(0)\) as shown above. Assume \(X(0) = Y_0 = 1\). Then the Markov chain will spend time \(T_1 \sim \mathrm{Exp}(3)\) in this state. After exiting this state, it will go to \(2\) with probability \(1/3\) and to \(3\) with probability \(2/3\). One can simulate \(Y_1\) as shown in the subsection devoted to discrete-time Markov chains: Let \(U_1 \sim \mathrm{Uni}[0, 1]\); if \(U_1 \le 2/3\) then \(Y_1 := 3\); otherwise \(Y_1 := 2\). Assume \(Y_1 = 3\). Then the Markov chain will spend time \(T_2 \sim \mathrm{Exp}(2)\) in this state. After exiting this state, it will go to \(1\) and \(2\) with equal probabilities \(.5\). Let \(\tau_1 := T_1\) and \(\tau_2 := \tau_1 + T_2\). In this way we can simulate jump times \(\tau_k\) and jump positions \(Y_k\). For every fixed \(t \ge 0\), find the \(k\) such that \(\tau_{k} \le t < \tau_{k+1}\) and assign \(X(t) := Y_k\).
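A Python sketch of this jump-time construction for the generator above (states are labeled \(0, 1, 2\) here instead of \(1, 2, 3\)):

```python
import math
import random

A = [[-3, 1, 2],
     [0, -1, 1],
     [1, 1, -2]]

def simulate_ctmc(x0, n_jumps):
    """Holding time in state i is Exp(-A[i][i]); on exit, the chain jumps to
    j != i with probability A[i][j] / (-A[i][i]). Returns jump times and states."""
    t, state = 0.0, x0
    times, states = [0.0], [x0]
    for _ in range(n_jumps):
        rate = -A[state][state]
        t += -math.log(1.0 - random.random()) / rate  # Exp(rate) holding time
        targets = [j for j in range(len(A)) if j != state]
        u, cumulative = random.random(), 0.0
        new_state = targets[-1]  # guard against floating-point round-off
        for j in targets:
            cumulative += A[state][j] / rate
            if u < cumulative:
                new_state = j
                break
        state = new_state
        times.append(t)
        states.append(state)
    return times, states

random.seed(11)
times, states = simulate_ctmc(0, 1_000)
```

For a fixed \(t\), one then reads off \(X(t)\) as the state at the last jump time not exceeding \(t\), exactly as in the text.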

Similarly to discrete-time Markov chains, we can estimate the stationary distribution of an irreducible continuous-time Markov chain by calculating the share of time spent in each state over a long time interval. To this end, we fix a large number \(N\) of jumps. Take each state \(i\), and find all \(k = 1, \ldots, N\) such that \(Y_k = i\). Sum the holding times \(T_{k+1}\) over all such \(k\), and divide this sum by the total time elapsed. The result is \(p_i\), and the vector \(\begin{bmatrix}p_1 & p_2 & \ldots\end{bmatrix}\), with one entry per state, is an approximation to the stationary distribution.

13. Brownian motion

It can be approximated by a discrete-time random walk: take \(t_0\) to be the step size. Then \[ W(kt_0) \approx \sqrt{t_0}\left(Z_1 + \ldots + Z_k\right), \] where \(Z_1, Z_2, \ldots\) are i.i.d. taking values \(\pm 1\) with equal probability. This follows from the Central Limit Theorem: \(\mathbb E Z_k = 0\) and \(\mathrm{Var}\, Z_k = 1\), therefore \[ \frac{Z_1 + \ldots + Z_k}{\sqrt{k}} \approx \mathcal N(0, 1), \] so \(\sqrt{t_0}\left(Z_1 + \ldots + Z_k\right) \approx \sqrt{t_0}\,\mathcal N(0, k) = \mathcal N(0, kt_0)\), which matches \(W(kt_0) \sim \mathcal N(0, kt_0)\). Thus, the simulation method is as follows: take a small \(t_0\), say \(t_0 = .01\), and let \(W((k+1)t_0) := W(kt_0) + \sqrt{t_0}Z_{k+1}\), with initial value \(W(0) := 0\).
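A Python sketch of this scaled-random-walk approximation:

```python
import random

def brownian_path(T, t0):
    """Approximate W on [0, T]: W((k+1) t0) = W(k t0) + sqrt(t0) * Z, Z = ±1."""
    w, path, step = 0.0, [0.0], t0 ** 0.5
    for _ in range(int(T / t0)):
        w += step if random.random() < 0.5 else -step
        path.append(w)
    return path

random.seed(12)
finals = [brownian_path(1.0, 0.01)[-1] for _ in range(20_000)]
mean = sum(finals) / len(finals)                # E[W(1)] = 0
var = sum(x * x for x in finals) / len(finals)  # Var W(1) = 1
```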

14. Stochastic integrals

We approximate them by taking a small step \(t_0\), as in the previous subsection: \[ \int_0^2W(s)\,\mathrm{d} W(s) \approx \sum\limits_{k=0}^{2/t_0-1}W(kt_0)\left(W((k+1)t_0) - W(kt_0)\right). \] Having simulated the Brownian motion as in the previous subsection, we can then calculate the sum on the right-hand side.
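A Python sketch, reusing the scaled random walk for \(W\). As a check: \(\int_0^2 W\,\mathrm{d}W = (W(2)^2 - 2)/2\) has mean \(0\), and by the Itô isometry its variance is \(\int_0^2 s\,\mathrm{d}s = 2\).

```python
import random

def ito_integral(T, t0):
    """Left-point Riemann sum for ∫_0^T W dW with step t0, where the increments
    of W are approximated by sqrt(t0) * (±1)."""
    w, total, step = 0.0, 0.0, t0 ** 0.5
    for _ in range(int(T / t0)):
        dw = step if random.random() < 0.5 else -step
        total += w * dw  # left endpoint W(k t0) times the increment
        w += dw
    return total

random.seed(13)
samples = [ito_integral(2.0, 0.01) for _ in range(20_000)]
mean = sum(samples) / len(samples)                # should be near 0
var = sum(s * s for s in samples) / len(samples)  # should be near 2
```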

15. Stochastic differential equations

Similarly to the Brownian motion, we take a small time step \(t_0\). Consider, for example, \[ \mathrm{d} X(t) = -3X(t)\,\mathrm{d} t + X(t)\,\mathrm{d} W(t),\ X(0) = 1. \] We approximate this equation as follows: \[ X((k+1)t_0) - X(kt_0) = -3X(kt_0)t_0 + X(kt_0)\,\Delta W_{k+1},\ k = 0, 1, 2, \ldots \] where \(\Delta W_{k+1} := W((k+1)t_0) - W(kt_0)\) can be approximated by \(\sqrt{t_0}Z_{k+1}\), with i.i.d. \(Z_{k+1} = \pm 1\) with equal probabilities. Start from \(X(0) = 1\), simulate \(Z_1\) and calculate \(X(t_0)\), then simulate \(Z_2\) and calculate \(X(2t_0)\), etc.
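A Python sketch of this Euler-type scheme. For this equation \(\mathbb E X(1) = e^{-3} \approx 0.05\), so the sample mean should come out near that value, up to discretization bias:

```python
import random

def euler_path(T, t0):
    """Euler scheme for dX = -3 X dt + X dW, X(0) = 1, with dW ≈ sqrt(t0) * (±1)."""
    x, step = 1.0, t0 ** 0.5
    for _ in range(int(T / t0)):
        dw = step if random.random() < 0.5 else -step
        x += -3.0 * x * t0 + x * dw
    return x

random.seed(14)
finals = [euler_path(1.0, 0.01) for _ in range(20_000)]
mean = sum(finals) / len(finals)  # near E X(1) = exp(-3) ≈ 0.05, up to bias
```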

Problems

Problem 1.

Simulate the random variables \((X, Y)\) \(10000\) times, if they take values \((1, 0)\), \((0, 1)\), \((-1, 0)\), \((0, -1)\) with equal probability \(0.25\). Calculate the empirical covariance \(\mathrm{Cov}(X, Y)\), and compare with the theoretical value.

Problem 2.

Simulate the die roll \(10000\) times, and find the empirical mean and variance.

Problem 3.

Simulate a geometric random variable \(10000\) times with probability of success \(p = 20\%\), by simulating a sequence of Bernoulli trials and then taking the number of the first successful trial. Compute the empirical mean and variance, and compare with the theoretical values.

Problem 4.

Simulate \(1000\) times a Poisson random variable \(\mathrm{Poi}(1.5)\) by splitting \([0, 1]\) according to this distribution. Simulate also \(\mathrm{Bin}(100, 0.015)\) by \(100\) Bernoulli trials at a time; repeat \(1000\) times. Compare the empirical distributions with each other: find the differences \[ |\mathbb{P}(\mathrm{Poi}(1.5) = k) - \mathbb{P}(\mathrm{Bin}(100, 0.015) = k)|,\quad k = 0, 1, 2, 3, 4. \] For each distribution, find its empirical mean, and compare with the theoretical value.

Problem 5.

Simulate \(10000\) times the random variable \(X\) with density \[ p(x) = \frac{2}{\pi(1 + x^2)},\ x \ge 0, \] using the inverse function method. Calculate the empirical probability \(\mathbb P(X \ge 2)\), and compare with the theoretical value.

Problem 6.

Simulate \(1000\) times the Gamma variable \(\Gamma(3, 2.6)\), using that it is the sum of three i.i.d. \(\mathrm{Exp}(2.6)\) random variables, which can be generated by the inverse function method. Compare the empirical and theoretical mean and variance.

Problem 7.

Simulate \(1000\) times the Gamma variable \(\Gamma(3, 2.6)\) using the rejection method. Cut off its tail at some large enough point. Compare the empirical and theoretical mean and variance.

Problem 8.

Simulate the following jointly distributed random variables \(10000\) times using the rejection method: \[ p(x, y) = y,\ x \le y \le x + 1,\, 0 \le x \le 1. \] Calculate empirical covariance, and compare with the theoretical result.

Problem 9.

Using the Monte Carlo method with \(10000\) tries, approximate the double integral \[ \int_0^1\int_0^1(x + y)\,\mathrm{d} x\,\mathrm{d} y. \]

Problem 10.

Simulate \(100\) i.i.d. claims of size \(Z \sim \mathrm{Exp}(1.5)\). Repeat this \(1000\) times. Find the value-at-risk for confidence level \(95\%\). Compare with theoretical value.

Problem 11.

Simulate \(10000\) steps of a Markov chain with transition matrix \[ P = \begin{bmatrix} 0.5 & 0.3 & 0.2\\ 0.4 & 0.6 & 0\\ 0.1 & 0.9 & 0 \end{bmatrix} \] starting from \(1\). Find the stationary distribution by calculating the share of time spent in each state for this simulation. Compare with theoretical result.

Problem 12.

Simulate \(10000\) steps of a Markov chain with transition matrix \[ P = \begin{bmatrix} 0.5 & 0.3 & 0.1 & 0.1\\ 0.3 & 0.6 & 0.1 & 0\\ 0.2 & 0.2 & 0 & 0.6\\ 0 & 0.2 & 0.8 & 0 \end{bmatrix} \] starting from \(1\). Find empirical probability of hitting \(2\) before \(3\). Compare with theoretical result.

Problem 13.

Simulate \(1000\) times the random walk with \(p = 0.3\) and \(q = 0.7\), starting from \(2\). Find the empirical probability that it hits \(-1\) before \(4\). Compare with theoretical value.

Problem 14.

Simulate \(10\) steps of a random walk with \(p = 0.6\) and \(q = 0.4\), starting from \(-1\). Repeat this simulation \(10000\) times. Find the empirical probability \(\mathbb{P}(S_{10} = -3)\). Compare with theoretical value.

Problem 15.

Simulate a stock price \(1000\) times given by a geometric symmetric random walk, starting from \(P_0 = 1\). Find the empirical fair value of the European call option at maturity \(T = 30\) with strike \(K = 2\).

Problem 16.

Simulate \(1000\) times the first \(50\) jumps of a Poisson process \(N\) with intensity \(\lambda = 2\). Calculate the empirical expectation \(\mathbb{E}\left[N(1)N(2)\right]\), and compare it with the true value.

Problem 17.

Simulate \(1000\) times the first \(50\) jumps of a compound Poisson process \(X\) with increments \(\mathcal N(2.5, 4)\), and intensity \(\lambda = 0.5\). Use this to find empirical value \(\mathrm{Var} X(4)\) and compare this with the true value.

Problem 18.

An insurance company receives claims during the year according to the Poisson process \(N(t)\) with intensity \(\lambda = 2\). Each claim is distributed as \(\mathrm{Exp}(2.3)\). We measure time in days (365 days in a year). From \(100\) simulations, find the value-at-risk for confidence level \(90\%\), and compare with theoretical value.

Problem 19.

Until time \(T = 100\), simulate a continuous-time Markov chain with generator \[ A = \begin{bmatrix} -3 & 1 & 2\\ 2 & -5 & 3\\ 5 & 0 & -5 \end{bmatrix} \] starting from state \(1\). Find the stationary distribution by calculating the share of time spent in each state. Compare with the theoretical value.

Problem 20.

Simulate \(1000\) times, starting from state \(2\), a continuous-time Markov chain with generator \[ A = \begin{bmatrix} -3 & 3\\ 2 & -2 \end{bmatrix} \] Find the empirical distribution of this chain at time \(t = 2\). Compare with the theoretical value.

Problem 21.

Simulate the M/M/1 queue with arrival intensity \(\lambda = 2\) and service intensity \(\mu = 4\), until time horizon \(T = 100\). Find the empirical stationary distribution by calculating the share of time spent in each state. Compare this with the theoretical value.

Problem 22.

Simulate a Brownian motion with step size \(.01\) for time horizon \(T = 10\). Repeat this \(1000\) times. Calculate the empirical probability \(\mathbb{P}(W(10) > 2)\), and compare with true value.

Problem 23.

Simulate a geometric Brownian motion \(X\) with drift \(g = -.3\) and variance \(\sigma^2 = 2\), starting from \(X(0) = 0.3\), until time horizon \(T = 1\), with step size \(.01\). Repeat this \(1000\) times. Calculate the empirical value of \(\mathbb E X^2(1)\), and compare with the true value.

Problem 24.

Find an empirical fair price of a barrier option which pays \(1\) if the price of a stock at some time until maturity \(T = 2.5\) exceeds \(K = 2\), if the initial price is \(S_0 = 1.4\), and the volatility is \(\sigma = 0.4\). Use step size \(0.01\) and \(1000\) simulations.

Problem 25.

Simulate the following Itô integral \(1000\) times with step size \(0.01\): \[ \int_0^2e^{W(s)}\,\mathrm{d} W(s). \] Calculate the empirical variance and compare with the true result.

Problem 26.

Simulate the following Ornstein-Uhlenbeck process: \[ \mathrm{d} X(t) = -2X(t)\,\mathrm{d} t + 3\,\mathrm{d} W(t),\ X(0) = -2, \] with step \(0.01\), for time horizon \(T = 1\). Repeat this \(1000\) times. Find the empirical mean \(\mathbb{E} X(1)\), and compare with the true value.