Random Process

A random process is defined as a function that maps the result of a probability experiment to a time function. It is written as:

$X(t)\equiv X(t, e)$

A random variable $X \equiv X(e)$ is defined as a function that maps the result of a probability experiment $e$ to a real number. A random process is used to model time-varying probability experiments. Examples include models for stock prices, wind power, sensor noise, etc.



Vector Random Process

A vector random process is a vector whose elements are scalar random processes. It is expressed as:

$X(t) \equiv X(t, e) = [X_{1}(t, e), X_{2}(t, e), \ldots, X_{n}(t, e)]^{T}$

It is also simply called a random process.



Sample Function

A random process is usually denoted by capital letters, and the corresponding time function by lowercase letters. That is, if $x(t)$ is the time function assigned to the outcome $e$ of the probability experiment, the random process is written as $X(t, e) = x(t, e)$ or simply $X(t) = x(t)$.

$x(t)$ represents the state of the random process at time $t$, and it is called a sample function. More specifically, if $e$ is fixed as $e = e_{1}$, the random process becomes the sample function $X(t, e_{1}) = x(t, e_{1})$. For another example, if $e$ is fixed as $e = e_{2}$, the random process becomes the sample function $X(t, e_{2}) = x(t, e_{2})$.

As the example shows, a sample function is a deterministic function, and the collection of all sample functions is called an ensemble.

If time is discrete, the random process is called a discrete-time random process or a random sequence. It is expressed as:

$X(k) \equiv X(k, e) = [X_{1}(k, e), X_{2}(k, e), \ldots, X_{n}(k, e)]^{T}$

where $k$ is the index of time.



Mean, Auto-correlation, and Auto-covariance Functions

In this section, we will discuss the mean function, the auto-correlation function, and the auto-covariance function.

Apart from these, the definitions for a random sequence are the same as those for a random process.



Mean Function

The probability density function of a random process can vary with time, so it is itself a function of time and is written as $p_{X}(x(t))$. At time $t = t_{1}$, the expectation, or ensemble mean function, is defined as the element-wise expectation of the random vector $X(t_{1})$.

This means that

$\mu_{X}(t_{1}) = \mathbb{E}[X(t_{1})]$
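As a minimal numerical sketch (the process, noise level, and sample sizes below are illustrative assumptions, not from the text), the ensemble mean function can be estimated by averaging many sample functions at each time point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of sample functions: each row is one realization x(t, e_i) of an
# illustrative process X(t) = sin(t) + zero-mean noise.
t = np.linspace(0.0, 10.0, 500)
n_realizations = 2000
ensemble = np.sin(t) + rng.normal(0.0, 0.3, size=(n_realizations, t.size))

# Ensemble mean function mu_X(t): average over realizations at each time point.
mu_hat = ensemble.mean(axis=0)

# At a fixed time t_1 the estimate is close to sin(t_1), the true mean.
i1 = 100
print(t[i1], mu_hat[i1], np.sin(t[i1]))
```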



Auto-correlation Function

At times $t_{1}$ and $t_{2}$, two random vectors have the joint probability density function $p_{X}(x(t_{1}), x(t_{2}))$. To show the auto-correlation at different points in time of a random process, we define the auto-correlation function $R_{XX}(t_{1}, t_{2})$ as follows:

$R_{XX}(t_{1}, t_{2}) = \mathbb{E}[X(t_{1})X^{T}(t_{2})]$ $= \begin{bmatrix} \mathbb{E}[X_{1}(t_{1})X_{1}(t_{2})] & \cdots & \mathbb{E}[X_{1}(t_{1})X_{n}(t_{2})]\\ \vdots & \ddots & \vdots\\ \mathbb{E}[X_{n}(t_{1})X_{1}(t_{2})] & \cdots & \mathbb{E}[X_{n}(t_{1})X_{n}(t_{2})] \end{bmatrix}$


The auto-correlation function describes, in the time domain, how a random process is correlated with itself across time and, in the frequency domain, how the power or energy of the process is distributed over frequency.

Auto-covariance Function

The auto-covariance function $P_{XX}(t_{1}, t_{2})$ is defined as follows:

$P_{XX}(t_{1}, t_{2}) = \mathbb{E}[(X(t_{1})-\mathbb{E}[X(t_{1})])(X(t_{2})-\mathbb{E}[X(t_{2})])^{T}]$
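As a rough sketch, both functions can be estimated from an ensemble of realizations; the process $X(t) = A\sin t$ with random amplitude $A$ is an assumed example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scalar process: X(t) = A*sin(t) with a random amplitude A.
t = np.linspace(0.0, 10.0, 200)
A = rng.normal(1.0, 0.5, size=(5000, 1))
ensemble = A * np.sin(t)                     # rows are realizations

i, j = 30, 80                                # indices of t_1 and t_2

# Auto-correlation R_XX(t1, t2) = E[X(t1) X(t2)]
R = np.mean(ensemble[:, i] * ensemble[:, j])

# Auto-covariance P_XX(t1, t2) = E[(X(t1) - mu(t1)) (X(t2) - mu(t2))]
P = np.mean((ensemble[:, i] - ensemble[:, i].mean())
            * (ensemble[:, j] - ensemble[:, j].mean()))

# For this process, R - P = E[A]^2 * sin(t1) * sin(t2).
print(R, P)
```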





Stationary Process

A stationary process is a random process whose probabilistic properties, in part or in whole, are invariant over time.

Two types of stationarity are commonly distinguished:



Strict Sense Stationary (SSS)

A random process $X(t)$ is called an SSS process if, for any $m$ time points $t_{1} < t_{2} < \ldots < t_{m}$ and any $h > 0$, the joint probability density function of $X(t_{1}), X(t_{2}), \ldots , X(t_{m})$ satisfies:

$p_{X}(x(t_{1}), x(t_{2}), \ldots , x(t_{m})) = p_{X}(x(t_{1} + h), x(t_{2} + h), \ldots , x(t_{m} + h))$

If $X(t)$ is an SSS process, the mean of the ensemble becomes constant, and the auto-correlation function $R_{XX}(t_{1}, t_{2})$ at any two time points $t_{1}$ and $t_{2}$ becomes a function of the time difference between the two time points $(t_{2} - t_{1})$.
That is,

  • $\mathbb{E}[X(t)] = \text{constant}$

  • $R_{XX}(t_{1}, t_{2}) = R_{XX}(t_{2} - t_{1}) = R_{XX}(\tau)$

Wide Sense Stationary (WSS)

If the mean of the ensemble of a random process $X(t)$ is constant and $R_{XX}(t_{1}, t_{2}) = R_{XX}(\tau)$, then $X(t)$ is called a WSS process.

If $X(t)$ is an SSS process, then $X(t)$ is also a WSS process, but the converse is not true.
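As an illustration, the classic random-phase sinusoid $X(t) = A\cos(\omega t + \Theta)$, with $A$ and $\omega$ constant and $\Theta$ uniform on $[0, 2\pi]$, is a standard example of a WSS process:

$\mathbb{E}[X(t)] = \frac{A}{2\pi}\int_{0}^{2\pi}\cos(\omega t + \theta)d\theta = 0$

$R_{XX}(t, t+\tau) = \mathbb{E}[A^{2}\cos(\omega t + \Theta)\cos(\omega t + \omega\tau + \Theta)] = \frac{A^{2}}{2}\cos(\omega\tau) = R_{XX}(\tau)$

The mean is constant and the auto-correlation depends only on $\tau$, so both WSS conditions hold.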

For a WSS process, a sharp decrease of $R_{XX}(\tau)$ with $\tau$, as in Image 1, indicates that the correlation between two time points falls off quickly; conversely, a gradual decrease of $R_{XX}(\tau)$ with $\tau$, as in Image 2, indicates that the correlation falls off slowly.

Thus, $R_{XX}(\tau)$ serves as a measure of how quickly $X(t)$ changes with time; in other words, it carries information about the frequency content of $X$.

[Image 1: an auto-correlation function $R_{XX}(\tau)$ that decreases sharply with $\tau$]

[Image 2: an auto-correlation function $R_{XX}(\tau)$ that decreases gradually with $\tau$]

WSS is a general condition that also applies to multi-dimensional signals, i.e., vector-valued time series. When the WSS condition is applied to a single-variable (scalar) time signal, the process is said to be scalar WSS.



Scalar WSS

For a scalar WSS process $X(t)$, the auto-correlation function $R_{XX}(\tau)$ has the following properties (a numerical check is sketched after this list):

  • $\mathbb{E}[X^{2}(t)] = R_{XX}(0) \geq 0$

  • $R_{XX}(\tau) = R_{XX}(-\tau)$

  • $\left | R_{XX}(\tau) \right | \leq R_{XX}(0)$
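A quick numerical check of these three properties; the first-order autoregressive sequence used here is an assumed example of a scalar WSS process, and the auto-correlation is estimated by time averaging (justified for ergodic processes, discussed later):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed scalar WSS example: first-order autoregressive sequence.
N = 200_000
x = np.zeros(N)
for k in range(1, N):
    x[k] = 0.9 * x[k - 1] + rng.normal()

def autocorr(x, lag):
    """Time-average estimate of R_XX(lag) for a zero-mean sequence."""
    return np.mean(x[: x.size - lag] * x[lag:])

R0 = autocorr(x, 0)
print(R0 >= 0.0)                       # property 1: R_XX(0) = E[X^2] >= 0
for tau in (1, 5, 20):
    R = autocorr(x, tau)
    print(tau, R, abs(R) <= R0)        # property 3: |R_XX(tau)| <= R_XX(0)
# Property 2, R_XX(tau) = R_XX(-tau), holds automatically for a real scalar
# process, since E[X(t)X(t+tau)] = E[X(t+tau)X(t)].
```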



Power Spectral Density (PSD)

The power spectral density $S_{XX}(\omega)$ of a WSS random process is defined as the Fourier transform of the auto-correlation function. The function $S_{XX}(\omega)$ is:

$S_{XX}(\omega) = \int_{-\infty}^{\infty}R_{XX}(\tau)e^{-j\omega \tau}d\tau$

where $\omega$ is the frequency in rad/s.

The auto-correlation function can be recovered from the power spectral density by the inverse Fourier transform:

$R_{XX}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty}S_{XX}(\omega)e^{j\omega\tau}d\omega$


The power of $X(t)$ is calculated from the auto-correlation function or power spectral density as:

$\mathbb{E}[X(t)X^{T}(t)] = R_{XX}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty}S_{XX}(\omega)d\omega$


The power spectral density of a WSS random sequence $S_{XX}(\hat{\omega})$ is defined by the discrete-time Fourier transform of the auto-correlation function as:

$S_{XX}(\hat{\omega}) = \sum_{n=-\infty}^{\infty}R_{XX}(n)e^{-j\hat{\omega}n}$


where $\hat{\omega}$ is the discrete-time frequency and the range is $\hat{\omega} \in [-\pi, \pi]$.

Likewise, the auto-correlation function can be recovered from the power spectral density by the inverse discrete-time Fourier transform:

$R_{XX}(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi}S_{XX}(\hat{\omega})e^{j\hat{\omega}n}d\hat{\omega}$


The power of a random sequence $X(k)$ can be calculated from the auto-correlation function or power spectral density as:

$\mathbb{E}[X(k)X^{T}(k)] = R_{XX}(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi}S_{XX}(\hat{\omega})d\hat{\omega}$
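As a rough sketch of the discrete-time relations above (the unit-variance noise sequence and lengths are assumptions for illustration), a periodogram estimates $S_{XX}(\hat{\omega})$, and averaging it over frequency recovers the power $R_{XX}(0)$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed zero-mean WSS sequence (unit-variance white noise).
N = 1 << 16
x = rng.normal(0.0, 1.0, N)

# Periodogram estimate of S_XX(w_hat) at N equally spaced frequencies.
X = np.fft.fft(x)
S_hat = (np.abs(X) ** 2) / N

# Power two ways: R_XX(0) = E[X(k)^2], and (1/2pi) * integral of S_XX over
# [-pi, pi], approximated by the mean of S_hat over the frequency grid.
power_time = np.mean(x ** 2)
power_freq = np.mean(S_hat)
print(power_time, power_freq)          # both close to 1 for unit-variance noise
```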





White Noise

A random process that is temporally uncorrelated is called white noise $V(t)$. It plays a role analogous to that of the impulse signal in deterministic systems, and is defined as a WSS process whose auto-correlation function is a Dirac delta function:

$\mathbb{E}[V(t)V^{T}(t+\tau)] = R_{VV}(\tau) = S_{0}\delta(\tau)$

where $S_{0}$ is a constant matrix.

The power spectral density is:

$S_{VV}(\omega) = \int_{-\infty}^{\infty}R_{VV}(\tau)e^{-j\omega \tau}d\tau = S_{0}\int_{-\infty}^{\infty}\delta(\tau)e^{-j\omega \tau}d\tau = S_{0}$

Therefore, white noise has the same power spectral density value across all frequencies.

White Noise Sequence

A WSS random sequence $V(k)$, in which the auto-correlation function is given as a Kronecker delta function, is called a white noise sequence.

$\mathbb{E}[V(k)V^{T}(k+m)] = R_{VV}(m) = S_{0}\delta_{m}$

where $\delta_{m}$ is a Kronecker delta function defined as:

$\delta_{m} = \left\{\begin{matrix} 1, \ m=0\\ 0, \ m \neq 0 \end{matrix}\right.$

The power spectral density of a white noise sequence is:

$S_{VV}(\hat{\omega}) = \sum_{n=-\infty}^{\infty}R_{VV}(n)e^{-j\hat{\omega}n} = S_{0}$

It has the same power spectral density value across all frequencies.
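A small sketch (the variance $S_{0}$ and sequence length are arbitrary choices) showing that the estimated auto-correlation of a white noise sequence is approximately $S_{0}\delta_{m}$:

```python
import numpy as np

rng = np.random.default_rng(4)

S0 = 2.0                                        # assumed noise variance
v = rng.normal(0.0, np.sqrt(S0), 100_000)

# Time-average estimate of R_VV(m) for a few lags.
for m in range(5):
    R_m = np.mean(v[: v.size - m] * v[m:])
    print(m, round(R_m, 3))                     # ~S0 at m = 0, ~0 for m != 0
```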



Gaussian White Noise

If the probability density function of white noise $V(t)$ or $V(k)$ is Gaussian at every time point $t$ or $k$, it is called Gaussian white noise.
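In simulation, Gaussian white noise is typically generated by drawing independent normal samples; a minimal sketch with an assumed standard deviation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ensemble of Gaussian white-noise sequences: each row is one realization.
sigma = 0.5                                     # assumed standard deviation
V = rng.normal(0.0, sigma, size=(50_000, 10))

# At any fixed time index k, the samples across the ensemble are N(0, sigma^2).
k = 3
samples = V[:, k]
print(samples.mean(), samples.std())            # approximately 0 and sigma
print(np.mean(np.abs(samples) < 1.96 * sigma))  # ~0.95, a Gaussian tail check
```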





Ergodic Process in the Mean

An ergodic process in the mean is one for which a sample function randomly extracted from a stationary random process contains all of the probabilistic information of that process. Although ergodicity is generally very hard to verify, note that white noise is an ergodic process.

The time average and time correlation of a deterministic function $x(t)$ are defined in equations [1] and [2], respectively.

$\langle x(t) \rangle = \lim_{T \rightarrow \infty} \frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}x(t)dt$ [1]

$\langle x(t)x^{T}(t+\tau) \rangle = \lim_{T \rightarrow \infty} \frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}x(t)x^{T}(t+\tau)dt$ [2]

If $x(t)$ is a sample function of a stationary process, and if the ensemble mean $\mathbb{E}[X(t)]$ of $X(t)$ equals the time average $\langle x(t) \rangle$, $X(t)$ is called an ergodic process in the mean.
Also, if the ensemble correlation $\mathbb{E}[X(t)X^{T}(t+\tau)]$ of $X(t)$ equals the time correlation $\langle x(t)x^{T}(t+\tau) \rangle$, $X(t)$ is called an ergodic process in the correlation.

In a random sequence, time average and time correlation are defined as equations [3] and [4], respectively.

$\langle x(k) \rangle = \lim_{N \rightarrow \infty} \frac{1}{2N+1}\sum_{k=-N}^{N}x(k)$ [3]

$\langle x(k)x^{T}(k+m) \rangle = \lim_{N \rightarrow \infty} \frac{1}{2N+1}\sum_{k=-N}^{N}x(k)x^{T}(k+m)$ [4]
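A sketch comparing the time average of one long realization with the ensemble average at a fixed time; the AR(1) sequence below is an assumed example of a process that is ergodic in the mean:

```python
import numpy as np

rng = np.random.default_rng(6)

def ar1(n, rng, a=0.8):
    """One realization of an assumed zero-mean AR(1) sequence."""
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = a * x[k - 1] + rng.normal()
    return x

# Time average <x(k)> over one long realization.
time_avg = ar1(200_000, rng).mean()

# Ensemble average E[X(k)] at a fixed (late) time over many realizations.
ens_avg = np.mean([ar1(300, rng)[-1] for _ in range(2_000)])

print(time_avg, ens_avg)   # both close to the true mean, 0
```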





Independent, Identically Distributed (IID)

If all random vectors that constitute a random process $X(t)$ are independent and have the same probability density function, $X(t)$ is called IID.
If all random vectors that constitute a random sequence $X(k)$ are independent and have the same probability density function, $X(k)$ is called an IID sequence.





Markov Process

A Markov process is a random process in which, given the present probabilistic information, the future and the past are conditionally independent of each other.
That is, when the probability distribution of a random process $X(t)$ is given at a specific time point $t_{1}$, if the probability distribution of $X(t)$ at any time $t > t_{1}$ does not depend on the probability distribution of $X(s)$ at times $s < t_{1}$, then $X(t)$ is defined as a Markov process.
A Markov process is expressed probabilistically as follows:

$p_{X}(x(t) \mid x(s), \ s \leq t_{1}) = p_{X}(x(t) \mid x(t_{1})), \ \ \forall t > t_{1}$


Similarly to the Markov process, a Markov sequence is defined by conditioning only on the preceding step. In terms of the probability density function, a Markov sequence $X(k)$ satisfies:

$p_{X}(x(k) \mid x(k-1), x(k-2), \ldots , x(0)) = p_{X}(x(k) \mid x(k-1)), \ \ \forall k$
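A minimal sketch of a Markov sequence (a first-order Gauss-Markov recursion is assumed as the example); because only $x(k-1)$ enters the update, the conditional density of $X(k)$ given the entire past reduces to a density conditioned on $X(k-1)$ alone:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed first-order Gauss-Markov sequence: x(k) = a*x(k-1) + w(k),
# with w(k) a Gaussian white noise sequence.
a = 0.9
n = 1_000
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal()

# Only x(k-1) enters the recursion, so
# p(x(k) | x(k-1), ..., x(0)) = p(x(k) | x(k-1)) = N(a*x(k-1), 1).
```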





Differentiation of a Random Process

If a random process $X(t)$ satisfies the following equation, $X(t)$ is called continuous in the mean square sense at time $t=t_{0}$.

$\lim_{t \rightarrow t_{0}} \mathbb{E}[(X(t) - X(t_{0}))^{2}] = 0$


If a random process is continuous in the mean square sense at $t=t_{0}$, this can be written simply, in the same way as for a deterministic function, as:

$\lim_{t \rightarrow t_{0}} X(t) = X(t_{0})$


Since a random process changes over time, it can be differentiated. The derivative $X'(t)$ of a random process $X(t)$ is defined as:

$X'(t) = \frac{dX(t)}{dt} = \lim_{h \rightarrow 0} \frac{X(t+h)-X(t)}{h}$
Note that differentiation and integration of a random process can be exchanged with the expectation operator $\mathbb{E}$.
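A quick numerical sketch of that exchange property, using the assumed process $X(t) = A\sin t$ with a random amplitude $A$: the derivative of the ensemble mean matches the ensemble mean of the derivative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Assumed process X(t) = A*sin(t), with random amplitude A of mean 2.
t = np.linspace(0.0, 2 * np.pi, 400)
A = rng.normal(2.0, 0.5, size=(20_000, 1))
X = A * np.sin(t)
Xdot = A * np.cos(t)                       # exact derivative of each realization

# d/dt E[X(t)], computed numerically, versus E[dX(t)/dt].
d_mean = np.gradient(X.mean(axis=0), t)
mean_d = Xdot.mean(axis=0)

print(np.max(np.abs(d_mean - mean_d)))    # small: E and d/dt can be exchanged
```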
This post is licensed under CC BY 4.0 by the author.