TTK4115: Linear System Theory
$$
\newcommand{\dt}{\,\mathrm{d}t}
\newcommand{\dx}{\,\mathrm{d}x}
\newcommand{\dy}{\,\mathrm{d}y}
\newcommand{\dh}{\,\mathrm{d}h}
\newcommand{\pt}{\partial t}
\newcommand{\px}{\partial x}
\newcommand{\py}{\partial y}
\newcommand{\QEDA}{\hfill\ensuremath{\blacksquare}}
\newcommand{\QEDB}{\hfill\ensuremath{\square}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\bmat}[1]{\begin{bmatrix}#1\end{bmatrix}}
\renewcommand{\vec}[1]{\mathbf{#1}}
$$
# Realization
## Definition
Given a proper rational transfer matrix $\hat{G}(s)$, a realization is any state-space
model $(\vec{A},\vec{B},\vec{C},\vec{D})$ such that $\hat{G}(s)$ is the corresponding transfer matrix, i.e.
$$\hat{G}(s) = \vec{C}(s\vec{I} - \vec{A})^{-1}\vec{B} + \vec{D}$$
A realization is said to be minimal if its state vector has the least achievable dimension; this is equivalent to requiring that the state-space model $(\vec{A},\vec{B},\vec{C},\vec{D})$
is both controllable and observable.
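A minimal MATLAB sketch (assuming the Control System Toolbox); the transfer function is a made-up example with a pole-zero cancellation, so its minimal realization has fewer states:
```matlab
% Sketch: build a realization of G(s) and reduce it to a minimal one.
G = tf([1 2], [1 3 2]);   % made-up example: (s+2)/((s+1)(s+2)) = 1/(s+1)
sys = ss(G);              % some (non-minimal) realization of G(s)
sysm = minreal(sys);      % minimal realization: one state instead of two
```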
# Jordan Form
## Definition
The Jordan form of a system can be derived by means of a transformation matrix $ \vec{T} $ such that
$$
\vec{J} = \vec{T}^{-1} \vec{A} \vec{T}, \qquad \vec{\hat{B}} = \vec{T}^{-1} \vec{B}, \qquad \vec{\hat{C}} = \vec{C} \vec{T}
$$
where $\vec{T}$ consists of the (generalized) eigenvectors of $\vec{A}$:
$$
\vec{T} = \bmat{\vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n}
$$
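A minimal MATLAB sketch, assuming the Symbolic Math Toolbox (where `jordan` lives); the matrix is a made-up example:
```matlab
% Sketch: compute the Jordan form J and the transformation matrix T.
A = [1 1; 0 1];           % made-up example with a repeated eigenvalue
[T, J] = jordan(A);       % T\A*T equals J, the Jordan form of A
```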
# Observability
We check whether we are able to observe our states. This is essential if we want to make an estimator, e.g. a Kalman filter. We can often reconstruct states even though we can't measure them directly, for example through differentiation: you can find the velocity of an object by differentiating the position measurements.
A simple way of checking observability is to check whether the observability matrix $ \mathcal{O}=
\begin{bmatrix}
\mathbf{C}\\
\mathbf{C}\mathbf{A}\\
\vdots\\
\mathbf{C}\mathbf{A}^{n-1}
\end{bmatrix} $ has full rank.
Finding the observability matrix in MATLAB:
```matlab
Ob = obsv(A, C);
```
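As a follow-up sketch, the system is observable exactly when $\mathcal{O}$ has rank $n$ (the number of states), which can be checked directly:
```matlab
% Sketch: the pair (A, C) is observable iff obsv(A, C) has full rank.
n = size(A, 1);
is_observable = (rank(obsv(A, C)) == n);
```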
# Controllability
We check that our actuations are able to affect our states.
A simple way of checking this is to check whether the controllability matrix
$\mathcal{C}=
\begin{bmatrix}
\mathbf{B} & \mathbf{A}\mathbf{B} & \cdots & \mathbf{A}^{n-1}\mathbf{B}
\end{bmatrix}$ has full rank.
Finding the controllability matrix in MATLAB:
```matlab
Co = ctrb(A, B);
```
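As with observability, a quick rank-check sketch tells us whether the pair $(\mathbf{A}, \mathbf{B})$ is controllable:
```matlab
% Sketch: the pair (A, B) is controllable iff ctrb(A, B) has full rank.
n = size(A, 1);
is_controllable = (rank(ctrb(A, B)) == n);
```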
# Linearization
If we have a non-linear system $\mathbf{\dot{x}} = \mathbf{h}(\mathbf{x}, \mathbf{u})$, we can linearize the system around an operating point (often chosen to be an equilibrium):
$ \mathbf{x} = \mathbf{x_p}, \quad \mathbf{u} = \mathbf{u_p} $.
Linearizing amounts to transforming our system into $\mathbf{\dot{\tilde{x}}} = \mathbf{\tilde{A}} \mathbf{\tilde{x}} + \mathbf{\tilde{B}} \mathbf{\tilde{u}}$, where $\mathbf{\tilde{x}} = \mathbf{x} - \mathbf{x_p}$, $\mathbf{\tilde{u}} = \mathbf{u} - \mathbf{u_p}$, and the matrices are the Jacobians of $\mathbf{h}$ evaluated at the operating point (here for $n$ states and $m$ inputs):
$$
\mathbf{\tilde{A}} =
\begin{bmatrix}
\frac{\partial h_{1}}{\partial x_{1}} & \dots & \frac{\partial h_{1}}{\partial x_{n}}\\
\vdots & \ddots & \vdots \\
\frac{\partial h_{n}}{\partial x_{1}} & \dots & \frac{\partial h_{n}}{\partial x_{n}}
\end{bmatrix}\Bigg|_{\mathbf{x} = \mathbf{x_p},\,\mathbf{u} = \mathbf{u_p}}
\qquad
\mathbf{\tilde{B}} =
\begin{bmatrix}
\frac{\partial h_{1}}{\partial u_{1}} & \dots & \frac{\partial h_{1}}{\partial u_{m}}\\
\vdots & \ddots & \vdots \\
\frac{\partial h_{n}}{\partial u_{1}} & \dots & \frac{\partial h_{n}}{\partial u_{m}}
\end{bmatrix}\Bigg|_{\mathbf{x} = \mathbf{x_p},\,\mathbf{u} = \mathbf{u_p}}
$$
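A minimal sketch of this in MATLAB, assuming the Symbolic Math Toolbox; the pendulum-like dynamics and the operating point are made-up illustrations:
```matlab
% Sketch: linearize x_dot = h(x, u) symbolically around (x_p, u_p).
syms x1 x2 u
h = [x2; -sin(x1) + u];            % made-up nonlinear dynamics h(x, u)
A_tilde = jacobian(h, [x1; x2]);   % partial h / partial x
B_tilde = jacobian(h, u);          % partial h / partial u
% Evaluate at the equilibrium x_p = [0; 0], u_p = 0:
A_num = double(subs(A_tilde, [x1; x2; u], [0; 0; 0]));
B_num = double(subs(B_tilde, [x1; x2; u], [0; 0; 0]));
```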
# LQR - linear quadratic regulator
## Definition
The linear quadratic regulator finds the control input $\mathbf{u} = \mathbf{P}\mathbf{r} - \mathbf{K} \mathbf{x}$ (where $\mathbf{P} = 0$ if we don't want any feedforward from the reference) by minimizing the cost function
$$J = \int_{0}^{\infty} \left(\mathbf{x}^T(t)\mathbf{Q}\mathbf{x}(t) + \mathbf{u}^T(t)\mathbf{R}\mathbf{u}(t) \right) \dt$$
## When can we use an LQR?
We can use an LQR when we have a linear, controllable system. See the sections [Linearization](#Linearization) and [Controllability](#Controllability).
## How do we use an LQR?
We need to choose the weight matrices $\mathbf{Q}$ (positive semidefinite) and $\mathbf{R}$ (positive definite) in order to tune the regulator.
### Bryson's rule
Bryson's rule can be used as a starting point for tuning the regulator:
$$
Q_{ii} = \frac{1}{\text{maximum accepted value of }x_i^2}, \qquad
R_{jj} = \frac{1}{\text{maximum accepted value of }u_j^2}
$$
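A minimal MATLAB sketch of Bryson's rule; the limits below are made-up numbers for a system with three states and one input:
```matlab
% Sketch: diagonal Q and R from Bryson's rule (made-up limits).
x_max = [0.2; 1.0; 0.5];      % maximum accepted state values
u_max = 5.0;                  % maximum accepted input value
Q = diag(1 ./ x_max.^2);
R = diag(1 ./ u_max.^2);
```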
### Begin the punishment
With or without using Bryson's rule, you should adjust the diagonal elements of $\mathbf{Q}$ and $\mathbf{R}$ based on the response of your system. Increasing the diagonal elements of $\mathbf{Q}$ amounts to punishing the state errors of the corresponding states in $\mathbf{x}$ (increasing $Q_{22}$ punishes errors in state $x_{2}$). Increasing the diagonal elements of $\mathbf{R}$ amounts to punishing the use of the corresponding actuators in $\mathbf{u}$. Since only the relative weighting matters, increasing a diagonal element of $\mathbf{R}$ in many cases corresponds to decreasing a diagonal element of $\mathbf{Q}$. Scaling both matrices by the same constant will not affect the optimal gain, as the constant can be moved outside the integral.
### Finding K
The desired feedback gain $\mathbf{K}$ can be found by using the MATLAB command
```matlab
K = lqr(A, B, Q, R);
```
### Finding optional feedforward gain P
Assuming that the feedback results in a stable equilibrium yields the steady-state condition
$$
\mathbf{0} = \mathbf{\dot{x}} = \mathbf{A}\mathbf{x}_\infty + \mathbf{B}\mathbf{u}_\infty = \mathbf{A}\mathbf{x}_\infty + \mathbf{B}(\mathbf{P}\mathbf{r_0} - \mathbf{K}\mathbf{x}_\infty)
$$
$$
(\mathbf{A}-\mathbf{B}\mathbf{K})\mathbf{x}_\infty = -\mathbf{B}\mathbf{P}\mathbf{r_0}
$$
Solving for $\mathbf{x}_\infty$ and inserting it into $\mathbf{y} = \mathbf{C}\mathbf{x}$ yields
$$
\mathbf{y}_\infty = [\mathbf{C}(\mathbf{B}\mathbf{K} - \mathbf{A})^{-1}\mathbf{B}]\mathbf{P}\mathbf{r_0}
$$
We want $\mathbf{y}_\infty = \mathbf{r_0}$, hence we choose
$$
\mathbf{P} = [\mathbf{C}(\mathbf{B}\mathbf{K} - \mathbf{A})^{-1}\mathbf{B}]^{-1}
$$
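A minimal MATLAB sketch of this formula; it assumes $\mathbf{C}(\mathbf{B}\mathbf{K} - \mathbf{A})^{-1}\mathbf{B}$ is square and invertible (as many reference signals as outputs):
```matlab
% Sketch: feedforward gain P for steady-state reference tracking.
K = lqr(A, B, Q, R);
P = inv(C * ((B*K - A) \ B));   % P = [C (BK - A)^{-1} B]^{-1}
```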
# State estimation
*(Figure: example of a Simulink state-estimation implementation.)*
*(Figure: example of state-estimation pole placement.)*
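In lieu of the figures, a minimal MATLAB sketch of observer design by pole placement; the desired estimator poles are made-up and a three-state system is assumed:
```matlab
% Sketch: Luenberger observer gain via pole placement. By duality,
% placing observer poles for (A, C) is state-feedback design for (A', C').
p = [-4, -5, -6];          % made-up estimator poles (assumed 3 states)
L = place(A', C', p)';     % observer gain
% Estimator dynamics: x_hat_dot = A*x_hat + B*u + L*(y - C*x_hat)
```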
# Algorithm for discrete Kalman filter
Given: initial a priori estimate error covariance $\mathbf{P}_0^{-}$, initial a priori state estimate $\mathbf{\hat{x}}_0^{-}$, process noise covariance $\mathbf{Q_w}$ and measurement noise covariance $\mathbf{\bar{R}_v}$.
1. Compute the Kalman gain using the estimate error covariance and the measurement noise:
- $\mathbf{L}_k = \mathbf{P}_k^{-}\mathbf{C}^T(\mathbf{C}\mathbf{P}_k^{-}\mathbf{C}^T+\mathbf{{\bar{R}_v}})^{-1}$.
2. Update the estimate with the measurement:
- $\mathbf{\hat{x}}_k =\mathbf{\hat{x}}_k ^{-}+\mathbf{L}_k(\mathbf{y}_k-\mathbf{C}\mathbf{\hat{x}}_k ^{-})$.
3. Compute error covariance for updated estimate:
- $\mathbf{P}_k = (\mathbb{I}-\mathbf{L}_k\mathbf{C})\mathbf{P}_k^{-}(\mathbb{I}-\mathbf{L}_k\mathbf{C})^{T}+\mathbf{L}_k\mathbf{\bar{R}_v}\mathbf{L}_k^{T}$.
4. Project ahead:
- $\mathbf{P}_{k+1}^{-} = \mathbf{A}\mathbf{P}_k\mathbf{A}^{T}+\mathbf{E}\mathbf{Q_w}\mathbf{E}^{T}$
- $\mathbf{\hat{x}}_{k+1}^{-} = \mathbf{A}\mathbf{\hat{x}}_k+\mathbf{B}\mathbf{u}_k$.
5. Repeat with $k = k + 1$.
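A minimal MATLAB sketch of the loop above; the model matrices `A`, `B`, `C`, `E`, the noise covariances `Qw` and `Rv`, the initial values `x0` and `P0`, the number of samples `N`, and the data `y` and `u` are all assumed to be given:
```matlab
% Sketch of the discrete Kalman filter, steps 1-5 above.
n = size(A, 1);
x_pri = x0;  P_pri = P0;                   % initial a priori estimate and covariance
for k = 1:N
    L = P_pri*C' / (C*P_pri*C' + Rv);      % 1. Kalman gain
    x_hat = x_pri + L*(y(:,k) - C*x_pri);  % 2. measurement update
    P = (eye(n) - L*C)*P_pri*(eye(n) - L*C)' ...
        + L*Rv*L';                         % 3. covariance update (Joseph form)
    x_pri = A*x_hat + B*u(:,k);            % 4. project ahead
    P_pri = A*P*A' + E*Qw*E';
end
```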
# Power Spectral Density
## Definition
The power spectrum $ S_x(j \omega) $ of a time series $ x(t) $ describes the distribution of power over the frequency components composing that signal. By Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum over a continuous range.
In other words, the power spectral density (PSD) shows the strength of the variations (energy) as a function of frequency: at which frequencies the variations are strong, and at which they are weak. The unit of the PSD is power per unit frequency. The PSD can be computed with the Fast Fourier Transform (FFT), or by computing the autocorrelation function and then transforming it. White noise has equal power at all frequencies, so its PSD is flat (and its total power is infinite).
The power of a signal is defined as
$$
P = \lim_{T\to \infty} \frac{1}{2T} \int\limits_{-T}^T x(t)^2 \dt
$$
If $x$ is the output of a linear system with transfer function $G$ driven by a noise input with spectral density $ S_h(j \omega) $, then
$$
S_x (j \omega) = {|G(j \omega)|}^2 \cdot S_h(j \omega) = G(j \omega) G(-j \omega) \cdot S_h(j \omega)
$$
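A minimal MATLAB sketch of an FFT-based PSD estimate (a raw periodogram); the signal `x` and its sample rate `fs` in Hz are assumed to be given:
```matlab
% Sketch: two-sided periodogram estimate of the PSD of x.
N = length(x);
X = fft(x);
Sxx = abs(X).^2 / (fs*N);   % power per unit frequency
f = (0:N-1) * (fs/N);       % frequency axis in Hz
```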