
TTK4115: Linear System Theory

$$ \newcommand{\dt}{\,\mathrm{d}t} \newcommand{\dx}{\,\mathrm{d}x} \newcommand{\dy}{\,\mathrm{d}y} \newcommand{\dh}{\,\mathrm{d}h} \newcommand{\pt}{\partial t} \newcommand{\px}{\partial x} \newcommand{\py}{\partial y} \newcommand{\QEDA}{\hfill\ensuremath{\blacksquare}} \newcommand{\QEDB}{\hfill\ensuremath{\square}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\bmat}[1]{\begin{bmatrix}#1\end{bmatrix}} \renewcommand{\vec}[1]{\mathbf{#1}} $$

#Realization

##Requirements for realization

- The transfer function $\hat{G}(s)$ needs to be a $\underline{\text{proper}}$ function (the degree of the numerator must be at most as high as the degree of the denominator). This is because no real device can amplify signals more and more at infinite frequencies!
- It also needs to be $\underline{\text{rational}}$.

##Definition

Given a rational proper matrix function $\hat{G}(s)$, a realization is any state-space model $(\vec{A},\vec{B},\vec{C},\vec{D})$ such that $\hat{G}(s)$ is the corresponding transfer matrix, i.e.

$$\hat{G}(s) = \vec{C}(s\vec{I} - \vec{A})^{-1}\vec{B} + \vec{D}$$

A realization is said to be minimal if its state has the least achievable dimension; in particular, this corresponds to the requirement that the state-space model $(\vec{A},\vec{B},\vec{C},\vec{D})$ is both $\underline{\text{controllable}}$ and $\underline{\text{observable}}$. Furthermore, this means that in a minimal realization, the dimension of $\vec{A}$ is equal to the degree of the transfer function (unobservable and uncontrollable states will disappear during the Laplace transformation).

#Jordan Form

## Definition

The Jordan form of a system can be derived by means of a transformation matrix $\vec{T}$ such that

$$ \vec{J} = \vec{T}^{-1} \vec{A} \vec{T}, \qquad \vec{\hat{B}} = \vec{T}^{-1} \vec{B}, \qquad \vec{\hat{C}} = \vec{C} \vec{T} $$

where $\vec{T}$ consists of the (generalized) eigenvectors of $\vec{A}$:

$$ \vec{T}= [ \textbf{v}_1 \quad \textbf{v}_2 \quad ... \quad \textbf{v}_n ] $$

#Discretization

## Exact discretization

If we want to discretize a system on the form

$$ \vec{\dot{x}}(t) = \vec{A}\vec{x}(t) +\vec{B}\vec{u}(t) $$
$$ y(t) =\vec{C}\vec{x}(t) + \vec{D}\vec{u}(t) $$

to the form (given a sample time $T$)

$$ \vec{x}[k +1] = \vec{A}_d\vec{x}[k] +\vec{B}_d\vec{u}[k] $$
$$ y[k] =\vec{C}_d\vec{x}[k] + \vec{D}_d\vec{u}[k] $$

we need to find the matrices $\vec{A}_d, \vec{B}_d, \vec{C}_d, \vec{D}_d$:

1. $\vec{A}_d = e^{\vec{A}T} = Qe^{\hat{\vec{A}}T}Q^{-1} = \mathcal{L}^{-1}((s\vec{I} - \vec{A})^{-1})_{t=T}$ (choose whichever you like; a third option is to use the Cayley-Hamilton theorem for calculating square matrix functions).
2. $\vec{B}_d = \vec{A}^{-1}(\vec{A}_d - \vec{I})\vec{B}$
3. $\vec{C}_d = \vec{C}$
4. $\vec{D}_d = \vec{D}$

Both this and the Euler variant below are illustrated in the MATLAB sketch at the end of this section.

## Euler discretization

Forward Euler approximates the derivative by a forward difference, $\vec{\dot{x}}(t) \approx \frac{\vec{x}[k+1] - \vec{x}[k]}{T}$, which gives

$$ \vec{A}_d = \vec{I} + T\vec{A}, \qquad \vec{B}_d = T\vec{B}, \qquad \vec{C}_d = \vec{C}, \qquad \vec{D}_d = \vec{D} $$

This is only an approximation of the exact discretization, but it avoids computing the matrix exponential.
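A minimal MATLAB sketch of both variants, assuming `A`, `B`, `C`, `D` and a sample time `T` are already defined (the formula for `B_d` assumes `A` is invertible):

```matlab
% Exact (zero-order hold) discretization
A_d = expm(A*T);                      % matrix exponential e^(A*T)
B_d = A \ ((A_d - eye(size(A))) * B); % requires A to be invertible
C_d = C;
D_d = D;

% Forward Euler approximation
A_euler = eye(size(A)) + T*A;
B_euler = T*B;
```

The Control System Toolbox command `c2d(ss(A,B,C,D), T)` gives the same zero-order-hold result without requiring `A` to be invertible.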
#Observability

We check whether we are able to observe our states. This is essential if we want to make an estimator, e.g. a Kalman filter. It is often the case that we can reconstruct states even though we cannot measure them directly, for example through differentiation: you can find the velocity of an object by differentiating the position measurements.

A simple way of checking observability is checking if the observability matrix

$ \mathcal{O}= \begin{bmatrix} \mathbf{C}\\ \mathbf{C}\mathbf{A}\\ \vdots\\ \mathbf{C}\mathbf{A^{n-1}} \end{bmatrix} $

has full rank. Finding the observability matrix in Matlab: `Ob = obsv(A,C);`

#Controllability

We check that our actuation is able to affect all of our states. A simple way of checking this is to check if the controllability matrix

$\mathcal{C}= \begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} & \quad ... \quad & \mathbf{A^{n-1}}\mathbf{B} \end{bmatrix}$

has full rank. Finding the controllability matrix in Matlab: `Co = ctrb(A,B);`
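A small MATLAB sketch of both rank tests, assuming the model matrices `A`, `B`, `C` of an $n$-state system are defined:

```matlab
n  = size(A, 1);        % number of states
Ob = obsv(A, C);        % observability matrix [C; C*A; ...; C*A^(n-1)]
Co = ctrb(A, B);        % controllability matrix [B, A*B, ..., A^(n-1)*B]

isObservable   = (rank(Ob) == n);   % full rank => observable
isControllable = (rank(Co) == n);   % full rank => controllable
```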
#Linearization

If we have a non-linear system $\mathbf{\dot{x}} = \mathbf{h}(\mathbf{x}, \mathbf{u})$, we can linearize the system around an operating point (often chosen to be an equilibrium): $ \mathbf{x} = \mathbf{x_p}, \quad \mathbf{u} = \mathbf{u_p} $. Linearizing amounts to transforming our system to $\mathbf{\dot{\tilde{x}}} = \mathbf{\tilde{A}} \mathbf{\tilde{x}} + \mathbf{\tilde{B}} \mathbf{\tilde{u}}$, where $\mathbf{\tilde{x}} = \mathbf{x} - \mathbf{x_p}$ and $\mathbf{\tilde{u}} = \mathbf{u} - \mathbf{u_p}$ are the deviations from the operating point and

$ \mathbf{\tilde{A}} = \begin{bmatrix} \frac{\partial h_{1}}{\partial x_{1}} & \dots & \frac{\partial h_{1}}{\partial x_{n}}\\ \vdots & \ddots & \vdots \\ \frac{\partial h_{n}}{\partial x_{1}} & \dots & \frac{\partial h_{n}}{\partial x_{n}} \end{bmatrix}\Bigg|_{\mathbf{x} = \mathbf{x_p},\mathbf{u} = \mathbf{u_p}} \qquad \mathbf{\tilde{B}} = \begin{bmatrix} \frac{\partial h_{1}}{\partial u_{1}} & \dots & \frac{\partial h_{1}}{\partial u_{m}}\\ \vdots & \ddots & \vdots \\ \frac{\partial h_{n}}{\partial u_{1}} & \dots & \frac{\partial h_{n}}{\partial u_{m}} \end{bmatrix}\Bigg|_{\mathbf{x} = \mathbf{x_p},\mathbf{u} = \mathbf{u_p}} $

with $m$ being the number of inputs.

#LQR - linear quadratic regulator

## Definition

The linear quadratic regulator finds the control input $\mathbf{u} = \mathbf{P}\mathbf{r}- \mathbf{K} \mathbf{x}$ by minimizing the cost function

$$ J = \int_{0}^{\infty} \left(\mathbf{x^T}(t)\mathbf{Q}\mathbf{x}(t) + \mathbf{u^T}(t)\mathbf{R}\mathbf{u}(t) \right) dt $$

If the upper bound of the integral in the cost function is a finite time, the resulting controller will be time-varying.

## When can we use an LQR?

We can use an LQR when we have a linear controllable system. See the sections [Linearization](#Linearization) and [Controllability](#Controllability).

## How do we use an LQR?

We need to choose the positive definite matrices $\mathbf{Q}$ and $\mathbf{R}$ in order to tune the regulator.

###Bryson's rule

Bryson's rule can be used as a starting point to tune the regulator:

$ Q_{ii} = \frac{1}{\text{maximum accepted value of }x_i^2} \qquad R_{jj} = \frac{1}{\text{maximum accepted value of }u_j^2} $

###Begin the punishment

With or without using Bryson's rule, you should adjust the diagonal elements of $\mathbf{Q}$ and $\mathbf{R}$ based on the response of your system. Increasing the diagonal elements of $\mathbf{Q}$ amounts to punishing the state errors of the corresponding states in $\mathbf{x}$ (increasing $Q_{22}$ punishes errors in state $x_{2}$). Increasing the diagonal elements of $\mathbf{R}$ amounts to punishing the use of the corresponding actuation in $\mathbf{u}$, while decreasing them allows larger control effort. Increasing a diagonal element of $\mathbf{R}$ in many cases corresponds to decreasing a diagonal element of $\mathbf{Q}$. Scaling both matrices by the same constant will not affect the resulting controller, as the constant can be moved outside the integral.

###Finding K

The desired feedback gain K can be found by using the MATLAB command `K = lqr(A,B,Q,R)`.

###Finding optional feedforward gain P

Assuming that the feedback results in a stable equilibrium yields the steady-state condition

$$ \mathbf{0} = \mathbf{\dot{x}} = \mathbf{A}\mathbf{x}_\infty + \mathbf{B}\mathbf{u} = \mathbf{A}\mathbf{x}_\infty + \mathbf{B}(\mathbf{P}\mathbf{r_0} - \mathbf{K}\mathbf{x}_\infty) $$
$$ (\mathbf{A}-\mathbf{B}\mathbf{K})\mathbf{x}_\infty = -\mathbf{B}\mathbf{P}\mathbf{r_0} $$

Inserting $\mathbf{y} = \mathbf{C}\mathbf{x}$ yields

$$ \mathbf{y}_\infty = [\mathbf{C}(\mathbf{B}\mathbf{K} - \mathbf{A})^{-1}\mathbf{B}]\mathbf{P}\mathbf{r_0} $$

We want $\mathbf{y}_\infty = \mathbf{r_0}$, hence we choose

$$ \mathbf{P} = [\mathbf{C}(\mathbf{B}\mathbf{K} - \mathbf{A})^{-1}\mathbf{B}]^{-1} $$

##Integral effect in LQR

Integral effect in an LQR will compensate for process disturbances and will (asymptotically) provide the correct bias if the reference feedforward is omitted.
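A minimal MATLAB sketch that puts the LQR pieces together (Bryson's rule, the feedback gain and the feedforward gain above); `x_max` and `u_max` are hypothetical vectors of maximum accepted deviations, and the feedforward expression assumes as many outputs as inputs:

```matlab
% Bryson's rule as a starting point for the weights
Q = diag(1 ./ x_max.^2);
R = diag(1 ./ u_max.^2);

% LQR feedback gain
K = lqr(A, B, Q, R);

% Optional feedforward gain so that y_inf = r0
P = inv(C * ((B*K - A) \ B));

% Control law: u = P*r - K*x
```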
#State estimation

Example of Simulink state estimation implementation:

![Example of simulink state estimation implementation](https://s3-eu-west-1.amazonaws.com/wikipendium-public/14799125480xa0f1.jpg)

Example of state estimation pole placement:

![Example of state estimation pole placement](https://s3-eu-west-1.amazonaws.com/wikipendium-public/14799126590xd8bbc.jpg)

#Algorithm for discrete Kalman filter

Given: a priori estimate error covariance, initial a priori state estimate, process noise covariance and measurement noise covariance.

1. Compute the Kalman gain using the estimate error covariance and the measurement noise:
    - $\mathbf{L}_k = \mathbf{P}_k^{-}\mathbf{C}^T(\mathbf{C}\mathbf{P}_k^{-}\mathbf{C}^T+\mathbf{{\bar{R}_v}})^{-1}$
2. Update the estimate with the measurement:
    - $\mathbf{\hat{x}}_k =\mathbf{\hat{x}}_k ^{-}+\mathbf{L}_k(\mathbf{y}_k-\mathbf{C}\mathbf{\hat{x}}_k ^{-})$
3. Compute the error covariance for the updated estimate:
    - $\mathbf{P}_k = (\mathbb{I}-\mathbf{L}_k\mathbf{C})\mathbf{P}_k^{-}(\mathbb{I}-\mathbf{L}_k\mathbf{C})^{T}+\mathbf{L}_k\mathbf{\bar{R}_v}\mathbf{L}_k^{T}$
4. Project ahead using the updated quantities:
    - $\mathbf{P}_{k+1}^{-} = \mathbf{A}\mathbf{P}_k\mathbf{A}^{T}+\mathbf{E}\mathbf{Q_w}\mathbf{E}^{T}$
    - $\mathbf{\hat{x}}_{k+1}^{-} = \mathbf{A}\mathbf{\hat{x}}_k+\mathbf{B}\mathbf{u}_k$
5. Repeat with $k = k +1$.

One iteration of these steps is sketched in MATLAB below.
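A minimal MATLAB sketch of one iteration, assuming the model matrices `A`, `B`, `C`, `E`, the noise covariances `Qw` (process) and `Rv` (measurement), the current measurement `y`, the input `u`, and the a priori quantities `x_prior`, `P_prior` are all defined (the variable names are illustrative):

```matlab
I = eye(size(A));

% 1. Kalman gain
L = P_prior * C' / (C * P_prior * C' + Rv);

% 2. Measurement update of the state estimate
x_hat = x_prior + L * (y - C * x_prior);

% 3. Covariance of the updated estimate (Joseph form)
P = (I - L*C) * P_prior * (I - L*C)' + L * Rv * L';

% 4. Project ahead to the next sample
x_prior = A * x_hat + B * u;
P_prior = A * P * A' + E * Qw * E';
```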
# Power Spectral Density

## Definition

The power spectrum $ S_x(j \omega) $ of a time series $ x(t) $ describes the distribution of power over the frequency components composing that signal. By Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum over a continuous range. In other words, the power spectral density function shows the strength of the variations (energy) as a function of frequency: it shows at which frequencies the variations are strong and at which they are weak. The unit of the power spectral density is energy per frequency. The PSD can be computed by the Fast Fourier Transform (FFT), or by computing the autocorrelation function and then transforming it. White noise is a stochastic process with a flat PSD, and therefore (ideally) infinite power.

The power of a signal is defined as:

$$ P = \lim_{T\to \infty} \frac{1}{2T} \int\limits_{-T}^T x(t)^2 dt $$

$$ S_x (j \omega) = {|G(j \omega)|}^2 \cdot S_h(j \omega) = G(j \omega) G(-j \omega) \cdot S_h(j \omega) $$

where $ S_h(j \omega) $ is the spectral density of the input noise and $G$ is the transfer function the noise is passed through.

#Stability

##BIBO stability

A system is Bounded Input Bounded Output (BIBO) stable if every bounded input results in a bounded output, given $\mathbf{x_0}=\mathbf{0}$ (zero-state response). Asymptotic stability implies BIBO stability (but not necessarily the other way around).

- A $\underline{\text{continuous}}$ system is BIBO stable if every pole of the transfer function has a negative real part. The poles must be in the $\underline{\text{strict left half}}$ of the s-plane.
- If the system is $\underline{\text{discrete}}$, the poles have to be $\underline{\text{inside the unit circle}}$ in the z-plane for BIBO stability.
- Given an impulse response of a system, the system is BIBO stable if, and only if, the impulse response $g(t)$ is absolutely integrable on the domain $[0,\infty)$. That means there has to exist an $M > 0$ such that:
$$ \int_0^\infty |g(t)| \dt \leq M < \infty $$

##Lyapunov stability

A system is Lyapunov stable if every finite initial state results in a bounded response (the response does not necessarily go to zero; that is marginal stability). This is therefore the stability of the zero-input response, with no forcing input ($\mathbf{u}=\mathbf{0}$).

If, for any given positive definite symmetric matrix $\vec{N}$, the Lyapunov equation has a unique symmetric $\underline{\text{positive definite}}$ solution $\vec{M}$ (Sylvester criterion), the system is asymptotically stable. Lyapunov equation:

$$ {\vec{A}}^{T}\vec{M} + \vec{M}\vec{A} = -\vec{N}$$

####Checking if a matrix is positive definite

- Check if all eigenvalues are positive, or
- Check if all leading principal minors are positive:
    - For a 2x2 matrix $M$, this refers to $M_{11}$ and $det(M)$.
    - For a 3x3 matrix $M$, this refers to $M_{11}$, the determinant of the upper left 2x2 submatrix, and $det(M)$.
    - Etc.

A MATLAB sketch of this test is given below.
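A minimal MATLAB sketch of the Lyapunov test, assuming `A` is defined; `lyap(A', N)` solves $\vec{A}^{T}\vec{M} + \vec{M}\vec{A} = -\vec{N}$, and a successful Cholesky factorization confirms that `M` is positive definite:

```matlab
n = size(A, 1);
N = eye(n);                  % any symmetric positive definite N will do
M = lyap(A', N);             % solves A'*M + M*A = -N

[~, p] = chol((M + M')/2);   % p == 0  <=>  M is positive definite
isAsymptoticallyStable = (p == 0);

% Equivalent quick check: all(real(eig(A)) < 0)
```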
##Marginal stability

Lyapunov stability where $\mathbf{x}$ doesn't go to 0 as time goes to $\infty$ (it is only bounded). All eigenvalues of A have zero or negative real parts, and eigenvalues with zero real parts must be simple roots (multiplicity one) of the minimal polynomial of A.

##Asymptotic stability

Lyapunov stability where $\mathbf{x}$ goes to 0 as time goes to $\infty$. All eigenvalues of A have negative real parts.

#Process Classification

## Deterministic Process

Knowledge about the signal and its statistical properties for $t \leq t_0$ makes it possible to identify the parameters such that we know the rest of the process for $ t \geq t_0 $.

## Wide-Sense Stationary Process

A process is stationary if all its density functions are time-invariant. In the case of wide-sense stationarity (WSS), this only applies to the mean, variance and autocorrelation: the mean and the variance have to be constant, and the autocorrelation function has to depend only on the time difference $ t_2 - t_1 $.

## Ergodicity

If the process is ergodic, time averages over a single realization equal the corresponding ensemble averages, so for example the mean and autocorrelation can be estimated from one sufficiently long sample of the process.

#Useful tips (Timesaving etc..)

- For any block-triangular matrix
$$ \mathbf{G} = \begin{bmatrix} G_{11} & G_{12}\\ 0 & G_{22}\\ \end{bmatrix} $$
or
$$ \mathbf{G} = \begin{bmatrix} G_{11} & 0 & 0\\ G_{21} & G_{22}& 0\\ G_{31} & G_{32}& G_{33}\\ \end{bmatrix} $$
it holds that $det(G) = det(G_{11})det(G_{22})...det(G_{nn})$, i.e. the product of the determinants of its diagonal blocks. Consequently, the eigenvalues of $G$; $\lambda_1, \lambda_2...\lambda_n$, are the union of the eigenvalues of $G_{11}, G_{22} ... G_{nn}$.

# Exam

Statistics from 2003

# Contributors

Mainly written by Kristian Fjelde Pedersen, Anders Vatland, Christian Peter Bech Aschehoug and Rune Nordmo