TTK4115: Linear System Theory

$$ \newcommand{\dt}{\,\mathrm{d}t} \newcommand{\dx}{\,\mathrm{d}x} \newcommand{\dy}{\,\mathrm{d}y} \newcommand{\dh}{\,\mathrm{d}h} \newcommand{\pt}{\partial t} \newcommand{\px}{\partial x} \newcommand{\py}{\partial y} \newcommand{\QEDA}{\hfill\ensuremath{\blacksquare}} \newcommand{\QEDB}{\hfill\ensuremath{\square}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\bmat}[1]{\begin{bmatrix}#1\end{bmatrix}} \renewcommand{\vec}[1]{\mathbf{#1}} $$

# Realization

## Definition

Given a rational and proper matrix function $\hat{G}(s)$, a realization is any state-space model $(\vec{A};\vec{B};\vec{C};\vec{D})$ such that $\hat{G}(s)$ is the corresponding transfer matrix, i.e.
$$\hat{G}(s) = \vec{C}(s\vec{I} - \vec{A})^{-1}\vec{B} + \vec{D}$$
A realization is said to be minimal if its state has the least achievable dimension; in particular, this corresponds to the requirement that the state-space model $(\vec{A};\vec{B};\vec{C};\vec{D})$ is both controllable and observable.

# Jordan Form

## Definition

The Jordan form of a system can be derived by means of a transformation matrix $\vec{T}$ such that
$$ \vec{J} = \vec{T}^{-1} \vec{A} \vec{T}, \qquad \vec{\hat{B}} = \vec{T}^{-1} \vec{B}, \qquad \vec{\hat{C}} = \vec{C} \vec{T} $$
where $\vec{T}$ consists of the (generalized) eigenvectors of $\vec{A}$:
$$ \vec{T} = \bmat{ \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_n } $$
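As a rough numerical sketch of the two definitions above (assuming NumPy and SciPy are available; the transfer function $\hat{G}(s) = \frac{s+3}{s^2+3s+2}$ is an arbitrary illustrative choice, not taken from the course material), the snippet below builds a state-space realization, checks minimality through the ranks of the controllability and observability matrices, and recovers the Jordan form from the eigenvector matrix $\vec{T}$ in the case of distinct eigenvalues:

```python
import numpy as np
from scipy import signal

# State-space realization of G(s) = (s + 3) / (s^2 + 3s + 2)  (illustrative choice)
A, B, C, D = signal.tf2ss([1.0, 3.0], [1.0, 3.0, 2.0])

# Minimality check: a realization is minimal iff it is both controllable
# and observable, i.e. the controllability and observability matrices
# have full rank n.
n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("controllable:", np.linalg.matrix_rank(ctrb) == n)
print("observable:  ", np.linalg.matrix_rank(obsv) == n)

# Jordan form via the eigenvector matrix T.  The eigenvalues of this A
# (-1 and -2) are distinct, so T contains ordinary eigenvectors and J is
# diagonal; repeated eigenvalues would require generalized eigenvectors.
eigvals, T = np.linalg.eig(A)
J = np.linalg.inv(T) @ A @ T
B_hat = np.linalg.inv(T) @ B
C_hat = C @ T
print("J =\n", np.round(J, 6))
```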
# LQR - linear quadratic regulator

## When can we use an LQR?

We can use an LQR when we have a linear, controllable system. If we have a non-linear system, we can linearize it around an operating point (often chosen to be an equilibrium), $\vec{x} = \vec{x}_p$, $\vec{u} = \vec{u}_p$. Linearizing amounts to transforming our system from $\dot{\vec{x}} = \vec{f}(\vec{x}, \vec{u})$ to $\dot{\tilde{\vec{x}}} = \vec{A} \tilde{\vec{x}} + \vec{B} \tilde{\vec{u}}$, where $\tilde{\vec{x}} = \vec{x} - \vec{x}_p$ and $\tilde{\vec{u}} = \vec{u} - \vec{u}_p$.
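A minimal sketch of how an LQR gain is typically computed for such a linear (or linearized) system, using SciPy's continuous-time Riccati solver; the double-integrator model and the weighting matrices $\vec{Q}$, $\vec{R}$ below are illustrative assumptions, not values from this compendium:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator model x_dot = A x + B u (not from the text)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Standard LQR weights on state and input (illustrative values)
Q = np.diag([1.0, 1.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the
# state-feedback gain K, giving the control law u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("K =", K)
# The closed-loop matrix A - B K should have eigenvalues in the left half-plane.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```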
# Power Spectral Density

## Definition

The power spectrum $S_x(j\omega)$ of a time series $x(t)$ describes the distribution of power over the frequency components composing that signal. By Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum over a continuous range. In other words, the power spectral density function shows the strength of the variations (power) as a function of frequency: it shows at which frequencies the variations are strong and at which they are weak. The unit of the power spectral density is power per unit frequency. The PSD can be computed with the Fast Fourier Transform (FFT), or by computing the autocorrelation function and then transforming it. White noise is a stochastic process with a flat PSD, which corresponds to infinite total power.

The power of a signal is defined as
$$ P = \lim_{T\to \infty} \frac{1}{2T} \int\limits_{-T}^T x(t)^2 \dt $$

When a signal with PSD $S_h(j\omega)$ is passed through a linear system with transfer function $G(s)$, the PSD of the output is
$$ S_x (j \omega) = {|G(j \omega)|}^2 \cdot S_h(j \omega) = G(j \omega) G(-j \omega) \cdot S_h(j \omega) $$
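A small simulation sketch of the last relation (assuming NumPy/SciPy; the first-order filter $G(s) = \frac{1}{s+1}$, the sampling rate and the unit noise variance are arbitrary illustrative choices): white noise with a flat PSD is passed through the filter, and the Welch estimate of the output PSD is compared with $|G(j\omega)|^2 S_h(j\omega)$:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0                      # sampling frequency [Hz]
N = 200_000                     # number of noise samples
h = rng.standard_normal(N)      # discrete white noise with unit variance

# Illustrative first-order low-pass filter G(s) = 1 / (s + 1),
# discretized with the bilinear transform so it can be applied to the samples.
num_d, den_d, _ = signal.cont2discrete(([1.0], [1.0, 1.0]), 1.0 / fs, method="bilinear")
b, a = np.ravel(num_d), np.ravel(den_d)
x = signal.lfilter(b, a, h)

# Welch estimate of the output PSD (one-sided), compared with the
# theoretical S_x = |G(jw)|^2 * S_h, where the one-sided PSD of the
# unit-variance white noise is S_h = 2 / fs.
f, Pxx = signal.welch(x, fs=fs, nperseg=4096)
_, G_jw = signal.freqs([1.0], [1.0, 1.0], worN=2 * np.pi * f)
S_x_theory = np.abs(G_jw) ** 2 * (2.0 / fs)

band = (f > 0.1) & (f < 5.0)    # well below the Nyquist frequency
print("mean ratio estimate/theory:", np.mean(Pxx[band] / S_x_theory[band]))
```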
# Exam Statistics from 2003