TEP4280: Introduction to Computational Fluid Dynamics

# Model Equations

## Burgers' Equation

$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2 u}{\partial x^2} $$

Burgers' equation is parabolic. The inviscid version of the equation is

$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0$$

The linear version of Burgers' equation is often called the convection–diffusion equation

$$\frac{\partial u}{\partial t} + u_0 \frac{\partial u}{\partial x} = \alpha \frac{\partial ^2 u}{\partial x^2} $$

### Discretization Schemes

### Application

## Diffusion Equation

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial ^2 u}{\partial x^2} $$

The diffusion equation is parabolic.

### Discretization Schemes

### Application

#### Heat Conduction

$$\frac{\partial T}{\partial t} = \alpha \frac{\partial ^2 T}{\partial x^2} $$

where $T$ is the temperature and $\alpha$ is the thermal diffusivity. In 2D the equation becomes

$$\frac{\partial T}{\partial t} = \alpha \left(\frac{\partial ^2 T}{\partial x^2} + \frac{\partial ^2 T}{\partial y^2}\right)$$

#### Flow in porous media

$$\frac{\partial u}{\partial t} = c \frac{\partial ^2 u}{\partial x^2} $$

## Poisson Equation

$$\frac{\partial ^2 u}{\partial x^2} + \frac{\partial ^2 u}{\partial y^2} = f(x, y)$$

Setting $f(x, y) = 0$ gives the Laplace equation. The equation is elliptic. The equation can be used to express 2D steady heat conduction:

$$\frac{\partial }{\partial x}\left(k\frac{\partial T}{\partial x}\right) + \frac{\partial }{\partial y}\left(k\frac{\partial T}{\partial y}\right) = 0$$

### Discretization Schemes

#### Central Difference

##### 2D Steady Heat Equation

The 2D steady heat conduction equation can be discretized by finite differences as

$$k\frac{T_{i+1, j} - 2T_{i, j} + T_{i-1, j}}{(\Delta x)^2} + k\frac{T_{i, j+1} - 2T_{i, j} + T_{i, j-1}}{(\Delta y)^2} = 0$$

$$\left(\frac{2k}{(\Delta x)^2} + \frac{2k}{(\Delta y)^2}\right) T_{i, j} = \frac{k}{(\Delta x)^2}T_{i-1, j} + \frac{k}{(\Delta x)^2}T_{i+1, j} + \frac{k}{(\Delta y)^2}T_{i, j-1} + \frac{k}{(\Delta y)^2}T_{i, j+1} $$

which can be written in a more compact notation

$$a_P T_{i, j} = a_W T_{i-1, j} + a_E T_{i+1, j} + a_S T_{i, j-1} + a_N T_{i, j+1}$$

where $a_W = a_E = \frac{k}{(\Delta x)^2}$, $a_S = a_N = \frac{k}{(\Delta y)^2}$, and $a_P = a_W + a_E + a_S + a_N$.

At boundaries with a __Dirichlet boundary condition__ (i.e. boundaries with a prescribed value that does not change), the cell which is "in" the boundary is not included in the computational domain. Instead, the neighbouring cell is solved with a special equation. If the boundary is immediately below cell $(x_i, y_1)$, the value $T_{i, 1}$ in that cell is calculated by

$$ a_P T_{i, 1} = a_W T_{i -1, 1} + a_E T_{i+1, 1} + a_N T_{i, 2} + \tilde{S}_u$$

where $\tilde{S}_u$ represents the boundary condition $T_B(x_i)$ through the relationship $\tilde{S}_u = \frac{k}{(\Delta y)^2} T_B (x_i)$. If the boundary lies next to another cell, the same process applies: the contribution of the missing neighbouring cell is substituted by $\tilde{S}_u = \frac{k}{(\Delta x)^2} T_B (y_j)$ for a boundary immediately to the left or right, or $\tilde{S}_u = \frac{k}{(\Delta y)^2} T_B (x_i)$ for a boundary immediately above or below.

For a __Neumann boundary condition__, a similar approach is taken. The core equation for $T_{i, j}$ is the same, but this time $a_S$ is exchanged with $\tilde a_S = a_S + a_N$, and $\tilde S_u = \frac{2k}{\Delta y} g(x_i)$, where $g(x_i)$ is the prescribed gradient at the boundary.
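The compact $a_P, a_W, a_E, a_S, a_N$ form lends itself to simple point iteration. Below is a minimal sketch (not from the course material) that solves the 2D steady heat equation by Jacobi iteration, assuming a uniform grid, constant $k$, and Dirichlet values stored in the outermost frame of the array; the function name `solve_steady_heat_2d` is made up for illustration.

```python
import numpy as np

def solve_steady_heat_2d(T_boundary, dx, dy, k=1.0, n_iter=5000):
    """Jacobi iteration for a_P*T_P = a_W*T_W + a_E*T_E + a_S*T_S + a_N*T_N.

    T_boundary: 2D array whose outermost frame holds the Dirichlet values;
    the interior values are used as the initial guess.
    """
    a_we = k / dx**2            # a_W = a_E
    a_sn = k / dy**2            # a_S = a_N
    a_p = 2 * a_we + 2 * a_sn   # a_P = a_W + a_E + a_S + a_N

    T = T_boundary.astype(float).copy()
    for _ in range(n_iter):
        T_new = T.copy()
        # Update interior points only; the boundary frame stays fixed.
        T_new[1:-1, 1:-1] = (a_we * (T[1:-1, :-2] + T[1:-1, 2:]) +
                             a_sn * (T[:-2, 1:-1] + T[2:, 1:-1])) / a_p
        T = T_new
    return T

# Example: unit square, one wall held at 100, the other walls at 0.
N = 21
T0 = np.zeros((N, N))
T0[0, :] = 100.0
T = solve_steady_heat_2d(T0, dx=1.0 / (N - 1), dy=1.0 / (N - 1))
```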
## Wave Equation

$$\frac{\partial ^2 u}{\partial t^2} = \alpha_0 ^2 \frac{\partial ^2 u}{\partial x^2}$$

The Wave Equation is hyperbolic.

## Linear Advection Equation

$$\frac{\partial u}{\partial t} + \alpha_0 \frac{\partial u}{\partial x} = 0$$

The Linear Advection Equation is hyperbolic. The exact solution is

$$ u(x, t) = u_0(x - \alpha_0 t)$$

where $u_0(x) = u(x, 0)$ is the initial profile.

### Discretization Schemes

#### FTCS

The FTCS scheme for the advection equation is _unconditionally unstable_.

$$u^{n+1}_j = u^n_j - \frac{\alpha_0 \Delta t}{2\Delta x}(u^n_{j+1} - u^n_{j-1}) $$

#### Explicit Upwind Scheme

$$u^{n+1}_j = u^n_j - \frac{\alpha_0 \Delta t}{\Delta x} (u^n_j - u^n_{j-1}) $$

The scheme is stable in the interval

$$0 \space \leq \space \frac{\alpha_0 \Delta t}{\Delta x} \space \leq \space 1$$

$$ TE = \mathcal{O}(\Delta t, \Delta x)$$
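A minimal sketch of the explicit upwind update above, assuming $\alpha_0 > 0$ and periodic boundary conditions; the function name `upwind_step` is made up for illustration.

```python
import numpy as np

def upwind_step(u, c):
    """One explicit upwind step: u^{n+1}_j = u^n_j - c (u^n_j - u^n_{j-1}).

    c = alpha_0 * dt / dx is the Courant number; periodic boundaries are
    handled with np.roll. Stable for 0 <= c <= 1 (with alpha_0 > 0).
    """
    return u - c * (u - np.roll(u, 1))

# Example: advect a Gaussian pulse once around a periodic domain of length 1.
nx, c = 200, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)
for _ in range(int(nx / c)):   # roughly one full period
    u = upwind_step(u, c)
```

Because the scheme is only first-order accurate, the pulse is noticeably smeared by numerical diffusion after one period.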
#### Implicit Upwind Scheme

$$u^{n+1}_j = \frac{u^n_j + \frac{\alpha_0 \Delta t}{\Delta x} u^{n+1}_{j-1}}{1 + \frac{\alpha_0 \Delta t}{\Delta x}} $$

# Boundary Conditions

# Numerical Methods

## Euler

$$ y_{n+1} = y_n + h \cdot f(t_n, y_n)$$

### Errors

$$ e_{n+1} = \mathcal{O} (h^2)$$

$$ E_n = \mathcal{O} (h)$$

### Stability Region

$$ \{z \in \mathbb{C} \, : \, |1 + z| \leq 1\}$$

## Implicit Euler

$$ y_{n+1} = y_n + h \cdot f(t_{n+1}, y_{n+1})$$

### Errors

$$ E_n = \mathcal{O} (h)$$

### Stability Region

$$ \{z \in \mathbb{C} \, : \, \left|\frac{1}{1 - z}\right| \leq 1\}$$

## Trapezoidal

$$y_{n+1} = y_n + h\cdot \frac{1}{2}\cdot [f(t_n, y_n) + f(t_{n+1}, y_{n+1})] $$

### Errors

$$ e_{n+1} = \mathcal{O} (h^3)$$

$$ E_n = \mathcal{O} (h^2)$$

### Stability Region

$$ \{z \in \mathbb{C} \, : \, \mathrm{Re}(z) \leq 0\}$$

## Heun

$$y_{n+1} = y_n + h\cdot \frac{1}{2}\cdot [f(t_n, y_n) + f(t_{n+1}, y^*_{n+1})] $$

where $y^*_{n+1} = y_n + h\cdot f(t_n, y_n) $

### Errors

$$ e_{n+1} = \mathcal{O} (h^3)$$

$$ E_n = \mathcal{O} (h^2)$$

### Stability Region

$$ \{z \in \mathbb{C} \, : \, |1 + z + \frac{z^2}{2}| \leq 1\}$$

## Runge-Kutta

$$ k_1 = f(t_n, y_n)$$

$$ k_2 = f(t_n + \frac{h}{2}, y_n + \frac{h}{2}\cdot k_1)$$

$$ k_3 = f(t_n + \frac{h}{2}, y_n + \frac{h}{2}\cdot k_2)$$

$$ k_4 = f(t_n + h, y_n + h\cdot k_3)$$

$$ y_{n+1} = y_n + \frac{h}{6} \cdot (k_1 + 2k_2 + 2k_3 + k_4)$$

### Errors

$$ e_{n+1} = \mathcal{O} (h^5)$$

$$ E_n = \mathcal{O} (h^4)$$

### Stability Region

$$ \{z \in \mathbb{C} \, : \, |1 + z + \frac{z^2}{2}+ \frac{z^3}{6}+ \frac{z^4}{24}| \leq 1\}$$
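A minimal sketch of one classical fourth-order Runge-Kutta step as written above; the function name `rk4_step` and the test problem are made up for illustration.

```python
def rk4_step(f, t_n, y_n, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t_n, y_n)
    k2 = f(t_n + h / 2, y_n + h / 2 * k1)
    k3 = f(t_n + h / 2, y_n + h / 2 * k2)
    k4 = f(t_n + h, y_n + h * k3)
    return y_n + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: dy/dt = -y with y(0) = 1, integrated to t = 1 (exact value exp(-1)).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```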
## Solution of Linear Systems of Equations

Most systems will be of the form $\mathbf{Ax = b}$, where $\mathbf{A}$ is an $n \times n$ matrix, and $\mathbf{x}$ and $\mathbf{b}$ are vectors of length $n$. $\mathbf{A}$ and $\mathbf{b}$ are given, while $\mathbf{x}$ is to be determined.

### Direct Methods

Direct methods provide the exact solution of the given system. Problems where $\mathbf{A}$ is a __tridiagonal matrix__ are well suited for direct methods. A tridiagonal matrix only has non-zero entries in its three central diagonals, resulting in the following system

$$\begin{bmatrix} d_1 & a_1 & & & & & \\ b_2 & d_2 & a_2 & & & & \\ & \ddots & \ddots & \ddots & & & \\ & & b_j & d_j & a_j & & \\ & & &\ddots & \ddots & \ddots & \\ & & & & b_{NJ-1} & d_{NJ-1} & a_{NJ-1} \\ & & & & & b_{NJ} & d_{NJ} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_j \\ \vdots \\ u_{NJ-1} \\ u_{NJ} \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_j \\ \vdots \\ c_{NJ-1} \\ c_{NJ} \end{bmatrix}$$

The implicit diffusion equation, the implicit linear advection equation and similar schemes are appropriate to solve with direct methods.

$\mathbf{A}$ is diagonally dominant if, in every row, the element on the diagonal is larger than or equal to the sum of the absolute values of the _other_ entries in the same row,

$$|a_{ii}| \geq \sum_{j=1, \, j \neq i}^{NJ} |a_{ij}|$$

and for at least one row it is _strictly_ larger than the sum of the other entries in the same row,

$$|a_{ii}| > \sum_{j=1, \, j \neq i}^{NJ} |a_{ij}|$$

For tridiagonal matrices, which only have the elements $a_{j}$, $b_{j}$ and $d_{j}$, this simplifies to

$$ |d_{j}| \geq |a_j| + |b_j|$$

and for at least one $j$

$$ |d_j| > |a_j| + |b_j|$$

#### TDMA

If the tridiagonal matrix $\mathbf{A}$ is _diagonally dominant_, the system can be solved with __TDMA__, which is a simplified form of Gaussian elimination. TDMA can solve the system in $\mathcal{O}(n)$ operations instead of the $\mathcal{O}(n^3)$ required by Gaussian elimination. The process is split into 4 distinct steps.

##### Step 1: LU Decomposition

We assume that the matrix $\mathbf{A}$ is the product of two sparse $n \times n$ matrices $\mathbf{L}$ and $\mathbf{U}$,

$$ \mathbf{A} = \mathbf{LU} $$

$$ \mathbf{L} = \begin{bmatrix} 1 & & & & & & \\ \beta_2 & 1 & & & & & \\ & \beta_3 & 1 & & & & \\ & & \ddots & \ddots & & & \\ & & &\ddots & \ddots & & \\ & & & & \beta_{NJ-1} & 1 & \\ & & & & & \beta_{NJ} & 1 \end{bmatrix} $$

$$ \mathbf{U} = \begin{bmatrix} \delta_1 & a_1 & & & & & \\ & \delta_2 & a_2 & & & & \\ & & \ddots & \ddots & & & \\ & & & \delta_j & a_j & & \\ & & & & \ddots & \ddots & \\ & & & & & \delta_{NJ-1} & a_{NJ-1} \\ & & & & & & \delta_{NJ} \end{bmatrix} $$

where $\delta_1 = d_1$, $\beta_j = \frac{b_j}{\delta_{j-1}}$, and $\delta_j = d_j - \beta_j a_{j-1} $. The original system of equations $\mathbf{Ax = b}$ is then equal to $\mathbf{LUx = b}$.

##### Step 2: Forward substitution

Solve $\mathbf{Ly = c}$ for the intermediate vector $\mathbf{y}$: $y_1 = c_1$ and $y_j = c_j - \beta_j y_{j-1}$.

##### Step 3: Backward substitution

Solve $\mathbf{Uu = y}$: $u_{NJ} = \frac{y_{NJ}}{\delta_{NJ}}$ and $u_j = \frac{y_j - a_j u_{j+1}}{\delta_j}$.

##### Step 4: Final solution

The vector $\mathbf{u}$ obtained from the backward substitution is the solution of the original tridiagonal system.
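Collecting the steps above, a minimal sketch of the TDMA (Thomas) algorithm is given below. The function name `tdma` is made up for illustration, and the arrays follow the $b_j, d_j, a_j, c_j$ notation of the tridiagonal system above (0-based indexing in code).

```python
import numpy as np

def tdma(b, d, a, c):
    """Solve a diagonally dominant tridiagonal system with TDMA.

    b: sub-diagonal (b[0] unused), d: main diagonal,
    a: super-diagonal (a[-1] unused), c: right-hand side.
    """
    n = len(d)
    delta = d.astype(float).copy()
    y = c.astype(float).copy()
    # Steps 1 and 2: LU decomposition and forward substitution.
    for j in range(1, n):
        beta = b[j] / delta[j - 1]
        delta[j] = d[j] - beta * a[j - 1]
        y[j] = c[j] - beta * y[j - 1]
    # Step 3: backward substitution.
    u = np.empty(n)
    u[-1] = y[-1] / delta[-1]
    for j in range(n - 2, -1, -1):
        u[j] = (y[j] - a[j] * u[j + 1]) / delta[j]
    return u

# Example: the classic [-1, 2, -1] stencil with a constant right-hand side.
n = 5
b = np.full(n, -1.0); d = np.full(n, 2.0); a = np.full(n, -1.0)
c = np.full(n, 1.0)
u = tdma(b, d, a, c)
```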
#### Cholesky Factorization

If $\mathbf{A}$ is symmetric and positive definite, then a lower triangular matrix $\mathbf{L}$ exists such that

$$\mathbf{A = LL^T}$$

This factorization requires only $\approx \frac{n^3}{3}$ operations, which is half the work of LU decomposition, and half the storage.

### Iterative Methods

Iterative methods require an initial guess to start their procedure.

# Discretization methods

## FDM

### Definitions for FDM

#### General

Finite Difference Methods approximate derivatives at in-between points and at grid points as

$$ \frac{d u(x_{j +\frac{1}{2}})}{dx} \approx \frac{u_{j +1} - u_j}{\Delta x}$$

$$ \frac{d^2 u(x_j)}{dx^2} \approx \frac{u_{j +1} -2 u_j + u_{j-1}}{\Delta x^2}$$

#### Consistency

An FDM for a PDE is called __consistent__ if it approximates the PDE such that the truncation error TE = FDM - PDE goes to zero as $\Delta x$ and $\Delta t$ go to zero.

#### Stability

An FDM for a PDE is called __stable__ if the 2-norm of the FDM solution $\mathbf{u}^n$ stays bounded for any time level $n$. In other words: for an initial value problem with initial condition $u(x, 0) = u^0(x)$, $\mathbf{u}^n$ satisfies

$$\parallel\mathbf{u}^n \parallel _2 \space\leq \space Ke^{\alpha t_n}\parallel\mathbf{u}^0 \parallel _2$$

where $K$ and $\alpha$ are constants independent of $\mathbf{u}^0$, $\Delta x$, and $\Delta t$. Usually $\alpha = 0$.

Stability for a discretization method is derived through __von Neumann stability analysis__, also known as Fourier analysis. This analysis requires the following:

1. The FDM is linear.
2. The FDM has constant coefficients.
3. The problem has __periodic__ boundary conditions.
4. The grid is __equidistant__.

#### Convergence and Lax Equivalence Theorem

If a consistent FDM is stable, it will also be convergent. The order of convergence will be equal to the order of accuracy.

### FTCS

$$ \frac{u_j^{n+1} - u_j ^n}{\Delta t} = \alpha \frac{u_{j +1}^n -2 u_j^n + u_{j-1}^n}{\Delta x^2}$$

#### Accuracy

$$ TE = \mathcal{O}(\Delta t) + \mathcal{O}(\Delta x^2)$$

The order of convergence is equal to the order of accuracy:

$$\parallel\mathbf{u}^n - u(\mathbf{x}, t_n) \parallel _2 = \mathcal{O}(\Delta t) + \mathcal{O}(\Delta x^2)$$

#### Stability

The FTCS scheme is stable when

$$ 0 \leq \frac{\alpha \Delta t}{\Delta x^2} \leq \frac{1}{2} $$

### BTCS - simple implicit

$$ \frac{u_j^{n+1} - u_j ^n}{\Delta t} = \alpha \frac{u_{j +1}^{n+1} -2 u_j^{n+1} + u_{j-1}^{n+1}}{\Delta x^2}$$

#### Accuracy

$$ TE = \mathcal{O}(\Delta t) + \mathcal{O}(\Delta x^2)$$

#### Stability

BTCS is __unconditionally stable__:

$$ |g(k \Delta x)| = \frac{1}{1 + 4\frac{\alpha \Delta t}{\Delta x^2} \sin^2(\frac{k \Delta x}{2})} \leq 1$$

BTCS is convergent for all

$$\frac{\alpha \Delta t}{\Delta x^2} \geq 0 $$

### Explicit Upwind scheme

## FVM

## FEM

## Spectral method

## Spectral element methods

# Terminology