TMA4185: Coding Theory

$$ \newcommand{\Gr}{\mathscr{G}} \newcommand{\Ri}{\mathscr{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\F}{\mathbb{F}} \newcommand{\C}{\mathcal{C}} \newcommand{\div}{\,\vert\,} \newcommand{\ndiv}{\,\not\vert\,} \newcommand{\neq}{\not=} \newcommand{\b}{\textbf} \newcommand{\bmat}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\barr}[2]{\left[\begin{array}{#1}#2\end{array}\right]} $$

# Motivation

TODO

# Basics

## Codes
We wish to send some message $m$ from a source, over a channel, and receive it at some recipient. To do this we encode it, creating a new message $c$ (we often use $x$ instead). This message is hopefully of such a character that it allows us to successfully decode $m$ at the recipient.
Before we go about constructing an encoding scheme, we make a few definitions:

>**Definition.** A **code** $\C$ is a set of **codewords**, each a tuple of some length $n$: $(a_1, a_2, \ldots, a_n)$. A code $\C$ is said to be an $(n, M)$ code if each codeword is of length $n$ and the number of codewords in $\C$ is $M$.

Put in other words, we say that a code describes a set of possible encodings of inputs (messages). Typically, the codewords are $n$-tuples over some field $\F_q$. If the field is $\F_2$ we call the code a **binary code**; if the field is $\F_3$ we call the code a **ternary code**. We often write a codeword $(a_1, a_2, a_3, \dots)$ in concatenated form $a_1a_2a_3\dots$.

In the most general terms, this is all the structure required to define codes. However, we almost exclusively work with *linear* codes:

>**Definition.** A **linear code** is a code in which all linear combinations of codewords result in a new codeword, i.e. for all $\b x, \b y \in \C$:
> (i) $a\b x \in \C$ for all $a \in \F$,
> (ii) $\b x + \b y \in \C$.

**Example.** Let $\C = \{000, 110, 011, 101\}$ be a code over $\F_2$. It is a linear code, as is easily verified.

## Generator matrices

We typically convert a message $x$ into a codeword $c$ by means of a **generator matrix**. Such a matrix also imposes a (so far undefined, but definitely necessary) length requirement on the messages:

>**Definition.** A **generator matrix** $G$ of a code $\C$ is any matrix of dimension $k \times n$, where $n$ is the length of each codeword and $k$ is the length of each message, which given the set of all messages $X$ generates the set $\C$ through matrix multiplication: $\C = XG$. Alternatively, we can say that the rows of $G$ must span $\C$.

**Example.** Let $G = \bmat{1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1}$ be a generator matrix of a code $\C$ over the field $\F_2$. The matrix takes as "input" any message of length 3, and outputs a codeword of length 4. For instance, take the message $x = [1, 0, 1]$. This encodes as $xG = [1, 1, 1, 0]$. We can calculate the entirety of $\C$ by taking all linear combinations of the rows of $G$:
$$
\C = \{0000, 1101, 1010, 0011, 1110, 0111, 1001, 0100\}
$$
We note, to no surprise, that the number of distinct codewords in $\C$ is the same as the number of possible inputs: $2^3 = 8$.
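To make the encoding step concrete, here is a small Python sketch (an illustration only, not part of the compendium's material; the function name `encode` is my own) that builds the generator matrix from the example above, encodes $x = [1, 0, 1]$, and enumerates all $2^3$ codewords over $\F_2$:

```python
import itertools
import numpy as np

# Generator matrix from the example above, over F_2 (all arithmetic is mod 2).
G = np.array([[1, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 0, 1, 1]])

def encode(message, G):
    """Encode a length-k message as a length-n codeword: c = xG over F_2."""
    return np.mod(np.array(message) @ G, 2)

print(encode([1, 0, 1], G))  # -> [1 1 1 0], as in the example

# Enumerate the whole code by encoding every possible message in F_2^3.
k = G.shape[0]
code = {tuple(encode(m, G)) for m in itertools.product([0, 1], repeat=k)}
print(len(code))  # 8 distinct codewords, matching 2^3
for c in sorted(code):
    print(''.join(map(str, c)))
```

Since every linear combination of rows of $G$ is itself the encoding of some message, the enumerated set is closed under addition, which is exactly the linearity property from the previous section.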
>**Definition.** A generator matrix is said to be in **standard form** if it is of the form $[I_k \mid A]$.
**Example.** The matrix $$\barr{cc|cc}{ 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1}$$ is in standard form. The matrix $$\barr{cc|cc}{1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1}$$ is not.
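A minimal sketch of how one might check the standard-form condition programmatically (the helper `is_standard_form` is hypothetical, just for illustration): compare the left $k \times k$ block of $G$ against the identity matrix.

```python
import numpy as np

def is_standard_form(G):
    """Check whether G = [I_k | A], i.e. its left k x k block is the identity."""
    k = G.shape[0]
    if G.shape[1] < k:
        return False
    return np.array_equal(G[:, :k], np.eye(k, dtype=G.dtype))

G1 = np.array([[1, 0, 1, 1],
               [0, 1, 0, 1]])
G2 = np.array([[1, 0, 1, 1],
               [0, 1, 0, 1],
               [1, 0, 1, 1]])

print(is_standard_form(G1))  # True:  left 2x2 block is I_2
print(is_standard_form(G2))  # False: left 3x3 block is not I_3
```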
## Parity check matrices

An interesting property of a linear code is that, since it is a subspace of the vector space $\F_q^n$, it is the *kernel* of some linear transformation $H$: for each codeword $\b x \in \C$, $H\b x^T = \b 0$.

So why is this of any importance to us? Looking back at our motivation for defining error-correcting codes, it is precisely that of detecting errors. As $H$ (the construction of which we haven't touched on yet) multiplied by any codeword will yield $\b 0$, we know that if a received word does not yield $\b 0$ on multiplication with $H$, then some error has occurred. In fact, if our code is cleverly constructed, we can also figure out which error has occurred, provided the number of errors is not too large.

>**Definition.** The **parity check matrix** $H$ of a code $\C$ is an $(n-k) \times n$ matrix defined by $H\b x^T = \b 0,\> \forall \b x \in \C$.

If we have a generator matrix in standard form, we have a nice way of finding $H$:

**Theorem.** If $G = [I_k \mid A]$ then $H = [-A^T \mid I_{n-k}]$.

*Proof.* Clearly $HG^T = [-A^T \mid I_{n-k}]\cdot [I_k \mid A]^T = -A^T + A^T = \mathcal{O}$.

**Example.** Let
$$G = \barr{ccc|cc}{ 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 }$$
be the generator matrix of some code $\C$ over $\F_2$. Then $\C$ has parity check matrix (note that $-1 = 1 \mod 2$)
$$H = \barr{ccc|cc}{ 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 }$$
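As a sanity check of the theorem, the following sketch (my own illustration; `parity_check_from_standard` is not a library function) builds $H = [-A^T \mid I_{n-k}]$ from the standard-form $G$ in the example and verifies that every codeword has zero syndrome, while a word with a single flipped bit does not:

```python
import itertools
import numpy as np

def parity_check_from_standard(G):
    """Given G = [I_k | A] over F_2, return H = [-A^T | I_{n-k}] (note -1 = 1 mod 2)."""
    k, n = G.shape
    A = G[:, k:]
    return np.mod(np.hstack([-A.T, np.eye(n - k, dtype=int)]), 2)

G = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]])
H = parity_check_from_standard(G)
print(H)  # [[1 0 1 1 0], [1 1 0 0 1]], as in the example

# Every codeword x satisfies H x^T = 0 ...
for m in itertools.product([0, 1], repeat=G.shape[0]):
    x = np.mod(np.array(m) @ G, 2)
    assert not np.any(np.mod(H @ x, 2))

# ... while a word with a single flipped bit gives a nonzero syndrome.
x = np.mod(np.array([1, 0, 1]) @ G, 2)
x[0] ^= 1
print(np.mod(H @ x, 2))  # nonzero => an error is detected
```

This is exactly the error-detection idea described above: a nonzero product $H\b x^T$ tells us that the received word is not a codeword.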
## Dual codes

## Weights and distances

## Puncturing and extending codes

## Equivalence of codes

## Examples

# Some typical codes

## Hamming codes

## Reed-Muller codes

## Golay codes

# Bounds on codes

## $A_q(n,d)$ and $B_q(n, d)$

## MDS codes

## Gilbert Lower Bound

## Examples

# Cyclic codes