TMA4185: Coding Theory

$$ \newcommand{\Gr}{\mathscr{G}} \newcommand{\Ri}{\mathscr{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\F}{\mathbb{F}} \newcommand{\C}{\mathcal{C}} \newcommand{\O}{\mathcal{O}} \newcommand{\div}{\,\vert\,} \newcommand{\ndiv}{\,\not\vert\,} \newcommand{\neq}{\not=} \newcommand{\b}{\textbf} \newcommand{\bmat}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\barr}[2]{\left[\begin{array}{#1}#2\end{array}\right]} $$

# Motivation

TODO

# Basics

## Codes
We wish to send some message $m$ from a source, over a channel, and receive it at some recipient. To do this we encode it, creating a new message $c$ (often denoted by $x$ as well).
This message is hopefully of such a character that it allows us to successfully decode $m$ at the recipient. Before we go about constructing an encoding scheme, we make a few definitions:

>**Definition.** A **code** $\C$ is a set of **codewords**, each a tuple of some length $n$: $(a_1, a_2, \ldots, a_n)$.
>
>A code $\C$ is said to be an $(n, M)$ code if each codeword is of length $n$ and the number of codewords in $\C$ is $M$.
Put in other words, a code describes a set of possible encodings of inputs (messages). Typically, the codewords are $n$-tuples over some field $\F_q$. If the field is $\F_2$ we call the code a **binary code**; if the field is $\F_3$ we call it a **ternary code**. We often write a codeword $(a_1, a_2, a_3, \dots)$ in the concatenated form $a_1a_2a_3\dots$ In the most general terms, this is all the structure required to define codes. However, we almost exclusively work with *linear* codes:

>**Definition.** A **linear code** is a code in which every linear combination of codewords is again a codeword, i.e. for all $\b x, \b y \in \C$:
> (i) $a\b x \in \C$ for all $a \in \F$,
> (ii) $\b x + \b y \in \C$.

**Example.** Let $\C = \{000, 110, 011, 101\}$ be a code over $\F_2$. It is a linear code, as is easily verified: the sum of any two codewords is again a codeword.

## Generator matrices

We typically convert a message $x$ into a codeword $c$ by means of a **generator matrix**. Such a matrix also imposes a (so far undefined, but definitely necessary) length requirement on the messages:

>**Definition.** A **generator matrix** $G$ of a code $\C$ is any $k \times n$ matrix, where $n$ is the length of each codeword and $k$ is the length of each message, whose rows form a basis for $\C$. Equivalently, we can say that the rows of $G$ must span $\C$.

**Example.** Let $$G = \bmat{1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1}$$ be a generator matrix of a code $\C$ over the field $\F_2$. The matrix takes as "input" any message of length 3, and outputs a codeword of length 4. For instance, take the message $x = [1, 0, 1]$. This encodes as $xG = [1, 1, 1, 0]$. We can calculate the entirety of $\C$ by combining the rows of $G$ in arbitrary fashion: $$ \C = \{0000, 1101, 1010, 0011, 1110, 0111, 1001, 0100\} $$ We note, to no surprise, that the number of distinct codewords in $\C$ is the same as the number of possible inputs: $2^3 = 8$.

If a linear code takes input of length $k$ and produces codewords of length $n$, we say that the code has dimension $k$ and is an $[n, k]$ code.

>**Definition.** A generator matrix is said to be in **standard form** if it is of the form $[I_k \mid A]$.

**Example.** The matrix $$\barr{cc|cc}{ 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1}$$ is in standard form. The matrix $$\barr{cc|cc}{1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1}$$ is not.

## Parity check matrices

An interesting property of a linear code is that, as it is a subspace of a vector space (namely of $\F_q^n$), it is the *kernel* of some linear transformation $H$: for each codeword $\b x \in \C$, $H\b x^T = \b 0$.

So why is this of any importance to us? Looking back at our motivation for defining error-correcting codes, it is precisely that of detecting errors. As $H$ (the construction of which we haven't touched on yet) multiplied by any codeword yields $\b 0$, we know that if a received message does not yield $\b 0$ on multiplication with $H$, then some error has occurred. In fact, if our code is cleverly constructed, we can also figure out *which* error has occurred, provided the number of errors is not too large.

>**Definition.** The **parity check matrix** $H$ of an $[n, k]$ code $\C$ is an $(n-k) \times n$ matrix defined by $H\b x^T = \b 0,\> \forall \b x \in \C$.

If we have a generator matrix in standard form, we have a nice way of finding $H$:

>**Theorem.** If $G = [I_k \mid A]$ then $H = [-A^T \mid I_{n-k}]$.

*Proof.* Clearly $HG^T = [-A^T \mid I_{n-k}]\cdot [I_k \mid A]^T = -A^T + A^T = \mathcal{O}$.
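To make this concrete, here is a minimal Python sketch (using `numpy`; the helper names `encode` and `parity_check_matrix` are our own, not from any library) that builds a generator matrix in standard form, derives $H = [-A^T \mid I_{n-k}]$, and checks that an encoded message lies in the kernel of $H$:

```python
import numpy as np

def encode(x, G):
    """Encode the message x as the codeword xG (arithmetic mod 2)."""
    return (np.array(x) @ G) % 2

def parity_check_matrix(A, n, k):
    """Build H = [-A^T | I_{n-k}] from G = [I_k | A]; over F_2, -A^T = A^T."""
    return np.hstack([A.T % 2, np.eye(n - k, dtype=int)])

# G = [I_3 | A] in standard form, generating a [5, 3] binary code.
A = np.array([[1, 1],
              [0, 1],
              [1, 0]])
G = np.hstack([np.eye(3, dtype=int), A])
H = parity_check_matrix(A, n=5, k=3)

c = encode([1, 0, 1], G)   # encode the message 101
print(c)                   # [1 0 1 0 1]
print((H @ c) % 2)         # [0 0] -- c is indeed a codeword
```

This is the same $[5, 3]$ code as in the example below, so the output can be checked against it by hand.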
**Example.** Let $$G = \barr{ccc|cc}{ 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 }$$ be the generator matrix of some code $\C$ over $\F_2$. Then $\C$ has parity check matrix (note that $-1 = 1 \mod 2$) $$H = \barr{ccc|cc}{ 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 }$$

Finally, we note (without proof) that the rows of $H$ are also necessarily linearly independent.

## Dual codes

>**Definition.** Let $G$ be the generator matrix of some code $\C$ with parity check matrix $H$. Then the code generated by the rows of $H$ is called the **dual code** of $\C$, and is denoted $\C^{\perp}$.

A code is said to be **self-orthogonal** if $\C \subseteq \C^{\perp}$, and **self-dual** if $\C = \C^{\perp}$.

## Weights and distances

Two interesting properties of codes are *weight* and *distance*.

>**Definition.** The (Hamming) **distance** $d(\b x, \b y)$ between two codewords $\b x$ and $\b y$ of some code $\C$ is defined as the number of coordinates in which $\b x$ and $\b y$ differ.

>**Definition.** The (Hamming) **weight** $wt(\b x)$ of a codeword $\b x$ is defined as the number of non-zero coordinates of $\b x$.

We make the following observation:

>**Theorem.** If $\b x, \b y \in \F_q^n$ then $d(\b x, \b y) = wt(\b x - \b y)$.

*Proof.* Should be fairly clear: simply observe that $\b x - \b y$ is zero exactly in the coordinates where $\b x$ and $\b y$ agree.

>**Definition.** The **minimum distance** of a code $\C$ is the minimum distance between any two distinct codewords of $\C$.

>**Theorem.** If $\C$ is a linear code, then the minimum distance of $\C$ equals the minimum weight of the non-zero codewords of $\C$.

If an $[n, k]$ code has minimum distance $d$, we say that the code is an $[n, k, d]$ code.

## Puncturing and extending codes

Having already created some code $\C$, a simple way to create 'new' codes from $\C$ is through either *puncturing* or *extension*.

### Puncturing

As might be clear from the name, **puncturing** a code means removing some $i$'th coordinate from all codewords in the code. What might the minimum distance $d$ and dimension $k$ of a punctured code be?

>**Theorem.** Let $\C$ be a linear $[n, k, d]$ code, and let $\C^*$ be the code $\C$ punctured in the $i$'th coordinate. Then:
> (i) If $d > 1$ and $\C$ has a codeword of minimum weight with a non-zero entry in the punctured coordinate $i$, the minimum distance decreases by $1$: $\C^*$ is an $[n - 1, k, d - 1]$ code.
> (ii) If $\C$ has minimum distance $1$, and there are two codewords of $\C$ which differ only in the $i$'th coordinate, the new code $\C^*$ has dimension $k-1$: $\C^*$ is an $[n - 1, k-1, d^*]$ code, with $d^* \geq 1$.

We prove this informally by giving some intuition through examples. Let $\C$ be the binary code generated by $$G = \bmat{1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1}$$ We see that the minimum distance of this code is $2$, as the codewords are $\{0000,1100,0011,1111\}$. Let $\C^*$ be the code punctured in the 3rd coordinate. The new code has generator matrix $$G^* = \bmat{1 & 1 & 0 \\ 0 & 0 & 1}$$ The minimum distance of this code is $1$, as the codewords are $\{000, 001, 110, 111\}$; e.g. the distance between $110$ and $111$ is $1$.

Contrarily, let $\C$ be the binary code generated by $$G = \bmat{1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1}$$ The minimum distance of this code is $2$. Puncturing this code in the third coordinate gives us a code with generator matrix $$G^* = \bmat{1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1}$$ As already noted, the minimum distance of this code is also $2$.
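Since a linear code's minimum distance equals its minimum non-zero weight, we can compute it by brute force for small codes. A short sketch in the same vein as above (the function names are our own), reproducing the first puncturing example:

```python
import numpy as np
from itertools import product

def codewords(G):
    """All codewords of the binary linear code generated by the rows of G."""
    k, _ = G.shape
    return [tuple((np.array(m) @ G) % 2) for m in product([0, 1], repeat=k)]

def minimum_distance(G):
    """For a linear code, minimum distance = minimum non-zero weight."""
    return min(sum(c) for c in codewords(G) if any(c))

G = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]])
print(minimum_distance(G))           # 2

G_punct = np.delete(G, 2, axis=1)    # puncture in the 3rd coordinate
print(minimum_distance(G_punct))     # 1, as in the example above
```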
For a demonstration of the second property of the theorem, let $\C$ be the binary code with generator matrix $$G = \bmat{1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1},$$ which has dimension $2$. The codewords of this code are $\{0000, 0111, 1000, 1111\}$. Puncturing this code in the first coordinate, we get the code with generator matrix $$G^* = \bmat{1 & 1 & 1 \\ 1 & 1 & 1} \sim \bmat{1 & 1 & 1}.$$ Clearly this code has dimension $1$.

### Extension

We can in similar fashion extend a code by adding a coordinate. Typically, we do this to get a new code consisting of only even-like codewords (codewords whose coordinates sum to zero; for binary codes, codewords of even weight).

**Example.** Take the binary code generated by the matrix $$G = \bmat{1 & 0 & 1 \\ 0 & 1 & 1}$$ The code is a $[3, 2, 2]$ code. We extend the code by adding a new coordinate with column vector $\bmat{1 \\ 0}$ between the second and third coordinates: $$G^* = \bmat{1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1}$$ As can be checked, this is a $[4, 2, 2]$ code. The minimum distance has not increased: the extended code has a minimum weight codeword, $0101$, with a $0$ in the new coordinate, and puncturing it in that coordinate recovers the minimum weight codeword $011$ of the original code. In other words, to figure out the dimension and minimum distance of the extended code, simply apply the previous theorem in reverse.

## Direct sum of codes

We can create larger codes from smaller codes by concatenating them through the *direct sum*:

>**Definition.** Let $G_1$ and $G_2$ be generator matrices for linear $[n_1, k_1, d_1]$ and $[n_2, k_2, d_2]$ codes $\C_1$ and $\C_2$ with parity check matrices $H_1$ and $H_2$. The **direct sum** of the codes $\C_1$ and $\C_2$ is the code with generator and parity check matrices
> $$G = \bmat{G_1 & \O \\ \O & G_2}\quad H = \bmat{H_1 & \O \\ \O & H_2}$$
> The new code is an $[n_1+n_2,\, k_1+k_2,\, \min\{d_1, d_2\}]$ code.

Sadly, as the minimum distance of the new code is no larger than that of either original code, the new code is of little use (as we will see).

## The $(u \mid u + v)$-construction

In somewhat similar fashion, we can combine two codes using what is called the $(\b u \mid \b u + \b v)$ construction.

>**Definition.** Let $G_1$ and $G_2$ be generator matrices for linear codes $\C_1$ and $\C_2$ of the same length $n$ (but not necessarily the same dimension) with parity check matrices $H_1$ and $H_2$. The $(\b u \mid \b u + \b v)$ construction of the codes $\C_1$ and $\C_2$ is the code $\C$ with generator and parity check matrices
> $$G = \bmat{G_1 & G_1 \\ \O & G_2}\quad H = \bmat{H_1 & \O \\ -H_2 & H_2}$$
> The new code is a $[2n,\, k_1+k_2,\, \min\{2d_1, d_2\}]$ code.

We can also write the code $\C$ more concisely as $$\C = \{ (\b u, \b u + \b v) \mid \b u \in \C_1,\> \b v \in \C_2\}.$$

## Equivalence of codes

There are multiple ways in which codes can be "equivalent", or "essentially" the same. One way is to require equality as vector spaces (i.e. that the vector spaces are isomorphic); however, in this case it should be fairly clear that properties such as weight might not be retained across the isomorphism - we could for instance create an isomorphism mapping all vectors of weight $2$ to vectors of weight $3$ and vice versa, under the only constraint that the vector spaces be isomorphic. Thus, we look at a different kind of equivalence, namely *permutation equivalence*:

>**Definition.** Two codes $\C_1$ and $\C_2$ are **permutation equivalent** if there is a permutation of coordinates which sends $\C_1$ to $\C_2$.
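Both constructions amount to stacking generator matrices into blocks, which is easy to sketch in `numpy` (again, the function names are our own):

```python
import numpy as np

def direct_sum(G1, G2):
    """Generator matrix [[G1, O], [O, G2]] of the direct sum code."""
    k1, n1 = G1.shape
    k2, n2 = G2.shape
    return np.block([[G1, np.zeros((k1, n2), dtype=int)],
                     [np.zeros((k2, n1), dtype=int), G2]])

def u_u_plus_v(G1, G2):
    """Generator matrix [[G1, G1], [O, G2]] of the (u | u+v) construction."""
    k2, n = G2.shape
    return np.block([[G1, G1],
                     [np.zeros((k2, n), dtype=int), G2]])

G1 = np.array([[1, 0, 1],
               [0, 1, 1]])   # a [3, 2, 2] code
G2 = np.array([[1, 1, 1]])   # the [3, 1, 3] binary repetition code
print(u_u_plus_v(G1, G2))    # generates a [6, 3, min(2*2, 3)] = [6, 3, 3] code
```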
**Example.** Let $\C_1$ and $\C_2$ be two binary codes defined by the generator matrices $$ G_1 = \bmat{1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1},\quad G_2 = \bmat{0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1} $$ We see that we can acquire $G_2$ from $G_1$ by switching the first and second coordinates. Thus $\C_1$ and $\C_2$ are permutation equivalent.

Given that two permutation equivalent codes are "essentially the same", we can, for any generator matrix of some code, acquire a generator matrix in standard form of some permutation equivalent code. This is handy for finding a parity check matrix of an essentially identical code.

**Example.** Let $\C$ be the code with generator matrix $$G = \bmat{1 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1}$$ Clearly, this matrix cannot be manipulated by row operations into a matrix in standard form, as we will never be able to "separate" the first and second coordinates. However, we can create a permutation equivalent code $\C^*$ by swapping the second and fourth coordinates: $$G^* = \bmat{1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 1}$$ This matrix is row-reducible to standard form: $$ \begin{align} G^* = \bmat{ 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 1} \sim \bmat{ 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0} \sim \barr{ccc|cc}{ 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0} \end{align} $$ This matrix is now in standard form. Consequently, we can easily find its parity check matrix: $$ H^* = \barr{ccc|cc}{ 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 } $$

## Encoding and decoding
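The row reduction above is ordinary Gaussian elimination, just with arithmetic mod 2. A minimal sketch (`row_reduce_gf2` is our own helper, not a library routine) that reproduces the computation:

```python
import numpy as np

def row_reduce_gf2(G):
    """Reduced row echelon form of a binary matrix over GF(2).
    If the leftmost k columns are independent, the result is [I_k | A]."""
    G = G.copy() % 2
    k, n = G.shape
    row = 0
    for col in range(n):
        pivots = [r for r in range(row, k) if G[r, col] == 1]
        if not pivots:
            continue                                # no pivot in this column
        G[[row, pivots[0]]] = G[[pivots[0], row]]   # swap pivot row up
        for r in range(k):                          # clear the rest of the column
            if r != row and G[r, col] == 1:
                G[r] = (G[r] + G[row]) % 2
        row += 1
        if row == k:
            break
    return G

G_star = np.array([[1, 0, 1, 1, 1],
                   [0, 1, 0, 0, 1],
                   [0, 1, 1, 0, 1]])
print(row_reduce_gf2(G_star))
# [[1 0 0 1 1]
#  [0 1 0 0 1]
#  [0 0 1 0 0]]
```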
Before we start defining specific codes and more complex methodologies, we look briefly at the process of encoding and decoding messages.

### Simple example

Say we have the code $\C$ defined by the generator matrix $$G = \bmat{1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1}.$$ As the matrix is in standard form, we can easily find its parity check matrix $$H = \bmat{1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1}$$

Let's say that we encode the message $x = 01$ using $G$. We get $$xG = \bmat{0 & 1}\bmat{ 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 } = 0101 $$ This is our codeword $\b c$. Say we transmit this codeword, receive it at some endpoint, and wish to verify that the received message is a codeword. We feed $0101$ into $H$, which gives us $$ H\b c^T = \bmat{ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1}\bmat{0 \\ 1 \\ 0 \\ 1} = \b 0 $$ Thus we know that, while we can't rule out that the message has been completely mangled, at least the received message *is* a codeword. If the likelihood of even a single error during transmission is small, the message is with high probability intact. As $G$ is in standard form, clearly $c_1 c_2 = x_1 x_2$, and the original message is easily retrieved.

Take now a received message $\b c_2 = 0111$. Again, we wish to figure out if some error has occurred during transmission. We multiply by $H$: $$ H\b c_2^T = \bmat{ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1}\bmat{0 \\ 1 \\ 1 \\ 1} = \bmat{1 \\ 0} \neq \b 0 $$ Thus some error has occurred during transmission. Can we retrieve the error? If we assume that only one error has occurred, it is clear that it must have happened in the third coordinate, as we know $0101$ is a codeword, and changing any other single coordinate does not yield a codeword (try for instance $0110$ or $1111$). Note also that the syndrome $\bmat{1 \\ 0}$ is precisely the third column of $H$: a single error in coordinate $i$ always yields the $i$'th column of $H$ as syndrome. Thus we know that the most likely original codeword was $0101$.

### Examples with Hamming code

TODO

## Extra Examples

TODO
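The decoding argument above is syndrome decoding: flip the coordinate whose column of $H$ matches the syndrome. A minimal sketch (the function name is ours; note that this toy $[4, 2, 2]$ code cannot uniquely correct *every* single error, since $H$ has repeated columns, but the syndrome in this example is unambiguous):

```python
import numpy as np

def correct_single_error(received, H):
    """Flip the coordinate whose column of H equals the syndrome,
    assuming at most one error occurred during transmission."""
    r = np.array(received) % 2
    syndrome = (H @ r) % 2
    if not syndrome.any():
        return r                       # already a codeword
    for i in range(H.shape[1]):
        if np.array_equal(H[:, i], syndrome):
            r[i] ^= 1                  # flip the offending bit
            return r
    raise ValueError("no single-bit error matches this syndrome")

H = np.array([[1, 0, 1, 0],            # parity check matrix from the example
              [1, 1, 0, 1]])
print(correct_single_error([0, 1, 1, 1], H))   # [0 1 0 1], as found by hand
```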
# Some codes

## Binary repetition code

## Hamming codes

## Reed-Muller codes

## Golay codes

# Bounds on codes

## $A_q(n,d)$ and $B_q(n, d)$

## MDS codes

## Gilbert Lower Bound

## Examples

# Cyclic codes