61
Coding Theory

Joachim Rosenthal
University of Zürich

Paul Weiner
Saint Mary’s University of Minnesota

61.1 Basic Concepts
61.2 Linear Block Codes
61.3 Main Linear Coding Problem and Distance Bounds
61.4 Important Classes of Linear Codes 1
61.5 Important Classes of Linear Codes 2
61.6 Convolutional Codes
References
Sometimes errors occur when data is transmitted over a channel. A binary 0 may be corrupted to a 1,
or vice versa. Error control coding adds redundancy to a transmitted message allowing for detection
and correction of errors. For example, if 0 stands for “no” and 1 stands for “yes”, a single bit error will
completely change a message. If we repeat the message seven times, so 0000000 stands for “no” and
1111111 stands for “yes”, then it would require at least 4 bit errors to change a message, assuming that
a message is decoded to either 0000000 or 1111111 according to which bit is in the majority in the
received string. Thus, this simple code can be used to correct up to 3 errors. This code also detects up to 6
errors, in the sense that if between 1 and 6 errors occur in transmission, the received string will not be a
codeword, and the receiver will recognize that one or more errors have occurred. The extra bits provide error
protection.
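To make the majority-vote rule concrete, here is a minimal Python sketch of this repetition code (the function names are ours, chosen for illustration):

    def encode(bit):
        """Encode one bit with the 7-fold repetition code."""
        return [bit] * 7

    def decode(received):
        """Majority vote: decode to the bit occurring most often."""
        return 1 if sum(received) > len(received) // 2 else 0

    assert decode([1, 1, 0, 1, 1, 0, 1]) == 1   # two errors corrected
    assert decode([0, 1, 0, 0, 0, 0, 0]) == 0   # one error corrected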
Let F = F_q be the finite field with q elements. It is customary in the coding literature to write message blocks and vectors of the vector spaces F^k and F^n as row vectors, and we shall follow that practice in this chapter.
61.1 Basic Concepts
Definitions:
A block code of length n over F is a subset C of the vector space F^n. Elements of C are codewords. Elements of the finite field F form the alphabet.
Message blocks of length k are strings of k symbols from the alphabet. Message blocks may also be identified with vectors in the vector space F^k.
An encoder is a one-to-one mapping ϕ : F^k → F^n whose image is contained in the code C.
The process of applying ϕ to a k-digit message block to obtain a codeword is encoding.
A noisy channel is a channel in which transmission errors may occur.
A codeword v ∈ C ⊆ F^n that has been transmitted may be received as y = v + e, where e ∈ F^n is an error vector.
Given a received word y = v + e ∈ F^n, the receiver estimates the corresponding codeword, thus obtaining ṽ ∈ C. This estimation process is called decoding. If ṽ = v, then correct decoding has occurred.
Decoding may be thought of as a mapping ψ : F^n → C. The mapping ψ is a decoder. Generally a decoder ψ should have the property that if the received word is in fact a codeword, it is decoded to itself.
The word error rate for a particular decoder is the probability P_err that the decoder outputs a wrong codeword.
The process of encoding, transmission, and decoding is summarized by

    F^k --ϕ--> F^n --n.t.--> F^n --ψ--> C
     u    ↦     v     ↦      y    ↦    ṽ,

where “n.t.” stands for “noisy transmission.”
For v, w ∈ F^n the Hamming distance, or simply the distance, between v and w is d(v, w) = |{i | v_i ≠ w_i}|.
The distance of a code C ⊆ F^n is defined by d(C) = min{d(v, w) | v, w ∈ C and v ≠ w}.
The Hamming weight or weight of v is defined to be wt(v) = d(v, 0).
The closed ball of radius t about the vector v ∈ F^n is the set B(v, t) = {y ∈ F^n | d(v, y) ≤ t}.
Nearest-neighbor decoding refers to decoding a received word y to a nearest codeword (with respect
to Hamming distance).
A code C ⊆ F^n is called a perfect code if there exists a nonnegative integer t such that
1. F^n = ∪_{v∈C} B(v, t), and
2. For distinct v, w ∈ C, B(v, t) ∩ B(w, t) = ∅.
That is, F^n is the disjoint union of closed t-balls about the codewords.
Facts:
1. The minimum number of symbol errors it would take for v to be changed to w is d(v, w).
2. A nearest-neighbor decoder need not be unique as it is possible that some received words y might
have two or more codewords at an equal minimum distance. The decoder may decode y to any of
these nearest codewords.
3. [Hil86, p. 5] The Hamming distance is a metric on F^n.
4. [Hil86, p. 7] If the distance of a code C is d, then C can detect up to d − 1 errors. It can correct up to ⌊(d − 1)/2⌋ errors.
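Facts 1 and 4 are easy to check computationally. The following sketch (our own illustration) computes the Hamming distance and the distance of a code by brute force:

    from itertools import combinations

    def hamming_distance(v, w):
        """Number of coordinates in which v and w differ."""
        return sum(vi != wi for vi, wi in zip(v, w))

    def code_distance(C):
        """d(C) = minimum distance over all pairs of distinct codewords."""
        return min(hamming_distance(v, w) for v, w in combinations(C, 2))

    C = ["0000000", "1111111"]                # the 7-fold repetition code
    d = code_distance(C)                      # d = 7
    print("detects", d - 1, "corrects", (d - 1) // 2)   # detects 6, corrects 3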
Examples:
1. The binary message block 1011 can be identified with the vector (1, 0, 1, 1) ∈ F_2^4.
2. The distance between the binary strings 0011 and 1110 is 3 since these two bit strings differ in three
places.
3. We may picture nearest-neighbor decoding by taking a closed ball of expanding radius about the
received word y. As soon as the ball includes a codeword, that codeword is a nearest-neighbor to
which we may decode y.
61.2 Linear Block Codes
Definitions:
A linear block code of length n and dimension k over F = F_q is a k-dimensional linear subspace, C, of the vector space F^n. One says C is an [n, k]-code over F. If one wants to include the distance as a parameter, one says that C is an [n, k, d]-code over F. A binary linear block code (BLBC) is a linear block code over the binary field F_2 = {0, 1}.
Two linear codes over F_q are equivalent if one can be obtained from the other by steps of the following types: (i) permutation of the coordinates of the codewords, and (ii) multiplication of the field elements in one coordinate of the code by a nonzero scalar in F_q.
The rate of a linear [n, k]-code is k/n.
Let C be a linear [n, k]-code over F. A generator matrix for C is a matrix G ∈ F^{k×n} whose k rows form a basis for C. A generator matrix for C is also called an encoder for C.
A generator matrix G for an [n, k]-code is a systematic encoder if G can be partitioned, perhaps after a column permutation, as G = [I_k | A], where A ∈ F^{k×(n−k)}. Then u ∈ F^k is encoded as uG = [u | uA], so that the first k symbols of the codeword uG are the message word u, and the last n − k symbols form the parity check symbols.
Let C be a linear [n, k]-code over F. A parity check matrix for C is a matrix H ∈ F^{(n−k)×n} such that C = {v ∈ F^n | vH^T = 0 ∈ F^{n−k}}.
Let C be a linear [n, k]-code over F with generator matrix G ∈ F^{k×n} and parity check matrix H ∈ F^{(n−k)×n}. Then H itself is the generator matrix of another code, denoted C^⊥. C^⊥ is the dual code of C.
A linear block code, C, is self-dual if C = C^⊥.
Let C be an [n, k]-code with parity check matrix H. For any vector y ∈ F^n, the syndrome of y is the vector yH^T ∈ F^{n−k}.
Facts:
1. [Hil86, p. 47] There are q^k codewords in an [n, k]-code over F = F_q.
2. The rate, k/n, of an [n, k]-code tells what fraction of transmitted symbols carry information. The complementary fraction, (n − k)/n, tells how much of the transmission is redundant (for error protection).
3. If G is a generator matrix for the linear [n, k]-code, C, then C = {uG | u ∈ F^k}. Thus, the mapping ϕ : F^k → F^n given by ϕ(u) = uG is an encoder for C.
4. [Hil86, p. 48] The distance of a linear code is equal to the minimum weight of all nonzero codewords.
5. Every [n, k] linear block code has an (n − k) × n parity check matrix.
6. For a given [n, k] linear code C with generator matrix G, an (n − k) × n matrix H is a parity check matrix for C if and only if the rows of H are a basis for the left kernel of G^T.
7. If H is a parity check matrix for the linear code C, then C is the left kernel of H^T.
8. A generator matrix of C is a parity check matrix of C^⊥.
9. [MS77, p. 33] Suppose H ∈ F^{(n−k)×n} is a parity check matrix for an [n, k]-code C over F. Then the distance of C is equal to the smallest number of columns of H for which a linear dependency may be found.
10. [Hil86, p. 70] If G = [I_k | A] is a systematic encoder for the [n, k]-code C, then H = [−A^T | I_{n−k}] is a parity check matrix for C.
11. [MS77, p. 22] Shannon’s Coding Theorem for the Binary Symmetric Channel. Assume that the transmission channel has a fixed symbol error probability p that the symbol 1 is corrupted to a 0 or vice versa. Let

    C := 1 + p log_2 p + (1 − p) log_2 (1 − p).

Then for every ε > 0 and every R < C there exists, for any sufficiently large n, a binary linear [n, k] code having transmission rate k/n ≥ R and word error probability P_err ≤ ε.
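The capacity C in Fact 11 is a simple closed-form expression; a quick sketch (ours) evaluates it for a given crossover probability p:

    import math

    def bsc_capacity(p):
        """C = 1 + p*log2(p) + (1-p)*log2(1-p), the capacity of the
        binary symmetric channel with crossover probability p."""
        if p == 0.0 or p == 1.0:
            return 1.0
        return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

    print(round(bsc_capacity(0.1), 3))   # 0.531: any rate R < 0.531 is achievable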
Examples:
1. The binary repetition code of length 7 is a [7, 1] BLBC. A generator matrix is

    G = [1 1 1 1 1 1 1].

This code consists of two codewords, namely 0000000 and 1111111. The rate of this code is 1/7. For every 7 bits transmitted, there is only 1 bit of information, with the remaining 6 redundant bits providing error protection. The distance of this code is 7. A parity check matrix for this code is the 6 × 7 matrix

    H = [J_{6,1} | I_6].
2. Binary even weight codes. The binary even weight code of length n is an [n, n − 1]-code that consists of all even weight words in F_2^n. A generator matrix is given by

    G = [I_{n−1} | J_{(n−1),1}].

This code detects any single error, but cannot correct any errors. A binary even weight code may also be constructed by taking all binary strings of length n − 1 and to each adding an nth parity check bit chosen so that the weight of the resulting string is even. The 8-bit ASCII code (American Standard Code for Information Interchange) is constructed in this manner.
Note that the binary even weight code of length n and the binary repetition code of length n are
dual codes.
3. The ISBN (International Standard Book Number) code is a [10, 9]-code over F_11 (the finite field {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, X}, where field operations are performed modulo 11; X is the symbol for 10 in this field). This code may be defined by the parity check matrix

    H = [1 2 3 4 5 6 7 8 9 X].

Thus, a string of 10 digits, v_1 v_2 v_3 v_4 v_5 v_6 v_7 v_8 v_9 v_10, from F_11 is a valid ISBN if and only if Σ_{i=1}^{10} i·v_i ≡ 0 (mod 11). The ISBN code will detect any single error or any error where two digits are interchanged (transposition error). The first nine digits of an ISBN carry information about the book (such as publisher). The tenth digit is a check digit. Given the first 9 digits of an ISBN, the check digit may be computed by forming the sum Σ_{i=1}^{9} i·v_i and reducing modulo 11. The value thus obtained is v_10 (if the value is 10, then v_10 = X); a short computational sketch follows these examples.

Note: Currently the ISBN system does not use X as any of the first 9 digits. However, mathematically there is no reason not to use X anywhere in an ISBN.
4. The binary linear [8, 4] code with generator matrix

    G = [1 0 1 1 0 0 0 1]
        [0 1 0 1 1 0 0 1]
        [0 0 1 0 1 1 0 1]
        [0 0 0 1 0 1 1 1]

is a self-dual code (note that over the binary field, GG^T = 0). This code is called the extended binary [8, 4, 4] code. All pairs of different codewords are at distance 4 or 8.
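Following the check-digit rule from Example 3, here is a small sketch (ours) that computes the tenth ISBN digit from the first nine:

    def isbn_check_digit(first_nine):
        """v10 = sum_{i=1..9} i*v_i mod 11, written as 'X' if the value is 10."""
        value = sum(i * v for i, v in enumerate(first_nine, start=1)) % 11
        return "X" if value == 10 else str(value)

    def is_valid_isbn(digits):
        """An ISBN is valid iff sum_{i=1..10} i*v_i = 0 (mod 11)."""
        return sum(i * v for i, v in enumerate(digits, start=1)) % 11 == 0

    nine = [0, 1, 9, 8, 5, 3, 8, 0, 3]
    d10 = isbn_check_digit(nine)
    print(d10, is_valid_isbn(nine + [10 if d10 == "X" else int(d10)]))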
61.3 Main Linear Coding Problem and Distance Bounds
Definitions:
Let A_q(n, d) be the largest possible number so that there exists a block code C ⊆ F_q^n of distance d = d(C) and cardinality |C| = A_q(n, d). The Main Coding Theory Problem asks for the determination of A_q(n, d) for different values of n, d, q.
Similarly, let B_q(n, d) be the largest possible number so that there exists a linear [n, k] block code C of distance d = d(C) and cardinality |C| = B_q(n, d). The Main Linear Coding Problem asks for the determination of B_q(n, d) for different values of n, d, q.
An [n, k, d] linear code is maximum distance separable (MDS) if d = n − k + 1.
Facts:
1. Finding a value of B_q(n, d) is equivalent to finding the largest dimension, k, for which there is an [n, k, d]-code over F_q. For such maximal k, we have B_q(n, d) = q^k.
2. B_q(n, d) ≤ A_q(n, d) for all n, q, d.
3. [Rom92, p. 170] A_q(n, 1) = B_q(n, 1) = q^n.
4. B_q(n, 2) = q^{n−1}. Such a code is achieved by taking C to be a code with parity check matrix J_{1,n}.
5. [Rom92, p. 170] A_q(n, n) = B_q(n, n) = q.
6. [Rom92, p. 170] For n ≥ 2, A_q(n, d) ≤ q·A_q(n − 1, d).
7. [Hil86, Theorem 14.5] B_q(n, 3) = q^{n−r}, where r is the unique integer such that

    (q^{r−1} − 1)/(q − 1) < n ≤ (q^r − 1)/(q − 1).
8. [Rom92, Theorem 4.5.7] The sphere-packing bound. Suppose C is a block code of length n over F_q with distance d. Let t = ⌊(d − 1)/2⌋. Then

    A_q(n, d) · [1 + C(n,1)(q − 1) + C(n,2)(q − 1)^2 + ··· + C(n,t)(q − 1)^t] ≤ q^n,

where C(n, i) denotes the binomial coefficient.
9. [Hil86, pp. 20–21] The code C is perfect if and only if equality holds in the sphere-packing bound.
10. [Rom92, Theorem 4.5.7] Suppose C is a block code of length n over F_q with distance d. Then A_q(n, d) ≤ q^{n−d+1}.
11. The Singleton bound for linear block codes. For any [n, k, d] linear code over F_q, we have d ≤ n − k + 1.
12. An (n − k) × n matrix H with entries in F is the parity check matrix of an MDS code if and only if any n − k columns of H are linearly independent. Equivalently, H has the property that any full size minor of H is invertible.
13. [MS77, pp. 546–547] The Griesmer bound. Let N(k, d) be the length of the shortest binary linear code of dimension k and distance d. Then one has

    N(k, d) ≥ Σ_{i=0}^{k−1} ⌈d/2^i⌉.

The right-hand side in the above inequality is, of course, always at least d + k − 1.
14. [Rom92, Theorem 4.5.16] The Plotkin bound. If d > (qn − n)/q, then

    A_q(n, d) ≤ dq / (dq − (qn − n)).
15. [Hil86, pp. 91–92] The Gilbert–Varshamov bound. Assume q is a prime power. For any n and d, if k is the largest integer satisfying

    q^k < q^n / Σ_{i=0}^{d−2} C(n−1, i)(q − 1)^i,

then A_q(n, d) ≥ q^k.
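The bounds in Facts 8 and 10 are straightforward to evaluate numerically; a small sketch (ours):

    from math import comb

    def sphere_packing_bound(n, d, q=2):
        """Largest M with M * sum_{i=0..t} C(n,i)(q-1)^i <= q^n, t = (d-1)//2."""
        t = (d - 1) // 2
        ball = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
        return q ** n // ball

    def singleton_bound(n, d, q=2):
        """A_q(n, d) <= q^(n-d+1)."""
        return q ** (n - d + 1)

    print(sphere_packing_bound(7, 3))   # 16, met with equality by the [7,4] Hamming code
    print(singleton_bound(7, 3))        # 32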
Examples:
1. Consider the binary [7, 4] code C defined by the parity check matrix

    H = [0 0 0 1 1 1 1]
        [0 1 1 0 0 1 1]
        [1 0 1 0 1 0 1].
This code has 2^4 = 16 elements. Since any two columns of H are linearly independent, but the first three columns are dependent, it follows that d(C) = 3. So this code meets the sphere packing bound with t = 1. Observe that, reading downward, the 7 columns of H form the numbers 1 through 7 in binary. This leads to a very nice nearest-neighbor decoding scheme that corrects any single error. If the received word is y, the syndrome s = yH^T consists of 3 bits. Interpret these 3 bits as a binary number, x. If x is 0, the received word is a codeword and we assume that no errors have occurred. If x is not 0, it is one of the numbers from 1 to 7. In y we change the bit in position x. This will give us a codeword (the unique codeword) at a distance of 1 from y. We presume that this codeword is the intended transmission. For instance, if we receive the vector y = 1100100, we compute the syndrome s = yH^T = 110. Interpreting 110 as a binary number, we obtain x = 6. We change the sixth bit of y and obtain the codeword 1100110. This codeword is at a Hamming distance 1 from y, and we assume it was the intended transmission. (A sketch of this decoder in code appears after these examples.)
2. Assume α_1, ..., α_n are pairwise different nonzero elements of a finite field F. Let

    H := [α_1        α_2        ...  α_n      ]
         [α_1^2      α_2^2      ...  α_n^2    ]
         [⋮                          ⋮        ]
         [α_1^{n−k}  α_2^{n−k}  ...  α_n^{n−k}].

Then any full-size minor is the determinant of a Vandermonde matrix and, hence, nonzero. So H represents the parity check matrix of an MDS code.
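Here is the decoding scheme of Example 1 as a runnable sketch (our own, using numpy); the syndrome, read as a binary number, points at the corrupted position:

    import numpy as np

    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])   # column j reads j in binary

    def decode(y):
        """Correct a single error: flip the bit whose position the syndrome names."""
        s = y @ H.T % 2                      # 3-bit syndrome
        x = 4 * s[0] + 2 * s[1] + s[2]       # read it as a binary number
        if x > 0:
            y = y.copy()
            y[x - 1] ^= 1                    # positions are 1-indexed
        return y

    y = np.array([1, 1, 0, 0, 1, 0, 0])      # received word 1100100
    print(decode(y))                          # -> [1 1 0 0 1 1 0], codeword 1100110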
61.4 Important Classes of Linear Codes 1
Definitions:
Let F = F_q.
Let r be a positive integer. Let P(F^r) be the set of all 1-dimensional subspaces of F^r. Choose a basis vector for each element of P(F^r). Let H be a matrix whose columns are these basis vectors. The matrix H is the parity check matrix of a Hamming code, C.
A cyclic code is a linear code C for which every cyclic shift of a codeword is again a codeword.
Let F[x] be the polynomial ring in one indeterminate over F. Let ⟨x^n − 1⟩ be the ideal generated by x^n − 1. The quotient ring F[x]/⟨x^n − 1⟩ is denoted R_n. The polynomials of R_n are identified with vectors in F^n via

    a_0 + a_1 x + a_2 x^2 + ··· + a_{n−1} x^{n−1} ↦ [a_0  a_1  a_2  ···  a_{n−1}].
Suppose k < n are positive integers and r = n − k. A polynomial g(x) = a_0 + a_1 x + a_2 x^2 + ··· + a_r x^r dividing x^n − 1 ∈ F[x] is a generator polynomial of a cyclic [n, k]-code, C. The polynomial h(x) = b_0 + b_1 x + b_2 x^2 + ··· + b_k x^k chosen so that g(x)h(x) = x^n − 1 is the parity check polynomial of C.
Facts:
1. The ring R_n forms an n-dimensional vector space over F. The monomials {1, x, x^2, ..., x^{n−1}} form a basis. It further has an algebra structure over F (i.e., there is a well-defined multiplication). In fact, polynomials in R_n are multiplied as usual, but x^n = 1 in R_n, so exponents are reduced modulo n.
2. The cardinality of P(F^r) is n = (q^r − 1)/(q − 1). Thus, the parity check matrix, H, of a Hamming code is of size r × n. The corresponding Hamming code C has q^k elements, where k = n − r. Furthermore, the distance d(C) = 3. The Hamming code reaches the sphere packing bound and hence is a perfect code in all cases.
3. [Hil86, pp. 147–148] For cyclic codes, R_n has the structure of a principal ideal ring (any ideal is generated by a single polynomial). Thus, any cyclic code has a generator polynomial g(x), where g(x) is a divisor of x^n − 1. Moreover, every cyclic code has a parity check polynomial, namely h(x) = (x^n − 1)/g(x).
4. [Hil86, Chap. 12] If C is a cyclic code with generator polynomial g(x) = a_0 + a_1 x + a_2 x^2 + ··· + a_r x^r and parity check polynomial h(x) = b_0 + b_1 x + b_2 x^2 + ··· + b_k x^k, then the corresponding generator matrix, G, and parity check matrix, H, of C are given by
    G = [a_0  a_1  a_2  ···  a_r  0    0    ···  0  ]
        [0    a_0  a_1  a_2  ···  a_r  0    ···  0  ]
        [0    0    a_0  a_1  a_2  ···  a_r  ···  0  ]
        [⋮              ⋱                        ⋮  ]
        [0    0    ···  a_0  a_1  a_2  ···       a_r]

and

    H = [b_k  ···  b_2  b_1  b_0  0    0    ···  0  ]
        [0    b_k  ···  b_2  b_1  b_0  0    ···  0  ]
        [0    0    b_k  ···  b_2  b_1  b_0  ···  0  ]
        [⋮              ⋱                        ⋮  ]
        [0    0    ···  b_k  ···  b_2  b_1       b_0].
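The banded structure of G in Fact 4 is easy to build mechanically. A sketch (ours, using numpy) that forms the k × n generator matrix from the coefficients of g(x):

    import numpy as np

    def cyclic_generator_matrix(g, n):
        """Rows are the shifts of g = [a_0, ..., a_r]; the code has dimension k = n - r."""
        r = len(g) - 1
        k = n - r
        G = np.zeros((k, n), dtype=int)
        for i in range(k):
            G[i, i:i + r + 1] = g
        return G

    # g(x) = 1 + x + x^3 divides x^7 - 1 over F2: the binary [7,4] cyclic code
    print(cyclic_generator_matrix([1, 1, 0, 1], 7))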
Examples:
1. For q = 5 and r = 2, we get the [6, 4, 3]-Hamming code over F_5 with parity check matrix

    H = [1 0 1 1 1 1]
        [0 1 1 2 3 4]

and generator matrix

    G = [4 4 1 0 0 0]
        [4 3 0 1 0 0]
        [4 2 0 0 1 0]
        [4 1 0 0 0 1].
2. The binary repetition code of length n is a cyclic code with generator polynomial 1 + x + ··· + x^{n−1}. The binary even weight code is a cyclic code with generator polynomial 1 + x.
3. Over F_2, the polynomial x^7 − 1 = x^7 + 1 = (1 + x)(1 + x + x^3)(1 + x^2 + x^3). Hence, g(x) = 1 + x + x^3 is the generator polynomial of a binary [7, 4]-cyclic code. The corresponding parity check polynomial is h(x) = (1 + x)(1 + x^2 + x^3) = 1 + x + x^2 + x^4. So, the generator matrix G and parity check matrix H for this code are given by

    G = [1 1 0 1 0 0 0]
        [0 1 1 0 1 0 0]
        [0 0 1 1 0 1 0]
        [0 0 0 1 1 0 1]

and

    H = [1 0 1 1 1 0 0]
        [0 1 0 1 1 1 0]
        [0 0 1 0 1 1 1].

Noting that the columns of H represent all the nonzero elements of F_2^3, we see that this code is an equivalent version of the Hamming [7, 4]-perfect code.
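The factorization used in Example 3 can be verified directly; a one-off sketch (ours) multiplies g(x) and h(x) and reduces the coefficients mod 2:

    import numpy as np

    g = [1, 1, 0, 1]        # 1 + x + x^3
    h = [1, 1, 1, 0, 1]     # 1 + x + x^2 + x^4
    product = np.polynomial.polynomial.polymul(g, h).astype(int) % 2
    print(product)           # [1 0 0 0 0 0 0 1], i.e., 1 + x^7 = x^7 - 1 over F2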
61.5 Important Classes of Linear Codes 2
Definitions:
The Golay codes [Gol49] are two cyclic codes defined as follows.
The binary Golay code, G_23, is a [23, 12, 7] binary cyclic code with generator polynomial g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1.
The ternary Golay code, G_11, is an [11, 6, 5] cyclic code over the ternary field F_3 with generator polynomial g(x) = x^5 + 2x^3 + x^2 + 2x + 2.
Denote by F[x; k − 1] the set of all polynomials of degree at most k − 1 over F. Assume F has size |F| > n and let α be a primitive element of F (i.e., α is a generator of the cyclic group F*). Define the linear map (called the evaluation map) by

    ev : F[x; k − 1] → F^n,   f ↦ (f(α), ..., f(α^n)).

The image of this map is a Reed–Solomon code.
Let F = F_q be a fixed finite field, consider the polynomial p(x) = x^n − 1 ∈ F[x], and let the root w of p(x) be a primitive nth root of unity. Let b, d be positive integers. Then

    B_q(n, d, b, w) := {c(x) ∈ F[x; n − 1] | c(w^{b+i}) = 0 for i = 0, ..., d − 2}

is called a Bose–Chaudhuri–Hocquenghem (BCH) code having designed distance d. (Here codewords are identified with polynomials of degree at most n − 1, as for cyclic codes.) In the special case when n = q^m − 1, B_q(n, d, b, w) is a primitive BCH code; and if b = 1, B_q(n, d, b, w) is a narrow sense BCH code.
Facts:
1. Over the binary field F_2,

    x^23 + 1 = (x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1)(x^11 + x^9 + x^7 + x^6 + x^5 + x + 1)(x + 1).

Therefore, the binary Golay code, G_23, has parity check polynomial h(x) = (x^11 + x^9 + x^7 + x^6 + x^5 + x + 1)(x + 1).
2. [Hil86, pp. 153–157] The binary Golay code G_23 is a [23, 12, 7] code having 4096 elements. G_23 is a perfect code.
3. Over the ternary field F_3,

    x^11 − 1 = (x^5 + 2x^3 + x^2 + 2x + 2)(x^5 + x^4 + 2x^3 + x^2 + 2)(x + 2).

Therefore, the ternary Golay code, G_11, has parity check polynomial h(x) = (x^5 + x^4 + 2x^3 + x^2 + 2)(x + 2).
4. [Hil86, pp. 157–159] The Golay code G_11 is a ternary [11, 6, 5] code having 729 elements. G_11 is a perfect code.
5. [Hil86, Theorem 9.5] Here is a complete catalog of perfect linear codes up to equivalence:
(a) The trivial perfect codes: all of F^n, and the binary repetition codes of odd length.
(b) The Hamming codes.
(c) The Golay codes G_23 and G_11.
6. Regarding Reed–Solomon codes, the linear map ev is one-to-one. Hence, the Reed–Solomon code defined above is an [n, k] code. Since a nonzero polynomial of degree at most k − 1 has at most k − 1 zeros, the minimum weight of a nonzero codeword is n − k + 1. It follows that a Reed–Solomon code is MDS. With regard to the natural basis 1, x, ..., x^{k−1} of F[x; k − 1], this code has a generator matrix (a small encoding sketch appears after these facts):

    G := [1        1           ...  1          ]
         [α        α^2         ...  α^n        ]
         [⋮                         ⋮          ]
         [α^{k−1}  α^{2(k−1)}  ...  α^{n(k−1)} ].
7. Regarding BCH codes, it is immediate from the definition that B_q(n, d, b, w) has a parity check matrix of the form

    H := [1  w^b        ...  w^{(n−1)b}      ]
         [1  w^{b+1}    ...  w^{(n−1)(b+1)}  ]
         [⋮                  ⋮               ]
         [1  w^{b+d−2}  ...  w^{(n−1)(b+d−2)}].

We would like to stress that B_q(n, d, b, w) is the F_q kernel of H and that the entries of H are in general not in the base field F_q. Assume w ∈ F_{q^s}, where F_{q^s} is an extension field of F_q. The F_{q^s} kernel of H is a Reed–Solomon code of distance d. As a consequence, B_q(n, d, b, w) has distance at least d.
8. [Rom92, Chap. 8] The BCH code B_q(n, d, b, w) is a cyclic code. For i = 0, ..., d − 2, let m_i(x) ∈ F_q[x] be the minimal polynomial of w^{b+i} and let

    g(x) := lcm{m_i(x) | i = 0, ..., d − 2} ∈ F_q[x].

Then g(x) is a generator of the cyclic code B_q(n, d, b, w) and its dimension is

    dim B_q(n, d, b, w) = n − deg g(x).
9. [Mas69], [FWRT94], [Sud97], [KV03] Reed–Solomon codes and BCH codes can be efficiently decoded up to half the designed distance using the Berlekamp–Massey algorithm [Mas69]. Extensions of this algorithm exist for algebraic geometric codes [FWRT94]. These algorithms have the major drawback that no soft information can be processed: the decoder needs a vector in F^n as an input, and it is not possible to process real numbers as they naturally occur in wireless communication. In 1997 Sudan [Sud97] achieved a major breakthrough when he came up with a polynomial time algorithm for list decoding of Reed–Solomon codes. Since that time the technique has been vastly improved and generalized; a broad overview is given in [Gur04] and [KV03].
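As an illustration of Fact 6, the following sketch (ours) encodes over the prime field F_7 by the evaluation map; the parameter choices α = 3, n = 6, k = 3 are our own:

    # Reed-Solomon encoding over F_7 (parameters are ours for illustration:
    # alpha = 3 generates F_7*, n = 6 < |F| = 7, dimension k = 3).
    p, alpha, n, k = 7, 3, 6, 3

    def rs_encode(message):
        """Evaluate f(x) = m_0 + m_1 x + ... + m_{k-1} x^{k-1} at alpha, ..., alpha^n."""
        def f(x):
            return sum(m * pow(x, i, p) for i, m in enumerate(message)) % p
        return [f(pow(alpha, j, p)) for j in range(1, n + 1)]

    c = rs_encode([1, 2, 3])
    print(c)   # a codeword of the [6, 3, 4] MDS code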
Examples:
1. Algebraic Geometric Codes. In 1977, V.D. Goppa had the idea to vastly generalize the construction techniques of Reed–Solomon codes and BCH codes. Goppa codes became famous in 1982 when Tsfasman, Vladut, and Zink [TVZ82] constructed an infinite family of codes that exceeds the Gilbert–Varshamov bound. (For further details, the reader is referred to the extensive literature on this subject: [Gop81], [Gop88], [LG88], [TV91], and [Wal00].)

Consider the projective plane P^2_F and let V ⊂ P^2_F be a smooth curve of degree θ. This means there is an irreducible homogeneous polynomial ϕ(x, y, z) ∈ F[x, y, z] of degree θ with

    V = {(α, β, γ) ∈ P^2_F | ϕ(α, β, γ) = 0}.

The smoothness is characterized by the fact that there is no point on the curve where the partial derivatives vanish simultaneously. Γ(V) := F[x, y, z]/⟨ϕ⟩ is called the homogeneous coordinate ring. As Γ(V) is a domain, there is a well-defined quotient field k(V), which is called the homogeneous function field. (See, e.g., [Ful89, p. 91].)
If f ∈ k(V), then f has an associated divisor

    (f) := Σ_{P∈V} ord_P(f) · P,

where ord_P(f) is m if f has a zero of order m at P and −m if f has a pole of order m at P.
Let D = P_1 + ··· + P_n be a divisor on V of degree n. Define the vector space

    L(D) := {f ∈ k(V)* | (f) + D ≥ 0} ∪ {0}.
Let G be a divisor on V having support disjoint from D. Consider the linear map

    ev : L(G) → F^n,   f ↦ (f(P_1), ..., f(P_n)).

The image C(D, G) of the linear evaluation map ev is called an algebraic geometric Goppa (AG) code. Note that F[x; k − 1] is a linear space of the form L(G), and in this way Reed–Solomon codes are AG codes.
Let C(D, G) be an algebraic geometric Goppa code defined from a curve of degree θ. If deg G < n, then

    dim C(D, G) = dim L(G) − dim L(G − D) ≥ deg G + 1 − (θ − 1)(θ − 2)/2,

and for the distance one has the lower bound

    d(C(D, G)) ≥ n − deg G.

Note that (θ − 1)(θ − 2)/2 is the genus of the curve. One way to construct AG codes with good distance parameters is, therefore, to construct curves of low genus having many F_q rational points.
2. Low density parity check codes. A class of linear codes of tremendous practical importance was introduced by R.G. Gallager in his dissertation [Gal63]. These binary linear codes were called low density parity check (LDPC) codes by the author and are characterized by the property that the code can be written as the kernel of a very sparse matrix. For example, Gallager studied codes whose parity check matrix had three 1s in every column and six 1s in every row, chosen randomly.

LDPC codes come with the remarkable property that efficient decoding algorithms exist whose complexity increases linearly with the code length [RU01]. One of the simplest algorithms is the bit flipping algorithm. For this, assume that H is a sparse parity check matrix, a code word v was sent, and a vector y = v + e was received. The algorithm then checks bit-by-bit whether the weight of the syndrome vector yH^T increases or decreases if the bit under consideration is flipped. The bit flipping algorithm is one of the simplest types of message passing algorithms; a sketch follows these examples.
LDPC codes were later generalized by R.M. Tanner [Tan81], who defined linear codes through graphs. A good overview of this active area of research can be found in [MR01]. The class of LDPC codes and codes on graphs became prominent again after the discovery of Turbo codes [BGT93].
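A minimal sketch of the bit flipping idea from Example 2 (our own greedy variant; practical LDPC decoders are more refined), reusing the [7, 4] Hamming parity check matrix for brevity even though it is not sparse in any practical sense:

    import numpy as np

    def bit_flip_decode(H, y, max_iters=50):
        """Repeatedly flip the single bit whose flip most reduces wt(yH^T)."""
        y = y.copy()
        for _ in range(max_iters):
            s = H @ y % 2
            if not s.any():
                return y                                  # all checks satisfied
            # syndrome weight that would result from flipping bit j
            weights = [((s + H[:, j]) % 2).sum() for j in range(len(y))]
            j = int(np.argmin(weights))
            if weights[j] >= s.sum():
                return y                                  # no single flip helps
            y[j] ^= 1
        return y

    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])
    y = np.array([1, 1, 0, 0, 1, 0, 0])    # codeword 1100110 with bit 6 flipped
    print(bit_flip_decode(H, y))           # -> [1 1 0 0 1 1 0]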
61.6 Convolutional Codes
Aside from linear block codes, the codes most used in engineering practice are convolutional codes. For this, consider the situation where a large amount of data has to be encoded by a linear block code. Let m_0, ..., m_τ ∈ F^k be τ + 1 blocks of messages to be transmitted and assume G is the generator matrix of a linear [n, k] code. The encoding scheme

    m_i ↦ m_i G = x_i,   i = 0, ..., τ,

can be compactly written in the form

    m(z) ↦ m(z)G = x(z),

where

    m(z) := Σ_{i=0}^{τ} m_i z^i ∈ F^k[z]   and   x(z) := Σ_{i=0}^{τ} x_i z^i ∈ F^n[z].

Elias [Eli55] had the original idea to replace in this encoding scheme the generator matrix G with a k × n matrix G(z) whose entries are elements of the polynomial ring F[z].
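Before the formal definitions, a small sketch (ours) shows Elias’s idea in action over F_2 with the illustrative generator matrix G(z) = [1 + z, z, 1], a rate 1/3 code of degree 1 that we made up for this example; each output block depends on the current and the previous message block:

    import numpy as np

    G0 = np.array([[1, 0, 1]])    # coefficient matrix of z^0 in G(z) = [1+z, z, 1]
    G1 = np.array([[1, 1, 0]])    # coefficient matrix of z^1

    def conv_encode(message_bits):
        """x_i = m_i G0 + m_{i-1} G1 over F2 (with m_{-1} = 0)."""
        prev = np.zeros(1, dtype=int)
        blocks = []
        for b in message_bits:
            m = np.array([b])
            blocks.append((m @ G0 + prev @ G1) % 2)
            prev = m
        return np.concatenate(blocks)

    print(conv_encode([1, 0, 1, 1]))   # -> [1 0 1 1 1 0 1 0 1 0 1 1]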
Definitions:
Consider the polynomial ring R := F[z]. A submodule C ⊆ R^n is called a convolutional code. The rank of the code C is the rank, k, of C considered as an R-module. C is an [n, k]-convolutional code. The rate of C is k/n.

A generator matrix for the rate k/n convolutional code C is a k × n polynomial matrix G with entries in R such that

    C = {mG = x | m ∈ R^k, x ∈ R^n}.

The degree of the convolutional code C with generator matrix G is defined to be the largest degree of any k × k minor of G.

Let x(z) = Σ_{i=0}^{τ} x_i z^i ∈ R^n be a code vector. Define the weight of x(z) by

    wt(x(z)) := Σ_{i=0}^{τ} wt(x_i).
The free distance of the convolutional code C is defined as

    d_free(C) := min{wt(x(z)) | 0 ≠ x(z) ∈ C}.    (61.1)
Facts:
1. [Hun74] R is a principal ideal domain and C is, therefore, a free R-module. As such, C has a well-defined rank k, and a k × n generator matrix G with entries in R, such that

    C = {mG = x | m ∈ R^k, x ∈ R^n}.

2. If G_1 and G_2 are both k × n generator matrices of the same convolutional code C, then there exists a unimodular matrix U ∈ GL_k(R) such that G_2 = U G_1.
3. The degree of a convolutional code is well-defined (i.e., it does not depend on the particular generator matrix chosen). In the literature the degree is sometimes also called the total memory [LC83]
or the overall constraint length [JZ99] or the complexity [Pir88].
4. The free distance of a convolutional code measures the smallest distance between any two different
code words.
5. A major design problem in the area of convolutional codes is the construction of [n, k] codes of degree δ whose free distance is maximal or near maximal. This question constitutes the generalization
of the main linear coding problem.
6. If C is an [n, k] convolutional code having G(z) as a generator matrix, then there exists an (n − k) × n parity check matrix H(z) with entries in R. H(z) is characterized by the property that G(z)H(z)^T = 0_{k×(n−k)}.
7. Convolutional codes of degree zero correspond to linear block codes.
8. Convolutional codes of small degree can be efficiently decoded using the Viterbi decoding algorithm [LC83, Chap. 11]. This algorithm has the distinct advantage that soft decisions can be processed, and due to this fact convolutional codes became the codes of choice for many wireless applications.
9. The free distance of an [n, k] convolutional code of degree δ must satisfy the generalized Singleton bound [RS99]:

    d_free(C) ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1.
10. Convolutional codes achieving the generalized Singleton bound are called MDS convolutional
codes. Such codes exist as soon as the field size is large enough [RS99].
11. The literature on convolutional codes is not consistent in terms of terminology and definitions.
Classical articles are [MS67] and [For70]. The articles [McE98] and [Ros01] provide surveys on
recent developments and further connections. Generalizations to multidimensional convolutional
codes were considered and the reader will find further reading and references in [GLRW00].
In contrast to the theory of linear block codes, there are not so many algebraic construction
techniques of convolutional codes with a good designed distance and even fewer constructions of
codes that can be efficiently decoded. The reader will find algebraic constructions of convolutional
codes in [GLS04], [Jus75], [Pir88], [RSY96], [SGLR01], and in the bibliography of these articles.
Examples:
1. Consider the finite field F_5. The generator matrix G(z) = (z, z + 1, z + 2, z + 3) defines an [n, k] = [4, 1] convolutional code C of degree δ = 1 and distance d_free(C) = 7. The generalized Singleton bound for these parameters n, k, and δ is 8, and the defined code is, therefore, not an MDS convolutional code.
2. [RS99, Ex. 2.13] Consider the finite field F_7 and the generator matrix

    G(z) = [z^2 + 1   3z^2 + 1   5z^2 + 1]
           [z − 1     z − 2      2z − 3  ].

Then G(z) defines an [n, k] = [3, 2] convolutional code C of degree δ = 3 and distance d_free(C) = 6. The generalized Singleton bound for these parameters n, k, and δ is 6, and the defined code is, therefore, an MDS convolutional code.
References
[BGT93] C. Berrou, A. Glavieux, and P. Thitimajshima. Near Shannon limit error-correcting coding and
decoding: Turbo-codes. In Proc. of IEEE Int. Conference on Communication, pp. 1064–1070, Geneva,
Switzerland, May 1993.
[Bla03] R.E. Blahut. Algebraic Codes for Data Transmission. Cambridge University Press, Cambridge, 2003.
[Eli55] P. Elias. Coding for noisy channels. IRE Conv. Rec., 4:37–46, 1955.
[For70] G.D. Forney, Jr. Convolutional codes I: Algebraic structure. IEEE Trans. Inform. Theory, IT-16(5):720–738, 1970.
[Ful89] W. Fulton. Algebraic Curves. Addison Wesley, Reading, MA, 1989. (Originally published by
Benjamin/Cummings in 1969.)
[FWRT94] G.L. Feng, V.K. Wei, T.R.N. Rao, and K. Tzeng. Simplified understanding and efficient decoding
of algebraic geometric codes. IEEE Trans. Inform. Theory, IT-40(4):981–1002, 1994.
[Gal63] R.G. Gallager. Low-Density Parity Check Codes. M.I.T. Press, Cambridge, MA, 1963. Number 21
in Research monograph series.
[GLRW00] H. Gluesing-Luerssen, J. Rosenthal, and P. A. Weiner. Duality between multidimensional
convolutional codes and systems. In F. Colonius, U. Helmke, F. Wirth, and D. Prätzel-Wolters, Eds.,
Advances in Mathematical Systems Theory, A Volume in Honor of Diederich Hinrichsen, pp. 135–150.
Birkhauser, Boston, 2000.
[GLS04] H. Gluesing-Luerssen and W. Schmale. On cyclic convolutional codes. Acta Appl. Math., 82:183–237, 2004.
[Gol49] M.J.E. Golay. Notes on digital coding. Proc. IEEE, 37:657, 1949.
[Gop81] V.D. Goppa. Codes on algebraic curves. Soviet Math. Dolk., 24(1):170–172, 1981.
[Gop88] V.D. Goppa. Geometry and Codes. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1988.
[Gur04] V. Guruswami. List Decoding of Error-Correcting Codes, Vol. 3282 of Lecture Notes in Computer
Science. Springer, Heidelberg, 2004.
[Hil86] R. Hill. A First Course in Coding Theory. Oxford Applied Mathematics and Computing Science
Series. The Clarendon Press/Oxford University Press, New York, 1986.
[HP03] W.C. Huffman and V. Pless. Fundamentals of Error-Correcting Codes. Cambridge University Press,
Cambridge, 2003.
[Hun74] T.W. Hungerford. Algebra. Graduate Texts in Mathematics. Springer, New York, 1974.
[Jus75] J. Justesen. An algebraic construction of rate 1/ν convolutional codes. IEEE Trans. Inform. Theory,
IT-21(1):577–580, 1975.
[JZ99] R. Johannesson and K.Sh. Zigangirov. Fundamentals of Convolutional Coding. IEEE Press, New
York, 1999.
[KV03] R. Koetter and A. Vardy. Algebraic soft-decision decoding of Reed–Solomon codes. IEEE Trans.
Inform. Theory, 49(11):2809–2825, 2003.
[LC83] S. Lin and D.J. Costello, Jr. Error Control Coding: Fundamentals and Applications. Prentice-Hall,
Upper Saddle River, NJ, 1983.
61-14
Handbook of Linear Algebra
[LG88] J.H. van Lint and G. van der Geer. Introduction to Coding Theory and Algebraic Geometry. Birkhäuser
Verlag, Basel, 1988.
[Lin82] J.H. van Lint. Introduction to Coding Theory. Springer-Verlag, Berlin, New York, 1982.
[Mas69] J.L. Massey. Shift-register synthesis and BCH decoding. IEEE Trans. Inform. Theory, IT-15:122–
127, 1969.
[McE98] R.J. McEliece. The algebraic theory of convolutional codes. In V. Pless and W.C. Huffman, Eds.,
Handbook of Coding Theory, Vol. 1, pp. 1065–1138. Elsevier Science Publishers, Amsterdam, The
Netherlands, 1998.
[MR01] B. Marcus and J. Rosenthal, Eds. Codes, Systems and Graphical Models. IMA Vol. 123. Springer-Verlag, New York, 2001.
[MS67] J.L. Massey and M.K. Sain. Codes, automata, and continuous systems: explicit interconnections.
IEEE Trans. Automat. Contr., AC-12(6):644–650, 1967.
[MS77] F.J. MacWilliams and N.J.A. Sloane. The Theory of Error-Correcting Codes. North Holland,
Amsterdam, 1977.
[PHB98] V.S. Pless, W.C. Huffman, and R.A. Brualdi, Eds. Handbook of Coding Theory. Vol. I, II. North-Holland, Amsterdam, 1998.
[Pir88] Ph. Piret. Convolutional Codes, an Algebraic Approach. MIT Press, Cambridge, MA, 1988.
[Rom92] S. Roman. Coding and Information Theory. Graduate Texts in Mathematics. Springer-Verlag,
New York/Berlin, 1992.
[Ros01] J. Rosenthal. Connections between linear systems and convolutional codes. In B. Marcus and
J. Rosenthal, Eds., Codes, Systems and Graphical Models, IMA Vol. 123, pp. 39–66. Springer-Verlag,
Heidelberg, 2001.
[RS99] J. Rosenthal and R. Smarandache. Maximum distance separable convolutional codes. Appl. Alg.
Eng. Comm. Comp., 10(1):15–32, 1999.
[RSY96] J. Rosenthal, J.M. Schumacher, and E.V. York. On behaviors and convolutional codes. IEEE Trans.
Inform. Theory, 42(6, part 1):1881–1891, 1996.
[RU01] T. Richardson and R. Urbanke. An introduction to the analysis of iterative coding systems. In
B. Marcus and J. Rosenthal, Eds., Codes, Systems and Graphical Models, IMA Vol. 123, pp. 1–37.
Springer-Verlag, Heidelberg, 2001.
[SGLR01] R. Smarandache, H. Gluesing-Luerssen, and J. Rosenthal. Constructions for MDS-convolutional
codes. IEEE Trans. Inform. Theory, 47(5):2045–2049, 2001.
[Sud97] M. Sudan. Decoding of Reed–Solomon codes beyond the error-correction bound. J. Complex.,
13(1):180–193, 1997.
[Tan81] R.M. Tanner. A recursive approach to low complexity codes. IEEE Trans. Inform. Theory, 27(5):533–
547, 1981.
[TV91] M.A. Tsfasman and S.G. Vlǎduţ. Algebraic Geometric Codes. Mathematics and Its Applications. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1991.
[TVZ82] M.A. Tsfasman, S.G. Vlǎduţ, and Th. Zink. On Goppa codes which are better than the Varshamov–
Gilbert bound. Math. Nachr., 109:21–28, 1982.
[Wal00] J.L. Walker. Codes and Curves. American Mathematical Society, Providence, RI, 2000. IAS/Park
City Mathematical Subseries.
Web Resources
1. http://www.win.tue.nl/~aeb/voorlincod.html (A.E. Brouwer’s Web site on bounds on the minimum distance of linear codes).
2. http://www.eng.tau.ac.il/~litsyn/tableand/ (Simon Litsyn’s table of nonlinear binary codes).
3. http://www.research.att.com/~njas/codes/ (Neil J.A. Sloane’s Web site of linear [and some nonlinear] codes over various fields).