Detailed instructions for use are in the User's Guide.
[. . . ] Communications Toolbox™ 4 User's Guide
© COPYRIGHT 1996-2010 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. [. . . ] For example, the commands
[g,t] = bchgenpoly(31,16);
t

t =

     3
find that a [31, 16] BCH code can correct up to three errors in each codeword.
Finding Generator and Parity-Check Matrices
To find a parity-check and generator matrix for a Hamming code with codeword length 2^m-1, use the hammgen function as follows. The input m must be at least three.
[parmat, genmat] = hammgen(m); % Hamming
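To see the shapes involved, here is a minimal sketch using the hypothetical choice m = 3, which gives the [7,4] Hamming code:

```matlab
m = 3;                         % smallest allowed value; chosen for illustration
[parmat, genmat] = hammgen(m); % parity-check and generator matrices
size(parmat)                   % 3-by-7: (n-k)-by-n with n = 2^m-1 = 7, k = 4
size(genmat)                   % 4-by-7: k-by-n
```

For larger m, the matrices grow accordingly: n = 2^m-1 columns, with n-k = m parity-check rows.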
To find a parity-check and generator matrix for a cyclic code, use the cyclgen function. You must provide the codeword length and a valid generator polynomial. You can use the cyclpoly function to produce one possible generator polynomial after you provide the codeword length and message length. For example,
[parmat, genmat] = cyclgen(7, cyclpoly(7, 4)); % Cyclic
Converting Between Parity-Check and Generator Matrices
The gen2par function converts a generator matrix into a parity-check matrix, and vice versa. The reference page for gen2par contains examples to illustrate this.
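The conversion can be sketched as a round trip; this minimal example assumes the standard-form matrices that hammgen returns:

```matlab
[parmat, genmat] = hammgen(3);   % [7,4] Hamming code in standard form
parmat2 = gen2par(genmat);       % generator matrix -> parity-check matrix
genmat2 = gen2par(parmat);       % parity-check matrix -> generator matrix
isequal(parmat2, parmat)         % expected true for standard-form matrices
```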
Selected Bibliography for Block Coding
[1] Berlekamp, Elwyn R., Algebraic Coding Theory, New York, McGraw-Hill, 1968.
[2] Clark, George C., Jr., and J. Bibb Cain, Error-Correction Coding for Digital Communications, New York, Plenum Press, 1981.
[3] Lin, Shu, and Daniel J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Englewood Cliffs, NJ, Prentice-Hall, 1983.
[4] Peterson, W. Wesley, and E. J. Weldon, Jr., Error-Correcting Codes, 2nd ed., Cambridge, MA, MIT Press, 1972.
[5] van Lint, J. H., Introduction to Coding Theory, New York, Springer-Verlag, 1982.
[6] Wicker, Stephen B., Error Control Systems for Digital Communication and Storage, Upper Saddle River, NJ, Prentice Hall, 1995.
[7] Gallager, Robert G., Low-Density Parity-Check Codes, Cambridge, MA, MIT Press, 1963.
[8] Ryan, William E., "An Introduction to LDPC Codes," Coding and Signal Processing for Magnetic Recording Systems (Vasic, B., ed.), CRC Press, 2004.
Error Detection and Correction
Convolutional Coding
In this section...
"Section Overview" on page 7-32
"Convolutional Coding Features of the Toolbox" on page 7-32
"Polynomial Description of a Convolutional Encoder" on page 7-32
"Trellis Description of a Convolutional Encoder" on page 7-36
"Creating and Decoding Convolutional Codes" on page 7-39
"Examples of Convolutional Coding" on page 7-42
"Selected Bibliography for Convolutional Coding" on page 7-45
Section Overview
Convolutional coding is a special case of error-control coding. Unlike a block coder, a convolutional coder is not a memoryless device. Even though a convolutional coder accepts a fixed number of message symbols and produces a fixed number of code symbols, its computations depend not only on the current set of input symbols but on some of the previous input symbols.
Convolutional Coding Features of the Toolbox
Communications Toolbox supports feedforward or feedback convolutional codes that can be described by a trellis structure or a set of generator polynomials. It uses the Viterbi algorithm to implement hard-decision and soft-decision decoding. For background information about convolutional coding, see the works listed in "Selected Bibliography for Convolutional Coding" on page 7-45.
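As a concrete sketch of the encode/decode workflow, the parameters below are illustrative rather than taken from the text; [171 133] octal is a common rate-1/2, constraint-length-7 choice:

```matlab
trellis = poly2trellis(7, [171 133]);   % rate-1/2 feedforward code (assumed parameters)
msg = randi([0 1], 100, 1);             % random message bits
code = convenc(msg, trellis);           % convolutional encoding
decoded = vitdec(code, trellis, 32, 'trunc', 'hard');  % hard-decision Viterbi, traceback 32
isequal(decoded, msg)                   % expected 1 over an error-free channel
```

For soft-decision decoding, the received values are first quantized and 'hard' is replaced by 'soft' together with the number of soft-decision bits.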
Polynomial Description of a Convolutional Encoder
A polynomial description of a convolutional encoder describes the connections among shift registers and modulo 2 adders. For example, the figure below depicts a feedforward convolutional encoder that has one input, two outputs, and two shift registers.
[Figure: feedforward convolutional encoder. The input feeds two cascaded shift registers (z^-1); one modulo-2 adder forms the first output and another forms the second output.]
A polynomial description of a convolutional encoder has either two or three components, depending on whether the encoder is a feedforward or feedback type:
· Constraint lengths
· Generator polynomials
· Feedback connection polynomials (for feedback encoders only)
Constraint Lengths
The constraint lengths of the encoder form a vector whose length is the number of inputs in the encoder diagram. The elements of this vector indicate the number of bits stored in each shift register, including the current input bits. In the figure above, the constraint length is three. It is a scalar because the encoder has one input stream, and its value is one plus the number of shift registers for that input.
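For the one-input encoder above, a sketch of how the constraint length enters poly2trellis; the octal generators [6 7] are an assumption about the figure's tap connections, not given in the text:

```matlab
constraintLength = 3;                             % 1 + number of shift registers
trellis = poly2trellis(constraintLength, [6 7]);  % one input, two outputs (assumed taps)
code = convenc([1 1 0 1], trellis);               % two output bits per input bit
```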
Generator Polynomials
If the encoder diagram has k inputs and n outputs, the code generator matrix is a k-by-n matrix. [. . . ]
b = transmitted bit (one of the K bits in an M-ary constellation, assuming all M points are equally probable, where K = log2(M))
S0 = ideal symbols/constellation points with bit 0 (at the given bit position)
S1 = ideal symbols/constellation points with bit 1 (at the given bit position)
sx = Inphase (or X) coordinate of ideal symbols/constellation points.
Algorithms
sy = Quadrature (or Y) coordinate of ideal symbols/constellation points.
σ² = noise variance. This composite noise variance is the sum of the noise components along the Inphase axis and Quadrature axis, which are assumed to be independent and of equal power.
For these two bits, as we are looking along the in-phase axis, it can be simplified as:
L(b) = \log \frac{\sum_{s \in S_0} e^{-\frac{1}{\sigma^2}(x - s_x)^2}}{\sum_{s \in S_1} e^{-\frac{1}{\sigma^2}(x - s_x)^2}}
In the summation terms in the numerator and denominator, the effect of the nearest point would outweigh the rest of the points. Hence, this can be further simplified to:
L(b) \approx -\frac{1}{\sigma^2}\left((x - s_{x0})^2 - (x - s_{x1})^2\right)
where s_{x0} is the nearest ideal constellation point whose bit is 0 at the given position, and s_{x1} is the nearest whose bit is 1. Consider the case in which there is no noise in the received signal and the constellation point with 00 mapping is received. From the previous figure, it is clear that the MSB has the best noise resistance against error. [. . . ]
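The nearest-point simplification can be sketched numerically. All values here are hypothetical: a 4-point in-phase slice with coordinates [-3 -1 1 3] and MSB mapping [0 0 1 1]:

```matlab
x = 0.8;  sigma2 = 0.5;          % received in-phase value and noise variance (assumed)
sx  = [-3 -1 1 3];               % ideal in-phase coordinates (assumed)
bit = [ 0  0 1 1];               % MSB of each point (assumed mapping)
d0 = min((x - sx(bit==0)).^2);   % squared distance to nearest bit-0 point
d1 = min((x - sx(bit==1)).^2);   % squared distance to nearest bit-1 point
L  = -(d0 - d1) / sigma2;        % approximate LLR; positive values favor bit 0
```

With these numbers the nearest bit-1 point (at 1) is closer than the nearest bit-0 point (at -1), so d0 > d1 and L is negative, favoring bit 1, as expected for a received value of 0.8.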