Detailed instructions for use are in the User's Guide.
[. . . ] Neural Network Toolbox™ 6 User's Guide
Howard Demuth, Mark Beale, Martin Hagan
How to Contact The MathWorks

Web: www.mathworks.com
Newsgroup: comp.soft-sys.matlab
Technical support: www.mathworks.com/contact_TS.html
Product enhancement suggestions: suggest@mathworks.com
Bug reports: bugs@mathworks.com
Documentation error reports: doc@mathworks.com
Order status, license renewals, passcodes: service@mathworks.com
Sales, pricing, and general information: info@mathworks.com
Phone: 508-647-7000
Fax: 508-647-7001
Mail: The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098

For contact information about worldwide offices, see the MathWorks Web site.
Neural Network Toolbox™ User's Guide
© COPYRIGHT 1992-2010 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

[. . . ] For a net input of n = 0, the equation Wp + b = 0 specifies such a decision boundary, as shown below (adapted with thanks from [HDB96]).
[Figure: the decision boundary Wp + b = 0 in the input plane, separating the region where a > 0 from the region where a < 0; the boundary crosses the p1 axis at -b/w1,1 and the p2 axis at -b/w1,2.]
Input vectors in the upper right gray area lead to an output greater than 0. Input vectors in the lower left white area lead to an output less than 0. Thus, the ADALINE can be used to classify objects into two categories.
Adaptive Filters and Adaptive Training
However, the ADALINE can classify objects in this way only when the objects are linearly separable. Thus, the ADALINE has the same limitation as the perceptron. You can create a network similar to the one shown with the following command:
net = newlin([-1 1; -1 1], 1);
The first matrix of arguments specifies typical two-element input vectors, and the last argument 1 indicates that the network has a single output. The network weights and biases are set to zero, by default. You can see the current values using the commands:
W = net.IW{1,1}
W =
     0     0
and
b = net.b{1}
b =
     0
You can also assign arbitrary values to the weights and bias, such as 2 and 3 for the weights and -4 for the bias:
net.IW{1,1} = [2 3];
net.b{1} = -4;
You can simulate the ADALINE for a particular input vector:
p = [5; 6];
a = sim(net,p)
a =
    24
To summarize, you can create an ADALINE network with newlin, adjust its elements as you want, and simulate it with sim. You can find more about newlin by typing help newlin.
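For readers following along without MATLAB, the same computation can be sketched in Python/NumPy. This is a minimal stand-in for newlin and sim, not the toolbox implementation; the Adaline class and its method names are illustrative only.

```python
import numpy as np

class Adaline:
    """Minimal single-output linear network: a = Wp + b."""
    def __init__(self, n_inputs):
        self.W = np.zeros(n_inputs)  # weights default to zero, as with newlin
        self.b = 0.0                 # bias defaults to zero

    def sim(self, p):
        # Linear (purelin) output: a = Wp + b
        return self.W @ p + self.b

net = Adaline(2)
net.W = np.array([2.0, 3.0])   # same illustrative weights as above
net.b = -4.0                   # same illustrative bias as above
a = net.sim(np.array([5.0, 6.0]))
print(a)  # 24.0, matching the MATLAB sim example above
```

Because the network is purely linear, the output is just the inner product of the weight row with the input, plus the bias.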
Least Mean Square Error
Like the perceptron learning rule, the least mean square error (LMS) algorithm is an example of supervised training, in which the learning rule is provided with a set of examples of desired network behavior:

{p1, t1}, {p2, t2}, ..., {pQ, tQ}

Here pq is an input to the network, and tq is the corresponding target output. As each input is applied to the network, the network output is compared to the target. The error is calculated as the difference between the target output and the network output. The goal is to minimize the average of the sum of these squared errors.
mse = (1/Q) Σ(k=1..Q) e(k)^2 = (1/Q) Σ(k=1..Q) (t(k) - a(k))^2
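As a concrete check of the formula, the mean square error for a small set of targets and outputs can be computed directly. This is a Python/NumPy sketch with made-up target and output values, not toolbox code.

```python
import numpy as np

t = np.array([1.0, -1.0, 1.0])   # targets t(k), illustrative values
a = np.array([0.5, -0.5, 1.0])   # network outputs a(k), illustrative values

e = t - a                        # errors e(k) = t(k) - a(k)
mse = np.mean(e ** 2)            # (1/Q) * sum of squared errors
print(mse)  # (0.25 + 0.25 + 0) / 3 = 0.1666...
```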
The LMS algorithm adjusts the weights and biases of the ADALINE so as to minimize this mean square error. Fortunately, the mean square error performance index for the ADALINE network is a quadratic function. Thus, the performance index will either have one global minimum, a weak minimum, or no minimum, depending on the characteristics of the input vectors. Specifically, the characteristics of the input vectors determine whether or not a unique solution exists. You can learn more about this topic in Chapter 10 of [HDB96].
LMS Algorithm (learnwh)
Adaptive networks use the LMS, or Widrow-Hoff, learning algorithm, which is based on an approximate steepest descent procedure. Here again, adaptive linear networks are trained on examples of correct behavior. The LMS algorithm, shown below, is discussed in detail in Chapter 4, "Linear Filters."

W(k+1) = W(k) + 2α e(k) pT(k)
b(k+1) = b(k) + 2α e(k)

Here α is the learning rate and e(k) = t(k) - a(k) is the error at step k.
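The update equations can be sketched as a simple training loop in Python/NumPy. The learning rate value, the random training data, and the target function below are illustrative assumptions, not taken from the toolbox.

```python
import numpy as np

def lms_step(W, b, p, t, alpha):
    """One Widrow-Hoff update: W <- W + 2*alpha*e*p', b <- b + 2*alpha*e."""
    e = t - (W @ p + b)            # error for this example
    W = W + 2 * alpha * e * p      # weight update
    b = b + 2 * alpha * e          # bias update
    return W, b

# Learn the (illustrative) target function a = 2*p1 + 3*p2 - 4 from examples
rng = np.random.default_rng(0)
W, b, alpha = np.zeros(2), 0.0, 0.01
for _ in range(2000):
    p = rng.uniform(-1, 1, size=2)
    t = 2 * p[0] + 3 * p[1] - 4
    W, b = lms_step(W, b, p, t, alpha)

print(np.round(W, 2), round(b, 2))  # approaches [2. 3.] and -4.0
```

Because the target function here is exactly linear, the error is driven to zero and the weights converge to the true values; with a noisy target they would instead hover near the least-squares solution.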
Adaptive Filtering (adapt)
The ADALINE network, much like the perceptron, can only solve linearly separable problems. It is, however, one of the most widely used neural networks found in practical applications. Adaptive filtering is one of its major application areas.
Tapped Delay Line
You need a new component, the tapped delay line, to make full use of the ADALINE network. Such a delay line is shown in the next figure. The input signal enters from the left and passes through N-1 delays.

[. . . ]

- A transfer function that maps inputs greater than or equal to 0 to +1, and all other values to -1.
- A transfer function that produces the input as its output as long as the input is in the range -1 to 1; outside that range, the output is -1 or +1, respectively.
- A squashing function of the form shown below that maps the input to the interval (-1, 1).

[. . . ]
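Assuming these three descriptions correspond to the toolbox's hardlims, satlins, and tansig transfer functions (an inference from the descriptions, since the names are elided here), they can be sketched in Python/NumPy as:

```python
import numpy as np

def hardlims(n):
    """+1 for n >= 0, otherwise -1 (symmetric hard limit)."""
    return np.where(n >= 0, 1.0, -1.0)

def satlins(n):
    """Identity on [-1, 1]; saturates at -1 below and +1 above."""
    return np.clip(n, -1.0, 1.0)

def tansig(n):
    """Squashing function mapping inputs into the open interval (-1, 1)."""
    return np.tanh(n)  # mathematically equivalent to the hyperbolic tangent

n = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(hardlims(n))  # [-1. -1.  1.  1.  1.]
print(satlins(n))   # clipped to [-1, -0.5, 0, 0.5, 1]
```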