1. NARX =
Nonlinear autoregressive with exogenous (external) input
y(t) = f(y(t – 1), ..., y(t – d), x(t – 1), ..., x(t – d))
2. NAR =
Nonlinear autoregressive
y(t) = f(y(t – 1), ..., y(t – d))
3. NARX without feedback (open loop), same form as in point 1.
---------------------------------------------------------------------------------
Static, feedforward-dynamic, and recurrent-dynamic networks
1. Static:
p = {0 0 1 1 1 1 0 0 0 0 0 0};
stem(cell2mat(p))
net = linearlayer;
net.inputs{1}.size = 1;
net.layers{1}.dimensions = 1;
net.biasConnect = 0;
net.IW{1,1} = 2;   % single input weight of 2, so the output is twice the input
view(net)
Simulate:
a = net(p);
stem(cell2mat(a))
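A static network has no memory: each output sample depends only on the current input sample. The same computation, sketched in NumPy for readers without the toolbox:

```python
import numpy as np

# The pulse input from the example above
p = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=float)

# A static linear network with weight 2 and no bias is just an
# elementwise scaling: a(t) = 2 * p(t). No past samples are used.
w = 2.0
a = w * p
print(a)  # the pulse, doubled: 0 0 2 2 2 2 0 0 0 0 0 0
```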
2. Feedforward-dynamic:
net = linearlayer([0 1]);
net.inputs{1}.size = 1;
net.layers{1}.dimensions = 1;
net.biasConnect = 0;
net.IW{1,1} = [1 1];   % weights on delays [0 1]: a(t) = p(t) + p(t-1)
view(net)
a = net(p);
stem(cell2mat(a))
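With delays [0 1] and weights [1 1], the network computes a(t) = p(t) + p(t−1): a tapped delay line, i.e. an FIR filter. A NumPy sketch of the same recursion (delay line initialized to zero, as in the toolbox):

```python
import numpy as np

p = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=float)

# Tapped delay line with delays [0 1] and weights [1 1]:
# a(t) = 1*p(t) + 1*p(t-1), with p(-1) taken as 0 (empty delay state).
a = np.zeros_like(p)
for t in range(len(p)):
    prev = p[t - 1] if t > 0 else 0.0
    a[t] = 1.0 * p[t] + 1.0 * prev
print(a)  # 0 0 1 2 2 2 1 0 0 0 0 0
```

Note the response outlasts the pulse by exactly one sample and then returns to zero: a finite impulse response.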
3. Recurrent-dynamic network
net = narxnet(0,1,[],'closed');
net.inputs{1}.size = 1;
net.layers{1}.dimensions = 1;
net.biasConnect = 0;
net.LW{1} = .5;   % feedback (layer) weight
net.IW{1} = 1;    % input weight, so a(t) = p(t) + 0.5*a(t-1)
view(net)
a = net(p);
stem(cell2mat(a))
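The closed-loop network feeds its own delayed output back with weight 0.5, giving a(t) = p(t) + 0.5·a(t−1). A NumPy sketch of that recursion on the same pulse:

```python
import numpy as np

p = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=float)

# Recurrent network: input weight 1, feedback weight 0.5,
# a(t) = p(t) + 0.5*a(t-1), with a(-1) = 0.
a = np.zeros_like(p)
prev = 0.0
for t in range(len(p)):
    a[t] = p[t] + 0.5 * prev
    prev = a[t]
print(a)  # after the pulse ends, the output halves each step but never hits zero
```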
Notice that recurrent-dynamic networks typically have a longer response than
feedforward-dynamic networks. For linear networks, feedforward-dynamic networks
are called finite impulse response (FIR), because the response to an impulse input will
become zero after a finite amount of time. Linear recurrent-dynamic networks are called
infinite impulse response (IIR), because the response to an impulse can decay to zero (for
a stable network), but it will never become exactly equal to zero. An impulse response for
a nonlinear network cannot be defined, but the ideas of finite and infinite responses do
carry over.
Focused time-delay neural network (FTDNN)
Notice that the input to the network is the same as the target.
ftdnn_net = timedelaynet([1:8],10);
ftdnn_net.trainParam.epochs = 1000;
ftdnn_net.divideFcn = '';
p = y(9:end);
t = y(9:end);
Pi=y(1:8);
ftdnn_net = train(ftdnn_net,p,t,Pi);
yp = ftdnn_net(p,Pi);
e = gsubtract(yp,t);
rmse = sqrt(mse(e))
For comparison, a linear network:
lin_net = linearlayer([1:8]);
lin_net.trainFcn='trainlm';
[lin_net,tr] = train(lin_net,p,t,Pi);
lin_yp = lin_net(p,Pi);
lin_e = gsubtract(lin_yp,t);
lin_rmse = sqrt(mse(lin_e))
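The data layout behind both fits can be sketched in NumPy: stack the 8 delayed copies of the series as regressors and fit the one-step-ahead targets by least squares (the linearlayer analogue). This is a sketch on a synthetic sinusoid, since the series y is loaded elsewhere in the original example:

```python
import numpy as np

# Synthetic stand-in for the series y used above (assumption: any
# 1-D signal works here; the original example loads its own data).
n, d = 200, 8
y = np.sin(2 * np.pi * np.arange(n) / 17.0)

# Tapped-delay design matrix: row for time t holds y(t-1), ..., y(t-8);
# the target is y(t). This mirrors p = y(9:end), t = y(9:end), Pi = y(1:8).
X = np.column_stack([y[d - k - 1 : n - k - 1] for k in range(d)])
t = y[d:]

# Linear least-squares fit and its RMSE.
w, *_ = np.linalg.lstsq(X, t, rcond=None)
rmse = np.sqrt(np.mean((X @ w - t) ** 2))
print(rmse)  # near zero: a pure sinusoid is perfectly linearly predictable
```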
If you have an application for a dynamic network, try the linear network first (linearlayer) and then the FTDNN (timedelaynet). If neither network is satisfactory, try one of the more complex dynamic networks discussed in the remainder of this topic.
Preparing data with preparets
The manual preparation used above,
p = y(9:end);
t = y(9:end);
Pi = y(1:8);
can be replaced with
[p,Pi,Ai,t] = preparets(ftdnn_net,y,y);
[X,Xi,Ai,T,EW,shift] = preparets(net,inputs,targets,feedback,EW)
The input arguments for preparets are the network object (net), the external (nonfeedback)
input to the network (inputs), the non-feedback target (targets), the
feedback target (feedback), and the error weights (EW) (see “Train Neural Networks
with Error Weights” on page 3-43). The difference between external and feedback
signals will become clearer when the NARX network is described in “Design Time Series
NARX Feedback Neural Networks” on page 3-22. For the FTDNN network, there is
no feedback signal.
The return arguments for preparets are the time shift between network inputs and
outputs (shift), the network input for training and simulation (X), the initial inputs
(Xi) for loading the tapped delay lines for input weights, the initial layer outputs (Ai) for
loading the tapped delay lines for layer weights, the training targets (T), and the error
weights (EW).
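For the FTDNN case, the shifting that preparets performs can be sketched by hand. This is a simplified Python sketch of the idea (the helper name is hypothetical, not the toolbox function):

```python
def prepare_tapped_delay(y, max_delay):
    """Split a series the way preparets does for a focused time-delay
    network: the first max_delay samples seed the delay line (Xi);
    the remaining samples serve as both input and target."""
    Xi = y[:max_delay]    # initial delay-line states, y(1:8) in the MATLAB example
    X = y[max_delay:]     # network input, y(9:end)
    T = y[max_delay:]     # targets: same series, since the net predicts y(t)
    return X, Xi, T

X, Xi, T = prepare_tapped_delay(list(range(20)), 8)
print(Xi)      # [0, 1, 2, 3, 4, 5, 6, 7]
print(X[0], T[0])  # 8 8
```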
Time series distributed delay neural network (TDNN)
time = 0:99;
y1 = sin(2*pi*time/10);   % sine wave with period 10
y2 = sin(2*pi*time/5);    % sine wave with period 5
y = [y1 y2 y1 y2];        % signal alternates between the two frequencies
t1 = ones(1,100);         % target +1 while the period-10 wave is active
t2 = -ones(1,100);        % target -1 while the period-5 wave is active
t = [t1 t2 t1 t2];
d1 = 0:4;                 % delays for layer 1
d2 = 0:3;                 % delays for layer 2
p = con2seq(y);           % convert to sequence (cell-array) format
t = con2seq(t);
dtdnn_net = distdelaynet({d1,d2},5);   % distributed delay network, 5 hidden neurons
dtdnn_net.trainFcn = 'trainbr';        % Bayesian regularization
dtdnn_net.divideFcn = '';
dtdnn_net.trainParam.epochs = 100;
dtdnn_net = train(dtdnn_net,p,t);
yp = sim(dtdnn_net,p);
plotresponse(t,yp)
The nonlinear autoregressive network with exogenous inputs (NARX) is a recurrent dynamic network, with feedback connections enclosing several layers of the network. The NARX model is based on the linear ARX model, which is commonly used in time-series modeling. Its defining equation is
y(t) = f(y(t − 1), y(t − 2), ..., y(t − ny), u(t − 1), u(t − 2), ..., u(t − nu))
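The linear ARX model underlying NARX replaces f with a weighted sum of past outputs and past inputs. A minimal simulation sketch, with coefficients chosen arbitrarily for illustration:

```python
import numpy as np

# Linear ARX: y(t) = a1*y(t-1) + a2*y(t-2) + b1*u(t-1) + b2*u(t-2).
# NARX replaces this weighted sum with a learned nonlinear function f.
a = [0.5, -0.2]   # autoregressive coefficients (illustrative values)
b = [1.0, 0.3]    # exogenous-input coefficients (illustrative values)

u = np.zeros(20)
u[2] = 1.0        # impulse on the exogenous input
y = np.zeros(20)
for t in range(2, 20):
    y[t] = a[0]*y[t-1] + a[1]*y[t-2] + b[0]*u[t-1] + b[1]*u[t-2]
print(y[:6])      # the impulse propagates through both delay chains
```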