Saturday, December 23, 2017
The Universe – Boris Stern
017. Breakthroughs and Dead Ends in the Study of the Universe – Boris Stern
https://www.youtube.com/watch?v=zRyUyz4Uxzg
Борис Штерн: "Исследование Вселенной: прорывы и тупики XXI века"
https://www.youtube.com/watch?v=Sopu-Mxz73w
Борис Штерн: "Наука и фантастика: экзопланеты и новая книга "Ковчег 47 Либра".
https://www.youtube.com/watch?v=eYfTGkj6OsE
Sunday, April 9, 2017
Portfolio Optimization
https://www.mathworks.com/help/finance/examples/portfolio-optimization-examples-1.html
% https://www.mathworks.com/help/finance/examples/portfolio-optimization-examples-1.html
%% Clear
clc;
clear all;
close all;
%% Set up the Data
load BlueChipStockMoments
mret = MarketMean;
mrsk = sqrt(MarketVar);
cret = CashMean;
crsk = sqrt(CashVar);
%% Create a Portfolio Object
p = Portfolio('AssetList', AssetList, 'RiskFreeRate', CashMean);
p = setAssetMoments(p, AssetMean, AssetCovar);
p = setInitPort(p, 1/p.NumAssets);
[ersk, eret] = estimatePortMoments(p, p.InitPort);
clf;
portfolioexamples_plot('Asset Risks and Returns', ...
{'scatter', mrsk, mret, {'Market'}}, ...
{'scatter', crsk, cret, {'Cash'}}, ...
{'scatter', ersk, eret, {'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
%% Set up a Portfolio Optimization Problem
p = setDefaultConstraints(p);
pwgt = estimateFrontier(p, 20);
[prsk, pret] = estimatePortMoments(p, pwgt);
%% Plot efficient frontier
clf;
portfolioexamples_plot('Efficient Frontier', ...
{'line', prsk, pret}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
%% Illustrate the Tangent Line to the Efficient Frontier
q = setBudget(p, 0, 1); % allow between 0% and 100% in risky assets, remainder in the risk-free asset
qwgt = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
%% Plot efficient frontier with tangent line (0 to 1 cash)
clf;
portfolioexamples_plot('Efficient Frontier with Tangent Line', ...
{'line', prsk, pret}, ...
{'line', qrsk, qret, [], [], 1}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
%% Obtain Range of Risks and Returns
[rsk, ret] = estimatePortMoments(p, estimateFrontierLimits(p));
display(rsk);
display(ret);
%% Find a Portfolio with a Targeted Return and Targeted Risk
TargetReturn = 0.20; % input target annualized return and risk here
TargetRisk = 0.15;
% Obtain portfolios with targeted return and risk
% (the asset moments are monthly, so annual targets are scaled by 1/12 and 1/sqrt(12))
awgt = estimateFrontierByReturn(p, TargetReturn/12);
[arsk, aret] = estimatePortMoments(p, awgt);
bwgt = estimateFrontierByRisk(p, TargetRisk/sqrt(12));
[brsk, bret] = estimatePortMoments(p, bwgt);
% Plot efficient frontier with targeted portfolios
clf;
portfolioexamples_plot('Efficient Frontier with Targeted Portfolios', ...
{'line', prsk, pret}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', arsk, aret, {sprintf('%g%% Return',100*TargetReturn)}}, ...
{'scatter', brsk, bret, {sprintf('%g%% Risk',100*TargetRisk)}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
%% Blotters
aBlotter = dataset({100*awgt(awgt > 0),'Weight'}, 'obsnames', p.AssetList(awgt > 0));
displayPortfolio(sprintf('Portfolio with %g%% Target Return', 100*TargetReturn), aBlotter, false);
bBlotter = dataset({100*bwgt(bwgt > 0),'Weight'}, 'obsnames', p.AssetList(bwgt > 0));
displayPortfolio(sprintf('Portfolio with %g%% Target Risk', 100*TargetRisk), bBlotter, false);
%% Transactions Costs
BuyCost = 0.0020;
SellCost = 0.0020;
q = setCosts(p, BuyCost, SellCost);
qwgt = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
% Plot efficient frontiers with gross and net returns
clf;
portfolioexamples_plot('Efficient Frontier with and without Transaction Costs', ...
{'line', prsk, pret, {'Gross'}, ':b'}, ...
{'line', qrsk, qret, {'Net'}}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
%% Turnover Constraint
BuyCost = 0.0020;
SellCost = 0.0020;
Turnover = 0.2;
q = setCosts(p, BuyCost, SellCost);
q = setTurnover(q, Turnover);
[qwgt, qbuy, qsell] = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
% Plot efficient frontier with turnover constraint
clf;
portfolioexamples_plot('Efficient Frontier with Turnover Constraint', ...
{'line', prsk, pret, {'Unconstrained'}, ':b'}, ...
{'line', qrsk, qret, {sprintf('%g%% Turnover', 100*Turnover)}}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
displaySumOfTransactions(Turnover, qbuy, qsell)
%% Tracking-Error Constraint
ii = [15, 16, 20, 21, 23, 25, 27, 29, 30]; % indexes of assets to include in tracking portfolio
TrackingError = 0.05/sqrt(12);
TrackingPort = zeros(30, 1);
TrackingPort(ii) = 1;
TrackingPort = (1/sum(TrackingPort))*TrackingPort;
q = setTrackingError(p, TrackingError, TrackingPort);
qwgt = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
[trsk, tret] = estimatePortMoments(q, TrackingPort);
% Plot efficient frontier with tracking-error constraint
clf;
portfolioexamples_plot('Efficient Frontier with 5% Tracking-Error Constraint', ...
{'line', prsk, pret, {'Unconstrained'}, ':b'}, ...
{'line', qrsk, qret, {'Tracking'}}, ...
{'scatter', [mrsk, crsk], [mret, cret], {'Market', 'Cash'}}, ...
{'scatter', trsk, tret, {'Tracking'}, 'r'});
%% Combined Turnover and Tracking-Error Constraints
Turnover = 0.3;
InitPort = (1/q.NumAssets)*ones(q.NumAssets, 1);
ii = [15, 16, 20, 21, 23, 25, 27, 29, 30]; % indexes of assets to include in tracking portfolio
TrackingError = 0.05/sqrt(12);
TrackingPort = zeros(30, 1);
TrackingPort(ii) = 1;
TrackingPort = (1/sum(TrackingPort))*TrackingPort;
q = setTurnover(q, Turnover, InitPort);
qwgt = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
[trsk, tret] = estimatePortMoments(q, TrackingPort);
[ersk, eret] = estimatePortMoments(q, InitPort);
% Plot efficient frontier with combined turnover and tracking-error constraint
clf;
portfolioexamples_plot('Efficient Frontier with Turnover and Tracking-Error Constraint', ...
{'line', prsk, pret, {'Unconstrained'}, ':b'}, ...
{'line', qrsk, qret, {'Turnover & Tracking'}}, ...
{'scatter', [mrsk, crsk], [mret, cret], {'Market', 'Cash'}}, ...
{'scatter', trsk, tret, {'Tracking'}, 'r'}, ...
{'scatter', ersk, eret, {'Initial'}, 'b'});
%% Maximize the Sharpe Ratio
p = setInitPort(p, 0);
swgt = estimateMaxSharpeRatio(p);
[srsk, sret] = estimatePortMoments(p, swgt);
% Plot efficient frontier with portfolio that attains maximum Sharpe ratio
clf;
portfolioexamples_plot('Efficient Frontier with Maximum Sharpe Ratio Portfolio', ...
{'line', prsk, pret}, ...
{'scatter', srsk, sret, {'Sharpe'}}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
% Set up a dataset object that contains the portfolio that maximizes the Sharpe ratio
Blotter = dataset({100*swgt(swgt > 0),'Weight'}, 'obsnames', AssetList(swgt > 0));
displayPortfolio('Portfolio with Maximum Sharpe Ratio', Blotter, false);
%% Confirm that Maximum Sharpe Ratio is a Maximum
psratio = (pret - p.RiskFreeRate) ./ prsk;
ssratio = (sret - p.RiskFreeRate) / srsk;
clf;
subplot(2,1,1);
plot(prsk, pret, 'LineWidth', 2);
hold on
scatter(srsk, sret, 'g', 'filled');
title('\bfEfficient Frontier');
xlabel('Portfolio Risk');
ylabel('Portfolio Return');
hold off
subplot(2,1,2);
plot(prsk, psratio, 'LineWidth', 2);
hold on
scatter(srsk, ssratio, 'g', 'filled');
title('\bfSharpe Ratio');
xlabel('Portfolio Risk');
ylabel('Sharpe Ratio');
hold off
%% Illustrate that Sharpe is the Tangent Portfolio
q = setBudget(p, 0, 1);
qwgt = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
% Plot that shows Sharpe ratio portfolio is the tangency portfolio
clf;
portfolioexamples_plot('Efficient Frontier with Maximum Sharpe Ratio Portfolio', ...
{'line', prsk, pret}, ...
{'line', qrsk, qret, [], [], 1}, ...
{'scatter', srsk, sret, {'Sharpe'}}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
%% Dollar-Neutral Hedge-Fund Structure
Exposure = 1;
q = setBounds(p, -Exposure, Exposure);
q = setBudget(q, 0, 0);
q = setOneWayTurnover(q, Exposure, Exposure, 0);
[qwgt, qlong, qshort] = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
[qswgt, qslong, qsshort] = estimateMaxSharpeRatio(q);
[qsrsk, qsret] = estimatePortMoments(q, qswgt);
% Plot efficient frontier for a dollar-neutral fund structure with tangency portfolio
clf;
portfolioexamples_plot('Efficient Frontier with Dollar-Neutral Portfolio', ...
{'line', prsk, pret, {'Standard'}, 'b:'}, ...
{'line', qrsk, qret, {'Dollar-Neutral'}, 'b'}, ...
{'scatter', qsrsk, qsret, {'Sharpe'}}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
% Set up a dataset object that contains the portfolio that maximizes the Sharpe ratio
Blotter = dataset({100*qswgt(abs(qswgt) > 1.0e-4), 'Weight'}, ...
{100*qslong(abs(qswgt) > 1.0e-4), 'Long'}, ...
{100*qsshort(abs(qswgt) > 1.0e-4), 'Short'}, ...
'obsnames', AssetList(abs(qswgt) > 1.0e-4));
displayPortfolio('Dollar-Neutral Portfolio with Maximum Sharpe Ratio', Blotter, true, 'Dollar-Neutral');
%% 130/30 Fund Structure
Leverage = 0.3;
q = setBounds(p, -Leverage, 1 + Leverage);
q = setBudget(q, 1, 1);
q = setOneWayTurnover(q, 1 + Leverage, Leverage);
[qwgt, qbuy, qsell] = estimateFrontier(q, 20);
[qrsk, qret] = estimatePortMoments(q, qwgt);
[qswgt, qslong, qsshort] = estimateMaxSharpeRatio(q);
[qsrsk, qsret] = estimatePortMoments(q, qswgt);
% Plot efficient frontier for a 130-30 fund structure with tangency portfolio
clf;
portfolioexamples_plot(sprintf('Efficient Frontier with %g-%g Portfolio', ...
100*(1 + Leverage),100*Leverage), ...
{'line', prsk, pret, {'Standard'}, 'b:'}, ...
{'line', qrsk, qret, {'130-30'}, 'b'}, ...
{'scatter', qsrsk, qsret, {'Sharpe'}}, ...
{'scatter', [mrsk, crsk, ersk], [mret, cret, eret], {'Market', 'Cash', 'Equal'}}, ...
{'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList, '.r'});
% Set up a dataset object that contains the portfolio that maximizes the Sharpe ratio
Blotter = dataset({100*qswgt(abs(qswgt) > 1.0e-4), 'Weight'}, ...
{100*qslong(abs(qswgt) > 1.0e-4), 'Long'}, ...
{100*qsshort(abs(qswgt) > 1.0e-4), 'Short'}, ...
'obsnames', AssetList(abs(qswgt) > 1.0e-4));
displayPortfolio(sprintf('%g-%g Portfolio with Maximum Sharpe Ratio', 100*(1 + Leverage), 100*Leverage), Blotter, true, sprintf('%g-%g', 100*(1 + Leverage), 100*Leverage));
Friday, March 31, 2017
Classification Examples
Classification Examples (fisheriris)
https://www.mathworks.com/solutions/machine-learning/examples.html
Classification Probability (using "load fisheriris")
https://www.mathworks.com/products/demos/machine-learning/classification_probability/classification_probability.html
Get Started with Statistics and Machine Learning Toolbox
Getting Started with Statistics and Machine Learning Toolbox
https://www.mathworks.com/help/stats/getting-started-12.html
Machine Learning
Machine Learning for Predictive Modelling
Rory Adams, MathWorks
https://www.mathworks.com/videos/machine-learning-for-predictive-modelling-116621.html
Machine Learning with MATLAB: Getting Started with Classification
Richard Willey, MathWorks
https://www.mathworks.com/videos/machine-learning-with-matlab-getting-started-with-classification-81766.html
Applying Multivariate Classification in the Life Sciences with Statistics Toolbox
https://www.mathworks.com/matlabcentral/fileexchange/25807-applying-multivariate-classification-in-the-life-sciences-with-statistics-toolbox
https://www.mathworks.com/videos/applying-multivariate-classification-in-the-life-sciences-with-statistics-toolbox-81659.html?elqsid=1490960464073&potential_use=Home
An Introduction to Classification
Richard Willey, MathWorks
https://www.mathworks.com/videos/an-introduction-to-classification-68891.html
Machine Learning Examples
https://www.mathworks.com/products/statistics.html
Examples Machine Learning with MATLAB
https://www.mathworks.com/campaigns/products/offer/machine-learning-with-matlab-conf.html?elqsid=1490946484814&potential_use=Home
Machine Learning Made Easy
https://www.mathworks.com/videos/machine-learning-with-matlab-100694.html?s_tid=conf_addres_DA_eb
Signal Processing and Machine Learning Techniques for Sensor Data Analytics
https://www.mathworks.com/videos/signal-processing-and-machine-learning-techniques-for-sensor-data-analytics-107549.html?s_tid=conf_addres_DA_eb
Supervised Learning Workflow and Algorithms
https://www.mathworks.com/help/stats/supervised-learning-machine-learning-workflow-and-algorithms.html?s_tid=conf_addres_DA_eb
Data-Driven Insights with MATLAB Analytics: An Energy Load Forecasting Case Study
https://www.mathworks.com/company/newsletters/articles/data-driven-insights-with-matlab-analytics-an-energy-load-forecasting-case-study.html?s_tid=conf_addres_DA_eb
Machine Learning Examples
https://www.mathworks.com/solutions/machine-learning/examples.html?s_tid=conf_addres_DA_eb
Classification Learner
https://www.mathworks.com/products/statistics/classification-learner.html?s_tid=conf_addres_DA_eb
MachineLearning
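A minimal classification sketch tying these links together (a hedged example on the fisheriris data that ships with Statistics and Machine Learning Toolbox; the k-NN model choice is illustrative, not taken from any one linked page):
load fisheriris                                  % meas: 150x4 measurements, species: labels
mdl = fitcknn(meas, species, 'NumNeighbors', 5); % k-nearest-neighbor classifier
cvmdl = crossval(mdl, 'KFold', 10);              % 10-fold cross-validation
err = kfoldLoss(cvmdl)                           % estimated misclassification rate
label = predict(mdl, [5.9 3.0 5.1 1.8])          % classify one new observation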
четверг, 30 марта 2017 г.
Finance App Examples
Using MATLAB to Develop and Deploy Financial Models
https://www.mathworks.com/videos/using-matlab-to-develop-and-deploy-financial-models-81642.html
Using MATLAB to Optimize Portfolios with Financial Toolbox
Analyzing Investment Strategies with CVaR Portfolio Optimization in MATLAB
https://www.mathworks.com/videos/analyzing-investment-strategies-with-cvar-portfolio-optimization-in-matlab-81942.html
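A minimal CVaR portfolio sketch in the spirit of the linked webinar (a sketch only: the scenario data is simulated, and it assumes Financial Toolbox plus mvnrnd from Statistics Toolbox):
rng(42);                                                        % reproducible scenarios
AssetScenarios = mvnrnd(0.001*ones(1,4), 0.0001*eye(4), 2000);  % hypothetical asset returns
p = PortfolioCVaR('Scenarios', AssetScenarios, 'ProbabilityLevel', 0.95);
p = setDefaultConstraints(p);             % long-only, fully invested
pwgt = estimateFrontier(p, 10);           % 10 portfolios along the CVaR-efficient frontier
prsk = estimatePortRisk(p, pwgt);         % CVaR of each portfolio
pret = estimatePortReturn(p, pwgt);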
Deep Learning: ML, Caffe, MatConvNet
http://matlab.ru/webinars/tecnology-Deep-Learning
http://caffe.berkeleyvision.org/
http://www.vlfeat.org/matconvnet/
https://github.com/vlfeat/matconvnet
https://habrahabr.ru/post/301084/
AlexNet
https://ru.wikipedia.org/wiki/%D0%A1%D0%B2%D1%91%D1%80%D1%82%D0%BE%D1%87%D0%BD%D0%B0%D1%8F_%D0%BD%D0%B5%D0%B9%D1%80%D0%BE%D0%BD%D0%BD%D0%B0%D1%8F_%D1%81%D0%B5%D1%82%D1%8C
AlexNet
https://www.mathworks.com/matlabcentral/fileexchange/59133-neural-network-toolbox-tm--model-for-alexnet-network?s_tid=srchtitle
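A minimal sketch of using the AlexNet model from the File Exchange link above (assumes that support package is installed, plus Image Processing Toolbox for imresize):
net = alexnet;                                  % load the pretrained network
I = imread('peppers.png');                      % an RGB image shipped with MATLAB
I = imresize(I, net.Layers(1).InputSize(1:2));  % AlexNet expects 227x227 RGB input
label = classify(net, I)                        % predict an ImageNet class label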
ML Parallel
Parallel computing for simulations over a series of parameters
http://matlab.ru/videos/Parallel%27nye-vychisleniya-pri-modelirovanii-s-seriej-parametrov
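A minimal parameter-sweep sketch in the spirit of the linked webinar (assumes Parallel Computing Toolbox for parfor; the damped-oscillator model is made up for illustration):
damping = 0.1:0.1:2;                    % the parameter series to sweep
peak = zeros(size(damping));
parfor k = 1:numel(damping)             % iterations are distributed over workers
    d = damping(k);                     % this trial's damping value
    [~, y] = ode45(@(t, x) [x(2); -d*x(2) - x(1)], [0 20], [1; 0]);
    peak(k) = max(abs(y(:, 1)));        % peak displacement for this damping
end
plot(damping, peak); xlabel('damping'); ylabel('peak |x|');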
Wednesday, March 29, 2017
ML + VS
Integrating MATLAB with Microsoft Visual Studio
http://matlab.ru/videos/integraciya-matlab-s-microsoft-visual-studio
ML Parameters Estimation
Online model parameter estimation (Online Parameter Estimation)
http://matlab.ru/videos/onlajn-ocenka-parametrov-modeli-online-parameter-estimation
Tuesday, March 28, 2017
Simple graphics examples for Matlab 3D
1. Elephant
[x,y,z] = meshgrid(-2:0.1:2);
F=(x.^2 + y.^2 + z.^2) - 3;
isosurface(x,y,z,F,0);
F=(x.^3 + y.^3 + z.^3) -2;
isosurface(x,y,z,F,0);
% axis auto
axis([-3, +3, -3, +3, -3, +3]);
2.
[x,y,z] = meshgrid(-2:0.1:2);
F=x.^2 + y.^2 + z.^2 -3;
isosurface(x,y,z,F,0);
F=x.^2 + y.^3 + z.^2 -3;
isosurface(x,y,z,F,0);
F=x.^2 + y.^3 + z.^2 +3;
isosurface(x,y,z,F,0);
axis([-2.5, +2.5 , -2.5, +2.5,-2.5, +2.5]);
% axis auto
3.
clc
[x,y,z] = meshgrid(-2:0.1:2);
F=x.^4 + y.^4 + z.^4 - (x.^2 + y.^2 + z.^2);
% surface(x,y,z,F,0)
% isosurface(x,y,z,F,0)
hpatch = patch(isosurface(x,y,z,F,0));
isonormals(x,y,z,F,hpatch); % compute vertex normals for nicer shading
set(hpatch,'FaceColor','r','EdgeColor','none'); % a bit of color
camlight left; lighting phong; % a bit of lighting
Simple graphics examples for Matlab
1.
N = 50;
NgrStr = 6;
NgrColumn = 1;
y1 = randn(N,1);
stat.mean = mean(y1);
stat.std = std(y1);
stat.median = median(y1);
y2 = filter([1 1]/2, 1, y1); % two-point moving average of y1
t = linspace(0, 0.01, N);
subplot(NgrStr,NgrColumn,1);
plot(t,y1);
grid on;
subplot(NgrStr,NgrColumn,2);
plot(t,y2);
grid on;
subplot(NgrStr,NgrColumn,3);
plot(y1,'DisplayName','y1'); hold on; plot(y2,'DisplayName','y2'); hold off;
grid on;
subplot(NgrStr,NgrColumn,4);
plot(y1,y2);
grid on;
subplot(NgrStr,NgrColumn,5);
stem(y2-y1);
grid on;
subplot(NgrStr,NgrColumn,6);
hist(y2-y1);
grid on;
.Net Assembly from MATLAB
1.
NET.addAssembly('System.Speech');
ss = System.Speech.Synthesis.SpeechSynthesizer;
ss.Volume = 100;
Speak(ss,'You can use .NET Library in Matrix Laboratory');
2.
asmpath = 'D:\VC\1305\gs.trade\GS.Matlab\bin\Debug\';
asmname = 'GS.Matlab.dll';
asm = NET.addAssembly(fullfile(asmpath,asmname));
obj = GS.Matlab.MyGraph;
mlData = cell(obj.getNewData)
objArr = cell(obj.getObjectArray);
objNewDataArr = obj.getNewDataProp;
figure('Name',char(mlData{1}))
plot(double(mlData{2}(2)))
xlabel(char(mlData{2}(1)))
3.
% F:\Work\Math\Matlab\Net
% NetDocCell_01.m
R = 3;
C=1;
asmpath = 'D:\VC\1305\gs.trade\GS.Matlab\bin\Debug\';
asmname = 'GS.Matlab.dll';
asm = NET.addAssembly(fullfile(asmpath,asmname));
obj = GS.Matlab.MyGraph;
mlData = cell(obj.getNewData);
%objArr = cell(obj.getObjectArray);
objNewDataArr = obj.getNewDataProp;
figure('Name',char(mlData{1}));
subplot(R,C,1);
% figure('Name',char(mlData{1}));
plot(double(mlData{2}(2)));
xlabel(char(mlData{2}(1)));
subplot(R,C,2);
objArr = obj.getObjectArray();
doubles1 = double(objArr(1));
doubles2 = double(objArr(2));
plot([doubles1 doubles2]);
subplot(R,C,3);
objDouble = obj.getDoubleArray;
doubles = double(objDouble);
plot(doubles);
------------------------------------------------------------
.cs
D:\VC\1305\gs.trade\GS.Matlab\
namespace GS.Matlab
{
// Call Methods from .Net Assembly from Matlab
public class MyGraph
{
public Object[] getNewData()
/*
* Create a System.Object array to use in MATLAB examples.
* Returns containerArr System.Object array containing:
* fLabel System.String object
* plotData System.Object array containing:
* xLabel System.String object
* doubleArr System.Double array
*/
{
String fLabel = "Figure Showing New Graph Data";
Double[] doubleArr =
{
18, 32, 3.133, 44, -9.9, -13, 33.03
};
String xLabel = "X-Axis Label";
Object[] plotData = {xLabel, doubleArr};
Object[] containerArr = {fLabel, plotData};
return containerArr;
}
public Object[] getNewDataProp {
get
{
String fLabel = "Figure Showing New Graph Data";
Double[] doubleArr =
{
18, 32, 3.133, 44, -9.9, -13, 33.03
};
String xLabel = "X-Axis Label";
Object[] plotData = {xLabel, doubleArr};
Object[] containerArr = {fLabel, plotData};
return containerArr;
}
}
public object[] getObjectArray()
{
var arr1 = new double[] {1, 2, 3, 4, 5};
var arr2 = new double[] { 6, 7, 8, 9, 10 };
return new object[] {arr1, arr2};
}
public double[] getDoubleArray()
{
return new double[] { 1, 2, 3, 4, 5 };
}
public object[] getObjectArrayProp => new object[]
{
new double[] {1,2,3},
new double[] {4,5,6,7,8,9}
};
public double[] getDoubleArrayProp => new double[] { 1, 2, 3, 4, 5 };
}
}
Sunday, March 26, 2017
Track
R=2;C=2;
vz = 10; % velocity constant
a = -32; % acceleration constant
subplot(R,C,1);
t = 0:.1:1;
z = vz*t;
% z = vz*t + 1/2*a*t.^2;
vx = 2;
x = vx*t;
%x = vx*t + 1/2*a*t.^2;
vy = 3;
y = vy*t;
% y = vy*t + 1/2*a*t.^2;
u = gradient(x);
v = gradient(y);
w = gradient(z);
scale = 0;
quiver3(x,y,z,u,v,w,scale);
view([70,18]);
subplot(R,C,2)
% z = vz*t;
z = vz*t + 1/2*a*t.^2;
vx = 2;
x = vx*t;
% x = vx*t + 1/2*a*t.^2;
vy = 3;
y = vy*t;
% y = vy*t + 1/2*a*t.^2;
u = gradient(x);
v = gradient(y);
w = gradient(z);
scale = 0;
quiver3(x,y,z,u,v,w,scale);
view([70,18]);
subplot(R,C,3)
% z = vz*t;
z = vz*t + 1/2*a*t.^2;
vx = 2;
% x = vx*t;
x = vx*t + 1/2*a*t.^2;
vy = 3;
y = vy*t;
% y = vy*t + 1/2*a*t.^2;
u = gradient(x);
v = gradient(y);
w = gradient(z);
scale = 0;
quiver3(x,y,z,u,v,w,scale);
view([70,18]);
subplot(R,C,4)
% z = vz*t;
z = vz*t + 1/2*a*t.^2;
vx = 2;
% x = vx*t;
x = vx*t + 1/2*a*t.^2;
vy = 3;
%y = vy*t;
y = vy*t + 1/2*a*t.^2;
u = gradient(x);
v = gradient(y);
w = gradient(z);
scale = 0;
quiver3(x,y,z,u,v,w,scale);
view([70,18]);
Combine Stem and Line
x = linspace(0,2*pi,60);
a = sin(x);
b = cos(x);
stem(x,a+b);
hold on
plot(x,a)
plot(x,b)
hold off
legend('a+b','a = sin(x)','b = cos(x)')
xlabel('Time in \musecs')
ylabel('Magnitude')
title('Linear Combination of Two Functions');
Include Loop Variable Value in Graph Title
x = linspace(0,10,100);
for k = 1:4
subplot(2,2,k);
yk = sin(k*x);
plot(x,yk)
title(['y = sin(' num2str(k) 'x)'])
end
Legend, Mean-Value Line, and Text Annotations
R=5;C=1;
dat = rand(50,1);
subplot(R,C,1);
plot(dat)
m = mean(dat);
ax = gca;
xlimits = ax.XLim;
h = line([xlimits(1),xlimits(2)],[m,m],'Color','k','LineStyle','--');
legend(h,'mean of data');
subplot(R,C,2);
g1 = hggroup;
g2 = hggroup;
t = linspace(0,2*pi,100);
plot(t,sin(t),'b','Parent',g1)
hold on
plot(t,sin(t+1/7),'b','Parent',g1)
plot(t,sin(t+2/7),'b','Parent',g1)
plot(t,sin(t+3/7),'b','Parent',g1)
plot(t,cos(t),'g','Parent',g2)
plot(t,cos(t+1/7),'g','Parent',g2)
plot(t,cos(t+2/7),'g','Parent',g2)
plot(t,cos(t+3/7),'g','Parent',g2)
hold off % reset hold state to off
legend([g1,g2],'sine','cosine')
subplot(R,C,3);
x = linspace(0,2*pi,100);
y1 = sin(x);
p1 = plot(x,y1,'DisplayName','sin(x)');
hold on
y2 = sin(x) + pi/2;
p2 = plot(x,y2,'DisplayName','sin(x) + \pi/2');
y3 = sin(x) + pi;
p3 = plot(x,y3,'DisplayName','sin(x) + \pi');
hold off
legend([p1 p2 p3])
subplot(R,C,4);
t = linspace(0,2*pi,50);
y = sin(t);
plot(t,y);
x1 = pi;
y1 = sin(pi);
str1 = '\leftarrow sin(\pi) = 0';
text(x1,y1,str1);
x2 = 3*pi/4;
y2 = sin(3*pi/4);
str2 = '\leftarrow sin(3\pi/4) = 0.71';
text(x2,y2,str2)
x3 = 5*pi/4;
y3 = sin(5*pi/4);
str3 = 'sin(5\pi/4) = -0.71 \rightarrow';
text(x3,y3,str3,'HorizontalAlignment','right');
subplot(R,C,5);
x = linspace(-3,3);
y = (x/5-x.^3).*exp(-2*x.^2);
plot(x,y);
indexmin = find(min(y) == y);
xmin = x(indexmin);
ymin = y(indexmin);
indexmax = find(max(y) == y);
xmax = x(indexmax);
ymax = y(indexmax);
strmin = ['Minimum = ',num2str(ymin)];
text(xmin,ymin,strmin,'HorizontalAlignment','left');
strmax = ['Maximum = ',num2str(ymax)];
text(xmax,ymax,strmax,'HorizontalAlignment','right');
Graph with Two y-Axes
1.
R = 5; C = 1;
subplot(R,C,1);
x = linspace(0,2*pi,25);
y1 = sin(x);
% y2 = 0.5*sin(x);
y2 = exp(-1/3*x).*sin(x);
plot(x,y1);
grid on
hold on
stem(x,y2);
hold off
subplot(R,C,2);
A = 1000;
a = 0.005;
b = 0.005;
t = 0:900;
z1 = A*exp(-a*t);
z2 = sin(b*t);
[ax,p1,p2] = plotyy(t,z1,t,z2,'semilogy','plot');
ylabel(ax(1),'Semilog Plot') % label left y-axis
ylabel(ax(2),'Linear Plot') % label right y-axis
xlabel(ax(2),'Time') % label x-axis
p1.LineStyle = '--';
p1.LineWidth = 2;
p2.LineWidth = 2;
grid(ax(1),'on')
Friday, March 24, 2017
Call .Net Methods from ML
1.
R = 3;
C=1;
asmpath = 'D:\VC\1305\gs.trade\GS.Matlab\bin\Debug\';
asmname = 'GS.Matlab.dll';
asm = NET.addAssembly(fullfile(asmpath,asmname));
obj = GS.Matlab.MyGraph;
mlData = cell(obj.getNewData);
%objArr = cell(obj.getObjectArray);
objNewDataArr = obj.getNewDataProp;
figure('Name',char(mlData{1}));
subplot(R,C,1);
% figure('Name',char(mlData{1}));
plot(double(mlData{2}(2)));
xlabel(char(mlData{2}(1)));
subplot(R,C,2);
objArr = obj.getObjectArray();
doubles1 = double(objArr(1));
doubles2 = double(objArr(2));
plot([doubles1 doubles2]);
subplot(R,C,3);
objDouble = obj.getDoubleArray;
doubles = double(objDouble);
plot(doubles);
webread (MATLAB) and .NET classes
webread
https://www.mathworks.com/help/matlab/ref/webread.html
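A minimal webread sketch (the endpoint below is illustrative; any URL returning JSON works the same way):
url = 'https://api.github.com/repos/vlfeat/matconvnet';
info = webread(url);       % a JSON response is decoded into a MATLAB struct
disp(info.full_name)       % struct fields follow the JSON keys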
Convert .NET Arrays to Cell Arrays
https://www.mathworks.com/help/matlab/matlab_external/net-arrays-to-cell-arrays.html
Pass Cell Arrays of .NET Data
https://www.mathworks.com/help/matlab/matlab_external/tips-for-working-with-cell-arrays-of-net-data.html
Handle Data Returned from .NET Objects
https://www.mathworks.com/help/matlab/matlab_external/handling-net-data-in-matlab_bte9owt-1.html#bte9paq-1
Access a Simple .NET Class
https://www.mathworks.com/help/matlab/matlab_external/access-a-simple-net-class.html
Call .NET Methods With ref Keyword
https://www.mathworks.com/help/matlab/matlab_external/call-net-methods-with-ref-keyword.html
Calling .NET Methods
https://www.mathworks.com/help/matlab/matlab_external/calling-net-methods.html?s_tid=gn_loc_drop
Use .NET methods in MATLAB®, method signatures, arguments by reference, optional arguments
https://www.mathworks.com/help/matlab/methods-.html
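A minimal sketch of the array conversions these pages cover (a sketch using standard MATLAB .NET interop calls):
netStr = NET.createArray('System.String', 2);   % a System.String[] array
netStr(1) = 'alpha'; netStr(2) = 'beta';
c = cell(netStr)                                % -> MATLAB cell array of char
netDbl = NET.createArray('System.Double', 3);   % a System.Double[] array
netDbl(1) = 1; netDbl(2) = 2; netDbl(3) = 3;
d = double(netDbl)                              % -> MATLAB double vector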
Monday, February 27, 2017
Multistep Neural Network Prediction example
Multistep Neural Network Prediction
https://www.mathworks.com/help/nnet/ug/multistep-neural-network-prediction.html
% Multistep Neural Network Prediction
% Set Up in Open-Loop Mode
[X,T] = maglev_dataset;
net = narxnet(1:2,1:2,10);
[x,xi,ai,t] = preparets(net,X,{},T);
net = train(net,x,t,xi,ai);
y = net(x,xi,ai);
view(net)
% Multistep Closed-Loop Prediction From Initial Conditions
% Close loop
netc = closeloop(net);
view(netc);
[x,xi,ai,t] = preparets(netc,X,{},T);
yc = netc(x,xi,ai);
plot(1:3999, cell2mat(t), 'b', 1:3999, cell2mat(yc), 'r');
% Multistep Closed-Loop Prediction Following Known Sequence
x1 = x(1:20);
t1 = t(1:20);
x2 = x(21:40);
% The open-loop neural network is then simulated on this data.
[x,xi,ai,t] = preparets(net,x1,{},t1);
[y1,xf,af] = net(x,xi,ai);
%Now the final input and layer states returned by the network are converted to closed-loop form along with the network.
% The final input states xf and layer states af of the open-loop network become the initial input states xi and layer states ai of the closed-loop network.
[netc,xi,ai] = closeloop(net,xf,af);
%Typically use preparets to define initial input and layer states.
% Since these have already been obtained from the end of the open-loop simulation,
% you do not need preparets to continue with the 20 step predictions of the closed-loop network
[y2,xf,af] = netc(x2,xi,ai);
% Note that you can set x2 to different sequences of inputs to test different scenarios for however many time steps you would like to make predictions.
% For example, to predict the magnetic levitation system's behavior if 10 random inputs are used:
x2 = num2cell(rand(1,10));
[y2,xf,af] = netc(x2,xi,ai);
% Following Closed-Loop Simulation with Open-Loop Simulation
[~,xi,ai] = openloop(netc,xf,af);
% Now you can define continuations of the external input and open-loop
% feedback, and simulate the open-loop network
x3 = num2cell(rand(2,10));
y3 = net(x3,xi,ai);
Data Generator function
%% Data generator function
function [X,Xtrain,Ytrain,fig] = data_generator()
% data generator
X = 0.01:.01:10;
f = abs(besselj(2,X*7).*asind(X/2) + (X.^1.95)) + 2;
fig = figure;
plot(X,f,'b-')
hold on
grid on
% available data points
Ytrain = f + 5*(rand(1,length(f))-.5);
Xtrain = X([181:450 601:830]);
Ytrain = Ytrain([181:450 601:830]);
plot(Xtrain,Ytrain,'kx')
xlabel('x')
ylabel('y')
ylim([0 100])
legend('original function','available data','location','northwest')
Simple Narx Example
1.
% Simple First Narx Net Example
load magdata
y = con2seq(y);
u = con2seq(u);
% y = feedback signal, which is also the target signal
sizey = size(y);
sizeu = size(u);
d1 = [1:2];
d2 = [1:2];
narx_net = narxnet(d1,d2,10);
narx_net.divideFcn = '';
narx_net.trainParam.min_grad = 1e-10;
[p,Pi,Ai,t] = preparets(narx_net, u, {}, y);
view(narx_net);
narx_net = train(narx_net,p,t,Pi);
% Simulation
yp = sim(narx_net, p, Pi);
e = cell2mat(yp)-cell2mat(t);
plot(e);
% closed loop
narx_net_closed = closeloop(narx_net);
view(narx_net_closed)
y1 = y(1700:2600);
u1 = u(1700:2600);
[p1,Pi1,Ai1,t1] = preparets(narx_net_closed,u1,{},y1);
yp1 = narx_net_closed(p1,Pi1,Ai1);
TS = size(t1,2);
plot(1:TS,cell2mat(t1),'b',1:TS,cell2mat(yp1),'r')
2. % Multiple external variables: each X cell holds a two-element input vector
[X, T] = ph_dataset;
net = narxnet(10);
[x, xi,ai,t] = preparets(net,X,{},T);
view(net);
net = train(net,x,t,xi,ai);
y=net(x,xi,ai);
e = gsubtract(t,y);
sz = size(t, 2);
% plot(cell2mat(e));
plot(1:sz, cell2mat(t), 'b', 1:sz, cell2mat(y), 'r');
Sunday, February 26, 2017
NN Generalization: Overfitting, Normalization, Preparets
Improve Neural Network Generalization and Avoid Overfitting
https://www.mathworks.com/help/nnet/ug/improve-neural-network-generalization-and-avoid-overfitting.html
Preparets
https://www.mathworks.com/help/nnet/ref/preparets.html
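A minimal generalization sketch along the lines of the first link (standard toolbox calls on a built-in dataset; the regularization value is illustrative):
[x, t] = simplefit_dataset;             % small built-in curve-fitting dataset
net = feedforwardnet(10);
net.performParam.regularization = 0.1;  % penalize large weights to smooth the fit
net = train(net, x, t);                 % early stopping on a validation set is the default
y = net(x);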
NARX Refs
Design Time Series NARX Feedback Neural Networks
https://www.mathworks.com/help/nnet/ug/design-time-series-narx-feedback-neural-networks.html?s_tid=gn_loc_drop
Modeling and Prediction with NARX and Time-Delay Networks
https://www.mathworks.com/help/nnet/modeling-and-prediction-with-narx-and-time-delay-networks.html
Multistep Neural Network Prediction
https://www.mathworks.com/help/nnet/ug/multistep-neural-network-prediction.html
https://en.wikipedia.org/wiki/Nonlinear_autoregressive_exogenous_model
https://www.researchgate.net/post/Can_anyone_help_regarding_NARX_network_in_network_timeseries_analysis_tool
http://www.wseas.us/e-library/transactions/research/2008/27-464.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1032.9079&rep=rep1&type=pdf
NARX
Design Time Series NARX Feedback Neural Networks
All the specific dynamic networks discussed so far have either been focused networks,
with the dynamics only at the input layer, or feedforward networks. The nonlinear
autoregressive network with exogenous inputs (NARX) is a recurrent dynamic network,
with feedback connections enclosing several layers of the network. The NARX model is
based on the linear ARX model, which is commonly used in time-series modeling.
The defining equation for the NARX model is
y(t) = f(y(t−1), y(t−2), ..., y(t−n_y), u(t−1), u(t−2), ..., u(t−n_u))
where the next value of the dependent output signal y(t) is regressed on previous values
of the output signal and previous values of an independent (exogenous) input signal. You
can implement the NARX model by using a feedforward neural network to approximate
the function f. A diagram of the resulting network is shown below, where a two-layer
feedforward network is used for the approximation. This implementation also allows for a
vector ARX model, where the input and output can be multidimensional.
There are many applications for the NARX network. It can be used as a predictor, to
predict the next value of the input signal. It can also be used for nonlinear filtering, in
which the target output is a noise-free version of the input signal. The use of the NARX
network is shown in another important application, the modeling of nonlinear dynamic
systems.
Before showing the training of the NARX network, an important configuration that is
useful in training needs explanation. You can consider the output of the NARX network
to be an estimate of the output of some nonlinear dynamic system that you are trying
to model. The output is fed back to the input of the feedforward neural network as part
of the standard NARX architecture, as shown in the left figure below. Because the true
output is available during the training of the network, you could create a series-parallel
architecture (see [NaPa91]), in which the true output is used instead of feeding back the
estimated output, as shown in the right figure below. This has two advantages. The first
is that the input to the feedforward network is more accurate. The second is that the
resulting network has a purely feedforward architecture, and static backpropagation can
be used for training.
The following shows the use of the series-parallel architecture for training a NARX
network to model a dynamic system.
The example of the NARX network is the magnetic levitation system described beginning
in “Use the NARMA-L2 Controller Block” on page 4-18. The bottom graph in the
following figure shows the voltage applied to the electromagnet, and the top graph shows
the position of the permanent magnet. The data was collected at a sampling interval of
0.01 seconds to form two time series.
The goal is to develop a NARX model for this magnetic levitation system.
First, load the training data. Use tapped delay lines with two delays for both the input
and the output, so training begins with the third data point. There are two inputs to the
series-parallel network, the u(t) sequence and the y(t) sequence.
load magdata
y = con2seq(y);
u = con2seq(u);
Create the series-parallel NARX network using the function narxnet. Use 10 neurons in
the hidden layer and use trainlm for the training function, and then prepare the data
with preparets:
d1 = [1:2];
d2 = [1:2];
narx_net = narxnet(d1,d2,10);
narx_net.divideFcn = '';
narx_net.trainParam.min_grad = 1e-10;
[p,Pi,Ai,t] = preparets(narx_net,u,{},y);
(Notice that the y sequence is considered a feedback signal, which is an input that is
also an output (target). Later, when you close the loop, the appropriate output will be
connected to the appropriate input.) Now you are ready to train the network.
narx_net = train(narx_net,p,t,Pi);
You can now simulate the network and plot the resulting errors for the series-parallel
implementation.
yp = sim(narx_net,p,Pi);
e = cell2mat(yp)-cell2mat(t);
plot(e)
You can see that the errors are very small. However, because of the series-parallel
configuration, these are errors for only a one-step-ahead prediction. A more stringent test
would be to rearrange the network into the original parallel form (closed loop) and then
to perform an iterated prediction over many time steps. Now the parallel operation is
shown.
There is a toolbox function (closeloop) for converting NARX (and other) networks from
the series-parallel configuration (open loop), which is useful for training, to the parallel
configuration (closed loop), which is useful for multi-step-ahead prediction. The following
command illustrates how to convert the network that you just trained to parallel form:
narx_net_closed = closeloop(narx_net);
To see the differences between the two networks, you can use the view command:
view(narx_net)
view(narx_net_closed)
All of the training is done in open loop (also called series-parallel architecture), including
the validation and testing steps. The typical workflow is to fully create the network in
open loop, and only when it has been trained (which includes validation and testing
steps) is it transformed to closed loop for multistep-ahead prediction. Likewise, the R
values in the GUI are computed based on the open-loop training results.
You can now use the closed-loop (parallel) configuration to perform an iterated prediction
of 900 time steps. In this network you need to load the two initial inputs and the two
initial outputs as initial conditions. You can use the preparets function to prepare the
data. It will use the network structure to determine how to divide and shift the data
appropriately.
y1 = y(1700:2600);
u1 = u(1700:2600);
[p1,Pi1,Ai1,t1] = preparets(narx_net_closed,u1,{},y1);
yp1 = narx_net_closed(p1,Pi1,Ai1);
TS = size(t1,2);
plot(1:TS,cell2mat(t1),'b',1:TS,cell2mat(yp1),'r')
The figure illustrates the iterated prediction. The blue line is the actual position of the
magnet, and the red line is the position predicted by the NARX neural network. Even
though the network is predicting 900 time steps ahead, the prediction is very accurate.
In order for the parallel response (iterated prediction) to be accurate, it is important that
the network be trained so that the errors in the series-parallel configuration (one-step-ahead
prediction) are very small.
You can also create a parallel (closed loop) NARX network, using the narxnet command
with the fourth input argument set to 'closed', and train that network directly.
Generally, the training takes longer, and the resulting performance is not as good as that
obtained with series-parallel training.
Each time a neural network is trained it can arrive at a different solution, due to different
initial weight and bias values and different divisions of the data into training, validation,
and test sets. As a result, different neural networks trained on the same problem can
give different outputs for the same input. To ensure that a network of good
accuracy has been found, retrain several times, as in the sketch below.
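A minimal retraining loop for that advice (a sketch assuming the maglev u and y prepared above; it keeps the network with the lowest one-step-ahead error):
bestErr = inf;
for trial = 1:5
    net0 = narxnet(1:2, 1:2, 10);              % fresh random initial weights each trial
    [p, Pi, Ai, t] = preparets(net0, u, {}, y);
    net0 = train(net0, p, t, Pi);
    e = cell2mat(net0(p, Pi)) - cell2mat(t);   % one-step-ahead errors
    trialErr = mean(e.^2);
    if trialErr < bestErr
        bestErr = trialErr;
        best_net = net0;                       % remember the best-performing trial
    end
end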
There are several other techniques for improving upon initial solutions if higher accuracy
is desired. For more information, see “Improve Neural Network Generalization and Avoid
Overfitting” on page 8-31.
Multiple External Variables
The maglev example showed how to model a time series with a single external input
value over time. But the NARX network will work for problems with multiple external
input elements and predict series with multiple elements. In these cases, the input and
target consist of row cell arrays representing time, but with each cell element being an
N-by-1 vector for the N elements of the input or target signal.
For example, here is a dataset which consists of 2-element external variables predicting a
1-element series.
[X,T] = ph_dataset;
The external inputs X are formatted as a row cell array of 2-element vectors, with each
vector representing acid and base solution flow. The targets represent the resulting pH of
the solution over time.
You can reformat your own multi-element series data from matrix form to neural
network time-series form with the function con2seq.
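A minimal con2seq sketch (the 2-element, 100-step series here is hypothetical):
M = rand(2, 100);    % matrix form: rows are signal elements, columns are time steps
S = con2seq(M);      % 1x100 cell array, each cell a 2-by-1 vector
B = seq2con(S);      % back to matrix form; B{1} equals M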
The process for training a network proceeds as it did above for the maglev problem.
net = narxnet(10);
[x,xi,ai,t] = preparets(net,X,{},T);
net = train(net,x,t,xi,ai);
y = net(x,xi,ai);
e = gsubtract(t,y);
To see examples of NARX networks applied in open-loop form, closed-loop
form, and open/closed-loop multistep prediction, see "Multistep Neural Network
Prediction" on page 3-51.