Single-layer neural networks can be trained with a variety of learning algorithms. The best known are Adaline, the Perceptron, and backpropagation for supervised learning; the first two are specific to single-layer networks, while the third generalizes to multi-layer perceptrons (a minimal perceptron sketch follows below).
Tags: Single-layer algorithms best-known networks
Upload date: 2015-06-17
Uploader: 赵云兴
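As a quick illustration of the first of these algorithms, here is a minimal sketch of the perceptron learning rule on a toy linearly separable problem. The data, learning rate, and stopping rule are illustrative assumptions, not taken from any package listed here.

import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: class +1 above the line x1 + x2 = 1, class -1 below.
X = rng.uniform(-1, 2, size=(100, 2))
y = np.where(X.sum(axis=1) > 1.0, 1, -1)

w = np.zeros(2)   # weights
b = 0.0           # bias
eta = 0.1         # learning rate

for epoch in range(50):
    errors = 0
    for xi, ti in zip(X, y):
        pred = 1 if xi @ w + b > 0 else -1
        if pred != ti:              # update only on a mistake
            w += eta * ti * xi
            b += eta * ti
            errors += 1
    if errors == 0:                 # converged: every point classified
        break

print("weights:", w, "bias:", b, "epochs used:", epoch + 1)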
A good paper on wavelet neural networks! A fault detection method is proposed that uses a trained neural network as the nominal model of the system to be monitored. Partial physical knowledge, if available, can be combined with the nominal model to perform fault isolation (see the residual sketch below).
Tags: detection proposed network nominal
Upload date: 2014-12-08
Uploader: gmh1314
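The abstract above includes no code, so the following sketch only illustrates the general residual idea behind such schemes: compare the monitored plant against a nominal model and raise an alarm when the residual exceeds a threshold. The stand-in model, noise level, threshold, and injected fault are all assumptions.

import numpy as np

def nominal_model(u):
    # Stand-in for a neural network trained on healthy plant data.
    return 2.0 * u + 0.5

rng = np.random.default_rng(1)
u = rng.uniform(0, 1, 200)                 # plant input
y = 2.0 * u + 0.5 + 0.02 * rng.standard_normal(200)
y[120:] += 0.5                             # inject an additive fault at sample 120

residual = y - nominal_model(u)            # deviation from nominal behaviour
threshold = 5 * 0.02                       # e.g. five sigma of the healthy noise
alarm = np.abs(residual) > threshold

print("first alarm at sample:", np.argmax(alarm))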
A neural network trained with the unscented Kalman filter (a minimal sketch of the idea follows below).
Tags: unscented network trained neural
Upload date: 2013-12-11
Uploader: colinal
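The entry is only a title, so here is a minimal sketch of the idea under assumptions of my own: the network's weight vector is treated as the UKF state and the network output as the measurement. Network size, noise levels, sigma-point scaling (kappa = 0), and the data are all illustrative.

import numpy as np

H = 5                        # hidden units
n = 3 * H + 1                # total number of weights

def net(theta, x):
    # One-hidden-layer tanh network with scalar input and output.
    W, b = theta[:H], theta[H:2*H]
    v, c = theta[2*H:3*H], theta[-1]
    return v @ np.tanh(W * x + b) + c

def sigma_points(mean, cov):
    # 2n symmetric sigma points (kappa = 0, so the centre point has weight 0).
    S = np.linalg.cholesky(n * cov)
    return np.concatenate([mean + S.T, mean - S.T])    # shape (2n, n)

rng = np.random.default_rng(2)
xs = rng.uniform(-3, 3, 400)
ys = np.sin(xs) + 0.05 * rng.standard_normal(400)      # toy target function

theta = 0.1 * rng.standard_normal(n)     # weight mean
P = 0.1 * np.eye(n)                      # weight covariance
Q, R = 1e-6 * np.eye(n), 0.05**2         # process / measurement noise

for x, yk in zip(xs, ys):
    P = P + Q
    pts = sigma_points(theta, P)
    yi = np.array([net(p, x) for p in pts])
    yhat = yi.mean()
    Pyy = np.mean((yi - yhat)**2) + R                   # innovation variance
    Pwy = (pts - theta).T @ (yi - yhat) / (2 * n)       # weight/output cross-covariance
    K = Pwy / Pyy                                       # Kalman gain
    theta = theta + K * (yk - yhat)
    P = P - np.outer(K, K) * Pyy

yfit = np.array([net(theta, x) for x in xs])
print("RMS error against the noise-free target:", np.sqrt(np.mean((yfit - np.sin(xs))**2)))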
A neural network trained with the extended Kalman filter (sketched below).
Tags: extended network trained neural
Upload date: 2017-07-07
Uploader: 凤临西北
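Again only a title, so here is a matching sketch for the extended Kalman filter under the same toy assumptions: the weight vector is the state, and the network's output Jacobian with respect to the weights linearizes the measurement equation.

import numpy as np

H = 5
n = 3 * H + 1

def net_and_jac(theta, x):
    # One-hidden-layer tanh network plus the gradient of its output
    # with respect to every weight (the EKF measurement Jacobian).
    W, b = theta[:H], theta[H:2*H]
    v, c = theta[2*H:3*H], theta[-1]
    h = np.tanh(W * x + b)
    y = v @ h + c
    dh = 1 - h**2                          # tanh derivative
    jac = np.concatenate([v * dh * x,      # d y / d W
                          v * dh,          # d y / d b
                          h,               # d y / d v
                          [1.0]])          # d y / d c
    return y, jac

rng = np.random.default_rng(3)
xs = rng.uniform(-3, 3, 400)
ys = np.sin(xs) + 0.05 * rng.standard_normal(400)

theta = 0.1 * rng.standard_normal(n)
P = 0.1 * np.eye(n)
Q, R = 1e-6 * np.eye(n), 0.05**2

for x, yk in zip(xs, ys):
    P = P + Q
    yhat, Hk = net_and_jac(theta, x)
    S = Hk @ P @ Hk + R                    # innovation variance (scalar output)
    K = P @ Hk / S                         # Kalman gain
    theta = theta + K * (yk - yhat)
    P = P - np.outer(K, Hk @ P)

print("prediction at x = 1.0:", net_and_jac(theta, 1.0)[0])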
neural network utility is a neural-network library for the C++ programmer. It is entirely object-oriented and focuses on reducing the tedious and confusing parts of programming neural networks: network layers are easily defined, an entire multi-layer network can be created in a few lines and trained with two functions, and layers can be connected to one another easily and painlessly (a generic illustration of this style of API is sketched below).
Tags: Programmer Networks entirely network
Upload date: 2013-12-24
Uploader: liuchee
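The library's actual C++ API is not shown above, so the following sketch is only a hypothetical illustration of the object-oriented style it describes: layers as objects that can be chained, the whole network built in one line and trained through a single call. Every class and method name here is invented for illustration and is not part of the library.

import numpy as np

class DenseTanh:
    # One fully connected layer with a tanh activation.
    def __init__(self, n_in, n_out, rng):
        self.W = 0.5 * rng.standard_normal((n_out, n_in))
        self.b = np.zeros(n_out)
    def forward(self, x):
        self.x, self.h = x, np.tanh(self.W @ x + self.b)
        return self.h
    def backward(self, grad, lr):
        grad = grad * (1 - self.h**2)       # back through tanh
        gx = self.W.T @ grad                # gradient for the previous layer
        self.W -= lr * np.outer(grad, self.x)
        self.b -= lr * grad
        return gx

class Network:
    def __init__(self, layers):
        self.layers = layers
    def predict(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x
    def train(self, X, Y, lr=0.2, epochs=3000):
        for _ in range(epochs):
            for x, y in zip(X, Y):
                grad = self.predict(x) - y          # squared-error gradient
                for layer in reversed(self.layers):
                    grad = layer.backward(grad, lr)

rng = np.random.default_rng(4)
net = Network([DenseTanh(2, 4, rng), DenseTanh(4, 1, rng)])   # built in one line
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float) * 1.6 - 0.8         # XOR, scaled into tanh range
net.train(X, Y)
print([round(net.predict(x).item(), 2) for x in X])           # outputs on the four XOR inputs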
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error. [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms) produces the final prediction error estimate (fpe), the effective number of weights in the network if the network has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian (the basic formula, without the weight-decay correction, is sketched below).
Tags: generalization calculates prediction function
Upload date: 2014-12-03
Uploader: maizezhen
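For reference, in its simplest form (no weight decay) Akaike's final prediction error scales the mean squared training error V_N by (N + d)/(N - d), where N is the number of samples and d the number of weights. The toolbox's weight-decay correction replaces d with the effective number deff computed from the Gauss-Newton Hessian; that correction is omitted in this small sketch.

def fpe(V_N, N, d):
    # Basic final prediction error: training error inflated by the
    # parameter-count penalty. Under weight decay, d should be deff.
    return V_N * (N + d) / (N - d)

print(fpe(V_N=0.04, N=500, d=25))    # toy numbers: about 0.0442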
Train a two-layer neural network with the Levenberg-Marquardt method. If desired, it is possible to use regularization by weight decay; pruned (i.e., not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer (the core update is sketched below).
Tags: Levenberg-Marquardt desired network neural
Upload date: 2016-12-27
Uploader: jcljkh
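The marq routine itself is not reproduced here, but the core Levenberg-Marquardt step is easy to sketch on a least-squares fit. The model below is a tiny one-hidden-unit "network" a*tanh(b*x + c) + d, and the Jacobian is taken by finite differences for brevity; both are illustrative simplifications of what such a routine does analytically.

import numpy as np

def model(theta, x):
    a, b, c, d = theta
    return a * np.tanh(b * x + c) + d

def jacobian(theta, x, eps=1e-6):
    # Finite-difference Jacobian of the model outputs w.r.t. the parameters.
    J = np.empty((x.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        J[:, j] = (model(t, x) - model(theta, x)) / eps
    return J

rng = np.random.default_rng(5)
x = np.linspace(-2, 2, 100)
y = 1.5 * np.tanh(2.0 * x - 0.5) + 0.3 + 0.02 * rng.standard_normal(100)

theta, lam = np.array([1.0, 1.0, 0.0, 0.0]), 1e-2
for _ in range(50):
    r = y - model(theta, x)
    J = jacobian(theta, x)
    step = np.linalg.solve(J.T @ J + lam * np.eye(4), J.T @ r)
    if np.sum((y - model(theta + step, x))**2) < np.sum(r**2):
        theta, lam = theta + step, lam * 0.5    # accept: trust the quadratic model more
    else:
        lam *= 2.0                              # reject: move toward gradient descent

print("fitted parameters:", theta)              # near the true [1.5, 2.0, -0.5, 0.3]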
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error for network models generated by NNARX, NNOE, NNARMAX1+2, or their recursive counterparts. [FPE,deff,varest,H] = nnfpe(method,NetDef,W1,W2,U,Y,NN,trparms,skip,Chat) produces the final prediction error estimate (fpe), the effective number of weights in the network if it has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.
Tags: generalization calculates prediction function
Upload date: 2016-12-27
Uploader: 脚趾头
This function applies the Optimal Brain Surgeon (OBS) strategy for pruning neural network models of dynamic systems, that is, networks trained by NNARX, NNOE, NNARMAX1, NNARMAX2, or their recursive counterparts (one pruning step is sketched below).
Tags: function strategy Optimal Surgeon
Upload date: 2013-12-19
Uploader: ma1301115706
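One OBS step (in the sense of Hassibi and Stork) is easy to state: remove the weight q with the smallest saliency w_q^2 / (2 [H^-1]_qq) and adjust all remaining weights to compensate. The sketch below applies this to a toy weight vector and positive-definite Hessian, not to a trained model.

import numpy as np

def obs_prune_once(w, H_inv):
    saliency = w**2 / (2.0 * np.diag(H_inv))   # estimated error increase per weight
    q = np.argmin(saliency)                    # cheapest weight to remove
    delta = -(w[q] / H_inv[q, q]) * H_inv[:, q]
    w_new = w + delta                          # compensating update of all weights
    w_new[q] = 0.0                             # the pruned weight is exactly zero
    return w_new, q, saliency[q]

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 6))
H = A @ A.T + 6 * np.eye(6)                    # toy positive-definite Hessian
w = rng.standard_normal(6)

w_pruned, q, cost = obs_prune_once(w, np.linalg.inv(H))
print("pruned weight index:", q, "estimated error increase:", cost)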
Train a two-layer neural network with a recursive prediction error algorithm ("recursive Gauss-Newton"). Pruned (i.e., not fully connected) networks can also be trained. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer while the second specifies the output layer (the recursive update is sketched below).
Tags: recursive prediction algorithm Gauss-Newton
Upload date: 2016-12-27
Uploader: ljt101007
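The recursive update behind such an algorithm can be sketched compactly: with prediction gradient psi, the parameter estimate and its covariance are updated per sample, and a forgetting factor discounts old data. The tiny one-hidden-unit model, the forgetting factor, and the data below are illustrative assumptions.

import numpy as np

def model_and_grad(theta, x):
    # y = a*tanh(b*x + c) + d and its gradient w.r.t. (a, b, c, d).
    a, b, c, d = theta
    h = np.tanh(b * x + c)
    y = a * h + d
    psi = np.array([h, a * (1 - h**2) * x, a * (1 - h**2), 1.0])
    return y, psi

rng = np.random.default_rng(7)
xs = rng.uniform(-2, 2, 1000)
ys = 1.5 * np.tanh(2.0 * xs - 0.5) + 0.3 + 0.02 * rng.standard_normal(1000)

theta = np.array([1.0, 1.0, 0.0, 0.0])
P = 100.0 * np.eye(4)          # large initial covariance: weak prior
lam = 0.995                    # forgetting factor

for x, yk in zip(xs, ys):
    yhat, psi = model_and_grad(theta, x)
    K = P @ psi / (lam + psi @ P @ psi)      # gain
    theta = theta + K * (yk - yhat)          # prediction error update
    P = (P - np.outer(K, psi @ P)) / lam

print("recursively estimated parameters:", theta)    # near [1.5, 2.0, -0.5, 0.3]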