Use the Jacobi and Gauss-Seidel iterative methods to solve the following system. The required precision is ε = 0.00001, and the maximum number of iterations is N = 25. Compare the number of iterations and the convergence of the two methods. (A sketch of such a comparison follows this entry.)
Tags: Gauss-Seidel iterative following methods
Upload time: 2016-02-06
Uploader: zmy123
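A minimal sketch of the comparison described above, assuming a generic square system A*x = b; the matrix and right-hand side below are placeholders, not the system from the exercise, while tol and N follow the stated values.

% Compare Jacobi and Gauss-Seidel on a placeholder system A*x = b.
A = [10 -1 2; -1 11 -1; 2 -1 10];
b = [6; 25; -11];
tol = 1e-5;  N = 25;
D = diag(diag(A));  L = tril(A,-1);  U = triu(A,1);

% Jacobi: x_{k+1} = D \ (b - (L+U)*x_k)
x = zeros(size(b));
for k = 1:N
    xnew = D \ (b - (L+U)*x);
    if norm(xnew - x, inf) < tol, break, end
    x = xnew;
end
fprintf('Jacobi:       %d iterations\n', k);

% Gauss-Seidel: x_{k+1} = (D+L) \ (b - U*x_k)
x = zeros(size(b));
for k = 1:N
    xnew = (D + L) \ (b - U*x);
    if norm(xnew - x, inf) < tol, break, end
    x = xnew;
end
fprintf('Gauss-Seidel: %d iterations\n', k);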
The MDP toolbox provides functions for solving discrete-time Markov Decision Processes: finite-horizon, value-iteration, policy-iteration, and linear-programming algorithms, with some variants. The functions (m-functions) were developed with MATLAB v6.0 (one of the functions requires the MathWorks Optimization Toolbox) by the decision team of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France). Version 2.0 (February 2005) handles sparse matrices and contains an example. (A minimal value-iteration sketch follows this entry.)
Tags: discrete-time resolution functions Decision
Upload time: 2014-01-01
Uploader: xuanjie
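A generic value-iteration sketch for a small finite MDP; the transition array P and reward matrix R below are placeholder values, and the code does not use the toolbox's own function names or interfaces.

% Value iteration: V <- max_a [ R(:,a) + gamma * P(:,:,a) * V ]
nS = 2; nA = 2; gamma = 0.9; tol = 1e-6;
P = zeros(nS, nS, nA);
P(:,:,1) = [0.8 0.2; 0.4 0.6];   % transition probabilities for action 1
P(:,:,2) = [0.1 0.9; 0.7 0.3];   % transition probabilities for action 2
R = [5 10; -1 2];                % R(s,a): reward for action a in state s
V = zeros(nS, 1);
for it = 1:1000
    Q = zeros(nS, nA);
    for a = 1:nA
        Q(:,a) = R(:,a) + gamma * P(:,:,a) * V;
    end
    [Vnew, policy] = max(Q, [], 2);   % policy(s) = greedy action in state s
    if max(abs(Vnew - V)) < tol, break, end
    V = Vnew;
end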
A MATLAB application package for numerical linear algebra: 13 program functions in total, each paired with a corresponding example function named *Example.m. (A power-iteration sketch follows this entry.) Program name / purpose / method:
GrmSch.m - QR factorization - classical Gram-Schmidt orthogonalization
MGrmSch.m - QR factorization - modified Gram-Schmidt iteration
householder.m - QR factorization - Householder QR factorization
ZXEC.m - least-squares fitting - least-squares polynomial interpolant
NCLU.m - LU factorization - Gaussian elimination without pivoting
PALU.m - LU factorization - Gaussian elimination with partial pivoting
cholesky.m - Cholesky factorization - Cholesky factorization
PwItrt.m - largest eigenvalue - power iteration
Jacobi.m - eigenvalues - Jacobi iteration in standard row-cyclic order
Anld.m - upper Hessenberg reduction - Arnoldi iteration
zuisu.m - solving linear systems - steepest descent
CG.m - solving linear systems - conjugate gradient
BCG.m - solving linear systems - biconjugate gradient
Upload time: 2016-05-17
Uploader: 小鹏
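As an illustration of one routine listed above (power iteration for the largest eigenvalue), here is a minimal sketch with a placeholder matrix; it is not the PwItrt.m implementation itself.

% Power iteration: repeatedly apply A and normalize to approximate the
% dominant eigenvector, then estimate the eigenvalue by a Rayleigh quotient.
A = [4 1 0; 1 3 1; 0 1 2];          % placeholder symmetric matrix
x = ones(size(A,1), 1);  x = x / norm(x);
for k = 1:200
    y = A * x;
    xnew = y / norm(y);
    if norm(xnew - x) < 1e-10, break, end
    x = xnew;
end
lambda = x' * A * x;                % estimate of the largest eigenvalue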
% EM algorithm for k multidimensional Gaussian mixture estimation
%
% Inputs:
%   X(n,d)  - input data, n = number of observations, d = dimension of variable
%   k       - maximum number of Gaussian components allowed
%   ltol    - percentage of the log-likelihood difference between 2 iterations ([] for none)
%   maxiter - maximum number of iterations allowed ([] for none)
%   pflag   - 1 for plotting the GM for 1D or 2D cases only, 0 otherwise ([] for none)
%   Init    - structure of initial W, M, V: Init.W, Init.M, Init.V ([] for none)
%
% Outputs:
%   W(1,k)   - estimated weights of the GM
%   M(d,k)   - estimated mean vectors of the GM
%   V(d,d,k) - estimated covariance matrices of the GM
%   L        - log likelihood of the estimates
%
Tags: multidimensional estimation algorithm Gaussian
Upload time: 2013-12-03
Uploader: 我们的船长
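The header above does not name the function; assuming it is saved as EM_GM.m (a hypothetical name, not confirmed by the header), a call might look like the following.

% Hypothetical usage of the EM routine documented above; the function name
% EM_GM is assumed. Inputs follow the header: X, k, ltol, maxiter, pflag, Init.
X = [randn(200,2); randn(150,2) + 4];         % two well-separated 2-D clusters
k = 2;                                         % at most two Gaussian components
[W, M, V, L] = EM_GM(X, k, 1e-5, 500, 1, []);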
function [U,center,result,w,obj_fcn] = fenlei(data)
% Feature-weighted fuzzy c-means clustering (requires fcm from the Fuzzy
% Logic Toolbox and an external cluster-validity function F).
[data_n, in_n] = size(data);
m = 2;                 % exponent for U (fcm's default)
max_iter = 100;        % max. iterations
min_impro = 1e-5;      % min. improvement (not used below)
c = 3;                 % number of clusters
[center, U, obj_fcn] = fcm(data, c);
for i = 1:max_iter
    if F(U) > 0.98     % stop once the validity measure is high enough
        break
    else
        w_new = eye(in_n, in_n);
        center1 = sum(center)/c;
        a = center1(1)./center1;
        deta = center - center1(ones(c,1),:);
        w = sqrt(sum(deta.^2)).*a;           % feature weights
        for j = 1:in_n
            w_new(j,j) = w(j);
        end
        data1 = data*w_new;                  % rescale the features
        [center, U, obj_fcn] = fcm(data1, c);
        center = center./w(ones(c,1),:);     % map centers back to the original scale
        obj_fcn = obj_fcn/sum(w.^2);
    end
end
display(i)
% Assign each sample to the cluster with the largest membership.
result = zeros(1, data_n);
U_ = max(U);
for i = 1:data_n
    for j = 1:c
        if U(j,i) == U_(i)
            result(i) = j;
            continue
        end
    end
end
Tags: data function Exponent obj_fcn
Upload time: 2013-12-18
Uploader: ynzfm
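A hypothetical call to the fenlei function above, assuming fcm (Fuzzy Logic Toolbox) and the validity function F used inside the loop are on the path; the data below are placeholders.

% Hypothetical usage of fenlei on three synthetic 2-D clusters.
data = [randn(50,2); randn(50,2) + 3; randn(50,2) - 3];
[U, center, result, w, obj_fcn] = fenlei(data);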
% Train a two-layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, it is possible to use regularization by
% weight decay. Also pruned (i.e. not fully connected) networks can
% be trained.
%
% Given a set of corresponding input-output pairs and an initial
% network,
%   [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms)
% trains the network with the Levenberg-Marquardt method.
%
% The activation functions can be either linear or tanh. The
% network architecture is defined by the matrix NetDef, which
% has two rows. The first row specifies the hidden layer and the
% second row specifies the output layer.
Tags: Levenberg-Marquardt desired network neural
Upload time: 2016-12-27
Uploader: jcljkh
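A hedged usage sketch of the marq call documented above. The encodings of NetDef and trparms depend on the toolbox version, so the conventions assumed here ('H' = tanh hidden unit, 'L' = linear output unit, trparms = [max_iterations stop_criterion initial_lambda weight_decay]) should be checked against the toolbox documentation before use.

% Assumed conventions only; verify NetDef and trparms against the toolbox docs.
PHI = randn(2, 100);                       % 2 inputs, 100 training samples
Y   = sum(PHI.^2, 1) + 0.1*randn(1, 100);  % 1 output
NetDef = ['HHHHH'; 'L----'];               % 5 tanh hidden units, 1 linear output
W1 = 0.1*randn(5, 3);                      % hidden-layer weights (incl. bias column)
W2 = 0.1*randn(1, 6);                      % output-layer weights (incl. bias column)
trparms = [200 1e-4 1 0];
[W1, W2, critvec, iteration, lambda] = marq(NetDef, W1, W2, PHI, Y, trparms);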
The false-position method for solving a linear equation, the bisection method for solving a linear equation, and Jacobi iteration on a 3D plane. (A bisection sketch follows this entry.)
Tags: equation method linear solve
Upload time: 2014-09-11
Uploader: kelimu
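A minimal bisection sketch for a root of f on [a,b] with f(a)*f(b) < 0; the function and interval below are placeholders, not the equations from the upload.

% Bisection: repeatedly halve a bracketing interval [a,b].
% (False position is identical except c = (a*f(b) - b*f(a)) / (f(b) - f(a)).)
f = @(x) x.^3 - x - 2;      % placeholder function with a root near x = 1.52
a = 1;  b = 2;  tol = 1e-8;
while (b - a)/2 > tol
    c = (a + b)/2;
    if f(a)*f(c) <= 0
        b = c;              % root lies in [a, c]
    else
        a = c;              % root lies in [c, b]
    end
end
root = (a + b)/2;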
Program to solve a finite-difference discretization of the Helmholtz equation, (d2/dx2)u + (d2/dy2)u - alpha*u = f, using the Jacobi iterative method. COMMENTS: OpenMP version 3: 1 PR outside the iteration loop, 4 barriers. Directives are used in this code to achieve parallelism. All do loops are parallelized with default static scheduling. (A serial sketch of the Jacobi update follows this entry.)
Tags: discretization difference Helmholtz equation
Upload time: 2014-01-11
Uploader: bruce5996
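A serial MATLAB sketch of the Jacobi sweep for the discretization above; the original OpenMP program is not reproduced here, and the grid size, boundary condition, and right-hand side below are placeholder assumptions.

% Jacobi iteration for (d2/dx2)u + (d2/dy2)u - alpha*u = f on a uniform grid
% with spacing h and u = 0 on the boundary (five-point stencil).
n = 50;  h = 1/(n+1);  alpha = 1;  tol = 1e-6;
f = -ones(n, n);                  % placeholder right-hand side
u = zeros(n+2, n+2);              % interior unknowns plus a boundary layer of zeros
denom = 4/h^2 + alpha;
for it = 1:5000
    unew = u;
    unew(2:n+1,2:n+1) = ( (u(1:n,2:n+1) + u(3:n+2,2:n+1) ...
                         + u(2:n+1,1:n) + u(2:n+1,3:n+2)) / h^2 - f ) / denom;
    if max(max(abs(unew - u))) < tol, u = unew; break, end
    u = unew;
end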
A fractal is generally "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole,"[1] a property called self-similarity. The term was coined by Benoît Mandelbrot in 1975 and was derived from the Latin fractus meaning "broken" or "fractured." A mathematical fractal is based on an equation that undergoes iteration, a form of feedback based on recursion.[2]
Tags: fragmented generally geometric fractal
Upload time: 2014-01-18
Uploader: as275944189