Resource Description

Locality Preserving Projections (LPP), proposed by Xiaofei He, is a manifold learning algorithm for dimensionality reduction. Unlike many manifold learning methods, it is linear.
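To make the idea concrete, here is a minimal NumPy/SciPy sketch of LPP (an illustration, not the MATLAB implementation shared below): build a heat-kernel kNN affinity matrix, then solve the generalized eigenproblem on the graph Laplacian. The function name `lpp` and its parameters are assumptions for this sketch, and the PCA preprocessing step of the original code is skipped.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, t=1.0, k=5):
    """Minimal LPP sketch: heat-kernel kNN affinity, then the
    generalized eigenproblem X^T L X a = lambda X^T D X a."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # kNN heat-kernel affinity (column 0 of argsort is the point itself)
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / t)
    W = np.maximum(W, W.T)          # symmetrize
    D = np.diag(W.sum(axis=1))      # degree matrix
    L = D - W                       # graph Laplacian
    # smallest generalized eigenvectors of (X^T L X, X^T D X);
    # a tiny ridge keeps the right-hand side positive definite
    A = X.T @ L @ X
    B = X.T @ D @ X
    vals, vecs = eigh(A, B + 1e-9 * np.eye(B.shape[0]))
    return vecs[:, :n_components], vals[:n_components]

X = np.random.rand(50, 7)
eigvector, eigvalue = lpp(X)
Y = X @ eigvector                   # embedding, one row per data point
```

As in the MATLAB code, new points are embedded by a plain matrix product with `eigvector`, which is what makes the method linear.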

Code Snippet and File Information

function [eigvector, eigvalue, Y] = LPP(X, W, options)
% LPP: Locality Preserving Projections
%
%       [eigvector, eigvalue] = LPP(X, W, options)

%             Input:
%               X       - Data matrix. Each row vector of X is a data point.
%               W       - Affinity matrix. You can either call 'constructW'
%                         to construct the W or construct it by yourself.
%               options - Struct value in Matlab. The fields in options
%                         that can be set:
%                            ReducedDim   -  The dimensionality of the
%                                            reduced subspace. If 0,
%                                            all the dimensions will be
%                                            kept. Default is 0.
%                            PCARatio     -  The percentage of principal
%                                            components kept in the PCA
%                                            step. The percentage is
%                                            calculated based on the
%                                            eigenvalues. Default is 1
%                                            (100%, i.e., all the non-zero
%                                            eigenvalues will be kept).
%             Output:
%               eigvector - Each column is an embedding function for a new
%                           data point (row vector) x; y = x*eigvector
%                           will be the embedding result of x.
%               eigvalue  - The eigenvalues of the LPP eigen-problem,
%                           sorted from smallest to largest.


%       [eigvector, eigvalue, Y] = LPP(X, W, options)
%               
%               Y:  The embedding results. Each row vector is a data point.
%                   Y = X*eigvector
%
%
%    Examples:
%
%       fea = rand(50,70);
%       options = [];
%       options.Metric = 'Euclidean';
%       options.NeighborMode = 'KNN';
%       options.k = 5;
%       options.WeightMode = 'HeatKernel';
%       options.t = 1;
%       W = constructW(fea,options);
%       options.PCARatio = 0.99;
%       [eigvector, eigvalue, Y] = LPP(fea, W, options);
%       
%       
%       fea = rand(50,70);
%       gnd = [ones(10,1);ones(15,1)*2;ones(10,1)*3;ones(15,1)*4];
%       options = [];
%       options.Metric = 'Euclidean';
%       options.NeighborMode = 'Supervised';
%       options.gnd = gnd;
%       options.bLDA = 1;
%       W = constructW(fea,options);
%       options.PCARatio = 1;
%       [eigvector, eigvalue, Y] = LPP(fea, W, options);


% Note: After applying some simple algebra, the smallest eigenvalue problem:
%    X^T*L*X*a = \lambda X^T*D*X*a
%      is equivalent to the largest eigenvalue problem:
%    X^T*W*X*a = \beta X^T*D*X*a
%  where L=D-W;  \lambda = 1 - \beta.
% Thus the smallest eigenvalue problem can be transformed to a largest
% eigenvalue problem. Such tr
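The equivalence stated in the note can be checked numerically. This is a hedged sketch with made-up data, using SciPy's generalized symmetric eigensolver: since X^T*L*X = X^T*D*X - X^T*W*X, the two problems share eigenvectors and their eigenvalues satisfy lambda = 1 - beta.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.random((30, 5))            # 30 points, 5 features
W = rng.random((30, 30))
W = (W + W.T) / 2                  # symmetric nonnegative affinity
D = np.diag(W.sum(axis=1))         # degree matrix
L = D - W                          # graph Laplacian

B = X.T @ D @ X                    # right-hand side of both problems
lam = eigh(X.T @ L @ X, B, eigvals_only=True)   # ascending
beta = eigh(X.T @ W @ X, B, eigvals_only=True)  # ascending

# L = D - W implies lambda = 1 - beta, so the sorted
# spectra of the two problems are mirror images of each other
print(np.allclose(np.sort(lam), np.sort(1 - beta)))  # True
```

This is why the code can solve the better-conditioned largest-eigenvalue problem and recover the smallest eigenvalues of the original one.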

 Attribute      Size     Date    Time   Name
----------- ---------  ---------- -----  ----

     File       5487  2014-07-14 16:25  LPP.m

----------- ---------  ---------- -----  ----

                 5487                    1

