
Commit d62f973

Commit 3.2
1 parent aa863e8 commit d62f973

File tree

345 files changed: +25630 additions, -0 deletions


Contents.m

Lines changed: 174 additions & 0 deletions
@@ -0,0 +1,174 @@
% Netlab Toolbox
% Version 3.2.1 31-Oct-2001
%
% conffig  - Display a confusion matrix.
% confmat  - Compute a confusion matrix.
% conjgrad - Conjugate gradients optimization.
% consist  - Check that arguments are consistent.
% datread  - Read data from an ascii file.
% datwrite - Write data to ascii file.
% dem2ddat - Generates two dimensional data for demos.
% demard   - Automatic relevance determination using the MLP.
% demev1   - Demonstrate Bayesian regression for the MLP.
% demev2   - Demonstrate Bayesian classification for the MLP.
% demev3   - Demonstrate Bayesian regression for the RBF.
% demgauss - Demonstrate sampling from Gaussian distributions.
% demglm1  - Demonstrate simple classification using a generalized linear model.
% demglm2  - Demonstrate simple classification using a generalized linear model.
% demgmm1  - Demonstrate density modelling with a Gaussian mixture model.
% demgmm3  - Demonstrate density modelling with a Gaussian mixture model.
% demgmm4  - Demonstrate density modelling with a Gaussian mixture model.
% demgmm5  - Demonstrate density modelling with a PPCA mixture model.
% demgp    - Demonstrate simple regression using a Gaussian Process.
% demgpard - Demonstrate ARD using a Gaussian Process.
% demgpot  - Computes the gradient of the negative log likelihood for a mixture model.
% demgtm1  - Demonstrate EM for GTM.
% demgtm2  - Demonstrate GTM for visualisation.
% demhint  - Demonstration of Hinton diagram for 2-layer feed-forward network.
% demhmc1  - Demonstrate Hybrid Monte Carlo sampling on mixture of two Gaussians.
% demhmc2  - Demonstrate Bayesian regression with Hybrid Monte Carlo sampling.
% demhmc3  - Demonstrate Bayesian regression with Hybrid Monte Carlo sampling.
% demkmean - Demonstrate simple clustering model trained with K-means.
% demknn1  - Demonstrate nearest neighbour classifier.
% demmdn1  - Demonstrate fitting a multi-valued function using a Mixture Density Network.
% demmet1  - Demonstrate Markov Chain Monte Carlo sampling on a Gaussian.
% demmlp1  - Demonstrate simple regression using a multi-layer perceptron.
% demmlp2  - Demonstrate simple classification using a multi-layer perceptron.
% demnlab  - A front-end Graphical User Interface to the demos.
% demns1   - Demonstrate Neuroscale for visualisation.
% demolgd1 - Demonstrate simple MLP optimisation with on-line gradient descent.
% demopt1  - Demonstrate different optimisers on Rosenbrock's function.
% dempot   - Computes the negative log likelihood for a mixture model.
% demprgp  - Demonstrate sampling from a Gaussian Process prior.
% demprior - Demonstrate sampling from a multi-parameter Gaussian prior.
% demrbf1  - Demonstrate simple regression using a radial basis function network.
% demsom1  - Demonstrate SOM for visualisation.
% demtrain - Demonstrate training of MLP network.
% dist2    - Calculates squared distance between two sets of points.
% eigdec   - Sorted eigendecomposition.
% errbayes - Evaluate Bayesian error function for network.
% evidence - Re-estimate hyperparameters using evidence approximation.
% fevbayes - Evaluate Bayesian regularisation for network forward propagation.
% gauss    - Evaluate a Gaussian distribution.
% gbayes   - Evaluate gradient of Bayesian error function for network.
% glm      - Create a generalized linear model.
% glmderiv - Evaluate derivatives of GLM outputs with respect to weights.
% glmerr   - Evaluate error function for generalized linear model.
% glmevfwd - Forward propagation with evidence for GLM.
% glmfwd   - Forward propagation through generalized linear model.
% glmgrad  - Evaluate gradient of error function for generalized linear model.
% glmhess  - Evaluate the Hessian matrix for a generalised linear model.
% glminit  - Initialise the weights in a generalized linear model.
% glmpak   - Combines weights and biases into one weights vector.
% glmtrain - Specialised training of generalized linear model.
% glmunpak - Separates weights vector into weight and bias matrices.
% gmm      - Creates a Gaussian mixture model with specified architecture.
% gmmactiv - Computes the activations of a Gaussian mixture model.
% gmmem    - EM algorithm for Gaussian mixture model.
% gmminit  - Initialises Gaussian mixture model from data.
% gmmpak   - Combines all the parameters in a Gaussian mixture model into one vector.
% gmmpost  - Computes the class posterior probabilities of a Gaussian mixture model.
% gmmprob  - Computes the data probability for a Gaussian mixture model.
% gmmsamp  - Sample from a Gaussian mixture distribution.
% gmmunpak - Separates a vector of Gaussian mixture model parameters into its components.
% gp       - Create a Gaussian Process.
% gpcovar  - Calculate the covariance for a Gaussian Process.
% gpcovarf - Calculate the covariance function for a Gaussian Process.
% gpcovarp - Calculate the prior covariance for a Gaussian Process.
% gperr    - Evaluate error function for Gaussian Process.
% gpfwd    - Forward propagation through Gaussian Process.
% gpgrad   - Evaluate error gradient for Gaussian Process.
% gpinit   - Initialise Gaussian Process model.
% gppak    - Combines GP hyperparameters into one vector.
% gpunpak  - Separates hyperparameter vector into components.
% gradchek - Checks a user-defined gradient function using finite differences.
% graddesc - Gradient descent optimization.
% gsamp    - Sample from a Gaussian distribution.
% gtm      - Create a Generative Topographic Map.
% gtmem    - EM algorithm for Generative Topographic Mapping.
% gtmfwd   - Forward propagation through GTM.
% gtminit  - Initialise the weights and latent sample in a GTM.
% gtmlmean - Mean responsibility for data in a GTM.
% gtmlmode - Mode responsibility for data in a GTM.
% gtmmag   - Magnification factors for a GTM.
% gtmpost  - Latent space responsibility for data in a GTM.
% gtmprob  - Probability for data under a GTM.
% hbayes   - Evaluate Hessian of Bayesian error function for network.
% hesschek - Use central differences to confirm correct evaluation of Hessian matrix.
% hintmat  - Evaluates the coordinates of the patches for a Hinton diagram.
% hinton   - Plot Hinton diagram for a weight matrix.
% histp    - Histogram estimate of 1-dimensional probability distribution.
% hmc      - Hybrid Monte Carlo sampling.
% kmeans   - Trains a k means cluster model.
% knn      - Creates a K-nearest-neighbour classifier.
% knnfwd   - Forward propagation through a K-nearest-neighbour classifier.
% linef    - Calculate function value along a line.
% linemin  - One dimensional minimization.
% mdn      - Creates a Mixture Density Network with specified architecture.
% mdn2gmm  - Converts an MDN mixture data structure to array of GMMs.
% mdndist2 - Calculates squared distance between centres of Gaussian kernels and data.
% mdnerr   - Evaluate error function for Mixture Density Network.
% mdnfwd   - Forward propagation through Mixture Density Network.
% mdngrad  - Evaluate gradient of error function for Mixture Density Network.
% mdninit  - Initialise the weights in a Mixture Density Network.
% mdnpak   - Combines weights and biases into one weights vector.
% mdnpost  - Computes the posterior probability for each MDN mixture component.
% mdnprob  - Computes the data probability likelihood for an MDN mixture structure.
% mdnunpak - Separates weights vector into weight and bias matrices.
% metrop   - Markov Chain Monte Carlo sampling with Metropolis algorithm.
% minbrack - Bracket a minimum of a function of one variable.
% mlp      - Create a 2-layer feedforward network.
% mlpbkp   - Backpropagate gradient of error function for 2-layer network.
% mlpderiv - Evaluate derivatives of network outputs with respect to weights.
% mlperr   - Evaluate error function for 2-layer network.
% mlpevfwd - Forward propagation with evidence for MLP.
% mlpfwd   - Forward propagation through 2-layer network.
% mlpgrad  - Evaluate gradient of error function for 2-layer network.
% mlphdotv - Evaluate the product of the data Hessian with a vector.
% mlphess  - Evaluate the Hessian matrix for a multi-layer perceptron network.
% mlphint  - Plot Hinton diagram for 2-layer feed-forward network.
% mlpinit  - Initialise the weights in a 2-layer feedforward network.
% mlppak   - Combines weights and biases into one weights vector.
% mlpprior - Create Gaussian prior for mlp.
% mlptrain - Utility to train an MLP network for demtrain.
% mlpunpak - Separates weights vector into weight and bias matrices.
% netderiv - Evaluate derivatives of network outputs by weights generically.
% neterr   - Evaluate network error function for generic optimizers.
% netevfwd - Generic forward propagation with evidence for network.
% netgrad  - Evaluate network error gradient for generic optimizers.
% nethess  - Evaluate network Hessian.
% netinit  - Initialise the weights in a network.
% netopt   - Optimize the weights in a network model.
% netpak   - Combines weights and biases into one weights vector.
% netunpak - Separates weights vector into weight and bias matrices.
% olgd     - On-line gradient descent optimization.
% pca      - Principal Components Analysis.
% plotmat  - Display a matrix.
% ppca     - Probabilistic Principal Components Analysis.
% quasinew - Quasi-Newton optimization.
% rbf      - Creates an RBF network with specified architecture.
% rbfbkp   - Backpropagate gradient of error function for RBF network.
% rbfderiv - Evaluate derivatives of RBF network outputs with respect to weights.
% rbferr   - Evaluate error function for RBF network.
% rbfevfwd - Forward propagation with evidence for RBF.
% rbffwd   - Forward propagation through RBF network with linear outputs.
% rbfgrad  - Evaluate gradient of error function for RBF network.
% rbfhess  - Evaluate the Hessian matrix for RBF network.
% rbfjacob - Evaluate derivatives of RBF network outputs with respect to inputs.
% rbfpak   - Combines all the parameters in an RBF network into one weights vector.
% rbfprior - Create Gaussian prior and output layer mask for RBF.
% rbfsetbf - Set basis functions of RBF from data.
% rbfsetfw - Set basis function widths of RBF.
% rbftrain - Two stage training of RBF network.
% rbfunpak - Separates a vector of RBF weights into its components.
% rosegrad - Calculate gradient of Rosenbrock's function.
% rosen    - Calculate Rosenbrock's function.
% scg      - Scaled conjugate gradient optimization.
% som      - Creates a Self-Organising Map.
% somfwd   - Forward propagation through a Self-Organising Map.
% sompak   - Combines node weights into one weights matrix.
% somtrain - Kohonen training algorithm for SOM.
% somunpak - Replaces node weights in SOM.
%
% Copyright (c) Ian T Nabney (1996-2001)
%

conffig.m

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
function fh=conffig(y, t)
%CONFFIG Display a confusion matrix.
%
%	Description
%	CONFFIG(Y, T) displays the confusion matrix and classification
%	performance for the predictions Y compared with the targets T.
%	The data is assumed to be in a 1-of-N encoding, unless there is just
%	one column, when it is assumed to be a 2 class problem with a 0-1
%	encoding.  Each row of Y and T corresponds to a single example.
%
%	In the confusion matrix, the rows represent the true classes and the
%	columns the predicted classes.
%
%	FH = CONFFIG(Y, T) also returns the figure handle FH which can be
%	used, for instance, to delete the figure when it is no longer needed.
%
%	See also
%	CONFMAT, DEMTRAIN
%

%	Copyright (c) Ian T Nabney (1996-2001)

[C, rate] = confmat(y, t);

fh = figure('Name', 'Confusion matrix', ...
  'NumberTitle', 'off');

plotmat(C, 'k', 'k', 14);
title(['Classification rate: ' num2str(rate(1)) '%'], 'FontSize', 14);

confmat.m

Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
function [C,rate]=confmat(Y,T)
%CONFMAT Compute a confusion matrix.
%
%	Description
%	[C, RATE] = CONFMAT(Y, T) computes the confusion matrix C and
%	classification performance RATE for the predictions Y compared
%	with the targets T.  The data is assumed to be in a 1-of-N encoding,
%	unless there is just one column, when it is assumed to be a 2 class
%	problem with a 0-1 encoding.  Each row of Y and T corresponds to a
%	single example.
%
%	In the confusion matrix, the rows represent the true classes and the
%	columns the predicted classes.  The vector RATE has two entries: the
%	percentage of correct classifications and the total number of correct
%	classifications.
%
%	See also
%	CONFFIG, DEMTRAIN
%

%	Copyright (c) Ian T Nabney (1996-2001)

[n c]=size(Y);
[n2 c2]=size(T);

if n~=n2 | c~=c2
  error('Outputs and targets are different sizes')
end

if c > 1
  % Find the winning class assuming 1-of-N encoding
  [maximum Yclass] = max(Y', [], 1);

  TL=[1:c]*T';
else
  % Assume two classes with 0-1 encoding
  c = 2;
  class2 = find(T > 0.5);
  TL = ones(n, 1);
  TL(class2) = 2;
  class2 = find(Y > 0.5);
  Yclass = ones(n, 1);
  Yclass(class2) = 2;
end

% Compute
correct = (Yclass==TL);
total=sum(sum(correct));
rate=[total*100/n total];

C=zeros(c,c);
for i=1:c
  for j=1:c
    C(i,j) = sum((Yclass==j).*(TL==i));
  end
end
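The decode-then-tally logic of confmat.m can be sketched in pure Python (a hypothetical helper, not part of the toolbox; note the MATLAB original recovers the true label via `[1:c]*T'`, which for strict one-hot targets is equivalent to the argmax used here):

```python
def confmat(Y, T):
    """Confusion matrix C and rate = [percent correct, number correct].

    Y and T are lists of equal-length rows: 1-of-N encoded when rows
    have more than one column, otherwise a two-class 0/1 encoding,
    mirroring the conventions documented in confmat.m.
    """
    n = len(Y)
    if n != len(T) or len(Y[0]) != len(T[0]):
        raise ValueError('Outputs and targets are different sizes')
    c = len(Y[0])
    if c > 1:
        # Winning class under 1-of-N encoding: index of the largest entry.
        yclass = [max(range(c), key=lambda j: row[j]) for row in Y]
        tl = [max(range(c), key=lambda j: row[j]) for row in T]
    else:
        # Two classes with a 0-1 encoding: threshold at 0.5.
        c = 2
        yclass = [1 if row[0] > 0.5 else 0 for row in Y]
        tl = [1 if row[0] > 0.5 else 0 for row in T]
    # Tally: C[i][j] counts examples with true class i predicted as j.
    total = sum(1 for y, t in zip(yclass, tl) if y == t)
    rate = [total * 100.0 / n, total]
    C = [[sum(1 for y, t in zip(yclass, tl) if t == i and y == j)
          for j in range(c)] for i in range(c)]
    return C, rate
```

For example, three 1-of-N examples with one misclassification give a 2x2 matrix whose off-diagonal entry records the error, and a rate of two-thirds correct.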
