Tabs modified.

parent 0f3045f4
function model = EM_tensorGMM(Data, model)
% Training of a task-parameterized Gaussian mixture model (GMM) with an
% expectation-maximization (EM) algorithm. The approach allows the modulation
% of the centers and covariance matrices of the Gaussians with respect to
% external parameters represented in the form of candidate coordinate systems.
%
% Author: Sylvain Calinon, 2014
% http://programming-by-demonstration.org/SylvainCalinon
%
% This source code is given for free! In exchange, I would be grateful if you cite
% the following reference in any academic publication that uses this code or part of it:
%
% @inproceedings{Calinon14ICRA,
%   author="Calinon, S. and Bruno, D. and Caldwell, D. G.",
...

@@ -34,131 +34,45 @@
for nbIter=1:nbMaxSteps
	%E-step
	[L, GAMMA, GAMMA0] = computeGamma(Data, model); %See 'computeGamma' function below and Eq. (2.0.5) in doc/TechnicalReport.pdf
	GAMMA2 = GAMMA ./ repmat(sum(GAMMA,2),1,nbData);
	%M-step
	for i=1:model.nbStates
		%Update Priors
		model.Priors(i) = sum(sum(GAMMA(i,:))) / nbData; %See Eq. (2.0.6) in doc/TechnicalReport.pdf
		for m=1:model.nbFrames
			%Matricization/flattening of tensor
			DataMat(:,:) = Data(:,m,:);
			%Update Mu
			model.Mu(:,m,i) = DataMat * GAMMA2(i,:)'; %See Eq. (2.0.7) in doc/TechnicalReport.pdf
			%Update Sigma (regularization term is optional)
			DataTmp = DataMat - repmat(model.Mu(:,m,i),1,nbData);
			model.Sigma(:,:,m,i) = DataTmp * diag(GAMMA2(i,:)) * DataTmp' + eye(model.nbVar) * diagRegularizationFactor; %See Eq. (2.0.8) and (2.1.2) in doc/TechnicalReport.pdf
		end
	end
	%Compute average log-likelihood
	LL(nbIter) = sum(log(sum(L,1))) / size(L,2); %See Eq. (2.0.4) in doc/TechnicalReport.pdf
	%Stop the algorithm if EM converged (small change of LL)
	if nbIter>nbMinSteps
		if LL(nbIter)-LL(nbIter-1) < maxDiffLL
			...
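The M-step above can be sketched in NumPy for a single coordinate frame. This is a hypothetical helper (`m_step` and its argument names are not part of Calinon's codebase); it assumes the responsibilities `gamma` were already produced by an E-step, and follows the same equation references as the MATLAB comments.

```python
import numpy as np

def m_step(data, gamma, diag_reg=1e-8):
    """One GMM M-step from responsibilities (sketch of Eqs. 2.0.6-2.0.8).

    data:  (nb_var, nb_data) observations for one frame
    gamma: (nb_states, nb_data) E-step responsibilities
    """
    nb_var, nb_data = data.shape
    # Responsibilities normalized per state (GAMMA2 in the MATLAB code)
    gamma2 = gamma / gamma.sum(axis=1, keepdims=True)
    # Priors: average responsibility mass per state (Eq. 2.0.6)
    priors = gamma.sum(axis=1) / nb_data
    # Means: responsibility-weighted average of the data (Eq. 2.0.7)
    mu = data @ gamma2.T                      # (nb_var, nb_states)
    # Covariances with optional diagonal regularization (Eqs. 2.0.8, 2.1.2)
    sigma = np.empty((gamma.shape[0], nb_var, nb_var))
    for i in range(gamma.shape[0]):
        centered = data - mu[:, [i]]
        sigma[i] = centered @ np.diag(gamma2[i]) @ centered.T \
                   + np.eye(nb_var) * diag_reg
    return priors, mu, sigma
```

With a single state and uniform responsibilities this reduces to the ordinary sample mean and (biased) sample covariance, which is a quick sanity check on the weighting.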
...

@@ -12,7 +12,7 @@
for t=1:nbData
	%Compute activation weight %See Eq. (3.0.5) in doc/TechnicalReport.pdf
	for i=1:model.nbStates
		H(i,t) = model.Priors(i) * gaussPDF(DataIn(:,t), model.Mu(in,i), model.Sigma(in,in,i));
	end
	H(:,t) = H(:,t)/sum(H(:,t));
	%Compute expected conditional means
	...

@@ -25,9 +25,9 @@
	%See Eq. (3.0.4) in doc/TechnicalReport.pdf
	for i=1:model.nbStates
		SigmaTmp = model.Sigma(out,out,i) - model.Sigma(out,in,i)/model.Sigma(in,in,i) * model.Sigma(in,out,i);
		expSigma(:,:,t) = expSigma(:,:,t) + H(i,t) * (SigmaTmp + MuTmp(:,i)*MuTmp(:,i)');
		for j=1:model.nbStates
			expSigma(:,:,t) = expSigma(:,:,t) - H(i,t)*H(j,t) * (MuTmp(:,i)*MuTmp(:,j)');
		end
	end
end
...
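The Gaussian mixture regression (GMR) step in this hunk can likewise be sketched in NumPy for a single query point. The function below is a hypothetical illustration, not the repository's API: it conditions each joint Gaussian on the input dimensions, blends the conditional means with the activation weights (Eq. 3.0.5), and accumulates the conditional covariance including the cross terms between components (Eq. 3.0.4), mirroring the double loop over `i` and `j` in the MATLAB code.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Multivariate Gaussian density N(x | mu, sigma)."""
    d = np.atleast_1d(x) - np.atleast_1d(mu)
    sigma = np.atleast_2d(sigma)
    k = d.shape[0]
    return np.exp(-0.5 * d @ np.linalg.solve(sigma, d)) \
        / np.sqrt((2 * np.pi) ** k * np.linalg.det(sigma))

def gmr(x_in, priors, mu, sigma, in_idx, out_idx):
    """GMR at one input point (sketch of Eqs. 3.0.4-3.0.5).

    mu:    (nb_var, nb_states); sigma: (nb_states, nb_var, nb_var)
    Returns the expected conditional mean and covariance of the outputs.
    """
    nb_states = len(priors)
    # Activation weights h_i(x_in), as H(:,t) in the MATLAB code (Eq. 3.0.5)
    h = np.array([priors[i] * gauss_pdf(x_in, mu[in_idx, i],
                                        sigma[i][np.ix_(in_idx, in_idx)])
                  for i in range(nb_states)])
    h /= h.sum()
    # Per-component conditional means, blended into the expected mean
    mu_i = np.zeros((nb_states, len(out_idx)))
    mu_cond = np.zeros(len(out_idx))
    for i in range(nb_states):
        s = sigma[i]
        gain = s[np.ix_(out_idx, in_idx)] \
            @ np.linalg.inv(s[np.ix_(in_idx, in_idx)])
        mu_i[i] = mu[out_idx, i] + gain @ (x_in - mu[in_idx, i])
        mu_cond += h[i] * mu_i[i]
    # Expected conditional covariance (Eq. 3.0.4), as expSigma in MATLAB:
    # per-component conditional covariances plus mean cross terms
    cov = np.zeros((len(out_idx), len(out_idx)))
    for i in range(nb_states):
        s = sigma[i]
        s_tmp = s[np.ix_(out_idx, out_idx)] \
            - s[np.ix_(out_idx, in_idx)] \
            @ np.linalg.inv(s[np.ix_(in_idx, in_idx)]) \
            @ s[np.ix_(in_idx, out_idx)]
        cov += h[i] * (s_tmp + np.outer(mu_i[i], mu_i[i]))
        for j in range(nb_states):
            cov -= h[i] * h[j] * np.outer(mu_i[i], mu_i[j])
    return mu_cond, cov
```

Note that the MATLAB line `model.Sigma(out,in,i)/model.Sigma(in,in,i)` uses matrix right division, which the sketch spells out as a multiplication by the inverse of the input-block covariance.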