
Machine Learning Project Report

A MODEL-FREE NEURAL CONTROL SCHEME FOR
VISUAL MOTOR CO-ORDINATION USING
KSOM NETWORK
(Akhil S Menon-14MAC0059,S.Shiva Subramanian-14MAC0061)

OBJECTIVE
To establish visual motor co-ordination of a robotic arm using a model-free
approach based on Kohonen's Self-Organizing Map (KSOM) network, considering a
three-dimensional view of the workspace, and to compare it with a model-based
approach. The required coding is performed in MATLAB.
DESCRIPTION
Model Based Approach: Here, we use a combined model of the manipulator and the
camera that can compute the joint-space co-ordinates, given the camera
co-ordinates. A model has to be derived that transforms the Cartesian
co-ordinates of the object into camera input co-ordinates, and the desired
transformation is obtained by the inverse kinematics method.

Fig.) VMC setup at IIT Kanpur


The required observations and data are obtained from, and refer to, the VMC
setup at IIT Kanpur.
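For reference, a minimal inverse-kinematics sketch for the 3-DOF arm is given below, assuming the forward model used in the second MATLAB listing (R = l2*cos(theta2) + l3*cos(theta3) + t, X = R*cos(theta1), Y = R*sin(theta1), Z = l2*sin(theta2) + l3*sin(theta3) + l1, with link lengths l1, l2, l3 and offset t taken from that listing); the elbow-up branch is chosen arbitrarily:

% Inverse-kinematics sketch (assumptions: forward model as in listing 2,
% elbow-up branch).  u = [X;Y;Z] is the desired Cartesian position.
function theta = inverse_kinematics(u, l1, l2, l3, t)
    theta1 = atan2(u(2), u(1));              % base rotation
    r = sqrt(u(1)^2 + u(2)^2) - t;           % planar reach of links l2 and l3
    z = u(3) - l1;                           % height above the shoulder joint
    cphi = (r^2 + z^2 - l2^2 - l3^2)/(2*l2*l3);
    phi = acos(max(-1, min(1, cphi)));       % elbow angle (theta3 - theta2)
    theta2 = atan2(z, r) - atan2(l3*sin(phi), l2 + l3*cos(phi));
    theta3 = theta2 + phi;                   % both angles measured from the horizontal
    theta = [theta1; theta2; theta3];
end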

Model Free Approach: Here, a learning paradigm is adopted that uses online data
to learn the required mapping. The approach is highly flexible and has the
potential to achieve high accuracy.
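The core of this learning paradigm is the Kohonen update rule used in the sample code below: for each training sample x, the winning neuron p with the weight vector closest to x is found, and every weight is pulled towards x in proportion to its lattice distance from p. In the notation of the code,

\[
w_i \leftarrow w_i + \eta\, h_{i,p}\,(x - w_i), \qquad
h_{i,p} = \exp\!\left(-\frac{\lVert d_i - d_p \rVert^2}{2\sigma^2}\right),
\]

where d_i is the lattice position of neuron i, and both the learning rate eta and the neighbourhood width sigma decay with the epoch number.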

MATLAB CODING
1) Code for a sample KSOM network

clc;
clear all;
close all;
m=0;
% for i=1:10;
%     for j=1:10;
%         m=m+1;
%         x(m,1:2)=[i j];
%     end
% end

x=rand(100,2);               % 100 random 2-D training points
w=rand(100,2);               % weight vectors of the 10x10 neuron lattice
m=0;
etao=2;                      % initial learning rate
sigo=10;                     % initial neighbourhood width
for i=1:10
    for j=1:10
        m=m+1;
        d(m,1:2)=[i j];      % lattice position of neuron m
    end
end
% figure(1);
% plot(d(:,1),d(:,2),'o');
figure(2);
plot(x(:,1),x(:,2),'og');
hold on;
for epoch=1:100
    eta=etao*exp(-epoch/3);  % decaying learning rate
    sig=sigo*exp(-epoch/10); % shrinking neighbourhood width
    for k=1:100              % present every training point once per epoch
        for i=1:100
            s(i)=(x(k,1)-w(i,1))^2+(x(k,2)-w(i,2))^2;
        end
        [~,p]=min(s);        % winning neuron: weight closest to x(k,:)
        for i=1:100
            t(i)=(d(i,1)-d(p,1))^2+(d(i,2)-d(p,2))^2;
            h(i)=exp(-t(i)/(2*sig^2));                  % Gaussian neighbourhood
            w(i,:)=w(i,:)+eta*h(i)*(x(k,:)-w(i,:));     % pull weights towards x(k,:)
        end
    end
    % figure(3);
    % plot(w(:,1),w(:,2),'*r')
    % drawnow;
    % pause(1);
end
plot(w(:,1),w(:,2),'*r');
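Optionally (not part of the original listing), the trained lattice can also be drawn with its grid neighbours connected, to visualise the topology preservation of the map; a small sketch reusing the w variable above:

% Optional sketch: connect grid neighbours of the trained 10x10 lattice
W = reshape(w,[10 10 2]);            % 10x10 grid of 2-D weight vectors
for r = 1:10
    plot(W(r,:,1),W(r,:,2),'b-');    % one set of lattice lines
    plot(W(:,r,1),W(:,r,2),'b-');    % the orthogonal set of lattice lines
end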

Result of the above code

2) Code for the model-based approach, applying inverse kinematics
(considering a 3-D workspace with 336 nodes)
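The listing below trains a 336-node KSOM lattice over the workspace. Each node i stores a position estimate w_i, a joint-angle estimate theta_i and a Jacobian estimate A_i, and for a target u_t the network produces a coarse output followed by a corrective fine output (the notation here is reconstructed from the code, so treat it as a sketch):

\[
\theta_0 = \frac{\sum_i h_i\,\bigl(\theta_i + A_i(u_t - w_i)\bigr)}{\sum_i h_i},
\qquad
\theta_1 = \theta_0 + \frac{\sum_i h_i\,A_i(u_t - u_0)}{\sum_i h_i},
\]

where h_i = exp(-||g_i - g_p||^2 / 2*sigma^2) is the neighbourhood of the winning node p and u_0, u_1 are the end-effector positions reached with theta_0 and theta_1. The pair (theta_1 - theta_0, u_1 - u_0) then drives the updates of w_i, theta_i and A_i at the learning rate eta.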
clc;
clear all;
close all;
% Initialization
del_theta=zeros(3,1,336);
l1=25.4;                      % link lengths of the 3-DOF arm
l2=25.4;
l3=25.4;
t=5;                          % fixed offset in the horizontal reach
err=zeros(5000,1);            % positioning error per iteration
del_u=zeros(3,1,500);
% neuron parameter initializations
% gamma definition: lattice indices of the 12 x 7 x 4 = 336 nodes
g=zeros(3,1,336);
n=1;
for k=0:3
    for j=0:6
        for i=0:11
            g(:,:,n)=[i j k]';
            n=n+1;
        end
    end
end
for n=1:300
    theta(:,:,n)=rand(3,1);                                 % random joint angles
    R=l2*cos(theta(2,:,n))+l3*cos(theta(3,:,n))+t;
    X=R*cos(theta(1,:,n));
    Y=R*sin(theta(1,:,n));
    Z=l2*sin(theta(2,:,n))+l3*sin(theta(3,:,n))+l1;
    ut(:,:,n)=[X,Y,Z]';                                     % target position via forward kinematics
end
figure();
scatter3(squeeze(ut(1,1,:)),squeeze(ut(2,1,:)),squeeze(ut(3,1,:))); hold on;
wg=60*rand(3,1,336);                                        % initial position estimates of the nodes
scatter3(squeeze(wg(1,1,:)),squeeze(wg(2,1,:)),squeeze(wg(3,1,:)),'r');
thetag=rand(3,1,336);                                       % initial joint-angle estimates
Ag=rand(3,3,336);                                           % initial Jacobian estimates
%%
for iter=1:5000
    iter                 % display current iteration
    % for p=1:300
    so=0;                % summation initialization for normalizer
    sg=[0 0 0]';         % summation initialization for theta_o out
    sAg=[0 0 0]';        % summation initialization for jacobian matrix
    sg1=[0 0 0]';        % summation initialization for theta_1 out
    sthg=[0 0 0]';       % summation initialization for reference vector theta_g

    % determination of winning neuron
    for n=1:336
        dist(n)=norm(wg(:,:,n)-ut(:,:,1));
    end
    [~,pos]=min(dist);   % node whose position estimate is closest to the target
    S=g(:,:,pos);        % lattice index of winning neuron
    % Calculation of distance measure
    eta(iter)=(0.05)^(iter/5000);              % learning rate schedule
    sgma(iter)=2.5*(0.01/2.5)^(iter/5000);     % neighbourhood width schedule
    for i=1:336
        hg(i)=exp(-1*(norm(g(:,:,i)-S)^2)/(2*(sgma(iter))^2));
        so=so+hg(i);
        % theta_o output calculation
        sg=sg+hg(i).*(thetag(:,:,i)+Ag(:,:,i)*(ut(:,:,1)-wg(:,:,i)));
    end
    theta_o=sg/so;                             % coarse joint-angle output
    R=l2*cos(theta_o(2))+l3*cos(theta_o(3))+t;
    Xo=R*cos(theta_o(1));
    Yo=R*sin(theta_o(1));
    Zo=l2*sin(theta_o(2))+l3*sin(theta_o(3))+l1;
    uo=[Xo Yo Zo]';                            % position reached with theta_o
    % calculation of theta_1 (corrective fine output)
    for i=1:336
        sg1=sg1+hg(i)*Ag(:,:,i)*(ut(:,:,1)-uo);
    end
    theta_1=theta_o+sg1/so;
    % evaluation of u_1
    R=l2*cos(theta_1(2))+l3*cos(theta_1(3))+t;
    X1=R*cos(theta_1(1));
    Y1=R*sin(theta_1(1));
    Z1=l2*sin(theta_1(2))+l3*sin(theta_1(3))+l1;
    u1=[X1 Y1 Z1]';                            % position reached with theta_1
    % Parameter updates
    del_u(:,:,1)=u1-uo;
    del_theta=theta_1-theta_o;
    for i=1:336
        sAg=sAg+hg(i)*(Ag(:,:,i)*del_u(:,:,1));
        sthg=sthg+hg(i)*(thetag(:,:,i)+Ag(:,:,i)*(uo-wg(:,:,i)));
    end
    for i=1:336
        Ag(:,:,i)=Ag(:,:,i)+(eta(iter)*hg(i)*((del_theta-sAg/so)*(del_u(:,:,1))')/(so*(norm(del_u(:,:,1)))^2));
        thetag(:,:,i)=thetag(:,:,i)+(eta(iter)*hg(i)*(theta_o-sthg/so)/so);
        wg(:,:,i)=wg(:,:,i)+(eta(iter)*hg(i)).*(ut(:,:,1)-wg(:,:,i));
    end
    err(iter)=norm(ut(:,:,1)-u1);              % residual positioning error
    % end
end
figure();
scatter3(squeeze(ut(1,1,:)),squeeze(ut(2,1,:)),squeeze(ut(3,1,:))); hold on;
% wg=rand(3,1,336);%figure();
scatter3(squeeze(wg(1,1,:)),squeeze(wg(2,1,:)),squeeze(wg(3,1,:)),'r');
%%
figure();
plot(err);
xlabel('number of iterations');
ylabel('error');
title('ut-u1');
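Once trained, the network can be queried for a new target position in the same way the coarse output is computed inside the loop; a minimal sketch, reusing the variables from the listing above and assuming a hypothetical new target u_new:

% Sketch: joint angles for a new target u_new using the trained network
u_new = ut(:,:,1);                      % placeholder target (hypothetical)
for n = 1:336
    dist(n) = norm(wg(:,:,n) - u_new);  % distance of each node to the target
end
[~,pos] = min(dist);
S = g(:,:,pos);                         % winning node on the lattice
so = 0; sg = [0 0 0]';
for i = 1:336
    hg(i) = exp(-(norm(g(:,:,i)-S)^2)/(2*(sgma(end))^2));
    so = so + hg(i);
    sg = sg + hg(i)*(thetag(:,:,i) + Ag(:,:,i)*(u_new - wg(:,:,i)));
end
theta_cmd = sg/so;                      % joint angles to command the arm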

Results of the above code

Fig 1) Scattered parameters
Fig 2) Updated parameters
Fig 3) Error plot for 5000 iterations

FUTURE SCOPE
Work is currently in progress on the KSOM-based model-free approach for visual
motor manipulation, using 336 nodes and 3000 iterations. After the results are
obtained, they will be compared with those of the model-based approach.
