ECE 8527: INTRODUCTION TO MACHINE LEARNING AND PATTERN
RECOGNITION
Exam NO. 1
Meysam Golmohammadi
Problem No. 1: Consider a two-class discrete distribution problem:
ω1: {[0,0], [2,0], [2,2], [0,2]}
ω2: {[1,1], [2,1], [1,2], [3,3]}
(20 pts) (a) Compute the minimum achievable error rate by a linear machine (hint: draw a picture of the
data). Assume the classes are equiprobable.
(10 pts) (b) Assume the priors for each class are: P(ω1) = α and P(ω2) = 1 - α. Sketch P(E) as a
function of α for a maximum likelihood classifier based on the assumption that each class is drawn from
a multivariate Gaussian distribution. Compare and contrast your answer with your answer to (a). Be very
specific in your sketch and label all critical points. Unlabeled plots will receive no partial credit.
(5 pts) (c) Assume you are not constrained to a linear machine. What is the minimum achievable error
rate that can be achieved for this data? Is this value different than (a)? If so, why? How might you
achieve such a solution? Compare and contrast this solution to (a).
Solution:
Since we have no additional knowledge, we invoke Occam's Razor and use the simplest reasonable model: a
Gaussian model for the data in each class. Next, we compute the mean vector and covariance matrix
(maximum likelihood estimates) for each class:
Class 1:
ω1: {[0,0], [2,0], [2,2], [0,2]}
μ1 = [1 1]
Σ1 = [1 0; 0 1]
Class 2:
ω2: {[1,1], [2,1], [1,2], [3,3]}
μ2 = [1.75 1.75]
Σ2 = [0.69 0.44; 0.44 0.69]
A linear machine is a classifier that uses linear discriminant functions. Because the covariance matrices
of the two classes are different, the Gaussian maximum likelihood discriminant functions for this problem
are inherently quadratic and can be expressed as:
g_i(x) = x^T W_i x + w_i^T x + w_{i0}     (1.1)

where

W_i = -(1/2) Σ_i^{-1}     (1.2)

w_i = Σ_i^{-1} μ_i     (1.3)

and

w_{i0} = -(1/2) μ_i^T Σ_i^{-1} μ_i - (1/2) ln|Σ_i| + ln P(ω_i)     (1.4)
The resulting decision surfaces are hyperquadrics and are obtained by setting

g_i(x) = g_j(x)     (1.5)
Using the Matlab code presented below and equations (1.1)-(1.4), for class 1 we obtain:

W_1 = [-0.5 0; 0 -0.5]

w_1 = [1; 1]

w_10 = -1.69

g1 = -x1^2/2 + x1 - x2^2/2 + x2 - 1.69
Using the same approach for class 2 we have:

W_2 = [-1.22 0.78; 0.78 -1.22]

w_2 = [1.56; 1.56]

w_20 = -2.78

g2 = (14*x1)/9 + (14*x2)/9 - x1*((11*x1)/9 - (7*x2)/9) + x2*((7*x1)/9 - (11*x2)/9) - 2.78
Setting g1(x) = g2(x) as in (1.5), the decision boundary is

x1*((11*x1)/9 - (7*x2)/9) - (5*x2)/9 - x1^2/2 - x2^2/2 - (5*x1)/9 - x2*((7*x1)/9 - (11*x2)/9) + 1.09 = 0

This boundary is plotted in Fig. 1. For part (a), the minimum error rate achievable by a linear machine on
this data is 25%: any single linear boundary misclassifies at least two of the eight points.
Fig 1
The Matlab code is given below.
% -------------------------------------------------------------------------
% Author: Meysam Golmohammadi
% Date  : 10/01/2015
% -------------------------------------------------------------------------
% Exam No. 1
%% Problem 1. Part 1
% -------------------------------------------------------------------------
close all
clc
clear

% -------------------------------------------------------------------------
% define symbols for the two feature dimensions
%
syms x1 x2
x = [x1; x2];

% -------------------------------------------------------------------------
% calculate the discriminant function for class 1
%
p_class1 = 0.5;                                  % prior probability of class 1
class1 = [0 0; 2 0; 0 2; 2 2];
mean_class1 = transpose(mean(class1));
cov_class1 = 3/4*cov(class1);                    % ML covariance: (n-1)/n * cov
wu1 = -0.5*inv(cov_class1);                      % W_1, eq. (1.2)
wl1 = inv(cov_class1)*mean_class1;               % w_1, eq. (1.3)
wi01 = -0.5*transpose(mean_class1)*inv(cov_class1)*mean_class1 ...
       - 0.5*log(det(cov_class1)) + log(p_class1);   % w_10, eq. (1.4)
g1 = transpose(x)*wu1*x + transpose(wl1)*x + wi01;

% -------------------------------------------------------------------------
% calculate the discriminant function for class 2
%
p_class2 = 0.5;
class2 = [1 1; 2 1; 1 2; 3 3];
mean_class2 = transpose(mean(class2));
cov_class2 = 3/4*cov(class2);
wu2 = -0.5*inv(cov_class2)
wl2 = inv(cov_class2)*mean_class2
wi02 = -0.5*transpose(mean_class2)*inv(cov_class2)*mean_class2 ...
       - 0.5*log(det(cov_class2)) + log(p_class2)
g2 = transpose(x)*wu2*x + transpose(wl2)*x + wi02

% -------------------------------------------------------------------------
% plot the decision boundary and the data points
%
curv = g1 - g2
ezplot(curv, [0 5 0 5])
hold on
x = [0 2 0 2];
y = [0 0 2 2];
plot(x, y, '*')
hold on
x = [1 2 1 3];
y = [1 1 2 3];
plot(x, y, '*')
axis square
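As a quick check of the 25% figure quoted for part (a), the short script below classifies the eight points
with one example linear boundary, the line x1 + x2 = 2.5 (an illustrative choice of mine, not taken from the
figure above), and counts the errors:

% A minimal check of the 25% figure from part (a). The boundary
% x1 + x2 = 2.5 is an illustrative choice; it misclassifies exactly
% two of the eight points, giving an error rate of 0.25.
X      = [0 0; 2 0; 2 2; 0 2; 1 1; 2 1; 1 2; 3 3];   % all eight samples
labels = [1 1 1 1 2 2 2 2]';                         % true class labels
pred   = ones(8, 1);
pred(sum(X, 2) > 2.5) = 2;           % class 2 if x1 + x2 > 2.5, else class 1
error_rate = mean(pred ~= labels)    % expected: 0.25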
Additionally, this problem was solved using the Java applet; the result is shown in Fig. 2.
Fig 2
1.b)
There are several possible answers to this problem depending on the assumptions we make. One answer is
sketched in Fig. 3.
In general, when P(ω1) > P(ω2), i.e. the prior probability of class 1 is greater than that of class 2, the
decision boundary moves toward the mean of class 2, enlarging the region assigned to class 1 (the posterior
probability of class 1 increases). Conversely, when P(ω1) < P(ω2), the boundary moves toward the mean of
class 1, enlarging the region assigned to class 2.
Fig. 3
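One concrete way to produce such a sketch is outlined below. This is a minimal sketch rather than the
original solution's code, and it assumes that P(E) is measured as the prior-weighted empirical error of the
Gaussian ML classifier on the eight training points; the curve it plots is therefore only one of the
possible answers referred to above.

% Sketch (assumption: P(E) is taken as the prior-weighted empirical error
% on the eight training points). For each alpha, the points are classified
% with the Gaussian discriminants of part (a) using priors alpha and 1-alpha.
mu1 = [1; 1];        S1 = eye(2);
mu2 = [1.75; 1.75];  S2 = [0.6875 0.4375; 0.4375 0.6875];
X1 = [0 0; 2 0; 2 2; 0 2];                 % class 1 samples
X2 = [1 1; 2 1; 1 2; 3 3];                 % class 2 samples
g = @(x, mu, S, prior) -0.5*(x - mu)'*(S\(x - mu)) ...
    - 0.5*log(det(S)) + log(prior);        % equivalent to eqs. (1.1)-(1.4) up to a common constant
alphas = 0.01:0.01:0.99;
PE = zeros(size(alphas));
for k = 1:numel(alphas)
    a = alphas(k);
    e1 = 0; e2 = 0;
    for i = 1:4
        if g(X1(i,:)', mu1, S1, a) < g(X1(i,:)', mu2, S2, 1 - a), e1 = e1 + 1; end
        if g(X2(i,:)', mu2, S2, 1 - a) < g(X2(i,:)', mu1, S1, a), e2 = e2 + 1; end
    end
    PE(k) = a*(e1/4) + (1 - a)*(e2/4);     % prior-weighted error rate
end
plot(alphas, PE), xlabel('\alpha'), ylabel('P(E)')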
1.c)
Without the linear constraint, many curves can separate the two classes completely. Using the SVM algorithm
in the Java applet, one such boundary is illustrated in Fig. 4; it achieves a minimum error of zero on this
data. However, this is a case of overtraining (overfitting): on new data drawn from the same sources, the
error rate of such a boundary would likely be higher than that of the simpler classifier in (a).
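For a programmatic counterpart to the applet result, the sketch below assumes the MATLAB Statistics and
Machine Learning Toolbox is available and fits an RBF-kernel SVM with a very large box constraint (an
approximately hard margin) to the eight points; because the points are distinct, they are separable in the
kernel feature space and the training error is zero, which is exactly the overtraining behavior described
above.

% A minimal sketch, assuming the Statistics and Machine Learning Toolbox
% is installed; this is not the applet's implementation.
X = [0 0; 2 0; 2 2; 0 2; 1 1; 2 1; 1 2; 3 3];
Y = [1; 1; 1; 1; 2; 2; 2; 2];
mdl = fitcsvm(X, Y, 'KernelFunction', 'rbf', 'BoxConstraint', 1e6);
train_error = resubLoss(mdl)       % expected to be 0 on the training set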
Problem No. 2: Suppose we have a random sample X1, X2,..., Xn where:
• Xi = 0 if a randomly selected student does not own a laptop, and
• Xi = 1 if a randomly selected student does own a laptop.
(35 pts) (a) Assuming that the Xi are independent Bernoulli random variables with unknown
parameter p:
P(x_i; p) = p^{x_i} (1 - p)^{1 - x_i}
where x_i = 0 or 1 and 0 < p < 1. Find the maximum likelihood estimator of p, the proportion of
students who own a laptop.
Solution:
P(x_i; p) = p^{x_i} (1 - p)^{1 - x_i}

The likelihood for a particular sequence of n samples is

P(x; p) = ∏_{i=1}^{n} p^{x_i} (1 - p)^{1 - x_i}

and the log-likelihood function is then

l(p) = Σ_{i=1}^{n} [ x_i ln p + (1 - x_i) ln(1 - p) ]

To find the maximum of l(p), we set ∇_p l(p) = 0 and get

∇_p l(p) = (1/p) Σ_{i=1}^{n} x_i - (1/(1 - p)) Σ_{i=1}^{n} (1 - x_i) = 0

This implies that

(1/p) Σ_{i=1}^{n} x_i = (1/(1 - p)) Σ_{i=1}^{n} (1 - x_i)

which can be rewritten as

(1 - p) Σ_{i=1}^{n} x_i = p ( n - Σ_{i=1}^{n} x_i )

The final solution is then

p̂ = (1/n) Σ_{i=1}^{n} x_i
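As a quick numerical check of this result, the script below draws a simulated Bernoulli sample and compares
the closed-form estimate with a brute-force maximization of the log-likelihood; the sample size n = 20 and
true probability 0.7 are arbitrary illustrative values, not part of the problem.

% Numerical check that the ML estimate of p is the sample mean.
% n = 20 and p_true = 0.7 are arbitrary illustrative values.
n = 20;
p_true = 0.7;
x = rand(1, n) < p_true;                 % simulated Bernoulli sample (0/1)
p_hat = sum(x)/n                         % closed-form ML estimate
p_grid = 0.01:0.001:0.99;                % brute-force search over p
loglik = sum(x)*log(p_grid) + (n - sum(x))*log(1 - p_grid);
[~, idx] = max(loglik);
p_brute = p_grid(idx)                    % agrees with p_hat up to grid spacing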
Problem No. 3: Let's assume you have a 2D Gaussian source which generates random vectors of the form
[x1, x2]. You observe the following data: [1,1], [2,2], [3,3]. You were told the mean of this source was 0
and the standard deviation was 1.
(25 pts) (a) Using Bayesian estimation techniques, what is your best estimate of the mean based on
these observations?
(5 pts) (b) Now, suppose you observe a 4th value: [0,0]. How does this impact your estimate of the
mean? Explain, being as specific as possible. Support your explanation with calculations and equations.
Solution:
3.a)
Assuming a 2D Gaussian source, we first compute the sample mean vector and (ML) covariance matrix of the
observations:

D: {[1,1], [2,2], [3,3]}

μ̂ = [2; 2]

Σ = [0.67 0.67; 0.67 0.67]
In Bayesian estimation, the prior information is combined with the empirical information in the samples to
obtain the a posteriori density p(μ|D). For 1D data we have these equations:
μn = (n σ0² / (n σ0² + σ²)) μ̂n + (σ² / (n σ0² + σ²)) μ0

σn² = σ0² σ² / (n σ0² + σ²)
For the multivariate case we have these equations:

μn = Σ0 (Σ0 + (1/n) Σ)^{-1} μ̂n + (1/n) Σ (Σ0 + (1/n) Σ)^{-1} μ0

Σn = (1/n) Σ0 (Σ0 + (1/n) Σ)^{-1} Σ
Here we have:

μ0 = [0 0]

Σ0 = [1 0; 0 1]
Since this problem is multivariate, we use the latter equations. A Matlab script for this part can be found
at the end of this solution. Using these equations and the Matlab code, we obtain:
μn = [1.38; 1.38]

Σn = [0.15 0.15; 0.15 0.15]

Σ + Σn = [0.67 0.67; 0.67 0.67] + [0.15 0.15; 0.15 0.15] = [0.82 0.82; 0.82 0.82]
3.b)
With the fourth observation included, the data are

D: {[0,0], [1,1], [2,2], [3,3]}

μ̂ = [1.5; 1.5]

Σ = [1.25 1.25; 1.25 1.25]
Using these equations and the Matlab code, we obtain:

μn = [0.92; 0.92]

Σn = [0.19 0.19; 0.19 0.19]

Σ + Σn = [1.44 1.44; 1.44 1.44]
Roughly speaking, μn represents our best guess for μ after observing n samples, and σn² measures our
uncertainty about this guess. Since σn² decreases monotonically with n (approaching σ²/n as n approaches
infinity), each additional observation decreases our uncertainty about the true value of μ. As n increases,
p(μ|D) becomes more and more sharply peaked, approaching a Dirac delta function as n approaches infinity.
This behavior is commonly known as Bayesian learning.
% -------------------------------------------------------------------------
% Author: Meysam Golmohammadi
% Date  : 10/03/2015
% -------------------------------------------------------------------------
% Exam No. 1
%% Problem 3. Part 1
% -------------------------------------------------------------------------
close all
clc
clear
% -------------------------------------------------------------------------
class1 = [1 1; 2 2; 3 3];
n = size(class1, 1)
mean_class1 = transpose(mean(class1))
cov_class1 = (n-1)/n*cov(class1)                 % ML covariance of the samples
mean0 = transpose([0 0])                         % prior mean
cov0 = [1 0; 0 1];                               % prior covariance
meann = cov0*inv(cov0 + (1/n).*cov_class1)*mean_class1 ...
        + (1/n).*cov_class1*inv(cov0 + (1/n).*cov_class1)*mean0
covn = 1/n.*cov0*inv(cov0 + (1/n).*cov_class1)*cov_class1
cov_total = covn + cov_class1                    % covariance of p(x|D)

%% Problem 3. Part 2
% -------------------------------------------------------------------------
close all
clc
clear
class1 = [0 0; 1 1; 2 2; 3 3];
n = size(class1, 1)
mean_class1 = transpose(mean(class1))
cov_class1 = (n-1)/n*cov(class1)
mean0 = transpose([0 0])
cov0 = [1 0; 0 1];
meann = cov0*inv(cov0 + (1/n).*cov_class1)*mean_class1 ...
        + (1/n).*cov_class1*inv(cov0 + (1/n).*cov_class1)*mean0
covn = 1/n.*cov0*inv(cov0 + (1/n).*cov_class1)*cov_class1
cov_total = covn + cov_class1