Course Guide
MAT 211 Probability and Statistics
Spring 2016
Offered by
Department of Physical Sciences
School of Engineering and Computer Science
Independent University, Bangladesh
Course Coordinator:
Dr. Shipra Banik, Associate Professor
Instructors:
Dr. Shipra Banik
Ms. Proma Anwer Khan
Mr. Shohel Ahmed
Dr. Shifiqul Islam
1
Course Outline
Pre-requisite: MAT 101 or equivalent. Instructional format (per week): two 1½-hour lectures
Course objectives
An understanding of statistics is needed to handle uncertainty calculations in many different fields. Today information is everywhere, and everyone is bombarded with numerical information. What is needed, then? Skills to deal with all of this numerical information: first, to be a critical consumer of information presented by others, and second, to be able to reduce large amounts of data to a meaningful form so that one can make effective interpretations, judgments and decisions. The course 'MAT 211 Probability and Statistics' is an important foundation course offered by IUB and is suited for all undergraduate students who wish to major outside SECS at IUB. It covers all the elementary topics in statistics and explains how the theory can be applied to solve real-world problems. Topics include: Descriptive Statistics Techniques, Probability Theory with Important Probability Distributions, Sampling Theory, Statistical Inference, Linear Correlation and Regression Theories.
Textbook: All students should obtain:
Anderson, D.R., Sweeney, D.J. and Williams, T.A. (2011), Statistics for Business and Economics (11th Edition), South-Western, A Division of Thomson Learning.
Recommended Reference
Murray R. Spiegel and Larry J. Stephens (2008), Schaum's Outline of Theory and Problems of Statistics (Fourth Edition), Schaum's Outline Series, McGraw-Hill.
Evaluation criteria
Homework will be assigned weekly. Students are not required to hand it in for grading, but completing the assigned homework is essential for understanding the material and performing satisfactorily on the examinations.
The weighting scheme is as follows:
Class Attendance – 5%, Two Class tests (CT) – 35%(20% + 15%), Mid-term test (MT) - 20% and Final
test (FT) - 40%
Rules and regulations
Students are required to attend classes on time and to take well-organized notes.
If a student misses a class, it is his/her sole responsibility to obtain any missed information (for example, changes of exam dates, topics omitted or added, lecture notes, new homework, etc.).
No extensions or alternative times are possible for a test, and there are no make-ups. If a test cannot be held due to unavoidable circumstances, it will be held strictly in the next lecture.
No extra work will be given to improve a grade.
Students are required to show mature behaviour in class. For example, cellular phones must be switched off during lectures and examinations. Eating, drinking, chewing gum, reading newspapers, socializing and sleeping are not permitted in class.
Any kind of cheating in class is strictly prohibited and may result in a failing grade for the course.
2
Students are advised to obtain a scientific calculator for use in class. Note that a calculator with two-variable statistics functions is needed for all types of calculations.
Grading scales
Above 85%: A, 81%-85%: A-, 76%-80%: B+, 71%-75%: B, 66%-70%: B-, 61%-65%: C+, 56%-60%: C,
51%-55%: C-, 46%-50%: D+, 40%-45%: D, below 40%: F
Incomplete (I-Grade)
An I-grade will be given only to a student who has completed the bulk of the coursework and is unable to complete the course due to a serious disruption not caused by the student's own slackness.
Mid-term and Final Test: All sections will have a common examination. Materials and date will be
announced later.
Course Plan

Lecture 1
Topics: Introduction. Definitions: variable, scales of measurement, raw data, qualitative data, quantitative data, cross-sectional data, time series data, census survey, sample survey, target population, random sample, computer and statistical packages.
Text/Reference: Course Guide, pp.7-8. HW: Text, Ex: 2, 4, 6, 9-13, pp.21-23.

Lecture 2
Topics: Summarizing qualitative data: frequency distribution, relative frequency distribution, bar chart, pie chart; applications from real data. Summarizing quantitative data: frequency distribution, relative frequency distribution, cumulative frequency distribution; applications from real data.
Text/Reference: Course Guide, pp.9-11. HW: Text, Ex: 4-10, pp.36-39; Ex: 15-21, pp.46-48; Ex: 39, 41 and 42, pp.65-67.

Lecture 3
Topics: Histogram, ogive, line chart, stem and leaf display; applications from real data. Summarizing bi-variate data: cross-tabulation, scatter diagram; applications from real data.
Text/Reference: Course Guide, pp.12-14. HW: Text, Ex: 15-21, pp.46-48; Ex: 25-28, pp.52-53; Ex: 31, 33-36, pp.60-61.

Lecture 4
Topics: Measures of average: simple mean, percentiles (median, quartiles), mode; applications from real data.
Text/Reference: Course Guide, pp.15-16. HW: Text, Ex: 5-10, pp.92-94.

Lecture 5
Topics: Measures of variability: variance, standard deviation, coefficient of variation, detecting outliers (five-number summary); applications from real data.
Text/Reference: Course Guide, pp.17-18. HW: Text, Ex: 16-24, pp.100-102; Ex: 40-41, pp.112-113.

Lecture 6
Topics: Review, Lecture 1 - Lecture 5.

Lecture 7
Class Test 1 (20%). Topics: Lecture 1 - Lecture 5.

Lecture 8
Topics: Working with grouped data, weighted mean, skewness, kurtosis, case study.
Text/Reference: Course Guide, pp.20-28. HW: Text, Ex: 54-57, pp.128-129; Case problems 1, 2, 3, 4, pp.137-141.

Lecture 9
Topics: Probability theory: random experiment, random variable, sample space, events, counting rules, tree diagram, probability defined on events.
Text/Reference: Course Guide, pp.29-33. HW: Text, Ex: 1-9, pp.158-159; Ex: 14-21, pp.162-164.

Lecture 10
Topics: Basic relationships of probability: addition law, complement law, conditional law, multiplication law.
Text/Reference: Course Guide, pp.34-36. HW: Text, Ex: 22-27, pp.169-170; Ex: 32-35, pp.176-177.

Lecture 11
Topics: Review, Lecture 8 - Lecture 10.

Lecture 12
Mid-term test (20%). Topics: Lecture 8 - Lecture 10.

Lecture 13
Topics: Normal distribution.
Text/Reference: Course Guide, pp.38-40. Text, Ex: 10-25, pp.248-250.

Lecture 14
Topics: Lecture 13 continued.
Text/Reference: HW: Text, Ex: 10-25, pp.248-250.

Lecture 15
Class Test 2 (15%). Topics: Lecture 13 - Lecture 14.

Lecture 16
Topics: Target population, random sample, table of random numbers, simple random sampling, point estimates (sample mean and sample SD).
Text/Reference: Course Guide, pp.42-44. HW: Text, Ex: 3-8, pp.272-273.

Lecture 17
Topics: Interval estimation: parameter, statistic, margin of error (ME), statistical tables (z-table, t-table, chi-square table, F-table), confidence interval for the population mean, confidence interval for the population SD; applications from real data.
Text/Reference: Course Guide, pp.45-53. HW: Text, Ex: 5-10, pp.315-316; Ex: 4-8, pp.457-459.

Lecture 18
Topics: Interval estimation for two population means and two standard deviations; applications from real data.
Text/Reference: Text, Chapter 10, pp.408-410, p.416. HW: Text, Ex: 4-8, pp.413-415; Ex: 13, p.421.

Lecture 19
Topics: Test of hypothesis: concept of hypothesis, null hypothesis, alternative hypothesis, one-tail tests, two-tail tests, tests of a population mean (large-sample and small-sample tests), test of a population SD; applications from real data.
Text/Reference: Course Guide, pp.55-67. HW: Text, Ex: 15-22, pp.369-370; Ex: 9-12, p.459.

Lecture 20
Topics: Lecture 19 continued (test of hypothesis).
Text/Reference: Course Guide, pp.55-67.

Lecture 21
Topics: Tests of two population means and two standard deviations; applications from real data.
Text/Reference: Course Guide, pp.68-69; Text, Chapter 11. HW: Text, Ex: 12-18, pp.420-423; Ex: 16-22, pp.465-466.

Lecture 22
Topics: Correlation analysis: concepts of covariance and correlation (numerical measures of bi-variate data). Regression analysis: linear and multiple regression models, prediction, coefficient of determination; applications from real data.
Text/Reference: Course Guide, pp.70-79; Text, Ex: 47-51, pp.122-124.

Lecture 23
Topics: Lecture 22 continued.
Text/Reference: HW: Text, Ex: 4-14, Ex: 18-21, pp.570-582.

Lecture 24
Review for the Final Test (40%). Topics and date will be announced later.
MAT211 Lecture Notes
Spring 2016
6
Lecture-1
Chapter-1: Introduction
Important definitions
Data, elements, variable, observations, raw data, qualitative data, quantitative data, scales of measurement
population, random sample, census, sample survey, cross-sectional data, time series data, Computer and
statistical analysis, glossary.
Textbook: Anderson D.R., Sweeney, D.J. and Thomas A.W. (2011), Statistics for Business and
Economics (11th Edition), South-Western, A Division of Thomson Learning.
Data (or variable) – a characteristic that changes from element to element.
Examples: Gender, Grade, Family size, Score, Age, and many others.
Gender, Grade – qualitative data (labels)
Family size, Score, Age – quantitative data (numeric values)
Family size – whole numbers – discrete data
Score, Age – continuous data
Note: ID # and cell # are qualitative data (they are labels, not measured quantities).
Observations – the number of data values (the data size).
A variable is denoted by X, Y, Z, or by its first letter (e.g. Score – S, Age – A).
Elements – the individual values of a variable X, written x1, x2, ..., xn.
Raw data – data collected by a survey, census, etc. They are also known as ungrouped data.
Note: We always start with raw data, which we then have to process or summarize using various statistical techniques (we will learn these in Chapters 2-3).
Scales of measurement
Before analysis, the scale of each selected variable has to be defined, especially when the analysis is done with statistical packages such as SPSS, Minitab or Stata, or even in Excel. We have to assign a scale to each variable involved in the analysis.
There are four kinds of scale: nominal, ordinal, interval and ratio
Nominal, ordinal - Qualitative data
Nominal scale – variables such as Name, ID, Address and Cell # are measured on this scale. Little numerical analysis is possible.
Ordinal scale – qualitative data such as test performance (excellent, good, poor, etc.) or quality of food (good or bad). The values can be ordered, so some analysis is possible.
7
Interval and ratio - Quantitative data
Interval scale: has the properties of ordinal data, and the intervals between values are meaningful. Example: scores for 5 students; the scores can be ordered, and the difference between any two students' scores is meaningful.
Ratio scale – has the properties of interval data; in addition, ratios of data values are meaningful. Example: scores for 5 students; the ratio of any two students' scores is meaningful.
Details see Textbook, p.6
Target population: the set of all elements in a particular study.
Random sample: a subset of the target population. The selected set will vary from draw to draw.
Census: a method for collecting data about the whole target population.
Sample survey: a method for collecting data about a random sample.
For the purposes of statistical analysis, it is useful to distinguish between time series data and cross-sectional data.
Time series data – data collected over several time periods, for example exchange rates, interest rates, gross national product (GNP), gross domestic product (GDP) and many others. For these sorts of data, the time dimension is meaningful.
Cross-sectional data – data collected at (approximately) the same point in time, for example companies' profits or students' profiles collected at the same time.
Note that in this course most of the data will be considered as cross-sectional data.
Computer and statistical packages
Statistical analyses generally involve a large amount of data, which is why they frequently use computer software. Several very useful packages are available, for example SPSS, Minitab, Matlab, Excel, Stata and many others.
HW: Text
Ex: 2,4,6,9-13, pp.21-23
8
Lecture 2
Chapter-2: Summary of raw data
You will get an idea about the following:
the aim of the presentation of raw data, and tabular forms of raw data (e.g. summarizing qualitative and quantitative data).
The aim of presenting raw data is to turn a large and complicated set of raw data into a more compact and meaningful form. Usually, one can summarize raw data by
(a) The tabular form
(b) The graphical form and
(c) Finally numerically such as measures of central tendency, measures of dispersion and others.
Under the tabular and the graphical form, we will learn frequency distribution (grouping data), bar graphs,
histograms, stem-leaf display method and others.
Presentations of data can be found in annual reports, newspaper articles and research studies. Everyone is exposed to these types of presentations. Hence, it is important to understand how they are prepared and how they should be interpreted.
As indicated in the Lecture 1, data can be classified as either qualitative or quantitative.
The plan of this lecture is to introduce the tabular methods, which are commonly used to summarize both
the qualitative and the quantitative data.
Summarizing qualitative data
Consider the following raw data:
Table 1: Test Performances of MAT 211
Good       Excellent  Poor       Excellent  Poor
Good       Poor       Excellent  Excellent  Good
Excellent  Excellent  Good       Poor       Good
Make a tabular and graphical summary of the above data.
Solution: Define T - test performance and n = 15. These are qualitative data.
9
Tabular summary

T          Tally marks   Frequency (# of students) fi, i=1,2,3   Relative (percent) frequency rfi (pfi)
Excellent  |||| |        6                                       0.40 (40%)
Good       ||||          5                                       0.33 (33%)
Poor       ||||          4                                       0.27 (27%)
Total                    n = 15                                  1.00 (100%)

where relative frequency rfi = fi/n and percent frequency pfi = rfi x 100.
Summary
There are 6 students whose performance is excellent, 5 students show good performance, and so on. 40% of the students' performances are excellent, 33% are good, and so on.
Graphical summary: Bar or Pie chart
[Bar chart of the frequencies: Excellent 6, Good 5, Poor 4. Pie chart of the percent frequencies: Excellent 40%, Good 33%, Poor 27%.]
Data Summary: Our analysis shows that 40% of test performances were excellent, 33% were good and 27% were poor.
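The frequency and relative (percent) frequency calculations above can also be checked with a short script. This is a minimal Python sketch (standard library only); the variable names are illustrative and not part of the original notes.

from collections import Counter

# Test performances of MAT 211 (Table 1)
performances = ["Good", "Excellent", "Poor", "Excellent", "Poor",
                "Good", "Poor", "Excellent", "Excellent", "Good",
                "Excellent", "Excellent", "Good", "Poor", "Good"]

n = len(performances)            # n = 15
freq = Counter(performances)     # frequency fi of each category

for category, fi in freq.most_common():
    rf = fi / n                  # relative frequency rfi = fi/n
    print(f"{category:10s} f={fi:2d}  rf={rf:.2f}  pf={rf:.0%}")
# Expected: Excellent f= 6 rf=0.40, Good f= 5 rf=0.33, Poor f= 4 rf=0.27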
Summarizing quantitative data
Now observe the following data
Table 2: Test score of MAT 101
90  87  56  67  95
88  69  78  85  59
78  93  57  46  89
These data are clearly quantitative. Processing data of this kind differs a little from processing qualitative data. Proceed as follows:
10
Solution: Define T - Test Score and n = 15.
We need the lowest and highest values of the given raw data set. Here L = 46 and H = 95. Assume the number of classes is K = 5. Thus, the class width is c = (H-L)/K = 9.8, rounded up to 10.
Tabular Summary

T      Tally marks   Frequency (# of students) fi   Relative (percent) frequency rfi (pfi)   Cumulative frequency (Fi)
46-56  ||            2                              0.13 (13%)                               2
56-66  ||            2                              0.13 (13%)                               4
66-76  ||            2                              0.13 (13%)                               6
76-86  |||           3                              0.20 (20%)                               9
86-96  |||| |        6                              0.40 (40%)                               15
Total                n = 15
Summary
There are 6 students who scored between 86 and 96, 3 students who scored between 76 and 86, and so on. 40% of students scored between 86 and 96, 20% scored between 76 and 86, and so on. 6 students scored below 76, 4 students scored below 66, and so on.
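The same grouping procedure for the quantitative scores (L = 46, H = 95, K = 5 classes of width 10, each class including its upper limit as in the table above) can be sketched in Python; this is only an illustration of the procedure, not part of the original notes.

import math

# Test scores of MAT 101 (Table 2)
scores = [90, 87, 56, 67, 95, 88, 69, 78, 85, 59, 78, 93, 57, 46, 89]

K = 5                              # chosen number of classes
L, H = min(scores), max(scores)    # L = 46, H = 95
c = math.ceil((H - L) / K)         # class width: 9.8 rounded up to 10

cumulative = 0
for k in range(K):
    lo, hi = L + k * c, L + (k + 1) * c       # classes 46-56, 56-66, ..., 86-96
    # each class includes its upper limit; the first class also includes L
    fi = sum(1 for x in scores if lo < x <= hi or (k == 0 and x == lo))
    cumulative += fi
    print(f"{lo}-{hi}: f={fi}  rf={fi/len(scores):.2f}  F={cumulative}")
# Expected frequencies 2, 2, 2, 3, 6 with cumulative frequencies 2, 4, 6, 9, 15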
HW: Text
Ex:4-10, pp.36-39
Ex:15-21, pp.46-48
Ex: 39, 41,42, pp.65-67
11
Lecture 3
Summarizing Raw Data Continued
Graphical summary: Histogram, Ogive
Recall the frequency table from Lecture 2; it is needed for both of these charts.
[Histogram of the test scores by class (46-56, 56-66, 66-76, 76-86, 86-96) and ogive of the cumulative frequencies plotted against the upper class limits.]
Data Summary
Our analysis shows that 6 students scored between 86 and 96, 2 students scored between 46 and 56, and so on. 9 students scored less than 86, 6 students scored less than 76, and so on.
Other Graphical summaries: stem and leaf display, line chart
Line chart – time plots, e.g. of stock indices (this requires time series data).
12
Stem and leaf display
Stem Leaf (Unit=1.0)
4 6
5 679
6 79
7 88
8 5789
9 035
Total n=15
Summary: There are 4 students whose scores range from 85 to 89, and so on.
HW: Text
Ex: 15-21, pp.46-48
Ex: 25-28, pp.52-53
Chapter-2: Summarizing bi-variate data: Cross-tabulation, scatter diagram.
Summarizing bi-variate data
So far we have focused on tabular and graphical methods for one variable at a time. Often we need tabular
and graphical summaries for two variables at a time.
The tabular method (cross-tabulation) and the graphical method (scatter diagram) are two such methods for drawing conclusions from two qualitative and/or quantitative variables.
Tabular Method-Cross-tabulation:
Problem-1:
Consider the following two variables: Quality rating and meal price($) for 10 restaurants. Data are as
follows:
Quality rating: good, very good, good, excellent, very good, good, very good, very good, very good,
good
Meal price($): 18,22,28,38,33,28,19,11,23,13.
Make a tabular summary (or cross-table and make a data summary).
13
Solution: Define X - Quality rating and Y - Meal price. Here n =10
Table: Cross-tabulation of X and Y for 10 restaurants

                     Y (meal price, $)
X           10-20     20-30     30-40     Total
Good        || (2)    || (2)    (0)       4
Very good   || (2)    || (2)    | (1)     5
Excellent   (0)       (0)       | (1)     1
Total       4         4         2         n = 10
Data summary:
We see that there are 2 restaurants whose quality of food is very good and whose meal prices range from $20 to $30, 1 restaurant whose quality of food is excellent, 4 restaurants whose meal prices range from $10 to $20, and so on.
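A cross-tabulation like the one above can also be built programmatically. The sketch below uses only the Python standard library; the binning of meal prices into 10-20, 20-30 and 30-40 mirrors the table, and the helper name price_bin is illustrative only.

from collections import Counter

quality = ["good", "very good", "good", "excellent", "very good",
           "good", "very good", "very good", "very good", "good"]
price = [18, 22, 28, 38, 33, 28, 19, 11, 23, 13]

def price_bin(p):
    # same classes as in the cross-table: 10-20, 20-30, 30-40
    if p < 20:
        return "10-20"
    elif p < 30:
        return "20-30"
    return "30-40"

table = Counter((q, price_bin(p)) for q, p in zip(quality, price))

for q in ("good", "very good", "excellent"):
    row = [table[(q, b)] for b in ("10-20", "20-30", "30-40")]
    print(f"{q:10s} {row}  total={sum(row)}")
# Expected rows: good [2, 2, 0], very good [2, 2, 1], excellent [0, 0, 1]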
Graphical method-scatter diagram
A scatter diagram provides the following information about the relationship between two variables:
strength
shape (linear, curved, etc.)
direction (positive or negative)
presence of outliers
Problem -2:
Now consider the following two variables: # of commercials and total sales for 5 sound equipment stores.
Data are as follows:
# of commercials: 2, 5, 1, 3, 4 and total sales: 50, 57, 41, 54, 54
Data summary: There is a positive relationship between the number of commercials and total sales for the 5 sound equipment stores.
[Figure: Scatter diagram of sales (vertical axis) against number of commercials (horizontal axis) for 5 sound equipment stores.]
HW: Text
Ex: 31, 33-36, pp.60-61
14
Lecture 4
Chapter 3: Summarizing Raw Data (Numerical measures)
We will learn several numerical measures that provide a data summary using numeric formulas.
Now we will learn the following:
(1) Measures of average: simple mean, weighted mean, median, mode, quartiles, percentiles
(2) Measures of variation: Range, inter-quartile range, variance, standard deviation
(3) Measures of skewness: symmetry, positive skewness, negative skewness
(4) Measures of Kurtosis: leptokurtic, platykurtic and mesokurtic
Measures of average: simple mean, weighted mean, median, mode, quartiles, percentiles
Definition of average: It is a single central value that represents the whole set of data. Different
measures of averages are: simple mean, weighted mean, median, mode, quartiles, percentiles.
We will learn the above measures for the raw data and grouped data.
Mean: denoted by x̄ and calculated by x̄ = (1/n) Σ xi.
For example, for a set of monthly starting salaries of 5 graduates: 3450, 3550, 3550, 3480, 3355.
Define X - monthly starting salaries of 5 graduates. Here x̄ = (3450 + 3550 + 3550 + 3480 + 3355)/5 = 17385/5 = 3477.
Median, Percentiles, Quartiles
It is denoted by pi , i =1, 2, …, 99 that means there are 99 percentiles.
50th percentile is known as median and it is denoted by p50.
25th percentile is known as first quartile and it is denoted by p25.
75th percentile is known as 3rd quartile and it is denoted by p75. p50 is also known as 2nd quartile (Q2).
Thus, there are 3 quartiles: These are p25 (Q1), p50 (Q2) and p75(Q3).
Calculation of percentiles: first sort the data:
3355  3450  3480  3550  3550
For Q2: i = (p·n)/100 = (50×5)/100 = 2.50. The next integer is 3, so Q2 is the 3rd sorted value, 3480.
For Q1: i = (p·n)/100 = (25×5)/100 = 1.25. The next integer is 2, so Q1 is 3450.
For Q3: i = (p·n)/100 = (75×5)/100 = 3.75. The next integer is 4, so Q3 is 3550.
Now consider the following data: 3450, 3550, 3550, 3480, 3355, 3490
15
Here x̄ = (3450 + 3550 + 3550 + 3480 + 3355 + 3490)/6 = 3479.2
Sort the data to calculate percentiles: 3355  3450  3480  3490  3550  3550
For Q2: i = (p·n)/100 = (50×6)/100 = 3. Since i is an integer, Q2 is the average of the 3rd and 4th observations of the sorted data. Thus, Q2 = (3480 + 3490)/2 = 3485.
For Q1: i = (p·n)/100 = (25×6)/100 = 1.50. The next integer is 2, so Q1 is 3450.
For Q3: i = (p·n)/100 = (75×6)/100 = 4.5. The next integer is 5, so Q3 is 3550.
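The percentile rule used above (compute i = (p/100)·n, take the value at the next integer position when i is not an integer, and average positions i and i+1 when it is) can be written as a small function. This is a sketch of that textbook rule only, under the convention stated in these notes.

import math

def percentile(data, p):
    # p-th percentile using the rule from Lecture 4
    x = sorted(data)
    i = p / 100 * len(x)
    if i.is_integer():
        i = int(i)
        return (x[i - 1] + x[i]) / 2     # average of positions i and i+1
    return x[math.ceil(i) - 1]           # value at the next integer position

salaries5 = [3450, 3550, 3550, 3480, 3355]
salaries6 = [3450, 3550, 3550, 3480, 3355, 3490]
print(percentile(salaries5, 25), percentile(salaries5, 50), percentile(salaries5, 75))
# 3450 3480 3550
print(percentile(salaries6, 25), percentile(salaries6, 50), percentile(salaries6, 75))
# 3450 3485.0 3550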
Mode: It is the value that occurs with greatest frequency. Denoted by M0.
Consider the following observations
(1) 3450, 3550, 3550, 3480, 3355 - M0 is 3550.
(2) 3450, 3550, 3550, 3480, 3450 - M0 are 3450 and 3550.
(3) 3450, 3550, 3550, 3450, 3450 - M0 is 3450
(4) 3450, 3650, 3550, 3480, 3355 – no Mode.
Data Summary:
Mean = 3477 means that, on average, graduates' monthly starting salaries are about $3477.
Median = 3485 means that 50% of graduates' monthly starting salaries are below $3485 and the remaining 50% are above $3485.
First quartile = 3450 means that 25% of graduates' monthly starting salaries are below $3450 and the remaining 75% are above $3450.
Third quartile = 3550 means that 75% of graduates' monthly starting salaries are below $3550 and the remaining 25% are above $3550.
Mode = 3450 means that the most common monthly starting salary is $3450.
HW: Text
Ex: 5-10, pp.92-94
16
Lecture 5
Chapter 3_Numerical measures continued
We will learn measures of variation
Recall the concept of average (Ref. Lecture 4). For example, suppose we have the following 2 sets of raw data:
1) 15, 15, 15, 15, 15 – average 15 and variation 0.
2) 15, 16, 19, 13, 12 – average 15 and variation (SD) 2.74.
Statistical meaning of variation
Ask the question: is there any difference between each observation and the average value?
Suppose X is the score on CT1 (class test 1) and, for example, the calculated average score is 15.
The next step is to look at the difference between each student's mark and the average mark.
If the difference is 0, the student's score equals the average score.
If the difference has a positive (negative) sign, the student's score is greater (lower) than the average score.
How can we measure the variation of a data set? Various measures (or formulas) are available to detect variation. These are:
1. Range, R = H-L, H-highest value of a data set and L – Lowest value of a data set
2. Inter-quartile range, IR = p75 – p25, p75- 75th percentile and p25- 25th percentile
3. Variance: denoted by s² and calculated by s² = Σ(xi − x̄)²/(n−1).
4. Standard deviation: denoted by s and calculated by s = sqrt(Σ(xi − x̄)²/(n−1)). That means
SD = sqrt(variance).
Note: Measures of variation cannot be negative. The smallest possible value is 0 which, recall, indicates that all students got the same score.
Calculation for variance and SD
Recall the monthly starting salaries of 5 graduates: 3450, 3550, 3550, 3480, 3355, where we found x̄ = 3477 (see Lecture 4).

Calculation Table for variance and SD
X      (x − x̄)²
3450   729
3550   5329
3550   5329
3480   9
3355   14884
       Σ(x − x̄)² = 26280

Here variance s² = Σ(x − x̄)²/(n−1) = 26280/4 = 6570 and SD = sqrt(variance) = 81.05$.
17
Data summary: SD = 81.05 indicates how much, on average, graduates' salaries vary from the average salary of $3477.
Note: The variance is not interpreted directly because its unit is squared; for example, if the mean is $3477 then the variance is 6570 $². Taking the square root of the variance removes this problem (returning to the original unit of the data), which gives the standard deviation (SD).
So we do not interpret the variance and always talk about the SD.
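The variance and SD computed above can be verified with a few lines of Python; the statistics module uses the same n−1 divisor. This is only an illustrative check.

import statistics

salaries = [3450, 3550, 3550, 3480, 3355]

mean = statistics.mean(salaries)        # 3477
var = statistics.variance(salaries)     # sample variance (divisor n-1) = 6570
sd = statistics.stdev(salaries)         # sqrt(6570) = 81.06 (the notes round this to 81.05)
print(mean, var, round(sd, 2))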
Coefficient of variation
See Text, p.99
HW: Text, Ex: 16-24, pp.100-102
Detecting outliers (Five number summary)
See, Text, pp.109-111
HW: Text, Ex: 40-41, pp. 112-113
18
Lecture 7
Class Test 1 (20%)
Exam Time: 90 minutes
Requirements:
1) A scientific calculator with two-variable statistics functions is required (no alternatives).
2) Mobile phones must be switched off during the exam.
Format of questions
1) Lecture 1- Lecture 5 solved and HW problems
2) Related Text book questions
/Good Luck with your first test/
19
Lecture 8
Working with grouped data
So far we have focused on calculating measures of average and variation for ungrouped (raw) data.
Sometimes only grouped data (a frequency table) are available. In this situation, the formulas for ungrouped (raw) data are not valid. Proceed as follows:
Recall the tabular summary from Lecture 2, where X is the test score and n = 15:

X      Frequency (fi)   Midpoint (mi)   fi·mi   fi(mi − x̄)²
46-56  2                51              102     1352
56-66  2                61              122     512
66-76  2                71              142     72
76-86  3                81              243     48
86-96  6                91              546     1176
Total  n = 15                           1155    3160

Grouped mean (weighted mean) x̄ = Σ fi·mi / n = 1155/15 = 77.
Here variance s² = Σ fi(mi − x̄)²/(n−1) = 3160/14 = 225.71 and SD = sqrt(variance) = sqrt(225.71) = 15.02.
Data Summary: SD = 15.02 indicates how much the students' scores vary from the average score of 77.
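A quick Python sketch of the grouped-data calculation above (weighted mean and SD from the class midpoints 51, 61, 71, 81, 91); illustrative only.

import math

freqs = [2, 2, 2, 3, 6]        # fi for the classes 46-56, ..., 86-96
mids = [51, 61, 71, 81, 91]    # class midpoints mi
n = sum(freqs)                 # 15

mean = sum(f * m for f, m in zip(freqs, mids)) / n                      # 1155/15 = 77
var = sum(f * (m - mean) ** 2 for f, m in zip(freqs, mids)) / (n - 1)   # 3160/14 = 225.71
sd = math.sqrt(var)                                                     # 15.02
print(mean, round(var, 2), round(sd, 2))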
HW: Text, Ex: 54-55, pp.128-129
Text: Case problems 1, 2, 3, 4, pp.137-141
20
Measures of Skewness
We can get a general impression of skewness by drawing a histogram. To understand the concept of
skewness, consider the following 3 histograms:
[Three histograms of the test-score classes 46-56, ..., 86-96: Figure 1 with most of the frequency in the low classes, Figure 2 with the frequency concentrated in the middle, and Figure 3 with most of the frequency in the high classes.]
Figure-1 is known as positively skewed or skewed to the right.
Figure -2 is known as normal/symmetric frequency curve.
Figure-3 is known as negatively skewed or skewed to the left.
There are two types of skewness. These are (1) positively skewed or skewed to the right (2) negatively
skewed or skewed to the left.
Note that the normal/symmetric frequency curve is known as non-skewed curve (skewness is absent).
Definition: Skewness gives us an idea about the direction of variation of a raw data set.
Figure 1 – most of the frequency is observed on the left.
Figure 2 – most of the frequency is observed in the middle.
Figure 3 – most of the frequency is observed on the right.
Recall X – test score.
Figure 1 tells us most students have poor performances; most students score below the average value.
Figure 2 tells us most students have average performances; most students score near (a little above or below) the average value.
Figure 3 tells us most students have good performances; most students score above the average value.
Measure of skewness
To detect whether skewness is present in a set of raw data, we will use the most commonly used formula, Karl Pearson's coefficient of skewness. It is defined as
SK = 3(mean − median)/SD
Note that this formula works for both ungrouped and grouped data.
21
Problem
Suppose X – test score. Let mean = 15, median (50th percentile or 2nd quartile) = 17 and SD =3.
Here SK = -2.00.
Data summary: SK = -2.00 means that the test scores are negatively skewed; most students score above 15.
Let mean = 18, median = 14 and SD = 5. Here SK = 2.40.
Data summary: SK = 2.40 means that the test scores are positively skewed; most students score below 18.
Let mean = 16, median = 16 and SD = 5. Here SK = 0.
Data summary: SK = 0 means that the test scores are symmetric; scores are spread evenly below and above 16.
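The three examples above can be reproduced with a one-line function (illustrative sketch).

def pearson_skewness(mean, median, sd):
    # Karl Pearson's coefficient: SK = 3(mean - median)/SD
    return 3 * (mean - median) / sd

print(pearson_skewness(15, 17, 3))   # -2.0  (negatively skewed)
print(pearson_skewness(18, 14, 5))   #  2.4  (positively skewed)
print(pearson_skewness(16, 16, 5))   #  0.0  (symmetric)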
Kurtosis
If a distribution is symmetric, the next question is about the central peak: is it high and sharp, or short and broad?
Pearson (1905) described kurtosis in comparison with the normal distribution and used the terms leptokurtic, platykurtic and mesokurtic to describe different distributions.
If the distribution has more values in the tails and a sharper peak, it is leptokurtic. It is a curve like a leaping kangaroo: it has long tails and is peaked up in the center.
If there are fewer values in the tails, more in the shoulders and fewer in the peak, it is platykurtic.
A platykurtic curve, like a platypus, has short tails and is flat-topped.
HW: Text book
Ex: 5 and 6, pp.92-93 (Calculate skewness and interpret)
22
Solved Case Study
Review-Lecture 1 –Lecture 8
The following data are obtained on a variable X, the cpu time in seconds required to run a program using
a statistical package:
6.2  5.8  4.6  4.9  7.1  5.2  8.1  0.2  3.4  4.5
8.0  7.9  6.1  5.6  5.5  3.1  6.8  4.6  3.8  2.6
4.5  4.6  7.7  3.8  4.1  6.1  4.1  4.4  5.2  1.5
a) Construct a stem-leaf diagram for these data. Interpret this table.
b) Break these data into 6 classes and construct a frequency, relative frequency and
cumulative frequency table and interpret the tables using non-technical languages.
c) Using the frequency table, calculate sample mean and sample standard deviation and interpret these
two measures.
d) Construct a histogram. Also construct a cumulative frequency ogive and use this ogive to approximate the 50th percentile, the first quartile and the third quartile.
e) Calculate sample skewness and interpret.
Solution: Denote X - cpu time in seconds required to run a program using a statistical package.
(Please note that the answers to the above questions can vary; please check your work very carefully.)
a)
Table –1: Stem-and-Leaf Display: X, n = 30
Leaf Unit = 0.10
Stem leaf
0 2
1 5
2 6
3 1488
23
4 114556669
5 22568
6 1128
7 179
8 01
Interpretation: Table 1 shows that 9 programs needed 4.1 to 4.9 seconds to run, 5 programs needed 5.2 to 5.8 seconds, and so on.
b)
Table 2: Frequency Distribution of X
X (Classes)   Frequency (fi)
0.2-1.5       1
1.5-2.8       1
2.8-4.1       6
4.1-5.4       10
5.4-6.7       6
6.7-8.1       6
Total         n = 30
Table 3: Relative Frequency Distribution of X
X (Classes)   Relative Frequency (rfi)
0.2-1.5       0.03
1.5-2.8       0.03
2.8-4.1       0.20
4.1-5.4       0.33
5.4-6.7       0.20
6.7-8.1       0.20
24
Σ rfi ≈ 1 (summing over the 6 classes)
Table 4: Cumulative Frequency Distribution of X
X (Classes)   Cumulative Frequency (cfi or Fi)
0.2-1.5       1
1.5-2.8       2
2.8-4.1       8
4.1-5.4       18
5.4-6.7       24
6.7-8.1       30
Interpretation: Table 2 shows that 10 programs needed 4.1 to 5.4 seconds, Table 3 shows that 33 percent of programs needed 4.1 to 5.4 seconds, and Table 4 shows that 18 programs needed at most 5.4 seconds, and so on.
c)
Descriptive Statistics: X
Variable   n    Minimum   Maximum   Mean   Median (Q2)   StDev   Q1     Q3
X          30   0.20      8.1       5.0    4.75          1.859   4.02   6.12
Interpretation:
Mean = 5.0 seconds means that, on average, a program needs approximately 5 seconds to run.
Median = 4.75 seconds means that 50% of the programs need less than 4.75 seconds to run and the remaining 50% need more than 4.75 seconds.
Standard deviation = 1.859 seconds measures how much the run times vary around the 5-second average; the programs did not all take 5 seconds to run.
Q1 = 4.02 seconds means that 25% of the programs need less than 4.02 seconds to run and the remaining 75% need more than 4.02 seconds.
Q3 = 6.12 seconds means that 75% of the programs need less than 6.12 seconds to run and the remaining 25% need more than 6.12 seconds.
Formulae
Mean - Ungrouped data:
Formula: x̄ = (1/n) Σ (i = 1..n) xi, where Σ is the summation sign.
Mean - Grouped data (weighted mean):
Formula: x̄ = (1/n) Σ (i = 1..k) fi·mi, where fi is the frequency of the ith class, mi is the midpoint of the ith class, midpoint = (LCL + UCL of the ith class)/2, and k is the total number of classes.
Median for Ungrouped Data:
To obtain the median of an ungrouped data set, arrange the data in ascending order (smallest to largest). If n is odd, the median is the value at position (n+1)/2 in the ordered list. If n is even, the median is the mean of the values at positions n/2 and (n/2)+1 in the ordered list.
Median for Grouped Data:
Formula: Me = lMe + ((n/2 − FMe−1)/fMe)·c, where lMe is the LCL of the median class, FMe−1 is the cumulative frequency below the median class, fMe is the frequency of the median class and c is the size of the median class.
Quartiles for Ungrouped Data:
To calculate a percentile for a small set of data, arrange the data in ascending order.
Compute an index i = (p/100)·n, where p is the percentile of interest and n is the total number of observations.
If i is not an integer, round up: the next integer greater than i denotes the position of the pth percentile.
If i is an integer, the pth percentile is the mean of the values at positions i and i + 1.
26
Quartiles for Grouped Data:
Formula for the ith percentile: pi = lpi + ((p·n/100 − Fpi−1)/fpi)·c, i = 1, 2, ..., 99, where lpi is the LCL of the ith percentile class, Fpi−1 is the cumulative frequency below the ith percentile class, fpi is the frequency of the ith percentile class and c is the size of the ith percentile class. For an application, refer to the calculation of the median for grouped data.
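As an illustration of the grouped formulas, the sketch below applies the grouped percentile formula to the CPU-time frequency table from part (b); the class limits and widths are taken from that table, and the grouped median of about 5.0 seconds is only an approximation to the raw-data median of 4.75 seconds.

# classes and frequencies of the CPU-time table (Table 2)
classes = [(0.2, 1.5), (1.5, 2.8), (2.8, 4.1), (4.1, 5.4), (5.4, 6.7), (6.7, 8.1)]
freqs = [1, 1, 6, 10, 6, 6]
n = sum(freqs)                          # 30

def grouped_percentile(p):
    target = p / 100 * n                # p*n/100
    F = 0                               # cumulative frequency below the class
    for (lo, hi), f in zip(classes, freqs):
        if F + f >= target:
            return lo + (target - F) / f * (hi - lo)
        F += f

print(round(grouped_percentile(50), 2))   # grouped median, about 5.01
print(round(grouped_percentile(25), 2))   # grouped Q1, about 3.99
print(round(grouped_percentile(75), 2))   # grouped Q3, about 6.38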
Standard Deviation for Ungrouped Data:
Formula: s = sqrt( (1/(n−1)) Σ (xi − x̄)² ), where the xi are the raw data and x̄ = (1/n) Σ xi.
Standard Deviation for Grouped Data:
Formula: s = sqrt( (1/(n−1)) Σ fi(mi − x̄)² ), where fi is the frequency of the ith class, mi is the midpoint of the ith class interval and x̄ = (1/n) Σ fi·mi.
d) Histogram of X
[Histogram of the cpu times: # of programs (vertical axis) against the classes 0.2-1.5, 1.5-2.8, 2.8-4.1, 4.1-5.4, 5.4-6.7, 6.7-8.1 (horizontal axis), with frequencies 1, 1, 6, 10, 6, 6.]
Interpretation: See Table 2 message.
Ogive of X:
Do by yourself
Interpretation: See Table 4 message.
e) Sample skewness
27
skewness(X) = 0.4034
Formula: SK = 3(x̄ − Me)/s, where −3 ≤ SK ≤ 3, i.e. skewness can range from -3 to +3. Interpretation of SK:
A value near -3, such as – 2.78 indicates considerable negative skewness.
A value such as 1.57 indicates moderate positive skewness.
A value of 0 indicates the distribution is symmetrical and there is no skewness present.
Interpretation: SK = 0.4034 indicates slight positive skewness: a few programs need considerably more than about 5 seconds to run.
More case studies:
Text: Case problems 1, 2, 3, 4, pp.137-141
28
Lecture 9
Chapter-4_Introduction to probability
Some Important Definitions:
Random experiment, random variable, sample space, events (simple event, compound event), counting
rules, combinations, permutations, tree diagram, probability defined on events
Introduction
We have finished the first important part of the course (data summary), and we have sat for CT1. Now we are moving to the second very important part of the course, namely "chance theory", also known as "probability theory". We use the words "chance" or "probability" frequently in real life. For example:
(i) What is the chance of getting grade A in the course MAT 211?
(ii) What is the chance that sales will decrease if we increase the price of a commodity?
(iii) What is the chance that a new investment will be profitable?
and in many other situations too numerous to record.
It is true that we use the words "chance" or "probability" frequently.
To understand consider a situation. For example if we ask the following question to 3 students:
What is the chance of getting grade-A for the course MAT 211?
Say, for example, they answer as follows:
Student-1: Chance is 95%
Student-2: Chance is 100%
Student-3: Chance is 10%
Let's explain their predicted values in terms of chance theory. What can we observe?
Student 1 is 95% confident he/she is getting grade A. That means past experience tells us out of 100
students, 95 students had grade A.
Student 2 is 100% confident he/she is getting grade A. All the students got grade A.
Student 3 is only 10% confident (less confident) he/she is getting grade A. Only 10 students out of 100
students got grade A.
How are these calculated?
Recall the relative frequency method, where relative frequency = frequency/n, and apply it. We will get the answer. Suppose n = 100 and the number of students who got grade A is 95 (the frequency); then the probability is 0.95.
We will use this formula to calculate probabilities.
To calculate the probability of an event (in the previous example, one possible event is grade A), we have to be very familiar with the following terms:
29
Random experiment, random variable, sample space, events
Random experiment – the process that generates all possible events. Events are also known as outcomes.
Random variable – denoted by r.v. It describes the outcome we are interested in among all the possible outcomes. In the previous example, grade A is the event of interest.
Note that the r.v. will vary from experiment to experiment.
Sample space: This is very important; without it, it is not possible to calculate the chance of an event. It is denoted by S, and it is the set of all possible outcomes of a random experiment (recall that a set is a collection of objects). Sometimes it is not easy to determine (to get a feel for S, we have to practice a lot!).
Several methods can be used to find S: our own knowledge, tree diagrams (a wonderful method) and counting rules (permutations, combinations). We will use the combination approach most of the time; however, the permutation approach will also be used.
Events – denoted by E; an event is a possible outcome of our random experiment.
Events – It is denoted by E. It is a possible outcome of our random experiment.
Formula to compute the probability of an event: it is denoted by P(E) and calculated as
P(E) = (# of outcomes in E)/(# of outcomes in S), with 0 ≤ P(E) ≤ 1.
If P(E) = 0, there is no chance that E will occur (an impossible event).
If P(E) = 0.5, there is a 50% chance that E will occur.
If P(E) = 1.0, there is a 100% chance that E will occur (a sure event).
Recall grade example
P(grade A) =(#of E)/S = 95/100 = 0.95, where S = {all possible grades}, E = grade A.
Summary: The randomly selected student will get grade A, chance is 95%.
Some random experiments and S (Text, p.143)
Random experiment 1:
Toss a fair coin. S = {H,T}, H-head and T-tail. If E – head, then P(H) = ½ = 0.5 and P(T) = ½ = 0.5.
Random experiment 2:
Select a part for inspection, S = {defective, non-defective}.
Random experiment 3:
Conduct a sales call, S = {purchase, no purchase}.
Random experiment 4:
Roll a fair die, S = {1,2,3,4,5,6}.
30
Random experiment 5:
Play a football game, S = {win, lose, tie}.
Note that the outcomes listed in the sample space S are read as "or", not "and": it is impossible to get both H and T in a single toss, and it is likewise impossible to both win and lose one game (think about it!).
Important concepts
mutually exclusive events, equally like events, Tree diagram, combination
Mutually exclusive events – two events that cannot occur simultaneously. Toss a coin: H and T cannot both occur in a single random experiment. This is written as P(H ∩ T) = 0.
If P(A ∩ B) ≠ 0, the events are not mutually exclusive. Toss two coins (or one coin two times): H and T can both occur in this random experiment. For example, P(one H and one T) = 0.50, where S = {HH, HT, TH, TT}.
Equally likely events – two events that have an equal chance of occurring. Toss a coin: P(H) = P(T) = 0.5.
Tree diagram – It is a technique to make a summary of all possible events of a random experiment
graphically.
Combination - It is a formula to make a summary of all possible events of a random experiment.
Counting rules – Two rules: Combination and Permutation
Combination – it allows one to count the number of experimental outcomes when the experiment involves selecting n objects from a set of N objects.
For example, if we want to select 5 students from a group of 10 students, then C(10,5) = 10!/(5!·5!) = 252 possible selections can be made, so here S has 252 outcomes.
Permutation – it allows one to count the number of experimental outcomes when n objects are to be selected from a set of N objects and the order of selection is important.
For example, if we want to select 5 students from a group of 10 students (where order is important), then P(10,5) = 10!/5! = 30240 possible selections can be made, so here S has 30240 outcomes.
Ex: 1. How many ways can three items be selected from a group of six items? Use the letters A, B, C, D, E and F to identify the items and list each of the different combinations of three items.
Solution: C(6,3) = 6!/(3!·3!) = 20 possible ways the letters can be selected. Some examples: ABC, ABD, ABE, ABF, ..., DEF.
Ex: 2. How many permutations of three items can be selected from a group of six items? Use the letters A, B, C, D, E and F to identify the items and list each of the different permutations of items B, D and F.
Solution: P(6,3) = 6!/3! = 120 possible ways the letters can be selected.
31
Different permutations of items B, D and F: BDF, BFD, DBF, DFB, FDB, FBD, 6 outcomes.
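Python's math module has both counting rules built in, which can be used to check these results (illustrative).

import math

print(math.comb(10, 5))   # ways to choose 5 students from 10: 252
print(math.perm(10, 5))   # ordered selections of 5 from 10: 30240
print(math.comb(6, 3))    # Ex. 1: 20 combinations of 3 items from 6
print(math.perm(6, 3))    # Ex. 2: 120 permutations of 3 items from 6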
Ex:3: An experiment with three outcomes has been repeated 50 times and it was learned that E1 occurred
20 times, E2 occurred 13 times and E3 occurred 17 times. Assign probabilities to the outcomes.
Solution: S = {E1, E2, E3}. Here P(E1) = 20/50=0.40, P(E2) = 13/50=0.26, P(E3) = 17/50=0.34
P(S) = P(E1)+ P(E2)+ P(E3)= 0.40+0.26+0.34 = 1.0
Ex:4: A decision maker subjectively assigned the following probabilities to the four outcomes of an
experiment: P(E1) = 0.10, P(E2) = 0.15, P(E3) = 0.40 and P(E4) = 0.20. Are these probability
assignments valid? Explain
Solution: S = {E1, E2, E3, E4}. Here p(E1) = 0.10, p(E2) = 0.15, p(E3) = 0.40, p(E4) = 0.20.
P(S) = P(E1) + P(E2) + P(E3) + P(E4) = 0.10 + 0.15 + 0.40 + 0.20 = 0.85 < 1.0. Thus, the probability assignments are invalid because P(S) ≠ 1.
The above two problems tell us that, for any random experiment, P(S) = 1.
Ex: 5: Suppose that the manager of a large apartment complex provides the following probability estimates about the number of vacancies that will exist next month:

Vacancies:    0     1     2     3     4     5
Probability:  0.05  0.15  0.35  0.25  0.10  0.10
Provide the probability of each of the following events:
a. No vacancies
b. At least four vacancies
c. Two or fewer vacancies
Solution: S={0,1,2,3,4,5}
a. p(0)=0.05
b. p(At least four vacancies) = P(4)+P(5)=0.20.
c.
p(Two or fewer vacancies)= P(0)+P(1)+P(2)=0.05+0.15+ 0.35=0.55.
32
Ex:6: The National Sporting Goods Association conducted a survey of persons 7 years of age or older
about participation in sports activities. The total population in this age group was reported at 248.5
million, with 120.9 million male and 127.6 million female. The number of participants (in millions) for the top five sports activities appears here:
                             Participants (millions)
Activity                     Male    Female
Bicycle riding               22.2    21.0
Camping                      25.6    24.3
Exercise walking             28.7    57.7
Exercising with equipment    20.4    24.4
Swimming                     26.4    34.4
a. For a randomly selected female, estimate the probability of participation in each of the sports
activities
b. For a randomly selected male, estimate the probability of participation in each of the sports
activities
c. For a randomly selected person, what is the probability the person participates to exercise
walking?
d. Suppose you just happen to see an exercise walker going by. What is the probability the walker is
a woman? What is the probability the walker is a man?
Solution: S = {Br, C, EW, EE, Sw}, where Br - Bicycle riding, C - Camping, EW - Exercise walking, EE - Exercising with equipment and Sw - Swimming.
a. Female can come from any sports activities. Thus P(F) = (21/248.5) +(24.3/248.5) +… +
(34.4/248.5).
b. Male can come from any sports activities. Thus P(M) = (22.2/248.5) +(25.6/248.5) +… +
(26.4/248.5).
c. Person can be male or female. Thus, P(EW) = P(Male EW) +P(Female EW) = (28.7/248.5)
+(57.7/248.5)=86.4/248.5=0.34 = 34%.
d. We have to consider exercise walker population. Thus, P(woman/EW) = 57.7/ (28.7+57.7) =
57.7/86.4 = 0.67 = 67%. P(man/EW) = 28.7/ (28.7+57.7) = 28.7/86.4 = 0.33 = 33%.
HW: Textbook
Ex: 1-9, pp.158-159
Ex: 14-21, pp.162-164
33
Lecture 10
Basic relationships of probability (addition law, complement law, conditional law, multiplication law)
Addition Law
Suppose we have two events A and B (A, B ⊆ S). The chance of A or B occurring is written as
P(A ∪ B) = P(A) + P(B) − P(A ∩ B), if the two events are not mutually exclusive.
P(A ∪ B) = P(A) + P(B), if the two events are mutually exclusive.
Problem 1
Consider the case of a small assembly plant with 50 employees. Suppose that on occasion some of the workers fail to meet performance standards by completing work late or assembling a defective product. At the end of a performance evaluation period, the production manager found that 5 of the 50 workers completed work late, 6 of the 50 workers assembled a defective product, and 2 of the 50 workers both completed work late and assembled a defective product. If one employee is selected randomly, what is the probability that the worker completed work late or assembled a defective product?
Solution: Let L - work completed late, D - assembled a defective product. Total employees S = 50.
We have to find P(L ∪ D). We know P(L ∪ D) = P(L) + P(D) − P(L ∩ D) = (5/50) + (6/50) − (2/50) = 0.10 + 0.12 − 0.04 = 0.18 = 18%.
The chance is 18% that the worker completed work late or assembled a defective product.
Problem 2
A telephone survey to determine viewer response to a new television show obtained the following data:

Rating:     Poor   Below average   Average   Above average   Excellent
Frequency:  4      8               11        14              13

Suppose a viewer is selected randomly.
(i) What is the chance that the viewer will rate the new show as average or better?
(ii) What is the chance that the viewer will rate the new show as average or worse?
Solution: Total possible viewers S = 50.
(i) P(average or better) = (11/50) + (14/50) + (13/50) = 0.76 = 76%.
The viewer will rate the new show as average or better; the chance is 76%.
(ii) P(average or worse) = (11/50) + (8/50) + (4/50) = 0.46 = 46%.
The viewer will rate the new show as average or worse; the chance is 46%.
34
Complement law (very useful law many cases!).
Suppose we have one event A; then the chance of event A not occurring is defined as
P(Ac) = 1 − P(A), A ⊆ S.
Keyword: not
Recall Problem 1.
(i) What is the chance that the randomly selected worker did not complete work late?
(ii) If one employee is selected randomly, what is the probability that the worker neither completed work late nor assembled a defective product?
Solution: (i) P(Lc) = 1 − P(L) = 1 − 0.10 = 0.90 = 90%.
The chance is 90% that the randomly selected worker did not complete work late.
(ii) P((L ∪ D)c) = 1 − P(L ∪ D) = 1 − 0.18 = 0.82 = 82%.
The chance is 82% that the randomly selected worker neither completed work late nor assembled a defective product.
Conditional law - Keyword: If, given, known, conditional
Suppose we have two events A and B (A, B ⊆ S). The chance of A occurring when B is known to have occurred (or of B when A is known) is defined as
P(A/B) = P(A ∩ B)/P(B), P(B) ≠ 0
P(B/A) = P(A ∩ B)/P(A), P(A) ≠ 0
To understand the concept, consider the following situation:
Roll a die. What is the chance that the die will show
(i) 2
(ii) an even number
(iii) 2 or an even number
(iv) not 2
(v) 2, given that the die shows an even number
(vi) 2, given that the die shows an odd number
Solution: S = {1, 2, 3, 4, 5, 6}, so there are 6 outcomes.
(i) P(2) = 1/6
(ii) P(even number) = 3/6
(iii) P(2 ∪ even number) = (1/6) + (3/6) − (1/6) = 3/6
(iv) P(2c) = 1 − P(2) = 5/6
(v) P(2/even number) = 1/3
(vi) P(2/odd number) = 0
Observe carefully that (i) to (iv) are unconditional probabilities, while (v) and (vi) are conditional probabilities. To calculate (i) to (iv) we used the unconditional sample space, whereas to calculate (v) and (vi) we used the conditional sample space, where the given condition restricts the roll to even or odd numbers.
Multiplication law
Suppose we have two events A and B (A, B ⊆ S). The chance of both A and B occurring is defined as
P(A ∩ B) = P(A/B)·P(B) if A and B are dependent events
P(A ∩ B) = P(A)·P(B) if A and B are independent events
Keywords: both, joint, altogether, and
Problem
Consider the promotion status of male and female officers of a major metropolitan police force in the eastern United States. The force consists of 1200 officers: 960 men and 240 women. Over the past two years, 324 officers on the force received promotions. The specific breakdown of promotions for male and female officers is shown in the following table.

Table: Promotion status of police officers over the past two years
         Promoted   Not Promoted   Total
Men      288        672            960
Women    36         204            240
Total    324        876            1200
a) Find the joint probability table.
b) Find the marginal probabilities.
c) Suppose a male officer is selected randomly; what is the chance that the officer has been promoted?
d) Suppose a female officer is selected randomly; what is the chance that the officer has not been promoted?
e) Suppose an officer who got a promotion is selected randomly; what is the chance that the officer is male?
f) Suppose an officer who did not get a promotion is selected randomly; what is the chance that the officer is female?
Solution: Here S = 1200 officers
a) Joint probability table for promotion status
         Promoted   Not Promoted   Total
Men      0.24       0.56           0.80
Women    0.03       0.17           0.20
Total    0.27       0.73           1.00
b) P(Men) = 0.80, P(Women) = 0.20, P(Promoted) = 0.27, P(Not Promoted) = 0.73, these are known as
marginal probabilities.
c) P(Promoted/Men) = 288/960 = 0.30.
d) P(Not Promoted/Women) = 204/240 = 0.85.
e) P(Men/Promoted) = 288/324 ≈ 0.89.
f) P(Women/Not Promoted) = 204/876 ≈ 0.23.
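The joint, marginal and conditional probabilities in this problem follow directly from the counts in the table; a short sketch (counts taken from the table above).

counts = {("men", "promoted"): 288, ("men", "not promoted"): 672,
          ("women", "promoted"): 36, ("women", "not promoted"): 204}
total = sum(counts.values())                                   # 1200

joint = {k: v / total for k, v in counts.items()}              # joint probabilities
p_men = joint[("men", "promoted")] + joint[("men", "not promoted")]        # 0.80
p_promoted = joint[("men", "promoted")] + joint[("women", "promoted")]     # 0.27

print(counts[("men", "promoted")] / 960)    # P(Promoted/Men) = 0.30
print(counts[("men", "promoted")] / 324)    # P(Men/Promoted) = 0.89 (approx.)
print(round(p_men, 2), round(p_promoted, 2))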
HW: Text, Ex: 22-27, pp.169-170 and Ex: 32-35, pp.176-177
36
Lecture 12
Mid-term test -20%
Requirements:
1) A scientific calculator with two-variable statistics functions is required (no alternatives).
2) Mobile phones must be switched off during the exam.
Format of questions
1) Lecture 8 - Lecture 10 solved and HW problems
2) Related textbook questions
/Good Luck/
37
Lecture 13
Chapter 6
Normal distribution
It was introduced by Abraham de Moivre, a French-born mathematician, in 1733. Its form or shape is given as follows:
The mathematical equation depends upon the two parameters mean (μ) and standard deviation (σ) as follows:

f(X) = (1/(σ·sqrt(2π))) · e^(−(1/2)·((X − μ)/σ)²),  −∞ < X < ∞

where μ = mean of the normal variable, σ = SD of the normal variable (μ and σ determine the location and shape of the normal probability distribution), and π and e are mathematical constants whose values are approximately 3.14 and 2.718 respectively.
In notation, X ~ N(μ, σ), read as "X is normally distributed with mean μ and standard deviation σ".
It is true that once μ and σ are specified, the normal curve is completely determined.
Standard Normal Probability Distribution
A random variable that has a normal distribution with a mean of zero and standard deviation of one is said
to have a standard normal probability distribution.
The letter Z is commonly used to designate this particular normal random variable.
For the standard normal probability distribution, areas under the normal curve have been computed and are available in tables that can be used for computing probabilities.
The table on the final page of these notes is an example of such a table.
38
Computing Probabilities for Any Normal Probability Distribution
The reason for discussing the standard normal distribution so extensively is that probabilities for all
normal distributions are computed by using the standard normal distribution. That is, when we have a
normal distribution with any mean and standard deviation, we answer probability questions about the
distribution by first converting to the standard normal distribution. Then we can use Table and the
appropriate Z values to find the desired probabilities.
Problem
According to a survey, subscribers to The Wall Street Journal Interactive Edition spend an average of 27 hours per week using the computer at work. Assume the normal distribution applies and that the standard deviation is 8 hours.
a) What is the probability a randomly selected subscriber spends less than 11 hours using the
computer at work?
b) What percentage of the subscribers spends more than 40 hours per week using the computer at
work?
c) A person is classified as a heavy user if he or she is in the upper 20% in terms of hours of usage.
How many hours must a subscriber use the computer in order to be classified as a heavy user?
Solution
Denote X = number of hours per week using the computer at work, X ~ N(27, SD = 8).
a) We need P(X < 11) = P(Z < (11 − 27)/8) = P(Z < −2) = 0.0228.
b) P(X > 40) = P(Z > (40 − 27)/8) = P(Z > 1.62) = 0.0526, i.e. 5.26%.
c) We need x such that P(X > x) = 0.20. Thus x = μ + Zσ = 27 + 8Z. For an upper-tail area of 0.20, find Z from the table and substitute it to get the value of x.
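The z-table lookups in this problem can be reproduced with the standard normal CDF; Python's statistics.NormalDist provides it directly. This is a sketch for checking the answers, not part of the original notes.

from statistics import NormalDist

X = NormalDist(mu=27, sigma=8)      # hours per week, X ~ N(27, 8)

print(round(X.cdf(11), 4))          # a) P(X < 11) = 0.0228
print(round(1 - X.cdf(40), 4))      # b) P(X > 40) = 0.0521 (0.0526 with z rounded to 1.62)
print(round(X.inv_cdf(0.80), 1))    # c) hours needed to be in the upper 20%: about 33.7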
HW: Ex: 10-25, pp.248-250
39
40
Lecture 15
Class Test _2 -15%
Requirements:
1) A scientific calculator with two-variable statistics functions is required (no alternatives).
2) Mobile phones must be switched off during the exam.
Format of questions
1) Lecture 13-Lecture 14 solved and HW problems
2) Related Text book questions
41
Lecture 16
Chapter 8
Random Sampling
Our task here is to learn how to collect random samples from the target population and how to summarize the collected raw data in effective ways so that people in general can understand them clearly.
Generally, there are two ways the required information may be obtained:
a) Census survey and
b) Sample survey.
The total count of all units of the population for a certain characteristic is known as complete enumeration, also termed a census survey.
The money, manpower and time required for carrying out complete enumeration are generally large, and there are many situations where complete enumeration is not possible. Thus, sample enumeration, or a sample survey, is used to select a random part of the population using a table of random numbers (e.g. see Text, p.269), which is constructed from the digits 0, 1, ..., 9.
The method of drawing a random sample consists of the following steps:
a) Identify the N units in the population with the numbers 1 to N.
b) Select at random any page of the random number table and pick up the numbers in any row, column or diagonal at random.
The population units corresponding to the numbers in step b) constitute the random sample.
42
To illustrate how to select a sample using a table of random numbers, consider the following problem:
Suppose the monthly pocket money (TK/-) given to each of the 50 School of Business students at IUB as
follows:
Pocket Money (TK/-)
1100  1500  8900  4500  2700  3800  3000  6700  2600  3600
7500  7900  4600  2000  2400  1300  8500  6500  6200  5800
6000  6800  9200  3800  1200  8000  7100  8600  8700  6300
7600  7700  2600  7800  2000  9000  7300  8400  1700  2500
5700  5300  5500  1700  3700  5400  2400  4000  1200  7300
To draw a random sample of size 10 from a population of size 50, we first need to identify the 50 units of the population with the numbers 1 to 50.
Pocket Money (TK/-) with unit numbers
1100(01)  1500(02)  8900(03)  4500(04)  2700(05)  3800(06)  3000(07)  6700(08)  2600(09)  3600(10)
7500(11)  7900(12)  4600(13)  2000(14)  2400(15)  1300(16)  8500(17)  6500(18)  6200(19)  5800(20)
6000(21)  6800(22)  9200(23)  3800(24)  1200(25)  8000(26)  7100(27)  8600(28)  8700(29)  6300(30)
7600(31)  7700(32)  2600(33)  7800(34)  2000(35)  9000(36)  7300(37)  8400(38)  1700(39)  2500(40)
5700(41)  5300(42)  5500(43)  1700(44)  3700(45)  5400(46)  2400(47)  4000(48)  1200(49)  7300(50)
Then, in the given random number table, starting with the first number and moving row wise (or column
wise or diagonal wise) to pick out the numbers in pairs, one by one, ignoring those numbers which are
greater than 50, until a selection of 10 numbers is made.
# Selected row-wise sample numbers: 27, 15, 45, 11, 02, 14, 18, 07, 39, 31
43
# Selected row-wise monthly pocket money (TK/-) of 10 students out of 50: 7100, 2400, 3700, 7500,
1500, 2000, 6500, 3000, 1700, 7600
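Instead of a printed random-number table, the same draw can be simulated with Python's random module; the seed below is arbitrary, so the selected units will generally differ from the ten listed above.

import random

pocket_money = [1100, 1500, 8900, 4500, 2700, 3800, 3000, 6700, 2600, 3600,
                7500, 7900, 4600, 2000, 2400, 1300, 8500, 6500, 6200, 5800,
                6000, 6800, 9200, 3800, 1200, 8000, 7100, 8600, 8700, 6300,
                7600, 7700, 2600, 7800, 2000, 9000, 7300, 8400, 1700, 2500,
                5700, 5300, 5500, 1700, 3700, 5400, 2400, 4000, 1200, 7300]

random.seed(2016)                               # arbitrary seed, for reproducibility
units = random.sample(range(1, 51), k=10)       # 10 distinct unit numbers from 1..50
sample = [pocket_money[u - 1] for u in units]   # corresponding pocket-money values
print(units)
print(sample)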
HW:
Calculate the mean and standard deviation of the 10 students' monthly pocket money (use the formulas and a scientific calculator).
Text, Ex: 3-8, pp.272-273
44
Lecture 17
Chapter 8_Interval estimation (Estimation of Parameters)
Aim
Become familiar with how to construct a confidence interval for a population parameter.
The sample statistic is calculated from the sample data and the population parameter is inferred (or
estimated) from this sample statistic. In alternative words, statistics are calculated; parameters are
estimated.
Two types of estimates we find: point estimate and interval estimate.
Point Estimate – a single best value. For example, the mean and SD of the total marks for a course for IUB students are point estimates because each is a single value.
Interval Estimate - Confidence Interval
The point estimate varies from sample to sample and will differ from the population parameter because of sampling error. There is no way to know how close it is to the actual parameter. For this reason, statisticians like to give an interval estimate (confidence interval), which is a range of values used to estimate the parameter.
A confidence interval is an interval estimate with a specific level of confidence. The level of confidence is the probability that the interval estimate will contain the parameter. The level of confidence is 1 − α; an area of 1 − α lies within the confidence interval.
Confidence interval for μ based on large samples
Problem
Suppose, total marks for a course of 35 randomly selected IUB students is normally distributed with mean
78 and SD 9. Find 90%, 95% and 99% confidence intervals for population mean . Make a summary
based on findings.
Solution:
We are given X ~ N(78, 9), where X is the total marks for a course of 35 randomly selected IUB students and n = 35.

90% confidence interval for μ:
x̄ − z(α/2)·σ̂/√n ≤ μ ≤ x̄ + z(α/2)·σ̂/√n
Here x̄ = 78, σ̂ = 9, n = 35, α = 1 − 0.90 = 0.10, α/2 = 0.05 and z(0.05) = 1.65.
Thus, 78 − 1.65·(9/√35) ≤ μ ≤ 78 + 1.65·(9/√35), i.e. 75.5 ≤ μ ≤ 80.5.
Summary: Based on our findings, we are 90% confident that the population mean μ lies between 75.5 and 80.5.

95% confidence interval for μ:
x̄ − z(α/2)·σ̂/√n ≤ μ ≤ x̄ + z(α/2)·σ̂/√n
Here x̄ = 78, σ̂ = 9, n = 35, α = 1 − 0.95 = 0.05, α/2 = 0.025 and z(0.025) = 1.96.
Thus, 78 − 1.96·(9/√35) ≤ μ ≤ 78 + 1.96·(9/√35), i.e. 75.01 ≤ μ ≤ 80.98.
Summary: Based on our findings, we are 95% confident that the population mean μ lies between 75.01 and 80.98.

99% confidence interval for μ:
x̄ − z(α/2)·σ̂/√n ≤ μ ≤ x̄ + z(α/2)·σ̂/√n
Here x̄ = 78, σ̂ = 9, n = 35, α = 1 − 0.99 = 0.01, α/2 = 0.005 and z(0.005) = 2.58.
Thus, 78 − 2.58·(9/√35) ≤ μ ≤ 78 + 2.58·(9/√35), i.e. 74.07 ≤ μ ≤ 81.92.
Summary: Based on our findings, we are 99% confident that the population mean μ lies between 74.07 and 81.92.
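The three intervals above can be checked with a short script; NormalDist().inv_cdf gives the z-value for any confidence level, so the table values 1.65, 1.96 and 2.58 are reproduced automatically. This is a sketch under the same assumptions (x̄ = 78, σ̂ = 9, n = 35); small differences in the last decimal place come from rounding the z-values.

from math import sqrt
from statistics import NormalDist

xbar, sd, n = 78, 9, 35

for level in (0.90, 0.95, 0.99):
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)   # z(alpha/2)
    me = z * sd / sqrt(n)                           # margin of error
    print(f"{level:.0%}: {xbar - me:.2f} to {xbar + me:.2f}")
# 90%: 75.50 to 80.50   95%: 75.02 to 80.98   99%: 74.08 to 81.92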
Practice problems
1. In an effort to estimate the mean amount spent per customer for dinner at a major Atlanta restaurant,
data were collected for a sample of 49 customers over a three-week period. Assume a population standard deviation of $2.50.
a. At a 95% confidence level, what is the margin of error?
b. If the sample mean is $22.6, what is the 95% confidence interval for the population mean?
46
Guideline:
X - amount spent per customer for dinner at a major Atlanta restaurant. Here n = 49 and SD = σ̂ = $2.50.
a) Find the margin of error ME = z(α/2)·σ̂/√n; here α = 1 − 0.95 = 0.05, α/2 = 0.025 and z(0.025) = 1.96.
b) 95% confidence interval for the population mean: x̄ − z(α/2)·σ̂/√n ≤ μ ≤ x̄ + z(α/2)·σ̂/√n, with x̄ = $22.6. (Solve it.)
2. A machine fills bags of popcorn; the weight of the bags is known to be normally distributed with mean 14.1 oz and SD 0.3 oz. For a sample of 40 bags, what is a 95% confidence interval for the population mean μ?
Guideline:
a) X - weight of the bags. Here n = 40, x̄ = 14.1, σ̂ = 0.3, α = 1 − 0.95 = 0.05, α/2 = 0.025 and z_{0.025} = 1.96.
95% confidence interval for the population mean μ: x̄ ± z_{α/2} (σ̂/√n).
(Solve it)
3. The National Quality Research Center at the University of Michigan provides a quarterly measure of
consumer opinions about products and services (The Wall Street Journal, February 18, 2013). A survey of
40 restaurants in the Fast Food/ Pizza group showed a sample mean customer satisfaction index of 71.
Past data indicate that the population standard deviation of the index has been relatively stable with σ = 5.
a. Using 95% confidence, determine the margin of error.
b. Determine the margin of error if 99% confidence is desired.
Guideline:
Follow 1 and 2 questions guideline
4. The undergraduate GPA for students admitted to the top graduate business schools is 3.37. Assume this
estimate is based on a sample of 120 students admitted to the top schools. Using past years' data, the
population standard deviation can be assumed known with σ = 0.28. What is the 95% confidence interval
estimate of the mean undergraduate GPA for students admitted to the top graduate business schools?
Guideline:
Follow 1 and 2 questions guideline
HW: Text,
Confidence interval for μ based on small samples
When the sample size is less than 30 (n < 30) and the population SD is unknown, the standardized sample mean follows a Student's t distribution. The Student's t distribution was developed by William S. Gosset, who worked for the Guinness brewery in Ireland. His employer would not allow him to publish the work under his own name, so he used the pseudonym "Student".
The Student's t distribution is very similar to the standard normal distribution.
It is symmetric about its mean
As the sample size increases, the t distribution approaches the normal distribution.
It is bell shaped.
The t-scores can be negative or positive, but the probabilities are always positive.
(1 − α)100% confidence interval for μ:
x̄ ± t_{α/2, n−1} (σ̂/√n)
Problem
Suppose we are given the sample heights of 20 IUB students, where x̄ = 67.3", SD = 3.6" and the distribution is symmetric. Develop a 95% confidence interval for μ and make a summary based on your findings.
Solution:
We are given X ~ N(67.3, 3.6), where X - heights of 20 randomly selected IUB students, and n = 20.
95% confidence interval for μ:
x̄ ± t_{α/2, n−1} (σ̂/√n)
Here x̄ = 67.3, σ̂ = 3.6, n = 20, α = 1 − 0.95 = 0.05, α/2 = 0.025 and t_{0.025, 19} = 2.093.
Thus, 67.3 ± 2.093 × (3.6/√20) = 67.3 ± 1.68, i.e. approximately (65.61, 68.98).
Summary: Based on our findings, we are 95% confident that the population mean μ lies between 65.61 and 68.98.
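The small-sample interval can be checked in the same way; the sketch below (SciPy assumed available, illustrative only) uses the t critical value instead of z:

```python
from math import sqrt
from scipy.stats import t

x_bar, sd, n = 67.3, 3.6, 20              # heights example above
alpha = 0.05
t_crit = t.ppf(1 - alpha / 2, df=n - 1)   # t_{0.025, 19} ≈ 2.093
me = t_crit * sd / sqrt(n)                # margin of error
print(f"95% CI: ({x_bar - me:.2f}, {x_bar + me:.2f})")
```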
Practice problems
1. The International Air Transport Association surveys business travelers to develop quality ratings for
transatlantic gateway airports. The maximum possible rating is 10. Suppose a simple random sample of
25 business travelers is selected and each traveler is asked to provide a rating for the Miami International
Airport. The ratings obtained from the sample of 25 business travelers follow.
6, 4, 6, 8, 7, 7, 6, 3, 3, 8, 10, 4, 8, 7, 8, 7, 5, 9, 5, 8, 4, 3, 8, 5,5
Develop a 95% confidence interval estimate of the population mean rating for Miami.
2. Text book, Ex.15-17, p.324
3. A machine fills bags of popcorn; the weight of the bags is known to be normally distributed with mean 10.5 oz and SD 0.8 oz. For a sample of 10 bags, what is a 90% confidence interval for the population mean μ?
Confidence interval for variance and standard deviation
We have learned that estimates of population means can be made from sample means, and confidence
intervals can be constructed to better describe those estimates. Similarly, we can estimate a population
standard deviation from a sample standard deviation, and when the original population is normally
distributed, we can construct confidence intervals for the standard deviation as well.
Variances and standard deviations are a very different type of measure than an average, so we can expect
some major differences in the way estimates are made.
We know that the population variance formula, when used on a sample, does not give an unbiased
estimate of the population variance. In fact, it tends to underestimate the actual population variance. For
that reason, there are two formulas for variance, one for a population and one for a sample. The sample
variance formula is an unbiased estimator of the population variance.
Also, both the variance and the standard deviation are nonnegative numbers. Since neither can take on a negative value, the normal distribution cannot be the distribution of a variance or a standard deviation. It can be shown that if the original population of data is normally distributed, then the quantity (n − 1)s²/σ² has a chi-square distribution with n − 1 degrees of freedom. The chi-square distribution of this quantity allows us to construct confidence intervals for the variance and the standard deviation (when the original population of data is normally distributed).
(1 − α)100% confidence interval for σ²:
(n − 1)s²/χ²_{α/2} ≤ σ² ≤ (n − 1)s²/χ²_{1−α/2}
where the χ² values are based on a chi-square distribution with n − 1 degrees of freedom and 1 − α is the confidence coefficient (for details see Text, p.440).
(1 − α)100% confidence interval for σ:
√[(n − 1)s²/χ²_{α/2}] ≤ σ ≤ √[(n − 1)s²/χ²_{1−α/2}]
where the χ² values are based on a chi-square distribution with n − 1 degrees of freedom and 1 − α is the confidence coefficient (for details see Text, p.440).
Problem-1
A statistician chooses 27 randomly selected dates and when examining the occupancy records of a
particular motel for those dates, finds a standard deviation of 5.86 rooms rented. If the number of rooms
rented is normally distributed, find the 95% confidence interval for the population standard deviation of
the number of rooms rented.
Solution:
Here X - number of rooms rented, s = 5.86 and n = 27.
95% confidence interval for the population standard deviation (σ):
√[(n − 1)s²/χ²_{0.025}] ≤ σ ≤ √[(n − 1)s²/χ²_{0.975}]
Here (n − 1)s² = 26 × 5.86² = 892.83 and, with 26 degrees of freedom, χ²_{0.025} = 41.923 and χ²_{0.975} = 13.844.
Thus, √(892.83/41.923) ≤ σ ≤ √(892.83/13.844), i.e. 4.615 ≤ σ ≤ 8.031.
Summary: Based on our findings, we are 95% confident that the population standard deviation lies between 4.615 and 8.031.
Problem-2
A statistician chooses 27 randomly selected dates and when examining the occupancy records of a
particular motel for those dates, finds a standard deviation of 5.86 rooms rented. If the number of rooms
rented is normally distributed, find the 95% confidence interval for the population variance of the number
of rooms rented.
Solution:
Here X - number of rooms rented, s = 5.86 and n = 27.
95% confidence interval for the population variance (σ²):
(n − 1)s²/χ²_{0.025} ≤ σ² ≤ (n − 1)s²/χ²_{0.975}
Here (n − 1)s² = 26 × 5.86² = 892.83 and, with 26 degrees of freedom, χ²_{0.025} = 41.923 and χ²_{0.975} = 13.844.
Thus, 892.83/41.923 ≤ σ² ≤ 892.83/13.844, i.e. 21.297 ≤ σ² ≤ 64.492.
Summary: Based on our findings, we are 95% confident that the population variance lies between 21.297 and 64.492.
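Both intervals (for σ² and for σ) can be reproduced numerically; the following is a minimal Python sketch (SciPy assumed available, for checking only) using the chi-square critical values:

```python
from math import sqrt
from scipy.stats import chi2

n, s = 27, 5.86                            # motel occupancy example above
alpha = 0.05
df = n - 1
chi2_upper = chi2.ppf(1 - alpha / 2, df)   # chi^2_{0.025} (upper tail) ≈ 41.923
chi2_lower = chi2.ppf(alpha / 2, df)       # chi^2_{0.975} (lower tail) ≈ 13.844

var_low = df * s**2 / chi2_upper
var_high = df * s**2 / chi2_lower
print(f"95% CI for variance: ({var_low:.3f}, {var_high:.3f})")
print(f"95% CI for SD:       ({sqrt(var_low):.3f}, {sqrt(var_high):.3f})")
```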
Practice problems
1. The variance in drug weights is critical in the pharmaceutical industry. For a specific drug,
with weights measured in grams, a sample of 18 units provided a sample variance of s² = 0.36.
a. Construct a 90% confidence interval estimate of the population variance for the weight of this
drug.
b. Construct a 90% confidence interval estimate of the population standard deviation.
2. The daily car rental rates for a sample of eight cities follow:
City - Daily Car Rental Rate ($)
Atlanta - 69
Chicago - 72
Dallas - 75
New Orleans - 67
Phoenix - 62
Pittsburgh - 65
San Francisco - 61
Seattle - 59
a. Compute the sample variance and the sample standard deviation for these data.
b. What is the 95% confidence interval estimate of the variance of car rental rates for the population?
c. What is the 90% confidence interval estimate of the standard deviation for the population?
Lecture 18
Chapter 10
Interval estimation for two population means and two standard deviations; see Text, Chapter 10.
Lecture 19
Tests of hypothesis
In general, we do not know the true value of population parameters (mean, proportion, variance,
SD and others). They must be estimated based on random samples. However, we do have
hypotheses about what the true values are.
The major purpose of hypothesis testing is to choose between two competing hypotheses about
the value of a population parameter.
Actually, in hypothesis testing we begin by making a tentative assumption about a population
parameter. This tentative assumption is called the null hypothesis and is denoted by H0.
It is then necessary to define another hypothesis, called the alternative hypothesis, which is the opposite of H0. It is denoted by Ha or H1.
Both the null and the alternative hypotheses should be stated before any statistical test of significance is conducted.
In general, it is most convenient to always have the null hypothesis contain an equal sign, e.g.
(1) H0: μ = 100 vs. H1: μ ≠ 100
(2) H0: μ ≥ 100 vs. H1: μ < 100
(3) H0: μ ≤ 100 vs. H1: μ > 100
Thus, note that
under H0 the signs are =, ≥ and ≤, and
under H1 the signs are ≠, < and >.
In general, a hypothesis test about the value of the population mean μ takes one of the following three forms:
H0: μ = μ0 vs. H1: μ ≠ μ0
H0: μ ≥ μ0 vs. H1: μ < μ0
H0: μ ≤ μ0 vs. H1: μ > μ0
For example, consider the following problems in choosing the proper form for a hypothesis test:
Problem 1
The manager of an automobile dealership is considering a new bonus plan designed to increase
sales volume. Currently, the mean sales volume is 14 automobiles per month. The manager
wants to conduct a research study to see whether the new bonus plan increases sales volume. To
collect data on the plan, a sample of sales personnel will be allowed to sell under the new bonus
plan for a 1-month period. Define the null and the alternative hypotheses.
Solution: Here H0: μ ≤ 14 and H1: μ > 14.
Problem 2
The manager of an automobile dealership is considering a new bonus plan designed to increase
sales volume. Currently, the mean sales volume is 14 automobiles per month. The manager
wants to conduct a research study to see whether the new bonus plan decreases sales volume. To
collect data on the plan, a sample of sales personnel will be allowed to sell under the new bonus
plan for a 1-month period. Define the null and the alternative hypotheses.
Solution: Here H0: μ ≥ 14 and H1: μ < 14.
Problem 3
The manager of an automobile dealership is considering a new bonus plan designed to increase
sales volume. Currently, the mean sales volume is 14 automobiles per month. The manager
wants to conduct a research study to see whether the new bonus plan changes sales volume. To
collect data on the plan, a sample of sales personnel will be allowed to sell under the new bonus
plan for a 1-month period. Define the null and the alternative hypotheses.
Solution: Here H0: μ = 14 and H1: μ ≠ 14.
Steps for conducting a hypothesis test
1. Develop H0 and H1.
2. Specify the level of significance, α, which defines unlikely values of the sample statistic if the null hypothesis is true. It is selected by the researcher at the start. The common values of α are 0.01, 0.05 and 0.10, with 0.05 being the most common.
3. Select the test statistic (a quantity calculated using the sample values that is used to perform
the hypothesis test) that will be used to test the hypothesis.
Guidelines to select test statistic:
Tests on the population mean (μ):
a) Use the Z-statistic when n > 30 and the SD is known.
b) Use the t-statistic when n ≤ 30 and the SD is unknown.
c) Tests on the population variance and SD (σ² and σ): use the χ²-statistic.
4. Use α to determine the critical value (a boundary value that separates the critical region from the non-critical or acceptance region, based upon the given risk level α) for the test statistic, and state the rejection rule for H0.
The critical region (CR) or rejection region (RR) is the set of values of the test statistic for which H0 is rejected.
The non-critical region or acceptance region (AR) is the set of values of the test statistic for which H0 is not rejected.
5. Collect the sample data and compute the value of the test statistic.
6. Use the value of the test statistic and the rejection rule to determine whether to reject H0.
Using the p-value to make a decision:
The p-value is the probability, computed assuming H0 is true, of obtaining a sample result that is at least as unlikely as the one observed. More clearly, the p-value is a measure of how likely the sample results are when H0 is assumed to be true. The smaller the p-value, the less likely it is that the sample results came from a situation where H0 is true. It is often called the observed level of significance. The user can then compare the p-value to α and draw a hypothesis-test conclusion without referring to a statistical table:
Use the value of the test statistic to compute the p-value.
Reject H0 if p-value < α.
Problem-4
Individuals filing federal income tax returns prior to March 31 had an average refund of $1056.
Consider the population of last minute filers who mail their returns during the last 5 days of the
income tax period typically April 10 to April 15. A researcher suggests that one of the reasons
individuals wait until the last 5 days to file their returns is that on average those individuals have
a lower refund than early filers.
a) Develop appropriate hypotheses such that rejection of the null hypothesis will support the researcher's argument.
b. Using 5% level of significance, what is the critical value for the test statistic and what is the
rejection rule?
c. For a sample of 400 individuals who filed a return between April 10 and April 15, the sample
mean refund was $910 and the sample standard deviation was $1600. Compute the value of the
test statistic.
d. What is your conclusion?
e. What is the p-value for the test?
Solution
Denote X - refund amount of individuals who filed between April 10 and April 15. Here n = 400, x̄ = $910 and s = $1600.
(a) Set up the following hypotheses:
H0: μ ≥ $1056 vs. H1: μ < $1056
(b) Since n > 30, we choose the z-statistic. The critical value of the z-statistic at the 5% level of significance, found from the z table, is −1.645.
Rejection rule: reject H0 if zcal ≤ −1.645.
(c) Test Statistic zcal = (sqrt(400)(910 - 1056))/1600 = -1.8250
(d) Conclusion
Decision: Reject the null hypothesis, since zcal = −1.825 ≤ −1.645.
Thus, at the 5% level of significance (95% confidence), we reject the null hypothesis and accept the alternative hypothesis. More clearly, based on the sample evidence, the researcher's claim is supported: individuals filing federal income tax returns between April 10 and April 15 have an average refund lower than $1056.
(e) The p-value = P(Z ≤ −1.825) ≈ 0.034, which is less than α = 0.05, so it also leads to rejecting H0.
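The test statistic and p-value for Problem 4 can be reproduced with a short Python sketch (SciPy assumed available; this is a check, not part of the required solution method):

```python
from math import sqrt
from scipy.stats import norm

n, x_bar, s = 400, 910, 1600       # Problem 4 sample data
mu_0, alpha = 1056, 0.05

z_cal = sqrt(n) * (x_bar - mu_0) / s   # test statistic
p_value = norm.cdf(z_cal)              # lower-tail test: P(Z <= z_cal)

print(f"z = {z_cal:.3f}, p-value = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Do not reject H0")
```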
Problem- 5
Individuals filing federal income tax returns prior to March 31 had an average refund of $1056.
Consider the population of last minute filers who mail their returns during the last 5 days of the
income tax period typically April 10 to April 15. A researcher suggests that one of the reasons
individuals wait until the last 5 days to file their returns is that on average those individuals have
a greater refund than early filers.
a) Develop appropriate hypotheses such that rejection of the null hypothesis will support the researcher's argument.
b. Using 5% level of significance, what is the critical value for the test statistic and what is the
rejection rule?
c. For a sample of 400 individuals who filed a return between April 10 and April 15, the sample
mean refund was $910 and the sample standard deviation was $1600. Compute the value of the
test statistic.
d. What is your conclusion?
e. What is the p-value for the test?
Solution
Denote X - refund amount of individuals who filed between April 10 and April 15. Here n = 400, x̄ = $910 and s = $1600.
(a) Set up the following hypotheses:
H0: μ ≤ $1056 vs. H1: μ > $1056
(b) Since n > 30, we choose the z-statistic. The critical value of the z-statistic at the 5% level of significance, found from the z table, is 1.645.
Rejection rule: reject H0 if zcal ≥ 1.645.
(c) Test Statistic zcal = (sqrt(400)(910 - 1056))/1600 = -1.8250
(d) Conclusion
Decision: Accept (do not reject) the null hypothesis, since zcal = −1.825 < 1.645.
Thus, at the 5% level of significance (95% confidence), we accept the null hypothesis and reject the alternative hypothesis. More clearly, based on the sample evidence, the researcher's claim is not supported: individuals filing federal income tax returns between April 10 and April 15 do not have an average refund greater than $1056.
(e) The p-value = P(Z ≥ −1.825) ≈ 0.966, which is much larger than α = 0.05, so it also leads to accepting H0.
Problem- 6
Individuals filing federal income tax returns prior to March 31 had an average refund of $1056.
Consider the population of last minute filers who mail their returns during the last 5 days of the
income tax period typically April 10 to April 15. A researcher suggests that one of the reasons
individuals wait until the last 5 days to file their returns is that on average those individuals have
a different (changed) refund than early filers.
a) Develop appropriate hypotheses such that rejection of the null hypothesis will support the researcher's argument.
b. Using 5% level of significance, what is the critical value for the test statistic and what is the
rejection rule?
c. For a sample of 400 individuals who filed a return between April 10 and April 15, the sample
mean refund was $910 and the sample standard deviation was $1600. Compute the value of the
test statistic.
d. What is your conclusion?
e. What is the p-value for the test?
Solution
Denote X - refund amount of individuals who filed between April 10 and April 15. Here n = 400, x̄ = $910 and s = $1600.
(a) Set up the following hypotheses:
H0: μ = $1056 vs. H1: μ ≠ $1056
(b) Since n > 30, we choose the z-statistic. The critical values of the z-statistic at the 5% level of significance, found from the z table, are ±1.960.
Rejection rule: reject H0 if zcal ≥ 1.960 or zcal ≤ −1.960.
(c) Test Statistic zcal = (sqrt(400)(910 - 1056))/1600 = -1.8250
(d) Conclusion
Decision: Accept (do not reject) the null hypothesis, since −1.960 < zcal = −1.825 < 1.960.
Thus, at the 5% level of significance (95% confidence), we accept the null hypothesis and reject the alternative hypothesis. More clearly, based on the sample evidence, the researcher's claim is not supported: the average refund of individuals filing federal income tax returns between April 10 and April 15 does not differ from $1056.
(e) The p-value = 2 P(Z ≤ −1.825) ≈ 0.068, which is larger than α = 0.05, so it also leads to accepting H0.
Practice problem
The Edison Electric Institute has published figures on the annual number of kilowatt-hours expended by various home appliances. It is claimed that a vacuum cleaner expends an average of 46 kilowatt-hours per year. If a random sample of 42 homes included in a planned study indicates that vacuum cleaners expend an average of 42 kilowatt-hours per year with a SD of 11.9 kilowatt-hours, does this suggest at the 0.10 level of significance that vacuum cleaners expend, on average, less than 46 kilowatt-hours annually? Assume the population of kilowatt-hours to be normal.
Guideline
X - number of kilowatt-hours expended on vacuum cleaners per home. Here n = 42, x̄ = 42 and SD = 11.9.
H0: μ ≥ 46 vs. H1: μ < 46
Since n > 30, choose the z-statistic. This is a lower-tail test at the 0.10 level of significance, so the critical value of the z-statistic, found from the z table, is −1.28.
(Solve it, follow problem 4)
Test for population mean for small samples and SD unknown
Problem-7
Individuals filing federal income tax returns prior to March 31 had an average refund of $1056.
Consider the population of last minute filers who mail their returns during the last 5 days of the
income tax period typically April 10 to April 15. A researcher suggests that one of the reasons
individuals wait until the last 5 days to file their returns is that on average those individuals have
a lower refund than early filers.
a) Develop appropriate hypotheses such that rejection of the null hypothesis will support the researcher's argument.
b. Using 5% level of significance, what is the critical value for the test statistic and what is the
rejection rule?
c. For a sample of 10 individuals who filed a return between April 10 and April 15, the sample
mean refund was $910 and the sample standard deviation was $1600. Compute the value of the
test statistic.
d. What is your conclusion?
e. What is the p-value for the test?
Solution
Denote X - refund amount of individuals who filed between April 10 and April 15. Here n = 10, x̄ = $910 and s = $1600.
(a) Set up the following hypotheses:
H0: μ ≥ $1056 vs. H1: μ < $1056
(b) Since n ≤ 30 and the population SD is unknown, we choose the t-statistic. The critical value of the t-statistic at the 5% level of significance with 9 df, found from the t table, is −1.833.
Rejection rule: reject H0 if tcal ≤ −1.833.
(c) Test statistic tcal = (sqrt(10)(910 − 1056))/1600 = −0.29
(d) Conclusion
Decision: Accept (do not reject) the null hypothesis, since tcal = −0.29 > −1.833.
Thus, at the 5% level of significance (95% confidence), we accept the null hypothesis and reject the alternative hypothesis. More clearly, based on the evidence from this small sample, the researcher's claim is not supported: there is no evidence that individuals filing federal income tax returns between April 10 and April 15 have an average refund lower than $1056.
(e) The p-value = P(T9 ≤ −0.29) ≈ 0.39, which is larger than α = 0.05, so it also leads to accepting H0.
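As with the large-sample case, the small-sample t test can be checked numerically; a minimal Python sketch (SciPy assumed available, illustrative only):

```python
from math import sqrt
from scipy.stats import t

n, x_bar, s = 10, 910, 1600        # Problem 7 sample data
mu_0, alpha = 1056, 0.05
df = n - 1

t_cal = sqrt(n) * (x_bar - mu_0) / s   # test statistic
p_value = t.cdf(t_cal, df)             # lower-tail test: P(T <= t_cal)

print(f"t = {t_cal:.3f}, p-value = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Do not reject H0")
```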
Practice problem
Joan's Nursery specializes in custom-designed landscaping for residential areas. The estimated labor cost associated with a particular landscaping proposal is based on the number of plantings of trees, shrubs and so on to be used for the project. For cost-estimating purposes, managers use 2 hours of labor time for the planting of a medium-size tree. Actual times from a sample of 10 plantings during the past month follow (time in hours): 1.9, 1.7, 2.8, 2.4, 2.6, 2.5, 2.8, 3.2, 1.6 and 2.5. Using the 0.05 level of significance, test to see whether the mean tree-planting time exceeds 2 hours.
Guideline:
X - tree-planting time. Here n = 10, mean = 2.4 and SD = 0.52 (use a calculator to find these).
H0: μ ≤ 2 vs. H1: μ > 2
Since n ≤ 30 and the population SD is unknown, choose the t-statistic. The critical value of the t-statistic at the 5% level of significance with 9 df, found from the t table, is 1.833.
(Solve it, follow problem 7)
Tests for standard deviation
(1) H0: σ² ≤ σ0² vs. H1: σ² > σ0²
(2) H0: σ² ≥ σ0² vs. H1: σ² < σ0²
(3) H0: σ² = σ0² vs. H1: σ² ≠ σ0²
Test statistic:
χ² = (n − 1)s²/σ0², where σ0² is the hypothesized value for the population variance.
Problem 8
A Fortune study found that the variance in the number of vehicles owned or leased by
subscribers to Fortune magazine is 0.94. Assume a sample of 12 subscribers to another magazine
provided the following data on the number of vehicles owned or leased: 2, 1, 2, 0, 3, 2, 2, 1, 2, 1, 0 and 1.
a. Compute the sample variance in the number of vehicles owned or leased by the 12 subscribers.
b. Test the hypothesis H0: σ² = 0.94 to determine whether the variance in the number of vehicles owned or leased by subscribers of the other magazine differs from σ² = 0.94 for Fortune. Using a 0.05 level of significance, what is your conclusion?
Solution
Denote X - the number of vehicles owned or leased by subscribers of the other magazine. Here n = 12 and the sample variance is s² = 0.81.
Set up the following hypotheses:
H0: σ² = 0.94 vs. H1: σ² ≠ 0.94.
Note that the alternative is two-sided, so there are rejection regions in both the lower and the upper tails of the sampling distribution.
Test statistic: χ²-statistic. With H0: σ² = 0.94, the value of the χ² statistic is computed as (n − 1)s²/σ0² = (11 × 0.81)/0.94 = 9.478.
The critical values of the χ² statistic at the 5% level of significance are χ²_0.975 and χ²_0.025. Using 11 degrees of freedom, the critical values found from the χ² table are χ²_0.975 = 3.815 and χ²_0.025 = 21.920 respectively.
The rejection rule: reject H0 if χ² ≤ 3.815 or χ² ≥ 21.920.
Decision:
Accept (do not reject) the null hypothesis, since 3.815 < χ² = 9.478 < 21.920.
Thus, at the 5% level of significance (95% confidence), we accept the null hypothesis. More clearly, based on the sample evidence, the variance in the number of vehicles owned or leased by subscribers of the other magazine does not differ from the value reported for Fortune.
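The sample variance, the χ² statistic and the critical values for Problem 8 can be checked with a minimal Python sketch (SciPy assumed available; a cross-check rather than the official solution method):

```python
from scipy.stats import chi2

data = [2, 1, 2, 0, 3, 2, 2, 1, 2, 1, 0, 1]   # vehicles owned or leased (Problem 8)
sigma0_sq, alpha = 0.94, 0.05

n = len(data)
mean = sum(data) / n
s_sq = sum((x - mean) ** 2 for x in data) / (n - 1)   # sample variance

chi2_cal = (n - 1) * s_sq / sigma0_sq
lower = chi2.ppf(alpha / 2, n - 1)        # chi^2_0.975 ≈ 3.816
upper = chi2.ppf(1 - alpha / 2, n - 1)    # chi^2_0.025 ≈ 21.920

print(f"s^2 = {s_sq:.3f}, chi2 = {chi2_cal:.3f}, critical values = ({lower:.3f}, {upper:.3f})")
print("Reject H0" if (chi2_cal < lower or chi2_cal > upper) else "Do not reject H0")
```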
Practice problem
Home mortgage interest rates for 30-year fixed rate loans vary throughout the country. During
the summer of 2000, data available from various parts of the country suggested that the SD of the interest rates was 0.096. The corresponding variance in interest rates would be 0.0092. Consider a follow-up study in the summer of 2003. The interest rates for 30-year fixed rate loans at a sample of 20 lending institutions had a sample SD of 0.114. Conduct a hypothesis test H0: σ² = 0.0092 to see whether the sample data indicate that the variability in interest rates
decreased. Using the 0.01 level of significance, what is your conclusion?
Guideline
X - home mortgage interest rates for 30-year fixed rate loans. Here n = 20, sample SD s = 0.114, population SD σ = 0.096, population variance σ² = 0.0092 and α = 0.01.
Set up the following hypotheses:
H0: σ² ≥ 0.0092 vs. H1: σ² < 0.0092.
Test statistic: χ²-statistic. With σ0² = 0.0092, the value of the χ² statistic is computed as (n − 1)s²/σ0² = (19 × 0.114²)/0.0092 = 26.83.
The critical value of the χ² statistic at the 1% level of significance is χ²_0.990. Using 19 degrees of freedom, the critical value found from the χ² table is χ²_0.990 = 7.633.
The rejection rule: reject H0 if χ² ≤ 7.633.
Decision:
{Insert decision curve}
Accept (do not reject) the null hypothesis, since χ² = 26.83 > 7.633.
Thus, at the 1% level of significance (99% confidence), we accept the null hypothesis. More clearly, based on the sample evidence, we cannot conclude that the variability in interest rates decreased.
HW: Text, Chapter 11
Summary on Tests of Hypothesis (One Sample)
One Sample Tests

Population Mean (μ) Test
Hypotheses: i) H0: μ = 5 vs. H1: μ ≠ 5; ii) H0: μ ≥ 5 vs. H1: μ < 5; iii) H0: μ ≤ 5 vs. H1: μ > 5
Statistic: Zcal = √n (x̄ − μ_H0)/σx (large-sample test, n > 30) or tcal = √n (x̄ − μ_H0)/sx (small-sample test, n ≤ 30).
Distribution: standard normal Z (or t); use the Z-table (or t-table) to have Ztab (or ttab). Format: Ztab = Z_α for a one-sided test and Ztab = Z_{α/2} for a two-sided test; ttab = t_{(n−1),α} for a one-sided test and ttab = t_{(n−1),α/2} for a two-sided test.

Population Proportion (P) Test
Hypotheses: i) H0: P = 0.6 vs. H1: P ≠ 0.6; ii) H0: P ≥ 0.6 vs. H1: P < 0.6; iii) H0: P ≤ 0.6 vs. H1: P > 0.6
Statistic: Zcal = (p̂ − P_H0)/σp, where σp = √(P_H0(1 − P_H0)/n).
Distribution: standard normal Z; use the Z-table to have Ztab. Format: Ztab = Z_α for a one-sided test and Ztab = Z_{α/2} for a two-sided test.

Population SD (σ) Test
Hypotheses: i) H0: σ = 1.5 vs. H1: σ ≠ 1.5; ii) H0: σ ≥ 1.5 vs. H1: σ < 1.5; iii) H0: σ ≤ 1.5 vs. H1: σ > 1.5
Statistic: χ² = (n − 1)Sx²/σ²_H0, where Sx² is the sample variance.
Distribution: chi-square; use the chi-square table for χ²tab (note that the chi-square table is read much like the t-table). Format: χ²tab = χ²_{(n−1),α} for a one-sided test and χ²tab = χ²_{(n−1),α/2} for a two-sided test.

Note: the i) hypotheses give two-sided (two-tailed) tests; the other two are one-sided (lower- or upper-tail) tests.
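Of the three one-sample tests in this summary, the proportion test is the only one not worked through in an earlier example; the sketch below (hypothetical numbers, SciPy assumed available) shows a two-sided test of H0: P = 0.6 using the statistic from the middle column:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical data: 130 successes in n = 200 trials, testing H0: P = 0.6
n, successes = 200, 130
p_0, alpha = 0.6, 0.05

p_hat = successes / n
sigma_p = sqrt(p_0 * (1 - p_0) / n)
z_cal = (p_hat - p_0) / sigma_p

z_tab = norm.ppf(1 - alpha / 2)          # two-sided critical value Z_{alpha/2}
print(f"z = {z_cal:.3f}, critical = ±{z_tab:.3f}")
print("Reject H0" if abs(z_cal) > z_tab else "Do not reject H0")
```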
Lecture 21
Tests of two population means and two standard deviations; applications from real data.
See Text, Chapter 11
Summary on Tests of Hypothesis (Two Samples)
Two Samples Tests

Population Means Test
Hypotheses: i) H0: μ1 = μ2 vs. H1: μ1 ≠ μ2; ii) H0: μ1 ≥ μ2 vs. H1: μ1 < μ2; iii) H0: μ1 ≤ μ2 vs. H1: μ1 > μ2
Statistic: Zcal = (x̄1 − x̄2)/σ_{x̄1−x̄2} (large-sample test, at least one sample > 30) or tcal = (x̄1 − x̄2)/S_{x̄1−x̄2} (small samples, ≤ 30).
Distribution: standard normal Z (or t); use the Z-table (or t-table) to have Ztab or ttab. Format: Ztab = Z_α for a one-sided test and Ztab = Z_{α/2} for a two-sided test; ttab = t_{n,α} for a one-sided test and ttab = t_{n,α/2} for a two-sided test, where n = n1 + n2 − 2.

Population Proportions Test
Hypotheses: i) H0: P1 = P2 vs. H1: P1 ≠ P2; ii) H0: P1 ≥ P2 vs. H1: P1 < P2; iii) H0: P1 ≤ P2 vs. H1: P1 > P2
Statistic: Zcal = (p̂1 − p̂2)/σ_{p̂1−p̂2}, where σ_{p̂1−p̂2} = √(P1(1 − P1)/n1 + P2(1 − P2)/n2).
Distribution: standard normal Z; use the Z-table to have Ztab. Format: Ztab = Z_α for a one-sided test and Ztab = Z_{α/2} for a two-sided test.

Population SDs Test
Hypotheses: i) H0: σ1 = σ2 vs. H1: σ1 ≠ σ2; ii) H0: σ1 ≥ σ2 vs. H1: σ1 < σ2; iii) H0: σ1 ≤ σ2 vs. H1: σ1 > σ2
Statistic: F = S1²/S2², where S1² and S2² are the sample variances.
Distribution: F; use the F-table for Ftab with (n1 − 1, n2 − 1) degrees of freedom. Format: Ftab = F_{(n1−1, n2−1),α} for a one-sided test and Ftab = F_{(n1−1, n2−1),α/2} for a two-sided test.

Note: the i) hypotheses give two-sided (two-tailed) tests; the other two are one-sided (lower- or upper-tail) tests.
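The F test for two standard deviations in the last column can also be checked numerically. Below is a minimal Python sketch with hypothetical sample summaries (SciPy assumed available; the numbers are made up for illustration), using (n1 − 1, n2 − 1) degrees of freedom:

```python
from scipy.stats import f

# Hypothetical summaries: sample variances and sizes for the two groups
s1_sq, n1 = 4.8, 16
s2_sq, n2 = 2.5, 21
alpha = 0.05

F_cal = s1_sq / s2_sq
# Upper-tail critical value for the one-sided test H1: sigma1 > sigma2
F_tab = f.ppf(1 - alpha, n1 - 1, n2 - 1)

print(f"F = {F_cal:.3f}, critical = {F_tab:.3f}")
print("Reject H0" if F_cal > F_tab else "Do not reject H0")
```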
Lectures 22-23
Chapter 14_Correlation and Regression Analysis
Application from real data
Correlation Analysis
1) Scatter diagram - to get a visual impression of the relationship between two variables.
2) Correlation coefficient (rxy) - indicates how strong the (linear) relation between the two variables is.
Let's consider the following problem to understand it very clearly!
Problem
Consider two variables
x (No. of TV commercials): 2,5,1,3,4,1,5,3,4,2
y(Total sales): 50,57,41,54,54,38,63,48,59,46
Find the relationship between two variables and make a summary based on your findings.
Solution:
Denote x - no. of TV commercials and y - total sales, because it is reasonable to believe that sales depend on the no. of commercials.
Draw a scatter diagram to see what sort of relation exists between x and y.
[Scatter diagram: Total Sales (y-axis, 0 to 70) plotted against No. of TV Commercials (x-axis, 0 to 6)]
Summary: We see that a positive relation exists between the no. of TV commercials and total sales.
To see more precisely how strong the relation between x and y is, we apply the following formula, known as the correlation coefficient, defined as
rxy = Sxy/(Sx Sy),
where Sxy = Σ(x − x̄)(y − ȳ)/(n − 1), Sx = √[Σ(x − x̄)²/(n − 1)] and Sy = √[Σ(y − ȳ)²/(n − 1)].
Make the following calculation table (for details see Textbook, pp.115-116) to find rxy:
No. of TV Commercials (x) | Total Sales (y) | (x − x̄)² | (y − ȳ)² | (x − x̄)(y − ȳ)
2 | 50 | 1 | 1 | 1
5 | 57 | 4 | 36 | 12
1 | 41 | 4 | 100 | 20
3 | 54 | 0 | 9 | 0
4 | 54 | 1 | 9 | 3
1 | 38 | 4 | 169 | 26
5 | 63 | 4 | 144 | 24
3 | 48 | 0 | 9 | 0
4 | 59 | 1 | 64 | 8
2 | 46 | 1 | 25 | 5
Total: 30 | 510 | 20 | 566 | 99
Thus, from the table we get Sx = 1.49, Sy = 7.93 and Sxy = 99/9 = 11, so
rxy = 11/(1.49 × 7.93) = 0.9310.
Summary
We see that rxy = 0.93, which indicates a strong positive linear relation: as the no. of TV commercials increases, total sales tend to increase as well.
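The same coefficient can be obtained with a few lines of Python (NumPy assumed available; purely a check of the hand calculation above):

```python
import numpy as np

x = np.array([2, 5, 1, 3, 4, 1, 5, 3, 4, 2])            # no. of TV commercials
y = np.array([50, 57, 41, 54, 54, 38, 63, 48, 59, 46])  # total sales

s_xy = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)   # sample covariance
r = s_xy / (x.std(ddof=1) * y.std(ddof=1))                      # correlation coefficient

print(f"S_xy = {s_xy:.2f}, r_xy = {r:.4f}")   # r_xy ≈ 0.93
# Equivalent one-liner: np.corrcoef(x, y)[0, 1]
```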
Application from real data
Regression Analysis
Based on random sample data, the aims here are to:
(1) fit a model, and
(2) predict y for given x values.
Fitting a model:
Consider the following two variables regression model
Yi = α + βXi + ei, i = 1,2,….,n
Where Y= dependent variable(e.g. total sales)
α =constant
β = regression coefficient or slope
X = independent variable (e.g. no. of commercials)
e = random error
Here there are two parameters α and β. These two will be estimated based on random samples data.
Using the Ordinary Least Squares method, we find the estimated values of α and β:
β̂ = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)²
α̂ = ȳ − β̂ x̄
Estimated model of y on x:
ŷi = α̂ + β̂ xi, i = 1, 2, …, n
Prediction or Forecasting
The predicted value is defined by
yp = α̂ + β̂ xp
Let's consider the following problem to understand it very clearly!
Problem:
Recall the following two variables
x (No. of TV commercials): 2,5,1,3,4,1,5,3,4,2
y(Total sales): 50,57,41,54,54,38,63,48,59,46
(i) Fit a model of y on x.
(ii) Predict (or forecast) total sales when x = 5.
Solution:
Consider the following two variables regression model
Yi = α + βXi + ei, i = 1,2,….,n
where Y= Total sales
α =constant
β = regression coefficient y on x
X = No. of commercials
e = random error
Two parameters α and β will be estimated based on random samples data y and x.
Calculation table
No. of TV Commercials (x) | Total Sales (y) | (x − x̄)² | (y − ȳ)² | (x − x̄)(y − ȳ)
2 | 50 | 1 | 1 | 1
5 | 57 | 4 | 36 | 12
1 | 41 | 4 | 100 | 20
3 | 54 | 0 | 9 | 0
4 | 54 | 1 | 9 | 3
1 | 38 | 4 | 169 | 26
5 | 63 | 4 | 144 | 24
3 | 48 | 0 | 9 | 0
4 | 59 | 1 | 64 | 8
2 | 46 | 1 | 25 | 5
Total: 30 | 510 | 20 | 566 | 99
(i) We know that the estimated model of y on x is ŷi = α̂ + β̂ xi, where
β̂ = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and α̂ = ȳ − β̂ x̄.
From the calculation table, x̄ = 3, ȳ = 51, Σ(x − x̄)(y − ȳ) = 99 and Σ(x − x̄)² = 20, so
β̂ = 99/20 = 4.95
α̂ = 51 − (4.95 × 3) = 36.15
Thus, the estimated model of y on x becomes: ŷi = 36.15 + 4.95 xi.
Summary
α̂ = 36.15 means that if there are no commercials (i.e. x = 0), expected total sales are about 36.15$.
β̂ = 4.95 means that each additional TV commercial is associated with an increase of about 4.95 in expected total sales.
(ii) We know that the predicted value is yp = α̂ + β̂ xp.
According to the question, we have to predict total sales when x = 5.
Thus yp = 36.15 + (4.95 × 5) = 60.9$.
So, when there are 5 commercials in a week, the company can expect total sales of about 60.9$.
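The fitted model and the prediction can be reproduced with a short Python sketch (NumPy assumed available; a check of the least-squares estimates above):

```python
import numpy as np

x = np.array([2, 5, 1, 3, 4, 1, 5, 3, 4, 2])            # no. of TV commercials
y = np.array([50, 57, 41, 54, 54, 38, 63, 48, 59, 46])  # total sales

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
print(f"fitted model: y = {alpha_hat:.2f} + {beta_hat:.2f} x")   # y = 36.15 + 4.95 x

x_p = 5
y_p = alpha_hat + beta_hat * x_p
print(f"predicted total sales at x = {x_p}: {y_p:.1f}")          # about 60.9
```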
HW: Text
Ex: 47-51, pp.122-124
Ex: 4-14, 18-21, pp.570-582
/End of lecture notes/