
PROBABILITY & STATISTICS
Prof. Dr. Orhan TORKUL
Res. Assist. Furkan YENER
Probability & Statistics

Many of the problems we face daily as industrial engineers have
elements of risk, uncertainty, or variability associated with them. For
example, we cannot always predict what the demand will be for a
particular inventory item. We cannot always be sure just how many
people will shop at a grocery store and want to check out during a
particular hour. We cannot always be sure that the quality of raw
materials from one of our suppliers will be consistent. We are not
prophets who can accurately predict results in advance. But as
industrial engineers we are educated in the use of applied probability
and statistics, so that we can make intelligent engineering decisions
despite our lack of complete knowledge about future events.
Probability of an Event

An event may consist of any combination of possible outcomes of
an experiment. It is up to the experimenter to define events that are
meaningful in the experiment. Let us consider the experiment of
drawing one ball from a box containing ten balls, numbered 1
through 10; the balls numbered 1 through 5 are green and those
numbered 6 through 10 are white. The probability of drawing any
particular one of the ten balls on any performance of the experiment
is 0.1. We can define many events relating to the experiment of
drawing one ball from the box, as follows:

𝐸1 is the event of drawing an even-numbered ball.
𝐸2 is the event of drawing a green ball.
𝐸3 is the event of drawing an even-numbered green ball.
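As a quick check, the three events can be enumerated directly; this is a minimal sketch assuming, as in the experiment above, that balls 1 through 5 are green and each ball is equally likely:

```python
from fractions import Fraction

balls = range(1, 11)          # balls numbered 1 through 10
green = set(range(1, 6))      # balls 1-5 are green, 6-10 are white

def prob(event):
    """Probability of an event = favorable outcomes / total outcomes."""
    favorable = [b for b in balls if event(b)]
    return Fraction(len(favorable), len(balls))

E1 = lambda b: b % 2 == 0                  # even-numbered ball
E2 = lambda b: b in green                  # green ball
E3 = lambda b: b % 2 == 0 and b in green   # even-numbered green ball

print(prob(E1))  # 1/2
print(prob(E2))  # 1/2
print(prob(E3))  # 1/5
```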


Sample Points Counting Rules
Addition rule: For two mutually exclusive events, where the first has 𝑁1
different possibilities and the second has 𝑁2 different possibilities, one
or the other of these events may happen in 𝑁1 + 𝑁2 different ways.
Ex: If we throw a die, how many possibilities are there for odd or even
numbers?
Solution: There are 3 odd and 3 even numbers on a die, so 3 + 3 = 6
is the number of possibilities.

Multiplication rule: For two events that occur together (in sequence),
where the first has 𝑁1 different possibilities and the second has 𝑁2
different possibilities, the two events may happen together in 𝑁1 × 𝑁2
different ways.
Ex: If a coin is tossed 3 times, what is the number of possible outcomes?
Solution: When a coin is tossed, there are 2 possibilities. If it is tossed
3 times, 2 × 2 × 2 = 8 different outcomes can arise.
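Both counting rules can be verified by brute-force enumeration; the sketch below uses the die and coin examples from the text:

```python
from itertools import product

# Addition rule: a die shows an odd face OR an even face -> 3 + 3 = 6 outcomes.
odd_faces, even_faces = [1, 3, 5], [2, 4, 6]
total_faces = len(odd_faces) + len(even_faces)
print(total_faces)  # 6

# Multiplication rule: three coin tosses -> 2 * 2 * 2 = 8 outcomes.
tosses = list(product("HT", repeat=3))
print(len(tosses))  # 8
```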

Combinations

In considering problems involving finite sample spaces of equally
likely outcomes, we are often tempted to count the frequencies of
interest. This procedure is perfectly valid, but counting may be time-
consuming and tedious. Therefore, we most often use a formula to
determine numbers of combinations. The number of combinations of
n things taken k at a time is expressed by the following notation and
formula:
𝐶𝑛,𝑘 = 𝑛! / (𝑘! (𝑛 − 𝑘)!)
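The formula can be evaluated directly from factorials, or with the standard library's `math.comb`; a small sketch:

```python
from math import comb, factorial

def combinations(n, k):
    """Number of combinations of n things taken k at a time: n! / (k! (n-k)!)."""
    return factorial(n) // (factorial(k) * factorial(n - k))

# The hand formula and the stdlib agree, e.g. choosing 3 items out of 10:
print(combinations(10, 3))  # 120
print(comb(10, 3))          # 120
```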
Estimating Probabilities

Let us consider an experiment to be the inspection of a reel of magnetic
tape produced by a certain process. Each repetition of this experiment
will consist of the examination of a different reel of tape, the possible
decisions being accept, rework, and reject. The number of repetitions
corresponds to the number of reels of tape, and the possible decisions
number only three; thus, both the number of repetitions and the number
of decisions are finite.
𝑛𝑖 = number of repetitions of the experiment that result in decision 𝑖
𝑘 = number of different possible decisions
𝑁 = total number of repetitions
𝑃(𝑥𝑖) = probability that 𝑥𝑖 occurs

𝑃(𝑥𝑖) = 𝑛𝑖 / 𝑁   (𝑖 = 1, 2, …, 𝑘)
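The relative-frequency estimate 𝑃(𝑥𝑖) = 𝑛𝑖 / 𝑁 can be computed from a tally of decisions; the inspection record below is hypothetical, invented only to illustrate the formula:

```python
from collections import Counter

# Hypothetical record for N = 20 reels of tape; one decision per reel.
decisions = ["accept"] * 14 + ["rework"] * 4 + ["reject"] * 2

N = len(decisions)
counts = Counter(decisions)                            # n_i for each decision i
probabilities = {d: n / N for d, n in counts.items()}  # P(x_i) = n_i / N

print(probabilities)  # {'accept': 0.7, 'rework': 0.2, 'reject': 0.1}
```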
Some Important Probability Distributions
1. Discrete Distribution Properties
2. Binomial Distribution
3. Poisson Distribution
4. Discrete Uniform Distribution
5. Uniform Distribution
6. Normal Distribution
7. Exponential Distribution
8. Rectangular Distribution
Normal Distributions

We shall define the general form of the normal distribution for the continuous
random variable 𝑥 as follows:

𝑓(𝑥) = (1 / (σ√(2π))) 𝑒^(−(𝑥 − µ)² / 2σ²)   for −∞ < 𝑥 < ∞
This distribution is one of the most interesting and useful that we can study. It has a
single peak at the mean and is symmetrical about that point. If we plot an example
of a normal distribution, it will be readily apparent that it is bell-shaped with mean µ
and variance σ2 . In practice, many distributions are well approximated by the
normal distribution. Some examples include bolt diameter, construction errors,
resistance of a specified type of wire, weight of a packaged material, and so on.
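The density above can be coded directly; a minimal sketch showing the single peak at µ and the symmetry about it:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    """f(x) = (1 / (sigma * sqrt(2 pi))) * exp(-(x - mu)**2 / (2 sigma**2))."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# Symmetric about the mean, and highest there:
print(normal_pdf(-1.0) == normal_pdf(1.0))  # True
print(normal_pdf(0.0) > normal_pdf(0.5))    # True
print(round(normal_pdf(0.0), 4))            # 0.3989
```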
Expected Values and Variability

Mean;
The mean or expected value of a discrete random variable 𝑥 is denoted by
the letter µ and is defined as follows:

µ = 𝐸(𝑥) = Σ𝑖 𝑥𝑖 𝑝(𝑥𝑖)

where 𝑝(𝑥𝑖) is the probability that 𝑥 takes on the value 𝑥𝑖.

Variance;
The measure of variability that we have considered is called the
variance (σ²), and it is defined as follows:

σ² = 𝐸[(𝑥 − µ)²] = 𝐸(𝑥²) − µ²
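Both definitions can be checked on a simple discrete distribution; the sketch below uses a fair six-sided die (an assumption chosen only for illustration) and exact fractions to avoid rounding error:

```python
from fractions import Fraction

# Hypothetical discrete random variable: the face value of a fair die.
values = [1, 2, 3, 4, 5, 6]
p = [Fraction(1, 6)] * 6

mu = sum(x * px for x, px in zip(values, p))                  # E(x)
var = sum(x ** 2 * px for x, px in zip(values, p)) - mu ** 2  # E(x^2) - mu^2

print(mu)   # 7/2
print(var)  # 35/12
```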
Populations and Samples

Much of the work of the applied professions involves the study of
only a subset of the total items of interest, in the hope of making
statistical inferences about the total. An engineer might collect data
on machine utilization for 1 month, hoping to infer from it machine
utilization for many months or years. An automobile manufacturer
might test a small number of automobiles and then make generalized
statements about all the automobiles produced during that model
year. An inspection team might use destructive inspection on a small
percentage of items in order to infer characteristics of the total
number being produced. In order to describe this process accurately,
we must clearly understand the meaning of population and sample.
Population

A population, in the broadest sense, is the total set of elements
about which knowledge is desired. Some populations are relatively
small, for example, the number of space shuttles; other populations
are large, for example, all the electric light bulbs now in existence
and to be produced in the future. All elements of a population do not
have to be in existence, as the last example indicates. The
important thing to remember is that the population must be
definable.
Sample


A sample is a subset of a population. In extreme situations the
sample may be the complete population or it may consist of no
elements at all. Of course, this latter sample would yield no
information and we shall not consider it further. Remember that the
purpose of a sample is to yield inferences about the population from
which it was taken.
The two most important features of a sample are its size and the
manner in which it was selected. Much of the study of sampling
statistics concerns the determination of these two characteristics. As
expected, this determination is based on the specific conditions
prescribing the purpose of the sample.
Sample Statistics

A sample statistic is a value calculated from a sample that
may be used to estimate a population parameter such as a
mean or variance. Two important sample statistics are the
sample mean and the sample variance.

The sample mean is defined as follows:

𝑥̄ = (Σᵢ₌₁ⁿ 𝑥𝑖) / 𝑛

The sample variance is defined as follows:

𝑠² = (Σᵢ₌₁ⁿ (𝑥𝑖 − 𝑥̄)²) / (𝑛 − 1)
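Python's `statistics` module implements both definitions (note that `statistics.variance` uses the 𝑛 − 1 divisor, matching 𝑠² above); a short sketch with made-up data:

```python
from statistics import mean, variance

sample = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical observations

x_bar = mean(sample)
s2 = variance(sample)                # sum of squared deviations / (n - 1)

print(x_bar)  # 3.0
print(s2)     # 2.5
```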
Distribution of Sample Means

We often make inferences about a population from the average
value of a sample. This usually requires that we know the
parameters of the distribution of sample means. Naturally, the
expected value of the sample average is µ, the same mean value as
held by the population. The variance of the sample means, σ²𝑥̄,
differs from the population variance and is given by the following:

σ²𝑥̄ = σ² / 𝑛
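A quick simulation makes this relation plausible; the sketch below (arbitrary seed) draws many samples of size 𝑛 = 4 from a uniform population, whose variance is 1/12, and checks that the variance of the sample means is close to σ²/𝑛:

```python
import random

random.seed(42)

n, reps = 4, 100_000
# Uniform population on [0, 1): mu = 0.5, sigma^2 = 1/12.
means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]

grand_mean = sum(means) / reps
var_of_means = sum((m - grand_mean) ** 2 for m in means) / (reps - 1)

print(abs(grand_mean - 0.5) < 0.01)              # True: E(x_bar) = mu
print(abs(var_of_means - (1 / 12) / n) < 0.002)  # True: var(x_bar) = sigma^2 / n
```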
Example
EXAMPLE: The delay times (handling, setting, and positioning the tools) for cutting 6
parts on an engine lathe are 0.6, 1.2, 0.9, 1.0, 0.6, and 0.8 minutes. Calculate the variance.
Solution: First we calculate the mean:

𝑥̄ = (0.6 + 1.2 + 0.9 + 1.0 + 0.6 + 0.8) / 6 = 0.85

Then we set up the work required to find (𝑥𝑖 − 𝑥̄)² in the following table:
𝑥𝑖        𝑥𝑖 − 𝑥̄     (𝑥𝑖 − 𝑥̄)²
0.6      −0.25      0.0625
1.2       0.35      0.1225
0.9       0.05      0.0025
1.0       0.15      0.0225
0.6      −0.25      0.0625
0.8      −0.05      0.0025
Σ 5.1     0.00      0.2750
We divide 0.2750 by 6 − 1 = 5 to obtain

𝑠² = 0.2750 / 5 = 0.055 (minute)² = variance.
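The same arithmetic can be reproduced in a few lines (the 𝑛 − 1 divisor is the point to watch):

```python
delays = [0.6, 1.2, 0.9, 1.0, 0.6, 0.8]  # delay times in minutes

n = len(delays)
x_bar = sum(delays) / n
s2 = sum((x - x_bar) ** 2 for x in delays) / (n - 1)  # divide by n - 1

print(round(x_bar, 2))  # 0.85
print(round(s2, 3))     # 0.055
```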
Central Limit Theorem

In essence, the central limit theorem says: if 𝑥 has a distribution with a
finite variance σ², then the sample mean 𝑥̄ has a distribution that
approaches normality as the sample size tends to infinity.
Fortunately, for many population distributions often encountered,
sample sizes as low as 𝑛 = 4 produce sample-average distributions
that are workably close to normal.

We use the central limit theorem extensively in quality control,
probabilistic models, and project management.
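A small simulation illustrates the theorem; the sketch below (arbitrary seed, an exponential population chosen because it is strongly skewed) shows that averages of 𝑛 = 30 observations already behave much like a normal variable, with nearly 99.7% of them falling within µ ± 3σ/√𝑛:

```python
import random

random.seed(0)

def sample_mean(n):
    # Exponential population with mean 1 and variance 1 (heavily skewed).
    return sum(random.expovariate(1.0) for _ in range(n)) / n

n, reps = 30, 20_000
means = [sample_mean(n) for _ in range(reps)]

mu, sigma_xbar = 1.0, (1.0 / n) ** 0.5          # sigma / sqrt(n)
within = sum(abs(m - mu) < 3 * sigma_xbar for m in means) / reps

print(within > 0.98)  # True: close to the normal value of about 0.997
```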
THANKS